\section{Introduction}
Offering excellent stiffness-to-weight ratios, high damping, and a~low sensitivity to fatigue and corrosion, carbon fiber reinforced polymers (CFRPs) are employed in high-tech applications, including bodies of racing cars in the automotive industry~\citep{Friedrich2013}, propulsors and turbines in the naval industry~\citep{Challis2001,Young2016}, wing boxes and ailerons in aerospace~\citep{irving2015}, and rocket bodies in the space industry~\citep{Vasiliev2012}. Considerable attention has therefore been paid to methods for optimizing the structural performance of these laminates, particularly the laminate layup~\citep{Ghiasi2009,Ghiasi2010}. To concurrently maximize bending stiffness and keep weight low, the outer dimensions of these structures tend to be maximized, while the wall thickness is minimized. The ``thin-walledness'' of the resulting structures, combined with their anisotropy, renders them highly sensitive to shear and wall buckling instabilities manifested in low fundamental free-vibration eigenfrequencies.
Below, we review common approaches to topology optimization that reduce wall instabilities by designing an internal structure, Section~\ref{sec:to}. Section~\ref{sec:sdp} provides a brief introduction to semidefinite programming and highlights several applications in structural optimization. Finally, Section~\ref{sec:novelty} reveals the merits of designing internal structures using semidefinite programming.
\subsection{Topology optimization}\label{sec:to}
Topology optimization techniques~\citep{Bendsoe2003} provide the means for reducing wall instabilities when designing sufficiently stiff yet lightweight structures. In the simplest setting---beam cross-section optimization---we search for an optimal two-dimensional cross-sec\-tional shape or a~stiffening of structures whose outer shape is predefined~\citep{Kim2000}. \citet{Blasques2014}, for example, maximized the fundamental free-vibration eigenfrequency while accounting for mass and shear center position constraints; \citet{Nguyen2018} optimized cross-sections of prismatic beams to maximize their buckling loads.
The design of optimal cores for sandwich structures, whose skins are stiffened by a thick core, is a related challenge. For honeycomb, solid, truss, and foam rectangular panels under in-plane compression or shear loads, optimal periodic topologies can be found analytically by considering the optimality criterion of all failure modes occurring simultaneously~\citep{Vinson2005a}. For complex boundary conditions, parametric shape-opti\-mization studies are usually performed. \citet{Wang2003} studied the geometry of a~metal honeycomb sandwich beam core under torsion and bending, and \citet{Xu2013} optimized the lattice core of a composite sandwich panel to increase the fundamental eigenfrequency while accounting for uncertainties in the model. They concluded that bending eigenfrequencies increase with increasing strut thicknesses, with an increase in the elastic and shear moduli of the composite, and with a~decrease in density. Although \citet{Daynes2017} optimized spatially-graded lattice structures within a single sandwich panel domain, surprisingly, almost no prior research seems to have stepped beyond parametric intuition-based designs~\citep{Birman2018,Helou2018}, the rare exception being the multi-scale topology optimization approach investigated by \citet{Coelho2015a}.
Whether, where, and how to stiffen already engineered designs in order to further improve their structural performance constitutes the central question of the reinforcement problem~\citep{Olhoff1983,Diaaz1992}, which supersedes the former dimensional-reduction and periodicity assumptions. Initial studies in this area considered maximization of the fundamental eigenfrequency~\citep{Diaaz1992} and improving the structural frequency response of plane elastic structures~\citep{Ma1993} using the homogenization and optimality criteria methods, respectively.
Using the ground structure approach for topology optimization of truss structures, \citet{Bendsoe1994b} fixed the cross-sectional areas of a set of bars and searched for their stiffest truss reinforcement, a (non-smooth) convex quadratic programming formulation. Alternatively, the effect of a fixed boundary structure has been approximated by an appropriate application of nodal forces to the ground structure~\citep{Balabanov1996,Opgenoord2018}; this choice, however, influences the optimized design.
In the setting of continuous topology optimization, \citet{Luo1998} developed a systematic optimization approach for the topology and orientation design of composite stiffeners of plates and shells in both static and dynamic settings, and \citet{Wang2004} optimized the overall structural rigidity of an automobile body through a maximization of the fundamental eigenfrequency. In aerospace applications, \citet{Maute2004} optimized a wing's internal structure, subjected to fluid-surface interactions; \citet{Aage2017} performed an extremely large-scale optimization of the internal structure of a Boeing 777 wing, while avoiding the traditional rib and spar designs~\citep{Stanford2014}. In military applications, topology optimization was the basis for the design of additively-manufactured lattice-reinforced penetrative warheads~\citep{Provchy2018} and for optimizing the layout weight of stiffeners in composite submarines subjected to nonsymmetric wave slap loads~\citep{Rais-Rohani2007}.
Other methods relevant to internal structure design have arisen in conjunction with recently introduced coating and infill optimization problems. \citet{Clausen2015} developed a formulation for the optimization of (uniformly) \textit{coated} structures, wherein a base material, \textit{infill}, was surrounded by another material at the interfaces, finding that a~porous, complex infill significantly improves both structural buckling resistance and robustness to local perturbations when compared to optimized solid structures of equal weight and similar stiffnesses~\citep{Clausen2016,Clausen2017}. In three dimensions, optimized designs further exploit the merits of closed shell surfaces through the sandwich effect~\citep{Clausen2017}.
Inspired by natural, bone-like microstructures, \citet{Wu2017} optimized a spatially non-uniform porous infill, \citet{Wang2018} developed a sequential approach for generating graded lattice mesostructures, and \citet{Zhu2019} introduced a novel asymptotic-analysis-based homogenization approach. All these methods automatically design stiff yet porous infills for additive manufacturing products while superseding the traditional pattern-based designs~\citep{Livesu2017}. Finally,~\citet{Wu2018} extended their approach to the ultimate setting of a~concurrent optimization of coated structures and porous infills, and \citet{Groen2018} have developed a homogeni\-zation-based method to accelerate solutions.
\subsection{Semidefinite programming}\label{sec:sdp}
It has been shown in recent decades that several structural optimization problems can be modeled as se\-mi\-de\-fi\-ni\-te programs. Linear semidefinite programming (SDP) is a class of convex optimization problems of the form
\begin{subequations}\label{eq:can}
\begin{align}
\min_{\mathbf{x}} \; & \mathbf{c}^\mathrm{T} \mathbf{x}\label{eq:can:obj}\\
\mathrm{s.t.}\; & \mathbf{X} = \mathbf{F}_0 + \sum_{i=1}^{m} x_i \mathbf{F}_i,\label{eq:affine}\\
& \mathbf{X} \succeq \mathbf{0},\label{eq:can:lmi}
\end{align}
\end{subequations}
and involves minimization of a~linear function \eqref{eq:can:obj} over a~spectrahedron, which is an intersection of an affine space \eqref{eq:affine} with the cone of symmetric positive semidefinite matrices \eqref{eq:can:lmi}. In \eqref{eq:can:lmi}, the notation ``$\succeq \mathbf{0}$'' enforces positive semidefiniteness of the left-hand side. Due to the linear dependence of $\mathbf{X}$ on $\mathbf{x}$ \eqref{eq:affine}, \eqref{eq:can:lmi} is commonly referred to as a~linear matrix inequality (LMI).
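As a minimal numerical illustration of this canonical form, the following Python sketch, with hypothetical $2\times2$ data matrices $\mathbf{F}_0$ and $\mathbf{F}_1$ chosen purely for demonstration, assembles the affine map \eqref{eq:affine} and checks the LMI \eqref{eq:can:lmi} via the smallest eigenvalue:

```python
import numpy as np

# Hypothetical data for the canonical SDP: X = F0 + sum_i x_i F_i
F0 = np.array([[2.0, 0.0], [0.0, 1.0]])
F1 = np.array([[1.0, 1.0], [1.0, 0.0]])

def lmi_matrix(x):
    """Affine map x -> X = F0 + x1 * F1."""
    return F0 + x[0] * F1

def is_feasible(x, tol=1e-12):
    """Check the LMI X >= 0 via the smallest eigenvalue."""
    return np.linalg.eigvalsh(lmi_matrix(x)).min() >= -tol

# The feasible set {x : X(x) >= 0} is convex: a spectrahedron.
print(is_feasible([0.0]))   # X = F0 is positive definite -> True
print(is_feasible([-10.0])) # large negative x1 destroys semidefiniteness -> False
```

Sweeping over $x$ would trace out the (here, one-dimensional) spectrahedron; interior-point solvers exploit exactly this convexity.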
Applications of semidefinite programming to structural design were pioneered by~\citet{Ben-Tal2000SD}, \citet{DeKlerk1995}, and \citet{Vandenberghe1996}, who developed formulations for minimum-compliance and minimum-weight truss topology optimization. The main added value of SDP lies in its ability to effectively avoid the non-differentiability of multiple eigenvalues in free-vibration~\citep{Ohsaki1999,Achtziger2007} and buckling~\citep{Ben-Tal2000ODT,Kocvara2002} problems, in robust optimization~\citep{Ben-Tal1997}, and in improving bounds for optimization problems in a discrete setting~\citep{Cerveira2011}. Semidefinite programming has also found applications in optimal material design through the Free Material Optimization approach~\citep{Ben-Tal1999}, and in limit analyses~\citep{Bisbos2007}.
\begin{figure*}
\centering
\begin{subfigure}{0.45\linewidth}
\def\svgwidth{\textwidth}\footnotesize
\import{./include/}{dimensions.pdf_tex}
\normalsize\caption{}
\label{fig:dimensions_simple}
\end{subfigure}%
\hfill\begin{subfigure}{0.12\linewidth}
\def\svgwidth{\textwidth}\footnotesize
\import{./include/}{dimensions2.pdf_tex}
\normalsize\caption{}
\label{fig:dimensions_section}
\end{subfigure}%
\hfill\begin{subfigure}{0.35\linewidth}
\def\svgwidth{\textwidth}\footnotesize
\import{./include/}{compression.pdf_tex}
\normalsize\caption{}
\label{fig:dimensions_compression}
\end{subfigure}
\caption{Case study setup. (a) Outer dimensions and simply supported boundary conditions, (b) prismatic cross-section, and (c) compression molding load case.}
\label{fig:dimensions}
\end{figure*}
\subsection{Aims and novelty}\label{sec:novelty}
In this contribution, we consider an industrial problem of designing the least-weight internal structure of a thin-walled filament-wound composite machine tool component prone to shear and buckling wall instabilities. The beam laminate was designed for bearing dynamic loads, allowing us to describe the wall instabilities naturally in terms of free-vibration eigenfrequencies.
In the current production process, the wall instabilities are reduced by inserting a uniform foam core structure into the beam interior, an uneconomical and labor-intensive process. Conversely, we have aimed to automatically design a structurally-efficient internal structure that can easily be manufactured using conventional low-cost $3$D printers.
To this end, we extended the convex (linear) semidefinite programming formulation introduced by \citet{Ohsaki1999} and \citet{Ben-Tal1997} to design globally-optimal least-weight lattice-like internal structures, and applied it to increase the fundamental eigenfrequency and decrease the compression-molding compliance of a thin-walled composite beam prototype. Note that \citet{Achtziger2007} avoided prescribed structural elements but allowed for a~non-structural mass, and \citet{Ohsaki1999} did not consider prescribed mass or stiffness.
After introducing the case study of a simply-supported CFRP beam design in Section~\ref{sec:simply}, we develop its finite element representation in Section~\ref{sec:fem}. For this representation, a~semidefinite programming formulation for truss topology optimization of internal structures is developed in Section~\ref{sec:optprob}. Having designed the optimal internal structure, we post-process the optimization outputs and export, in a fully-automated way, the internal structure for additive manufacturing in Section~\ref{sec:post}. During manufacturing, the internal structure serves as the support for carbon fibers in the filament-winding production phase, and a prototype is created. Section~\ref{sec:results} describes verification and experimental validation of the prototype and concludes that its response agreed well with the model prediction.
\section{Case study}\label{sec:simply}
\begin{table*}[t]
\centering
\caption{Material properties of the wound composite beam laminae. $E_1$ and $E_2$ stand for the Young moduli in the fiber and transverse directions, respectively; $G_{12}$ denotes the shear modulus, $\nu_{12}$ and $\nu_{23}$ are Poisson's ratios. $\theta$ constitutes the angle between the $1$-direction and $x$, rotating around the beam surface normals. Finally, $\rho$ and $t$ denote the density and thickness of the plies.}
\label{tab:material}
\scriptsize
\begin{tabular}{lrrrrrrrr}
\hline
Layer & $E_1$ [GPa] & $E_2$ [GPa] & $G_{12}$ [GPa] & $\nu_{12}$ [-] & $\nu_{23}$ [-] & $\theta$ [$\deg$] & $\rho$ [kg/m$^3$] & $t$ [mm] \\
\hline
$1$ & $128.2$ & $5.0$ & $3.4$ & $0.34$ & $0.35$ & $89.3$ & $1,428$ & $0.25$ \\
$2$ & $421.9$ & $3.7$ & $3.2$ & $0.37$ & $0.35$ & $0.0$ & $1,680$ & $1.25$ \\
$3$ & $130.9$ & $5.0$ & $3.4$ & $0.34$ & $0.35$ & $26.9$ & $1,458$ & $0.18$ \\
$4$ & $130.9$ & $5.0$ & $3.4$ & $0.34$ & $0.35$ & $-26.9$ & $1,458$ & $0.36$ \\
$5$ & $130.9$ & $5.0$ & $3.4$ & $0.34$ & $0.35$ & $26.9$ & $1,458$ & $0.18$ \\
$6$ (casing) & $2.0$ & $2.0$ & $0.7$ & $0.37$ & $0.37$ & $0.0$ & $1,040$ & $0.80$\\
\hline
\end{tabular}
\end{table*}
As the basic structure, we consider a~prismatic, laminated composite beam $1,000$~mm long, with an~$80\times80$~mm thin-walled cross-section $2.2$~mm thick, Fig.~\ref{fig:dimensions_section}. According to current manufacturing technology, beam production consists of several steps, in which a~supporting structure made of manually processed high-density foam is wound biaxially with a~combination of ultra-high-modulus (UHM) and high-modulus (HM) carbon fibers saturated with epoxy resin. The supporting structure prevents cross-section distortions induced by compression-molding loads as shown in Fig.~\ref{fig:dimensions_compression}. Subsequently, the beam is cured, the supporting structure is pulled out, and the beam outer surface is finalized.
The final product is exposed primarily to loads that induce bending. For this purpose, most of the carbon fibers are aligned with the beam's longitudinal axis (layer $2$ in Table~\ref{tab:material}), denoted by $x$ in Fig.~\ref{fig:dimensions_simple}, whereas the remaining layers reduce the susceptibility to delamination. See Table~\ref{tab:material}, where all layers are listed by their orientations relative to the beam's longitudinal axis, $\theta$. This layered composition reliably transmits the design forces to the supports, and is thus fully sufficient in this sense.
Owing to their transversely isotropic material properties, the beam's walls are, however, prone to elastic wall instabilities under shear and buckling, which also manifest themselves in the free-vibration modes and frequencies of the non-reinforced beam. Figure~\ref{fig:empty_simple} confirms that the first fundamental eigenmode with a frequency of $128.5$~Hz corresponds to shear wall instabilities, whereas the second eigenmode combines bending with buckling; all higher eigenmodes (not shown) exhibit similar wall instabilities. Because the fundamental eigenfrequency limits the maximum working frequency of the machine part, its increase is of considerable interest.
\begin{figure}[!b]
\begin{subfigure}[c]{0.48\linewidth}
\centering
\begin{tikzpicture}
\node (b) at (-0.5,0.5) {\includegraphics[width=0.94\linewidth]{./include/supported_empty/supported_empty_mode1_128_454496hz.png}};
\node (a) at (1.0,-0.3) {\includegraphics[width=0.18\linewidth]{./include/supported_empty/supported_emptyCS_mode1_128_454496hz.png}};
\end{tikzpicture}
\caption{First eigenmode with a frequency of $128.5$~Hz.}
\end{subfigure}%
\hfill\begin{subfigure}[c]{0.48\linewidth}
\centering
\begin{tikzpicture}
\node (b) at (-0.5,0.5) {\includegraphics[width=0.94\linewidth]{./include/supported_empty/supported_empty_mode2_403_084249hz.png}};
\node (a) at (1.0,-0.3) {\includegraphics[width=0.18\linewidth]{./include/supported_empty/supported_emptyCS_mode2_403_084249hz.png}};
\end{tikzpicture}
\caption{Second eigenmode with a frequency of $403.1$~Hz.}
\end{subfigure}
\caption{Axonometric and front views of the (a) first and (b) second eigenmodes of the composite beam, as predicted by the finite element model.}
\label{fig:empty_simple}
\end{figure}
Although the effect of these instabilities can be reduced by additional laminate layers or by also keeping the uniform foam structure for operational loads, the added weight, decrease in the bending eigenfrequencies, and labor-intensive production process render these approaches both time-inefficient and uneconomical.
\section{Optimal design of internal structure}\label{sec:opt}
The aim of this section is to cast the optimal internal structure design problem in the form of a linear semidefinite program \eqref{eq:can}. The internal structure has to withstand compression molding loads within a maximum deflection bound while temporarily supported by a~steel mandrel passing through the beam interior, Fig.~\ref{fig:dimensions_compression}. Most importantly, the internal structure is supposed to increase the beam fundamental eigenfrequency via a reduction of wall instabilities.
In this section, we first describe the finite element model of the composite beam. This finite element model then serves as the basis for establishing the optimization problem formulation, yielding an optimal internal structure design. The section concludes by discussing the post-processing steps necessary to maintain manufacturability of the design.
\begin{figure}[!b]
\centering
\begin{tikzpicture}
\node (a) {\includegraphics[width=0.975\linewidth]{./include/composition.png}};
\node (b) at (-2.65,1.3) {\footnotesize Internal structure, ABS};
\node (c) at (-0.2,1.75) {\footnotesize Casing, ABS};
\node (d) at (2.5,2.6) {\footnotesize Composite beam, CFRP};
\end{tikzpicture}
\caption{The composite beam design considered: the internal structure (reduces wall instabilities and increases the lowest free-vibration frequency); the casing of the internal structure (allows winding of the final composite layer); and the composite layers, which transmit the working loads applied to the beam.}
\label{fig:composition}
\end{figure}
\subsection{Finite element model}\label{sec:fem}
The outer composite beam surface is discretized with shell elements which are supplied with the material properties from Table~\ref{tab:material}. The beam internal structure is modeled by bar (truss) elements, with the isotropic Acrylonitrile Butadiene Styrene (ABS) material properties~\citep{Cantrell2017}: elastic modulus $E_\mathrm{ABS} = 2$ GPa, Poisson ratio $\nu_\mathrm{ABS} = 0.37$, and density $\rho_\mathrm{ABS} = 1,040$ kg/m$^3$.
Special care needs to be paid to establishing a rigid connection between the internal structure and the carbon composite. The so-called \textit{casing}, see Fig.~\ref{fig:composition}, a~$0.8$~mm thin layer of printed beam walls, further prevents the epoxy resin from leaking into the beam's interior. The casing is modeled as the bottom layer of the laminate composition, recall Table~\ref{tab:material}.
The finite element model for the optimization part was developed in \textsc{Matlab}. In this model, the outer laminate was modeled with four-node \textsc{Mitc4} elements \cite{Dvorkin_Bathe_1984}. The composite beam interior was discretized into the \textit{ground structure}~\cite{Dorn1964}, a set of admissible truss\footnote{Based on comparative simulations (not shown), modeling the internal structure with trusses or beams leads to an insignificant difference in the structural response, which enabled us to employ truss topology optimization approaches in Section \ref{sec:optprob}.} elements, whose cross-sections we search for in the optimization part. The ground structure was constructed from $47\times4\times4$ modular building blocks, shown in Fig.~\ref{fig:gs}, to guarantee manufacturability of the entire internal structure with $3$D printing. Note that the bars placed within the location of the steel mandrel were removed from the ground structure and that the shell element nodes coincided with the ground structure nodes, resulting in a rather coarse discretization of the outer layer.
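The actual building-block topology is depicted in Fig.~\ref{fig:gs}; as a generic stand-in, the following Python sketch generates a fully connected ground structure on a single modular block. The function \texttt{block\_ground\_structure}, its grid resolution, and the all-pairs connectivity are illustrative assumptions, not the paper's implementation:

```python
import itertools
import numpy as np

def block_ground_structure(nx, ny, nz):
    """Generate a fully connected ground structure on a regular
    (nx+1) x (ny+1) x (nz+1) nodal grid: every distinct pair of
    nodes defines one admissible (candidate) bar."""
    nodes = np.array(list(itertools.product(range(nx + 1),
                                            range(ny + 1),
                                            range(nz + 1))), dtype=float)
    bars = list(itertools.combinations(range(len(nodes)), 2))
    lengths = np.array([np.linalg.norm(nodes[i] - nodes[j])
                        for i, j in bars])
    return nodes, bars, lengths

# a single unit-cube block: 8 corner nodes, 28 candidate bars
nodes, bars, lengths = block_ground_structure(1, 1, 1)
print(len(nodes), len(bars))  # -> 8 28
```

Tiling such blocks (here, $47\times4\times4$ of them) and merging coincident nodes yields the global ground structure; the bar lengths returned above correspond to the vector $\bm{\ell}$ used later in the objective function.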
\begin{figure}[!b]
\centering
\def\svgwidth{4cm}\footnotesize
\import{./include/}{gs.pdf_tex}
\caption{Ground structure building block; copies of this block fill the entire internal volume of the composite beam to represent the (to-be-optimized) internal beam structure. Cross-sectional areas of individual trusses are the design variables of the optimization problem~\eqref{eq:nonconvex}.}
\label{fig:gs}
\end{figure}
\subsection{Formulation of the optimization problem}\label{sec:optprob}
\subsubsection{Non-convex formulation}
Adopting the previously described discretization, our goal is to find the cross-sectional areas $\mathbf{a}$ of the $n_\mathrm{b}$ bars in the minimum-weight (or volume) ground structure, such that the fundamental eigenfrequency exceeds a user-defined lower threshold $\overline{f}$, taken as $300$~Hz in what follows, while the displacements of the reinforced structure during the compression molding load case remain within the limit $\overline{u}$. This leads to the following optimization problem
\begin{subequations}\label{eq:nonconvex}
\begin{alignat}{4}
\negthickspace\negthinspace\min_{\mathbf{a}, \mathbf{u}_\mathrm{cm}, \mathbf{u} }\; &&
\bm{\ell}^\mathrm{T} \mathbf{a} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad&& && \label{eq:nonconvex:obj} \\
\mathrm{s.t.} \; && \negthickspace\inf_{\left( \mathbf{M}_\mathrm{fv}^\mathrm{IS}(\mathbf{a}) + \mathbf{M}_{\mathrm{fv}}^\mathrm{C} \right) \mathbf{u}\neq \mathbf{0}} \frac{\mathbf{u}^\mathrm{T} \left( \mathbf{K}^\mathrm{IS}_\mathrm{fv}(\mathbf{a}) + \mathbf{K}_{\mathrm{fv}}^\mathrm{C} \right) \mathbf{u}}{\mathbf{u}^\mathrm{T} \left( \mathbf{M}_\mathrm{fv}^\mathrm{IS}(\mathbf{a}) + \mathbf{M}_{\mathrm{fv}}^\mathrm{C} \right) \mathbf{u}} && \;\ge\; && \overline{\lambda},\label{eq:nonconvex:eigen}\\
&& \left(\mathbf{K}^\mathrm{IS}_\mathrm{cm}(\mathbf{a})+\mathbf{K}_{\mathrm{cm}}^\mathrm{C} \right) \mathbf{u}_\mathrm{cm} && \;=\; && \mathbf{f}_\mathrm{cm},\label{eq:nonconvex:equilibrium}\\
&& -\overline{u}\mathbf{1} \le \mathbf{u}_{\mathrm{cm,disp}} && \;\le\; && \overline{u}\mathbf{1},\label{eq:nonconvex:disp}\\
&& \mathbf{0} \le \mathbf{a} && \;\le\; && \overline{a}\mathbf{1},\label{eq:nonconvex:areas}
\end{alignat}
\end{subequations}
with
\begin{equation}
\overline{\lambda} = 4 \pi^2 \overline{f}^2.
\end{equation}
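For the threshold $\overline{f} = 300$~Hz adopted below, this limit eigenvalue evaluates to roughly $3.55 \times 10^{6}$~s$^{-2}$, as a quick check confirms:

```python
import math

f_bar = 300.0                          # target fundamental frequency [Hz]
lam_bar = 4.0 * math.pi**2 * f_bar**2  # limit eigenvalue [1/s^2]
print(round(lam_bar))                  # -> 3553058
# inverse map: f = sqrt(lambda) / (2 pi) recovers 300 Hz
```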
In this formulation, the vector $\bm{\ell}$ appearing in the objective function \eqref{eq:nonconvex:obj} collects the bar lengths in the truss ground structure. The Rayleigh quotient in \eqref{eq:nonconvex:eigen} involves stiffness, $\mathbf{K}^\mathrm{C}_\mathrm{fv}$, and mass, $\mathbf{M}^\mathrm{C}_\mathrm{fv}$, matrices of the outer shell structure for free-vibration analysis together with stiffness, $\mathbf{K}^\mathrm{IS}_\mathrm{fv}(\mathbf{a})$, and mass, $\mathbf{M}_\mathrm{fv}^\mathrm{IS}(\mathbf{a})$, matrices of the internal structure. The design-dependent contributions of the internal structure are obtained as
\begin{equation}\label{eq:fem}
\mathbf{K}_{\mathrm{fv}}^{\mathrm{IS}}(\mathbf{a}) = \sum_{e=1}^{n_\mathrm{b}} \hat{\mathbf{K}}_{\mathrm{fv},e}^{\mathrm{IS}} a_e, \qquad
\mathbf{M}_{\mathrm{fv}}^{\mathrm{IS}}(\mathbf{a}) = \sum_{e=1}^{n_\mathrm{b}} \hat{\mathbf{M}}_{\mathrm{fv},e}^{\mathrm{IS}} a_e,
\end{equation}
where $\hat{\mathbf{K}}_{\mathrm{fv},e}^{\mathrm{IS}}$ and $\hat{\mathbf{M}}_{\mathrm{fv},e}^{\mathrm{IS}}$ stand for the stiffness and mass matrices of individual bars in the free-vibration (fv) setting, respectively; $a_e$ is the $e$-th component of $\mathbf{a}$, and $\overline{\lambda}$ is the limit fundamental free-vibration eigenvalue.
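The assembly \eqref{eq:fem} is linear in the areas $\mathbf{a}$, the key property later exploited by the semidefinite reformulation. A small numpy sketch with random stand-ins for the per-bar matrices $\hat{\mathbf{K}}_{\mathrm{fv},e}^{\mathrm{IS}}$ (hypothetical data, not the actual finite element matrices) verifies this linearity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_b = 6, 4  # toy numbers of degrees of freedom and bars

def random_psd(n):
    """Random symmetric positive semidefinite stand-in matrix."""
    a = rng.standard_normal((n, n))
    return a @ a.T

# stand-ins for the per-unit-area bar matrices K_hat_e
K_hat = [random_psd(n_dof) for _ in range(n_b)]

def K_IS(a):
    """Design-dependent stiffness K(a) = sum_e a_e * K_hat_e."""
    return sum(a_e * K_e for a_e, K_e in zip(a, K_hat))

a1 = rng.uniform(0.0, 1.0, n_b)
a2 = rng.uniform(0.0, 1.0, n_b)
# K depends linearly on the areas, so superposition holds exactly
print(np.allclose(K_IS(a1 + a2), K_IS(a1) + K_IS(a2)))  # -> True
```

The mass matrix assembly follows the same pattern with $\hat{\mathbf{M}}_{\mathrm{fv},e}^{\mathrm{IS}}$.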
The constraints \eqref{eq:nonconvex:equilibrium} and \eqref{eq:nonconvex:disp} address the compression-molding (cm) load case, recall Fig.~\ref{fig:dimensions_compression}. Specifically, \eqref{eq:nonconvex:equilibrium} introduces the generalized nodal displacements $\mathbf{u}_\mathrm{cm}$ in response to the generalized load vector $\mathbf{f}_\mathrm{cm}$ corresponding to the compressive load, and $\mathbf{u}_{\mathrm{cm,disp}}$ denotes the displacement components of $\mathbf{u}_\mathrm{cm}$. The stiffness matrix corresponding to this load case consists again of the design-independent, $\mathbf{K}_\mathrm{cm}^\mathrm{C}$, and design-dependent, $\mathbf{K}_\mathrm{cm}^\mathrm{IS}(\mathbf{a})$, parts; the latter is obtained as in \eqref{eq:fem}. The symbol $\mathbf{1}$ denotes a column vector of all ones. Notice that the stiffness matrices in \eqref{eq:nonconvex:eigen} and \eqref{eq:nonconvex:equilibrium} differ because of different boundary conditions in the operational, Fig.~\ref{fig:dimensions_simple}, and manufacturing, Fig.~\ref{fig:dimensions_compression}, load cases. The constraint \eqref{eq:nonconvex:disp} requires the displacement components of $\mathbf{u}_\mathrm{cm}$ to remain smaller than the user-defined limit value $\overline{u}$, considered to be $0.5$~mm in this study. Finally, \eqref{eq:nonconvex:areas} requires the cross-sectional areas of the bars to be non-negative and smaller than $\overline{a} = 200$~mm$^2$, a~value set by the additive manufacturing constraints.
A closer comparison of the optimization problem of Eq.~\eqref{eq:nonconvex} with that of Eq.~\eqref{eq:can} reveals that the problem of Eq.~\eqref{eq:nonconvex} lacks the structure of a semidefinite program. While the objective function \eqref{eq:nonconvex:obj} and the matrices in the constraints depend affinely on the design variables $\mathbf{a}$, the constraints \eqref{eq:nonconvex:eigen} and \eqref{eq:nonconvex:equilibrium} are non-convex, as the stiffness and mass matrices may become singular when the zero lower bound on the cross-sectional areas is attained in \eqref{eq:nonconvex:areas}. Moreover, \eqref{eq:nonconvex:eigen} may become non-differentiable when an eigenvalue with multiplicity higher than one is encountered. Altogether, this renders the problem \eqref{eq:nonconvex} extremely difficult to solve in its original form. In the following section, we show how to re-cast the problem of Eq.~\eqref{eq:nonconvex} as a linear semidefinite programming problem.
\subsubsection{Convex semidefinite program}
\begin{figure*}[!htbp]
\includegraphics[width=\linewidth]{./include/cut.png}
\caption{Symmetric half of the beam as cut off by the $xz$ plane. The top shell surface is hidden to reveal the internal structure.}
\label{fig:cut}
\end{figure*}
Eigenvalue constraints such as \eqref{eq:nonconvex:eigen} have already been studied in detail by \citet{Ohsaki1999} and \citet{Achtziger2007}. Their results allow us to rewrite \eqref{eq:nonconvex:eigen} equivalently as a~convex LMI
\begin{equation}
\mathbf{K}^\mathrm{IS}_\mathrm{fv}(\mathbf{a}) + \mathbf{K}_{\mathrm{fv}}^\mathrm{C} - 4 \pi^2 \overline{f}^2 \left(\mathbf{M}_\mathrm{fv}^\mathrm{IS}(\mathbf{a}) + \mathbf{M}_{\mathrm{fv}}^\mathrm{C}\right) \succeq \mathbf{0},
\end{equation}
where the left-hand side expression is a~linear function of $\mathbf{a}$. This constraint also avoids the non-differentiability of multiple eigenvalues, see, e.g., \citep{Achtziger2007}, and effectively eliminates the kinematic variables $\mathbf{u}$ from the problem formulation.
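On a small example, the equivalence between the Rayleigh-quotient bound \eqref{eq:nonconvex:eigen} and the matrix inequality above can be verified numerically; the sketch below uses random symmetric positive definite stand-ins for the assembled stiffness and mass matrices (hypothetical data, not the actual model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def random_spd(n):
    """Random symmetric positive definite stand-in matrix."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

K, M = random_spd(n), random_spd(n)

# smallest generalized eigenvalue of (K, M) via symmetric reduction
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
lam_min = np.linalg.eigvalsh(Linv @ K @ Linv.T).min()

def lmi_holds(lam_bar, tol=1e-9):
    """K - lam_bar * M >= 0, checked via the smallest eigenvalue."""
    return np.linalg.eigvalsh(K - lam_bar * M).min() >= -tol

# the LMI holds exactly for thresholds at or below the smallest
# generalized eigenvalue, i.e. it encodes the Rayleigh-quotient bound
print(lmi_holds(0.5 * lam_min), lmi_holds(2.0 * lam_min))  # -> True False
```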
To attain convexity of the final formulation, the compression molding constraints \eqref{eq:nonconvex:equilibrium}--\eqref{eq:nonconvex:disp} must be enforced only approximately in the form of the LMI \cite{DeKlerk1995,Vandenberghe1996,Ben-Tal1997}:
\begin{equation}
\begin{pmatrix}
c_\mathrm{cm} & -\mathbf{f}^\mathrm{T}_\mathrm{cm} \\
-\mathbf{f}_\mathrm{cm} & \mathbf{K}^\mathrm{IS}_\mathrm{cm}(\mathbf{a})+\mathbf{K}_{\mathrm{cm}}^\mathrm{C}
\end{pmatrix} \succeq \mathbf{0},
\end{equation}
in which $c_\mathrm{cm}$ denotes a prescribed upper bound on compliance (work done by external forces) of the compression molding load case. As found from parametric studies (not shown), an appropriate value of the bound is provided as
\begin{equation}
c_\mathrm{cm} = c_\mathrm{cm,0} \frac{\overline{u}}{\max\left\{ \lvert\mathbf{u}_{\mathrm{cm,disp}} \rvert\right\}},
\end{equation}
where $c_\mathrm{cm,0}$ stands for the compliance of the non-reinforced structure:
\begin{equation}
c_\mathrm{cm,0} = \tilde{\mathbf{f}}_\mathrm{cm}^\mathrm{T} \left( \tilde{\mathbf{K}}_{\mathrm{cm}}^\mathrm{C}\right)^{-1} \tilde{\mathbf{f}}_{\mathrm{cm}}.
\end{equation}%
Here, $\tilde{\mathbf{K}}_{\mathrm{cm}}^\mathrm{C}$ and $\tilde{\mathbf{f}}_\mathrm{cm}$ are constructed from $\mathbf{K}_{\mathrm{cm}}^\mathrm{C}$ and $\mathbf{f}_\mathrm{cm}$, respectively, by application of appropriate boundary conditions. For this particular problem, this compliance bound resulted in a maximum deflection of $0.4$~mm.
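By the Schur complement argument, for a nonsingular stiffness matrix the block LMI holds exactly when the compliance $\mathbf{f}_\mathrm{cm}^\mathrm{T} \left(\mathbf{K}^\mathrm{IS}_\mathrm{cm}(\mathbf{a})+\mathbf{K}_{\mathrm{cm}}^\mathrm{C}\right)^{-1} \mathbf{f}_\mathrm{cm}$ does not exceed $c_\mathrm{cm}$. A small numpy check with a random positive definite stand-in for the stiffness matrix (hypothetical data) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
a = rng.standard_normal((n, n))
K = a @ a.T + n * np.eye(n)        # SPD stiffness stand-in
f = rng.standard_normal(n)

compliance = f @ np.linalg.solve(K, f)   # f^T K^{-1} f

def lmi_holds(c, tol=1e-9):
    """Block LMI [[c, -f^T], [-f, K]] >= 0."""
    top = np.concatenate(([c], -f))
    bottom = np.hstack((-f[:, None], K))
    block = np.vstack((top, bottom))
    return np.linalg.eigvalsh(block).min() >= -tol

# the LMI is feasible precisely for bounds c above the compliance
print(lmi_holds(2.0 * compliance), lmi_holds(0.5 * compliance))  # -> True False
```

This is why bounding the compliance through the block LMI only approximates the pointwise displacement bounds \eqref{eq:nonconvex:disp}: it constrains the work of the external forces rather than each displacement component.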
The final linear semidefinite programming formulation eventually reads as
\begin{subequations}\label{eq:sdp}
\begin{alignat}{4}
\min_{\mathbf{a}}\; &&
\bm{\ell}^\mathrm{T} \mathbf{a} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\;&& && \label{eq:weight}\\
\mathrm{s.t.} \; && \mathbf{K}^\mathrm{IS}_\mathrm{fv}(\mathbf{a}) + \mathbf{K}_{\mathrm{fv}}^\mathrm{C} - 4 \pi^2 \overline{f}^2 \left(\mathbf{M}_\mathrm{fv}^\mathrm{IS}(\mathbf{a}) + \mathbf{M}_{\mathrm{fv}}^\mathrm{C}\right) && \;\succeq\; && \mathbf{0},\label{eq:eigen}\\
&& \begin{pmatrix}
c_\mathrm{cm} & -\mathbf{f}^\mathrm{T}_\mathrm{cm} \\
-\mathbf{f}_\mathrm{cm} & \mathbf{K}^\mathrm{IS}_\mathrm{cm}(\mathbf{a})+\mathbf{K}_{\mathrm{cm}}^\mathrm{C}
\end{pmatrix} && \;\succeq\; && \mathbf{0},\label{eq:static}\\
%
&& \mathbf{1} \overline{a} \ge \mathbf{a} && \;\ge\; && \mathbf{0}.\label{eq:nonneg}
\end{alignat}
\end{subequations}
This formulation now possesses the structure of the linear semidefinite program introduced in Section \ref{sec:sdp}, and thus can be solved efficiently via modern interior-point methods.
For the numerical solution, we adopted the state-of-the-art industrial optimizer \textsc{Mosek}~\citep{mosek}. After discretization, the problem in Eq.~\eqref{eq:sdp} has $10,216$ admissible bars in total, with the corresponding sizes of the linear matrix inequalities $5,154\times5,154$ (free-vibration, Eq.~\eqref{eq:eigen}) and $4,608\times4,608$ (compliance, Eq.~\eqref{eq:static}). After tweaking the optimization problem with the steps outlined in the following subsection, the optimization process itself required $13$~GB of memory, and terminated after $5.75$ core hours running on Intel$^\text{\textregistered}$ Xeon$^\text{\textregistered}$ Gold 6130 processors at the MetaCentrum\footnote{\url{https://metavo.metacentrum.cz/}} virtual organization cluster. The resulting distribution of the optimal internal structure is shown in Fig.~\ref{fig:cut}. Note that the internal structure increased the original weight of the beam, $1,094$~g, by an additional $488$~g ($280$~g of casing and $208$~g of reinforcing bars).
\paragraph{Improving solver performance} To reduce the number of iterations and the time per iteration when solving problem \eqref{eq:sdp}, we first rescale the cross-sectional areas so that their optimal values are of the order of $1.0$~mm$^2$. Second, to considerably improve both the numerical stability and the convergence of the algorithm, we rescale Eqs.~\eqref{eq:eigen} and \eqref{eq:static} with the square roots of the Frobenius norm estimates of $\mathbf{K}_{\mathrm{fv}}^\mathrm{C}$ and $\mathbf{K}_{\mathrm{cm}}^\mathrm{C}$, respectively.
\hl{Finally, using the static condensation, Appendix~{\hyperref[app:a]{A}}, and Schur complement, Appendix~{\hyperref[app:b]{B}}, decomposition techniques, the sizes of LMIs reduce to $3,426 \times 3,426$ (free-vibration, Eq.~{\eqref{eq:eigen}}) and $2,880\times2,880$ (compliance, Eq.~{\eqref{eq:static}}). Consequently, memory usage was decreased from $21$~GB to $13$~GB, and the solution process was accelerated by $71 \%$ (from $19.5$ to $5.75$ core hours).}
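The norm-based rescaling of the LMI blocks can be sketched as follows; \texttt{rescale\_lmi} is an illustrative helper, and the diagonal toy matrix merely stands in for the actual stiffness blocks:

```python
import numpy as np

def rescale_lmi(K):
    """Scale an LMI block by the square root of its Frobenius norm,
    returning the scaled block and the applied factor."""
    s = np.sqrt(np.linalg.norm(K, "fro"))
    return K / s, s

# toy stiffness block with entries spanning several orders of magnitude
K = np.diag([1.0e6, 2.0e6, 5.0e5])
K_scaled, s = rescale_lmi(K)
```

The scaling preserves positive (semi)definiteness, so the feasible set of the rescaled LMI is unchanged, while the entries move towards magnitudes that interior-point solvers handle more reliably.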
\subsection{Post-processing}\label{sec:post}
Manufacturing of the optimal design is preceded by three post-processing steps addressing individual bars, segmentation into modules, and conversion to a solid model. Note that we checked that none of the steps \MT{leads}{led} to a~constraint violation, and that they have a~rather negligible impact on the objective function, i.e., after all post-processing steps\hl{,} the internal structure volume increased from $168.3$~cm$^3$ to $175$~cm$^3$.
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth]{./include/segments.png}
\caption{Segmentation of the beam internal structure.}
\label{fig:segmentation}
\end{figure}
\paragraph{Bars post-processing}
In the initial step of \MT{the }{}bar post-processing, we assign square cross-sections to each bar\MT{,}{} with the square side length according to the optimal area, $d_e = \sqrt{a_e}$. Next, we check for potential intersections of bars\MT{,}{} and place a node at each intersection, which subdivides the intersecting bars in two and defines the new element lengths. Third, for each bar, we set the cross-sectional size $d_e$ to at least $l_e/40$, because more slender bars are difficult to manufacture with the Prusa $3$D printers used in this study. In addition to the optimized bars, the internal structure is extended with short L-shaped beams that ensure \MT{the }{}mechanical interaction between the internal structure and the steel mandrel, thus defining an empty $20.05\times20.05$~mm space along the beam longitudinal axis $x$ for its insertion, see Figs.~\ref{fig:segment_top} and~\ref{fig:endoscope}a.
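The side-length rule of this step can be sketched as a one-line helper; \texttt{bar\_cross\_section} and the slenderness limit $l_e/40$ follow the text, while the numerical inputs are made up for illustration:

```python
import math

def bar_cross_section(area_mm2, length_mm, slenderness_limit=40.0):
    """Square side length for a bar: the square root of the optimized
    area, bounded below by l_e / 40 to remain printable."""
    d = math.sqrt(area_mm2)
    return max(d, length_mm / slenderness_limit)
```

For a $20$~mm bar, an optimized area of $1.21$~mm$^2$ yields the $\sqrt{a_e}$ side of $1.1$~mm, whereas a tiny area of $0.04$~mm$^2$ is thickened to the $0.5$~mm printability bound.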
\begin{figure}[!t]
\begin{subfigure}{\linewidth}
\hfill\includegraphics[width=2.0cm]{./include/alignment_1.pdf}\hfill\raisebox{0.9cm}{$\overset{\scriptsize\text{re-alignment}}{\longrightarrow}$}\hfill\includegraphics[width=2.0cm]{./include/alignment_2.pdf}\hfill\hfill
\caption{}
\end{subfigure}
\par\smallskip
\begin{subfigure}{\linewidth}
{\hfill\scriptsize re-alignment\hfill}
\par\vspace*{-2mm}
\hfill\includegraphics[width=4cm]{./include/alignment_new.pdf}\hfill\raisebox{0.7cm}{$\longrightarrow$}\hfill\includegraphics[width=4cm]{./include/alignment_new2.pdf}\hfill\hfill\\
\scriptsize \hspace*{0.5mm} left end \hspace{5mm} center \hspace{4mm} right end \hspace{5mm} left end \hspace{5mm} center \hspace{4mm} right end
\caption{}
\end{subfigure}
\caption{Illustration of bar cross-sections re-alignment along the beam (a) $yz$ section, and (b) longitudinal axis $x$.}
\label{fig:alignment}
\end{figure}
\paragraph{Segmentation}
To enable parallel manufacturing with conventional $3$D printers, we split the optimized internal structure into $48$ segments of approximately $20$~mm in length, to be \MT{later assembled}{assembled later} on the steel mandrel; see Fig.~\ref{fig:segmentation}, where ten selected segments are shown. Such segmentation requires re-alignment of bars within each beam cross-section and along the beam longitudinal axis\MT{,}{} to ensure the correct external beam dimension and a clearly defined interface among adjacent modules, see Fig.~\ref{fig:alignment} for an illustration. Note that \MT{the }{}segment production does not require any supporting material when printed along the beam longitudinal axis $x$, which would be impossible when printing the internal structure as a single-piece product.
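Assigning nodes to segments amounts to binning their longitudinal coordinates. The sketch below assumes a $960$~mm stiffened span ($48 \times 20$~mm; the exact span is not stated here), so both the helper and the default arguments are illustrative:

```python
def segment_index(x_mm, span_mm=960.0, n_segments=48):
    """Map a node's longitudinal coordinate (mm) to its segment index,
    assuming n_segments equal segments over the stiffened span."""
    seg = int(x_mm // (span_mm / n_segments))
    return min(seg, n_segments - 1)  # clamp the node at the far end
```

Bars crossing a segment boundary would then be split at the interface, which is where the cross-section re-alignment of Fig.~\ref{fig:alignment} becomes necessary.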
\begin{figure*}[b]
\centering
\begin{tikzpicture}
\begin{scope}
\node (render) {\includegraphics[width=7.5cm]{./include/part.png}};
\node[inner sep=0pt,below=\belowcaptionskip of render,text width=1cm,align=center]{\footnotesize(c)};
\node[circle,color=black,thick,fill=white,inner sep=0pt,minimum size=4mm] (a2) at (-0.35,2.55) {\footnotesize a};
\node[circle,color=black,thick,fill=white,inner sep=0pt,minimum size=4mm] (b2) at (0.47,-1.0) {\footnotesize b};
\node[circle,color=black,thick,fill=white,inner sep=0pt,minimum size=4mm] (c2) at (1.67,2.65) {\footnotesize d};
\node[circle,color=black,thick,fill=white,inner sep=0pt,minimum size=4mm] (d2) at (1.0,-2.1) {\footnotesize e};
\end{scope}
\begin{scope}[xshift=-6.7cm, yshift=2cm]
\node (01) {\includegraphics[height=3.5cm]{./include/endoscope/endoscope02.png}};
\node[inner sep=0pt,below=\belowcaptionskip of 01,text width=1cm,align=center]{\footnotesize(a)};
\end{scope}
\begin{scope}[xshift=-6.7cm, yshift=-2cm]
\node (02) {\includegraphics[height=3.5cm]{./include/endoscope/endoscope07.png}};
\node[inner sep=0pt,below=\belowcaptionskip of 02,text width=1cm,align=center]{\footnotesize(b)};
\end{scope}
\begin{scope}[xshift=6.7cm, yshift=2cm]
\node (03) {\includegraphics[height=3.5cm]{./include/endoscope/endoscope08.png}};
\node[inner sep=0pt,below=\belowcaptionskip of 03,text width=1cm,align=center]{\footnotesize(d)};
\end{scope}
\begin{scope}[xshift=6.7cm, yshift=-2cm]
\node (04) {\includegraphics[height=3.5cm]{./include/endoscope/endoscope03.png}};
\node[inner sep=0pt,below=\belowcaptionskip of 04,text width=1cm,align=center]{\footnotesize(e)};
\end{scope}
\end{tikzpicture}
\caption{Endoscope camera photographs (a), (b), (d) and (e) as captured in the manufactured beam interior (c)\MT{,}{} showing that the $3$D-printed internal structure\hl{ successfully} withstood the compression-molding loads\MT{ successfully}{}.}
\label{fig:endoscope}
\end{figure*}
{
\paragraph{Solid conversion}
\MT{The a}{A}xial model conversion is performed independently and in parallel for each node of the ground\parfillskip=0pt\par}
\begin{figure}[H]
\begin{subfigure}{0.15\linewidth}
\includegraphics[width=\linewidth]{./include/segments/47.png}
\end{subfigure}%
\hfill\begin{subfigure}{0.15\linewidth}
\includegraphics[width=\linewidth]{./include/segments/46.png}
\end{subfigure}%
\hfill\begin{subfigure}{0.15\linewidth}
\includegraphics[width=\linewidth]{./include/segments/45.png}
\end{subfigure}%
\hfill\begin{subfigure}{0.15\linewidth}
\includegraphics[width=\linewidth]{./include/segments/44.png}
\end{subfigure}%
\hfill\begin{subfigure}{0.15\linewidth}
\includegraphics[width=\linewidth]{./include/segments/41.png}
\end{subfigure}%
\hfill\begin{subfigure}{0.15\linewidth}
\includegraphics[width=\linewidth]{./include/segments/36.png}
\end{subfigure}%
\caption{Solid models of typical topologies of segments.}
\label{fig:segment_top}
\end{figure}
\noindent structure. We first determine all bars attached to the considered node, elongate them by one half of their cross-sectional side lengths at both of their ends, and cut off the more distant half of each of these bars. These half-bars are then modeled by a mesh-based representation. Geometries of individual nodes then result from the mesh-boolean operations\MT{,}{} performed with the \textsc{Cork}\footnote{\url{https://github.com/gilbo/cork}} library. Finally, the overall segment geometry consists of the union of all nodal geometries, see Fig.~\ref{fig:segment_top} for typical topologies of post-processed segments, and can be readily exported to\MT{ the}{} patch-based \textsc{Stl} file format, for example.
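The half-bar construction behind the per-node geometries can be sketched on the bar axes alone; \texttt{half\_bars} is a hypothetical helper (the actual implementation operates on solid meshes and boolean unions):

```python
import math

def half_bars(p0, p1, d):
    """Split a bar p0 -> p1 of square side d into two half-bars for
    per-node solid modeling: each half is elongated by d/2 at its
    node end and cut at the bar midpoint."""
    L = math.dist(p0, p1)
    u = tuple((b - a) / L for a, b in zip(p0, p1))       # unit axis
    mid = tuple((a + b) / 2 for a, b in zip(p0, p1))     # cutting plane
    e0 = tuple(a - d / 2 * c for a, c in zip(p0, u))     # elongated end at p0
    e1 = tuple(b + d / 2 * c for b, c in zip(p1, u))     # elongated end at p1
    return (e0, mid), (mid, e1)
```

Each returned half is then assigned to its node, so the union of all nodal geometries reproduces the full bar without gaps at the joints.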
\section{Results}\label{sec:results}
\subsection{Manufacturing}
After the automated export of the optimized internal structure into\MT{ the}{} \textsc{Stl} format, the part was additively manufactured using the Fused Deposition Modeling method with Prusa i3 MK3 printers. Printed segments were inserted on a~$20\times20\times1,200$~mm steel mandrel of $1.5$~mm wall thickness, with its surface lubricated with \MT{v}{V}aseline to simplify the pull-out process, and connected with acetone etching and a thin layer of epoxy glue.
\hl{The prototype beam was produced by the CompoTech Plus company using the filament winding technology with axial fiber placement. This technology relies on positioning tows of carbon fibers, impregnated with epoxy resin, onto the casing in specified directions and quantities to reach the expected dimensions and mechanical properties of the final product. The casing defines the internal shape of the beam and acts as an internal mold. After the fiber placement operation, the product (with still-liquid resin) is placed into the press, the outer shape is formed, and the composite is consolidated. In the press, the product hardens at room temperature. Finally, the prototype, Fig.~{\ref{fig:prototype}}, is post-cured at an elevated temperature of $90^\circ$C.}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{./include/manbeam2.jpg}
\caption{Manufactured prototype of a composite beam with optimized stiffening internal structure.}
\label{fig:prototype}
\end{figure}
\MT{S}{The s}uccessful manufacturing process was followed by inspection of the prototype using an endoscope camera. Video and photograph sequences, see Fig.~\ref{fig:endoscope}, revealed that the internal structure successfully withstood the compression molding pressure without any significant visual defects. The minor deviations from the assumed model reside in a small amount of \MT{v}{V}aseline residue\MT{s}{} and \MT{a seldom}{slight} leakage of\hl{ the} epoxy resin through casing interfaces. Another difference appeared in \hl{the }increased outer dimensions of the beam, $0.32$~mm on average, caused by an insufficiently closed press cover. The total prototype weight of $1.768$~kg \MT{is}{was} therefore \MT{by}{ }$186$~g higher than the model predictions due to the additional epoxy resin.
\subsection{Verification}\label{sec:verification}
Recall that in the optimization we required increasing the fundamental free-vibration eigenfrequency above $300$~Hz. To check this value, we employed an independent model in \textsc{Ansys}. Compared to the model used for optimization, this model employs the dimensions measured in-situ, more refined discretization of the outer shells (element type \textsc{Shell181}), and models the internal structure with beam elements (\textsc{Beam188}) instead of trusses. Besides, the composite shells are supplemented with an additional layer of epoxy resin to account for the increased epoxy content. As a result, the model predicts that the beam fundamental eigenfrequency increased by $92$\MT{~}{}\%, from $128.5$~Hz to $246.7$~Hz; compare Figs.~\ref{fig:empty_simple}~and~\ref{fig:internal_simple}. The effect of wall instabilities was reduced jointly in all the remaining eigenmodes (not shown).
Even though the fundamental eigenfrequency did not exceed the limit value, we find these results satisfactory for two reasons: First, we attribute this discrepancy mainly to the manufacturing imperfections, which stem from the prototype character of the manufacturing process and can be easily resolved in \MT{a~}{}serial production. Second, the constraint violation is comparable to the difference between numerics and experiments as shown in the next section.
\begin{figure}[!t]
\begin{subfigure}[c]{0.48\linewidth}
\centering
\begin{minipage}{\linewidth}
\begin{tikzpicture}
\node (b) at (-0.5,0.5) {\includegraphics[width=0.94\textwidth]{./include/supported_internal/supported_internal_mode1_246_681797hz.png}};
\node (a) at (1.0,-0.3) {\includegraphics[width=0.18\textwidth]{./include/supported_internal/supported_internalCS_mode1_246_681797hz.png}};
\end{tikzpicture}
\end{minipage}
\caption{First eigenmode\hl{ with a frequency} of\MT{ frequency}{} $246.7$~Hz.}
\end{subfigure}%
\hfill\begin{subfigure}[c]{0.48\linewidth}
\centering
\begin{tikzpicture}
\node (b) at (-0.5,0.5) {\includegraphics[width=0.94\textwidth]{./include/supported_internal/supported_internal_mode2_347_029804hz.png}};
\node (a) at (1.0,-0.3) {\includegraphics[width=0.15\textwidth]{./include/supported_internal/supported_internalCS_mode2_347_029804hz.png}};
\end{tikzpicture}
\caption{Second eigenmode\hl{ with a frequency} of\MT{ frequency}{} $347.0$~Hz.}
\end{subfigure}
\caption{Axonometric and front views of the (a) first and (b) second eigenmodes of the reinforced composite beam predicted by the refined finite element model.}
\label{fig:internal_simple}
\end{figure}
\begin{figure}[!b]
\centering
\begin{tikzpicture}
\node (a) {\includegraphics[height=8cm]{./include/test.png}};
\node[circle,color=black,thick,fill=white,inner sep=0pt,minimum size=3mm] (a2) at (0.5,-3.75) {\footnotesize 3};
\node[circle,color=black,thick,fill=white,inner sep=0pt,minimum size=3mm] (a2) at (0.3,-2.95) {\footnotesize 2};
\node[circle,color=black,thick,fill=white,inner sep=0pt,minimum size=3mm] (a2) at (-0.1,-2.95) {\footnotesize 1};
\end{tikzpicture}
\caption{Free-free-vibration validation setup. Locations of $54$ impact points are indicated by gray squares and positions of $3$ accelerometers are marked by white circles.}
\label{fig:hammer}
\end{figure}
\subsection{Validation}
Dynamic response was validated with the roving hammer test in the free-free-vibration setting because it eliminates the need to reproduce the simply supported kinematic boundary condition in the experiment. To this end, the beam was suspended at one of its ends, three piezoelectric acceleration transducers Type 4507B005 Br\"{u}el\&Kjaer were placed on the beam\hl{'s} outer surface, two of which were located in the middle of adjacent sides of the beam\hl{'s} cross-section \MT{in}{at} one\MT{ }{-}eighth of the beam\hl{'s} length, and the third one was placed \MT{in the beam}{at the} corner\hl{ of the beam}, Fig.~\ref{fig:hammer}. Two adjacent sides of the beam surface were marked with a~regularly spaced grid of $54$ points, $27$ on each side. These points then served as the excitation points for the impact hammer Type 8206 Br\"{u}el\&Kjaer equipped with \MT{the}{a~}force transducer.
\MT{The m}{M}easurement was realized \MT{by}{using} data acquisition front-end hardware Type 3560B Br\"{u}el\&Kjaer. The frequency response functions (FRFs) were evaluated from the recorded response (acceleration) and excitation (force) using the Fast Fourier Transform for all $54$ points. The natural frequencies and mode shapes were evaluated from the FRFs with\MT{ the}{} MEscope software developed by\hl{ the} Vibrant Technology company.
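The FRF evaluation from impact data can be sketched with a standard H$_1$ estimator; the function below is a minimal single-record version (no windowing or averaging over repeated impacts, unlike a production modal-analysis workflow), verified on a synthetic single-tone signal:

```python
import numpy as np

def frf_h1(force, accel, fs):
    """Single-record H1 frequency-response estimate (acceleration per
    unit force): cross-spectrum over the input auto-spectrum."""
    F = np.fft.rfft(force)
    A = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(len(force), d=1.0 / fs)
    H = (np.conj(F) * A) / (np.conj(F) * F + 1e-30)  # tiny regularizer
    return freqs, H

# synthetic check: a pure 50 Hz response three times the 50 Hz input
fs, n = 1024, 1024
t = np.arange(n) / fs
force = np.sin(2.0 * np.pi * 50.0 * t)
accel = 3.0 * force
freqs, H = frf_h1(force, accel, fs)
```

Peaks of the measured FRF magnitudes over all excitation points then indicate the natural frequencies, and the complex amplitudes across the point grid yield the mode shapes.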
Experimentally determined natural modes and the values of natural eigenfrequencies, Fig.~\ref{fig:expdata} top, were compared with the results of numerical simulations, Fig.~\ref{fig:numdata} bottom. Direct comparison in Table~\ref{tab:comparison} reveals\MT{ a}{} sufficient agreement\hl{, with deviations of} up to $9 \%$\hl{,} for eigenfrequencies of shear, bending, and torsional eigenmodes. In the case of buckling, we failed to measure the first and second buckling natural modes, and for the higher eigenmodes\hl{,} the model predictions underestimate the natural frequencies by more than $20 \%$. We attribute these deviations to the overall difficulty of measuring the buckling natural modes and to the manufacturing defects discussed in the previous section.
\begin{table}[!b]
\centering
\caption{Comparison of model prediction of eigenfrequencies $f_{\mathrm{FEM}}$ and measured natural frequencies $f_{\mathrm{EXP}}$ using the roving hammer test. Accuracy of individual measurements is denoted by $A$ and the deviation of the model from the experiment by $D$.}
\label{tab:comparison}
\scriptsize
\begin{tabular}{lcccc}
\hline
Eigenmode & $f_{\mathrm{FEM}}$ [Hz] & $f_{\mathrm{EXP}}$ [Hz] & $A$ [Hz] & $D$ [$\%$] \\
\hline
First shear & $600.1$ & $658$ & $2$ & $-8.8$ \\
First bending $y$ & $747.0$ & $714$ & $2$ & $+4.6$ \\
First bending $z$ & $748.2$ & $724$ & $2$ & $+3.3$ \\
Third buckling & $682.3$ & $846$ & $4$ & $-19.3$ \\
First torsion & $833.7$ & $864$ & $4$ & $-3.5$ \\
Fourth buckling & $708.9$ & $896$ & $4$ & $-20.9$ \\
Fifth buckling & $741.4$ & $942$ & $6$ & $-21.3$ \\
Sixth buckling & $790.6$ & $1004$ & $6$ & $-21.3$ \\
Second bending $z$ & $1020.0$ & $1102$ & $6$ & $-7.4$\\
\hline
\end{tabular}
\end{table}
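The deviation column of Table~\ref{tab:comparison} follows from a simple relative-error relation, sketched below and checked against a few table rows:

```python
def deviation_percent(f_fem, f_exp):
    """Model-vs-experiment deviation D = (f_FEM - f_EXP) / f_EXP in %."""
    return 100.0 * (f_fem - f_exp) / f_exp
```

A negative value of $D$ therefore means the finite element model underestimates the measured natural frequency.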
\begin{figure*}[!htbp]
\centering
\begin{minipage}{\linewidth}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/01shear658b.png}
\caption{$658$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/02bend714b.png}
\caption{$714$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/03bend724b.png}
\caption{$724$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/04buck846b.png}
\caption{$846$~Hz}
\end{subfigure}\\
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/05tors864b.png}
\caption{$864$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/06buck896b.png}
\caption{$896$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/07buck942b.png}
\caption{$942$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/naturalmodes/09bend1102b.png}
\caption{$1102$~Hz}
\end{subfigure}
\end{minipage}
\setcounter{subfigure}{0}
\begin{minipage}{\linewidth}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/07.png}
\caption{$600.057$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/13.png}
\caption{$747.011$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/14.png}
\caption{$748.241$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/10.png}
\caption{$682.283$~Hz}
\end{subfigure}\\
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/16.png}
\caption{$833.719$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/11.png}
\caption{$708.852$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/12.png}
\caption{$741.389$~Hz}
\end{subfigure}%
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=3.5cm]{./include/eigenmodes_free_internal/24.png}
\caption{$1019.972$~Hz}
\end{subfigure}
\end{minipage}
\caption{Selected experimentally determined natural frequencies and mode shapes, (a)--(h) top, and finite element model predictions of eigenmodes and eigenfrequencies, (a)--(h) bottom.}
\label{fig:numdata}\label{fig:expdata}
\end{figure*}
\section{Summary and outlook}
This contribution introduces and investigates a unique, fully automated \MT{pipeline}{procedure} from an idea to prototyping, with applications to the manufacturing of thin-walled structural composite \MT{tubes}{hollow beams}. In particular, the~considered prototype product is stiffened with a~low-weight internal structure designed by an efficient convex linear semidefinite programming formulation. This formulation increased the fundamental free-vibration eigenfrequency above a specified threshold value while avoiding the traditional issue of non-differentiability of multiple eigenvalues~\cite{Achtziger2007}, and limited the structural compliance in a compression-molding load case. The optimization output of the non-uniformly distributed lattice-like internal structure was further automatically post-processed and converted into a solid model ready for support-less additive manufacturing.
Our methodology was verified by designing and producing the simply supported CFRP beam prototype. Optimization yielded an internal structure of $488$~g, which increased the fundamental eigenfrequency by $92\%$ and limited the effect of wall instabilities. Moreover, the deflections within the compression-molding load case were limited to $\pm0.5$~mm.
After a successful prototype production, the structural response was validated using the roving hammer test, which showed that bending, torsional, and shear eigenmodes exhibited good agreement with model predictions. For the wall buckling eigenmodes, however, the finite element model underestimated the natural frequencies by almost $22\%$. We attribute this to difficulties in measuring these natural modes and to manufacturing defects associated with compression-molding deformations of the casing.
Improving the structural response with a material that is more than two orders of magnitude more compliant than CFRP suggests concentrating on substituting ABS with high-stiffness continuous carbon fiber in future studies. Another essential future enhancement resides in accelerating the optimization algorithm by exploiting the range-space sparsity \citep{Kim2011} associated with the segment-based internal-structure decomposition.
\section*{Acknowledgments}
We thank Edita Dvo\v{r}\'{a}kov\'{a} for providing us with her implementation of the \textsc{Mitc4} shell elements~\cite{Dvorakova}, and Ond\v{r}ej Roko\v{s} and Stephanie Krueger for a critical review of the initial versions of this manuscript.
The work of Jan Nov\'{a}k and Robin Poul was supported by the Technology Agency of the Czech Republic, through the project TA\v{C}R TH02020420. Marek Tyburec, Jan Zeman, and Mat\v{e}j Lep\v{s} acknowledge the support of the Czech Science Foundation project No. 19-26143X.
Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the program ``Projects of Large Research, Development, and Innovations Infrastructures'' (CESNET LM2015042), is greatly appreciated.
\section*{Data availability}
The raw/processed data required to reproduce these findings cannot be shared at this time due to legal or ethical reasons.
\section*{\hl{Appendix A. Static condensation of static LMI}}\label{app:a}
\hl{Consider the equilibrium equation}
\begin{equation}[box=\ybox]
\mathbf{K}(\mathbf{a}) \mathbf{u} = \mathbf{f}\label{eq:system}
\end{equation}
\hl{split into two sets of equations}
\begin{equation}[box=\ybox]
\begin{pmatrix}
\mathbf{K}_a (\mathbf{a}) & \mathbf{K}_b\\
\mathbf{K}_b^\mathrm{T} & \mathbf{K}_c
\end{pmatrix}
\begin{pmatrix}
\mathbf{u}_a\\
\mathbf{u}_b
\end{pmatrix} =
\begin{pmatrix}
\mathbf{f}_a\\
\mathbf{f}_b
\end{pmatrix},
\end{equation}
\hl{such that only the principal submatrix $\mathbf{K}_a (\mathbf{a})$ depends affinely on $\mathbf{a}$\footnote{\hl{In the context of this article, the matrix $\mathbf{K}_a(\mathbf{a})$ comprises the degrees of freedom of the truss ground structure, $\mathbf{K}_c$ contains the remaining (rotational) degrees of freedom, and $\mathbf{K}_b$ is the coupling term.}}. Assume that the system~{\eqref{eq:system}} is uniquely solvable for some $\mathbf{a}$, i.e., that $\exists \mathbf{a}\ge\mathbf{0}: \mathbf{K}(\mathbf{a}) \succ \mathbf{0}$, where ``$\succ \mathbf{0}$'' denotes positive definiteness of the left-hand side. Note that checking $\mathbf{a} = \mathbf{1}$ is sufficient to verify that no rigid movement within the structure can occur. Because $\mathbf{K}_c$ is therefore invertible, the degrees of freedom $\mathbf{u}_b$ can be expressed from the second row in terms of $\mathbf{u}_a$}
\begin{equation}[box=\ybox]
\mathbf{u}_b = \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b - \left(\mathbf{K}_c\right)^{-1} \mathbf{K}_b^\mathrm{T} \mathbf{u}_a \label{eq:ub}
\end{equation}
\hl{and inserted back into the first row,}
\begin{equation}[box=\ybox]
\left[ \mathbf{K}_a (\mathbf{a}) - \mathbf{K}_b \left(\mathbf{K}_c\right)^{-1} \mathbf{K}_b^\mathrm{T}\right] \mathbf{u}_a = \mathbf{f}_a - \mathbf{K}_b \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b.\label{eq:cond_equilibrium}
\end{equation}
\hl{Structural compliance (work done by external forces) is expressed as}
\begin{equation}[box=\ybox]
c = \mathbf{u}_a^\mathrm{T} \mathbf{f}_a + \mathbf{u}_b^\mathrm{T} \mathbf{f}_b.
\end{equation}
\hl{After inserting {\eqref{eq:ub}} and acknowledging that $\mathbf{K}_c^{-1}$ is Hermitian, we obtain}
\begin{equation}[box=\ybox]
c =
\mathbf{u}_a^\mathrm{T}
\left[
\mathbf{f}_a
- \mathbf{K}_b \left(\mathbf{K}_c\right)^{-1}
\mathbf{f}_b
\right]
+ \mathbf{f}_b^\mathrm{T} \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b, \label{eq:cond_comp}
\end{equation}
\hl{i.e., compliance of the condensed problem {\eqref{eq:cond_equilibrium}} and a constant term. Because the compliance of the condensed problem is positive by definition, the constant term represents a~non-negative lower bound on compliances achievable by the internal structure design.}
\hl{Finally, the LMI}
\begin{equation}[box=\ybox]
\begin{pmatrix}
c & -\mathbf{f}^\mathrm{T}\\
-\mathbf{f} & \mathbf{K}(\mathbf{a})
\end{pmatrix} \succeq \mathbf{0}\label{eq:lmi}
\end{equation}
\hl{is equivalent to a smaller LMI}
{\setlength{\mathindent}{0cm}
\begin{equation}[box=\ybox]
\negthickspace\negthickspace\negthickspace%
\begin{pmatrix}
c -\mathbf{f}_b^\mathrm{T} \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b &
-\mathbf{f}_a^\mathrm{T} + \mathbf{f}_b^\mathrm{T} \left(\mathbf{K}_c\right)^{-1} \mathbf{K}_b^\mathrm{T}\\
-\mathbf{f}_a + \mathbf{K}_b \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b &
\mathbf{K}_a (\mathbf{a}) - \mathbf{K}_b \left(\mathbf{K}_c\right)^{-1} \mathbf{K}_b^\mathrm{T}
\end{pmatrix} \succeq \mathbf{0}.\negthickspace\label{eq:statconlmi}
\end{equation}}%
\hl{Further, if $c > \mathbf{f}_b^\mathrm{T} \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b$ is a prescribed constant (i.e., not a variable), then {\eqref{eq:statconlmi}} is further reducible, using the Schur complement lemma, e.g., \mbox{\citep[Proposition 16.1]{Gallier2011}}, to a~yet smaller LMI}
\begin{equation}[box=\ybox]
\begin{split}
\mathbf{K}_a (\mathbf{a}) - \mathbf{K}_b \left(\mathbf{K}_c\right)^{-1} \mathbf{K}_b^\mathrm{T} -
\left(-\mathbf{f}_a^\mathrm{T} + \mathbf{f}_b^\mathrm{T} \left(\mathbf{K}_c\right)^{-1} \mathbf{K}_b^\mathrm{T}\right)\;\;\\
\quad\left(c -\mathbf{f}_b^\mathrm{T} \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b\right)^{-1}
\left(-\mathbf{f}_a + \mathbf{K}_b \left(\mathbf{K}_c\right)^{-1} \mathbf{f}_b\right) \succeq \mathbf{0}.
\end{split}
\end{equation}
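The equivalence of the full and condensed compliance expressions, Eq.~{\eqref{eq:cond_comp}}, can be checked numerically; the random symmetric positive definite matrix below merely stands in for an actual stiffness matrix, and the block sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_a, n_b = 4, 3
X = rng.standard_normal((n_a + n_b, n_a + n_b))
K = X @ X.T + (n_a + n_b) * np.eye(n_a + n_b)   # SPD stand-in stiffness
Ka, Kb, Kc = K[:n_a, :n_a], K[:n_a, n_a:], K[n_a:, n_a:]
f = rng.standard_normal(n_a + n_b)
fa, fb = f[:n_a], f[n_a:]

# full problem: compliance c = u^T f with K u = f
c_full = np.linalg.solve(K, f) @ f

# condensed problem (Schur complement S, reduced load g)
# plus the constant term fb^T Kc^{-1} fb
S = Ka - Kb @ np.linalg.solve(Kc, Kb.T)
g = fa - Kb @ np.linalg.solve(Kc, fb)
c_cond = np.linalg.solve(S, g) @ g + fb @ np.linalg.solve(Kc, fb)
```

Both compliance values agree to machine precision, confirming that the condensation only shifts the objective by a design-independent constant.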
\section*{\hl{Appendix B. Reducing size of free-vibration LMI}}\label{app:b}
\hl{In the case of the free-vibration constraint, we need to directly apply the (generalized) Schur complement lemma. Beginning with reordering of rows and columns, we split the symmetric LMI {\eqref{eq:eigen}} such that only the $\mathbf{K}_a (\mathbf{a})$ and $\mathbf{M}_a (\mathbf{a})$ matrices are functions of $\mathbf{a}$, and the other blocks are constant,}
\begin{equation}[box=\ybox]
\begin{pmatrix}
\mathbf{K}_a (\mathbf{a}) - 4 \pi^2 \overline{f}^2 \mathbf{M}_a(\mathbf{a}) &
\mathbf{K}_b - 4 \pi^2 \overline{f}^2 \mathbf{M}_b \\
\mathbf{K}_b^\mathrm{T} - 4 \pi^2 \overline{f}^2 \mathbf{M}_b^\mathrm{T} &
\mathbf{K}_c - 4 \pi^2 \overline{f}^2 \mathbf{M}_c
\end{pmatrix} \succeq \mathbf{0}.\label{eq:schur_LMI}
\end{equation}
\hl{For the (standard) Schur complement trick we require $\mathbf{K}_c - 4 \pi^2 \overline{f}^2 \mathbf{M}_c \succ \mathbf{0}$ \mbox{\cite[Proposition 16.1]{Gallier2011}}. Since $\mathbf{K}_c \succ \mathbf{0}$ (boundary conditions exclude rigid motions), and $\mathbf{M}_c \succ \mathbf{0}$ by definition, we only need to ensure that the fundamental eigenfrequency $f_0$ of the generalized eigenvalue problem}
\begin{equation}[box=\ybox]
\mathbf{K}_c \mathbf{u}_b - \lambda \mathbf{M}_c \mathbf{u}_b = 0, \label{eq:schur_eig}
\end{equation}
\hl{with $\lambda = 4 \pi^2 f^2$, is strictly greater than $\overline{f}$.}
\hl{Let us therefore first assume that $0 \le \overline{f} < f_0$. Then, the inverse of $\mathbf{K}_c - 4 \pi^2 \overline{f}^2 \mathbf{M}_c$ exists and {\eqref{eq:schur_LMI}} can be rewritten equivalently using the Schur complement lemma into a smaller-sized LMI}
\begin{equation}[box=\ybox]
\begin{split}
\mathbf{K}_a (\mathbf{a}) - 4 \pi^2 \overline{f}^2 \mathbf{M}_a(\mathbf{a}) - \left(\mathbf{K}_b - 4 \pi^2 \overline{f}^2 \mathbf{M}_b\right)\quad\quad\quad \\
\quad\quad\left(\mathbf{K}_c - 4 \pi^2 \overline{f}^2 \mathbf{M}_c\right)^{-1}
\left(\mathbf{K}_b^\mathrm{T} - 4 \pi^2 \overline{f}^2 \mathbf{M}_b^\mathrm{T}\right)
\succeq \mathbf{0}.
\end{split}
\end{equation}
\hl{Second, consider that $f_0 < \overline{f}$. Because the matrix $\mathbf{K}_c - 4 \pi^2 (f_0 + \varepsilon)^2 \mathbf{M}_c$ is indefinite for any $\varepsilon>0$, which renders the original LMI {\eqref{eq:schur_LMI}} infeasible, the eigenfrequency $f_0$ constitutes an upper bound for achievable fundamental eigenfrequencies of the reinforced structure. From the mechanical point of view, the eigenmodes $\mathbf{u}_b$ associated with $f_0$ excite degrees of freedom not reinforced by the internal structure, and therefore the associated eigenfrequencies cannot be increased by any admissible internal structure design (given the specific discretization).}
\hl{In the case $\overline{f} = f_0$, reduction of {\eqref{eq:schur_LMI}} relies on the generalized Schur complement lemma \mbox{\cite[Theorem 16.1]{Gallier2011}}, so that {\eqref{eq:schur_LMI}} is equivalent to}
\begin{subequations}
\begin{align}[box=\ybox]
\begin{split}
\mathbf{K}_a (\mathbf{a}) - 4 \pi^2 \overline{f}^2 \mathbf{M}_a(\mathbf{a}) - \left(\mathbf{K}_b - 4 \pi^2 \overline{f}^2 \mathbf{M}_b\right) \qquad\\
\qquad\left(\mathbf{K}_c - 4 \pi^2 \overline{f}^2 \mathbf{M}_c\right)^{\dagger}
\left(\mathbf{K}_b^\mathrm{T} - 4 \pi^2 \overline{f}^2 \mathbf{M}_b^\mathrm{T}\right)
\succeq \mathbf{0},
\end{split}\label{eq:schurG}\\
\begin{split}
\left[\mathbf{I} - \left(\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c\right)\left(\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c\right)^{\dagger}\right]\qquad\\
\qquad\left(\mathbf{K}_b^\mathrm{T} - 4 \pi^2 \overline{f}^2 \mathbf{M}_b^\mathrm{T}\right) = \mathbf{0},\label{eq:schur_gen}
\end{split}
\end{align}
\end{subequations}
\hl{where $\left(\bullet \right)^\dagger$ denotes the Moore-Penrose pseudo-inverse of $\bullet$, and $\mathbf{I}$ is the identity matrix. The second condition {\eqref{eq:schur_gen}} holds iff the columns of $\mathbf{K}_b^\mathrm{T} - 4 \pi^2 \overline{f}^2 \mathbf{M}_b^\mathrm{T}$ are in the image of $\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c$. Indeed, {\eqref{eq:schur_gen}} can then be rewritten as}
\begin{equation}[box=\ybox]
\begin{split}
\left[\left(\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c\right) - \left(\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c\right)\right.\qquad\qquad\\
\quad\quad\left.\left(\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c\right)^{\dagger} \left(\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c\right) \right] \mathbf{C} = \mathbf{0},\label{eq:schur_gen2}
\end{split}
\end{equation}
\hl{with the columns of $\mathbf{C}$ being the coefficients of linear combinations of the columns of $\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c$, making the term in the square brackets vanish~\mbox{\cite[Lemma 14.1]{Gallier2011}}}.
\hl{Because $\mathrm{Im}( \mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c ) = \mathrm{Ker}( \mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c )^\perp$ by \mbox{\cite[Lemma 13.1]{Gallier2011}}, it is given by}
\begin{equation}[box=\ybox]
\text{span}\left\{\mathbf{u}_b: \left(\mathbf{K}_c - 4\pi^2 \overline{f}^2 \mathbf{M}_c\right)\mathbf{u}_b = \mathbf{0} \right\}^\perp.
\end{equation}
\hl{Clearly, $\overline{f} = f_0$ might be achieved iff the columns of the coupling term $\mathbf{K}_b^\mathrm{T} - 4 \pi^2 \overline{f}^2 \mathbf{M}_b^\mathrm{T}$ are orthogonal to the eigenmodes occurring in {\eqref{eq:schur_eig}} at $f_0$. From the mechanical point of view, induction of these eigenmodes would result in a~decrease of the associated eigenfrequencies. Note that in practice, equation {\eqref{eq:schur_gen}} can be verified numerically, but it does not guarantee a feasible solution to {\eqref{eq:schurG}}, because other (higher) eigenfrequencies associated with eigenmodes of {\eqref{eq:schur_eig}} may decrease below $f_0$ due to the coupling term.}
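The (standard) Schur complement reduction used above can be verified numerically on a toy example; the generic positive definite matrix below stands in for $\mathbf{K} - 4\pi^2\overline{f}^2\mathbf{M}$ in the regular case $\overline{f} < f_0$, and the block sizes are arbitrary:

```python
import numpy as np

def is_psd(A, tol=1e-9):
    """Positive semidefiniteness test via the smallest eigenvalue."""
    return bool(np.min(np.linalg.eigvalsh((A + A.T) / 2.0)) >= -tol)

rng = np.random.default_rng(1)
n_a, n_b = 3, 4
X = rng.standard_normal((n_a + n_b, n_a + n_b))
A = X @ X.T + 0.1 * np.eye(n_a + n_b)        # positive definite stand-in
Aa, Ab, Ac = A[:n_a, :n_a], A[:n_a, n_a:], A[n_a:, n_a:]

# Schur complement with respect to the positive definite block Ac
S = Aa - Ab @ np.linalg.solve(Ac, Ab.T)
```

By the Schur complement lemma, the full block matrix is positive semidefinite iff the reduced matrix $S$ is (given $\mathbf{A}_c \succ \mathbf{0}$), which is exactly what permits replacing the large LMI by the smaller one.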
\section{Introduction}
Cardiac auscultation is a critical stage in the diagnosis and examination of heart functionality. Phonocardiogram (PCG) provides a recording of subaudible sounds and murmurs from the heart and allows cardiologists to interpret the closure of the heart valves. Heart sounds can reflect the hemodynamical processes of the heart and provide important screening indications of disease in early evaluation stages. The PCG has been proven as an effective tool to reveal several pathological heart defects such as arrhythmias, valve disease, and heart failure \cite{Liu2016}. The goal of this paper is to develop an automatic method for heart sounds analysis, particularly the segmentation and classification of fundamental heart sounds, which is useful to detect heart pathology in clinical applications.
Several automatic methods for heart sound segmentation have been proposed in the literature. Three main problems must be tackled jointly towards fully automatic heart sound analysis. The first is to detect noise to identify the non-cardiac sounds. The second is to segment heart sounds to localize the main sound components. The third is to classify heart sounds into healthy and pathological classes. The performance of the heart sound segmentation is highly dependent on the preprocessing step. This is relatively simple in noise-free recordings. However, in clinical environments, this is difficult due to both endogenous and exogenous in-band noise sources that overlap with the heart sounds frequency range \cite{Kumar2011}. Accurate localization of the fundamental heart sounds will lead to a more accurate classification of any pathology in systolic or diastolic regions \cite{Springer2016, Springer2014}.
The heart sound segmentation methods proposed in the literature can be categorized into three groups: the first is envelope-based methods \cite{Liang1997, Huiying1997, Moukadem2013, Sun2014, Choi2008, Yan2010, Ari2008}; the second is feature-based methods \cite{Naseri2013, Kumar2006, Varghees2014, Pedrosa2014, Nigam2005, Vepa2008, Papadaniil2014, Arash2011}; the third is machine-learning-based methods \cite{Oskiper2002, Sepehri2010, Chen2010, Gupta2007, HongTang2010, Rajan2006}; further reviews and details of these methods can be found in \cite{Liu2016, Springer2016}.
Machine learning methods based on probabilistic models show an improved accuracy on heart sound segmentation. Gamero and Watrous \cite{Gamero2003} proposed a hidden Markov model (HMM) approach to detect the S1 and S2 sounds. They used a topology combining two separate HMMs to model the mel-frequency cepstral coefficients (MFCC) of the systolic and diastolic intervals, respectively. The method was evaluated on 80 mostly healthy subjects and achieved a sensitivity of 95\% and positive predictivity of 97\%. Ricke \textit{et al.} \cite{Ricke2005} extended the conventional HMM to a variable-state embedded HMMs method to model the heart sound components (S1, Systole, S2, and Diastole) along with time-variant MFCC, Shannon energy, and regression coefficients. Evaluation only on 9 subjects shows an accuracy of 98\% using eight-fold cross-validation. Gill \textit{et al.} \cite{Gill2005} suggested a modified HMM to allow for a smooth transition between states.
On 44 heart sound recordings from 17 subjects, the method showed a sensitivity and positive predictivity of 98.6\% and 96.9\% for S1, and 98.3\% and 96.5\% for S2 sound detection. Sedighian \textit{et al.} \cite{Sedighian2014} also used a homomorphic filtering approach to extract envelograms from the heart sound recordings. Envelope peak detection method was used along with two-states HMM to identify the S1 and S2 sound. The method was evaluated on the PASCAL database \cite{Bentley2011} and obtained an average accuracy of 92.4\% for S1 and 93.5\% for S2 sound segmentation. Schmidt \textit{et al.} \cite{Schmidt2010} proposed a duration-dependent HMM method to model the transition duration of each HMM state.
The performance was evaluated on 113 subjects (40 for the training set and 73 for the testing set); the results obtained on the unseen test set were 98.8\% sensitivity and 98.6\% positive predictivity. Springer \textit{et al.} \cite{Springer2016} extended the work of \cite{Schmidt2010} by using the hidden semi-Markov model (HSMM) with the modified Viterbi algorithm to detect the beginning and end state of the heart sound signal.
The method was evaluated on larger heart sound recordings, 10,172 seconds of heart sound collected from 112 (healthy and pathological) subjects admitted to the Massachusetts General Hospital for cardiac screening or in-home recordings including patients with mitral valve prolapse (MVP). The data was split equally into train and test sets. The method obtained an average F1 score of 95.63\% on the unseen test dataset.
Despite the noticeable performance in identifying heart sounds pathologies, many of the above-mentioned methods were only evaluated on relatively small datasets and mostly from a single source. In contrast, our proposed method will be evaluated on a large standard database. Another major advantage of our approach to heart sounds segmentation is that it is based on modeling of the raw heart sound signals directly, and thus does not require any preliminary stage of feature extraction.
The switching linear dynamic system (SLDS) \cite{Shumway1991, Ghahramani2000} has been introduced as a generalization of the HMM and the state space model (SSM). The SLDS is capable of modeling changes in time series with a mixture of distinct underlying dynamics which reoccur at certain time intervals.
Most real-world processes are neither discrete nor purely linear in their dynamics. The SLDS is a non-linear model that iteratively segments the data into piecewise stationary regimes by switching between a set of approximately linear dynamic models \cite{Fox2009}. SLDS is widely used in many application domains, including financial time series \cite{Hamilton1989, Carvalho2007}; motion tracking \cite{Oh2008, Pavlovic2000, Fox2007, XRong2005}; anomaly detection \cite{Ghahramani2000, Oster2015, Montazeri2015, Melnyk2016}; and environmental modeling \cite{Monbet2017}.
Oster \textit{et al.} \cite{Oster2015} introduced the use of a switching Kalman filter (SKF) for ventricular beat detection in electrocardiogram (ECG) signals. Nasim \textit{et al.} \cite{Montazeri2015} also proposed SKF-based methods with two different switching schemes for apnea bradycardia detection in ECG signals, which showed better performance than a conventional HMM. Samdin \textit{et al.} \cite{Samdin2017} employed a Markov-switching vector autoregressive (MS-VAR) model formulated in an SLDS form to track state-related changes in functional magnetic resonance imaging (fMRI) and epileptic electroencephalogram (EEG) signals. The approach is able to automatically segment the directed connectivity structure in the multivariate signals into a finite number of reoccurring quasi-stable states. Heart sound signal components exhibit distinct dynamics in the autocorrelation structure at different time intervals, which can be well captured by a switching autoregressive (AR) process.
In this paper, we develop a unified framework based on Markov-switching AR (MSAR) models with enhanced state inference algorithms to segment the fundamental components of heart sound for subsequent use in classification of heart pathologies. To characterize dynamic cardiac events, we use MSAR models with four states, each associated with one of the heart sound components. Conventional HMM is less effective when used to segment the raw heart sound signals corrupted by various noise sources (with low signal-to-noise ratio) typically present in the clinical environment. To overcome this limitation, we develop an SLDS formulation by specifying the MSAR as an unobserved latent process to capture the underlying time-variant autocorrelations, and the measured heart sound signals as a contaminated version of this latent process to accommodate the noise effects. To the best of our knowledge, this is the first work to apply an MSAR-SLDS to heart sound segmentation.
We introduce two approaches to sequentially infer the latent states of the heart sound components. The first is inspired by \cite{Samdin2017}, which uses the forward-backward Kalman filter recursions to estimate and smooth the state transition probabilities.
This approach imposes a constraint on the Markovian transition matrix to form a left-to-right non-ergodic Markov chain, allowing only certain pre-specified state transitions according to the temporal order of the heart sound components. The second approach incorporates the Viterbi algorithm to replace the backward Kalman smoother. In addition to the constrained transition matrix, this approach allows self-transitions and ensures that the mode changes to another state at a certain limit of duration, which corresponds to the duration of each major component in a heart cycle.
We further employed a continuous-density HMM with Gaussian mixtures for heart sound classification, using the SKF-derived heart-sound segments in the model training.
The mel-frequency cepstral coefficient (MFCC) method, which is widely used in speech analysis, was adopted in this paper to extract acoustic features from the heart sound signals. The MFCC is able to represent the frequency contents of the heart sounds in a quasi-logarithmic manner, mimicking the human auditory system. The extracted sequences of MFCC features were computed over sliding windows from each heartbeat. The MFCC features were then modeled using a Gaussian mixture-based HMM approach, which shows an improved heart sound classification performance. We consider classification of heart sounds into three main classes: normal, abnormal, and unsure (noisy or X-Factor) \cite{Liu2016}. Incorporating the X-Factor class allows the technique to detect unknown or unclassifiable heart events and reduce false alarms. In the HMM model estimation, each heart sound segment is clustered into four states with 16 Gaussian mixtures, the standard Viterbi algorithm is used to obtain the state sequence, and the HMM parameters are then iteratively re-estimated using the expectation-maximization algorithm. The segmentation and classification performance of the proposed method was evaluated under various experimental conditions.
A preliminary version of this work on the segmentation has been reported in \cite{noman2017}. This paper provides a significant extension by presenting a novel, unified framework for both segmentation and classification of heart sounds based on the Markov-switching approach with thorough experimental evaluation on a large database.
\section{Materials and Methods}
\subsection{Heart Sound Database}
An open-access heart-sound database, recently published and available online from the PhysioNet/Computing in Cardiology (CinC) Challenge 2016, was used in this study to evaluate the proposed segmentation method \cite{Liu2016}. The database, as depicted in Table \ref{Table:table1}, consists of six datasets (\textit{a} through \textit{f}) collected from different sources by different research groups in both clinical and nonclinical environments \cite{GariD2016}. The database consists of 764 subjects, manually labeled by experts into three classes (2302 normal; 572 abnormal; and 279 unsure), giving a total of 3153 heart sound recordings. The data were recorded at 2000 Hz using heterogeneous equipment from the four common auscultation locations on the chest (aortic, pulmonary, tricuspid, and mitral), with durations ranging from 5.3 s to 122 s, 19 hours and 73 minutes in total. \iffalse However, some or part of the recordings are very noisy and have been labeled as noise, the noisy segments duration is about one hour and 64 minutes representing 8.3\% in total.\fi Table \ref{Table:table1} summarizes the number of complete heart-beat segments in the dataset, where each segment begins at the start of an $S1$ sound and ends at the start of the next $S1$ sound, giving a total of 81498 beats (with 65152 normal and 16346 abnormal segments).
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Distribution of complete heart-beat segments in Physionet database.}
\vspace{0.1 cm}
\label{Table:table1}
\centering
\resizebox{0.4\textheight}{!}{
\begin{tabular}{m{1cm} cccc}
\hline \hline
\multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Beat count} & \multirow{2}{*}{Total beats} & \multirow{2}{*}{Ignored rec.$^\dagger$} \\
\cline{2-3} & Normal & Abnormal & \\
\hline
Ds-\textit{a} & 4301 & 9860 & 14161 & 17 \\
Ds-\textit{b} & 2396 & 589 & 2985 & 122 \\
Ds-\textit{c} & 356 & 1425 & 1781 & 4 \\
Ds-\textit{d} & 308 & 493 & 801 & 3 \\
Ds-\textit{e} & 54783 & 2841 & 57624 & 129$^\ddagger$\\
Ds-\textit{f} & 3008 & 1138 & 4146 & 6$^\star$ \\
\hline
Total & 65152 & 16346 & 81498 & 281\\
\hline \hline
\multicolumn{5}{m{8cm}}{$^\dagger$\,Recordings labeled as all-noise. \quad $^\ddagger$\,Including recording e00210. \quad $^\star$\,Including recording f0043.}
\end{tabular}}
\vspace{-0.1in}
\end{table}
The recordings labeled as all-noise were discarded from the segmentation analysis; the remaining recordings were split into train and test datasets, with each dataset containing approximately the same number of recordings and heartbeat segments. Table \ref{Table:table2} shows the breakdown of each dataset by heartbeat type (normal or abnormal); this split was chosen to balance the train-test subsets for the performance evaluation of the proposed segmentation and classification methods.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Distribution of the Train and Test sets (Segments and Recordings).}
\vspace{0.1 cm}
\label{Table:table2}
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tabular}{m{1cm} cccc|cccc}
\hline \hline
\multirow{3}{*}{Dataset} & \multicolumn{4}{c}{Heart Beats} & \multicolumn{4}{c}{Recordings} \\
\cline{2-9}
\ & \multicolumn{2}{c}{Normal} & \multicolumn{2}{c}{Abnormal} & \multicolumn{2}{|c}{Normal} & \multicolumn{2}{c}{Abnormal} \\
\cline{2-9}
& Train & Test & Train & Test & Train & Test & Train & Test\\
\hline
Ds-\textit{a} & 2148 & 2153 & 4932 & 4928 & 59 & 57 & 139 & 137\\
Ds-\textit{b} & 1198 & 1198 & 294 & 295 & 147 & 148 & 36 & 37\\
Ds-\textit{c} & 177 & 179 & 710 & 715 & 3 & 4 & 10 & 10 \\
Ds-\textit{d} & 154 & 154 & 246 & 247 & 14 & 12 & 14 & 12\\
Ds-\textit{e} & 27392 & 27391 & 1420 & 1421 & 889 & 890 & 74 & 72\\
Ds-\textit{f} & 1502 & 1506 & 568 & 570 & 38 & 39 & 15 & 16\\
\hline
Total & 32571 & 32581 & 8170 & 8176 & 1150 & 1150 & 288 & 284\\
\hline \hline
\end{tabular}}
\vspace{-0.1in}
\end{table}
\subsection{Heart Sound Segmentation}
Figure \ref{Fig:fig1} shows the proposed framework for heart sound segmentation. The procedure consists of five steps: (1.) Pre-processing to assess the signal quality and filter out redundant frequency bands (Section B.2). (2.) Dynamic clustering using the reference data labels. (3.) Model parameter initialization. (4.) Switching Kalman filtering (SKF) to estimate the observation likelihoods. (5.) Approximate inference algorithms (the switching Kalman smoother (SKS) and the Viterbi algorithm) to estimate the most likely state sequence.
\vspace{-0.14in}
\noindent
\begin{figure*}[!th]
\vspace{-0.05in}
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{minipage}[t]{1\linewidth}
\subfloat[]{\includegraphics[width=1\linewidth,keepaspectratio]{figs/fig1.pdf}}
\end{minipage}
\vspace{-0.4 cm}
\caption{The proposed MSAR-based framework for heart sound segmentation.}
\label{Fig:fig1}
\vspace{-0.1in}
\end{figure*}
\vspace{-0.05in}
\subsubsection{Pre-processing}
Although the recordings labeled with a low-quality index were discarded \cite{Liu2016}, different noise sources are still marginally represented in the database. Hence, the signals were filtered using a Butterworth band-pass filter with cut-off frequencies of 25 Hz and 400 Hz. Noise spikes were identified and removed using a windowed outlier filter \cite{Schmidt2010}. Each recording in the database was shifted and scaled prior to analysis by subtracting the mean and dividing by the standard deviation \cite{Springer2016}.
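The normalization step can be sketched as follows in plain Python (the band-pass filtering and spike removal that precede it are omitted here for brevity):

```python
import math

def normalise(y):
    """Per-recording normalisation used in pre-processing:
    subtract the mean and divide by the standard deviation."""
    m = sum(y) / len(y)
    sd = math.sqrt(sum((v - m)**2 for v in y) / len(y))
    return [(v - m) / sd for v in y]

z = normalise([1.0, 2.0, 3.0, 4.0])
# After normalisation the recording has (numerically) zero mean and unit variance:
print(abs(sum(z)) < 1e-9, abs(sum(v*v for v in z)/len(z) - 1.0) < 1e-9)  # True True
```

This rescaling makes the state-noise and observation-noise variances comparable across recordings acquired with heterogeneous equipment.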
\subsubsection{Markov-Switching Autoregression (MSAR)}
Modeling the heart sound signal is very challenging because it is a nonstationary, nonlinear, and periodic time series consisting of repeated heartbeats. Moreover, the clean heart sounds are embedded in various physiological noises and artifacts with a very low signal-to-noise ratio (SNR). Let ${\bf y}=[\mathrm {y}_1 \ldots,\mathrm{y}_T]'$ be a vector of heart sound time series of length $T$ for the entire recording. We assume an additive noise model for the measured raw heart sound signals as follows
\begin{equation}
\mathrm{y}_t = \mathrm{x}_t + {\varepsilon}_{t}
\label{Eqn:eqn1*}
\end{equation}
where ${\varepsilon}_{t}$ is an i.i.d. Gaussian observational noise with zero mean and variance $R$, $\varepsilon_t \sim{N}(0,R)$. The underlying switching dynamics of the clean heart sound signals are assumed to follow a Markov-switching AR (MSAR) process, a collection of stationary AR processes that alternate among themselves over time according to an indicator variable $S_t$
\begin{equation}
\mathrm{x}_t = \sum_{p=1}^{P} \varphi_p^{(S_t)} \mathrm{x}_{t-p} + \eta_t
\label{Eqn:eqn2}
\end{equation}
where $S_t, t=1,\ldots,T$ is a sequence of time-varying state variables taking values in a discrete space $j=1,\ldots,K$; $\{\varphi_p^{(j)}, p=1,\ldots,P \}$ are the AR coefficients at different lags for state $j$; and $\eta_t \sim{N}(0,q)$ is a white Gaussian noise. We assume $S_t$ to follow a hidden Markov chain with transition matrix $Z=[z_{ij}], 1 \leq i,j \leq K$ where $z_{ij} = P(S_t=j|S_{t-1}=i)$ denotes the probability of transition from state $i$ at time $t-1$ to state $j$ at $t$. Each cardiac cycle of heart sound consists of four fundamental components: S1 sound; systolic interval (Sys); S2 sound; and diastolic interval (Dia). The heart sound components exhibit distinct dynamic patterns during different time periods, where each can be modeled as a piecewise-stationary AR process of the MSAR model (\ref{Eqn:eqn2}). Thus, we assume the number of states or regimes as $K=4$ each corresponding to one of the four components ($j=1$: S1, $j=2$: Sys, $j=3$: S2 and $j=4$: Dia). The switching in autocorrelation structure as captured by the state-specific AR coefficients $\varphi_p^{(S_t)}$ between the components is driven by the changes in latent states $S_t$ which indicate which heart-sound component is active at time point $t$. The segmentation of the heart-sound components can be derived indirectly from the state sequence $S_t$. The topology of the Markov chain is set to constrain the transition from one state (or component) to the other in a strict left-to-right sequential order.
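For illustration, the MSAR-plus-noise model (\ref{Eqn:eqn1*})-(\ref{Eqn:eqn2}) can be simulated with a short sketch; the AR(2) coefficients, noise variances, and dwell times below are illustrative values only, not parameters fitted to the database:

```python
import random, math

random.seed(0)
K, T = 4, 400
# Hypothetical state-specific AR(2) coefficients and noise variances
# (illustrative values, not estimates from heart sound data):
phi = {1: (1.6, -0.9), 2: (0.4, -0.1), 3: (1.2, -0.8), 4: (0.2, -0.05)}
q = {1: 0.5, 2: 0.05, 3: 0.5, 4: 0.05}
dwell = {1: 30, 2: 60, 3: 25, 4: 120}          # assumed dwell times in samples

x, states, s, t_in = [0.0, 0.0], [], 1, 0
for t in range(T):
    if t_in >= dwell[s]:                        # left-to-right: S1 -> Sys -> S2 -> Dia -> S1
        s, t_in = s % K + 1, 0
    xt = phi[s][0]*x[-1] + phi[s][1]*x[-2] + random.gauss(0.0, math.sqrt(q[s]))
    x.append(xt)
    states.append(s)
    t_in += 1
y = [xi + random.gauss(0.0, 0.1) for xi in x[2:]]   # observation model y_t = x_t + eps_t
print(len(y), sorted(set(states)))              # 400 [1, 2, 3, 4]
```

Each regime here is a stable AR(2) process; the latent sequence `states` plays the role of $S_t$, and recovering it from `y` alone is exactly the segmentation problem addressed below.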
Defining a $P \times 1$ hidden state vector of stacked clean heart sound signals $\mathrm{X}_t = \left[ \mathrm{x}_t, \mathrm{x}_{t-1}, \ldots, \mathrm{x}_{t-P+1} \right]'$, we can formulate the MSAR plus noise model defined in (\ref{Eqn:eqn1*})-(\ref{Eqn:eqn2}) as a switching linear-Gaussian SSM
\begin{eqnarray}
\label{Eqn:eqn3*}
\mathrm{X}_t & = & A^{(S_t)} \mathrm{X}_{t-1} + \mathrm{w}_t \\
\label{Eqn:eqn4}
\mathrm{y}_t & = & C \mathrm{X}_t + \varepsilon_t
\end{eqnarray}
In the state equation (\ref{Eqn:eqn3*}), the switching AR($P$) process (\ref{Eqn:eqn2}) is written as a $P$-dimensional switching AR(1), where $\mathrm{w}_t = \left[\eta_t,0,\ldots,0\right]'$ is a $P \times 1$ state noise vector, and $A^{(S_t)}$ is a $P \times P$ matrix of AR coefficients switching according to the state variable $S_t$
\[
{A}^{(S_t)} =
\left[
\begin{array}{ccccc}
\varphi_1^{(S_t)} & \varphi_2^{(S_t)} & \ldots & \varphi_{P-1}^{(S_t)} & \varphi_P^{(S_t)} \\
1 & 0 & \ldots & 0 & 0 \\
0 & 1 & \ldots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \ldots & 1 & 0 \\
\end{array}
\right].
\]
In the observation equation (\ref{Eqn:eqn4}), the latent MSAR process is observed under noise $\varepsilon_t$ as the measured heart sound signals $\mathrm{y}_t$ via the $1 \times P$ mapping matrix $C = [1,0,\ldots,0]$. We further assume the observation and state noise as white Gaussian processes, i.e. $\varepsilon_t \sim{N}(0,R^{(S_t)})$ and $\mathrm{w}_t \sim{N}(0,Q^{(S_t)})$ with
\[
{Q}^{(S_t)} =
\left[
\begin{array}{ccccc}
q^{(S_t)} & 0 & \ldots & 0 & 0 \\
0 & 0 & \ldots & 0 & 0 \\
0 & 0 & \ldots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \ldots & 0 & 0 \\
\end{array}
\right].
\]
The noise covariance matrices $R^{(S_t)}$ and $Q^{(S_t)}$ are allowed to switch according to $S_t$. The MSAR model in a state-space form is now fully specified with the model parameters denoted by $\Theta = \left\{Z,A^{(j)},Q^{(j)},R^{(j)}\right\}, j=1,$ $\ldots,K$. The estimation algorithms for the unknown state sequence $S_t$ and model parameters $\Theta$ are given in the following section.
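The companion-form matrix $A^{(j)}$ above is assembled mechanically from the AR coefficients; a minimal sketch:

```python
def companion(phis):
    """Build the P x P companion matrix A^{(j)} from AR coefficients [phi_1,...,phi_P]."""
    P = len(phis)
    A = [[0.0] * P for _ in range(P)]
    A[0] = list(phis)            # first row carries the AR coefficients
    for r in range(1, P):
        A[r][r - 1] = 1.0        # sub-diagonal of ones shifts past values down the state
    return A

A = companion([0.9, -0.2, 0.1])
print(A)  # [[0.9, -0.2, 0.1], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
```

Multiplying this matrix by $\mathrm{X}_{t-1}$ reproduces the AR($P$) recursion in the first component while simply shifting the remaining lagged values.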
\subsubsection{Dynamic Clustering and Model Initialization}
To initialize the MSAR model parameters, we first perform dynamic clustering to group the heart sound time series data that belong to the same state or component. This is followed by fitting a separate stationary AR model to the clustered data of each state to obtain the estimators for the state-specific parameters. Conditioned on the known state sequence (derived from the experts' manual annotation labels), we temporally partition the time sequence of each heart sound recording in the training set into similar underlying dynamics according to the $K=4$ components. Let ${\bf y}^{(j)}=[\mathrm {y}^{(j)}_1 \ldots,\mathrm{y}^{(j)}_{T_j}]'$ be the $T_j \times 1$ vector of concatenated data clustered to each heart sound component $j=1,\ldots,K$, consisting of the $\mathrm {y}_t$ with $S_t = j$. Figure \ref{Fig:fig3} shows an example of clustering a healthy heart sound signal into four dynamic clusters. Note that the time series data of the systoles exhibit a similar dynamic structure to that of the diastoles.
\begin{figure}[!t]
\vspace{-0.05in}
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{minipage}[t]{0.6\linewidth}
\subfloat[]{\includegraphics[width=1\linewidth,keepaspectratio]{figs/fig3.pdf}}
\end{minipage}
\vspace{-0.4 cm}
\caption{Dynamic clustering of heart sound into four fundamental components.}
\label{Fig:fig3}
\vspace{-0.1in}
\end{figure}
Assuming local stationarity for each of these temporal clusters of heart sound signals, we use a simple procedure to initialize the estimates of the MSAR model parameters. Precisely, we assume the concatenated time series of each component to follow a distinct stationary AR($P$) process
\begin{equation}
\mathrm{y}_t^{(j)} = \sum_{p=1}^{P} {\varphi}^{(j)}_p \mathrm{y}^{(j)}_{t-p} + {\eta}^{(j)}_{t}
\label{Eqn:eqn1}
\end{equation}
We compute the initial estimates of the state-specific AR coefficients $\widehat{\varphi}^{(j)}_p$ by a least-squares fitting of the AR($P$) to ${\bf y}^{(j)}$, and the noise variance $\widehat{q}^{(j)}$ based on the estimated residuals $\widehat{\eta}^{(j)}_{t} = \mathrm{y}_t^{(j)} - \sum_{p=1}^{P} \widehat{\varphi}^{(j)}_p \mathrm{y}^{(j)}_{t-p}$ by $\widehat{q}^{(j)} = 1/T_j \sum_{t=1}^{T_j} \left(\widehat{\eta}^{(j)}_{t}\right)^2$. Note that the estimators are initialized based on the manual annotations of the heart sound components, and subsequently refined based on the switching Kalman filter-derived segmentation. The observation noise variance $R$ is also estimated from the averaged residuals of the fitted AR over sliding-windowed segments of the heart sound signal. The state transition probabilities $z_{ij}$ can be initialized by the empirical frequency of transitions from $S_{t-1} = i$ to $S_t = j$.
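A minimal sketch of this initialization step (least-squares AR fit followed by the residual-variance estimate), verified on a simulated AR(2) series with known coefficients:

```python
import random, math

def fit_ar(y, P):
    """Least-squares AR(P) fit: returns (phi_hat, q_hat). Sketch with a tiny
    Gaussian elimination; a production code would use a linear-algebra library."""
    n = len(y)
    # Normal equations G*phi = b, with lag-i regressor y_{t-1-i}
    G = [[sum(y[t-1-i]*y[t-1-j] for t in range(P, n)) for j in range(P)] for i in range(P)]
    b = [sum(y[t]*y[t-1-i] for t in range(P, n)) for i in range(P)]
    for c in range(P):                              # forward elimination with pivoting
        piv = max(range(c, P), key=lambda r: abs(G[r][c]))
        G[c], G[piv], b[c], b[piv] = G[piv], G[c], b[piv], b[c]
        for r in range(c + 1, P):
            f = G[r][c] / G[c][c]
            G[r] = [a - f*g for a, g in zip(G[r], G[c])]
            b[r] -= f * b[c]
    phi = [0.0] * P
    for r in range(P - 1, -1, -1):                  # back substitution
        phi[r] = (b[r] - sum(G[r][j]*phi[j] for j in range(r + 1, P))) / G[r][r]
    resid = [y[t] - sum(phi[i]*y[t-1-i] for i in range(P)) for t in range(P, n)]
    return phi, sum(e*e for e in resid) / len(resid)

random.seed(1)
y = [0.0, 0.0]
for _ in range(2000):                               # known AR(2): phi = (0.6, -0.3)
    y.append(0.6*y[-1] - 0.3*y[-2] + random.gauss(0.0, 0.2))
phi_hat, q_hat = fit_ar(y, 2)
print([round(p, 2) for p in phi_hat])               # close to [0.6, -0.3]
```

Applied per component cluster ${\bf y}^{(j)}$, this yields the initial $\widehat{\varphi}^{(j)}_p$ and $\widehat{q}^{(j)}$ described above.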
\subsubsection{MSAR-based Segmentation Algorithms}
\iffalse Since the AR dynamics, measurement, and state priors are Gaussian. Kalman Filter is the best among all linear estimators if the noises and initial distributions of the SSM model are assumed to be Gaussian [47].\fi Segmenting the heart sounds can be cast as the problem of estimating the unknown state sequence $S_t$. Given the sequence of observations $\{\mathrm{y}_t\}_{t=1}^T$, the problem of inference in switching state-space models is to estimate the posterior probabilities $Pr(S_t=j|\{\mathrm{y}_t\}_{t=1}^T)$ of the hidden state variables $S_t$.
In this paper, we consider three approaches to estimating the state probabilities given the observation sequence: (1) the switching Kalman filter (SKF), which computes sequentially in a forward recursion the probability densities of the hidden states $P(\mathrm{x}_t |\{\mathrm{y}_t\}_{t=1}^t)$ and $P(S_t |\{\mathrm{y}_t\}_{t=1}^t)$ given the observations up to time $t$; (2) the switching Kalman smoother (SKS) (or Rauch-Tung-Striebel (RTS) smoother), which computes in a backward recursion refined estimates of the densities $P(\mathrm{x}_t |\{\mathrm{y}_t\}_{t=1}^T)$ and $P(S_t |\{\mathrm{y}_t\}_{t=1}^T)$ given the entire observation sequence of length $T$; and (3) a fusion of the SKF and the extended duration-dependent Viterbi algorithm (SKF-Viterbi) suggested by \cite{Springer2016, Schmidt2010}, which decodes the most likely sequence of states given the filtered state probabilities $M_{t|t}^j$ from the SKF.
\paragraph{Switching Kalman Filter (SKF):}
Algorithm \ref{Algo:alg3} summarizes the procedure of SKF for estimating the hidden state parameters given the raw heart sound observations $\{\mathrm{y}_t\}_{t=1}^T$ and estimated model parameters for each state $\widehat{\Theta} = \left\{\widehat{Z},\widehat{A}^{(j)},\widehat{Q}^{(j)},\widehat{R}^{(j)}, j=1,\ldots,K \right\}$. Refer to \cite{Murphy1998} for further details. Given $\widehat{\Theta}$ and initial state probabilities $M_0^j=[1,0,\ldots,0]$, for each time $t$, a run of $K^2$ Kalman filters is performed recursively to compute the mean and covariance of the component filtered densities of $\mathrm{x}_t$ (denoted as $\mathrm{x}_{t|t}^{ij}$ and $P_{t|t}^{ij}$) for all pairs $(i,j)$ and the corresponding likelihood function $L_t^{ij}$. The filtered state probability of $S_t$ can be defined by
\vspace{-0.02in}
\begin{eqnarray}
M_{t|t}^j & = & P(S_t=j | \{\mathrm{y}_t\}_{t=1}^t) \notag \\
& = & \sum_i M_{t-1,t|t}^{ij} \label{Eqn:eqn8}
\end{eqnarray}
where $M_{t-1,t|t}^{ij}= P(S_{t-1}=i,S_t=j | \{\mathrm{y}_t\}_{t=1}^t)$ is computed from $M_{t-1|t-1}^i$ at the previous time $t-1$, weighted by the likelihood $L_t^{ij}$ and the transition probabilities $z_{ij}$ as follows
\begin{equation*}
M_{t-1,t|t}^{ij} = \frac{L_t^{ij} z_{ij} M_{t-1|t-1}^i}{\sum_i \sum_j L_t^{ij} z_{ij} M_{t-1|t-1}^i}
\end{equation*}
After the filtering at each time $t$, the component densities ($\mathrm{x}_{t|t}^{ij}$ and $P_{t|t}^{ij}$) weighted by $W_t^{i|j} = M_{t-1,t|t}^{ij}/M_{t|t}^j$ are collapsed to give the mean and covariance of the filtered densities ($\mathrm{x}_{t|t}^{j}$ and $P_{t|t}^{j}$).
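The discrete part of this SKF update and the collapse (moment-matching) step can be sketched as follows; the likelihoods and transition probabilities in the toy example are hypothetical, and the collapse is shown for the scalar case only:

```python
def filter_probs(L, Z, M_prev):
    """Discrete SKF step: joint M_{t-1,t|t}^{ij}, filtered M_{t|t}^j,
    and collapse weights W_t^{i|j}, using plain Python lists."""
    K = len(M_prev)
    J = [[L[i][j]*Z[i][j]*M_prev[i] for j in range(K)] for i in range(K)]
    norm = sum(sum(row) for row in J)
    M = [sum(J[i][j] for i in range(K))/norm for j in range(K)]
    W = [[(J[i][j]/norm)/M[j] if M[j] > 0 else 0.0 for j in range(K)] for i in range(K)]
    return M, W

def collapse(means, covs, w):
    """Moment-match a scalar Gaussian mixture to a single (mean, variance)."""
    m = sum(wi*mi for wi, mi in zip(w, means))
    P = sum(wi*(ci + (mi - m)**2) for wi, mi, ci in zip(w, means, covs))
    return m, P

# Toy 2-state example with hypothetical likelihoods and transition probabilities:
L = [[0.9, 0.1], [0.2, 0.8]]
Z = [[0.95, 0.05], [0.05, 0.95]]
M, W = filter_probs(L, Z, [0.5, 0.5])
print([round(p, 3) for p in M])  # [0.531, 0.469]
```

The collapse keeps the number of tracked Gaussian components at $K$ per step, which is what makes the otherwise exponential mixture of hypotheses tractable.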
\begin{algorithm}[!t]
\noindent\textbf{Inputs}: $\mathrm{x}_0^{ij}, P_0^{ij}, M_0^j, \{\mathrm{y}_t\}_{t=1}^T, A, C, R, Q, Z$\\
\noindent\textbf{Outputs}: $M_{t|t}^j$,$\mathrm{x}_{t|t}^j$, $P_{t|t}^j$
\vspace{-0.08in}
\noindent\hrulefill
\begin{algorithmic}[1]
\For {$t = 1, 2, \ldots, T$}
\For {$j = 1, \dots , K$}
\For {$i = 1, \dots , K$}
\State {$[\mathrm{x}_{t|t}^{ij}, P_{t|t}^{ij},L_t^{ij}] = $ Filter$(\mathrm{x}_{t-1|t-1}^i,P_{t-1|t-1}^i,$\\ \hspace{12em} $ A^j, C, Q^j, R^j)$}
\EndFor
\EndFor
\For {$j = 1, \dots , K$}
\State {$[M_{t|t}^j,W_t^{i|j}] = $ FilterProbs$(L_t^{ij},Z^{ij}, M_{t-1|t-1}^i)$}
\State {$[\mathrm{x}_{t|t}^j, P_{t|t}^j]$ = Collapse$(\mathrm{x}_{t|t}^{ij},P_{t|t}^{ij},W_t^{i|j})$}
\EndFor
\EndFor
\end{algorithmic}
\caption{: Switching Kalman filter}
\label{Algo:alg3}
\end{algorithm}
\paragraph{Switching Kalman Smoother (SKS):}
Algorithm \ref{Algo:alg4} summarizes the procedure of SKS. In a backward recursion, a mixture of $K^2$ Kalman smoothers is run to compute component smoothed densities of $\mathrm{x}_t$ for all pairs $(j,k)$ (with mean $\mathrm{x}_{t|T}^{jk}$ and covariance $P_{t|T}^{jk}$) given the entire observation $\{\mathrm{y}_t\}_{t=1}^T$ based on the filtered densities computed in the SKF. The smoother state probability of $S_t$ is defined as
\vspace{-0.02in}
\begin{eqnarray}
M_{t|T}^j & = & P(S_t=j | \{\mathrm{y}_t\}_{t=1}^T) \notag \\
& = & \sum_k M_{t,t+1|T}^{jk} \label{Eqn:smoothSt}
\end{eqnarray}
where $M_{t,t+1|T}^{jk} = P(S_{t}=j,S_{t+1}=k | \{\mathrm{y}_t\}_{t=1}^T)$ can be computed based on the filtered state probabilities $M_{t|t}^j$ and the smoothed probabilities $M_{t+1|T}^k$ at $t+1$ as follows
\begin{equation*}
M_{t,t+1|T}^{jk} = \frac{M_{t|t}^j z_{jk}}{\sum_{j'} M_{t|t}^{j'} z_{j'k}} M_{t+1|T}^k
\end{equation*}
Finally, the component densities ($\mathrm{x}_{t|T}^{jk}$ and $P_{t|T}^{jk}$) weighted by $W_t^{k|j} = M_{t,t+1|T}^{jk}/M_{t|T}^j$ are collapsed to give the mean and covariance of the smoothed densities ($\mathrm{x}_{t|T}^{j}$ and $P_{t|T}^{j}$).
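The corresponding backward update of the state probabilities can be sketched as follows (the two-state inputs are toy values, not estimates from real recordings):

```python
def smooth_probs(M_filt, Z, M_next):
    """Backward discrete update: smoothed M_{t|T}^j and weights W_t^{k|j}
    from the filtered M_{t|t}^j and the smoothed M_{t+1|T}^k (sketch)."""
    K = len(M_filt)
    M, W = [0.0]*K, [[0.0]*K for _ in range(K)]
    for j in range(K):
        for k in range(K):
            denom = sum(M_filt[jp]*Z[jp][k] for jp in range(K))
            Mjk = (M_filt[j]*Z[j][k]/denom)*M_next[k] if denom > 0 else 0.0
            M[j] += Mjk
            W[j][k] = Mjk              # normalised into W_t^{k|j} below
    for j in range(K):
        if M[j] > 0:
            W[j] = [w/M[j] for w in W[j]]
    return M, W

M_T, _ = smooth_probs([0.531, 0.469], [[0.95, 0.05], [0.05, 0.95]], [0.2, 0.8])
print(round(sum(M_T), 6))  # 1.0 (smoothed probabilities stay normalised)
```

Because the update only reweights the filtered probabilities by the smoothed ones at $t+1$, the smoothed distribution remains properly normalized at every time step.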
\begin{algorithm}[!t]
\noindent\textbf{Inputs}: $\{\mathrm{y}_t\}_{t=1}^T, A, R, Q, Z, \mathrm{x}_{t|t}^j, P_{t|t}^j, M_{t|t}^j$ \\
\noindent\textbf{Outputs}: $M_{t|T}^j$, $\mathrm{x}_{t|T}^j$, $P_{t|T}^j$
\vspace{-0.08in}
\noindent\hrulefill
\begin{algorithmic}[1]
\For {$t = T, T-1, \ldots, 1$}
\For {$j = 1, \dots , K$}
\For {$k = 1, \dots , K$}
\State {$[\mathrm{x}_{t|T}^{jk}, P_{t|T}^{jk}] = $ Smooth$(\mathrm{x}_{t+1|T}^k,P_{t+1|T}^k,\mathrm{x}_{t|t}^j,$\\ \hspace{12em} $ P_{t|t}^j, A^k, Q^k, Z^{jk})$}
\EndFor
\EndFor
\For {$j = 1, \dots , K$}
\State {$[M_{t|T}^j,W_t^{k|j}] = $ SmoothProbs$(M_{t|t}^j, M_{t+1|T}^k)$}
\State {$[\mathrm{x}_{t|T}^j, P_{t|T}^j]$ = Collapse$(\mathrm{x}_{t|T}^{jk},P_{t|T}^{jk},W_t^{k|j})$}
\EndFor
\EndFor
\end{algorithmic}
\caption{: Switching Kalman Smoother}
\label{Algo:alg4}
\end{algorithm}
\paragraph{SKF with Viterbi Algorithm:}
Under the Markovian assumption of the standard SKF, the sojourn time or dwell time (the number of consecutive time points spent in a specific state before transitioning to other states) is geometrically distributed, i.e., the probability of a sojourn of duration $d$ decays geometrically with $d$. This tends to induce unrealistically fast state switching and may not be appropriate for stationary processes such as the heart sound components, which can remain in the same regime for long periods of time. To overcome this limitation, we introduce a two-step procedure by combining the SKF with the duration-dependent Viterbi algorithm, which was first introduced by \cite{Schmidt2010} and extended in \cite{Springer2016}.
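The geometric sojourn-time distribution implied by a constant self-transition probability can be made concrete with a short computation ($z_{jj}=0.98$ is an illustrative value):

```python
# Under a plain Markov chain, a self-transition probability z_jj implies a geometric
# sojourn-time distribution P(d) = z_jj**(d-1) * (1 - z_jj), so long dwells are
# exponentially unlikely regardless of the true component durations.
z_jj = 0.98
p = [z_jj**(d - 1) * (1 - z_jj) for d in range(1, 501)]
ratio = p[0] / p[249]
print(round(ratio))  # a 250-sample dwell is roughly 150x less likely than a 1-sample dwell
```

A diastolic interval of a few hundred samples is therefore heavily penalized by the plain Markov chain, which motivates replacing the implicit geometric prior with the empirical duration probabilities $dP$.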
The duration-dependent Viterbi algorithm explicitly incorporates information about each state's expected duration (i.e., the heart rate HR and the systolic interval tSys), which is estimated from the test heart sound recording using autocorrelation analysis. The duration probabilities $dP$ are estimated from the data for each of the four heart sound states.
With an initialized $\delta_1^j$, the algorithm computes the state probability in a forward recursion
\vspace{-0.2in}
\begin{equation}
\delta_t^j = \max_{d}\Bigl[\max_{i\neq{j}}\, \bigl[\delta_{t-d}^i\, a_{ij}\bigr]\; {dP}_d^j \prod_{s=0}^{d-1} M^{j}_{t-s|t-s} \Bigr]
\label{Eqn:eqn9}
\end{equation}
\noindent
for $1\leq t \leq T$ and $1\leq i,j \leq K$, where ${dP}_d^j$ is the duration probability for state $j$, with $1\leq d \leq d_{max}$ and $d_{max}$ the number of time points in each heartbeat with reference to the estimated heart rate. Note that we incorporate the SKF state probability $M_{t|t}^j = P(S_t=j | \{\mathrm{y}_t\}_{t=1}^t) \propto P(\{\mathrm{y}_t\}_{t=1}^t| S_t=j)P(S_t=j)$, which takes into account the observations up to time $t$, instead of only the current observation $P(\mathrm{y}_t| S_t=j)$ as in the original duration-dependent Viterbi algorithm.
The state duration argument and the preceding state that maximize (\ref{Eqn:eqn9}) are stored in $D_t^j$ and $\psi_t^j = \underset{1\leq i \leq K}{\mathrm{argmax}} [\delta_{t-D_t^j}^i a_{ij}]$, respectively; the most likely state sequence is then obtained by back-tracking through $\psi_t^j$ and $D_t^j$.
The pseudocode of the extended Viterbi algorithm is shown in Algorithm (\ref{Algo:alg5}); refer to \cite{Springer2016} for more details. In Algorithm (\ref{Algo:alg5}), $\delta_t^j$ is the highest state probability for each state $j$ at time $t$ over all duration probabilities ${dP}_d^j$ from $1$ to $d_{max}$. The state probabilities are updated only if the current $\delta_t^i$ is higher than $\delta_{t-1}^i$ within the processing window $1$ to $d_{max}$. The back-tracking procedure is initialized by finding the maximum of $\delta_t^i$ in the interval $T:T+d_{max}-1$ after the end of the actual signal. The state index that maximizes $\delta_{T*}^i$ is stored in $q_{T*}^*= \mathrm{argmax}_i [\delta_t^i]$. The optimal path $q_t^*$ is obtained by back-tracking through $\psi_T^{q_t^*}$ and $D_T^{q_t^*}$ such that $q_{t-d^*-1}^* = \psi_t^{q_t^*}$, where $t=T-1,\ldots,1$.
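A minimal log-domain re-implementation of the recursion in (\ref{Eqn:eqn9}) is sketched below in Python. This is an illustrative simplification, not the authors' code: the SKF filtered probabilities $M_{t|t}^j$ are passed in as a precomputed array, self-transitions are excluded by setting the diagonal of the log transition matrix to $-\infty$, and cumulative sums replace the explicit product over the dwell window.

```python
import numpy as np

def duration_viterbi(logM, logA, logdP):
    """Log-domain sketch of the recursion in Eq. (9).

    logM  : (T, K) log of the SKF filtered state probabilities M_{t|t}^j
    logA  : (K, K) log transition matrix with diagonal set to -inf (i != j)
    logdP : (d_max, K) log duration probabilities dP_d^j
    Returns the decoded state sequence q_1..q_T.
    """
    T, K = logM.shape
    d_max = logdP.shape[0]
    # cumulative sums give prod_{s=0}^{d-1} M^j over any dwell window in O(1)
    cum = np.vstack([np.zeros((1, K)), np.cumsum(logM, axis=0)])
    delta = np.full((T + 1, K), -np.inf)
    delta[0] = 0.0                              # uniform initial score
    best_d = np.zeros((T + 1, K), dtype=int)    # D_t^j
    best_i = np.zeros((T + 1, K), dtype=int)    # psi_t^j
    for t in range(1, T + 1):
        for j in range(K):
            for d in range(1, min(d_max, t) + 1):
                emit = cum[t, j] - cum[t - d, j]
                cand = delta[t - d] + logA[:, j]    # -inf diag excludes i == j
                i = int(np.argmax(cand))
                score = cand[i] + logdP[d - 1, j] + emit
                if score > delta[t, j]:
                    delta[t, j], best_d[t, j], best_i[t, j] = score, d, i
    # backtrack through the stored (duration, predecessor) pairs
    path, t, j = [], T, int(np.argmax(delta[T]))
    while t > 0:
        d = best_d[t, j]
        path[:0] = [j] * d
        j, t = best_i[t, j], t - d
    return np.array(path)
```

On a toy two-state problem whose evidence favors a dwell of three samples in one state followed by three in the other, the decoder recovers the two segments in one pass.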
\iffalse
\vspace{-0.2in}
\begin{center}
\begin{equation}
q_t = \arg\max_{j} [\delta_t^j],\qquad t=T,\ldots,1 \textrm{ and } q_t=1,\ldots,K
\label{Eqn:eqn11}
\end{equation}
\end{center}
\fi
\iffalse
\vspace{-0.12in}
\begin{algorithm}[!t]
\noindent\textbf{Inputs}: $D, \psi, q$\\
\noindent\textbf{Outputs}: $q$.
\vspace{-0.12in}
\noindent\hrulefill
\begin{algorithmic}[1]
\While {$t > 1$}
\State {$dq = D_t^{q_t}$}
\State {$\{q\}_{t-dq}^{t-1}=q_t$}
\State {$q_{t-dq-1}=\psi_t^{q_t}$}
\State {$t = t-dq$}
\EndWhile
\end{algorithmic}
\caption{: Viterbi backtrack procedure.}
\label{Algo:backtrackVrb}
\end{algorithm}
\fi
\begin{algorithm}[!t]
\noindent\textbf{Inputs}: initials $\pi_0,HR,tSys$\\
\noindent\textbf{Outputs}: $q_t$.
\vspace{-0.12in}
\noindent\hrulefill
\begin{algorithmic}[1]
\State {$[\{M_t^j\}_{t=1}^T]=$ SKF$(\{\mathrm{y}_t\}_{t=1}^T), A, R, Q, Z, \mathrm{x}_0, P_0, M_0^j)$}
\State {Initialization: $[a_{ij}, \delta_1^j,d_{max}] = $(\textit{HR}, $tSys, \{M^j_{t|t}\}_{t=1}, \pi_0)$}
\For {$t = 2 : T+d_{max}-1$}
\For {$i,j = 1 : K$}
\For {$d = 1 : d_{max}$}
\State {$w_s = t-d,\quad 1\leq w_s \leq T-1$}
\State {$w_e = t,\quad 2\leq w_e \leq T$}
\State {$\delta_t^j = \max_{d}\Bigl[\max_{i\neq{j}} [\delta_{w_s}^i a_{ij}]\hspace{0.3em} . \hspace{0.3em}{dP}_d^j \hspace{0.3em} . \hspace{0.3em}$ \\ \hspace{10em}${\prod_{s=w_s}^{w_e}} \{M^j_{t|t}\}_{t=s} \Bigr]$}
\State {$D_t^j = \arg \max_{d}\Bigl[\max_{i\neq{j}} [\delta_{w_s}^i a_{ij}]\hspace{0.3em} . \hspace{0.3em}{dP}_d^j \hspace{0.3em} . \hspace{0.3em}$ \\ \hspace{12em}${\prod_{s=w_s}^{w_e}} \{M^j_{t|t}\}_{t=s} \Bigr]$}
\State {$\psi_t^j = \arg \max_{1\leq i \leq K} [\delta_{t-D_t^j}^i a_{ij}]$}
\EndFor
\EndFor
\EndFor
\noindent
\State {$T* = \arg \max_{t}[\{\delta_t^i\}_{t=T}^{T+d_{max}-1}] \qquad 1 \leq i \leq K$}
\State {$q_{T*}^*= \arg \max_i[\delta_{T*}^i]$}
\State {$t=T*$}
\While {$t > 1$} $//$Backward Viterbi procedure
\State {$d^* = D_t^{q_t^*}$}
\State {$\{q\}_{t-d^*}^{t-1}=q_t^*$}
\State {$q_{t-d^*-1}^*=\psi_t^{q_t^*}$}
\State {$t = t-d^*$}
\EndWhile
\end{algorithmic}
\caption{: SKF-Viterbi Algorithm.}
\label{Algo:alg5}
\end{algorithm}
\subsection{Heart Sound Classification}
In this section, we present an automatic classification of healthy and pathological heart sound recordings using hidden Markov models (HMM), based on the heart-beat segmentation obtained by the switching Kalman filters. The distribution of train and test sets in the database used for evaluation is given in Table \ref{Table:table2}. The heart sound recordings were preprocessed and then segmented using the procedures described in Section 2.B, such that each segment covers a complete heart-beat cycle (start of one S1 sound to the subsequent S1 sound). The Mel-frequency cepstral coefficients (MFCCs), widely used in speech signal processing, are adapted for feature extraction.
These MFCC features are then used as input to the HMMs with Gaussian mixture observation density. Figure \ref{Fig:fig4a} illustrates the different steps used in the evaluation of the heart sound classification system.
\subsubsection{Feature Extraction}
A sequence of short-time MFCC feature vectors was extracted from each heart sound segment using a sliding-window approach with windowed frames of 50ms and 10ms overlap. A Hamming window was used to minimize the discontinuities at the frame edges. Each frame was first passed through a first-order FIR filter to spectrally flatten the signal, and a set of MFCCs was then computed from the short-time spectrum: a discrete Fourier transform (DFT) was applied to each windowed frame and the energy in each $mel$ band (20 to 24 bands on the $mel$ scale) was calculated. By taking the logarithm followed by a cosine transform, a vector of 12 MFCCs was derived for each frame.
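This pipeline can be sketched in a few lines of Python. The snippet below is a simplified stand-in rather than the original code: for brevity the mel filterbank stage is collapsed into a direct DCT of the log power spectrum, and the pre-emphasis coefficient 0.97 is an assumed value.

```python
import numpy as np

def mfcc_like(x, fs, frame_ms=50, overlap_ms=10, n_ceps=12):
    """Simplified MFCC-style front end: pre-emphasis, Hamming-windowed
    framing, log power spectrum, then a DCT. A full implementation would
    insert a mel filterbank before the logarithm."""
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])  # first-order FIR pre-emphasis
    flen = int(fs * frame_ms / 1000)
    hop = flen - int(fs * overlap_ms / 1000)    # 50 ms frames, 10 ms overlap
    win = np.hamming(flen)
    feats = []
    for s in range(0, len(x) - flen + 1, hop):
        spec = np.abs(np.fft.rfft(x[s:s + flen] * win)) ** 2
        logspec = np.log(spec + 1e-10)
        n = len(logspec)
        k = np.arange(n)
        # DCT-II coefficients 1..n_ceps of the log spectrum
        feats.append([np.sum(logspec * np.cos(np.pi * q * (2 * k + 1) / (2 * n)))
                      for q in range(1, n_ceps + 1)])
    return np.array(feats)

rng = np.random.default_rng(0)
feats = mfcc_like(rng.normal(size=1000), fs=1000)  # 1 s of surrogate signal
assert feats.shape == (24, 12)   # 24 frames x 12 cepstral coefficients
```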
\begin{figure}[!t]
\begin{minipage}[t]{0.7\linewidth}
\centering
\subfloat[\label{Fig:fig4a}]{\includegraphics[width=1\linewidth,keepaspectratio]{figs/fig4.pdf}}
\end{minipage}
\centering
\begin{minipage}[t]{0.7\linewidth}
\centering
\subfloat[\label{Fig:fig4b}]{\includegraphics[width=1\linewidth,keepaspectratio]{figs/fig5_1.pdf}}
\end{minipage}
\vspace{-0.2 cm}
\caption{ (a) The overall classification system design for training and testing the HMM models. (b) HMM testing procedure.}
\label{Fig:hmm_train_test}
\vspace{-0.1in}
\end{figure}
\subsubsection{HMM Training and Evaluation}
The HMM is a probabilistic model that can capture the dynamical changes of the heart sounds by making inferences about the likelihood of being in certain discrete states. In this paper, a continuous HMM with four states (left-to-right, no skipping) and 16 Gaussian mixture components (probability density functions) per state was used. A set of HMM parameters is denoted by $\lambda =(\boldsymbol{\pi},{\bf A},{\bf B})$, where $\boldsymbol{\pi} = [\pi_{i}]$ with $\pi_{i} = P[q_1=S_i], 1\leq i\leq K$, are the initial state probabilities and ${\bf A} = [a_{ij}]$ is the $K \times K$ transition matrix with $a_{ij} = P[q_{t+1} = S_i|q_t=S_j], 1\leq i, j\leq K$. Let $\mathbf{O}_t = [o_{1t}, \ldots, o_{Nt}]^{'}$ be the $N \times 1$ MFCC feature vector at time $t$. The observation emission probability ${\bf B} = \{b_j(\mathbf{O}_t)\}, 1 \leq j \leq K$, at each state $j$ is defined by a Gaussian mixture model
\begin{equation}
b_j(\mathbf{O}_t) = \sum_{m=1}^{M} c_{jm} N(\mathbf{O}_t;\boldsymbol{\mu}_{jm},\boldsymbol{\Sigma}_{jm}), 1 \leq j \leq K
\label{Eqn:eqnHMM1}
\end{equation}
where $\boldsymbol{\mu}_{jm}$ and $\boldsymbol{\Sigma}_{jm}$ are respectively the mean vector and covariance matrix of the $m$-th mixture component with mixture weight $c_{jm}$ at state $j$. Here, we set the number of mixture components as $M=16$ per state.
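For concreteness, the mixture density $b_j(\mathbf{O}_t)$ of (\ref{Eqn:eqnHMM1}) can be evaluated in the log domain as follows (an illustrative Python sketch; the function name and array layout are our own conventions):

```python
import math
import numpy as np

def gmm_log_density(o, weights, means, covs):
    """log b_j(O_t) = log sum_m c_jm N(o; mu_jm, Sigma_jm), evaluated with
    the log-sum-exp trick for numerical stability.
    o: (N,); weights: (M,); means: (M, N); covs: (M, N, N)."""
    N = len(o)
    logs = []
    for c, mu, S in zip(weights, means, covs):
        d = o - mu
        _, logdet = np.linalg.slogdet(S)
        quad = d @ np.linalg.solve(S, d)
        logs.append(math.log(c) - 0.5 * (N * math.log(2 * math.pi) + logdet + quad))
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs))
```

With a single standard-normal component in two dimensions, the log density at the mean is $-\log 2\pi$, which gives a quick sanity check on an implementation.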
\paragraph{Training \& Testing:}
The training and testing of the HMMs are illustrated in Fig. \ref{Fig:fig4a} and Fig. \ref{Fig:fig4b}. Given the training observation sequences $\mathbf{O}_1, \ldots, \mathbf{O}_T$ (a complete heart-beat cycle: $S1$, systole, $S2$, diastole), the HMM model parameters were estimated by maximizing the likelihood function. The training of an HMM involves initialization of the model parameters followed by iterative re-estimation of the parameters via the expectation-maximization algorithm (the Baum-Welch algorithm) until convergence. The segmental K-means algorithm was used for model initialization by first aligning the observations to the corresponding states via the Viterbi algorithm and partitioning the observations into mixture components by K-means clustering. Separate HMMs were trained for the normal and abnormal heart sounds. Given an unknown testing heart sound segment, the Viterbi algorithm was used to compute the approximate likelihood score for each HMM model based on the most likely state sequence, and the segment is assigned to the model with the highest likelihood score.
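The final model-selection step, scoring a segment under each trained HMM and keeping the larger likelihood, can be sketched as follows. Note that this is an illustrative substitute: it uses an exact log-domain forward recursion, whereas the paper scores with the Viterbi approximation.

```python
import numpy as np

def forward_loglik(log_b, log_pi, log_A):
    """log P(O_1..T | lambda) by the forward algorithm in the log domain.
    log_b: (T, K) per-frame emission log-likelihoods log b_j(O_t)."""
    alpha = log_pi + log_b[0]
    for t in range(1, len(log_b)):
        alpha = log_b[t] + np.array(
            [np.logaddexp.reduce(alpha + log_A[:, j]) for j in range(alpha.size)])
    return np.logaddexp.reduce(alpha)

def classify_segment(log_b, models):
    """models: class name -> (log_pi, log_A); returns the best-scoring class."""
    return max(models, key=lambda name: forward_loglik(log_b, *models[name]))

BIG = -1e3   # large negative stand-in for log(0)
A = np.array([[0.0, BIG], [BIG, 0.0]])          # two absorbing states
models = {'normal':   (np.array([0.0, BIG]), A),
          'abnormal': (np.array([BIG, 0.0]), A)}
log_b = np.tile([0.0, -5.0], (4, 1))            # evidence favors state 0
assert classify_segment(log_b, models) == 'normal'
```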
\paragraph{Model evaluation:}
The performance of the trained HMM models was evaluated on their ability to correctly classify a given heartbeat segment from the test set of recordings into the normal or abnormal class. In order to build the confusion matrix used to assess the classification performance, each heartbeat was compared against the existing HMM models. Three classes were considered in this study: the normal class, denoted by $N$; the abnormal class, denoted by $A$; and the unsure (X-Factor) class, denoted by $Q$. One main motivation of this study is the detection of abnormal heartbeats (or records). We used a large database collected from different sources in different clinical environments, where some of the recordings are labeled as noisy or unclassifiable. The proposed approach was evaluated with and without incorporating the noisy (X-Factor) recordings, at both the heartbeat and recording classification levels. For classification without the X-Factor segments or recordings, we used performance metrics as in \cite{Springer2016}: sensitivity ($Se$), positive predictivity ($P_+$), accuracy ($Acc$), and $F1$ score, computed from the confusion matrix.
For classification including the X-Factor class, we used a performance metric proposed by \cite{Liu2016} to compute the overall performance based on the number of beats or recordings classified as normal, abnormal, or X-Factor. The signal quality indices are provided along with the database; Table \ref{Table:table8} illustrates the partitioning of the X-Factor recordings into the train and test sets. A total of 279 recordings were labeled by cardiologists as unsure (hard to classify), and we consider these as X-Factor recordings in this study.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Training and testing sets for X-Factor class.}
\vspace{-0.1 cm}
\label{Table:table8}
\centering
\begin{tabular}{m{1cm} cccccccc}
\hline \hline
\multirow{3}{*}{Dataset} & \multicolumn{4}{c}{Abnormal} & \multicolumn{4}{c}{Normal}\\
\cline{2-9}
& \multicolumn{2}{c}{Segments} & \multicolumn{2}{c}{Records} & \multicolumn{2}{c}{Segments} & \multicolumn{2}{c}{Records}\\
\cline{2-9}
& Train & Test & Train & Test & Train & Test & Train & Test \\
\hline
DS-\textit{a} & 216 & 222 & 8 & 8 & 35 & 0 & 1 & 0 \\
DS-\textit{b} & 120 & 125 & 15 & 16 & 360 & 368 & 45 & 46\\
DS-\textit{c} & 45 & 91 & 2 & 2 & 0 & 0 & 0 & 0 \\
DS-\textit{d} & 12 & 21 & 1 & 1 & 8 & 0 & 1 & 0 \\
DS-\textit{e} & 497 & 472 & 18 & 19 & 1045 & 1044 & 45 & 46\\
DS-\textit{f} & 32 & 63 & 1 & 2 & 30 & 40 & 1 & 1 \\
\hline
Total & 904 & 994 & 45 & 48 & 1478 & 1452 & 93 & 93\\
\hline \hline
\end{tabular}
\vspace{-0.1in}
\end{table}
We computed the modified sensitivity ($Se$), specificity ($Sp$), accuracy ($MAcc$), and $F1$ from the confusion matrix including X-Factor as
\vspace{-0.1in}
\begin{equation}
Se = \frac{wa_1 \times Aa_1}{Aa_1 + Aq_1 + An_1} + \frac{wa_2 \times (Aa_2 + Aq_2)}{Aa_2 + Aq_2 + An_2}
\label{Eqn:eqn16}
\end{equation}
\begin{equation}
Sp = \frac{wn_1 \times Nn_1}{Na_1 + Nq_1 + Nn_1} + \frac{wn_2 \times (Nn_2 + Nq_2)}{Na_2 + Nq_2 + Nn_2}
\label{Eqn:eqn17}
\end{equation}
\begin{equation}
MAcc = \frac{Se+Sp}{2}
\label{Eqn:eqn18}
\end{equation}
\noindent
where $wa_{1,2}$ and $wn_{1,2}$ are the percentages of good/poor signal quality among all abnormal and normal recordings (training set), used as weights to calculate $Se$ and $Sp$ respectively. $A$ and $N$ are the true labels of the abnormal and normal classes, while $a$, $q$ and $n$ are the algorithm labels of the abnormal, X-Factor and normal classes respectively. For example, $Aa_{1,2}$ is the total number of good/poor abnormal beats or recordings that were recognized as abnormal.
We followed the method of \cite{Oster2015} to calculate the penalized $F1$ score, where a penalty $\alpha$ is applied to $An$ and $Na$ to penalize beats misclassified between the normal and abnormal classes rather than assigned to the X-Factor class. The penalized $F1$ score was computed as follows
\begin{equation}
F1 = \frac{2(\alpha +1)Aa_1}{2(\alpha+1)Aa_1 + \alpha(An_1 + Na_1)+(Aq_1+Nq_1)}
\label{Eqn:eqn19}
\end{equation}
\noindent
where $\alpha = 10$ is the weight or penalty controlling incorrect normal or abnormal classification due to the inclusion of the X-Factor class. The $Aq$ beats were considered pseudo false negatives ($PFN$), and $Nq$ pseudo false positives ($PFP$).
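Equations (\ref{Eqn:eqn16})--(\ref{Eqn:eqn19}) translate directly into code; the sketch below (Python, with an assumed dictionary layout for the confusion counts) may help when reproducing the scoring:

```python
def modified_metrics(cm, wa, wn, alpha=10):
    """Modified Se, Sp, MAcc of Eqs. (16)-(18) and penalized F1 of Eq. (19).
    cm: confusion counts keyed 'Aa1', 'Nq2', ... (true label A/N, predicted
    a/q/n, signal quality 1=good / 2=poor); wa, wn: quality-weight pairs.
    The dictionary layout is our own convention."""
    se = (wa[0] * cm['Aa1'] / (cm['Aa1'] + cm['Aq1'] + cm['An1'])
          + wa[1] * (cm['Aa2'] + cm['Aq2']) / (cm['Aa2'] + cm['Aq2'] + cm['An2']))
    sp = (wn[0] * cm['Nn1'] / (cm['Na1'] + cm['Nq1'] + cm['Nn1'])
          + wn[1] * (cm['Nn2'] + cm['Nq2']) / (cm['Na2'] + cm['Nq2'] + cm['Nn2']))
    macc = (se + sp) / 2.0
    f1 = (2 * (alpha + 1) * cm['Aa1']
          / (2 * (alpha + 1) * cm['Aa1']
             + alpha * (cm['An1'] + cm['Na1']) + cm['Aq1'] + cm['Nq1']))
    return se, sp, macc, f1

# sanity check: a perfect classifier with equal quality weights scores 1.0
cm = {'Aa1': 10, 'Aq1': 0, 'An1': 0, 'Aa2': 10, 'Aq2': 0, 'An2': 0,
      'Nn1': 10, 'Nq1': 0, 'Na1': 0, 'Nn2': 10, 'Nq2': 0, 'Na2': 0}
assert modified_metrics(cm, wa=(0.5, 0.5), wn=(0.5, 0.5)) == (1.0, 1.0, 1.0, 1.0)
```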
\section{Results and Discussions}
\subsection{Heart Sound Segmentation}
We compare the performance of the three segmentation algorithms, SKF, SKS, and KF-Viterbi, in annotating the dynamic changes in the heart sound recordings. The performance was evaluated on all recordings in the unseen testing dataset, as given in Table \ref{Table:table2} and Table \ref{Table:table8}. The switching Kalman filter algorithms were initialized by fitting a stationary autoregressive model of order $P=4$ to each state observation sequence in a recording-specific manner. The parameters of the MSAR model were computed by averaging parameter estimates over all recordings in the training dataset.
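The recording-specific AR(4) initialization can be reproduced with ordinary least squares; the snippet below is a schematic Python version (synthetic data and an assumed noise level, not the actual estimation code):

```python
import numpy as np

def fit_ar(x, p=4):
    """Least-squares AR(p) fit: x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

# synthetic AR(2) signal recovered by an AR(4) fit (extra lags near zero)
rng = np.random.default_rng(0)
x = np.zeros(2000)
e = rng.normal(0.0, 0.1, 2000)
for t in range(2, 2000):
    x[t] = 0.5 * x[t - 1] + 0.2 * x[t - 2] + e[t]
a = fit_ar(x, p=4)
```

Averaging such per-recording estimates over the training set yields the MSAR parameters described above.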
\vspace{-0.14in}
\noindent
\begin{center}
\begin{figure}[!t]
\vspace{-0.05in}
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{minipage}[t]{0.6\linewidth}
\subfloat[]{\includegraphics[width=1\linewidth,keepaspectratio]{figs/fig6b.pdf}}
\end{minipage}
\vspace{-0.8 cm}
\caption{Segmentation performance box-plots using the test dataset (Table \ref{Table:table2}). KF: Kalman filter segmentation approach, KF\_KS: Kalman smoother segmentation, KF\_Vrb: fusion of the Kalman filter and Viterbi algorithm.}
\label{Fig:fig6b}
\vspace{-0.1in}
\end{figure}
\end{center}
\vspace{-0.2in}
\iffalse
\begin{figure*}%
\centering
\subfloat[]{%
\label{Fig:fig6a}%
\includegraphics[height=2in]{figs/fig6a.pdf}}%
\qquad
\subfloat[]{%
\label{Fig:fig6b}%
\includegraphics[height=2in]{figs/fig6b.pdf}}%
\caption{Segmentation performance boxplots. (a) The performance of proposed segmentation approaches using the whole Physionet CinC training dataset (non-split). (b) The segmentation performance using the test dataset (Table \ref{Table:table2}). KF: Kalaman filter segmentation approach, KF\_KS: Kalman Smoother segmentation, KF\_Vrb: fusion of Kalman filter and Viterbi algorithm.}%
\end{figure*}
\fi
Fig. \ref{Fig:fig6b} shows the results on the unseen testing dataset. The models were initialized by fitting AR(4) models to the dynamic clusters of the training dataset. We can see that the segmentation accuracies on the unseen dataset dropped slightly for both the SKF and SKS, while the SKF-Viterbi maintained a higher performance of 84.2\%. The fusion of the SKF and the duration-dependent Viterbi algorithm improves the average performance of the SKF from 71\% to 84.2\%.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Average segmentation performance on selected balanced testing set.}
\vspace{0.1 cm}
\label{Table:table91}
\centering
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{cccccc}
\hline \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Interval} & \multicolumn{4}{c}{\textit{Performance mean $\pm$ SD (\%)}} \\
\cline{3-6}
& & $Se$ & $P_+$ & $F1$ & $Acc$\\
\hline
\multirow{4}{*}{\begin{tabular} [m{1cm} ]{@{} m{1cm} @{}} SKF \end{tabular}} & \textit{S1} & 74 $\pm$ 12 & 69 $\pm$ 17 & 71 $\pm$ 13 & \multirow{4}{*}{71 $\pm$ 13}\\
& \textit{Sys} & 61 $\pm$ 21 & 64 $\pm$ 18 & 61 $\pm$ 19 & \\
& \textit{S2} & 33 $\pm$ 17 & 61 $\pm$ 28 & 40 $\pm$ 20 & \\
& \textit{Dia} & 85 $\pm$ 12 & 78 $\pm$ 10 & 81 $\pm$ 10 & \\
\hline
\multirow{4}{*}{\begin{tabular} [m{1cm} ]{@{} m{1cm} @{}} SKS \end{tabular}} & \textit{S1} & 77 $\pm$ 16 & 74 $\pm$ 20 & 74 $\pm$ 17 & \multirow{4}{*}{74 $\pm$ 18}\\
& \textit{Sys} & 67 $\pm$ 25 & 68 $\pm$ 23 & 67 $\pm$ 24 & \\
& \textit{S2} & 55 $\pm$ 24 & 60 $\pm$ 28 & 55 $\pm$ 25 & \\
& \textit{Dia} & 81 $\pm$ 21 & 83 $\pm$ 14 & 81 $\pm$ 17 & \\
\hline
\multirow{4}{*}{\begin{tabular} [m{1cm} ]{@{} m{1cm} @{}} SKF-Viterbi \end{tabular}} & \textit{S1} & 77 $\pm$ 15 & 85 $\pm$ 16 & 81 $\pm$ 15 & \multirow{4}{*}{84 $\pm$ 14}\\
& \textit{Sys} & 86 $\pm$ 18 & 87 $\pm$ 17 & 81 $\pm$ 17 & \\
& \textit{S2} & 63 $\pm$ 20 & 76 $\pm$ 21 & 68 $\pm$ 19 & \\
& \textit{Dia} & 91 $\pm$ 12 & 89 $\pm$ 12 & 90 $\pm$ 12 & \\
\hline\hline
\multicolumn{6}{m{10cm}}{\textit{S1}: S1 sound, \textit{Sys}: systolic, \textit{S2}: S2 sound, \textit{Dia}: diastolic,
SD: standard deviation, SKF: switching Kalman filter, SKS: switching Kalman smoother.}
\end{tabular}}
\vspace{-0.2in}
\end{table}
The study presented here investigated new approaches for the segmentation of the fundamental heart sounds (S1, systole, S2, and diastole) from a single-channel heart sound recording, without using any reference signals for the labeling process. The results show that the backward SKS slightly outperforms the SKF method, increasing the accuracy by almost 4\%. However, fusing the duration-dependent Viterbi algorithm with the SKF resulted in a significant improvement in heart sound segmentation, achieving almost 10\% higher accuracy.
The overall performance results of the three proposed approaches on the unseen (not trained) dataset, for each fundamental heart sound, are presented in Table \ref{Table:table91}. It is important to note that the results in this table are calculated with zero tolerance between the ground truth and the estimated labels. The confusion matrix is calculated such that the observation at time $t$ is a true positive if its state matches the ground truth label, and is otherwise considered a false positive. The set of equations provided in \cite{Mariano2011} was used in this paper to calculate $Se$, $P_+$, $F1$ and the global accuracy $Acc$. The Viterbi-based approach outperforms both the SKF and SKS, achieving a global accuracy of 84 $\pm$ 14\% on the hidden testing set, with the highest detection rate for the diastolic intervals.
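The zero-tolerance evaluation can be sketched as follows (a Python illustration of the sample-wise confusion counts; the exact formulas follow \cite{Mariano2011}):

```python
import numpy as np

def per_state_metrics(truth, pred, K=4):
    """Zero-tolerance scoring: the sample at time t counts as a true
    positive for state j only if both labels equal j exactly."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    out = {}
    for j in range(K):
        tp = np.sum((pred == j) & (truth == j))
        fp = np.sum((pred == j) & (truth != j))
        fn = np.sum((pred != j) & (truth == j))
        se = tp / (tp + fn) if tp + fn else 0.0
        pp = tp / (tp + fp) if tp + fp else 0.0
        f1 = 2 * se * pp / (se + pp) if se + pp else 0.0
        out[j] = (se, pp, f1)
    return out, float(np.mean(truth == pred))

out, acc = per_state_metrics([0, 0, 1, 1, 2, 3], [0, 1, 1, 1, 2, 3])
assert abs(acc - 5 / 6) < 1e-12
assert out[0] == (0.5, 1.0, 2 / 3)   # state 0: one hit, one miss
```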
The state-of-the-art method \cite{Springer2016} uses a logistic regression model with a multivariate normal (MVN) distribution computed from four-dimensional feature vectors extracted from each heart sound recording. The use of such a higher-dimensional feature space allows the model to better capture the underlying dynamics of the four-state observations. In contrast, the methods proposed in this paper omit the feature extraction phase and use a down-sampled version of the raw heart sound recordings, in which the Kalman filter infers the state probabilities given a univariate heart sound observation sequence.
\subsection{Heart Sound Classification}
In this section, we evaluate the performance of the HMM in classifying abnormal heart sound morphologies. The proposed technique can perform classification at the beat level or the recording level. In the beat-level approach, each heartbeat (segment) is individually classified and assigned to the normal, abnormal, or X-Factor class. In the recording-level approach, the classification scores of all heartbeats belonging to the same recording are combined by voting: a recording is classified as abnormal only when the proportion of beats assigned to the abnormal class is dominant. The beat-level approach substantially expands the number of training instances, which allows the machine learning model to learn more about the underlying heart sound dynamics of each class. The database provides only global (recording-level) labels, in which each record is assigned to the abnormal or normal class, so we assumed all beats of a given abnormal recording are also abnormal. Hence, if only a small portion of a recording was corrupted by noise, the recording will not be classified as noisy (X-Factor).
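The recording-level voting rule can be written as a one-liner; the sketch below (Python, illustrative only) treats the recording label as the majority beat label:

```python
from collections import Counter

def recording_label(beat_labels):
    """Majority vote over beat-level decisions: the recording is called
    abnormal only when abnormal beats dominate (ties resolve arbitrarily
    in this sketch)."""
    counts = Counter(beat_labels)
    return max(counts, key=counts.get)

# e.g. 7 normal beats and 3 abnormal beats -> the recording is normal
assert recording_label(['N'] * 7 + ['A'] * 3) == 'N'
assert recording_label(['A'] * 6 + ['N'] * 3 + ['Q'] * 1) == 'A'
```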
In addition to beat-level and recording-level classification, two approaches to train-test data partitioning were investigated to evaluate the performance of the HMM models. In the first approach, all beats were split into balanced normal and abnormal sets, with and without the X-Factor class, using 5-fold cross-validation. This is necessary to avoid over-fitting the model, but it might place a patient's beats in the training set while reporting on a testing set that includes data from the same patient, which can falsely inflate the accuracy measures. Five folds were used because the X-Factor beats (segments) are far fewer than the normal and abnormal ones, and 5 folds keep enough X-Factor beats for testing. In the second approach, the recordings were split into two balanced training and testing sets, where the testing set includes almost the same proportion of beats/recordings from the normal, abnormal, and (where applicable) X-Factor classes. This approach provides a more thorough analysis of the reported classification performance and measures the ability of the trained HMM models to classify unseen heart sound data.
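A class-balanced 5-fold split of the kind described above can be sketched as follows (Python; this fold construction is our own illustrative version, not the exact partitioning used in the experiments):

```python
import numpy as np

def stratified_folds(labels, k=5, seed=0):
    """Split indices into k folds with class-balanced membership, so the
    scarce X-Factor beats appear in every test fold."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        for f, chunk in enumerate(np.array_split(idx, k)):
            folds[f].extend(int(i) for i in chunk)
    return folds

labels = ['N'] * 50 + ['A'] * 25 + ['Q'] * 5
folds = stratified_folds(labels, k=5)
assert sorted(i for f in folds for i in f) == list(range(80))   # a partition
assert all(sum(labels[i] == 'Q' for i in f) == 1 for f in folds)  # Q in every fold
```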
\subsubsection{Beat-level Classification using 5-Fold Cross-validation (Without X-Factor)}
Table \ref{Table:table9} shows the corresponding results of 5-fold cross-validation for a total of 81,498 normal and abnormal beats. We partitioned the database to include balanced proportions of normal and abnormal beats for both training and testing; note that each fold might not contain exactly the same number of recordings as the other folds.
The overall results for normal/abnormal classification can be seen in the last two rows of the table, showing an average $Se$ of $94.39 \pm 1.22$, $P_+$ of $86.37 \pm 0.90$, $Acc$ of $87.98 \pm 0.52$, and $F1$ score of $90.19 \pm 0.26$. The four evaluation metrics ($Se$, $P_+$, $Acc$, and $F1$) were computed from the confusion matrix. Note that some of the normal/abnormal beats are corrupted by varying levels of noise, even though entirely noisy recordings were excluded from this experiment. Moreover, the database does not provide beat-level cardiologists$'$ labeling. This may result in the misclassification of a noisy beat as abnormal, as can be noticed in the $FP$ column (see Table \ref{Table:table9}).
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{K-Fold (5-Fold) cross validation of Physionet CinC training dataset (Table \ref{Table:table1}) without X-Factor.}
\vspace{0.1 cm}
\label{Table:table9}
\centering
\resizebox{1\textwidth}{!}{
\begin{tabular}{c|cccccccc|cccccccc}
\hline \hline
\multirow{2}{*}{\begin{tabular}[c]{@{} c@{}}Fold\\iterate \end{tabular}} & \multicolumn{8}{c}{Beat-level without X-Factor class} & \multicolumn{8}{c}{Recording-level without X-Factor class}\\
\cline{2-17}
& $TP$ & $FP$ & $TN$ & $FN$ & $Se$ & $P_+$ & $Acc$ & $F1$ & $TP$ & $FP$ & $TN$ & $FN$ & $Se$ & $P_+$ & $Acc$ & $F1$ \\
\hline
1 & 3079 & 1741 & 11290 & 190 & 94.19 & 86.64 & 88.15 & 90.26 & 516 & 310 & 1899 & 35 & 93.67 & 85.97 & 87.51 & 89.65\\
2 & 3040 & 1721 & 11309 & 229 & 92.99 & 86.79 & 88.04 & 89.79 & 512 & 362 & 1864 & 35 & 93.60 & 83.74 & 85.68 & 88.40\\
3 & 3134 & 1887 & 11143 & 135 & 95.87 & 85.52 & 87.59 & 90.40 & 530 & 409 & 1818 & 17 & 96.89 & 81.63 & 84.64 & 88.61\\
4 & 3118 & 1903 & 11128 & 151 & 95.38 & 85.40 & 87.40 & 90.11 & 538 & 392 & 1827 & 15 & 97.29 & 82.33 & 85.32 & 89.19\\
5 & 3058 & 1627 & 11403 & 212 & 93.52 & 87.51 & 88.72 & 90.42 & 505 & 325 & 1884 & 40 & 92.66 & 85.29 & 86.75 & 88.82\\
\hline
Mean & 3086 & 1776 & 11255 & 183 & 94.39 & 86.37 & 87.98 & 90.19 & 521 & 360 & 1858 & 28 & 94.82 & 83.79 & 85.98 & 88.93\\
SD$^\dagger$ & 40 & 117 & 117 & 40 & 1.22 & 0.90 & 0.52 & 0.26 & 13 & 42 & 35 & 12 & 2.11 & 1.85 & 1.14 & 0.50\\
\hline \hline
\multicolumn{9}{m{3cm}}{Standard deviation$^\dagger$}
\end{tabular}}
\vspace{-0.2in}
\end{table*}
\subsubsection{Beat-level Classification using 5-Fold Cross-validation (With X-Factor)}
The heartbeats assigned to the X-Factor were used together with the normal and abnormal classes, and three HMM models were trained for the normal, abnormal, and X-Factor classes. The objective of this experiment is to test the ability of the proposed method to automatically reject beats labeled as unsure, which is a challenging task in biomedical signal analysis. To confirm the overall performance of the beats classified as normal or abnormal in the presence of the X-Factor class, the modified performance metrics defined in equations (\ref{Eqn:eqn16}), (\ref{Eqn:eqn17}), (\ref{Eqn:eqn18}), and (\ref{Eqn:eqn19}) were used. The confusion matrix was obtained for each of the 5 cross-validation folds, in which the reference beat labels A-good represent beats confirmed to be abnormal and A-poor refers to beats considered unsure (X-Factor). Incorporating the X-Factor class came at the cost of almost 13.3\% of the X-Factor beats being classified as abnormal and 7.6\% classified as normal. Table \ref{Table:table11} shows the average performance of the 5-fold cross-validation: the method achieved an average $Se$ of $83.82\pm 2.47$, $Sp$ of $81.63\pm 1.40$, $MAcc$ of $82.73 \pm 1.67$, and $F1$ score of $77.01 \pm 0.65$. The small standard deviations in the last row indicate consistent results across the 5 folds.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{K-Fold (5-Fold) cross validation performance for Physionet CinC training dataset (Table \ref{Table:table1}) with X-Factor.}
\vspace{-0.1 cm}
\label{Table:table11}
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tabular}{c|cccc|cccc}
\hline \hline
\multirow{2}{*}{\begin{tabular}[c]{@{} c@{}}Fold\\iterate \end{tabular}} & \multicolumn{4}{c}{Beat-level with X-Factor} & \multicolumn{4}{c}{Recording-level with X-Factor class}\\
\cline{2-9}
& $Se$ & $Sp$ & $MAcc$ & $F1$ & $Se$ & $Sp$ & $MAcc$ & $F1$ \\
\hline
1 & 81.45 & 82.12 & 81.78 & 76.25 & 77.93 & 81.59 & 79.76 & 76.57\\
2 & 82.62 & 79.63 & 81.12 & 76.46 & 81.29 & 78.78 & 80.04 & 76.29\\
3 & 85.47 & 81.07 & 83.27 & 77.15 & 81.88 & 79.22 & 80.55 & 75.56\\
4 & 87.31 & 83.43 & 85.37 & 77.36 & 86.76 & 80.66 & 83.71 & 77.48\\
5 & 82.24 & 81.92 & 82.08 & 77.83 & 79.87 & 80.44 & 80.16 & 76.51\\
\hline
Mean & 83.82 & 81.63 & 82.73 & 77.01 & 81.55 & 80.14 & 80.84 & 76.48\\
SD$^\dagger$ & 2.47 & 1.40 & 1.67 & 0.65 & 3.29 & 1.13 & 1.63 & 0.69\\
\hline \hline
\multicolumn{5}{m{3cm}}{standard deviation$^\dagger$}
\end{tabular}}
\vspace{-0.2in}
\end{table}
\subsubsection{Recording-level Classification using 5-Fold Cross-validation (Without X-Factor)}
In this experiment, the whole heart sound recording was classified as either normal or abnormal, discarding the inter-beat classification. Table \ref{Table:table9} shows the detailed performance of 5-fold cross-validation on the selected balanced normal-abnormal dataset. The FP rate for detecting abnormal recordings shows that almost 16.23\% of the normal recordings were classified as abnormal, which increases the probability of false classification. Nevertheless, the proposed method obtains a $Se$ of $94.82\pm 2.11$, $P_+$ of $83.79\pm 1.85$, $Acc$ of $85.98\pm 1.14$, and $F1$ score of $88.93\pm 0.50$. Compared to the beat-level classification performance in Table \ref{Table:table9}, the performance drops slightly for recording-level classification. This indicates that some recordings may be considered abnormal based on the existence of abnormality in some beats while other beats still exhibit normal morphologies.
\subsubsection{Recording-level Classification using 5-Fold Cross-validation (With X-Factor)}
In the recording-based classification with the X-Factor class, each recording labeled as unsure was considered X-Factor. Since the X-Factor recordings do not include the fundamental heart sounds ($S1$, systole, $S2$, diastole), they were segmented using a non-overlapping window of one second. This segmentation was considered equivalent to the complete heartbeat cycle ($S1$ sound to end of diastole) in the normal or abnormal recordings. Compared to the recording-level classification without the X-Factor class, we can observe that the average $Se$ dropped from $94.82\pm 2.11$ to $81.55\pm 3.29$ (see Table \ref{Table:table11}), as did the other metrics. This drop in performance occurs because recordings considered X-Factor may still hold the underlying dynamics of the heart sounds in some portions, which are in turn misclassified as normal or abnormal.
\subsubsection{Beat-level Classification using Leave-one-out (unseen) Cross-validation (Without X-Factor)}
Each dataset (DS-\textit{a} to DS-\textit{f}) is split into a train and a test set (see Table \ref{Table:table2}), where the testing set contains balanced recordings that are totally unseen by the trained classifier. The HMM classification performance was investigated at both the beat level and the recording level, with and without considering the X-Factor class.
The training and testing sets are shown in Table \ref{Table:table2}, where a total of 1438 normal and abnormal recordings were assigned to the training dataset and 1434 normal and abnormal recordings were assigned to the testing dataset. Table \ref{Table:table15} shows the performance for abnormal beat detection: our method achieved an overall accuracy of 86.79\% compared to 87.98\% for 5-fold cross-validation. This provides evidence that the trained HMM models can achieve similar accuracies for both seen and unseen heartbeat testing sets.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\vspace{0.2in}
\caption{Classification performance for unseen testing set (Table \ref{Table:table2}).}
\vspace{0.1 cm}
\label{Table:table15}
\centering
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{ccccc|cccc}
\hline \hline
\multirow{2}{*}{\begin{tabular}[c]{@{} c@{}}Classification\\approach \end{tabular}} & \multicolumn{4}{c}{Without X-Factor} & \multicolumn{4}{c}{With X-Factor class}\\
\cline{2-9}
& $Se$ & $P_+$ & $Acc$ & $F1$ & $Se$ & $Sp$ & $MAcc$ & $F1$\\
\hline
Beat-level & 91.45 & 85.97 & 87.07 & 88.63 & 81.50 & 83.97 & 82.74 & 75.60 \\
Recording-level & 92.25 & 81.74 & 83.82 & 86.68 & 78.92 & 79.65 & 79.28 & 74.47 \\
\hline \hline
\end{tabular}}
\vspace{-0.1in}
\end{table}
\subsubsection{Beat-level Classification using Leave-one-out (unseen) Cross-validation (With X-Factor)}
Merging the X-Factor train-test dataset in Table \ref{Table:table8} with the normal-abnormal train-test datasets in Table \ref{Table:table2} gives a total of 43123 segments (1576 recordings) assigned to the training dataset and 43203 segments (1575 recordings) assigned to the testing dataset. The modified $Se$, $P_+$, and $Acc$ were calculated as defined in \cite{Springer2016}, while $F1$ was found using equation (\ref{Eqn:eqn19}). Including the X-Factor, the resulting $Se$ was almost unchanged compared to the $Se$ obtained when discarding the X-Factor class; however, the $F1$ score dropped by 13.03\% (see Table \ref{Table:table15}). This is mainly due to the significantly imbalanced data classes, since the X-Factor class contains a much smaller amount of data than the normal and abnormal classes.
\subsubsection{Recording-level Classification using Leave-one-out (unseen) Cross-validation (Without X-Factor)}
The HMM models were trained using 1150 normal and 288 abnormal recordings, and evaluated on a totally unseen testing set containing 1150 normal and 284 abnormal recordings. Table \ref{Table:table15} summarizes the confusion matrix and the overall classification performance for the detection of abnormal heart sound recordings. We can see an improvement in $Se$ of 3.38\% compared to beat-level classification; however, the $F1$ score dropped by 1.95\%, because of a lower $P_+$ as a trade-off for the increase in $Se$.
\subsubsection{Recording-level Classification using Leave-one-out (unseen) Cross-validation (With X-Factor)}
A total of 1150 normal, 288 abnormal, and 138 X-Factor recordings were used to train the HMM models, and performance was evaluated on a totally unseen testing set containing 1150 normal, 284 abnormal, and 141 X-Factor recordings. The classification confusion matrix was obtained to compute the performance on heart sound recordings using the unseen testing set incorporating the X-Factor class, as shown in Table \ref{Table:table15}. We can see a significant drop in $Sp$, which in turn affects the $F1$ score: the classification of heart sound recordings with the inclusion of the X-Factor class shows the lowest $F1$ score while maintaining the abnormal-class $Se$.
\section{Conclusion}
\label{sec:prior}
We have developed a Markov-switching linear dynamic model of the piece-wise AR process for heart sound segmentation. Results showed that the fusion of the SKF and the Viterbi algorithm achieves remarkable segmentation accuracy on a challenging dataset. This work focuses on the modeling of raw heart sound signals. Future work will consider an extension of the proposed model to the multivariate case, for modeling multi-dimensional feature vectors extracted from a raw heart sound, as in the logistic regression model with multivariate normal (MVN) distribution \cite{Springer2016}, a state-of-the-art method for heart-sound segmentation. We also investigated the classification performance
of the MFCC-based continuous-density HMM, which models not only the normal and abnormal morphologies of heart sound signals but also morphologies considered unclassifiable or unknown (denoted as X-Factor). The HMM classification performance was examined with and without incorporating the X-Factor on the 2016 Physionet/CinC Challenge database. Our proposed method shows the best gross $F1$ scores of 90.19 and 82.7 on abnormal beat classification with and without incorporating the X-Factor, respectively.
\newpage
\bibliographystyle{IEEEbib}
\section{Introduction}\label{se:introduction}
\emph{Graph drawing} is a well-established research area that studies how to automatically compute visual representations of relational data sets in many application domains, including software engineering, circuit design, computer networks, database design, social sciences, and biology (see, e.g.,~\cite{dett-gd-99,dl-gvdm-07,jm-gds-03,kw-dg-01,s-gd-02,t-hgd-13}). The aim of a graph visualization is to clearly convey the structure of the data and their relationships, in order to support users in their analysis tasks. In this respect, there is a general consensus that edges with many crossings and bends negatively affect the readability of a drawing of a graph, as also witnessed by several user studies on the subject (see, e.g.,~\cite{DBLP:journals/iwc/Purchase00,DBLP:journals/ese/PurchaseCA02,DBLP:journals/ivs/WarePCM02}). At the same time, more recent cognitive experiments suggest that edge crossings do not inhibit user task performance if the edges cross at large angles~\cite{DBLP:conf/apvis/Huang07,DBLP:journals/vlc/HuangEH14,DBLP:conf/apvis/HuangHE08}. As observed in~\cite{del-dgrac-2011}, intuitions of this fact can be found in real-life applications: for example, in hand-drawn metro maps and circuit schematics, where edge crossings are conventionally at 90 degrees (see, e.g.,~\cite{Vignelli}), and in the guidelines of the CCITT (Comit\'e Consultatif International T\'el\'ephonique et T\'el\'egraphique) for drawing Petri nets, where it is written: ``There should be no acute angles where arcs cross''~\cite{CCITT-85}.
The above practical considerations naturally motivate the theoretical study of families of graphs that can be drawn with straight-line edges, few crossings per edge, and right angle crossings at the same time. This kind of research falls within an emerging topic of graph drawing and graph theory, informally called ``beyond planarity''. The general framework of this topic is to relax the planarity constraint by allowing edge crossings, but still forbidding those configurations that would affect the readability of the drawing too much. Different types of forbidden edge-crossing configurations give rise to different families of beyond planar graphs. For example, for any integer $k \geq 3$, the family of \emph{$k$-quasi planar graphs} is the set of graphs that have a drawing with no $k$ mutually crossing edges (see, e.g.,~\cite{DBLP:journals/jct/AckermanT07,aapps-qpgln-C97,DBLP:journals/siamdm/FoxPS13}). For any positive integer $k$, the family of \emph{$k$-planar graphs} is the set of graphs that admit a drawing with at most $k$ crossings per edge~\cite{pt-gdfce-C97}; in particular, \emph{$1$-planar graphs} have been widely studied in the literature (see, e.g.,~\cite{abk-slgd3-GD13,DBLP:journals/ipl/Didimo13,DBLP:journals/algorithmica/GrigorievB07,help-ft1pg-COCOON12,DBLP:journals/jgt/KorzhikM13,r-esadk-AMS65}). \emph{\Rac (Right Angle Crossing) graphs} are those graphs that admit a drawing where edges cross only at right angles (see, e.g.,~\cite{del-dgrac-2011}). Several algorithms and systems for computing \Rac drawings or, more generally, large angle crossing drawings have been described in the literature (see, e.g.,~\cite{DBLP:journals/cj/ArgyriouBS13,DBLP:journals/algorithmica/GiacomoDEL14,DBLP:journals/cj/DiGiacomoDGLR15,DBLP:journals/comgeo/GiacomoDLM13,dlr-tdfda-10,DBLP:journals/jgaa/DidimoLR11,nehh-lcacl-2010,DBLP:conf/vl/HuangEHL10}). See also~\cite{dl-cargd-12} for a survey.
In this scenario, the study of the relationships between 1-planar drawings and straight-line RAC drawings is receiving special attention. The maximum number of edges of an $n$-vertex 1-planar graph is $4n-8$~\cite{pt-gdfce-C97}, while $n$-vertex straight-line 1-planar drawings and $n$-vertex straight-line RAC drawings have at most $4n-9$ edges~\cite{DBLP:journals/ipl/Didimo13} and $4n-10$ edges~\cite{del-dgrac-2011}, respectively. This implies that not all 1-planar graphs admit a straight-line drawing and that not all 1-planar graphs with a straight-line drawing admit a straight-line RAC drawing. The characterization of the 1-planar graphs that can be drawn with straight-line edges was given by Thomassen in 1988~\cite{t-rdg-JGT88}.
\smallskip\noindent{\bf Our Contribution.}
In this paper we give new results on the relationship between 1-planar graphs, \Rac graphs, and straight-line drawings. We concentrate on a subfamily of 1-planar graphs known as \emph{IC-planar graphs}, which stands for graphs with \emph{independent crossings}~\cite{a-cnirc-AMC08}. An IC-planar graph is a graph that admits a 1-planar drawing where no two crossed edges share an endpoint, i.e., all crossing edges form a matching. IC-planar graphs have been mainly studied both in terms of their structure and in terms of their applications to coloring problems~\cite{a-cnirc-AMC08,ks-cpgic-JGT10,z-dcmgp-AMS14,zl-spgic-CEJM13}. We prove that:
\begin{itemize}
\item Every IC-planar graph with $n$ vertices admits a (non-RAC) straight-line drawing in~$O(n^2)$ area, which can be computed in $O(n)$ time (Theorem~\ref{th:straightline}). Our bound on the area requirement is worst-case optimal. We recall that there are embedded 1-planar graphs whose straight-line drawings require~$\Omega(2^n)$ area~\cite{help-ft1pg-COCOON12}.
\item Every IC-planar graph is a \Rac graph (Theorem~\ref{th:rac-drawings}), but we also show that a straight-line \Rac drawing of an $n$-vertex IC-plane graph may require~$\Omega(q^n)$ area, for a suitable constant~$q > 1$ (Theorem~\ref{th:rac-area}).
\end{itemize}
Moreover, as a natural problem related to the results above, we study the computational complexity of recognizing IC-planar graphs. Namely:
\begin{itemize}
\item We prove that IC-planarity testing is \NP-complete both in the variable embedding setting (Theorem~\ref{th:np-hard}) and when the rotation system of the graph is fixed (Theorem~\ref{th:np-rot}). Note that 1-planarity testing is already known to be \NP-complete in general~\cite{DBLP:journals/algorithmica/GrigorievB07,DBLP:journals/jgt/KorzhikM13} and for a fixed rotation system~\cite{JGAA-347}. Testing 1-planarity remains NP-hard even for \emph{near-planar} graphs, i.e., graphs that can be obtained from a planar graph by adding an edge~\cite{DBLP:journals/siamcomp/CabelloM13}.
\item On the positive side, we give a polynomial-time algorithm that tests whether a triangulated plane graph augmented with a given set of edges that form a matching is IC-planar (Theorem~\ref{th:triang-test}).
The interest in this special case is also motivated by the fact that every $n$-vertex IC-planar graph with maximum number of edges (i.e., with $13n/4-6$ edges) is the union of a triangulated planar graph and of a set of edges that form a matching~\cite{zl-spgic-CEJM13}.
\end{itemize}
We finally recall that the problem of recognizing 1-planar graphs has also been studied in terms of parameterized complexity. Namely, Bannister, Cabello, and Eppstein describe fixed-parameter tractable algorithms with respect to different parameters: vertex cover number, tree-depth, and cyclomatic number~\cite{DBLP:conf/wads/BannisterCE13}. They also show that the problem remains NP-complete for graphs of bounded bandwidth, pathwidth, and treewidth, which makes it unlikely to find fixed-parameter tractable algorithms with respect to these parameters. Fixed-parameter tractable algorithms have also been described for computing the crossing number of a graph, a problem that is partially related to the research on beyond planarity (see, e.g.,~\cite{DBLP:conf/stoc/Grohe01,DBLP:conf/gd/PelsmajerSS07a}).
The remainder of the paper is organized as follows.
In Section~\ref{se:preliminaries} we recall basic definitions used in the paper. Section~\ref{ic:sec:drawing} is devoted to prove Theorem~\ref{th:straightline}. In Section~\ref{se:rac} we prove Theorems~\ref{th:rac-drawings} and~\ref{th:rac-area}. Section~\ref{se:recognition} proves Theorems~\ref{th:np-hard},~\ref{th:np-rot}, and~\ref{th:triang-test}. Conclusions and open problems are in Section~\ref{se:conclusions}.
\section{Preliminaries}\label{se:preliminaries}
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[scale=0.6]{ic-drawing}\label{fi:ic-drawing}}
\hfil
\subfigure[]{\includegraphics[page=1, scale=0.8]{rotsys-diffemd-ex}
\includegraphics[page=2, scale=0.8]{rotsys-diffemd-ex}\label{fi:rotemb}}
\hfil
\subfigure[]{\includegraphics[scale=0.8,page=3]{thomassen}\label{fi:thomassen-1}}
\hfil
\subfigure[]{\includegraphics[scale=0.8]{removeB}\label{fi:thomassen-2}}
\caption{(a) An IC-planar drawing. (b) Two different IC-planar embeddings of
the same graph with the same rotation system. (c) An X-configuration. (d)
A B-configuration.}
\end{figure}
We consider simple undirected graphs~$G$. A \emph{drawing}~$\Gamma$
of~$G$ maps the vertices of~$G$ to distinct points in the plane and the edges
of~$G$ to simple Jordan curves between their endpoints. If the vertices are
drawn at integer coordinates,~$\Gamma$ is a \emph{grid drawing}.
$\Gamma$ is \emph{planar} if no edges cross, and \emph{1-planar} if each edge
is crossed at most once. $\Gamma$ is \emph{IC-planar} if it is 1-planar and
there are no crossing edges that share a vertex. An example of an IC-planar graph is shown in Figure~\ref{fi:ic-drawing}.
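As an aside, checking the IC-planarity condition on a given 1-planar drawing amounts to verifying that the crossed edges form a matching. A minimal sketch (the function name and data layout are our own illustration, not from the paper):

```python
def is_ic_planar_crossing_set(crossing_pairs):
    """Return True if no two crossed edges share an endpoint, i.e. the
    edges involved in crossings form a matching.

    crossing_pairs: iterable of pairs of edges, each edge a 2-tuple of
    vertices; each pair represents two edges that cross in the drawing.
    """
    seen = set()  # vertices already incident to some crossed edge
    for e1, e2 in crossing_pairs:
        endpoints = set(e1) | set(e2)
        if len(endpoints) < 4:   # the two crossing edges share a vertex
            return False
        if endpoints & seen:     # a vertex lies on two crossed edges
            return False
        seen |= endpoints
    return True
```

For example, the pairs `((1,2),(3,4))` and `((5,6),(7,8))` pass the check, while `((1,2),(3,4))` together with `((4,5),(6,7))` fails, since vertex `4` is incident to two crossed edges.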
A planar drawing~$\Gamma$ of a graph~$G$ induces an \emph{embedding},
which is the class of topologically equivalent drawings. In particular, an
embedding specifies the regions of the plane, called \emph{faces}, whose boundary
consists of a cyclic sequence of edges. The unbounded face is called the
\emph{outer face}. For a 1-planar drawing, we can still derive an embedding
by allowing the boundary of a face to consist also of edge segments from a
vertex to a crossing point.
A graph with a given planar (1-planar, IC-planar) embedding is called a
\emph{plane} (\emph{1-plane, IC-plane}) graph.
A \emph{rotation system}~$\mathcal{R}(G)$ of a graph~$G$ describes a possible
cyclic ordering of the edges around the vertices. $\mathcal{R}(G)$ is planar
(1-planar, IC-planar) if~$G$ admits a planar (1-planar, IC-planar)
embedding that preserves~$\mathcal{R}(G)$. Observe that~$\mathcal{R}(G)$ can
directly be retrieved from a drawing or an embedding. The converse does not
necessarily hold, as shown in Figure~\ref{fi:rotemb}.
A \emph{kite}~$K$ is a graph isomorphic to~$K_4$ with an embedding such that all
the vertices are on the boundary of the outer face, the four edges on the
boundary are planar, and the remaining two edges cross each other; see Figure~\ref{fi:thomassen-1}.
Thomassen~\cite{t-rdg-JGT88} characterized the possible crossing configurations
that occur in a 1-planar drawing. Applying this characterization to IC-planar
drawings gives rise to the following property:
\begin{property}\label{pr:char-crossins}
Every crossing of an IC-planar drawing is either an X- or a B-crossing.
\end{property}
\noindent
Here, an X-crossing has the crossing ``inside'' the 4-cycle
(see Figure~\ref{fi:thomassen-1}),
and a B-crossing has the crossing ``outside'' the 4-cycle
(see Figure~\ref{fi:thomassen-2}).
We remark that, according to Thomassen~\cite{t-rdg-JGT88}, a
$1$-planar drawing may also contain crossings of a third type, called
W-crossings, which are neither X- nor B-crossings. W-crossings are not
possible in an IC-planar drawing, since a W-crossing contains two vertices
that are each incident to two crossed edges.
Let~$G$ be a plane (1-plane, IC-plane) graph. $G$ is \emph{maximal} if no edge
can be added without violating planarity (1-planarity, IC-planarity). A planar
(1-planar, IC-planar) graph~$G$ is maximal if every planar (1-planar, IC-planar)
embedding is maximal.
If we restrict to 1-plane (IC-plane) graphs, we say that~$G$ is
\emph{plane-maximal} if no edge can be added without creating at least an edge
crossing on the newly added edge (or making the graph not simple). We call the
operation of adding edges to~$G$ until it becomes plane-maximal a
\emph{plane-maximal augmentation}.
\section{Straight-line drawings of IC-planar graphs}\label{ic:sec:drawing}
We show that every IC-planar graph admits an IC-planar straight-line grid
drawing in quadratic area, and this area is worst-case optimal
(Theorem~\ref{th:straightline}). The result is based on first using a new technique
that augments an embedding of the input graph to a maximal IC-plane graph (the
resulting embedding might be different from the original one) with specific
properties (Lemma~\ref{lem:3-connected}), and then suitably applying a drawing
algorithm by Alam {\em et al.} for triconnected 1-plane
graphs~\cite{abk-slgd3-GD13} on the augmented graph.
We say that a kite $(a,b,c,d)$ with crossing edges $(a,d)$ and $(b,c)$ is
\emph{empty} if it contains no other
vertices, that is, the edges $(a,c)$, $(a,d)$, and $(a,b)$ are consecutive in
the counterclockwise order around~$a$; see Figure~\ref{fi:maximal-planar-augment-2}. The condition for the edges around~$b$, $c$, and~$d$ is analogous. We are now ready to prove the next lemma.
\newcommand{\lemThreeConText}[1]{
Let~$G=(V,E)$ be an IC-plane graph with~$n$ vertices. There exists an
$O(n)$-time algorithm that computes a plane-maximal IC-plane
graph~$G^+ = (V, E^+)$ with~$E \subseteq E^+$ such that the following
conditions hold:
\begin{enumerate}[label={\bfseries (c\arabic*)}]
\item \label{#1-kite} The four endpoints of each pair of crossing edges
induce a kite.
\item \label{#1-empty} Each kite is empty.
\item \label{#1-triangulated} Let~$C$ be the set of crossing edges
in~$G^+$. Let~$C^* \subset C$ be a subset containing exactly one edge for
each pair of crossing edges. Then~$G^+ \setminus C^*$ is plane and
triangulated.
\item \label{#1-3cycle} The outer face of~$G^+$ is a $3$-cycle of non-crossed edges.
\end{enumerate}
}
\begin{lemma} \label{lem:3-connected}
\lemThreeConText{main}
\end{lemma}
\begin{figure}[t]
\centering
\subfigure[The kite (drawn bold) is not empty]{\includegraphics[page=1]{maximal-planar-augment}\label{fi:maximal-planar-augment-1}}\hfil
\subfigure[Rerouting edge $(a,b)$ to make the kite empty]{\includegraphics[page=2]{maximal-planar-augment}\label{fi:maximal-planar-augment-2}}\hfil
\subfigure[Triangulating the remaining faces]{\includegraphics[page=3]{maximal-planar-augment}\label{fi:maximal-planar-augment-3}}
\caption{Illustration for the proof of Lemma~\ref{lem:3-connected}.}
\label{fi:maximal-planar-augment}
\end{figure}
\begin{proof}
Let~$G$ be an IC-plane graph; we augment $G$ by adding edges such that for each pair of
crossing edges~$(a,d)$ and~$(b,c)$ the subgraph induced by vertices $\{a,b,c,d\}$ is
isomorphic to~$K_4$; see the dashed edges in Figures~\ref{fi:thomassen-1} and~\ref{fi:thomassen-2}.
Next, we want to make sure that this subgraph forms an X-configuration and
the resulting kite is empty.
Since $G$ is IC-planar, it has no two B-configurations sharing an edge.
Thus, we remove a B-configuration with vertices $\{a,b,c,d\}$ by rerouting
the edge~$(a,b)$ to follow the edge~$(a,d)$ from vertex~$a$ until the
crossing point, then edge~$(b,c)$ until vertex~$b$, as shown by the dotted
edge in Figure~\ref{fi:thomassen-2}. This is always possible, because
edges~$(a,d)$ and~$(b,c)$ only cross each other; hence, following their curves,
we do not introduce any new crossing. The resulting IC-plane graph
satisfies~\ref{main-kite} (recall that, by Property~\ref{pr:char-crossins},
only X- and B-configurations are possible).
Now, assume that a kite $(a,b,c,d)$ is not empty; see
Figure~\ref{fi:maximal-planar-augment-1}. Following the same argument as above,
we can reroute the edges $(a,b)$, $(b,d)$, $(c,d)$ and $(a,d)$ to follow the
crossing edges $(a,d)$ and $(b,c)$; see Figure~\ref{fi:maximal-planar-augment-2}.
The resulting IC-plane graph is denoted by~$G'$ and satisfies~\ref{main-empty}.
We now augment~$G'$ to~$G^+$, such that~\ref{main-triangulated} is satisfied.
Let~$C$ be the set of all pairs of crossing edges in~$G'$.
Let~$C^*$ be a subset constructed from~$C$ by keeping only one (arbitrary)
edge for each pair of crossing edges. The graph~$G' \setminus C^*$
is clearly plane. To ensure~\ref{main-triangulated},
graph~$G^+ \setminus C^*$ must be plane and triangulated. Because~$G'$
satisfies~\ref{main-empty}, each removed edge spans two triangular faces
in~$G' \setminus C^*$. Thus, no face incident to a crossing edge has to be
triangulated. We internally triangulate the other faces by picking any vertex
on its boundary and connecting it to all other vertices (avoiding multiple
edges) of the boundary; see e.g. Figure~\ref{fi:maximal-planar-augment-3}.
Graph $G^+$ is then obtained by reinserting the edges in $C^*$ and
satisfies~\ref{main-triangulated}. To satisfy~\ref{main-3cycle}, notice
that~$G^+$ is IC-plane, hence, it has a face~$f$ whose boundary contains only
non-crossed edges. Also, $f$ is a $3$-cycle by construction. Thus, we can
re-embed~$G^+$ such that~$f$ is the outer face.
It remains to prove that the
described algorithm runs in~$O(n)$ time. Let~$m$ be the number of
edges of~$G$. Augmenting the graph such that for each pair of crossing edges
their endpoints induce a subgraph isomorphic to~$K_4$ can be done in~$O(m)$
time (the number of added edges is~$O(n)$). Similarly, rerouting some edges
to remove all B-configurations requires~$O(m)$ time. Also, triangulating
the graph $G' \setminus C^*$ can be done in time proportional to the number of
faces of $G' \setminus C^*$, which is~$O(n+m)$. Since IC-planar graphs are
sparse~\cite{zl-spgic-CEJM13}, the time complexity follows.
\end{proof}
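The first augmentation step of the proof, completing the endpoints of each pair of crossing edges to a $K_4$, can be sketched in a few lines. This is our own illustration (the edge-set representation and function name are assumptions, and the rerouting and embedding bookkeeping of the lemma are omitted):

```python
def add_kite_edges(edges, crossing_pairs):
    """Sketch of the first augmentation step: for every pair of crossing
    edges (a,d) x (b,c), add the planar edges of the 4-cycle a,b,d,c so
    that {a,b,c,d} induces a K4. Embedding updates are not modeled.
    """
    edge_set = {frozenset(e) for e in edges}
    for e1, e2 in crossing_pairs:
        a, d = e1
        b, c = e2
        # the four boundary edges of the kite around the crossing
        for u, v in ((a, b), (b, d), (d, c), (c, a)):
            edge_set.add(frozenset((u, v)))
    return edge_set
```

Applied to a single crossing pair $(1,4)\times(2,3)$, the sketch returns the six edges of a $K_4$ on $\{1,2,3,4\}$.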
\newcommand{\thStraightLine}{
There is an $O(n)$-time algorithm that takes an IC-plane
graph~$G$ with~$n$ vertices as input and constructs an IC-planar
straight-line grid drawing of~$G$ in~$O(n) \times O(n)$ area.
This area is worst-case optimal.
}
\begin{theorem}\label{th:straightline}
\thStraightLine
\end{theorem}
\begin{proof}
Augment~$G$ into a plane-maximal IC-plane graph~$G^+$ in~$O(n)$ time using
Lemma~\ref{lem:3-connected}. Graph $G^+$ is triconnected, as it contains a
triangulated plane subgraph. Draw~$G^+$ with the algorithm by Alam
{\em et al.}~\cite{abk-slgd3-GD13}, which takes as input a 1-plane
triconnected graph with~$n$ vertices and computes a 1-planar drawing~$\Gamma$ on the
$(2n-2)\times(2n-3)$ grid in~$O(n)$ time; this drawing is straight-line except, possibly, on
the outer face, which may contain a bent edge if that face has two crossing
edges. Since by Lemma~\ref{lem:3-connected} the outer face of~$G^+$ has no
crossed edges,~$\Gamma$ is straight-line and IC-planar. The edges added during the augmentation are finally removed from~$\Gamma$.
It remains to prove that the area bound of the algorithm is worst-case
optimal. To this aim, we show that for every $n \geq 2$ there exists an
IC-planar graph~$G$ with $\Theta(n)$ vertices, such that every IC-planar
straight-line grid drawing of~$G$ requires~$\Omega(n^{2})$ area.
Dolev {\em et al.}~\cite{dlt-pepg-ACR84}
described an infinite family of planar graphs, called nested triangle
graphs, such that every planar straight-line drawing of an $n$-vertex
graph~$G$ (for $n \geq 6$) of this family requires~$\Omega(n^2)$ area.
We augment~$G$ as follows. For every edge~$(u,v)$ of~$G$, we add a
vertex~$c_{uv}$, and two edges~$(u,c_{uv})$ and~$(c_{uv},v)$. Denote by~$G^+$
the resulting augmented graph, which clearly has~$\Theta(n)$ vertices.
We now show that in every possible IC-planar
straight-line drawing of~$G^+$ there are no two edges of~$G$ that cross each
other.
Observe that the subgraph induced by the vertices~$u,v,c_{uv}$ is a
3-cycle.
In any drawing, two vertex-disjoint cycles must cross an even number of times.
If two edges~$(u,v)$ and~$(w,z)$ of~$G$
cross each other in an IC-planar drawing of~$G^+$, then the cycles
$u,v,c_{uv}$ and $w,z,c_{wz}$ must cross at least twice.
Since these are 3-cycles, either some edge is crossed at least twice
or two adjacent edges are crossed. In either case, this violates
the IC-planar condition.
Hence, the
subgraph~$G$ must be drawn planar and this implies that any
straight-line IC-planar drawing of $G^+$ requires~$\Omega(n^2)$ area.
\end{proof}
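The augmentation used in the lower-bound argument, adding one apex vertex per edge of the nested triangle graph, can be sketched as follows (a minimal illustration with an assumed integer-vertex representation):

```python
def augment_with_apexes(n, edges):
    """Lower-bound gadget (sketch): for every edge (u,v) of G add a new
    vertex c_uv together with edges (u,c_uv) and (c_uv,v), so that every
    edge of G lies on a 3-cycle of the augmented graph G+.
    """
    new_edges = list(edges)
    next_vertex = n  # vertices of G are assumed to be 0..n-1
    for (u, v) in edges:
        c = next_vertex          # the apex c_uv
        next_vertex += 1
        new_edges += [(u, c), (c, v)]
    return next_vertex, new_edges
```

On a single triangle (three vertices, three edges) the sketch yields six vertices and nine edges, i.e., $\Theta(n)$ vertices overall, as used in the proof.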
\section{IC-planarity and \Rac graphs}\label{se:rac}
It is known that every $n$-vertex maximally dense \Rac graph (i.e., \Rac graph with $4n-10$ edges) is 1-planar, and that there
exist both 1-planar graphs that are not \Rac and \Rac graphs that are not
1-planar~\cite{el-rac1p-DAM13}.
Here, we further investigate the intersection between the classes of
1-planar and \Rac graphs, showing that all IC-planar graphs are \Rac. To
this aim, we describe a polynomial-time constructive algorithm. The computed
drawings may require exponential area, which is however worst-case optimal;
indeed, we exhibit IC-planar graphs that require exponential area in any
possible IC-planar straight-line \Rac drawing.
Our construction extends the linear-time algorithm by de
Fraysseix {\em et al.}~\cite{dpp-hdpgg-C90} that computes a planar straight-line grid drawing of a
maximal (i.e., triangulated) plane graph in quadratic area;
we call it the~\FPP algorithm. We need to recall the
idea behind~\FPP before describing our extension.
\smallskip\noindent{\bfseries Algorithm~\FPP.} Let~$G$ be a maximal plane graph
with~$n \geq 3$ vertices. The \FPP algorithm first computes a suitable linear
ordering of the vertices of~$G$, called a \emph{canonical ordering} of~$G$,
and then incrementally constructs a drawing of~$G$ using a technique called
\emph{shift method}. This method adds one vertex per time, following the
computed canonical ordering and shifting vertices already in the drawing when
needed. Namely, let $\sigma=(v_1,v_2,\dots,v_n)$ be a linear ordering of the
vertices of~$G$. For each integer~$k \in [3, n]$, denote by~$G_k$ the plane
subgraph of~$G$ induced by the~$k$ vertices~$v_1,v_2,\dots,v_k$ ($G_n=G$) and
by~$C_k$ the boundary of the outer face of~$G_k$, called the \emph{contour}
of~$G_k$. Ordering~$\sigma$ is a canonical ordering of~$G$ if the following
conditions hold for each integer~$k \in [3, n]$:
\begin{enumerate*}[label=(\roman{*})]
\item $G_k$ is biconnected and internally triangulated;
\item $(v_1,v_2)$ is an outer edge of~$G_k$; and
\item if~$k+1 \leq n$, vertex~$v_{k+1}$ is located in the outer face
of~$G_k$, and all neighbors of~$v_{k+1}$ in~$G_k$ appear on~$C_k$
consecutively.
\end{enumerate*}
We call \emph{lower neighbors} of~$v_k$ all neighbors~$v_j$ of~$v_k$ for
which~$j < k$. Following the canonical ordering~$\sigma$, the shift method
constructs a drawing of~$G$ one vertex
per time. The drawing~$\Gamma_k$ computed at step~$k$ is a drawing of~$G_k$.
Throughout the computation, the following invariants are maintained for
each~$\Gamma_k$, with~$3 \leq k \leq n$:
\begin{enumerate*}[label=({\bfseries I\arabic*})]
\item \label{inv1} $p_{v_1}=(0,0)$ and $p_{v_2}=(2k-4,0)$;
\item \label{inv2} $x(w_1)<x(w_2)<\dots<x(w_t)$, where
$w_1=v_1, w_2,\dots,w_t=v_2$ are the
vertices that appear along~$C_k$, going from~$v_1$
to~$v_2$.
\item \label{inv3} Each edge~$(w_i,w_{i+1})$ (for $i=1,2,\dots,t-1$) is drawn
with slope either~$+1$ or~$-1$.
\end{enumerate*}
More precisely,~$\Gamma_3$ is constructed by placing~$v_1$ at~$(0,0)$, $v_2$
at~$(2,0)$, and~$v_3$ at~$(1,1)$. The
addition of~$v_{k+1}$ to~$\Gamma_k$ is executed as follows. Let
$w_p,w_{p+1},\dots,w_ q$ be the lower neighbors of~$v_{k+1}$ ordered from left
to right. Denote by~$\mu(w_p,w_q)$ the intersection point between the line
with slope~$+1$ passing through~$w_p$ and the line with slope~$-1$ passing
through~$w_q$. Point~$\mu(w_p,w_q)$ has integer coordinates and thus it
is a valid placement for~$v_{k+1}$. With this placement,
however,~$(v_{k+1},w_p)$ and~$(v_{k+1},w_q)$ may overlap
with~$(w_p,w_{p+1})$ and~$(w_{q-1},w_q)$, respectively; see
Figure~\ref{fi:shift-1}. To avoid this, a
\emph{shift} operation is applied: $w_{p+1}$, $w_{p+2}$,$\dots$,$w_{q-1}$
are shifted to the right by~1 unit, and $w_q, w_{q+1}, \dots, w_t$
are shifted to the right by~2 units. Then~$v_{k+1}$ is placed at
point~$\mu(w_p,w_q)$ with no overlap; see Figure~\ref{fi:shift-2}.
We recall that, to keep planarity, when the algorithm
shifts a vertex~$w_i$ ($p+1 \leq i \leq t$) of~$C_k$, it also shifts some of the
inner vertices together with it; for more details on this point refer
to~\cite{cp-ltadp-IPL95,dpp-hdpgg-C90}. By Invariants~\ref{inv1} and~\ref{inv3},
the area of the final drawing is $(2n-4) \times (n-2)$.
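For concreteness, the placement point~$\mu(w_p,w_q)$ admits a simple closed form as the intersection of the slope~$+1$ line through~$w_p$ and the slope~$-1$ line through~$w_q$. The following sketch (our own illustration, not part of the original algorithm) computes it; under the contour invariants the result is integral, so floor division is exact:

```python
def mu(wp, wq):
    """Placement point of the next vertex in the shift method: the
    intersection of the slope +1 line through wp and the slope -1 line
    through wq. Integer under the algorithm's contour invariants.
    """
    (x1, y1), (x2, y2) = wp, wq
    # solving y = x - x1 + y1 and y = -x + x2 + y2 simultaneously:
    x = (x1 - y1 + x2 + y2) // 2
    y = (x2 + y2 - x1 + y1) // 2
    return x, y
```

For the base drawing~$\Gamma_3$, with $w_p = v_1 = (0,0)$ and $w_q = v_2 = (2,0)$, the formula gives $(1,1)$, matching the placement of~$v_3$.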
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[scale=0.65,page=1]{shift}\label{fi:shift-1}}\hfil
\subfigure[]{\includegraphics[scale=0.65,page=2]{shift}\label{fi:shift-2}}
\caption{Illustration of the shift algorithm at the addition step of
$v_{k+1}$. The shift operation changed the slopes of the edges drawn bold. (a) Placing $v_{k+1}$ at $\mu(w_p,w_q)$ would create overlaps. (b) After the shift operation, $v_{k+1}$ can be placed at
$\mu(w_p,w_q)$ without overlaps. }
\label{fi:shift}
\end{figure}
\medskip\noindent{\bfseries Our extension.} Let~$G$ be an IC-plane graph, and
assume that~$G^+$ is the plane-maximal IC-plane graph obtained from~$G$ by
applying the technique of Lemma~\ref{lem:3-connected}. Our drawing algorithm
computes an IC-planar drawing of~$G^+$ with right angle crossings, by
extending algorithm \FPP. It adds to the classical shift operation \emph{move}
and \emph{lift} operations to guarantee that one of the crossing edges of a kite
is vertical and the other is horizontal. We now give an idea of our technique,
which we call \EFPP. Details are given in the proof of Theorem~\ref{th:rac-drawings}.
Let~$\sigma$ be a canonical ordering constructed from the underlying maximal
plane graph of~$G^+$.
Vertices are incrementally added to the drawing, according to~$\sigma$,
following the same approach as for \FPP. However, suppose that $K =(a,b,c,d)$
is a kite of~$G^+$, and that~$a$ and~$d$ are the first and the last vertex
of~$\sigma$ among the vertices in~$K$, respectively. Once~$d$ has been added to
the drawing, the algorithm applies a suitable combination of move and lift
operations to the vertices of the kite to rearrange their positions so to
guarantee a right angle crossing. Note that, following the \FPP technique,~$a$
was placed at a $y$-coordinate smaller than the $y$-coordinate of~$d$. A
move operation is then used to shift~$d$ horizontally to the same
$x$-coordinate as~$a$ (i.e.,~$(a,d)$ becomes a vertical segment in the drawing);
a lift operation is used to vertically shift the lower between~$b$ and~$c$,
such that these two vertices get the same $y$-coordinates.
Both operations are applied so to preserve planarity and to maintain
Invariant~\ref{inv3} of \FPP; however, they do not maintain Invariant~\ref{inv1},
thus the area can increase more than in the \FPP algorithm and
may be exponential. The applications of move/lift operations to the vertices
of two distinct kites do not interfere with each other, as the kites do not share
vertices in an IC-plane graph. The main operations of the algorithm are depicted
in Figure~\ref{fi:lift}.
\newcommand{\thRacDrawings}{Let $G$ be an IC-plane graph with $n$ vertices. There exists an $O(n^3)$-time algorithm that constructs a straight-line IC-plane \Rac grid drawing of $G$.}
\begin{theorem}\label{th:rac-drawings}
\thRacDrawings
\end{theorem}
\begin{proof}
Let $G^+$ be the augmented graph constructed from~$G$ by using
Lemma~\ref{lem:3-connected}. Call~$G'$ the subgraph obtained from~$G^+$ by
removing one edge from each pair of crossing edges;~$G'$ is a maximal plane
graph (see condition~\ref{main-triangulated} of Lemma~\ref{lem:3-connected}).
We apply on~$G'$ the shelling procedure used by de Fraysseix {\em et al.}
to compute a canonical ordering~$\sigma$ of~$G'$ in~$O(n)$
time~\cite{dfpp-sssfe-STOC88}; it goes backwards, starting from a vertex on
the outer face of~$G'$ and successively removing a vertex per time from the
current contour. However, during this procedure, some edges of~$G'$ can be
replaced with some other edges of~$G^+$ that were previously excluded,
although~$G'$ remains maximal planar. Namely, whenever the shelling procedure
encounters the first vertex~$d$ of a kite~$K=(a,b,c,d)$, it marks~$d$
as~$\ktop(K)$, and considers the edge~$e$ of~$K$ that is missing in~$G'$.
If~$e$ is incident to~$d$ in~$K$, the procedure reinserts it and
removes from~$G'$ the other edge of~$K$ that crosses~$e$ in~$G^+$.
If~$e$ is not incident to~$d$, the procedure continues without varying~$G'$.
We say that $u\prec v$ if $\sigma(u)<\sigma(v)$.
We then compute a drawing of~$G^+$ by using the \EFPP algorithm.
Let vertex~$v=v_{k+1}$ be the next vertex to be placed according to~$\sigma$.
Let~$\U(v)$ be the set of lower neighbors of~$v$, and let~$\Left(v)$
and~$\Right(v)$ be the leftmost and the rightmost vertex in~$\U(v)$,
respectively. Also, denote by~$\A_l(v)$ the vertices to the top-left of~$v$,
and by~$\A_r(v)$ the vertices to the top-right of~$v$. If~$v$ is
not~$\ktop(K)$ for some kite~$K$, then~$v$ is placed by following the
rules of \FPP, that is, at the intersection of the~$\pm 1$ diagonals
through~$\Left(v)$ and~$\Right(v)$ after applying a suitable shift operation.
If~$v = \ktop(K)$ for some kite~$K$, the algorithm proceeds as follows.
Let~$K = (a,b,c,d)$ with~$v = d = \ktop(K)$. The remaining three
vertices of~$K$ are in~$G_k$ and are consecutive along the contour~$C_k$,
as they all belong to $\U(d)$ (by construction, $G'$ contains edge~$(a,d)$).
W.l.o.g., assume that they are encountered in the order~$\{b,a,c\}$ from left
to right. The following cases are now possible:
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[scale=0.5,page=1]{rac-undo} \label{fi:lift-a}}\hfil
\subfigure[]{\includegraphics[scale=0.5,page=2]{rac-undo}\label{fi:lift-b}}
\subfigure[]{\includegraphics[scale=0.5,page=3]{rac-undo} \label{fi:move-a}}\hfil
\subfigure[]{\includegraphics[scale=0.5,page=4]{rac-undo}\label{fi:move-b}}
\caption{(a-b) The lift operation: (a) Vertex $b$ is $r$ units below $c$. (b) Lifting $b$. (c-d) The move operation: (c) Vertex $d$ is $s$ units to the left of $b$. (d) Moving $d$. }
\label{fi:lift}
\end{figure}
\smallskip\noindent{\bfseries Case 1:} $a \prec b$ and~$a \prec c$. This
implies that~$a = \Right(b)$ and~$a = \Left(c)$. The edges~$(a,b)$ and~$(a,c)$
have slope~$-1$ and~$+1$, respectively, as they belong to~$C_k$. We now
aim at having~$b$ and~$c$ at the same $y$-coordinate, by applying a lift
operation. Suppose first that~$r = y(c) - y(b)>0$; see Figure~\ref{fi:lift-a}.
We apply the following steps:
\begin{enumerate*}[label=(\roman{*})]
\item Temporarily undo the placement of~$b$ and of all vertices
in~$\A_l(b)$.
\item Apply the shift operation to vertex~$\Right(b)=a$ by~$2r$ units to the
right, which implies that the intersection of the diagonals
through~$\Left(b)$ and~$\Right(b)$ is moved by~$r$ units to the right and
by~$r$ units above their former intersection point. Hence,~$b$ and~$c$ are
placed at the same $y$-coordinate; see also Figure~\ref{fi:lift-b}.
\item Reinsert the vertices of~$\A_l(b)$ and modify~$\sigma$ accordingly.
Namely, by definition, each vertex in~$\A_l(b)$ does not belong to~$\U(b)$
and it is not an inner vertex below~$b$; therefore, vertices in~$\A_l(b)$
can be safely removed. Hence,~$\sigma$ can be modified such that~$b\prec w$
for each $w\in A_l(b)$.
\end{enumerate*}
If~$r = y(c) - y(b)<0$, a symmetric operation is applied:
\begin{enumerate*}[label=(\roman{*})]
\item Undo the placement of~$c$ and of all vertices in~$\A_r(c)$.
\item Apply the shift operation to vertex~$\Right(c)$ by~$|2r|$ additional
units to the right.
\item Reinsert the vertices of~$\A_r(c)$.
\end{enumerate*}
Finally, we place~$d$ vertically above~$a$. To this aim, we first apply the
shift operation according to the insertion step of \FPP. After that, we may
need to apply a move operation; see Figure~\ref{fi:move-a}.
If~$s=x(d)-x(a)>0$, then we apply the shift operation to vertex~$\Right(d)=c$
by~$2s$ units to the right and then place~$d$ (see Figure~\ref{fi:move-b}).
If $s=x(d)-x(a)<0$, then we apply the shift operation to vertex~$\Left(d)=b$ by~$2|s|$
units to the left and then place~$d$ (clearly, the shift operation can be used
to operate in the left direction with a procedure that is symmetric to the one
that operates in the right direction).
Edges~$(a,d)$ and~$(b,c)$ are now vertical and horizontal, respectively. In
the next steps, their slopes do not change, as their endpoints are shifted
only horizontally (they do not belong to other kites); also,~$a$ is shifted
along with~$d$, as it belongs to~$\U(d)$.
\smallskip\noindent{\bfseries Case 2:} $b \prec a \prec c$ or $c \prec a \prec b$.
We describe how to handle the case that $b \prec a \prec c$, as the other case
can be handled by symmetric operations. This
implies that~$b = \Left(a)$ and~$a = \Left(c)$. The edges~$(b,a)$ and~$(a,c)$
both have slope~$+1$, as they belong to~$C_k$.
Let $\{z_1 \prec \ldots \prec z_r\}$ be the sequence
of the $r \geq 1$ neighbors of~$b$ inside~$\A_r(b)$, where~$a = z_r$
and~$b = \Left(z_i)$ for $1 \leq i \leq r$, as shown in Figure~\ref{fi:triang_sep}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{triang_sep}
\caption{Illustration for the proof of Theorem~\ref{th:rac-drawings}. The edge
with slope $\alpha_{\min}$ is thicker.}\label{fi:triang_sep}
\end{figure}
Consider the slopes~$\alpha_i$ of the
edges~$(z_i, z_{i+1})$ for~$1 \leq i < r$,
and let~$\alpha_{\min}$ be the negative slope with the
least absolute value among them; see the bold edge in Figure~\ref{fi:triang_sep}.
Let $s$ be the (negative) slope of the edge~$(b, \Right(b))$. We aim at
obtaining a drawing where~$|s|\leq |\alpha_{\min}|$. To this aim, we apply the
shift operation on~$\Right(b)$ by~$x$ units to the right, which stretches and
flattens the edge~$(b, \Right(b))$. If~$|\alpha_{\min}| = h/w$
and~$|s| = h'/w'$, then the value of~$x$ is the smallest even integer such
that $x \geq (h'w - hw')/h$. Now we have that
$|s|= h'/(w'+x)\le h'/(w'+h'w/h-w')=h/w=|\alpha_{\min}|$.
The fact that~$x$ is even preserves the even
length of the edges on the contour. This preliminary operation will be useful
in the following.
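The choice of~$x$ is a small integer computation. As an illustration (not part of the algorithm's original description; the function name and the encoding of slopes as fractions are ours), a sketch in Python:

```python
from fractions import Fraction

def flatten_shift(h, w, hp, wp):
    """Smallest even shift x such that stretching an edge of negative
    slope -hp/wp by x grid units to the right makes its absolute slope
    at most h/w = |alpha_min|.  All arguments are positive integers."""
    bound = Fraction(hp * w - h * wp, h)   # x must satisfy x >= (h'w - hw')/h
    x = max(0, -(-bound.numerator // bound.denominator))  # ceiling of the bound
    return x + 1 if x % 2 == 1 else x     # round up to an even integer
```

For instance, with $|\alpha_{\min}| = 1/3$ and $|s| = 2/1$, the required shift is $x = 6$, giving the new absolute slope $2/7 \le 1/3$.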
Next, let~$\Delta(b) = y(b) - y(\Right(b)) >0$
and let~$\Delta(c) = y(c) - y(b)>0$, i.e.,~$b$ lies~$\Delta(b)$ rows
above~$\Right(b)$, and~$c$ lies~$\Delta(c)$ rows above~$b$, where the
edges~$(b, a)$ and~$(a, c)$ have slope~$+1$. We apply the following procedure
to lift $b$ at the same $y$-coordinate of $c$.
\begin{enumerate*}[label=(\roman*)]
\item We undo the placement of all vertices in $\A_l(b)$.
\item If~$\Delta(c)$ is not a multiple of~$\Delta(b)$,
say $\Delta(c) + \delta = q \cdot \Delta(b)$ for some integer~$q$, then we
shift $\Right(c)$ by~$2\delta$ units to the right. This
implies that~$c$ moves by~$(\delta,\delta)$ above its former position.
\item We set the $y$-coordinate of vertex~$b$ equal to the $y$-coordinate
of~$c$. To that end, we stretch the edge~$(\Right(b),b)$ by the
factor~$q$. Let~$w'$ be the width and~$h'$ be the height of the
edge~$(\Right(b),b)$. The new edge has the same slope as before, and has
width~$q w'$ and height~$q h'$. This implies shifting all
vertices $\Right(b),z_1,\ldots,z_{r-1},a,c$ by~$(q-1) w'$ units to the
right. Vertex~$b$ may need a further adjustment by a single unit left
shift if~$b= \Left(d)$ and the intersection point of the $\pm 1$ diagonals
through~$\Left(d)$ and~$\Right(d)$ is not a grid point. Also, we apply
the shift operation on~$\Left(b)$ by~$(q-1) h'$ units to the left. This
particular lifting operation applied on vertex $b$ preserves planarity,
which could be violated only by edges incident to~$b$.
Namely, if vertex~$w$ is a neighbor of~$b$ in~$\U(b)$, then the
edge~$(w,b)$ is vertically stretched by~$(q-1)h'$ units.
This cannot introduce a crossing, since it amounts to a vertical shift of~$w$.
Clearly,~$b$ can see~$\Right(b)$, since the edge was stretched.
Consider the upper right neighbors~$z_1, \ldots, z_r$ with~$z_r=a$ of~$b$.
The edges~$(b, z_i)$ change direction from right upward to right
downward. Since the absolute value of the slope of the
edge~$(b,\Right(b))$ is bounded from above by $|\alpha_{\min}|$ and
since~$y(\Right(b))\le y(b)\le y(z_i)$ for $1\le i\le r$, the new
position of~$b$ is below the line spanned by each edge~$(z_i, z_{i+1})$
for~$1 \leq i <r$. Hence,~$b$ can see each such neighbor~$z_i$,
including~$a = z_r$. The lifting of~$b$ has affected all
vertices~$v \in \A_l(b)$ with~$y(v)<y(d)$.
\item We re-insert the vertices in $\A_l(b)$, by changing $\sigma$
accordingly, as already explained for {\bfseries Case 1}.
\end{enumerate*}
Finally, we place~$d = \ktop(K)$. First, we place~$d$ at the intersection
point of the $\pm 1$ diagonals through~$\Left(d)$ and~$\Right(d)$. Then, we
adjust~$d$ such that it lies vertically above~$a$. If the preliminary position
of~$d$ is~$t$ units to the left (right) of~$a$, then we apply the shift
operation on~$\Right(d)$ by $2t$ units to the right (on $\Left(d)$ by $2t$
units to the left).
\smallskip\noindent{\bfseries Case 3:} $b \prec a$ and $c \prec a$.
This implies that~$b = \Left(a)$ and~$c = \Right(a)$. The edges~$(b,a)$ and~$(c,a)$
have slope~$+1$ and~$-1$, respectively, as they belong to~$C_k$. We now
aim at having both~$b$ and~$c$ at the $y$-coordinate $y(a)+1$.
To this end, we use the procedure described in \textbf{Case 2}
to lift~$b$ upwards by $y(a)+1-y(b)$ rows by using a dummy vertex $c'$ at
position $(x(c),y(a)+1)$ as a reference point. Note that this lifting only
affects~$b$ and the vertices in~$\A_l(b)$ (all other vertices are moved
uniformly), so both~$a$ and~$c$ remain at their position. Hence, we now have
the situation $c \prec a \prec b$ and can again use the procedure of
\textbf{Case 2} to solve this case.
\smallskip To conclude the proof, we need to consider the first edge of the
construction, which is drawn horizontally. Since the lift operation requires
an edge that does not have slope~0, we may need to introduce dummy vertices and edges.
Namely, if there is a kite including the base edge~$(v_1,v_2)$, then we add
two dummy vertices~$v_1',v_2'$ below it that form a new base edge~$(v_1',v_2')$.
We add the additional edges $(v_1',v_1)$, $(v_1',v_2)$, $(v_1',v_n)$,
$(v_2',v_2)$ and $(v_2',v_n)$ to make the graph maximal planar. These dummy
vertices and edges will be removed once the last vertex $v_n$ is placed.
In terms of time complexity, $G^+$ can be computed in $O(n)$ time, by
Lemma~\ref{lem:3-connected}. Furthermore, each shift, move, and lift
operation can be implemented in $O(n)$ time, hence the placement of a single
vertex costs $O(n)$ time. However, in some cases (in particular, when we are
placing the top vertex of a kite), we may need to undo the placement of a set
of vertices and re-insert them afterwards. Since $\sigma$ is updated
accordingly whenever we undo and reinsert a set of vertices, the placement of
the same set of vertices is never undone again. Thus, the
reinsertion of a set of $O(n)$ vertices costs $O(n^2)$ time. Hence, the total
running time is $\sum^n_{i=1} O(n + n^2) = O(n^3)$.
\end{proof}
Figure~\ref{fi:dfpp} shows a running example of our algorithm.
Theorem~\ref{th:rac-drawings} and the fact that there exist $n$-vertex \Rac graphs
with~$4n-10$ edges~\cite{del-dgrac-2011} while an $n$-vertex IC-planar graph
has at most~$13n/4-6$ edges~\cite{zl-spgic-CEJM13} imply
that IC-planar graphs are a proper subfamily of \Rac graphs.
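The density gap behind this separation is elementary arithmetic: $4n-10 > 13n/4-6$ whenever $n \geq 6$. A quick sketch (the helper names are ours):

```python
def max_edges_rac(n):
    """Edge count achieved by the densest n-vertex RAC graphs."""
    return 4 * n - 10

def max_edges_ic_planar(n):
    """Upper bound on the edges of an n-vertex IC-planar graph."""
    return 13 * n / 4 - 6

# for every n >= 6, some RAC graph is denser than any IC-planar graph
assert all(max_edges_rac(n) > max_edges_ic_planar(n) for n in range(6, 1000))
```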
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[scale=0.55,page=1]{dfpp}\label{fi:dfpp-1}}\hfill
\subfigure[]{\includegraphics[scale=0.55,page=2]{dfpp}\label{fi:dfpp-2}}\hfill
\subfigure[]{\includegraphics[scale=0.55,page=3]{dfpp}\label{fi:dfpp-3}}
\subfigure[]{\includegraphics[scale=0.6,page=4]{dfpp}\label{fi:dfpp-4}}
\subfigure[]{\includegraphics[scale=0.75,page=5]{dfpp}\label{fi:dfpp-5}}
\caption{Example run of our algorithm on an IC-planar graph~$G$ with a
separating triangle. The crossing edges are drawn bold, the edges inside
the separating triangle are drawn gray.
(a) Input graph~$G$.
(b) Output of \FPP after vertex~7.
(c) Output of \FPP after vertex~8.
(d) Lifting~3 to the level of~7.
(e) Moving~8 directly above~6.}
\label{fi:dfpp}
\end{figure}
We now show that exponential area is required for \Rac drawings of IC-planar
graphs. Since the vertices are not drawn on the integer grid, the drawing area
is measured as the proportion between the longest and the shortest edge.
\newcommand{\thRacArea}{
For every integer $k \geq 1$, there exists an IC-plane graph $G_k$ with $n_k$ vertices such that every IC-planar straight-line \Rac drawing of $G_k$ takes area $\Omega(q^{n_k})$, for some constant $q>1$.
}
\begin{theorem}\label{th:rac-area}
\thRacArea
\end{theorem}
\begin{proof}
Refer to Figures~\ref{fi:exponential-1} and~\ref{fi:exponential-2} for the construction of $G_k$, for $k \geq 1$. Each graph $G_i$, for $1 \leq i \leq k$, has a 4-cycle as outer face, while all other faces are triangles (a triangle is composed of either three vertices or of two vertices and one crossing point). Two non-adjacent edges of the outer face are called the \emph{marked} edges of $G_i$.
\begin{figure}[t]
\centering
\subfigure[The first three levels of the construction.]{\includegraphics[page=1]{area-G-2}\label{fi:exponential-1}}
\subfigure[Going from level $i-1$ to level $i$.]{\includegraphics[page=2]{area-G-2}\label{fi:exponential-2}}
\caption{Illustration for the proof of Theorem~\ref{th:rac-area}. }\label{fi:exponential}
\end{figure}
Graph~$G_1$ has~8 vertices: 4 inner vertices forming a kite and~4 vertices
on the outer face. All inner faces are triangles. The two marked edges of the
outer face of~$G_1$ are any two non-adjacent edges of this face. Graph~$G_{i}$
is constructed from~$G_{i-1}$ as follows, see also Figure~\ref{fi:exponential-2}.
Let $\{A,B,C,D\}$ be the four vertices of the outer face of~$G_{i-1}$, and
let~$(A,D)$ and~$(B,C)$ be the two marked edges of~$G_{i-1}$ (bold in
Figure~\ref{fi:exponential-2}). We attach a kite $(A,D,a,d)$ on the marked
edge~$(A,D)$ and a kite $(B,C,b,c)$ on the marked edge~$(B,C)$. We connect the
two kites with the edges~$(a,B)$, $(a,b)$, $(c,D)$, and $(c,d)$. We
then add a cycle between four further vertices $\{\alpha,\beta,\gamma,\delta\}$
that form the outer face of $G_i$, and we triangulate the inner face between
the cycles $(\alpha,\beta,\gamma,\delta)$ and $(a,b,c,d)$. We set the
edges $(\alpha,\beta)$ and $(\gamma,\delta)$ as the marked edges of $G_i$.
An embedding-preserving straight-line \Rac drawing $\Gamma_k$
of $G_k$ ($k \geq 1$) with minimum area can be obtained by drawing each kite as
a quadrilateral, as shown in Figures~\ref{fi:exponential-1} and~\ref{fi:exponential-2}.
Since vertex~$a$ is connected to vertex~$B$, $a$ has to lie above the line
spanned by the edge $(A,B)$. Further, since vertex~$d$ is connected to vertex~$C$,
$d$ has to lie below the line spanned by the edge $(C,D)$. This implies that
the quadrilateral $(A,D,a,d)$ contains the square that has the edge~$(A,D)$
as its right side. Hence, the width increases by at least the length of edge~$(A,D)$.
Note that one might extend the edge~$(B,C)$ to make~$(A,D,a,d)$
smaller. However, since we also have the edges~$(a,b)$ and~$(c,d)$, this
procedure enlarges~$(B,C,b,c)$: as soon as $(a,d)$ becomes shorter
than~$(A,D)$, the edge~$(b,c)$ must become longer than~$(B,C)$
(to keep the visibility required for~$(a,b)$ and~$(c,d)$).
Then, this implies that the quadrilateral $(B,C,b,c)$ contains the square that
has the edge~$(B,C)$ as its left side, so the width increases by at least the
length of edge~$(B,C)$.
The minimum-area drawing forces the outer face of every subgraph
$G_i$ ($i \leq k$) of~$G_k$ to be a rectangle $R_i$. We denote by $l_i$ ($L_i$)
the length of the shortest (longest) side of $R_i$. The area of $R_i$ is
$A_i = l_i \times L_i$. Observe that, by construction, the marked edges of $G_i$
have length $L_i$. It follows that $l_{i} \ge L_{i-1} +4$ and $L_{i}\ge l_{i-1} + L_{i-1}+2$. Therefore,
$A_{i} \ge (L_{i-1} +4) \times (l_{i-1} + L_{i-1}+2) \geq l_{i-1}L_{i-1} + L_{i-1}^2 \geq A_{i-1} + A_{i-1}
= 2 A_{i-1}$, where the last step uses $L_{i-1} \geq l_{i-1}$; hence $A_k \geq 2 A_{k-1} \geq 2^{k-1} A_1 \geq 2^{k+1}$.
The number of vertices of $G_k$ is $n_k = 8k$, and hence $k = \frac{n_k}{8}$. Thus $A_k \geq 2^{\frac{n_k}{8}} \geq 1.09^{n_k}$, which proves the statement.
\end{proof}
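The recurrences from the proof can be illustrated numerically; the sketch below (ours, with an assumed base rectangle of side~2, so $A_1 = 4$) confirms that each area bound at least doubles:

```python
def area_lower_bounds(k, l1=2, L1=2):
    """Iterate l_i >= L_{i-1} + 4 and L_i >= l_{i-1} + L_{i-1} + 2,
    returning the area lower bounds A_i = l_i * L_i for i = 1..k."""
    l, L = l1, L1
    areas = [l * L]
    for _ in range(k - 1):
        l, L = L + 4, l + L + 2
        areas.append(l * L)
    return areas

# each A_i at least doubles, matching A_i >= 2 * A_{i-1} in the proof
bounds = area_lower_bounds(6)
assert all(b >= 2 * a for a, b in zip(bounds, bounds[1:]))
```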
\section{Recognizing IC-planar graphs}\label{se:recognition}
The \emph{IC-planarity testing} problem asks if a graph~$G$ admits an IC-planar embedding.
\medskip
{\noindent \bfseries Hardness of the problem.} The next theorem shows that IC-planarity testing is \NP-complete.
\newcommand{\thNpVar}{IC-planarity testing is \NP-complete.}
\begin{theorem}\label{th:np-hard}
\thNpVar
\end{theorem}
\begin{figure}[tb]
\centering
\subfigure[$G$]{\includegraphics[scale=1]{1planar-instance}\label{fig:1planar-instance}}\hfil
\subfigure[$G^*$]{\includegraphics[scale=1]{ICplanar-instance}\label{fig:ICplanar-instance}}
\caption{Illustration of the proof of Theorem~\ref{th:np-hard}.}
\label{fi:reduction-gadget}
\end{figure}
\begin{proof}
IC-planarity is in \NP, as one can guess an embedding and
check whether it is IC-planar~\cite{gj-1983}. For the hardness proof, the reduction is from
the \emph{1-planarity testing} problem, which asks whether a given graph is 1-planar or not.
The reduction uses a $3$-cycle gadget and exploits the fact that at most one edge of
a $3$-cycle is crossed in an IC-planar drawing. We transform an instance~$G$ of 1-planarity testing
into an instance~$G^*$ of IC-planarity testing, by
replacing each edge~$(u,v)$ of $G$ with a graph~$G_{uv}$
consisting of two $3$-cycles,~$T_{uv}$ and~$T_{vu}$, with vertices~$\{u, c_{uv}, a_{uv}\}$
and~$\{v, c_{vu}, a_{vu}\}$, respectively, plus edge~$(a_{uv},a_{vu})$, called the \emph{attaching edge} of~$u$
and~$v$; see Figure~\ref{fi:reduction-gadget}.
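The edge-replacement step of the reduction can be sketched as follows (a minimal Python sketch over an edge-list representation; the vertex-naming scheme is ours):

```python
def reduce_to_ic_planarity(vertices, edges):
    """Replace every edge (u, v) by the gadget G_uv: two triangles
    T_uv = {u, c_uv, a_uv} and T_vu = {v, c_vu, a_vu} plus the
    attaching edge (a_uv, a_vu)."""
    new_vertices = list(vertices)
    new_edges = []
    for (u, v) in edges:
        a_uv, c_uv = f"a_{u}{v}", f"c_{u}{v}"
        a_vu, c_vu = f"a_{v}{u}", f"c_{v}{u}"
        new_vertices += [a_uv, c_uv, a_vu, c_vu]
        new_edges += [(u, c_uv), (u, a_uv), (c_uv, a_uv),   # triangle T_uv
                      (v, c_vu), (v, a_vu), (c_vu, a_vu),   # triangle T_vu
                      (a_uv, a_vu)]                          # attaching edge
    return new_vertices, new_edges
```

For a single edge $(u,v)$ this produces the two triangles and the attaching edge, i.e., four new vertices and seven gadget edges.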
Let~$\Gamma$ be a 1-planar drawing of~$G$. An IC-planar drawing~$\Gamma^*$
of~$G^*$ can be easily constructed by replacing each curve representing an
edge~$(u,v)$ in~$\Gamma$ with a drawing of~$G_{uv}$ where~$T_{uv}$
and~$T_{vu}$ are drawn planar and sufficiently small, such that the possible
crossing that occurs on the edge~$(u,v)$ in~$\Gamma$ occurs on the attaching
edge~$(a_{uv},a_{vu})$ in~$\Gamma^*$. Hence, since all the attaching edges are
independent,~$\Gamma^*$ is IC-planar.
Let~$\Gamma^*$ be an IC-planar drawing of~$G^*$. We show that it is possible
to transform the drawing in such a way that all crossings occur only between
attaching edges. Once this condition is satisfied, in order to construct a
1-planar drawing~$\Gamma$ of~$G$, it suffices to remove, for each
edge~$(u,v)$, the vertices~$c_{uv}$ and~$c_{vu}$, and to replace~$a_{uv}$
and~$a_{vu}$ with a bend point. Namely, as already observed, no more than one
edge can be crossed for every gadget~$T_{uv}$ of~$G^*$. Suppose now that the
edge~$(u,a_{uv})$ of~$T_{uv}$ is crossed. Since the other two edges
of~$T_{uv}$ are not crossed, we can reroute~$(u,a_{uv})$ such that it follows
the curves that represent~$(u,c_{uv})$ and~$(c_{uv},a_{uv})$; see
Figure~\ref{fi:reduction-reroute-a}.
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[scale=0.8]{reduction-reroute-a}\label{fi:reduction-reroute-a}}
\subfigure[]{\includegraphics[scale=0.8]{reduction-reroute-b}\label{fi:reduction-reroute-b}}
\caption{Illustration of the proof of Theorem~\ref{th:np-hard}.
(a) Rerouting the crossed edge $(u,a_{uv})$ via~$c_{uv}$ to be planar.
(b) Rerouting the crossing edges~$(u,v)$ and~$(u,w)$ to be planar.}
\label{fi:reduction-reroute}
\end{figure}
In order to complete the proof, we need to take care of the following
particular case. Two attaching edges~$(a_{uv},a_{vu})$ and~$(a_{uw},a_{wu})$
that cross and that are connected to two gadgets~$T_{uv}$ and~$T_{uw}$ with a
common vertex~$u$ represent a valid configuration in~$\Gamma^*$, while they
give rise to a crossing between two adjacent edges in~$\Gamma$, which is not
allowed since~$\Gamma$ must be a simple drawing. However, this case can be
easily solved, since the two edges do not cross any other edges, by rerouting
them in~$\Gamma$ as shown in Figure~\ref{fi:reduction-reroute-b}.
\end{proof}
\smallskip Note that this construction does not work for IC-planarity
testing with a given rotation system since the rerouting step changes
the rotation system. However, we now prove \NP-hardness of IC-planarity testing for graphs
with a given rotation system. We rely on the membrane technique introduced by Auer {\em et
al.}~\cite{JGAA-347} to prove the \NP-hardness of 1-planarity testing for graphs
with a given rotation system. In particular, we design gadgets that make it possible to use the membrane technique in the case of IC-planar graphs.
First, we replace the U-graphs~\cite{JGAA-347} by M-graphs, called \emph{mesh graphs}. These graphs have a unique embedding with a fixed rotation system. Namely, an M-graph is a mesh, where cells are filled with two crossing edges, following a checkerboard pattern to ensure independent crossings; see Figure~\ref{fi:m-graph}. To see that with a given rotation system, an M-graph has a unique embedding, observe that each subgraph isomorphic to $K_4$ can be embedded planarly or as a kite, and this is determined by the rotation system~\cite{Kyncl20091676}. The rotation system defined by the drawing in Figure~\ref{fi:m-graph} implies that each subgraph isomorphic to $K_4$ is embedded as a kite, and therefore the embedding of an M-graph is unique.
Let~$M$ be an M-graph with a given fixed embedding. At its bottom line,~$M$ has sufficiently many \emph{free} vertices that are not incident with a crossing edge in~$M$. These vertices are consecutively ordered, say from left to right. The edges on the bottom line are not crossed in any IC-planar embedding of~$M$, so they are crossing-free in the given embedding. Finally,~$M$ cannot be crossed by any path from a free vertex. In what follows, we attach further gadgets to~$M$ by connecting these gadgets to consecutive free vertices. If there are several gadgets, then they are separated and placed next to each other.
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[scale=0.3]{m-graph}}\hfil
\subfigure[]{\includegraphics[scale=0.4]{m-graph-abbrv}}
\caption{The structure of an M-graph (a) and its abbreviation (b). }
\label{fi:m-graph}
\end{figure}
\paragraph{General Construction}
Consider an instance~$\alpha$ of planar-3SAT with its corresponding plane
graph~$G$ and its dual~$G^*$. Recall that the vertices of~$G$ represent
variables~$x$ and clauses~$c$; also, there is an edge~$(x,c)$ if~$x$ or its negation occurs as a
literal in~$c$; see Figure~\ref{fi:planar3sat}. We transform $G^*$ into an
\emph{M-supergraph} $G^*_{\alpha}$ (see
Figure~\ref{fi:m-supergraph}) as follows.
Each vertex of~$G^*$, corresponding to a face of the embedded graph~$G$, is
replaced by a sufficiently large M-graph. Further, each edge of~$G^*$ is
replaced by a \emph{barrier} of~$l$ parallel edges between~$l$
consecutive free vertices on the boundaries of the two M-graphs of the
adjacent faces. These edges will be
crossed by paths that are called \emph{ropes}. The size of~$l$ will be
determined later.
For each variable~$x$, we construct a \emph{V-gadget} $\gamma(x)$, and
similarly we build a \emph{C-gadget} for each clause. These gadgets are described
below.
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics{planar3sat}\label{fi:planar3sat}}\hfil
\subfigure[]{\includegraphics[scale=0.4]{m-supergraph}\label{fi:m-supergraph}}
\caption{(a) The planar graph~$G$ (solid) and its dual~$G^*$ (dotted) corresponding to the planar-3SAT formula
$(a\vee b\vee\neg c)\wedge(a\vee \neg b\vee c)$. (b) The corresponding M-supergraph~$G^*_\alpha$ with the clause gadgets
(vertical) and the variable gadgets (horizontal).}
\end{figure}
Each vertex~$u$ of $G$ lies on the boundary of a face~$f$ of~$G$.
We attach the gadget~$\gamma(u)$ of~$u$ to the M-graph at $f$
so that~$\gamma(u)$ lies in~$f$.
It does not matter which face $f$ incident to $u$ is chosen. Similarly,
each edge~$e$ of~$G$ between a variable and a clause is replaced by
a rope of length~$2l + 3$. Since~$e$ is crossed by its dual edge, the rope is
crossed by a barrier. A rope acts as a communication line that
``passes'' a crossing at a V-gadget across a barrier to a
C-gadget. We denote by a \emph{membrane} (similarly as
in~\cite{JGAA-347}) a path between free vertices on the boundary of a single
M-graph, or between particular vertices of a variable gadget. We call a
vertex \IN if it is placed inside the region bounded by a membrane and the
boundary of the M-graph in an IC-planar drawing, and \OUT otherwise.
\IN and \OUT are exactly determined by the edges that cross the membrane.
Note that the framework is basically a simultaneous embedding of~$G$ and~$G^*$
by means of our gadgets, M-graphs, barriers and ropes. The subgraph without
V- and C-gadgets is 3-connected, since the M-graphs are 3-connected and
the barriers have size~$l$ for~$l \geq 3$, and it has a unique planar embedding
if one edge from each pair of crossing edges in each M-graph is removed.
\paragraph{Construction of the C-gadgets}
The C-gadget~$c = (l_1, l_2, l_3)$ with three literals~$l_1$,~$l_2$ and~$l_3$
is attached to eight consecutive free boundary vertices of an
M-graph~$M$, say $v_1, \ldots, v_8$. For each literal~$l_i$, there are three
vertices~$u_i$,~$a_i$ and~$b_i$, and four edges~$(u_i, a_i)$, $(u_i, b_i)$,
$(a_i, v_{2i})$ and $(b_i,v_{2i+1})$, where~$u_i$ is the initial vertex of the
rope towards the corresponding variable gadget. There is a membrane of nine
edges from~$v_1$ to~$v_8$, see Figure~\ref{fi:clause-gadget}.
By construction, at most two vertices among~$u_1$,~$u_2$ and~$u_3$ can be moved
outside the membrane, and at least one initial vertex of a rope (and maybe
all) must be \IN. \IN shall correspond to the value
\true~of the literal and thus of the clause.
\paragraph{Construction of the V-gadgets}
Let~$x$ be a variable and let~$v$ be the vertex corresponding
to~$x$ in~$G$. Suppose that the literal~$x$ occurs in~$k$ clauses
for some~$k \geq 1$, which are ordered by the embedding of~$G$.
Denote this sequence by~$x_1, \ldots, x_k$, where each~$x_i$
corresponds to~$x$ or~$\neg x$. The V-gadget of~$x$ is
$\gamma(x) = \gamma(x_0), \gamma(x_1), \ldots, \gamma(x_k),
\gamma(x_{k+1})$. This gadget is connected to~$7(k+2)$ free consecutive vertices
on the border of the M-graph~$M$ to which it is attached; see
Figure~\ref{fi:variable-gadget} for an illustration. The gadgets~$\gamma(x_0)$
and~$\gamma(x_{k+1})$ are called the
(left and right) \emph{terminal gadgets} and each~$\gamma(x_i)$ ($1
\leq i \leq k$) is called a
\emph{literal gadget}. Gadget $\gamma(x_i)$ for $0 \leq i \leq k+1$ is similar
to a clause gadget and is connected to seven consecutive free vertices~$v^i_1, \ldots, v^i_7$ on the
boundary of~$M$.
There is a \emph{local membrane} of seven edges from $v^i_1$ to $v^i_7$.
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[scale=0.33]{clause-gadget}\label{fi:clause-gadget}}\hfil
\subfigure[]{\includegraphics[scale=0.31]{variable-gadget}\label{fi:variable-gadget}}
\caption{(a) The clause gadget. (b) The variable gadget.}
\label{fi:gadgets}
\end{figure}
The left terminal gadget~$\gamma(x_0)$ has two primary vertices~$x_0^+$ and~$x_0^-$.
The primary vertex~$x_0^+$ is connected to~$v^0_2$,~$v^0_3$ and~$v^0_4$ by
paths of length two.
The other
primary vertex~$x_0^-$ is connected to~$v^0_5$ and~$v^0_6$, also by paths of
length two.
Analogously, the right terminal gadget has two primary
vertices~$x_{k+1}^+$ and~$x_{k+1}^-$, with the same requirements.
The gadget~$\gamma(x)$ has an \emph{outer membrane} of length~$2k+1$ from~$x_0^+$
to~$x_{k+1}^-$.
Each literal gadget~$\gamma(x_i)$ has two primary vertices~$x_i^+$
and~$x_i^-$. If~$x_i$ is positive, then $x_i^+$ is attached to three
free vertices~$v^i_2$,~$v^i_3$ and~$v^i_4$ of~$M$, and~$x_i^-$
is attached to two free vertices~$v^i_5$ and~$v^i_6$ by two paths of length two,
respectively. The rope to the literal begins at $x_i^+$. Otherwise, if~$x_i$ is
negated, then the gadget is reflected, such that~$x_i^+$ has two, and~$x_i^-$
has three paths of length two to the M-graph. The rope to the literal begins
at~$x_i^-$.
The rope is a path of~$2l+3$ edges from vertex~$x_i^{\pm}$
of the V-gadget to the vertex~$u_j$ of the clause gadget representing
the literal of $x$.
In addition, there is a path of length two that connects $x_i^-$ to
$x_{i+1}^-$ for all $0 \leq i \leq k$.
The M-graph must have sufficiently many free vertices for
the edges from the gadgets and barriers. The smallest bound can easily be
computed from the embedding of~$G$ and the attached gadgets; see
Figure~\ref{fi:m-supergraph}.
The rotation system of the gadgets is retrieved from the drawing and
the ordering of the vertices on the border of M-graphs.
\paragraph{Correctness}
We will now prove several lemmas on the structure of our construction. With
these lemmas, we will show that an IC-planar drawing of the resulting
graph~$G_\alpha$ immediately yields a valid solution to the underlying
planar-3SAT problem. First, we show that the M-graph is not crossed.
\begin{lemma}
The boundary edges of an M-graph (with a fixed rotation system) are
never crossed in an IC-planar drawing of $G_{\alpha}$.
\end{lemma}
\begin{proof}
This lemma follows directly from the construction. Each~$K_4$ must be embedded
as a kite, and further edge crossings violate IC-planarity.
\end{proof}
Consequently, the following corollary holds.
\begin{corollary}
A path from a free boundary vertex cannot cross any M-graph.
\end{corollary}
Now, we show that every clause, terminal and literal gadget
has at least one primary vertex that is not \OUT.
\begin{lemma} \label{lem:inoutC-gadget}
In every IC-planar drawing of~$G_{\alpha}$, at most two of the
primary vertices~$u_1$,~$u_2$ and~$u_3$ of a clause gadget can be \OUT,
and at most one of the primary vertices~$x_i^+, x_i^-$ of a terminal or a
literal gadget can be \OUT of the local membrane.
\end{lemma}
\begin{proof}
Assume that~$u_1$,~$u_2$ and~$u_3$ are all \OUT. Then, the membrane
must cross five edges. Since the membrane has length seven, either
one membrane edge is crossed twice or two adjacent membrane edges
are crossed, which contradicts the IC-planarity of the drawing.
The proof for terminal and literal gadgets works
analogously.
\end{proof}
Next, we show that the outer membrane crosses each rope.
\begin{lemma} \label{lem:ropeXmemberane}
In every IC-planar drawing of $G_{\alpha}$, each rope
connected to a V-gadget is crossed by
the outer membrane of the V-gadget.
\end{lemma}
\begin{proof}
Suppose some rope is not crossed by the outer membrane.
Either the outer membrane crosses at least one barrier or it crosses
the three paths of length two that connect the V-gadget endpoint of
the rope to its M-graph. It cannot do the first if the size of the
barrier is chosen to be
\begin{displaymath}
l \geq \max \{k\mid\text{a variable }x\text{ occurs in }k\text{
clauses of }\alpha\} +2 .
\end{displaymath}
It cannot do the second since the outer membrane, of length $2k+1$,
would be crossed at least $k+2$ times.
\end{proof}
The fact that a rope propagates a truth value is due to the fact
that its length is tight, as the following lemma shows.
\begin{lemma}
In every IC-planar drawing of $G_{\alpha}$, respecting the rotation
system, the endpoint of a rope at a C-gadget is \OUT if the endpoint
at the vertex is \IN (its local membrane).
\end{lemma}
\begin{proof}
M-graphs, by construction, cannot be crossed by a rope. Thus, the rope
must cross a barrier of~$l$ edges.
In addition, a rope is crossed by the
outer membrane of the variable gadget. If the endpoint at the
vertex is \IN (its local membrane), then the rope is crossed by
$l+2$ edges. Hence, it cannot cross another membrane, since its length
is $2l+3$.
\end{proof}
The consistency of the truth assignment of the variable is granted by the
following lemma.
\begin{lemma}
In every IC-planar drawing of~$G_{\alpha}$, and every variable~$x$,
either $x_i^+$ is \OUT (and $x_i^-$ is \IN) for all $0 \leq i \leq
k+1$
or $x_i^+$ is \IN (and $x_i^-$ is \OUT) for all $0 \leq i \leq
k+1$.
\end{lemma}
\begin{proof}
If~$x_0^+$ is \OUT, then by
Lemma~\ref{lem:inoutC-gadget}, $x_0^-$ is \IN, and the local membrane
must cross an edge of the path of length two from~$x_0^+$ to~$x_1^-$.
This implies that the local membrane of the first literal gadget
cannot cross this path, and therefore must cross the paths from~$x_1^+$ to the
M-graph. It follows by induction that all~$x_i^+$
are \OUT and all~$x_i^-$ are \IN; see
Figure~\ref{fi:variable-gadget}.
If $x_0^+$ is \IN, then the outer membrane ensures that $x_{k+1}^-$
is \OUT.
We then proceed from right to left. Now, all~$x_i^-$ are \OUT and
all~$x_i^+$ are \IN.
\end{proof}
With these lemmas, we can finally prove the following theorem.
\newcommand{\thNpRot}{IC-planarity testing with given rotation system is \NP-complete.}
\begin{theorem}\label{th:np-rot}
\thNpRot
\end{theorem}
\begin{proof}
We have already stated in the proof of Theorem~\ref{th:np-hard} that
IC-planarity is in \NP. We reduce from planar-3SAT and show
that an expression~$\alpha$ is satisfiable if and only if the
constructed graph~$G^*_{\alpha}$ has an IC-planar drawing.
If~$\alpha$ is satisfiable, then we draw the V- and C-gadgets according
to the assignment, such that the initial vertex of each rope from the
gadget of a variable~$x$ is \IN at the C-gadget if the literal is
assigned the value \true. The resulting drawing is IC-planar by construction.
If~$G^*_{\alpha}$ has an IC-planar drawing, then we obtain the truth
assignment of~$\alpha$ from the drawing. Thus, IC-planarity with a given
rotation system is \NP-complete.
\end{proof}
Note that the construction for the proof of Theorem~\ref{th:np-rot} also holds
in the variable embedding setting, since the used graphs have an almost fixed
IC-planar embedding. From this, we can obtain an alternative \NP-completeness
proof of IC-planarity testing.
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[scale=0.65,page=1]{dual}\label{fig:dual-1}}\hfil
\subfigure[]{\includegraphics[scale=0.65,page=2]{dual}\label{fig:dual-2}}\hfil
\subfigure[]{\includegraphics[scale=0.65]{routing-edges}\label{fig:routing-edges}}
\caption{(a) A triconnected graph~$T$ (solid) and its dual~$T^*$ (dotted),
(b) The extended graph~$T^*\cup\{u^*,v^*\}$ and the three length-3 paths
between~$u^*$ and~$v^*$ (bold). (c) The ordered routing
edges~$e_1,\ldots,e_k$ lie inside the quadrangle~$(u,l_{uv},v,r_{uv})$.}
\label{fig:dual}
\end{figure}
\subsection{Polynomial-time test for a triangulated plane graph plus a matching}
On the positive side, we now describe an $O(n^3)$-time algorithm to test whether a graph~$G=(V,E_T \cup E_M)$
that consists of a triangulated plane graph~$T=(V,E_T)$ and a
matching~$M=(V_M,E_M)$ with~$V_M\subseteq V, E_M\cap E_T=\emptyset$
admits an IC-planar drawing that preserves the embedding of~$T$. In the positive case the algorithm also computes an IC-planar drawing.
An outline of the algorithm is as follows.
\begin{enumerate*}[label=(\arabic{*})]
\item \label{alg1} Check for every matching edge if there is a
way to draw it such that it crosses only one edge of~$T$.
\item \label{alg2} Split~$T$ into subgraphs that form a
hierarchical tree structure.
\item \label{alg3} Traverse the 4-block tree bottom-up and solve
a 2SAT formula for each tree node.
\end{enumerate*}
In order to check whether there is a valid placement for each matching
edge~$(u,v)\in M$, we have to find two adjacent faces, one of which is incident
to~$u$, while the other one is incident to~$v$. To this end, we consider the
dual~$T^*$ of~$T$ that contains a vertex for each face in~$T$ that is not
incident to a vertex~$w\in V_M\setminus\{u,v\}$, and an edge for
each edge in~$T$ that separates two faces. Further, we add two additional
vertices~$u^*$ and~$v^*$ to~$T^*$ that are connected to all faces that are
incident to~$u$ and~$v$, respectively. In the resulting graph~$T^*\cup\{u^*,v^*\}$,
we look for all paths of length~3 from~$u^*$ to~$v^*$. These paths are
equivalent to routing~$(u,v)$ through two faces that are separated by a
single edge. Note that no path of length~1 or~2 can exist, since $(i)$ by
construction~$u^*$ and~$v^*$ are not connected by an edge and $(ii)$ if there
was a path of length~2 between~$u^*$ and~$v^*$, then~$u$ and~$v$ would lie on a
common face in the triangulated graph~$T$; thus, the edge~$(u,v)$ would exist
both in~$E_T$ and in~$E_M$, which is not possible since
$E_T \cap E_M = \emptyset$. See Figure~\ref{fig:dual} for an illustration.
If there is an edge that has no valid placement, then~$G$ is not
IC-planar and the algorithm stops. Otherwise, we save each
path that we found as a possible routing for the corresponding edge in~$M$.
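This face-pair search can be sketched in Python as follows. This is a non-authoritative illustration: the adjacency-dictionary representation, the function name, and the vertex labels are our own assumptions rather than part of the original description.

```python
def length3_paths(adj, u_star, v_star):
    """Enumerate all length-3 paths u* -> f1 -> f2 -> v* in the extended
    dual graph T* plus {u*, v*}.  Each such path corresponds to routing
    the matching edge (u, v) through the two adjacent faces f1 and f2."""
    paths = []
    for f1 in adj[u_star]:          # faces incident to u
        if f1 == v_star:
            continue
        for f2 in adj[f1]:          # faces sharing an edge with f1
            if f2 in (u_star, v_star):
                continue
            if v_star in adj[f2]:   # f2 is incident to v: valid routing
                paths.append((u_star, f1, f2, v_star))
    return paths
```

Each returned tuple (u*, f1, f2, v*) identifies the dual edge between the faces f1 and f2, i.e., one candidate routing edge.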
Now, we make some observations on the structure of the possible routings of
an edge~$(u,v)\in M$ that we can use to get a hierarchical tree structure of
the graph~$T$. Every routing is uniquely represented by an edge
that separates a face incident to~$u$ and a face incident to~$v$ and that might
be crossed by~$(u,v)$. We call these
edges \emph{routing edges}. Let there be~$k$ routing edges for the pair~$(u,v)$.
Each of these edges forms a triangular face with~$u$. From the embedding, we
can enumerate the edges by the counterclockwise order of their corresponding
faces at~$u$. This gives an ordering~$e_1,\ldots,e_k$ of the routing edges.
Let~$e_1=(l_{uv},l'_{uv})$ and~$e_k=(r'_{uv},r_{uv})$ such that the
edge~$(u,l_{uv})$ comes before the edge~$(u,l'_{uv})$, and the
edge~$(u,r'_{uv})$ comes before~$(u,r_{uv})$ in the counterclockwise order
at~$u$. Then, all edges~$e_1,\ldots,e_k$ lie within the
\emph{routing quadrilateral}~$(u,l_{uv},v,r_{uv})$; see
Figure~\ref{fig:routing-edges}. Note that there may be more complicated structures
between the edges, but they do not interfere with the ordering.
Denote by~$Q_{uv}=(u,l_{uv},v,r_{uv})$ the routing quadrilateral
of the matching edge~$(u,v)\in M$. We define the
\emph{interior}~$\mathcal I_{uv}=(\mathcal V_{uv},\mathcal E_{uv})$
as the maximal subgraph of~$T$ such that, for every
vertex~$w\in \mathcal V_{uv}$, each path from~$w$ to a vertex on the outer face
of~$T$ contains~$u$, $l_{uv}$, $v$, or~$r_{uv}$.
Consequently, the vertices of~$Q_{uv}$ lie in~$\mathcal V_{uv}$. We will now show that two interiors
cannot overlap.
\newcommand{\lemInteriors}[1]{
For each pair of interiors~$\mathcal I_{uv},\mathcal I_{ab}$, exactly one of
the following conditions holds:
\begin{enumerate*}[label=(\alph*)]
\item \label{#1-a1} $\mathcal I_{uv}\cap\mathcal I_{ab}=\emptyset$
\item \label{#1-a2} $\mathcal I_{uv}\subset\mathcal I_{ab}$
\item \label{#1-a3} $\mathcal I_{ab}\subset\mathcal I_{uv}$
\item \label{#1-a4} $\mathcal I_{uv}\cap\mathcal I_{ab}=Q_{uv}\cap Q_{ab}$.
\end{enumerate*}
}
\begin{lemma}\label{lem:interiors}
\lemInteriors{main}
\end{lemma}
\begin{proof}
Assume that neither of the conditions holds. Recall that~$Q_{uv}$ and~$Q_{ab}$
are the boundaries of the interiors. Note that $\mathcal I_{uv}\cap\mathcal I_{ab}=\emptyset$
corresponds to disjointness,
$\mathcal I_{uv}\subset\mathcal I_{ab}$ and
$\mathcal I_{ab}\subset\mathcal I_{uv}$ correspond to inclusion, and
$\mathcal I_{uv}\cap\mathcal I_{ab}=Q_{uv}\cap Q_{ab}$
corresponds to the two interiors~$\mathcal I_{uv}$ and~$\mathcal I_{ab}$
touching in their boundaries.
Thus, if the conditions do not hold, the interiors must properly intersect,
that is, without loss of generality, there is
a vertex~$c\in Q_{uv}$ that lies in~$\mathcal I_{ab}\setminus Q_{ab}$,
and a vertex~$d\in Q_{uv}$ that does not lie in~$\mathcal I_{ab}$. Hence,
the other two vertices of~$Q_{uv}$ lie in~$Q_{ab}$. Clearly,~$c$ and~$d$
are opposite vertices in~$Q_{uv}$. By definition of IC-planar graphs, it holds
that~$\{a,b\}\cap\{u,v\}=\emptyset$.
First, assume that~$c=l_{uv}$. Then,~$u$ and~$v$ must lie in~$Q_{ab}$. More
specifically, by definition of IC-planar graphs~$\{u,v\}=\{l_{ab},r_{ab}\}$.
Without loss of generality, assume that~$u=r_{ab}$ and~$v=l_{ab}$.
Since the edges~$(u,c)$ and~$(v,c)$ have to lie in~$\mathcal I_{ab}$, this
leads to the situation depicted in Figure~\ref{fi:interiors-1}. However,
this implies that there are only two routing edges for~$(a,b)$, one of them
incident to~$u$ and the other incident to~$v$. Thus, the routing edges
are not valid. The case that~$c=r_{uv}$ works analogously.
Second, assume that~$c=u$. Then,~$l_{uv}$ and~$r_{uv}$ must lie in~$Q_{ab}$.
If~$l_{uv}$ and~$r_{uv}$ are adjacent on~$Q_{ab}$, say~$l_{uv}=b$
and~$r_{uv}=r_{ab}$, then there is only a single routing edge for~$(u,v)$
that is incident to~$b$ and thus not valid; see Figure~\ref{fi:interiors-2}.
Otherwise, there are two cases. If~$\{l_{uv},r_{uv}\}=\{a,b\}$, say~$r_{uv}=a$
and~$l_{uv}=b$, then there are only two routing edges for~$(u,v)$ with one
of them incident to~$a$, and the other one incident to~$b$; see
Figure~\ref{fi:interiors-3}. If~$\{l_{uv},r_{uv}\}=\{l_{ab},r_{ab}\}$,
say~$l_{uv}=l_{ab}$ and~$r_{uv}=r_{ab}$, then both routing edges of~$(u,v)$
are incident to~$b$; see Figure~\ref{fi:interiors-4}. The case that~$c=v$
works analogously.
This proves that, if there is a proper intersection between two routing
quadrilaterals, then at least one of the corresponding matching edges has no
valid routing edge. Thus, one of the conditions must hold.
\end{proof}
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[scale=0.9,page=1]{interiors}\label{fi:interiors-1}}\hfil
\subfigure[]{\includegraphics[scale=0.9,page=2]{interiors}\label{fi:interiors-2}}\\
\subfigure[]{\includegraphics[scale=0.9,page=3]{interiors}\label{fi:interiors-3}}\hfil
\subfigure[]{\includegraphics[scale=0.9,page=4]{interiors}\label{fi:interiors-4}}
\caption{Illustration of the proof of Lemma~\ref{lem:interiors}.
The routing quadrilateral~$Q_{ab}$ is drawn bold, and~$Q_{uv}$ is drawn dark gray and thicker.
(a) $l_{uv}$ lies in~$\mathcal I_{ab}\setminus Q_{ab}$.
(b) $l_{uv}$ and~$r_{uv}$ are adjacent in~$Q_{ab}$.
(c) $\{l_{uv},r_{uv}\}=\{a,b\}$.
(d) $\{l_{uv},r_{uv}\}=\{l_{ab},r_{ab}\}$.}
\label{fi:interiors}
\end{figure}
By using Lemma~\ref{lem:interiors}, we can find a hierarchical structure on the
routing quadrilaterals. We construct a directed graph~$H=(V_H,E_H)$ with
$V_H=\{\mathcal I_{uv}\mid (u,v)\in M\}\cup \{G\}$.
For each pair~$\mathcal I_{uv},\mathcal I_{xy}$, $E_H$ contains a directed
edge~$(\mathcal I_{uv},\mathcal I_{xy})$ if and only if
$\mathcal V_{uv}\subset\mathcal V_{xy}$ and there is no matching edge~$(a,b)$
with~$\mathcal V_{uv}\subset\mathcal V_{ab}\subset\mathcal V_{xy}$.
Finally, we add an edge from each subgraph that has no outgoing edges to~$G$.
Every vertex except~$G$ has exactly one outgoing edge. Obviously, this graph contains
no (undirected) cycles. Thus,~$H$ is a tree.
We will now show how to construct a drawing of~$G$ based on~$H$ in a bottom-up
fashion. We will first look at the leaves of the graph.
Let~$\mathcal I_{uv}$ be a vertex of~$H$ whose children are all leaves.
Let~$\mathcal I_{u_1v_1},\ldots,\mathcal I_{u_kv_k}$
be these leaves. Since these interiors are all leaves in~$H$, we can pick any
of their routing edges. However, the interiors may touch on their boundary, so
not every combination of routing edges can be used. Assume that a matching
edge~$(u_i,v_i),1\le i\le k$ has more than two valid routing edges. Then, we
can always pick a \emph{middle} one, that is, a routing edge that is incident
to neither~$l_{u_iv_i}$ nor~$r_{u_iv_i}$, since this edge will not interfere
with a routing edge of another matching edge.
Now, we can create a
2SAT formula to check whether there is a valid combination of routing edges
as follows. For the sake of clarity, we will create several redundant variables
and formulas. These can easily be removed or substituted by shorter structures
to improve the running time. For each matching edge~$(u_i,v_i),1\le i\le k$, we
create two binary variables~$l_i$ and~$r_i$, such that~$l_i$ is \true if
and only if the routing edge incident to~$l_{u_iv_i}$ is picked, and~$r_i$ is
\true if and only if the routing edge incident to~$r_{u_iv_i}$ is
picked. If~$(u_i,v_i)$ has only one routing edge, then it is obviously incident
to~$l_{u_iv_i}$ and~$r_{u_iv_i}$, so we set $l_i=r_i=\true$
by adding the clauses~$l_i\vee\false$
and~$r_i\vee\false$. If~$(u_i,v_i)$ has exactly two routing
edges, the picked routing edge has to be incident to either~$l_{u_iv_i}$
or~$r_{u_iv_i}$, so we add the clauses~$l_i\vee r_i$
and~$\neg l_i\vee\neg r_i$. If~$(u_i,v_i)$ has more than two
routing edges, we can pick a middle one, so we set $l_i=r_i=\false$
by adding the clauses~$\neg l_i\vee\false$
and~$\neg r_i\vee\false$. Next, we have to add clauses to
forbid pairs of routing edges that cannot be picked simultaneously, i.e.,
that share a common vertex. Consider a pair of matching
edges~$(u_i,v_i),(u_j,v_j),1\le i,j\le k$. If~$r_{u_iv_i}=l_{u_jv_j}$, we add
the clause $\neg r_i\vee \neg l_j$. For the other three cases, we add
an analogous clause.
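The clause generation above is mechanical; the following Python sketch is one possible realisation, not the authors' implementation. The encoding — literals as (variable, polarity) pairs, with a dedicated always-false variable standing in for $\false$ — is our own illustrative assumption.

```python
def clauses_for_edge(i, num_routing_edges):
    """2SAT clauses for matching edge i.  A literal is a
    (variable, polarity) pair; 'F' is a dedicated variable that is
    forced to false by a separate unit mechanism."""
    li, ri, F = ('l', i), ('r', i), 'F'
    if num_routing_edges == 1:
        # the single routing edge is incident to both l_{u_i v_i} and
        # r_{u_i v_i}: force l_i = r_i = true
        return [((li, True), (F, True)), ((ri, True), (F, True))]
    if num_routing_edges == 2:
        # exactly one of the two boundary routing edges must be picked
        return [((li, True), (ri, True)), ((li, False), (ri, False))]
    # three or more routing edges: a middle one exists, so avoid the
    # boundary edges entirely: force l_i = r_i = false
    return [((li, False), (F, True)), ((ri, False), (F, True))]

def conflict_clause(side_i, i, side_j, j):
    """Forbid picking two routing edges that share a vertex,
    e.g. r_{u_i v_i} = l_{u_j v_j}  ->  (not r_i  or  not l_j)."""
    return (((side_i, i), False), ((side_j, j), False))
```

In a full implementation one would call `clauses_for_edge` once per matching edge and `conflict_clause` once per coinciding endpoint pair, then hand the clause list to any 2SAT solver.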
Now, we use this 2SAT formula to decide whether the
subgraph~$\mathcal I_{uv}$ is IC-planar, and which routing edges can be used. For each
routing edge~$(a,b)$ of~$\mathcal I_{uv}$, we solve the 2SAT formula given above with
additional constraints that forbid the use of routing edges incident to~$a$
and~$b$. To that end, we add the following additional clauses: If~$l_{u_iv_i}=a$,
we add the clause~$\neg l_i\vee\false$. For the other three cases, we add an
analogous clause. If this 2SAT formula has no solution, then the
subgraph~$\mathcal I_{uv}$ is not IC-planar. Otherwise, there is a solution
in which we pick the routing edges corresponding to the satisfied binary variables.
To decide whether a subgraph~$\mathcal I_{uv}$ whose children are not all leaves is
IC-planar, we first compute which of their routing edges can be picked by
recursively using the 2SAT formula above. Then, we use the 2SAT formula
for~$\mathcal I_{uv}$ to determine the valid routing edges of~$\mathcal I_{uv}$. Finally, we can
decide whether~$G$ is IC-planar and, if yes, get a drawing by solving the
2SAT formula of all children of~$G$.
This yields the following result on the time complexity.
\newcommand{\triangtest}{
Let $T=(V,E_T)$ be a triangulated plane graph with $n$ vertices and let
$M=(V,E_M)$ be a matching. There exists an $O(n^3)$-time algorithm to test if
$G=(V,E_T \cup E_M)$ admits an IC-planar drawing that preserves the embedding
of~$T$. If the test is positive, the algorithm computes a feasible drawing.
}
\begin{theorem}\label{th:triang-test}
\triangtest
\end{theorem}
\begin{proof}
We need to prove that the described algorithm runs in $O(n^3)$ time. Indeed, for each subgraph~$\mathcal I_{uv}$, we
have to run a 2SAT formula for each routing edge. Once we have determined the
valid routing edges, we do not have to look at the children anymore.
Let~$c_{uv}$ be the number of children of~$\mathcal I_{uv}$. Each of these 2SAT formulas contains $2c_{uv}$ variables and up to $4c_{uv}(c_{uv}-1)$ clauses. Since every edge
of~$G$ can only be a routing edge for exactly one matching edge, we have to
solve at most~$n$ 2SAT formulas. The tree~$H$ consists of at most~$n/2+1$
vertices (one for each matching edge), so a very conservative estimation is
that we have to solve~$O(n)$ 2SAT formulas with~$O(n)$ variables and~$O(n^2)$
clauses each. Aspvall {\em et al.}~\cite{apt-ltatt-79}
showed how to solve 2SAT in
time linear in the number of clauses. We can use the linear-time algorithm of
Section~\ref{ic:sec:drawing} to draw the IC-planar graph corresponding to the
IC-planar embedding by picking the routing edges corresponding to the binary
variables. Thus, our algorithm runs in~$O(n^3)$ time.
\end{proof}
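For concreteness, the linear-time 2SAT procedure of Aspvall et al.\ compares the strongly connected components of each variable and its negation in the implication graph. The following Python sketch is our own Kosaraju-style variant, with an illustrative literal encoding (variable $v$ positive as $2v$, negated as $2v+1$); it is not taken from any of the cited works.

```python
def solve_2sat(n, clauses):
    """2SAT on n variables.  A clause is a pair of int literals; variable
    v appears positively as 2*v and negated as 2*v + 1 (so lit ^ 1 is
    the negation).  Returns a satisfying assignment or None."""
    N = 2 * n
    adj, radj = [[] for _ in range(N)], [[] for _ in range(N)]
    for a, b in clauses:              # clause (a or b) gives two implications
        adj[a ^ 1].append(b)          # not a  ->  b
        adj[b ^ 1].append(a)          # not b  ->  a
        radj[b].append(a ^ 1)
        radj[a].append(b ^ 1)
    order, seen = [], [False] * N     # Kosaraju pass 1: finishing order
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    comp, c = [-1] * N, 0             # pass 2: SCCs in topological order
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    assignment = []
    for v in range(n):
        if comp[2 * v] == comp[2 * v + 1]:
            return None               # x and not-x in one SCC: unsatisfiable
        # x is true iff its SCC comes later in topological order
        assignment.append(comp[2 * v] > comp[2 * v + 1])
    return assignment
```

With this subroutine, solving one of the routing formulas above costs time linear in its clause count, as used in the running-time analysis.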
\section{Open Problems}\label{se:conclusions}
The research presented in this paper suggests interesting open problems.
\begin{description}
\item[Problem 1.] We have shown that every IC-planar graph has a straight-line drawing in quadratic
area, although the angle formed by any two crossing edges can be small.
Conversely, straight-line \Rac drawings of IC-planar graphs may require
exponential area. From an application perspective, it is interesting to design algorithms that compute a straight-line drawing of IC-planar graphs in polynomial area and good crossing resolution.
\item[Problem 2.] Also, although IC-planar graphs are both 1-planar and straight-line \Rac drawable graphs, a characterization of the intersection between these two classes is still missing. In particular, studying whether \emph{NIC-planar graphs} (see Zhang~\cite{z-dcmgp-AMS14}), which lie between IC-planar graphs and 1-planar graphs, are also \Rac graphs may lead to new insights on this problem.
\item[Problem 3.] We proved that recognizing IC-planar graphs is NP-hard. Is it possible to design fixed-parameter tractable (FPT) algorithms for this problem, which improve the time complexity of those described by Bannister \emph{et al.}~\cite{DBLP:conf/wads/BannisterCE13} for 1-planar graphs, with respect to different parameters (vertex cover number, tree-depth, cyclomatic number)? Are there other parameters that can be conveniently exploited for designing FPT testing algorithms for IC-planar graphs?
\end{description}
\section*{Acknowledgments}
We thank the anonymous reviewers of this work for their useful comments and suggestions.
We also thank Michael A. Bekos and Michael Kaufmann for suggesting a simpler counterexample for the area requirement of IC-plane straight-line RAC drawings.
\clearpage
\section{Introduction}
Since Arlo Landolt's (1968) discovery of the first DAV \mbox{(HL Tau 76)} we have learned much about pulsating white dwarfs. They are otherwise normal white dwarf stars, and they represent the final stage of about 98\% of the stars (see the recent review of \citealt{winget2} and references therein), so it is a key issue to gain insight into their interiors. While white dwarfs are relatively simple to model, some observed phenomena still challenge theory.
One unsolved mystery is the variability of the pulsational modes' amplitudes and frequencies of PNNV, cool DBV and cool DAV stars \citep{handler1}. While we can find very stable modes in certain white dwarfs which can be used to measure evolutionary effects (cooling and/or contraction, e.g. hot DAVs G117-B15A and R548 -- `The Most Stable Optical Clocks Known': Mukadam, Kepler \& Winget 2001 and references therein), in some cases short-term variabilities make it difficult to determine even the pulsational modes. An example for this latter is the case of PG 1115+158 (DBV) with its remarkably unstable amplitude spectra \citep{handler2}.
In some cases we encounter strange, sudden, short-term variations in the pulsational behaviours of stars: for example the `sforzando' effect observed in the DBV star GD 358 in 1996, where the nonlinear light curve of the star changed into a high amplitude, remarkably sinusoidal one for a short period of time (\citealt{kepler1}, \citealt{provencal1}). In the case of PG 1456+103 its luminosity variations almost disappeared just before the Whole Earth Telescope (WET, \citealt{nather1}) run started on the star \citep{handler1}. The frequency and amplitude variations of multiplet components (mostly for high $k$ modes) observed in GD 358 \citep{provencal1} are further examples for short-term variations.
Possible explanations for these observed phenomena could be non-linear mode coupling, convection/pulsation interactions, mode excitation, and beating of different pulsation modes -- such as unresolved rotational splitting. To find the right explanation(s) for a certain case is a great challenge. A crucial first step is to identify the periods of the true pulsation modes of a white dwarf.
One possible way of mode identification when strong amplitude variations occur is to observe the star in many seasons, determine all the observed periods from the different runs and search for the (nearly) equidistant period spacings among them, which can be consecutive overtones with the same horizontal degree ($l$) value. \citet{kleinman1} used this method for the first time for a DAV star (G29-38) where recurring sets of normal modes were identified.
Since KUV 02464+3239 was found to be a luminosity variable DA star near the red edge of the DAV instability strip \citep{fontaine1}, we expect to find the short-term variabilities which characterize the pulsation of similar stars. Using a 2900\,s-long light curve, \citet{fontaine1} determined a quasi-period of $\sim$832\,s for the star's light variation. The long-period, large-amplitude variations they found, with a strongly non-sinusoidal light curve, along with the presence of harmonic peaks in the frequency spectrum, are consistent with the behaviour of a cool DAV star. However, up to now only the results of this short discovery run have been published on this target.
In this paper we present the results of observational analysis of KUV 02464+3239. We describe our observations and data reduction process in Sect.~\ref{obs}. The results of Fourier analyses and frequency determination tests can be found in Sect.~\ref{fourier}. Sect.~\ref{Ampl.var.} contains our tests on amplitude variations. We discuss the results of asteroseismological investigations (modeling, results for stellar parameters) in Sect.~\ref{seism} and give a brief summary in Sect.~\ref{sum}.
\section[]{Observations and data reduction}
\label{obs}
We observed KUV 02464+3239 on 20 nights between October 2006 and February 2007. The observations were made with a Princeton Instruments VersArray:1300B back-illuminated CCD camera\footnote{http://www.princetoninstruments.com/products/imcam/\\versarray/dsheet.aspx} attached to the 1m RCC telescope at Piszk\'estet\H o mountain station of Konkoly Observatory. The log of observations is in Table~\ref{obslog}. We did not use any filter (`white light observations') to maximize the photon count from this relatively faint target ($m_v = \rmn{16\fm07}$). The back-illuminated detector we used is more sensitive to bluer wavelengths than a front-illuminated one. Note that the amplitude values of the Fourier light curve solution corresponding to these white light measurements differ from filtered ones.
Standard IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} routines were used during the reduction process; the frames were bias, dark and flat corrected before we performed aperture photometry with the IRAF DAOPHOT package. The observational times of every data points were calculated to Barycentric Julian Date (BJD).
Different apertures were used for every night. For a given night, we chose the aperture size that resulted in the lowest scatter for differential light curves of constant stars in the field.
We used 10 or 30\,s exposure times depending on the weather conditions during the observing runs. For the 10\,s exposures we averaged the data into 30\,s bins to keep all the data with the same sampling, although we are aware that, for multiperiodic pulsators with closely spaced pulsation frequencies, binning introduces an extra uncertainty.
\begin{table}
\caption{Journal of observations of KUV 02464+3239. Subset No. refers to subsets in Sect.~\ref{Analysis.ref.interval.}, $N$ denotes the number of points remained after averaging (see details in text) and $\delta T$ means the time span of the runs including gaps.}
\label{obslog}
\begin{center}
\begin{tabular}{p{4mm}p{6mm}ccrr}
\hline
Run & Subset & Date & Start time & $N$ & $\delta T$\\
No. & No. & (UT) & (BJD-2\,450\,000) & & (h)\\
\hline
01 & 1 & 2006 Oct 06 & 4014.577 & 210 & 2.02\\
02 & 1 & 2006 Oct 07 & 4015.584 & 117 & 1.31\\
03 & 1 & 2006 Oct 09 & 4017.546 & 216 & 2.37\\
04 & 1 & 2006 Oct 11 & 4019.543 & 264 & 2.98\\
05 & & 2006 Oct 25 & 4034.283 & 304 & 6.29\\
06 & & 2006 Nov 26 & 4065.603 & 112 & 1.27\\
07 & & 2006 Nov 27 & 4066.542 & 65 & 0.87\\
08 & 2 & 2006 Nov 28 & 4068.206 & 1163 & 11.17\\
09 & & 2006 Dec 07 & 4077.299 & 278 & 4.11\\
10 & & 2006 Dec 08 & 4078.474 & 182 & 1.89\\
11 & 3 & 2006 Dec 11 & 4081.169 & 732 & 8.72\\
12 & & 2006 Dec 12 & 4082.435 & 81 & 0.75\\
13 & & 2006 Dec 13 & 4083.176 & 477 & 9.92\\
14 & 4 & 2006 Dec 14 & 4084.170 & 1048 & 10.92\\
15 & 4 & 2006 Dec 15 & 4085.179 & 877 & 10.56\\
16 & 4 & 2006 Dec 16 & 4086.187 & 696 & 7.54\\
17 & & 2006 Dec 17 & 4087.322 & 107 & 1.14\\
18 & 4 & 2006 Dec 19 & 4089.208 & 747 & 6.90\\
19 & & 2007 Jan 29 & 4130.216 & 362 & 5.16\\
20 & & 2007 Feb 19 & 4151.289 & 189 & 1.91\\
\hline
Total: & & & & 8227 & 97.80\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{bognar_fig1.eps}
\end{center}
\caption{Variable and comparison stars in the CCD field. The three brightest stars (C1, C2, C3) were selected as a reference system for the differential photometry.}
\label{map}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{bognar_fig2.eps}
\end{center}
\caption{Differential light curve of the check star (C3) to the average of the reference stars obtained on JD 2\,454\,086.}
\label{checkstar}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[angle=0,width=17.5cm]{bognar_fig3.eps}
\end{center}
\caption{Normalized differential light curves of the 20 runs obtained on KUV 02464+3239 between October 2006 and February 2007. Time is given as BJD-2\,450\,000.}
\label{lightcurves}
\end{figure*}
We performed aperture photometry on the same 22 stars for every night. Possible comparison stars were checked for variability and longer-term trends (caused by colour effects of stars with different spectral types) in an iterative way. This process resulted in our choosing the three brightest stars marked in Fig.~\ref{map} as a reference system for the differential photometry. The quality of this method is shown by the differential light curve of a check star (C3) compared to the average of the reference stars in Fig.~\ref{checkstar}.
Following the traditional method of photoelectric photometry, we obtained first-order extinction coefficients for the nights which were long enough. The values lay in the range of 0.07--0.39 in white light.
The colour term of extinction was checked; however, we could not precisely determine the second-order extinction coefficient for each night. In addition to the iterative process, we obtained \textit{BVRI} photometry to estimate the spectral types of the comparison stars it selected. The relative colour indices were checked: the differences between the \textit{B-V} colour indices are larger than $\sim$0.5, which shows immediately that the comparison stars have much later spectral types than KUV 02464+3239. Naturally, this mismatch often resulted in trends in the differential light curves and, accordingly, in significant signals in the low-frequency region of their Fourier transform (FT). Although the low-frequency signals do not exceed 1--6\,mmag, polynomial fits were performed in the last step of the reduction process to obtain a homogeneous dataset for the variable star; this approach is widely applied in white dwarf research using multichannel photometry. The light curves obtained for each run can be seen in Fig.~\ref{lightcurves}.
\section[]{Fourier analysis}
\label{fourier}
Since the star was not well studied before, our first aim was to find the frequency content.
Analyses were made by use of the MuFrAn (Multi-Frequency Analyzer) package \citep{kollath1, csubry1}. MuFrAn provides efficient tools for frequency determination with its standard analyzer applications (FFT, DFT, linear and nonlinear fitting options) and graphic display routines. The program handles unequally spaced and gapped observational data. Frequency values are given in cycle/day\,(c/d) units.
We followed the standard steps of a pre-whitening process to obtain frequency, amplitude and phase values utilising the light curve's Fourier spectrum. In pre-whitening processes the question is always when to stop iterating. In deciding whether a peak belongs to a real pulsation frequency, we used the widely accepted criterion: if a peak reaches a signal-to-noise ratio of $\sim$4, it has only a small probability of being due to noise \citep{breger1}.
The tools of the time string analysis program Period04 \citep{lenz1} were used to calculate S/N of the individual peaks. The program determines the signal (S) value as the amplitude of the peak after the least-squares fit. The noise (N) is given as an average amplitude calculated in a frequency range that encloses the peak in the residual spectrum. We determined S/N values from the $\pm$25\,c/d frequency range of peaks after pre-whitening the original FT.
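As an illustration of this S/N computation, the following Python sketch gives a minimal implementation under our own assumptions (function names and the frequency grid step are illustrative; this is not the Period04 code):

```python
import cmath
import math

def amplitude_spectrum(times, mags, freqs):
    """Fourier amplitude spectrum of an unevenly sampled light curve:
    A(f) = (2/N) |sum_k m_k exp(-2*pi*i*f*t_k)|, frequencies in c/d,
    times in days."""
    n = len(times)
    return [2.0 * abs(sum(m * cmath.exp(-2j * math.pi * f * t)
                          for t, m in zip(times, mags))) / n
            for f in freqs]

def signal_to_noise(times, resid, f_peak, amp_peak, half=25.0, step=0.05):
    """Signal = fitted amplitude of the peak; noise = average amplitude
    of the pre-whitened (residual) spectrum within +/-25 c/d of it."""
    mean = sum(resid) / len(resid)
    centred = [r - mean for r in resid]
    grid = [f_peak - half + i * step
            for i in range(int(2 * half / step) + 1)]
    grid = [f for f in grid if f > 0]        # keep physical frequencies
    noise = sum(amplitude_spectrum(times, centred, grid)) / len(grid)
    return amp_peak / noise
```

A peak would then be accepted if `signal_to_noise(...)` exceeds the adopted threshold of $\sim$4.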
\subsection[]{Investigation of individual nights}
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{bognar_fig4.eps}
\end{center}
\caption{Amplitude spectra of the subsets. Numbers of subsets are indicated in the left-upper corners of each panels. Window functions are given in the inserts. Remarkable amplitude variations can be seen from one interval to another.}
\label{FT.ref.nights}
\end{figure}
\begin{table}
\caption{Frequencies detected with S/N\,$\geq$\,4 values in the Fourier Transforms of subsets. Period and amplitude values are given for completeness.}
\label{ref.nights.freq}
\begin{center}
\begin{tabular}{rrrr}
\hline
\multicolumn{1}{c}{Frequency} & \multicolumn{1}{c}{Period} & \multicolumn{1}{c}{Ampl.} & \multicolumn{1}{c}{S/N}\\
\multicolumn{1}{c}{(c/d)} & \multicolumn{1}{c}{(s)} & \multicolumn{1}{c}{(mmag)} & \\
\hline
\multicolumn{4}{l}{Subset 1}\\
90.660$\pm$0.004 & 953.01 & 12.8 & 4.9\\
99.251$\pm$0.003 & 870.52 & 17.4 & 6.6\\
103.790$\pm$0.002 & 832.45 & 30.6 & 11.9\\
114.103$\pm$0.004 & 757.21 & 10.0 & 4.0\\[1.5mm]
\multicolumn{4}{l}{Subset 2}\\
99.567$\pm$0.028 & 867.76 & 23.6 & 7.1\\
104.880$\pm$0.036 & 823.80 & 20.2 & 6.5\\
111.452$\pm$0.045 & 775.22 & 15.1 & 5.6\\
139.227$\pm$0.063 & 620.57 & 11.6 & 6.0\\[1.5mm]
\multicolumn{4}{l}{Subset 3}\\
69.475$\pm$0.047 & 1243.61 & 14.7 & 5.4\\
87.264$\pm$0.044 & 990.09 & 16.3 & 6.0\\
98.025$\pm$0.053 & 881.41 & 16.1 & 6.0\\
104.358$\pm$0.047 & 827.92 & 18.0 & 7.3\\[1.5mm]
\multicolumn{4}{l}{Subset 4}\\
$^*$86.579$\pm$0.004 & 997.93 & 9.5 & 5.5\\
86.996$\pm$0.002 & 993.15 & 18.4 & 10.6\\
$^*$89.142$\pm$0.003 & 969.24 & 10.7 & 6.2\\
$^*$90.100$\pm$0.001 & 958.94 & 15.2 & 8.8\\
98.797$\pm$0.002 & 874.52 & 9.1 & 5.5\\
104.036$\pm$0.002 & 830.48 & 10.2 & 6.5\\
139.954$\pm$0.005 & 617.35 & 3.8 & 4.7\\
173.960$\pm$0.004 & 496.66 & 4.7 & 4.3\\
\hline
\end{tabular}
\end{center}
$^*$Frequencies closely spaced to the dominant peak. The signals at 89.14 and 90.10\,c/d are close to being 1\,c/d aliases of each other. The peak at 90.1\,c/d has an unreliably high amplitude solution only when we fit at least the first six frequencies of this subset.
\end{table}
Because the pulsation periods are short compared to the typical run length, the one-night long observing runs provide a long enough time base to investigate the basic pulsational characteristics of the star, even if not all the modes are resolved. In addition, a night-by-night analysis allows us to follow the changes in pulsation amplitudes.
Considering the threshold of S/N$\sim$4 for the peaks, our results show that the pulsation frequencies of KUV 02464+3239 are at $\sim$87, 97, 104, 111 and 139\,c/d. In one case we also found a very significant (S/N=5.4) signal at $\sim$70\,c/d. During the pre-whitening process we did not take into account peaks below 60\,c/d in the FTs. This limit is an overestimation of the frequency range filtered by the polynomial fitting, which means that any possible pulsation frequency below this level remained undetected.
Our analysis revealed that remarkable amplitude variations can happen from one night to another as in many cases of PNNV, cool DBV and cool DAV stars (\citealt{handler1}).
\subsection[]{Analysis of reference intervals}
\label{Analysis.ref.interval.}
The dataset can be grouped into four subsets of selected nights. The longer time base means better spectral resolution, and with a proper selection of nights the time base still remains short enough not to obscure possible short-term variabilities.
The first four nights (JD 2\,454\,014\,--\,2\,454\,019) were selected from October (subset No. 1). We got the longest time string during the eighth run (JD 2\,454\,068), so we used this run as a reference for November (subset No. 2). In December we had the chance to observe KUV 02464+3239 in two consecutive weeks. In view of the results of the individual nights' Fourier analyses we selected two intervals: the first is the run on JD 2\,454\,081 (subset No. 3); the second consists of four nights (subset No. 4; JD 2\,454\,084\,--\,2\,454\,089, except the short run on JD 2\,454\,087). The latter runs have similar FTs, but the run on JD 2\,454\,081 has a different FT.
Fig.~\ref{FT.ref.nights} shows the FTs of our subsets. The variations in the amplitudes of peaks in the FTs are obvious. Frequency values determined after pre-whitening are given in Table~\ref{ref.nights.freq}. S/N values of the peaks were also calculated. The noise levels decrease from lower to higher frequencies, from 2.6 to 2.5\,mmag (subset 1), 3.3 to 1.9\,mmag (subset 2), 2.7 to 2.5\,mmag (subset 3) and 1.7 to 1\,mmag (subset 4). Standard deviations of the frequency values were determined by Monte Carlo (MC) simulations, obtaining solutions for each frequency in each subset. We generated synthetic light curves (adding Gaussian random noise) corresponding to the frequencies listed in Table~\ref{ref.nights.freq}. After the non-linear least-squares fitting of 100 synthetic datasets for each subset, we determined the standard deviations of the frequencies.
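A minimal version of this MC error estimation might look as follows. This Python sketch rests on simplifying assumptions of our own: a single sinusoidal mode, and a fine grid scan standing in for the true non-linear least-squares fit; all names are illustrative.

```python
import math
import random
import statistics

def fit_frequency(times, mags, f0, span=0.5, step=0.002):
    """Crude stand-in for a non-linear least-squares fit: scan a fine
    grid around f0 (in c/d) and keep the frequency with the largest
    Fourier amplitude (adequate for a single, well-separated mode)."""
    n = len(times)
    best_f, best_a = f0, -1.0
    for k in range(int(2 * span / step) + 1):
        f = f0 - span + k * step
        re = sum(m * math.cos(2 * math.pi * f * t)
                 for t, m in zip(times, mags))
        im = sum(m * math.sin(2 * math.pi * f * t)
                 for t, m in zip(times, mags))
        a = math.hypot(re, im) / n
        if a > best_a:
            best_f, best_a = f, a
    return best_f

def mc_frequency_sigma(times, model_mags, sigma_noise, f0,
                       n_sim=100, seed=0):
    """Monte Carlo error estimate: perturb the light-curve solution
    with Gaussian noise n_sim times, refit the frequency each time,
    and return the standard deviation of the fitted values."""
    rng = random.Random(seed)
    fitted = []
    for _ in range(n_sim):
        noisy = [m + rng.gauss(0.0, sigma_noise) for m in model_mags]
        fitted.append(fit_frequency(times, noisy, f0))
    return statistics.stdev(fitted)
```

In practice the model light curve would contain all fitted frequencies of the subset, and the noise level would be set to the residual scatter of the light curve solution.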
\begin{table}
\caption{Frequency and amplitude values of the 6 accepted pulsation frequencies for each subset. Amplitudes were determined by fitting with these frequency components only. Subset No. refers to subsets of Sect.~\ref{Analysis.ref.interval.}}
\label{ref.nights.freq2}
\begin{center}
\begin{tabular}{lcccccc}
\hline
Subset & \multicolumn{6}{c}{Frequency}\\
No. & \multicolumn{6}{c}{(c/d)}\\
\hline
1 & & & 99.25 & 103.79 & & \\
2 & & & 99.57 & 104.88 & 111.45 & 139.23\\
3 & 69.48 & 87.26 & 98.02 & 104.36 & &\\
4 & & 86.99 & 98.80 & 104.04 & &\\
\hline
& \multicolumn{6}{c}{Amplitude (mmag)}\\
\hline
1 & & & 17.28 & 29.56 & & \\
2 & & & 23.61 & 20.24 & 15.12 & 11.59\\
3 & 14.74 & 16.25 & 16.07 & 17.96 & &\\
4 & & 20.83 & 8.83 & 10.14 & &\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection[]{Frequency determination test}
\label{robust}
To support our final frequency determination we performed a robustness test. Synthetic light curves were generated using frequencies, amplitudes and phases of the light curve solutions with Gaussian random noise added. The applied noise levels were scaled from 1 to 4 times the residual scatters ($\sigma$) of the light curve solutions. Independent frequency analyses were carried out for each dataset (1) to check how accurately we can get back the input frequencies from the noisy datasets and (2) whether we can identify the same frequencies that were found to be significant in the original datasets.
Testing our first subset, we faced a $\pm$1--2\,c/d alias problem as we raised the noise level, especially in the case of frequencies with lower amplitudes: frequencies were found rather far from the input values. At the 4\,$\sigma$ level we could identify 2 frequencies (99.25, 103.79\,c/d) out of the 4 found to be significant earlier. In the case of our second and third subsets, the amplitudes were rather high and we found all 4 of the frequencies identified before. Adding noise to the fourth subset caused alias problems from the 2.5\,$\sigma$ level upward, and the lowest amplitude peaks disappeared. We could find only 4 frequencies (86.99, 90.10, 98.79 and 104.03\,c/d) in the noisiest time string.
Table~\ref{ref.nights.freq2} shows the results of our robustness test with the finally accepted frequencies. Peaks at $\sim$104 and 99\,c/d were found in all subsets. The other four frequencies (at $\sim$69, 87, 111 and 139\,c/d) appeared as rather high amplitude peaks in one or two segments. These 6 frequencies were determined unambiguously by the analyses of the subsets and presumably characterize the pulsation of KUV 02464+3239 down to a certain amplitude limit.
As can be seen in Fig.~\ref{FT.ref.nights} and in Table~\ref{ref.nights.freq2}, remarkable amplitude variations occur from one subset to another. A signal at $\sim$87\,c/d became dominant in December while the amplitude of the peak at 104\,c/d decreased by 66\%. The peak at 99\,c/d first increased its amplitude by 37\% then decreased by 63\%.
In Fig.~\ref{scatter} we plot the frequencies that were found in at least two subsets. The groups of frequencies are well separated; there is no problem with overlapping sidelobes. The error bars give the uncertainty of our determination of a given value in a subset. Since the error bars of the two peaks around 87\,c/d overlap, their apparent difference may be caused by the uncertainties alone. The error bars also overlap in the case of the peaks at $\sim$104\,c/d. The four peaks around 99\,c/d represent an interesting case. Since the three well-determined peaks are separated more widely than their error bars, we can say that they are distinct. One possible reason is a frequency change on a short time scale (from one subset to the other -- within one month). Another explanation is that they are independent modes that are always excited and whose amplitudes change from subset to subset. Alternatively, they might be components of an unresolved rotationally split mode. A comparative analysis on a longer time base could clear up the real situation (if it is a regular behaviour). At this early stage of the interpretation of KUV 02464+3239 we merely note this behaviour and give a general solution based on the whole dataset.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{bognar_fig5.eps}
\end{center}
\caption{Frequencies, out of the 6 accepted ones, found in at least two subsets. The widths of the error bars correspond to the frequency ranges within which the peaks were found during our robustness test (see Sect.~\ref{robust} for details).}
\label{scatter}
\end{figure}
\subsection[]{Analysis of the whole dataset}
\label{whole.lc.analysis}
The long time base of the whole dataset gives more precise frequency values if we assume the frequencies are stable over the time base. In this case the short-term variations in amplitudes are obscured but the frequencies are well-determined.
The successive pre-whitening steps of the whole dataset can be seen in the panels of Fig.~\ref{whitening}. Table~\ref{whole.freq} shows the values of frequencies determined from the whole dataset. Standard deviations are calculated by MC simulations. Peaks at $f_1$\,--\,$f_6$ are marked in the first and second panels of Fig.~\ref{whitening} (upper segment of Table~\ref{whole.freq}). They represent the 6 frequencies that come out of the final analysis. The residual spectrum in the third panel is definitely not white noise. With further analysis, we found 7 other frequencies ($\rmn{f}_7$\,--\,$\rmn{f}_{13}$) in the residual spectrum and present them in the lower segment of Table~\ref{whole.freq}. The last panel of Fig.~\ref{whitening} shows the FT after pre-whitening with 13 frequencies.
Amongst $\rmn{f}_7$, $\rmn{f}_8$ and $\rmn{f}_9$, additional pulsation modes could be present, but we cannot say which ones are real modes. $\rmn{f}_7$ is close to the $+$1\,c/d alias of the $f_2$ peak obtained in subset 3. There is no sign of $\rmn{f}_8$ in any of the subsets. $\rmn{f}_9$ may be a remnant of the group of four peaks near $f_3$ obtained from the subsets: the two higher frequencies in Table~\ref{ref.nights.freq2} (99.25, 99.57\,c/d) appear as $f_3$, while the two lower frequencies (98.02, 98.80\,c/d) can produce $\rmn{f}_9$ in the whole dataset. The double structure around 139\,c/d might be a sign of rotational splitting: $\delta f$ = 0.5\,c/d = 5.79\,$\mu$Hz. Assuming that these peaks belong to high-overtone ($k\gg1$) $l=1$, $m=-1,0$ or $m=0,1$ modes, the rotation period of the star would be $\sim$1\,d. Considering the peaks at 104\,c/d, their frequency separation is only $\delta f$ = 0.13\,c/d = 1.5\,$\mu$Hz, which corresponds to a $\sim$3.8\,d rotation period under the same assumptions. However, given the large amplitude of the closely spaced peak, $\rmn{f}_{10}$ could also be an independently excited mode with another $l$ value. Determination of the frequency at 111.09\,c/d ($\rmn{f}_{11}$) was ambiguous because of aliasing. The peak at 173.96\,c/d is consistent with being the first harmonic of the dominant mode.
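The rotation periods quoted above follow from the standard first-order splitting formula for slow rotation, with the usual asymptotic approximation for $C_{k,l}$:
\begin{equation}
\delta f = m\,(1 - C_{k,l})\,\Omega\,, \qquad C_{k,l} \simeq \frac{1}{l\,(l+1)} \quad (k \gg 1)\,,
\end{equation}
\noindent so for $l = 1$ modes $C_{k,l} \simeq 0.5$ and $P_{\rmn{rot}} = 1/\Omega \simeq (1 - C_{k,l})/\delta f$ for adjacent ($|\Delta m| = 1$) components. With $\delta f = 0.5$\,c/d this gives $P_{\rmn{rot}} \simeq 1$\,d, while $\delta f = 0.13$\,c/d gives $P_{\rmn{rot}} \simeq 3.8$\,d.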
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{bognar_fig6.eps}
\end{center}
\caption{Frequency analysis of the whole dataset. The panels show the successive pre-whitening steps; the window function is given in the insert. The 6 finally accepted frequencies are marked in the first and second panels. The third panel shows the residual spectrum with definite signals remaining. The spectrum after pre-whitening with 13 frequencies can be seen in the last panel. It is still not white noise.}
\label{whitening}
\end{figure}
For asteroseismological investigations, we use only the 6-frequency solution supported by the analyses of subsets. We consider these frequencies as a set of normal modes. A dataset with better coverage and higher signal-to-noise ratio would enable us to determine additional real pulsation modes.
\begin{table}
\caption{Frequency, period and amplitude values of the 13 frequencies derived from the whole dataset. Frequencies $f_1$\,--\,$f_6$ are accepted as normal modes on the basis of the analyses and tests of the subsets. Frequency and amplitude values of $f_1$\,--\,$f_6$ were determined by fitting with these components only. Frequencies and amplitudes of $\rmn{f}_7$\,--\,$\rmn{f}_{13}$ were calculated by fitting with all 13 frequencies. Amplitude values of $f_1$\,--\,$f_6$ for the 13-frequency solution are given in parentheses.}
\label{whole.freq}
\begin{center}
\begin{tabular}{lrlrrr}
\hline
& \multicolumn{3}{c}{Frequency} & \multicolumn{1}{c}{Period} & \multicolumn{1}{c}{Ampl.}\\
& \multicolumn{2}{c}{(c/d)} & \multicolumn{1}{c}{($\mu$Hz)} & \multicolumn{1}{c}{(s)} & \multicolumn{1}{c}{(mmag)}\\
\hline
$f_1$ & 69.1060 & $\pm$0.0003 & 799.838 & 1250.253 & 4.4 (4.4)\\
$f_2$ & 86.9879 & $\pm$0.0001 & 1006.804 & 993.242 & 13.2 (14.4)\\
$f_3$ & 99.7516 & $\pm$0.0001 & 1154.532 & 866.151 & 9.5 (11.5)\\
$f_4$ & 104.2617 & $\pm$0.0001 & 1206.733 & 828.684 & 11.6 (11.4)\\
$f_5$ & 111.1095 & $\pm$0.0002 & 1285.989 & 777.611 & 5.5 (8.0)\\
$f_6$ & 139.5157 & $\pm$0.0003 & 1614.765 & 619.285 & 4.0 (3.9)\\
\hline
$\rmn{f}_7$ & 88.1611 & & 1020.383 & 980.024 & 8.4 \\
$\rmn{f}_8$ & 89.7100 & & 1038.311 & 963.103 & 7.3 \\
$\rmn{f}_9$ & 98.8533 & & 1144.136 & 874.022 & 6.5 \\
$\rmn{f}_{10}$ & 104.3915 & & 1208.235 & 827.654 & 10.2 \\
$\rmn{f}_{11}$ & 111.0865 & & 1285.724 & 777.772 & 5.8 \\
$\rmn{f}_{12}$ & 139.0162 & & 1608.983 & 621.510 & 3.7 \\
$\rmn{f}_{13}$ & 173.9642 & & 2013.474 & 496.654 & 3.2 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section[]{Tests on amplitude variations}
\label{Ampl.var.}
The Fourier analysis method presumes stationary amplitude and frequency values while the behaviour of KUV 02464+3239 seems to be different. We do not know why the observed amplitudes of the star vary so much. To provide constraints for possible explanations, we performed tests for the amplitude variability.
We generated synthetic light curves corresponding to the whole dataset, using the frequency, amplitude and phase values of the 6 accepted frequencies, in some cases supplemented with closely spaced frequencies around the 87\,c/d dominant mode. In every case, we analysed the four subsets by non-linear least-squares fitting using the original frequency values as initial parameters.
As our first test ($a$) we generated a synthetic light curve using only the 6 frequencies and did not add noise to the dataset. The analyses of the subsets did not show noticeable amplitude variations from one subset to another. This result shows that beating of the 6 modes cannot explain the apparent amplitude variations. In our second test ($b$) we added Gaussian random noise to the synthetic light curve using the residual scatter of the whole light curve's 13-frequency solution. Our analysis showed that noise had only a slight influence on the amplitudes (mainly for the lower amplitude modes) and cannot be responsible for the large observed variability. In our subsequent tests ($c$ -- $h$) we generated the synthetic light curves using 6+2 frequencies, where the two additional frequencies were close to the dominant mode ($\delta f$ = $\pm$0.185\,c/d = 2.14\,$\mu$Hz). This separation was selected to be close to the resolution limit of the longest subset (subset 4) and could be a reasonable value for rotational splitting. We set the amplitudes of the triplet's side components to half ($c$ -- $e$) and one tenth ($f$ -- $h$) of the 87\,c/d mode's amplitude. The phase differences were $\pm 22.5^{\circ}$ ($c$, $f$), $\pm45^{\circ}$ ($d$, $g$) and $\pm90^{\circ}$ ($e$, $h$) with respect to the central peak's phase at the initial epoch (JD\,2\,454\,014). With these tests we simulated the effects on the amplitudes of closely spaced frequencies which might remain unresolved in the subsets. Such peaks could be detected in the analysis of the whole light curve only if they had large enough amplitudes not to vanish among the other peaks and the noise.
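The beating effect probed by tests $c$ -- $h$ can be reproduced with a short simulation. The sketch below is illustrative only: the side-component amplitude and phase offset are test-$c$-like values, the subset epochs and sampling are assumptions, and a single fixed-frequency sinusoid is fitted to each noise-free `subset' of an unresolved triplet.

```python
import numpy as np
from scipy.optimize import curve_fit

f_c = 86.99               # central frequency (c/d), the dominant mode
df = 0.185                # assumed splitting (c/d), as in tests c--h
a_c, a_side = 13.2, 6.6   # central and side amplitudes (mmag), test-c-like
phi = np.deg2rad(22.5)    # assumed side-component phase offset

def triplet(t):
    # unresolved triplet: central mode plus two phase-shifted side lobes
    return (a_c * np.sin(2 * np.pi * f_c * t)
            + a_side * np.sin(2 * np.pi * (f_c - df) * t - phi)
            + a_side * np.sin(2 * np.pi * (f_c + df) * t + phi))

def single(t, a, ph):
    # the single sinusoid fitted within each short subset
    return a * np.sin(2 * np.pi * f_c * t + ph)

rng = np.random.default_rng(0)
# four "subsets" spread over ~2.5 months, loosely mimicking the runs
starts = [0.0, 25.0, 38.0, 45.0]
fitted_amps = []
for t0 in starts:
    t = t0 + np.sort(rng.uniform(0.0, 3.0, 500))
    popt, _ = curve_fit(single, t, triplet(t), p0=[a_c, 0.0])
    fitted_amps.append(abs(popt[0]))

# relative spread of the apparent amplitude across the subsets
spread = (max(fitted_amps) - min(fitted_amps)) / a_c
```

Because each short subset samples a different phase of the $\sim$5.4\,d beat cycle, the fitted single-mode amplitude varies strongly from subset to subset even though the input amplitudes are constant.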
\begin{table}
\caption{Results of tests on amplitude variability of the 87\,c/d mode. At test cases $a$ -- $h$ we give relative amplitude differences with respect to the synthetic light curves' input value. See details in Sect.~\ref{Ampl.var.}.}
\label{amplvar}
\begin{center}
\begin{tabular}{lrrrrrrrr}
\hline
\multicolumn{1}{c}{Subset} & \multicolumn{8}{c}{Test cases}\\[1.5mm]
& \multicolumn{1}{c}{$a$} & \multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{$c$} & \multicolumn{1}{c}{$d$} & \multicolumn{1}{c}{$e$} & \multicolumn{1}{c}{$f$} & \multicolumn{1}{c}{$g$} & \multicolumn{1}{c}{$h$}\\[2.0mm]
& \multicolumn{8}{c}{Amplitude variation (\%)}\\
\hline
1 & 0 & +4 & +28 & +9 & -12 & -3 & +10 & -3\\
2 & 0 & -6 & +60 & +37 & -52 & +11 & +1 & -9\\
3 & 0 & +15 & -87 & -64 & -17 & -7 & -28 & +6\\
4 & 0 & -5 & +22 & +11 & -35 & +7 & +14 & -4\\
\hline
\end{tabular}
\end{center}
\end{table}
Table~\ref{amplvar} summarizes the test results on the amplitude variations of the dominant peak. In some cases we got large amplitude variations: disappearance and almost-disappearance of the peak in subset 3 (tests $c$ and $d$). In test $c$ we fitted a peak at 86.3\,c/d (the input value was 86.99\,c/d) with 1.5\,mmag amplitude. This means an amplitude decrease by 87\% if we accept the 86.3\,c/d peak as the representative of the original one. In test $d$ we got a 64\% decrease in amplitude. Triplet side components with smaller amplitudes caused smaller, but noticeable variations. The maximum was a 28\% decrease in the amplitude of the dominant peak (test $g$).
Amplitude variations were detected for other modes as well. The presence of noise and beating with triplet components can be the explanation since we generated the synthetic light curves using constant amplitude values. The average of the variations of the 5 modes is 15\%.
We conclude that the observed large amplitude variability can be simulated with a triplet assuming certain phase relations and relatively high amplitudes for the side components. However, these special phase relations and high amplitudes of side lobes to the main peak are rather improbable. A real amplitude variation seems to be plausible.
\section[]{Asteroseismology of KUV 02464+3239}
\label{seism}
Our main goal in asteroseismology is to find stellar models that match the observed properties of the star with the highest precision. The most important observables are the periods of the light variations. First, we look for models that produce pulsation periods with the smallest differences from the observed ones. We used the White Dwarf Evolution Code (WDEC), originally written by Martin Schwarzschild and modified by \citet{kutter1}, \citet{lamb1}, \citet{winget1}, \citet{kawaler1}, \citet{wood1}, \citet{bradley1}, \citet{montgomery1} and \citet{kim1}, to generate equilibrium white dwarf models, each with a given set of input parameters. For each model, we computed a set of periods based on the adiabatic equations of non-radial stellar oscillations \citep{unno1}. We computed grids of such models, varying the input parameters.
In WDEC we start with a hot ($\sim$100\,000\,K) polytrope model. The code evolves it down to the requested temperature, and the model we finally obtain is a thermally relaxed solution of the stellar structure equations. To get the pulsation periods of a model we used the integrated form of the WDEC given by Metcalfe (2001), which includes a pulsation code.
We used the equation-of-state (EOS) tables of \citet{lamb2} in the core and the EOS tables of \citet{saumon1} in the envelope of the star. We used OPAL opacities updated by \citet{iglesias1}. Convection was treated within the Mixing Length Theory (MLT) according to \citet{bohm1}, with the $\alpha = 0.6$ parametrization suggested by the model atmosphere calculations of \citet{bergeron2}. The precise value of $\alpha$ has very little effect on the pulsation periods of the models (Bischoff-Kim, Montgomery \& Winget 2008b). The hydrogen/helium transition zone was treated by equilibrium diffusion calculations, in contrast to the helium/carbon transition layer, which was parametrized.
\subsection[]{The model grid}
As we discussed in Sect.~\ref{whole.lc.analysis}, we used the 6-frequency light curve solution for the seismological analysis of KUV 02464+3239. Table~\ref{whole.freq} shows that we found modes only in the long-period regime (between 619 and 1250\,s). \citet{kim3} showed that low-order modes are especially sensitive to the mass of the hydrogen layer.
The best-fitting models have to fulfil two criteria: (1) to give back the period values closest to the observed ones and (2) to provide physically acceptable solutions for the effective temperature and $\rmn{log}\,g$. To find solutions according to the first criterion, we used the \verb"fitper" tool developed by Kim (2007). The program calculates the \textit{r.m.s.} value using the following equation:
\begin{equation}
\sigma_{r.m.s.} = \sqrt{\frac{\sum_{i=1}^{N} (P_i^{\rmn{calc}} - P_i^{\rmn{obs}})^2}{N}}
\label{equ1}
\end{equation}
\noindent where \textit{N} denotes the number of observed periods.
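Equation~(\ref{equ1}) is straightforward to evaluate. The sketch below reproduces it in Python, using for illustration the observed periods and the periods of the $0.650\,M_{\sun}$ model of Table~\ref{best.fit}; it is not the \verb"fitper" code itself.

```python
import numpy as np

def sigma_rms(p_calc, p_obs):
    """r.m.s. difference between calculated and observed periods (eq. 1)."""
    p_calc = np.asarray(p_calc, dtype=float)
    p_obs = np.asarray(p_obs, dtype=float)
    return np.sqrt(np.mean((p_calc - p_obs) ** 2))

# Observed periods (s) of the 6 accepted modes, and the periods of the
# best-fitting 0.650 M_sun model as an example.
p_obs = [619.3, 777.6, 828.7, 866.2, 993.2, 1250.3]
p_calc = [620.3, 776.6, 827.1, 866.2, 992.8, 1249.6]

print(round(sigma_rms(p_calc, p_obs), 2))  # -> 0.93, as quoted in the table
```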
\begin{table*}
\caption{Best-fitting solutions for KUV 02464+3239 in the mass range 0.525 -- 0.74\,$M_{\sun}$. We represent the results of former spectroscopic observations ($T_{\rmn{eff}}$, $\rmn{log\,} g$, \citealt{fontaine1}), the expected seismological mass derived by \citet{bradley3} and the observed pulsation periods as reference values. Models that have masses within 1\,$\sigma$ in $\rmn{log\,} g$ range are typeset in boldface.}
\label{best.fit}
\begin{tabular}{lccccccccccc}
\hline
\multicolumn{1}{l}{$M_*$/$M_{\sun}$, ($\rmn{log\,} g$)} & \multicolumn{1}{c}{$T_{\rmn{eff}}$\,(K)} & \multicolumn{1}{c}{-log\,$M_\rmn{H}$} & \multicolumn{1}{c}{$X_\rmn{o}$} & \multicolumn{1}{c}{$X_{\rmn{fm}}$} & \multicolumn{6}{c}{Model periods in seconds ($l$,$k$)} & \multicolumn{1}{c}{$\sigma_{r.m.s.}$\,(s)}\\
\hline
0.525, (7.85) & 11\,400 & 6.9 & 0.7 & 0.5 & 620.4 & 777.1 & 830.3 & 865.4 & 992.3 & 1250.7 & 0.95\\
& & & & & (1,8) & (1,11) & (1,12) & (2,24) & (1,15) & (1,20) & \\[1.5mm]
0.525, (7.85) & 11\,600 & 7.2 & 0.5 & 0.4 & 620.3 & 776.3 & 830.7 & 866.0 & 992.6 & 1251.0 & 1.12\\
& & & & & (2,16) & (1,11) & (1,12) & (2,24) & (1,15) & (1,20) & \\[1.5mm]
0.575, (7.97) & 11\,000 & 6.2 & 0.7 & 0.5 & 619.8 & 777.4 & 831.0 & 868.7 & 994.0 & 1250.5 & 1.45\\
& & & & & (2,17) & (1,12) & (1,13) & (2,25) & (1,16) & (1,21) & \\[1.5mm]
0.585, (7.99) & 11\,000 & 6.4 & 0.5 & 0.4 & 617.9 & 775.4 & 827.1 & 866.9 & 991.6 & 1250.5 & 1.44\\
& & & & & (2,17) & (1,12) & (1,13) & (2,25) & (1,16) & (1,21) & \\[1.5mm]
\textbf{0.615, (8.03)} & \textbf{11\,800} & \textbf{4.0} & \textbf{0.7} & \textbf{0.3} & \textbf{616.4} & \textbf{777.4} & \textbf{828.8} & \textbf{865.6} & \textbf{991.1} & \textbf{1249.7} & \textbf{1.51}\\
& & & & & \textbf{(1,12)} & \textbf{(1,16)} & \textbf{(2,31)} & \textbf{(1,18)} & \textbf{(1,21)} & \textbf{(2,48)} & \\[1.5mm]
\textbf{0.625, (8.04)} & \textbf{11\,000} & \textbf{7.4} & \textbf{0.5} & \textbf{0.2} & \textbf{620.6} & \textbf{780.0} & \textbf{827.4} & \textbf{866.8} & \textbf{994.1} & \textbf{1252.1} & \textbf{1.50}\\
& & & & & \textbf{(1,9)} & \textbf{(1,12)} & \textbf{(2,24)} & \textbf{(1,14)} & \textbf{(2,29)} & \textbf{(1,21)} & \\[1.5mm]
\textbf{0.645, (8.07)} & \textbf{11\,400} & \textbf{5.2} & \textbf{0.9} & \textbf{0.2} & \textbf{618.1} & \textbf{779.7} & \textbf{828.2} & \textbf{865.7} & \textbf{991.9} & \textbf{1251.8} & \textbf{1.33}\\
& & & & & \textbf{(2,21)} & \textbf{(1,15)} & \textbf{(1,16)} & \textbf{(1,17)} & \textbf{(1,20)} & \textbf{(2,45)} & \\[1.5mm]
\textbf{0.650, (8.08)} & \textbf{11\,800} & \textbf{4.6} & \textbf{0.6} & \textbf{0.1} & \textbf{620.3} & \textbf{776.6} & \textbf{827.1} & \textbf{866.2} & \textbf{992.8} & \textbf{1249.6} & \textbf{0.93}\\
& & & & & \textbf{(1,12)} & \textbf{(2,29)} & \textbf{(1,17)} & \textbf{(1,18)} & \textbf{(1,21)} & \textbf{(2,48)} & \\[1.5mm]
\textbf{0.680, (8.13)} & \textbf{11\,800} & \textbf{5.0} & \textbf{0.5} & \textbf{0.1} & \textbf{618.8} & \textbf{778.1} & \textbf{826.6} & \textbf{865.2} & \textbf{992.1} & \textbf{1251.9} & \textbf{1.26}\\
& & & & & \textbf{(1,12)} & \textbf{(2,29)} & \textbf{(1,17)} & \textbf{(1,18)} & \textbf{(1,21)} & \textbf{(2,48)} & \\[1.5mm]
\textbf{0.685, (8.14)} & \textbf{11\,400} & \textbf{4.8} & \textbf{0.6} & \textbf{0.4} & \textbf{618.7} & \textbf{778.6} & \textbf{826.4} & \textbf{866.4} & \textbf{994.1} & \textbf{1249.9} & \textbf{1.12}\\
& & & & & \textbf{(1,12)} & \textbf{(1,16)} & \textbf{(1,17)} & \textbf{(2,32)} & \textbf{(2,37)} & \textbf{(2,47)} & \\[1.5mm]
0.725, (8.2) & 11\,800 & 5.8 & 0.7 & 0.4 & 620.4 & 778.5 & 828.4 & 865.6 & 995.3 & 1249.5 & 1.11\\
& & & & & (1,12) & (1,16) & (1,17) & (1,18) & (1,21) & (1,27) & \\[1.5mm]
0.725, (8.2) & 11\,200 & 5.4 & 0.8 & 0.4 & 616.9 & 778.5 & 828.9 & 865.2 & 992.7 & 1251.6 & 1.26\\
& & & & & (1,12) & (1,16) & (1,17) & (1,18) & (2,37) & (2,47) & \\[1.5mm]
0.740, (8.23) & 11\,800 & 6.0 & 0.6 & 0.4 & 620.0 & 778.9 & 828.1 & 867.0 & 991.3 & 1250.7 & 1.09\\
& & & & & (1,12) & (1,16) & (1,17) & (1,18) & (1,21) & (1,27) & \\[1.5mm]
Reference values: & & & & & & & & & \\[1.5mm]
0.65, (8.08) & 11\,290 & & & & 619.3 & 777.6 & 828.7 & 866.2 & 993.2 & 1250.3 & \\
\hline
\end{tabular}
\end{table*}
We built our model grid varying 5 input parameters of the WDEC: $T_{\rmn{eff}}$, $M_*$, $M_{\rmn{H}}$, $X_\rmn{o}$ (central oxygen abundance) and $X_{\rmn{fm}}$ (the fractional mass point where the oxygen abundance starts dropping). For our first scan we fixed the mass of the helium layer at $M_{\rmn{He}} = 10^{-2}\,M_*$. We built our grid and searched for acceptable solutions in the parameter space determined by the spectroscopic results on the star. \citet{fontaine1} derived $T_{\rmn{eff}} = 11\,290$\,K and $\rmn{log\,} g = 8.08$ for KUV 02464+3239, with estimated flux calibration uncertainties of $\pm200$\,K and 0.05\,dex in $\rmn{log\,} g$ \citep{fontaine2}. However, this uncertainty estimate does not include contributions from different spectra, modeling uncertainties and different fitting procedures. We make some allowance for this by covering a range of $\pm500$\,K in $T_{\rmn{eff}}$ and $\pm0.1$\,dex in $\rmn{log\,} g$. Accordingly, our grid covers the range 10\,800 to 11\,800\,K ($\sim$11\,290\,$\rmn{K}\pm$2.5$\sigma$). To estimate the mass range to be covered, we used the seismological masses determined for DAs by \citet{bradley3}. According to his results, the mass of a DA white dwarf with $\rmn{log\,} g = 8.08$ is about $0.65\,M_{\sun}$. Our grid covers the $M_* = 0.525 - 0.74\,M_{\sun}$ mass range ($\rmn{log\,} g \sim 7.9 - 8.2$), which means that we investigated model solutions at $\rmn{log\,} g = 8.08\pm$2$\sigma$. We varied the mass of the hydrogen layer between $10^{-4}$ and $10^{-8}\,M_*$. The core parameters were varied between $X_\rmn{o} = 0.5 - 0.9$ and $X_{\rmn{fm}} = 0.1 - 0.5$, taking into account the carbon-oxygen profiles derived by \citet{salaris1}. Step sizes were 200\,K ($T_{\rmn{eff}}$), 0.005\,$M_{\sun}$ ($M_*$), $10^{-0.2}\,M_*$ ($M_{\rmn{H}}$) and 0.1 ($X_\rmn{o}$ and $X_{\rmn{fm}}$).
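The bookkeeping of such a grid can be sketched as follows. This is only an illustration of the parameter-space size, not the WDEC driver itself; the integer-based axis construction is our own device to avoid floating-point drift in the step sizes.

```python
from itertools import product

# Grid axes as described above (first scan, fixed He layer mass).
teff = list(range(10800, 11801, 200))           # K, step 200
mass = [m / 1000 for m in range(525, 741, 5)]   # M_sun, step 0.005
log_mh = [-e / 10 for e in range(40, 81, 2)]    # log10(M_H/M_*), step -0.2
x_o = [x / 10 for x in range(5, 10)]            # central O abundance, step 0.1
x_fm = [x / 10 for x in range(1, 6)]            # fractional mass point, step 0.1

# One tuple per equilibrium model to evolve and pulsate.
grid = list(product(teff, mass, log_mh, x_o, x_fm))
print(len(grid))   # -> 138600 parameter combinations
```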
\subsection[]{Best-fit models}
While running the WDEC we allowed the modes to be $l = 1$ or 2. Since there are no previous asteroseismological analyses of this star or other constraints on the $l$ values of the modes, we let all 6 modes be $l = 1$ or 2 in the fitting procedure. The variations in pulsation amplitudes rule out a meaningful application of different weights on the amplitudes \citep{castanheira1}.
Assuming better visibility of $l = 1$ modes over $l = 2$ modes (see e.g. \citealt{castanheira1} and references therein), we prefer models that gave at least 3 $l = 1$ solutions with low $\sigma_{r.m.s.}$ values. We summarize the parameters of the best-fitting models in Table~\ref{best.fit}. It shows the model parameters, calculated pulsation periods with their $l$ and $k$ values and the corresponding $\sigma_{r.m.s.}$ value. We find that the mode identifications of the best-fitting models are typically quite different from model to model. This is due to the fact that over the span of 400\,K, the periods of the relatively high overtone modes observed can change by about 40\,s. As a result, the cooling of models for a given structure causes different $l=1$ or 2 modes to fit a given observed period best. Note that since our current models use about 400 zones, the uncertainty of the model periods is $\sim$1\,s \citep{brassard1}, therefore, differences in periods below 1\,s are within the `noise' of our modeling accuracy. The presence of shorter period modes could give stronger constraints on the stellar structure, resulting in fewer solutions with low $\sigma_{r.m.s.}$. However, within the $\sim$1\,s uncertainty of our modeling we can map the possible groups of models in the parameter space determined by spectroscopy.
\subsubsection[]{Results for the main stellar parameters}
Models with stellar masses within the 1\,$\sigma$ $\rmn{log\,} g$ range are typeset in boldface in Table~\ref{best.fit}. We found 6 models out of the 13 that fulfil this criterion. The best fit overall, with $\sigma_{r.m.s.} = 0.93$\,s, has $M_* = 0.65\,M_{\sun}$, which corresponds to the mass expected from the spectroscopic $\rmn{log\,} g$ value. Fig.~\ref{grid} shows the 13 selected models in the $T_{\rmn{eff}}$ -- $M_*$ plane. Closed and open circles denote the models within and outside the 1\,$\sigma$ $\rmn{log\,} g$ range, respectively. The spectroscopic solution with the corresponding uncertainties determined by Fontaine et al. (2001, 2003) is also given in Fig.~\ref{grid}.
Assuming that the largest amplitude modes at 829, 866 and 993\,s are $l = 1$, we favour the models which give this solution. Within the 1\,$\sigma$ $\rmn{log\,} g$ range we find the 0.645, 0.650 and 0.680\,$M_{\sun}$ models in accordance with this criterion. Taking into account that $T_{\rmn{eff}} = 11\,290 \pm 200$\,K is expected also by spectroscopy, the 11\,800\,K solutions are over the 1\,$\sigma$ limit and seem to be too hot. This means that the 0.645\,$M_{\sun}$ model would be our best choice.
Table~\ref{best.fit} shows that the hydrogen layer masses of our models are between $10^{-4}$ and $4\,\rmn{x}\,10^{-8}\,M_*$. Restricting ourselves to the 6 models selected by $\rmn{log\,} g$, we find that 5 of them have $M_\rmn{H} = 10^{-4} - 6\,\rmn{x}\,10^{-6}\,M_*$. We can give additional constraints on the hydrogen layer mass if we take into account only the three `favoured' models between 0.645 and 0.680\,$M_{\sun}$. In this case $M_\rmn{H} = 2.5\,\rmn{x}\,10^{-5} - 6.3\,\rmn{x}\,10^{-6}\,M_*$.
\begin{figure}
\begin{center}
\includegraphics[width=9.2cm]{bognar_fig7.eps}
\end{center}
\caption{The 13 models of Table~\ref{best.fit} in the $T_{\rmn{eff}}$ -- $M_*$ plane. Closed and open circles denote the models within and out of the 1\,$\sigma$ $\rmn{log\,} g$ range determined by spectroscopy. The black square denotes the spectroscopic solution with its uncertainties given by Fontaine et al. (2001, 2003). The grid with dashed lines corresponds to the step sizes of our model grid.}
\label{grid}
\end{figure}
\subsubsection[]{Mode identification}
According to our selection criterion, the models have mostly $l = 1$ solutions for the pulsation modes. Models with solely $l = 1$ modes exist only outside the 1\,$\sigma$ $\rmn{log\,} g$ range ($M_* = 0.725$ and $0.74\,M_{\sun}$).
Considering the full range of best-fit models, we cannot uniquely assign an $l$ value to any mode: at least one model has an $l = 2$ solution for each period. We cannot provide unique assignments for the overtone numbers either; however, the range of stellar and hydrogen layer masses of our models constrains the $k$ values.
In summary, we can say that the observed pattern of modes can be described with high accuracy assuming mostly $l = 1$ modes. Considering the 3 `favoured' models, the modes at 619 and 778\,s are always assigned different $l$ values, and the 1250\,s mode is always $l = 2$ in all three of these models.
\subsubsection[]{Varying the He layer mass}
In our second scan, we varied the mass of the He layer in the grid from $10^{-2}$ to $10^{-3}\,M_*$ in 3 steps. This way we could check the sensitivity of our solutions to the thickness of the He layer. In this case we ran a more sophisticated version of the WDEC, with core composition profiles based on the stellar evolution calculations of \citet{salaris1}. This grid covers the temperature and hydrogen layer mass ranges we used previously ($T_{\rmn{eff}} = 10\,800 - 11\,800$\,K and $M_\rmn{H} = 10^{-4} - 10^{-8}\,M_*$). We varied the stellar mass between $0.60 - 0.69\,M_{\sun}$. Step sizes were again 200\,K ($T_{\rmn{eff}}$), 0.005\,$M_{\sun}$ ($M_*$) and $10^{-0.2}\,M_*$ ($M_{\rmn{H}}$).
According to our first criterion we searched for models with at least 3 $l = 1$ solutions and low $\sigma_{r.m.s.}$ values. We found two acceptable models at 0.61 and 0.635\,$M_{\sun}$. Both models have the same helium layer masses ($10^{-3}\,M_*$). However, none of them have all of the large amplitude modes as $l = 1$. The mode at 993\,s is always $l = 2$, which would imply a much larger physical amplitude for this mode since it has the largest light amplitude. These two models have hydrogen layers
with masses of $2.5\,\rmn{x}\,10^{-6}$ and $10^{-6}\,M_*$. These values are at the thin side of the range given by the previously selected models. Although we cannot rule out helium layer masses thinner than $10^{-2}\,M_*$, our favouring the three largest amplitude modes being $l = 1$ implies that $M_{\rmn{He}} = 10^{-2}\,M_*$ is the preferred value.
\subsubsection[]{Seismological parallax}
From the luminosity of a selected model we can give an estimate of the distance and parallax of the star. Unfortunately, there is no trigonometric parallax measurement for KUV 02464+3239. However, we can still predict what the parallax should be.
The first step is to derive the bolometric magnitude ($M_{\rmn{bol}}$) of the star:
\begin{equation}
M_{\rmn{bol}} = M_{\sun\,\rmn{bol}} - 2.5\,\rmn{log}\,(L/L_{\sun})\,,
\end{equation}
\noindent where $M_{\sun\,\rmn{bol}} = +4.75$ \citep{allen1} and $L_{\sun} = 3.854\,\rmn{x}\,10^{33}$\,erg/s \citep{sackmann1}. $M_{\rmn{bol}}$ can be related to the absolute visual magnitude using the bolometric correction: $M_V = M_{\rmn{bol}} - \rmn{BC}$. Bergeron, Wesemael \& Beauchamp (1995a) performed colour-index and magnitude calculations using DA and DB model grids. According to Table 1 of \citet{bergeron1}, $\rmn{BC} = -0.441$ and $-0.611$ at temperatures of 11\,000 and 12\,000\,K, respectively. In view of the apparent visual magnitude $m_v = 16.07$ and using the distance modulus formula, we can derive the distance and the parallax of KUV 02464+3239.
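These steps translate directly into a short calculation. The sketch below is illustrative: the linear interpolation of the bolometric correction between the two tabulated temperatures and the example inputs (a mid-range luminosity with the $T_{\rmn{eff}}$ of one hot model) are our assumptions, so the printed value need not match the per-model parallaxes to better than a few tenths of a mas.

```python
M_BOL_SUN = 4.75
m_v = 16.07                 # apparent visual magnitude of KUV 02464+3239

def bc(teff):
    # linear interpolation of the Bergeron et al. (1995a) DA bolometric
    # corrections between 11,000 K (-0.441) and 12,000 K (-0.611)
    return -0.441 + (teff - 11000.0) / 1000.0 * (-0.611 + 0.441)

def parallax_mas(log_l, teff):
    m_bol = M_BOL_SUN - 2.5 * log_l            # log_l = log10(L/L_sun)
    m_abs_v = m_bol - bc(teff)                 # M_V = M_bol - BC
    d_pc = 10 ** ((m_v - m_abs_v + 5.0) / 5.0) # distance modulus
    return 1000.0 / d_pc                       # parallax in mas

# e.g. log L/L_sun = -2.63 (mid-range of the favoured models), 11,800 K
print(round(parallax_mas(-2.63, 11800), 1))    # -> 14.7
```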
The 6 models selected by $\rmn{log}\,g$ have $\rmn{log} \,L/L_{\sun} = -2.56$ to $-2.71$, where the most and least luminous models are at 0.615 and 0.625\,$M_{\sun}$, respectively. In these cases, the calculated seismological parallax values are between $13.5 - 15.4$\,mas (milliarcseconds), with a 14.6\,mas average. Restricting ourselves to the 3 `favoured' models with $\rmn{log}\,L/L_{\sun} = -2.6$ to $-2.66$, the parallax angles are 14.8, 14.2 and 14.7\,mas for the 0.645, 0.650 and 0.680\,$M_{\sun}$ models, respectively. These results imply a distance of $\sim$70\,pc for KUV 02464+3239.
\subsection[]{Mode trapping}
\begin{figure}
\begin{center}
\includegraphics[width=9.5cm]{bognar_fig8.eps}
\end{center}
\caption{Period spacing diagrams of the 6 models selected by $\rmn{log}\,g$. Filled and open circles denote the model periods with $l=1$ and $l=2$ solutions, respectively. Vertical dashed lines are drawn at the observed period values. Model periods found to be closest to the observed ones are denoted with open squares.}
\label{periodsp}
\end{figure}
KUV 02464+3239 was mentioned by \citet{fontaine1} as a photometric twin of another cool DAV, GD 154. In spite of the similarities between the shapes of their light curves, we find significant differences if we compare their pulsational properties. The Fourier spectrum of GD 154 is dominated by three normal modes and their harmonics \citep{pfeiffer1}. Subharmonics of the 1186\,s mode were also detected by \citet{robinson1}. In the case of KUV 02464+3239, even with single-site observations we could determine 6 modes, and in the whole light curve only the first harmonic of the dominant mode was found. The clear presence of further harmonics and subharmonics can be seen only in some parts of the light curve \citep{bognar1}.
The small number of GD 154's eigenmodes compared to other cool DAV's implies a very efficient mode selection mechanism -- possibly trapping of the modes by a very thin ($M_\rmn{H} \sim 10^{-10}\,M_*$) hydrogen layer \citep{pfeiffer1}. We can use our models on KUV 02464+3239 to test whether some of the modes are trapped.
One possible way to find trapped modes is to construct period spacing diagrams and search for minima. Mode trapping causes departures from the uniform period spacings of consecutive $k$ modes. Trapped modes have lower kinetic energies and, as \citet{bradley5} showed, minima in period spacing diagrams correspond to minima in kinetic energies. Accordingly, we made a simple statistical fit on the 6 models selected by $\rmn{log}\,g$: we counted how often a period falls near or at a minimum of a model's period spacing diagram. We found that the largest amplitude modes (at 829, 866 and 993\,s) frequently occur near or at period spacing minima, and in these cases the $l = 1$ solution is preferred. The 1250\,s mode has not been found at a minimum in any of the cases. Fig.~\ref{periodsp} shows the period spacing diagrams of the 6 selected models. Filled and open circles denote the model periods with $l=1$ or $l=2$ solutions, respectively. Vertical dashed lines are drawn corresponding to the observed period values.
\section[]{Summary and conclusions}
\label{sum}
We have presented the analyses of our observations on the DA variable KUV 02464+3239.
Using analyses of data subsets we accepted 6 frequencies as normal modes of pulsation. With the analysis of the whole dataset we determined 7 additional frequencies as possible modes. Since remarkable amplitude variations (up to 30 -- 60\,\%) occurred on a time scale of weeks, we performed tests for the possible sources of the amplitude variability. Neither beating of the 6 modes nor the effect of noise resulted in large enough variations. The test cases of closely spaced modes or rotational triplets revealed large amplitude variability, but only with unrealistic amplitude and phase relations. A real amplitude variation seems plausible.
The best asteroseismological models have $M_* = 0.645, 0.650$ and $0.680\,M_{\sun}$ and the mass of their hydrogen layer is between $6.3 \times 10^{-6}$ and $2.5 \times 10^{-5}\,M_*$. Although we cannot exclude thinner helium layers, $M_{\rmn{He}} = 10^{-2}\,M_*$ models reproduce the observed properties of the star better. Asteroseismological parallax values calculated from the luminosities of the 3 models are between 14.2 and 14.8\,mas. Using period spacing diagrams we conclude that mode trapping can explain the high amplitude modes, but is not required.
The best way to discriminate between models would be to obtain observational constraints on the $l$ values of at least the large amplitude modes. This may be possible using time-dependent UV spectra or chromatic amplitudes, but due to the faintness of the star and its complicated pulsation spectrum this would be a challenging task. Our results show the effectiveness of long, single-site observations. Multisite campaign(s) on KUV 02464+3239 would enable us to find additional pulsation modes and possibly signs of rotational splittings, and this would give further constraints on the stellar structure.
By resolving the closely spaced modes expected in the case of KUV 02464+3239, we could exclude beating of modes as the cause of the observed amplitude variability; our tests on the amplitudes already indicate this possibility. This would mean that we see real variations in the energy content of the individual modes. In this case the observed variability is presumably due to non-linear processes like mode coupling and/or interaction between pulsation and convection. Finding the real physical explanation of amplitude variations is a key issue not only in the case of KUV 02464+3239, but also for other long-period white dwarf pulsators.
\section*{Acknowledgments}
The authors would like to thank the referee, S.~O. Kepler for his helpful comments and suggestions. This research was partly supported by HSO project No.\,98022.
\section{Introduction} \label{Sec:Intro}
Two points in \Teich space determine a unique \Teich geodesic that connects
them. One would like to understand the behavior of this geodesic and how
the given data, two end points $x,y$ in the \Teich space $\mathcal{T}(S)$ of a surface $S$,
translate to concrete information about the geodesic segment $[x,y]$
connecting them. Much is known about this relationship. (See
\cite{rafi:SC, rafi:CM, rafi:LT, rafi:TT}.) The first part of the paper
is devoted to organizing and improving some of these results
which are scattered through several papers. Accumulation of these
results provides a complete (coarse) description of a \Teich geodesic.
One can summarize this as follows:
\begin{Thm} \label{Thm:Description}
Let $\mathcal{G}\from \mathbb{R} \to \mathcal{T}(S)$ be a \Teich geodesic. For every subsurface $Y$,
there is an interval of times $I_Y$ (possibly empty) where $Y$ is
\emph{isolated} at $\mathcal{G}_t$, for $t \in I_Y$.
During this interval, the restriction of $\mathcal{G}$ to $Y$ behaves like a
geodesic in $\mathcal{T}(Y)$. Outside of $I_Y$, the projection
to the curve complex of $Y$ moves by at most a bounded amount.
\end{Thm}
In fact, we know for which subsurfaces $Y$ the interval $I_Y$ is non-empty,
and in what order these intervals appear along $\mathbb{R}$.
Applying the theorem inductively, we can describe the restriction
of the geodesic to $Y$ during $I_Y$ (\secref{Proj}).
In the rest of the paper we consider some of the implications of
the above theorem and examine to what extent \Teich geodesics
behave like geodesics in a hyperbolic space. It is known that the \Teich
space is not hyperbolic; Masur showed that \Teich space is not $\delta$--hyperbolic
\cite{masur:NH} and Minsky showed that the thin part of \Teich space
has a product like structure that resembles a space with positive curvature
\cite{minsky:PR}. However, there is a strong analogy between the geometry
of \Teich space and that of a hyperbolic space. For example,
the isometries of \Teich space are either hyperbolic, elliptic or
parabolic \cite{thurston:GD, bers:EP} and the geodesic flow is exponentially
mixing \cite{masur:IE, veech:TGF}. There is also a sense in which \Teich space
is hyperbolic relative to its thin parts; Masur and Minsky showed that the electrified
\Teich space is $\delta$--hyperbolic \cite{minsky:CCI}.
Each application of \thmref{Description} presented in this paper examines
how the \Teich space equipped with the \Teich metric is similar to or different
from a relatively hyperbolic space. Apart from their individual utility, these results
also showcase how one can apply \thmref{Description} to answer geometric problems
in \Teich space.
As the first application, we show that \Teich geodesics do not \emph{backtrack}.
This is a generalization of a theorem of Masur and Minsky \cite{minsky:CCI}
stating that the shadow of a \Teich geodesic to the curve complex is an
un-parametrized quasi-geodesic. We show:
\begin{Thm} \label{Thm:Shadow}
The projection of a \Teich geodesic to the complex of curves of
any subsurface $Y$ of $S$ is an un-parametrized quasi-geodesic
in the curve complex of $Y$.
\end{Thm}
This produces a sequence of markings, analogous to a resolution of a hierarchy
\cite{minsky:CCI}, which is obtained directly from a \Teich geodesic.
As the second application, we examine the fellow traveling properties of
\Teich geodesics. We show:
\begin{Thm} \label{Thm:Fellow-Travel}
Consider a \Teich geodesic segment $[x,y]$ with end points $x$ and $y$
in the thick part. Any other geodesic segment that starts near $x$ and
ends near $y$ fellow travels $[x,y]$.
\end{Thm}
In contrast to the above, we can provide examples where:
\begin{Thm} \label{Thm:Counter}
When the end points of a geodesic segment are allowed to be in the thin part,
the above theorem does not hold.
\end{Thm}
As our third application, we prove that geodesic triangles are slim
while they pass through the thick part of \Teich space, suggesting similarities
between \Teich space and relatively hyperbolic groups.
\begin{Thm} \label{Thm:Thin}
For a geodesic triangle $\triangle(x,y,z)$ in \Teich space, if
a large segment of $[x,y]$ is in the thick part, then it is either
close to $[x,z]$ or $[y,z]$.
\end{Thm}
\subsection*{Organization of the paper}
In \secref{Comb-Description}, we make the notion of coarsely describing
a point in \Teich space precise. This means to record enough information
so that one can estimate the length of any curve on the surface and the
distance between two points in \Teich space. It turns out that it is sufficient
to keep track of which curves are short as well as the length and
the twisting parameter of the short curves.
A \Teich geodesic is the image of a quadratic differential under the
\Teich geodesic flow. In \secref{Quadratic} we discuss
how one can translate the information given by the flat structure of a
quadratic differential to obtain the combinatorial information needed
to describe a point in $\mathcal{T}(S)$.
The precise statement for the description of a \Teich geodesic and some
related statements are given in \secref{Proj}. \thmref{Shadow} is proven in
\secref{Backtrack}, Theorems~\ref{Thm:Fellow-Travel} and \ref{Thm:Counter}
are proven in \secref{Fellow-Travel}, and \thmref{Thin} is proven
in \secref{Thin}.
\subsection*{Notation}
The notation $A \stackrel{{}_\ast}{\asymp} B$ means that the ratio $A/B$ is bounded both
above and below by constants depending on the topology of $S$ only.
When this is true we say $A$ is \emph{comparable} with $B$ or
$A$ and $B$ are comparable. The notation $A\stackrel{{}_\ast}{\prec} B$ means that $A/B$
is bounded above by a constant depending on the topology of $S$. Similarly,
$A \stackrel{{}_+}{\asymp} B$ means $|A-B|$ is uniformly bounded and
$A \stackrel{{}_+}{\prec} B$ means $(A-B)$ is uniformly bounded above, in both cases
by a constant that depends only on the topology of $S$.
\subsection*{Acknowledgements} I would like to thank Saul Schleimer
for his great help and encouragement.
\section{Combinatorial description of a point in \Teich space}
\label{Sec:Comb-Description}
In this section, we discuss the notion of a marking which provides
a combinatorial description of a point in \Teich space
(see \defref{Marking}). Given a description of a point
$x$ in \Teich space we are able to estimate the extremal length of
any curve at $x$ (\thmref{Length-Formula}). Also, given the description
of two points $x,y \in \mathcal{T}(S)$, we are able to estimate the \Teich distance
between them (\thmref{Distance}). We first establish terminology and the
definitions of some basic concepts.
\subsection{\Teich metric}
Let $S$ be a compact surface of hyperbolic type possibly with boundary.
The \Teich space $\mathcal{T}(S)$ is the space of all conformal structures
on $S$ up to isotopy. In this paper, we consider only the \Teich
metric on $\mathcal{T}(S)$. For two points $x,y \in \mathcal{T}(S)$ the \Teich
distance between them is defined to be
$$
d_\mathcal{T}(x,y) = \frac 12 \log \inf_f K_f,
$$
where $f \from x \to y$ ranges over all quasi-conformal maps
from $x$ to $y$ in the correct isotopy class and $K_f$ is
the quasi-conformal constant of the map $f$. (See
\cite{gardiner:QT, hubbard:TT} for background information.)
A geodesic in this metric is called a \Teich geodesic.
\subsection*{Arcs and curves}
By a \emph{curve} in $S$ we mean a free isotopy class of an essential simple
closed curve and by an \emph{arc} in $S$ we mean a proper isotopy class of
an essential simple arc. In both cases,
\emph{essential} means that the given curve or arc is neither isotopic
to a point nor can it be isotoped to $\bdy S$. The definition of an arc
is slightly different when $S$ is an annulus. In this case, an \emph{arc}
is an isotopy class of a simple arc connecting the two boundaries of $S$,
relative to the endpoints of the arc. We use $\I(\alpha, \beta)$ to denote the
geometric intersection number between arcs or curves $\alpha$ and $\beta$
and we refer to it simply as the intersection number.
Define the arc and curve graph $\AC(S)$ of $S$ as follows: the vertices
are essential arcs and curves in $S$ and the edges are pairs of vertices that
have representatives with disjoint interiors. Giving the edges length one turns
$\AC(S)$ into a connected metric space. The following is contained in
\cite{minsky:CCI, minsky:CCII, klarreich:BC}
\begin{theorem}
The graph $\AC(S)$ is locally infinite, has infinite diameter and is Gromov
hyperbolic. Furthermore, its boundary at infinity can be identified with
$\EL(S)$, the space of ending laminations of $S$.
\end{theorem}
Recall that $\EL(S)$ is the space of \emph{irrational} laminations
in $\PML(S)$ after forgetting the measure. An irrational lamination is one
that has non-zero intersection number with every curve.
\subsection*{Measuring the twist}
It is often desirable to measure the number of times a curve
$\gamma$ twists around a curve $\alpha$. This requires us to choose
a notion of \emph{zero twisting}. The key example is the case
where $S$ is an annulus with a core curve $\alpha$. Then $\AC(S)$ is
quasi-isometric to $\mathbb{Z}$. Choose an arc $\tau \in \AC(S)$ to serve as
the origin. Then the \emph{twist} of $\gamma \in \AC(S)$ about
$\alpha$ is
$$
\twist_\alpha(\gamma, \tau) = \I(\gamma, \tau),
$$
relative to choice of origin $\tau$.
In general, if $\alpha$ is a curve in $S$ let $S^\alpha$ be the
corresponding annular cover. A notion of zero twisting
around $\alpha$ is given by a choice of arc
$\tau \in \AC(S^\alpha)$. Then, for every
$\gamma \in \AC(S)$ intersecting $\alpha$ essentially,
we define
$$
\twist_\alpha(\gamma, \tau) = \I(\tilde \gamma, \tau),
$$
where $\tilde \gamma$ is any essential lift of $\gamma$
to $S^\alpha$. Since there may be several choices for
$\tilde \gamma$, this notion of twisting is well defined up
to an additive error of at most one.
A geometric structure on $S$ often naturally defines a notion
of zero twisting. For example, for a given point $x \in \mathcal{T}(S)$
and a curve $\alpha$, we can define twisting around $\alpha$ in $x$ as
follows: lift $x$ to the conformal structure $x^\alpha$ on $S^\alpha$.
Consider the hyperbolic metric associated to $x^\alpha$ and choose $\tau$ in
$x^\alpha$ to be any hyperbolic geodesic perpendicular to $\alpha$.
Now, for every curve $\gamma$ intersecting $\alpha$ non-trivially, define
$$
\twist_\alpha(\gamma, x) = \twist_\alpha(\gamma, \tau) = \I(\tilde \gamma, \tau).
$$
Similarly, for a quadratic differential $q$ on $S$ we can define
$\twist_\alpha(\gamma, q)$; lift $q$ to a singular Euclidean metric $q^\alpha$
and choose $\tau$ to be any arc perpendicular to $\alpha$ in the Euclidean metric.
(See \secref{Quadratic} for the definition of the Euclidean metric associated
to $q$.)
Similarly, any foliation, arc or curve $\lambda$
intersecting $\alpha$ essentially defines a notion of zero twisting.
Since the intersection is essential the lift $\lambda^\alpha$ of
$\lambda$ to $S^\alpha$ contains an essential arc which we may
use as $\tau$. Anytime two geometric objects define notions of zero
twisting, we can talk about the relative twisting between them.
For example, for two quadratic differentials $q_1$ and $q_2$ and a curve
$\alpha$, let $\tau_1$ be the arc in $q_1^\alpha$ that is perpendicular
to $\alpha$ and $\tau_2$ be the arc in $q_2^\alpha$ that is perpendicular
to $\alpha$. Considering both these arcs in $S^\alpha$, it makes sense
to talk about their geometric intersection number. We define:
$$
\twist_\alpha(q_1,q_2) =\I(\tau_1, \tau_2).
$$
The expression $\twist_\alpha(x_1,x_2)$ for Riemann surfaces $x_1$
and $x_2$ is defined similarly.
\subsection*{Marking}
Our definition of \emph{marking} differs slightly from that of
\cite{minsky:CCII} and contains more information.
\begin{definition} \label{Def:Marking}
A marking on $S$ is a triple
$\mu=(\mathcal{P}, \{l_\alpha\}_{\alpha \in\mathcal{P}}, \{\tau_\alpha\}_{\alpha \in \mathcal{P}})$
where
\begin{itemize}
\item $\mathcal{P}$ is a pants decomposition of $S$.
\item For $\alpha \in \mathcal{P}$, $l_\alpha$ is a positive real number
which we think of as the length of $\alpha$.
\item For $\alpha \in \mathcal{P}$, $\tau_\alpha$ is an arc in the annular cover
$S^\alpha$ of $S$ associated to $\alpha$, establishing a notion of
zero twisting around $\alpha$.
\end{itemize}
\end{definition}
For a curve $\alpha$ in $S$ and $x \in \mathcal{T}(S)$, we define the extremal length
of $\alpha$ in $x$ to be
$$
\Ext_x(\alpha) = \sup_{\sigma \in [x]} \frac{\ell^2_\sigma(\alpha)}{\area(\sigma)}.
$$
Here, $\sigma$ ranges over all metrics in the conformal class $x$ and
$\ell_\sigma(\alpha)$ is the infimum of the $\sigma$--length of
all representatives of the homotopy class of the curve $\alpha$.
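As an illustration (a standard computation, not part of the original text), one can evaluate this quantity for a flat cylinder. If $x$ is an annulus realized as a Euclidean cylinder of circumference $c$ and height $h$, with core curve $\alpha$, then taking $\sigma$ to be the flat metric gives
$$
\Ext_x(\alpha) \geq \frac{\ell^2_\sigma(\alpha)}{\area(\sigma)} = \frac{c^2}{c\,h} = \frac{c}{h},
$$
and a length--area argument shows that the flat metric is in fact extremal, so $\Ext_x(\alpha) = c/h = 1/\Mod(x)$. This reciprocal relation between extremal length and modulus of an annulus is used repeatedly below.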
Using the extremal length, we define a map from $\mathcal{T}(S)$ to the space of
markings as follows: For any $x\in\mathcal{T}(S)$, let $\mathcal{P}_x$ be the pants
decomposition with the shortest extremal length in $x$ obtained using the
greedy algorithm. For $\alpha \in \mathcal{P}_x$, let $l_\alpha = \Ext_x(\alpha)$.
As in the discussion of zero twist above, let $\tau_\alpha$ be any geodesic
in $S^\alpha$ that is perpendicular to $\alpha$ in $x^\alpha$. We call this the
\emph{short marking at $x$} and denote it by $\mu_x$.
As mentioned before, we can compute the extremal length of any curve
in $x$ from the information contained in $\mu_x$ up to a multiplicative error.
It follows from \cite{minsky:PR} that:
\begin{theorem}
\label{Thm:Length-Formula}
For every curve $\gamma$, we have
$$
\Ext_x(\gamma) \stackrel{{}_\ast}{\asymp} \sum_{\alpha \in \mathcal{P}}
\left(\frac1{l_\alpha}
+ l_\alpha \cdot \twist_\alpha(\gamma, \tau_\alpha)^2 \right) \I(\alpha, \gamma)^2.
$$
\end{theorem}
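To illustrate the formula (a hedged sanity check, not taken from the original text): suppose $\gamma$ intersects exactly one pants curve $\alpha$, exactly once, and has no twisting relative to $\tau_\alpha$. Then the sum reduces to
$$
\Ext_x(\gamma) \stackrel{{}_\ast}{\asymp} \frac 1{l_\alpha},
$$
reflecting the fact that $\gamma$ must cross the collar of the short curve $\alpha$, an annulus whose modulus is comparable to $1/l_\alpha$. Applying $n$ Dehn twists about $\alpha$ to $\gamma$ changes the twisting parameter by roughly $n$ and yields $\Ext_x(\gamma) \stackrel{{}_\ast}{\asymp} 1/l_\alpha + l_\alpha\, n^2$; that is, the extremal length grows quadratically in the amount of twisting.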
\subsection*{Subsurface Projection}
To compute the distance between two points $x,y \in \mathcal{T}(S)$ we need
to introduce the concept of subsurface projection. We call a collection
of vertices in $\AC(S)$ having disjoint representatives a \emph{multicurve}.
For every proper subsurface $Y \subset S$ and any multicurve
$\alpha$ in $\AC(S)$ we can project $\alpha$ to $Y$ to obtain
a multicurve in $\AC(Y)$ as follows: let $S^Y$ be the cover of $S$ corresponding
to $\pi_1(Y) < \pi_1(S)$ and identify the Gromov compactification of $S^Y$ with $Y$.
(To define the Gromov compactification, one needs first to pick a metric on
$S$. However, the resulting compactification is independent of the metric.
Since $S$ admits a hyperbolic metric, every essential curve in $S$ lifts to an
arc which has well-defined endpoints in the Gromov boundary of $S^Y$.)
Then for $\alpha \in \AC(S)$, the projection $\alpha \rY$ is
defined to be the set of lifts of $\alpha$ to $S^Y$ that are essential curves or
arcs. Note that $\alpha \rY$ is a set of diameter one in $\AC(Y)$ since all the
lifts have disjoint interiors.
For markings $\mu$ and $\nu$, define
$$
d_Y(\mu,\nu)= \diam_{\AC(Y)}(\mathcal{P} \rY \cup \mathcal{R} \rY)
$$
where $\mathcal{P}$ and $\mathcal{R}$ are the pants decompositions for $\mu$ and
$\nu$ respectively.
\subsection*{Distance Formula}
In what comes below, the function $[a]_C$ is equal to $a$ if $a\geq C$ and it
is zero otherwise. Also, we modify the $\log(a)$ function to be one for
$a\leq e$. We can now state the distance formula:
\begin{theorem}[Theorem 6.1, \cite{rafi:CM}] \label{Thm:Distance}
There is a constant $C>0$ so that the following holds.
For $x,y \in \mathcal{T}(S)$ let $\mu_x= (\mathcal{P}, \{l_\alpha\}, \{ \tau_\alpha\})$ and
$\mu_y= (\mathcal{R}, \{k_\beta\}, \{ \sigma_\beta\})$ be the
associated short markings.
Then,
\begin{align}
d_\mathcal{T}(x,y) \asymp
&\sum_Y \Big[ d_Y(\mu_x, \mu_y )\Big]_C +
\sum_{\gamma \not \in \mathcal{P} \cup \mathcal{R}}
\Big[ \log d_\gamma (\mu_x, \mu_y )\Big]_C \notag \\
& +\sum_{\alpha \in \mathcal{P} \setminus \mathcal{R}} \log \frac 1{l_\alpha} +
\sum_{\beta \in \mathcal{R} \setminus \mathcal{P}} \log \frac 1{k_\beta} \label{Eq:Distance} \\
& + \sum_{\gamma \in \mathcal{P} \cap \mathcal{R}}
d_\mathbb{H} \Big( \big(1/l_\gamma, \twist_\gamma(x,y) \big),
\big(1/k_\gamma, 0\big)\Big). \notag
\end{align}
Here, $d_\mathbb{H}$ is the distance in the hyperbolic plane.
\end{theorem}
\begin{remark}
In the above theorem, $C$ can be taken to be as large as needed. However,
increasing $C$ will increase the constants hidden inside $\asymp$.
Let ${\gothic L}$ be the left hand side of \eqnref{Distance} and
${\gothic R}$ be the right hand side. Then, a stronger version of this theorem
can be stated as follows: There is $C_0>0$, depending only on the
topology of $S$, and for every $C\geq C_0$ there are constants $A$ and $B$
so that
$$
\frac {\gothic L} A -B \leq {\gothic R} \leq A \, {\gothic L} + B.
$$
\end{remark}
As a corollary, we have the following criterion for showing two
points in \Teich space are a bounded distance apart.
Let $\epsilon_0>\epsilon_1>0$, let $\mathcal{A}_x$ be a set of curves in $x$ that have
extremal length less than $\epsilon_0$, and assume that every other curve in
$x$ has extremal length larger than $\epsilon_1$. Let $\epsilon'_0, \epsilon'_1$ and $\mathcal{A}_y$
be similarly defined for $y$.
\begin{corollary} \label{Cor:Bounded-Distance}
Assume, for $x,y \in \mathcal{T}(S)$, that
\begin{enumerate}
\item $\mathcal{A}_x = \mathcal{A}_y$
\item For any subsurface $Y$ that is not an annulus with core curve in $\mathcal{A}_x$,
$d_Y(\mu_x, \mu_y)=O(1)$.
\item For $\alpha \in \mathcal{A}_x$, $\Ext_x(\alpha) \stackrel{{}_\ast}{\asymp} \Ext_y(\alpha)$.
\item For $\alpha \in \mathcal{A}_x$,
$\displaystyle \twist_\alpha(x,y) = O\left( 1/ {\Ext_x(\alpha)}\right)$.
\end{enumerate}
Then, $d_\mathcal{T}(x,y)=O(1)$.
\end{corollary}
\begin{proof}
Condition $(2)$ implies that the first two terms in Equation \eqref{Eq:Distance}
are zero. Since $\mathcal{A}_x=\mathcal{A}_y$, curves in $\mathcal{P}\setminus \mathcal{R}$ and
$\mathcal{R} \setminus \mathcal{P}$ have lengths that are bounded below. Hence the third
and the fourth terms of \eqnref{Distance} are uniformly bounded. The conditions
on the lengths and twisting of curves in $\mathcal{A}_x$ imply that the last term is
uniformly bounded; for points $p, q \in \mathbb{H}$, $p=(p_1,p_2)$, $q=(q_1,q_2)$, if
\begin{equation*}
(p_1 -q_1) \stackrel{{}_\ast}{\asymp} p_2 \stackrel{{}_\ast}{\asymp} q_2
\qquad\text{then}\qquad d_\mathbb{H}(p,q) =O(1). \qedhere
\end{equation*}
\end{proof}
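The hyperbolic plane estimate invoked at the end of the proof is elementary; the following computation (a reconstruction, not spelled out in the original) verifies it in the upper half-plane model, where
$$
\cosh d_\mathbb{H}(p,q) = 1 + \frac{(p_1-q_1)^2 + (p_2-q_2)^2}{2\, p_2\, q_2}.
$$
If $(p_1-q_1) \stackrel{{}_\ast}{\asymp} p_2 \stackrel{{}_\ast}{\asymp} q_2$, then $(p_1-q_1)^2 \stackrel{{}_\ast}{\asymp} p_2\, q_2$ and $(p_2-q_2)^2 \leq (p_2+q_2)^2 \stackrel{{}_\ast}{\prec} p_2\, q_2$, so the right-hand side is uniformly bounded and hence $d_\mathbb{H}(p,q)=O(1)$.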
\section{Geometry of quadratic differentials}
\label{Sec:Quadratic}
A geodesic in \Teich space is the image of a quadratic differential
under the \Teich geodesic flow. Quadratic differentials are naturally
equipped with a singular Euclidean structure. We, however, often
need to compute the extremal length of a curve. In this section,
we review how the extremal length of a curve can be computed from
the information provided by the flat structure and how the flat length
and the twisting information around a curve change along a \Teich geodesic.
\subsection*{Quadratic differentials}
Let $\mathcal{T}(S)$ be the \Teich space of $S$ and $\mathcal{Q}(S)$ be the space of unit
area quadratic differentials on $S$. Recall that a quadratic differential $q$ on
a Riemann surface $x$ can locally be represented as
$$
q=q(z) \, dz^2,
$$
where $q(z)$ is a meromorphic function on $x$ with all poles having
a degree of at most one. All poles are required to occur at the punctures.
In fact, away from zeros and poles, there is a change of coordinates
so that $q=dz^2$. Here $|q|$ locally defines a Euclidean metric
on $x$ and the equations $\Im(\sqrt{q}\,dz)=0$ and $\Re(\sqrt{q}\,dz)=0$ define the
horizontal and the vertical directions. Vertical trajectories foliate the surface
except at the zeros and the poles. This foliation equipped with the transverse
measure $|dx|$ is called the vertical foliation and is denoted by $\lambda_-$.
The horizontal foliation is similarly defined and is denoted by $\lambda_+$.
A neighborhood of a zero of order $k$ has the structure of the Euclidean cone
with total angle $(k+2)\pi$ and a neighborhood of a degree one pole has the
structure of the Euclidean cone with total angle $\pi$. In fact, this locally Euclidean
structure and this choice of the vertical foliation completely determines $q$.
We refer to this metric as the $q$--metric on $S$.
\subsection*{Size of a subsurface}
For every curve $\alpha$, the geodesic representatives of $\alpha$
in the $q$--metric form a (possibly degenerate) flat cylinder $F_q(\alpha)$.
For any proper subsurface $Y\subset S$, let $\sY=\sY_q$ be the representative
of the homotopy class of $Y$ that has $q$--geodesic boundaries and that is
disjoint from the interior of $F_q(\alpha)$ for every curve $\alpha \subset \bdy Y$.
When the subsurface is an annulus with core curve $\alpha$ we think
of $\sF=F_q(\alpha)$ as its representative with geodesic boundary.
Define $\size_q(Y)$ to be the $q$--length of the shortest essential curve in $Y$
and for a curve $\alpha$ let $\size_q(\sF)$ be the $q$--distance between
the boundary components of $\sF$. When $Y$ is a pair of pants,
$\size_q(Y)$ is defined to be the diameter of $\sY$.
\subsection*{An estimate for lengths of curves}
For every curve $\alpha$ in $S$, denote the extremal length of $\alpha$
in $x\in \mathcal{T}(S)$ by $\Ext_x(\alpha)$. For constants $\epsilon_0>\epsilon_1>0$, the
$(\epsilon_0, \epsilon_1)$--thick-thin decomposition of $x$ is the pair $(\mathcal{A}, \mathcal{Y})$,
where $\mathcal{A}$ is the set of curves $\alpha$ in $x$ so that
$\Ext_x(\alpha) \leq \epsilon_0$ and $\mathcal{Y}$ is the set of homotopy class of
the components of $x$ cut along $\mathcal{A}$. We further assume that
the extremal length of any essential curve $\gamma$ that is disjoint from
$\mathcal{A}$ is larger than $\epsilon_1$.
Consider the quadratic differential $(x,q)$ and the thick-thin
decomposition $(\mathcal{A}, \mathcal{Y})$ of $x$. Let $\alpha \in \mathcal{A}$
be the common boundary of subsurfaces $Y$ and $Z$ in $\mathcal{Y}$.
Let $\alpha^*$ be the geodesic representative of $\alpha$ in the boundary
of $\sY$ and let $\sE=E_q(\alpha, Y)$ be the largest regular neighborhood of
$\alpha^*$ in the direction of $\sY$ that is still an embedded annulus.
We call this annulus the expanding annulus with core curve $\alpha$
in the direction of $Y$. Define $M_q(\alpha, Y)$ to be $\Mod_x(\sE)$,
where $\Mod_x(\param)$ is the modulus of an annulus in $x$.
Recall from \cite[Lemma 3.6]{rafi:SC} that
$$
\Mod_x (\sE) \stackrel{{}_\ast}{\asymp} \log \frac{\size_q(Y)}{\ell_q(\alpha)}
\quad\text{and}\quad
\Mod_x (\sF) = \frac{\size_q(\sF)}{\ell_q(\alpha)}.
$$
Let $\sG=E_q(\alpha,Z)$ and $M_q(\alpha, Z)$ be defined
similarly.
The following statement relates the information about the flat lengths of
curves to their extremal length. For a more general statement see
\cite[Lemma 3 and Theorem 7]{rafi:LQC}.
\begin{theorem} \label{Thm:Length}
Let $(x,q)$ be a quadratic differential and let $(\mathcal{Y}, \mathcal{A})$ be the thick-thin
decomposition of $x$. Then
\begin{enumerate}
\item For $Y \in \mathcal{Y}$ and a curve $\gamma$ in $Y$
$$
\Ext_x(\gamma) \stackrel{{}_\ast}{\asymp} \frac{\ell_q(\gamma)^2}{\size_q(Y)^2}.
$$
\item For $\alpha \in \mathcal{A}$ that is the common boundary of $Y, Z \in\mathcal{Y}$,
\begin{align*}
\frac 1{\Ext_x(\alpha)} &\stackrel{{}_\ast}{\asymp} \log \frac{\size_q(Y)}{\ell_q(\alpha)} +
\frac{\size_q(F_q(\alpha))}{\ell_q(\alpha)} +\log \frac{\size_q(Z)}{\ell_q(\alpha)} \\
& \stackrel{{}_\ast}{\asymp} \Mod_x (\sE) + \Mod_x (\sF) + \Mod_x (\sG).
\end{align*}
\end{enumerate}
\end{theorem}
\subsection*{Length and twisting along a \Teich geodesic} \label{Sec:twist}
A matrix $A \in \SL(2,\mathbb{R})$ acts on any $q \in \mathcal{Q}(S)$
locally by affine transformations. The total angle at a point
does not change under this transformation. Thus the resulting singular
Euclidean structure defines a quadratic differential that we denote by $Aq$.
The \Teich geodesic flow, $g_t \from \mathcal{Q} \to \mathcal{Q}$, is the action
by the diagonal subgroup of $\SL(2,\mathbb{R})$:
$$
g_t(q)= \begin{bmatrix} e^t & 0 \\ 0&e^{-t}\end{bmatrix} q.
$$
The \Teich geodesic described by $q$ is then a map
$$
\mathcal{G} \from \mathbb{R} \to \mathcal{Q}, \qquad \mathcal{G}(t)= (x_t, q_t)
$$
where $q_t=g_t(q)$ and $x_t$ is the underlying Riemann surface
for $q_t$.
The flat length of a curve along a \Teich geodesic is well behaved.
Let the horizontal length $h_t(\alpha)$ of $\alpha$ in $q$ be the transverse
measure of $\alpha$ with respect to the vertical foliation of $q_t$ and
the vertical length $v_t(\alpha)$ of $\alpha$ be the transverse
measure with respect to the horizontal foliation of $q_t$. We have
(see the discussion on \cite[Page 186]{rafi:SC})
$$
\ell_{q_t}(\alpha) \stackrel{{}_\ast}{\asymp} h_t(\alpha) + v_t(\alpha).
$$
Since the vertical length decreases exponentially fast and the
horizontal length increases exponentially fast, for every
curve $\alpha$, there are constants $L_\alpha$ and $t_\alpha$ so that
\begin{equation} \label{Eq:Flat-Length}
\ell_{q_t}(\alpha) \stackrel{{}_\ast}{\asymp} L_\alpha \cosh (t -t _\alpha).
\end{equation}
We call the time $t_\alpha$ the balanced time for $\alpha$ and the length
$L_\alpha$ the minimum flat length for $\alpha$.
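Equation \eqref{Eq:Flat-Length} can be recovered from the scaling of the flow (a short reconstruction, not spelled out in the original). Since $g_t$ stretches the horizontal direction and contracts the vertical one, $h_t(\alpha) = e^t\, h_0(\alpha)$ and $v_t(\alpha) = e^{-t}\, v_0(\alpha)$. Assuming $\alpha$ is neither purely vertical nor purely horizontal, set
$$
t_\alpha = \frac12 \log \frac{v_0(\alpha)}{h_0(\alpha)}
\qquad\text{and}\qquad
L_\alpha = 2\sqrt{h_0(\alpha)\, v_0(\alpha)}.
$$
Then
$$
\ell_{q_t}(\alpha) \stackrel{{}_\ast}{\asymp} e^t\, h_0(\alpha) + e^{-t}\, v_0(\alpha)
= \sqrt{h_0(\alpha)\,v_0(\alpha)} \left( e^{t-t_\alpha} + e^{-(t-t_\alpha)} \right)
= L_\alpha \cosh(t-t_\alpha),
$$
and $t_\alpha$ is exactly the time at which the horizontal and vertical lengths of $\alpha$ agree.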
We define the twisting parameter of a curve along a \Teich geodesic
to be the relative twisting of $q_t$ with respect to the vertical foliation.
That is, for any curve $\alpha$ and time $t$, let $\tau_t$ be the
arc in the annular cover of $q_t^\alpha$ that is perpendicular
to $\alpha$ and let $\lambda_-$ be the vertical foliation of $q_t$
(which is topologically the same foliation for every value of $t$).
Define
$$
\twist_t(\alpha) = \twist_\alpha(\tau_t, \lambda_-).
$$
This is an increasing function that ranges from a minimum of zero to
a maximum of $T_\alpha=d_\alpha(\lambda_-, \lambda_+)$. That is,
$\tau_t$ looks like $\lambda_-$ at the beginning and like $\lambda_+$
in the end. In fact, from \cite[Equation 16]{rafi:CM} we have
the following explicit formula:
\begin{equation} \label{Eq:Twist}
\twist_t(\alpha) \stackrel{{}_+}{\asymp}
\frac{2\,T_\alpha \, e^{2(t-t_\alpha)}}{\cosh^2(t-t_\alpha)}.
\end{equation}
Also, \cite[Proposition 5.8]{rafi:LT} gives the following estimate on the modulus
of $\sF_t=F_{q_t}(\alpha)$:
\begin{equation} \label{Eq:Modulus}
\Mod_{q_t}(\sF_t) \stackrel{{}_\ast}{\asymp} \frac{T_\alpha}{\cosh^2(t-t_\alpha)}.
\end{equation}
That is, the modulus of $\sF_t$ is maximum when $\alpha$ is balanced and
goes to zero as $t$ goes to $\pm \infty$. The maximum modulus of $\sF_t$ is
determined purely by the topological information $T_\alpha$, which is the relative
twisting of $\lambda_-$ and $\lambda_+$ around $\alpha$. The size of $\sF_t$
at $q_t$ is equal to its modulus times the flat length of $\alpha$ at $q_t$. Hence,
\begin{equation} \label{Eq:Size}
\size_{q_t}(\sF_t) \stackrel{{}_\ast}{\asymp} \frac {T_\alpha L_\alpha}{\cosh(t-t_\alpha)}.
\end{equation}
\section{Projection of a quadratic differential to a subsurface}
\label{Sec:Restriction}
In this section, we introduce the notion of an isolated surface in a quadratic
differential. Let $(x,q)$ be a quadratic differential, $Y \subset S$
be a proper subsurface and $\sY$ be the representative of $Y$
with $q$--geodesic boundaries. Note that, when $\sY$ is non-degenerate,
it is itself a Riemann surface that inherits its conformal structure from
$x$. In this case, for a curve $\gamma$ in $Y$, we use the
expression $\Ext_\sY(\gamma)$ to denote the extremal length of
$\gamma$ in the Riemann surface $\sY$. The following lemma is a
consequence of \cite[Lemma~4.2]{minsky:PR}.
\begin{lemma}[Minsky]
\label{Lem:Length-Comparison}
There exists a constant $m_0$ depending only on the topological type of $S$
so that, for every subsurface $Y$ with negative Euler characteristic the
following holds. If $M_q(\alpha, Y)\geq m_0$ for every boundary component
$\alpha$ of $Y$ then for any essential curve $\gamma$ in $Y$
$$
\Ext_{\sY}(\gamma) \stackrel{{}_\ast}{\asymp} \Ext_x(\gamma).
$$
\end{lemma}
Fixing $m_0$ as above, we say $Y$ is \emph{isolated}
in $q$ if, for every boundary component $\alpha$ of $Y$,
$M_q(\alpha,Y) \geq m_0$. The large expanding annuli in the boundaries
of $Y$ isolate it in the sense that one does not need any information about
the rest of the surface to compute extremal lengths of curves in $Y$.
As we shall see, when $Y$ is isolated, the restrictions of the hyperbolic
metric of $x$ to $Y$ and the quadratic differential $q$ to $Y$ are at most
a bounded distance apart in the \Teich space of $Y$.
For $x \in \mathcal{T}(S)$ and $Y \subset S$ we define the Fenchel-Nielsen
projection of $x$ to $Y$, a complete hyperbolic metric $x \rY$ on $Y$,
as follows: Extend the boundary curves of $Y$ to a pants decomposition
$\mathcal{P}$ of $S$. Then the Fenchel-Nielsen coordinates of $\mathcal{P} \rY$
define a point $x \rY$ of $\mathcal{T}(Y)$ (see \cite{minsky:PR} for
a detailed discussion).
Now, we construct a projection map from $q$ to $q\rY$ by considering
the representative with geodesic boundary $\sY$ and capping
off the boundaries with punctured disks. It turns out that the underlying
conformal structures of $q \rY$ and $x \rY$ are not very different, and
the restriction of quadratic differentials commutes, up to a bounded error,
with the action of
$\SL(2,\mathbb{R})$. When $Y$ is not isolated in $q$, the capping off process is
not geometrically meaningful (or sometimes not possible). Hence, the process
is restricted to the appropriate subset of $\mathcal{Q}$.
\begin{theorem} \label{Thm:Restriction}
Let $Y$ be a subsurface of $S$ that is not an annulus and let $\mathcal{Q}_Y(S)$
be the set of quadratic differentials $q$ so that $Y$ is isolated in $q$.
There is a map $\pi_Y \from \mathcal{Q}_Y(S) \to \mathcal{Q}(Y)$, with
$\pi_Y(q) = q \rY$, so that
\begin{equation} \label{Eq:Y}
d_{\mathcal{T}(Y)}(q \rY, x \rY)=O(1).
\end{equation}
Furthermore, if, for $A \in \SL(2,\mathbb{R})$, both $q$ and $Aq$ are in $\mathcal{Q}_Y (S)$
then
\begin{equation} \label{Eq:A}
d_{\mathcal{T}(Y)} \big( (Aq) \rY, A (q \rY) \big)=O(1).
\end{equation}
\end{theorem}
\begin{proof}
We first define the map $\pi_Y$. Let $(x,q)$ be a quadratic differential
with $Y$ isolated in $q$. Let $\sY$ be the representative of $Y$ with
$q$--geodesic boundaries. Our plan, nearly identical to that
of~\cite{rafi:TT}, is to fill all components of $\bdy \sY$ with locally flat
once-punctured disks.
Fix $\alpha \subset \bdy Y$ and recall that $\sE = E_q(\alpha, Y)$
is an embedded annulus and $\alpha^*$ is a boundary of $\sE$.
Let $a_1, \ldots, a_n$ be the points on $\alpha^*$
which have angle $\theta_i > \pi$ in $\sE$. Note that this set is nonempty:
if it were empty, then $\sE$ would meet the interior of the flat cylinder
$F(\alpha)$, a contradiction. Let $\sE'$ be the double cover of $\sE$ and let
$\alpha'$ be the pre-image of $\alpha^*$. Let $q'$ be the lift of
$q \stroke{\sE}$ to
$\sE'$. Along $\alpha'$ we attach a locally flat disk $\sD'$
with a well defined notion of a vertical direction, as follows.
Label the lifts of $a_i$ to $\sE'$ by $b_i$ and $c_i$. We will fill
$\alpha'$ by symmetrically adding $2\,(n-1)$ Euclidean triangles
to obtain a flat disk $\sD'$ such that the total angle at each $b_i$
and $c_i$ is a multiple of $\pi$ and is at least $2\pi$.
\begin{figure}[ht]
\setlength{\unitlength}{0.01\linewidth}
\begin{picture}(50,50)
\put(0,0){\includegraphics[width=65\unitlength]{Triangles.pdf}}
\put(8,45){$b_1$}
\put(0,32){$b_2$}
\put(3,16){$b_3$}
\put(16,4){$b_4$}
\put(23,10){$\ldots$}
\put(36,1){$b_n$}
\put(50,5){$c_1$}
\put(57,18){$c_2$}
\put(55,33){$c_3$}
\put(42,46){$c_4$}
\put(35,38){$\ldots$}
\put(22,49){$c_n$}
\end{picture}
\caption{\bf The filling of the annulus $\sE'$}
\label{fig:triangles}
\end{figure}
We start by attaching a Euclidean triangle to vertices $b_1, b_2, b_3$,
which we denote by $\bigtriangleup(b_1,b_2,b_3)$ (see Figure~\ref{fig:triangles}).
We choose the angle $\angle b_2$ at the vertex $b_2$ so that
$\theta_2+\angle b_2$ is a multiple of $\pi$. Assuming
$0 \leq \angle b_2 < \pi$, there is a unique such triangle. Attach an isometric
triangle to $c_1, c_2, c_3$. Now consider the points $b_1, b_3, b_4$.
Again, there exists a Euclidean triangle with one edge equal to
the newly introduced segment $[b_1,b_3]$, another edge equal to the
segment $[b_3,b_4]$ and an angle at $b_3$ that makes the total angle
at $b_3$, including the contribution from the triangle
$\bigtriangleup(b_1,b_2,b_3)$, a multiple of $\pi$.
Attach this triangle to the vertices $b_1, b_3, b_4$ and an identical triangle
to the vertices $c_1,c_3,c_4$. Continue in this fashion until finally adding
triangles $\bigtriangleup(b_1,b_n, c_1)$ and $\bigtriangleup(c_1,c_n, b_1)$.
Due to the symmetry, the two edges connecting $b_1$ and $c_1$
have equal length, and we can glue them together. We call the union of the
added triangles $\sD'$. Notice that the involution on $\sE'$ extends to $\sD'$.
Let $\sD = \sD(\alpha)$ be the quotient of $\sD'$, and note that $\sD$ is a
punctured disk attached to $\alpha^*$ in the boundary of $\sE$.
For $i \not = 1$, the total angle at $b_i$ and at $c_i$ is a multiple of $\pi$
and is larger than $\theta_i > \pi$; therefore, it is at least $2\pi$. We
have added $2\,(n-1)$ triangles. Hence, the sum of the total angles of all
vertices is $2\sum_i \theta_i + 2\,(n-1)\pi$, which is a multiple of $2\pi$.
Therefore, the sum of the angles at $b_1$ and $c_1$ is also a multiple
of $2\pi$. But they are equal to each other, and each one is larger than $\pi$.
This implies that they are both at least $2\pi$. It follows that the
quadratic differential $q'$ extends over $\sD'$ symmetrically
with quotient an extension of $q$ to $\sD$.
Thus, attaching the disk $\sD(\alpha)$ to every boundary component $\alpha^*$
in $\bdy \sY$ gives a point $q \rY \in \mathcal{Q}(Y)$. This completes the
construction of the map $\pi_Y$.
We now show that the distance in $\mathcal{T}(Y)$ between $q \rY$ and
$x \rY$ is uniformly bounded. For this, we examine the extremal lengths
of curves in two conformal structures. Since $Y$ is isolated in $q$,
the boundaries of $Y$ are short in $x$. This implies, using \cite{minsky:PR}, that,
for any essential curve $\gamma$ in $Y$, the extremal lengths of $\gamma$
in $x$ and in $x \rY$ are comparable:
\begin{equation} \label{eq:Ext}
\Ext_{x \rY}(\gamma) \stackrel{{}_\ast}{\asymp} \Ext_x(\gamma)
\end{equation}
(see the proof of Theorem 6.1 in \cite[page 283, line 19]{minsky:PR}).
We need to show that the extremal lengths of $\gamma$ in $q$ and in
$q \rY$ are comparable as well. We obtain this by applying
\lemref{Length-Comparison} twice. Considering $\sY$ once as
a subset of $q$ and once as a subset of $q \rY$, \lemref{Length-Comparison}
implies:
$$
\Ext_x(\gamma) \stackrel{{}_\ast}{\asymp} \Ext_\sY(\gamma) \stackrel{{}_\ast}{\asymp} \Ext_{q \rY}(\gamma).
$$
Since the extremal lengths of curves are comparable, the distance in
$\mathcal{T}(Y)$ between $x \rY$ and $q \rY$ is uniformly bounded above
\cite[Theorem 4]{kerckhoff:AG}.
We note that defining the map $\pi_Y$ involved a choice of labeling
of the points $\{ a_i \}$. However, the above argument will work for
any labeling. In fact, for any labeling of points in a boundary component
of $\sY$ in $q$, one can use the corresponding labeling $A(\sY)$ in
$(Aq)$ so that $A (q \rY) = (A q) \rY$. Since all the different labelings
result in points that are close in $\mathcal{T}(Y)$ to $x \rY$, Equation~\eqref{Eq:A}
holds independently of the choices made. This finishes the proof.
\end{proof}
\section{Projection of a \Teich geodesic to a subsurface}
\label{Sec:Proj}
As mentioned before, a quadratic differential $q$ defines a \Teich geodesic
$\mathcal{G} \from \mathbb{R} \to \mathcal{Q}(S)$ by taking
$$
\mathcal{G}(t) = (x_t, q_t), \quad q_t = \begin{bmatrix} e^t & 0 \\ 0&e^{-t}\end{bmatrix} q
$$
where $x_t$ is the underlying Riemann surface for $q_t$. Let $\lambda_+$
and $\lambda_-$ be the horizontal and the vertical foliations of $q_t$.
Recall that a point $x \in \mathcal{T}(S)$ has an associated shortest marking
$\mu_x$. We similarly define, for any $(x,q) \in \mathcal{Q}(S)$, a shortest
marking $\mu_q$. The marking $\mu_q$ has the same pants decomposition
and the same set of lengths $\{l_\alpha\}$ as $\mu_x$. However, we
use the flat metric of $q$ to define the transversals $\tau_\alpha$, as follows.
Recall that $q^\alpha$ is the annular cover of $q$ with respect to $\alpha$.
Define $\tau_\alpha$ to be any arc connecting the boundaries of $q^\alpha$
that is perpendicular to the geodesic representative of the core.
That is, the transversal is perpendicular with respect to the flat metric
of $q$ instead of the hyperbolic metric.
In what follows, we often replace $q_t$ subscripts simply with $t$.
For example, $\ell_t(\alpha)$ is short for $\ell_{q_t}(\alpha)$, while
$\mu_t$ is short for $\mu_{q_t}$ and
$M_t(\alpha, Y)$ is short for $M_{q_t}(\alpha, Y)$.
We let $t_\alpha$ be the time when $\alpha$ is
\emph{balanced} along $\mathcal{G}$ (see \eqnref{Flat-Length}).
We need the following two statements. First we have a lemma that is
contained in the proof of Theorem~3.1 of~\cite{rafi:CM}.
\begin{lemma}
\label{Lem:Convex}
There is a uniform constant $c \geq 0$ so that
$$M_s(\alpha, Y) \leq M_t(\alpha, Y) + c$$ for all $s \leq t \leq
t_\alpha$ and for all $t_\alpha \leq t \leq s$. \qed
\end{lemma}
Second we have a theorem that follows from the proof of
Theorem~5.5 of~\cite{rafi:SC}.
\begin{theorem}
\label{Thm:LackOfMotion}
There are constants $M_0$ and $C$ so that, if $M_t(\alpha, Y) \leq M_0+ c$
for some boundary component $\alpha$, then either
\begin{equation*}
d_Y(\mu_t, \lambda_-)\leq C \quad\text{or}\quad d_Y(\mu_t, \lambda_+)\leq C.
\end{equation*}
\end{theorem}
We now define $I_Y$, the \emph{interval of isolation} for $Y$.
Choose a large enough $M_0$ (we need $M_0 >m_0$ as in
\lemref{Length-Comparison} and we need $M_0$ to satisfy \thmref{LackOfMotion}).
Define the interval $I_{\alpha, Y} \subset \mathbb{R}$ to be empty when
$M_{t_\alpha}(\alpha,Y) < M_0$ and otherwise to be
the largest interval containing $t_\alpha$ so that $M_t(\alpha, Y) \geq
M_0$ for all $t \in I_{\alpha, Y}$. Define
$$
I_Y = \bigcap_{\alpha \subset \bdy Y} I_{\alpha, Y}.
$$
Note that, by
Lemma~\ref{Lem:Convex}, for any $t$ outside of $I_Y$, there is
a boundary component $\alpha$ such that $M_t(\alpha, Y) \leq M_0 + c$.
\begin{theorem} \label{Thm:Proj}
Let $\mathcal{G} \from \mathbb{R} \to \mathcal{Q}(S)$ be a \Teich geodesic with $\mathcal{G}(t) = (x_t, q_t)$.
Let $Y$ be a subsurface with the interval of isolation $I_Y$.
Then there exists a geodesic $\mathcal{F} \from I_Y \to \mathcal{Q}(Y)$ with
$\mathcal{F}(t)= (y_t, p_t)$, so that
\begin{itemize}
\item If $[a,b] \cap I_Y = \emptyset$ then
$$
d_Y(\mu_a, \mu_b) = O(1).
$$
\item For $t \in I_Y$,
$$
d_{\mathcal{T}(Y)} \big( x_t \rY , y_t \big) = O(1).
$$
\end{itemize}
In fact, we may take $p_t = q_t \rY$.
\end{theorem}
\begin{figure}[ht]
\setlength{\unitlength}{0.01\linewidth}
\begin{picture}(40,37)
\put(0,0){\includegraphics[width=40\unitlength]{Teich.pdf}}
\put(38,32){$\mathcal{T}(Y)$}
\put(20,23.5){$x_t \rY$}
\put(20,11.3){$y_t$}
\put(23,17){$O(1)$}
\end{picture}
\caption{The projection of $\mathcal{G}$ to $\mathcal{T}(Y)$ fellow travels the geodesic
$\mathcal{F}$.}
\label{fig:FN}
\end{figure}
\begin{proof}
For every $t\in[a,b]$, there exists a boundary component $\alpha$
so that $M_t(\alpha, Y) \leq M_0+c$. By \thmref{LackOfMotion}
$$
d_Y(\mu_t, \lambda_-)\leq C \quad\text{or}\quad d_Y(\mu_t, \lambda_+)\leq C.
$$
Let $J_- \subset [a,b]$ be the set of times where the former holds and
$J_+\subset [a,b]$ be the set of times where the latter holds. If $J_-$ or $J_+$
is empty, we are done by the triangle inequality. Otherwise, we note
that these sets are closed, cover $[a,b]$, and hence have to intersect. This implies
that $d_Y(\lambda_-,\lambda_+)\leq 2C$. Again we are done
after applying the triangle inequality; $d_Y(\mu_a, \mu_b)$ is
at most $4C$. This proves the first conclusion of \thmref{Proj}.
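In the worst case, $a \in J_-$ and $b \in J_+$, and writing out the chain of triangle inequalities makes the constant explicit:

```latex
d_Y(\mu_a, \mu_b)
  \leq d_Y(\mu_a, \lambda_-) + d_Y(\lambda_-, \lambda_+) + d_Y(\lambda_+, \mu_b)
  \leq C + 2C + C = 4C.
```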
To obtain the second conclusion, we construct the candidate
geodesic arc $\mathcal{F}$ in $\mathcal{T}(Y)$. Let $I_Y=[c,d]$. As suggested
in the statement of the theorem, let $p_c = q_c \rY$ and let
$\mathcal{F} \from [c,d] \to \mathcal{Q}(Y)$, with $\mathcal{F}(t)=(y_t, p_t)$, be the geodesic segment defined by
$$
p_t = \begin{bmatrix} e^{t-c} & 0 \\ 0&e^{-t+c}\end{bmatrix} p_c.
$$
In fact, if we make consistent choices in the construction of $q_t \rY$
for different values of $t$, we have $p_t = q_t \rY$. Now
Equation~\eqref{Eq:A} in \thmref{Restriction} implies
$$
d_{\mathcal{T}(Y)}(x_t \rY, y_t)= O(1).
$$
This finishes the proof.
\end{proof}
For a \Teich geodesic segment whose end points are in the thick part
of the \Teich space, we can look at the short markings at the end points
of the segment instead of the horizontal and the vertical foliations, to determine
which subsurfaces are isolated along the geodesic segment. That is,
the end invariants can be taken to be the short markings instead of
the horizontal and the vertical foliations.
\begin{corollary} \label{Cor:Ends}
Let $\mathcal{G} \from \mathbb{R} \to \mathcal{T}(S)$ be a \Teich geodesic. Suppose $a<b$
are times so that $\mathcal{G}(a)$ and $\mathcal{G}(b)$ are in the thick part.
Then, for every subsurface $Y$ we have
\begin{itemize}
\item Either $I_Y \subset[a,b]$,
$$
\I(\lambda_- \rY, \mu_b \rY)=O(1)
\quad\text{and}\quad
\I(\lambda_+ \rY, \mu_a \rY)=O(1).
$$
In particular,
$$
d_Y(\lambda_-, \lambda_+)\stackrel{{}_+}{\asymp} d_Y(\mu_a, \mu_b).
$$
\item Or $I_Y \cap [a,b] = \emptyset$ and
$$
d_Y(\mu_a, \mu_b)=O(1).
$$
\end{itemize}
\end{corollary}
\begin{proof}
Since the endpoints lie in the thick part of \Teich space, the times
$a$ and $b$ are not in any interval $I_Y$. That is, $I_Y$
is either contained in $[a,b]$ or disjoint from it. If $I_Y =[c,d]$
then all markings $\mu_t$, $t\in(-\infty, c]$, project to a bounded set
in $\AC(Y)$. In fact, from \cite[Theorem 5.5]{rafi:SC} we know that
$\I(\mu_t \rY, \lambda_+\rY)=O(1)$. Therefore, $d_Y(\lambda_+, \mu_a)=O(1)$.
Similarly, for $t \in [d,\infty)$, $\I(\mu_t \rY, \lambda_-\rY)=O(1)$
and $d_Y(\lambda_-, \mu_b)=O(1)$. The corollary follows
immediately.
\end{proof}
\subsection*{Order of appearance of intervals $I_Y$}
By examining the subsurface projections one can determine
which curves $\alpha$ are short along a \Teich geodesic $\mathcal{G}$.
The following is the restatement of results in \cite{rafi:SC} in a way
that is more suitable for our purposes. Let $\mathcal{G}$ be a \Teich
geodesic with horizontal and vertical foliations $\lambda_\pm$ and,
for a curve $\alpha$, let $\mathcal{Z}(\alpha, D)$ be the set of subsurfaces $Z$ that
are disjoint from $\alpha$ and have $d_Z(\lambda_+, \lambda_-) \geq D$.
\begin{theorem} \label{Thm:M-Large}
A curve $\alpha$ is short at some point along $\mathcal{G}$
if and only if $\alpha$ is the boundary of a subsurface $Y$ so that
$Y$ is filled with subsurfaces with large projections. That is, there
are constants $\epsilon$, $D_0$ and $D_1$ so that
\begin{itemize}
\item If $\Ext_t(\alpha) \leq \epsilon$ then $\alpha$ is a boundary component
of some subsurface $Y$, where $Y$ is filled by subsurfaces in
$\mathcal{Z}(\alpha, D_0)$.
\item Suppose that $\alpha$ is a boundary component of $Y$ and that $Y$ is
filled by elements of $\mathcal{Z}(\alpha, D_1)$. Then there is a time
$t \in \mathbb{R}$ when $\Ext_t(\alpha) \leq \epsilon$.
\end{itemize}
\end{theorem}
\begin{proof}
This is a restatement of \cite[Theorem 1.1]{rafi:SC} after the following
observation: two curves or arcs in $\AC(Y)$ have large intersection number
if and only if their projection to some subsurface $Z$ of $Y$ is large.
(This assertion is well known and follows from \cite[Corollary D]{rafi:TL}.)
We have just translated the condition about intersection numbers to a condition
about subsurface projections.
\end{proof}
One consequence of the above theorem is that the order in which
the intervals $I_Y$ appear in $\mathbb{R}$ is essentially determined by any
geodesic $g$ in $\AC(S)$ connecting $\lambda_-$ to $\lambda_+$.
\begin{proposition} \label{Prop:Order}
The boundary curves of any isolated surface are in a $2$--neighborhood of
a geodesic $g$ in the curve complex. The order of appearance of
intervals of isolation in $\mathbb{R}$ is coarsely determined by the order
in which the vertices $\partial Y$ appear along $g$.
\end{proposition}
The proof uses both the description of a \Teich geodesic and a
hyperbolicity result for the curve complex $\mathcal{C}(S)$. Namely,
we use Masur and Minsky's bounded geodesic image theorem:
\begin{theorem}[Theorem 3.1 in \cite{minsky:CCII}] \label{Thm:Bounded}
If $Y$ is an essential subsurface of $S$ and $g$ is a geodesic in
$\AC(S)$ all of whose vertices intersect $Y$ nontrivially, then the projected
image of $g$ in $\AC(Y)$ has uniformly bounded diameter.
\end{theorem}
\begin{proof}[Proof of \propref{Order}]
By \thmref{M-Large}, a boundary curve $\alpha$ of any isolated subsurface
$Y$ is disjoint from some subsurface $Z$ where the projection
distance $d_Z(\lambda_+, \lambda_-)$ is large. By \thmref{Bounded},
the geodesic $g$ has to have a vertex disjoint from $Z$ as well.
Hence $\alpha$ is within distance $2$ of $g$.
Write $g= g_- \cup g_0 \cup g_+$, where $Z$ intersects every curve
in $g_-$ and $g_+$ and where $g_0$ has length $10$ and $\partial Z$
is disjoint from a curve at the middle of $g_0$. From \thmref{Bounded} we
have that the projection of $g_-$ to $\AC(Z)$ is in a bounded neighborhood of
$\lambda_- \stroke{Z}$ and the projection of $g_+$ to $\AC(Z)$ is in a
bounded neighborhood of $\lambda_+ \stroke{Z}$. Let $Y'$ be another isolated
surface. We claim that if the boundary of $Y'$ is close to a point in $g_-$, then
the interval $I_{Y'}$ appears after the interval $I_Y$.
Let $I_Y =[a,b]$ and let $t \in I_{Y'}$. Then $t \not \in [a,b]$ because $Y$ and
$Y'$ intersect (the distance between their boundaries is larger than $1$)
and their boundaries cannot be short simultaneously. Note that
the curves of $\partial Y'$ are part of the short marking $\mu_t$.
By \corref{Ends}, if $t<a$ then $\I(\mu_t \rY, \lambda_+\rY)=O(1)$.
Hence,
$$
\I(\mu_t \stroke{Z}, \lambda_+ \stroke{Z})=O(1)
\quad\text{and}\quad
d_Z(\mu_t, \lambda_+)=O(1).$$
But this is a contradiction because $\partial Y'$ is close to a point in $g_-$
which projects to a point in $\AC(Z)$ near $\lambda_- \stroke{Z}$.
Therefore, $t >b$.
\end{proof}
\begin{remark} \label{Rem:Ends}
Note that, using \corref{Ends}, we can restate the above statements
for \Teich geodesic segments $\mathcal{G} \from [a,b] \to \mathcal{T}(S)$ where
$\mathcal{G}(a)$ and $\mathcal{G}(b)$ are in the thick part. All statements hold after
replacing $\lambda_-$ and $\lambda_+$ with $\mu_a$ and $\mu_b$
respectively.
\end{remark}
\section{No Back-tracking} \label{Sec:Backtrack}
As before, let $\mathcal{G}$ be a \Teich geodesic with $\mathcal{G}(t) = (x_t, q_t)$ and
let $\mu_t$ be the short marking associated to $q_t$.
In this section we examine the projection of markings $\mu_t$
to the curve complex of a subsurface.
\begin{theorem}
\label{Thm:No-Back-Tracking}
For every subsurface $Y$ of $S$, the shadow of $\mathcal{G}$ in $\AC(Y)$ is an
unparametrized quasi-geodesic. That is, for $r\leq s \leq t \in \mathbb{R}$
$$
d_Y(\mu_r, \mu_s)+d_Y(\mu_s, \mu_t) \stackrel{{}_+}{\prec} d_Y(\mu_r, \mu_t).
$$
\end{theorem}
\begin{remark}
We observe that the projection of $\mu_t$ to $\AC(Y)$ is a
coarsely continuous path. That is, there is a constant $B$ so that
for every $t \in \mathbb{R}$ there is a $\delta$ where
$$
\I(\mu_t, \mu_{t+\delta})\leq B \qquad\text{and hence}\qquad
d_Y(\mu_t, \mu_{t+\delta})=O(1).
$$
To see this, note that since lengths change continuously, $x_t$ and
$x_{t+\delta}$ have the same thick-thin decompositions and the intersection
between moderate length curves in $x_t$ and $x_{t+\delta}$ is bounded.
Also, twisting along the short curves changes coarsely continuously
(see \eqnref{Twist}).
\end{remark}
\begin{remark}
The reverse triangle inequality for a path (as given in the statement
of the theorem) is a stronger condition than being an unparametrized
quasi-geodesic. However, in Gromov hyperbolic spaces
such as $\AC(Y)$ the two conditions are equivalent.
(See \cite[Section 7]{minsky:CCII} and \cite[Section 2.1]{masur:TS}
for relevant discussions.)
\end{remark}
\begin{remark}
This contrasts with the way geodesics behave in the Lipschitz
metric on $\mathcal{T}(S)$, studied by Thurston in \cite{thurston:MSM}, where the
projection of a geodesic to a subsurface can backtrack
arbitrarily far. (Examples can easily be produced using Thurston's construction
of minimal stretch maps \cite{thurston:MSM} and the results in \cite{rafi:TL}).
\end{remark}
\begin{proof}
If $Y=S$, the above is a theorem of Masur and
Minsky~\cite[Theorem 3.3]{minsky:CCII}, that is, we already know that the
shadow of $\mathcal{G}$ to $\AC(S)$ is an unparametrized quasi-geodesic.
Let $Y$ be a proper subsurface and consider the interval of isolation
$I_Y=[c,d]$. If $Y$ is not an annulus, by the first part of \thmref{Proj},
the shadow of $\mathcal{G}(-\infty, c]$ and $\mathcal{G}[d, \infty)$ have bounded diameter in
$\AC(Y)$ and by the second part of \thmref{Proj} and again using
\cite[Theorem 3.3]{minsky:CCII}, the shadow of $\mathcal{G}[c,d]$ is an unparametrized
quasi-geodesic in $\AC(Y)$. It remains to check the case of an annulus.
But in this case $\AC(Y)$ is quasi-isometric to $\mathbb{Z}$ and we need only
to show that the twisting around the core of $Y$ is increasing up to an
additive error. This follows from \eqnref{Twist}.
\end{proof}
\section{Fellow traveling}
\label{Sec:Fellow-Travel}
\begin{theorem} \label{Thm:FellowTravel}
There is a constant $D > 0$ so that, for points $x, {\overline x}, y$ and ${\overline y}$
in the thick part of $\mathcal{T}(S)$ where
$$
d_\mathcal{T}(x, {\overline x}) \leq 1 \quad\text{and}\quad d_\mathcal{T}(y, {\overline y}) \leq 1,
$$
the geodesic segments $[x,y]$ and $[{\overline x}, {\overline y}]$ $D$--fellow
travel in a parametrized fashion.
\end{theorem}
\begin{remark}
The proof also works when either $x$ or $y$ is replaced with a measured
foliation in $\PML(S)$ and $\mathcal{G}$ and $\overline \calG$ are infinite rays.
\end{remark}
\begin{proof}
After adjusting $x$ and $y$ along the geodesic extension
through $[x,y]$ by a bounded amount, we may assume that
$d_\mathcal{T}(x,y)=d_\mathcal{T}({\overline x},{\overline y})$. Let
$$
\mathcal{G} \from [0,l] \to \mathcal{T}(S) \quad\text{and}\quad
\overline \calG \from [0,l] \to \mathcal{T}(S)
$$
be \Teich geodesics connecting $x$ to $y$ and ${\overline x}$ to ${\overline y}$ respectively;
$\mathcal{G}(t) = (x_t, q_t)$ and $\overline \calG(t) = ({\overline x}_t, {\overline q}_t)$.
We first show that, for any curve $\alpha$,
$\ell_{q_t}(\alpha)\stackrel{{}_\ast}{\asymp} \ell_{{\overline q}_t}(\alpha)$.
Since $x$ and ${\overline x}$ are both in the thick part, for every curve $\alpha$
we have (part (1) of \thmref{Length})
$$
\Ext_x(\alpha) \stackrel{{}_\ast}{\asymp} \ell_{q_0}(\alpha)^2
\quad\text{and}\quad
\Ext_{\overline x} (\alpha) \stackrel{{}_\ast}{\asymp} \ell_{{\overline q}_0}(\alpha)^2.
$$
But $d_\mathcal{T}(x, {\overline x})=O(1)$. Therefore,
$$
\Ext_x(\alpha) \stackrel{{}_\ast}{\asymp} \Ext_{\overline x}(\alpha)
\quad\Longrightarrow\quad
\ell_{q_0}(\alpha) \stackrel{{}_\ast}{\asymp} \ell_{{\overline q}_0}(\alpha).
$$
The same argument works to show that
$\ell_{q_l}(\alpha) \stackrel{{}_\ast}{\asymp} \ell_{{\overline q}_l}(\alpha)$.
The flat length of a curve is essentially determined by two parameters.
From \eqnref{Flat-Length} we have
$\ell_{q_t}(\alpha) \stackrel{{}_\ast}{\asymp} L_\alpha \cosh(t-t_\alpha)$ and
$\ell_{{\overline q}_t}(\alpha) \stackrel{{}_\ast}{\asymp} \overline L_\alpha \cosh(t-\overline t_\alpha)$.
Since the flat lengths of $\alpha$ are comparable at the beginning
and the end, they are always comparable. That is,
$L_\alpha \stackrel{{}_\ast}{\asymp} \overline L_\alpha$ and $t_\alpha \stackrel{{}_+}{\asymp} \overline t_\alpha$.
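A sketch of why endpoint comparability propagates: assuming $0 \ll t_\alpha \ll l$ (and similarly for $\overline t_\alpha$), the hyperbolic cosines at $t=0$ and $t=l$ can be replaced by exponentials, so the two endpoint estimates read

```latex
L_\alpha \, e^{t_\alpha} \stackrel{{}_\ast}{\asymp}
  \overline L_\alpha \, e^{\overline t_\alpha}
\quad (t=0)
\qquad\text{and}\qquad
L_\alpha \, e^{\,l-t_\alpha} \stackrel{{}_\ast}{\asymp}
  \overline L_\alpha \, e^{\,l-\overline t_\alpha}
\quad (t=l).
```

Multiplying the two estimates gives $L_\alpha \stackrel{{}_\ast}{\asymp} \overline L_\alpha$, and dividing them gives $e^{2t_\alpha} \stackrel{{}_\ast}{\asymp} e^{2\overline t_\alpha}$, that is, $t_\alpha \stackrel{{}_+}{\asymp} \overline t_\alpha$.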
We use \corref{Bounded-Distance} to prove $d_\mathcal{T}(x_t, {\overline x}_t)=O(1)$
by checking the four conditions.
\subsection*{Condition (1)} We need to show that $q_t$ and ${\overline q}_t$ have the
same thick-thin decompositions. Fix an $\epsilon$ and let $(\mathcal{A}, \mathcal{Y})$
be the $(\epsilon, \epsilon)$--thick-thin decomposition of $x_t$.
Let $\alpha \in \mathcal{A}$ and let $\sE, \sF$ and $\sG$ be as in \thmref{Length}.
Since $\alpha$ is short, one of $\sE$, $\sF$ or $\sG$ must have a large
modulus. That is, for every curve $\beta$ intersecting $\alpha$,
we have
$$
\frac{\ell_{q_t}(\beta)}{\ell_{q_t}(\alpha)} \stackrel{{}_\ast}{\succ} \frac 1\epsilon.
$$
(In fact, it may be larger than $e^{1/\epsilon}$.)
Since the flat length in $q_t$ and ${\overline q}_t$ are comparable, we also have
$$
\frac{\ell_{{\overline q}_t}(\beta)}{\ell_{{\overline q}_t}(\alpha)} \stackrel{{}_\ast}{\succ} \frac 1\epsilon.
$$
We show the extremal length of $\alpha$ is small in ${\overline x}_t$. If not, $\alpha$
would pass through some thick piece of ${\overline x}_t$ and it would intersect some
curve $\beta$ with $\Ext_{{\overline x}_t}(\beta) \stackrel{{}_\ast}{\prec} 1$. That is,
$\Ext_{{\overline x}_t}(\beta) \stackrel{{}_\ast}{\prec} \Ext_{{\overline x}_t}(\alpha)$.
Part (1) of \thmref{Length} implies $\ell_{{\overline q}_t}(\beta) \stackrel{{}_\ast}{\prec} \ell_{{\overline q}_t}(\alpha)$
which is a contradiction. That is, there is an $\epsilon_0$ so that if
$\Ext_{x_t}(\alpha) \leq \epsilon$ then $\Ext_{{\overline x}_t}(\alpha) \leq \epsilon_0$.
Arguing in the other direction, we can find $\epsilon_1$ so that if
$\Ext_{{\overline x}_t}(\alpha) \leq \epsilon_1$ then $\Ext_{x_t}(\alpha) \leq \epsilon$.
That is, every curve not in $\mathcal{A}$ is $\epsilon_1$--thick in ${\overline x}_t$.
This proves that $(\mathcal{A}, \mathcal{Y})$ is an $(\epsilon_0, \epsilon_1)$--thick-thin
decomposition for ${\overline x}_t$.
\subsection*{Condition (2)}
The size of a surface $Y \in \mathcal{Y}$ is the flat length of the shortest
essential curve in $Y$. Hence, we have
$\size_{q_t}(Y) \stackrel{{}_\ast}{\asymp} \size_{\, {\overline q}_t}(Y)$.
Now, \thmref{Length} implies that, for every curve $\gamma$ in $Y$,
if $\Ext_{x_t}(\gamma) \emul1$ then $\Ext_{{\overline x}_t}(\gamma) \emul1$
as well. But two curves of length one have bounded intersection numbers.
Hence, they have bounded projection to every subsurface $Z$. This means
$d_Z(\mu_t, {\overline \mu}_t)=O(1)$.
\subsection*{Condition (3)} For each $\alpha \in \mathcal{A}$, as we saw before,
$L_\alpha \stackrel{{}_\ast}{\asymp} \overline{L}_\alpha$ and $t_\alpha \stackrel{{}_+}{\asymp} \overline t_\alpha$.
We now show that $T_\alpha \stackrel{{}_+}{\asymp} \overline{T}_\alpha$.
Since the end points of $\mathcal{G}$ and $\overline \calG$ are close, we have
\begin{align*}
&d_\alpha(\mu_0, {\overline \mu}_0)=O(1)
&&\text{and}
&& d_\alpha(\mu_l, {\overline \mu}_l)=O(1).\\
\intertext{Also, from \corref{Ends} we have}
&d_\alpha(\mu_0, \lambda_-)=O(1), \quad
&& &&d_\alpha(\mu_l, \lambda_+)=O(1),\\
&d_\alpha({\overline \mu}_0, {\overline \lambda}_-)=O(1)
&&\text{and}
&&d_\alpha({\overline \mu}_l, {\overline \lambda}_+)=O(1).
\end{align*}
Hence, using the triangle inequality,
$$
T_\alpha=d_\alpha(\lambda_-, \lambda_+) \stackrel{{}_+}{\asymp} d_\alpha(\mu_0, \mu_l) \stackrel{{}_+}{\asymp}
d_\alpha({\overline \mu}_0, {\overline \mu}_l) \stackrel{{}_+}{\asymp} d_\alpha({\overline \lambda}_-, {\overline \lambda}_+)=
\overline{T}_\alpha.
$$
Now \eqnref{Modulus} implies
\begin{equation} \label{Eq:F-Equal}
\Mod_{x_t}(\sF_t) \stackrel{{}_\ast}{\asymp} \Mod_{{\overline x}_t}(\overline \sF_t).
\end{equation}
Also, as seen above, the sizes of all subsurfaces are comparable
in $q_t$ and ${\overline q}_t$. Therefore, by \thmref{Length}
$\Ext_{x_t}(\alpha) \stackrel{{}_\ast}{\asymp} \Ext_{{\overline x}_t}(\alpha)$.
\subsection*{Condition (4)}
We show that $\twist_\alpha(q_t, {\overline q}_t) \Ext_{x_t}(\alpha) \stackrel{{}_\ast}{\prec} 1$. Note that,
since $d_\alpha(\lambda_-, {\overline \lambda}_-)=O(1)$,
\begin{equation} \label{Eq:Difference}
\twist_\alpha(q_t, {\overline q}_t) \stackrel{{}_+}{\asymp}
|\twist_\alpha(q_t, \lambda_-) - \twist_\alpha({\overline q}_t, {\overline \lambda}_-)|.
\end{equation}
Denote $\twist_\alpha(q_t, \lambda_-)$ (as before) by $\twist_t(\alpha)$
and denote $\twist_\alpha({\overline q}_t, {\overline \lambda}_-)$ by ${\overline \twist}_t(\alpha)$.
We use \eqnref{Twist} and the facts $|t_\alpha-\overline t_\alpha|=O(1)$
and $|T_\alpha -\overline T_\alpha|=O(1)$ to estimate the right hand side
of \eqnref{Difference}.
If $t \stackrel{{}_+}{\prec} t_\alpha$ (and hence $ t \stackrel{{}_+}{\prec} \overline t_\alpha$), then
$$
\twist_t(\alpha) \stackrel{{}_\ast}{\prec} \frac{T_\alpha}{\cosh^2(t-t_\alpha)}
\qquad\text{and}\qquad
{\overline \twist}_t(\alpha) \stackrel{{}_\ast}{\prec} \frac{T_\alpha}{\cosh^2(t-t_\alpha)}.
$$
But $\Ext_t(\alpha) \stackrel{{}_\ast}{\prec} \frac 1 {\Mod(\sF_t)}$. Thus using \eqnref{Modulus}
$$
\Big|\twist_t(\alpha) - {\overline \twist}_t(\alpha) \Big| \Ext_t(\alpha)
\stackrel{{}_\ast}{\prec} \frac{T_\alpha}{\cosh^2(t-t_\alpha)} \frac{\cosh^2(t-t_\alpha)}{T_\alpha}
\stackrel{{}_\ast}{\prec} 1.
$$
If $t \stackrel{{}_+}{\succ} t_\alpha$, then
$$
T_\alpha - \twist_t(\alpha) \stackrel{{}_\ast}{\prec} \frac{T_\alpha}{\cosh^2(t-t_\alpha)}
\quad\text{and}\quad
T_\alpha - {\overline \twist}_t(\alpha) \stackrel{{}_\ast}{\prec} \frac{T_\alpha}{\cosh^2(t-t_\alpha)}.
$$
Hence, as before,
\begin{align*}
\Big|\twist_t(\alpha) - {\overline \twist}_t(\alpha)\Big| \Ext_t(\alpha)
& \stackrel{{}_\ast}{\prec}
\frac{\Big|(T_\alpha -\twist_t(\alpha)) - (T_\alpha -{\overline \twist}_t(\alpha))\Big|}
{\Mod_t(\alpha)}\\
& \stackrel{{}_\ast}{\prec} \frac{T_\alpha}{\cosh^2(t-t_\alpha)} \frac{\cosh^2(t-t_\alpha)}{T_\alpha}
\stackrel{{}_\ast}{\prec} 1.
\end{align*}
That is, the last condition in \corref{Bounded-Distance} holds and
$d_\mathcal{T}(x_t, {\overline x}_t)=O(1)$. This finishes the proof.
\end{proof}
We now construct the counterexample.
\begin{theorem} \label{Thm:Not-FL}
For every constant $\d > 0$, there are points
$x, y, {\overline x}$ and ${\overline y}$ in $\mathcal{T}(S)$ so that
$$
d_\mathcal{T}(x, {\overline x}) = O(1) \quad\text{and}\quad d_\mathcal{T}(y, {\overline y}) =O(1),
$$
and
$$
d_\mathcal{T}\big( [x,y], [{\overline x}, {\overline y}] \big) \stackrel{{}_\ast}{\succ} \d.
$$
\end{theorem}
\begin{proof}
For a given $\d$, we construct quadratic differentials $q_0$ and ${\overline q}_0$
with the following properties: Let $q_t$ be the image
of $q_0$ under the \Teich geodesic flow and let $x_t$ be the underlying
conformal structures of $q_t$. Let ${\overline q}_t$ and ${\overline x}_t$ be defined
similarly. We will show that
$$
d_\mathcal{T}(x_0, {\overline x}_0)= O(1), \qquad
d_\mathcal{T}(x_{2\d}, {\overline x}_{2\d})= O(1),
$$
and
$$
d_\mathcal{T}(x_\d, {\overline x}_\d) \stackrel{{}_\ast}{\succ} \d.
$$
This is sufficient to show that
$d_\mathcal{T} (x_\d,{\overline x}_{t})\stackrel{{}_\ast}{\succ} \d$ for any $t \in [0,2\d]$.
To see this, note that, for any $0\leq t<\d$, we have
$$
d_\mathcal{T}(x_\d, {\overline x}_t)+ d_\mathcal{T}({\overline x}_t, {\overline x}_0) \stackrel{{}_+}{\succ} \d
\quad\text{and}\quad
d_\mathcal{T}(x_\d, {\overline x}_t)+ d_\mathcal{T}({\overline x}_t, {\overline x}_\d) \stackrel{{}_+}{\succ} d_\mathcal{T}(x_\d, {\overline x}_\d).
$$
Summing up both sides, we get
$$
2d_\mathcal{T}(x_\d, {\overline x}_t) + d_\mathcal{T}({\overline x}_0, {\overline x}_\d) \stackrel{{}_+}{\succ} \d + d_\mathcal{T}(x_\d, {\overline x}_\d).
$$
Hence, since $d_\mathcal{T}({\overline x}_0, {\overline x}_\d)=\d$,
$$
2d_\mathcal{T}(x_\d, {\overline x}_t) \stackrel{{}_+}{\succ} d_\mathcal{T}(x_\d, {\overline x}_\d).
$$
A similar argument works for $\d < t \leq 2\d.$
Let $S$ be a surface of genus $2$, $\gamma$ be a separating curve in
$S$ and $Y$ and $Z$ be the components of $S \setminus \gamma$.
Consider a pseudo-Anosov map $\phi$ on a torus and choose a flat torus
$T$ on the axis of $\phi$ so that the vertical direction in $T$ matches the
unstable foliation of $\phi$. Cut open a slit in $T$ of size $\epsilon = c \, e^{-\d/2}$
and of angle $\pi/4$ (the constant $0<c<1$ is to be specified below).
Fix a homeomorphism from $Y$ to this slit torus
and call this marked flat surface $T_0$. Define
$$
T_t= \begin{bmatrix} e^t & 0 \\ 0 & e^{-t} \end{bmatrix} T_0.
$$
Note that $T_t$ is still a marked surface. The length of the slit
is minimal at $t=0$ and grows exponentially as $t \to \pm \infty$.
For $-\d/2 \leq t \leq \d/2$, the length of the slit is smaller than $c$,
while the length of the shortest essential curve in $T_t$ in this interval is comparable
to $1$. Hence, for $c$ small enough, $M_t(\gamma, Y) \geq m_0$
(see \secref{Restriction}) and $T_t$ looks like an isolated subsurface.
Now choose $\delta \ll \epsilon$
(specified below) and let $q_0$ be the quadratic differential defined by
gluing $T$ to $\delta \, T_{-\d/2}$. What we mean by this is
that we first scale down $T_{-\d/2}$ by a factor $\delta$. Then we cut open a slit
in $T$ of the same size and angle as the size of the slit in $\delta T_{-\d/2}$ and
then glue these two flat tori along this slit. Fixing a homeomorphism from $Z$ to
$T$ slit open, we obtain a marking for $q_0$ that is well defined up to
twisting around $\gamma$. Let $\mathcal{G} \from [0,2\d] \to \mathcal{T}(S)$ be the
\Teich geodesic segment defined by $q_0$.
Construct ${\overline q}_0$ in a similar fashion by gluing $T$ to $\delta \, T_{-3\d/2}$.
Now choose the marking map from $S$ to ${\overline q}_0$ so that $q_0$
and ${\overline q}_0$ have bounded relative twisting around $\gamma$.
Let $\overline \calG \from [0,2\d] \to \mathcal{T}(S)$ be the
\Teich geodesic segment defined by ${\overline q}_0$.
Recall that, for $-\d/2 \leq t \leq \d/2$, the subsurface $\delta T_t$ is isolated
(scaling by $\delta$ does not change the value of $M_t(\gamma, Y)$)
and by \thmref{Restriction} the projection of $T_t$ to the \Teich space of $Y$
fellow travels a \Teich geodesic. However, for $t>\d/2$ and $t<-\d/2$, the
projection to the curve complex of $Y$ changes by at most a bounded amount.
That is, the interval of isolation for $Y$ along $\mathcal{G}$, $I_Y =[0,\d]$ and
along $\overline \calG$, ${\overline I}_Y =[\d,2\d]$. In particular,
$$
d_Y(q_0, {\overline q}_0)=O(1) \quad\text{and}\quad d_Y(q_{2\d}, {\overline q}_{2\d})=O(1).
$$
Also, since no curve in $Y$ or $Z$ is ever short (the vertical and the horizontal
foliation in $Y$ and $Z$ are co-bounded), the twisting parameters around any
curves inside $Y$ or $Z$ are uniformly bounded. Projections of $q_0$ and
${\overline q}_0$ to $Z$ are identical and $\gamma$ is short in both $q_0$ and ${\overline q}_0$.
Therefore, to show $d_\mathcal{T}(x_0, {\overline x}_0)=O(1)$, it remains to show
(\corref{Bounded-Distance}) that the extremal lengths of
$\gamma$ in $x_0$ and ${\overline x}_0$ are comparable. We have
(\thmref{Length})
$$
\Ext_{x_0}(\gamma) \stackrel{{}_+}{\asymp} \log \frac 1\delta
\qquad\text{and}\qquad
\Ext_{{\overline x}_0}(\gamma) \stackrel{{}_+}{\asymp} \log \frac 1{e^\d \delta} = \log \frac 1\delta - \d.
$$
But these quantities are comparable for $\delta$ small enough. A similar
argument shows that $d_\mathcal{T}(x_{2\d}, {\overline x}_{2\d})=O(1)$.
Since $Y$ is isolated in $q_t$ for $0\leq t \leq \d$, the shadow
in $\AC(Y)$ is an unparametrized quasi-geodesic. In fact,
since no curve is short in $Y$ in that interval, the shadow is a parametrized
quasi-geodesic (\cite[Lemma 4.4]{rafi:CC}). That is
$$
d_{\mathcal{T}(Y)}(x_0, x_\d) \stackrel{{}_\ast}{\asymp} \d.
$$
But the interval of isolation for $Y$ along the geodesic $\overline \calG$ is
$[\d, 2\d]$. Therefore,
$$
d_Y({\overline x}_0, {\overline x}_\d) =O(1).
$$
As before, we have $d_Y(x_0, {\overline x}_0)=O(1)$. Hence
$$
d_Y(q_\d, {\overline q}_\d)\stackrel{{}_\ast}{\asymp} \d.
$$
Now, by \thmref{Distance}, we have
$$
d_\mathcal{T}(x_\d, {\overline x}_\d) \stackrel{{}_\ast}{\succ} d_Y(x_\d, {\overline x}_\d) \stackrel{{}_\ast}{\asymp} \d.
$$
This finishes the proof.
\end{proof}
\section{Thin triangles}
\label{Sec:Thin}
Let $x$, $y$ and $z$ be three points in $\mathcal{T}(S)$ and
let $\mathcal{G} \from [a,b] \to \mathcal{T}(S)$ be the \Teich geodesic
connecting $x$ to $y$. In this section we prove Theorem~\ref{Thm:Thin}
from the introduction.
\begin{theorem} \label{Thm:Thin-Triangle}
For every $\epsilon$, there are constants $C$ and $D$ so that the following holds.
Let $[c,d]$ be a subinterval of $[a,b]$ with $(d-c)>C$ so that for every $t \in [c,d]$,
$\mathcal{G}(t)$ is in the $\epsilon$--thick part of $\mathcal{T}(S)$.
Then, there is a $w \in [\mathcal{G}(c), \mathcal{G}(d)]$ such that
$$
\min \Big(d_\mathcal{T}\big(w, [x,z]\big), d_\mathcal{T}\big(w, [y,z]\big) \Big) \leq D.
$$
\end{theorem}
\begin{proof}
Consider the shadow map from $\mathcal{T}(S)$ to the arc and curve complex
$\AC(S)$ sending a point $x$ to its short marking $\mu_x$.
The geodesic triangle $\triangle(\mu_x,\mu_y,\mu_z)$ in the arc
and curve complex $\AC(S)$ is $\delta$--slim. Since the shadow of
$[x,y]$ is a quasi-geodesic (Theorem~\ref{Thm:Shadow})
for any $w \in [x,y]$, $\mu_w$ is $\delta$--close to the geodesic
$[\mu_x, \mu_y]$ in $\AC(S)$. That is, for every $w\in[x,y]$,
there is a Riemann surface $u$ in either $[x,z]$ or $[y,z]$ so that
$d_S(\mu_w, \mu_u)\leq 3\delta$.
The projection of $[\mathcal{G}(c), \mathcal{G}(d)]$ to $\AC(S)$ is in fact
a parametrized quasi-geodesic (\cite[Lemma 4.4]{rafi:CC}).
Hence, by making $C$ large, we can assume that the shadow of
$[\mathcal{G}(c), \mathcal{G}(d)]$ is as long as we like. Thus, we can choose
$w \in [\mathcal{G}(c), \mathcal{G}(d)]$ so that $\mu_w$ is far from either
the shadow of $[x,z]$ or the shadow of $[y,z]$.
To summarize, without loss of generality, we can assume that
there is a $w \in [\mathcal{G}(c), \mathcal{G}(d)]$ and a $u \in [x,z]$ so that
$d_S(\mu_w, \mu_u)=O(1)$ and that neither $\mu_u$ nor $\mu_w$
is in the $(10\delta)$--neighborhood of the geodesic $[\mu_y,\mu_z]$.
We claim that $u$ is in the thick part of \Teich space. Using
\thmref{M-Large} it is enough to show, for every
subsurface $Y$ whose boundaries are close to $\mu_u$ in $\AC(S)$, that
$d_Y(\mu_x, \mu_z)=O(1)$. Since $\mu_u$ is far away from
$[\mu_y, \mu_z]$, \thmref{Bounded} implies that
$d_Y(\mu_y, \mu_z)=O(1)$. To prove the claim, it then suffices, by the triangle inequality, to show that
\begin{equation} \label{d_Y}
d_Y(\mu_x, \mu_y)=O(1).
\end{equation}
We prove \eqref{d_Y} by contradiction. Assume $d_Y(\mu_x, \mu_y)$ is large.
By \thmref{M-Large}, $\partial Y$ is short at some point $v \in[x,y]$.
But the shadow of $[x,y]$ is a quasi-geodesic
and the shadow of $[\mathcal{G}(c), \mathcal{G}(d)]$ is a parametrized quasi-geodesic.
Hence, by choosing $C$ large enough, we can conclude that, for any
such subsurface, $d_S(\partial Y, \mu_w)\stackrel{{}_+}{\succ} d_S(\mu_v, \mu_w)$ is large.
This contradicts the fact that
$$
d_S(\partial Y, \mu_u)=O(1)
\quad\text{and}\quad d_S(\mu_u, \mu_w)=O(1).
$$
Hence, \eqref{d_Y} holds and thus $u$ is in the thick part of \Teich space.
We now claim, for any subsurface $Y\subset S$, that
$$
d_Y(\mu_u, \mu_w)=O(1).
$$
This is because any such subsurface $Y$ should appear near the
curve complex geodesic connecting $\mu_u$ and $\mu_w$ and hence
$\partial Y$ has a bounded distance from $\mu_w$ in $\AC(S)$. As before,
assuming $d_Y(\mu_x, \mu_y)$ is large will result in a contradiction.
Thus, $d_Y(\mu_x, \mu_y)=O(1)$. Since $\mu_u$ is far from the geodesic
$[\mu_y,\mu_z]$, the bounded projection theorem implies that
$d_Y(\mu_y, \mu_z)=O(1)$ and by the triangle inequality,
$d_Y(\mu_x, \mu_z)=O(1)$. On the other hand, by \thmref{No-Back-Tracking}
$$
d_Y(\mu_x, \mu_y)= O(1) \quad\Longrightarrow\quad d_Y(\mu_x, \mu_w)= O(1)
$$
and
$$
d_Y(\mu_x, \mu_z)= O(1) \quad\Longrightarrow\quad d_Y(\mu_x, \mu_u)= O(1).
$$
The triangle inequality implies $d_Y(\mu_w, \mu_u)= O(1)$. This proves the claim.
We have shown that $w$ and $u$ are both in the thick part and that all subsurface projections
between $\mu_u$ and $\mu_w$ are uniformly bounded.
\corref{Bounded-Distance} implies that $d_\mathcal{T}(u,w)=O(1)$.
\end{proof}
\bibliographystyle{alpha}
\section{Definitions and Preliminaries}
In the study of dynamical systems, the Artin-Mazur zeta function is the generating function for counting periodic points. For any set $X$ and map $f:X\to X$ it is a formal power series defined by
\begin{equation}
\zeta_f(X;t)=\exp\left( \sum_{n=1}^\infty \#(\text{Fix}(f^n))\frac{t^n}{n} \right).
\end{equation}
We use the convention that $f^n$ means $f$ composed with itself $n$ times, and that $\text{Fix}(f^n)$ denotes the set of fixed points of $f^n$. For $\zeta_f(X;t)$ to make sense as a formal power series we assume that $\#(\text{Fix}(f^n))<\infty$ for all $n$. The zeta function is also represented by the product formula
\begin{equation*}
\zeta_f(X;t)=\prod_{x\in \text{Per}(f,X)}(1-t^{p(x)})^{-1}
\end{equation*}
where Per$(f,X)$ is the set of periodic points of $f$ in $X$ and $p(x)$ is the least positive $n$ such that $f^n(x)=x$. This function was introduced by Artin and Mazur in the case where $X$ is a manifold and $f:X\to X$ is a diffeomorphism~\cite{AM}. In this context $\zeta_f(X;t)$ has been proved to be a rational function for certain classes of diffeomorphisms (e.g.~\cite{G,M}). This shows that in these cases the growth of $\#(\text{Fix}(f^n))$ is determined by the finitely many zeros and poles of $\zeta_f$. From this point onward we make the definition
\begin{equation*}
a_n=\#(\text{Fix}(f^n))
\end{equation*}
for economy of notation.
\newline
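To make the definition concrete, the coefficients of $\zeta_f$ can be computed from the counts $a_n$ by formal exponentiation of the series. The following sketch (ours, not part of the paper; the function name is ours) uses the standard recurrence $n\,b_n=\sum_{k=1}^n k\,c_k\,b_{n-k}$ for $B=\exp(C)$ with $c_k=a_k/k$; with $a_n=d^n$ it reproduces the geometric series of $1/(1-dt)$.

```python
# Sketch (ours, not from the paper): coefficients b_n of
# zeta_f = exp(sum_{n>=1} a_n t^n / n) from the fixed-point counts a_n,
# via the recurrence n*b_n = sum_{k=1}^n k*c_k*b_{n-k} for B = exp(C),
# where c_k = a_k / k.
from fractions import Fraction

def zeta_coeffs(a, terms):
    # a[n] = #Fix(f^n) for n = 1, ..., terms - 1; a[0] is ignored
    c = [Fraction(0)] + [Fraction(a[n], n) for n in range(1, terms)]
    b = [Fraction(1)]
    for n in range(1, terms):
        b.append(sum(k * c[k] * b[n - k] for k in range(1, n + 1)) / n)
    return b

d, terms = 2, 8
a = [0] + [d ** n for n in range(1, terms)]
# With a_n = d^n the coefficients come out as 1, d, d^2, ..., i.e. 1/(1 - d t)
assert zeta_coeffs(a, terms) == [Fraction(d ** n) for n in range(terms)]
```

The exact rational arithmetic of `Fraction` avoids floating-point error in the division by $n$.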
We are interested in the rationality of the zeta function in an algebraic context, motivated by the following example.\newline
\noindent\textbf{Example:} Let $X$ be a variety over $\ensuremath{{\mathbb{F}}}_p$ and let $f:X\to X$ be the Frobenius map, i.e. the $p$-th power map on coordinates. Fix$(f^n)$ is exactly the set of $\ensuremath{{\mathbb{F}}}_{p^n}$-valued points of $X$. Therefore $\zeta_f(X;t)$ is the Hasse-Weil zeta function of $X$, and is rational by Dwork's Theorem~\cite{Dwork}.\newline
We study a simple, yet interesting case: fix a prime $p$ and let $X=\ensuremath{{\mathbb{A}}}^1_{\ensuremath{{\mathbb{F}}}_p}$, the affine line over $\ensuremath{{\mathbb{F}}}_p$. Let $f\in\overline{\ensuremath{{\mathbb{F}}}}_p[x]$, let $d=\deg f$, and assume that $d\geq 2$. Consider the dynamical system defined by $f$ as a self-map of $\ensuremath{{\mathbb{A}}}^1(\overline{\ensuremath{{\mathbb{F}}}}_p)$. The points in Fix($f^n$) are the roots in $\overline{\ensuremath{{\mathbb{F}}}}_p$ of the degree $d^n$ polynomial $f^n(x)-x$ counted \emph{without multiplicity}, so $a_n\leq d^n$. If we consider $\zeta_f(t)$ as a function of a complex variable $t$, it converges to a holomorphic function on a disc around the origin of radius at least $d^{-1}$ (it is not clear whether $d^{-1}$ is the largest radius of convergence). Our motivating question is:
\begin{question}
For which $f\in\overline{\ensuremath{{\mathbb{F}}}}_p[x]$ is $\zeta_f(\overline{\ensuremath{{\mathbb{F}}}}_p;t)$ a rational function?
\end{question}
If we count periodic points with multiplicity, then $a_n=d^n$ for all $n$ and Question 1 becomes completely trivial by the calculation
\begin{equation}\label{rationalZeta}
\zeta_f(\overline{\ensuremath{{\mathbb{F}}}}_p;t)=\exp\left( \sum_{n=1}^\infty\frac{d^nt^n}{n}\right)=\exp(-\log(1-dt))=\frac{1}{1-dt},
\end{equation}
so we count each periodic point only once. A partial answer to our question is given by the following two theorems, which show that for some simple choices of $f$, $\zeta_f$ is not only irrational, but also not algebraic over $\ensuremath{{\mathbb{Q}}}(t)$.
\begin{thm}
If $f\in\overline{\ensuremath{{\mathbb{F}}}}_p[x^p]$, then $\zeta_f(\overline{\ensuremath{{\mathbb{F}}}}_p;t)\in\ensuremath{{\mathbb{Q}}}(t)$. In particular, if $p\mid m$, then $\zeta_{x^m}(\overline{\ensuremath{{\mathbb{F}}}}_p;t)\in\ensuremath{{\mathbb{Q}}}(t)$. If $p\nmid m$, then $\zeta_{x^m}(\overline{\ensuremath{{\mathbb{F}}}}_p;t)$ is transcendental over $\ensuremath{{\mathbb{Q}}}(t)$.
\end{thm}
\begin{thm}
If $a\in\ensuremath{{\mathbb{F}}}_{p^m}^\times$, $p$ odd and $m$ any positive integer, then $\zeta_{x^{p^m}+ax}(\overline{\ensuremath{{\mathbb{F}}}}_p;t)$ is transcendental over $\ensuremath{{\mathbb{Q}}}(t)$.
\end{thm}
Our strategy of proof depends heavily on the following two theorems. Their proofs, as well as a good introduction to the theory of finite automata and automatic sequences, can be found in~\cite{AS}.
\begin{thm}[Christol]
The formal power series $\sum_{n=0}^\infty b_n t^n$ in the ring $\ensuremath{{\mathbb{F}}}_p[[t]]$ is algebraic over $\ensuremath{{\mathbb{F}}}_p(t)$ iff its coefficient sequence $\{b_n\}$ is $p$-automatic.
\end{thm}
\begin{thm}[Cobham]
For $p$, $q$ multiplicatively independent positive integers (i.e. $\log p/\log q\notin\ensuremath{{\mathbb{Q}}}$), the sequence $\{b_n\}$ is both $p$-automatic and $q$-automatic iff it is eventually periodic.
\end{thm}
The following is an easy corollary to Christol's theorem which we will use repeatedly~\cite[Theorem 12.6.1]{AS}.
\begin{cor}\label{ChristolCor}
If $\sum_{n=0}^\infty b_n t^n\in\ensuremath{{\mathbb{Z}}}[[t]]$ is algebraic over $\ensuremath{{\mathbb{Q}}}(t)$, then the reduction of $\{b_n\}$ mod $p$ is $p$-automatic for every prime $p$.
\end{cor}
We note that Corollary \ref{ChristolCor} will be applied to the logarithmic derivative $\zeta_f'/\zeta_f=\sum_{n=1}^\infty a_n t^{n-1}$, rather than to $\zeta_f$.\newline
Throughout this paper we use $v_p$ to mean the usual $p$-adic valuation, that is, $v_p(a/b)=\text{ord}_p(a)-\text{ord}_p(b)$. We use $(n)_p$ as in \cite{AS} to signify the base-$p$ representation of the integer $n$, and we denote the multiplicative order of $a$ mod $n$ by $o(a,n)$, assuming that $a$ and $n$ are coprime integers.\newline
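For readers who want to experiment, these three conventions translate directly into short routines. The sketch below is ours, not part of the paper; all function names are ours, and $o(a,n)$ assumes $n>1$ and $\gcd(a,n)=1$.

```python
# Helper sketch (ours) for the notation v_p(n), (n)_p, and o(a, n).
def v(p, n):
    """p-adic valuation v_p(n) of a nonzero integer n."""
    n, k = abs(n), 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def base_p(n, p):
    """(n)_p: base-p digit string of n >= 0, most significant digit first."""
    digits = []
    while n:
        digits.append(str(n % p))
        n //= p
    return "".join(reversed(digits)) or "0"

def o(a, n):
    """Multiplicative order o(a, n) of a mod n, assuming gcd(a, n) = 1, n > 1."""
    x, k = a % n, 1
    while x != 1:
        x = x * a % n
        k += 1
    return k
```

For example, `v(2, 48)` is `4`, `base_p(10, 2)` is `"1010"`, and `o(2, 7)` is `3`.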
\section{Proof of Theorem 1}
\begin{proof}
Let $f(x)\in\overline{\ensuremath{{\mathbb{F}}}}_p[x^p]$, so that $f'(x)=0$ identically. Then $f^n(x)-x$ has derivative $(f^n(x)-x)'=-1$, so it has distinct roots over $\overline{\ensuremath{{\mathbb{F}}}}_p$. Therefore $a_n=(\deg f)^n$ and $\zeta_f(\overline{\ensuremath{{\mathbb{F}}}}_p;t)$ is rational as in equation (\ref{rationalZeta}).\newline
Now suppose $f(x)=x^m$ where $p\nmid m$. Assume by way of contradiction that $\zeta_f$ is algebraic over $\ensuremath{{\mathbb{Q}}}(t)$. The derivative $\zeta_f'=d\zeta_f/dt$ is algebraic, which can be shown by writing the polynomial equation that $\zeta_f$ satisfies and applying implicit differentiation. Hence $\zeta_f'/\zeta_f$ is algebraic. We have
\begin{equation*}
\zeta_f'/\zeta_f=(\log\zeta_f)'=\sum_{n=1}^\infty a_n t^{n-1}
\end{equation*}
so in particular, $\zeta_f'/\zeta_f\in\ensuremath{{\mathbb{Z}}}[[t]]$. By Corollary \ref{ChristolCor}, for every prime $q$ the reduced sequence $\{a_n\}$ mod $q$ is $q$-automatic.\newline
First we count the roots of $f^n(x)-x=x^{m^n}-x=x(x^{m^n-1}-1)$ in $\overline{\ensuremath{{\mathbb{F}}}}_p$. There is one root at zero, and we write $m^n-1=p^ab$, where $p\nmid b$, so
\begin{equation*}
x^{m^n-1}-1=x^{p^ab}-1=(x^b-1)^{p^a}.
\end{equation*}
The polynomial $x^b-1$ has derivative $bx^{b-1}$, and $(x^b-1,bx^{b-1})=1$, so $x^b-1$ has exactly $b$ roots in $\overline{\ensuremath{{\mathbb{F}}}}_p$, as does $x^{m^n}-1$. Therefore
\begin{equation}\label{a_nCalculation}
a_n=1+\frac{m^n-1}{p^{v_p(m^n-1)}}.
\end{equation}
Now we need to reduce mod some carefully chosen prime $q$. There are two cases to consider, depending on whether $p=2$.\newline
\noindent\underline{Case 1:} If $p=2$, let $q$ be a prime dividing $m$, $q\neq 2$. There is such a prime because $m>1$ and $2\nmid m$. Let $r = 2^{-1}$ in $\mathbb{F}_q$. Reducing mod $q$,
\begin{equation}\label{a_nCase1}
a_n= 1+ \frac{m^n-1}{2^{v_2(m^n-1)}}\equiv 1-r^{v_2(m^n-1)}\pmod{q}.
\end{equation}
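This congruence is easy to confirm numerically. The following check is ours, not part of the paper; the sample values $m=15$, $q=5$ are our choice, and $r=2^{-1}$ in $\mathbb{F}_q$ is computed via Fermat's little theorem.

```python
# Numerical check (ours): for p = 2, odd m, prime q | m with q != 2, and
# r = 2^{-1} in F_q, the count a_n = 1 + (m^n - 1)/2^{v_2(m^n - 1)}
# satisfies a_n == 1 - r^{v_2(m^n - 1)} (mod q).
def v2(n):
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

m, q = 15, 5          # sample values: q divides m and q != 2
r = pow(2, q - 2, q)  # 2^{-1} mod q by Fermat's little theorem
for n in range(1, 40):
    e = v2(m ** n - 1)
    a_n = 1 + (m ** n - 1) // 2 ** e
    assert a_n % q == (1 - pow(r, e, q)) % q
print("congruence holds for n = 1..39")
```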
The subsequence $\{a_{2n}\}$ reduced mod $q$ is $q$-automatic because subsequences of automatic sequences indexed by arithmetic progressions are automatic~\cite[Theorem 6.8.1]{AS}. We define the sequence $\{b_n\}$ as
\begin{equation*}
b_n=-(a_{2n}-1).
\end{equation*}
The sequence $\{b_n\}$ is $q$-automatic, because subtracting $1$ and multiplying by $-1$ simply permute the elements of $\mathbb{F}_q$. We have $b_n=r^{v_2(m^{2n}-1)}$ by (\ref{a_nCase1}). To proceed, we need the following proposition.\newline
\begin{prop}\label{v_pFormula}
\begin{enumerate}[i.]
\item For any $n,m\in\mathbb{N}$, $m$ odd,
$$
v_2(m^{2n}-1)=v_2(n)+v_2(m^2-1).
$$
\item If $p$ is an odd prime and $n,m\in\mathbb{N}$, $p\nmid m$, then
$$
v_p(m^{(p-1)n}-1)=v_p(n)+v_p(m^{p-1}-1).
$$
\end{enumerate}
\end{prop}
\begin{proof}
The proof is an elementary consequence of the structure of the unit group $(\ensuremath{{\mathbb{Z}}}/p^n\ensuremath{{\mathbb{Z}}})^\times$, see for example \cite{Le}, and is omitted.
\end{proof}
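Both parts of the proposition are also easy to confirm by brute force on small cases; the script below is our verification, not part of the paper, and the tested ranges are our choice.

```python
# Brute-force check (ours) of Proposition parts (i) and (ii) on small cases.
def v(p, n):
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

# (i): p = 2, m odd
for m in (3, 5, 7, 9, 15, 21):
    for n in range(1, 65):
        assert v(2, m ** (2 * n) - 1) == v(2, n) + v(2, m * m - 1)

# (ii): odd prime p, p not dividing m
for p in (3, 5, 7):
    for m in range(2, 12):
        if m % p == 0:
            continue
        for n in range(1, 50):
            assert v(p, m ** ((p - 1) * n) - 1) == v(p, n) + v(p, m ** (p - 1) - 1)
print("proposition verified on all tested cases")
```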
By Proposition ~\ref{v_pFormula},
\begin{equation}\label{b_n}
b_n=r^{v_2(n)+v_2(m^2-1)}.
\end{equation} Let $d=o(r,q)$, the multiplicative order of $r$ in $\mathbb{F}_q$, and note that $d>1$ because $r\neq 1$. We see that $b_n$ is a function of $v_2(n)$ reduced mod $d$, and $v_2(n)$ is simply the number of leading zeros of $(n)_2$ (if we read the least significant digit first).
\begin{lem}\label{automatic}
If $\beta_n$ is a function of the equivalence class mod $d$ of $v_p(n)$, then the sequence $\{\beta_n\}$ is $p$-automatic.
\end{lem}
\begin{proof}
We can build a finite automaton (with output) whose output depends on the equivalence class mod $d$ of the number of initial zeros of a string, as in Figure 1 for $d=4$. There are $d$ states arranged in a circle (the $q_i$ in the figure), reading a zero moves from one of these states to the next, and reading any other symbol moves to a final state (the $r_i$) marked with the corresponding output. Therefore $\beta_n$ is $p$-automatic.
\end{proof}
\begin{figure}[htb]
\includegraphics[scale=0.65]{CroppedAutomaton.pdf}
\caption{State $q_0$ is initial. States $q_i$ and $r_i$ are reached after processing $i\bmod{4}$ leading zeroes.}
\end{figure}
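The machine of Lemma \ref{automatic} is small enough to simulate directly. The sketch below is ours, not the paper's figure: reading the base-$p$ digits of $n$ least-significant-digit first, the states $q_0,\dots,q_{d-1}$ count the initial zeros mod $d$, and the first nonzero digit moves the machine to the final state whose output is emitted.

```python
# Direct simulation (ours) of the automaton in the proof of the lemma.
def run_automaton(n, p, d, output):
    state, counting = 0, True
    while n:
        digit, n = n % p, n // p
        if counting and digit == 0:
            state = (state + 1) % d      # q_i -> q_{(i+1) mod d}
        elif counting:
            counting = False             # jump to final state r_state
    return output[state]

# The output agrees with a direct computation of v_p(n) mod d:
def v(p, n):
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

p, d, labels = 2, 4, ["a", "b", "c", "d"]
assert all(run_automaton(n, p, d, labels) == labels[v(p, n) % d]
           for n in range(1, 1000))
```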
By Lemma \ref{automatic}, $\{b_n\}$ is 2-automatic. It is also $q$-automatic, so by Cobham's theorem $\{b_n\}$ is eventually periodic of period $k$. For some large $n$, we have $b_{nk}=b_{nk+k}=b_{nk+2k}=\dots=b_{(n+a)k}$ for any positive integer $a$. This means that $b_{Nk}=b_{nk}$ for all $N>n$. By equation (\ref{b_n}),
\begin{equation*}
r^{v_2(Nk)+v_2(m^2-1)}=r^{v_2(nk)+v_2(m^2-1)}
\end{equation*}
which means $v_2(Nk)\equiv v_2(nk)\pmod{d}$ and so $v_2(N)\equiv v_2(n)\pmod{d}$ for all $N>n$. This is a contradiction, as $d>1$.\newline
\noindent\underline{Case 2:} If $p>2$, we pick some prime $q>m^{p-1}$ such that $q\not\equiv 1\pmod{p}$ (for example we can choose $q\equiv 2\pmod{p}$ by Dirichlet's theorem on primes in arithmetic progressions). Clearly $q\nmid m$, so $m^{q-1}\equiv 1\pmod{q}$. Let $r= p^{-1}$ in $\mathbb{F}_q$. The sequence $\{a_n\}$ is as in equation (\ref{a_nCalculation}). We take the subsequence $a_{(p-1)((q-1)n+1)}$ and reduce it mod $q$. This subsequence is $q$-automatic. We compute
\begin{align*}
a_{(p-1)((q-1)n+1)} & = 1 + \frac{m^{(p-1)((q-1)n+1)}-1}{p^{v_p(m^{(p-1)((q-1)n+1)}-1)}} = 1 + \frac{(m^{q-1})^{(p-1)n}m^{p-1}-1}{p^{v_p(m^{(p-1)((q-1)n+1)}-1)}}\\
& \equiv 1 + (m^{p-1}-1)r^{v_p(m^{(p-1)((q-1)n+1)}-1)}\pmod{q}.
\end{align*}
As $m^{p-1}-1<q$ we can invert $m^{p-1}-1$ mod $q$. If we subtract 1 and multiply by $(m^{p-1}-1)^{-1}$ as in Case 1, we get $$b_n=r^{v_p(m^{(p-1)((q-1)n+1)}-1)}$$ which is $q$-automatic.\newline
By Proposition \ref{v_pFormula}, $b_n=r^{v_p((q-1)n+1)+v_p(m^{p-1}-1)}$. Let $d=o(r,q)$, noting that $d>1$. Let
\begin{equation*}
Y=\{n\in\mathbb{N}: v_p((q-1)n+1)\equiv 0\pmod{d}\}.
\end{equation*}
$Y$ is the fiber of $\{b_n\}$ over $r^{v_p(m^{p-1}-1)}$ and is therefore a $q$-automatic set (i.e. its characteristic sequence is $q$-automatic). We argue that $Y$ is $p$-automatic.\newline
Consider a finite-state transducer $T$ on strings over $\{0,\dots,p-1\}$ such that $T((n)_p)=((q-1)n+1)_p$. On strings with no leading zeros, $T$ is one-to-one. Let $L$ be the set of base-$p$ strings $(n)_p$ such that $n\in Y$. Then
\begin{equation*}
T(L)=\{(n)_p: n\equiv 1\pmod{q-1}\hspace{.2in}\text{and}\hspace{.2in} v_p(n)\equiv 0\pmod{d}\}.
\end{equation*}
$T(L)$ is a regular language, as both of its defining conditions can be recognized by a finite automaton (for the second condition, this follows from Lemma \ref{automatic}). Therefore $T^{-1}(T(L))=L$ is regular, that is, the characteristic sequence of $Y$ is $p$-automatic. We use Cobham's theorem again to conclude that the characteristic sequence of $Y$ is eventually periodic.\newline
Let $\{y_n\}$ be the characteristic sequence of $Y$
$$y_n= \left\{
\begin{array}{lr}
1 & : n \in Y\\
0 & : n \notin Y
\end{array}
\right.$$
and let $k$ be its (eventual) period. Write $k$ as $k=Mp^N$, where $p\nmid M$ (it is possible that $N=0$). As $q\not\equiv 1\pmod{p}$, $q-1$ is invertible modulo powers of $p$, so we can solve the following equation for $n$.
\begin{equation}\label{Case2Eq1}
(q-1)n\equiv -1 + p^{dN}\pmod{p^{dN+2}}
\end{equation}
Any $n$ that solves this equation satisfies $v_p((q-1)n+1)=dN$ and so $y_n=1$. Choose a large enough solution $n$ so that $\{y_n\}$ is periodic at $n$. We can solve the following equation for $a$, and choose such an $a$ to be positive.
\begin{equation}\label{Case2Eq2}
(q-1)aM\equiv p^{(d-1)N}(p-1)\pmod{p^{dN+2}}
\end{equation}
Multiplying (\ref{Case2Eq2}) by $p^N$ gives
\begin{equation}\label{Case2Eq3}
(q-1)ak \equiv p^{dN+1}-p^{dN} \pmod{p^{dN+2}}.
\end{equation}
Adding (\ref{Case2Eq1}) and (\ref{Case2Eq3}) gives
\begin{equation*}
(q-1)(n+ak)\equiv -1 + p^{dN+1}\pmod{p^{dN+2}}
\end{equation*}
from which we conclude $v_p((q-1)(n+ak)+1)=dN+1$. So $y_{n+ak}=0$. But $y_n=y_{n+ak}$ by periodicity, which is a contradiction.
\end{proof}
\section{Proof of Theorem 2}
\begin{proof}
Let $f(x)=x^{p^m}+ax$ for $a\in\mathbb{F}_{p^m}^\times$, $p$ odd. First we compute $f^n(x)$.
\begin{prop}
$f^n(x)=\sum_{k=0}^n {n\choose k} x^{p^{km}}a^{n-k}$
\end{prop}
\begin{proof}
Let $\phi(x)=x^{p^m}$ and $a(x)=ax$, so $f=\phi+a$. Both $\phi$ and $a$ are additive polynomials (they distribute over addition) and they commute, so the proof is simply the binomial theorem applied to $(\phi+a)^n$.
\end{proof}
Assume that $\zeta_f$ is algebraic. By Corollary ~\ref{ChristolCor}, the sequence $\{a_n\}$ reduced mod $q$ is $q$-automatic for every prime $q$, as is the subsequence $\{a_{(p^m-1)n}\}$ by previous remarks. Now we need to compute $a_n$ when $p^m-1$ divides $n$.
\begin{prop}
If $p^m-1$ divides $n$, then $a_{n}=p^{(n-p^{v_p(n)})m}$.
\end{prop}
\begin{proof}
The coefficient on $x$ in $f^{n}(x)$ is a power of $a^{p^m-1}=1$. Let $l$ be the smallest positive integer such that ${n\choose l}\not\equiv 0\pmod{p}$. Then
\begin{equation*}
f^{n}(x)-x=\sum_{k=l}^{n} {n\choose k} x^{p^{km}}a^{n-k}
=\left(\sum_{k=l}^{n}{n\choose k} x^{p^{(k-l)m}}(a^{n-k})^{p^{-l}}\right)^{p^l},
\end{equation*}
where raising to the $p^{-l}$ power means applying the inverse of the Frobenius automorphism $l$ times. Let $g(x)=\sum_{k=l}^{n}{n\choose k} x^{p^{(k-l)m}}(a^{n-k})^{p^{-l}}$. The derivative $g'(x)=(a^{n-l})^{p^{-l}}$ is nonzero, so $g(x)$ has $p^{(n-l)m}$ distinct roots over $\overline{\ensuremath{{\mathbb{F}}}}_p$, as does $f^n(x)-x$. So $a_n=p^{(n-l)m}$.\newline
Kummer's classic theorem on binomial coefficients mod $p$ says that $v_p({n\choose l})$ equals the number of borrows involved in subtracting $l$ from $n$ in base $p$~\cite{K}. It is clear that the smallest integer $l$ that results in no borrows in this subtraction is $l=p^{v_p(n)}$, and we are done.\end{proof}
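This consequence of Kummer's theorem is simple to test; the check below is ours, not part of the paper, and assumes Python 3.8+ for `math.comb`.

```python
# Quick check (ours): the smallest l >= 1 with binom(n, l) not divisible
# by p is l = p^{v_p(n)}, as claimed via Kummer's theorem.
from math import comb

def v(p, n):
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

for p in (2, 3, 5):
    for n in range(1, 300):
        l = next(l for l in range(1, n + 1) if comb(n, l) % p != 0)
        assert l == p ** v(p, n)
print("smallest nonvanishing binomial index matches p^{v_p(n)}")
```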
Let $q>p$ be a prime to be determined and let $r=p^{-1}$ in $\mathbb{F}_q$. The sequence given by $b_n=r^{(p^m-1)nm}$ is eventually periodic and so is $q$-automatic. Let $c_n=a_{(p^m-1)n}b_n$. By ~\cite[Corollary 5.4.5]{AS} the product of $q$-automatic sequences over $\mathbb{F}_q$ is $q$-automatic, so $c_n$ is $q$-automatic. So
\begin{align*}
c_n & =a_{(p^m-1)n}b_n=p^{((p^m-1)n-p^{v_p((p^m-1)n)})m}r^{(p^m-1)nm}\\
& =(p^{-1})^{p^{(v_p(p^m-1)+v_p(n)})m}=(r^m)^{p^{v_p(n)}}.
\end{align*}
Choose $q>p^{mp}$ such that $q\equiv 2\pmod{p^m}$. Note that $o(r^m,q)$ divides $q-1$, so $o(r^m,q)\not\equiv 0\pmod{p}$ and $p$ is invertible mod $o(r^m,q)$. The value of $c_n$ depends only on $p^{v_p(n)}$ reduced mod $o(r^m,q)$, which in turn is a function of $v_p(n)$ mod $o(p,o(r^m,q))$, so $c_n$ is $p$-automatic by Lemma ~\ref{automatic}.\newline
By Cobham's Theorem $c_n$ is eventually periodic, so the set
\begin{align*}
Y & =\{n\in\mathbb{N}:c_n=r^m\}\\
& =\{n\in\mathbb{N}:p^{v_p(n)}\equiv 1\pmod{o(r^m,q)}\}\\
& =\{n\in\mathbb{N}:v_p(n)\equiv 0\pmod{o(p,o(r^m,q))}\}
\end{align*}has an eventually periodic characteristic sequence $\{y_n\}$. Essentially the same argument as in Theorem 1, Case 2 shows this is a contradiction when $o(p,o(r^m,q))>1$. We sketch the argument for completeness.\newline
As we chose $q>p^{mp}$, $o(r^m,q)=o(p^m,q)>p$, and $o(p,o(r^m,q))>1$. Let $d=o(p,o(r^m,q))$, and let $k=Mp^N$ be the eventual period of $Y$, where $p\nmid M$. We can solve
\begin{equation}\label{Thm2Eq1}
n\equiv p^{dN}\pmod{p^{dN+2}}
\end{equation}
\begin{equation}\label{Thm2Eq2}
aM\equiv p^{(d-1)N}(p-1)\pmod{p^{dN+2}}
\end{equation}
for large $n$ and positive $a$, so $y_n=1$. Adding (\ref{Thm2Eq1}) and $p^N$ times (\ref{Thm2Eq2}) gives
\begin{equation*}
n+ak\equiv p^{dN+1}\pmod{p^{dN+2}}
\end{equation*}
from which we conclude $v_p(n+ak)=dN+1$, so $y_{n+ak}=0$, contradicting periodicity of $\{y_n\}$. This contradiction shows that $\zeta_f$ is transcendental.
\end{proof}
\section{Concluding Remarks}
The polynomial maps in Theorems 1 and 2 are homomorphisms of the multiplicative and additive groups of $\overline{\ensuremath{{\mathbb{F}}}}_p$, respectively. It should be possible to prove similar theorems for other maps associated to homomorphisms, e.g. Chebyshev polynomials, general additive polynomials, and Latt\`es maps on $\P^1(\overline{\ensuremath{{\mathbb{F}}}}_p)$. See ~\cite{Silverman} for a discussion of special properties of these maps.\newline
It is more difficult to study the rationality or transcendence of $\zeta_f$ when the map $f$ has no obvious structure. For example, there is a standard heuristic that the map $f(x)=x^2+1$ behaves like a random mapping on a finite field of odd order (see ~\cite{Bach}, ~\cite{Pollard}, ~\cite{Silverman2} and many others). We conclude with the following tantalizing question without hazarding a guess as to the answer.
\begin{question}
For $p$ odd and $f=x^2+1$, is $\zeta_f(\overline{\ensuremath{{\mathbb{F}}}}_p;t)$ in $\ensuremath{{\mathbb{Q}}}(t)$?
\end{question}
\subsection*{Acknowledgements}
This research was partly supported by NSF grant no. CCF-0635355. The author wishes to thank Eric Bach for many helpful suggestions and comments, Jeff Shallit for useful clarifications, and an anonymous referee for helpful remarks on style and presentation.
\section{Introduction}
The experimental programs at Super Proton Synchrotron (SPS) at CERN,
Relativistic Heavy Ion Collider (RHIC) at BNL and Large Hadron
Collider (LHC) at CERN open a window onto the properties of
Quantum Chromodynamics (QCD) at high temperatures in the guise of the
quark-gluon plasma (QGP). Following the conjecture of Matsui and
Satz~\cite{Matsui:86}, there has been considerable interest in studying
the properties of quarkonia at finite temperature.
Only recently has the inherent hierarchy of scales
in heavy quark bound systems ($m_Q \gg m_Q v \gg m_Q v^2$)
made it possible to derive a sequence of
effective field theories (EFTs) from the underlying theory, QCD, {\em namely}
non-relativistic QCD (NRQCD) and potential NRQCD (pNRQCD), by
integrating out the successive scales in the system.
The heavy quark bound states are described by the singlet
and octet potentials through the matching coefficients
in the effective Lagrangian, which can be extended to
finite temperature~\cite{Brambilla:2008cx}
with the additional thermal scales $T$, $gT$, $g^2T$, etc.
The thermal corrections to the real
and imaginary parts of the singlet potential are manifested
as the Debye screening~\cite{Matsui:86} and the Landau damping~
\cite{Beraudo:2007ky,Laine:2007qy}, respectively.
On the other hand, the non-EFT approach defines the potential
from the late-time behavior of a Wilson
loop~\cite{Brambilla:2008cx,Wilson:1974sk,Makeenko,Berges,Barchielli:NPB296,Rothkopf:MPL28,Laine:2006ns}.
However, at finite temperature, the Wilson loop depends on imaginary time and
the analytic continuation in the large real-time limit gives
the (complex) potential~\cite{Brambilla:2008cx,Laine:2006ns},
whose imaginary part is manifested as Landau damping \cite{Beraudo:2007ky}.
The separation of thermal scales in the EFT
is not evident, and one needs lattice techniques to test
the approach, where the dissociation of the quarkonium
states can be studied even without potential models;
rather, the physics of a given quarkonium state is encoded
in its spectral function in terms of the Euclidean
meson correlation functions~\cite{Karsch,Mocsy05-08,Wong05,Cabrera07,
Alberico08,Forcrand}.
However, the reconstruction of the spectral functions from the lattice meson
correlators turns out to be very difficult. At finite temperature the
situation becomes worse because the temporal extent decreases, and
for the bottomonium states it is worst of all; this
inadvertently supports the use of potential models at finite
temperature to complement the lattice studies.
The physical picture of quarkonium dissociation
has evolved over the years: the properties of thermally
produced heavy quarkonium
states can be observed through the energy spectrum of their decay products
~\cite{Schenke,Martinez}. Thus the disappearance of the
resonance peak hints at the dissolution of the state.
Physically, a resonance dissolves in the medium through
the broadening of its width. In the EFT framework,
when the binding energy is large compared to the temperature, the
resonances acquire a finite width
due to interactions with ultra-soft gluons, causing the singlet-to-octet
transitions~\cite{Brambilla:2008cx}. This picture is relevant
for the $\Upsilon$(1S) suppression at the LHC.
But when the binding energy is smaller than any of the
above thermal scales, the potential acquires an imaginary
component~\cite{Brambilla:2008cx} (Landau Damping), which induces
a thermal width. However, beyond leading order the above-mentioned
processes become entangled.
On the other hand, in a non-EFT framework, the width arises either
when a bound state absorbs a hard gluon or a light parton of the medium
scatters off the bound state by exchanging a space-like gluon.
The (heavy) quark and antiquark ($Q \bar Q$) pairs are produced in
heavy ion collisions on a very short time-scale ($\sim 1/2m_{Q}$),
when the initial state effects on the parton densities
(shadowing)~\cite{vogt:PRC812010}, the initial state
energy loss~\cite{vogt:PRC612000}, the intrinsic heavy flavors, and the final
state absorption on nucleons~\cite{vogt:PRC812010,sgavin:PRL681992} etc.
could affect the production mechanism significantly.
The shadowing and absorption are important at mid rapidity
whereas the (initial-state) energy loss and intrinsic heavy
flavor are important at forward rapidity.
As time elapses, the resonances are formed over a
formation time $\tau_F$ and traverse the plasma and then the
hadronic matter before leaving the interacting system to decay
into a dilepton. This long `trek' inside the interacting system is
a cliffhanger for the pair. By the time the resonance
is formed, either the screening of the color force~\cite{Matsui:86}
or an energetic gluon~\cite{xu,gdiss1-3}, or even a comoving hadron~\cite{vogt},
could dissociate the resonance(s).
Since the expansion of the matter produced in heavy-ion collisions
proceeds through the successive stages of pre-equilibrium (anisotropic)
and equilibrium (isotropic) phases, a study of
quarkonium production is poised to provide a wealth
of information about the evolution of the plasma and its
in-medium properties.
In the early days of collider experiments at SPS and RHIC, most of the
interest was focused on the suppression of $c \bar c$ bound
states \cite{Matsui:86,Karsch:1987pv}, but several
observations are yet to be understood, {\em namely}
that the suppression of $\psi$(1S) does not increase from SPS to RHIC
even though the centre-of-mass energy is increased fifteenfold. The
heavy-ion program at the LHC
may resolve those puzzles because the beam energy and luminosity are
roughly ten times those
of RHIC. Moreover, the CMS detector has excellent capabilities
for muon detection and provides measurements of $\psi$(2S) and the
$\Upsilon$ family, which enable a quantitative analysis of quarkonia.
The interest has therefore shifted to the bottomonium
states at LHC energies, for the following reasons:
i) the initial-state effects on bottomonium production are much
smaller than on charmonium production; ii) since
bottomonium is much heavier than charmonium, competition
from recombination is unlikely;
iii) since the bottom quark is heavier, it
can be treated efficiently in the potential approach;
iv) although the $\Upsilon$ states have diverse
binding energies, their similar decay
kinematics and production mechanisms
make it possible to measure their relative suppression unambiguously.
The works described above were limited to an isotropic medium, but
the system produced in relativistic heavy-ion collisions
may not be homogeneous and isotropic: at the early stages of the
collision, the asymptotic weak coupling enhances the longitudinal expansion
substantially more than the radial expansion. There have been significant
advances in the dynamical models
used to simulate plasma evolution with
momentum-space anisotropies
in full (3+1)-dimensional simulations
\cite{Martinez:2010sd-12tu,Ryblewski:2010bs,
Ryblewski:2012rr,Florkowski:2010cf}. In recent years,
the effects of anisotropy on the quarkonia states
have been extensively investigated~\cite{Dumitru:2007hy-09fy-09ni,
Burnier:2009yu} through the leading anisotropic
correction to the perturbative term of the potential alone,
and it was found that the anisotropy can have a significant impact
on quarkonium suppression.
However, in the experimentally relevant regime of temperature
(just above the crossover or transition temperature), theoretical
predictions based on high-temperature methods, such as HTL perturbation
theory, might be modified by the remnants of the non-perturbative
confining force~\cite{prc_vineet}.
Although direct lattice-QCD determinations of the
potential have progressed a lot, a model potential for the phenomenological
description of heavy-quarkonium suppression would indeed be
quite useful. This is one of the main goals of the present study,
in which we argue for the medium modification of the full Cornell potential
as an appropriate potential for heavy quarkonium at finite temperature.
Recently we investigated the properties of quarkonia states through
the medium modifications to both the perturbative and nonperturbative
terms of the $Q \bar Q$ potential~\cite{lata:arxiv2013} in the presence of a not necessarily isotropic QCD
medium, with two main observations. First,
including the confining string term in addition to the Coulomb
term makes both the real and imaginary parts of the potential
stronger, compared to the medium correction of the perturbative term
of the potential alone~\cite{thakur:PRD2013}.
Since the imaginary part contributes to the width ($\Gamma$) of quarkonium
bound states~\cite{Beraudo:2007ky,Laine:2007qy,Laine:2006ns}, which
in turn determines the dissociation temperatures,
these cumulative effects of the remnant nonperturbative term
dissociate the quarkonia states at higher temperatures.
Second, the anisotropy makes the
real part of the potential stronger while the imaginary part becomes
weaker; overall, the anisotropy causes the quarkonia to dissociate
at higher temperatures than in an isotropic medium.
In the present work, we continue with our model
potential~\cite{lata:arxiv2013} and numerically obtain
the dissociation temperatures of the ground and the excited states
of the $\Upsilon$ family (which was not done earlier
in~\cite{lata:arxiv2013}). With this understanding of the quarkonia
states in a static (an)isotropic medium, we move on to study the dynamical
(sequential) suppression of the bottomonium states in
nucleus-nucleus collisions at LHC energies.
We found that the local-equilibrium hydrodynamic regime alone
may not be sufficient to suppress the bottomonium states adequately,
and some additional (pre-equilibrium) time zone of plasma evolution
needs to be scanned, which seems plausible both theoretically
and experimentally. The unique features of
the bottomonium (1S) state, {\em namely} its tiny formation time and
large binding energy, make it possible to probe both
the (pre-equilibrium) era prior to the isotropization time
and the shear viscosity-to-entropy ratio.
Our work is organized as follows. In Section~2, we
revisit the anisotropic corrections to the retarded, advanced and
symmetric gluon self-energies and the corresponding (static) propagators
in hard-thermal-loop perturbation theory~\cite{Carrington:PRD582009} and then study
the in-medium properties
of the quarkonium states through the resulting complex potential (in Sections~2.1
and 2.2, respectively). In Section~2.3, we study the dissociation
in a complex potential by obtaining the real and imaginary parts of
the binding energies and
calculate the dissociation temperatures of the ground and excited
$b \bar b$ states.
In Section~3 we switch to an expanding
medium, which passes through successive pre-equilibrium
and equilibrium eras, and
study the survival of the bottomonium states by coupling the in-medium
dissociation with the dynamics of the expansion.
We find that our model explains the CMS data~\cite{CMS:2012} reasonably
well, apart from the uncertainties arising from various initial-state effects,
which are however expected to be very small for the bottomonium states
at the LHC. Finally, we conclude in Section~4.
\section{Bottomonium in anisotropic medium}
An interesting feature of the initial phase of the QGP is the
momentum-space anisotropy that develops~\cite{Arnold05,Romatschke;2007mq,rebhan}.
It is therefore worthwhile to consider the properties of quarkonia,
such as the binding energy and decay width, in such a system. The
real part of the potential
at finite anisotropy was first obtained in
Refs.~\cite{Dumitru:2007hy-09fy-09ni,Romatschke;2007mq,laine2} and was later
extended to the imaginary
part~\cite{Brambilla:2008cx,Dumitru:2007hy-09fy-09ni, Laine09}, through
the leading anisotropic
corrections to the perturbative term of the potential alone; this
was further coupled with the dynamical evolution of the anisotropic
plasma to quantify quarkonium suppression in nuclear
collisions~\cite{Strickland:2011mw,Strickland:2011aa}. We now build
on these works to derive the potential at finite temperature,
keeping both the perturbative and nonperturbative terms, via the Keldysh
representation in the real-time formalism.
\subsection{Real part of the potential}
Since the mass of the heavy quark is very large, both
requirements, $m_Q \gg \Lambda_{QCD}$ and $T \ll m_Q$, are met for describing the
interactions between a pair of heavy quark
and antiquark at finite temperature in terms of a quantum mechanical
potential. We thus obtain the potential
by correcting {\em both the short- and long-distance parts} of the
$Q \bar Q$ potential with a dielectric
function $\epsilon(p)$~\cite{prc_vineet} embodying the effect of the
medium:
\begin{eqnarray}
\label{defn}
V(r,T)&=&\int \frac{d^3\mathbf p}{{(2\pi)}^{3/2}}
\left( e^{i\mathbf{p} \cdot \mathbf{r}}-1 \right)~\frac{V(p)}{\epsilon(p)} ~.
\end{eqnarray}
We assume the same screening scale to regulate both terms (each is
multiplied by an exponential damping factor, which is switched off after
the Fourier transform is evaluated), and obtain the Fourier transform of the
potential\footnote{In Ref.~\cite{megiasind,megiasprd}, different scales
for the Coulomb and linear pieces were employed through a dimension-two gluon
condensate.}:
\begin{equation}
\label{vk}
{V}(p)=-\sqrt{(2/\pi)} \frac{\alpha}{p^2}-\frac{4\sigma}{\sqrt{2 \pi} p^4}.
\end{equation}
We now obtain the dielectric permittivity through the leading
anisotropic corrections to the self-energies and then to the
static propagators in the weak-coupling HTL approximation.
In the Keldysh representation, the retarded (R), advanced (A) and symmetric
(F) propagators can be written as linear combinations of the components
of the $(2 \times 2)$ matrix propagator in the real-time formalism:
\begin{eqnarray}
\label{2a6}
D_R^0 = D_{11}^0 - D_{12}^0 ~,~ D_A^0 = D_{11}^0 - D_{21}^0 ~,~
D_F^0 = D_{11}^0 + D_{22}^0 ~,
\end{eqnarray}
where only the symmetric component involves the distribution functions;
this is of particular advantage for HTL diagrams, where the terms containing
distribution functions dominate. Similar relations hold for
the retarded ($\Pi_R$), advanced ($\Pi_A$) and symmetric ($\Pi_F$)
self-energies.
Now the resummation of the propagators is done via the Dyson-Schwinger
equations
\begin{eqnarray}
{D}_{R,A}&=&D_{R,A}^0+D_{R,A}^0\Pi_{R,A}{D}_{R,A}~, \label{2b2}\\
{D}_{F}&=&D_{F}^0+D_{R}^0\Pi _R{D}_{F}+D_F^0\Pi_{A} {D}_{A}+
D_{R}^0\Pi _{F}{D}_{A}~. \label{2b7}
\end{eqnarray}
For the static potential, we need only the temporal component
(``00'' $\equiv$ L) of the propagator, which is most easily
evaluated in the Coulomb gauge. The resummation (\ref{2b2})
can thus be recast for the temporal component as
\begin{eqnarray}
D^L_{R,A(iso)}=D^{L(0)}_{R,A}+D^{L(0)}_{R,A}\Pi^L_{R,A(iso)}{D}^L_{R,A(iso)}~. \label{3b2}
\end{eqnarray}
The above relations are not satisfied in an anisotropic system
because of the preferred direction of anisotropy. For a
medium with weak anisotropy ($\xi \ll 1$), however, we
circumvent the problem by expanding the
propagators and self-energies in $\xi$:
\begin{equation}
D=D_{\rm{iso}}+\xi D_{\rm{aniso}},\,\,\,\,\,\,\,\Pi=\Pi_{\rm{iso}}+\xi
\Pi_{\rm{aniso}} ~,\label{3b4}
\end{equation}
where the parameter $\xi$ is a measure of the anisotropy
\begin{equation}
\xi = \frac{\langle \mathbf{p}_{T}^{2}\rangle}{2\langle p_{L}^{2}\rangle}-1~,~
\label{anparameter}
\end{equation}
where $ {p}_{L}= \mathbf{p}\cdot\mathbf{n} $ and ${\bf p}_T =
\mathbf{p}-\mathbf{n}(\mathbf{p}\cdot\mathbf{n}) $ are the components of momentum
parallel and perpendicular to the direction of anisotropy, $\mathbf{n}$,
respectively.
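As a concrete illustration of Eq.~(\ref{anparameter}), the following sketch (in Python) estimates $\xi$ from a sample of parton momenta; the function name and the sample-based input are illustrative, not part of the formalism:

```python
import numpy as np

def anisotropy_parameter(p, n):
    """Estimate xi = <p_T^2> / (2 <p_L^2>) - 1 from momentum samples.

    p : (N, 3) array of parton momenta;
    n : vector along the anisotropy direction (normalized internally).
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    p_L = p @ n                         # longitudinal components p.n
    p_T = p - np.outer(p_L, n)          # transverse parts p - n (p.n)
    pT2 = np.mean(np.sum(p_T**2, axis=1))
    pL2 = np.mean(p_L**2)
    return pT2 / (2.0 * pL2) - 1.0
```

An isotropic momentum distribution has $\langle \mathbf{p}_T^2\rangle = 2\langle p_L^2\rangle$ and hence $\xi=0$, while squeezing the longitudinal momenta ($\langle p_L^2\rangle$ small) gives $\xi>0$, as in the early-time plasma.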
Thus, in the presence of a small anisotropy, the (resummed) temporal component
of the retarded (advanced) propagator becomes
\begin{equation}
D^L_{R,A(aniso)}= D^{L(0)}_{R,A}\, \Pi_{R,A (aniso)}^L{D}^{ L}_{R,A (iso)}+D_{R,A}^{L(0)}\,
\Pi_{R,A (iso)}^L{D}^ L_{R,A (aniso)}\label{3b6}
\end{equation}
We now calculate the temporal component of the retarded/advanced gluon
self-energy in the HTL approximation, whose leading isotropic
contribution is
\begin{eqnarray}
\Pi^{L}_{R,A(iso)}(P)=m_D^2\left(\frac{p_{0}}{2p}\ln\frac{p_{0}+p\pm i\epsilon}{p_{0}-p\pm i\epsilon}-1\right)~,
\label{iso}
\end{eqnarray}
with the prescription $+i\epsilon$ ($-i\epsilon$) for the retarded
(advanced) self-energy, and
$m_D^2 = \frac{g^2 T^2}{6}(N_f+2 N_c)$ is the square of the Debye mass.
The full anisotropic contribution is then
\begin{eqnarray}
\Pi^{L}_{R,A(aniso)}(P)=\frac{m_D^2}{6}\left(1+3\cos 2\theta_p \right)
+\Pi_{R(iso)}^{L}(P)\left(\cos(2\theta_p)-\frac{{p_{0}}^{2}}{2p^{2}}
(1+3\cos 2\theta_p)\right)~.
\label{aniso}
\end{eqnarray}
Similarly the isotropic and anisotropic terms for the temporal component
of the symmetric self-energy are given by
\begin{eqnarray}
&&\Pi^{L}_{F(iso)}(P)=-2\pi i m_D^2\frac{T}{p}\Theta(p^2-{p_0}^2)~,\nonumber\\
&&\Pi^{L}_{F(aniso)}(P)=\frac{3}{2}\pi i m_D^2\frac{T}{p}
\left(\sin^2\theta_p
+\frac{p_0^2}{{p}^2}~(3\cos^2\theta_p-1)\right)~\Theta(p^2-{p_0}^2).
\label{sym}
\end{eqnarray}
Thus the gluon self-energy has both a real and an imaginary
part, responsible for the Debye screening and the Landau damping,
respectively; the former is usually obtained from the retarded and
advanced self-energies, while the latter is obtained from the symmetric
self-energy alone.
Therefore the real part of the temporal component of the
retarded (or advanced) propagator in the static limit gives
\begin{eqnarray}
\Re D^{00}_{R,A}(0,p)=-\frac{1}{(p^2+m_D^2)}
+\xi \frac{m_D^2}{6(p^2+m_D^2)^2}\left(3\cos 2\theta_p-1 \right),
\label{rtrdprop}
\end{eqnarray}
and the static limit of the imaginary part of
the temporal component of the symmetric propagator is
\begin{eqnarray}
\Im D^{00}_F (0,p)=\frac{-2\pi T m_D^2}{p(p^2+m_D^2)^2}
+\xi\left(\frac{3\pi T m_D^2}{2p(p^2+m_D^2)^2}\sin^2{\theta_p}
-\frac{4\pi T m_D^4}{p(p^2+m_D^2)^3} \left(\sin^2\theta_p-\frac{1}{3}\right)\right)
\label{f00}
\end{eqnarray}
We can now obtain the dielectric permittivity from the
static limit of the ``00"-component of gluon propagator
\begin{equation}
\epsilon^{{}^{-1}}(p)=-\lim_{\omega \to 0} {p^2} D_{11}^{00}(\omega, p)~,
\label{ephs}
\end{equation}
where the real and imaginary parts of $D^{00}_{11}$ can be written as
\begin{eqnarray}
\Re D^{00}_{11}(\omega,p)= \frac{1}{2}\left( D^{00}_{R}+D^{00}_{A}\right)
\label{R}~~ {\rm{and}}~~
\Im D^{00}_{11}(\omega,p)= \frac{1}{2} D^{00}_{F}.
\label{F}
\end{eqnarray}
The real part of the potential is then obtained as
\begin{eqnarray}
\label{pot}
\Re V_{\rm(aniso)}({\bf r},\xi,T)&=&\int \frac{d^3\mathbf p}{{(2\pi)}^{3/2}}
(e^{i\mathbf{p} \cdot \mathbf{r}}-1)
\left(-\sqrt{(2/\pi)}\frac{\alpha}{p^2}-
\frac{4\sigma}{\sqrt{2 \pi} p^4}\right) \times \nonumber\\
&&p^2\left[\frac{1}{(p^2+m_D^2)}-\frac{\xi m_D^2}{6(p^2+m_D^2)^2}
(3\cos(2\theta_p)-1)\right] \nonumber\\
&\equiv& \Re V_{1(aniso)}({\bf r},\xi,T)+ \Re V_{2(aniso)}({\bf r},\xi,T)~,
\end{eqnarray}
where $\theta_p$ is the angle between $\mathbf{p}$ and $\mathbf{n}$,
$\theta_r$ is the angle between ${\bf r}$ and ${\bf n}$ (the direction
of anisotropy), and $\Re V_{1(aniso)} ({\bf r},\xi,T)$ and
$\Re V_{2(aniso)}(\mathbf{r}, \xi,T)$, the
medium modifications of the Coulomb and
string terms, respectively, are given by ($\hat r=rm_D$)
\begin{small}
\begin{eqnarray}
\Re V_{1(\rm aniso)}(r,\theta_r,T) &=&-\alpha m_D\left[\left( \frac{e^{-\hat{r}}}{\hat{r}}+1\right)+\xi \left[\left(\frac{e^{-\hat{r}}-1}{6}\right) \right.\right.\nonumber\\
&+&\left.\left.\left(\frac{e^{-\hat r}}{6}+\frac{e^{-\hat r}}{2\hat r}+\frac{e^{-\hat r}}{\hat{r}^2}+\frac{e^{-\hat r}-1}{\hat{{r}^3}}\right)(1-3\cos^2\theta_r)\right]\right]
\end{eqnarray}
\end{small}
and
\begin{small}
\begin{eqnarray}
\Re V_{2(\rm aniso)}(r,\theta_r,T) &=& \frac{2\sigma}{m_{{}_D}}\left[\left( \frac{e^{-\hat{r}}-1}{\hat{r}}+1\right)+2\xi\left[\left(\frac{e^{-\hat{r}}-1}{6\hat r}+\frac{e^{-\hat r}+2}{12}\right) \right.\right.\nonumber\\
&+&
\left.\left.\left(\frac{e^{-\hat r}}{\hat r^2}+ \frac{5e^{-\hat r}+\hat re^{-\hat r}+1}{12\hat r}+\frac{e^{-\hat{r}}-1}{\hat{r}^3} \right) (1-3\cos^2\theta_r)\right]\right]
\label{(eq:string)}
\end{eqnarray}
\end{small}
Thus the real part of the potential in an anisotropic medium becomes
\begin{small}
\begin{eqnarray}
\label{fulrealpot}
\Re V_{\rm aniso}(r,\theta_r,T) &=&\frac{2\sigma}{m_{{}_D}}\left(\frac{e^{-\hat{r}}-1}{\hat{r}}+1\right) - \alpha m_D\left( \frac{e^{-\hat{r}}}{\hat{r}}+1\right)
+ \xi \frac{e^{-\hat{r}}}{\hat{r}} \nonumber\\
&\times&
\left[\frac{2 \sigma} {m_{D}}\left(\frac{e^{\hat{r}}-1}{\hat{r}^2}+\frac{\hat{r}^2e^{\hat{r}}-3}{3\hat r}-\frac{5 e^{\hat{r}}-\hat{r}+1}{12}\right) -\frac{\alpha m_{D}}{2}\left(\frac{e^{\hat{r}}-1}{\hat{r}^2}-\frac{1} {\hat{r}}-\frac{2\hat{r}e^{\hat{r}}-\hat r+3}{6} \right)\right. \nonumber\\
&+&\left. \left[
\frac{2\sigma} {m_D}\left( 3 \frac{e^{\hat{r}}-1}{\hat{r}^2}-\frac{3}{\hat{r}}-\frac{e^{\hat{r}}+\hat r+5}{4} \right) -\frac{\alpha m_D}{2}\left(3 \frac{e^{\hat{r}}-1}{\hat{r}^2}-\frac{3}{\hat{r}}-\frac{\hat{r}+3}{2}\right) \right] \cos 2 \theta_r \right]\nonumber\\
&=& \Re V_{iso}(r,T) + V_{\rm{tensor}}(r,\theta_r,T).
\label{fullp}
\end{eqnarray}
\end{small}
Thus the momentum-space anisotropy introduces an angular ($\theta_r$)
dependence of the potential, in addition to the dependence on the
inter-particle separation ($r$), in contrast to the $r$-dependence
alone in an isotropic medium.
The potential becomes stronger with increasing anisotropy
because the (effective) Debye mass $m_D(\xi,T)$ in an anisotropic
medium is always smaller than in an isotropic medium.
In particular, the potential for quark pairs aligned along the direction of
anisotropy is stronger than for pairs aligned in the transverse direction.
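The limiting behaviour of the isotropic part of Eq.~(\ref{fullp}) can be checked numerically. The sketch below (in Python; the parameter values are purely illustrative, in natural GeV units) evaluates $\Re V_{\rm iso}$ and confirms that it reduces to the vacuum Cornell form $-\alpha/r+\sigma r$ at short distances and saturates at the constant $2\sigma/m_D-\alpha m_D$ at large distances:

```python
import math

def re_V_iso(r, mD, alpha, sigma):
    """Isotropic part of Eq. (fullp):
    (2*sigma/mD)*((exp(-rh)-1)/rh + 1) - alpha*mD*(exp(-rh)/rh + 1),
    with rh = r*mD (natural units, GeV)."""
    rh = r * mD
    return (2.0 * sigma / mD) * ((math.exp(-rh) - 1.0) / rh + 1.0) \
        - alpha * mD * (math.exp(-rh) / rh + 1.0)

def cornell(r, alpha, sigma):
    """Vacuum Cornell potential -alpha/r + sigma*r."""
    return -alpha / r + sigma * r
```

For $\hat r \ll 1$ the screening drops out and the Cornell potential is recovered up to ${\cal O}(\hat r)$ corrections, while for $\hat r \gg 1$ the potential flattens, which is the screening responsible for dissociation.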
\subsection{Imaginary part of the potential}
The imaginary part of the potential is obtained from the medium corrections
to both the non-perturbative (string) part and the perturbative part
of the $T=0$ potential, through the imaginary part of the dielectric function (\ref{f00}):
\begin{eqnarray}
\Im V_{\rm(aniso)}({\bf r},\xi,T)&=&-\int \frac{d^3\mathbf{p}}{(2\pi)^{3/2}}
(e^{i\mathbf{p} \cdot \mathbf{r}}-1)
\left(-\sqrt{\frac{2}{\pi}}\frac{\alpha}{p^2}-\frac{4\sigma}{\sqrt{2\pi}p^4}\right)
p^2\left[\frac{-\pi T m_D^2}{p(p^2+m_D^2)^2}\right.\nonumber\\
&& \left.+\xi\left(\frac{3\pi T m_D^2}{4p(p^2+m_D^2)^2}\sin^2{\theta_p}
-\frac{2\pi T m_D^4}{p(p^2+m_D^2)^3}\left(\sin^2\theta_p-\frac{1}{3}\right)\right)\right]
\nonumber\\
&& \equiv \Im V_{1(aniso)} ({\bf r},\xi,T)+ \Im V_{2(aniso)} ({\bf r},\xi,T) ~,
\end{eqnarray}
where $\Im V_{1(aniso)} ({\bf r},\xi,T)$ and
$\Im V_{2(aniso)} (\mathbf{r},\xi,T)$ are the
imaginary contributions corresponding to
the Coulombic and linear terms in an anisotropic medium, respectively.
The contribution due to the perturbative term at
leading order is given by~\cite{Dumitru:2007hy-09fy-09ni}
\begin{eqnarray}
\Im V_{1(aniso)}({\bf r},\xi,T)&= &
-\alpha T \left( \phi_0(\hat{r})+
\xi\left[\phi_{1}(\hat{r}, \theta_r)
+\phi_2(\hat{r},\theta_r)\right] \right),
\end{eqnarray}
where the functions $\phi_0(\hat{r})$, $\phi_{1}(\hat{r},\theta_r)$ and
$\phi_{2}(\hat{r},\theta_r)$ are given by
\begin{eqnarray}
\phi_0(\hat{r})&=&-\frac{{\hat{r}}^2}{9}
\left(-4+3\gamma_{E}+3\log\hat{r}\right) \nonumber\\
\phi_{1}(\hat{r},\theta_r)&=& \frac{{\hat{r}}^2}{600} \left[123-
90 \gamma_{E}- 90\log\hat{r}
+\cos 2\theta_r \left(-31+30\gamma_{E}+30\log\hat{r}\right)\right] \nonumber\\
\phi_{2}(\hat{r},\theta_r)&=& \frac{{\hat{r}}^2}{90}(-4+3\cos
2\theta_r)
\end{eqnarray}
Similarly, the imaginary part due to the nonperturbative (linear) term
also has isotropic and anisotropic pieces:
\begin{eqnarray}
\Im V_{2(aniso)}(r,\xi,T)= \frac{2\sigma T}{m_D^2} \left(
\psi_0(\hat{r})-\xi
\left[\psi_1(\hat{r},\theta_r)+\psi_2(\hat{r},\theta_r)\right]
\right)~,
\label{v2aniso}
\end{eqnarray}
where the functions $\psi_0(\hat{r})$, $\psi_1 (\hat{r},\theta_r)$
and $\psi_2 (\hat{r},\theta_r)$ are given by
\begin{eqnarray}
\psi_0(\hat{r})&=&\frac{\hat r^2}{6}+\left(\frac{-107+60\gamma_E
+60\log(\hat r)}{3600}\right)\hat r^4+O(\hat r^5)~,\\
\psi_1(\hat{r},\theta_r)&=&\int \frac{dz}{z(z^2+1)^2}\left[1-\frac{3}{2}
\left(\sin^2\theta_r\frac {\sin{z\hat r}}{z\hat r}
+(1-3\cos^2\theta_r)G(\hat{r},z)\right)\right]~,\\
\psi_2(\hat{r},\theta_r)&=&-\frac{4}{3}\int\frac{dz}{z(z^2+1)^3}
\left[1-3\left[(\frac{2}{3}-\cos^2\theta_r)\frac {\sin{z\hat r}}{z\hat r}
+(1-3\cos^2\theta_r)G(\hat{r},z)\right]\right]
\end{eqnarray}
where
\begin{eqnarray}
G(\hat{r},z)=\frac{z\hat r\cos(z\hat r)-\sin(z\hat r)}{(z\hat r)^3}
\end{eqnarray}
Thus the imaginary part of the potential in an anisotropic medium, at
leading logarithmic order, becomes
\begin{eqnarray}
\label{fullimgpot}
\Im V_{\rm{(aniso)}} (r,\theta_r,T)&=&-T\left(\frac{\alpha {\hat r^2}}{3}
+\frac{\sigma {\hat r}^4}{30m_D^2}\right)\log(\frac{1}{\hat r})\nonumber\\
&&+\xi T\left[\left(\frac{\alpha {\hat r^2}}{5}+\frac{3\sigma {\hat r^4}}{140m_D^2}\right)\right.
\left.-\cos^{2}\theta_r \left(\frac{\alpha {\hat r^2}}{10}+\frac{\sigma {\hat r^4}}{70m_D^2}\right)\right]\log(\frac{1}{\hat r})
\end{eqnarray}
whose magnitude is found to be smaller than in the isotropic medium and
decreases with the anisotropy. In the weak-anisotropy limit, the imaginary
part is a perturbation and thus provides an estimate of the (thermal)
width of a given resonance state:
\begin{eqnarray}
\Gamma_{\rm(aniso)} &=& \int d^3 {\bf r}|\Psi(r)|^2\left[\alpha T{\hat r^2}
\log(\frac{1}{\hat r})\left(\frac{1}{3}-\xi
\frac{3-\cos 2\theta_r}{20}\right)\right.\nonumber\\
&&\left.+\frac{2\sigma T}{m_D^2}{\hat r^4}\log(\frac{1}{\hat r})
\frac{1}{20}\left(\frac{1}{3}-\xi\frac{2-\cos2\theta_r}{14}\right)\right]\nonumber\\
&=&T\left(\frac{4}{\alpha m_Q^2}+\frac{12\sigma}{\alpha^2m_Q^4}\right)\left(1-\frac{\xi}{2}\right)m_D^2 \log\frac{\alpha m_Q}{2m_D}~,
\end{eqnarray}
which shows that the in-medium thermal width in an anisotropic medium is
smaller than in an isotropic medium and narrows as the
anisotropy increases. This is because
the width is proportional to the square of the Debye mass, and the
Debye mass decreases with the anisotropy: the
effective local parton density around a test (heavy) quark is smaller
than in an isotropic medium.
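The leading-log width above is straightforward to evaluate numerically; the following sketch (in Python, with purely illustrative parameter values) makes explicit that the anisotropic width is just the isotropic one rescaled by $(1-\xi/2)$:

```python
import math

def thermal_width(T, mD, mQ, alpha, sigma, xi=0.0):
    """Leading-log thermal width:
    Gamma = T*(4/(alpha*mQ^2) + 12*sigma/(alpha^2*mQ^4))
            * (1 - xi/2) * mD^2 * log(alpha*mQ/(2*mD)),
    valid for alpha*mQ > 2*mD (positive logarithm)."""
    return (T * (4.0 / (alpha * mQ**2) + 12.0 * sigma / (alpha**2 * mQ**4))
            * (1.0 - 0.5 * xi) * mD**2 * math.log(alpha * mQ / (2.0 * mD)))
```

For any admissible parameter set the ratio $\Gamma_{\rm aniso}/\Gamma_{\rm iso}=1-\xi/2$, e.g. $0.7$ for $\xi=0.6$, which is the narrowing quoted in the text.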
\subsection{Dissociation in a complex potential}
In the short-distance limit, the vacuum contribution dominates over the medium
contribution even for a weakly anisotropic
medium, while in the long-distance limit the potential (\ref{fulrealpot}), in the high-temperature
approximation, reduces to a Coulomb term plus a subleading anisotropic contribution:
\begin{eqnarray}
\label{largp}
\Re V_{\rm{(aniso)}}(r,\theta_r,T) &\stackrel{\hat{r}\gg1}{\simeq}& -\frac{2\sigma}{m^2_{{}_D}r}
-\alpha m_{{}_D} -\frac{5\xi}{12}~\frac{2\sigma}{m^2_{{}_D}r}
\left(1+\frac{3}{5}\cos 2\theta_r \right)\\
&\equiv & \Re V_{\rm{iso}} (\hat{r} \gg 1,T)+ V_{\rm{tensor}} (\hat{r} \gg 1,
\theta_r,T)~,
\end{eqnarray}
where the anisotropic contribution
($V_{\rm{tensor}} (\hat{r} \gg 1, \theta_r,T)$) is smaller than the isotropic
one ($\Re V_{\rm{iso}} (\hat{r} \gg 1,T)$), so the anisotropic part
can be treated as a perturbation. Therefore, the real part of the binding energy
may be obtained from the radial Schr\"odinger equation
for the isotropic component plus the first-order perturbation
due to the anisotropic component:
\begin{eqnarray}
E_{\mathbf {bin}}^{\rm{aniso}} \stackrel{\hat{r}\gg1}{\simeq}
\left( \frac{m_Q\sigma^2 }{m_{{}_D}^4 n^{2}} +
\alpha m_{{}_D} \right) +
\frac{2\xi}{3} \frac{m_Q\sigma^2 }{m_{{}_D}^4 n^{2}}~.
\end{eqnarray}
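The large-distance estimate above is simple to evaluate; in the sketch below (Python, illustrative parameter values) the anisotropic shift is $2\xi/3$ times the string-induced term only, while the Coulombic constant $\alpha m_D$ is untouched:

```python
def binding_energy_large_r(mQ, mD, n, alpha, sigma, xi=0.0):
    """Large-distance binding energy estimate:
    E = mQ*sigma^2/(mD^4 * n^2) + alpha*mD
        + (2*xi/3) * mQ*sigma^2/(mD^4 * n^2),
    where n is the principal quantum number."""
    string_term = mQ * sigma**2 / (mD**4 * n**2)
    return string_term + alpha * mD + (2.0 * xi / 3.0) * string_term
```

Since the string-induced term scales as $1/n^2$, the anisotropic enhancement of the binding is largest for the ground state, consistent with the ordering of the $T_D$'s found below.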
At intermediate distances, the real part of the potential
(\ref{fulrealpot}) does not take a simple form; the interaction becomes
complex and must be solved numerically.
Usually the time-dependent or time-independent
Schr\"odinger equation is solved by the finite-difference time-domain
(FDTD) method or by the matrix method, respectively.
In the matrix method, the Schr\"odinger equation
is solved in matrix form in a discrete basis, instead
of the continuous real-space position basis spanned by the states
$|\overrightarrow{x}\rangle$. Here the confining potential $V$ is subdivided
into $N$ discrete wells with potentials $V_{1},V_{2},\ldots,V_{N+2}$ such that
at the $i^{\rm{th}}$ boundary, $V=V_{i}$ for $x_{i-1} < x < x_{i}$,
$i=2,3,\ldots,(N+1)$. For a bound state to exist, the wave function
must decay exponentially in the region $x> x_{N+1}$ as $x \rightarrow \infty$
and has the form:
\begin{equation}
\Psi_{N+2}(x)=P_{{}_E} \exp[-\gamma_{{}_{N+2}}(x-x_{N+1})]+
Q_{{}_E} \exp [\gamma_{{}_{N+2}}(x-x_{N+1})] ,
\end{equation}
where $P_{{}_E}= \frac{1}{2}(A_{N+2}- B_{N+2})$,
$Q_{{}_E}= \frac{1}{2}(A_{N+2}+ B_{N+2})$ and
$ \gamma_{{}_{N+2}} = \sqrt{2 \mu(V_{N+2}-E)}$. The eigenvalues
can be obtained by identifying the zeros of $Q_{{}_E}$.
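As an alternative to the transfer-matrix scan described above, a standard finite-difference diagonalization yields the same bound-state spectrum; the Python sketch below (the grid sizes and the Coulomb test case are illustrative, not the in-medium potential itself) diagonalizes the $l=0$ radial Hamiltonian $-\frac{1}{2\mu}u'' + V(r)\,u = E\,u$ on a uniform grid:

```python
import numpy as np

def bound_state_energies(V, mu, r_max, N=1500, n_states=2):
    """Lowest eigenvalues of the l = 0 radial Schroedinger equation
    -(1/(2*mu)) u'' + V(r) u = E u, with u(0) = u(r_max) = 0,
    via three-point finite differences on a uniform grid."""
    r = np.linspace(r_max / N, r_max, N)       # interior grid points
    h = r[1] - r[0]
    diag = 1.0 / (mu * h**2) + V(r)            # kinetic + potential, diagonal
    off = -np.ones(N - 1) / (2.0 * mu * h**2)  # nearest-neighbour coupling
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_states]
```

For a Coulomb potential $V=-\alpha/r$ the routine reproduces the hydrogen-like levels $E_n=-\mu\alpha^2/(2n^2)$; for the complex in-medium potential one diagonalizes the real part and treats $\Im V$ perturbatively, as in the text.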
We have then obtained the real and imaginary parts of
the binding energies of the bottomonium states (shown in Fig.~1),
which are found to increase with the anisotropy.
We now study the dissociation of the resonances: the binding energy
decreases with increasing temperature, and a state dissociates when
the binding energy becomes comparable to $\Gamma$~\cite{Mocsy05-08,Burnier07}.
The dissociation temperatures ($T_D$'s) can also be obtained
from the intersection of the binding
energies obtained from the real and imaginary parts of the
potential~\cite{Strickland:2011aa,Margotta:2011ta}, respectively.
The $T_D$'s for the $\Upsilon$(1S) and $\Upsilon$(2S) states are
$1.97~T_c$ and $1.44~T_c$, respectively (Table 1), in an isotropic medium
($\xi=0$) and increase with the anisotropy ($\xi>0$);
{\em i.e.}, the bottomonium states persist to higher
temperatures ($2.11~T_c$ for $\xi=0.6$) in an anisotropic plasma.
This can be parametrized as
$T_{\rm{aniso}}^D(\xi) \simeq T_{\rm{iso}}^D\left(1+\frac{\xi}{7}\right)$,
compared to the relation $T_{\rm{aniso}}^D(\xi) =
T_{\rm{iso}}^D\left(1+\frac{\xi}{6}\right)$ of Laine et al.~\cite{Burnier:2009yu}.
Our results are somewhat higher
than those of similar calculations~\cite{Strickland:2011aa,Margotta:2011ta},
which may be due to the absence of the three-dimensional medium modification
of the linear term in their calculation.
\begin{figure}[h]
\vspace{1.75in}
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.71in,height=2.5in]{be_vs_tbytc_y1s.eps} & \hspace{0.25in}
\includegraphics[width=2.5in,height=2.5in]{be_vs_tbytc_y2s.eps} \\
\end{array}$
\end{center}
\caption{\footnotesize The real and imaginary parts of the binding energies
for the 1S and 2S states.}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{State}& \multicolumn{3}{c|}{$T_D$} & \multirow{2}{*}{$\tau_F$}& \multirow{2}{*}{$\eps$}&\multirow{2}{*}{$c_{s}^2$}\\
\cline{2-4}
&$\xi=0$ &$\xi=0.3$&$\xi=0.6$& \small{(fm)}& \small{($\xi=0$)} &\\
\hline\hline
$\Upsilon$(1S) & 1.97&2.01&2.11& 0.2&42.49&0.307 \\
\hline
$\Upsilon$(2S) & 1.44&1.50&1.54& 0.4&12.74&0.284\\
\hline
$\Upsilon$(3S) & 1.12&1.15&1.21& 0.6&5.21&0.249\\
\hline
$\chi_{b1}$& 1.57&1.59&1.64&0.4 &17.66& 0.292\\
\hline
\end{tabular}
\caption{\footnotesize Dissociation temperatures ($T_D$), in units of $T_c$,
of the bottomonium states at different anisotropies ($\xi$),
along with their formation times ($\tau_F$),
screening energy densities ($\eps$) and squared speed of sound ($c_s^2$) in an isotropic medium.}
\end{table}
\section{Quarkonium in expanding medium}
Let us now consider a nucleus-nucleus collision in which the partons
are formed
at time $\tau_i \sim Q_s^{-1}$ ($Q_s$ is the saturation
scale) and the system starts evolving. There are three plausible
scenarios for the space-time evolution: i) the
partons are isotropized from the start ($\tau_i=\tau_{iso}$), i.e.,
the system evolves hydrodynamically; ii) the system never isotropizes
($\tau_{iso} \rightarrow \infty$), i.e., it undergoes free
streaming; iii) the system takes a finite time to isotropize
($\tau_{i}<\tau<\tau_{iso}$), i.e., it passes
through successive anisotropic (pre-equilibrium) and
isotropic (equilibrium) phases. The pre-equilibrium
era arises because
the asymptotic weak coupling at the early stage of the collision
enhances the expansion in the beam (longitudinal) direction
substantially more than the
radial expansion, resulting in an anisotropy.
This anisotropy makes the partonic system unstable with respect to
chromomagnetic plasma modes~\cite{Romatschke}, which help
the system to isotropize quickly~\cite{Arnold05,Mrowezynski93,Arnold03}.
Recently there have been significant advances in the dynamical models
of plasma evolution that incorporate the momentum anisotropy
~\cite{Mrowczynski:2000ed,Strickland:2007fm,martinez-strickland}.
Let us consider a region of the transverse plane where the energy
density is
greater than or equal to the screening energy density $\epsilon_s$
($\propto T_D^4$).
During the expansion, if the system cools to an energy density
less than or equal to $\epsilon_s$, the $Q \bar Q$ pair escapes
and forms the (quarkonium) resonance. If, on the other hand, the energy
density is still higher than $\epsilon_s$, the resonance
will not form and quarkonium production is suppressed.
The suppression pattern of the quarkonium states thus
depends on how rapidly the system cools and on how large
the screening energy density of the particular resonance state is.
The former depends on how the evolution of the system is modeled,
where the equation of state needed to close the hydrodynamic equations
is still not settled, and on how the dissipative
forces are incorporated in the dynamics. The latter (the screening energy
density) depends on the
properties of the quarkonium states in the medium (which may or may not be
isotropic). In addition, the (finite) formation time and the intrinsic
transverse momentum of the resonance make the suppression pattern
richer.
The discussion of the suppression of resonances in an expanding medium
is thus threefold. First, we discuss the equation
of state (EOS), which gives the speed of sound
(controlling the expansion of the medium) as a
function of temperature, in contrast to the constant value usually adopted
in the literature, and use the EOS to evaluate the screening energy density
corresponding to the temperature $T_D$.
Second, we discuss the evolution of the system through
the pre-equilibrium era and subsequently the (local) equilibrium era
in a Bjorken boost-invariant expansion in the
presence of dissipative forces in the stress tensor.
Finally, the above ingredients are combined to quantify the
suppression of the bottomonium states at the LHC.
\subsection{Lattice equation of state}
The pressure is the primary observable for studying
the QCD equation of state; at finite chemical potential ($\mu_i$), it can be written
as a Taylor expansion~\cite{Borsayni}:
\begin{eqnarray}
\frac{p(T,\{\mu_i\})}{T^4} = \frac{p(T,\{0\})}{T^4} + \frac{1}{2} \sum_{i,j} \frac{\mu_i\mu_j}{T^2} \chi_2^{ij},
\label{eq:pmu}
\end{eqnarray}
with the susceptibilities
\begin{equation}
\chi_2^{ij} \equiv \frac{T}{V} \frac{1}{T^2}\left.\frac{\partial ^2 \log\cal{Z}}{\partial \mu_i \partial \mu_j}\right|_{\mu_i=\mu_j=0}.
\label{eq:chi2def}
\end{equation}
The trace anomaly $I(T,\mu)$ is another quantity of interest for the equation of
state and can be obtained from the relation:
\begin{eqnarray}
\frac{I(T,\mu)}{T^4}\equiv \frac{ \epsilon(T,\mu)-3p(T,\mu)}{T^4}
=\frac{I(T,0)}{T^4} \,+\, \frac{\mu^2}{2T}
\frac{\partial \chi_2}{\partial T}
\label{eq:imu}
\end{eqnarray}
In the limit of vanishing baryon chemical potential, the trace anomaly
and the Taylor coefficients (susceptibilities) were parametrized in~\cite{Borsayni} as
\begin{eqnarray}
\frac{I(T)}{T^4}& = &e^{-h_1/t-h_2/t^2}\left[h_0+ \frac{f_0~[\tanh(f_1 ~t+f_2)+1]}{1+g_1~t+g_2~t^2}\right]~,\\
\chi_2(T)&=&e^{-h_3/t-h_4/t^2} f_3~[\tanh (f_4~t+f_5)+1]~,
\end{eqnarray}
where $t=T/200$ and the values of other parameters are given
in~\cite{Borsayni}.
The inverse relation between the pressure and the trace anomaly at $\mu=0$
then becomes
\begin{equation}
\frac{p(T,0)}{T^4} = \int_0^Td T' \frac{I(T',0)}{T'^5}.
\label{eq:invpi}
\end{equation}
The energy density $\epsilon$ is then obtained from the trace
anomaly and the pressure
\begin{equation}
\epsilon=I+3p~,
\end{equation}
hence the speed of sound $c_s$ can be obtained from the relation:
\begin{equation}
c_s^2= \left.\frac{\partial p}{\partial \epsilon}\right|_{s/n}~.
\label{eq:cs2}
\end{equation}
Figure 2 shows how rapidly the speed of sound varies with temperature
in the vicinity of the critical point,
approaching the asymptotic ideal value (1/3) at
very high temperature. In
particular, we have marked the values of $c_s^2$ at the dissociation
temperatures ($T_D$'s) of the $\Upsilon$(nS) states, indicating
how much the expansion of the system deviates from the ideal one
at the $T_D$'s, which has a large bearing on the suppression.
The equation of state can thus be used to calculate the energy
density ($\epsilon_s$) at the dissociation temperature ($T_D$) and also
serves as an input to the hydrodynamic equations of motion.
Another important quantity for quarkonium suppression is the screening
time, which can be obtained from the screening energy density
$\epsilon_s$ and the speed of sound.
\begin{figure}[]
\vspace{0.1in}
\begin{center}
\includegraphics[width=2.5in,height=2.5in]{cs2_vs_tbytc_lqcdpar.eps}
\end{center}
\caption{\footnotesize Variation of the speed of sound $c_s^2$ with temperature.}
\end{figure}
\subsection{Evolution in pre-equilibrium era}
The evolution of the pre-equilibrium (anisotropic) hydrodynamics may be
dealt with in two ways: the first is phenomenological and refers
directly to the tensor structure of an anisotropic
fluid~\cite{Ryblewski:2010bs,Ryblewski:2012rr,Florkowski:2010cf}, while the
second employs the transport equation for the gluon distribution
function in an anisotropic background
\cite{Martinez:2010sd-12tu}. Phenomenologically, the
generalized anisotropy parameter can be written in (1+1) dimensions as
\begin{equation}
\xi(\tau,\delta)=\left(\frac{\tau}{\tau_{i}}\right)^\delta-1 ~,
\label{eq:genzeta}
\end{equation}
where the interpolating coefficient $\delta$ characterizes the
various isotropization processes, {\em viz.} the asymptotic limits
$\delta \rightarrow 0$ and $2$ represent (local equilibrium)
hydrodynamics and free streaming, respectively, whereas
the intermediate values $1/6 \le \delta \le 1/2$ and $2/3$ denote
plasma instability and collisional broadening, respectively.
For the general value of $\delta$, the proper time dependence of the
energy density, the hard momentum scale and the number density
for large times, $\tau \gg \tau_{i}$, can be written as,
\begin{eqnarray}
\varepsilon(\tau) &=& \varepsilon_{0} \left( \frac{\tau_{i}}{\tau}\right)^{4(1-\delta/8)/3},\\
p_{\rm hard}(\tau) &=& T_{0}\left( \frac{\tau_{i}}{\tau}\right)^{(1-\delta/2)/3},\\
n(\tau)&=&n_{0}\left( \frac{\tau_{i}}{\tau}\right)~,
\end{eqnarray}
respectively. The smoothness of the transition from a nonzero
value of $\delta$ to 0 at $\tau \sim \tau_{\rm iso}$ is governed by
a smeared step function $\lambda(\tau)$ \cite{Martinez:2010sd-12tu,martinez-strickland},
\begin{equation}
\lambda(\tau) = \frac{1}{2} \left[\tanh\left(\frac{\gamma (\tau-\tau_{\rm iso})}{\tau_i }\right)+1\right]~,
\label{eq:lambda}
\end{equation}
where the parameter $\gamma$ sets the sharpness of the transition
from pre-equilibrium to (local equilibrium) hydrodynamic behavior. Thus
the above dependences, in terms of $\lambda (\tau)$, become
\begin{eqnarray}
\xi(\tau,\delta) &=&\left(\frac{\tau}{\tau_i}\right)^{\delta(1-\lambda(\tau))}-1
\\
{\cal E}(\tau)&=&{\cal E}_{\rm 0} {\cal R({\xi})}\
{\cal \bar U}^{4/3}\label{eq:edpreq}\\
p_{\rm hard}(\tau) &=&T_{0} ~\ {\cal\bar U}^{1/3}~
\label{eq:modelEQs}
\end{eqnarray}
where the functions ${\cal R({\xi})}$ and ${\cal \bar U}$ are given by
\begin{eqnarray}
{\cal R({\xi})}&=&\frac{1}{2} \left( \frac{1}{\xi+1}+\frac{\tan^{-1}\sqrt{\xi}}{\sqrt{\xi}}\right),\\
{\cal \bar U}& = &{\cal U}(\tau)/ {\cal U}(\tau_ {i}) ,
\end{eqnarray}
with
\begin{eqnarray}
{\cal U}(\tau) &\equiv & \left[{\cal R} \left(\left(\frac{\tau_{\rm iso}}{\tau_{i}}\right)^{\delta}-
1\right)\right]^{3\lambda(\tau)/4} \left(\tau_{\rm iso}/\tau\right)^{1-\delta(1-\lambda(\tau))/2}.
\label{eq:ubar}
\end{eqnarray}
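To make the interpolation above concrete, here is a small Python sketch implementing $\lambda(\tau)$, $\xi(\tau,\delta)$, and ${\cal R}(\xi)$, together with the limiting late-time scaling exponents quoted earlier for $\delta\to 0$ (ideal hydrodynamics) and $\delta\to 2$ (free streaming). The numerical values ($\tau_i=0.1$ fm/c, $\tau_{\rm iso}=0.3$ fm/c, $\gamma=2$) are assumed for illustration only:

```python
import math

def lam(tau, tau_i, tau_iso, gamma):
    # Smeared step function lambda(tau), Eq. (lambda)
    return 0.5 * (math.tanh(gamma * (tau - tau_iso) / tau_i) + 1.0)

def xi(tau, tau_i, tau_iso, gamma, delta):
    # Time-dependent anisotropy parameter with the lambda smearing
    return (tau / tau_i) ** (delta * (1.0 - lam(tau, tau_i, tau_iso, gamma))) - 1.0

def R(x):
    # R(xi) = (1/2)(1/(xi+1) + arctan(sqrt(xi))/sqrt(xi)); R(0) = 1
    if x < 1e-12:
        return 1.0
    return 0.5 * (1.0 / (x + 1.0) + math.atan(math.sqrt(x)) / math.sqrt(x))

def eps_exponent(delta):
    # Late-time scaling epsilon ~ tau^{-4(1 - delta/8)/3}
    return 4.0 * (1.0 - delta / 8.0) / 3.0

tau_i, tau_iso, gamma, delta = 0.1, 0.3, 2.0, 2.0   # illustrative, fm/c
print(xi(5.0, tau_i, tau_iso, gamma, delta))   # ~0: isotropized well after tau_iso
print(eps_exponent(0.0), eps_exponent(2.0))    # 4/3 (ideal Bjorken) and 1 (free streaming)
```

The anisotropy $\xi$ vanishes at $\tau=\tau_i$, builds up during the pre-equilibrium era, and relaxes back to zero once $\lambda(\tau)\to 1$ past $\tau_{\rm iso}$.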
\begin{figure}[]
\vspace{0.1in}
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in,height=2.5in]{ts_vs_r_ups1s_etazeta.eps} & \hspace{0.25in}
\includegraphics[width=2.5in,height=2.5in]{ts_vs_r_ups2s_etazeta.eps} \\
\end{array}$
\end{center}
\caption{\footnotesize Constant energy density contour for
the $\Upsilon$ (1S) (left panel) and the $\Upsilon$(2S) (right panel)
for different values of shear ($\eta$) and bulk ($\zeta$) viscosities.}
\end{figure}
\begin{figure}[h]
\vspace{.1in}
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in,height=2.5in]{crs_vs_npart_y1s.eps} & \hspace{0.25in}
\includegraphics[width=2.5in,height=2.5in]{crs_vs_npart_y2s.eps} \\
\end{array}$
\end{center}
\caption{\footnotesize Centrality dependence of screening radius at fixed $P_T$
for $\Upsilon$(1S)(Left panel) and $\Upsilon$(2S) (right panel).}
\end{figure}
\subsection{Evolution in equilibrium era: Bjorken expansion }
When the rate of interaction overcomes the expansion rate
the system attains local thermodynamic equilibrium for
times $\tau \geq \tau_{iso}$.
The energy momentum tensor of the plasma in the absence of dissipative
forces is written as:
\begin{equation}
T^{\mu\nu}= (\epsilon+p)u^\mu u^\nu + g^{\mu \nu} p~,
\label{tmu}
\end{equation}
where $\epsilon$ and $p$ are the energy density and the pressure, respectively.
Then the Bjorken's boost-invariant longitudinal expansion
gives the equation of motion:
\begin{equation}
\frac{d \epsilon}{d \tau} =-\frac{\epsilon+p}{\tau}~,
\label{bj}
\end{equation}
where the equation of state ($p=c_s^2 \epsilon$) has been used to
obtain
\begin{eqnarray}
\label{eq0}
\epsilon(\tau) \tau^{1+c_s^2}= \epsilon(\tau_i)\tau_i^{1+c_s^2}.
\end{eqnarray}
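The ideal cooling law above can be inverted for the time at which the energy density drops to a given value; a minimal Python sketch (all numbers arbitrary and illustrative):

```python
def eps_ideal(tau, eps_i, tau_i, cs2):
    # Ideal Bjorken cooling: epsilon(tau) tau^{1+cs2} = eps_i tau_i^{1+cs2}
    return eps_i * (tau_i / tau) ** (1.0 + cs2)

def tau_reach(eps_s, eps_i, tau_i, cs2):
    # Time at which epsilon(tau) has dropped to eps_s
    return tau_i * (eps_i / eps_s) ** (1.0 / (1.0 + cs2))

t = tau_reach(1.0, 16.0, 0.5, 1.0 / 3.0)
print(t)  # 0.5 * 16^(3/4) = 4.0
```

For $c_s^2=1/3$ this reproduces the familiar $\epsilon\propto\tau^{-4/3}$ behavior of an ideal, conformal fluid.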
Now we study the corrections to the Bjorken expansion
due to the dissipative forces in the energy-momentum tensor:
\begin{equation}
T^{\mu\nu}= (\epsilon+p)u^\mu u^\nu + g^{\mu \nu} p +\Pi^{\mu\nu} ~,
\label{tmun1}
\end{equation}
where the dissipative part (viscous stress tensor), $\Pi^{\mu\nu}$ is given by
\begin{equation}
\Pi^{\mu\nu}=\eta \left( \nabla^\mu u^\nu + \nabla^\nu u^\mu -
\frac{2}{3} \nabla^{\mu \nu} \nabla^\rho u_\rho \right)
+ \zeta \nabla^{\mu \nu} \nabla^\rho u_\rho ~,
\end{equation}
where $\eta$ and $\zeta$ are the shear and bulk viscosities, respectively
and $\nabla^\mu=\nabla^{\mu \nu} \partial_\nu$ with $\nabla^{\mu \nu}
=g^{\mu \nu} - u^\mu u^\nu$.
In first-order viscous hydrodynamics, the bulk and shear stresses can be
written in a gradient expansion:
\begin{equation}
\Pi=-\zeta \partial^\mu u_\mu, ~~~ \pi^{\mu\nu}=\eta \langle
\nabla^\mu u^\nu \rangle,
\end{equation}
where $ \langle \nabla^\mu u^\nu \rangle$ is the symmetrized velocity
gradient. The Israel-Stewart theory of second-order
dissipative hydrodynamics \cite{Israel:1979wp} modifies
the equation of motion for the ideal fluid (\ref{bj})
into~\cite{Heinz:2005zi,Muronga:2003ta,Baier:2006um,Baier:2007ix}
\begin{equation}
\frac{d \epsilon} {d \tau}=-\frac{1}{\tau}(\epsilon+p-\Phi +\Pi)
~,
\label{eq5}
\end{equation}
where the bulk ($\Pi$) and the shear stress ($\Phi$) will asymptotically
(after the relaxation times $\tau_\Pi$ and $\tau_\pi$, respectively)
reduce to their first-order values. In the Navier-Stokes limit, the
one-dimensional boost-invariant expansion gives~\cite{Fries:PRC782003},
\begin{equation}
\Phi = \frac{4\eta}{3\tau} ,\qquad \Pi = - \frac{\zeta}{\tau} \, .
\label{eq:eta-zeta}
\end{equation}
Substituting the values of $\Phi$ and $\Pi$ in ~(\ref{eq5}), the
Bjorken longitudinal expansion can be read as:
\begin{equation}
\frac{d \epsilon} {d \tau}+\frac{\epsilon+p}{\tau}=
\frac{\frac{4\eta}{3}+\zeta}{\tau^2}~,
\label{eqbj2}
\end{equation}
whose solution can be obtained with the EoS $p=c_s^2 \epsilon$,
\begin{eqnarray}
\label{eqs1}
\epsilon(\tau) \tau^{1+c_s^2}+c\left[\frac{4\eta}{3s}+\frac{\zeta}{s}\right] \frac{\tau^{1+c_s^2}}{{\tilde{\tau}}^2}
&=&\epsilon(\tau_i)\tau_i^{1+c_s^2}
+c\left[\frac{4\eta}{3s}+\frac{\zeta}{s}\right] \frac{\tau_i^{1+c_s^2}}{ {\tilde{\tau_i}}^2}
\end{eqnarray}
where the constant, $c$ is $(1+c_s^2)a_f T_{i}^3\tau_{i}$ with
$a_f=(16+21n_f/2)\pi^2 /90$, and ${\tilde{\tau}}^2$
(or ${\tilde{\tau}}_i^2$) are denoted by $(1-c_s^2)\tau^2$
($(1-c_s^2)\tau_i^2$), respectively.
The first term in both LHS and RHS accounts for the contributions coming
from the zeroth-order expansion while the second term is due to the
viscous corrections.
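Rearranging Eq. (\ref{eqs1}) for $\epsilon(\tau)$ gives a quick numerical handle on the viscous correction. The sketch below (all numbers illustrative; the combination $4\eta/3s+\zeta/s$ and the constant $c$ are passed in as single parameters) checks that it reduces to the ideal law when the viscosities vanish and that viscosity slows the cooling:

```python
def eps_viscous(tau, eps_i, tau_i, cs2, visc, c):
    # Eq. (eqs1) solved for epsilon(tau); visc = 4*eta/(3s) + zeta/s,
    # c = (1+cs2)*a_f*T_i^3*tau_i, and ttilde^2 = (1-cs2)*tau^2.
    tt2 = (1.0 - cs2) * tau ** 2
    tt2_i = (1.0 - cs2) * tau_i ** 2
    rhs = (eps_i * tau_i ** (1 + cs2)
           + c * visc * tau_i ** (1 + cs2) / tt2_i)
    return (rhs - c * visc * tau ** (1 + cs2) / tt2) / tau ** (1 + cs2)

ideal = eps_viscous(2.0, 10.0, 0.5, 1.0 / 3.0, 0.0, 1.0)
viscous = eps_viscous(2.0, 10.0, 0.5, 1.0 / 3.0, 0.2, 1.0)
print(ideal, viscous)  # the viscous value stays above the ideal one
```

The dissipative term keeps $\epsilon(\tau)$ above the ideal curve at later times, i.e., the medium cools more slowly, which is what lengthens the screening times discussed below.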
In the present work we use the shear viscosity-to-entropy
ratio, $\eta/s$ from the perturbative QCD~\cite{pqcd} and
AdS/CFT calculations~\cite{ADS-kovtun-dtson-PRL942005},
whereas for the bulk-viscosity, $\zeta/s$ we consider the
parametrization
in~\cite{jaiswal:PRC872013,rajgopal-jhep2010,Mayer-PRL2008}, which
suggests a sharp peak in the vicinity of $T_c$ and is tiny
below $T_c$~\cite{mprakash-physrepo-1993}:
\begin{equation}
\label{eq:zetas}
\zeta{\mbox{/s}}=\left\{ \begin{array}{ll}
a_1\exp \left(\frac{T_c-T}{\Delta T}\right)+b_1\left(\frac{T_c}{T}\right)^2 & \mbox{if $T > T_c$} \\
a_1\exp \left(\frac{10 (T-T_c)}{\Delta T}\right)+\frac{b_1}{10}\left(\frac{T}{T_c }\right)^2 & \mbox{if $T_c \ge T$}~,
\end{array}
\right.
\end{equation}
where the parameter $a_1$ (=0.901) and $\Delta T$ (=$T_c/14.5$)
control the height and the width of the curve, both of which are not
well understood and may vary considerably. The parameter $b_1$ (=0.061)
is obtained by fitting Meyer's central value of~$\zeta/s$ at higher
temperatures~\cite{jaiswal:PRC872013}.
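A small Python sketch of this parametrization, with the signs of the exponents chosen so that $\zeta/s$ indeed peaks in the vicinity of $T_c$ and is tiny below it, as described in the text ($T_c=0.17$ GeV is an assumed illustrative value):

```python
import math

Tc = 0.17                # GeV (illustrative)
a1, b1 = 0.901, 0.061
dT = Tc / 14.5

def zeta_over_s(T):
    # Peaked parametrization of zeta/s around Tc, cf. Eq. (zetas): the
    # exponentials decay away from Tc on both sides, much faster below Tc.
    if T > Tc:
        return a1 * math.exp((Tc - T) / dT) + b1 * (Tc / T) ** 2
    return a1 * math.exp(10.0 * (T - Tc) / dT) + (b1 / 10.0) * (T / Tc) ** 2

print(zeta_over_s(Tc * 1.001))  # near the peak, roughly a1 + b1
```

The rapid exponential fall-off below $T_c$ implements the statement that $\zeta/s$ is tiny in the hadronic phase, while the power-law tail $b_1 (T_c/T)^2$ controls the slow decay above $T_c$.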
\subsection{Survival of $b \bar b$ states}
Let us take a simple parametrization for the initial energy density profile on
the transverse plane:
\begin{equation}
\label{eq:edprofile}
\epsilon(\tau_i,r)=\epsilon_i A_{{}_T}(r)~,
\end{equation}
with the profile function
\begin{equation}
A_{{}_T}(r) =\left(1-
\frac{r^2}{R_T^2}\right)^{\beta} \theta(R_T-r)
\end{equation}
where $r$ is the transverse co-ordinate, $R_T$ is the transverse
radius of the nucleus, and $\beta$ represents the proportionality of
the deposited energy to the nuclear thickness.
Thus the average initial energy density, $\langle \epsilon_i \rangle$,
is related to $\epsilon_i$ by
\begin{equation}
\epsilon_i=(1+\beta) \langle \epsilon_i \rangle~.
\end{equation}
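The relation $\epsilon_i=(1+\beta)\langle\epsilon_i\rangle$ follows from averaging the profile $A_T(r)$ over the transverse disc, since $\int_0^1 (1-u)^\beta\,du = 1/(1+\beta)$ with $u=r^2/R_T^2$; a quick numerical check in Python:

```python
def avg_profile(beta, n=100000):
    # <A_T> over the transverse disc via the substitution u = r^2/R_T^2:
    # <A_T> = int_0^1 (1-u)^beta du = 1/(1+beta)   (midpoint rule)
    return sum((1.0 - (k + 0.5) / n) ** beta for k in range(n)) / n

print(avg_profile(1.0), avg_profile(2.0))  # ~1/2 and ~1/3
```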
With this initial energy density profile (\ref{eq:edprofile}), we now obtain
the screening time ($\tau_s (r)$), when the energy density
drops to the screening energy density $\epsilon_s$, and construct the
screening energy density contour. Since the system evolves through the
successive pre-equilibrium and equilibrium eras, the entire contour
is obtained by amalgamating the contours of the pre-equilibrium and
the equilibrium eras. The contour in the
pre-equilibrium era is obtained from the energy density (\ref{eq:edpreq})
whereas the equilibrium era gives the contour:
\begin{eqnarray}
\label{taus}
\tau_s(r)=\tau_i \left(\frac{\tilde \tau_s}{\tilde \tau_i}\right)^{2/(1+c_s^2)}{\bigg[ \frac{\epsilon_i(r) \tilde\tau_{i}^2 +c(\frac{4\eta}{3s}+\frac{\zeta}{s})}
{\epsilon_s \tilde{\tau}_s^2 +c(\frac{4\eta}{3s}+\frac{\zeta}{s})}
\bigg ]}^{1/(1+c_s^2)}~.
\end{eqnarray}
The significance of the contour can be understood as follows: If a
$Q \bar Q$ pair
is produced inside the contour, the pair cannot escape and hence the
resonance cannot be formed. If it is produced outside the contour, it
survives. Since the $Q \bar Q$ pair takes finite time ($\tau_F$) to
form the physical resonances ($J/\psi$, $\Upsilon$ etc.),
the boundary of the region ($r_s$), where the quarkonium formation is
suppressed, has been quantified by equating the duration of screening
time $\tau_s(r)$ to the formation time $t_F$ in the plasma
frame (=$\gamma \tau_F$, where $\gamma$ is the dilation factor).
For the equilibrium era undergoing the Bjorken boost-invariant
expansion, the screening radius can be calculated as:
\begin{eqnarray}
\label{rs}
r_s&=& R_T { \left( 1- A \right)}^{\frac{1}{2}}~\Theta \left( 1-A \right)~,\\
A &=& \bigg[ \frac{\epsilon_s}{\epsilon_i}
\left({\frac{t_F}{\tau_i}} \right)^{1+c_s^2}
+ \frac{c\left({\frac{4\eta}{3s} + \frac{\zeta}{s}}\right)}{\epsilon_i} \bigg(
\frac{(\frac{ t_F}{\tau_i})^{1+c_s^2}}{{\tilde{\tau}}_s^2}-\frac{1}{{\tilde{\tau}}_i^2}
\bigg) \bigg]^{1/\beta},
\end{eqnarray}
which depends on the initial conditions, the dynamics of the evolution, and
also the dynamical properties of the resonance states. Since the
initial conditions are related to the centrality of the collisions, the
screening boundary gives rise to a centrality-dependent suppression pattern.
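A direct transcription of Eq. (\ref{rs}) in Python. All numbers are illustrative ($R_T=6.5$ fm, energy densities in GeV/fm$^3$), the combination $4\eta/3s+\zeta/s$ enters as a single parameter, and in the viscous term $\tilde\tau_s$ is taken at the formation time $t_F$ (an assumption of this sketch):

```python
import math

def screening_radius(R_T, eps_s, eps_i, t_F, tau_i, cs2, beta,
                     visc=0.0, c=0.0):
    # Eq. (rs): r_s = R_T * sqrt(1 - A) * Theta(1 - A)
    tt_s2 = (1.0 - cs2) * t_F ** 2      # tilde-tau_s^2, evaluated at tau = t_F
    tt_i2 = (1.0 - cs2) * tau_i ** 2
    A = (eps_s / eps_i * (t_F / tau_i) ** (1 + cs2)
         + c * visc / eps_i
         * ((t_F / tau_i) ** (1 + cs2) / tt_s2 - 1.0 / tt_i2)) ** (1.0 / beta)
    return R_T * math.sqrt(1.0 - A) if A < 1.0 else 0.0

rs = screening_radius(R_T=6.5, eps_s=1.0, eps_i=16.0, t_F=0.5, tau_i=0.5,
                      cs2=1.0 / 3.0, beta=1.0)
print(rs)  # 6.5 * sqrt(15/16) ~ 6.29 fm
```

When $\epsilon_s>\epsilon_i$ the step function gives $r_s=0$, i.e., no screening region at all, consistent with the no-suppression limit.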
Suppose a $Q \bar Q$ pair is created initially at $\mathbf{r}_0$
with the transverse momentum $\mathbf{p}_T$ on the
transverse plane. By the time the resonance is formed, the pair then moves
to a new position ${\bf r}={\bf r}_0+ t_F \mathbf{p}_T/M$
and if $|{\bf r}|$ is greater than or equal to the screening radius $r_{s}$,
the pair will escape the deadly contour, otherwise
the resonance will never be formed.
This gives rise to a characteristic $p_{{}_T}$ dependence in the
suppression pattern, as well as an inequality for
the cosine of the angle between the $\mathbf{r}$ and $\mathbf{p}_T$ vectors:
\begin{equation}
\cos \phi\,\geq\,\left[(r_s^2-r^2)\,M-\tau_F^2\,p_T^2/M\right]/
\left[2\,r\,\tau_F\,p_T\right],
\label{cosphi}
\end{equation}
which leads to a range of values of $\phi$ when the quarkonium would
escape.
Now we can write for the survival probability of the quarkonium:
\begin{eqnarray}
S(p_{{}_T})=\left[\int_0^{R_T} \, r \, dr \int_{-\phi_{\mbox{max}}}
^{+\phi_{\mbox{max}}}\,
d\phi\, P(\mathbf{r},\mathbf{p}_T)\right]/
\left[2\pi \int_0^{R_T} \, r\, dr\, P(\mathbf{r},\mathbf{p}_T)\right],
\label{sspt}
\end{eqnarray}
where $\phi_{\mbox{max}}$ is the maximum positive angle
($0\leq \phi \leq \pi$)
allowed by Eq.(\ref{cosphi}):
\begin{equation}
\phi_{\mbox{max}}=\left\{ \begin{array}{ll}
\pi & \mbox{if $y\leq -1$}\\
\cos^{-1} |y| & \mbox{if $-1 < y < 1$}\\
0 & \mbox{if $y \geq 1$}
\end{array}
\right .,
\end{equation}
with
\begin{equation}
y= \left[(r_s^2-r^2)\,M-\tau_F^2\,p_T^2/M\right]/
\left[2\,r\,\tau_F\,p_T\right],
\end{equation}
$M$ is the mass of the resonance and $P$ is the probability for the
quark-pair production at
($\mathbf{r}$, $\mathbf{p}_T$), in a hard collision which
may be factored out as
\begin{equation}
P(\mathbf{r},\mathbf{p}_T)=f(r)g(p_T),
\end{equation}
where we take the profile function $f(r)$ as
\begin{equation}
f(r)\propto \left[ 1-\frac{r^2}{R_T^2}\right]^\alpha \theta(R_T-r)~.
\end{equation}
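Putting the pieces together, the escape condition (\ref{cosphi}), the piecewise $\phi_{\mbox{max}}$, and the profile $f(r)$ can be combined into a numerical estimate of $S(p_T)$. The sketch below uses illustrative numbers ($M=9.46$ GeV for the $\Upsilon$(1S), $\tau_F=0.76$ fm, $R_T=6.5$ fm, $\alpha=1$ are assumptions for the demonstration, not values fitted in this work) and checks that $S\to 1$ when the screening radius vanishes:

```python
import math

def phi_max(r, r_s, p_T, M, tau_F):
    # Piecewise maximum escape angle as given in the text
    y = ((r_s ** 2 - r ** 2) * M - tau_F ** 2 * p_T ** 2 / M) / (2.0 * r * tau_F * p_T)
    if y <= -1.0:
        return math.pi
    if y >= 1.0:
        return 0.0
    return math.acos(abs(y))

def survival(p_T, r_s, M, tau_F, R_T, alpha, n=4000):
    # S(p_T) of Eq. (sspt) with P(r, p_T) = f(r) g(p_T); g(p_T) cancels
    # in the ratio, so only f(r) and phi_max enter (midpoint rule in r).
    num = den = 0.0
    for k in range(n):
        r = (k + 0.5) / n * R_T
        f = (1.0 - (r / R_T) ** 2) ** alpha
        num += r * f * phi_max(r, r_s, p_T, M, tau_F)
        den += r * f * math.pi
    return num / den

print(survival(5.0, 0.0, 9.46, 0.76, 6.5, 1.0))  # r_s = 0: every pair escapes, S = 1
```

With a finite screening radius ($r_s>0$) the pairs born deep inside the contour have $\phi_{\mbox{max}}=0$ and are lost, so $0<S<1$.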
Often experimental measurement of survival probability at a given
number of participants ($N_{{}_{\rm part}}$) or rapidity ($y$)
is reported in terms of the $p_{{}_T}$-integrated yield ratio:
\begin{equation}
\langle S\rangle = \frac{\int_{p_{{}_T}^{\mbox{min}}}^{p_{{}_T}^{\mbox{max}}} d p_{{}_T} S(p_{{}_T})}
{\int_{p_{{}_T}^{\mbox{min}}}^{p_{{}_T}^{\mbox{max}}} d p_{{}_T}}.
\end{equation}
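The $p_{{}_T}$ average above is a one-dimensional quadrature; a trivial Python helper (midpoint rule, limits arbitrary):

```python
def pt_integrated(S, pt_min, pt_max, n=1000):
    # <S> = int S(p_T) dp_T / (pt_max - pt_min), midpoint rule
    h = (pt_max - pt_min) / n
    return sum(S(pt_min + (k + 0.5) * h) for k in range(n)) * h / (pt_max - pt_min)

print(pt_integrated(lambda pt: 0.5, 0.0, 20.0))  # constant S -> 0.5
```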
The production of $b \bar b$ mesons occurs in part
through the production of higher excited $b\bar b$ states
and their decay into the ground state. Since the ground
and excited states have different sizes (binding energies),
the excited states will dissolve earlier than the tightly bound
ground state, so a sequential suppression results. However, the situation
may not be that simple,
because the states also have different formation times, ordered
oppositely to their binding energies.
So while calculating the $p_{{}_T}$-integrated inclusive
survival probability for the individual states, the feed-down corrections
may be taken into account as:
\begin{eqnarray}
{\langle S \rangle}^{{}^{\mbox{incl}}} (3S) &=& { \langle S\rangle}(3S),\\
{\langle S \rangle}^{{}^{\mbox{incl}}} (2S)&=& f_1 { \langle S\rangle} (2S) +f_2 \langle S\rangle (3S),\\
{\langle S \rangle}^{{}^{\mbox{incl}}} (1S)&=& g_1 { \langle S\rangle} (1S) +g_2 \langle S\rangle
\chi_{b1} +g_3 \langle S\rangle (2S)+ g_4 \langle S\rangle (3S),
\end{eqnarray}
where the branching factors $f_i$'s and $g_i$'s are obtained
from the CDF measurement~\cite{CDF:Collab}: the $g_{i}$'s are 0.509, 0.271,
0.107, and 0.113, respectively, assuming the survival probabilities of
$\Upsilon(3S)$ and $\chi_b$(2P) are the same, with $g_4$ as their combined
fraction, while the factors $f_1$ and $f_2$ are both taken as 0.5.
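The feed-down bookkeeping above is a linear combination with the CDF fractions; a minimal Python sketch (treating the $g_i$'s as given and checking that they sum to one):

```python
# CDF feed-down fractions g_1..g_4 for the inclusive Upsilon(1S):
# direct 1S, chi_b, 2S, and 3S (combined with chi_b(2P))
g = [0.509, 0.271, 0.107, 0.113]
f = [0.5, 0.5]   # fractions used for the inclusive 2S

def incl_1S(S_1S, S_chib, S_2S, S_3S):
    return g[0] * S_1S + g[1] * S_chib + g[2] * S_2S + g[3] * S_3S

def incl_2S(S_2S, S_3S):
    return f[0] * S_2S + f[1] * S_3S

print(sum(g))                       # the fractions sum to 1
print(incl_1S(1.0, 0.0, 0.0, 0.0))  # only direct 1S survives -> 0.509
```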
\begin{figure}[]
\vspace{.11in}
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in,height=2.5in]{raa_vs_npart_etazeta_1svaryedi.eps} & \hspace{0.25in}
\includegraphics[width=2.5in,height=2.5in]{raa_vs_pt_y1s.eps} \\
\end{array}$
\end{center}
\caption{\footnotesize Centrality dependence of the suppression factor for
$\Upsilon$(1S) (left panel) and variation of the suppression factor with $P_T$
for $\Upsilon$(1S) (right panel).}
\end{figure}
To study the centrality dependence of the suppression factor,
we use the CMS measurements of the pseudorapidity- and
centrality-dependent transverse energy density in Pb-Pb collisions
at the LHC energy~\cite{Phenix:Collab}, to obtain the initial
condition:
\begin{equation}
\langle \epsilon_i \rangle=\frac{\xi}{A_T c \tau_i}J(y,\eta)\frac{dE_T}{d\eta}~,
\label{eqdet}
\end{equation}
where the Jacobian $J(y,\eta)$ (=1.09) is taken from HYDJET 1.8 for
the pseudorapidity range $|{\eta}|< $ 0.35 in central collisions
at $\sqrt{S_{NN}}$= 2.76 TeV~\cite{CMS:Collab}.
For the top 5\% central (Pb-Pb) collisions at the LHC, the Bjorken
formula (\ref{eqdet}) (without the factor $\xi$) estimates the initial
energy density $\langle \epsilon \rangle_i$ = 14 GeV/fm$^3$ for an initial
time $\tau_i$=1 fm.
Although this estimate is 2.6 times larger than that at RHIC
energy~\cite{Phenix:Collab}, it underestimates the initial energy
density to the extent that even the excited states of the $\Upsilon$
family would not have been dissolved by the deconfined medium. So a scale
factor ($\xi \sim 5$) has been introduced in the Bjorken formula to
compensate for these unusually small values~\cite{Hirano:PRC2001}.
\begin{figure}[h]
\vspace{1.55in}
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in,height=2.5in]{raa_upsilon_2s.eps} & \hspace{0.25in}
\includegraphics[width=2.5in,height=2.5in]{raa_upsilon_3s.eps}\\
\end{array}$
\end{center}
\caption{\footnotesize The centrality dependence of the inclusive survival
probability of the 2S(left panel) and 3S(right panel) states, respectively.}
\end{figure}
\begin{figure}[h]
\vspace{.1in}
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in,height=2.5in]{2sbye1s_vs_npart.eps} & \hspace{0.25in}
\includegraphics[width=2.5in,height=2.5in]{3sbye1s_vs_npart.eps} \\
\end{array}$
\end{center}
\caption{\footnotesize Centrality dependence of the double ratio
$\frac{\Upsilon(2S)/\Upsilon(1S)|PbPb}{\Upsilon(2S)/\Upsilon(1S)|pp}$
(left panel) and
$\frac{\Upsilon(3S)/\Upsilon(1S)|PbPb}{\Upsilon(3S)/\Upsilon(1S)|pp}$
(right panel) respectively.}
\end{figure}
\begin{figure}[h]
\vspace{.1in}
\begin{center}$
\begin{array}{cc}
\includegraphics[width=2.5in,height=2.5in]{raa_vs_npart_tF1_difftiso.eps}
\end{array}$
\end{center}
\caption{\footnotesize Centrality dependence of the sequential
suppression of $\Upsilon$(1S) for different isotropization
times.}
\end{figure}
\subsection{Results and discussions}
Quarkonium suppression in nucleus-nucleus collisions, compared to $p$-$p$
collisions, involves various time scales, associated with a) the initial
conditions of the medium, b) the dynamics of the expansion, and c) the
in-medium properties of the quarkonium states; the competition among them
produces a rich structure in the suppression pattern. The first is related
to the time scale of thermalization. The second is related to the expansion
rate of the medium, which can be effectively gauged in terms of the speed of
sound and is, in turn, interconnected through the equation of state
($p=c_s^2 \epsilon$). The third involves the formation times of the
resonances in the dilated (fireball) frame ($t_F$), which depend on
their intrinsic transverse momenta and on the formation times in their
rest frames, and satisfy the hierarchy $\tau_F (1S)<\tau_F(2S)<\tau_F(3S)$,
as well as the screening time, $\tau_s$ (the time span
of the suppression), which depends on the scale of dissociation (the
screening energy density, $\epsilon_s$) and the speed of sound, $c_s$.
The time $\tau_s$ also depends on the centrality of the collision (initial
conditions): if the collision is more central, the system starts from a
higher initial energy density and takes a longer time to reach
$\epsilon_s$. On the other hand, since the excited states are dissociated
at lower temperatures than the ground state, the $\epsilon_s$'s satisfy the
hierarchy $\epsilon_s (1S) \gg \epsilon_s (2S) \gg \epsilon_s (3S)$, and
hence the screening times satisfy the reverse relation:
$\tau_s(1S)<\tau_s(2S)<\tau_s(3S)$.
However, the hierarchy in the screening times, in conjunction
with that in the formation times, makes
the suppression pattern complicated; {\em for example},
the suppression of the $\Upsilon$(2S) state may not always be larger than
that of the $\Upsilon$(1S) state, and the $\Upsilon$(3S) state may not be
suppressed more than the $\Upsilon$(2S) state.
We will now discuss how the competition of the various time scales
transpires into the suppression pattern. If $\epsilon_s \gtrsim
\epsilon_i$, there will be no suppression, and if
$\epsilon_i \gtrsim \epsilon_s$,
there will be suppression, but the extent of suppression depends on
i) how big the difference $\Delta$ (=$\epsilon_i- \epsilon_s$) between
$\epsilon_i$ and $\epsilon_s$ is, which varies from one state to another,
and ii) how fast the system reaches the different $\epsilon_s$'s, i.e., how
large the screening time $\tau_s$ is for the different states,
which in turn can be modulated by the
bulk and shear forces near and away from the critical temperature,
respectively. For a fixed centrality ($\epsilon_i$), $\Delta$ is
minimum for the $\Upsilon$(1S) and increases for the excited
states due to the hierarchy in the $\epsilon_s$'s. For a fixed $\Delta$,
$\tau_s$ becomes larger in the presence of dissipative forces, compared
to an ideal fluid. In fact, the shear viscosity slows down the early
stages of the expansion and thus affects the screening
of the $\Upsilon$(1S) most, whereas the bulk viscosity slows
down the late stages of the expansion, and hence the excited states are
affected more.
This is due to the fact that the shear viscosity is developed at the early
stages and diminishes gradually whereas the bulk viscosity sets in
late in the proximity of the critical temperature.
Thus both the bulk and shear viscosities act as an additional handle to
decipher the suppression pattern.
With this understanding, we now analyze the results on the screening energy
density contours for the $\Upsilon$(1S) and $\Upsilon$(2S) states
in Fig.~3 at LHC energy, i.e., how the topology of the contour depends
on the quarkonium properties, the expansion dynamics, etc.
The main observations are: i) the size of the
contour increases while going from the ground state to the excited
states, because the screening energy density decreases rapidly from the
$\Upsilon$(1S) to the $\Upsilon$(2S) state, so the system takes longer to
reach $\epsilon_s$ for the excited states; ii)
the contour also grows with increasing
viscous forces, because the viscous forces
slow down the entire evolution. More
specifically, the contour of the ground state is affected by the shear
term only, whereas the excited states are affected by both.
The above observations are encoded in the boundary of the screening
region ($r_s$). To understand both the centrality ($N_{\rm{Part}}$) and
transverse momentum ($p_T$) dependence of the suppression pattern,
we have calculated $r_s$ as a function of the number
of participants ($N_{\rm{Part}}$) for various $p_T$'s in Fig.~4.
It is found that the size of the screening boundary ($r_s$)
increases rapidly with centrality, which explains why there is more
suppression in the most central collisions and less suppression in
peripheral collisions. To be more specific, for the 1S state the screening
boundary initially enlarges rapidly and saturates for
$N_{\rm{part}} \ge 300$, while
for the excited states it increases monotonically with centrality.
This explains why there is a saturation trend in the CMS results for the
$\Upsilon$(1S) suppression for $N_{\rm{part}}>300$ (left panel of Fig.~5)
and a gradual suppression for the excited (2S and 3S) states
(Fig.~6). We also notice that, for a given
centrality, the screening boundary for the $\Upsilon$(1S) state shrinks
rapidly for $Q \bar Q$ pairs with larger $p_T$, while it
swells for pairs with smaller momenta. Thus, for a smaller number
of participants, the $p_T$ above which a pair can escape is larger
than for a larger number of participants.
Since the production of partons with smaller $p_T$'s is more abundant
at smaller centralities than at higher centralities, the above
observation explains why there is more suppression even
at the smaller centralities (left panel of Fig.~5).
However, the sensitivity to the transverse momentum is less prominent
for the excited states (right panel of Fig.~4).
That is why there is no strong centrality dependence
in the suppression pattern for the $\Upsilon$(2S) and (3S) states (Fig.~6),
in contrast to the $\Upsilon$(1S) state.
With these ingredients, we explain our results on the inclusive
survival probability for the $\Upsilon$(1S) state
computed from the feed-down of the excited states (left panel of Fig.~5).
We found that the suppression increases with centrality up to
$N_{\rm Part}$=300 and nearly saturates beyond $N_{\rm Part}>350$.
This finding is compatible with our earlier observation
(left panel of Fig.~4), where the screening radius ($r_s$)
is almost independent of the centrality beyond a certain value.
We also notice that the inclusive survival probability (averaged over
the centralities) for the $\Upsilon$(1S) state increases linearly with $p_T$
(right panel of Fig.~5). This is again compatible with
our earlier observation that, for a given centrality, $r_s$
increases almost linearly with $p_T$.
We have also calculated the inclusive survival probabilities
for the $\Upsilon$(2S) and (3S) states (Fig.~6), which
are found to decrease slowly with centrality.
This finding resonates with the earlier observation
(right panel of Fig.~4) that, for a given $p_T$, the screening radius ($r_s$)
increases linearly with the centrality, while for a given centrality the
$p_T$ dependence of $r_s$ is very weak.
The initial state effects affect the $\Upsilon$(nS) states in a similar
manner, so the possible acceptance and/or efficiency differences
cancel out in the ratio ${\Upsilon(nS)/\Upsilon(1S)}_{PbPb}$
(with respect to the $p$-$p$ collisions). Moreover, the final-state nuclear
absorption effects are expected to be minimal at LHC
energies~\cite{zwlin:PLB2001}, so we have calculated the double
ratio (Fig.~7) at the LHC energy, which shows weak suppression of the
2S state with respect to the 1S state for peripheral collisions and
no characteristic dependence on the collision centrality for
$N_{Part}>100$. However, the CMS results
show more suppression of the $\Upsilon$(2S) for peripheral collisions, and
other approaches~\cite{Tsong:PRC852012,Aemerick:EPJA482012,
Fnendzig:arxiv12108366v2} also agree
with this fact. This indicates that either there may be some additional
suppression mechanisms still missing in the theoretical
calculations, or the CMS measurements are not sufficient to disentangle
the nuclear effects from the medium effects; this could be
better resolved with the availability of more data from heavy-ion
and proton-nucleus collision runs at the LHC in the future.
The results for the double ratio show that the excited
states are suppressed more with respect to the ground state.
To explore the effects of the viscous forces on the suppression, we take
the shear viscosity to entropy ratio, $\eta/s$ as 0.08 and 0.3 along with
the parametrization of the bulk viscosity, $\zeta/s$ from (\ref{eq:zetas}).
We notice that the suppression increases with increasing
shear viscosity, which, in turn, enhances the screening time.
Our estimate agrees with the CMS data when the ratio $\eta/s$ is
taken from its perturbative estimate. This seems justified because
the screening energy density for the $\Upsilon$(1S) state is very high,
where the perturbative calculation seems meaningful. So the $\Upsilon$(1S)
production can be used to constrain the $\eta/s$ ratio.
Since the physics of isotropization is not yet theoretically understood,
the duration of the pre-equilibrium era is uncertain.
We therefore take the opportunity to constrain the arbitrariness of
$\tau_{\rm{iso}}$ by the suppression of bottomonium production
(Fig.~8), because the $\Upsilon$(1S) is formed earlier
than the isotropization time, which is not the case for the $\Upsilon$(2S)
state. We have found that $\tau_{\rm iso}$=0.3 fm looks
more plausible as far as the CMS data are concerned.
\section{Conclusions}
In conclusion, we have studied the sequential suppression of the
$\Upsilon$(1S) and $\Upsilon$(2S) states at the LHC energy in a
longitudinally expanding partonic system that evolves through successive
pre-equilibrium and equilibrium phases in the presence of dissipative
forces.
Quarkonium suppression in nucleus-nucleus collisions,
compared to $p$-$p$ collisions, couples the
in-medium properties of the quarkonium states with
the dynamics of the expanding medium.
In this work we obtained the dissociation temperatures of the quarkonium
states by correcting both
the perturbative and nonperturbative terms in the $Q\bar Q$ potential
in an (an)isotropic medium through HTL resummed perturbation theory.
We then modeled the pre-equilibrium evolution as an anisotropic fluid
via the time-dependent anisotropy parameter
$\xi(\tau)$ and the hard momentum scale $p_{\rm hard}(\tau)$, while
the equilibrium era is governed by second-order
dissipative hydrodynamics in the (1+1) Bjorken boost-invariant
model, and coupled them together to estimate the sequential suppression.
The expansion in equilibrium hydrodynamics is controlled by the speed
of sound $c_{s}^{2}$, which is in turn determined by the lattice
QCD equation of state, the shear ($\eta$) and bulk ($\zeta$) viscous
forces, etc.
The bulk viscosity, in conjunction with the
shear viscosity, slows down the cooling rate and thus causes
more suppression of the ground state; however, the bulk viscosity
$\zeta/s$ is more significant for the excited states.
The sequential suppression is a very complex
phenomenon that depends on several parameters,
such as the scale of dissociation of the quarkonium states, the decay
of the excited states, the centrality of the collision, the transverse
momentum, the screening time, the formation time, the dissipative forces,
etc., including the isotropization time; too early or too late
isotropization results in over-suppression.
The tiny formation time (compared to the isotropization time)
and the tightly bound character of the bottomonium states help the
suppression of bottomonium to constrain
both the isotropization time (0.3 fm) and the shear
viscosity-to-entropy ratio (0.3).
\noindent {\bf Acknowledgments:}
One of us (BKP) is thankful for some financial assistance
from CSIR project (CSR-656-PHY), Government of India. USK is also
thankful to Government of Maharashtra for the financial assistance.
\section{Introduction}
In this paper, we study secure communications in multi-user
interference networks from an information-theoretic point of view. The
security of communication was first studied by Shannon via a noiseless
wiretap channel \cite{Shannon:1949}. Noisy wiretap channel was
introduced by Wyner who determined its capacity-equivocation region
for the degraded case \cite{wyner}. His result was generalized to
arbitrary, not necessarily degraded, wiretap channels by Csiszar and
Korner \cite{csiszar}, and extended to Gaussian wiretap channels by
Leung-Yan-Cheong and Hellman \cite{gaussian}. This line of research
has subsequently been extended to many multi-user settings, e.g., broadcast channels with confidential messages \cite{secrecy_ic, xu_bounds_bc_cm_it_09}, multi-receiver wiretap channels \cite{fading1, bagherikaram_bc_2008, ersen_bc_asilomar_08, ersen_eurasip_2009} (see also a survey on extensions of these to MIMO channels \cite{ersen_jcn_2010}), interference channels with confidential messages \cite{secrecy_ic, he_outerbound_gic_cm_ciss_09}, interference channels with external eavesdroppers \cite{koyluoglu_ic_external}, multiple access wiretap channels \cite{tekin_gmac_w, cooperative_jamming, ersen_mac_allerton_08, liang_mac_cm_08, ersen_mac_book_chapter}, wiretap channels with helpers \cite{wiretap_channel_with_one_helper}, relay eavesdropper channels \cite{relay_1, relay_2, relay_3, relay_4, he_untrusted_relay, ersen_crbc_2011}, compound wiretap channels \cite{compound_wiretap_channel, ersen_ulukus_degraded_compound}, etc. While the channel models involving a single transmitter, such as broadcast channels with confidential messages and multi-receiver wiretap channels, are relatively well understood, the channel models involving multiple independent transmitters, such as interference channels with confidential messages and/or external eavesdroppers, multiple access wiretap channels, wiretap channels with helpers, and relay-eavesdropper channels, are much less understood. The exact secrecy capacity regions of all these multiple-transmitter models remain unknown, even in the case of simple Gaussian channels. In the absence of exact secrecy capacity regions, the achievable
secure degrees of freedom (d.o.f.) in the high signal-to-noise ratio (SNR) regime have
been studied in the literature \cite{he_k_gic_cm_09, koyluoglu_k_user_gic_secrecy, xie_k_user_ia_compound, secrecy_ia_new, xiang_he_thesis, secrecy_ia5, raef_mac_it_12, secrecy_ia1, interference_alignment_compound_channel, xie_gwch_allerton, xie_sdof_networks_in_prepare, xie-ulukus-ciss13, xie_layered_network_journal}. In this paper, we focus on the
$K$-user interference channel with secrecy constraints, and determine its exact sum secure d.o.f.
The $K$-user Gaussian interference channel with secrecy constraints consists of $K$ \linebreak transmitter-receiver pairs each wishing to have secure communication over a Gaussian interference channel (IC); see Figure~\ref{fig:kic-general}. We consider three different secrecy constraints: 1) $K$-user interference channel with one external eavesdropper (IC-EE), where $K$ transmitter-receiver pairs wish to have secure communication against an external eavesdropper, see Figure~\ref{fig:kic-subfigs}(a). 2) $K$-user interference channel with confidential messages (IC-CM), where there are no external eavesdroppers, but each transmitter-receiver pair wishes to secure its communication against the remaining $K-1$ receivers, see Figure~\ref{fig:kic-subfigs}(b). 3) $K$-user interference channel with confidential messages and one external eavesdropper (IC-CM-EE), which is a combination of the previous two cases, where each transmitter-receiver pair wishes to secure its communication against the remaining $K-1$ receivers and the external eavesdropper, see Figure~\ref{fig:kic-subfigs}(c).
In the Gaussian wiretap channel, the secrecy capacity is the
difference between the channel capacities of the transmitter-receiver
and the transmitter-eavesdropper pairs \cite{gaussian}. It is well-known that this
difference does not scale with the SNR, and hence the secure d.o.f.~of
the Gaussian wiretap channel is zero, indicating a severe penalty due
to secrecy in this case. Fortunately, this does not hold in most multi-user
scenarios, including the interference channel. Reference
\cite{he_k_gic_cm_09} showed that nested lattice codes and layered
coding are useful in providing positive sum secure d.o.f. for the
$K$-user IC-CM; their result gave a sum secure d.o.f.~of less than
$\frac{3}{4}$ for $K=3$. Reference
\cite{koyluoglu_k_user_gic_secrecy} used interference alignment to
achieve a sum secure d.o.f.~of $\frac{K(K-2)}{2K-2}$ for the $K$-user
IC-CM, which gave $\frac{3}{4}$ for $K=3$. Based on the same idea,
\cite{xie_k_user_ia_compound, koyluoglu_k_user_gic_secrecy} achieved a
sum secure d.o.f. of $\frac{K(K-1)}{2K}$ for the $K$-user IC-EE, which
gave $1$ for $K=3$. The approach used in \cite{xie_k_user_ia_compound,
koyluoglu_k_user_gic_secrecy} is basically to evaluate the secrecy
performance of the interference alignment technique
\cite{interference_alignment} devised originally for the $K$-user interference
channel without any secrecy constraints. Since the original interference alignment scheme puts all
of the interfering signals into the same reduced-dimensionality sub-space at a receiver, it
naturally provides a certain amount of secrecy to those signals
as an unintended byproduct, because the interference signals in
this sub-space create uncertainty for one another and make it
difficult for the receiver to decode them. However, since the end-goal of \cite{interference_alignment} is \emph{only} to achieve reliable decoding of the transmitted messages at their intended receivers, the d.o.f. it provides is sub-optimal when \emph{both} secrecy and reliability of messages are considered.
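Returning to the observation above that the Gaussian wiretap secrecy capacity does not scale with SNR: this saturation is easy to check numerically. The following sketch (an illustration with arbitrary channel gains, not part of the formal development) evaluates the secrecy capacity of \cite{gaussian} and its ratio to $\frac{1}{2}\log P$:

```python
import math

def wiretap_secrecy_capacity(P, h, g):
    # Secrecy capacity of the Gaussian wiretap channel: the difference of the
    # legitimate-channel and eavesdropper-channel capacities (when positive).
    return max(0.0, 0.5 * math.log(1 + h**2 * P) - 0.5 * math.log(1 + g**2 * P))

h, g = 2.0, 1.0  # arbitrary illustrative gains with h > g
for P in (1e2, 1e5, 1e9):
    Cs = wiretap_secrecy_capacity(P, h, g)
    print(f"P = {P:.0e}: C_s = {Cs:.4f}, C_s / (0.5 log P) = {Cs / (0.5 * math.log(P)):.4f}")
# C_s saturates at 0.5*log(h^2/g^2), so the ratio (the secure d.o.f.) tends to 0.
```

The capacity difference converges to the constant $\frac{1}{2}\log(h^2/g^2)$, so dividing by $\frac{1}{2}\log P$ drives the secure d.o.f. to zero.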
\begin{figure}[t]
\centering
\includegraphics[scale=0.9]{figures/kic-wiretap-new}
\caption{$K$-user Gaussian interference channel with secrecy constraints.}
\label{fig:kic-general}
\end{figure}
Recently, the \emph{exact} sum secure d.o.f. of the two-user IC-CM was
obtained to be $\frac{2}{3}$ in \cite{xie_sdof_networks_in_prepare}.
This reference showed that while interference alignment is a key
ingredient in achieving positive secure d.o.f., a more intricate
design of the signals is needed to achieve the simultaneous end-goals
of reliability at the desired receivers and secrecy at the
eavesdroppers. In particular, in \cite{xie_sdof_networks_in_prepare},
each transmitter sends both message carrying signals, as well as
cooperative jamming signals. This random mapping of the message
carrying signals to the channel inputs via cooperative jamming signals
may be interpreted as channel prefixing \cite{csiszar}. Both the
message carrying signals and the cooperative jamming signals come from
the same discrete alphabet, and hence are structured. In addition, the
signals are carefully aligned at the legitimate receivers and the
eavesdroppers using real interference alignment \cite{real_inter_align}.
In particular, at each receiver, the unintended message and both jamming signals are
constrained in the same interference sub-space, providing an
interference-free sub-space for the intended message. Further, inside the interference sub-space,
each unintended message is protected by aligning it with
the jamming signal from the other transmitter. Such a perfect alignment provides a constant upper bound for the information leakage rate.
\begin{figure}[t]
\centerline{\begin{tabular}{ccc}
\subfigure{\includegraphics[scale=0.7]{figures/kic-ee-short}} \hspace*{0.4in}&
\subfigure{\includegraphics[scale=0.7]{figures/kic-cm-short}} \hspace*{0.4in}&
\subfigure{\includegraphics[scale=0.7]{figures/kic-cm-ee-short}}\\
(a) & (b) & (c)
\end{tabular}}
\caption{The receiver sides of the three channel models: (a) $K$-user
IC-EE, (b) $K$-user IC-CM, and
(c) $K$-user IC-CM-EE, where $W_{-i}^K \defn
\{W_1,\ldots,W_{i-1},W_{i+1},\ldots, W_K\}$.}
\label{fig:kic-subfigs}
\end{figure}
In this paper, we generalize the results in
\cite{xie_sdof_networks_in_prepare} to the case of the $K$-user
interference channel, for $K>2$. Our generalization has three main
components:
\begin{enumerate}
\item While \cite{xie_sdof_networks_in_prepare} considered IC-CM only, we
consider both IC-CM and IC-EE and their combination IC-CM-EE in a
unified framework. To this end, we show converses separately for
IC-EE and IC-CM, which imply a converse for IC-CM-EE; and we show
achievability for IC-CM-EE, which implies achievability
for IC-EE and IC-CM. The achievability and converse meet giving an
{\it exact} sum secure d.o.f.~of $\frac{K(K-1)}{2K-1}$ for all three models.
\item For achievability: In the case of two-user IC-CM in \cite{xie_sdof_networks_in_prepare},
each message needs to be delivered reliably to one receiver and needs to be
protected from another receiver. This requires alignment at two receivers, which is achieved
in \cite{xie_sdof_networks_in_prepare} by simply choosing transmission coefficients properly, which cannot be extended to the $K$-user case here. In the $K$-user
IC-CM-EE case, we need to deliver each message to a receiver, while
protecting it from $K$ other receivers. This requires designing
signals in order to achieve alignment at $K+1$ receivers
simultaneously: at one receiver (desired receiver) we need alignment to ensure that the
largest space is made available to message carrying signals for their
reliable decodability, and at $K$ other receivers, we need to align
cooperative jamming signals with message carrying signals to protect
them. These requirements create two challenges: i) aligning multiple signals
simultaneously at multiple receivers, and ii) upper bounding
the information leakage rates by suitable functions which can be made small.
We overcome these challenges by using an asymptotic approach \cite{real_inter_align_exploit},
where we introduce many signals that carry each message and align them simultaneously at multiple receivers only order-wise (i.e., align most of them, but not all of them), and by developing a method
to upper bound the information leakage rate by a function which can be made small.
In contrast to the constant upper bound for the information leakage rate in \cite{xie_sdof_networks_in_prepare}, here the upper bound is not constant, but a function which can be made small. This is due to the imperfect (i.e., only asymptotic) alignment.
\item For the converse: To the best of our knowledge, the only known
upper bound for the sum secure \dof of the $K$-user interference
channel with secrecy constraints is $\frac{K}{2}$, which is the upper
bound with no secrecy constraints \cite{interference_alignment}. The
upper bounding technique for the two-user IC-CM in
\cite{xie_sdof_networks_in_prepare} considers one single confidential
message against the corresponding unintended receiver each time, since
in that case the eavesdropping relationship is straightforward: for each
message there is only one eavesdropper and for each
eavesdropper there is only one confidential message. However, in the
case of $K$-user IC, each message is required to be kept secret against
multiple eavesdroppers and each eavesdropper is associated
with multiple unintended messages. To develop a tight converse, we focus
on the eavesdropper as opposed to the message. In the converse for IC-EE,
we consider the sum rate of all of the messages
eavesdropped by the external eavesdropper. We sequentially apply the {\it role
of a helper lemma} in \cite{xie_sdof_networks_in_prepare} to each
transmitter by treating its signal as a helper to another specific transmitter. In the
converse for IC-CM, for each receiver (which also is an eavesdropper), we consider the sum rate of
all unintended messages, and again apply the {\it role of a helper lemma} in a specific structure.
\end{enumerate}
\section{System Model, Definitions and the Result}
\label{kic-model-ee}
The input-output relationships for a $K$-user Gaussian interference channel
with secrecy constraints (Figure~\ref{fig:kic-general}) are given by
\begin{align}
\label{eqn:kic-channel-model-ee-1}
Y_i & = \sum_{j=1}^K h_{ji} X_j + N_i, \qquad i =1,\ldots,K \\
\label{eqn:kic-channel-model-ee-2}
Z & = \sum_{j=1}^K g_{j} X_j + N_Z
\end{align}
where $Y_i$ is the channel output of receiver $i$, $Z$ is the channel output of the external eavesdropper (if there is any), $X_i$ is the channel input of transmitter $i$, $h_{ji}$ is the channel gain of the $j$th transmitter to the $i$th receiver, $g_j$ is the channel gain of the $j$th transmitter to the eavesdropper (if there is any), and $\{N_1,\ldots,N_K,N_Z\}$ are mutually independent zero-mean unit-variance Gaussian random variables. All the channel gains are time-invariant, and independently drawn from continuous distributions. We further assume that all $h_{ji}$ are non-zero, and all $g_j$ are non-zero if there is an external eavesdropper. All channel inputs satisfy average power constraints, $\E\left[X^2_{i}\right] \le
P$, for $i=1,\ldots, K$.
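For concreteness, the input-output relations \eqn{eqn:kic-channel-model-ee-1} and \eqn{eqn:kic-channel-model-ee-2} can be simulated directly; this is only an illustration of the model, with arbitrary placeholder values for $K$, $n$, and the channel inputs:

```python
import random

random.seed(1)
K, n, P = 3, 5, 1.0  # illustrative: 3 users, block length 5, unit power

# Time-invariant gains drawn from continuous distributions (non-zero w.p. 1):
# h[j][i] is the gain from transmitter j to receiver i, g[j] to the eavesdropper.
h = [[random.gauss(0, 1) for i in range(K)] for j in range(K)]
g = [random.gauss(0, 1) for j in range(K)]

# Arbitrary inputs respecting the average power constraint E[X_i^2] <= P.
X = [[random.uniform(-P**0.5, P**0.5) for t in range(n)] for j in range(K)]

# Y_i = sum_j h_{ji} X_j + N_i  and  Z = sum_j g_j X_j + N_Z, unit-variance noise.
Y = [[sum(h[j][i] * X[j][t] for j in range(K)) + random.gauss(0, 1)
      for t in range(n)] for i in range(K)]
Z = [sum(g[j] * X[j][t] for j in range(K)) + random.gauss(0, 1) for t in range(n)]
print(len(Y), len(Y[0]), len(Z))  # → 3 5 5
```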
Each transmitter $i$ intends to send a message $W_i$, uniformly chosen from a set $\mathcal{W}_i$, to receiver $i$. The rate of the message is $R_i\defn\frac{1}{n}\log|\mathcal{W}_i|$, where $n$ is the number of channel uses. Transmitter $i$ uses a stochastic function $f_i: \mathcal{W}_i\to \bfX_i$ to encode the message, where $\bfX_i\defn X_i^n$ is the $n$-length channel input of user $i$. We
use boldface letters to denote $n$-length vector signals, e.g., $\bfX_i\defn X_i^n$, $\bfY_j\defn Y_j^n$, $\bfZ\defn Z^n$, etc. The legitimate receiver $j$ decodes the message as $\hat{W}_j$ based
on its observation $\mathbf{Y}_j $. A rate tuple $(R_1,\ldots,R_K)$ is said to be achievable if for any $\epsilon>0$, there exist joint $n$-length codes such that each receiver $j$ can decode the corresponding message reliably, i.e., the probability of decoding error is less than $\epsilon$ for all messages,
\begin{equation}
\max_{j}\pr\left[W_j\neq\hat{W}_j\right] \le \epsilon
\end{equation}
and the corresponding secrecy requirement is satisfied. We consider three different secrecy requirements:
\begin{enumerate}
\item[{1)}] In IC-EE, Figure~\ref{fig:kic-subfigs}(a), all of the messages are kept information-theoretically secure against the external eavesdropper,
\begin{align}
\label{eqn:kic-secrecy-constraint-cmee-1}
H(W_1,\ldots, W_K|\bfZ) & \ge H(W_1,\ldots, W_K) - n \epsilon
\end{align}
\item[{2)}] In IC-CM, Figure~\ref{fig:kic-subfigs}(b), all unintended messages are kept
information-theoretically secure against each receiver,
\begin{align}
\label{eqn:kic-secrecy-constraint-cmee-2}
H(W_{-i}^K|\bfY_i) & \ge H(W_{-i}^K) - n \epsilon,
\qquad i=1,\ldots,K
\end{align}
where $W_{-i}^K \defn \{W_1,\ldots,W_{i-1},W_{i+1},\ldots, W_K\}$.
\item[{3)}] In IC-CM-EE, Figure~\ref{fig:kic-subfigs}(c), all of the messages are kept information-theoretically secure against both the $K-1$ unintended receivers and the eavesdropper, i.e., we impose both secrecy constraints in \eqn{eqn:kic-secrecy-constraint-cmee-1} and
\eqn{eqn:kic-secrecy-constraint-cmee-2}.
\end{enumerate}
The supremum of all achievable sum secrecy rates is the sum secrecy capacity $C_{s,\Sigma}$, and the sum secure d.o.f., $D_{s,\Sigma}$, is defined as
\begin{equation}
D_{s,\Sigma} \defn \lim_{P\to\infty} \frac{C_{s,\Sigma}}{\frac{1}{2}\log P}
= \lim_{P\to\infty} \sup \frac{R_1+\cdots+R_K}{\frac{1}{2}\log P}
\label{eqn:kic-ee-sec-dof-defn}
\end{equation}
The main result of this paper is stated in the following theorem.
\begin{theorem}
\label{eqn:kic-ds-final}
The sum secure d.o.f. of the $K$-user IC-EE, IC-CM, and IC-CM-EE is $\frac{K(K-1)}{2K-1}$ for almost all channel gains.
\end{theorem}
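As a quick numerical reading of the theorem (illustrative only, not part of the proof): the expression $\frac{K(K-1)}{2K-1}$ recovers the known $\frac{2}{3}$ for $K=2$, and falls short of the no-secrecy bound $\frac{K}{2}$ by exactly $\frac{K}{2(2K-1)}$, which decreases toward $\frac{1}{4}$ as $K$ grows:

```python
from fractions import Fraction

def sum_secure_dof(K):
    # Theorem 1: sum secure d.o.f. of the K-user IC-EE, IC-CM and IC-CM-EE.
    return Fraction(K * (K - 1), 2 * K - 1)

for K in (2, 3, 4, 10):
    d = sum_secure_dof(K)
    gap = Fraction(K, 2) - d  # cost of secrecy relative to the K/2 bound
    print(f"K = {K:2d}: d.o.f. = {d} ({float(d):.3f}), gap to K/2 = {gap} ({float(gap):.3f})")
```

So the per-system cost of secrecy stays bounded (at most a constant number of d.o.f. in total), even as the number of users grows.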
\section{Preliminaries}
\subsection{ Role of a Helper Lemma}
For completeness, we restate Lemma~2 in \cite{xie_sdof_networks_in_prepare} here, which is called the {\it role of a helper lemma}. This lemma identifies a constraint on the signal of a given transmitter, based on the decodability of another transmitter's message at its intended receiver.
\begin{lemma}[\!\! \cite{xie_sdof_networks_in_prepare}]
\label{lemma:gwch_general_ub_for_helper}
For reliable decoding of the $k$th transmitter's signal at the $k$th receiver, the
channel input of transmitter $i\neq k$, $\bfX_i$, must satisfy
\begin{equation}
h(\bfX_i + \tilde\bfN)\le h(\bfY_k) - n R_k + n {c}
\label{eqn:gwch_general_ub_for_helper}
\end{equation}
where $c$ is a constant which does not depend on $P$, $\tN$ is a
Gaussian random variable, independent of all other random variables, with $\sigma_{\tN}^2 <
\frac{1}{h_{ik}^2}$, and $\tilde\bfN$ is an i.i.d.~sequence of $\tilde N$.
\end{lemma}
Lemma~\ref{lemma:gwch_general_ub_for_helper} gives an upper bound on the
differential entropy of (a noisy version of) the signal of any given transmitter,
transmitter $i$ in \eqn{eqn:gwch_general_ub_for_helper}, in terms of the differential entropy of
the channel output and the message rate $n R_k = H(W_k)$ of a user $k$, based on the decodability of message $W_k$ at its intended receiver.
The inequality in this lemma, \eqn{eqn:gwch_general_ub_for_helper}, can
alternatively be interpreted as an upper bound on the message rate, i.e., on
$n R_k$, in terms of the difference of the differential entropies of the channel
output of a receiver $k$ and the channel input of a transmitter $i$; in
particular, the higher the differential entropy of the signal coming from user $i$, the lower this upper bound will be on the rate of user $k$. This motivates not using i.i.d.~Gaussian
signals, which have the highest differential entropy. Also note that this lemma does not
involve any secrecy constraints, and is based only on the decodability of the messages at their
intended receivers.
\subsection{Real Interference Alignment}
\subsubsection{Pulse Amplitude Modulation}
For a point-to-point scalar Gaussian channel,
\begin{equation}
Y = X + Z
\end{equation}
with additive Gaussian noise $Z \sim \mathcal{N}(0,\sigma^2)$ and
an input power constraint $\mathe{X^2} \le P$, assume that
the input symbols are drawn from a PAM constellation,
\begin{equation}
C(a,Q) = a \{ -Q, -Q+1, \ldots, Q-1,Q\}
\label{constel}
\end{equation}
where $Q$ is a positive integer and $a$ is a real number to normalize
the transmit power. Note that $a$ is also the minimum distance
$d_{min}(C)$ of this constellation, which yields a probability
of error bounded as
\begin{equation}
\pe(e) \le \exp\left( - \frac{d_{min}^2}{8 \sigma^2}\right) = \exp\left( - \frac{a^2}{8 \sigma^2}\right)
\end{equation}
The transmission rate of this PAM scheme is
\begin{equation}
R = \log( 2 Q + 1)
\end{equation}
since there are $2Q+1$ signalling points in the constellation. For any small enough $\delta>0$, if we choose $Q = P^{\frac{1-\delta}{2}}$ and $a=\gamma P^{\frac{\delta}{2}}$, where
$\gamma$ is a constant, independent of $P$, chosen to normalize the transmit
power, then
\begin{equation}
\pe(e)
\le \exp\left( -\frac{\gamma^2 P^{{ \delta}}}{8\sigma^2} \right)
\qquad \hbox{and} \qquad
R
\ge \frac{1-\delta}{2} \log P
\end{equation}
and we can have $\pe(e)\to 0$ and $R\to\frac{1}{2}\log P$ as
$P\to\infty$. That is, we can have reliable communication at rates
approaching $\frac{1}{2}\log P$, and therefore have $1$ d.o.f.
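These scaling choices can be checked numerically (a sketch with illustrative values of $\delta$ and $P$; the logarithm base cancels in the d.o.f. ratio):

```python
import math

def pam_parameters(P, delta, gamma=1.0, sigma2=1.0):
    # Choices from the text: Q = P^{(1-delta)/2}, a = gamma * P^{delta/2}.
    Q = int(P ** ((1 - delta) / 2))
    a = gamma * P ** (delta / 2)
    rate = math.log(2 * Q + 1)                 # R = log(2Q + 1)
    pe_bound = math.exp(-a**2 / (8 * sigma2))  # Pe <= exp(-a^2 / (8 sigma^2))
    return rate, pe_bound

delta = 0.2
for P in (1e4, 1e8, 1e12):
    R, pe = pam_parameters(P, delta)
    print(f"P = {P:.0e}: R / (0.5 log P) = {R / (0.5 * math.log(P)):.3f}, Pe bound = {pe:.3e}")
# For fixed delta the rate ratio tends to 1 - delta while the error bound
# vanishes; letting delta -> 0 then yields the full 1 d.o.f.
```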
\subsubsection{Real Interference Alignment}
This PAM scheme for the point-to-point scalar channel can be
generalized to multiple data streams. Let the transmit signal be
\begin{equation}
x = \mb{a}^T \mb{b} = \sum^L_{i=1} a_i b_i
\end{equation}
where $a_1,\ldots, a_L$ are rationally independent real
numbers\footnote{ $a_1, \ldots, a_L$ are rationally independent if
whenever $q_1,\ldots,q_L$ are rational numbers then
$\sum^L_{i=1} q_i a_i =0$ implies $q_i=0$ for all $i$. } and each
$b_i$ is drawn independently from the constellation $C(a,Q)$ in (\ref{constel}).
The real value $x$ is a combination of $L$ data streams, and the constellation observed at
the receiver consists of $(2 Q+1)^L$ signal points.
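As a small concrete check (with hypothetical coefficients $a_1=\sqrt{2}$, $a_2=\sqrt{3}$, which are rationally independent), the $(2Q+1)^L$ receiver points are indeed distinct, whereas a rational ratio $a_2/a_1$ causes collisions:

```python
import itertools, math

Q, L = 2, 2
symbols = range(-Q, Q + 1)  # PAM constellation C(1, Q)

def receiver_points(a):
    # All values sum_i a_i b_i seen by the receiver, rounded to tame float noise.
    return {round(sum(ai * bi for ai, bi in zip(a, b)), 9)
            for b in itertools.product(symbols, repeat=len(a))}

irrational = receiver_points([math.sqrt(2), math.sqrt(3)])  # rationally independent
rational = receiver_points([1.0, 2.0])                      # a2/a1 = 2 is rational

print(len(irrational), len(rational))  # → 25 13
```

With rationally independent coefficients every pair $(b_1,b_2)$ maps to a distinct receiver value, so all $L$ data streams remain separable.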
By using the Khintchine-Groshev theorem of Diophantine approximation
in number theory, \cite{real_inter_align_exploit,real_inter_align}
bounded the minimum distance $d_{min}$ of points in the receiver's
constellation: For any $\delta>0$, there exists a
constant $k_\delta$, such that
\begin{equation}
\label{ria:lb_of_d}
d_{min} \ge \frac{ k_\delta a}{Q^{L-1+\delta}}
\end{equation}
for almost all rationally independent $\{a_i\}_{i=1}^L$, except for a set
of Lebesgue measure zero. Since the minimum distance of the receiver
constellation is lower bounded, with proper choice of $a$ and $Q$, the
probability of error can be made arbitrarily small, with rate $R$ approaching
$\frac{1}{2} \log P$. This result is stated in the following lemma.
\begin{lemma}[\!\! \cite{real_inter_align_exploit,real_inter_align}]
\label{lemma:ria_real_alignment}
For any small enough $\delta>0$, there exists a positive constant
$\gamma$, which is independent of $P$, such that if we choose
\begin{equation}
Q = P^{\frac{1-\delta}{2(L+\delta)}}
\qquad \mbox{and} \qquad
a=\gamma \frac{P^{\frac{1}{2}}}{Q}
\end{equation}
then the average power constraint is satisfied, i.e., $\mathe{X^2}\le P $,
and for almost all $\{a_i\}_{i=1}^L$, except for a set of Lebesgue measure zero,
the probability of error is bounded by
\begin{equation}
\mathrm{Pr}(e)
\le
\exp\left( - \eta_\gamma P^{{ \delta}} \right)
\end{equation}
where $\eta_\gamma$ is a positive constant which is
independent of $P$.
\end{lemma}
Furthermore, as a simple extension, if the $b_i$ are sampled independently
from different constellations $C_i(a,Q_i)$, the lower bound in
\eqn{ria:lb_of_d} is modified as
\begin{equation}
d_{min} \ge \frac{ k_\delta a}{(\max_i Q_i)^{L-1+\delta}}
\end{equation}
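The lower bound on $d_{min}$ can also be probed empirically (illustrative coefficients; the constant $k_\delta$ is not computed here): for $L=2$ and fixed $a$, the receiver's minimum distance stays positive and shrinks as $Q$ grows, while $d_{min}\cdot Q$ does not collapse to zero, consistent with \eqn{ria:lb_of_d}:

```python
import itertools

A1, A2 = 2 ** 0.5, 3 ** 0.5  # rationally independent coefficients (L = 2)

def receiver_min_distance(Q, a=1.0):
    # Minimum distance of the receiver constellation {a * (A1*b1 + A2*b2)}.
    pts = sorted(a * (A1 * b1 + A2 * b2)
                 for b1, b2 in itertools.product(range(-Q, Q + 1), repeat=2))
    return min(q - p for p, q in zip(pts, pts[1:]))

for Q in (2, 5, 10, 20):
    d = receiver_min_distance(Q)
    # The Khintchine-Groshev bound predicts d_min >= k * a / Q^(1 + delta):
    # d_min shrinks with Q, but d_min * Q stays bounded away from zero here.
    print(f"Q = {Q:2d}: d_min = {d:.5f}, d_min * Q = {d * Q:.3f}")
```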
\section{Converse for IC-EE}
In this section, we develop a converse for the $K$-user IC-EE (see
Figure~\ref{fig:kic-subfigs}(a)) defined
in \eqn{eqn:kic-channel-model-ee-1} and
\eqn{eqn:kic-channel-model-ee-2} with the secrecy constraint \eqn{eqn:kic-secrecy-constraint-cmee-1}. We start with the sum rate:
\begin{align}
n\sum_{i=1}^K R_i
& = \sum_{i=1}^K H(W_i) = H(W_1^K)\\
& \le I(W_1^K;\bfY_1^K) - I(W_1^K;\bfZ) + n \nextscnu \\
& \le I(W_1^K;\bfY_1^K,\bfZ) - I(W_1^K;\bfZ) + n \nextscnu \\
& = I(W_1^K;\bfY_1^K|\bfZ) + n \nextscnu \\
& \le I(\bfX_1^K;\bfY_1^K|\bfZ) + n \nextscnu \\
& = h(\bfY_1^K|\bfZ) - h(\bfY_1^K|\bfZ,\bfX_1^K) + n \nextscnu \\
& = h(\bfY_1^K|\bfZ) - h(\bfN_1^K|\bfZ,\bfX_1^K) + n \nextscnu \\
& \le h(\bfY_1^K|\bfZ) + n \nextsc\\
& = h(\bfY_1^K,\bfZ) -h(\bfZ) + n \nextscnu
\label{kid:converse-ic-ee-continue-from}
\end{align}
where $W_1^K\defn \{W_j\}_{j=1}^K$, $\bfX_1^K\defn \{\bfX_j\}_{j=1}^K$,
$\bfY_1^K\defn \{\bfY_j\}_{j=1}^K$, and all the $c_i$s in this paper are constants which do
not depend on $P$.
For each $j$, we introduce $\tilde\bfX_j = \bfX_j +
\tilde\bfN_j$, where $\tilde\bfN_j$ is an i.i.d.~sequence of $\tN_j$ which is a
zero-mean Gaussian random variable with variance $\sigma_j^2 < \min(\min_i 1/h_{ji}^2,1/g_j^2)$.
Also, $\{\tN_j\}_{j=1}^K$ are mutually independent, and are independent of all
other random variables. Continuing from \eqn{kid:converse-ic-ee-continue-from},
\begin{align}
n\sum_{i=1}^K R_i
& \le h(\tilde\bfX_1^K, \bfY_1^K,\bfZ)- h(\tilde\bfX_1^K|
\bfY_1^K,\bfZ)-h(\bfZ) + n \nextscnu \\
& \le h(\tilde\bfX_1^K, \bfY_1^K,\bfZ)- h(\tilde\bfX_1^K|
\bfX_1^K,\bfY_1^K,\bfZ)-h(\bfZ) + n \nextscnu \\
& = h(\tilde\bfX_1^K, \bfY_1^K,\bfZ)- h(\tilde\bfN_1^K)-h(\bfZ) + n \nextscnu \\
& \le h(\tilde\bfX_1^K, \bfY_1^K,\bfZ)-h(\bfZ) + n \nextsc \\
& = h(\tilde\bfX_1^K) + h( \bfY_1^K,\bfZ|\tilde\bfX_1^K)-h(\bfZ) + n \nextscnu
\\
& \le h(\tilde\bfX_1^K) -h(\bfZ) + n \nextsc \label{into-that}
\end{align}
where $\tilde\bfX_1^K\defn \{\tilde\bfX_j\}_{j=1}^K$, and the last inequality is due to the fact that $h(\bfY_1^K,\bfZ|\tilde\bfX_1^K)\leq nc'$, i.e., given all the channel inputs (disturbed by small Gaussian noises), the channel outputs can be
\emph{reconstructed}, which is shown as follows
\begin{align}
& h(\bfY_1^K,\bfZ|\tilde\bfX_1^K) \nl
&\quad\quad \le
\left[\sum_{j=1}^K h(\bfY_j|\tilde\bfX_1^K) \right]
+ h(\bfZ|\tilde\bfX_1^K)
\\
&\quad\quad =
\left[ \sum_{j=1}^K h\left(\sum_{i=1}^{K} h_{ij} (\tilde\bfX_i - \tilde\bfN_i) + \bfN_j \Bigg|
\tilde\bfX_1^K\right) \right]
+ h\left(\sum_{i=1}^{K} g_i (\tilde\bfX_i - \tilde\bfN_i) + \bfN_Z \Bigg|
\tilde\bfX_1^K\right)
\\
&\quad\quad =
\left[ \sum_{j=1}^K h\left(- \sum_{i=1}^{K} h_{ij} \tilde\bfN_i + \bfN_j \Bigg|
\tilde\bfX_1^K\right) \right]
+ h\left(- \sum_{i=1}^{K} g_i \tilde\bfN_i + \bfN_Z \Bigg|
\tilde\bfX_1^K\right)
\\
&\quad\quad \le
\left[ \sum_{j=1}^K h\left(- \sum_{i=1}^{K} h_{ij} \tilde\bfN_i + \bfN_j \right) \right]
+ h\left(- \sum_{i=1}^{K} g_i \tilde\bfN_i + \bfN_Z \right)
\\
&\quad\quad \stackrel{\triangle}{=} n \nextsc
\label{eqn:kic_reconstruction}
\end{align}
Next, we note
\begin{equation}
h(\tilde\bfX_j)
\le h( g_j \bfX_j + \bfN_Z) + n \nextsc
\le h(\bfZ) + n \nextscnu, \qquad j=1,\ldots,K \label{inserting-this}
\end{equation}
where the inequalities are due to the differential entropy version of \cite[Problem
2.14]{cover_it_book}. Inserting (\ref{inserting-this}) into (\ref{into-that}), for any $j=1,\ldots,K$, we get
\begin{align}
n\sum_{i=1}^K R_i
& \le h(\tilde\bfX_1^K) -h(\bfZ) + n c_3 \\
& \le \sum_{i=1}^K h(\tilde\bfX_i) -h(\bfZ) + n c_3\\
& \le \sum_{i=1,i\neq j}^K h(\tilde\bfX_i) + n \nextsc \label{apply-rohl}
\end{align}
which means that the net effect of the presence of an eavesdropper is to \emph{eliminate} one of the channel inputs; we call this the \emph{secrecy penalty}.
We apply the {\it role of a helper lemma}, Lemma~\ref{lemma:gwch_general_ub_for_helper}, to each $\tilde\bfX_i$ in (\ref{apply-rohl}), with $k=i+1$ (and $k=1$ for $i=K$), to obtain
\begin{align}
n\sum_{i=1}^K R_i
& \le h(\tilde\bfX_1) + h(\tilde\bfX_2) + \cdots
+ h(\tilde\bfX_{j-1}) +h(\tilde\bfX_{j+1}) + \cdots + h(\tilde\bfX_K) + n\nextsc
\\
& \le \left[ h(\bfY_2) - n R_2 \right]
+ \left[ h(\bfY_3) - n R_3 \right] + \cdots
+ \left[h(\bfY_{j}) - n R_{j} \right]\nl
&\quad+ \left[ h(\bfY_{j+2}) - n R_{j+2} \right]
+ \cdots +\left[ h(\bfY_{K}) - n R_{K} \right] + \left[ h(\bfY_1) - n R_1 \right]
+ n \nextsc
\end{align}
By noting that $h(\bfY_i) \le \frac{n}{2}\log P + n c_i'$
for each $i$, we have
\begin{align}
2 n\sum_{i=1}^K R_i \le (K-1) \left( \frac{n}{2}\log P \right) + n R_{j+1} + n \nextsc
\label{eqn:kic-ee-ub-general-j}
\end{align}
for $j=1,\ldots,K$, with the cyclic index convention $R_{K+1} \defn R_1$. Therefore, we have a total of $K$ bounds in \eqn{eqn:kic-ee-ub-general-j} for $j=1,\ldots,K$. Summing these $K$ bounds, and noting that $\sum_{j=1}^K n R_{j+1} = n \sum_{i=1}^K R_i$, we obtain:
\begin{align}
(2 K - 1) n\sum_{i=1}^K R_i \le K (K-1) \left( \frac{n}{2}\log P \right) + n \nextsc
\end{align}
which gives
\begin{align}
D_{s,\Sigma} \le \frac{ K (K-1)}{2 K - 1}
\end{align}
completing the converse for IC-EE.
\section{Converse for IC-CM}
In this section, we develop a converse for the $K$-user IC-CM
(see Figure~\ref{fig:kic-subfigs}(b)). We
focus on the secrecy constraint
\eqn{eqn:kic-secrecy-constraint-cmee-2} at a single receiver, say $j$,
as an eavesdropper, and start with the sum rate corresponding to all
unintended messages at receiver $j$:
\begin{align}
n\sum_{i=1,i\neq j}^K R_i
& = \sum_{i=1,i\neq j}^K H(W_i) = H(W_{-j}^K)\\
& \le I(W_{-j}^K;\bfY_{-j}^K) - I(W_{-j}^K;\bfY_j) + n \nextsc \\
& \le I(W_{-j}^K;\bfY_{-j}^K,\bfY_j) - I(W_{-j}^K;\bfY_j) + n \nextscnu \\
& = I(W_{-j}^K;\bfY_{-j}^K|\bfY_j) + n \nextscnu \\
& \le I(\bfX_{-j}^K;\bfY_{-j}^K|\bfY_j) + n \nextscnu \\
& = h(\bfY_{-j}^K|\bfY_j) - h(\bfY_{-j}^K|\bfY_j,\bfX_{-j}^K) + n \nextscnu \\
& \le h(\bfY_{-j}^K|\bfY_j) - h(\bfY_{-j}^K|\bfY_j,\bfX_1^K) + n \nextscnu \\
& = h(\bfY_{-j}^K|\bfY_j) - h(\bfN_{-j}^K|\bfY_j,\bfX_1^K) + n \nextscnu \\
& \le h(\bfY_{-j}^K|\bfY_j) + n \nextsc \\
& = h(\bfY_{-j}^K,\bfY_j) - h(\bfY_j) + n \nextscnu \\
& = h(\bfY_1^K)-h(\bfY_j) + n \nextscnu
\label{kic:converse-ic-cm-continue-from2}
\end{align}
where $W_{-j}^K \defn \{W_i\}_{i=1,i\neq j}^K$ is the message
set containing all unintended messages with respect to receiver $j$,
$\bfX_{-j}^K\defn \{\bfX_i\}_{i=1,i\neq j}^K$
and $\bfY_{-j}^K\defn \{\bfY_i\}_{i=1,i\neq j}^K$.
For each $j$, we introduce $\tilde\bfX_j = \bfX_j +
\tilde\bfN_j$, where $\tilde\bfN_j$ is an i.i.d.~sequence of $\tN_j$ which is a
zero-mean Gaussian random variable with variance $\sigma_j^2 < \min_i 1/h_{ji}^2$.
Also, $\{\tN_j\}_{j=1}^K$ are mutually independent, and are independent of all
other random variables. Continuing from \eqn{kic:converse-ic-cm-continue-from2},
\begin{align}
n\sum_{i=1,i\neq j}^K R_i
& \le h(\tilde\bfX_1^K, \bfY_1^K)- h(\tilde\bfX_1^K| \bfY_1^K)-h(\bfY_j) + n
\nextscnu \\
& \le h(\tilde\bfX_1^K, \bfY_1^K)- h(\tilde\bfX_1^K| \bfY_1^K,\bfX_1^K)-h(\bfY_j) + n
\nextscnu \\
& = h(\tilde\bfX_1^K, \bfY_1^K)- h(\tilde\bfN_1^K)-h(\bfY_j) + n
\nextscnu \\
& \le h(\tilde\bfX_1^K, \bfY_1^K)-h(\bfY_j) + n
\nextsc \\
& = h(\tilde\bfX_1^K) + h(\bfY_1^K|\tilde\bfX_1^K)-h(\bfY_j) + n
\nextscnu \\
& \le h(\tilde\bfX_1^K) -h(\bfY_j) + n \nextsc \label{follow-from-here}
\end{align}
where the last inequality is due to the fact that $h(\bfY_1^K|\tilde\bfX_1^K)\leq n c'$, i.e., given all the channel inputs (disturbed by small Gaussian noises), the channel outputs can be
\emph{reconstructed}, which is shown as follows
\begin{align}
h(\bfY_1^K|\tilde\bfX_1^K)
& \le
\sum_{j=1}^K h(\bfY_j|\tilde\bfX_1^K)
\\
& =
\sum_{j=1}^K h\left(\sum_{i=1}^{K} h_{ij} (\tilde\bfX_i - \tilde\bfN_i) + \bfN_j \Bigg|
\tilde\bfX_1^K\right) \\
& =
\sum_{j=1}^K h\left(- \sum_{i=1}^{K} h_{ij} \tilde\bfN_i + \bfN_j \Bigg|
\tilde\bfX_1^K\right)
\\
& \le
\sum_{j=1}^K h\left(- \sum_{i=1}^{K} h_{ij} \tilde\bfN_i + \bfN_j \right) \\
& \stackrel{\triangle}{=} n \nextsc
\end{align}
We apply the {\it role of a helper lemma}, Lemma~\ref{lemma:gwch_general_ub_for_helper}, to each $\tilde\bfX_i$ in (\ref{follow-from-here}), with $k=i+1$ (and $k=1$ for $i=K$), to obtain
\begin{align}
n\sum_{i=1,i\neq j}^K R_i
& \le h(\tilde\bfX_1^K) -h(\bfY_j) + n c_{14} \\
& \le \sum_{i=1}^K h(\tilde\bfX_i) -h(\bfY_j) + n c_{14} \\
& \le \sum_{i=1}^{K-1} \Big[ h(\bfY_{i+1}) - n R_{i+1} \Big] + \Big[ h(\bfY_{1}) - n R_{1} \Big] -h(\bfY_j) + n \nextsc \\
& = \sum_{i=1}^K \Big[ h(\bfY_{i}) - n R_{i} \Big] -h(\bfY_j) + n
\nextscnu
\end{align}
By noting that $h(\bfY_i) \le \frac{n}{2}\log P + n c_i'$
for each $i$, we have
\begin{align}
n R_j + 2 n\sum_{i=1,i\neq j}^K R_i
& \le \sum_{i=1,i\neq j}^K h(\bfY_{i}) + n
\nextscnu \\
& \le (K-1) \left(\frac{n}{2}\log P \right) + n
\nextsc
\label{eqn:kic-cm-ub-general-j}
\end{align}
for $j=1,\ldots,K$. Therefore, we have a total of $K$ bounds in \eqn{eqn:kic-cm-ub-general-j} for
$j=1,\ldots,K$. Summing these $K$ bounds, and noting that $\sum_{j=1}^K \big[ n R_j + 2 n\sum_{i=1,i\neq j}^K R_i \big] = (2K-1)\, n \sum_{i=1}^K R_i$, we obtain:
\begin{align}
(2 K - 1) n\sum_{i=1}^K R_i \le K (K-1) \left( \frac{n}{2}\log P \right) + n \nextsc
\end{align}
which gives
\begin{align}
D_{s,\Sigma} \le \frac{ K (K-1)}{2 K - 1}
\end{align}
completing the converse for IC-CM.
\section{Achievability}
In this section, we provide an achievable scheme for the $K$-user IC-CM-EE (see Figure~\ref{fig:kic-subfigs}(c)), which implies achievability for the $K$-user IC-EE and the $K$-user IC-CM. We will prove that, for almost all channel gains, the sum secure d.o.f. of the $K$-user IC-CM-EE satisfies
\begin{equation}
D_{s,\Sigma} \ge \frac{ K (K-1)}{2 K - 1}
\label{ach-result}
\end{equation}
\subsection{Background}
\label{sec:kic-achievable-scheme}
In this section, we will summarize the achievability scheme for the two-user IC-CM in \cite{xie_sdof_networks_in_prepare}, motivate the need for simultaneous alignment of multiple signals at multiple receivers in this $K$-user case, and provide an example of simultaneously aligning two signals at two receivers via asymptotic real alignment \cite{real_inter_align_exploit}. We provide the general achievable scheme for $K>2$ in Section~\ref{sec:kic-asymp-align} via cooperative jamming and asymptotic real alignment, and show that it achieves the sum secure d.o.f. in (\ref{ach-result}) via a detailed performance analysis in Section~\ref{sec:kic-performance-analysis}.
In the achievable scheme for $K=2$ in \cite{xie_sdof_networks_in_prepare}, four mutually independent discrete random variables $\{V_1,U_1,V_2,U_2\}$ are employed (see Figure~10 in \cite{xie_sdof_networks_in_prepare}). Each of them is uniformly and independently drawn from the discrete constellation $C(a,Q)$ given in (\ref{constel}). The role of $V_i$ is to carry message $W_i$, and the role of $U_i$ is to cooperatively jam receiver $i$ to help transmitter-receiver pair
$j$, where $j\neq i$, for $i,j=1,2$. By carefully selecting the
transmit coefficients, $U_1$ and $V_2$ are aligned at receiver $1$, and $U_2$ and $V_1$ are aligned at receiver $2$; and therefore, $U_1$
protects $V_2$, and $U_2$ protects $V_1$. By this signalling scheme,
information leakage rates are upper bounded by constants, and the
message rates are made to scale with the power $P$, reaching the sum secure
d.o.f. capacity of the two-user IC-CM, which is $\frac{2}{3}$.
Here, for the $K$-user IC-CM-EE, we employ a total of $K^2$ random variables,
\begin{align}
V_{ij}, \quad & i,j = 1,\ldots,K, \, j\neq i \\
U_k, \quad &k = 1,\ldots,K
\end{align}
which are illustrated in Figure~\ref{fig:kic_alignment} for the case of $K=3$. The scheme
proposed here has two major differences from
\cite{xie_sdof_networks_in_prepare}: 1) Instead of using a single
random variable to carry a message, we use a total of $K-1$ random
variables to carry each message. For transmitter $i$, $K-1$ random
variables $\{V_{ij}\}_{j\neq i}$, each representing a sub-message,
collectively carry message $W_i$. 2) Rather than protecting one
message at one receiver, each $U_k$ simultaneously protects a portion
of all sub-messages at all required receivers. More specifically,
$U_k$ protects $\{V_{ik}\}_{i\neq k,i\neq j}$ at each receiver $j$, and it protects $\{V_{ik}\}_{i\neq k}$ at the
eavesdropper (if there is any). For example, in
Figure~\ref{fig:kic_alignment}, $U_1$ protects $V_{21}$ and $V_{31}$
where necessary. In particular, $U_1$ protects $V_{21}$ at receivers
1, 3 and the eavesdropper; and it protects $V_{31}$ at receivers 1, 2
and the eavesdropper. As a technical challenge, this requires $U_1$ to
be aligned with the same signal, say $V_{21}$, at multiple receivers
simultaneously, i.e., at receivers 1, 3 and the eavesdropper. These
particular alignments are circled by ellipses in Figure~\ref{fig:kic_alignment}. We perform these simultaneous alignments using the
asymptotic real interference alignment technique proposed in
\cite{real_inter_align_exploit} and used in
\cite{xie_k_user_ia_compound, interference_alignment_compound_channel}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{figures/kic_alignment_dashed}
\caption{Illustration of alignment for $3$-user
IC-CM-EE. $U_1$ and $V_{21}$ are marked to emphasize their
simultaneous alignment at $Y_1$, $Y_3$ and $Z$. }
\label{fig:kic_alignment}
\end{figure}
For illustration purposes, in the rest of this section, we demonstrate how we can align two signals at two receivers simultaneously; in particular, we will align $U_1$ with
$V_{21}$ at both $Y_1$ and $Y_3$. Towards this
end, we will further divide the random variable $V_{21}$, which represents a sub-message,
into a large number of random variables denoted as $V_{21}\defn \{v_{21t}: t=1,\ldots,|T_{1}|\}$. We then send each one of these random variables after multiplying it with one of the coefficients in the following set which serves as the set of \emph{dimensions}:
\begin{align}
T_{1} = \Big\{
h_{11}^{r_{11}}
h_{21}^{r_{21}}
h_{13}^{r_{13}}
h_{23}^{r_{23}}
:
~r_{11}, r_{21}, r_{13}, r_{23} \in \{1,\ldots,m\}
\Big\}
\end{align}
where $m$ is a large constant. To perform the alignment, we let $U_1$ have the same
detailed structure as $V_{21}$, i.e., $U_1$ is also divided into a large
number of random variables as $U_{1}\defn \{u_{1t}:
t=1,\ldots,|T_{1}|\}$. At receiver $1$, the elements of $U_1$ from
transmitter $1$ occupy the dimensions $h_{11} T_1$ and the elements of
$V_{21}$ from transmitter $2$ occupy the dimensions $h_{21} T_1$.
Although these two sets are not the same, their intersection contains
nearly as many elements as $T_1$, i.e.,
\begin{equation}
\label{eqn:kic-async-intersection}
\left| h_{11} T_1 \cap h_{21} T_1 \right| = m^2 (m-1)^2 \approx m^4 = |T_1|
\end{equation}
when $m$ is large, i.e., almost all elements of $U_1$ and $V_{21}$ are
asymptotically aligned at receiver $1$. The same argument applies for
receiver $3$. At receiver $3$, the elements of $U_1$ from
transmitter $1$ occupy the dimensions $h_{13} T_1$ and the elements of
$V_{21}$ from transmitter $2$ occupy the dimensions $h_{23} T_1$. Again,
although these two sets are not the same, their intersection contains
nearly as many elements as $T_1$. Therefore, almost all elements of $U_1$ and $V_{21}$ are aligned at receivers 1 and 3, simultaneously. These simultaneous alignments are depicted in Figure~\ref{fig:kic_async}. In the following section, we use this basic idea to align multiple signals at multiple receivers simultaneously. This will require a more intricate design of signals and dimensions.
\begin{figure}[t]
\centering
\includegraphics[scale=0.76]{figures/kic_async}
\caption{Illustration of alignment at multiple receivers. }
\label{fig:kic_async}
\end{figure}
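As a sanity check on the counting in (\ref{eqn:kic-async-intersection}), the dimension sets can be modeled as exponent tuples: by the rational independence of the channel gains, two products of powers of $h_{11}, h_{21}, h_{13}, h_{23}$ coincide exactly when their exponent tuples coincide, and multiplying a dimension by one gain raises the corresponding exponent by one. The following Python sketch (our own illustrative encoding, not part of the scheme itself) enumerates the tuples and reproduces the cardinality $m^2(m-1)^2$:

```python
from itertools import product

# Model each dimension in T_1 as the exponent tuple (r11, r21, r13, r23):
# by rational independence of the channel gains, two products of powers of
# h_11, h_21, h_13, h_23 are equal iff their exponent tuples are equal.
def T1(m):
    return set(product(range(1, m + 1), repeat=4))

# Multiplying every element of a dimension set by one gain raises the
# corresponding exponent by one.
def times_gain(dims, pos):
    return {d[:pos] + (d[pos] + 1,) + d[pos + 1:] for d in dims}

m = 10
T = T1(m)
U1_dims = times_gain(T, 0)   # h_11 * T_1: dimensions of U_1 at receiver 1
V21_dims = times_gain(T, 1)  # h_21 * T_1: dimensions of V_21 at receiver 1
overlap = len(U1_dims & V21_dims)
assert overlap == m**2 * (m - 1)**2   # matches the claimed cardinality
print(overlap, len(T))                # 8100 10000
```

Already for $m=10$ the overlap covers a fraction $(1-1/m)^2 = 0.81$ of $T_1$, and this fraction tends to one as $m\to\infty$, which is the asymptotic alignment.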
\subsection{General Achievable Scheme via Asymptotic Alignment}
\label{sec:kic-asymp-align}
Here, we give the general achievable scheme for the $K$-user IC-CM-EE.
Let $m$ be a large constant. Let us define sets $T_i$, for $i=1,\ldots,K$, which will represent \emph{dimensions}
as follows:
\begin{align}
T_{i} \defn \left\{
h_{ii}^{r_{ii}}
\left(
\prod_{j,k=1, j\ne k}^{K} h_{jk}^{r_{jk}}
\right)
\left(
\prod_{j=1}^{K} g_j^{s_{j}}
\right)
:
~r_{jk}, s_j \in \{1,\ldots,m\}
\right\}
\end{align}
Let $M_i$ be the cardinality of $T_i$. Note that all the $M_i$ are equal; thus, we denote their common value by $M$,
\begin{equation}
M \defn m^{1+K(K-1) + K} = m^{K^2 + 1}
\end{equation}
For each transmitter $i$, for $j\neq i$, let $\bft_{ij}$ be the vector containing all the elements in the set $T_j$. Therefore, $\bft_{ij}$ is an $M$-dimensional vector containing $M$ rationally independent real numbers in $T_j$. The sets $\bft_{ij}$ will represent the \emph{dimensions} along which message signals are transmitted. In particular, for any given $(i,j)$ with $i\neq j$, $\bft_{ij}$ will represent the dimensions in which message signal $V_{ij}$ is transmitted. In addition, for each transmitter $i$, let $\bft_{(i)}$ be the vector containing all the elements in the set $T_i$. Therefore, $\bft_{(i)}$ is an $M$-dimensional vector containing $M$ rationally independent real numbers in $T_i$. The sets $\bft_{(i)}$ will represent the \emph{dimensions} along which cooperative jamming signals are transmitted. In particular, for any given $i$, $\bft_{(i)}$ will represent the dimensions in which cooperative jamming signal $U_i$ is transmitted. Let us define a $KM$ dimensional vector $\mathbf{b}_i$ by stacking $\bft_{ij}$ and $\bft_{(i)}$ as
\begin{align}
\mathbf{b}_i^T
=
\left[
\mathbf{t}_{i1}^T,
\ldots,
\mathbf{t}_{i,i-1}^T,
\mathbf{t}_{i,i+1}^T,
\ldots,
\mathbf{t}_{iK}^T,
\mathbf{t}_{(i)}^T
\right]
\end{align}
Then, transmitter $i$ generates a vector $\mathbf{a}_i$, which
contains a total of $KM$ discrete signals each identically and independently
drawn from $C(a,Q)$. For convenience, we partition this transmitted signal as
\begin{align}
\mathbf{a}_i^T
=
\left[
\mathbf{v}_{i1}^T,
\ldots,
\mathbf{v}_{i,i-1}^T,
\mathbf{v}_{i,i+1}^T,
\ldots,
\mathbf{v}_{iK}^T,
\mathbf{u}_i^T
\right]
\end{align}
where $\bfv_{ij}$ represents the information symbols in $V_{ij}$, and $\bfu_i$
represents the cooperative jamming signal in $U_i$. Each of these vectors has length $M$, and therefore, the total length of $\mathbf{a}_i$ is $KM$. The channel input of transmitter $i$ is
\begin{equation}
x_i = \bfa_i^T \bfb_i
\end{equation}
Before we investigate the performance of this signalling scheme in
Section~\ref{sec:kic-performance-analysis}, we analyze the structure of
the received signal at the receivers. Without loss of generality we will focus on
receiver 1; by symmetry, a similar structure will exist
at all other receivers. We observe that in addition to the additive Gaussian noise,
receiver $1$ receives all the vectors $\bfv_{jk}$ for all $j,k\, (j\neq k)$ and
$\bfu_i$ for all $i$. All of these signals get multiplied with the corresponding channel gains
before they arrive at receiver 1. Due
to the specific signalling structure used at the transmitters, and
the multiplications with different channel gains over the wireless communication channel,
the signals arrive at the receiver lying in various different \emph{dimensions}.
To see the detailed structure of the received signals at the receivers, let us define $\tilde{T}_i$ as a superset of $T_i$, as follows
\begin{align}
\tilde{T}_{i} \defn \left\{
h_{ii}^{r_{ii}}
\left(
\prod_{j,k=1, j\ne k}^{K} h_{jk}^{r_{jk}}
\right)
\left(
\prod_{j=1}^{K} g_j^{s_{j}}
\right)
:
~r_{jk}, s_j \in \{1,\ldots,m+1\}
\right\}
\end{align}
The information symbols coming from transmitter 1 are in vectors $\bfv_{12}, \bfv_{13}, \ldots, \bfv_{1K}$ which are multiplied by coefficients in $\bft_{12}, \bft_{13}, \ldots, \bft_{1K}$ before they are sent. These coefficients come from sets $T_2, T_3, \ldots, T_K$, respectively. After going through the channel, all of these coefficients get multiplied by $h_{11}$. Therefore, the receiving coefficients of $\bfv_{12}, \bfv_{13}, \ldots, \bfv_{1K}$ are $h_{11} \bft_{12}, h_{11} \bft_{13}, \ldots, h_{11} \bft_{1K}$, which are the \emph{dimensions} in the sets $h_{11} T_2, h_{11} T_3, \ldots, h_{11} T_K$, respectively. By construction, since each $T_i$ has powers of $h_{ii}$ in it (but no $h_{jj}$), these dimensions are {\it separate}. These correspond to {\it separate} boxes of $V_{12}$ and $V_{13}$ at receiver 1 in Figure~\ref{fig:kic_alignment} for the example case of $K=3$.
On the other hand, all of the cooperative jamming signals from all of the transmitters $\bfu_1, \bfu_2,\ldots,\bfu_K$ come to receiver 1 with received coefficients $h_{11} \bft_{(1)}, h_{21} \bft_{(2)},\ldots,h_{K1} \bft_{(K)}$, which are the \emph{dimensions} in the sets $h_{11} T_1, h_{21} T_2, \ldots, h_{K1} T_K$, respectively. We note that all of these dimensions are separate among themselves, and they are separate from the dimensions of the message signals coming from transmitter 1. That is, all of the dimensions in $h_{11} T_2, h_{11} T_3, \ldots, h_{11} T_K$ and $h_{11} T_1, h_{21} T_2, \ldots, h_{K1} T_K$ are all mutually different, again owing to the fact that each $T_i$ contains powers of $h_{ii}$ in it. These correspond to separate boxes of $V_{12}$, $V_{13}$, $U_1$, $U_2$ and $U_3$ at receiver 1 in Figure~\ref{fig:kic_alignment} for the example case of $K=3$.
Next, we note that each $\bfu_i$ is aligned together with all of the $\bfv_{ji}$ coming from the $j$th transmitter, with $j \neq i$ and $j \neq 1$, at receiver 1. Note that $\bfu_i$ occupies dimensions $h_{i1} T_i$ and $\bfv_{ji}$ (for any $j \neq i$ and $j \neq 1$) occupies dimensions $h_{j1} T_i$ at receiver 1. From the arguments in Section~\ref{sec:kic-achievable-scheme}, $\bfu_i$ and $\bfv_{ji}$ (with $j\neq i$ and $j\neq 1$) are asymptotically aligned. More formally, we note that $\bfu_i$ occupies dimensions $h_{i1} T_i$ which is contained in $\tilde{T}_i$. Similarly, all $\bfv_{ji}$, with $j \neq i$ and $j\neq 1$, occupy dimensions $h_{j1} T_i$, respectively, which are all contained in $\tilde{T}_i$. Therefore, $\bfu_i$ and all $\bfv_{ji}$ (with $j \neq i$ and $j \neq 1$) are all aligned along $\tilde{T}_i$. These alignments are shown as $U_1$ being aligned with $V_{21}$ and $V_{31}$; $U_2$ being aligned with $V_{32}$; and $U_3$ being aligned with $V_{23}$ at receiver 1 in Figure~\ref{fig:kic_alignment} for the example case of $K=3$. Further, we note that, since only $T_i$ and $\tilde{T}_i$ contain powers of $h_{ii}$, the dimensions $h_{11} T_2, h_{11} T_{3}, \ldots, h_{11} T_K, \tilde{T}_1, \tilde{T}_2,\ldots,\tilde{T}_K$ are all separable. This implies that all the elements in the set
\begin{equation}
R_1 \defn
\left(
\displaystyle \bigcup_{j=2}^K h_{11} T_j
\right)
\bigcup
\left(
\displaystyle \bigcup_{j=2}^K \tilde{T}_j
\right)
\bigcup
\tilde{T}_1
\end{equation}
are rationally independent, and thereby the cardinality of $R_1$ is
\begin{align}
M_R
\defn |R_1|
& = (K-1) m^{1 + K(K-1) + K} + K (m+1)^{1 + K(K-1) + K} \\
& = (K-1) m^{K^2+1} + K (m+1)^{K^2+1}
\end{align}
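The cardinalities $M$ and $M_R$ can be verified by brute-force enumeration for small $K$ and $m$. In the sketch below (our own illustrative encoding), each dimension is an exponent tuple over the $K^2+K$ gains ($h_{jk}$ in row-major order, then $g_1,\ldots,g_K$); rational independence guarantees that distinct tuples correspond to rationally independent reals, so the three unions defining $R_1$ are disjoint exactly when the tuple sets are:

```python
from itertools import product

def T_dims(K, m_max, i):
    # T_i (or its superset tilde-T_i when m_max = m + 1) as exponent tuples
    # over the K^2 + K gains: h_jk at position j*K + k, g_j at position K*K + j.
    offdiag = [(j, k) for j in range(K) for k in range(K) if j != k]
    dims = set()
    for exps in product(range(1, m_max + 1), repeat=1 + len(offdiag) + K):
        e = [0] * (K * K + K)
        e[i * K + i] = exps[0]                         # h_ii
        for n, (j, k) in enumerate(offdiag):
            e[j * K + k] = exps[1 + n]                 # h_jk, j != k
        for j in range(K):
            e[K * K + j] = exps[1 + len(offdiag) + j]  # g_j
        dims.add(tuple(e))
    return dims

def times_gain(dims, pos):
    # multiplying by one gain raises the corresponding exponent by one
    return {d[:pos] + (d[pos] + 1,) + d[pos + 1:] for d in dims}

K, m = 2, 2
M = m ** (K**2 + 1)
assert all(len(T_dims(K, m, i)) == M for i in range(K))
R1 = set()
for j in range(1, K):                      # h_11 * T_j for j = 2, ..., K
    R1 |= times_gain(T_dims(K, m, j), 0)   # position 0 is h_11
for j in range(K):                         # tilde-T_1, ..., tilde-T_K
    R1 |= T_dims(K, m + 1, j)
M_R = (K - 1) * M + K * (m + 1) ** (K**2 + 1)
assert len(R1) == M_R                      # the three unions are indeed disjoint
print(M, M_R)                              # 32 518
```

The disjointness hinges, as argued above, on the fact that only $T_i$ and $\tilde{T}_i$ carry positive powers of $h_{ii}$.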
\subsection{Performance Analysis}
\label{sec:kic-performance-analysis}
We will compute the secrecy rates achievable with the asymptotic alignment based scheme proposed in Section~\ref{sec:kic-asymp-align} by using the following theorem.
\begin{theorem}
\label{kic:theo-achievability}
For $K$-user interference channels with confidential messages and one
external eavesdropper, the following rate region is achievable
\begin{equation}
R_i\ge I(V_i;Y_i) - {\max_{j\in\mathcal{K}_{0,-i}}} I(V_i; Y_j|V_{-i}^K), \qquad i=1,\ldots,K
\label{eqn:kic-lower-bound}
\end{equation}
where for convenience we denote $Z$ by $Y_0$, $V_{-i}^K \defn \{V_j\}_{j=1,j\neq i}^K$ and $\mathcal{K}_{0,-i} = \{0,1,\ldots, i-1,i+1, \ldots,K\}$. The auxiliary random variables
$\{V_i\}^K_{i=1}$ are mutually independent, and for each $i$, we
have the following Markov chain $V_i\rightarrow X_i\rightarrow
(Y_1,\ldots,Y_K)$.
\end{theorem}
In developing the achievable rates in Theorem~\ref{kic:theo-achievability}, we focus on a single transmitter, say $i$, and consider the compound setting associated with message $W_i$, where this message needs to be secured against a total of $K$ eavesdroppers, with $K-1$ of them being the other legitimate receivers ($j\neq i$) and the remaining one being the external eavesdropper ($j=0$). A proof of this theorem is given in Appendix~\ref{kic:appendix-proof-achievability}.
We apply Theorem~\ref{kic:theo-achievability} to our alignment based scheme proposed in Section~\ref{sec:kic-asymp-align} by selecting $V_i$ used in (\ref{eqn:kic-lower-bound}) as
\begin{align}
V_i \defn (\bfv_{i1}^T, \ldots, \bfv_{i,i-1}^T, \bfv_{i,i+1}^T ,\ldots, \bfv_{iK}^T)
\end{align}
for $i=1,\ldots,K$. For any $\delta>0$, if we choose $Q =
P^{\frac{1-\delta}{2(M_R+\delta)}}$ and $a = \frac{\gamma
P^{\frac{1}{2}}}{Q}$, then, based on Lemma~\ref{lemma:ria_real_alignment},
the probability of error of estimating $V_i$ based on $Y_i$
can be upper bounded by a function that decreases
exponentially fast in $P$. Here, $\gamma$ is a positive constant,
independent of $P$, chosen to normalize the average power of the input
signals, as
\begin{align}
0 < \gamma \le \frac{1}{ \sum_{t\in\mathbf{b}_i} |t| }
= \frac{1}{ \sum_{j=1}^{K} \sum_{t\in T_j} |t| }
\end{align}
Furthermore, by Fano's inequality, we can conclude that
\begin{align}
I(V_i; Y_i) & \ge
\frac{ (K-1) m^{K^2+1} (1-\delta)} { M_R + \delta }
\left(\frac{1}{2} \log P\right) + o(\log P)\\
& = \frac{ (K-1) (1-\delta)} { K -1 + K \left( 1 + \frac{1}{m}
\right)^{K^2+1} + \frac{\delta}{m^{K^2+1}} }
\left(\frac{1}{2} \log P\right) + o(\log P)
\label{kic-ivy_low}
\end{align}
where $o(\cdot)$ is the little-$o$ function. This provides a lower bound for the first term in \eqn{eqn:kic-lower-bound}.
Next, we need to derive an upper bound for the second term in \eqn{eqn:kic-lower-bound}, i.e., the secrecy penalty. For any $i\in\mk=\{1,\ldots,K\}$ and $j\in\mki=\{1,\ldots,i-1,i+1,\ldots,K\}$, by the Markov chain $V_i \rightarrow (\sum_{k=1}^K h_{kj}X_{k} , V_{-i}^K) \rightarrow Y_j$,
\begin{align}
I(V_i; Y_j | V_{-i}^K )
& \le I\left( V_i; \sum_{k=1}^K h_{kj} X_k \Big| V_{-i}^K \right) \\
& = H\left( \sum_{k=1}^K h_{kj} X_k \Big| V_{-i}^K \right)
- H\left(\sum_{k=1}^K h_{kj} X_k \Big| V_1^K \right)
\label{eqn:kic-secrecy-vi-yj}
\end{align}
where $V_1^K=\{V_1,\ldots,V_K\}$. The first term in \eqn{eqn:kic-secrecy-vi-yj} can be rewritten as
\begin{align}
H\left( \sum_{k=1}^K h_{kj} X_k \Big| V_{-i}^K \right)
& = H\left({
\sum_{k=1}^K h_{kj} \bfu_k^T \bft_{(k)} +
\mathop{
\sum_{k=1}^K
}_{k\neq i} h_{ij}\bfv_{ik}^T \bft_{ik}
} \right) \\
& = H\left({
h_{ij} \bfu_i^T \bft_{(i)} +
\mathop{
\sum_{k=1}^K
}_{k\neq i}
\Big[
h_{ij}\bfv_{ik}^T \bft_{ik} + h_{kj} \bfu_k^T \bft_{(k)}
\Big]
} \right)
\end{align}
Note that, for a given $k$, the vectors $\bft_{ik}$ and $\bft_{(k)}$
represent the same \emph{dimensions} $T_k$, and the gains $h_{ij}$ and
$h_{kj}$ both appear among the generators of $T_k$ for all $k\neq i$,
which implies that $h_{ij} T_k, h_{kj} T_k
\subset \tilde{T}_k$. In addition, for each $k$, we note that a large part of
the two sets $h_{ij} T_k$ and $h_{kj} T_k$ is the same, i.e.,
\begin{equation}
\left|
h_{ij} T_k \bigcap h_{kj} T_k
\right| = m^{K^2-1} (m-1)^2 \defn M_{\delta}
\label{eqn:kic-cadinarlity-intersection}
\end{equation}
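The count in (\ref{eqn:kic-cadinarlity-intersection}) admits the same kind of brute-force check. Since $h_{ij}$ and $h_{kj}$ are two distinct gains that already appear, with exponents $1,\ldots,m$, among the $K^2+1$ generators of $T_k$, multiplying $T_k$ by either gain simply shifts one of two distinct exponent coordinates; the sketch below (our own illustrative encoding) confirms the resulting overlap:

```python
from itertools import product

# T_k modeled abstractly as all exponent tuples of length K^2 + 1 with
# entries in {1, ..., m}; rational independence identifies dimensions
# with exponent tuples.
K, m = 3, 2
Tk = set(product(range(1, m + 1), repeat=K**2 + 1))

def times_gain(dims, pos):
    # multiplying by a gain that generates T_k shifts one exponent by one
    return {d[:pos] + (d[pos] + 1,) + d[pos + 1:] for d in dims}

# h_ij and h_kj correspond to two distinct exponent positions, say 1 and 2
inter = times_gain(Tk, 1) & times_gain(Tk, 2)
M_delta = m ** (K**2 - 1) * (m - 1) ** 2
assert len(inter) == M_delta
print(len(inter), len(Tk))   # 256 1024
```

As with (\ref{eqn:kic-async-intersection}), the ratio $M_\delta/M = (1-1/m)^2$ approaches one as $m$ grows.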
Therefore, the first term in \eqn{eqn:kic-secrecy-vi-yj} can be further upper bounded as
\begin{align}
H\left( \sum_{k=1}^K h_{kj} X_k \Big| V_{-i}^K \right)
& = H\left({
h_{ij} \bfu_i^T \bft_{(i)} +
\mathop{
\sum_{k=1}^K
}_{k\neq i}
\Big[
h_{ij}\bfv_{ik}^T \bft_{ik} + h_{kj} \bfu_k^T \bft_{(k)}
\Big]
} \right) \\
& \le
\log \Big[ (2Q+1)^M (4Q+1)^{(K-1) M_\delta} (2Q+1)^{2(K-1)(M-M_\delta)}
\Big] \\
& \le
\log \Big[ Q^{M + (K-1) M_\delta + 2(K-1)(M-M_\delta)}
\Big] + o(\log P) \\
& \le
\frac{
\left[M + (K-1) M_\delta + 2(K-1)(M-M_\delta)\right] (1-\delta)
}{
(K-1) m^{K^2+1} + K (m+1)^{K^2+1} + \delta
} \left( \frac{1}{2} \log P \right) \nl
& \quad + o(\log P) \\
& \le
\frac{
\left\{ 1 + (K-1) \left( 1 - \frac{1}{m}\right)^2 + 2(K-1)\left[1-\left(
1 - \frac{1}{m}\right)^2\right] \right\} (1-\delta)
}{
K -1 + K \left( 1 + \frac{1}{m} \right)^{
K^2+1} + \frac{\delta}{m^{K^2+1}}
} \left( \frac{1}{2} \log P \right) \nl
& \quad + o(\log P)
\label{eqn:kic-secrecy-vi-yj-1}
\end{align}
The second term in \eqn{eqn:kic-secrecy-vi-yj} is exactly the
entropy of $\{\bfu_k\}_{k=1}^K$ vectors, i.e.,
\begin{align}
H\left(\sum_{k=1}^K h_{kj} X_k | V_1^K \right)
& = H\left( \sum_{k=1}^K h_{kj} \bfu_{k}^T \bft_{(k)} \right) \\
& = \log (2Q+1)^{K M} \\
& = \frac{K m^{K^2+1} (1-\delta)}{(K-1) m^{K^2+1} + K (m+1)^{K^2+1} + \delta
} \left( \frac{1}{2} \log P\right)\nl
& \quad + o(\log P)\\
& =
\frac{
K (1-\delta)
}{
K -1 + K \left( 1 + \frac{1}{m} \right)^{
K^2+1} + \frac{\delta}{m^{K^2+1}}
} \left( \frac{1}{2} \log P\right) + o(\log P)
\label{eqn:kic-secrecy-vi-yj-2}
\end{align}
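The algebraic step from the penultimate to the last line above (dividing the numerator and denominator by $m^{K^2+1}$) can be checked exactly with rational arithmetic; the following sketch (our own notation) verifies that the two displayed coefficients of $\frac{1}{2}\log P$ agree:

```python
from fractions import Fraction as F

def coeff_raw(K, m, d):
    # K M (1 - d) / (M_R + d), with M = m^(K^2 + 1)
    M = F(m) ** (K * K + 1)
    M_R = (K - 1) * M + K * F(m + 1) ** (K * K + 1)
    return K * M * (1 - d) / (M_R + d)

def coeff_simplified(K, m, d):
    # K (1 - d) / (K - 1 + K (1 + 1/m)^(K^2 + 1) + d / m^(K^2 + 1))
    M = F(m) ** (K * K + 1)
    return K * (1 - d) / ((K - 1) + K * F(m + 1, m) ** (K * K + 1) + d / M)

# exact equality over rationals for several (K, m) pairs
for K in (2, 3, 4):
    for m in (2, 5):
        assert coeff_raw(K, m, F(1, 10)) == coeff_simplified(K, m, F(1, 10))
print("coefficients agree")
```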
Substituting \eqn{eqn:kic-secrecy-vi-yj-1} and
\eqn{eqn:kic-secrecy-vi-yj-2} into \eqn{eqn:kic-secrecy-vi-yj}, we get
\begin{align}
I(V_i; Y_j | V_{-i}^K )
& \le H\left( \sum_{k=1}^K h_{kj} X_k \Big| V_{-i}^K \right)
- H\left(\sum_{k=1}^K h_{kj} X_k \Big| V_1^K \right) \\
& \le
\frac{
\left\{ 1 + (K-1) \left( 1 - \frac{1}{m}\right)^2 + 2(K-1)\left[1-\left(
1 - \frac{1}{m}\right)^2\right] - K \right\} (1-\delta)
}{
K -1 + K \left( 1 + \frac{1}{m} \right)^{
K^2+1} + \frac{\delta}{m^{K^2+1}}
} \left( \frac{1}{2} \log P \right) \nl
& \quad + o(\log P) \\
& \le
\frac{
K \,\frac{2m-1}{m^2} (1-\delta)
}{
K -1 + K \left( 1 + \frac{1}{m} \right)^{
K^2+1} + \frac{\delta}{m^{K^2+1}}
} \left( \frac{1}{2} \log P \right) + o(\log P)
\label{eqn:kic-secrecy-vi-yj-final}
\end{align}
We note that by choosing $m$ large enough, the factor before the
$\frac{1}{2}\log P$ term can be made arbitrarily small. Due to the imperfect (i.e., only asymptotic) alignment, the upper bound on the information leakage rate is not a constant as in \cite{xie_sdof_networks_in_prepare}, but a function whose d.o.f. can be made to approach zero.
For any $i\in\mk$ and $j=0$, i.e., $Y_0=Z$, the external eavesdropper,
we need to derive a new upper bound for the secrecy penalty in \eqn{eqn:kic-lower-bound}, i.e., $I(V_i;Z|V_{-i}^K)$. By similar steps, we have
\begin{align}
I(V_i;Z|V_{-i}^K)
& \le
I\left( V_i; \sum_{k=1}^K g_k X_k \Big|V_{-i}^K \right) \\
& =
H\left(\sum_{k=1}^K g_k X_k \Big|V_{-i}^K \right)
-
H\left(\sum_{k=1}^K g_k X_k
\Big|V_{1}^K\right) \\
& =
H\left(\sum_{k=1}^K g_k X_k \Big|V_{-i}^K \right)
-
H\left( \sum_{k=1}^K g_{k} \bfu_{k}^T \bft_{(k)} \right) \\
& =
H\left(\sum_{k=1}^K g_k X_k \Big|V_{-i}^K \right)
-
\log (2Q+1)^{K M}
\label{eqn:kic-secrecy-vi-z}
\end{align}
Here, we need to upper bound the first term in \eqn{eqn:kic-secrecy-vi-z}.
We first observe that
\begin{align}
H\left(\sum_{k=1}^K g_k X_k \Big|V_{-i}^K \right)
& = H\left(
\sum_{k=1}^K g_{k} \bfu_{k}^T \bft_{(k)}
+
\mathop{ \sum_{k=1}^K }_{k\neq i} g_i \bfv_{ik}^T \bft_{ik}
\right) \\
& = H\left(
g_{i} \bfu_{i}^T \bft_{(i)}
+
\mathop{ \sum_{k=1}^K }_{k\neq i}
\Big[
g_{k} \bfu_{k}^T \bft_{(k)} + g_i \bfv_{ik}^T \bft_{ik}
\Big]
\right)
\end{align}
First, note that $\bft_{(k)}$ and $\bft_{ik}$ represent the same set
$T_k$. Therefore, for different $k$, the
\emph{dimensions} are distinguishable. Second, for reasons similar
to \eqn{eqn:kic-cadinarlity-intersection}, we conclude that
\begin{align}
H\left(\sum_{k=1}^K g_k X_k \Big|V_{-i}^K \right)
& = H\left(
g_{i} \bfu_{i}^T \bft_{(i)}
+
\mathop{ \sum_{k=1}^K }_{k\neq i}
\Big[
g_{k} \bfu_{k}^T \bft_{(k)} + g_i \bfv_{ik}^T \bft_{ik}
\Big]
\right) \\
& \le
\log \Big[ (2Q+1)^M (4Q+1)^{(K-1) M_\delta} (2Q+1)^{2(K-1)(M-M_\delta)}
\Big] \\
& \le
\log \Big[ Q^{M + (K-1) M_\delta + 2(K-1)(M-M_\delta)}
\Big] + o(\log P) \\
& \le
\frac{
\left[M + (K-1) M_\delta + 2(K-1)(M-M_\delta)\right] (1-\delta)
}{
(K-1) m^{K^2+1} + K (m+1)^{K^2+1} + \delta
} \left( \frac{1}{2} \log P \right) \nl
& \quad + o(\log P)
\label{eqn:kic-secrecy-vi-z-1}
\end{align}
Substituting \eqn{eqn:kic-secrecy-vi-z-1} into
\eqn{eqn:kic-secrecy-vi-z}, we attain an upper bound which is the same as the
upper bound for $I(V_i;Y_j|V_{-i}^K)$, i.e.,
\begin{align}
I(V_i;Z|V_{-i}^K)
& \le
\frac{
K \,\frac{2m-1}{m^2} (1-\delta)
}{
K -1 + K \left( 1 + \frac{1}{m} \right)^{
K^2+1} + \frac{\delta}{m^{K^2+1}}
} \left( \frac{1}{2} \log P \right) + o(\log P)
\label{eqn:kic-secrecy-vi-z-final}
\end{align}
Substituting \eqn{kic-ivy_low}, \eqn{eqn:kic-secrecy-vi-yj-final}, and
\eqn{eqn:kic-secrecy-vi-z-final} into
\eqn{eqn:kic-lower-bound}, we obtain a lower bound for the achievable
secrecy rate $R_i$ as
\begin{equation}
R_i \ge \frac{\left[ (K-1) - K \left( \frac{2m-1}{m^2}\right)\right] (1-\delta)
} { K -1 + K \left( 1 + \frac{1}{m}
\right)^{K^2+1} + \frac{\delta}{m^{K^2+1}} }
\left(\frac{1}{2} \log P\right) + o(\log P)
\end{equation}
By letting $m\to\infty$ and $\delta\to0$, each individual secrecy rate $R_i$ can be made
arbitrarily close to $\frac{K-1}{2K-1}\left( \frac{1}{2} \log
P\right)$, yielding a sum secure d.o.f. of $\frac{K(K-1)}{2K-1}$ and thereby achieving
the lower bound in (\ref{ach-result}).
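The convergence to the claimed d.o.f. can also be seen numerically; the sketch below (our notation) evaluates the coefficient of $\frac{1}{2}\log P$ in the lower bound on $R_i$ and shows that it approaches $\frac{K-1}{2K-1}$ as $m$ grows and $\delta$ vanishes:

```python
def rate_coeff(K, m, delta=0.0):
    # coefficient of (1/2) log P in the lower bound on R_i
    num = ((K - 1) - K * (2 * m - 1) / m**2) * (1 - delta)
    den = (K - 1) + K * (1 + 1 / m) ** (K**2 + 1) + delta / m ** (K**2 + 1)
    return num / den

K = 3
target = (K - 1) / (2 * K - 1)   # = 0.4 secure d.o.f. per user for K = 3
for m in (10, 100, 10_000):
    print(m, rate_coeff(K, m))   # coefficient increases towards 0.4
assert abs(rate_coeff(K, 10_000) - target) < 1e-3
# the sum secure d.o.f. is then K * target = K(K-1)/(2K-1)
```

Note that the convergence in $m$ is slow (the coefficient is roughly $0.15$ at $m=10$ and $0.37$ at $m=100$ for $K=3$), reflecting the $\left(1+\frac{1}{m}\right)^{K^2+1}$ factor in the denominator.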
\section{Conclusion}
In this paper, we studied secure communications in $K$-user Gaussian interference networks from an information-theoretic point of view, and addressed three important channel models: IC-EE, IC-CM and their combination IC-CM-EE in a unified framework. We showed that, for all three models, the sum secure d.o.f.
is exactly $\frac{K(K-1)}{2K-1}$. Our achievability is based on structured signalling,
structured cooperative jamming, channel prefixing and asymptotic real interference alignment.
The key insight behind the achievability is to carefully design the structure of all of the signals at the transmitters so that the signals are received at both the legitimate receivers and the eavesdroppers in the most desirable manner from a secure communication point of view. In particular, cooperative jamming signals protect the information-carrying signals via alignment, and the information-carrying signals are further aligned to maximize the secure d.o.f.
\section{Introduction and summary}
Mass-deformed Lie superalgebras continue to play an important r\^ole in
deepening our understanding of the gauge-gravity
correspondence\footnote{Because we will be dealing with string theory
backgrounds which do not contain an $AdS$ subspace, and
non-conformal dual field theories, we will avoid use of the more
colloquial term ``$AdS/CFT$'' in this context.}. The algebras in
question are generically of the form
\begin{equation} \{Q^\dagger, Q\} =
P + mR,
\end{equation}
where $R$ is typically some combination of spacetime and R-symmetry
rotation generators and $m$ denotes a basic ``mass-gap'' in the
spectrum of observables constrained by the supersymmetry algebra.
Perhaps the most dramatic manifestation of this algebraic structure is
the integrability of the large-$N$ dilatation operator of
$\mathcal{N}=4$ supersymmetric Yang-Mills theory ($\mathcal{N}=4$
SYM), where the ``mass-gap'' is the gap in the spectrum of conformal
dimensions $\Delta$ of the operators of the theory, for which $\Delta\geq 2$.
One of the basic building blocks on which much of the machinery that
renders the four dimensional superconformal theory integrable rests is
the closed $SU(2|3)$ sector. The matter fields in this sector comprise
two Weyl fermions $\psi_\alpha, $ $\alpha = 1,2$, transforming under
an $SU(2)$ of the Lorentz group $L$, and three complex scalars
$X,Y,Z$, transforming under an $SU(3) \in SU(4) \sim SO(6)$ of the
R-Symmetry group $R$. The relevant supercharges $S^\alpha _a$,
$Q^b_\beta$, satisfy
\begin{equation}
\{S^\alpha _a, Q^b_\beta\} = \delta ^b_a\delta ^\alpha _\beta D +
\delta ^b_a L^\alpha _\beta + \delta ^\alpha _\beta R^b_a ,
\end{equation}
where $a,b$ are $SU(2)$ indices corresponding to an $SU(2)$ contained
in the R-symmetry group. This is nothing but a subalgebra of the
full four dimensional superconformal algebra and $S,Q$ are a subset of
the superconformal and supersymmetry generators respectively
\cite{Beisert:2003ys}. $D$ is the dilatation operator, which is
realized as a quantum spin chain in the large-$N$ limit. The
ferromagnetic ground state of the chain is spanned by the chiral
primary operators $\ensuremath{\mathop{\mathrm{Tr}}}(Z^J)$. The excitations/magnons transform
under the residual $SU(2|2)$ symmetry, which has profound
consequences. For instance, the $S$-matrix of the spin chain, and its
dispersion relation are both severely constrained to all orders in
perturbation theory by this symmetry. The $SU(2|2)$ invariance also
constrains the $S$-matrix to satisfy the Yang-Baxter equations
\cite{Beisert:2005tm, Beisert:2006qh}. It is, of course, natural to
ask if these compelling consequences of the underlying (mass-deformed)
supersymmetry algebra have any repercussions for theories other than
$\mathcal{N} =4$ SYM.
Recent developments point out that the $SU(2|2)$ symmetry plays a
greater r\^ole in the gauge-gravity correspondence than was previously
realized. For example it appears in the studies of $\mathcal{N}=6$
supersymmetric Chern-Simons (SCS) theories in two different contexts.
The $\mathcal{N}=6$ conformal models appear to posses an integrable
dilatation operator in the large-$N$ limit \cite{Gaiotto:2008cg,
Minahan:2008hf}. The dilatation operator is part of the center of a
$SU(2|3)$ algebra just as in the case of the four dimensional
superconformal theory mentioned above. A generalization of the
$\mathcal{N} =6$ SCS models can be obtained by adding appropriate mass
terms to the matter fields. These mass-deformed models can be
engineered to preserve $4\leq \mathcal{N} \leq 8$ supersymmetry at the
expense of conformal invariance. For such models $SU(2|2)$ algebras
also arise as part of the spacetime supersymmetry, and they can be
used to obtain all-loop results for the spacetime $S$-matrices of the
massive theories \cite{Agarwal:2008pu}. These results clearly suggest
that the search for other natural habitats for mass-deformed
supersymmetry algebras and their consequences for the gauge/gravity
duality are worthwhile endeavors.
A natural set of theories to investigate in this regard are the
dimensional reductions of $\mathcal{N}=4$ SYM on $\mathbb{R}\times
S^3$. The $S^3$ can obviously be identified with $SU(2)$. Discarding
the dependence of the degrees of freedom of the four dimensional
theory on $\mathbb{Z}_k $, $U(1)$, and\footnote{Both $\mathbb{Z}_k$ and $U(1)$ are subgroups of
$SU(2) \sim S^3$.} $SU(2)$ produces 16 supercharge theories with
massive spectra on $\mathbb{R}\times S^3/\mathbb{Z}_k$, $\mathbb{R}\times
S^2$, and $\mathbb{R}$ respectively \cite{Lin:2005nh}. The theory on
$\mathbb{R}$ is nothing but the plane-wave matrix model (PWMM)
\cite{Kim:2003rza} while the other dimensional reductions result in
gauge theories with massive spectra\footnote{The mass for the scalars
arises form the conformal coupling of the original four dimensional
theory to the radius of $S^3$. The masses of the gluons are simply a
consequence of the lack of a zero mode for vector fields on $S^2$
and $S^3$.}. Concrete proposals for the dual string theories
corresponding to these dimensionally reduced models were also
enunciated in \cite{Lin:2005nh}. Since the $SU(2|3)$ symmetry of the
four dimensional gauge theory is preserved by these dimensional
reductions it is imperative to try and uncover its consequences in the
gauge/string dualities tying the lower dimensional non-conformal
theories to string theories in non-$AdS$ type backgrounds. This line
of investigation also has the advantage of being a valuable probe for
the utility and robustness of the gauge-gravity conjecture for massive
and non-conformal gauge theories and their dual string theory
backgrounds. In much of the analysis that we perform, we concentrate
on the three dimensional gauge theory and its string dual, while
commenting on the generalizations of our results to the orbifold
theory and the plane wave matrix model where we can do so.
Since much of our intuition about the use of mass-deformed
superalgebras derives from studies of the dilatation operator of
$\mathcal{N}=4$ SYM on $\mathbb{R}^4$, it is instructive to recall
how the conformal transformation mapping the theory to
$\mathbb{R}\times S^3$ affects various elements of the superconformal
algebra. In the flat background the superconformal algebra takes on
the following heuristic form \be\begin{split}\label{superconf}
&\{Q, Q\} = P,\\
&\{S, S\} = K,\\
&\{S, Q\} = D + R + L,
\end{split}
\end{equation} where $K$ is the super-boost generator. Going from the flat space
to $S^3$ amounts to radial quantization of the conformal theory. Under
this quantization scheme, the generators map as follows
\cite{Kinney:2005ej}: \be\begin{split}\label{conf} Q &\rightarrow Q,~~S
\rightarrow Q^\dagger, ~~ D \rightarrow H,~~ t \rightarrow r.
\end{split}
\end{equation} The relation between $Q^\dagger$ and $S$ is a consequence of the
natural hermiticity properties endowed on the physical states upon the
conformal transformation \cite{Kinney:2005ej}. The last two relations
in (\ref{conf}) simply imply that dilatations are mapped to radial
scalings in the scheme of radial quantization, with the Hamiltonian
assuming the r\^ole of $D$ \cite{Fubini:1972mf}. These identifications
enable us to recover a mass-deformed algebra (the analog of the last
equation in (\ref{superconf})) even in the absence of conformal
symmetries, as part of the supersymmetry algebra for the gauge theory
on $\mathbb{R}\times S^3$. The SUSY algebra takes on the following
generic form
\begin{equation}\label{radialsusy} \{Q^\dagger, Q\} = H + \mu R + \mu L ,\end{equation}
where $\mu\sim(\text{radius of}~S^3)^{-1}$.
It is important to note that the same basic algebraic structure is
also valid for 16 supercharge theories on $\mathbb{R}\times S^3/\mathbb{Z}_k$,
$\mathbb{R}\times S^2$ and $\mathbb{R}$, even though, for these
theories there is no natural sense in which they are equivalent to
their massless counterparts. Recasting the details of the techniques applied in computing
the spectrum of the dilatation operator of the four dimensional gauge
theory and its string dual in the radial quantization scheme
should then allow us to study and solve for the physical spectrum of
these non-conformal gauge theories.
As mentioned above, this proposal has the potential of
translating into a non-trivial test of the gauge-gravity
conjecture in light of the existence of bona fide string duals for the
non-conformal lower dimensional massive gauge theories. These are the
Lin-Maldacena geometries \cite{Lin:2005nh}, which we review in section
\ref{sec:string}. Quantizing the superstring in these backgrounds has
only been successful in the plane-wave limit, yielding the same IIA
plane-wave geometry for any of the Lin-Maldacena
backgrounds\footnote{There is a caveat here concerning the vacuum of
the gauge theory under consideration. The plane-wave geometry is
only valid for ``well-spaced'' vacua. The meaning of ``well spaced''
is described in section \ref{sec:string}.\label{foot:cav} }.
Happily, the plane-wave limit is sufficient for uncovering the
string-dual manifestation of the $SU(2|2)$ algebra we find in the
gauge theories. The crucial element is the derivation of the central
charges. For the full superstring on $AdS_5\times S^5$, the authors of
\cite{Arutyunov:2006ak} showed insightfully that the central extension
of the algebra follows from a relaxation of the level-matching
condition, the natural dual of the length-changing action in the gauge
theory. We show that the very same mechanism is at play for the
plane-wave limit of the Lin-Maldacena geometries, and this allows us a
full exhibition of the $SU(2|2)$ algebra in that limit. Under the
assumption that the algebra persists beyond the plane-wave limit, we
are able to discuss, on a qualitative level, the finite-size
corrections, worldsheet scattering matrix, spinning string and giant
magnon solutions associated with these string sigma models.
The results in this paper are organized as follows. In section
\ref{sec:su22} we explicitly construct the $SU(2|3)$ algebra in SYM on
$\mathbb{R}\times S^3$ and on $\mathbb{R}\times S^2$. In sections \ref{su22-gauge}
and \ref{weak} we obtain the dispersion relation for SYM on $\mathbb{R}\times
S^2$, and constrain the form of the one and two-loop effective
Hamiltonian. We also present evidence of integrability at the two loop
level for the three dimensional theory. In section \ref{sec:intS} we
show the natural generalization of the $SU(2|2)$ $S$-matrix from SYM
on $\mathbb{R}\times S^3$ to SYM on $\mathbb{R}\times S^2$. Our results concerning
the universal form of the dispersion relation and S-matrix are
expected to remain valid to all loop orders. However, as far as the
explicit forms of the effective Hamiltonians and statements about
integrability are concerned, the present gauge theoretic analysis is
restricted to the two-loop order. In section \ref{sec:extend} we
discuss the extension of our results to the PWMM and SYM on $\mathbb{R}\times
S^3/\mathbb{Z}_k$. In section \ref{sec:string} we continue the analysis to
the leading order at strong coupling using the dual string theory. We
discuss the Lin-Maldacena geometries and review the quantization of
the string on the IIA plane-wave. We then derive the $SU(2|2)$ algebra
in the plane-wave setting and discuss implications and further
directions. Finally, we end with a discussion in section
\ref{sec:final}.
\section{$SU(2|2)$ in SYM on $\mathbb{R}\times S^3$ and $\mathbb{R}\times S^2$}
\label{sec:su22}
In this section, we isolate the closed $SU(2|3)$ sector in the sixteen
supercharge Yang-Mills theories on $\mathbb{R}\times S^3$ and
$\mathbb{R}\times S^2$. We start with the four dimensional theory in
radial quantization and present the details of the emergence of the
algebraic structure (\ref{radialsusy}) in a Hamiltonian picture. The
analysis of the four dimensional model also allows us to calibrate and
verify our results against known results for the dilatation operator
for the gauge theory in flat background geometries. We find it
convenient to use the conventions used in \cite{Ishiki:2006yr}, and use
the action
\be\begin{split}
S = \frac{1}{g^2} \ensuremath{\mathop{\mathrm{Tr}}} \int_{\mathbb{R}\times S^3}
\Biggl[ &-\frac{1}{4} F_{ab}^2 -
\frac{1}{2} D_a X_{m} D^a X^m -\frac{1}{2}\left(\frac{\mu}{2}\right)^2
X_mX^m -\frac{ i}{2} \bar \lambda \Gamma^a D_a \lambda\\
&+\frac{1}{4}[X_m,X_n]^2 -\frac{1}{2}\bar \lambda \Gamma^m[X_m,\lambda] \Biggr],
\end{split}\label{actions3}
\end{equation}
where $m,n=1,\ldots,6$ are $SO(6)$ indices and $a,b=0,\ldots,3$ are
spacetime indices. It is understood that we have normalized the
radius of $S^3$ to be $2/\mu$, such that the volume is given by
$2\pi^2(2/{\mu})^3$. In other words,
\begin{equation}
\int_{\mathbb{R}\times S^3} \equiv
\left(\frac{1}{8}\right)\left(\frac{2}{\mu}\right)^3\int dt\int_0^\pi
\sin\theta d\theta \int_0^{2\pi}d\phi\int_0^{4\pi}d\psi.
\end{equation}
We can always adjust the radius to be any other number by
correspondingly scaling the coefficient of the ``mass-term'' for the
scalars. $\Gamma^M = (\gamma^a \otimes I_8,\Gamma^m)$ are the ten-dimensional
gamma matrices, while $\gamma^a $ are the four dimensional ones. The
ten-dimensional spinor $\lambda$ is decomposed in terms of four-dimensional
spinors as
\begin{equation}
\lambda = \left(
\begin{array}{c}
\lambda_+^A \\
\lambda_{-A}\\
\end{array}
\right),~~
\text{with}\quad \lambda_+^A = \left(
\begin{array}{c}
\Psi_\alpha \\
0\\
\end{array}
\right),
\end{equation}
where the $SU(4)$ index $A=1,\ldots,4$ and $\lambda^A_+$ is a positive
chirality, four-dimensional spinor, so that $\Psi_\alpha$ carries an $SU(2)$
index $\alpha=1,2$, i.e. it is a complex 2-spinor. The negative chirality counter-part is
given by $\lambda_{-A} = C_4(\bar{\lambda}_{+A})^T$, where $C_4$ is the
four-dimensional charge conjugation operator, see \cite{Ishiki:2006yr}
for details.
The covariant derivatives are $D_a = \nabla_a -i[A_a,\hspace{.1cm}]$, where
\be\begin{split}
&\nabla_aA_b = e^\mu_a(\partial_\mu A_b + \omega_{\mu b}^{\hspace{.3cm} c}A_c),\\
& \nabla _aX^m = e^\mu_a\partial_\mu X^m,\\
&\nabla_a \lambda = e^\mu_a(\partial_\mu \lambda
+ \frac{1}{4}\omega_\mu^{\hspace{.1cm}bc}\Gamma_{bc}\lambda),
\end{split}
\end{equation} where $\mu = t,\theta, \phi, \psi$ is a curved-space index and
$a,b,c$ are tangent-space indices. The non-vanishing components of the
vierbeins and spin connections are \be\begin{split} & e^1_\theta = 1/\mu,
\quad e^2_\phi = \frac{\sin \theta}{\mu}, \quad
e^3_\phi = \frac{\cos \theta}{\mu},
\quad e^3_\psi = 1/\mu,\\
& e^\theta_1 = \mu, \quad e^\phi_2 = \frac{\mu}{\sin \theta},
\quad e^\psi_2 = -\mu \frac{\cos \theta}{\sin \theta},
\quad e^\psi_3 = \mu,\\
&\omega_{12} = -\omega_{21} = -\frac{1}{2}(\cos \theta d\phi - d\psi),
\quad \omega_{23} = -\omega_{32} = -\frac{1}{2}d\theta,\\
&\omega_{31} = -\omega_{13} = - \frac{1}{2}\sin \theta
d\phi.
\end{split}
\end{equation}
The supercharges $Q$, like the fermionic fields, are decomposed as
\begin{equation} Q = \left(
\begin{array}{c}
Q_+^A \\
Q_{-A}\\
\end{array}
\right), \end{equation} with $Q_{-A} = C_4(\bar{Q}_{+A})^T$. Although the
$SO(6)$ basis, in which the scalar fields and gamma matrices are
represented as $X^m$ and $\Gamma^m$, provides a compact expression for the
action (\ref{actions3}), we will need to express the supercharges of
the theory in an $SU(4)$ basis where we have instead $X^{AB}$,
$A,B=1,\ldots,4$, and similarly for the $\Gamma^m$. The
dictionary between the two bases is given in appendix
\ref{sec:so6su4}. We adopt a Hamiltonian formalism with $A_0 =0$. In
the canonical formalism, the explicit expressions for the supercharges
are
\be\begin{split} Q^*_{+A} = \frac{1}{g^2}\ensuremath{\mathop{\mathrm{Tr}}}\int_{S^3}\Biggl[ &g^2 \lambda
^*_{+A}\gamma^i E_i + \frac{1}{2} \lambda
^*_{+A}\gamma^{ij}\gamma^0F_{ij} -2 g^2\Pi_{AB}\lambda^{*B}_-\gamma^5-
2
(D_iX_{AC})\lambda^{*C}_-\gamma^i\gamma^5\gamma^0 \\
& \pm 2i\left(\frac{\mu}{2}\right)X_{AC}\lambda^{*C}_-\gamma^5
-2i[X_{AL},X^{LP}]\lambda^{*}_{+P}\gamma^0\Biggr],
\end{split}
\end{equation}
\be\begin{split}\label{QpN4}
Q_+^A = \frac{1}{g^2}\ensuremath{\mathop{\mathrm{Tr}}}\int_{S^3}\Biggl[&g^2\gamma^iE_i\lambda_+^A +
\frac{1}{2} \gamma^0\gamma^{ij}\lambda^A_+F_{ij} -2g^2\Pi^{AB}\gamma^5\lambda_{-B}
+ 2(D_iX^{AC})\gamma^0\gamma^5\gamma^i\lambda_{-C} \\
& \mp 2i\left(\frac{\mu}{2}\right)X^{AM}\gamma^5\lambda_{-M} + 2i[X^{AL},X_{LP}]\gamma^0\lambda_+^P\Biggr],
\end{split}
\end{equation}
where $E_i = g^{-2}\dot A_i$, $\Pi^{AB} =
g^{-2}\dot{X}^{AB}$, and where we have introduced the spatial
index $i,j=1,\ldots,3$ so that $a=(0,i)$.
The supersymmetry variation of a generic field $\mathcal{W}$ is given
by $\delta_\epsilon \mathcal{W} = [\bar{Q}_{+A}\epsilon^A_+ +
\bar{Q}_-^A\epsilon_{-A}, \mathcal{W}]$, where the spinor $\epsilon$
satisfies the conformal Killing equation
\begin{equation} \nabla_\mu \epsilon^A_+
= \pm \frac{i\mu}{4}\gamma_\mu \gamma^0\epsilon^A_+ .
\end{equation} The two signs
on the r.h.s. of the Killing equation result in the signs in front of
the ``mass-terms'' (the terms linear in $X_{AB}$ which do not involve
derivatives) in the expressions for the supercharges presented above.
In what is to follow, we shall take the upper sign in the Killing
equation, which would correspond to the lower sign in front of the
``mass-terms'' in the expression for the supercharges.
The canonical commutation relations following from the action are given by
\be\begin{split}\label{comrels}
&[X_{AB}(x), \dot X^{CD}(y)] = ig^2\frac{1}{2}\delta^3(x-y)\,
\left(\delta^C_A \delta^D_B - \delta^C_B \delta^D_A \right),
\\
&[A_i(x), \dot{A}_j(y)] = ig^2\delta^3(x-y)\,\delta_{ij},\\
&\{ \lambda_{+A}(x), \lambda^{\dagger B}_{+}(y) \} = g^2\delta^3(x-y)\,\delta^{B}_A,\\
&\{ \lambda_{-}^A(x), \lambda^{\dagger}_{B-}(y) \} = g^2\delta^3(x-y)\,\delta^{A}_B.
\end{split}
\end{equation}
It is useful to extract the action of the supercharges on
single-particle states formed by the scalar and fermionic partons of
the theory. For this purpose it is useful to introduce the
oscillators \be\begin{split} &\alpha^{AB} = \sqrt{\frac{\mu}{2g^2}}X^{AB} +
i\frac{1}{\sqrt{\frac{\mu}{2g^2}}}\Pi^{AB}, \hspace{.3cm}
\alpha^{\dagger}_{AB} = \sqrt{\frac{\mu}{2g^2}}X_{AB} -
i\frac{1}{\sqrt{\frac{\mu}{2g^2}}}\Pi_{AB},\\
&[\alpha^{AB},\alpha ^\dagger_{CD}] = \delta^3(x-y)\left(\delta^A_C\delta^B_D -
\delta^A_D\delta^B_C\right).
\end{split}
\end{equation} Notice that these oscillators differ from the oscillator
variables usually employed in the canonical quantization of massive
scalar fields. We have \emph{not} Fourier decomposed any of the spacetime
coordinates, and the oscillator variables depend on the three $S^3$
coordinates as well as on time. The vacuum of the field theory is
taken to be annihilated by $\alpha_{AB}$ and $\lambda_+$.
On the single-particle states built out of the scalar and fermionic fields, the supercharges act as
\be\begin{split}
&[Q^A_+, \alpha^\dagger_{MN}]|0\rangle = 2\sqrt{\frac{\mu}{2g^2}}\left(+ \delta ^A_N \left(
\begin{array}{c}
0 \\
\sigma^2\Psi^*_M\\
\end{array}
\right) - \delta ^A_M\left(
\begin{array}{c}
0 \\
\sigma^2\Psi^*_N\\
\end{array}
\right)\right)|0\rangle,\\
&[Q^*_{+A}, \alpha^\dagger_{MN}] |0\rangle= 0,\\
&\{Q^A_{+\alpha}, \lambda^*_{B\beta}\}|0\rangle =
\left[\left(g^2E_i\gamma^i_{\alpha \beta} +
\frac{1}{2}F_{ij}(\gamma^0\gamma^{ij})_{\alpha \beta}\right)\delta^A_B +
2i[X^{AL},X_{LB}]\gamma^0_{\alpha \beta}\right]|0\rangle, \\
&\{Q^*_{+A\alpha}, \lambda ^*_{+B\beta}\}|0\rangle =
-2i\sqrt{\frac{\mu g^2}{2}}(\gamma^5C_4\gamma^0)_{\alpha
\beta}\alpha^\dagger_{AB}|0\rangle.
\end{split}
\end{equation} The above relations are true only modulo the equations of motion
and spatial translations, as they would be for any supersymmetric
Yang-Mills theory.
To proceed further, it is instructive to fix our conventions such that
\begin{equation}
\gamma^0 = -i \begin{pmatrix}
0 &I\\
I &0
\end{pmatrix}, \hspace{.2cm} \gamma^i = \begin{pmatrix}
0 &i\sigma^i\\
-i\sigma^i &0
\end{pmatrix}, \hspace{.2cm} C_4 = \begin{pmatrix}
-\sigma^2 &0\\
0 &+\sigma^2
\end{pmatrix}, \hspace{.2cm} \gamma^5 = \begin{pmatrix}
I &0\\
0 &-I
\end{pmatrix}.
\end{equation}
The two bosonic and two fermionic states transforming under
$SU(2)_R$ and $SU(2)_L$ can be taken to be
\be\begin{split}
|\phi_a\rangle = \alpha^\dagger_{4a(=1,2)}|0\rangle, \hspace{.3cm}
|\psi_\alpha\rangle = \Psi^*_{4\alpha}|0\rangle.
\end{split}
\end{equation} A note about the positions of the fermionic $SU(2)$ indices is in
order. $\Psi^M = \Psi^M_{\alpha}$ and $\Psi^*_M = \Psi^{*\alpha}_M$
are the natural positions of the $SU(2)$ index ``$\alpha $'' on the two
component complex spinor $\Psi$. However, in creating the state
$|\psi_\alpha\rangle $ the index is lowered using $\Psi^*_\alpha =
\epsilon_{\alpha \beta}\Psi^{*\alpha}$. It is also understood that
$\epsilon_{12} = -\epsilon^{12} =1$. After restricting $A,B = 1,2$ on
$Q, Q^*$ and renaming the restricted supercharges $q^a_\alpha,
q^{*\alpha}_a$, we obtain the fundamental representation of $SU(2|2)$
which can be expressed manifestly as \be\begin{split}\label{su22-main}
&q^a_\alpha |\phi_b\rangle = -2i\sqrt{\frac{\mu}{2g^2}}\delta^a_b|\psi_\alpha\rangle,\\
&q^{*\alpha}_a |\phi_b\rangle = 0,\\
&q^a_\alpha |\psi_\beta \rangle = 2\epsilon_{\alpha \beta}\epsilon^{ab}|[\phi_b,Z]\rangle,\\
&q^{*\alpha}_a|\psi_\beta\rangle = +2i\sqrt{\frac{\mu
g^2}{2}}\delta^\alpha _\beta |\phi_a\rangle.
\end{split}
\end{equation}
The canonical anti-commutation relation between the supercharges is given by
\begin{equation}\label{mass-algebra}
\{q^{a}_{\alpha},q^{*\beta}_{b}\} = 2\delta^a_b\delta_\alpha ^\beta H
+ 4\left(\frac{\mu}{2}\right)\delta^\beta _\alpha \mathcal{R}^a_b + 4
\left(\frac{\mu}{2}\right)\delta^a_b\mathcal{L}_\alpha^\beta,
\end{equation}
where
\be\begin{split}
&\mathcal{R}^a_b = \sum_{C=1}^4\alpha^\dagger_{bC}\alpha^{aC} -
\delta^a_b\frac{1}{4}\sum_{M,N=1}^4\alpha^\dagger_{MN}\alpha^{MN},\\
& \mathcal{L}^\alpha_\beta = \Psi^{*\alpha}_4\Psi^4_\beta.
\end{split}
\end{equation}
This completes our discussion of a concrete realization of
(\ref{radialsusy}) in the context of the Hamiltonian formulation of
$\mathcal{N}=4$ SYM on $\mathbb{R}\times S^3$. As expected, the
r\^ole of the conformal dimensions is assumed by the masses of the
collective excitations of the gauge theory in the curved background,
with the scale of the masses being set by $\mu$.
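As a quick consistency check of (\ref{mass-algebra}) in our conventions, one can evaluate the anticommutator on a bosonic state using (\ref{su22-main}). Since $q^{*\beta}_b|\phi_c\rangle = 0$, only one ordering contributes,
\begin{equation}
\{q^a_\alpha, q^{*\beta}_b\}\,|\phi_c\rangle
= q^{*\beta}_b\left(-2i\sqrt{\frac{\mu}{2g^2}}\,\delta^a_c\,|\psi_\alpha\rangle\right)
= 2\mu\,\delta^a_c\,\delta^\beta_\alpha\,|\phi_b\rangle,
\end{equation}
which matches the right-hand side of (\ref{mass-algebra}) upon using
$H|\phi_c\rangle = (\mu/2)|\phi_c\rangle$,
$\mathcal{L}^\beta_\alpha|\phi_c\rangle = 0$, and
$\mathcal{R}^a_b|\phi_c\rangle = \delta^a_c|\phi_b\rangle - \frac{1}{2}\delta^a_b|\phi_c\rangle$.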
\subsection{Dimensional reductions}
We shall now work out the dimensional reductions of the Hamiltonian
and the supercharges to $\mathbb{R}\times S^2$. To carry out the
dimensional reduction to the three dimensional spacetime, we need to
assume that the scalar and fermionic fields do not depend on the
$U(1)$ coordinate $\psi$. As for the gauge field, one simply replaces
the third component of the one-form on the tangent space by a scalar;
namely $A_3 = \Phi$. More concretely \be\begin{split}
A &= A_ae^a_\mu dx^\mu = e^\mu_aA_\mu e^a_\nu dx^\nu\\
&= A_tdt + A_\theta d\theta + A_\phi d\phi + \frac{\Phi}{\mu}(\cos
\theta d\phi + d\psi).
\end{split}
\end{equation}
Using this decomposition in (\ref{actions3}) and dropping the $\psi$
dependence of all the fields, we get
\be\begin{split}
S = \frac{1}{g^2_{S^2}} \ensuremath{\mathop{\mathrm{Tr}}} \int_{\mathbb{R}\times S^2} \Biggl[ &-\frac{1}{4} F_{ab}^2 -
\frac{1}{2} D_a \Phi D^a \Phi - \frac{\mu^2}{2}\Phi^2 + \mu F_{12}\Phi\\
&-\frac{1}{2} D_a X_{m} D^a X^m
-\frac{1}{2}\left(\frac{\mu}{2}\right)^2 X_mX^m -\frac{ i}{2} \bar \lambda
\Gamma^a D_a \lambda\\
&+ \frac{i\mu}{8}\bar{\lambda}\Gamma^{123}\lambda -
\frac{1}{2}\bar{\lambda}\Gamma ^3[\Phi,\lambda]\\
&+\frac{1}{2}[\Phi,X_m]^2+\frac{1}{4}[X_m,X_n]^2
-\frac{1}{2}\bar \lambda \Gamma^m[X_m,\lambda] \Biggr].
\end{split}\label{actions2}
\end{equation} The radius of $S^2$ is $1/\mu$, with the non-vanishing
dreibeins and spin connections given by the standard formulae \be\begin{split}
&b^1_\theta = \frac{1}{b^\theta_1} = \frac{1}{\mu},\hspace{.2cm}
b^2_\phi = \frac{1}{b^\phi_2} = \frac{\sin \theta}{\mu},\\
&\omega_{12} = -\omega_{21} = -\cos \theta d\phi.
\end{split}
\end{equation} The three dimensional coupling $g^2_{S^2}$ is related to the four
dimensional one as \begin{equation} g^2_{S^2} = \frac{g^2 \mu}{4\pi}. \end{equation}
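This relation can be seen directly from the action: since $e^3_\psi = 1/\mu$ and $\psi$ has period $4\pi$, the $U(1)$ fiber has length $4\pi/\mu$, so that on $\psi$-independent configurations
\begin{equation}
\frac{1}{g^2}\int_{\mathbb{R}\times S^3}
= \frac{1}{g^2}\,\frac{4\pi}{\mu}\int_{\mathbb{R}\times S^2}
\equiv \frac{1}{g^2_{S^2}}\int_{\mathbb{R}\times S^2},
\end{equation}
with the $\mathbb{R}\times S^2$ measure defined below.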
The measure of integration
\begin{equation}
\int_{\mathbb{R}\times S^2} \equiv
\frac{1}{\mu^2} \int dt\int _0^\pi \sin \theta d \theta \int_0^{2\pi}
d\phi,
\end{equation}
is defined to yield a volume of $4\pi/\mu ^2$. This
dimensional reduction yields the following expression for the
supercharge for the three dimensional theory
\be\begin{split}\label{QpN2} Q_+^A =
\frac{1}{g^2_{S^2}}\ensuremath{\mathop{\mathrm{Tr}}}\int_{S^2}\Biggl[&g^2_{S^2}\gamma^iE_i\lambda_+^A +
\frac{1}{2} \gamma^0\gamma^{ij}\lambda^A_+F_{ij}
-2g^2_{S^2}\Pi^{AB}\gamma^5\lambda_{-B}\\& + 2(D_iX^{AC})\gamma^0\gamma^5\gamma^i\lambda_{-C}
+ 2i\left(\frac{\mu}{2}\right)X^{AM}\gamma^5\lambda_{-M} \\
&+2i[X^{AL},X_{LP}]\gamma^0\lambda_+^P + g^2_{S^2}\Pi_{\Phi}\gamma^3\lambda^A_+\\&
+\gamma^0\gamma^{i3}D_i\Phi\lambda^A_+ - i\mu\Phi \gamma^3\lambda^A_+
-2i[\Phi,X^{AC}]\gamma^0 \gamma^5\gamma^3\lambda_{-C}\Biggr].
\end{split}
\end{equation}
The last line contains all the terms involving the new (seventh)
scalar field obtained from the dimensional reduction of the four
dimensional vector potential.
Comparing the expression for the dimensionally reduced supercharge
with (\ref{QpN4}), it is easy to see that it admits a restriction to
an $SU(2|3)$ sector, just like (\ref{su22-main}). However, we have not
yet shown that the supercharge presented above is indeed a symmetry of
the Hamiltonian obtained by the dimensional reduction to
$\mathbb{R}\times S^2$. To do that, we need to reproduce the
supercharge as the time component of a supercurrent, which we shall
now proceed to do. To this end, it is instructive to recall that for
${\cal N}=1$ SYM in 10-d,
\begin{equation}
{\cal L} = -\frac{1}{2} F_{MN} F^{MN} + i \bar \Psi \Gamma^M D_M \Psi,
\end{equation}
the SUSY variations of the fields
\begin{equation}
\delta A_N = -2i\bar\Psi \Gamma_N \epsilon,\qquad
\delta \Psi = F_{PQ} \Gamma^{PQ} \epsilon,
\end{equation}
produce the supercurrent
\be\begin{split}
j^M &=2i\bar\Psi\Gamma^M\Gamma^{PQ} F_{PQ}\epsilon.
\end{split}
\end{equation}
The supercharge is then given by
\begin{equation}
Q = \int_{\text{space}} j^0 = 2i \int_{\text{space}} \bar\Psi \Gamma^0\Gamma^{PQ}F_{PQ}\epsilon.
\end{equation}
Keeping the ten dimensional theory in mind, one can write the Lagrangian for the $\mathbb{R}\times S^2$
theory in the following form \cite{Grignani:2007xz}
\begin{equation}
{\cal L} = -\frac{1}{2} F_{MN} F^{MN} + i \bar \Psi \Gamma^M \nabla_M \Psi
-i\frac{\mu}{4}\bar\Psi \Gamma^{123} \Psi + 2\mu\Phi F_{12} -
\frac{\mu^2}{4} \phi_{\bar m}^2 -\mu^2 \Phi^2,
\end{equation}
where the 10-d gauge field is understood as $A_M =(A_\mu,\phi_{m})=
(A_\mu,\Phi,\phi_{\bar m})$ with $\mu=0,1,2$ and $m=3,\ldots,9$,
$\bar m =4,\ldots,9$. The SUSY variations of the
fields are expressed as
\begin{equation}
\delta A_M = -2i\bar\Psi \Gamma_M \epsilon,\qquad
\delta\Psi = F_{MN}\Gamma^{MN} \epsilon - \mu \Gamma^m \Gamma^{123}\phi_m \epsilon - \mu
\Gamma^3\Gamma^{123} \Phi \epsilon.
\end{equation}
Since the kinetic part of the action is no different from that of ${\cal
N}=1$ SYM in 10-d, the $\mu$-independent part of the supercurrent that
does not involve total derivatives will be the same as in the ten
dimensional theory. Extra total derivatives (surface terms) are
generated from the $\mu$-dependent piece of the variation of the
fermion kinetic term, whose contribution to $\delta{\cal L}$ is
\begin{equation}
\nabla_M\left(i\bar\Psi\Gamma^M\left(\mu \Gamma^m \Gamma^{123}\phi_m + \mu
\Gamma^3\Gamma^{123} \Phi \right)\epsilon\right).
\end{equation}
Of course the very same term is generated by the new $\mu$-dependent
piece of $\frac{\delta {\cal L}}{\delta(\partial_M\Psi)} \delta \Psi$. These add in the
expression for the supercurrent, giving us
\begin{equation}
Q = \int_{S^2} \Biggl( 2i\bar\Psi \Gamma^0\Gamma^{PQ}F_{PQ} \epsilon
-2i\mu\bar\Psi\Gamma^0\Gamma^m\Gamma^{123}\phi_m\epsilon
-2i\mu\bar\Psi\Gamma^0\Gamma^3\Gamma^{123}\Phi \epsilon \Biggr),
\end{equation}
which is thus the supercharge for the $\mathbb{R}\times S^2$ theory, albeit
in a rather compact notation. Expressing the $SO(6)$ fields in terms
of $SU(4)$ ones using the dictionary in appendix \ref{sec:so6su4}, we find
\be\begin{split}
Q_+^A = &4i\int_{S^2}\Biggl(
\frac{1}{2}\gamma^{\mu\nu}\gamma^0F_{\mu\nu}\,\lambda_+^A
-2\gamma^\mu\gamma^0 D_\mu X^{AB} \lambda_{-B}
+\gamma^\mu\gamma^3\gamma^0 D_\mu \Phi \,\lambda_+^A\\
&+2ig\gamma^3\gamma^0[\Phi,X^{AB}]\lambda_{-B}
+2ig\gamma^0[X^{AC},X_{CB}]\lambda_+^B
-i\mu X^{AB}\lambda_{-B}
-i\mu \gamma^3 \Phi \,\lambda_+^A\Biggr).
\end{split}
\end{equation}
This agrees with (\ref{QpN2}) up to the overall $4i$ outside, which
can be easily absorbed in a redefinition of the charge.
The construction clearly shows that we can restrict the three
dimensional theory consistently to an $SU(2|3)$ sector.
Furthermore, the reduced supercharges act on the $SU(2|3)$ states
exactly as in (\ref{su22-main}), with $g$ replaced by $g_{S^2}$. The
three dimensional supercharges constructed above satisfy the same
massive algebra (\ref{mass-algebra}) as the four dimensional theory
allowing us to constrain its quantum spectrum on algebraic grounds as
discussed below.
\section{Dispersion relations and the extended $SU(2|2)$ algebra}\label{su22-gauge}
In this section we shall focus on constraining the spectrum of the
four and three dimensional sixteen supercharge theories in the scheme
of radial quantization. We shall put special emphasis on the r\^ole
played by the scale introduced by the radius of the sphere
$1/\mu$. Following that we shall extend the formalism to
incorporate the three dimensional $\mathcal{N}=8$ theory. Following
\cite{Beisert:2005tm, Beisert:2006qh} we write the $SU(2|2)$ algebra
(\ref{su22-main}) abstractly as \be\begin{split}\label{su22-abstract} &
q^a_\alpha|\phi_b\rangle = a\,\delta
^a_b|\psi_\alpha\rangle,\\ &q^{*\alpha}_a |\phi_b\rangle =
c\,\epsilon_{ab}\epsilon^{\alpha \beta}|\psi_\beta \rangle
,\\ &q^a_\alpha |\psi_\beta \rangle = b\,\epsilon_{\alpha
\beta}\epsilon^{ab}|\phi_b\rangle,\\ &q^{*\alpha}_a|\psi_\beta\rangle
= d\, \delta^\alpha _\beta |\phi_a\rangle.
\end{split}
\end{equation}
In the fundamental representation, which corresponds to the
tree-level field theory, one has $a = -2i\sqrt{\mu/(2g^2)}$, $b=c=0$,
and $d= +2i\sqrt{\mu g^2/2}$. To proceed beyond the classical theory,
one needs to augment this algebra by two new generators $P, K$ defined
by \cite{Beisert:2005tm, Beisert:2006qh} \be\begin{split}
& \{q^a_\alpha, q^{b}_\beta\} = \epsilon^{ab}\epsilon_{\alpha \beta}P = \epsilon^{ab}\epsilon_{\alpha \beta} ab \\
& \{q^{*\alpha} _a, q^{*\beta}_b\} = \epsilon^{\alpha
\beta}\epsilon_{ab}K = \epsilon^{\alpha
\beta}\epsilon_{ab} cd
\end{split}
\end{equation} where $P$ and $K$ annihilate physical states. Notice that unlike
the case of the gauge theory on $\mathbb{R}^4$, $P$ and $K$ are not
independent, as $P = K^*$. This is yet another artifact of the effect
of the conformal transformations that map the theory from flat
spacetime to the sphere. The conformal transformation maps the
superconformal generators to the conjugates of the supercharges, and
the relation between $P$ and $K$ is another
reflection of the same map.
Closure of the algebra on $H$ and the rotation generators yields \begin{equation} H
= \frac{1}{4}(ad +bc) \hspace{.3cm} \mbox{and} \hspace{.3cm} (ad - bc)
= 2\mu. \end{equation} The second relation, a shortening condition, is easily checked
to be satisfied at the classical level, using the values of
$a,b,c$, and $d$ in (\ref{su22-main}). These relations also yield the
dispersion relation for the magnons \begin{equation} H = \frac{1}{2}\sqrt{\mu
^2 + PK}. \end{equation} All the statements made above hold both for the four
and three dimensional gauge theories. However, for the specific case
of the three dimensional theory, we would like to emphasize that its
Hamiltonian involves the ``perfect square'' term $ (F_{12} - \mu
\Phi)^2$, whose minima generate the moduli space of the vacua of this
theory. Our results for SYM on $\mathbb{R}\times S^2$ only apply to
the trivial vacuum $\Phi = 0$. From the viewpoint of the gauge-gravity
duality, the string dual (\ref{LM}) for the theory proposed in \cite{Lin:2005nh}
and studied later in the paper applies only to this vacuum, making it
particularly interesting\footnote{Other vacua of the gauge theory
correspond to monopole backgrounds \cite{Ishiki:2006yr}. It would
doubtless be interesting to understand the quantum spectra of the
theory around these non-trivial vacua as well.}.
\\
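As a check at the classical level, the tree-level values $a = -2i\sqrt{\mu/(2g^2)}$, $b=c=0$, and $d = +2i\sqrt{\mu g^2/2}$ give
\begin{equation}
ad - bc = \left(-2i\sqrt{\frac{\mu}{2g^2}}\right)\left(+2i\sqrt{\frac{\mu g^2}{2}}\right) = 2\mu,
\qquad H = \frac{1}{4}(ad+bc) = \frac{\mu}{2},
\end{equation}
so the shortening condition is satisfied, and with $PK = 0$ the dispersion relation reduces to the classical energy $\mu/2$ of a single scalar quantum on the sphere.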
To proceed further, it is important to parameterize the algebra as
\be\begin{split}
& a = \sqrt{\mu h(\lambda)}\eta,\\
& b = \sqrt{\mu h(\lambda)} \frac{\zeta}{\eta}(1- x^+/x^-),\\
&c = \sqrt{\mu h(\lambda)} \frac{i\eta}{\zeta x^+},\\
&d = \sqrt{\mu h(\lambda)}\frac{x^+}{i\eta}(1- x^-/x^+),
\end{split}
\end{equation} The above parameterization, where $h$ is an arbitrary function of
the dimensionless 't Hooft coupling of the gauge theory, is specific to
the case of the $\mathcal{N}=4$ theory on $\mathbb{R}\times S^3$. That
is so because the length dimensions of the four parameters are
all equal to $-1/2$ only in the four dimensional theory. It is easily
seen that for the $\mathbb{R}\times S^2$ case, $a$ and $c$ are
dimensionless, while $b$ and $d$ have length dimension $-1$. The
appropriate parameterization in that case is \be\begin{split}
& a = \sqrt{h(\lambda)}\eta,\\
& b = \mu \sqrt{ h(\lambda)} \frac{\zeta}{\eta}(1- x^+/x^-),\\
&c = \sqrt{ h(\lambda)} \frac{i\eta}{\zeta x^+},\\
&d = \mu\sqrt{h(\lambda)}\frac{x^+}{i\eta}(1- x^-/x^+).
\end{split}
\end{equation}
However, in both cases the shortening condition implies
\begin{equation}
\frac{2i}{h(\lambda)} = x^+ + \frac{1}{x^+} - x^- - \frac{1}{x^-}.
\end{equation}
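To see this, note that in either parameterization
\be\begin{split}
&ad = -i\mu h(\lambda)\left(x^+ - x^-\right), \qquad
bc = i\mu h(\lambda)\left(\frac{1}{x^+} - \frac{1}{x^-}\right),\\
&ad - bc = -i\mu h(\lambda)\left(x^+ + \frac{1}{x^+} - x^- - \frac{1}{x^-}\right) = 2\mu,
\end{split}
\end{equation}
from which the stated condition follows.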
Also, writing $x^+/x^- = e^{ik}$ allows us to write
\be\begin{split}
& P = ab = \mu h(\lambda)(1-e^{ik}),\\
& K = cd = \mu h(\lambda)(1 - e^{-ik}),
\end{split}
\end{equation} for both theories in question. Thus the dispersion relation
for both the $\mathcal{N}=4$ and $\mathcal{N}=8$ theories can
naturally be expressed as \begin{equation} H = \frac{\mu}{2}\sqrt{1 +
4h^2(\lambda)\sin ^2(k/2)}. \end{equation} Obviously, in the case of the three
dimensional theory, the dimensionless 't Hooft coupling is
$g^2_{S^2}N/\mu$. Using this expression, we see that the
dispersion relation agrees with the $k\ll1$ limit of the one derived
in \cite{Lin:2005nh} (and reviewed in section \ref{sec:string}) from
the world-sheet point of view.
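Explicitly, the central charges give
\begin{equation}
PK = \mu^2h^2(\lambda)\left(1-e^{ik}\right)\left(1-e^{-ik}\right)
= \mu^2h^2(\lambda)\left(2-2\cos k\right)
= 4\mu^2h^2(\lambda)\sin^2(k/2),
\end{equation}
which, upon substitution into $H = \frac{1}{2}\sqrt{\mu^2+PK}$, yields the dispersion relation above.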
The function $h(\lambda)$ cannot be fixed from the
constraints of supersymmetry alone. In the following sections we
determine it for the weakly coupled three dimensional theory at two
loop order (up to a single undetermined constant) and at strong
coupling from the dual string picture. In the gauge theoretic
analysis, we use the known results about the spectrum of the
dilatation operator of the four dimensional theory as a benchmark for
calibrating our methods and results.
\section{Weak coupling spectrum and integrable spin chains}\label{weak}
The form of the dispersion relation discussed in the previous section
could be determined from an understanding of the realization of the
$SU(2|2)$ algebra alone. It has the same ``universal'' form for all the
dimensional reductions of the four dimensional theory whose
Hamiltonians are embedded in the $SU(2|2)$ structure as in
(\ref{su22-main}). We shall now focus on the determination of the
specific effective Hamiltonians for the gauge theories in question,
and show how their spectra are related to those of quantum spin
chains. The new results in this section include the determination of
$h(\lambda)$ to two loop order in the three dimensional gauge theory,
which also appears to be integrable (at least in the $SU(2)$ sector)
at this order. We also determine the full $SU(2|3)$ symmetric
effective Hamiltonian at one-loop and find its leading
correction\footnote{For the full $SU(2|3)$ sector, there is an
intermediate `dynamical' contribution between the one and two loop
contributions.}.
To compute the effective Hamiltonians for both the
$\mathbb{R}\times S^3$ and $\mathbb{R}\times S^2$ within the
scheme of radial quantization we first recall that
the states (for both theories) under consideration are generically of the form \begin{equation} |i_1
i_2\cdots i_n\rangle =
\frac{1}{\sqrt{N^{n}}}\ensuremath{\mathop{\mathrm{Tr}}}(a^\dagger_{i_1}a^\dagger_{i_2}\cdots
a^\dagger_{i_n})|0\rangle, \qquad a^\dagger_i =
(\alpha^\dagger_{4i})_0. \end{equation}
$(\alpha^\dagger_{4i})_0$ corresponds to the lowest spherical harmonic
mode in the momentum space expansion of the oscillators
$(\alpha^\dagger_{4i})$, $i=1,2,3$ on $S^3$ or $S^2$. Though not
displayed above, it is implied that we also include the fermionic
states $\psi_\alpha$ so that the Hilbert space transforms under
$SU(2|3)$.
As it stands, although these states have a global $SU(N)$ invariance,
they do not seem to be invariant under local gauge transformations,
which will typically mix the different momentum modes. However, we
need to keep in mind that we shall work with a gauge fixed
Hamiltonian, for which the states can only be classified by their
quantum numbers. In such a gauge fixed $J^{PC}$-like scheme these
states are physical and normalizable. In the conformal field theory,
these states are mapped to local composite operators built out of
scalar fields alone once the theory is mapped to $\mathbb{R}^4$. Local
operators with covariant derivatives inserted on $\mathbb{R}^4$ would,
in turn, correspond to operators with higher spherical harmonics on
$\mathbb{R}\times S^3$. The classification scheme for operators based
on $R$ charge and $J^{PC}$ assignments is valid for the three
dimensional $\mathcal{N}=8$ theory as well, as is the physicality of
the states mentioned above.
At tree level, the Hamiltonians for $\mathcal{N}=4$ SYM on
$\mathbb{R}\times S^3$ and $\mathbb{R}\times S^2$ reduce to harmonic
oscillator Hamiltonians, with a single oscillator assigned to each
angular momentum mode. The spectrum of the $SU(2|3)$ states is simply
given by their engineering dimensions at that level. The one-loop
correction to the energies is given by \begin{equation} \Delta E^1 = \langle I|
:\left(H^4 + H^3\frac{\Pi}{E_0-H_0}H^3\right):|I\rangle = \langle
I|\Delta H^1|I \rangle ,\end{equation} where $H^4$ and $H^3$ are the quartic and
cubic parts of the Hamiltonian. $\Pi$ is the projector on to the
subspace\footnote{Note that this subspace includes states built
from non-zero-mode excitations.} orthogonal to the states of energy $E_0$. The expressions for
$H^4, H^3$ are also taken to be normal ordered. The normal ordered
expressions can be mapped to Hamiltonians of quantum spin chains
\cite{Lee:1997dd, Lee:1998ea}. The general connection between matrix
models and their generalization to field theories and quantum spin
chains due to Lee and Rajeev has been reviewed in \cite{Lee:1999vb}.
For previous use of this identification in context of both the four
dimensional gauge theory and the plane wave matrix model we shall
refer to \cite{Agarwal:2004sz,Kim:2003rza, Klose:2003qc,
Fischbacher:2004iu}. Here we recollect some of the relevant facts
about these matrix valued operators for the sake of
completeness.
The typical term at a given order in perturbation theory takes on the
form \begin{equation} \Theta^I_J = \frac{1}{N^{(i+j-2)/2}}\ensuremath{\mathop{\mathrm{Tr}}}\left(W^{\dagger
I_1}W^{\dagger I_{2}}\cdots W^{\dagger I_i}W_{J_j}
W_{J_{j-1}}\cdots W_{J_1}\right). \end{equation} The strings $I = I_1 I_2 \cdots
I_i$ and $J = J_1 J_2\cdots J_j$ denote fixed orderings of the bits
$I_i, J_j$, etc., which are shorthand for all $SU(2|3)$ and angular
momentum indices. The oscillators $W_I$ collectively denote the three
bosonic and two fermionic matrix valued oscillator variables. These
$SU(N)$ invariant operators form a closed Lie superalgebra, whose
basic (anti-)commutation relations are given as
\begin{eqnarray}
[\Theta ^I_J, \Theta ^K_L]_\pm &=& \delta ^K_J \Theta ^I_L + \sum_{J
= J_1J_2}(-1)^{\epsilon (J_1)[\epsilon (K) + \epsilon (L)]}\delta
^K_{J_2}\Theta ^I_{J_1L}\nonumber \\
& &+ \sum_{K = K_1K_2}\delta ^{K_1}_{J} \Theta ^{IK_2}_{L} +
\sum_{\stackrel{J=J_1J_2}{K=K_1K_2}}(-1)^{\epsilon (J_1)[\epsilon (K)
+ \epsilon (L)]}\delta ^{K_1}_{J_2}\Theta ^{IK_2}_{J_1L}\nonumber \\
& &+ \sum_{J = J_1J_2}\delta ^K_{J_1}\Theta ^I_{LJ_2} + \sum
_{J=K_1K_2}(-1)^{\epsilon (K_1)[\epsilon (I) + \epsilon (J)]}\delta
^{K_2}_J \Theta ^{K_1I}_{L}\nonumber \\
& & + \sum_{\stackrel{J=J_1J_2}{K=K_1K_2}}(-1)^{\epsilon
(K_1)[\epsilon (I) +\epsilon (J)]}\delta ^{K_2}_{J_1}\Theta
^{K_1I}_{LJ_2} \nonumber \\
& &+\sum_{J=J_1J_2J_3}(-1)^{\epsilon (J_1)[\epsilon (K) + \epsilon
(L)]}\delta ^K_{J_2}\Theta ^I_{J_1LJ_3} \nonumber \\
& & + \sum_{K=K_1K_2K_3}(-1)^{\epsilon (K_1)[\epsilon (I) + \epsilon
(J)]}\delta ^{K_2}_{J}\Theta ^{K_1IK_3}_{L}\nonumber \\
& & - (-1)^{[\epsilon (I)+ \epsilon (J)][\epsilon (K) +\epsilon (L)]}
\left(I,J \leftrightarrow K,L\right) .
\end{eqnarray}
In the above formula, expressions such as $\sum_{I = I_1I_2}$ imply
summing over all ways of writing the string $I$ as the concatenation
of two strings $I_1$ and $I_2$. $\epsilon(I)$ denotes the grade of the
string $I$, which is zero if it is bosonic and 1 if it is fermionic.
The full Lie algebra also includes an ideal which includes elements
that encode finite size effects. However, since we shall be working on
states of infinite size we shall ignore the contribution of the ideal,
which is irrelevant for our present concerns. A more complete
discussion of this algebra can be found in \cite{Lee:1999vb}.
When the operators are bosonic, their action on the single-trace
states can be expressed as
\begin{equation} \Theta ^I_J|K\rangle = \delta
^K_J|I\rangle + \sum _{K = K_1K_2}\delta ^{K_1}_J|IK_2\rangle .
\end{equation}
Identifying the states with those of a quantum spin chain, we see that
$\Theta ^{ij}_{ji} = \sum_lP_{l,l+1}$, which is to be replaced by the
graded permutation operator $\Pi$ when fermionic creation and
annihilation operators are included.
On general grounds of $SU(2|3)$ invariance in the sector of the gauge
theories under consideration, the one loop effective Hamiltonian
$\Delta H^1 = \alpha \Theta^{ij}_{ji} + \beta\Theta^{ij}_{ij} =
\sum_l(\alpha \Pi_{l,l+1} + \beta I_{l,l+1})$, for some constants
$\alpha$ and $ \beta$. Requiring $\Delta H^1$ to annihilate the chiral
primary operators $\ensuremath{\mathop{\mathrm{Tr}}}(a^\dagger_3)^n|0\rangle$ yields $\alpha =
-\beta$. To determine the coefficient of $\Pi$, we see that in the
bosonic $SU(2)$ sector the permutation operator arises entirely from
the quartic interaction vertex in $H^4$, whose contribution is \begin{equation}
-\frac{1}{4g^2}\int_{\Omega} \ensuremath{\mathop{\mathrm{Tr}}}\left([X_{AB},X_{CD}][X^{AB},
X^{CD}]\right) \rightarrow -\frac{g^2}{|\Omega
|\mu^2}\ensuremath{\mathop{\mathrm{Tr}}}(a^\dagger_aa^\dagger_ba^aa^b) = -\frac{Ng^2}{|\Omega
|\mu^2}\sum_l P_{l,l+1}. \end{equation} The formula is true for both $S^2$ and
$S^3$, where $|\Omega |$ denotes the associated volume. Substituting
the explicit formulae for the volumes, we have \begin{equation} \Delta H^1 = \frac{g^2N
\mu}{16\pi^2}\sum_l(I - \Pi_{l,l+1}),\label{1loop} \end{equation} for both the
theories in the closed $SU(2|3)$ sector. For the three dimensional
theory, the overall coefficient, expressed in terms of $g^2_{S^2}$, is
$g^2_{S^2}N/(4\pi)$. It is gratifying to note that for the four
dimensional theory, the above formula agrees with the known result for
the dilatation operator on $\mathbb{R}^4$ \cite{Minahan:2002ve}, after
one sets the radius of $S^3$ to unity,
i.e. $\mu =2$. It also agrees with the one loop result obtained for the three
dimensional theory in \cite{Ishiki:2006rt}. In that paper, $\Delta
H^1$ was computed for the full $SO(6)$ sector, which, as in the four
dimensional theory, is closed (only) at one loop. Restriction of
that result to the $SU(2)$ sector agrees with the above Hamiltonian.
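For reference, setting $\mu = 2$ in (\ref{1loop}) makes the four
dimensional statement explicit:
\begin{equation}
\Delta H^1 = \frac{g^2N}{8\pi^2}\sum_l(I - \Pi_{l,l+1}),
\end{equation}
which is the standard one-loop dilatation operator of $\mathcal{N}=4$
SYM in this sector \cite{Minahan:2002ve}.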
Moreover, our understanding of how the $SU(2|3)$ symmetry is realized
in the radial Hamiltonian formalism allows us to generalize the
one-loop result to the full $SU(2|3)$ sector and, as we shall see
below, go beyond the one-loop level.
\subsection{Higher loops}
In \cite{Beisert:2003ys}, it was shown how the $SU(2|3)$ symmetry
alone can be used to constrain the form of the higher loop corrections
to the dilatation operator of $\mathcal{N}=4$ SYM. Although the four
dimensional superconformal theory was the focus of the analysis in
that paper, the results in \cite{Beisert:2003ys} can be readily
adapted to determine the leading corrections to the one-loop radial
Hamiltonians for the gauge theories we study as well. Requiring that
the generators of the $SU(2|3)$ algebra close order by order in
perturbation theory \cite{Beisert:2003ys}, it is possible to write down
the complete leading correction to (\ref{1loop}) as \begin{equation} \Delta
H_{SU(2|3)} = \frac{\mu}{2}\left(\lambda (\Theta ^{AB}_{AB} - \Theta
^{AB}_{BA}) - \sqrt{\frac{\lambda^3}{2}}(\epsilon^{\alpha
\beta}\epsilon_{abc}\Theta ^{abc}_{\alpha \beta} +
\epsilon_{\alpha \beta}\epsilon^{abc}\Theta _{abc}^{\alpha \beta}
) + \cdots \right) ,\end{equation} where the capital indices in the first term on
the right-hand side stand for both the $SU(3)$ (i.e. $a$, $b$,
$c$, $\ldots$) and $SU(2)$ ($\alpha$, $\beta$, $\ldots$) indices. The second
term breaks the $SU(2|3)$ invariance to $SU(2)\times SU(3)$ and it
encodes the ``dynamical'' effect of altering the length of the spin
chain. It is a non-trivial fact that the effective Hamiltonian is
integrable at this order \cite{Agarwal:2005jj}. In the context of the
four dimensional gauge theory, we have reproduced the known result for
the dilatation generator explicitly within the scheme of radial
quantization. However, since both the form of $\Delta H_{SU(2|3)}$
given above and its integrability follow directly from the symmetry
algebra, we also claim the above formula to be the complete first
non-trivial correction to the effective Hamiltonian for the
$\mathcal{N}=8$ model on $\mathbb{R}\times S^2$. Furthermore, it also
appears to be integrable at this order.
The coupling constant $\lambda$ is to be identified with
$g^2N/(8\pi^2)$ for the four dimensional theory and
$g^2_{S^2}N/(2\mu\pi)$ for the $\mathbb{R}\times S^2$ model.
To analyze the question of integrability at two loops it is
instructive to restrict ourselves to the bosonic $SU(2)$ sector. For
the four dimensional $\mathcal{N} =4$ theory, the explicit form of
the dilatation operator is known up to five-loop order
\cite{Beisert:2003tq, Beisert:2003ys, Beisert:2007hz,
Fiamberti:2009jw}. These results can be readily mapped to the five
loop effective radial Hamiltonian using the maps between the 't Hooft
couplings of the theories on $\mathbb{R}^4$ and $\mathbb{R}\times S^3$
given before. In the absence of alternate explicit computations of
spectra, one can continue to use the symmetry to constrain the radial
$SU(2)$ Hamiltonian for the $\mathbb{R}\times S^2$ model at the two
loop order up to a single undetermined constant. Stated explicitly \begin{equation}
\Delta H_{SU(2)} = \frac{\mu}{2} \left(\lambda (\Theta ^{ab}_{ab} -
\Theta ^{ab}_{ba}) + \lambda^2[(2\Theta ^{ab}_{ba} -
\frac{1}{2}\Theta^{abc}_{cba} - \frac{3}{2}\Theta^{abc}_{abc}) +
\alpha_1(\Theta ^{ab}_{ab} - \Theta ^{ab}_{ba})] + \cdots\right),
\end{equation} where $\alpha_1$ is a new undetermined constant for the three
dimensional theory; it equals zero in the four dimensional
case by the requirement of BMN scaling, which is present in the
$\mathcal{N}=4$ theory at this loop order. However, it is not known if
there is any reason to expect such a scaling in the three dimensional
theory as well. It is known that for the PWMM (as for ${\cal N}=4$
SYM), BMN scaling is violated only at the four-loop order
\cite{Fischbacher:2004iu,Beisert:2006ez}. Nevertheless, the
perturbative integrability of the effective Hamiltonian is ensured for
arbitrary values of $\alpha_1$. A higher charge $\mathcal{Q} = \lambda
\mathcal{Q}_0 + \lambda^2\mathcal{Q}_1$ can be constructed such that
$[\Delta H, \mathcal{Q}] = \mathcal{O}(\lambda ^4)$. The explicit form
of the higher charge is \be\begin{split} &\mathcal{Q}_0 = \Theta^{cab}_{abc} -
\Theta^{bca}_{abc},\\ & \mathcal{Q}_1 = \left( -6\mathcal{Q}_0 +
\Theta^{dacb}_{abcd} - \Theta ^{bdca}_{abcd} + \Theta^{dbac}_{abcd} -
\Theta^{cbda}_{abcd}\right),
\end{split}
\end{equation} and it establishes the two-loop integrability of the $SU(2)$
sector of the three dimensional theory. As we discuss below, the
scattering matrix of the spin chain describing the $SU(2|3)$ sector of
the gauge theory is factorized, which allows us to interpret the
two loop integrability in the $SU(2)$ sector as an important piece of
evidence in favor of integrability of the full $SU(2|3)$ sector at
this perturbative order\footnote{The two-loop Hamiltonian for the full
$SU(2|3)$ sector, determined up to a few constants, by requiring the
perturbative closure of the algebra is available in
\cite{Beisert:2003ys}. Those results are obviously valid for the
three dimensional theory as
well.}.
Finally, we note that at this order, the scaling function is
determined to be \begin{equation}\label{hlambda} h^2(\lambda) = 2\lambda +2\alpha_1\lambda^2 +
\cdots .\end{equation}
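As a check on the leading term of (\ref{hlambda}), note that on a
one-magnon state of momentum $k$ the operator $\sum_l(I - \Pi_{l,l+1})$
appearing in (\ref{1loop}) has eigenvalue $2(1-\cos k) = 4\sin^2(k/2)$,
so the one-loop energy above the vacuum is
\begin{equation}
\frac{g^2N\mu}{16\pi^2}\,4\sin^2\frac{k}{2} = 2\mu\lambda\sin^2\frac{k}{2}.
\end{equation}
This is precisely the $\mathcal{O}(\lambda)$ term obtained by expanding
the all-order dispersion relation (\ref{HYM}) below,
\begin{equation}
\frac{\mu}{2}\sqrt{1 + 4h^2(\lambda)\sin^2\frac{k}{2}} = \frac{\mu}{2}
+ \mu\, h^2(\lambda)\sin^2\frac{k}{2} + \cdots,
\end{equation}
with $h^2(\lambda) = 2\lambda + \cdots$.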
\section{Integrability and scattering matrices}
\label{sec:intS}
In this section we comment on carrying over the insights gained from
the studies of the multi-particle $S$-matrix for the planar dilatation
operator of $\mathcal{N}=4$ SYM on $\mathbb{R}^4$ to the radial
Hamiltonians described in the previous sections. Having interpreted
the effective planar Hamiltonians in the $SU(2|3)$ sector of the three
and four dimensional gauge theories as spin chains, we can proceed to
constrain the $S$-matrix of the spin chain using the symmetry algebra
by adapting Beisert's techniques in \cite{Beisert:2006qh,
Beisert:2005tm}. We bear in mind that the ferromagnetic vacuum of
the spin chains involves states made out of $Z$'s alone, while the
excitations/magnons transform under the residual $SU(2|2)$ symmetry,
which is also the symmetry of the $S$-matrix. Since the details of the
determination of the $S$-matrix by the use of the $SU(2|2)$ algebra
have been expanded on at length in \cite{Beisert:2006qh, Beisert:2005tm},
we shall refer to Beisert's original papers for the technical details.
The generalization of the single particle (fundamental) representation
of the $SU(2|2)$ algebra (\ref{su22-abstract}) to multiple particles
involves the introduction of the $\mathcal{Z}^{\pm}$ (length changing) markers as
follows \be\begin{split}\label{su22-z}
& q^a_\alpha|\phi_b\rangle = a\,\delta ^a_b|\psi_\alpha\rangle,\\
&q^{*\alpha}_a |\phi_b\rangle = c\,\epsilon_{ab}\epsilon^{\alpha
\beta}|\psi_\beta \,\mathcal{Z}^-\rangle, \\
&q^a_\alpha |\psi_\beta \rangle = b\,\epsilon_{\alpha
\beta}\epsilon^{ab}|\phi_b\,\mathcal{Z}^+\rangle,\\
&q^{*\alpha}_a|\psi_\beta\rangle = d\,\delta^\alpha _\beta
|\phi_a\rangle.
\end{split}
\end{equation} Comparison with (\ref{su22-main}) immediately clarifies the r\^ole
of the markers as essentially bookkeeping devices for gauge
transformations. The generators $P$ and $K$ act as \begin{equation} P|W\rangle =
ab|W\mathcal{Z}^+\rangle, \hspace{.3cm} K|W\rangle = cd|W{\cal Z}^-\rangle
.\end{equation} In the case of the scattering of two magnons, the two particle
$S$-matrix can be constrained up to ten undetermined functions of the
magnon momenta. The action of the $S$-matrix on two particle states
can be expressed in all generality as \be\begin{split} &
S_{12}|\phi_a^1\phi_b^2\rangle =
A_{12}|\phi^2_{\{a}\phi^1_{b\}}\rangle +
B_{12}|\phi^2_{[a}\phi^1_{b]}\rangle
+\frac{1}{2}C_{12}\epsilon_{ab}\epsilon^{\alpha \beta}|\psi ^2_\alpha
\psi^1_\beta \mathcal{Z}^-\rangle, \\
& S_{12} |\psi^1_\alpha \psi^2_\beta\rangle =
D_{12}|\psi^2_{\{\alpha}\psi^1_{\beta \}}\rangle +
E_{12}|\psi^2_{[\alpha}\psi^1_{\beta ]}\rangle
+\frac{1}{2}F_{12}\epsilon_{\alpha
\beta}\epsilon^{ab}|\phi_a^2\phi_b^1\mathcal{Z}^+\rangle,\\
& S_{12}|\phi^1_a\psi^2_\beta\rangle = G_{12}|\psi^2_\beta
\phi^1_a\rangle + H_{12}|\phi^2_a\psi^1_\beta \rangle, \\
&S_{12}|\psi^1_\alpha \phi^2_b\rangle = K_{12}|\psi ^2_\alpha \phi
^1_b\rangle + L_{12}|\phi ^2_b \psi^1_\alpha\rangle,
\end{split}
\end{equation} Requiring the two body scattering matrix to commute with the
supersymmetry generators uniquely determines the ten undetermined
functions in terms of a single function $S^0_{12}$. For example,
\begin{equation}\label{S012} A_{12} = S^0_{12}\frac{x^+_2 - x^-_1}{x^-_2 - x_1^+}.
\end{equation} The expressions for all the other functions $B_{12},\ldots ,L_{12}$ in terms
of $S^0_{12}$ can be found in Table 1 of \cite{Beisert:2005tm}.
Furthermore, the scattering matrix satisfies the Yang-Baxter algebra
\begin{equation} S_{12}S_{13}S_{23} = S_{23}S_{13}S_{12}, \end{equation} fulfilling the
necessary condition for the integrability of the $SU(2|3)$ symmetric
spin chain to all orders in perturbation theory\footnote{The
Yang-Baxter algebra is also a consequence of the Yangian symmetry
exhibited by the $S$-matrix \cite{Beisert:2007ds}.}. The magnon
momenta are to be determined by the Bethe ansatz equations obeyed by
the scattering matrix \cite{Beisert:2006qh, Beisert:2005tm}. For an
$m$ magnon state, the total energy is given by the additive relation
\begin{equation}\label{HYM} H = \sum_{i=1}^m H_i = \sum_{i=1}^m
\frac{\mu}{2}\sqrt{1 + 4h^2(\lambda)\sin ^2(k_i/2)}. \end{equation}
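The variables $x^\pm_i$ entering (\ref{S012}) admit a parametrization
compatible with this dispersion relation; the normalization below is a
choice of convention (following \cite{Beisert:2005tm} up to rescalings
of the coupling) rather than a new result. Writing $x^\pm =
\rho\,e^{\pm ik/2}$, so that $x^+/x^- = e^{ik}$, the constraint
\begin{equation}
x^+ + \frac{1}{x^+} - x^- - \frac{1}{x^-} = \frac{2i}{h(\lambda)}
\qquad \Longleftrightarrow \qquad
\sin\frac{k}{2}\left(\rho - \frac{1}{\rho}\right) = \frac{1}{h(\lambda)}
\end{equation}
fixes $\rho$, and the single-magnon energy
\begin{equation}
H_i = \frac{\mu}{2}\,h(\lambda)\sin\frac{k_i}{2}\left(\rho +
\frac{1}{\rho}\right) = \frac{\mu}{2}\sqrt{1 +
4h^2(\lambda)\sin^2(k_i/2)}
\end{equation}
then reproduces (\ref{HYM}), using $(\rho + 1/\rho)^2 = (\rho -
1/\rho)^2 + 4$.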
The
factorizability of the $S$-matrix and its determination up to a single
function are both consequences of the fact that the underlying
symmetry is $SU(2|2)$ and that the fundamental excitations fall on
atypical representations. The tensor product of this representation
uniquely gives a \emph{single} new irreducible representation, allowing us
to constrain the scattering matrix up to a single function of the
magnon momenta. The formal (matrix) structure of the two particle
scattering matrix is hence a direct consequence of the symmetry, and
the details of the models that realize these symmetries lie hidden in
the parameter $\mu$, the scaling function $h(\lambda)$ and, relatedly,
$S^0_{12}$.
While the Yang-Baxter condition on the $S$-matrix is a necessary
condition for integrability, it is certainly not sufficient. One
typically needs to augment this with the existence of additional
conserved charges to gain firmer evidence of integrability. In the
case of the three dimensional gauge theory we have presented this
evidence in the form of higher conserved charges up to the two loop
order. At least in the $SU(2)$ sector, one can hope to reliably use
the asymptotic Bethe ansatz techniques to compute the spectrum of the
three dimensional gauge theory at this loop order. The Yang-Baxter
relation satisfied by the full $SU(2|2)$ S-matrix suggests that Bethe
ansatz techniques may be applicable to a larger sector of the gauge
theory at and beyond two loops. Clearly, probing the structure of the
spin chain describing its spectrum at higher loop orders (even in the
$SU(2)$ sector) and understanding its integrability properties remains
an exciting open problem.
\section{Comments on PWMM and $\mathcal{N}=4 $ SYM on $\mathbb{R}\times S^3/{\mathbb{Z}_k}$}
\label{sec:extend}
The principles used above in constraining the spectrum of the three
dimensional gauge theory can also be employed in the study
of the PWMM and $\mathcal{N}=4 $ SYM on $\mathbb{R}\times
S^3/{\mathbb{Z}_k}$. These theories have a rich moduli space of vacua.
However, as long as a well defined large-$N$ expansion can be
implemented, the spectra around each of those vacua can, in principle,
be constrained by exactly the same use of the $SU(2|2)$ algebra as
explained above. The different vacua would simply correspond to
different $h(\lambda)$. For example, the $h$ function for the PWMM
around its trivial vacuum has been computed at the leading order on
the string theory side in \cite{Lin:2005nh} and up to the four loop
level in perturbation theory in \cite{Fischbacher:2004iu}\footnote{For
an exposition of the perturbative realization of $SU(2|2)$ in weakly
coupled PWMM see \cite{Aniceto:2008ep}. }. A different large-$N$
limit for the matrix model can also be taken if it is expanded around
its so-called fuzzy-sphere vacuum \cite{Maldacena:2002rb}. This
expansion simply maps the matrix model to the three dimensional gauge
theory studied above, and the corresponding $h$ would be the one
computed up to two loops in the preceding section. Thus, while the
different $h$ functions are determined dynamically in these different
theories, the r\^ole of the underlying $SU(2|2)$ and the consequent
dispersion relation (\ref{HYM}) appears to be generic to the
dimensional reductions of $\mathcal{N}=4$ SYM on $\mathbb{R}\times
S^3$ that preserve the $SU(2|3)$ symmetry. Furthermore, the ``matrix''
structure of the $SU(2|2)$ S-matrix would also be the same for this
class of theories, with the specifics of the models and choices of
vacua being encoded in the dynamically determined observable $A_{12}$.
As mentioned before, these universal properties are however crucially
contingent on there being a systematic large-$N$ limit for the study
of the spectrum of any of these given models around a particular
vacuum, which is assumed to be ``well separated'' in the sense
described below\footnote{\label{foot:cav}The well-separatedness of the vacua and
formulations of large-$N$ limits are not independent issues
\cite{Lin:2005nh, Maldacena:2002rb}.}.
\section{$SU(2|2)$ from string theory}
\label{sec:string}
In this section we will discuss the emergence of the $SU(2|2)$ algebra
in the string theories dual to the family of gauge theories with 16
supercharges discussed previously. We will succeed in deriving the
explicit algebra in the plane-wave (BMN-like) limit, i.e. in the limit
that the magnon momenta $k \ll 1$. This amounts to analyzing type IIA
strings on the plane-wave, for which exact quantization has been
performed. The main point of the analysis is to show the emergence of
the central charges through a relaxation of the level-matching
condition. This was originally understood for the full $AdS_5\times
S^5$ superstring in \cite{Arutyunov:2006ak}; we will have to content
ourselves with the plane-wave limit, as the full geometries relevant
to our cases are complicated and have so far not admitted a solvable
sigma model. We find it useful to first review the bubbling geometries
method which was used to construct the full dual supergravity
solutions -- the so-called Lin-Maldacena solutions -- in which the
problem is reduced to a classical axisymmetric electrostatics problem
\cite{Lin:2005nh}. The results presented here and in sections
\ref{sec:LCGPW} and \ref{sec:supch} are known from the literature. Our
contribution -- deriving the $SU(2|2)$ algebra -- is presented in
section \ref{sec:rest}.
The geometries have the bosonic symmetry $\mathbb{R}\times SO(3)\times
SO(6)$; they contain (in addition to the temporal direction) an $S^2$
and an $S^5$ whose radii vary with the remaining two coordinates $\rho$
and $\eta$. The geometries may be thought of as arising from M2 and
M5 branes wrapping these contractible spheres. These geometries also
contain a $(\rho,\eta)$-dependent dilaton and B-field, of which the
latter has its legs in the $S^2$. There are also one-form and
three-form Ramond-Ramond potentials $C_1$ and $C_3$, similarly
dependent only on $(\rho,\eta)$, for which $C_1 \propto dt$ and $C_3
\propto dt\wedge d\Omega_2$, where $d\Omega_2$ denotes again the $S^2$. The
$(\rho,\eta)$ plane is the scene of the axisymmetric electrostatics
problem ($\rho$ is the radial, and $\eta$ the axial coordinate), whereby
the electric potential $V(\rho,\eta)$ determines the specific
dependence of the geometry on those coordinates. More specifically, the
electric potential is that found in the presence of a configuration of
``critically'' charged conducting disks centered on the $\eta$-axis,
subject to a certain asymptotically defined external electric field.
The ``critical'' charge corresponds to the condition that the charge
density exactly vanishes at the edge of the disks, which in turn
implies that the electric field remains finite there. This condition
on the charge, and the asymptotic form of the external potential, are
determined by requiring that the corresponding supergravity solutions
are well-behaved and non-singular. The total charge per disk and the
distance between disks are proportional to the units of $*dC_3$ and
$dB_2$ flux on six- and three-cycles constructed from the $S^5$ and
$S^2$ and a $\rho$ or, respectively, $\eta$ fiber. The emerging picture
is very attractive, with the solutions in one-to-one correspondence
with disk configurations.
The disk configurations corresponding to the dual of SYM on $\mathbb{R}\times
S^2$ are the simplest: finite radii disks, a single disk corresponding
to the trivial vacuum where all scalar fields have zero VEV. Adding
more disks corresponds to the other vacua of the theory, see
\cite{Lin:2005nh} for a discussion. SYM on $\mathbb{R}\times S^3/\mathbb{Z}_k$
corresponds to a periodic extension of this configuration, extending to
$\pm \infty$ in $\eta$. Finally, the dual of the PWMM is obtained from
an infinite disk at $\eta=0$ giving the trivial vacuum, other vacua
being obtained through the addition of finite-radii disks at $\eta
>0$. The simplest plane-wave limit of these geometries is given by the
IIA plane-wave with $SO(3)\times SO(4)$ symmetry
\be\begin{split}\label{pwmu}
&ds^2 = -2dx^+dx^- -\left(\left(\frac{\beta}{3}\right)^2 x^ix^i
+ \left(\frac{\beta}{6}\right)^2 x^{i'}x^{i'} \right) (dx^+)^2
+ dx^i dx^i + dx^{i'} dx^{i'},\\
&F_{+123} = \beta = -3 F_{+4},
\end{split}
\end{equation}
where $i$ and $i'$ run from 1 to 4 and 5 to 8, respectively, and where
$\beta$ is an arbitrary positive constant. This geometry is obtained by
expanding the Lin-Maldacena geometries around the region corresponding
to the edge of a single disk, ensuring any other disks are
well-separated from this region; this is what is meant by
``well-spaced'' vacua in footnote \ref{foot:cav}.
Consistent with the focus of the paper, we will use SYM on $\mathbb{R} \times
S^2$ around the trivial vacuum as the prototypical example, but of
course the analysis that follows (in sections
\ref{sec:LCGPW}--\ref{sec:stringgen}) is valid for the single-disk, plane-wave limit
dual of any of the gauge theories. The full supergravity solution dual
to the trivial vacuum of SYM on $\mathbb{R} \times S^2$ is \cite{Lin:2005nh}
\be\begin{split}\label{LM}
&ds_{LM}^{2} = \alpha'L^{1/3} \Bigl[
-8(1+r^{2})f dt^{2}+16{f}^{-1}\sin ^{2}\theta d\Omega _{5}^{2}\\
&\qquad+\frac{8r f }{
r+(1+r^{2})\arctan r}\left( \frac{dr^{2}}{1+r^{2}}+d\theta ^{2}\right)
+\frac{2r\left[r+(1+r^{2})\arctan r\right] f }{1+r\arctan r}d{\Omega }_{2}^{2}
\Bigr],\\
&f \equiv \sqrt{\frac{2}{r}[r+ (\cos ^{2}\theta +r^{2})\arctan
r]},\\
&B_{2} =- L^{1/3} \frac{2\sqrt{2}\left[ r+(-1+r^{2})\arctan r
\right] \cos \theta }{1+r\arctan r}d^{2}\Omega , \\
&e^{\Phi } = K L^{1/2} 8 r^{\frac{1}{2}}(1+r\arctan
r)^{-\frac{1}{2}} [ r+(1+r^{2})\arctan r]^{-\frac{1}{2}}f
^{-\frac{1}{2}} , \\
\end{split}
\end{equation}
with
\be\begin{split}
&C_{1} =- K^{-1} L^{ - \frac{1}{3} }
\frac{\left[ r+(1+r^{2})\arctan r\right] \cos \theta }{2r}dt,
\\
&C_{3} =-K^{-1}
\frac{r [ r+(1+r^{2})\arctan r] ^{2}f ^{2}}{\sqrt{2}(1+r \arctan r) }dt\wedge d^{2}\Omega,
\end{split}
\end{equation}
where $L$ and $K$ are constants which will be related to gauge theory
parameters below. This solution may be viewed as an IR completion of
the D2-brane solution on $\mathbb{R}\times S^2$ \cite{Itzhaki:1998dd}, which
suffers from a diverging dilaton as the radial coordinate $r$
approaches zero, invalidating the 10-dimensional description, see
figure \ref{fig:LMD2}. The D2-brane solution is given by
\begin{equation}
ds_{D2}^2 = \alpha' C^{2/3} \left( r^{5/2} \left(-dt^2 + d\Omega_2^2
\right) + \frac{dr^2}{r^{5/2}} + r^{-1/2} d\Omega_6^2 \right),\quad
e^{\Phi} = \frac{g^2}{\mu C^{1/3}} \, r^{-5/4},
\end{equation}
where $C^2 = 6\pi^2 g^2 N/\mu$, $g$ is the Yang-Mills coupling
constant, and $1/\mu$ is the radius
of the gauge-theory $S^2$. Here we have used dimensionless coordinates
$t$ and $r$ in order to match up with those of $ds^2_{LM}$. It will be
important for us later in matching to the gauge theory results that
$t=\mu\, t_{YM}$, where $t_{YM}$ is the dimensionful gauge
theory time coordinate\footnote{The coordinate $r$ is related to the
usual coordinate $U$ from \cite{Itzhaki:1998dd} by $U=C^{2/3}\mu r$.}.
\begin{figure}
\begin{center}
\includegraphics*[bb=60 65 350 300, height=2.0in]{LMD2}
\end{center}
\caption{The Lin-Maldacena solution dual to SYM on $\mathbb{R}\times S^2$ is
a completion of the D2-brane geometry \cite{Itzhaki:1998dd} to the
IR region (small $r$).}
\label{fig:LMD2}
\end{figure}
Indeed taking $r\to\infty$ and scaling $r\to (8/\pi)^{1/3} r$, $t\to
t/2$ in $ds^2_{LM}$, $ds^2_{D2}$ is recovered with the following
identification of parameters
\begin{equation}\label{gtquan}
L = \frac{3\pi^3\lambda}{2^9\sqrt{2}} ,\quad
K = \frac{4 g^2\sqrt{2}}{(6\lambda)^{2/3}\pi},\quad
t_{LM} = \frac{t_{D2}}{2} = \frac{ \mu}{2} t_{YM},
\end{equation}
where we have introduced the dimensionless gauge theory 't Hooft
coupling $\lambda=g^2N/\mu$. As was explained in \cite{Lin:2005nh}, this
identification gives $L$ and $K$ in terms of gauge theory quantities.
Plugging them into the string coupling $\exp(\Phi)$ of the
Lin-Maldacena solution, one finds $\exp(\Phi) \sim \lambda^{5/6}/N \times g(r,\theta)$,
where the function $g(r,\theta)$ is always finite (i.e. $\leq {\cal
O}(1)$) and goes to zero for large-$r$ as $r^{-5/4}$. This implies
that at large-$N$, the string coupling is suppressed everywhere, and
so the solution may be trusted for any $r$.
The coordinate $r$ is related to the gauge theory energy scale. The
running of the SYM coupling constant is trivial and given simply by
dimensional analysis, so that the dimensionless effective coupling is
given by $g_{eff} = g^2N/E$, where $E$ is the relevant energy scale.
Since we have SYM on $\mathbb{R}\times S^2$, it is more sensible to express
this scale in units of the $S^2$ radius $1/\mu$, i.e. ${\cal E} =
E/\mu$, and so write $g_{eff} = \lambda/{\cal E}$. This running is reflected
in the string solution by the coordinate dependence of the string
coupling $\exp(\Phi)$. This allows us to identify ${\cal E} \sim
g^{-6/5}(r,\theta)$ and so at large-$r$, ${\cal E}\sim r^{3/2}$. The
curvature scale of the geometry, for example the inverse radius of the
$S^5$, diverges for large-$r$. Thus the strongly curved region
corresponds to weak effective gauge coupling or large gauge theory
energies; we will therefore call this the UV region. In this part of
the geometry (for large-$N$), strings propagate classically on a
string-scale-curved background. The small-$r$ or IR region
corresponds to strong effective gauge theory coupling and weak
curvature scales in the geometry, where a classical supergravity
analysis is appropriate.
If the large-$N$ limit is relaxed, the geometry may still be trusted
for large enough $r$, but at small $r$ the string coupling will be
large leading to a transition to an 11-dimensional description. For
small $N$, one expects then to make contact with the superconformal
M2-brane or Bagger-Lambert-Gustavsson
\cite{Bagger:2006sk,Bagger:2007jr,Bagger:2007vi,Gustavsson:2007vu}
theory (i.e. ABJM \cite{Aharony:2008ug} at $k=1,2$) and their massive
counterparts \cite{Hosomichi:2008jd,Hosomichi:2008jb,Gomis:2008cv,
Gomis:2008vc}. Of course, we do not expect to find integrability at
finite $N$.
We are interested in taking a Penrose limit of the Lin-Maldacena
geometry around a stable light-like geodesic on the $S^5$. The
geodesic is a line in the $t$-$\phi$ plane, where $\phi$ describes a
great circle in the $S^5$. This geodesic is located in the deep IR at
$r=0$, $\theta=\pi/2$, corresponding to the strongly coupled limit of
the gauge theory\footnote{This location in the $r$-$\theta$ plane
corresponds to $|g_{tt}| = g_{\phi\phi}$. At this location we have a
massless geodesic corresponding to $E=J$. Away from $r=0$,
$\theta=\pi/2$, $E > J$ and the geodesic describes a particle of
mass $m^2 = E^2-J^2$. Thus the chosen geodesic is a stable
minimization of the energy \cite{Lin:2005nh}.}. The radius of the
$S^5$ in (\ref{LM}) at $r=0$ and $\theta=\pi/2$ is equal to
\begin{equation}\label{Rdef}
\frac{R_{S^5}^2}{\alpha'} = 8 \sqrt{2} L^{1/3} = \left( 6\pi^3\lambda\right)^{1/3},
\end{equation}
where we have used (\ref{gtquan}) to express it in terms of gauge
theory quantities. In order to take the Penrose limit we define the
energy $E$ and angular momentum $J$ generators as $i\partial_t \equiv E-J $
and $-i\partial_\phi \equiv J$, and then define ($R^2=R^2_{S^5}$)
\be\begin{split}\label{xpmdef}
&x^+ = t,\quad x^-=R^2(t-\phi),\quad
p^- = i\partial_{x^+} = i \partial_t =E-J,\\
&p^+ = i\partial_{x^-} = R^{-2}(i\partial_t-i\partial_\phi) = R^{-2} E \simeq R^{-2} J,
\end{split}
\end{equation}
so that the light-cone energy is zero for $E=J$. We will take the
usual BMN-like limit by taking $R\to\infty$ and concentrating on
states with finite $p^+$ and $p^-$, so that $E \gtrsim J \sim R^2$.
Starting from (\ref{LM}) we take a Penrose limit around $r=0$, $\theta=\pi/2$
\be\begin{split}
&\theta = \frac{\pi}{2} + \frac{\sqrt{2}}{R} \, x^{i=1},\quad
r \Theta^{i=2,3,4}= \frac{\sqrt{2}}{R} \, x^{i=2,3,4},\\
&d\Omega_5^2 = \frac{dy^2}{(1+y^2/4)^2} + \frac{(1-y^2/4)^2}{(1+y^2/4)^2}
d\phi^2,\quad
y^{i'} = \frac{x^{i'}}{R},
\end{split}
\end{equation}
where $\vec\Theta$ is the embedding of the unit-$S^2$ which appears as
$d\Omega_2^2$ in (\ref{LM}) into $\mathbb{R}^3$. This gives the IIA plane-wave
\begin{equation}
ds^2 = -2dx^+dx^- -\left(4 x^ix^i
+ x^{i'}x^{i'} \right) (dx^+)^2 + dx^idx^i+ dx^{i'} dx^{i'} +{\cal O}(R^{-2}),
\end{equation}
where one can match up with (\ref{pwmu}) by setting\footnote{One may
also verify that the Ramond-Ramond field strengths come out
correctly. Note that the constant $\beta$ may be absorbed into the
coordinates and their relations to gauge theory parameters and has
no physical significance in the gauge theory.} $\beta = 6$. In the
following sections we will analyze the $x^{i'}$ excitations of strings
on this geometry. It is of course well-known that the energy
of such excitations is given by
\begin{equation}
p^- = \sum_i \sqrt{\left(\frac{\beta}{6}\right)^2 + \frac{n_i^2}{(\alpha' p^+)^2}},
\end{equation}
where $n_i$ are the worldsheet momenta of the excitations. Using
(\ref{gtquan}) and (\ref{xpmdef}), and setting $\beta=6$ one obtains
\cite{Lin:2005nh}
\begin{equation}
H_{YM} = \frac{\mu}{2}\,p^-=\frac{\mu}{2} \sum_i \sqrt{1 + \frac{R^4}{\alpha'^2} \frac{n_i^2}{J^2}},
\end{equation}
which matches the gauge theory result (\ref{HYM}) in the $k_i =n_i/J \ll 1$
limit, and through (\ref{Rdef}), gives us the strong coupling limit of
the function $h(\lambda)$
\begin{equation}
h(\lambda) \simeq \left(6\pi^3 \lambda \right)^{1/3}.
\end{equation}
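To make the comparison explicit: for $k_i = n_i/J \ll 1$ one has
$4\sin^2(k_i/2) \simeq k_i^2$, so the gauge theory result (\ref{HYM})
reduces to
\begin{equation}
H \simeq \frac{\mu}{2}\sum_i\sqrt{1 + h^2(\lambda)\,\frac{n_i^2}{J^2}},
\end{equation}
and agreement with the string spectrum requires $h(\lambda) =
R^2/\alpha'$ at strong coupling, which by (\ref{Rdef}) is the value
quoted above.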
It is worth pointing out that while the three dimensional gauge theory has only sixteen supersymmetries, the plane-wave geometry is invariant under 24 supersymmetries. This is suggestive of an enhancement of supersymmetry in the BMN limit of the strongly coupled gauge theory, and of a potential connection between the three dimensional strongly coupled SYM theory and $\mathcal{N}=6$ Chern-Simons models.
We will now continue with an analysis of the supersymmetry algebra of
strings on the IIA plane-wave, with the ultimate goal of uncovering
the $SU(2|2)$ structure found in the gauge theory. We begin with some
general known results.
\subsection{Light-cone gauge strings on a plane-wave}
\label{sec:LCGPW}
The light-cone gauge quantization of strings on the plane-wave
geometry (\ref{pwmu}) corresponding to the Penrose limit described in the previous
section was carried out in the series of papers
\cite{Sugiyama:2002tf,Hyun:2002wp,Hyun:2002wu}. Using this analysis,
we will show that the $SU(2|2)$ algebra emerges from the commutation
relations of the supercharges. The main issue is to show that the
central charges emerge from a relaxation of the level-matching
condition, exactly as was shown for strings on $AdS_5\times S^5$ in
\cite{Arutyunov:2006ak}. We will find it useful to repeat portions of
the analysis in \cite{Hyun:2002wp}, both in the interest of
readability and because we will use slightly different conventions.
The string action is given by
\be\begin{split}
S = -\frac{1}{4\pi\alpha'} \int& d^2\sigma \,\Bigl\{ \sqrt{-h} h^{ab}\Bigl[
-2\partial_a X^+ \partial_b X^- + \partial_a X^I \partial_b X^I - M(X^I) \partial_a X^+ \partial_b X^+\\
&-2 \partial_a X^+ \bar \theta \Gamma^- \partial_b \theta + \partial_a X^+ \partial_b X^+
\Upsilon(\theta) \Bigr]
+2 \epsilon^{ab}\, \partial_a X^+\, \bar \theta \Gamma^{-9} \partial_b\theta \Bigr\},
\end{split}
\end{equation}
where the index $I=(i,i')$, where $i=1,\ldots,4$ and $i'=5,\ldots,8$,
and where we have introduced the shorthand
\begin{equation}
M(X^I) = \left(\frac{\beta}{3}\right)^2 X^iX^i
+ \left(\frac{\beta}{6}\right)^2 X^{i'}X^{i'} ,\qquad
\Upsilon(\theta) = \frac{\beta}{2}\,\bar\theta \Gamma^-\left(\Gamma^{123} + \frac{1}{3}\Gamma^{49}\right)\theta.
\end{equation}
We will hold off for the time being on the explicit details of the
fermions $\theta$ and the 10-d gamma matrices $\Gamma^A$. We continue by
calculating the Virasoro constraints, and then imposing the light-cone
gauge
\begin{equation}
h_{ab} = \text{diag}(-1,1),\quad X^+ = \alpha'p^+\tau.
\end{equation}
We find
\be\begin{split}\label{vircons}
&{X^-}' = \frac{1}{\alpha'p^+} \left( \dot X \cdot X' - \alpha' p^+ \bar\theta \Gamma^-
\theta' \right), \\
&\dot X^- = \frac{1}{2\alpha' p^+} \left( \dot X^2 + {X'}^2 - (\alpha' p^+)^2
M(X^I) -2\alpha' p^+ \bar \theta \Gamma^-\dot \theta + (\alpha' p^+)^2 \Upsilon(\theta) \right),
\end{split}
\end{equation}
where $\dot {} = \partial_0 = \partial_\tau$ and $' = \partial_1 = \partial_\sigma$, and the inner
product is over the composite $I$ index. Using these expressions to
eliminate $X^-$ we may calculate the light-cone conjugate momenta. The
Lagrangian $L$ and Lagrangian density ${\cal L}$ are given as
\begin{equation}
S = \int d\tau \, L = \int d^2\sigma\, {\cal L},
\end{equation}
from which we calculate
\be\begin{split}
&p^+ = \int_0^{2\pi} d\sigma\, P^+
= \int_0^{2\pi} d\sigma\,\left(-\frac{\delta{\cal L}}{\delta\dot X^-}\right),\\
&p^- = \int_0^{2\pi} d\sigma\, P^-
= \int_0^{2\pi} d\sigma\,\left(-\frac{\delta{\cal L}}{\delta\dot X^+}\right)
\equiv H.
\end{split}
\end{equation}
The first of these equalities is a tautology resulting from the
consistent definition of $X^+$, while the second yields the light-cone
Hamiltonian $H$, given by
\begin{equation}
H= \frac{1}{4\pi{\alpha'}^2p^+} \int_0^{2\pi} d\sigma\, \Bigl(
\dot X^2 + {X'}^2 + (\alpha' p^+)^2 M(X^I)
+2\alpha' p^+ \bar \theta \Gamma^{-9} \theta' - (\alpha' p^+)^2 \Upsilon(\theta) \Bigr).
\end{equation}
The first Virasoro constraint in (\ref{vircons}) yields one extra
piece of information, the level matching condition. It is this
condition which is relaxed in order to reveal the central charge of
the $SU(2|2)$ algebra. This issue has been worked out for the
$AdS_5\times S^5$ string in \cite{Arutyunov:2006ak}, where it has been
shown that the central charge, related as it is to changing the length
of the gauge theory spin-chain \cite{Beisert:2005tm}, appears in the
string treatment by going off-shell through a relaxation of the level
matching condition. Specifically, one takes the total worldsheet
momentum $p_{ws}$ to be
\begin{equation}\label{levelmatch}
p_{ws} = \frac{1}{\alpha'p^+} \int_0^{2\pi}d\sigma
\left( \dot X \cdot X' - \alpha' p^+ \bar\theta \Gamma^-
\theta' \right) \neq 0.
\end{equation}
We will see that the supersymmetry algebra will allow us to associate
$p_{ws}$ with the gauge theory magnon momentum $k$.
\subsection{Supercharges and algebra}
\label{sec:supch}
The fermions $\theta$ are given by \cite{Hyun:2002wu}
\begin{equation}
\theta = \frac{1}{\sqrt{2\alpha' p^+}}\frac{1}{2^{1/4}}
\begin{pmatrix} 0\\ \psi^A \end{pmatrix},
\end{equation}
where the $\psi^A$ are 16-component real and are further decomposed
according to their $SO(8)$ and $SO(4)$ chiralities
\begin{equation}
\gamma^9 \psi^1_{\pm} = + \psi^1_{\pm},\quad
\gamma^9 \psi^2_{\pm} = - \psi^2_{\pm},\quad
\gamma^{1234} \psi^A_{\pm} = \pm \psi^A_{\pm}.
\end{equation}
The gamma matrices and related conventions are collected in appendix
\ref{app:gammas}. There are two dynamical supercharges, $Q_+^1$ and
$Q_-^2$, which were constructed in \cite{Hyun:2002wp}:
\be\begin{split}
Q_+^1 = \frac{1}{4\pi\alpha'}\frac{1}{\sqrt{\alpha'p^+}}\int_0^{2\pi}d\sigma\,
\Bigl( &\partial_-X^i \gamma^i \psi_-^1 + \frac{m}{3}X^i\gamma^i\gamma^4\psi_+^2 \\
&+ (i \to i',~ \psi_+ \leftrightarrow \psi_-,~m \to -m/2) \Bigr),\\
Q_-^2 = \frac{1}{4\pi\alpha'}\frac{1}{\sqrt{\alpha'p^+}}\int_0^{2\pi}d\sigma\,
\Bigl( &\partial_+X^i \gamma^i \psi_+^2 - \frac{m}{3}X^i\gamma^i\gamma^4\psi_-^1 \\
&+ (i \to i',~ \psi_+ \leftrightarrow \psi_-,~m \to -m/2) \Bigr),
\end{split}
\end{equation}
where $\partial_\pm = \partial_\tau \pm \partial_\sigma$, and $m=\beta\alpha'p^+$. The canonical
commutation relations for the fields at equal times $\tau$ are given by
\begin{equation}
[X^I(\sigma), \dot X^J(\sigma')] = i 2\pi\alpha' \delta^{IJ} \delta(\sigma-\sigma'),\quad
\{\psi^A_\pm(\sigma), \psi^B_{\pm}(\sigma')\} = 2\pi \alpha' \delta^{AB} \delta(\sigma-\sigma').
\end{equation}
In order to reveal the $SU(2|2)$ algebra we are interested in, we find
it necessary to define the following projected supercharges
\begin{equation}
Q = \frac{e^{i\pi/4}}{\sqrt{2}}(1+\gamma^4)
\left( Q^1_+ + i Q^2_- \right),\qquad
\bar Q \equiv Q^* = \frac{e^{-i\pi/4}}{\sqrt{2}}(1+\gamma^4)
\left( Q^1_+ - i Q^2_- \right),
\end{equation}
and then to restrict $Q$ and $\bar Q$ to the
appropriate subalgebra. First let us quote the
result before the restriction. We find\footnote{We also find a
contribution to $\{Q,Q\}$ and $\{\bar Q,\bar Q\}$ given by $
-\frac{i}{8\pi\alpha'} \frac{\beta}{3}\int_0^{2\pi} d\sigma
\left( \psi^1_-\psi^1_- -\psi^1_+\psi^1_+ + \psi^2_+\psi^2_+ -
\psi^2_-\psi^2_-\right) (1+\gamma^4)_{\alpha\beta}$, which is a vanishing sum of
$\delta(0)$ infinities.}
\be\begin{split}\label{bigcomms}
&\{ Q_\alpha, \bar Q_\beta\} = (1+\gamma^4)_{\alpha\beta} \, H
+ \frac{\beta}{3} \, {\cal J}_{\alpha\gamma} (1+\gamma^4)_{\gamma\beta}
- \frac{\beta}{6} \, {\cal J'}_{\alpha\gamma} (1+\gamma^4)_{\gamma\beta}, \\
&\{ Q_\alpha, Q_\beta \} = -i\frac{p_{ws}}{2\pi\alpha'} (1+\gamma^4)_{\alpha\beta},\\
&\{ \bar Q_\alpha, \bar Q_\beta \} = i\frac{p_{ws}}{2\pi\alpha'} (1+\gamma^4)_{\alpha\beta},\\
\end{split}
\end{equation}
where $\alpha,\beta,\gamma$ are $SO(8)$ spinor indices ranging from 1 to 16, and where
\be\begin{split}
&{\cal J}_{\alpha\gamma} = \frac{i}{4\pi\alpha'} \int_0^{2\pi} d\sigma
\Bigl( \dot X^{\hat i} X^{\hat j} - \dot X^{\hat j} X^{\hat i}\\
&\qquad\qquad\qquad
-\frac{i}{4} \left( \psi^1_- \gamma^{\hat i \hat j} \psi^1_-
+\psi^1_+\gamma^{\hat i \hat j}\psi^1_+ + \psi^2_+\gamma^{\hat i \hat j}\psi^2_+ +
\psi^2_-\gamma^{\hat i \hat j}\psi^2_-\right)\Bigr)\, \gamma^{\hat i
\hat j}_{\alpha\gamma},\\
&{\cal J'}_{\alpha\gamma} = \frac{i}{4\pi\alpha'} \int_0^{2\pi} d\sigma
\Bigl( \dot X^{i'} X^{j'} - \dot X^{j'} X^{i'}\\
&\qquad\qquad\qquad
-\frac{i}{4} \left( \psi^1_- \gamma^{i' j'} \psi^1_-
+\psi^1_+\gamma^{i' j'}\psi^1_+ + \psi^2_+\gamma^{i' j'}\psi^2_+ +
\psi^2_-\gamma^{i' j'}\psi^2_-\right)\Bigr)\, \gamma^{i' j'}_{\alpha\gamma},\\
\end{split}
\end{equation}
where we have introduced the $SO(3)$ index $\hat i = 1,2,3$. The
crucial observation is that the central extension of the algebra (i.e.
the non-zero values of the $\{Q,Q\}$ and $\{\bar Q, \bar Q\}$
anticommutators) comes from a relaxation of the level-matching
condition (\ref{levelmatch}).
\subsection{Restriction to $SU(2|2)$}
\label{sec:rest}
In order to uncover the $SU(2|2)$ structure, we need to decompose the
$SO(8)$ fermions into $SU(2)^4$. This decomposition is discussed in
detail in \cite{Pankiewicz:2003kj}. We note that $\psi^2_\pm$ is in
the ${\bf 8_c}$ of $SO(8)$ while $\psi^1_\pm$ is in the ${\bf 8_s}$.
The decomposition into $SU(2)^4$ is different for the different
$SO(8)$ chiralities
\be\begin{split}
&{\bf 8_s} \to ({\bf 2},{\bf 2}) \oplus ({\bf 2'},{\bf 2'}),
~~\text{i.e.}~~\psi^1_a \to {\psi^1_+}_{\alpha_1\alpha_2} \oplus {\psi^1_-}^{\dot{\alpha}_1\dot{\alpha}_2}, \\
&{\bf 8_c} \to ({\bf 2},{\bf 2'}) \oplus ({\bf 2'},{\bf 2}),
~~\text{i.e.}~~\psi^2_{\dot a} \to {\psi^2_+}_{\alpha_1}^{\dot{\alpha}_2} \oplus {\psi^2_-}^{\dot{\alpha}_1}_{\alpha_2} ,\\
\end{split}
\end{equation}
where $a$ and $\dot a$ run from $1$ to $8$ and
$\alpha_1,\alpha_2,\dot{\alpha}_1,\dot{\alpha}_2$ are the indices of the four $SU(2)$'s. The
indices and gamma matrices are expounded in appendix
\ref{app:gammas}. We are interested in excitations lying in the
$SO(4)$ piece of the geometry (i.e. labelled by indices
$i',j'$). Therefore we restrict our attention to the $X^{i'}$ fields
and their superpartners $\psi^2_-$ and $\psi^1_+$. There is a freedom
in choosing either $Q\sim Q^1_+ + i\gamma^4 Q^2_-$ or $Q\sim \gamma^4 Q^1_+ + i
$Q^2_-$ for our $SU(2|2)$ supercharge\footnote{Since $Q^1_+$ and $Q^2_-$
are of different $SO(8)$ chirality, $ Q^1_+ + iQ^2_-$ or
$ \gamma^4(Q^1_+ + iQ^2_-)$ mix $SU(2)$ representations and cannot
contribute to an $SU(2|2)$ supercharge.}. Without loss of generality,
we choose the latter option. Specifically, we define
\be\begin{split}
Q^{\dot{\alpha}_2}_{\dot{\alpha}_1} = \frac{e^{i\pi/4}}{4\pi\alpha'} \frac{1}{\sqrt{\alpha'p^+}}
\int_0^{2\pi}d\sigma
\Biggl\{ &\left(i\partial_++\frac{m}{6}\right) X^{\dot{\alpha}_2\gamma_2}
{\psi^2_-}_{\dot{\alpha}_1\gamma_2}\\
+&\left(i\partial_- + \frac{m}{6} \right) X^{\dot{\alpha}_2\gamma_2}\, i
{\sigma^4}^{\sigma_1}_{\dot{\alpha}_1}\, {\psi^1_+}_{\sigma_1 \gamma_2} \Biggr\},\\
\bar Q^{\dot{\alpha}_1}_{\dot{\alpha}_2} = \frac{e^{-i\pi/4}}{4\pi\alpha'} \frac{1}{\sqrt{\alpha'p^+}}
\int_0^{2\pi}d\sigma
\Biggl\{ &\left(-i\partial_++\frac{m}{6}\right) X_{\dot{\alpha}_2}^{\gamma_2}
{\psi^2_-}^{\dot{\alpha}_1}_{\gamma_2}\\
-&\left(-i\partial_- + \frac{m}{6} \right) X_{\dot{\alpha}_2}^{\gamma_2}\, i
{\sigma^4}^{\sigma_1\dot{\alpha}_1}\, {\psi^1_+}_{\sigma_1 \gamma_2} \Biggr\},\\
\end{split}
\end{equation}
where we have defined $X^{\dot{\alpha}_2\gamma_2} = X^{i'} {\sigma^{i'}}^{\dot{\alpha}_2\gamma_2}$.
We will now express these supercharges in terms of string oscillators.
We will be interested in the action of the algebra on excited states,
and so we leave out the zero-mode part of the following expressions.
The mode expansions for the fields were worked out in detail in
\cite{Hyun:2002wp} and are collected in appendix \ref{app:gammas}.
With these expansions we find
\be\begin{split}\label{Qosc}
&Q^{\dot{\alpha}_2}_{\dot{\alpha}_1} = i\frac{e^{i\pi/4}}{\sqrt{2\alpha' p^+}} \sum_{n\neq 0}
\Omega_n \left( \alpha_n^{\dot{\alpha}_2\gamma_2} {\psi_{-n}}_{\dot{\alpha}_1\gamma_2}+ \tilde
\alpha_n^{\dot{\alpha}_2\gamma_2} {\tilde\psi}^4_{-n\,\dot{\alpha}_1\gamma_2} \right),\\
&\bar Q^{\dot{\alpha}_1}_{\dot{\alpha}_2} = -i\frac{e^{-i\pi/4}}{\sqrt{2\alpha' p^+}} \sum_{n\neq 0}
\Omega_{-n} \left( \alpha_{n\,\dot{\alpha}_2}^{\gamma_2} {\psi_{-n}}_{\gamma_2}^{\dot{\alpha}_1}- \tilde
\alpha_{n\,\dot{\alpha}_2}^{\gamma_2} {\tilde\psi}^{4\,\dot{\alpha}_1}_{-n\,\gamma_2} \right),\\
\end{split}
\end{equation}
where ${\tilde\psi}^4_{-n\,\dot{\alpha}_1\gamma_2} =i\, \sigma^{4\,\alpha_1}_{\dot{\alpha}_1}
{\tilde\psi}_{-n\,\alpha_1\gamma_2} $, and where
\begin{equation}
\Omega_n =
\frac{1+\frac{6}{m}(\omega_n-n)}{\sqrt{1+\left(\frac{6}{m}\right)^2(\omega_n-n)^2}},\qquad
\omega_n = \ensuremath{\mathop{\mathrm{sign}}}(n) \sqrt{\left(\frac{m}{6}\right)^2+n^2}.
\end{equation}
In order to accomplish a realization of the $SU(2|2)$ algebra, we must
identify a restricted set of level-I states upon which the algebra
closes. These are states with one oscillator. We choose it to be a
left-moving (untilded) oscillator, but the opposite choice is equally
valid. The main point in uncovering the $SU(2|2)$ structure, as was
discussed previously, is to relax the level-matching condition. We
therefore do not consider any right-moving excitations. We define the
(un-level-matched) states
\begin{equation}\label{levelI}
|\phi^{\dot{\beta}_2}\rangle_{\gamma_2} = \alpha_{-n\,\gamma_2}^{\dot{\beta}_2} |0\rangle, \qquad
|\psi^{\dot{\beta}_1}\rangle_{\gamma_2} = \psi_{-n\,\gamma_2}^{\dot{\beta}_1}
|0\rangle,
\end{equation}
where $\gamma_2$ is a spectator index which we subsequently drop and
$n> 0$. We then find the standard $SU(2|2)$ action
\be\begin{split}
&Q^{\dot{\alpha}_1}_{\dot{\alpha}_2} |\phi^{\dot{\beta}_2}\rangle = a\, \delta^{\dot{\beta}_2}_{\dot{\alpha}_2}
|\psi^{\dot{\alpha}_1}\rangle,\\
&Q^{\dot{\alpha}_1}_{\dot{\alpha}_2} |\psi^{\dot{\beta}_1}\rangle = b\, \epsilon^{\dot{\alpha}_1\dot{\beta}_1}
\epsilon_{\dot{\alpha}_2\dot{\beta}_2}|\phi^{\dot{\beta}_2}\rangle,\\
&\bar Q^{\dot{\alpha}_2}_{\dot{\alpha}_1} |\phi^{\dot{\beta}_2}\rangle = c\, \epsilon^{\dot{\alpha}_2\dot{\beta}_2}
\epsilon_{\dot{\alpha}_1\dot{\beta}_1}|\psi^{\dot{\beta}_1}\rangle,\\
&\bar Q^{\dot{\alpha}_2}_{\dot{\alpha}_1} |\psi^{\dot{\beta}_1}\rangle = d\, \delta^{\dot{\beta}_1}_{\dot{\alpha}_1}
|\phi^{\dot{\alpha}_2}\rangle,\\
\end{split}
\end{equation}
where we have that
\be\begin{split}
&a = 2i \frac{e^{i\pi/4}}{\sqrt{2\alpha'p^+}} \,\Omega_n\, \omega_n, \quad
b = i \frac{e^{i\pi/4}}{\sqrt{2\alpha'p^+}} \,\Omega_{-n}, \\
&c = -2i \frac{e^{-i\pi/4}}{\sqrt{2\alpha'p^+}} \,\Omega_{-n}\, \omega_n, \quad
d = -i \frac{e^{-i\pi/4}}{\sqrt{2\alpha'p^+}} \,\Omega_{n}.
\end{split}
\end{equation}
We note that
\be\begin{split}\label{abcd}
&\frac{1}{2}\left(ad+bc\right) = \frac{\omega_n}{\alpha'p^+},\qquad
ad -bc = \frac{\omega_n}{\alpha'p^+}\left(\Omega_n^2-\Omega_{-n}^2\right) \simeq
\frac{\beta}{3},\\
&ab = (cd)^*= -\frac{i}{\alpha'p^+} \Omega_n \Omega_{-n} \omega_n \simeq -\frac{in}{\alpha'p^+},
\end{split}
\end{equation}
where $\simeq$ indicates the $n \ll \alpha' p^+$ limit. We notice the
consistency with our expectations: the energy of the state (i.e.
$p^-$) is indeed given by $(ad+bc)/2$, while the central charge $ab$
is indeed proportional to the small $n$ limit of $(e^{-in/J}-1)$ with
the consistent proportionality constant $R^2/\alpha'$ appearing in the
energy\footnote{Here we have used $p^+ = J/R^2$, see (\ref{xpmdef}).}.
Computing the $\{Q,\bar Q\}$ anticommutator using (\ref{Qosc}), we find
(in the $n \ll \alpha' p^+$ limit)
\be\begin{split}
&\{ Q^{\dot{\alpha}_1}_{\dot{\alpha}_2}, \bar Q^{\dot{\beta}_2}_{\dot{\beta}_1} \}=
\delta_{\dot{\beta}_1}^{\dot{\alpha}_1} \delta^{\dot{\beta}_2}_{\dot{\alpha}_2} H + \frac{\beta}{3}\,\delta^{\dot{\beta}_2}_{\dot{\alpha}_2} {\cal L}_{\dot{\beta}_1}^{\dot{\alpha}_1}
+\frac{\beta}{3}\,\delta_{\dot{\beta}_1}^{\dot{\alpha}_1} {\cal R}^{\dot{\beta}_2}_{\dot{\alpha}_2},\\
&\{ Q^{\dot{\alpha}_1}_{\dot{\alpha}_2}, Q^{\dot{\beta}_1}_{\dot{\beta}_2} \}=
\epsilon^{\dot{\alpha}_1\dot{\beta}_1}\epsilon_{\dot{\alpha}_2\dot{\beta}_2} {\cal P},\qquad
\{\bar Q^{\dot{\alpha}_2}_{\dot{\alpha}_1}, \bar Q^{\dot{\beta}_2}_{\dot{\beta}_1} \}=
\epsilon^{\dot{\alpha}_2\dot{\beta}_2}\epsilon_{\dot{\alpha}_1\dot{\beta}_1} {\cal K},
\end{split}
\end{equation}
where\footnote{Supersymmetry ensures that the normal ordering
constants in $H$ and ${\cal P}$ are zero.}
\be\begin{split}
&{\cal R}^{\dot{\beta}_2}_{\dot{\alpha}_2} = \frac{1}{2}\sum_{n> 0} \frac{1}{\omega_n}
\left(\alpha^{\dot{\beta}_2\gamma_2}_{-n} \alpha_{n\,\dot{\alpha}_2\gamma_2} -
\frac{1}{2}\delta^{\dot{\beta}_2}_{\dot{\alpha}_2}\,
\alpha^{\dot{\gamma}_2\gamma_2}_{-n} \alpha_{n\,\dot{\gamma}_2\gamma_2}+\text{right-movers}\right),\\
&{\cal L}_{\dot{\beta}_1}^{\dot{\alpha}_1} = \sum_{n> 0}
\left( \psi_{-n\,\gamma_2}^{\dot{\alpha}_1}\psi_{n\,\dot{\beta}_1}^{\gamma_2}
-\frac{1}{2}\delta^{\dot{\beta}_1}_{\dot{\alpha}_1}
\psi_{-n\,\gamma_2}^{\dot{\gamma}_1}\psi_{n\,\dot{\gamma}_1}^{\gamma_2}+\text{right-movers}\right),\\
&H = \frac{1}{\alpha'p^+} \sum_{n> 0} \left( \alpha^{i'}_{-n} \alpha^{i'}_{n}
+\omega_n \, \psi^{\dot{\gamma}_1}_{-n\,\gamma_2}\psi_{n\,\dot{\gamma}_1}^{\,\gamma_2}+\text{right-movers}\right),\\
&{\cal P} = -{\cal K}= -\frac{i}{\alpha'p^+} \sum_{n> 0}n \left(\frac{1}{\omega_n} \alpha^{i'}_{-n} \alpha^{i'}_{n}
+ \psi^{\dot{\gamma}_1}_{-n\,\gamma_2}\psi_{n\,\dot{\gamma}_1}^{\,\gamma_2} -\text{right-movers}\right).\\
\end{split}
\end{equation}
Note that ${\cal P}$ is nothing but the level-matching operator
(restricted to the $X^{i'}$ supermultiplet) as previously discussed.
One can then verify that the actions of ${\cal R}$ and ${\cal L}$ upon
our states are
\be\begin{split}
{\cal R}^{\dot{\beta}_2}_{\dot{\alpha}_2} |\phi^{\dot{\gamma}_2}\rangle =
\delta^{\dot{\gamma}_2}_{\dot{\alpha}_2} |\phi^{\dot{\beta}_2}\rangle - \frac{1}{2} \delta^{\dot{\beta}_2}_{\dot{\alpha}_2}
|\phi^{\dot{\gamma}_2}\rangle,\\
{\cal L}_{\dot{\beta}_1}^{\dot{\alpha}_1} |\psi^{\dot{\gamma}_1}\rangle =
\delta^{\dot{\gamma}_1}_{\dot{\beta}_1} |\psi^{\dot{\alpha}_1}\rangle - \frac{1}{2} \delta^{\dot{\alpha}_1}_{\dot{\beta}_1}
|\psi^{\dot{\gamma}_1}\rangle,\\
\end{split}
\end{equation}
as they should be. Finally we note that the value of $ad-bc$ from
(\ref{abcd}) is what it needs to be in order to close the algebra. We
have thus found the centrally extended $SU(2|2)$ algebra in the $n \ll
\alpha' p^+$ limit of the string dual to SYM on $\mathbb{R}\times S^2$.
\subsection{Generalizations, $S$-matrix, finite-size effects, and giant magnons}
\label{sec:stringgen}
As discussed at the start of section \ref{sec:string}, the IIA
plane-wave appears in the BMN-like limit of the string duals of a rich
class of vacua of any of the three theories: SYM on $\mathbb{R}\times S^2$,
SYM on $\mathbb{R}\times S^3/\mathbb{Z}_k$, or the PWMM. Thus the $SU(2|2)$ algebra
derived in the last section exists in all of these theories, as long
as the vacuum for the model being studied is well separated. For
analyses similar to those at the start of section \ref{sec:string},
but for $\mathbb{R}\times S^3/\mathbb{Z}_k$ and the PWMM, see
\cite{Lin:2005nh}. Having found the $SU(2|2)$ structure, it is natural
to ask whether we can repeat the very rich battery of tests and
analyses which have been carried out in the case of $AdS/CFT$ for
$AdS_5\times S^5$, assuming that our gauge theories really do possess an
all-loop integrable sector. These include the matching of energies of
spinning strings to the thermodynamic limit of the associated
spin-chains, matching the worldsheet S-matrix to gauge theory,
matching finite-size effects (i.e. $1/J$ corrections to the energies
of states), and the existence of solitonic string configurations with
very large worldsheet momentum, the giant magnons. In this section we
will visit each of these issues in a qualitative manner, leaving any
concrete analyses to further work.
\subsubsection{Worldsheet $S$-matrix and finite-size effects}
In order to discuss a worldsheet $S$-matrix, we must have an
interacting sigma model. Since the plane-wave worldsheet theory is
free, one must include curvature corrections in order to develop
worldsheet interactions. The near plane-wave limit is complicated (as
compared to the $AdS_5\times S^5$ case \cite{Callan:2003xr}) by
the dependence of the Lin-Maldacena geometries on the $\rho$ and $\eta$
coordinates (the coordinates $r$ and $\theta$ in (\ref{LM})), i.e. the
spatial coordinates transverse to the $S^2$ and $S^5$. The dilaton,
B-field, and Ramond-Ramond field strengths develop dependence on these
coordinates away from the strict plane-wave limit, i.e. their ${\cal
O}(R^{-2})$ corrections are $\eta$ and $\rho$ dependent. However,
despite these complications, if, as we expect, the $SU(2|2)$ symmetry
is exact and so remains at ${\cal O}(R^{-2})$, then the $S$-matrix is
highly constrained by this symmetry, to a single undetermined function
$S^0_{12}$ (see (\ref{S012})) \cite{Beisert:2005tm}. If we consider
the scattering of the bosonic $SO(4)$ excitations (the $X^{i'}$ of
section \ref{sec:LCGPW}) on a subset of states with no excitations
from the $SO(3)$ part of the geometry, then we will find the same
relevant $S$-matrix elements as those found for the $AdS_5\times S^5$
superstring in \cite{Klose:2006zd}. This is the trivial statement that
both theories share an $S^5$ and so share its near plane-wave
geometry. But then the function $S^0_{12}$ is determined at this order
in the large-$R$ expansion. It would therefore not be surprising if
the only change between $AdS_5\times S^5$ and the theories considered
here is that $\lambda$ is with replaced with $h(\lambda)$\footnote{Of course
there will be a different $h(\lambda)$ for each gauge theory and each
vacua around which the expansion is being carried out.}, i.e. that
the same expression for the BES phase-factor \cite{Beisert:2006ez}
found in the expression for $S^0_{12}$ in ${\cal N}=4$ SYM is the
relevant one here with $\lambda\to h(\lambda)$. Further work would be required
to verify (or disprove) this possibility. Similar statements apply to
the finite-size corrections to the string spectrum. At leading order,
these are given by first-order perturbation theory, and by the same
logic, states built from bosonic $SO(4)$ excitations alone must share
the same leading order finite-size corrections as those found in
$AdS/CFT$. The non-trivial information comes at next-to-leading order,
where second-order perturbation theory must be used, and where $SO(3)$
excitations appear in the intermediate states.
\subsubsection{Spinning strings and giant magnons}
Continuing our analogy with $AdS_5\times S^5$ we may think about
macroscopic spinning strings, corresponding to the thermodynamic limit
of the gauge theory spin-chains. Again, the Lin-Maldacena geometries
contain an $S^5$ which is the site of the $SU(2|2)$ symmetry. Any of
the spinning string solutions with spins only in the $S^5$ may be
borrowed from $AdS_5\times S^5$. The only difference is the modified
relationship between the radius of the $S^5$ and the gauge theory
coupling, i.e. the strong coupling consequence of replacing $\lambda\to
h(\lambda)$. The interesting questions in this regard are the $1/R^2$
corrections to the energies of these spinning strings. A
semi-classical calculation would require including the fluctuations of
the $SO(3)$ modes, and these have a very different structure than the
corresponding $AdS_5$ modes (c.f. \cite{Tseytlin:2003ii}). It would be
very interesting to attempt such a calculation, which would go towards
fixing $h(\lambda)$ at next-to-leading order at strong coupling and give
information on the form of the phase-factor in $S^0_{12}$. The giant
magnon is of course also present in the Lin-Maldacena geometries, for
the same trivial reason that there is an $S^5$ which will accommodate
it. This is consistent with the strong-coupling limit of the $SU(2|2)$
dispersion relation (\ref{HYM}). The finite-size correction to the
energy of the giant magnon \cite{Arutyunov:2006gs,Astolfi:2007uz} does
not require a semi-classical treatment; the calculation takes place
within the $\mathbb{R}\times S^2$ holding the magnon solution. Thus that
correction is also valid for our case with $\lambda\to h(\lambda)$. This
finite-size correction has also been obtained from the integrability
of ${\cal N}=4$ SYM through the Bethe ansatz \cite{Janik:2007wt}.
There it is claimed that the match tests the form of the phase-factor
appearing in $S^0_{12}$. Thus, we have another piece of evidence
suggesting that the $\lambda\to h(\lambda)$ replacement may bring us from ${\cal
N}=4$ SYM, to the potentially integrable sector described here for
SYM on $\mathbb{R}\times S^2$, SYM on $\mathbb{R}\times S^3/\mathbb{Z}_k$, and the PWMM.
\section{Concluding remarks}
\label{sec:final}
In this paper we have found an application for the rich constraining
power of the mass-deformed $SU(2|2)$ algebra in the gauge/gravity
duality for $\mathcal{N}=4 $ SYM on $\mathbb{R}\times S^3$ and its
dimensional reductions. We have mostly focused on the three
dimensional $\mathcal{N}=8 $ SYM on $\mathbb{R}\times S^2$ and its
string dual to illustrate the use of this superalgebra in the
computation of the spectrum of the gauge theory as well as of the dual
world-sheet theory. The two-loop gauge theory results and the leading
order strong coupling computations done in the plane wave limit of the
associated string theory suggest a potentially integrable $SU(2|3)$
sector for this particular realization of the gauge/gravity duality.
Moreover, we find that various quantities, such as the form of the
all-loop dispersion relation and the ``matrix'' structure of the
S-matrix for the gauge theory Hamiltonian, are exactly the same in this
theory as for the planar dilatation operator of $\mathcal{N}=4$ SYM.
Despite the methodological similarities, there are various fundamental
differences that distinguish the three and four dimensional gauge
theories from each other. For instance, analytical dependence of the
physical spectrum on the effective 't Hooft couplings, encoded in the
function $h(\lambda)$. Furthermore, the world sheet theory for the
three dimensional gauge theory is not a coset model which makes the
issue of understanding even its classical integrability challenging.
Finally, the gauge theory appears to possess multiple vacua (which is
reflected on the string theory side in the various disc configurations
discussed earlier), which is unlike the physical behavior of the four
dimensional superconformal theory. It is thus gratifying that despite
these differences, various physical quantities can be analyzed in both
these gauge theories using similar techniques of analysis. Apart from
another avenue for the potential utilization of the powerful algebraic
methods tied to integrable structures, our study also opens up some
interesting lines of investigation which we comment on below.
An obvious question to ask is whether or not the tell-tale signs of
integrability for the three dimensional theory translate into all-loop
integrability. Integrability to all orders, even in a restricted
sub-sector of the theory (like the $SU(2)$ sector) would be a powerful
boost towards performing a comprehensive test for the gauge/gravity
duality without the use of conformal symmetries. Assuming
integrability holds in a subsector of the gauge theory, the complete
determination of the interpolating $h$ function, which we have
computed at weak and strong coupling, is certainly an extremely
interesting question and might be amenable to analysis by methods
such as the Y-system, which has recently yielded dramatic results
\cite{Gromov:2009tv}.
As mentioned before in the paper, a fuller understanding of the
gauge/gravity duality for the other dimensional reductions of the
four dimensional gauge theory might be gained by adapting the
present algebraic techniques accordingly. In particular, the
interplay between the non-trivial vacua for the gauge theories in
question and mass-deformed algebras is another potential line of
investigation coming out of the present analysis.
Looking beyond the immediate concerns of this paper and the
computation of spectra, the r\^ole of supersymmetry in the study of
other extended degrees of freedom, such as Wilson loops, would be
another interesting problem to study. The massless version of the
three dimensional theory defined on $\mathbb{R}^3$ has recently been
shown to possess a large class of BPS Wilson loops whose expectation
values are completely determined by supersymmetry
\cite{Agarwal:2009up}. The investigation of the corresponding
extended objects of the massive theory on $\mathbb{R}\times S^2$
should yield further valuable information for the field theory and its
string theoretic counterpart.
On a final, somewhat tangential note, it is worth pointing out that
massive algebras also arise in the context of three dimensional SYM
theories even in flat backgrounds with minimal supersymmetry
\cite{Agarwal:2009gb}. In these theories the supersymmetry algebra is
deformed by the spacetime rotation group as opposed to the
$R$-symmetry group. Furthermore, the mass-gap for the gluonic fields
in these theories is generated by a term that is closely related to
the volume measure on the gauge invariant configuration space for pure
Yang-Mills theory in three dimensions \cite{Karabali:1997wk} (for
recent progress in three dimensional pure Yang-Mills theory on
$\mathbb{R}\times S^2$, see \cite{Agarwal:2008dk}). This is
perhaps indicative of a potential deeper connection between massive
supersymmetry algebras and dynamical mass-generation in confining
three dimensional SYM theories.
\section{Acknowledgements}
We would like to thank Niklas Beisert and Jan Plefka for discussions
and their comments on a previous version of this manuscript. DY is
supported by the Volkswagen Foundation.
\section{\textbf{Introduction}}
OB associations are low-density groups of young stars, typically containing many prominent OB stars as well as numerous low-mass stars \citep{blaauw1991,brown1999}. They are technically divided into OB associations, which contain bright OB stars, and T associations, containing prominent T-Tauri stars, though other than their total mass there are no other differences and the term {\it association} is often used to refer to both types.
The low space densities ($< 0.1$~M$_\odot$ pc$^{-3}$) of associations make them dynamically unstable to Galactic tidal forces and therefore over time they should disperse. The fact that they exhibit some spatial and kinematic concentration (despite most likely being unbound) and contain short-lived OB stars implies that they must be young, and are therefore valuable tracers of the star formation process, allowing us to study the propagation of star formation and the role of feedback in triggering or halting star formation.
Historically, OB associations were vital for identifying groups of OB stars and calibrating their luminosity scale \citep{morgan1953,humphreys1978}. This work led to the first census and catalogue of classical OB associations by \citet{ruprecht1966}, which was compiled and standardised from earlier works. This catalogue was most recently updated by \citet{wright2020}, though many systems still remain poorly defined and studied. More recently, associations have provided large samples of unobscured young stars that are useful for studies of the initial mass function, the frequency and properties of multiple systems, protoplanetary disks and planetary systems \citep{massey1995,kouwenhoven2007,kalas2015}.
The origin of associations is still debated; according to some they are the short-lived expanded remnants of dense star clusters \citep{kroupa2011}, while others suggest they are the result of multiple, sub-structured star formation events occurring over large spatial scales or at low-density \citep{miller1978,kruijssen2012}.
\subsection{Overview of association properties}
Associations have total stellar masses between a few hundred and a few tens of thousands of solar masses. Their dimensions range from a few tens to a hundred parsecs (Section~\ref{sec:structure}), giving them typical densities of 0.001 -- 0.1 M$_\odot$ pc$^{-3}$. They are highly asymmetric and substructured (Section~\ref{sec:structure}), and often contain open clusters or star forming regions within their boundaries. Associations have 1D velocity dispersions of a few km~s$^{-1}$ and are generally expected to be gravitationally unbound (Section~\ref{sec:kinematics}) and expanding (Section~\ref{sec:expansion}).
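The quoted density range follows from the mass and size ranges above under a simple uniform-sphere estimate. The sketch below is a back-of-the-envelope check; the specific mass--radius pairs are hypothetical illustrative values, not measurements of any particular association.

```python
import math

# Hypothetical illustrative (mass, radius) pairs spanning the quoted
# ranges: a few hundred to a few tens of thousands of Msun, and radii
# of roughly 10-50 pc (i.e. dimensions of tens to ~100 pc).
examples = [(500.0, 40.0), (1.0e4, 30.0)]  # (Msun, pc)

def mean_density(mass_msun, radius_pc):
    # Mean stellar mass density in Msun pc^-3, treating the
    # association as a uniform sphere.
    volume = (4.0 / 3.0) * math.pi * radius_pc ** 3
    return mass_msun / volume

densities = [mean_density(m, r) for m, r in examples]
# A low-mass, extended association comes out near ~0.002 Msun pc^-3 and
# a massive, compact one near ~0.1 Msun pc^-3, bracketing the quoted range.
```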
The ages of associations range from a few to a few tens of Myr, with both considerable age spreads (comparable to the age of the system) and resolvable age substructure (Sections \ref{sec:ages} and \ref{sec:kinages}). The lower age limit historically separated associations from obscured and embedded systems \citep[embedded clusters,][]{lada2003}, while the upper limit comes from the difficulty identifying low-density groups of stars older than this (Section~\ref{sec:identifying}) -- though see \citet{kounkel2019} for an attempt to extend this to older systems.
Associations are observed throughout the disk of our galaxy and their distribution has been used to trace the spiral structure of the Milky Way (Section~\ref{sec:distribution}), map previous generations of young stars and study the propagation of star formation (Section~\ref{sec:propagation}).
\subsection{On the definition of OB associations}
OB associations were originally defined as systems of stars that are very young compared to the age of the galaxy, have a common origin, and a stellar density lower than the Galactic field \citep{ambartsumian1947,ambartsumian1949}. This definition, particularly the last part, distinguished associations from the other types of stellar group known at the time, open and globular clusters. Similar definitions were employed in the landmark studies by \citet{blaauw1964}, \citet{garmany1994}, \citet{dezeeuw1999} and \citet{lada2003}.
With the discovery of very young {\it embedded clusters} some studies argued that most embedded clusters and young star forming regions evolve into associations \citep[e.g.,][]{lada2003,zinnecker2007}. In this context \citet{lada2003} separated star clusters into embedded and open clusters depending on their association with interstellar matter. They argued that as embedded clusters emerge from their natal cloud they either survive as gravitationally bound open clusters or expand as unbound associations. From this a more physical definition of associations emerged, one in which associations are specifically gravitationally unbound \citep[e.g.,][]{gouliermis2018,wright2020}, a definition in harmony with that of a star cluster as being gravitationally bound \citep[e.g.,][]{portegieszwart2010}.
The difficulty with this physical definition is that, even for relatively isolated clusters and associations, it can be difficult to reliably determine whether a group of stars is gravitationally bound. This is not usually a problem for most classical OB associations since their stellar density is sufficiently low that there is no doubt that they are unbound. However, it can be an issue for particularly young systems or where interstellar matter is present, since the precise mass and spatial distribution of this matter is much harder to determine than it is for stars. Furthermore, recent studies suggest star formation occurs over a continuum of densities rather than at either low or high densities \citep[e.g.,][]{bressert2010,kruijssen2012,kerr2021} and therefore applying a density threshold to very young (still embedded) systems would be meaningless.
For this reason we choose to define associations as {\it groups of young stars with a stellar density lower than that of the Galactic field and that are not strongly associated with interstellar matter}. The density definition separates associations from open clusters, while the interstellar matter definition separates them from star forming regions and embedded clusters. This definition is advantageous because it is both purely observational and preserves the historical definition of associations.
One note to this definition is that since associations are large and highly substructured they may (and often do) contain open or embedded clusters and star forming regions within their borders. The presence of star clusters as the nuclei of associations has been known since their original definition \citep{ambartsumian1949}, while the existence of star forming regions within their borders is evident from many of the nearest associations such as Sco-Cen or Orion OB1. These denser or younger regions are clearly part of a larger association, though they would not be classified as associations in their own right.
\subsection{This and past reviews}
This review summarises recent results on associations and our current knowledge of their properties and origins. It builds on a strong history of review articles on this topic, dating from the pre-{\it Hipparcos} period \citep{blaauw1964,blaauw1991}, through the {\it Hipparcos} era \citep[during which many great discoveries concerning associations were made, e.g.,][]{brown1999}, to the early {\it Gaia} era \citep{wright2020}. The wealth of studies concerning associations that have emerged over the last few years thanks to {\it Gaia} data has accelerated changes in our understanding of these systems, making this review very timely.
\section{\textbf{Identifying members of associations}}
\label{sec:identifying}
Historically, the main obstacle to studying associations was the difficulty of reliably identifying their members, particularly the more-numerous low-mass stars that cannot easily be distinguished from older field stars. In this section we will discuss how the young low- and high-mass members of associations can be identified, the limitations of these techniques, the methods used to verify youth using spectroscopy, and how samples of young stars are commonly divided into different associations or subgroups.
\subsection{Identifying young stars}\label{sec:identifyingyoung}
There are numerous methods that can be used to identify candidate young stars, depending on their effective temperature and age. Each of these methods requires different data, is effective under different circumstances, and has certain biases that can limit the effectiveness of the method.
Massive members of associations can be identified using photometry. Following the reddening-invariant colour method pioneered by \cite{johnson1953}, which maps out where stars lie in a colour-colour diagram depending on their reddening, \cite{mohr-smith2015} selected massive members of the cluster Westerlund 2 and surrounding associations using the $(u-g, g-r)$ colour-colour diagram. Such diagrams show that reddened OB stars are located above the main stellar locus, and thus can easily be separated from the older field star population. Follow-up spectroscopy has demonstrated that this method produces levels of contamination from late-type stars as low as 3\% \citep{mohr-smith2017}. In regions of high extinction (where $u$-band photometry is unavailable) the optical identification of OB stars becomes difficult. It is therefore necessary to employ either near-IR or a combination of optical and near-IR photometry. \cite{comeron2002} used $JHK_s$ 2MASS photometry to select massive stars in Cygnus OB2, while \cite{poggio2018}, \cite{zari2021} and \citet{quintana2021} used a combination of optical (\textit{Gaia}) and IR (2MASS) photometry to identify massive stars in the Galaxy within 4--5 kpc of the Sun. These samples are mainly contaminated by evolved massive or intermediate-mass stars (e.g., yellow and blue supergiants), but they are substantially free from RGB and AGB stars \citep[see Section 2.3 in][]{zari2021}. Finally, late B-type stars can live for $\approx$100 Myr, and can therefore be much older than the typical ages of associations (see Section \ref{sec:ages}); to definitively assign such stars to single associations it is necessary to combine photometric identification with other methods, such as kinematics.
Historically, low-mass stars with ages of a few Myr have been difficult to identify; however, dusty young stellar objects (such as protostars or pre-main sequence stars with disks) can be selected through near- and mid-IR photometry, allowing for their identification with targeted surveys of individual star forming regions with telescopes such as Spitzer or Herschel \citep{gutermuth2009,fischer2017,winston2020}, or through all-sky surveys such as AKARI or WISE \citep{toth2014,marton2016}. These methods are effective, but heavily biased towards very young stars ($<5$ Myr) that retain their disks, while disk-free candidate YSOs identified using IR photometry appear to be mainly contaminants \citep{manara2018}. Identifying older stars and the substantial fraction of young stars without disks requires alternative methods.
\begin{figure}[h]
\centering
\includegraphics[width = \hsize]{cmd_gray1.png}
\caption{Colour-absolute magnitude diagram for stars in Orion based on \textit{Gaia} EDR3. The black solid line is a 20 Myr solar metallicity PARSEC isochrone \citep{Marigo2017}. For stars with $G-G_{RP} > 1$, where the isochrone separates from the main sequence, isochrones like this can be used to separate the young PMS population above the isochrone from older field stars below it.}
\label{fig:cmd}
\end{figure}
Most low-mass members of associations are in the pre-main sequence (PMS) phase of stellar evolution. PMS stars are more luminous than main sequence stars of the same $T_{\rm eff}$ and therefore stand out, above the zero-age main sequence, in optical colour-absolute magnitude diagrams (see Fig. \ref{fig:cmd}). This provides an effective and efficient technique to identify young stars that requires only photometry and astrometry. The method is particularly useful for identifying low-mass stars (as low-mass PMS stars are more over-luminous at a given age than more massive stars) and is not biased towards other properties of the star (e.g., the presence of circumstellar material or magnetic activity); however, some care is required when using it. For example, the extinction correction required by this method could introduce selection biases to the sample \citep[see][]{zari2018}. Furthermore, the samples selected may be contaminated by unresolved multiple stars or evolved field stars with uncertain parallaxes, especially for PMS stars at intermediate masses that are less offset from the main sequence than low-mass stars. Finally, as \textit{Gaia} parallax uncertainties increase as a function of magnitude, faint sources at large distances can be excluded when applying cuts on the relative parallax error $\sigma_{\varpi}/\varpi$. Nevertheless, following \textit{Gaia} DR2 this approach has been used in numerous studies to identify PMS stars younger than 20 Myr \citep[e.g.,][]{zari2018,cantat-gaudin2019,damiani2019}. Attempts at identifying PMS stars older than 20 Myr (and younger than 50 Myr) using this method have been made by \cite{kerr2021} and \cite{mcbride2021}, who also propose different methods to estimate and correct for extinction.
In the absence of reliable parallaxes, single epoch optical or near-IR photometry can be combined to identify candidate M-type stars \citep{damiani2018} or employed to determine the size and spatial distribution of the low-mass PMS population of an association \citep[see e.g.][]{sherry2004, bouy2014, zari2017, armstrong2018}. Single epoch photometry alone cannot definitively identify an individual star as a PMS star, thus samples selected based on this technique can include both background giants and foreground main-sequence stars of any age.
Optical photometric variability is one of the defining characteristics of pre-main sequence stars \citep{joy1945, herbig1962}. \cite{Briceno2005} used variability combined with position in colour-magnitude diagrams and follow-up spectroscopy to select the low-mass young population in the Orion OB1 association. This method is potentially biased towards PMS stars with high amplitude photometric variations; nevertheless, current and future \textit{Gaia} releases will facilitate all-sky searches for variable PMS stars and the use of variability as an additional selection criterion.
PMS stars are magnetically active, which makes them bright X-ray sources. Studies of low-mass X-ray emitting young stellar populations of associations were carried out using data from the Einstein observatory \citep{walter1994}, and the ROSAT all sky survey \citep{sterzik1995, walter2000}. More recently, the X-ray observatories {\it Chandra} and XMM-{\it Newton} have been very effective at identifying large populations of young, magnetically-active stars, thanks primarily to their arcsecond-scale point spread function and high sensitivity \citep[e.g., the Massive Young star-forming complex Study in Infrared and X-rays, MYStIX, which studied young stars in 20 Galactic massive star-forming regions,][]{feigelson2013}. However, the small fields of view of these observatories have meant they have preferentially targeted compact star clusters rather than diffuse associations \citep[with the exception of more distant associations, e.g.,][]{wright2010}. All-sky data from eROSITA \citep{merloni2020} will overcome this issue and will allow members of many nearby associations to be identified, as shown by the preliminary study presented in \cite{Schmitt2021} (see \S\ref{sec:futurex-ray}). As for the other techniques presented in this Section, although X-ray selection may weed out a lot of old stars, there will be many false positives remaining. For instance, the spin-down timescales of low-mass stars are $\sim$50 Myr for solar-type stars and longer at lower-masses, so contamination by young field stars is probable. In addition, tidally locked, close binaries can maintain their X-ray activity indefinitely and appear `young' in the HR diagram due to the combined luminosity of the two stars.
In summary there are many different methods for identifying young stars. The advantages and disadvantages of each method, as applied to associations, are a combination of: whether the method is sensitive in the typical 1--50 Myr interval we are interested in; whether they offer any age discrimination within this range; whether they can be applied to isolated stars; and whether the observational inputs are available over wide fields of view. Unfortunately, while each of the methods discussed here allows identification of young stars, they can all introduce contamination by older, unrelated field stars. These methods are therefore each insufficient in isolation to confirm the youth of stars.
\subsection{Confirming the youth of stars}
\label{sec:confirming}
The {\it secure} identification of association members is inextricably linked to confirmation of their age, with the most decisive indicators of youth coming from spectroscopy. Estimates of spectral type or temperature can improve reddening determinations, allowing more accurate placement in the HR diagram (and this is often all that is required to confirm the status of young high-mass stars), but for the more numerous lower mass stars there are more direct methods of determining youth.
Lithium is an ephemeral element in the photospheres of low-mass stars; it is consumed by nuclear reactions on the PMS. Whilst a comprehensive understanding of the Li depletion process is still lacking, it is clear that Li abundance can be used as an empirical indicator of youth in a mass-dependent way (see Figure~\ref{fig:li}). At low masses ($<0.7 M_\odot$) the presence of undepleted Li is a sure sign that a star is younger than about 20 Myr. Conversely, stars which exhibit total Li depletion must be much older, whilst intermediate cases offer some age discrimination \citep{Jeffries2014b}. Li is less diagnostic at higher masses because there is little PMS depletion and depletion on the main sequence is much slower. Using Li abundance as a tracer of young stellar populations in the nearest associations has a long history \citep[e.g.,][]{Martin1998, preibisch1998, Mamajek2002,preibisch2002}. Improvements in instrumentation are now allowing an extension of these techniques to wider fields and greater numbers of stars \citep[e.g.,][]{jeffries2014,Briceno2019, armstrong2018}, however the requirement for reasonably high resolving power and signal-to-noise still limits this technique to associations closer than $\sim 1$~kpc.
\begin{figure}[t]
\centering
\includegraphics[width = \hsize]{jeffries2014_bw.pdf}
\caption{EW of the Li~{\sc i} 6708\AA\ absorption line for stars towards the $\gamma$ Vel cluster and Vela OB2 association. The bold diamonds show objects selected as members of the cluster/association on the basis of their Li abundance and position in the $V$ vs $V-I$ CMD \citep[see][]{jeffries2014}, while the open circles and open triangles (indicating upper limits) are sources not selected as members. The solid lines show theoretical predictions of the strength of this line at ages of 10 (upper) and 20 (lower) Myr \citep{baraffe1998}, while the dashed line indicates an empirical upper limit to the line strength at 50 Myr judged from observations of stars in the IC 2391/2602 clusters. The age discrimination of this Li feature is quite dependent on colour/mass; in particular, it is less sensitive at bluer colours/higher masses and may not offer a complete selection of members where the association stars are old enough to have depleted their Li in certain mass ranges (e.g. in the mid M-dwarfs in this example).}
\label{fig:li}
\end{figure}
Spectroscopic indications of gravity can be used to confirm stellar youth at low masses and can resolve the confusion in absolute colour-magnitude diagrams caused by variability, reddening and binarity. PMS stars have lower gravities than MS stars of similar $T_{\rm eff}$, but for a given PMS age the difference is wider at lower masses, so these techniques are most sensitive for very young stars and/or PMS stars of low-mass ($<1M_\odot$) \citep[see Figure 8 of][for an example]{lopez-valdivia2021}. Surface gravity is difficult to measure directly, although its relative magnitude can be inferred from gravity-sensitive spectral indices in the optical/near IR, such as atomic alkali, CaH or TiO molecular lines \citep{wilking2005}, empirically constructed spectral indices \citep{damiani2014} or the shape of the $H$-band peak \citep{scholz2009}. Directly-measured surface gravities can be used to calibrate PMS evolutionary models \citep{olney2020} and directly infer the age of a low-mass star. In comparison to photometrically derived ages, this method does not underestimate the ages of stars on the binary sequence.
Young stars with a protoplanetary disk are likely to be accreting gas. Classical T Tauri stars can be easily identified through having strong and broad H$\alpha$ emission, even in low-resolution spectra \citep{white2003}. These strong signatures are limited to just a fraction of the low-mass population of an association, declining from $\sim 60$\% at 1--2 Myr to $<5$\% at 10\,Myr \citep{Fedele2010}. The complementary population of ``weak-lined" T-Tauri stars can still be identified from their weak, chromospheric H$\alpha$ emission (along with several other lines, such as Ca II H \& K). However, these relatively weak activity indicators can persist for a few hundred Myr in low-mass stars, so such samples inevitably suffer contamination from field stars and active binaries, in a similar way to X-ray-selected stars.
\subsection{Assigning young stars into groups}\label{sec:assigning}
When a sample of candidate young stars has been identified they can be divided into groups based on their distribution in position (plane of the sky and distance) and kinematics (proper motion and radial velocity). This may be performed after the youth of these stars has been verified (see Section~\ref{sec:confirming}) or to confirm youth via the corroboration that a group of stars is spatially or kinematically coherent (the clustering of stars, both spatially and kinematically, decreases as stars age, and therefore can be used as a youth proxy). A combination of kinematics and spectroscopic indicators of youth is increasingly used to get a sample of the low-mass population with minimal contamination \citep[e.g.,][]{Prisinzano2016, Wright2019}.
For compact clusters, stars can be grouped together using just plane-of-the-sky information, often starting with a curated membership list (that removes most of the field contaminants) and then identifying the group using techniques such as the minimum spanning tree \citep{gutermuth2009}, stellar density maps \citep{carpenter2000,megeath2016}, various mixture models \citep{kuhn2014}, or other plane-of-the-sky clustering techniques \citep{buckner2019}. Some of these approaches require {\it a priori} assumptions regarding the number of different substructures present in a given population, and therefore the results of such methods should not be used to study the structure of such regions without understanding the biases at work.
For low-density structures such as associations, assigning stars into groups cannot be done using just sky positions. Since associations have relatively small internal velocity dispersions they continue to form coherent structures in velocity space even as they disperse. In the \textit{Hipparcos} era the identification of massive members of associations relied strongly on proper motions \citep[e.g.][]{dezeeuw1999}, sometimes combined with positions and parallaxes \citep{debruijne1999, hoogerwerf1999}, or radial velocities \citep{rizzuto2011}. Multi-epoch spectra are sometimes required to reliably identify binaries (that can introduce discrepant RVs due to the orbital motions), but even without multiple epochs, the broadening of an RV distribution by binaries can be estimated and assigning high membership probabilities based on a single-epoch of spectra is possible for most single and unresolved binary stars \citep[e.g.,][]{jeffries2014,gonzalez2017}, especially in conjunction with other spectroscopic tracers of youth (see Section 2.2).
With distances becoming increasingly common from {\it Gaia}, studies of the clustering and grouping of stars are now performed similarly in a higher-dimensional space (combining position on the sky, parallax, proper motion, and radial velocity when available). As these multi-dimensional data combine different quantities (position and velocity) with different units, some prescription needs to be considered to scale the data to give each dimension an appropriate weight. With higher dimensionality it is possible to select association members without initial identification of candidate young stars (as is commonly needed for 2D clustering). Several clustering algorithms have been used for such a purpose, including DBSCAN \citep{zari2019,liu2021}, HDBSCAN \citep{kounkel2019,kerr2021}, UPMASK \citep{cantat-gaudin2019} and Gaussian mixture models \citep{kuhn2020}, as well as other custom codes. The typical inputs for these are the minimum number of stars in a group, the typical group dimensions, and the density threshold required to detect groups (though some codes are able to automatically adjust this threshold to detect both compact and extended populations).
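The importance of scaling position and velocity before clustering can be illustrated with a deliberately minimal friends-of-friends sketch (this is not one of the codes cited above, and all names and numbers are invented for illustration): two populations that overlap completely on the sky but differ in velocity are only separable once the velocities are given an appropriate weight.

```python
import math
from collections import deque

def phase_space_groups(stars, vel_scale, link_length):
    """Toy friends-of-friends grouping in scaled position-velocity space.

    stars: list of (x, y, vx, vy), positions in pc, velocities in km/s.
    vel_scale: weight in pc per (km/s) applied to velocities before
               computing Euclidean distances (the scaling choice
               discussed in the text). vel_scale = 0 clusters on
               sky position alone.
    """
    pts = [(x, y, vel_scale * vx, vel_scale * vy) for x, y, vx, vy in stars]
    n = len(pts)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= link_length:
                adj[i].append(j)
                adj[j].append(i)
    # Connected components of the linking graph are the groups.
    groups, seen = [], set()
    for s in range(n):
        if s in seen:
            continue
        comp, queue = [], deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        groups.append(comp)
    return groups

# Two synthetic populations sharing the same sky positions (a 5 x 4 pc
# grid) but offset from each other by 5 km/s in each velocity component.
grid = [(float(x), float(y)) for x in range(5) for y in range(4)]
stars = ([(x, y, 0.0, 0.0) for x, y in grid] +
         [(x, y, 5.0, 5.0) for x, y in grid])

groups_scaled = phase_space_groups(stars, vel_scale=2.0, link_length=1.5)
groups_pos_only = phase_space_groups(stars, vel_scale=0.0, link_length=1.5)
```

With the velocity weight applied the two kinematic populations separate cleanly; with positions alone they merge into a single group, mirroring the degeneracy that motivates higher-dimensional clustering.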
In recent years a large number of clustering codes and approaches have become available, and the same algorithm can identify different groups when tuned with different critical parameters or applied to a differently-selected initial source list. It is therefore common to see different groups identified within the same region of space in different studies \citep[e.g.,][]{kounkel2018,galli2019,zari2019,kos2019,chen2020,krolikowski2021}. With the greater degree of attention given to extended associations (that are not as simple to define as compact star clusters) some confusion may arise regarding which of these overlapping groups should be considered as fundamental. As such, it is important to keep in mind that these algorithms only trace the underlying density distribution of a given population with a greater or lesser sensitivity to the structures at a particular scale, often requiring a trade-off between the ability to identify finer substructure and the ability to study the population as a whole.
\section{\textbf{Properties of OB associations}}
Associations come in diverse shapes and sizes and can appear, at first sight, a rather heterogeneous group. In this section we review the general properties of associations, with reference to notable individual examples, and the similarities and patterns that exist.
\subsection{Size and internal structure}
\label{sec:structure}
The sizes of associations range from tens to several hundreds of parsecs \citep[e.g.,][]{blaauw1964,blaha1989,gouliermis2018}, though this can vary between studies depending on the method used to define the borders, the adopted membership of each system, and which systems were included in the sample. For example, \citet{garmany1992} measure a mean OB association size of $137 \pm 83$~pc, while \citet{melnik1995} measure an average diameter of $\sim$40~pc. Studies of extra-galactic OB association populations reveal broadly similar size distributions to Galactic associations, depending on the resolution of the data used. For example, \citet{lucke1970} list 122 OB associations in the Large Magellanic Cloud (LMC) with sizes of 15--150~pc and a mean of 80~pc, while \citet{gouliermis2018} used the better-resolved catalogue of associations in the LMC from \citet{bica2008} to measure an average size of 30~pc.
Associations increase in size with age \citep[e.g.,][]{blaauw1964}, as would be expected for gravitationally unbound and expanding systems. Furthermore, associations connected to nebular material are typically smaller \citep{gouliermis2018}. If a connection with nebular material is an indication of relative youth, then this also supports a picture whereby associations increase in size as they age.
Associations exhibit considerable internal substructure. This includes the presence of subgroups with different ages and kinematics \citep{blaauw1964,garmany1992}, young or open clusters \citep{ambartsumian1949}, or bright central concentrations \citep{ivanov1987,melnik1995}. In some cases this substructure is very clear, while in other systems it can be revealed using structural diagnostics that quantify physical substructure \citep[e.g.,][]{cartwright2004,wright2014}.
The most well-studied associations, such as Sco-Cen or Orion OB1, have been sub-divided into ``OB subgroups'' historically based on their on-sky distribution \citep[e.g.,][divided Sco-Cen into the subgroups of Upper Sco, Upper Centaurus-Lupus and Lower Centaurus-Crux]{blaauw1964}. This physical substructure is often correlated with kinematic \citep{wright2016,wright2018} or temporal substructure \citep{pecaut2016} that suggest these subgroups are real and not chance over-densities.
Recent studies have combined on-sky positions with parallax, proper motion or radial velocity information to subdivide associations based on their 5- or 6D spatial and kinematic structure \citep[e.g.,][]{kounkel2018,cantat-gaudin2019,berlanas2019}. For example, \cite{damiani2019} and \cite{kerr2021} studied the Sco-Cen association, finding a wealth of sub-groups with different spatial and kinematic properties, which do not align well with the classical sub-division of the association. \cite{cantat-gaudin2019} studied the young stars in the Vela OB2 association and found that the region is highly substructured, with seven main kinematic groups overlapping on the sky and whose formation is spread over $\sim$35~Myr.
These physical subgroups often show distinct age differences, supporting the correlation between spatial, kinematic and temporal substructure \citep[e.g.,][]{zari2019}. Furthermore, they often extend beyond the bounds of the classically defined OB associations (usually based on the distribution of OB stars) to expose older parts of the association \citep[e.g.,][]{cantat-gaudin2019}.
Many associations also contain open or embedded clusters within their borders that have been considered part of the association. In many cases their ages and kinematics are consistent with being related to the association, such as the $\gamma$~Vel cluster in Vela~OB2 \citep{jeffries2014}, $\rho$~Oph in Sco-Cen \citep{blaauw1991}, or NGC~2353 in CMa OB1 \citep{fitzgerald1990}. In some cases however the open cluster has been shown to either have a very different age or significantly different kinematics, or both, suggesting that the cluster did not form as part of the association and is now just projected against the association \citep[e.g., the NGC~2547 cluster projected against Vela~OB2,][]{sacco2015}.
\subsection{Kinematics, velocity dispersion and virial state}
\label{sec:kinematics}
Kinematic studies of associations were historically limited to their most luminous members and used either {\it Hipparcos} proper motions or radial velocities obtained from individual spectra. The former were of limited precision for detailed kinematic studies (allowing membership to be constrained, but rarely probing internal kinematics), while the latter were scarce and strongly affected by unresolved binarity. It was not until the availability of multi-object spectroscopy \citep[e.g.,][]{preibisch2002,furesz2008,da-rio2016} and, later, {\it Gaia} proper motions \citep[e.g.,][]{wright2018,kounkel2018} that detailed kinematic studies of the (more numerous) low-mass members of associations became possible.
From kinematic studies using either radial velocities, proper motions or both, estimates of the velocity dispersion of associations can be calculated. Combining the velocity dispersion with estimates of the total mass of the association, the virial state can be derived.
The accurate calculation of the velocity dispersion from measurement of the radial velocity or proper motion for individual stars requires that the measurement uncertainty and the impact of unresolved binaries be accounted for. The former can be easily quantified, but requires some modelling as the uncertainties are often significantly heteroskedastic or do not follow a normal distribution (e.g., \citealt{jack15} find that the uncertainty distribution of Gaia-ESO Survey radial velocities better follows a Student's t-distribution with extended tails). Unresolved binaries can significantly affect the instantaneous measure of velocity from single-epoch radial velocity surveys, particularly for high-mass stars that are commonly found in binary or multiple systems \citep[e.g.,][]{gieles2010}, and require full modelling \citep[e.g.,][]{cott12}. The impact of unresolved binarity on measured proper motions has not been fully explored, and while smaller than the effect on radial velocities, is still potentially significant \citep{jack20}. Proper motions can also become unusable at large distances due to the distance-dependence of velocities derived from them, a problem that does not affect radial velocities.
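The deconvolution of measurement errors from the observed velocity spread can be sketched with a simple maximum-likelihood estimator. This is a hedged illustration only, assuming Gaussian errors (the text notes real uncertainty distributions can be heavier-tailed) and ignoring binaries; the function name, grid and all numbers are invented for the example.

```python
import math
import random

def intrinsic_dispersion(velocities, errors, sigma_grid):
    """Profile-likelihood estimate of the intrinsic 1D velocity dispersion.

    Each star's observed velocity is modelled as drawn from a Gaussian of
    variance sigma^2 + e_i^2 (intrinsic dispersion plus its own Gaussian
    measurement error). For each trial sigma the maximum-likelihood mean
    is the inverse-variance-weighted mean, so we profile it out.
    """
    best_lnl, best_sigma = -math.inf, None
    for sigma in sigma_grid:
        w = [1.0 / (sigma**2 + e**2) for e in errors]
        mu = sum(wi * v for wi, v in zip(w, velocities)) / sum(w)
        lnl = sum(0.5 * math.log(wi / (2.0 * math.pi)) - 0.5 * wi * (v - mu)**2
                  for wi, v in zip(w, velocities))
        if lnl > best_lnl:
            best_lnl, best_sigma = lnl, sigma
    return best_sigma

# Synthetic check: 500 stars, true dispersion 2 km/s, per-star
# measurement errors uniformly distributed between 0.5 and 1.5 km/s.
random.seed(7)
errs = [random.uniform(0.5, 1.5) for _ in range(500)]
rvs = [random.gauss(0.0, 2.0) + random.gauss(0.0, e) for e in errs]
sigma_hat = intrinsic_dispersion(rvs, errs, [0.05 * i for i in range(1, 101)])
```

The point of the synthetic check is that the raw standard deviation of the observed velocities overestimates the true dispersion, while the error-deconvolved estimate recovers it; a real analysis would also model binary broadening and non-Gaussian error tails.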
The corrected velocity dispersions of associations can show considerable variation, but are typically a few kilometres per second along each axis. For example, \citet{ward2018} studied 18 associations using {\it Gaia} DR1 proper motions and measured velocity dispersions of 3--13~km~s$^{-1}$ with a median of 7~km~s$^{-1}$, and \citet{melnik2020} measured an average velocity dispersion of 4.5~km~s$^{-1}$ from 28 associations studied with {\it Gaia} DR2. Small differences between studies are predominantly due to differences in the association membership lists, many of which date from the 1980s \citep[e.g.,][]{humphreys1978,blaha1989} and need revisiting and updating with modern data \citep[e.g.,][]{quintana2021}.
Immediately apparent from the earliest kinematic studies of associations was that their velocity distributions were far from Gaussian and correlated with spatial position \citep[e.g.,][]{furesz2008,tobin2009}. Later studies using proper motions (that probed more than one kinematic dimension) found that the velocity distributions of associations were also highly anisotropic, with velocity dispersion ratios between the two proper motion axes of at least $\sim$1.5 \citep{wright2016,melnik2017,ward2018} and up to $\sim$6 \citep{melnik2017}. 3D kinematic studies of associations using proper motions and radial velocities are rare, but similar levels of anisotropy have been found \citep[e.g.,][]{wright2018}.
\begin{figure*}[h]
\centering
\includegraphics[width = 14cm]{wright216_substructure.pdf}
\caption{Proper motions for 798 X-ray and spectroscopically selected stars towards Cygnus OB2 (including 16 O-type stars), with kinematic outliers removed. The vectors are coloured based on their direction of motion to highlight kinematic substructure. The grey box shows the border of the X-ray observations used to identify members and the colour wheel in the top-right corner shows the relationship between colour and position angle. Figure from \citet{wright2016}.}
\label{fig:cygob2_substructure}
\end{figure*}
The anisotropy and non-Gaussianity observed within associations has been attributed to {\it kinematic substructure}: groups of stars in the same area of space with similar kinematics, but within a wider group with more diverse kinematics. An example of this in Cyg OB2 from \citet{wright2016} is shown in Figure~\ref{fig:cygob2_substructure}, where the proper motion vectors are coloured according to their position angle on the sky to highlight the kinematic substructure. This kinematic substructure has since been observed in all nearby associations studied \citep[e.g.,][]{wright2018,kounkel2018,cantat-gaudin2019}. There have been many efforts to quantify the kinematic substructure. For example, \citet{arnold2020} analysed the kinematic substructure in Cyg OB2 and found a strong spatial/velocity correlation on sub-pc scales, but no correlations or structures were found on larger scales.
This kinematic substructure represents subgroups of stars within the association, and with sufficiently-accurate kinematic data the subgroups can be separated. There is growing evidence that, when association subgroups are identified using kinematics, the velocity dispersion anisotropy of the subgroups is less pronounced \citep[e.g.,][find velocity dispersion ratios of 1.0--1.6 for the subgroups in Vela OB2]{cantat-gaudin2019b}, which may indicate that the true association subgroups have isotropic velocity dispersions, or might highlight a bias introduced by using kinematics to identify subgroups.
The virial state of the association can be estimated by combining the measured velocity dispersion with estimates of the total stellar mass of the association (often extrapolated from a subset of the total population and therefore requiring assumptions about the form of the mass function), though see \citet{parker2016} for caveats. Given that associations have a low stellar density compared to gravitationally bound open clusters, it is unsurprising that all associations to date have been found to be super-virial. The ratio between the virial mass and the stellar mass was found by \citet{melnik2017} to range from 10 to 1000, with a median of $\sim$50. Assuming there are no additional forces acting on the members of the association as it expands, then the virial mass will increase as the association expands \citep[since the virial mass is proportional to radius,][]{portegies-zwart2015}. If the expansion is assumed to be linear then the virial mass for an association will increase linearly with time, and as the stellar mass remains approximately constant, the virial to stellar mass ratio will also increase with time.
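The scale of the super-virial mass ratios quoted above can be illustrated with a commonly used form of the virial estimator, $M_{\rm vir} \approx \eta\,\sigma_{\rm 1D}^2\,R/G$ with $\eta \approx 10$. This is a hedged sketch: the appropriate $\eta$ and radius definition depend on the density profile, and the input numbers below are invented for illustration rather than taken from any cited study.

```python
G_PC_KMS2_MSUN = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def virial_mass(sigma_1d_kms, radius_pc, eta=10.0):
    """M_vir ~ eta * sigma_1D^2 * R / G.

    eta ~ 10 is a commonly adopted structure constant (an assumption
    here); sigma_1d_kms is the 1D velocity dispersion in km/s and
    radius_pc a characteristic radius in pc.
    """
    return eta * sigma_1d_kms**2 * radius_pc / G_PC_KMS2_MSUN

# Illustrative association: sigma_1D = 2 km/s, R = 10 pc, and an
# (assumed) total stellar mass of 2000 Msun.
m_vir = virial_mass(2.0, 10.0)   # roughly 9e4 Msun
mass_ratio = m_vir / 2000.0      # virial-to-stellar mass ratio, ~50
```

With these invented but plausible inputs the virial mass exceeds the stellar mass by a factor of order 50, matching the median ratio reported by \citet{melnik2017}; note also that because $M_{\rm vir} \propto R$, the ratio grows as the association expands.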
\subsection{Expansion}
\label{sec:expansion}
Given their low space density, associations should be unbound and in a state of expansion and may have been more compact in the past \citep[e.g.,][]{ambartsumian1947,ambartsumian1949,blaauw1952}. This expansion can be inferred from the observed correlation between radius and density \citep{pfalzner2009}. However, the internal velocity dispersions for many associations are too small to explain their present-day size by expansion from a significantly more compact state \citep[e.g.,][]{preibisch2008,torres2008}, raising questions over this interpretation.
Measuring the expansion of associations is, in principle, very simple. \citet{blaauw1946} put forward the linear expansion model, which assumes that associations had an initially compact configuration and have expanded linearly (in time) from then. For nearby associations it is necessary to account for {\it virtual expansion} caused by the radial motion of the association towards (or away from) the observer, which can cause a false expansion (or contraction) on the sky, even when no physical expansion exists \citep{blaauw1964}. This can be corrected for using radial velocities, either for the bulk motion of the association \citep{brown1997} or for individual stars \citep{wright2018}.
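The virtual expansion effect follows from simple geometry: a star at small angular offset $\theta$ from the association centre acquires an apparent proper motion $\dot\theta = -\theta v_r / d$ from the bulk radial motion alone, so a receding association ($v_r > 0$) appears to contract. A minimal Python sketch of this correction term (the example numbers are illustrative, not from any particular association):

```python
import math

def perspective_pm(theta_deg, v_r_kms, d_pc):
    """Apparent ('virtual') proper motion, in arcsec/yr, induced by the
    bulk radial motion v_r of an association at distance d, for a star at
    angular offset theta from the association centre.

    Derivation: theta = s/d  =>  d(theta)/dt = -theta * v_r / d,
    with the usual conversion v_t[km/s] = 4.740 * mu[arcsec/yr] * d[pc].
    Positive v_r (receding) gives apparent contraction (negative mu)."""
    theta_rad = math.radians(theta_deg)
    return -theta_rad * v_r_kms / (4.740 * d_pc)

# Illustrative star 2 deg from the centre of a group at 350 pc
# receding at 20 km/s: apparent contraction of ~0.42 mas/yr.
mu_virtual = perspective_pm(2.0, 20.0, 350.0)
```

This is only the bulk-motion correction; the per-star correction used by studies with individual radial velocities applies the same geometry star by star.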
Early kinematic studies struggled to find evidence of expansion in associations. For example, both \citet{wright2016} and \citet{arnold2020} could find no evidence for expansion in Cyg~OB2 from ground-based proper motions, while \citet{ward2018} searched for expansion in 18 OB associations using {\it Gaia} DR1 data, but could not find evidence for expansion in any of their targets. The availability of {\it Gaia} DR2 astrometry led to expansion being measured in some, but not all, systems. Using {\it Gaia} DR2 data, \citet{melnik2020} studied 28 OB associations and found evidence for expansion within 6 of them. \citet{ward2020} studied the kinematics of 110 OB associations using {\it Gaia} DR2 and argued that their properties were not consistent with expansion from a single, compact configuration, but were more consistent with originating from a highly substructured velocity field.
More recent studies have found that if associations are divided into subgroups based on their spatial and kinematic substructure then these subgroups often show evidence for expansion, even when the whole system does not. For example, \citet{kounkel2018} divided the Orion OB1 association into subgroups using spatial and kinematic information and found evidence for the expansion of the Orion D subgroup. \citet{cantat-gaudin2019} divided the young stars of the Vela OB2 association into 7 subgroups and found all of them to be expanding. \citet{armstrong2020} conducted a spectroscopic study of the centre of the Vela OB2 association to obtain radial velocities and found that, once the members of the Gamma Vel cluster were removed from the sample, the Vela OB2 association was expanding in all three dimensions (see Figure~\ref{fig:vela_ob2_expansion}).
\begin{figure}
\centering
\includegraphics[width = \hsize]{vela_ob2_expansion_armstrong2020.pdf}
\caption{Cartesian position–velocity plots of stars in Vela OB2 with 3D kinematics from \citet{armstrong2020}. The best-fitting correlation gradients between position and velocity are listed and shown in each panel as solid lines, with dashed lines indicating the uncertainties. All three dimensions show a positive correlation between position and velocity, which is a strong indication of expansion.}
\label{fig:vela_ob2_expansion}
\end{figure}
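The expansion signature in Figure~\ref{fig:vela_ob2_expansion} is a linear position--velocity correlation, and its gradient $\kappa$ (in km/s/pc) can be estimated with a simple least-squares fit; under linear, unhindered expansion the inverse gradient gives an expansion timescale. A sketch on synthetic data (the 0.1 km/s/pc gradient and noise level are invented for illustration, and this is not the specific fitting procedure of any cited study):

```python
import numpy as np

def expansion_gradient(pos_pc, vel_kms):
    """Least-squares gradient dv/dx (km/s/pc) of velocity vs position
    along one axis. A positive gradient indicates expansion."""
    k, _ = np.polyfit(pos_pc, vel_kms, 1)
    return k

def expansion_timescale_myr(k_kms_per_pc):
    """Timescale 1/k for linear expansion; 1 pc/(km/s) = 0.978 Myr."""
    return 0.978 / k_kms_per_pc

# Synthetic expanding group: v = 0.1 km/s/pc * x, plus velocity noise
rng = np.random.default_rng(0)
x = rng.uniform(-20.0, 20.0, 200)          # positions, pc
v = 0.1 * x + rng.normal(0.0, 0.3, 200)    # velocities, km/s
k = expansion_gradient(x, v)               # recovers ~0.1 km/s/pc
tau = expansion_timescale_myr(k)           # ~10 Myr
```

Fitting each axis separately in this way is also how anisotropic expansion reveals itself: a significant positive gradient along one axis but not the others.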
For associations where expansion has been observed, the expansion is usually anisotropic, even when subgroups are identified using kinematics. Of the three OB associations found to be expanding by \citet{melnik2017}, two are significantly anisotropic, with only Car OB1 being consistent with isotropic expansion. \citet{wright2018} found that all three subgroups of Sco-Cen exhibit strongly anisotropic expansion, with all expanding preferentially along the Galactic $Y$ axis but not along the other two axes. \citet{cantat-gaudin2019} and \citet{armstrong2020} also both found that the expansion of the Vela OB2 association subgroups was strongly anisotropic.
In younger systems such as star forming regions or embedded clusters (which may represent the precursors of expanded associations), there is also growing evidence for expansion. \citet{kuhn2019} found evidence for expansion in $>$75\% of their sample of young clusters, including the Orion Nebula Cluster, which \citet{dario2017} had also found was expanding \citep[though][had not]{dzib2017}. \citet{kounkel2018} observed a clear radial expansion pattern in the $\lambda$ Ori cluster, while \citet{Wright2019} found strong evidence for expansion in NGC 6530, though like the older systems the expansion of NGC 6530 is highly asymmetric, with almost all the expansion occurring in the declination direction.
In summary, there is growing evidence that many, if not the majority of, associations have substructures within them that are expanding. Older studies were unable to identify this expansion due to a combination of inferior kinematic data, poorly-defined association membership lists, or because they were searching for expansion across the entire system rather than within the subgroups. The expansion that has been measured is generally anisotropic, a trend which also extends towards younger and embedded clusters.
\subsection{Ages and age spreads}
\label{sec:ages}
Establishing the ages of association members and quantifying differences, gradients and spreads in those ages are a crucial part of understanding the formation and evolution of associations. The only star with a model-independent age is the Sun, and beyond that there is a hierarchy of age-determination methods, of decreasing accuracy \citep[e.g., see][]{soderblom2010}. For young stars, these range from model-dependent methods such as asteroseismology, the fitting of stellar evolutionary isochrones in the HR, colour-magnitude or the $\log g$ vs $T_{\rm eff}$ (Kiel, or spectroscopic HR) diagrams and the depletion and diffusion of light elements, to more indirect or empirically calibrated methods such as using the time-dependence of accretion, rotation and magnetic activity \citep{soderblom2014}.
The ages of high-mass stars can be estimated from their positions in the HR or spectroscopic HR diagrams. Age discrimination is possible because O and early B-type stars evolve quickly. Accurate spectroscopic determination of the stellar parameters and a distance (for the HR diagram) are required. Typically these techniques have been deployed in more distant but rich associations, where the large number of O-stars can be used to make statistical inferences about ages and age spreads. For example \cite{wright2015} used 169 OB stars in Cyg OB2 to infer an age spread from 1--7 Myr. This work was expanded by \cite{berlanas2020}, with the benefit of {\it Gaia} DR2 parallaxes, and evidence is presented for `bursts' of star formation (or at least, bursts of O-star formation) at ages of 3 and 5 Myr. Such studies are difficult because observational errors in the stellar parameters lead to significant age uncertainties and the fidelity of these ages can be compromised by unresolved binarity and a genuine astrophysical spread in the HR diagram caused by rotation-dependent internal mixing and mass-loss rate uncertainties (for the highest mass evolved stars). The relative paucity of high-mass stars can be countered by access to the larger low-mass populations, which further allow a comparison of the ages of the high- and low-mass stars.
The low-mass populations of young ($\leq 10$ Myr) clusters and associations ubiquitously show a large spread of luminosity around any mean isochrone in the HR diagram. The causes of this spread are long-debated \citep[see][]{hillenbrand2008, jeffries2011, soderblom2014}. Observational uncertainties play a role (particularly dealing with reddening, accretion, variability and binarity), but cannot account for all or even most of the dispersion \citep[e.g.,][]{reggiani2011}. If the dispersion were attributed to age, it would equate to age spreads $\geq 10$ Myr in many clusters and associations, but it is equally possible that astrophysical scatter associated with varied accretion histories or differing levels of magnetic activity might play a significant role \citep{baraffe2017, gully2017}. This dispersion is a poorly understood nuisance factor when searching for age gradients or differences between populations within an association and hampers interpretations involving sequential, triggered or bursts of star formation \citep[though see][for an instance in the Orion Nebula cluster, where multiple star formation bursts separated by $\sim 1$ Myr may have been resolved]{jerabkova2019}.
\begin{figure*}[h]
\centering
\includegraphics[width = 15cm]{kounkel2018_figure13.pdf}
\caption{Estimated stellar ages across the Orion OB1 association from
\citet{kounkel2018}. Left: ages derived using spectroscopic
effective temperature and bolometric luminosities in the HR diagram. Right: ages derived using just photometry in the colour magnitude
diagram (distances assigned using the average distance to stars in each
group).}
\label{fig:orion-age}
\end{figure*}
Despite these difficulties there has been a frenzy of activity in the past few years, exploiting both wide-field spectroscopic surveys and {\it Gaia} astrometry to probe the age structure of local associations. In Orion, \cite{da-rio2016} looked at the stellar populations along the Orion A molecular cloud, using near-IR gravity diagnostics to confirm a spread in {\it radius} and thus possibly some age spread, but found little evidence for any age gradient along the cloud. {\it Gaia} DR2 studies by \cite{kounkel2018}, \cite{zari2019} and \cite{kos2019} have dissected the wider Orion OB association using clustering algorithms in position and velocity phase-space (see Figure~\ref{fig:orion-age} for an example). The ages quoted for these sub-regions range from 1--21 Myr, spread over dimensions of $\sim$100 pc. That the temporal substructure exposed by such studies shows a correlation with the spatial and kinematic substructure in these associations provides strong evidence that this substructure is real and reflects the star formation history within the association.
A similarly complex pattern is emerging in the Sco-Cen and Vela OB2 associations. In Sco-Cen the rich, low-mass population has been used to trace clear age differences (on average) between the younger Upper Sco region at $\sim 10$ Myr and the older Upper Centaurus-Lupus and Lower Centaurus-Crux regions at $\sim 16$ Myr \citep{pecaut2016}. \cite{squicciarini2021} have used {\it Gaia} EDR3 astrometry and spectroscopic radial velocities to show that about half the Upper Sco population is in the form of a clustered population that existed in more compact configurations in the past, whilst the rest is more diffuse. They suggest star formation proceeded over about 10 Myr in small groups that gradually dissolve; indeed the deduced kinematic ages (see Section \ref{sec:kinages}) of the compact subgroups are younger than their isochronal ages. In a similar exercise for Vela OB2, \cite{cantat-gaudin2019} found seven subgroups with ages from 8 to 50 Myr. Age was not strongly correlated with position, but was correlated with kinematics suggesting a more turbulent than sequential star forming history. See Section~\ref{sec:propagation} for a more detailed discussion of the star formation history within associations.
In all these works it is explicitly (or implicitly) assumed that the model-dependent ages are accurate. Whilst there can be some confidence in the {\it relative} ages (or at least the age order) of different groups of stars, their absolute ages are more uncertain. There is a long-standing problem \citep[see][and references therein]{bell2013, pecaut2016} that more massive stars tend to have older isochronal ages than their lower mass siblings by factors of $\sim 2$. Whilst there are undoubtedly problems to solve in the high-mass stellar modelling, it seems likely, with the emergence of evidence from PMS low-mass eclipsing binary systems \citep{kraus2015, david2019} and lithium depletion \citep[][]{jeffries2017} in associations, that fitting conventional low-mass isochrones underestimates the ages (and masses) of low-mass PMS stars by factors of $1.5$--$2$ and that the adoption of models incorporating rotation, magnetic activity and starspots may bring these ages into much closer agreement with those of high-mass stars \citep[e.g.,][]{feiden2016, somers2020} -- high-mass stars can also appear younger in the HR diagram if they are born rotating sufficiently rapidly \citep[e.g.,][]{ekstrom2012}. These uncertainties must be considered when discussing proposed scenarios where low-mass star formation is affected or triggered by the birth or deaths of high-mass stars.
\subsection{Kinematic ages}
\label{sec:kinages}
Kinematic ages are often cited as providing a model-independent age for a group of stars, since they do not rely on any stellar physics. They are calculated from the time the system needs to expand from an initially compact configuration at its current rate to reach its present size \citep[e.g.,][]{lesh1968,blaauw1978,makarov2007}. Their validity as a model-independent age is based on the assumption that the group of stars has expanded, unhindered, from an initially compact state to its current configuration, and that the expansion began at, or shortly after, the birth of the stars.
Kinematic ages have been calculated for many different associations and moving groups \citep[e.g.,][]{Ducourant2014}. Historically, these have often disagreed with isochronal ages calculated for those groups \citep[e.g.,][]{soderblom2010}. However, the combination of improved {\it Gaia} astrometry, a revision of the pre-MS evolutionary age scale \citep[e.g.,][]{bell2013}, and improved kinematic age calculation methods \citep[e.g.,][]{miret-roig2018,crundall2019} has resolved many discrepancies. For example, the $\beta$ Pic moving group has recently been calculated to have a kinematic age of $18.3^{+1.3}_{-1.2}$~Myr \citep{crundall2019}, in excellent agreement with its lithium depletion boundary age of $21 \pm 4$~Myr \citep{binks2014}.
The reliability of kinematic ages is dependent on a number of assumptions \citep[see][for a general critique of this method]{soderblom2014}. Foremost amongst these is the assumption that the time at which the stars were closest to each other was when they formed. An example relevant to this is the $\lambda$ Ori system, whose isochronal age suggests it is much younger than its kinematic age \citep{kounkel2018}. If the isochronal age is correct, and not underestimated (see Section \ref{sec:ages} for a discussion on the reliability of isochronal ages), it would imply that the stars in the $\lambda$ Ori group may have formed from material that was already expanding. If this is a common occurrence it raises serious questions over the validity of kinematic ages. Related to this is the assumption that associations formed in a compact configuration (see Section \ref{sec:expansion}, and in particular the evidence for anisotropic expansion), with equally serious implications.
\citet{brown1997} performed N-body simulations to test the validity of kinematic ages and how astrometric uncertainties and the different methods employed to measure kinematic ages can affect the results. Their results suggest that the traceback method will underestimate the age of the association (and overestimate its initial size), with ages converging to $\sim$4~Myr, while the method of comparing velocity with position in a given dimension can also lead to significant uncertainties. Approaches such as the forward-modelling method used by \citet{crundall2019} have the potential to address some of these issues, but could be computationally time-consuming for large systems.
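The classical traceback method discussed above can be illustrated with a toy model: rewind each star's position linearly in time and find the look-back epoch at which the group is most compact. This sketch deliberately ignores gravity, the Galactic potential and measurement errors, all of which bias real traceback ages (as the \citet{brown1997} simulations show); it is not the forward-modelling approach of \citet{crundall2019}:

```python
import numpy as np

def traceback_age_myr(pos_pc, vel_kms, t_max=20.0, n_steps=2001):
    """Linear traceback: rewind positions x(t) = x0 - v*t and return the
    look-back time (Myr) at which the group is most compact, defined as
    the minimum of the summed positional variance over all axes."""
    kms_to_pc_per_myr = 1.0 / 0.978          # 1 km/s ~ 1.02 pc/Myr
    times = np.linspace(0.0, t_max, n_steps)
    sizes = [np.sum(np.var(pos_pc - vel_kms * kms_to_pc_per_myr * t,
                           axis=0))
             for t in times]
    return float(times[int(np.argmin(sizes))])

# Synthetic group that expanded linearly for 10 Myr from near a point
rng = np.random.default_rng(1)
vel = rng.normal(0.0, 2.0, size=(100, 2))               # km/s
pos = vel * (1.0 / 0.978) * 10.0 \
      + rng.normal(0.0, 0.5, size=(100, 2))             # pc
age = traceback_age_myr(pos, vel)                       # ~10 Myr
```

On this idealised input the true age is recovered; adding realistic astrometric errors to `vel` is what drives the convergence towards $\sim$4~Myr found by \citet{brown1997}.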
\subsection{Distribution within the Galaxy}
\label{sec:distribution}
\begin{figure*}
\vspace{-1cm}
\centering
\includegraphics[width = 15cm]{map_plane_zoomin_and_dust.pdf}
\caption{Left: Density distribution of OBA stars from \cite{zari2021} in the Galactic plane. The density is displayed in arbitrary units. The Sun is at $X, Y = 0,0$. The x-axis is directed towards the Galactic centre, and the y-axis in the direction of Galactic rotation. The z-axis is perpendicular to the plane. The dashed circles have radii from 1 to 6 kpc. The black crosses indicate the position of the OB associations listed in \cite{wright2020}. The black lines show the spiral arm model from \cite{reid2019}. Top Right: Zoom-in of the OBA star density map shown on the left. The white stars represent YSO groups identified by \cite{Kuhn2021a}. The black thin line shows the location of the Sagittarius-Carina arm. Bottom right: Density distribution of dust from \cite{Lallement2019}. As in the left plot, black crosses indicate the position of OB associations listed in \cite{wright2020}. The thick gray lines represent the approximate location of the Radcliffe wave (left) and the Split (right).}
\label{fig:map_plane}
\end{figure*}
Since the seminal work by \cite{morgan1953}, OB stars and OB associations have been used to trace the spatial distribution of young stars in the Milky Way, and to probe the spiral arms of the Galaxy. In recent years many studies
have linked the spatial distribution of young stars, young clusters, and associations in the solar neighbourhood ($d < 500-1000$~pc) into larger structures.
\cite{bouy2015} studied the spatial distribution of O and B-type stars within 500 pc of the Sun. They suggested that the distribution of OB stars in the solar neighbourhood is described by stream-like structures called `blue-streams'. Such blue-streams are associated with the three largest OB associations within 500 pc of the Sun: Orion OB1, Vela OB2 and Sco-Cen. The work of \cite{bouy2015} was based on data from the \textit{Hipparcos} satellite, and motivated \cite{zari2018} to perform a follow-up study using \textit{Gaia} DR2. \cite{zari2018} confirmed that the 3D structure of star forming regions in the solar neighbourhood is complex and filamentary, although they did not find evidence for the presence of the `blue-streams' hypothesised by \cite{bouy2015}.
Indeed, subsequent studies have found that numerous associations in the solar neighbourhood can be linked to two gaseous structures: the Radcliffe wave and the Split. The Radcliffe wave \citep{Alves2020, Zucker2020, Green2019} is a coherent and narrow structure around 2.7 kpc long, whose 3D shape is well described by a damped sinusoidal wave in the plane of the Milky Way. The Radcliffe wave corresponds to the densest part of the Local Arm of the Milky Way and includes star forming regions in Orion, Taurus, Perseus and Cygnus. The Split \citep{Lallement2019} is argued to be a long arm segment linking the Local and Sagittarius–Carina spiral arms, and includes the Sco-Cen association. Figure \ref{fig:map_plane} (bottom right) shows a schematic representation of the Radcliffe wave and the Split, plotted on top of the dust density map from \cite{Lallement2019}.
Thanks to data from \textit{Gaia} it has been possible to characterise the distribution of associations and young stars beyond the solar neighbourhood. \cite{zari2021} estimated and studied the 3D distribution of OBA stars in the Milky Way disk within 4-5 kpc of the Sun. Figure \ref{fig:map_plane} (left) shows the distribution of the OB associations from the list presented in Table 1 of \citet[][and references therein]{wright2020} projected on the Galactic plane, plotted on top of the density distribution of the filtered sample of OBA stars from \cite{zari2021}. In general, the distribution of OB associations and OBA stars trace similar structures. These structures can be identified as part of the Sagittarius (or sometimes Sagittarius-Carina) Arm towards the inner Galaxy, the Orion (or sometimes Cygnus-Orion) Spur approximately at the Sun's position, and the Perseus Arm towards the outer Galaxy. There are however a number of important differences. The distribution of OBA stars shows a strong over-density corresponding to the Scutum-Centaurus arm towards the inner Galaxy (at approximately $X \sim 2$ kpc). Only two associations seem to be loosely linked with this arm suggesting our census of OB associations is incomplete at such distances. Many associations also do not seem to correspond to any significant over-densities in the OBA star distribution: such associations might have too low a density to appear on Fig. \ref{fig:map_plane} or may just be over-densities of OB stars in the sky. Finally, the mean distances to numerous associations derived in the literature do not correspond to the distances of the over-densities in Fig. \ref{fig:map_plane}, although they are towards the same lines of sight. This calls again for a revision of the census, membership, extent and distances of associations \citep[see e.g.,][]{quintana2021}.
Finally, young stellar objects (YSOs) can also be used to trace Galactic structure.
\cite{Kuhn2021b} presented a catalogue of candidate YSOs, and found groups of YSO candidates associated with the Local, Sagittarius-Carina, and Scutum-Centaurus arms. \cite{Kuhn2021a} used the same catalogue and focused on a linear feature between Galactic longitudes $l \approx 4^{\circ}$--$18.5^{\circ}$ including the star forming regions M8, M16, M17 and M20. Figure \ref{fig:map_plane} (top right) shows the YSO groups associated with this structure compared to the OBA star density map of \citet{zari2021} (Fig. \ref{fig:map_plane} left). The structure traced by the YSO groups corresponds to a prominent over-density that is also visible in the OBA star map and mapped out by the previously-known Sagittarius, Scutum and Serpens OB associations in this region from \citet{wright2020}. The structure does not seem to be isolated, but appears to be a feature of the Sagittarius-Carina arm, similar to those observed in external galaxies.
\section{Discussion}
Historically, star clusters have been much better studied than associations, due to the observational bias that clusters are easier to study and categorise: they are spatially much more compact and suffer far less from back/foreground contamination. Arguably, this has led to a bias towards clusters being considered much more important than associations as sites of star formation. The frequent use of the phrase `most stars form in clusters' is usually supported by a reference to \citet{lada2003}. However, Lada \& Lada define a `cluster' as a group of 35 or more physically related stars whose stellar mass density exceeds $1.0 M_\odot$ pc$^{-3}$ -- a broad definition which says nothing about boundedness. A more correct statement, and one more consistent with Lada \& Lada's view, would be `most stars form at significantly higher densities than the field', followed by `and around 10\% of stars remain in bound clusters at significantly higher densities for $>10$ Myr'.
\subsection{Formation and origin}
There are many theories as to how GMCs form, and how they turn a globally supersonic medium into star-forming `units'. For a detailed overview of star formation, and, in particular, how GMCs convert gas into stars, we direct the reader to other chapters in this volume, such as Chapters 1, 5 and 7. For the purpose of this chapter the important point is that when we observe stars with ages of a few Myr we observe them at both relatively low densities in associations and at high densities in bound clusters. A key question in star formation is: why?
At one extreme is the view that all stars form at high densities in bound clusters, but that the vast majority of these clusters are very rapidly dispersed by gas expulsion. The essence of this idea is that an embedded star cluster in virial equilibrium will expel the residual gas left over from star formation thanks to feedback from massive stars on a timescale of a few Myr. Since the star formation efficiency is generally quite low and therefore the majority of the mass is still in the form of gas, expelling this gas will dramatically reduce the potential of the cluster causing it to unbind and quickly disperse \citep[e.g.][]{hills1980,kroupa2001,goodwin2006}. Associations are then the expanding remnants of one (or more) originally dense star clusters\footnote{For much of the 1990s and 2000s this was one of the dominant pictures of star formation and an underlying assumption in much work on clusters e.g. \citet{goodwin2006}.}.
It has become clear that the view of associations as the remains of (a small number of) expanded clusters does not fit the observational data. Recent studies have revealed that associations are more spatially extended than previously thought, and that they present a high degree of substructure in physical space, kinematics, and age (see Section 3). This suggests that associations are globally {\em dynamically} very young\footnote{Note that dynamical age and chronological age can be very different. The former is a measure of age in terms of crossing times, which, in unbound systems, is effectively infinite (some relaxation can occur on small scales and within substructures, and unbound initially high density regions will have encounters early-on that can erase substructures, but the point stands).} \citep{parker2014}, since any form of substructure is easier to erase than to form -- any encounters within a group of stars will erase substructure within a few crossing times, which can be particularly rapid in bound stellar systems \citep[e.g.,][]{goodwin1997,goodwin2004,parker2014}. This strongly suggests that associations form with a similar spatial configuration as we currently observe them: over large volumes, with considerable substructure, at low average densities (though some stars may form at higher densities in subclusters), and most likely globally unbound.
The kinematic and physical substructure observed in associations reflects that of GMCs. Observations of cores within GMCs show stars often form at relatively low densities with significant substructure. The detailed large-scale sub-mm maps from {\em Herschel} show cores (unsurprisingly) following the complex gas structures. Two particularly good examples are Aquila \citep[][see their fig. 1]{konyves2010}, and Orion B \citep[][see their fig. 5]{konyves2020}. Similar structures are seen in very young class I/II stars traced by e.g. {\em Spitzer} \citep{gutermuth2009}. We would expect the distributions of cores and stars we see over scales of several pc in e.g. Aquila and Orion B to be only marginally bound or unbound \citep[e.g. from Larson's laws,][]{larson1981}. Therefore, it is difficult to imagine these regions evolving into anything other than associations (with maybe some small subclusters).
\subsection{Propagation of star formation in OB associations}
\label{sec:propagation}
Whilst complex and difficult to measure, the ages of stars and age distributions within associations provide clues as to how star formation has proceeded in different regions.
The classical model for the formation of OB sub-groups in associations was proposed by \cite{elmegreen1977}. This model predicts that ionising radiation and winds from massive stars terminate the star formation process and drive shocks in other parts of the parental cloud. New generations of massive stars are born, and the process is repeated. As a consequence, low-mass stars should be older than OB stars, since only OB stars are formed by triggering, while low-mass stars should form spontaneously throughout the cloud. Another popular model is based on the mechanism of radiation driven implosion \citep{Sandford1982, kessel2003, bisbas2011, haworth2012}. According to this model, low-mass stars should be younger than the OB stars (which initiate their formation), and one may expect to see an age gradient in the low-mass population \citep{preibisch2007}. However, from an observational perspective, there is currently no convincing evidence for either of these mechanisms.
Being the closest OB association to the Sun ($d\sim140$~pc), Sco-Cen has provided an ideal laboratory for testing theories of the propagation of star formation \citep{preibisch1998, preibisch2002, preibisch2007}. Following the pre-\textit{Gaia} study by \citet{pecaut2016} and \textit{Gaia} studies mentioned above, \cite{krause2018} and \cite{kerr2021} proposed more complex models for the star formation history of Sco-Cen, which take into account the increasing layers of complexity emerging from the data. \cite{krause2018} combined gas observations and hydrodynamical simulations to study the formation of the Sco-Cen super bubble, and suggested the following scenario for the evolution of the association. Dense gas was originally distributed in an elongated cloud, which occupied the current area of the association. The star-formation events in Upper Centaurus Lupus and Lower Centaurus Crux led to super-bubbles that expanded, surrounding and compressing the parental molecular cloud, and triggering star formation in Upper Scorpius. This scenario predicts the formation of kinematically-coherent sub-groups within the association that move in different directions, similar to the observed kinematics in Sco-Cen \citep{wright2018}. \cite{krause2018} also predict that young groups could be found in regions containing older stars, and that several young groups with similar ages might form over large scales. This is consistent with what is observed in Sco-Cen and other associations. The ages derived by \cite{kerr2021} however seem inconsistent with the model proposed by \cite{krause2018}. \cite{kerr2021} argue that Krause's `surround and squash' scenario might act in certain regions of the association, while in others a simple model of sequential star formation might have been enough for the propagation of star formation. It could therefore be that multiple methods of star formation propagation have acted within a single association.
The shell-like distribution of stars and gas in the Vela OB2 complex suggests an episode of triggered star formation that shaped the gas into a shell, causing it to compress and form stars \citep{cantat-gaudin2019b}. This would indicate that the expansion of the IRAS Vela shell preceded the formation of the stars in Vela OB2, with the expanding motion of the gas imprinted on the stars that formed. The mechanism responsible for shaping the distribution of stars and gas would have to be energetic enough to not only compress gas into forming stars, but also induce the expansion of the IRAS Vela shell, which \citet{cantat-gaudin2019b} suggest was most likely a supernova.
A similar conclusion was drawn in Orion by \citet{kounkel2020} and \citet{grossschedl2021}, who independently concluded that a major feedback event occurred in the region $\approx$~6~Myr ago. Such an event shaped the spatial distribution and kinematics of the gas and young stars that are observed today, and possibly triggered the formation of some of the youngest stellar groups in the region. Although \citet{kounkel2020} and \citet{grossschedl2021} studied the motions of different groups as a function of time, they did not assess the presence of age gradients at the time such groups were formed (instead assessing them at the current time). This may be important to better explain the formation history of associations and potentially to evaluate the importance of triggering. The star formation history of the Orion OB association is however not completely understood. The age distribution of the groups identified for instance by \cite{kounkel2018} and \cite{zari2019} has not produced a clear picture of the progression of star formation across the association.
It is interesting to compare the large age spreads and seemingly complex star formation histories of associations with those of clusters. In particular, the association Cyg OB2 and the young massive cluster Wd 1 are both a few $\times 10^4 M_\odot$ with an age of $\sim 5$ Myr \citep{brandner2008,wright2015}. However, Cyg OB2 seems to have a significant age spread \citep{wright2015}, whilst Wd 1 seems to have formed in a single burst of star formation \citep{brandner2008}. Presumably the very different star formation histories are telling us about the properties of the (presumably fairly similar mass) GMCs from which they formed. The reason why some GMCs produce massive OB associations, while other GMCs of similar mass produce massive clusters is, however, still far from clear.
\subsection{Dynamical evolution and dispersal}
OB associations are massive, extended stellar structures whose low density suggests they are globally unbound. Their expansion rates are sufficiently low (or are limited to one or two dimensions) that they remain co-moving for some length of time and form elongated structures. Many elongated, and relatively old, association-like structures have been discovered in recent years using {\it Gaia} data.
\cite{jerabkova2019} reported the discovery of a $\approx$~17~Myr population, clustered in proper motions, but filamentary and extending for $\approx$90~pc in physical space, which they interpret as a relic of star formation in a molecular cloud filament. An analogous structure was also reported in Vela by \cite{beccari2020}, extending $\sim$260~pc and with an approximate age of 35~Myr. \citet{kounkel2019} also identify a number of filamentary structures orientated parallel to the Galactic Plane and with ages up to $\sim$100-200 Myr (young enough that they have not completed a full Galactic orbit) -- co-moving, filamentary and extended groups much older than this appear difficult to identify using currently-available data and methods \citep{kounkel2019}.
These groups are too young to have been significantly affected by tidal forces and therefore their extended structure is likely to be mostly primordial. As associations are thought to form from filamentary molecular clouds, the resulting young population should retain a similar string-like morphology, which would explain the observation of these extended structures. Furthermore, since the dissolution of associations is not completely isotropic (see Section~\ref{sec:expansion}), and in some cases shows preferential expansion in the X and Y directions of the Galactic Plane \citep[e.g.,][]{wright2018}, this may explain the origin of such extended and filamentary structures.
As a population of stars orbits around the Galaxy, its velocity dispersion increases due to dynamical evolution, tidal forces (such as from passing stars, nearby GMCs, and the Galaxy itself) and shear due to Galactic rotation. Over time stars in the group will slowly drift away and eventually most of them will have dispersed into the Galactic field population. Some of the stars may still remain as a coherent group for several tens if not hundreds of Myr, with populations that were originally more massive retaining a larger fraction of their members as co-moving for longer periods of time. \citet{kounkel2019} identified a large number of co-moving groups in the Milky Way using {\it Gaia} DR2 data, and found that the number of stars within a co-moving and coherent group decreases with time according to a power law.
\section{\textbf{Implications}}
Over the last decade there have been many changes to our view of OB associations and their origins, and these have significant implications for our understanding of the star and planet formation processes.
\subsection{At what densities does star formation take place?}
A key question for star formation studies in galaxies is in what sort of environments do stars typically form? Do most stars form at the fairly low stellar densities observed in associations? Or do they often form at higher (cluster-like) densities and disperse? Or are both important \citep[e.g.][]{bressert2010,kruijssen2012}? This is a question with potentially significant implications for star formation: do stellar systems form in cores that are essentially `isolated', or do dynamics and encounters play an important role in the formation of stars and mass assembly?
The distinction is typically made as to whether a particular region is a `cluster' or an `association' from its total energy: clusters are bound, and associations are unbound \citep[e.g.][]{gieles2011}. However, from the point of view of star formation, multiple systems or planets, a more useful distinction is if stellar systems have ever `known' about other systems. This relies on their `density history' -- in particular, the highest density environment they have ever been in, and for how long.
There is a `critical' density in a star forming environment of $\sim$100 $M_\odot$ pc$^{-3}$ above which star forming cores interact with each other. This comes roughly from how closely one can pack 0.1 pc radius cores with a lifetime of $\sim$0.2 Myr and velocity dispersion of $\sim$1 km s$^{-1}$ \citep{goodwin2007} -- above this density cores will interact with each other while forming stars and not be `isolated'. Even if a region at this density is unbound, encounters will occur before it expands (unless the expansion velocity is 10s of km s$^{-1}$).
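The packing argument above can be sketched as a quick order-of-magnitude calculation. The core radius, lifetime and velocity dispersion follow the text; the per-core mass ($\sim 2\,M_\odot$) and the ``centres pass within $2R$'' encounter criterion are illustrative assumptions of ours, not values taken from \citet{goodwin2007}:

```python
# Rough numerical version of the 'critical' core-density packing
# argument. Core radius, lifetime and velocity dispersion follow the
# text; the per-core mass (~2 Msun) and the within-2R encounter
# criterion are illustrative assumptions.
import math

R_core = 0.1    # core radius [pc]
t_core = 0.2    # core lifetime [Myr]
sigma_v = 1.02  # velocity dispersion [pc/Myr] (~1 km/s)
m_core = 2.0    # assumed mass per core [Msun]

path = sigma_v * t_core                    # distance travelled in one lifetime [pc]
cross_sec = math.pi * (2.0 * R_core) ** 2  # encounter cross-section [pc^2]

# Demand ~1 encounter per core lifetime: n * cross_sec * path ~ 1
n_crit = 1.0 / (cross_sec * path)  # core number density [pc^-3]
rho_crit = n_crit * m_core         # mass density [Msun pc^-3]

print(f"n_crit ~ {n_crit:.0f} cores pc^-3, rho_crit ~ {rho_crit:.0f} Msun pc^-3")
```

With these inputs the estimate lands at roughly $10^2\,M_\odot$ pc$^{-3}$, consistent with the critical density quoted above.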
If, as seems clear from the data on associations, most of the stars in associations have always been at low density, then these systems will be `pristine'. That is, they have not been altered by external encounters/irradiation (however, secular effects may be important) and it will be important to compare the properties of such stars, together with their circumstellar environments and planetary systems with those found in dense clusters.
\subsection{Do binary and multiple systems evolve differently in clusters and associations?}
A significant fraction of stars are thought to form in binary and higher-order multiple systems \citep[see reviews by][]{goodwin2007,reipurth2014,duchene2013}. Binary and multiple systems can be changed or destroyed by secular decay (for higher-order multiple systems) and by external encounters. Secular decay should be environmentally-independent (although small external perturbations could play some role) and occurs on order of the multiple system's crossing time (typically 10s to 100s kyr). For example, a triple system would usually be expected to decay to a (hardened) binary and a single star \citep[e.g.][]{goodwin2005,reipurth2010}. The effect of close encounters depends on both the frequency and energy of encounters \citep{heggie1975,hills1975,kroupa1995} -- both of which are density-dependent \citep[though it is generally a very stochastic process,][]{parker2012b}. The more encounters and the more energetic they are, the more systems we would expect to be destroyed (or at least significantly altered).
The effect of both secular decay and close encounters is therefore to decrease the multiplicity fraction (MF) of a group of stars over time. Unfortunately, these two effects are extremely difficult to disentangle as they both lead, in a very stochastic way, to a reduction in the MF.
Observations seem to show that multiplicity decreases with age. The MF for Class 0 YSOs is close to unity \citep{chen2013,tobin2019}, but by the time these sources reach the Class I and Class II stage the MF is closer to what is typically observed in the field \citep[see][]{duchene2013,reipurth2014}. However, the simplifying assumption of a single initial population of multiple systems which are then altered by dynamics \citep[e.g.][]{kroupa1995,kroupa2001} seems difficult to justify {\it a priori}. This is because if the star formation environments are different enough to produce an association versus a cluster, one might well expect that the initial multiplicity properties of stars would be different. This leaves us with attempting to reverse-engineer from current populations the stochastic history of secular decays and external encounters on potentially different initial multiple populations which could give rise to the currently observed populations.
From our earlier discussions that most stars in associations seem always to have been at low density, while stars in clusters we know have spent a significant fraction (if not all) of their lives at high density, we would expect the multiplicity properties of clusters and associations to be different as the encounter/irradiation histories of stars within them were different. There is some evidence supporting this view. In the low-density Taurus Molecular clouds the multiplicity fraction is very high, $\sim$66--75\%, in excess of what is found in the field by a factor of $\sim$2 for stars of a comparable mass. In contrast, in the dense Orion Nebula Cluster (ONC), the multiplicity fraction at separations of 67.5--675 AU is consistent with the field \citep{reipurth2007}.
The dependence of the MF on both local stellar density and binary/multiple system separation is also interesting, with wide ($>$1000 AU) binaries particularly sensitive to environment. For example in the low-density Taurus Molecular cloud and Cygnus OB2 association there are many such wide binaries \citep{kraus2011,caballero-nieves2020}, while in the dense ONC no binaries with separations over 1000~AU have been conclusively identified \citep{scally1999}. These observations can help place limits on the density history of such binaries. For example, \citet{griffiths2018} showed that the high fraction of wide binaries amongst massive stars in Cyg OB2 was inconsistent with them being born in a dense cluster with other massive stars, because wide binaries in clusters will be destroyed or hardened by the presence of another massive star.
For close binaries ($<$100 AU), all nearby low-density regions that have been studied appear to show a MF that is very similar to the field for separations of 60--100 AU (with only a tentative hint of a density dependence) but an excess of systems $< 60$ AU by a factor of roughly two \citep{king2012}. Separations of $< 60$ AU are interesting as these systems are `hard' in all local environments and so should survive into the field. So an excess in local regions over the field of a factor of two suggests some environments must under-produce fairly close systems. The obvious suspects for under-production are clusters. The ONC is the only dense cluster close enough for detailed multiplicity studies, and it has been difficult to resolve close ($<60$ AU) companions in the ONC. A recent search for close companions in the ONC using high-resolution imaging has shown a similar excess in the MF at 20--60 AU as in low density regions: twice the field \citep{duchene2018}. However, this effect may be limited to particular primary mass ranges \citep{de-furio2019,de-furio2021}. It is also problematic to extrapolate from a sample of one cluster.
At intermediate separations (100--1000 AU) the picture is still unresolved. There is evidence that stars in the denser regions of the Orion Molecular Clouds have a higher MF at separations of 100--1000 AU than the regions with lower stellar density \citep{kounkel2016,tobin2022}. Thus, despite being presumably much more sensitive to destruction via encounters than low density populations, clustered regions can seemingly produce a larger fraction of companions at some separation ranges. This could be primordial, it could be due to the tightening of previously wider binaries, or it could be due to the loose capture of stars into multiple systems \citep{moeckel2011}.
In summary, it is currently impossible to say with any certainty if or how birth multiplicity varies with environment (or if it is universal), or how it is processed differently in different environments. In the future, in light of the larger census of YSOs that can now be assembled with a variety of different surveys such as {\em Gaia}, and with new facilities coming online that offer increasingly better spatial resolution, it may be possible to further improve multiplicity statistics in a larger number of star forming regions, to better determine the role of the environment in the underlying processes governing both the initial multiplicity of a population and its secular decay, and the effect of external encounters.
\subsection{Do protoplanetary disks and planetary systems evolve differently in clusters and associations?}
There are two external effects that can alter the properties of planetary systems: encounters and irradiation. Encounters can affect both disks and ($N$-body) planetary systems, while irradiation only affects disks. The stellar density of the environment surrounding a young star (and the evolution of that density) determines both of these effects through the likelihood of close encounters and the local radiation and particle field.
In dense clusters, disks may be photoevaporated by EUV radiation from nearby O-type stars \citep[typically when within 1~pc, e.g.,][]{winter2018b,cai2019,parker2021}, while even a brief period ($< 1$ Myr) at high density is enough to alter planetary system properties \citep[see][and references therein]{parker2020}. At lower densities in associations these effects should be different and reduced in comparison \citep[e.g.][]{laughlin1998,clarke1993,adams2006,wright2012,portegies-zwart2015, vincke2015, nicholson2019,winter2019}.
Current observational evidence is limited, but suggests that disks are affected by their environment. \citet{dejuanoverlar2012} found evidence for a decrease in the protoplanetary disk radius at very high cluster densities, but their sample of sources at such high densities was small. \citet{eisner2018} made a similar discovery in the ONC, when compared to lower-density star forming regions. \citet{guarcello2016} found that the fraction of stars with protoplanetary disks is inversely proportional to the local ultraviolet flux in Cyg~OB2, suggesting disk dissipation is dominated by photo-evaporation and therefore strongly dependent on environment.
The extreme mass loss rate in the protoplanetary disks in the ONC from photoevaporation from $\theta^1$ Ori C \citep{henney2002} has raised the question of how these disks can survive for a prolonged period of time to be currently detected, without invoking implausibly large initial disk masses or unrealistic ages for $\theta^1$ Ori C, given the typical ages of other stars in the vicinity. However, the age gradient of stars in the ONC \citep{beccari2017} may provide a solution to the problem: older stars have had a greater shielding from photoevaporation through the surrounding dust, allowing them to circulate throughout the cluster and migrate away from $\theta^1$ Ori C. On the other hand, the most strongly irradiated stars are the youngest, with the greatest mass reservoir remaining \citep{winter2019a}.
Further measurements of protoplanetary disk frequencies in OB associations will be needed to fully understand the influence of environment on disk evolution. To understand how the environment affects the properties (or frequency) of planetary systems will require large samples of planetary systems in star clusters and OB associations to be compiled. The current evidence and our understanding of planet formation do suggest that the properties of planetary systems should depend on environment, though this dependency could take many forms.
\section{\textbf{Future Prospects}}
Many upcoming facilities and instruments have the potential to significantly advance our understanding of associations. Here we discuss these observatories, as well as the prospects of future {\it Gaia} data releases.
\subsection{ Future {\it Gaia} data releases}
The data from the \textit{Gaia} satellite have allowed for significant advancements in our understanding of associations. Thanks to \textit{Gaia}'s exquisite astrometric precision, it has been possible to identify low-mass members of nearby associations (within 500~pc) and to study in detail their distribution in 3D physical space, their kinematics and ages.
With \textit{Gaia} EDR3 the precision of parallaxes and proper motions has improved on average by 20\% and by a factor of 2 respectively, compared to DR2. \textit{Gaia} DR3 (expected mid-2022) will further provide new and improved radial velocities (for $\sim$30 million stars out to $G\sim15-15.5$~mag, with precisions $<$1~km~s$^{-1}$ for the brightest stars) and astrophysical parameters for sources based on the BP/RP prism and RVS spectra (for the full content of \textit{Gaia} DR3 see \url{https://www.cosmos.esa.int/web/gaia/release}). {\it Gaia} DR4 will feature parallaxes that are more precise by a factor 1.7 with respect to \textit{Gaia} DR2, while for proper motions the gain will be a factor of 5.2 \citep{brown2021}.
The parallax precision improvement from future {\it Gaia} data releases, combined with a better understanding and treatment of the systematics, should allow the identification of pre-main sequence members of associations beyond 500 pc from the Sun, and potentially out to 2~kpc (for the most massive and brightest PMS stars). A major improvement in the study of nearby associations will come from the radial velocities provided in \textit{Gaia} DR3. This will be a uniform data set that will facilitate studies of the kinematic and dynamic properties of both the clustered and diffuse populations of associations. For example, between 40--50\% of the $\approx$ 40,000 pre-main-sequence star candidates published by \citet{zari2018} will have radial velocities in {\it Gaia} DR3, compared to the current value of 2\%.
\subsection{Multi-object spectroscopic surveys}
Spectroscopy is important in the study of associations for verifying the youth of candidate young stars (Section~\ref{sec:confirming}), providing accurate stellar parameters (allowing stars to be placed in the HR diagram and stellar ages estimated), measuring precise radial velocities (to facilitate full 3D kinematic studies) and abundances (for chemical tagging).
The next-generation of multi-object spectroscopic instruments will be ideal for studying associations due to their combination of wide fields of view (typically several square degrees) and large numbers of fibres. This will allow them to observe hundreds or thousands of stars at a time spread over large areas, thus making them ideally suited to the study of associations.
The first such facility to come online is SDSS-V \citep{Kollmeier2017}, whose northern and southern hemisphere telescopes have already embarked on an ambitious survey of the Milky Way and the Local Volume, including many thousands of both high- and low-mass young stars. The telescopes operate in both low- ($R = 2000$) and high-resolution ($R = 20,000$) modes in the optical and near-infrared, with $\sim$500 fibres distributed over 2.8 (northern) and 7 (southern) square degree fields of view.
WEAVE \citep[WHT Enhanced Area Velocity Explorer,][]{dalton2018} is the new multi-object spectrograph being installed on the William Herschel Telescope, with first light expected late 2022. WEAVE has a 3 square degree field of view and $\sim$1000 fibres that can observe in low- ($R = 5000$) or high-resolution ($R = 20,000$) modes. WEAVE will undertake 8 pre-planned surveys during its first 5 years of operation, including surveys targeting dispersed young stars and associations in the Galactic Plane, and open clusters.
In 2023, these instruments will be joined by ESO's 4MOST \citep[4-metre Multi Object Spectroscopic Telescope,][]{dejong2019} spectrograph, which has a 4 square degree field of view and $\sim$2400 fibres that observe in both low- ($R = 5000$--$7500$) and high-resolution ($R = 20,000$) modes synchronously. 4MOST will also operate surveys on a 5 year timescale, with multiple surveys of young stars, clusters and associations in the first slew of surveys.
\subsection{X-ray telescopes}
\label{sec:futurex-ray}
The eROSITA X-ray telescope was launched in 2019 and is performing a survey of the whole sky over 4 years, in the form of 8 successive all-sky maps taken over 6 months. The accumulated sensitivity to 0.2-8 keV X-rays will be about 25 times that of previous all-sky X-ray surveys \citep{predehl2021}. This has the potential to identify many candidate young stars across the broad areas covered by associations, by virtue of their magnetic and accretion-related activity (see Section~\ref{sec:identifying}).
The first results from eROSITA, based on the first scan of the sky, have now been released and a first look at the Sco Cen OB association was presented by \cite{Schmitt2021}. Already the sensitivity is good enough to detect M-type association members at $\sim 120$ pc with saturated levels of X-ray emission. Extrapolating to end-of-survey sensitivities, one can estimate that most low-mass PMS stars ($0.2 \leq M/M_\odot \leq 2$) will be identified out to $\sim 500$ pc and solar-type PMS stars should be detected well beyond 1 kpc. Since X-ray selection is unbiased by kinematic considerations, it is likely that these data will play a very important role in the study of the demography and structure of nearby associations over the coming decades.
\section{Summary}
OB associations are important objects to study as they represent the transition phase between clustered (or unclustered) star formation and the population of mature stars orbiting in the Milky Way. As such they are important not just for understanding the evolution of stellar clustering, but also provide samples for studies of stellar evolution (at both low and high masses) and the evolution of multiple systems, protoplanetary disks, and young planetary systems.
Identifying members of associations in a reliable and unbiased fashion is difficult due to their low densities and the large areas of sky they cover. Candidate young stars are often identified using a combination of optical and infrared photometry, astrometry, X-ray observations and/or variability. Validation of youth can be achieved using spectroscopy, by confirming the effective temperatures of high-mass stars or by measuring either the abundance of atmospheric lithium or surface-gravity sensitive spectral features in low-mass stars. Once a reliable sample of young stars has been obtained, they can be divided into groups or sub-groups using multi-dimensional (spatial and kinematic) data and a variety of clustering algorithms.
Recent work has uncovered significant levels of spatial, kinematic and temporal substructure in associations. This substructure has helped resolve the large-scale structure of associations, explain kinematic anisotropy and complexity, and resolve age spreads commonly observed in associations. Evidence for the expansion of associations has been mixed, with early studies finding no or limited evidence for their expansion, while more recent studies (that break down the structure of associations using spatial and kinematic information) find that expansion is a common feature of associations, though this expansion can be quite anisotropic. This in particular has implications for whether kinematic ages can be used as a model-free age indicator. The large-scale star formation history within (and between) associations does not show any obvious patterns that would suggest a single mechanism responsible for the propagation of star formation (e.g., triggering of some sort) and it may be that multiple mechanisms are at work at the same time. As associations disperse they seem to form extended, filamentary structures that can survive as coherent moving groups for up to $\sim$300~Myr. The origin of their filamentary morphologies may be a combination of primordial structure and the observed anisotropic expansion patterns.
These observations suggest a picture in which associations form as extended, highly substructured systems with a low average density (albeit with some over-densities). Some of the over-densities may be dense enough to form clusters, and some of these may survive as long-lived open clusters, while others expand and disperse, often anisotropically, thereby preserving an extended and filamentary structure for several hundreds of Myr.
The low densities at which many stars in associations form has implications for the star formation process and the formation and evolution of binary and multiple systems, protoplanetary disks, and young planetary systems. Full observational tests of the predictions made by such implications have yet to be carried out, but preliminary studies suggest some differences in the products of star formation between stars in dense clusters and low-density associations.
\bigskip
\textbf{Acknowledgments.} NJW acknowledges an STFC Ernest Rutherford Fellowship (grant number ST/M005569/1) and a Leverhulme Trust Research Project Grant (RPG-2019-379).
\bigskip
{\small
\bibliographystyle{pp7.bst}
\section*{Appendices}
\subsection*{The ER=EPR `counterexample'}
{\em Classical 3-Manifold structure}: We start with the prime counterexample considered in the ER=EPR proposal \cite{Maldacena13}.
This consists of a pair of black holes connected by an Einstein Rosen
bridge. (Physically, this might correspond to what is produced by a
pair-creation event.)
\begin{figure}[th!]
\includegraphics[width=55mm]{Fig4a.pdf}
\vskip -0.1in
\includegraphics[width=55mm]{Fig4b.pdf}
\vskip -0.1in
\caption{
Structure of a pair of black holes connected by an Einstein Rosen (ER)
bridge. The dashed red lines on the Penrose diagrams (left) denote
the space-like hypersurfaces used to construct the embedding diagrams
(right). Case: (a) when the pair are initially formed (say by a
pair-creation event) the horizons coincide; and (b) at a
later time, when the horizons are separating, through the ER bridge, at the
speed of light.
\label{Fig4}
}
\end{figure}
If we ignore evaporation, the black hole exteriors are static and
eternal. Their joint Penrose diagram is shown in Fig.~\ref{Fig4} as the
left and right black diamond shapes of either of the left-hand figures.
Each dashed red line denotes some specific space-like hypersurface (a
specific time slice). In Fig.~\ref{Fig4}(a), this time slice is chosen
when the two exteriors touch at the center of the left-hand figure. The
black $45\degree$ diagonal lines passing through the center of this
figure represent the horizons of the two black holes respectively. With
regard to the scenario where these black holes are created by some
pair-creation process, this ``$t=0$'' time slice would correspond to
their initial creation event, and the Penrose diagram loses any meaning
for earlier times. The right-hand diagram is the spatial embedding
diagram corresponding to this initial time slice. Far from either black
hole, externally, the embedding diagram looks flat. As one approaches
either black hole from the outside, one approaches a horn-like structure
on the embedding diagram. The horn structure terminates at the horizon,
denoted as a vertical black ring that encircles the horn. For the $t=0$
hypersurface the horizons of the two black holes coincide.
Fig.~\ref{Fig4}(b) shows the same black hole pair, but a later
hypersurface is chosen (left-hand figure). The corresponding embedding
diagram (right-hand figure) shows that the two horizons have separated
and are connected internally by a bridge --- the so-called
Einstein-Rosen (ER) bridge. The proper length of this bridge grows very
rapidly (at roughly the speed of light) so there is no possibility of
passing from the exterior of one black hole to the exterior of the
other. The pair forms an example of a non-traversable wormhole.
\begin{figure}[b!]
\includegraphics[width=60mm]{Fig5.pdf}
\vskip -0.1in
\caption{
Schematic description of entanglement across the horizons for our black
hole pairs. Entanglement is sketched on the embedding diagrams of the
3-manifolds based on a field theoretic formulation on a lattice. Each cell
(small blue circle) denotes an individual lattice site. For clarity,
only those cells neighboring the horizons are shown. Entanglement across
the horizons is shown by (light blue) lines connecting cells at three
epochs: (a) the initial hypersurface when the horizons coincide; (b)
when the horizons are separated by $O({\text{Planck length}})$; and (c)
on a later hypersurface.
\label{Fig5}
}
\end{figure}
{\em Quantum fields and entanglement}: Continuing to ignore evaporation,
we can consider quantum fields propagating on the time-evolving family
of spatial 3-manifolds corresponding to the family of embedding diagrams
for different hypersurfaces (time slices). Provided we stay away from
the singularity (wavy blue line at the top of the Penrose diagrams) all
these 3-manifolds are smooth and locally flat. The lowest energy states
(vacuum) of these quantum fields will then not be too different from the
structure of vacuum in flat spacetime. In particular, there will be
entanglement across all boundaries. This may be formalized, for example
on a quantum field theoretic setting on a lattice \cite{BDS13,Braunstein13},
with the typical lattice spacing determining the UV cutoff as Planckian.
It is sufficient for our purposes here to consider schematic pictures of
the entanglement on such a lattice representation of these 3-manifolds.
Fig.~\ref{Fig5} shows such a schematic representation of entanglement on
these 3-manifolds. The lattice sites are represented as small blue
circles. For clarity, only those circles neighboring horizons are shown.
Entanglement across the horizons is shown as pale blue lines connecting
lattice sites. As can be seen in Fig.~\ref{Fig5}(a) the initial $t=0$
hypersurface does indeed show entanglement across the joint horizon. The
exteriors of the two black holes are indeed entangled on this time
slice.
In Fig.~\ref{Fig5}(b), we see the entanglement for a hypersurface at
$t=O({\text{Planck time}})$. Once the horizons have separated by even a single lattice site, with lattice spacing presumed to be $O({\text{Planck length}})$, any entanglement between the black hole exteriors must involve a set of intermediary lattice sites. When these intermediary sites are
traced out the original entanglement will almost have vanished. As the
separation between the horizons increases the entanglement between the
black holes is exponentially suppressed, effectively vanishing. Thus,
on a later hypersurface, such as Fig.~\ref{Fig5}(c), there will be no
entanglement between the black holes.
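The exponential suppression described above can be illustrated with a deliberately simple toy model: the mutual information between the two end sites of a short, gapped spin chain, as intermediary sites are added and traced out. This is a cartoon of the mechanism only, not the field-theoretic lattice computation of Refs.~\cite{BDS13,Braunstein13}; the chain type (transverse-field Ising) and the couplings $J$, $h$ are arbitrary choices of ours.

```python
# Toy illustration of entanglement suppression with separation: mutual
# information between the two END sites of a gapped transverse-field
# Ising chain, with the intermediary sites traced out. Exact
# diagonalization; a cartoon of the mechanism, not the cited
# field-theoretic lattice computation. J and h are arbitrary choices.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_at(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def entropy(rho):
    """von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def end_site_mutual_info(n, J=1.0, h=1.5):
    """I(site 0 : site n-1) in the ground state, middle sites traced out."""
    H = sum(-J * op_at(Z, i, n) @ op_at(Z, i + 1, n) for i in range(n - 1))
    H += sum(-h * op_at(X, i, n) for i in range(n))
    psi = np.linalg.eigh(H)[1][:, 0]      # ground state (h > J: gapped phase)
    t = psi.reshape(2, 2 ** (n - 2), 2)   # indices: (end, middle, end)
    rho_ab = np.einsum('amb,cmd->abcd', t, t).reshape(4, 4)  # trace out middle
    rho_a = np.einsum('amb,cmb->ac', t, t)
    rho_b = np.einsum('amb,amd->bd', t, t)
    return entropy(rho_a) + entropy(rho_b) - entropy(rho_ab)

for n in (4, 6, 8):
    print(n, end_site_mutual_info(n))
```

Running this shows the end-to-end mutual information falling off rapidly as the number of intermediary sites grows, in line with the effective vanishing of exterior--exterior entanglement once the horizons separate.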
We may therefore conclude, (i) that on the initial $t=0$ hypersurface
there is indeed entanglement of the external degrees-of-freedom for the
black hole pair. However, (ii) this vanishes within $O({\text{Planck
time}})$. Further, (iii) being local entanglement across a horizon, this
has no observable consequences. For such a short-lived phenomenon one
might question whether this entanglement is perhaps better thought of as
a mathematical artifact.
Before we proceed to seeing how theorem 1 applies to this counterexample
we might consider other ways of creating maximally entangled pairs of
black holes. Indeed, Ref.~\onlinecite{Maldacena13} suggests other
mechanisms by which the {\em internal} degrees-of-freedom of a pair of
black holes may be maximally entangled. For example, by waiting for one
black hole to radiate until its Page time and then collapse the
resulting radiation into a second black hole. However, all of these
alternative suggested mechanisms have entanglement of a completely
different character than the counterexample studied above. The
entanglement is not ephemeral and it is between the internal instead of
exterior degrees-of-freedom. Thus, there is no connection between the
smoothness or otherwise of the quantum fields for these mechanisms and
the counterexample above.
{\em Theorem 1 for the ER=EPR `counterexample'}: Finally, we shall
consider evaporation in the scenario of pair-created black holes studied
above. To apply theorem 1, all we need to do is reinterpret $B$, $N$, $R$,
etc as the Hilbert spaces of the joint interior, the combined
neighborhoods and combined radiation systems for the black hole pair.
Assumption 1.c needs to be modified to ``the joint black hole interior Hilbert space dimensionality may be well approximated as the exponential of the combined Bekenstein--Hawking entropy of the black hole pair''. As noted
in Fig.~1 of the manuscript, the quantum state of $(B,N,R)$ is
arbitrary. All the equations used to derive the contradiction for
theorem 1 remain unchanged. We find the same paradox as before, with its
onset at the Page time, when the joint surface area of the black hole
pair has dropped to one-half its initial value.
The proposed counterexample to the paradox thus fails.
\subsection*{Post-firewall paradoxes with negligible entropies}
In this section we repeat the key elements of both theorems in the
manuscript with the following modifications: (a) We explicitly include
the entropy in the atmosphere, bounding its size rather than merely
considering it to be negligible; (b) We only follow the black hole
evaporation to the point where the black hole is still much larger than
the Planck scale. To illustrate that neither of these changes affects the
results of the manuscript we focus solely on the behavior at the
Page time.
{\em Theorem 1}:
Consider now the scenario where we follow a black hole to a relatively
late stage of its complete evaporation. In particular, when its area
has shrunk to some small fraction of its original size, but is still
much larger than the Planck scale so that the physics of Planck scale
black holes plays no part. We denote all pre-Page time radiation as $R$
and the post-Page time radiation as $R'$ (produced up until the black
hole has reached a specified fraction, say roughly $\varepsilon/2$,
of its original area). It follows therefore from the generic behavior
of entropy during evaporation \cite{Braunstein13} that
\begin{equation}
S(R':R)=(1-\varepsilon)S_{\text{BH}}, \qquad \varepsilon\ll 1.
\label{RadENT}
\end{equation}
Combining this with Eq.~(2) of the manuscript we find
\begin{equation}
S(B,N:R)\ge(1-\varepsilon)S_{\text{BH}}, \qquad \varepsilon \ll 1.
\label{RadENT2}
\end{equation}
Equation~(\ref{RadENT2}) tells us that the radiation is almost perfectly
maximally entangled with a subspace of the joint system $(B,N)$ and as
$R$ quickly becomes remotely separated we may conclude that
$\frac{1}{2}(1-\varepsilon)S_{\text{BH}}$ represents a lower bound to
the thermodynamic entropy of this joint system.
{\it Free-fall equanimity}: Consider now a freely-falling observer who
is believed to see nothing special until they pass well within a large
black hole's horizon (assumption 1.b). For black holes formed by matter
in a pure quantum state, the (global) state of $(B,N,R)$ may also be
treated as pure, implying $S(B,N\!:\!R)= S(B\!:\!R)+S(N\!:\!R)$. This, in
turn, allows assumption 1.b to be decomposed into external and internal
constraints.
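For a globally pure state of $(B,N,R)$, the additivity $S(B,N\!:\!R)= S(B\!:\!R)+S(N\!:\!R)$ used above can be checked numerically on toy Hilbert spaces. The following sketch is purely illustrative (the small dimensions and the random state are our own choices, not part of the argument):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in nats, ignoring numerically zero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def reduced(psi, dims, keep):
    """Reduced density matrix of the kept tensor factors of a pure state."""
    t = np.moveaxis(psi.reshape(dims), keep, list(range(len(keep))))
    d_keep = int(np.prod([dims[i] for i in keep]))
    m = t.reshape(d_keep, -1)
    return m @ m.conj().T        # trace over the remaining factors

def mutual_info(psi, dims, a, b):
    """Quantum mutual information S(A:B) = S(A) + S(B) - S(A,B)."""
    return (entropy(reduced(psi, dims, a)) + entropy(reduced(psi, dims, b))
            - entropy(reduced(psi, dims, tuple(sorted(a + b)))))

dims = (2, 3, 4)                 # toy dimensions for B, N, R
rng = np.random.default_rng(0)
d = int(np.prod(dims))
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)       # random globally pure state of (B, N, R)

lhs = mutual_info(psi, dims, (0, 1), (2,))                       # S(B,N:R)
rhs = mutual_info(psi, dims, (0,), (2,)) + mutual_info(psi, dims, (1,), (2,))
assert abs(lhs - rhs) < 1e-9     # additivity holds for pure global states
```

For a pure global state both sides equal $2S(R)$, which is why the identity is exact rather than merely an inequality.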
Externally, we assume that our infalling observer is not passing through
an atmosphere of exotic matter prior to reaching the horizon. Therefore
from Eq.~(1) of the manuscript, we have $\frac{1}{2}S(N\!:\!R) \ll
\frac{1}{2} S_{\text{BH}}$ for a black hole at the Page time.
Internally, this implies that Eq.~(\ref{RadENT2}) reduces to
\begin{equation}
S(B:R)\gtrsim S_{\text{BH}}.
\label{reduced}
\end{equation}
Now, a trivial bound to the quantum mutual information is
that $2\log_e({\text{dim}}(B))\ge S(B\!:\!R)$. If this bound were
saturated, the huge thermodynamic entropy inside $B$ would imply that an
infalling observer would {\it immediately\/} encounter an incredibly
mixed state (e.g., a near uniform mixture of roughly $10^{10^{77}}$
orthogonal quantum states for an initially stellar mass black hole) with
correspondingly huge energies as soon as they passed the horizon. They
would immediately encounter an `energetic curtain' \cite{Braunstein13}
or firewall \cite{AMPS} upon entering the black hole. To guarantee
assumption 1.b holds, the above dimensional bound must be far from
saturation, i.e., at the Page time
\begin{equation}
\log_e( {\text{dim}}(B))\gg \frac{1}{2}\,S_{\text{BH}}.
\label{combined1ab}
\end{equation}
{\it Finite interior Hilbert space}: We may now derive a
contradiction along the lines of the original firewall paradox.
Assumption 1.c holds that the black hole interior has a Hilbert space
dimensionality that is well approximated by the exponential of the
Bekenstein-Hawking entropy. Thus, at the Page time, when a black hole's
surface area has shrunk to one-half of its original value we would have
\begin{equation}
\log_e( {\text{dim}}(B))\simeq
\frac{1}{2}\,S_{\text{BH}},
\label{BHentropy}
\end{equation}
which directly contradicts Eq.~(\ref{combined1ab}).
{$~$}\hfill \rule{2mm}{2mm}
\vskip 0.05truein
{\em Theorem 2}:
Note that Eq.~(6) of the manuscript involves only correlations between
external degrees-of-freedom and hence relates quantities which are, in
principle, directly observable and reportable. Combining this with the
assumption of complete evaporation, Eq.~(\ref{RadENT}), we easily find
\begin{equation}
S(N:R)\ge (1-\varepsilon) S_{\text{BH}}, \qquad \varepsilon\ll 1.
\label{RadENT3}
\end{equation}
Locality (assumption 2.b) has allowed us to eliminate $B$ from
Eq.~(\ref{RadENT2}), which in turn allows us to do without any specific
bound to the size of the interior Hilbert space. More surprisingly,
locality implies a very different picture: one where huge thermodynamic
entropies must reside outside the black hole instead of inside it.
At first sight, this appears reminiscent of arguments based on
time-reversing Hawking radiation. Ordinary Hawking radiation evolves out
of vacuum modes, but any (information bearing) deviations were argued to
have started out as high-energy excitations near the horizon
\cite{Giddings94}. By contrast, the huge entropies in
Eq.~(\ref{RadENT3}) are associated with degrees-of-freedom that are
maximally entangled with the outgoing radiation and therefore correspond
to an effect of the ``infalling partners'' to the radiation. Thus,
Eq.~(\ref{RadENT3}) represents a distinct (and much stronger) phenomenon
imposed by locality.
{\it Non-exotic atmosphere}: Assumption 2.c is weaker than 1.b,
only requiring that externally, black holes should resemble their
classical counterparts (aside from their slow evaporation). In turn, we
apply this in a weak manner to only suppose that the black hole
does not contain an atmosphere of super-entropic exotic matter.
From Eq.~(1) of the manuscript
\begin{equation}
S(N:R) \le \eta\, S_{\text{BH}}, \qquad \text{with $\eta\ll 1$},
\label{EPeqn}
\end{equation}
and combining Eqs.~(\ref{RadENT3}) and~(\ref{EPeqn}) yields the
contradiction
\begin{equation}
1\le \varepsilon+\eta\ll 1,
\end{equation}
whatever the details of the radiation process.
{$~$}\hfill \rule{2mm}{2mm}
\section*{Arbitrary infallen matter}
In this section we generalize our results to show that they
apply even when the matter that collapsed to form the black hole
is not pure. We start with a more general review of generic
black hole radiation necessary to analyse such scenarios.
\subsection*{Generics of black hole radiation}
\vskip -0.1truein
In the manuscript we considered a black hole with (initial)
thermodynamic entropy $S_{\text{BH}}$ which can completely evaporate
into a net pure state of radiation. As discussed, the generic
evaporative dynamics of such a black hole may be captured by Levy's
lemma for the random sampling of subsystems from an initially pure state
consisting of $S_{\text{BH}}$ qunats \cite{Braunstein13}. This either
assumes the infallen matter is pure (as in the manuscript) or ignores it
entirely. Throughout, we set Boltzmann's constant to unity and work with
natural logarithms leading to the measure of qunats (i.e., $\log_e 2$
times the number of qubits \cite{bit}).
In order to extend our analysis to include infallen matter carrying some
(von Neumann) entropy $S_{\text{matter}}$, we need only take the
initially pure state used above and replace it with a bipartite pure
state consisting of two subsystems: $S_{\text{BH}}$ qunats to
represent the degrees-of-freedom that evaporate away as radiation; and a
reference subsystem. Without loss of generality, the matter's entropy
may be treated as entanglement between these two subsystems; however,
here we shall simplify our analysis by assuming uniform entanglement
between the black hole subsystem and $S_{\text{matter}}$ reference
qunats. The generic properties of the radiation may then again be
studied by random sampling the former subsystem to simulate the
production of radiation \cite{Braunstein13}.
The behavior is generic and for our purposes may be summarized in terms
of the radiation's von Neumann entropy, $S(R)$, as a function of the
number of qunats in this radiation subsystem. One finds
\cite{Braunstein13} that $S(R)$ initially increases by one qunat for
every extra qunat in $R$, until it contains
$\frac{1}{2}(S_{\text{BH}}+S_{\text{matter}})$ qunats. From that stage
on it decreases by one qunat for every extra qunat in $R$ until it drops
to $S_{\text{matter}}$ when $R$ contains $S_{\text{BH}}$ qunats and the
black hole has completely evaporated.
Because the von Neumann entropy of a randomly selected subsystem only
depends on the size of that subsystem, the same behavior is found whether
$R$ above represents the early or late epoch radiation with respect
to any arbitrary split. Further, in the simplest case where we choose
the joint radiation $(R,R')$ to correspond to the net radiation from
a completely evaporated black hole we may immediately write down the
generic behavior for the quantum mutual information $S(R':R)$.
In particular, $S(R':R)$ starts from zero when $R$ consists of zero
qunats. From then on, it increases by two qunats for every extra qunat
in $R$ until $S(R':R)$ reaches $S_{\text{BH}}-S_{\text{matter}}$ when
$R$ contains $\frac{1}{2}(S_{\text{BH}}-S_{\text{matter}})$ qunats. From
that stage on until $R$ contains
$\frac{1}{2}(S_{\text{BH}}+S_{\text{matter}})$ qunats $S(R':R)$ remains
constant, after which $S(R':R)$ decreases by two qunats for every extra
qunat in $R$ until it drops to zero once $R$ contains the full
$S_{\text{BH}}$ qunats of the completely evaporated black hole
\cite{Braunstein13}. Interestingly, it is during the region where
$S(R':R)$ is constant that the information about the infallen matter
becomes encoded into $R$ for the first time \cite{Braunstein13}.
Finally, setting $S_{\text{matter}}$ to zero gives the `standard'
behavior for $S(R)$ and $S(R'\!:\!R)$ upon which the results in the
manuscript are derived.
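The generic behavior summarized above is piecewise linear in the number of qunats $r$ carried by $R$ and can be written down directly. The following sketch is our own compact restatement (not code from \cite{Braunstein13}); it encodes $S(R)$ and $S(R'\!:\!R)$ and checks the plateau and endpoint values quoted in the text:

```python
def s_rad(r, s_bh, s_matter=0.0):
    """Radiation entropy S(R) after r of the s_bh qunats have been emitted."""
    assert 0.0 <= r <= s_bh
    return min(r, s_bh + s_matter - r)

def mutual_info_rad(r, s_bh, s_matter=0.0):
    """Quantum mutual information S(R':R) between the early radiation R
    (r qunats) and the remaining radiation R' (s_bh - r qunats)."""
    # S(R':R) = S(R) + S(R') - S(R,R'), with S(R,R') = s_matter for the
    # net radiation of a completely evaporated black hole
    s_r = s_rad(r, s_bh, s_matter)
    s_rp = min(s_bh - r, s_matter + r)
    return s_r + s_rp - s_matter

s_bh, s_m = 100.0, 20.0
# rises by 2 per qunat, plateaus at s_bh - s_matter, then falls to 0
assert mutual_info_rad(0, s_bh, s_m) == 0
assert mutual_info_rad(40, s_bh, s_m) == s_bh - s_m     # start of plateau
assert mutual_info_rad(60, s_bh, s_m) == s_bh - s_m     # end of plateau
assert mutual_info_rad(s_bh, s_bh, s_m) == 0
assert s_rad(s_bh, s_bh, s_m) == s_m                    # S(R) after evaporation
# s_matter = 0 recovers the standard Page-time behavior used in the manuscript
assert mutual_info_rad(s_bh / 2, s_bh) == s_bh
```

With these conventions the generalized Page times are exactly the plateau, $r\in[\frac{1}{2}(S_{\text{BH}}-S_{\text{matter}}),\frac{1}{2}(S_{\text{BH}}+S_{\text{matter}})]$.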
From the above, we are motivated to generalize the Page time: we define
{\it any\/} time where $S(R'\!:\!R)$ is maximal as a (generalized) Page
time; the earliest such time the `initial Page time'; and the latest the
`final Page time'. Prior to the initial Page time, the quantum
information about the initial infallen matter is encoded entirely within
the black hole interior \cite{Braunstein13}. After the final Page time
this information is encoded entirely within the radiation
\cite{Braunstein13}.
\vskip -0.4truein
\subsection*{Including infallen matter}
\vskip -0.1truein
Let us start with a consideration of how the reasoning in Theorem 2
becomes modified by the presence of infallen matter carrying entropy.
\noindent {\bf Theorem 2 generalized}: In the main body of the
manuscript we did not explicitly include entropy associated with
infallen matter. Fig.~\ref{Fig6} shows the most general scenario.
Subsystem $I$ denotes the matter that falls into the region surrounding
the black hole where radiation is produced. Thus, we suppose that late
epoch radiation can in principle come from the joint subsystem $(N,I)$.
In this figure we also include subsystem $I_{\text{early}}$ denoting
matter that has fallen into the region surrounding the black hole at an
earlier epoch or indeed matter that may have collapsed to form the black
hole in the first place.
\begin{figure}[h!]
\vskip -0.1in
\includegraphics[width=38mm]{Fig6.pdf}
\vskip -0.15in
\caption{
Quantum circuit diagram for evaporation of a quantum black hole with
causal horizon and infallen matter. Subsystem $I$ denotes infallen
matter that falls into the region surrounding the black hole to
participate in late epoch radiation generation. (This does not exclude
the possibility that the matter falls directly into the black hole.)
Subsystem $I_{\text{early}}$ denotes matter infalling at earlier times
or even that collapses to form the original black hole.
\label{Fig6}
}
\end{figure}
As in the manuscript we apply strong subadditivity:
\begin{eqnarray}
S(R':R) &\le& S(C,N',R':R) \nonumber \\
&=& S(N,I:R) = S(N:R).
\label{NewMain1}
\end{eqnarray}
Here, we used the fact that joint subsystems $(C,N',R')$ and $(N,I)$ are
unitarily related. Finally, the most natural assumption is that the
infallen matter $I$ is independent of the quantum state of the black
hole, $(B,N)$, or its early epoch radiation $R$. The original inequality
of Eq.~(10) from the manuscript is thus found to still hold in the presence
of infallen matter.
From the summary above of generic radiation production including
infallen matter we have enough to generalize Theorem 2. As in the
manuscript, we take $R$ to be all the early epoch radiation until the
Page time (for this theorem we may take any generalized Page time), and
we let $R'$ denote all the radiation generated from the Page time
onward until the black hole has shrunk to a size much smaller than the
original black hole (say roughly $\varepsilon/2$ of its original area),
but still much larger than the Planck scale. In this case, instead of
Eq.~(\ref{RadENT}), we have
\begin{equation}
S(R':R)= (1-\varepsilon) \, S_{\text{BH}}-S_{\text{matter}},
\qquad \varepsilon\ll 1,
\label{RadCorrMatter}
\end{equation}
where $S_{\text{matter}}\equiv S(I_{\text{early}},I)$ is the net entropy
in all the infallen matter. Combining this with Eqs.~(\ref{EPeqn})
and~(\ref{NewMain1}) gives
\begin{equation}
1 -\frac{S_{\text{matter}}}{S_{\text{BH}}} \le \varepsilon + \eta \ll 1.
\end{equation}
Once again we obtain a contradiction except in the extreme case of a
black hole whose net infallen matter contains virtually as much entropy
as the entire black hole's original entropy
$S_{\text{BH}}$.
\vskip 0.1truein
\noindent
{\bf Theorem 1 generalized}:
It is simple enough to repeat the above reasoning for Theorem 1, where
we no longer make use of locality. In this case, we may still use
Fig.~\ref{Fig6} provided we ignore the no-communication
decomposition structure. In particular, strong subadditivity yields
\begin{eqnarray}
S(R':R) &\le& S(B',N',R':R) \nonumber \\
&=& S(B,N,I:R) = S(B,N:R),
\label{NewMain2}
\end{eqnarray}
which is identical to Eq.~(2) of the manuscript. Here, we use the fact
that joint subsystems $(B',N',R')$ and $(B,N,I)$ are unitarily related.
Again, the most natural assumption is that the infallen matter $I$ is
independent of the quantum state of the black hole, $(B,N)$, or its
early epoch radiation $R$.
Applying Eq.~(\ref{RadCorrMatter}) to any Page time then tells us that
for a unitarily and completely evaporating black hole
\begin{equation}
S(B,N:R) \ge (1-\varepsilon) \, S_{\text{BH}}-S_{\text{matter}},
\qquad \varepsilon \ll 1.
\end{equation}
To simplify our argument, we shall suppose that the infallen matter
$(I_{\text{early}}, I)$ has actually entered the black hole. In that
case, for any times prior to the initial Page time, the infallen
matter's external reference qunats are maximally entangled with some
subsystem of the black hole interior \cite{Braunstein13}. We shall label
the orthogonal complement of this subsystem within $B$ as $B^{\perp}$.
It is clear that: i) $(B^{\perp},N,R)$ can be treated as a pure quantum
state; and ii) $S(B,N\!:\!R)=S(B^{\perp},N\!:\!R)$, so that
\begin{equation}
S(B^{\perp}:R)+S(N:R) \ge (1-\varepsilon) \,
S_{\text{BH}}-S_{\text{matter}}, \qquad \varepsilon \ll 1.
\label{xxx}
\end{equation}
To ensure that our infalling observer is not passing through an
atmosphere of exotic matter before they reach the horizon, Eq.~(1)
from the manuscript for a large black hole implies that Eq.~(\ref{xxx})
reduces to
\begin{equation}
S(B^{\perp}:R) \gtrsim
S_{\text{BH}}-S_{\text{matter}}.
\end{equation}
Since $\log_e ({\text{dim}}(B^{\perp}))=\log_e ({\text{dim}}(B))-
S_{\text{matter}}$ by construction, we find the trivial bound
\begin{equation}
\log_e({\text{dim}}(B)) \gtrsim \frac{1}{2}(S_{\text{BH}}+S_{\text{matter}}).
\end{equation}
If this bound were saturated, then the huge thermodynamic entropy in
$B$ would imply that an infalling observer would immediately encounter an
incredibly mixed state with correspondingly huge energies as soon
as they passed the horizon. They would immediately encounter
an `energetic curtain' or firewall upon entering the black hole.
To ensure, therefore, that assumption 1.b holds, the above bound must
be far from saturation, i.e.,
\begin{equation}
\log_e({\text{dim}}(B)) \gg \frac{1}{2}(S_{\text{BH}}+S_{\text{matter}}),
\label{gg}
\end{equation}
where $B$ is the black hole at the {\it initial\/} Page time.
However, assumption 1.c would require that the left- and
right-hand-sides of Eq.~(\ref{gg}) should be nearly equal. As with the
generalization of Theorem 2, we again obtain a contradiction, in this
case, however, apparently independent of the amount of infallen matter.
\section{INTRODUCTION} \label{section_intro}
Variable star discoveries provide information on stellar properties, formation, and evolution, and are critical for determining distances and ages of astronomical objects. Eclipsing binaries allow the measurement of masses, radii, and temperatures, and can be used to test stellar formation theory predictions. Lower mass eclipsing binaries are observationally challenging due to the low intrinsic brightness of the star, and more systems are needed to properly characterize the mass/radius relationship in stellar models \citep{2010AJ....140.1158T, 2015ApJ...804...64M, 2017ApJ...845...72K}. Ground-based surveys such as the Palomar Transient Factory \citep{2009PASP..121.1395L}, ATLAS \citep{2011PASP..123...58T}, HAT \citep{2004PASP..116..266B}, HAT-South \citep{2018arXiv180100849B}, SuperWASP \citep{2006PASP..118.1407P}, KELT \citep{2007PASP..119..923P}, CSTAR \citep{2015ApJS..218...20W}, and many others are very successful in detecting variables (including transiting exoplanets) and adding to known variable star catalogs such as the Variable Star Index\footnote{http://www.aavso.org/vsx/} (VSX). These surveys either observe at day or longer time-scale cadences, or observe dedicated sky areas to reach fast cadence at the expense of all sky coverage. In contrast, the Evryscope is optimized for shorter-timescale observations with continuous all sky coverage and a multi-year period observation strategy. The continuous, fast-cadence, all-sky Evryscope light curves are sensitive to variations (including transits and eclipses) lasting only a few minutes, and provide fine sampling for ten minute level variations or longer.
The Evryscope is a robotic camera array mounted into a 6 ft-diameter hemisphere which tracks the sky \citep{2015PASP..127..234L}. The telescope is located at CTIO in Chile and observes continuously, covering 8150 sq. deg. in each 120s exposure. The Evryscope was deployed with 22 cameras and can accommodate 27 total cameras (with a corresponding increased field of view of 10,000 sq. deg). Each camera features a 29MPix CCD providing a plate scale of 13"/pixel. The Evryscope monitors the entire accessible Southern sky at 2-minute cadence, and the Evryscope database includes tens of thousands of epochs on 16 million sources. In this paper, we limited the search field to the region around the South Celestial Pole, and chose the brighter stars in order to maximize the number of epochs per source and minimize systematics.
The Southern Polar sky area is less explored than other parts of the sky, primarily due to the difficulty in reaching it. This is evidenced by the comparatively low number of planet, eclipsing binary, and variable star discoveries in this region. For example, the sky area in the declination region of \ang{-75} to \ang{-90} comprises 3.4\% of the southern sky's total area; however the VSX catalog of known variables in the same region accounts for only 1.2\% of the southern sky total. Surveys of the Southern Polar sky region typically either use a telescope located at a low latitude South American site or an instrument in the Antarctic. The former choice can be challenging depending on the airmass of the target region, while the second poses engineering difficulties due to the harsh environment \citep{1538-3881-145-3-58, 2015ApJS..218...20W}.
We use the Evryscope to explore the Southern Polar region (declinations \ang{-75} to \ang{-90}). While the airmass is non-optimal ($\sim$ 1.7 average), the Evryscope monitors the Southern Polar region continuously every night for the entire night at 2 minute cadence, with the same camera for multiple years. This long-term, same-camera coverage at short cadence results in many continuous data points with consistent airmass, and minimizes systematics. Targets in this region average over 60,000 epochs per year. Our observing strategy results in several hundred thousand light curves with targets ranging in brightness from 9 $<$ m\textsubscript{v} $<$ 15. The light curves have the precision necessary to potentially detect eclipsing binaries, variable stars, transiting gas-giant planets around small-cool host stars, and short-transit-time planets around small compact stellar remnants including white dwarfs and hot subdwarfs. With additional filtering, the light curves are precise enough to potentially detect gas-giant planets around bright solar type stars; we will address this in future work. These Evryscope light curves also facilitate searches with wide period ranges (for the Polar Search we searched from 3-720 hours), longer periods, and wide amplitude ranges. Long-period discoveries are typically non-interacting stars and are challenging to detect due to the low number of transits.
The primary target of this paper's search is eclipsing binaries, particularly low-mass and long-period systems. The secondary target of this paper's search is gas-giant planets around M-dwarf or late K-dwarf primaries. This survey relies on detection power to narrow the candidates and uses observations from mid 2016 to early 2017. The more challenging transiting exoplanet detections will be conducted with additional systematics removal steps, additional candidate filtering to push to lower power detections, and will use the full three plus year data set (Ratzloff et al., in prep).
Eclipsing binaries are the best calibrators for determining relations between mass, radius, luminosity, and temperature. Relatively few low-mass (M-dwarf or late-K-dwarf secondary) eclipsing binaries have been discovered \citep{2011ApJ...731....8K, 2012ApJ...757..133L, 2018AJ....156...27C}, and many are too faint for easy radial-velocity followup measurements. This has limited our ability to measure the mass/radius relation at low masses, where many low-mass systems suggest larger radii than stellar models predict \citep{2010AJ....140.1158T, 2015ApJ...804...64M, 2017ApJ...845...72K}. This is particularly important for the determination of transiting planet radii around low-mass single stars, where some of the most exciting nearby planets are likely to be discovered \citep{2014SPIE.9143E..20R, 2018arXiv181002826C, 2018AJ....156..102S}.
In this paper, we report the discovery of 303 new variables including seven eclipsing binaries with low-mass secondary stars. We perform spectroscopic followup on select eclipsing binaries to confirm the stellar type and secondary size. Radial velocity measurements reveal that seven of the eclipsing binary discoveries are low-mass (.06 - .34 \(M_\odot\)) secondaries with K-dwarf primaries.
In \S~\ref{section_observations} we describe the Evryscope photometric observations that led to the discoveries as well as our analysis of the light curves and detection algorithms for identifying variables. In \S~\ref{section_followup_observations} we describe the followup observations performed for the low-mass eclipsing binaries including PROMPT \citep{2005NCimC..28..767R} followup photometry, identification spectra and radial velocity followup using the Goodman \citep{2004SPIE.5492..331C} spectrograph on the 4.1 meter SOAR telescope and the CHIRON \citep{2013PASP..125.1336T} echelle spectrometer on the CTIO/SMARTS 1.5 meter telescope. In \S~\ref{section_analysis} we present and characterize our discoveries. We also detail our analysis of the radial velocity followup work including the Monte-Carlo simulation to fit the masses, radii, and other parameters. We conclude in \S~\ref{section_summary}.
\section{OBSERVATIONS AND VARIABILITY SEARCH} \label{section_observations}
\subsection{Evryscope Photometry}
All eclipsing binary and variable discoveries were detected in a transit search of the polar region (declinations \ang{-75} to \ang{-90}). The observations were taken from August 9, 2016 to April 4, 2017. The exposure time was 120s through a Sloan-g filter and each source typically had 16,000 epochs. We briefly describe the calibration and reduction of images and the construction of light curves; further details will be presented in an upcoming Evryscope instrumentation paper. Raw images are filtered with a quality check, calibrated with masterflats and masterdarks, and have large-scale backgrounds removed using the custom Evryscope pipeline. Forced photometry is performed using APASS-DR9 \citep{2015AAS...22533616H} as our master reference catalog. Aperture photometry is performed on all sources using multiple aperture sizes; the final aperture for each source is chosen to minimize light curve scatter. The primary systematics challenges are: background and airmass changes and the subsequent effects on stars of different magnitude and color, the ratchet observing cycle causing the targets to switch cameras and appear in different positions on the CCD chips over the observing season, daily aliases, source blending, PSF distortions, and vignetting. We use the quality filter, calibrations, aperture photometry, along with a custom implementation of the SysRem \citep{2005MNRAS.356.1466T} algorithm to remove the systematics challenges described above.
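As an illustration of the detrending step, SysRem alternately fits per-star coefficients and a shared per-epoch trend, then subtracts their outer product. The sketch below is a simplified single-component version with uniform errors (the pipeline's actual implementation, error weighting, and stopping criteria are not shown here):

```python
import numpy as np

def sysrem_component(resid, n_iter=20):
    """Fit and remove one systematic component c_i * a_j from a
    (stars x epochs) array of light-curve residuals, assuming uniform
    per-point errors."""
    n_stars, n_epochs = resid.shape
    a = np.ones(n_epochs)            # per-epoch trend (e.g., airmass-like)
    c = np.zeros(n_stars)            # per-star response coefficient
    for _ in range(n_iter):          # alternating least-squares updates
        c = resid @ a / (a @ a)      # best c_i given a_j
        a = resid.T @ c / (c @ c)    # best a_j given c_i
    return resid - np.outer(c, a)

# synthetic example: 50 stars, 200 epochs, shared rank-1 systematic + noise
rng = np.random.default_rng(1)
noise = 0.001 * rng.normal(size=(50, 200))
trend = np.outer(rng.normal(size=50), np.sin(np.linspace(0, 6, 200)))
raw = noise + 0.05 * trend
cleaned = sysrem_component(raw)
assert cleaned.std() < raw.std()     # the shared trend is removed
```

Because the systematic in this toy example is exactly rank one, the alternating updates converge quickly and the cleaned scatter approaches the injected noise floor.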
\subsection{Detection of Variables} \label{section_det_of_var}
Filtering by declination and magnitude returns 239,991 initial targets from the Evryscope light curve database. 76,407 are eliminated by an additional quality filter based on non-blended sources. The remaining 163,584 are analyzed using Box Least Squares (BLS) \citep{Kovacs:2002gn, 2014A&A...561A.138O} with the pre-filtering, daily-alias masking, and settings described in \S~\ref{variability_algorithms}. The light curves are then sorted by BLS detection power, in terms of Signal Detection Efficiency (SDE) \citep{Kovacs:2002gn}. Figure \ref{fig:bls_power} shows the BLS SDE distribution for the targets along with the distribution of detected periods. Targets with an SDE $>$ 10 and with nearby reference stars give 9104 suspects for further inspection. The 10-SDE cutoff is chosen to: 1) limit the number of targets to an amount that is reasonable for human followup (in this case $\sim$ 10,000), 2) ensure a reasonable chance of detecting high-amplitude candidates without accumulating excessive false positives, and 3) reach three-percent-level signal depths on bright stars to potentially detect low-mass secondaries and gas-giant planets around M-dwarfs or late K-dwarfs.
We compare the target light curve (both unfolded and folded to the best BLS period) to two nearby reference stars of similar magnitude, looking for any signs that the detected variation is present in the references, indicating systematics (see \S~\ref{false_positive}). The folded plots are colored by time to check how well-mixed the detection is, since a transit or eclipse with only a single or few occurrences is more likely to be an artifact of the detection algorithm. The light curves are also folded on the second and third best BLS periods to check for aliases, as well as the best Lomb-Scargle (LS) \citep{1975Ap&SS..39..447L, 1982Ap&SS..263..835S} period to check for sinusoidal variability. From visual inspection, we identify 649 variables from the machine-filtered 9104 suspects. 346 are known variables and 303 are new discoveries.
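For intuition, the following toy version of a BLS search phase-folds a light curve at trial periods, scans a box over phase bins, and converts the peak box signal into an SDE-like statistic. It is a deliberately simplified stand-in: the period grid, bin count, and box width are illustrative choices, not the survey's actual settings.

```python
import numpy as np

def bls_sde(t, flux, periods, n_bins=100, q=0.05):
    """Toy box-least-squares: for each trial period, phase-fold, slide a
    box of fractional phase width q over bins, record the strongest box
    signal, and normalize the resulting spectrum into an SDE statistic."""
    sr = np.zeros(len(periods))
    width = max(1, int(q * n_bins))
    for k, p in enumerate(periods):
        phase = (t % p) / p
        idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        s = np.bincount(idx, weights=flux - flux.mean(), minlength=n_bins)
        n = np.bincount(idx, minlength=n_bins).astype(float)
        s2, n2 = np.concatenate([s, s]), np.concatenate([n, n])  # wrap phase
        sbox = np.array([s2[i:i + width].sum() for i in range(n_bins)])
        nbox = np.array([n2[i:i + width].sum() for i in range(n_bins)])
        ok = (nbox > 0) & (nbox < len(t))
        sr[k] = np.max(sbox[ok] ** 2 / (nbox[ok] * (len(t) - nbox[ok])))
    return (sr - sr.mean()) / sr.std()

# synthetic 2-minute-cadence light curve with a 6.0 h eclipse period
t = np.arange(0, 240, 2 / 60.0)                 # time in hours
flux = np.ones_like(t)
flux[((t % 6.0) / 6.0) < 0.04] -= 0.03          # 3% deep, 4%-of-phase eclipse
flux += 0.002 * np.random.default_rng(2).normal(size=t.size)

periods = np.linspace(3, 12, 901)               # trial periods in hours
sde = bls_sde(t, flux, periods)
best = periods[np.argmax(sde)]
assert abs(best - 6.0) < 0.05                   # eclipse period is recovered
```

The normalization in the last line of the function mirrors the SDE idea of ranking the peak signal against the scatter of the whole period spectrum.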
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{bls_total_mag_power_plot.png}
\includegraphics[width=.45\columnwidth]{bls_total_hist_power_plot.png}
\includegraphics[width=.45\columnwidth]{bls_total_hist_period_plot.png}
\caption{Detection characteristics from the BLS results of the polar search. The top panel shows the BLS power in SDE vs. magnitude (15\% of the points are shown for better visualization), the lower left panel is the histogram of BLS power in SDE, the lower right is the histogram of periods found. Targets with an SDE $>$ 10 are selected for further inspection.}
\label{fig:bls_power}
\end{figure}
\subsection{Machine-Learning Stellar Classification} \label{section_stellar_class}
We developed a machine-learning-based classifier that uses publicly available catalog data to estimate stellar size from a B-V color/magnitude space, and to estimate spectral type from multiple color differences. The discovery candidates were matched to APASS-DR9 \citep{2015AAS...22533616H} and PPMXL \citep{2010AJ....139.2440R} catalogs to obtain reduced proper motion (RPM) and color differences (B-V, V-K, J-H, H-K) for each target. Modifying the method in \cite{1972ApJ...173...xxx} with a two-step machine-learning process described below, we classify stars based on B-V and RPM to identify stellar size: main sequence, giants, white dwarfs, or subdwarfs. The RPM and B-V combination provides a high return on our target catalog (99\% of our targets are classified, as demonstrated below) and captures spectral information using available data. After the stellar size estimation is completed, the four color differences are used to approximate the spectral type.
In the first step of the machine learning process, we use a support vector machine (SVM) from the scikit-learn Python module \citep{scikit-learn} to identify likely hot subdwarfs (HSD) from all other stars. The HSD are challenging to separate since they can be close to main sequence O/A stars in this parameter space. We find the SVM to be an effective way to segregate the HSD, shown in the top panel of Figure \ref{fig:classifier} as the small confined area enclosed in the black border. This is done by using a training set of HSD from \citet{2017OAst...26..164G} and other types of stars from SIMBAD \citep{2000A&AS..143....9W}, filtering the outliers, then computing the contour boundaries. The SVM method is a non-probabilistic two-class classifier that computes a hard boundary (decision boundary) by maximizing the distance (or margin) to the points closest to the boundary. As with any classifier there are missed targets and contaminants, and there are physical reasons the results can be skewed (reddening, for example). Our goal in this step is to separate the most challenging class (the HSD) from all the other classes while providing a boundary with a reasonable contingency space to the nearby white dwarf and main sequence regions.
Once the HSD are identified, all remaining objects are classified using a Gaussian Mixture Model (GMM) \citep{scikit-learn} with three classes to identify white dwarfs, main sequence, and giants. We again use an outlier-filtered training set of stars of each type from SIMBAD (20,972 main sequence, 1515 white dwarfs (WD), and 10,000 giants). The GMM classifier results are shown in the bottom panel of Figure \ref{fig:classifier}. The GMM method fits 2-D Gaussian functions (probability density functions), using the training points to adjust the Gaussian centers, orientations, and elongations. Our application of this method uses three components (WD, main sequence, and giants). Although more components are possible, overlapping or poorly separated classes tend to give poor results (part of the motivation for using the SVM for the HSD step). The GMM produces contour lines with negative-log-likelihood (NLL) values that can be converted ($LH=10^{-NLL}$) to give an estimate of the confidence level that the data point belongs in the class.
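A minimal stand-in for this classification step fits one 2-D Gaussian per labeled class in (B-V, RPM) space and assigns each target the class with the smallest NLL. This is a simplification: the training clusters below are invented for illustration, and the paper's pipeline uses the scikit-learn SVM and GMM rather than the per-class Gaussian fits sketched here.

```python
import numpy as np

def fit_gaussian(points):
    """Mean and covariance of one class's training points
    (rows = stars, columns = [B-V, reduced proper motion])."""
    return points.mean(axis=0), np.cov(points, rowvar=False)

def neg_log_likelihood(x, mu, cov):
    """Gaussian NLL of point x under the fitted class model."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(x) * np.log(2 * np.pi))

def classify(x, models):
    """Return the class with the smallest NLL and that NLL
    (cf. the paper's NLL-based confidence cut for white dwarfs)."""
    scores = {name: neg_log_likelihood(x, *m) for name, m in models.items()}
    best = min(scores, key=scores.get)
    return best, scores[best]

# hypothetical, roughly placed training clusters in (B-V, RPM) space
rng = np.random.default_rng(3)
models = {
    "white_dwarf":   fit_gaussian(rng.normal([0.2, 15.0], [0.3, 1.5], (500, 2))),
    "main_sequence": fit_gaussian(rng.normal([0.9, 7.0],  [0.4, 1.5], (500, 2))),
    "giant":         fit_gaussian(rng.normal([1.2, 0.0],  [0.3, 1.5], (500, 2))),
}
label, nll = classify(np.array([1.0, 7.5]), models)
assert label == "main_sequence"
```

An NLL cut analogous to the paper's WD threshold would simply reject classifications whose returned `nll` exceeds a chosen value.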
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{RPM_plot_targets_B_V_svm.png}
\includegraphics[width=1.0\columnwidth]{RPM_plot_targets_B_V_gmm.png}
\caption{The Evryscope Target Classification - We use B-V color differences and reduced proper motion (RPM) data with a two step machine learning algorithm to classify star size. Top: the training data (gold squares=hot subdwarfs, grey=all others) for the support vector machine (SVM), which returns the resulting hot subdwarf classification region (the area inside the black border). Bottom: the training data (blue stars=white dwarfs, green=main sequence, red diamonds=giants) for the Gaussian Mixture Model (GMM), which returns the resulting classification contours. Contours at negative log likelihood values of 1, 1.7, and 2.8 are shown.}
\label{fig:classifier}
\end{figure}
We use the spectral type and temperature profiles in \citet{2013ApJS..208....9P} to derive a function (via 1-D interpolation) that maps the available color differences to an estimated spectral type (Figure \ref{fig:temp_prof}). If only B-V is available, we classify simply by the letter (O,B,A,F,G,K,M); if multiple colors are available we average the fits and choose the closest spectral type (G9, K4, M3, for example). For main sequence stars we add the luminosity class V. The code produces a function that takes RPM and color differences as inputs and outputs the star size, star type, and NLL score for the GMM step. We used this to classify all of our discoveries, with the added requirements that the HSD also be of apparent spectral type O or B and that the WD have an NLL score of less than 4.0. The added requirements help filter main sequence A star contaminants from the HSD, and borderline WD stars. Candidates identified as likely K or M-dwarfs with shallow (typically less than 10\%) eclipses or transits are identified as potentially high value targets and analyzed in more detail.
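The interpolation-and-average step can be sketched as follows; the anchor values below are invented monotonic color-versus-class points, not the actual Pecaut \& Mamajek tables:

```python
import numpy as np

# Numeric spectral classes: O0=0, O1=1, ..., M9=69
SUBTYPES = [f"{letter}{n}" for letter in "OBAFGKM" for n in range(10)]

# Invented anchors of color difference vs. numeric class (the real
# pipeline interpolates published profiles for B-V, V-K, J-H, H-K)
bv_grid  = np.array([-0.30, 0.00, 0.30, 0.65, 0.85, 1.40])
vk_grid  = np.array([-0.90, 0.10, 1.10, 1.50, 2.20, 4.00])
cls_grid = np.array([5.0, 15.0, 30.0, 42.0, 50.0, 62.0])

def classify(bv=None, vk=None):
    """Average the per-color 1-D interpolations, pick the closest type."""
    estimates = []
    if bv is not None:
        estimates.append(np.interp(bv, bv_grid, cls_grid))
    if vk is not None:
        estimates.append(np.interp(vk, vk_grid, cls_grid))
    n = int(round(np.mean(estimates)))
    return SUBTYPES[min(max(n, 0), 69)]

sptype = classify(bv=0.85, vk=2.20)   # both anchors agree on class 50
```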
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{paper_amajek_fit_plots_all.png}
\caption{The Evryscope Target Classification - We use (B-V, V-K, J-H, H-K) color differences to estimate temperature and spectral type, interpolating profiles for each color difference from the data in \citet{2013ApJS..208....9P}. The data are the grey points and the interpolations are the colored lines in the figures. We average the four results and pick the closest spectral type.}
\label{fig:temp_prof}
\end{figure}
The Evryscope classifier is designed to: 1) facilitate identification of as many of the target light curves as practical, 2) identify targets to be included in Evryscope transit searches (white dwarfs, hot subdwarfs, K and M-dwarfs), and 3) classify variability discoveries, helping to identify those potentially interesting for further followup. For the Polar Search, 98.5\% of our targets have B-V and RPM data available, and 91.0\% have all four color differences and RPM data. Tests using 485,000 targets spread across the entire southern sky (all RA and declinations \ang{+10} to \ang{-90}) have demonstrated very high returns - 99\% of Evryscope targets have all four color differences and RPM data available for classification. Once the catalogs were compiled and matched, the classifier took only a few minutes to classify the 485,000 test targets, making it practical for use on the full Evryscope database. All discoveries in this work are classified using the APASS-DR9 \citep{2015AAS...22533616H} and PPMXL \citep{2010AJ....139.2440R} catalogs as described above. A similar approach using the GAIA-DR2 \citep{2018A&A...616A...1G} catalog will be used as an additional target filter for the transiting exoplanet searches (Ratzloff et al., in prep).
We tested the Evryscope classifier in several ways. We chose known WD from APASS \citep{2017MNRAS.472.4173R} and high confidence ($>$.80) WD suspects from ATLAS \citep{2017arXiv170309714P} and SDSS \citep{2015arXiv150105309P} with m\textsubscript{v} $<$ 16.5, for a total of 211 classifier test targets. Using \cite{2017OAst...26..164G} with m\textsubscript{v} $<$ 15.0, we obtain 1560 HSD classifier test subjects (which may include WD due to the difficulty in separating the two groups). We use \cite{2011AJ....142..138L} to obtain 3764 high-confidence M-dwarfs. Using \cite{2001KFNT...17..409K} and filtering out the bright stars we have 999 main sequence, 452 giants, and 895 K-dwarfs for classifier testing.
Table \ref{tab:classifier_perf} shows the performance of the classifier to correctly determine star size (ms/giant/WD/HSD). Table \ref{tab:classifier_perf_2} shows the performance of the classifier to correctly determine letter spectral type (O,B,A,F,G,K,M). Table \ref{tab:classifier_perf_3} shows the performance of the classifier to correctly determine full spectral type (O0 - M9).
\begin{table}
\caption{Evryscope Classifier star size (ms/giant/WD/HSD) performance.}
\begin{tabular}{ l c c}
Test group & Star size & ES Classifier \% correct\\
\hline
M-dwarfs & ms & 95.3\%\\
K-dwarfs & ms & 86.2\%\\
M-giants & giant & 98.7\%\\
Main Sequence & ms & 94.1\%\\
HSD & HSD & 54.5 (77.7 w/WD)\%\\
WD & WD & 87.0\%\\
\hline
\end{tabular}
\label{tab:classifier_perf}
\end{table}
\begin{table}
\caption{Evryscope Classifier letter spectral type (O,B,A,F,G,K,M) performance.}
\begin{tabular}{ l c c}
Test group & letter spectral type & ES Classifier \% correct\\
\hline
M-dwarfs & M & 95.2\%\\
K-dwarfs & K & 81.1\%\\
M-giants & M & 97.9\%\\
Main Sequence & O-M & 69.5\%\\
HSD & O,B & 76.5\%\\
\hline
\end{tabular}
\label{tab:classifier_perf_2}
\end{table}
\begin{table}
\caption{Evryscope Classifier full spectral type (O0 - M9) performance.}
\begin{tabular}{ l c c c c}
& & & ES Classifier & \\
Test group & spectral type & mean & variance & \% +/-3\\
\hline
M-dwarfs & M0-M9 & -.50 & 1.9 & 95.3\%\\
K-dwarfs & K0-K9 & -.98 & 2.7 & 81.6\%\\
M-giants & M0-M9 & -2.0 & 1.7 & 78.0\%\\
Main Sequence & O0-M9 & -.87 & 3.7 & 63.1\%\\
\hline
\end{tabular}

{\footnotesize Note: Shown are the mean difference and variance of the classifier numerical class versus the known class. The last column shows the percent of the test group classified correctly to within 3 of the known numerical class.}
\label{tab:classifier_perf_3}
\end{table}
We also compared the classifier results to SOAR ID spectra taken for the low-mass eclipsing binaries (\S~\ref{section_lmeb}). Seven of the eight were classified as the correct letter spectral type (K, for example) and to within +/- 1 numeric class (K5 versus K6, for example; Table \ref{tab:classifier_comp}).
\begin{table}
\caption{Comparison of the Evryscope Classifier to SOAR ID spectra.}
\begin{tabular}{ l c c}
ID (EVR+) & SOAR ID Sptp & ES Classifier Sptp\\
\hline
J053513.22-774248.2 & G7V & K1V\\
J06456.10-823501.0 & G8V & G9V\\
J103938.18-872853.8 & K7V & K6V\\
J110815.96-870153.8 & K4V & K3V\\
J165050.23-843634.6 & K5V & K4V\\
J180826.26-842418.0 & G5V & G6V\\
J184114.02-843436.8 & K2V & K3V\\
J211905.47-865829.3 & K5V & K6V\\
\hline
\end{tabular}
\label{tab:classifier_comp}
\end{table}
\subsection{Variability search algorithms} \label{variability_algorithms}
We selected sources in the polar region with m\textsubscript{v} $<$ 14.5 and with light curves that passed quality tests to eliminate sources with blending, narrow time coverage, or low number of epochs (\S~\ref{section_det_of_var}). Light curves (with MJD timestamps) were pre-filtered with a Gaussian smoother to remove variations on periods greater than 30 days, and a 3rd order polynomial fit was subtracted to remove long-term variations. Light curves were then searched for transit-like, eclipse-like, and stellar variability signals using the Box Least Squares (BLS) \citep{Kovacs:2002gn, 2014A&A...561A.138O} and Lomb-Scargle (LS) \citep{1975Ap&SS..39..447L, 1982Ap&SS..263..835S} algorithms.
We tested the recovery rates on Evryscope light curves with different BLS settings - with periods ranging from 2-720 hours, 10,000-100,000 periods tested, and transit fractions from .001 to .5. Recovery rate tests were run on known eclipsing binaries in our magnitude range with transit depths ranging from .01 to .25, and on simulated few-percent-level transit signals, representative of low-mass secondaries, injected onto Evryscope light curves. The tests showed that a very wide BLS test period range (2-720 hours) led to decreased detections, as the periodogram becomes biased toward long periods, or spikes at longer periods arise from data gaps. Combined with the survey's 6-month time coverage (\S~\ref{section_intro}), this means that too aggressive a period range detects fewer eclipsing binary candidates. Based on these tests, the final BLS settings used for the Evryscope Polar Search were a period range of 3-250 hours with 25,000 periods tested and a transit fraction of .01 to .25.
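The box-search idea can be illustrated with a pared-down, pure-NumPy version (the survey used the cited BLS implementation; the trial grid, phase binning, and synthetic 12-hour eclipsing binary below are illustrative only):

```python
import numpy as np

def simple_bls(t, y, periods, frac=0.05, nbins=100):
    """Score each trial period by the depth of the deepest phase box."""
    width = max(1, int(frac * nbins))
    kernel = np.ones(width) / width
    power = np.zeros(len(periods))
    for i, p in enumerate(periods):
        bins = np.minimum((((t % p) / p) * nbins).astype(int), nbins - 1)
        counts = np.bincount(bins, minlength=nbins)
        sums = np.bincount(bins, weights=y, minlength=nbins)
        means = np.where(counts > 0, sums / np.maximum(counts, 1), y.mean())
        ext = np.r_[means, means[:width - 1]]     # wrap the box in phase
        box = np.convolve(ext, kernel, mode="valid")
        power[i] = y.mean() - box.min()           # depth of deepest box
    return power

# Synthetic 12-hour eclipsing binary at 2-minute cadence over 6 days
t = np.arange(0, 6 * 24, 2 / 60)                  # hours
y = np.ones_like(t)
y[((t % 12.0) / 12.0) < 0.05] -= 0.1              # 10% deep eclipses
periods = np.linspace(8, 48, 400)                 # trial periods (hours)
best = periods[np.argmax(simple_bls(t, y, periods))]
```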
Period detections of 24-hours and corresponding aliases (4, 6, 8, 16, 36, 48, and 72 hours) were masked in +/- .1 hour widths. The results were sorted by BLS signal detection strength - BLS periodogram peak power in terms of sigmas above the mean power. Targets with peak power greater than 10-sigma were verified visually with a panel detection plot. We use the Lomb-Scargle (LS) algorithm to identify sinusoidal variables. For LS, we used a period range 3-720 hours to include sensitivity to longer period variables. We recover slightly lower amplitude variables (minimum discovery amplitude in this work $= .008$) than eclipsing binaries (minimum discovery depth in this work $= .029$) as shown in the Appendix.
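The alias masking and LS peak selection can be sketched as follows; here `scipy.signal.lombscargle` (which takes angular frequencies) stands in for the cited LS implementation, and the noiseless 4.67-hour synthetic variable is illustrative:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 720, 4000))            # observation times (hours)
y = 0.02 * np.sin(2 * np.pi * t / 4.67)           # 4.67-hour variable

periods = np.linspace(3, 720, 20000)              # trial periods (hours)
power = lombscargle(t, y, 2 * np.pi / periods)    # angular frequencies

# Mask 24 hours and its aliases in +/- 0.1 hour windows
mask = np.ones_like(periods, dtype=bool)
for p0 in (4, 6, 8, 16, 24, 36, 48, 72):
    mask &= np.abs(periods - p0) > 0.1
best = periods[mask][np.argmax(power[mask])]
```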
Figure \ref{fig:eb_detection} shows the phase-folded Evryscope light curve for EVRJ110815.96-870153.8, a K4V primary with a .21 \(M_\odot\) secondary and a BLS-detected period of 12.28 hours. Figure \ref{fig:var_detection} shows the light curve for EVRJ032442.50-780853.9, a variable star with an LS-detected period of 4.67 hours.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{paper_plot_22N16EVRPOL112129_19005513.png}
\includegraphics[width=1.0\columnwidth]{power_spectrum_plot.png}
\caption{An example low mass eclipsing binary discovery (EVRJ110815.96-870153.8) from this survey. The Evryscope light curve phased on its period of 12.277 hours is shown on the top panel. Grey points = 2 minute cadence, blue points = binned in phase. The bottom panel shows the BLS power spectrum with the highest peak at the 12.277 hour detection.}
\label{fig:eb_detection}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{paper_plot_46N26EVRPOL233773_39648516.png}
\includegraphics[width=1.0\columnwidth]{22N16_power_spectrum_plot_ls.png}
\caption{An example variable discovery (EVRJ032442.50-780853.9) from this survey. The Evryscope light curve phased on its period of 4.676 hours is shown on the top panel. Grey points = 2 minute cadence, blue points = binned in phase. The bottom panel shows the LS power spectrum with the highest peak at the 4.676 hour detection.}
\label{fig:var_detection}
\end{figure}
\subsection{False Positive Tests} \label{false_positive}
We performed several tests to verify the variability signals were not false positives. First, we compared the candidate light curve with several nearby reference star light curves, looking for similar variation that would indicate systematics or PSF blending. The reference stars within 0.2 degrees of the target were filtered by magnitude and light curve coverage. The nearest three with magnitudes within 0.5 mag of the target star, and with light curve coverage and number of data points within 20\% of the target light curve, were chosen for comparison. The references are folded at the detected period of the candidate and inspected visually for signs of similar signals, offsets, or outliers. Candidates with references showing similar variability are assumed to be systematics and thrown out.
Next we tested how well-mixed in phase the observations were, since poor mixing can indicate matched-filter fits to systematics or data gaps instead of astrophysical signals. This is performed by folding the candidate on the detected period, color coding the points by time (a blue-to-red scheme mapped from early to late times), and visually inspecting the resulting plot. For each discovery, we also compared the phased light curves of the first and second halves of the data, looking for inconsistency. Candidates with marginal results from these tests were reviewed by an additional person and thrown out if both reviewers agreed the target was suspect.
Eclipsing binary light curves that did not reveal a secondary eclipse or out-of-transit ellipsoidal variation were tested further. For these candidates, we folded the light curves at twice the detected period, looking for differences in odd/even transit depths to rule out finding half of the actual period. Candidates passing these tests were then flagged as probable variable discoveries and analyzed further as detailed in \S~\ref{section_analysis}.
\section{FOLLOWUP OBSERVATIONS} \label{section_followup_observations}
Followup observations for select eclipsing targets were made with the PROMPT telescopes \citep{2005NCimC..28..767R} in order to confirm the Evryscope detection. We used the SOAR Goodman spectrograph \citep{2004SPIE.5492..331C} for stellar classification and intermediate-resolution radial velocity measurements. We used the CHIRON \citep{2013PASP..125.1336T} spectrograph for high-resolution radial velocity measurements to measure the companion masses of select suspected low-mass secondaries.
\subsection{SOAR Goodman ID Spectroscopy} \label{section_ID_spec}
We observed the low mass candidates on April 29, 2018 on the SOAR 4.1 m telescope at Cerro Pachon, Chile with the Goodman spectrograph. We used the red camera with the 400 l/mm grating and a GG-455 filter in M1 and M2 preset mode with 2x2 binning and the 1" slit (R $\sim$ 825). The red camera\footnote{http://www.ctio.noao.edu/soar/content/goodman-red-camera} is optimized for the red part of the optical spectrum and, when used with the M1 and M2 presets, provides a wavelength coverage of 3500-9000 Angstroms. The Goodman spectra are 2-D, single order. We took eight consecutive 60s spectra for each of the targets and for the standard LTT3864. For calibrations, we took 3 x 60s FeAr lamps, 10 internal quartz flats using 50\% quartz power and 10s integrations, and 10 bias spectra.
We processed the spectra with a custom pipeline written in Python by the Evryscope team, which we describe here. The eight spectra for each target are median-combined, bias-subtracted, and flat-corrected. A 3rd-order polynomial is fit to the brightest pixels in each row; the spectra are then extracted in a 10-pixel range and background subtracted. We identify 8 prominent lamp emission lines for each preset (including 3749, 4806, and 6965 Angstroms, among others spread across the entire wavelength range) and compare with the known lines of the Iron-Argon arc lamp using a Gaussian fit of each feature. We use a 4th-order polynomial to fit the Gaussian peaks and wavelength-calibrate each spectrum. We used the standard star LTT3864 to flux-calibrate, by first removing prominent absorption features and then fitting a 7th-order polynomial to the continuum. The resulting SOAR standard star spectrum was visually matched to the template from the ESO library and verified to fit within the template precision. The spectra were normalized, and the results from the M1 and M2 presets were combined for each target with a wavelength coverage of 3500-9000 Angstroms.
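The wavelength-calibration step (Gaussian centroids of arc-lamp lines, then a 4th-order polynomial through the peaks) can be sketched like this; the line list and quadratic dispersion relation below are synthetic, not the actual FeAr solution:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sig, cont):
    return cont + amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Synthetic lamp: emission lines at "known" wavelengths placed with an
# invented quadratic pixel-to-wavelength dispersion
true_pix = np.array([120.0, 410.0, 680.0, 1050.0, 1370.0, 1830.0])
known_wav = 3500.0 + 1.85 * true_pix + 1.0e-5 * true_pix**2
pix = np.arange(2048, dtype=float)
lamp = np.zeros_like(pix)
for p in true_pix:
    lamp += gaussian(pix, 100.0, p, 2.0, 0.0)

# Gaussian-fit each line in a small window to get its pixel centroid
centroids = []
for p in true_pix:
    lo, hi = int(p) - 10, int(p) + 10
    popt, _ = curve_fit(gaussian, pix[lo:hi], lamp[lo:hi],
                        p0=[80.0, p + 1.0, 3.0, 0.0])
    centroids.append(popt[1])

# 4th-order polynomial wavelength solution (pixels scaled for conditioning)
cent = np.array(centroids) / 1000.0
coeffs = np.polyfit(cent, known_wav, deg=4)
wav_solution = np.polyval(coeffs, pix / 1000.0)
```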
Errors in the SOAR spectra arise from instrumentation systematics, observational conditions, and the extraction pipeline. Instrumentation error sources are dominated by flexure, component alignment, and limitations in optical quality due to manufacturing constraints; see \cite{2004SPIE.5492..331C} for an elaborate discussion of these contributions. Observational sources of errors are primarily due to background noise, airmass, and atmospheric effects. Errors in the spectra from the extraction process are discussed in detail in \cite{2017ASPC..509..263F}; the chosen standard, normalization process, and resolution are the error sources relevant to this work. The Goodman spectrograph has been operating consistently for over 15 years, and we use the accumulated knowledge to minimize errors from instrumentation, observation, and processing sources. In \S~\ref{section_SOAR_ID_spectra_analysis} we compare the SOAR ID spectra to the spectra of stars with known stellar types. The known spectra are from different instruments, observational strategies, and pipelines; additionally the available known spectra are limited to an accuracy of $\approx$1-2 in the luminosity class. The combined errors in the high SNR SOAR ID spectra are less than this limitation. We demonstrate this in \S~\ref{section_SOAR_ID_spectra_analysis} by comparing the results from different stellar classification methods, which are consistent to $\approx$1-2 in the luminosity class.
\subsection{PROMPT Photometry}
EVRJ114225.51-793121.0, EVRJ06456.10-823501.0, EVRJ184114.02-843436.8, and EVRJ211905.47-865829.3 were observed with the PROMPT P8 60cm telescope located at CTIO Chile. All observations were taken with Johnson B and Johnson R filters, interleaved. Table \ref{tab:prompt_phot} summarizes the PROMPT followup work.
\begin{table}
\caption{PROMPT observations of select targets.}
\begin{tabular}{ l c c c}
ID (EVR+) & Date & Images & B/R(s)\\
\hline
J06456.10-823501.0 & Dec 10, 2017 & 412 & 40/20\\
J114225.51-793121.0 & Oct 30, 2017 & 190 & 90/60\\
J114225.51-793121.0 & Feb 16, 2018 & 288 & 90/60\\
J184114.02-843436.8 & Dec 19, 2017 & 202 & 100/45\\
J211905.47-865829.3 & Nov 21, 2017 & 120 & 130/90\\
\hline
\end{tabular}
\label{tab:prompt_phot}
\end{table}
The PROMPT followup observations confirm the candidate variability is astrophysical and not an Evryscope systematic, by recovering the Evryscope detection signal with a separate instrument at a different eclipse time. The PROMPT telescopes have a 100 times larger aperture than the Evryscope cameras, giving the PROMPT light curves a lower root-mean-square (RMS) scatter and improved signal-to-noise ratio (SNR) compared to the Evryscope discovery light curves. The amount of improvement depends on many factors, including target brightness and sky background; here we show a representative example, EVRJ211905.47-865829.3, in Figure \ref{fig:combined_lc}. The light curve RMS (after removing the eclipse) for this target is .006 in PROMPT and .108 in Evryscope (unbinned 2-minute cadence). This corresponds to an SNR of $\approx{167}$ for the PROMPT single transit light curve and $\approx{9.5}$ for the Evryscope one year light curve. These results compare well with the estimated theoretical SNRs of 175 and 12 for PROMPT and Evryscope respectively, using reasonable values for sky background, throughput, and airmass for these telescopes observing an m\textsubscript{v} $=$ 14.0 magnitude target. We point out that the binned Evryscope light curve can reach the SNR of the PROMPT light curve, albeit with reduced sampling in this example. In an upcoming white dwarf / hot subdwarf fast binary discovery paper (Ratzloff et al., in prep) we demonstrate the ability to reach higher than PROMPT SNR with multiple years of binned Evryscope data. In this work, we use PROMPT to verify the Evryscope candidates and better characterize the eclipse depth and shape, reducing the error in the companion radii calculation. For this target, we also observed the secondary eclipse for comparison with the primary eclipse shown in Figure \ref{fig:combined_lc}.
The PROMPT data also provides an additional eclipse time (several months past the latest Evryscope eclipse), and by phase-folding both light curves, the period accuracy is increased.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{paper_plot_combined_32N45_.png}
\includegraphics[width=1.0\columnwidth]{paper_plot_combined_werrors_32N45_.png}
\caption{\textit{Top:} Combined light curves of EVRJ211905.47-865829.3. This object was flagged as a potential 9.3 hour transiting gas giant planet, as the transit depths are unchanged with color and in odd/even phase. There is a slight out of phase ellipsoidal variation when folded at the 18.6 hour period, indicating it is most likely a grazing eclipsing binary with nearly identical primary and secondary components. \textit{Bottom:} A detailed view of the transit in the PROMPT light curve with 1$\sigma$ errors shown.}
\label{fig:combined_lc}
\end{figure}
The PROMPT images were processed with a custom aperture photometry pipeline written in Python. The images were dark and bias-subtracted and flat-field-corrected using the master calibration frames. Five reference stars of similar magnitude are selected and aperture photometry is performed using a range of aperture sizes. The background is estimated using a sigma-clipped annulus for each star, scaled by the aperture size. A centroiding step fits a Gaussian to the PSF to calculate the best center and ensures each aperture center is consistent regardless of pixel drift. The light curve RMS variation is computed for the range of apertures, and the lowest-variation aperture size is chosen. A final detrending step using a 3rd order polynomial is applied to remove remaining systematics. Photometric errors are calculated per epoch using the estimated CCD aperture photometry noise in \cite{1995ExA.....6..163M} and the atmospheric scintillation noise approach in \cite{1967AJ.....72Q.328Y}. A detailed summary of the photometric error calculation is given in \cite{2017AJ....153...77C}. We combined the PROMPT and Evryscope light curves for final inspection. An example of a grazing eclipsing binary originally flagged as a 1.7 R\textsubscript{J} planet candidate is shown in Figure \ref{fig:combined_lc}. Radial velocity follow-up with the HARPS \citep{2003Msngr.114...20M} spectrograph on the ESO La Silla 3.6m telescope, combined with the detailed light curve analysis, confirms the candidate is a grazing eclipsing binary.
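A stripped-down NumPy version of the aperture/annulus step (fixed center, single aperture; the real pipeline also Gaussian-centroids each star and scans aperture sizes) might look like:

```python
import numpy as np

def aperture_flux(img, x0, y0, r_ap, r_in, r_out, clip=3.0):
    """Aperture sum minus a sigma-clipped annulus sky, scaled by area."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    ap = r <= r_ap
    sky = img[(r >= r_in) & (r <= r_out)]
    for _ in range(3):                            # crude sigma clipping
        sky = sky[np.abs(sky - sky.mean()) < clip * sky.std()]
    return img[ap].sum() - sky.mean() * ap.sum()

# Synthetic frame: flat sky of 10 counts, unit noise, 5000-count star
rng = np.random.default_rng(3)
img = 10.0 + rng.normal(0.0, 1.0, (64, 64))
yy, xx = np.indices(img.shape)
sigma = 2.0
img += 5000.0 / (2 * np.pi * sigma**2) * np.exp(
    -((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2 * sigma**2))
flux = aperture_flux(img, 32.0, 32.0, r_ap=8.0, r_in=12.0, r_out=20.0)
```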
\subsection{Intermediate-resolution Spectroscopy and Radial Velocity}
EVRJ114225.51-793121.0, EVRJ06456.10-823501.0, EVRJ053513.22-774248.2, EVRJ184114.02-843436.8, and EVRJ211905.47-865829.3 were observed on November 15 and 19, 2017 and December 15 and 16, 2017 on the SOAR 4.1 m telescope at Cerro Pachon, Chile with the Goodman spectrograph. EVRJ110815.96-870153.8, EVRJ180826.26-842418.0, EVRJ165050.23-843634.6, and EVRJ103938.18-872853.8 were observed on February 12, 2018 and March 3, 2018. We used the blue camera with the 2100 l/mm grating in custom mode with 1x2 binning and the 1" slit (R $\sim$ 5500). We took four 300-360s spectra per target, depending on the target and conditions. For all targets, we took 3 x 60s FeAr lamps after each group of science images. We took 10 internal quartz flats with 80\% quartz lamp power and 60s integration, and 10 bias spectra.\\
The spectra are processed using a modified version of the Python code described in \S~\ref{section_ID_spec} and radial velocity measurements are calculated (\S~\ref{section_SOAR_RV}). The SOAR spectra return radial velocity precision of $\approx$ 10 km/s for our targets, which allows us to characterize the secondary mass for small late M-dwarf stars. This also allowed us to rule out potential planetary-mass secondaries - the case in several of the grazing eclipses.\\
\subsection{High-resolution Radial Velocity}
EVRJ06456.10-823501.0 and EVRJ053513.22-774248.2 were observed between January 28, 2018 and March 25, 2018 on seven nights (one data point per night) with the SMARTS 1.5 m telescope at CTIO, Chile with the CHIRON spectrograph. EVRJ184114.02-843436.8 was observed on March 23, 2018. Spectra were taken in image slicer mode (R $\sim$ 80000). One 1500 to 1800 second spectrum was taken depending on the target and conditions. Spectra of RV standard HD131977 were taken to verify processing results.
Spectra were wavelength calibrated by the CHIRON pipeline; we then measured radial velocities with a custom Python code. We visually inspected the spectral orders and chose the top seven by SNR and with prominent atmospheric absorption features. The orders are spread throughout the wavelength range, and we select the most prominent absorption feature per order. Within each of the selected orders, for each observation, we clip a small section (typically 20 Angstroms) encompassing the best absorption feature. For example, order nine uses the 4957 Angstrom feature, order fourteen uses the 5328 Angstrom feature, and order thirty-seven uses the 6563 Angstrom feature. We fit a Lorentzian to the absorption features and measure the wavelength shift of each observation in each order. For each observation, we sigma clip any outlier orders and use the average shift to calculate the velocity. We place error limits using the standard deviation of the measured shifts between the orders. The error in the RV standard is measured to $\approx$ 200 m/s, while the errors in the fainter targets are $\approx$ 1 km/s. An example is shown in Figure \ref{fig:combined_CHIRON}; the best fit RV amplitude from the CHIRON data is 69.0 km/s and for the SOAR data is 64.7 km/s.
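The per-order shift measurement reduces to a Lorentzian fit plus a Doppler conversion; a sketch on a synthetic feature near the 6563 Angstrom line (the window width mirrors the ~20 Angstrom clips described above):

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458                               # speed of light (km/s)

def lorentzian(x, depth, mu, gamma, cont):
    return cont - depth * gamma**2 / ((x - mu) ** 2 + gamma**2)

# Synthetic absorption feature shifted 0.5 Angstrom from its rest value
lam0, shift = 6563.0, 0.5
lam = np.linspace(lam0 - 10.0, lam0 + 10.0, 400)
spec = lorentzian(lam, 0.4, lam0 + shift, 0.8, 1.0)

popt, _ = curve_fit(lorentzian, lam, spec, p0=[0.3, lam0, 1.0, 1.0])
rv = C_KMS * (popt[1] - lam0) / lam0             # measured velocity (km/s)
```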
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{CHIRON_SOAR_plot_26N59_RV_combined.png}
\caption{Combined Radial Velocity curves for target EVRJ06456.10-823501.0. The red data points are from CHIRON RV data, and the blue points are SOAR data with the yellow and green curves of best fit.}
\label{fig:combined_CHIRON}
\end{figure}
\section{DISCOVERIES} \label{section_analysis}
In this section we present the discoveries beginning with the eclipsing binaries and variables. We measure the amplitudes of the variation, and for select targets we use the radial velocity measurement to estimate the companion mass. We show distributions of the periods, amplitudes, and magnitudes of the discoveries and summarize the important statistics of the search. All results are summarized in Tables 8-11.
\subsection{Discovery candidates parameter estimations}
Candidates passing the false positive checks (\S~\ref{false_positive}) are separated by variation type (eclipse-like or sinusoidal variable-like) and measured. The eclipsing binary light curves are folded on the best period and fit with a Gaussian, using the approximate phase and depth from the visual inspection plot as the prior. For the variable candidates, we use the best sinusoidal fit from the LS detection. Given the large number of candidates, fitting the light curve amplitude consistently and automatically is key. An additional challenge is the degeneracy due to orbital inclination, limb darkening, and orbital eccentricity. We find the Gaussian fit (for eclipsing binaries) and the best LS sinusoidal fit (for variables) to be effective and efficient measures of the variability of the discoveries, while select targets with followup data can be fit with more complicated tools (see \S~\ref{section_SOAR_RV}). Figure \ref{fig:best_fits} shows an example eclipsing binary and variable star fit.\\
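The automated eclipse measurement amounts to a seeded least squares Gaussian fit to the folded light curve; a sketch on a synthetic 8\%-deep eclipse:

```python
import numpy as np
from scipy.optimize import curve_fit

def eclipse_model(phase, depth, mid, width, base):
    return base - depth * np.exp(-0.5 * ((phase - mid) / width) ** 2)

# Synthetic folded light curve with an 8% eclipse at phase 0.5
rng = np.random.default_rng(4)
phase = rng.uniform(0, 1, 3000)
flux = eclipse_model(phase, 0.08, 0.5, 0.02, 1.0) + rng.normal(0, 0.01, 3000)

p0 = [0.05, 0.5, 0.05, 1.0]          # prior from the inspection plot
popt, _ = curve_fit(eclipse_model, phase, flux, p0=p0)
depth = popt[0]
```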
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{amplitude_fit_plot_paper_5N24EVRPOL029441_.png}
\includegraphics[width=1.0\columnwidth]{amplitude_fit_plot_paper_4N56EVRPOL020122_.png}
\caption{Top: Eclipsing binary discovery EVRJ131324.31-792126.3 folded on its 33.7 hour period representative of 100's of Evryscope variable discoveries. Gray points are two minute cadence and yellow is the best Gaussian fit to measure depth. Bottom: variable star discovery EVRJ131228.85-782429.2 folded on its 136.665 hour period representative of 100's of Evryscope variable discoveries. Gray points are two minute cadence and yellow is the best LS fit to measure amplitude.}
\label{fig:best_fits}
\end{figure}
\subsection{Identification Spectra} \label{section_SOAR_ID_spectra_analysis}
For the discoveries with potential low-mass secondaries, we compare the SOAR ID spectra to ESO template spectra (available at www.eso.org); see Figure \ref{fig:id_spectra}. After finding the closest matching spectra, we compare the results from the color differences classifier described in the previous section. Finally, we use the PyHammer \citep{2007AJ....134.2398C} spectra fitting tool to confirm our fits. PyHammer uses empirical templates of known spectral types, performs a weighted least squares best fit to the input spectra, and returns the estimated spectral type. For the low-mass secondary eclipsing binaries, the results from the three methods are in agreement to within 1-2 in the luminosity class. The spectral types are shown in Table \ref{tab:low_mass_eb}.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{Final_combined_spectrum_22N14_plot.png}
\caption{An example low mass eclipsing binary discovery (EVRJ103938.18-872853.8) ID spectra taken with the Goodman Spectrograph on the 4.1m SOAR telescope at CTIO, Chile. The green line is a K5V template from the ESO library.}
\label{fig:id_spectra}
\end{figure}
\subsection{Radial Velocity - SOAR Data} \label{section_SOAR_RV}
We cross-correlate the SOAR spectra and measure the velocity shift throughout the period found in the Evryscope photometry. Using the color differences in \S~\ref{section_stellar_class}, and the stellar type, radii, and mass profiles from \cite{2013ApJS..208....9P}, we derive functions (using 1-D interpolation) to estimate the primary radius and mass. The secondary radius and mass are then determined using the Keplerian/Newtonian calculations described in the following section. For this step, we assume a circular, edge-on orbit and no limb-darkening. We run a Monte Carlo (MC) simulation to estimate the radius and mass ranges. Due to the simplifying inclination assumption and the uncertainty in the SOAR RV measurements, our mass calculations for the secondaries are lower limits. More detailed modeling will be addressed in future work. We discuss our final solutions in \S~\ref{section_lmeb}. The results are listed in Table \ref{tab:low_mass_eb} and plots of the photometric and radial velocity light curves are shown in the appendix.
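For reference, the Keplerian step can be sketched as follows; the primary mass used here (0.73 \(M_\odot\), roughly appropriate for a K4V star) is an assumed illustrative value, and a circular, edge-on orbit is assumed throughout:

```python
import numpy as np

G = 6.674e-11                       # gravitational constant (SI)
MSUN = 1.989e30                     # solar mass (kg)

def secondary_mass(K_kms, P_hours, m1_msun, iters=50):
    """Solve K = (2 pi G / P)**(1/3) * m2 / (m1 + m2)**(2/3) for m2
    (circular, edge-on orbit) by fixed-point iteration."""
    K, P, m1 = K_kms * 1e3, P_hours * 3600.0, m1_msun * MSUN
    m2 = 0.1 * MSUN                 # initial guess
    for _ in range(iters):
        m2 = K * (P / (2 * np.pi * G)) ** (1 / 3) * (m1 + m2) ** (2 / 3)
    return m2 / MSUN

# e.g. K ~ 56 km/s and P = 12.28 h with an assumed 0.73 Msun primary
m2 = secondary_mass(56.0, 12.28, 0.73)
```

With these inputs the iteration settles near 0.2 \(M_\odot\), consistent with the low-mass secondary quoted for EVRJ110815.96-870153.8.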
\subsubsection{Secondary mass and radius determination}
\textbf{Photometry:} From visual inspection of the candidate light curves, an initial guess is made for the transit phase and depth, which is fit with a Gaussian (Figure \ref{fig:soar_rv}). The data are fit with a least squares minimization using SciPy to measure the amplitude and phase.\\
\textbf{Radial Velocity:}
An initial sine curve fit is made using a guess for the amplitude and zero point, while the phase and period are controlled by the transit time and the period found in the photometric light curve (Figure \ref{fig:soar_rv}). The amplitude and zero point are used as inputs to a sine fitting function with a fixed phase and period. The function fits the data with a least squares fit (the best fit curve in Figure \ref{fig:soar_rv}), returning an RV semi-amplitude of 56 km/s for target EVRJ110815.96-870153.8. This assumes a circular orbit and edge-on geometry. We leave more detailed analysis with additional variables to future work.
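The fixed-ephemeris amplitude fit can be sketched as follows (synthetic epochs; only the semi-amplitude and zero point are free, with phase and period pinned by the photometry):

```python
import numpy as np
from scipy.optimize import curve_fit

period, t0 = 12.28, 0.0                 # fixed by the photometric ephemeris

def rv_model(t, K, gamma):
    return gamma - K * np.sin(2 * np.pi * (t - t0) / period)

# Synthetic RV epochs: K = 56 km/s, systemic velocity 10 km/s, 5 km/s noise
rng = np.random.default_rng(5)
t = np.linspace(0, 24, 8)
v = rv_model(t, 56.0, 10.0) + rng.normal(0, 5.0, len(t))

popt, _ = curve_fit(rv_model, t, v, p0=[40.0, 0.0])
K_fit, gamma_fit = popt
```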
\subsubsection{MC best fit of mass and radius}
Using the methods described in the previous section, we perform a Monte Carlo simulation (as described in \cite{Press:2007:NRE:1403886}) to determine the best fit and distribution of the primary and secondary mass and radius.
From the Evryscope photometry, we use a bootstrap technique to leverage the very large number of epochs. We randomly choose half of the data points for each iteration, with 5000 trials, and fit the data with a least squares minimization for each iteration. We also vary the radius of the primary for each trial by the range in \citet{2013ApJS..208....9P} (spanning +/- 1 in numeric class). From the radial velocity data, we choose a random number in the error bar range of each of the data points (red) and fit the best sine curve (the silver curves shown in Figure \ref{fig:soar_rv}). We vary the mass of the primary for each trial by the error range in the estimated mass. The propagated results are shown in Figure \ref{fig:soar_rv_errors}.
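The photometric half of the MC can be sketched as a bootstrap over epochs (synthetic light curve and 200 trials for brevity; the real run uses 5000 trials and also perturbs the primary radius each trial):

```python
import numpy as np
from scipy.optimize import curve_fit

def eclipse_model(phase, depth, mid, width, base):
    return base - depth * np.exp(-0.5 * ((phase - mid) / width) ** 2)

# Synthetic folded light curve with an 8% eclipse
rng = np.random.default_rng(6)
phase = rng.uniform(0, 1, 2000)
flux = eclipse_model(phase, 0.08, 0.5, 0.03, 1.0) + rng.normal(0, 0.02, 2000)

depths = []
for _ in range(200):                         # bootstrap trials
    idx = rng.choice(len(phase), size=len(phase) // 2, replace=False)
    popt, _ = curve_fit(eclipse_model, phase[idx], flux[idx],
                        p0=[0.05, 0.5, 0.05, 1.0])
    depths.append(popt[0])
depth_best, depth_err = np.mean(depths), np.std(depths)
```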
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{soar_fit_paper_plot_bc_22N16_SOAR_RV_1000.png}
\caption{EVRJ110815.96-870153.8 K-dwarf eclipsing binary eclipse and radial velocity fit. Top: The best fit (yellow) to the Evryscope photometry using a Gaussian with an initial guess to measure the depth and determine secondary radius. Bottom: The best fit (green) to the SOAR RV data (red points) using a sine curve with an initial guess to measure the velocity and determine the secondary mass. The silver lines are the MC simulation to determine the best fit and error range.}
\label{fig:soar_rv}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\columnwidth]{soar_hist_paper_plot_bc_22N16_Primary_mass_SOAR_RV_5000.png}
\includegraphics[width=0.45\columnwidth]{soar_hist_paper_plot_bc_22N16_Primary_radius_SOAR_RV_5000.png}
\includegraphics[width=0.45\columnwidth]{soar_hist_paper_plot_bc_22N16_Secondary_mass_SOAR_RV_5000.png}
\includegraphics[width=0.45\columnwidth]{soar_hist_paper_plot_bc_22N16_Secondary_radius_SOAR_RV_5000.png}
\caption{Primary and secondary mass and radius determined from our MC simulation. The top panels are the mass and radius of the primary in solar units, the bottom panels are the mass and radius of the secondary. The y-axis is the counts from the MC simulation totaling 5000 trials.}
\label{fig:soar_rv_errors}
\end{figure}
\subsection{Search Statistics}
Sorting by BLS sigma power and choosing only the top candidates greater than 10 sigma narrows the candidates to 5.6\% (9104/163,584) of the filtered list. Visual inspection yields 7.3\% (649/9104) actual variables from the BLS 10-sigma power list. The fraction of all discoveries to all targets searched is 0.40\% (649/163,584), and the BLS false-positive rate is 5.2\% (8455/163,584). Of the 649 total variables detected, 346 are known in VSX. The total number of known periodic variables listed in VSX for the same sky area as the Evryscope polar search is 1928, giving a return of 17.9\% (346/1928). There are 1050 known variables in the widest period range (3-720 hours) we searched, giving a 33.0\% return, and 858 known variables in the period range (3-240 hours) we searched with BLS, giving a 40.3\% return. We add 303 new variables, a 29\% increase over the known variables in the region.\\
\subsection{Eclipsing Binaries and Variables - Distribution of results}
Histograms of the eclipsing binary discoveries are shown in Figure \ref{fig:discovery_dist}. We discovered a total of 168 eclipsing binaries; most periods found are 75 hours or less, and most amplitudes found are 5-25\%. The variable results are shown in Figure \ref{fig:discovery_dist_var}; we found 135 in total, most with smaller amplitudes and shorter periods.
\begin{figure}[h!]
\centering
\includegraphics[width=.45\columnwidth]{eb_total_hist_period_plot.png}
\includegraphics[width=.45\columnwidth]{eb_total_hist_mag_plot.png}
\includegraphics[width=.45\columnwidth]{eb_total_hist_amp_plot.png}
\caption{Histogram plots summarizing the eclipsing binary discovery results. We are sensitive to periods of several hundred hours and a large fraction of our discoveries are greater than 10\% amplitude.}
\label{fig:discovery_dist}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=.45\columnwidth]{variables_total_hist_period_plot_720.png}
\includegraphics[width=.45\columnwidth]{variables_total_hist_period_plot_50.png}
\includegraphics[width=.45\columnwidth]{variables_total_hist_mag_plot.png}
\includegraphics[width=.45\columnwidth]{variables_total_hist_amp_plot.png}
\caption{Histogram plots summarizing the variable discovery results. A larger fraction of the variable star discoveries are small amplitude and short period.}
\label{fig:discovery_dist_var}
\end{figure}
\subsection{Classification}
The discovery classification results are shown in Figure \ref{fig:discovery_class}. We find 267 are main sequence, 34 are giants, and two are not classified. Spectral type G is the most common, with the spectral types shown in Table \ref{tab:class_discovery}. We find more giant variables (24) than giant eclipsers (10) as shown in Figure \ref{fig:discovery_class}. Also shown are the discoveries by star size and spectral type compared to total targets searched (Table \ref{tab:class_discovery_total}).
\begin{table}
\caption{Classification discovery results - spectral type}
\begin{tabular}{ l c c }
Classifier Spectral Type & Number of Discoveries & Percent\\
\hline
B & 2 & 0.7\\
A & 14 & 4.6\\
F & 89 & 29.4\\
G & 109 & 36.0\\
K & 76 & 25.1\\
M & 11 & 3.6\\
none & 2 & 0.7\\
Total & 303 & 100\\
\hline
\end{tabular}
\label{tab:class_discovery}
\end{table}
\begin{table}
\caption{Classification discovery results - compared to total searched}
\begin{tabular}{ l c c c}
Classification & Total Searched & Number of Discoveries & Percent\\
\hline
ms & 114585 & 267 & 0.23\\
giant & 40775 & 34 & 0.08\\
HSD & 335 & 0 & 0.00\\
WD & 21 & 0 & 0.00\\
O & 20 & 0 & 0.00\\
B & 331 & 2 & 0.60\\
A & 4110 & 14 & 0.34\\
F & 26102 & 89 & 0.34\\
G & 49560 & 109 & 0.22\\
K & 60964 & 76 & 0.12\\
M & 14629 & 11 & 0.08\\
\hline
\end{tabular}
\label{tab:class_discovery_total}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{RPM_plot_targets_B_V_gmm_eb.png}
\includegraphics[width=1.0\columnwidth]{RPM_plot_targets_B_V_gmm_variables.png}
\caption{Classification results of the eclipsing binary and variable discoveries. Negative log-likelihood lines at 1, 1.7, and 2.8 are shown. Top: eclipsing binaries. Bottom: variables.}
\label{fig:discovery_class}
\end{figure}
\subsection{Eclipsing Binaries with low-mass secondaries} \label{section_lmeb}
We identified seven of the eclipsing binary discoveries as hosting potential low-mass secondaries and found that four have masses below 0.25 \(M_\odot\). Three of the systems are fully eclipsing binaries (p = 12.3 to 25.9 hours) with dwarf primaries (SpT\textsubscript{p} = G5V, K4V, K5V) and M-dwarf secondaries (mass = 0.06 to 0.20 \(M_\odot\)). The other three systems are grazing eclipses (p = 20.8 to 137.1 hours) with dwarf primaries (SpT\textsubscript{p} = G8V, K2V, K7V), M-dwarf secondaries (mass = 0.24 to 0.37 \(M_\odot\)), and minimum radii (r = 0.20 to 0.26 \(r_\odot\)). Table \ref{tab:low_mass_eb} presents a list of all low-mass secondary targets. Also included are a likely visual binary, EVRJ114225.51-793121.0 (resolved in SOAR observations), and EVRJ211905.47-865829.3, a grazing EB with nearly identical primary and secondary.
\section{SUMMARY} \label{section_summary}
The Evryscope was deployed to CTIO in May 2015 and has been operational since that time. We conducted a variability search of the southern polar area using the first 6 months of available data, selecting the brighter stars (m\textsubscript{v} $<$ 14.5) and limiting the declination range (\ang{-75} to \ang{-90}). We sorted by detection power and visually searched the top 5\% for variability. We recovered 346 known variables and discovered 303 new variables, including 168 eclipsing binaries, six of which we identify as having low-mass (0.06 to 0.37 \(M_\odot\)) secondaries with K-dwarf primaries. We encourage the community to follow up further on these targets. We measured amplitudes, periods, and variability type, and provide a catalog of all discoveries in the Appendix.
This research was supported by the NSF CAREER grant AST-1555175 and the Research Corporation Scialog grants 23782 and 23822. HC is supported by the NSFGRF grant DGE-1144081. BB is supported by the NSF grant AST-1812874. OF and DdS acknowledge support by the Spanish Ministerio de Econom\'ia y Competitividad (MINECO/FEDER, UE) under grants AYA2013-47447-C3-1-P, AYA2016-76012-C3-1-P, MDM-2014-0369 of ICCUB (Unidad de Excelencia 'Mar\'ia de Maeztu'). The Evryscope was constructed under NSF/ATI grant AST-1407589.
\bibliographystyle{apj}
\fi
\bibliographystyle{acl_natbib}
\section{Introduction}
\label{sec:intro}
Keeping readers engaged in an article and helping them find desired information are important objectives \cite{calder2009experimental,nenkova2011automatic}. These objectives help readers deal with the explosion of online content and provide an edge to content publishers in a competitive industry.
To help readers find personally relevant content
while maintaining the flow of natural reading,
we propose a new text summarization problem where the summary is \textbf{h}oned \textbf{a}s you \textbf{re}ad (HARE).
The challenge is to learn from unobtrusive user feedback, such as the types in Figure~\ref{fig:feedback_alternatives}, to identify uninteresting content to hop over.
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{figs/task_demo_alternate.pdf}
\caption{Potential feedback methods for HARE\xspace used on a smartphone. In (a), users can swipe left or right to indicate interest or disinterest in sections of text as they read. Users may also provide implicit feedback in the form of dwell time in the center window (b) or gaze location, as measured by a camera, for example (c). More interesting text may attract longer gazes or dwell times. The approaches evaluated in this paper rely on feedback similar to (a), but further development in HARE\xspace can extend to (b) or (c).}
\label{fig:feedback_alternatives}
\end{figure}
This new task is related to both query-based summarization (QS) and interactive personalized summarization (IPS). In QS, users must specify a query to guide the resultant summary \cite{damova2010query}.
For users performing focused research, specifying queries is useful, but for more leisurely reading, this requirement interrupts the natural flow.
Approaches to IPS avoid the problem of having to explicitly provide a query. However, they suffer a similar problem by requiring users to go through several iterations of summary reading and feedback-providing before a final summary is produced \cite{yan2011summarize,avinesh2018sherlock,gao2019preference,simpson2019interactive}.
In contrast, HARE\xspace places high importance on non-intrusiveness by satisfying multiple properties detailed in Section~\ref{sec:interaction_loop} (such as feedback being non-invasive).
We find that due to the high cost of generating a dataset for this task, evaluation poses a difficulty. To overcome this, we adapt recent research in unsupervised summary evaluation.
We also describe a variety of approaches for HARE\xspace that estimate \textit{what} the user is interested in and \textit{how much} they want to read. Automated evaluation finds that relatively simple approaches, based on hiding sentences nearby or similar to disliked ones or on explicitly modelling user interests, outperform the control, where no personalization is done.
Human evaluation suggests that not only is deciding the relevance of sentences rather easy in practice, but that even with simple binary feedback, HARE\xspace models may truly provide useful reading assistance.
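As an illustration of the similarity-based baselines, a candidate sentence can be hidden whenever its embedding is close to one the user disliked (a sketch; the embedding source and the 0.7 threshold are assumptions for illustration, not values from our experiments):

```python
import numpy as np

def should_hide(candidate, disliked, threshold=0.7):
    """Hide an upcoming sentence if its embedding has cosine
    similarity above `threshold` to any disliked sentence."""
    candidate = np.asarray(candidate, dtype=float)
    for d in disliked:
        d = np.asarray(d, dtype=float)
        sim = d @ candidate / (np.linalg.norm(d) * np.linalg.norm(candidate))
        if sim > threshold:
            return True
    return False
```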
The major contributions of this work are:
\begin{enumerate}
\item We define the novel HARE\xspace task, and describe a suitable evaluation technique (Section~\ref{sec:task_formulation}).
\item We describe a wide range of motivated approaches for HARE\xspace that should serve as useful baselines for future research (Section~\ref{sec:methods}).
\item We evaluate our approaches to gain a deeper understanding of the task (Section~\ref{sec:experiments}).
\end{enumerate}
\section{Related Work}
\label{sec:related}
In this section, we examine related work on QS, IPS, and unsupervised summarization evaluation.
\subsection{Query-based Summarization}
Both tasks of HARE\xspace and QS aim to produce personalized summaries.
Unlike generic summarization where many large datasets exist \cite{hermann2015teaching,alex2019multinews,Narayan2018DontGM}, development in QS has been affected by a lack of suitable training data \cite{xu2020query}.
To cope, approaches have relied on handcrafted features \cite{conroy2005classy}, unsupervised techniques \cite{van2019query}, and cross-task knowledge transfer \cite{xu2020query}.
The approach of \citet{mohamed2006improving} highlights how query-based summarizers often work by adapting a generic summarization algorithm and incorporating the query with an additional sentence scoring or filtering component. Alternatively, one can avoid training on QS data by decomposing the task into several steps, each performed by a module constructed for a related task \citep{xu2020query}.
A pervasive assumption in QS is that users have a query for which a \emph{brief} summary is expected. This is reflected in QS datasets where dozens of documents are expected to be summarized in a maximum of 250 words \cite{dang2005overview,hoa2006overview} or single documents summarized in a single sentence \citep{hasselqvist2017querybased}.
However, in HARE\xspace, we are interested in a wider range of reading preferences. This includes users who are interested in reading the whole article and users whose interests are not efficiently expressed in a written query.
\subsection{Interactive Personalized Summarization}
The iterative refinement of summaries based on user feedback is also considered by IPS approaches.
An early approach by \citet{yan2011summarize} considers progressively learning user interests by providing a summary (of user-specified length) and allowing them to click on sentences they want to know more about. Based on the words in clicked sentences, a new summary can be generated and the process repeated.
Instead of per-sentence feedback, \citet{avinesh2017joint} allows users to indicate which bigrams of a candidate summary are relevant to their interests.
A successor to this system reduces the computation time to produce each summary down to an interactive level of 500ms \cite{avinesh2018sherlock}.
The APRIL system \cite{gao2019preference} aims to reduce the cognitive burden of IPS by instead allowing users to indicate preference between candidate summaries. Using this preference information, a summary-ranking model is trained and used to select the next pair of candidate summaries.
Shared among these previous works is that the user is involved in an interactive process which interrupts the normal reading flow with the reviewing of many intermediate summaries.
In HARE\xspace, the user reads the document as it is being summarized, so that any given sentence is read at most once (if it has not already been removed). These previous works also focus on multi-document summarization, whereas we wish to improve the reading experience during the reading of individual documents.
\iffalse{
\cite{yan2011summarize}
Summarize What You Are Interested In: An Optimization Framework for Interactive Personalized Summarization
- allows users to click on sentences to indicate interest and optionally refine the summary
- requires used to specify a compression ratio
\cite{avinesh2017joint}
Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback
- https://www.aclweb.org/anthology/P17-1124.pdf
- 2017
- precursor to SHERLOCK
- propose an interactive EMDS system that asks users to label important bigrams within candidate summaries. Given the important bigrams, they use integer linear programming to optimise important bigram coverage in the summary. In simulation experiments, their system can achieve near-optimal performance in ten rounds of interaction, collecting up to 350 important bigrams. However, labelling important bigrams is a large burden on the users, as users have to read through many potentially unimportant bigrams (see Sect. 5). Also, they assume that the users’ feedback is always perfect.
- While we could show that user feedback significantly improves the quality of the summary, each iteration in our model can take from several seconds for small document collections to hours for larger collections with thousands of sentences
\cite{avinesh2018sherlock}
Sherlock: A System for Interactive Summarization of Large Text Collections
- 2018
- https://www.vldb.org/pvldb/vol11/p1902-p.v.s..pdf
- Using our system, in every interaction loop the system suggests a potential summary and the user can accept or reject individual concepts (e.g., entities) shown as green or red in the suggested summary. The feedback is then used to refine the summary for the next iteration.
\cite{gao2019preference}
(APRIL) Preference-based interactive multi-document summarisation
- https://link.springer.com/article/10.1007/s10791-019-09367-8
- 2019
- "Active Preference-based ReInforcement Learning"
- "To reduce the number of interaction rounds, we propose the Active Preference-based ReInforcement Learning (APRIL) framework. APRIL uses active learning to query the user, preference learning to learn a summary ranking function from the preferences, and neural Reinforcement learning to efficiently search for the (near-)optimal summary. Our results show that users can easily provide reliable preferences over summaries and that APRIL outperforms the state-of-the-art preference-based interactive method in both simulation and real-user experiments."
- "Controlled experiments suggest that asking for preferences places a lower cognitive burden on the human subjects than asking for absolute ratings or categorised labels (Thurstone 1927; Kendall 1948; Kingsley and Brown 2010)"
- "First, we estimate the user’s ranking over candidate summaries using active preference learning (APL) in an interaction loop. Second, we use the learnt ranking to guide a neural reinforcement learning (RL) agent to search for the (near-)optimal summary."
- "Our results suggest that with only ten rounds of user interaction, APRIL produces summaries better than those produced by both non-interactive methods and SPPI."
- https://github.com/UKPLab/irj-neural-april
- SPPI (Sokolov et al. 2016b; Kreutzer et al. 2017). The core of SPPI is a policy-gradient RL algorithm, which receives rewards derived from the preference-based feedback. It maintains a policy that approximates the utility of each candidate output and selects the higher-utility candidates with higher probability.
\cite{simpson2019interactive}
Interactive Text Ranking with Bayesian Optimisation: A Case Study on Community QA and Summarisation
- https://arxiv.org/pdf/1911.10183.pdf
- we propose an interactive text ranking approach that actively selects pairs of candidates, from which the user selects the best. Unlike previous strategies, which attempt to learn a ranking across the whole candidate space, our method employs Bayesian optimisation to focus the user’s labelling effort on high quality candidates and integrate prior knowledge to cope better with small data scenarios. We apply our method to community question answering (cQA) and extractive multi-document summarisation, finding that it significantly outperforms existing interactive approaches
- For summarisation, Sokolov et al. (2016), Lawrence and Riezler (2018) and Singh et al. (2019), train reinforcement learners by querying the user directly for rewards, which requires in the order of 10\^5 interactions
- We also test prior predictions from a state-ofthe-art summary scoring method, SUPERT (Gao et al., 2020), which uses a variant of BERT that has been fine-tuned on news articles to obtain 1024- dimensional contextualised embeddings of a summary. To score a summary, SUPERT extracts a pseudo-reference summary from the source documents, then compares its embedding with that of the test summary
- we simulate a user’s preferences with a noisy oracle based on the user-response models of Viappiani and Boutilier (2010)
- https://papers.nips.cc/paper/3943-optimal-bayesian-recommendation-sets-and-myopically-optimal-choice-query-sets.pdf
- ... not really sure what this paper is about
}\fi
\subsection{Unsupervised Summary Evaluation}
When gold-standard human-written summaries are available for a document or question-document pair, the quality of a model-produced summary is commonly computed with the ROUGE metric~\cite{lin2004automatic}.
Driven by high costs of obtaining human-written summaries at a large scale, especially for tasks such as multi-document summarization or QS, unsupervised evaluation of summaries (i.e. without using gold-standards) has rapidly developed \citep{louis2013automatically}.
\citet{louis2009automatically} found that the Jensen-Shannon divergence between the word distributions in a summary and its reference document outperforms many other candidates and achieves a high correlation with manual summary ratings, though not quite as high as ROUGE combined with reference summaries.
\citet{sun2019feasibility} consider a variety of distributed text embeddings and propose to use the cosine similarity of summary and document ELMo embeddings \cite{peters2018deep}.
\citet{bohm2019better} consider \emph{learning} a reward function from existing human ratings. Their reward function only requires a model summary and document as input and achieves higher correlation with human ratings than other metrics (including ROUGE which requires reference summaries). \citet{stiennon2020learning} also consider this approach, with a larger collection of human ratings and larger models.
However, \citet{gao2020supert} found that comparing ELMo embeddings or using the learned reward from \citeauthor{bohm2019better} does not generalize to other summarization tasks. Their evaluation of more advanced contextualized embeddings found that Sentence-BERT (SBERT) embeddings \cite{reimers2019sentence} with a Word Mover's Distance-based measure~\cite{kusner2015word} outperform other unsupervised options. Post-publication experiments by \citeauthor{bohm2019better} further support the generalizability of this approach\footnote{The additional results can be found here: \url{https://github.com/yg211/summary-reward-no-reference}.}.
In Section~\ref{sec:metric}, we adapt the method of \citeauthor{gao2020supert} to HARE\xspace evaluation.
\iffalse{
\cite{louis2009automatically}
Automatically Evaluating Content Selection in Summarization without Human Models
- We present a fully automatic method for content selection evaluation in summarization that does not require the creation of human model summaries. Our work capitalizes on the assumption that the distribution of words in the input and an informative summary of that input should be similar to each other. Results on a large scale evaluation from the Text Analysis Conference show that input-summary comparisons are very effective for the evaluation of content selection. Our automatic methods rank participating systems similarly to manual model-based pyramid evaluation and to manual human judgments of responsiveness. The best feature, JensenShannon divergence, leads to a correlation as high as 0.88 with manual pyramid and 0.73 with responsiveness evaluations.
- note that SUPERT specifically outperforms JensenShannon divergence
\cite{sun2019feasibility}
The Feasibility of Embedding Based Automatic Evaluation for Single Document Summarization
- for normal summaries using cosine sims of embeddings (nenkova paper)
- Here we present a suite of experiments on using distributed representations for evaluating summarizers, both in reference-based and in reference-free setting.
- Our experimental results show that the max value over each dimension of the summary ELMo word embeddings is a good representation that results in high correlation with human ratings.
- Averaging the cosine similarity of all encoders we tested yields high correlation with manual scores in reference-free setting.
- The distributed representations outperform ROUGE in recent corpora for abstractive news summarization but are less good on older test data and systems.
\cite{bohm2019better}
Better Rewards Yield Better Summaries: Learning to Summarise Without References
- use a learned reward function:
- summaries with high ROUGE scores often receive low human judgement.
- To find a better reward function that can guide RL to generate human-appealing summaries, we learn a reward function from human ratings on 2,500 summaries. Our reward function only takes the document and system summary as input.
- compared to the state-of-the-art supervised-learning systems and ROUGE-as-rewards RL summarisation systems, the RL systems using our learned rewards during training generate summaries with higher human ratings
- the target of our work is to learn a good reward, which is not necessarily a good evaluation metric. A good evaluation metric should be able to correctly rank summaries of different quality levels, while a good reward function focuses more on distinguishing the best summaries from the mediocre and bad summaries
\cite{gao2020supert}
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization
- uses SBERT embeddings and pseudo-summaries
- Most existing evaluation methods require certain forms of human involvement, thus are supervised: they either directly let humans rate the generated summaries (e.g. Pyramid (Nenkova and Passonneau, 2004)), elicit human-written reference summaries and measure their overlap with the generated summaries (e.g. using ROGUE (Lin, 2004a) or MoverScore (Zhao et al., 2019)), or collect some human annotations (e.g. preferences over pairs of summaries (Gao et al., 2019a)) to learn a summary evaluation function
- we focus on evaluating the relevance (Peyrard, 2019) of multi-document summaries, i.e. measuring how much salient information from the source documents is covered by the summaries. There exist a few unsupervised evaluation methods (Louis and Nenkova, 2013; Sun and Nenkova, 2019), but they have low correlation with human relevance ratings at summary level: given multiple summaries for the same source documents, these methods can hardly distinguish summaries with high relevance from those with low relevance (see §3).
- We propose SUPERT, which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques. Compared to the state-of-theart unsupervised evaluation metrics, SUPERT correlates better with human ratings by 18-3
- We find that vectorizing the summary and the top sentences in the source documents using contextualized embeddings, and measuring their semantic overlap with soft token alignment techniques is a simple yet effective method to rate the summary’s quality. The resulting method, SUPERT, correlates with human ratings substantially better than the state-of-the-art unsupervised metrics.
\cite{stiennon2020learning}
Learning to summarize from human feedback
- We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning
- find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone
From Word Embeddings To Document Distances (introduces wmd)
- 2015
Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts
- 2019
}\fi
\section{Task Formulation}
\label{sec:task_formulation}
To define the proposed task, we will first describe how a user interacts with an HARE\xspace summarizer (Section~\ref{sec:interaction_loop}).
Second, we describe a method for modelling user interests and feedback for automatic evaluation (Section~\ref{sec:user_modelling}). Third, we propose an evaluation metric for this new task (Section~\ref{sec:metric}).
\subsection{User-Summarizer Interaction Loop}
\label{sec:interaction_loop}
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{figs/task_demo_v3.pdf}
\caption{In HARE\xspace, users are shown sentences in their original order, and can provide relevance feedback. A model uses this feedback to optimize the remainder of the article, automatically hiding uninteresting sentences.}
\label{fig:demo}
\end{figure}
\begin{algorithm}[ht]
\SetAlgoLined
user chooses a document $D = [x_{1}, ..., x_{|D|}]$ to read with help from summarizer $M$\;
$S = \emptyset$ \tcp{summary sentences}
\For{i = 1, ..., $|D|$}{
\If{$M$ decides to show $x_{i}$ to user}{
show sentence $x_{i}$ to user\;
$S := S \cup \{x_{i}\}$\;
incorporate any feedback into $M$\;
}
\If{user is done reading}{break}
}
\Return S
\caption{User-Summarizer Interaction}
\label{alg:interaction}
\end{algorithm}
The interaction between a user and HARE\xspace summarizer, as shown in Figure~\ref{fig:demo} and sketched in Algorithm~\ref{alg:interaction}, consists of the user reading the shown sentences and providing feedback on their relevance. Using this feedback, the summarizer decides which remaining sentences to show, aiming to hide uninteresting sentences.
This interaction is designed to integrate smoothly into the natural reading process by exhibiting three important properties: 1) feedback is either implicit or non-intrusive, 2) sentences are presented in their original order to try to maintain coherence, and 3) updates to the summary occur beyond the current reading point so as not to distract the user.
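The loop in Algorithm~\ref{alg:interaction} can be sketched as follows (the \texttt{decide}/\texttt{update}/\texttt{feedback}/\texttt{done} interfaces are hypothetical names introduced for illustration):

```python
def interact(document, summarizer, user):
    """HARE user-summarizer interaction loop (cf. Algorithm 1).

    `summarizer.decide(sentence)` returns True to show a sentence;
    `user.feedback(sentence)` returns optional relevance feedback
    (e.g. +1/-1 for the swipe interface) that is passed back to the
    summarizer; `user.done()` ends reading early.
    """
    summary = []
    for sentence in document:
        if summarizer.decide(sentence):
            summary.append(sentence)
            fb = user.feedback(sentence)
            if fb is not None:
                summarizer.update(sentence, fb)
        if user.done():
            break
    return summary
```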
Next, we discuss how to model a user in this interaction for the purposes of automatic evaluation.
\subsection{User Modelling}
\label{sec:user_modelling}
In order to model user interaction during HARE\xspace, we need to know what kind of feedback they would provide when shown a sentence. This requires understanding how much a user would be interested in a given sentence and how feedback is provided.
\paragraph{User interests}
For our work, user interests will be modelled as a weighted set of concept vectors from a semantic embedding space.
Given a weighted set of $k$ user interests, $U = \{\langle w_{1}, c_{1}\rangle, ..., \langle w_{k}, c_{k}\rangle\}$ such that $w_{i} \in [0, 1]$ and $\max(w) = 1$, and a sentence embedding, $x$, the interest level (which we also refer to as importance) is calculated with Equation~\ref{eqn:interest}. We use cosine distance for $\Delta$. Intuitively, the importance of a sentence reflects the maximum weighted similarity to any of the interests. This method of computing importance is similar to that used by \citet{avinesh2018sherlock,wu2019neural,teevan2005personalizing}. However, we adapt it to accommodate modern distributed sentence embeddings (SBERT).
\begin{equation}
R(U, x) = \max_{i=1, ..., k} w_{i} (1 - \Delta(c_{i}, x))
\label{eqn:interest}
\end{equation}
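Equation~\ref{eqn:interest} translates directly into code; with cosine distance for $\Delta$, the term $1 - \Delta(c_i, x)$ is simply the cosine similarity (a straightforward reference implementation):

```python
import numpy as np

def interest(U, x):
    """Importance R(U, x): maximum weighted cosine similarity of
    sentence embedding x to any interest vector. U is a list of
    (weight, concept_vector) pairs with max weight 1."""
    x = np.asarray(x, dtype=float)
    scores = []
    for w, c in U:
        c = np.asarray(c, dtype=float)
        # Cosine similarity equals 1 minus the cosine distance.
        cos_sim = float(c @ x / (np.linalg.norm(c) * np.linalg.norm(x)))
        scores.append(w * cos_sim)
    return max(scores)
```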
\paragraph{Feedback types}
Given a sentence interest score of $r_{x} \in [0, 1]$, what feedback will be observed by the model?
If using implicit feedback like dwell time or gaze tracking, feedback could be continuously valued.
With explicit feedback, like ratings or thumbs up/down, feedback could be discrete.
For an in-depth discussion on types of user feedback, see \citet{jayarathna2017analysis}.
In this work, we consider an explicit feedback mechanism inspired by the ``Tinder sort'' gesture popularized by the Tinder dating app\footnote{\url{https://tinder.com/?lang=en}}, where users swipe left to indicate disinterest and right to indicate interest. This feedback interaction has proven to be quick and easy: users routinely sort through hundreds of items in a sitting \cite{david2016screened}.
To adapt this feedback method to our interactive summarization system, we can consider users to ``accept'' a sentence if they swipe right, and ``reject'' it if they swipe left (see Figure~\ref{fig:feedback_alternatives}a and Figure~\ref{fig:demo})\footnote{If we wanted to make the feedback optional, we could simply let no swipe indicate acceptance, and left swipe indicate rejection.}.
To model the noisy feedback a user provides, we adopt a logistic model, shown in Equation~\ref{eqn:neg_feedback}, following \citet{gao2019preference,viappiani2010optimal,simpson2019interactive}.
Our feedback model is parameterized by a decision threshold, $\alpha \in [0, 1]$, and a noise level, $m > 0$. Low $\alpha$ means that users are willing to accept sentences with lower importance.
We consider the model to receive a feedback value of $0$ if they reject a sentence, and $1$ if they accept.
In setting $\alpha$ for feedback modelling, we tie it to the user's length preference to better simulate realistic behavior. When users want to read very little, for example, they accept only the best sentences. If a user wants to read $l$ out of $|D|$ sentences, we set $\alpha = 1 - l/|D|$. For user modelling, we sample $l$ uniformly from the range $[1, |D|]$.
\begin{equation}
P_{\alpha, m}(\text{accept } x) = \left[1 + \exp\left(\frac{\alpha - r_{x}}{m}\right)\right]^{-1}
\label{eqn:neg_feedback}
\end{equation}
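The feedback model can be sketched directly (an illustrative sketch: the helper names and parameter values are ours, and the probability is written so that sentences with importance above the threshold $\alpha$ are accepted with probability above one half, matching the description of $\alpha$ above):

```python
import math
import random

def p_accept(r, alpha=0.5, m=0.1):
    # Logistic acceptance probability: exceeds 0.5 when the importance r
    # exceeds the decision threshold alpha; m controls decision noise.
    return 1.0 / (1.0 + math.exp((alpha - r) / m))

def simulate_feedback(r, alpha=0.5, m=0.1, rng=random.random):
    # Sample feedback from the model: 1 = accept, 0 = reject.
    return 1 if rng() < p_accept(r, alpha, m) else 0

print(round(p_accept(0.9, alpha=0.5, m=0.1), 3))  # well above threshold -> 0.982
print(round(p_accept(0.5, alpha=0.5, m=0.1), 3))  # at threshold -> 0.5
```

With small $m$ (e.g. $0.01$) the model is essentially a step function at $\alpha$; with larger $m$ the acceptance probability flattens toward $0.5$, simulating a noisier user.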
\subsection{Unsupervised Evaluation}
\label{sec:metric}
Unsupervised evaluation is tricky to do properly: one must show that the metric correlates well with human judgement, while also being confident that maximizing it does not produce degenerate output \cite{barratt2018note}.
As discussed in Section~\ref{sec:related}, we adapt the unsupervised summary evaluation method described by \citet{gao2020supert}. This metric computes a mover's-distance-based score between the SBERT embeddings of the summary and a heuristically-chosen subset of document sentences (a ``pseudo-reference'' summary). The authors show that it correlates well with human ratings and that using it as a reward for training a reinforcement learning-based summarizer produces state-of-the-art models.
The authors found that basing the pseudo-reference summary on the lead heuristic, which generally produces good single and multi-document summaries, worked best.
For HARE\xspace, we apply an analogous idea: when computing the summary score, we use all document sentences in the pseudo-reference summary, but weight them by their importance:
\begin{equation}
score(U, D, S) = 1 - \frac{1}{\sum_{x \in D} r_{x}} \sum_{x \in D} r_{x} \min_{s \in S} \Delta(x, s)
\label{eqn:metric}
\end{equation}
This metric has the behavior of rewarding cases where an important sentence is highly similar to at least one summary sentence.
For this reason, coverage of the different user interests is also encouraged by this metric: since sentences drawing their importance from similarity to the same concept are going to be similar to each other, having summaries representing a variety of important concepts is better.
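Equation~\ref{eqn:metric} can be sketched as follows (a minimal sketch in which NumPy vectors stand in for SBERT embeddings; the function and variable names are illustrative, not from our implementation):

```python
import numpy as np

def cos_dist(a, b):
    # Cosine distance between two embedding vectors.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hare_score(importances, doc_embs, summary_embs):
    # Equation (metric): importance-weighted coverage of document
    # sentences by their closest summary sentence.
    total = sum(importances)
    penalty = sum(r * min(cos_dist(x, s) for s in summary_embs)
                  for r, x in zip(importances, doc_embs))
    return 1.0 - penalty / total

# Two orthogonal "sentences"; the first is twice as important.
doc = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
imp = [1.0, 0.5]
print(hare_score(imp, doc, summary_embs=[doc[0]]))  # covers the top concept only -> ~0.667
print(hare_score(imp, doc, summary_embs=doc))       # covers everything -> 1.0
```

As the example shows, a summary covering only the most important concept is penalized by exactly the weighted distance of the uncovered sentence, so coverage of all important concepts is rewarded.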
\section{Methods}
\label{sec:methods}
We consider three groups of approaches ranging in complexity:
(1) simple heuristics,
(2) adapted generic summarizers, and
(3) preference learning.
\subsection{Simple Heuristics}
This first set of approaches are as follows:
\paragraph{\textsc{ShowModulo}} This approach shows every $k^{th}$ sentence to the user. When $k=1$, this is equivalent to the control, where every sentence is shown.
We suspect that by moving through the article faster, greater coverage is obtained, making it more likely that important concepts are represented.
\paragraph{\textsc{HideNext}} This approach shows all sentences, except for the $n$ following any rejected sentence. For example, when $n=2$ and the user rejects a sentence, the two after it are hidden. The motivation for this model is that nearby sentences are often related, so if one is disliked, a neighbour might be too. Larger $n$ suggests a larger window of relatedness.
\paragraph{\textsc{HideAllSimilar}}
While \textsc{HideNext} hides physically nearby sentences, this model hides all sentences that are conceptually similar to a rejected one, where similarity is measured with the cosine similarity of SBERT embeddings.
We also include a compromise between hiding based on physical and conceptual similarity: \textbf{\textsc{HideNextSimilar}}. This model hides only the unbroken chain of similar sentences after a rejected one.
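The hiding rules above can be sketched as follows (an illustrative sketch, not our exact implementation; `sims` is assumed to be a precomputed matrix of pairwise cosine similarities between sentence embeddings):

```python
def hide_next(rejected, n, length):
    # HideNext: hide the n sentences following each rejected index.
    hidden = set()
    for i in rejected:
        hidden.update(range(i + 1, min(i + 1 + n, length)))
    return hidden

def hide_all_similar(rejected, sims, threshold):
    # HideAllSimilar: hide every sentence whose similarity to any
    # rejected sentence meets the threshold.
    length = len(sims)
    return {j for i in rejected for j in range(length)
            if j != i and sims[i][j] >= threshold}

# Toy 3-sentence article: sentences 0 and 1 are near-duplicates.
sims = [[1.0, 0.9, 0.1],
        [0.9, 1.0, 0.2],
        [0.1, 0.2, 1.0]]
print(sorted(hide_next({0}, n=2, length=3)))     # physical window -> [1, 2]
print(sorted(hide_all_similar({0}, sims, 0.5)))  # conceptual match -> [1]
```

The contrast in the toy example mirrors the discussion above: the physical window hides an unrelated neighbour (sentence 2), while the similarity rule hides only the conceptually related one.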
\subsection{Adapted Generic Summarizers}
This set of approaches make use of generic extractive summarizers.
The motivation for considering them is that even though they are independent of user interests, they are often designed to provide good coverage of an article. In this way, they may accommodate all user interests to some degree.
For a given generic summarizer, we consider the following options:
\paragraph{\textsc{GenFixed}} This approach first uses the generic summarizer to rank the sentences, and then shows a fixed percentage of the top sentences.
\paragraph{\textsc{GenDynamic}} This approach estimates an importance threshold, $\hat{\alpha}$, of sentences the user is willing to read, and hides the less important sentences. Importance is computed by scoring the sentences with the generic summarizer and rescaling the values to $[0, 1]$. The initial estimate is $\hat{\alpha}=0$, which means that all sentences are important enough. Each time a sentence is rejected, the estimate is updated to be the average importance of all rejected sentences. To help avoid prematurely extreme estimates, we also incorporate $\epsilon$-greedy exploration: with probability $1 - \epsilon$, the sentence is shown only if its importance meets the threshold; otherwise it is shown anyway. A larger $\epsilon$ helps find a closer approximation of the threshold, but at the cost of showing more unimportant sentences.
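The threshold-estimation loop can be sketched as follows (an illustrative sketch; the class and method names are ours, and importances are assumed pre-scaled to $[0,1]$ by the generic summarizer):

```python
import random

class GenDynamic:
    # Dynamic threshold estimate with epsilon-greedy exploration.
    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.rejected = []    # importances of rejected sentences so far
        self.alpha_hat = 0.0  # initial estimate: everything is important enough

    def show(self, importance, rng=random.random):
        if rng() < self.epsilon:  # explore: show regardless of the estimate
            return True
        return importance >= self.alpha_hat

    def feedback(self, importance, accepted):
        # Rejections update the threshold to the mean rejected importance.
        if not accepted:
            self.rejected.append(importance)
            self.alpha_hat = sum(self.rejected) / len(self.rejected)

model = GenDynamic(epsilon=0.0)
model.feedback(0.2, accepted=False)
model.feedback(0.4, accepted=False)
print(round(model.alpha_hat, 2))          # mean rejected importance -> 0.3
print(model.show(0.5), model.show(0.1))   # True False
```

With $\epsilon = 0$ the model is purely exploitative, so after a few rejections it risks locking in an overly high threshold; the exploration term guards against exactly this.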
\subsection{Preference Learning}
The approaches in this group use more capable adaptive algorithms to learn user preferences in terms of both preferred length and concepts:
\paragraph{\textsc{LR}} This approach continually updates a logistic regression classifier to predict feedback given sentence embeddings. Before a classifier can be trained, all sentences are shown. We propose two variations of this approach. The first uses an $\epsilon$-greedy strategy similar to \textsc{GenDynamic}. The second uses an $\epsilon$-decreasing strategy: for a sentence at a given fraction, $frac$, of the way through the article, $\epsilon = (1 - frac)^{\beta}$, for $\beta > 0$.
\paragraph{\textsc{CoverageOpt}} This approach explicitly models user interests and length preference.
It scores potential sentences by how much they improve coverage of the user interests.
However, since we do not know the user's true interests or their length preference, both are estimated as they read.
This approach prepares for each article by using K-Means clustering of sentence embeddings to identify core concepts of the article. The initial estimate of concept importances is computed with:
\begin{equation}
\hat{C} = \left[1 + \exp\left(-\frac{cfsum}{\beta}\right)\right]^{-1}
\label{eq:concept_weight_estimate}
\end{equation}
We initialize the vector $cfsum$ with the same value $c \in \mathbb{R}$ for each concept. A larger $c$ means that more evidence is required before a concept is determined to be unimportant. $\beta > 0$ controls how \textit{smoothly} a concept shifts between important and unimportant (larger value means more smoothly).
To update the estimate of user interests with $feedback \in \{0, 1\}$ for sentence $x$, we update $cfsum$ with:
\begin{equation}
cfsum \leftarrow cfsum + 2(feedback-0.5)concepts(x)
\label{eq:cfsum_update}
\end{equation}
If $feedback=0$ for example, this moves $cfsum$ away from the article concepts represented by that sentence.
The function $concepts()$ returns the relevance of each concept for the specified sentence.
After updating $\hat{C}$, we re-compute sentence importances based on their contribution to improving concept coverage, weighted by concept importance.
Next, we update the estimated length preference, $\hat{l}_{frac}$, by averaging the importance of rejected sentences.
The summary is updated to show sentences among the top $\hat{l}_{frac}|D|$ important sentences. If the user has rejected low and medium importance sentences, then only the most coverage-improving sentences will be shown.
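The concept-weight machinery above can be sketched as follows (a hedged sketch: the names are illustrative, and the sign convention is chosen so that accepted sentences raise the weights of their concepts while rejections lower them, matching the textual description of the $cfsum$ update):

```python
import numpy as np

def concept_importance(cfsum, beta):
    # Estimated concept importances: a sigmoid of the accumulated
    # evidence, so positive cfsum entries map above 0.5.
    return 1.0 / (1.0 + np.exp(-cfsum / beta))

def update_cfsum(cfsum, feedback, concepts_x):
    # cfsum update: accepts (feedback=1) pull cfsum toward the concepts
    # present in sentence x; rejects (feedback=0) push it away.
    return cfsum + 2.0 * (feedback - 0.5) * concepts_x

beta, c = 4.0, 5.0
cfsum = np.full(3, c)                    # optimistic start: all concepts important
concepts_x = np.array([1.0, 0.0, 0.0])   # a sentence expressing only concept 0
for _ in range(10):                      # ten rejections of concept-0 sentences
    cfsum = update_cfsum(cfsum, feedback=0, concepts_x=concepts_x)
print(np.round(concept_importance(cfsum, beta), 2))  # concept 0 suppressed, others still high
```

The example also illustrates the roles of the hyperparameters: a larger $c$ would require more rejections before concept 0 drops below $0.5$, and a larger $\beta$ makes the transition more gradual.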
\section{Experiments}
\label{sec:experiments}
In this section, we first describe the experimental setup, and then provide an analysis of the results.
\subsection{Setup}
\paragraph{Dataset} We evaluate on the test articles from the non-anonymized CNN/DailyMail dataset \cite{hermann2015teaching}\footnote{Accessed through HuggingFace: \url{https://huggingface.co/datasets/cnn_dailymail}.}. We remove articles with fewer than 10 sentences so as to cluster sentences into more meaningful groups for user interest modelling.
This leaves us with 11,222 articles, with an average of 34.0 sentences per article.
\paragraph{User modelling} We apply K-Means clustering to SBERT sentence embeddings for each article to identify $k=4$ cluster centers/concepts. User interests are a random weighting over these concepts, as described in Section~\ref{sec:user_modelling}. For feedback noise, we use $m=0.01$ (essentially no noise) and $m=0.1$ (intended to capture the difficulty in deciding whether a single sentence is of interest or not). $\alpha$ is chosen as described in Section~\ref{sec:user_modelling}.
\paragraph{Metrics} Evaluation with the two noise values of $m=0.01$ and $m=0.1$ correspond to $score_{sharp}$ and $score_{noisy}$ respectively. $score_{adv}$ corresponds to the difference between $score_{noisy}$ and the control score (no personalization). Positive values indicate outperforming the control. Since the scores fall between 0 and 1, we multiply them by 100.
\paragraph{Privileged information comparison models} We consider for comparison three oracle models and the control. \textsc{OracleGreedy} has access to the user preferences and greedily selects sentences to maximize the score, until the length limit is reached. \textsc{OracleSorted} selects sentences based only on their interest level. \textsc{OracleUniform} selects sentences at random throughout the article until the length limit is reached\footnote{Readers cannot be guaranteed a uniform sampling of sentences unless their length preference is known in advance.}.
\subsection{Results}
Table~\ref{tab:overall} reports the results for each model with its best performing set of hyperparameters. While $score_{sharp}$ and $score_{noisy}$ can range from 0 to 100, the difference between the control and \textsc{OracleGreedy} is less than 5 points (reflected in $score_{adv}$). This suggests that even relatively small performance differences are important. For stochastic models (marked by a * in Table~\ref{tab:overall}), results are averaged across 3 trials and standard deviations were all found to be below 0.05.
Overall, we find that the simple heuristics provide robust performance, unaffected (and possibly helped) by noise. While the more complex \textsc{CoverageOpt} approach is able to perform best with low-noise feedback, it falls behind when noise increases. Next we discuss in more detail the results for each group of models, then comment on aspects of efficiency, and finally discuss the results of our human evaluation.
\input{tables/overall_results}
\subsubsection{Privileged Information Models}
\label{sec:results_privileged}
\textsc{OracleUniform} outperforms the control as well as \textsc{OracleSorted}. This may seem counter-intuitive, since \textsc{OracleUniform} has the disadvantage of not knowing true user interests. However, the strength of \textsc{OracleUniform} is that it provides uniform coverage over the whole article, weakly accommodating any interest distribution. By choosing only the most interesting sentences, \textsc{OracleSorted} runs the risk of only showing those related to the most important concept.
If our user model simulated more focused interests, however, \textsc{OracleSorted} might perform better.
It is also interesting to see how much higher \textsc{OracleGreedy} scores than every other model, suggesting that there is plenty of room for improvement. The reason the oracle does not reach 100 is that the summary length is restricted by user preference.
If future approaches consider abstractive summarization techniques, it may be possible to move beyond this performance barrier.
\subsubsection{Simple Heuristics}
While we suspected that the \textsc{ShowModulo} strategy might benefit from exposing readers to more concepts faster, we found that this does not work as well as \textsc{OracleUniform}. The top performance of $score_{adv} = -3.32$ is reached with $k=2$, and it quickly drops to $-7.06$ with $k=3$.
The minimally adaptive approach of hiding a fixed number of sentences after rejected ones, as per \textsc{HideNext}, does help however, especially with $n=2$.
The related models of \textsc{HideNextSimilar} and \textsc{HideAllSimilar}, which simply hide sentences similar to ones the user swipes away, work surprisingly well, in both moderate and low noise.
In Figure~\ref{fig:hide_similar_results}, we can see that their performance peaks when the similarity threshold is around 0.5 to 0.6.
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{figs/hide_similar_results.pdf}
\caption{The performance for \textsc{HideNextSimilar} and \textsc{HideAllSimilar} for a range of similarity thresholds. When the threshold is high, it means that only the most similar sentences are hidden.}
\label{fig:hide_similar_results}
\end{figure}
\subsubsection{Adapted Generic Summarizers}
We use the following extractive summarizers: LexRank \cite{erkan2004lexrank}, SumBasic \cite{nenkova2005impact}, and TextRank \cite{mihalcea2004textrank}\footnote{Implementations provided by Sumy library, available at \url{https://pypi.python.org/pypi/sumy}.}.
We find that the generic summarizer-based models always perform worse than the control when showing a fixed fraction of the article (\textsc{GenFixed}). The best model of this type used the SumBasic summarizer, showing 75\% of sentences.
When dynamically estimating the target summary length (\textsc{GenDynamic}), the control is outperformed by only 0.09 points. This is achieved by the SumBasic summarizer with $\epsilon = 0.5$. For both variations, we find that the best hyperparameters tend to be those that make them show the most sentences.
\subsubsection{Preference-learning Models}
\input{tables/lr_results}
The \textsc{LR} models outperform the control, as shown in Table~\ref{tab:lr_results}, but fail to match the simpler approaches. Using a decaying $\epsilon$ actually hurt performance, suggesting that the model is simply not able to learn user preferences fast enough, though among the decaying variants there is a sweet spot at $\beta = 1$.
We find that \textsc{CoverageOpt} consistently improves with larger initial concept weights ($c$) and a slower concept weight-saturation rate ($\beta$), with the performance plateauing around $\beta = 4$ and $c = 5$.
When $c$ and $\beta$ are both large, there is a longer exploration phase, with more evidence required to indicate that any given concept should be hidden.
\subsection{Efficiency}
\paragraph{Acceptance rate} When measuring the fraction of shown sentences that are accepted, we find no consistent connection to performance.
For example, the control and the best \textsc{HideNext}, \textsc{HideNextSimilar}, \textsc{HideAllSimilar}, and \textsc{CoverageOpt} models all have rates between 64\% and 66\% in the noisy feedback case.
\textsc{OracleSorted} has the highest rate, however, at 79\%, while \textsc{OracleGreedy} is only at 69\% acceptance. As discussed in Section~\ref{sec:results_privileged}, this is because the sentence set which maximizes the score is not necessarily the same as the set with the highest importance sum.
\paragraph{Speed} The approaches presented here are able to update the summary in real-time. Running on a consumer-grade laptop, each full user-article simulation (which consists of many interactions) ranges from 100ms for the slowest model (\textsc{GenFixed} with TextRank) down to 2.8ms for \textsc{HideAllSimilar} and 1.3ms for \textsc{HideNext}.
\subsection{Human Evaluation}
Finally, we run a human evaluation to test a variety of approaches on multiple measures.
\paragraph{Setup} We selected 10 news articles from a variety of sources and on a variety of topics (such as politics, sports, and science), with an average length of 20.6 sentences, and asked 13 volunteers to read articles with the help of randomly assigned HARE\xspace models. In total, we collected 70 trials. Participants were shown sentences one at a time and provided feedback to either accept or reject each sentence. They were also able to stop reading each article at any time.
After reading each article, they were asked several questions about the experience, including the coherence of what they read (how well-connected consecutive sentences were, from 1 to 5) and how easy it was to decide whether to accept or reject sentences (from 1 to 5). We also showed them any unread sentences afterwards in order to determine how many would-be accepted sentences were not shown. Coverage, roughly corresponding to our automated evaluation metric, can then be estimated with the fraction of interesting sentences that were actually shown.
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{figs/human_eval.pdf}
\caption{Summary of human evaluation results. Error bars indicate 90\% confidence intervals.}
\label{fig:human_eval}
\end{figure}
\paragraph{Results} From the human evaluation, we find that making the decision to accept or reject sentences is quite easy, with an average decision-ease rating of 4.4/5.
However, departing from the assumptions of our user model, people ended up reading, on average, more than 50\% of each article (up to 70\% for the control). This could influence the relative performance of the various models, with a skew towards models that tend to hide fewer sentences.
We find the acceptance rate to vary from 47\% for \textsc{LR} to 75\% for \textsc{CoverageOpt}, with the remainder around 60\%.
From Figure~\ref{fig:human_eval} we can see that the best model (highest coverage) appears to be \textsc{CoverageOpt}. This is followed by the control and the \textsc{LR} model, with their 90\% confidence intervals overlapping. This highlights that achieving good coverage of interesting sentences is not the same as achieving a high acceptance rate. The worst performing model according to both human and automated evaluation is \textsc{ShowModulo}. The remaining four models significantly overlap in their confidence intervals. However, it is interesting to note that \textsc{HideAllSimilar} performs worse than we would expect. Given the positive correlation between the percentage of the article users end up reading and model coverage, we can guess that this is a result of the model automatically hiding too many sentences. This also leads to low reported summary coherence, as many sentences are skipped. In contrast, the control achieves the highest coherence (since nothing is skipped), with \textsc{CoverageOpt} near the middle of the pack.
\section{Conclusion}
\label{sec:conclusion}
In this paper we proposed a new interactive summarization task where the document is automatically refined during the normal flow of reading. By not requiring an explicit query or relying on time-consuming and invasive feedback, relevant information can be conveniently provided for a wide range of user preferences.
We provided an approximate user model and suitable evaluation metric for this task, building upon recent advances in unsupervised summary evaluation.
To guide examination of this new task, we proposed a variety of approaches and performed both automated and human evaluation.
Future research on this task includes adapting the interaction model to implicit feedback and trying more advanced approaches.
\section{Ethical Considerations}
\paragraph{Diversity of viewpoints} The HARE\xspace task is intended for the design of future user-facing applications. By design, these applications have the ability to control what a user reads from a given article.
It is possible that, when deployed without sufficient care, these tools could exacerbate the ``echo chamber'' effect already produced by automated news feeds, search results, and online communities \citep{pariser2011filter}.
However, the ability to influence what readers are exposed to can also be leveraged to \textit{mitigate} the echo chamber effect. Rather than considering only what user interests appear to be at a given moment, future HARE\xspace models could incorporate a diversity factor to explicitly encourage exposure to alternative views when possible. The weighting of this factor could be tuned to provide both an engaging reading experience and exposure to a diversity of ideas.
\paragraph{Beneficiaries} As mentioned in Section~\ref{sec:intro}, those most likely to benefit from HARE\xspace applications once successfully deployed will be those using them to read (by saving time and increased engagement) as well as any content publishers who encourage their use.
\section{Experimental Setup}
\textbf{Computing infrastructure} All experiments were performed on a machine with an Intel Core i7-6700HQ CPU, 16GB of RAM, and a GeForce GTX 960M GPU.
\textbf{Hyperparameter searches} For parameterized models, grid searches over the following ranges were performed:
\begin{itemize}
\item \textsc{ShowModulo}: $k \in \{2, 3, 4, 5\}$
\item \textsc{HideNext}: $n \in \{1, 2, 3, 4\}$
\item \textsc{HideNextSimilar} and \textsc{HideAllSimilar}: $threshold \in \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9\}$
\item \textsc{GenFixed}: $frac \in \{0.25, 0.5, 0.75\}$
\item \textsc{GenDynamic}: $\epsilon \in \{0, 0.1, 0.2, 0.3, 0.4, 0.5\}$
\item \textsc{LR} (constant $\epsilon$): $\epsilon \in \{0, 0.1, 0.2, 0.3, 0.4, 0.5\}$
\item \textsc{LR} (decreasing $\epsilon$): $\beta \in \{0.25, 0.5, 1, 2, 4\}$
\item \textsc{CoverageOpt}: $\beta \in \{0.25, 0.5, 1, 2, 4\}$ and $c \in \{0, 1, 2, 3, 4\}$
\end{itemize}
\section{Detailed Results}
Detailed results for those models without full results reported in the paper are shown here. For \textsc{ShowModulo} and \textsc{HideNext}, results are shown in Table~\ref{tab:simplest_results}. For summarizer-based models, results are shown in Table~\ref{tab:summarizer_results}. For \textsc{CoverageOpt}, results are shown in Table~\ref{tab:coverage_results}.
\input{tables/simplest_results}
\input{tables/summarizer_results}
\input{tables/coverage_results}
\section{Human Evaluation}
Human evaluation was performed via a chatbot deployed on the Telegram chat app\footnote{\url{https://telegram.org/}} using their convenient API\footnote{\url{https://core.telegram.org/bots/api}}. A screenshot of the chatbot serving as a simple HARE\xspace interface is shown in Figure~\ref{fig:chatbot}. To participate, volunteers were instructed to engage with the publicly accessible bot in the app and follow instructions provided therein.
\begin{figure}[ht]
\centering
\includegraphics[width=0.65\columnwidth]{figs/chatbot.png}
\caption{A screenshot of the demo in action. For each sentence, users were able to accept, reject, or stop reading the article at that point.}
\label{fig:chatbot}
\end{figure}
The solutions of the semilinear elliptic equation
\begin{equation}\label{eq1}
-\Delta u + V(x)u = f(x,u),\quad x\in \mathbb{R}^N, \quad N\geq 3,
\end{equation}
play a pivotal role in the search for certain kinds of solitary waves of Klein-Gordon or Schr\"{o}dinger nonlinear type equations (see \cite{B-L1,B-L2,S}).
The question of the existence of solutions for (\ref{eq1}) has been intensively studied by many researchers under a variety of conditions on $V$ and $f$. In \cite{FW}, Floer and Weinstein used a Lyapunov-Schmidt type reduction to
prove the existence of a positive solution of (\ref{eq1}) for the case $N=1$, with $f(x,s) = s^3$ and $V$ a globally bounded potential having a nondegenerate critical point and
satisfying $\inf_{\mathbb{R}^N} V >0$. In \cite{Oh1,Oh2}, Oh generalized the result in \cite{FW} for $N\geq 3$ and $f(x,s) = \vert s\vert^{p-2}s$, with $2< p < 2^* := 2N/(N-2)$.
The use of mountain pass arguments to study (\ref{eq1}) goes back at least as far as \cite{DN} and \cite{R}. In \cite{DN}, Ding and Ni obtained several results concerning the existence of positive solutions for equations of the type (\ref{eq1}) with $V(x)\geq 0$. Under hypotheses on $f$ consistent with the prototype nonlinearity $Q(x)s^{p-1}$, $2<p< 2^*$, the main results in \cite{DN} establish the existence of positive solutions for (\ref{eq1}), decaying uniformly to zero as $\vert x\vert\to \infty$, when $Q(x)$ is neither radial nor small at infinity.
In \cite{R}, Rabinowitz, among other results, employed a mountain-pass type argument to find a ground state solution for (\ref{eq1}), supposing $\inf_{\mathbb{R}^N} V >0$ and $f$ superlinear and subcritical at infinity, as long as $\liminf_{\vert x\vert\to \infty} V(x) > 0$ is sufficiently large (see Theorem 1.7 and Remark 2.21 in \cite{R}).
A further step in the study of such problems was made by del Pino and Felmer in \cite{DF}, who, under the hypothesis $\liminf_{\vert x\vert\to \infty} V(x) > 0$, considered the problem in a local setting by assuming that the potential satisfies $\inf_\Omega V < \inf_{\partial\Omega}V$ on a nonempty bounded open set $\Omega \subset \mathbb{R}^N$. The technique of del Pino and Felmer relies on the study of a functional associated with a version of the original problem obtained by a penalization of the nonlinear term outside $\Omega$. This penalization makes it possible to overcome the lack of compactness and to prove the existence of a mountain pass critical level. An appropriate $L^\infty$-estimate for the solution of the penalized problem implies that it is actually a solution to the original one.
In the last two decades many studies have focused on potentials that may vanish at infinity, that is, such that $\liminf_{\vert x\vert\to \infty} V(x) = 0$ (we refer the reader to the articles
\cite{Alves-Souto, Ambrosetti-Malchiodi-Ruiz, Ambrosetti-Wang, Ambrosetti-Felli-Malchiodi, Azzollini-Pomponio, BPR, BR, BGM, C-M2, GM} and references therein). It is worth mentioning that in this case the natural space in which to deal with equation (\ref{eq1}) via critical point theory is $D^{1,2}(\mathbb{R}^N)$, which is only embedded in $L^{2^*}(\mathbb{R}^N)$. Consequently, conditions such as $\vert f(x,s)\vert\leq C\vert s\vert^{p-1}$, $p\neq 2^*$, do not in general guarantee that the related energy functional is well defined.
Among several methods that have been used to deal with problems with vanishing potentials, we would like to mention a clever adaptation of the penalization technique that was introduced by Alves and Souto \cite{Alves-Souto}. We note that, in order to prove that the solution of the modified problem is a solution of the original problem, those authors adapt some ideas found in \cite{Brezis-Kato} to obtain an $L^\infty$-estimate for the solution of the penalized problem in terms of its $L^{2^*}$ norm.
In \cite{Alves-Souto} it is assumed that the potential $V$ is a nonnegative function satisfying the assumptions:
\begin{equation}
V(x)\leq V_\infty \mbox{ for all } x\in B_{r_0}(x_0), \mbox{ for some } V_\infty,\ r_0 >0 \mbox{ and } x_0\in\mathbb{R}^N, \label{V12}
\end{equation}
\begin{equation}
\mbox{there are } \Lambda>0 \mbox{ and } R > \vert x_0\vert + r_0 \mbox{ such that } \displaystyle \frac{1}{R^4}\inf_{\vert x\vert\geq R} \vert x\vert^4 V (x) \geq \Lambda.\label{V_2}
\end{equation}
Furthermore it is required that $f$ is an autonomous continuous function that is positive on $(0,\infty)$ and satisfies the hypothesis:
\begin{gather}
\limsup_{s\to 0^+} \frac{sf(s)}{s^{2^*}}< +\infty. \label{f_1}
\end{gather}
Assuming further that $f$ is superlinear and subcritical at infinity, the main result in \cite{Alves-Souto} provides a constant $\Lambda^*>0$ such that equation (\ref{eq1}) has a positive solution whenever $\Lambda \geq \Lambda^*$. We observe that condition (\ref{f_1}) implies that $f$ has critical or supercritical decay at the origin.
Motivated by the articles \cite{Alves-Souto, DF, R}, our primary goal in this paper is to present a version of the penalization technique that will enable us to obtain results on the existence and multiplicity of solutions for equation (\ref{eq1}), depending on the decay of the potential at infinity, under versions of hypothesis (\ref{f_1}) that allow the nonlinear term to have supercritical, critical, or subcritical behavior near the origin. Furthermore we contemplate the possibility of having $f$ nonautonomous and assuming negative values. We also establish results on the existence of multiple and infinitely many solutions when $f$ is odd with respect to the second variable.
We emphasize that the argument we use for the existence of solutions uncovers an interplay between the behavior of the nonlinear term at the origin and the decay of the potential at infinity. A key ingredient in establishing such a relation is an $L^\infty$-estimate for the solution of the penalized problem that does not depend on the behavior of the nonlinear term close to the origin.
In our first result, we suppose the function $f$ and the potential $V$ satisfy
\begin{description}
\item[(${f_1}$)] {$f: \mathbb{R}^N\times \mathbb{R} \to \mathbb{R}$ is a continuous function} and there is $q>2$ such that
$$
\limsup_{s\to 0}\left\vert \frac{sf(x,s)}{s^{q}}\right\vert< +\infty, \ \ \mbox{uniformly in}\ \mathbb{R}^N;
$$
\item[($f_2$)] there are $a_1, a_2>0$ and $p\in (2,2^*)$ such that
$$
\vert f(x,s)\vert \leq a_1\vert s\vert ^{p-1} + a_2
, \ \ \mbox{for every}\ (x,s)\in \mathbb{R}^N\times \mathbb{R};
$$
\item[($f_3$)] there are $\theta >2$ and $S_0 \geq 0$ such that
$$
sf(x,s) \geq \theta F(x,s) >0,\ \ \mbox{for every}\ \vert s\vert\geq S_0 ,\ x\in \mathbb{R}^N,
$$
where $F(x,s) := \int_0^s f(x,t)\,dt$;
\item[$(V_1)$] {$V: \mathbb{R}^N \to \mathbb{R}$ is a continuous function} and either
$V\geq 0$ in $\mathbb{R}^N$ and satisfies (\ref{V12})
or $\Omega := \{x\in \mathbb{R}^N : V(x) <0\}$ is a nonempty bounded set and $\inf_{\Omega}V > -S/\vert\Omega\vert^{2/N}$,
where $S>0$ is the best constant for the embedding $D^{1,2}(\mathbb{R}^N)$ into $L^{2^*}(\mathbb{R}^N)$;
\item [(${V_2}$)] there are constants $\Lambda > 0$ and $R>0$ ($R> \vert x_0\vert +r_0$,
for $r_0 >0$ and $x_0\in\mathbb{R}^N$ given by (\ref{V12}), if $V\geq 0$) such that
\[
\inf_{\vert x\vert \geq {R}}\vert x\vert^{(N-2)(q-2)} V (x) \geq \Lambda,
\]
with $q>2$ given by (${f_1}$).
\end{description}
We note that ($f_3$) is the famous Ambrosetti-Rabinowitz superlinearity condition, introduced in the seminal article \cite{Ambrosetti-Rabinowitz} to study, via minimax methods, semilinear elliptic problems on bounded domains. Condition ($f_3$) with $S_0=0$ has been assumed in the articles \cite{Alves-Souto, DF, R}. Such a hypothesis with $S_0>0$ allows the nonlinear term $f$ to assume negative values.
It is important to observe that hypotheses ($f_1$) and ($V_2$) provide a clear relation between the behavior of $f$ near the origin and the lower bound for the decay of the potential at infinity. Finally we note that hypothesis ($f_2$) implies that $f$ has a subcritical growth at infinity and that, under condition ($V_1$), the potential $V$ may assume negative values.
We may now state our first result on the existence of a positive solution for (\ref{eq1}).
\begin{theorem}\label{thm}
Suppose $V$ satisfies $({V_1})$-$({V_2})$ and $f$ satisfies $({f_1})$-$({f_3})$. Then there is $\Lambda^* >0$ such that (\ref{eq1}) possesses a positive solution for every $\Lambda \geq \Lambda^*$.
\end{theorem}
Theorem \ref{thm} may be seen as a complement of the result established in \cite{Alves-Souto} because under its hypotheses $f$ may assume negative values and does not necessarily have a critical or supercritical behavior near the origin. In other words, if $q \in (2, 2^*)$, we obtain a result that is not covered in \cite{Alves-Souto}. Moreover, if $q=2^*$, hypothesis $(f_1)$ is exactly
(\ref{f_1}) and $({V_2})$ allows the same decrease at infinity for $V$ as the one considered in \cite{Alves-Souto}. If $q>2^*$, Theorem \ref{thm} improves the result of \cite{Alves-Souto} in relation to the behavior (decay) of $V$ at infinity.
We note that $\Lambda ^*$ given in Theorem \ref{thm} depends on the radius $R>0$ given in condition $(V_2)$. In particular, when the condition $(f_3)$ holds with $S_0=0$, and $V$ satisfies the following version of $(V_2)$:
\begin{description}
\item [($V_3$)] there are constants $\Lambda > 0$ and $R>0$ ($R> \vert x_0\vert +r_0$,
for $r_0 >0$ and $x_0\in\mathbb{R}^N$ given by (\ref{V12}), if $V\geq 0$) such that
\[
\frac{1}{R^{(N-2)(q-2)}}\inf_{\vert x\vert \geq {R}}{\vert x\vert}^{(N-2)(q-2)}V (x) \geq \Lambda,
\]
\end{description}
we may find $\widetilde{\Lambda}^*> 0$ independent of $R$ such that (\ref{eq1}) has a positive solution for every $\Lambda \geq \widetilde{\Lambda}^*$ (see Theorem \ref{resultalso} in Section \ref{proofsthm12}). Observe that (${V_3}$)
is precisely the hypothesis (\ref{V_2}) in \cite{Alves-Souto} when $q= 2^*$.
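Indeed, for $q = 2^* = 2N/(N-2)$ a direct computation gives
\[
(N-2)(q-2) = (N-2)\left(\frac{2N}{N-2} - 2\right) = 2N - 2(N-2) = 4,
\]
so the exponent in $({V_3})$ coincides with the exponent $4$ appearing in (\ref{V_2}).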
To reinforce the interplay between the behavior of the nonlinear term at the origin and the decay of the potential, evidenced by conditions $(f_1)$ and $({V_2})$, we present a result in which the function $f$ approaches zero rapidly at the origin: supposing that $f$ and $V$ satisfy
\begin{description}
\item[($\widehat{f_1}$)] there are constants $q, a >0$ such that
$$
\displaystyle\limsup_{s\to 0} \vert f(x,s)\vert e^{(a/\vert s\vert^{q})}< +\infty,\ \ \mbox{uniformly in}\ \mathbb{R}^N,
$$
\end{description}
\begin{description}
\item [(${V_4}$)] there are constants $\Lambda > 0$, $\mu >0$ and $R>0$ ($R> \vert x_0\vert +r_0$,
for $r_0 >0$ and $x_0\in\mathbb{R}^N$ given by (\ref{V12}), if $V\geq 0$) such that
\[
\inf_{ \vert x\vert \geq {R}}e^{\mu \vert x\vert^{(N-2)q}}V (x) \geq \Lambda,
\]
with $q$ given by ($\widehat{f_1}$),
\end{description}
we may state
\begin{theorem}\label{thm2}
Suppose $V$ satisfies $({V_1})$ and $({V_4})$, and $f$ satisfies $(\widehat{f_1})$, $({f_2})$, and $({f_3})$.
Then there are constants $\Lambda^*, \mu^* >0$ such that (\ref{eq1}) possesses a positive solution for every $\Lambda \geq \Lambda^*$ and $0< \mu \leq \mu^*$.
\end{theorem}
We remark that under the hypotheses of Theorems \ref{thm} and \ref{thm2} we may actually find solutions $u^+$ and $u^-$ of (\ref{eq1}) with $u^+>0$ and $u^-<0$ in $\mathbb{R}^N$. Furthermore, if we suppose $f$ is odd with respect to the second variable, we may derive the existence of multiple pairs of solutions for (\ref{eq1}). More specifically, assuming
\begin{description}
\item[(${f_4}$)] $f(x,-s) = -f(x,s)$, for every $(x,s)\in \mathbb{R}^N\times \mathbb{R}$,
\end{description}
we may use our version of the penalization technique and a minimax critical point theorem for functionals with symmetry due to Bartolo et al. \cite{BBF} (see Theorem \ref{simmpt} in Section 5) to obtain:
\begin{theorem}\label{thm3}
Suppose $V$ satisfies $({V_1})$-$({V_2})$ and $f$ satisfies $({f_1})$-$({f_4})$.
Then, given $l\in \mathbb{N}$, there is $\Lambda_l^* >0$ such that (\ref{eq1}) possesses $l$ pairs of nontrivial solutions for every $\Lambda \geq \Lambda_l^*$.
\end{theorem}
As a consequence of Theorem \ref{thm3}, we may consider a setting where equation (\ref{eq1}) may have infinitely many pairs of nontrivial solutions without supposing that the potential
is coercive in $\mathbb{R}^N$. For example, assuming the following version of $({V_2})$:
\begin{description}
\item [(${V_5}$)] there are sequences $(R_j), \, (\Lambda_j) \subset (0,\infty)$ such that, for every
$j\in \mathbb{N}$,
\[
\inf_{ \vert x\vert \geq {R_j}}\vert x\vert^{(N-2)(q-2)} V (x) \geq \Lambda_j,
\]
with $q$ given by ($f_1$),
\end{description}
we may state:
\begin{proposition}\label{prop4}
Suppose $V$ satisfies $({V_1})$ and $({V_5})$ and $f$ satisfies $({f_1})$, $(f_2)$, $({f_4})$ and $(f_3)$ with $S_0 =0$.
Then equation (\ref{eq1}) possesses infinitely many pairs of nontrivial solutions provided
\begin{equation}\label{condprop4}
\limsup_{j\to \infty}\frac{\Lambda_j}{R_j^{(N-2)(q-2)}} = \infty.
\end{equation}
\end{proposition}
We remark that hypotheses (${V_5}$) and (\ref{condprop4}) do not imply that the potential $V$ is coercive. Actually, given any $\alpha\geq 0$ it is possible to find a potential satisfying those conditions and $\liminf_{\vert x\vert\to \infty} V(x) = \alpha$. We also note that, under those assumptions, we may always assume that $R_j \to \infty$, as $j\to \infty$, and, consequently, that $R_j > \vert x_0\vert +r_0$, for every $j\in\mathbb{N}$, if $V\geq 0$.
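To illustrate the last remark, one such potential may be sketched as follows (the specific choices below are just for concreteness). Set $\beta := (N-2)(q-2)$, $R_j := e^{j^2}$, and, for $\vert x\vert\geq R_1$,
\[
V(x) := \alpha + \sup\left\{ j R_j^{\beta}\vert x\vert^{-\beta} : j\in \mathbb{N},\ R_j\leq \vert x\vert\right\},
\]
modified by a continuous interpolation on the annuli $R_j - 1\leq \vert x\vert\leq R_j$ and extended continuously inside $B_{R_1}(0)$; these modifications only increase $V$ and hence are harmless. By construction, $\inf_{\vert x\vert\geq R_j}\vert x\vert^{\beta}V(x) \geq jR_j^{\beta} =: \Lambda_j$, so $({V_5})$ holds with $\Lambda_j/R_j^{\beta} = j \to \infty$, which gives (\ref{condprop4}). On the other hand, for $R_j\leq \vert x\vert\leq R_{j+1}-1$ we have $V(x) = \alpha + jR_j^{\beta}\vert x\vert^{-\beta}$, and $j R_j^{\beta}(R_{j+1}-1)^{-\beta}\to 0$ as $j\to\infty$, whence $\liminf_{\vert x\vert\to\infty}V(x) = \alpha$.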
It is worthwhile mentioning that Proposition \ref{prop4} holds under hypothesis $(f_3)$ with $S_0>0$ and a version of $(\ref{condprop4})$. Furthermore, it is possible to derive the existence of infinitely many pairs of nontrivial solutions for (\ref{eq1}) when $f$ and $V$ satisfy ($\widehat{f_1}$) and a version of $(V_4)$, respectively (see statements and remark in Section 5).
The article is organized as follows: In Section 2, we introduce the version of the penalization argument used for proving our results and we establish the existence of a mountain pass solution for the penalized problem. Section 3 is devoted to proving an estimate for the $L^\infty$ norm of the solutions to the modified problem in terms of their $L^{2^*}$ norm. In Section 3, we also obtain a result on the decay at infinity of the solution of the penalized problem. In Section 4, we present the proofs of Theorems \ref{thm} and \ref{thm2} and the corresponding results when $(f_3)$ holds with $S_0=0$. The proofs of Theorem \ref{thm3} and Proposition \ref{prop4} are presented in Section 5.
\section{Preliminaries}
Let $E$ be the subspace of $D^{1,2}(\mathbb{R}^N)$ defined by
\[
E= \left\{
u\in D^{1,2}(\mathbb{R}^N) : \int_{\mathbb{R}^N}V(x)u^2\,dx < \infty \right\}.
\]
We claim that, under the hypothesis (${V_1}$),
\[
\| u \|:= \left[\int_{\mathbb{R}^N}\left(\vert\nabla u\vert^2 + V(x)u^2\right)\,dx\right]^{1/2}
\]
is a norm in $E$ and $E$ is continuously embedded in $D^{1,2}(\mathbb{R}^N)$.
Since the claim is trivially verified if $V\geq 0$ in $\mathbb{R}^N$, it suffices to consider the case $\Omega\neq\emptyset$: given $u\in D^{1,2}(\mathbb{R}^N)$, we may use H\"{o}lder's inequality, with $r=2^*/2$ and $r'=N/2$,
and the estimate $\vert u\vert^2_{L^{2^*}(\Omega)}\leq S^{-1}\int_{\mathbb{R}^N}\vert\nabla u\vert^2\, dx$ to obtain
\begin{equation}\label{4}
\int_{\Omega}u^2\,dx \leq \frac{\vert\Omega\vert^{2/N}}{S}\int_{\mathbb{R}^N}\vert\nabla u\vert^2\, dx.
\end{equation}
From (${V_1}$), there is $\alpha >0$ such that
\begin{equation}\label{4,5}
\inf_{x\in \Omega} V(x) \geq -\alpha > -\frac{S}{\vert\Omega\vert^{2/N}}.
\end{equation}
Thus we may invoke (\ref{4}) to derive
\begin{equation}\label{E-D12}
\int_{\mathbb{R}^N}\left(\vert\nabla u\vert^2 + V(x)u^2\right)\,dx \geq \left( 1 - \frac{\alpha \vert\Omega\vert^{2/N}}{S}\right)\int_{\mathbb{R}^N}\vert\nabla u\vert^2\, dx.
\end{equation}
Observing that, by (\ref{4,5}), $(1- {\alpha \vert\Omega\vert^{2/N}}S^{-1}) >0$, this concludes the proof of the above claim.
Henceforth in this paper we take $\alpha =0$ and $\Omega = \emptyset$ whenever $V\geq 0$ in $\mathbb{R}^N$. Note that in this setting the above estimates are satisfied for those values of $\alpha$ and $\Omega$.
By solution of (\ref{eq1}), we mean a solution in the sense of distribution, namely a function $u\in D^{1,2}(\mathbb{R}^N)$ such that
\[
\int_{\mathbb{R}^N}\left(\nabla u\nabla\phi + V(x)u\phi\right)\, dx - \int_{\mathbb{R}^N}f(x, u)\phi\, dx =0,
\]
for every $\phi\in C^\infty_0(\mathbb{R}^N)$, the space of $C^\infty$-functions with compact support. In fact, an inspection of the proof of our results will reveal that the weak solution obtained satisfies the above identity for every $\phi\in E$.
Since in Theorems \ref{thm} and \ref{thm2} we intend to prove the existence of positive solutions, we let $f(x,s)=0$, for every $(x,s)\in \mathbb{R}^N\times (-\infty, 0]$.
Next, in order to deal with the fact that in our setting $f$ may assume negative values, we introduce a version of the penalization argument employed in \cite{Alves-Souto}:
for $\theta >2$ and $R>0$ given by conditions ($f_3$) and (${V_2}$) respectively, take
$k = 2\theta/(\theta-2)$ and consider, for every $(x,s)\in \mathbb{R}^N\times (0,\infty)$,
\[
\tilde{f}(x,s) = \left\{ \begin{array}{rl}
-\frac{1}{k}V(x)s, & \mbox{ if $kf(x,s) < - V(x)s$};\\
f(x,s), & \mbox{if $-V(x)s\leq kf(x,s) \leq V(x)s$};\\
\frac{1}{k}V(x)s, & \mbox{if $kf(x,s) > V(x)s$}.
\end{array}
\right.
\]
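Equivalently (note that, by $({V_2})$, $V(x)>0$ for $\vert x\vert > R$), for $s>0$ the function $k\tilde{f}(x,s)$ is the truncation of $kf(x,s)$ to the interval $[-V(x)s, V(x)s]$, that is,
\[
\tilde{f}(x,s) = \max\left\{-\frac{1}{k}V(x)s,\ \min\left\{f(x,s),\ \frac{1}{k}V(x)s\right\}\right\}.
\]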
Furthermore set $\tilde{f}(x,s) = 0$, for every $(x,s)\in \mathbb{R}^N\times (-\infty, 0]$, and define
\begin{equation}\label{defg}
g(x,s) = \left\{ \begin{array}{ll}
f(x,s), & \mbox{ for $(x,s)\in \mathbb{R}^N\times\mathbb{R}$, $\vert x\vert \leq R$};\\
\tilde{f}(x,s), & \mbox{ for $(x,s)\in \mathbb{R}^N\times\mathbb{R}$, $\vert x\vert > R$}.
\end{array}
\right.
\end{equation}
Observe that $g$ is a Carath\'eodory function satisfying
\begin{equation}
\begin{cases}
g(x,s) = 0,\ \mbox{for}\ (x,s)\in \mathbb{R}^N\times (-\infty, 0];\\
g(x,s) = f(x,s), \ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R},\ \vert x\vert \leq R;\label{1} \\
\vert g(x,s)\vert \leq \vert f(x,s)\vert, \ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}; \\
\vert g(x,s)\vert \leq \frac{1}{k}V(x)\vert s\vert,\ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}, \ \vert x\vert > R;\\
\end{cases}
\end{equation}
and
\begin{equation}
\begin{cases}
G(x,s) = F(x,s),\ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}, \ \vert x\vert \leq R; \\
G(x,s) \leq \frac{1}{2k}V(x)s^2,\ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}, \ \vert x\vert > R; \label{2}
\end{cases}
\end{equation}
where $G(x,s) := \int_0^sg(x,t)\, dt$. The auxiliary problem that we associate with (\ref{eq1}) is the following:
\begin{equation}\label{AP}
\begin{cases}
-\Delta u + V(x)u = g(x,u), \quad x\in \mathbb{R}^N;\\
\quad u\in E.
\end{cases}
\end{equation}
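We also record, for later use, that the second inequality in (\ref{2}) follows by integrating the last bound in (\ref{1}): for $\vert x\vert > R$ and $s>0$,
\[
G(x,s) = \int_0^s g(x,t)\, dt \leq \frac{1}{k}V(x)\int_0^{s} t\, dt = \frac{1}{2k}V(x)s^2,
\]
while, for $s\leq 0$, $G(x,s) = 0 \leq \frac{1}{2k}V(x)s^2$, since $g$ vanishes on $\mathbb{R}^N\times(-\infty,0]$ and $V>0$ outside $B_R(0)$.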
We observe that any positive solution $u$ of (\ref{AP}) that satisfies $\vert f(x,u)\vert\leq V(x)u/k$ for $\vert x\vert \geq R$ is actually a solution of (\ref{eq1}).
As a consequence of (\ref{1}) and (\ref{2}), the functional
\[
J(u) = \frac12\int_{\mathbb{R}^N}\left(\vert\nabla u\vert^2 + V(x)u^2\right)\, dx - \int_{\mathbb{R}^N}G(x,u)\, dx
\]
is well defined and of class $C^1$ in $(E, \| \cdot\|)$. Moreover
\begin{equation}\label{ws}
J'(u)v = \int_{\mathbb{R}^N} (\nabla u\nabla v + V(x)u v)\, dx - \int_{\mathbb{R}^N}g(x,u)v dx,\, \mbox{for every}\ u, v\in E.
\end{equation}
Thus, any critical point of $J$ is a weak solution of (\ref{AP}).
\begin{proposition}[mountain pass geometry]\label{mpg}
Suppose $V$ satisfies $(V_1)$-$(V_2)$ and $f$ satisfies $({f_1})$-$({f_3})$. Then
\begin{enumerate}
\item there exist constants $\beta, \rho >0$ such that $J(u) \geq \beta$ for every $u\in E$ such that $\| u\|= \rho$;
\item there exists $e\in E$, $\| e\| > \rho$, such that $J(e) <0$.
\end{enumerate}
\end{proposition}
\begin{proof}
The proof of item $1$ is standard and follows well-known arguments. For the reader's convenience, we give a proof for the case $\Omega \neq \emptyset$: By $({V_1})$ and $({V_2})$, $\Omega \subset B_R(0)$ and $V(x)>0$ for every $\vert x\vert \geq R$. Consequently, from (\ref{4,5}) and (\ref{2}), we have
\begin{align}\label{3}
\lefteqn{J(u) = \frac{1}{2}\int_{\mathbb{R}^N}\left(\vert\nabla u\vert^2 + V(x)u^2\right)\, dx - \int_{B_R(0)}\! F(x,u)dx - \int_{\mathbb{R}^N \setminus B_R(0)}\! G(x,u)dx} \nonumber \\
& \geq \frac{1}{2}\int_{\mathbb{R}^N}\left(\vert\nabla u\vert^2 + V(x)u^2\right)\,dx - \int_{B_R(0)}\! F(x,u)dx -\frac{1}{2k} \int_{\mathbb{R}^N \setminus \Omega}\! V(x)u^2 dx \nonumber \\
& = \frac{1}{2}\left[1- \frac{\alpha \vert\Omega\vert^{2/N}}{S}\right]\int_{\mathbb{R}^N}\vert\! \nabla u\vert^2\,dx + \frac{(k-1)}{2k}\int_{\mathbb{R}^N \setminus \Omega}\! V(x)u^2 dx - \int_{B_R(0)}\! F(x,u) dx \nonumber \\
& \geq d_1 \int_{\mathbb{R}^N}(\vert\nabla u\vert^2 + V(x)u^2)\,dx - \int_{B_R(0)} F(x,u) dx,
\end{align}
where $d_1:= \min\{(1- {\alpha \vert\Omega\vert^{2/N}S^{-1}})/2, (k-1)/(2k)\}$.
Given $\epsilon >0$, combining $({f_1})$ with $(f_2)$, we find a constant $C(\epsilon)>0$ such that
\[
\vert F(x,s)\vert\leq \epsilon\vert s\vert^{2} + C(\epsilon)\vert s\vert^{2^*},\ \mbox{for every} \ (x,s)\in \mathbb{R}^N\times \mathbb{R}.
\]
Thus, there are positive constants $d_2=d_2(R)$ and $d_3=d_3(\epsilon)$ such that
\begin{equation}\label{aux2}
\int_{B_R(0)} F(x,u)\, dx \leq \epsilon d_2\| u\|^2 + d_3\| u\|^{2^*}, \ \mbox{for every}\ u\in E.
\end{equation}
The above estimates and (\ref{3}) give us
\[
J(u) \geq d_1\|u\|^2 - \epsilon d_2\|u\|^2 - d_3\|u\|^{2^*}, \ \mbox{for every}\ u\in E.
\]
Taking $\epsilon >0$ sufficiently small in the above estimate,
we conclude the verification of item $1$ by finding appropriate values of $\beta, \rho >0$.
To prove item $2$, invoking hypotheses $(V_1)$ and $(V_2)$ and taking $V_\infty = 0$, if $\Omega \neq \emptyset$, we may suppose that $B_{r_0}(x_0) \subset B_R(0)$ and $V(x) \leq V_\infty$, for every $x \in B_{r_0}(x_0)$. Next, considering a nonnegative function $\phi \in E\setminus \{0\}$ such that $\rm{supp}(\phi) \subset B_{r_0}(x_0)$, we obtain
\begin{equation}\label{6}
J(t\phi) \leq \frac{t^2}{2}\int_{B_{r_0}(x_0)}(\vert\nabla \phi\vert^2 + V_\infty \phi^2)\, dx - \int_{B_{r_0}(x_0)}F(x,t\phi)\, dx,\quad \forall t\geq 0.
\end{equation}
By $(f_2)$ and $(f_3)$, there are constants $C_1, C_2>0$, depending on $r_0$, such that
\begin{equation}\label{7}
F(x,s) \geq C_1s^\theta - C_2,\ \mbox{for every}\ (x,s)\in B_{r_0}(x_0)\times [0,\infty).
\end{equation}
From (\ref{6})-(\ref{7}), we have
\begin{equation}\label{estmpl}
J(t\phi) \leq \frac{t^2}{2}\int_{B_{r_0}(x_0)}(\vert\nabla \phi\vert^2 + V_\infty\phi^2)\, dx - C_1t^{\theta}\int_{B_{r_0}(x_0)}\vert\phi\vert^{\theta}\, dx + C_2 \vert B_{r_0}(x_0)\vert,
\end{equation}
for every $t\geq 0$. Since $\theta >2$, we have $J(t\phi) \to -\infty$ as $t\to +\infty$. Hence, taking $e= t\phi$, with $t>0$ sufficiently large, we have that $\| e\| > \rho$ and $J(e) <0$. This concludes the verification of item 2. The proof of Proposition \ref{mpg} is complete. \end{proof}
Recalling that $J$ satisfies the Palais-Smale condition \cite{Ambrosetti-Rabinowitz, R} if every sequence $(u_n) \subset E$ such that $(J(u_n)) \subset \mathbb{R}$ is bounded and $J'(u_n)\to 0$, as $n \to \infty$, has a strongly convergent subsequence, we may state:
\begin{lemma}\label{ps}
Suppose $V$ satisfies $(V_1)$-$(V_2)$ and $f$ satisfies $({f_1})$-$({f_3})$.
Then $J$ satisfies the Palais-Smale condition.
\end{lemma}
\begin{proof}
Let $(u_n) \subset E$ be a sequence such that $(J(u_n)) \subset \mathbb{R}$ is bounded and $J'(u_n)\to 0$, as $n \to \infty$. First of all we shall verify that $(u_n)$ is bounded in $(E,\|\cdot\|)$.
Using (\ref{1}) and (\ref{2}), we have
\begin{align}\label{9}
J(u_n) & - \frac{1}{\theta}J'(u_n)u_n
= \frac{(\theta - 2)}{2\theta}\| u_n\|^2
+ \int_{\mathbb{R}^N} \left(\frac{1}{\theta}g(x,u_n)u_n - G(x,u_n)\right)\, dx \nonumber\\
&\geq \frac{(\theta - 2)}{2\theta}\|u_n\|^2 + \int_{B_R(0)}\left(\frac{1}{\theta}f(x,u_n)u_n - F(x,u_n)\right)\, dx \nonumber\\
& \quad - \frac{(\theta +2)}{2\theta k}\int_{\mathbb{R}^N\setminus B_R(0)}V(x)u_n^2\, dx
\end{align}
Next, we invoke $({f_1})$ and $({f_3})$ to find a positive constant $C=C(R)$ such that
\begin{equation}\label{estamrab}
\frac{1}{\theta}f(x,s)s - F(x,s) \geq -C,\ \mbox{for every}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}.
\end{equation}
Hence (\ref{9}) and our choice of $k$ provide
\begin{align}
J(u_n) - \frac{1}{\theta}J'(u_n)u_n & \geq \frac{(\theta - 2)}{2\theta}\int_{\mathbb{R}^N}\vert\nabla u_n\vert^2\,dx + \frac{(\theta - 2)^2}{4\theta^2}\int_{\mathbb{R}^N\setminus B_R(0)} V(x)u_n^2\, dx \nonumber \\
&\quad + \frac{(\theta - 2)}{2\theta}\int_{B_R(0)} V(x)u_n^2\, dx - C\vert B_R(0)\vert. \nonumber
\end{align}
Consequently, using $\theta>2$ and the estimates (\ref{4}) and (\ref{4,5}), with $\alpha =0$ if $\Omega= \emptyset$, we get
\[
J(u_n) - \frac{1}{\theta}J'(u_n)u_n \geq K\|u_n\|^2 - C\vert B_R(0)\vert,
\]
where
\begin{equation}\label{aux_K}
K: = \frac{(\theta - 2)^2}{4\theta^2}\left(1 - \frac{\alpha \vert\Omega\vert^{2/N}}{S}\right).
\end{equation}
Using that $(J(u_n))\subset \mathbb{R}$ is bounded and that $J'(u_n)\to 0$, as $n \to \infty$, we conclude that the sequence $(u_n)$ is bounded in
$(E,\| \cdot\|)$. Consequently there is $L>0$ such that $\|u_n\| \leq L$ for every $n$. Furthermore, using that $E$ is continuously embedded in $D^{1,2}(\mathbb{R}^N)$ and the Sobolev embedding theorem, by taking a subsequence if necessary, we may suppose there is $u\in E$ such that
\begin{equation}\label{conv}
\begin{cases}
u_n \rightharpoonup u \mbox{ weakly in $E$},\\
u_n \to u \mbox{ strongly in $L^{\sigma}$, $\sigma \in [1, 2^*)$, on bounded subsets of $\mathbb{R}^N$},\\
u_n(x) \to u(x) \mbox{ for almost every $x\in \mathbb{R}^N$}.
\end{cases}
\end{equation}
A standard argument shows that $u$ is a critical point of $J$. Furthermore $u^- = \min(u,0) =0$. Indeed, by (\ref{1}), (\ref{ws}) and (\ref{conv}), we have
\[
\|u_n^-\|^2 = \int_{\mathbb{R}^N} \left(\vert\nabla u_n^-\vert^2 + V(x)(u_n^-)^2\right)\, dx = J'(u_n)u_n^- = o_n(1).
\]
Hence, $u_n^- \to 0$ strongly in $E$ and, consequently, $u^- = 0$.
Setting $v_n := u_n - u$, by (\ref{ws}) and (\ref{conv}), we have
\begin{align}\label{conv2}
\| v_n\|^2 & = \int_{\mathbb{R}^N}g(x,u_n)v_n\,dx + J'(u_n)v_n - \int_{\mathbb{R}^N}\left( \nabla u\nabla v_n + V(x)uv_n \right)\,dx \nonumber\\
& = \int_{\mathbb{R}^N}g(x,u_n)v_n\,dx + o_n(1).
\end{align}
Given $\epsilon >0$, we find $r>R$ such that
\begin{equation}\label{conv3}
\int_{\mathbb{R}^N \setminus B_r(0)} V(x)u^2\,dx < \frac{\epsilon^2}{(4k+2)^2L^2}.
\end{equation}
Next, invoking (\ref{1}), we obtain
\[
\left\vert \int_{\mathbb{R}^N \setminus B_r(0)}\! g(x,u_n)v_n\, dx\right\vert \leq \frac{1}{k}\int_{\mathbb{R}^N \setminus B_r(0)}\! V(x)u_n^2\,dx + \frac{1}{k}\int_{\mathbb{R}^N \setminus B_r(0)}\! V(x)\vert u_n\vert \vert u\vert\,dx.
\]
The above estimate and (\ref{conv2}) provide
\begin{align}\label{conv4}
\|v_n\|_{D^{1,2}}^2 + \int_{B_r(0)} V(x)v_n^2\,dx + \frac{(k-1)}{k}\int_{\mathbb{R}^N \setminus B_r(0)} V(x)(u_n^2 + u^2)\,dx\nonumber\\
\leq \int_{B_r(0)}g(x,u_n)v_n\, dx + \frac{(2k+1)}{k}\int_{\mathbb{R}^N \setminus B_r(0)}V(x)\vert u_n\vert\vert u\vert\,dx + o_n(1).
\end{align}
Using that $v_n^2 \leq 2(u_n^2 + u^2)$ in the third integral on the left-hand side of inequality (\ref{conv4}), we find
\begin{align*}
(k &-1)\|v_n\|^2 \leq 2k\int_{B_r(0)}g(x,u_n)v_n\,dx - (k+1)\int_{\Omega}V(x)v_n^2\,dx \\
& \quad + (4k+2)\int_{\mathbb{R}^N\setminus B_r(0)}V(x)\vert u_n\vert\vert u\vert\,dx + o_n(1)\\
& \leq 2k\int_{B_r(0)}g(x,u_n)v_n\,dx - (k+1)\int_{\Omega}V(x)v_n^2\,dx \\
& \quad + (4k+2)\left[\int_{\mathbb{R}^N\setminus B_r(0)}V(x)u_n^2\,dx\right]^{1/2}\left[\int_{\mathbb{R}^N\setminus B_r(0)}V(x)u^2\,dx\right]^{1/2} + o_n(1).
\end{align*}
Since, by (\ref{4}) and (\ref{4,5}),
\[
\int_{\mathbb{R}^N\setminus B_r(0)}V(x)u_n^2\,dx \leq \|u_n\|^2 \leq L^2,
\]
we may invoke (\ref{conv}) and (\ref{conv3}) to conclude that $\limsup_{n\to \infty}\|v_n\|^2 < \epsilon$ for every $\epsilon >0$. The fact that $\epsilon >0$ can be chosen arbitrarily small implies that $u_n \to u$ strongly in $E$.
\end{proof}
\begin{remark}\label{rem} We observe that the decay of $V$ at infinity is not used in the proofs of Proposition \ref{mpg} and Lemma \ref{ps}. Actually, in those proofs, we have only
used hypothesis $({V_1})$ and the fact that $V$ is positive on $\mathbb{R}^N\setminus B_R(0)$.
\end{remark}
By the Mountain Pass Theorem \cite{Ambrosetti-Rabinowitz}, there is $u \in E$ such that
\begin{equation}\label{8}
J(u) = c>0\quad \mbox{and}\quad J'(u) = 0,
\end{equation}
where
\begin{equation}\label{level}
c= \inf_{\gamma \in \Gamma}\max_{t\in [0,1]} J(\gamma(t)),
\end{equation}
with $$\Gamma = \{ \gamma \in C([0,1],E) : \gamma(0)=0, \gamma(1) = e\},$$
for $e$ given by Proposition \ref{mpg}.
Since $c>0$ and $g(x,s)=0$ for every $(x,s)\in \mathbb{R}^N\times (-\infty, 0]$, the function $u$ is a nontrivial and nonnegative weak solution of (\ref{AP}). Consequently, by regularity theory and the maximum principle, $u$ is positive in $\mathbb{R}^N$. It remains to verify that $u$ is a solution of (\ref{eq1}).
We conclude this section with an estimate for the norm of the solution given by (\ref{8}) that will be used later to estimate the decay at infinity of the positive weak solutions of (\ref{AP}). Considering $B_0 := B_{r_0}(x_0)$, the function $\phi$, and the constants $C_1$ and $C_2$ given in the proof of Proposition \ref{mpg}, we define
\begin{equation}\label{supmpl}
d := \sup_{t\geq 0}\left[ \frac{t^2}{2}\int_{B_0}(\vert\nabla \phi\vert^2 + V_\infty\phi^2)dx - C_1t^{\theta}\int_{B_0}\vert\phi\vert^{\theta}dx + C_2 \vert B_0\vert\right].
\end{equation}
\begin{corollary}\label{2kd}
Suppose $u$ is a solution of (\ref{AP}) such that $J'(u)=0$ and $J(u) =c$, with $c>0$ given by (\ref{level}). Then
\[
\|u\|^2 \leq K^{-1}\left[d + C\vert B_R(0)\vert\right],
\]
where $C$, $K$ and $d$ are given by (\ref{estamrab}), (\ref{aux_K}) and (\ref{supmpl}), respectively.
\end{corollary}
\begin{proof} First of all we observe that, as a direct consequence of (\ref{level}) and (\ref{supmpl}), $c\leq d$.
Furthermore, arguing as in the proof of Lemma \ref{ps},
\[
c= J(u) = J(u)-\frac{1}{\theta}J'(u)u \geq K\|u\|^2 - C\vert B_R(0)\vert.
\]
Consequently
$
\|u\|^2 \leq K^{-1}\left[ c +C\vert B_R(0)\vert\right] \leq K^{-1}\left[ d + C\vert B_R(0)\vert\right].
$
\end{proof}
\begin{remark} \label{hypalso}
If we suppose $(f_3)$ with $S_0= 0$, the estimate provided by Corollary \ref{2kd} does not depend on the value of $R$. Indeed, since in this case the constant $C$ given by (\ref{estamrab}) is zero, we have $\|u\|^2 \leq K^{-1}d$.
\end{remark}
\section{Estimates}
This section is devoted to establishing an estimate for the $L^\infty$ norm of a solution $u$ in terms of its $L^{2^*}$ norm, as well as the decay of the solutions. Here we shall consider the problem (\ref{AP})
with $V$ satisfying $({V_1})$ and $g: \mathbb{R}^N\times \mathbb{R} \to \mathbb{R}$ a Carath\'eodory function satisfying
\begin{description}
\item[$({g_1})$] there are $R> 0$ and $k>1$ such that
$$
\vert g(x,s)\vert \leq \frac{1}{k} V(x) \vert s\vert, \ \mbox{for every}\ s\in \mathbb{R}, \ \mbox{for a.e.}\ x\in \mathbb{R}^N\setminus B_R (0);
$$
\item[$({g_2})$] there are $a_1>0$, $a_2 \geq 0$, and $p\in (2,2^*)$ such that
$$
\vert g(x,s)\vert\leq a_1\vert s\vert^{p-1} + a_2, \ \mbox{for every}\ s\in \mathbb{R}, \ \mbox{for a.e.}\ x\in \mathbb{R}^N.
$$
\end{description}
Note that as a direct consequence of $({V_1})$ and $({g_1})$ we have that $\Omega \subset B_R(0)$ whenever $\Omega \neq \emptyset$.
The next result is a major step in our proofs of the existence of solutions for (\ref{eq1}) because it provides an estimate for the $L^\infty$ norm for the solutions of the auxiliary problem regardless of the behavior of $g$ near the origin. In our proof of that estimate, we adapt to our setting some of the arguments used by Br\'{e}zis and Kato \cite{Brezis-Kato} and Alves and Souto \cite{Alves-Souto}.
\begin{lemma}\label{infinity}
Suppose $({V_1})$ and $({g_1})$-$({g_2})$ are satisfied. If $u\in E$ is a solution of Problem (\ref{AP}), then $u\in L^\infty(\mathbb{R}^N)$ and
$$
\| u\|_{L^{\infty}(\mathbb{R}^N)} \leq
\left(C_1 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{p-2} + C_2 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{p-1} + C_3\right)^{\frac{1}{2^* -p}}
\left(1 + \vert u\vert_{L^{2^*}(\mathbb{R}^N)}\right),
$$
with the constants $C_1, C_2, C_3 \geq 0$ depending on the values of $a_1$, $a_2$, $\alpha$, $\vert\Omega\vert$, and $p$. Moreover $ C_2 =0$ whenever $a_2 =0$.
\end{lemma}
\begin{proof} Since the argument for $u^-$ is similar, for proving Lemma \ref{infinity} it suffices to verify that $u^+\in L^\infty(\mathbb{R}^N)$ and that the estimate holds with $u^+$ replacing $u$. Moreover, considering that the proof is simpler
when $\Omega = \emptyset$, we shall present a proof of the lemma under the hypothesis $\Omega\neq \emptyset$.
Taking $\tau = 2^*/(p-2)$ and $\sigma \in \mathbb{R}$ such that $\sigma \geq 2^*/(2 \tau^\prime) >1$, where $\tau^\prime$ is the exponent conjugate to $\tau$,
for every $m\in \mathbb{N}$, $m >a_2$ , we set $v=(u-a_2)^+$ and we define
\begin{align*}
A_m &:= \{ x\in \mathbb{R}^N : \vert v\vert^{\sigma -1} \leq m\}; \\
v_m & := \left\{ \begin{array}{ll}
v\vert v\vert^{2(\sigma -1)} & \mbox{in } A_m;\\
m^2v & \mbox{in } B_m :=\mathbb{R}^N\setminus A_m;
\end{array}
\right.\\
w_m & := \left\{ \begin{array}{ll}
v\vert v\vert^{(\sigma -1)} & \mbox{in } A_m;\\
mv & \mbox{in } B_m.
\end{array}
\right.
\end{align*}
A standard verification implies that $v_m \in E$. Furthermore we have that $vv_m = w_m^2$ and
\[
\nabla v_m = \left\{ \begin{array}{ll}
(2\sigma-1)\vert v\vert^{2(\sigma -1)}\nabla v & \mbox{in } A_m\\
m^2\nabla v & \mbox{in } B_m
\end{array}
\right.
\]
and
\[
\nabla w_m = \left\{ \begin{array}{ll}
\sigma\vert v\vert^{(\sigma -1)}\nabla v & \mbox{in } A_m\\
m\nabla v & \mbox{in } B_m.
\end{array}
\right.
\]
Hence, considering that $\sigma^2 >2\sigma -1$,
\begin{align}\label{27}
\int_{\mathbb{R}^N} \vert\nabla w_m\vert^2\,dx &= \sigma^2 \int_{A_m}\vert v\vert^{2(\sigma -1)}\vert\nabla v\vert^2\,dx + m^2 \int_{B_m}\vert\nabla v\vert^2\,dx \nonumber\\
& =\frac{\sigma^2}{2\sigma -1}\int_{\mathbb{R}^N}\nabla v\nabla v_m\,dx
+ m^2\left(1 - \frac{\sigma^2}{2\sigma -1}\right)
\int_{B_m}\vert\nabla v\vert^2\,dx \nonumber\\
& \leq \frac{\sigma^2}{2\sigma -1}\int_{\mathbb{R}^N}\nabla v\nabla v_m\,dx.
\end{align}
Since $v_m \in E$, $v_m\geq 0$, and $v_m =0$ on the set $[ u\leq a_2] :=\{x\in \mathbb{R}^N : u(x)\leq a_2\}$, we may use that $u$ is a solution of (\ref{AP}) and $({g_1})$ to obtain
\begin{align*}
\int_{\mathbb{R}^N}(\nabla u\nabla v_m & + V(x)uv_m)\,dx = \int_{\mathbb{R}^N}g(x,u)v_m\,dx\\
&\leq \int_{B_R(0)}\vert g(x,u)\vert v_m\,dx + \frac{1}{k}\int_{\mathbb{R}^N\setminus B_R(0)}V(x)uv_m\,dx.
\end{align*}
Using the definition of $v$ and that $\Omega\subset B_R(0)$, we get
\begin{align}
\int_{\mathbb{R}^N}\nabla v\nabla v_m\, dx + & \int_{\Omega}V(x)uv_m\,dx \nonumber \\
+ & \frac{(k-1)}{k} \int_{\mathbb{R}^N\setminus\Omega}V(x)uv_m\,dx \leq
\int_{B_R(0)}\vert g(x,u)\vert v_m\,dx.
\end{align}
Therefore, from $({V_1})$, $({g_2})$, (\ref{4,5}) and (\ref{27}),
\begin{equation}\label{28,6}
\int_{\mathbb{R}^N} \vert\nabla w_m\vert^2 \,dx \leq \frac{\sigma^2}{2\sigma -1}\left[
\int_{B_R(0)}(a_1\vert u\vert^{p-1} + a_2)v_m \, dx + \alpha\int_{\Omega}u v_m \,dx\right].
\end{equation}
We claim that
\begin{equation}\label{3.6}
\int_{\mathbb{R}^N} \vert\nabla w_m\vert^2\, dx \leq \frac{\sigma^2}{2\sigma -1}\left( a_3 \vert v\vert^{2\sigma}_{L^{2\sigma\tau^{\prime}}(\mathbb{R}^N)}+ a_4 \vert v\vert_{L^{2\sigma \tau^{\prime}}(\mathbb{R}^N)}^{2\sigma -1} \right),
\end{equation}
where
\begin{equation}\label{3.71}
a_3 = 2a_1 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{p-2} + \alpha \vert\Omega\vert^{\frac{(p-2)}{2^*}};
\end{equation}
and
\begin{equation}\label{3.72}
a_4 = \left\{\begin{array}{l}
2^{p-1} a_2^{-p}(a_1 a_2^{p-2} +1)(1 + \vert u\vert_{L^{2^*}(\mathbb{R}^N)})^{p-1} + \alpha a_2(1 + \vert\Omega\vert),\, \mbox{if}\, a_2> 0;\\
0, \, \mbox{if}\, a_2= 0.
\end{array}
\right.
\end{equation}
Indeed, invoking one more time that $v_m= 0$ whenever $u\leq a_2$ and using that $vv_m = w_m^2$, we have
\begin{align*}
\int_{B_R(0)}(a_1\vert u\vert^{p-1} &+ a_2)v_m \, dx \\
& \leq 2^{p-1} a_1\int_{\mathbb{R}^N} \vert u\vert^{p-2} w_m^2\, dx + 2^{p-1} a_2(a_1 a_2^{p-2} + 1)\int_{[u\geq a_2]}v_m \, dx
\end{align*}
Supposing $a_2>0$ and applying H\"{o}lder's inequality, we obtain
\begin{align}\label{3.4}
\int_{B_R(0)}(a_1\vert u\vert^{p-1} +& a_2)v_m \, dx \leq 2^{p-1} a_1\vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{(p-2)}\left(\int_{\mathbb{R}^N} w_m^{2\tau^\prime}\, dx\right)^{\frac{1}{\tau^\prime}} \\
+ & 2^{p-1}a_2(a_1a_2^{p-2} +1) \vert[u\geq a_2]\vert^{\frac{1}{\tau}}
\left(\int_{[u\geq a_2]} \vert v_m\vert^{\tau^{\prime}}\, dx\right)^{\frac{1}{\tau^\prime}},\nonumber
\end{align}
Since $\vert w_m\vert\leq \vert v\vert^\sigma$ and $\vert v_m\vert\leq \vert v\vert^{2\sigma -1}$ in $\mathbb{R}^N$, we have
\begin{align}
\int_{B_R(0)}\!
(a_1\vert u\vert^{p-1} +& a_2)v_m \, dx \leq 2^{p-1} a_1 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{(p-2)}\vert v\vert_{L^{2\sigma\tau^\prime}(\mathbb{R}^N)}^{2\sigma}\nonumber \\
+ & 2^{p-1}a_2(a_1 a_2^{p-2} +1) \vert[u\geq a_2]\vert^{\frac{1}{\tau}}
\left(\int_{[u\geq a_2]}\! \vert v\vert^{(2\sigma -1)\tau^{\prime}}\, dx\right)^{\frac{1}{\tau^\prime}}.\nonumber
\end{align}
Thus, using H\"{o}lder's inequality one more time, we obtain
\begin{align}
\int_{B_R(0)}(a_1\vert u\vert^{p-1} +& a_2)v_m \, dx \leq 2^{p-1}a_1 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{(p-2)}\vert v\vert_{L^{2\sigma\tau^\prime}(\mathbb{R}^N)}^{2\sigma}\nonumber \\
+ & 2^{p-1}a_2(a_1 a_2^{p-2} +1) \vert[u\geq a_2]\vert^{(\frac{1}{\tau} + \frac{1}{2\sigma \tau^\prime})}
\vert v\vert_{L^{2\sigma \tau^{\prime}}(\mathbb{R}^N)}^{2\sigma -1}. \nonumber
\end{align}
Consequently, observing that $a_2^{2^*}\vert[u \geq a_2]\vert\leq \vert u\vert^{2^*}_{L^{2^*}(\mathbb{R}^N)}$, we may write
\begin{align}
\int_{B_R(0)}(a_1\vert u\vert^{p-1} +& a_2)v_m \, dx \leq 2^{p-1} a_1\vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{(p-2)}\vert v\vert_{L^{2\sigma\tau^\prime}(\mathbb{R}^N)}^{2\sigma}\nonumber \\
+ & 2^{p-1} a_2^{ -(\frac{2^*}{2\sigma \tau^\prime} + p-1)}(a_1 a_2^{p-2} +1)\vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{\frac{2^*}{2\sigma \tau^\prime} + p-2}
\vert v\vert_{L^{2\sigma \tau^{\prime}}(\mathbb{R}^N)}^{2\sigma -1}.\nonumber
\end{align}
Since $0< 2^*/2\sigma \tau^\prime\leq 1$, we obtain
\begin{align}
\int_{B_R(0)}(a_1\vert u\vert^{p-1} +& a_2)v_m \, dx \leq 2^{p-1} a_1\vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{(p-2)}\vert v\vert_{L^{2\sigma\tau^\prime}(\mathbb{R}^N)}^{2\sigma}\nonumber \\
+ & 2^{p-1} a_2^{-p}(a_1 a_2^{p-2} +1) (a_2 +1) (1 +\vert u\vert_{L^{2^*}(\mathbb{R}^N)})^{(p-1)}
\vert v\vert_{L^{2\sigma \tau^{\prime}}(\mathbb{R}^N)}^{2\sigma -1}. \nonumber
\end{align}
Analogously, we obtain
\[
\int_{\Omega} u v_m\, dx \leq \vert\Omega\vert^{\frac{(p-2)}{2^*}} \vert v\vert^{2\sigma}_{L^{2\sigma\tau^{\prime}}(\mathbb{R}^N)}+ a_2(1 + \vert\Omega\vert)
\vert v\vert_{L^{2\sigma \tau^{\prime}}(\mathbb{R}^N)}^{2\sigma -1}.
\]
From the above inequalities and (\ref{28,6}), we conclude that the estimate (\ref{3.6}) is satisfied if $a_2>0$. A similar argument implies that (\ref{3.6}) also holds when $a_2 =0$, and the claim is proved.
From the last claim, $\vert w_m\vert = \vert v\vert^\sigma$ in $A_m$, the embedding $D^{1,2}(\mathbb{R}^N)\hookrightarrow L^{2^*}(\mathbb{R}^N)$ and $2\sigma -1 >1$, we obtain
\[
\left(\int_{A_m} \vert v\vert^{2^*\sigma}\,dx\right)^{2/2^*}
\leq \sigma^2S^{-1}\left( a_3 \vert v\vert^{2\sigma}_{L^{2\sigma\tau^{\prime}}(\mathbb{R}^N)}+ a_4 \vert v\vert_{L^{2\sigma \tau^{\prime}}(\mathbb{R}^N)}^{2\sigma -1} \right).
\]
Letting $m\to\infty$ and using the monotone convergence theorem, we may write
\[
\vert v\vert_{L^{2^*\sigma}(\mathbb{R}^N)}^{2\sigma}
\leq \sigma^2 S^{-1}(a_3 + a_4)\left( \vert v\vert^{2\sigma}_{L^{2\sigma\tau^{\prime}}(\mathbb{R}^N)}+ \vert v\vert_{L^{2\sigma \tau^{\prime}}(\mathbb{R}^N)}^{2\sigma -1} \right).
\]
Taking $\sigma = 2^*/2\tau^\prime$ and replacing $\sigma$ by $\sigma^j$, $j\in \mathbb{N}$, in the above inequality, we obtain
\[
\vert v\vert_{L^{2^*\sigma^j}(\mathbb{R}^N)}^{2\sigma^j}
\leq \sigma^{2j} S^{-1}(a_3 + a_4)\left( \vert v\vert^{2\sigma^j}_{L^{2^*\sigma^{(j-1)}}(\mathbb{R}^N)}+ \vert v\vert_{L^{2^*\sigma^{(j-1)}}(\mathbb{R}^N)}^{2\sigma^j -1} \right).
\]
By induction, we may verify that the following inequality holds for every $j\in \mathbb{N}$:
\[
\vert v\vert_{L^{2^*\sigma^j}}\leq \sigma^{\frac{1}{\sigma} + \frac{2}{\sigma^2}+ \dots +\frac{j}{\sigma^j} }\left[ 2\left(S^{-1}(a_3 + a_4)+ 1\right)\right]^{\frac{1}{2}(\frac{1}{\sigma }+\frac{1}{\sigma^2} + \cdots + \frac{1}{\sigma^j})}( 1 + \vert v\vert_{L^{2^*}(\mathbb{R}^N)}).
\]
Now, letting $j\to \infty$ and using that
$\sum_{j=1}^\infty {j}/{\sigma^j} = {\sigma}/{(\sigma -1)^2}$
and $\frac12\sum_{j=1}^\infty {1}/{\sigma^j} = \frac{1}{2(\sigma -1)}$,
we have
\[
\vert v\vert_{L^{\infty}}\leq \sigma^{\frac{\sigma}{(\sigma-1)^2}}
\left[ 2\left( S^{-1}(a_3 + a_4) + 1\right)\right]^{\frac{1}{2(\sigma-1)}}
(1 + \vert v\vert_{L^{2^*}(\mathbb{R}^N)}).
\]
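For completeness, the two series identities invoked above are standard consequences of differentiating the geometric series; the following computation is included only as a check and is not part of the argument:
```latex
\sum_{j=0}^{\infty} x^{j} = \frac{1}{1-x}
\ \Longrightarrow\
\sum_{j=1}^{\infty} j x^{j-1} = \frac{1}{(1-x)^{2}}
\ \Longrightarrow\
\sum_{j=1}^{\infty} j x^{j} = \frac{x}{(1-x)^{2}},
\qquad 0<x<1.
```
Taking $x = 1/\sigma$ yields $\sum_{j\geq 1} j/\sigma^j = (1/\sigma)\,(1-1/\sigma)^{-2} = \sigma/(\sigma-1)^2$ and, likewise, $\sum_{j\geq 1} 1/\sigma^j = 1/(\sigma -1)$.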
Hence, considering that $\sigma >1$, $1/(2(\sigma -1)) = 1/(2^*-p)$, $\vert v\vert_{L^{2^*}}\leq \vert u\vert_{L^{2^*}}$, and $\vert u^+\vert_\infty \leq \max\{ a_2, \vert v\vert_\infty\}$, we obtain
\[
\vert u^+\vert_{L^{\infty}}\leq
\left[ \sigma^{\frac{2\sigma}{(\sigma-1)}} 2S^{-1}(a_3 + a_4) + 2\sigma^{\frac{2\sigma}{(\sigma-1)}} + a_2^{2(\sigma-1)}\right]^{\frac{1}{2^* - p}}
(1 + \vert u\vert_{L^{2^*}(\mathbb{R}^N)}).
\]
Hence, from (\ref{3.71}) and (\ref{3.72}),
\[
\vert u^+\vert_{L^{\infty}}\leq
\left[ C_1 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{p-2} + C_2 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{p-1} + C_3 \right]^{\frac{1}{2^* - p}}
(1 + \vert u\vert_{L^{2^*}(\mathbb{R}^N)}),
\]
where $C_1, C_2$ and $C_3$ are constants depending on the values of $a_1,a_2, p, \alpha, \vert\Omega\vert$ and $k$. Moreover, $C_2 =0$ whenever $a_2=0$. The proof of Lemma \ref{infinity} is complete.
\end{proof}
\begin{remark}\label{obsCinfty}
We note that the values of the constants $C_1$, $C_2$, and $C_3$ given by Lemma \ref{infinity} do not depend on the value of $R$ in hypothesis $(g_1)$ or on the value of the potential $V$ on $\mathbb{R}^N\setminus \Omega$.
\end{remark}
\begin{lemma}\label{decay} Suppose ($V_1$) and $(g_1)$-$(g_2)$ are satisfied.
If $u\in E$ is a weak solution of (\ref{AP}), then
\[
\vert u(x)\vert \leq M\left(\frac{R}{\vert x\vert}\right)^{N-2}, \ \mbox{for every} \ x \in \mathbb{R}^N,\ \vert x\vert\geq R,
\]
where $R>0$ is given by $(g_1)$ and
\[
M= \left(C_1 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{p-2} + C_2 \vert u\vert_{L^{2^*}(\mathbb{R}^N)}^{p-1} + C_3\right)^{\frac{1}{2^* -p}}
\left(1 + \vert u\vert_{L^{2^*}(\mathbb{R}^N)}\right),
\]
with $C_1$, $C_2$, and $C_3$ the constants provided by Lemma \ref{infinity}.
\end{lemma}
\begin{proof}
Let $z \in C^{\infty}(\mathbb{R}^N\setminus\{0\})$ be the harmonic function $z(x) = M (R/\vert x\vert)^{N-2}$, for $x\in \mathbb{R}^N\setminus\{0\}$. Next, take
\[
w^+(x) = \left\{ \begin{array}{ll}
(u(x) - z(x))^+, & \mbox{if}\ \vert x\vert \geq R,\\
0, & \mbox{if}\ \vert x\vert < R.
\end{array}
\right.
\]
As a direct consequence of Lemma \ref{infinity}, $\vert u(x)\vert \leq z(x)$ for every $x$ such that $\vert x\vert=R$. Moreover, since $\Delta z=0$ in $\mathbb{R}^N\setminus B_R(0)$, $w^+\in E$, $w^+(x) =0$ for every $\vert x\vert\leq R$, and $w^+\geq 0$, by $(g_1)$, we have
\begin{align*}
\int_{\mathbb{R}^N}\vert\nabla w^+\vert^2\, dx & =\int_{\mathbb{R}^N}\nabla(u-z)\nabla w^+\, dx\\
& = \int_{\mathbb{R}^N}\nabla u\nabla w^+\, dx - \int_{\mathbb{R}^N}\nabla z\nabla w^+\, dx\\
&= \int_{\mathbb{R}^N}\left(g(x,u)w^+ - V(x)uw^+\right)\, dx\\
&\leq \int_{\mathbb{R}^N\setminus B_R(0)}V(x)\left( \frac{\vert u\vert}{k} - u\right)w^+\, dx \leq 0,
\end{align*}
where in the last inequality we have used that $u\geq 0$ whenever $w^+>0$.
Now, taking
\[
w^-(x) = \left\{ \begin{array}{ll}
(- u(x) - z(x))^+, & \mbox{if}\ \vert x\vert \geq R,\\
0, & \mbox{if}\ \vert x\vert < R,
\end{array}
\right.
\]
arguing as above and observing that $u\leq 0$ whenever $w^->0$, we obtain
\begin{align*}
\int_{\mathbb{R}^N}\vert\nabla w^-\vert^2\, dx & =\int_{\mathbb{R}^N}\nabla(-u-z)\nabla w^-\, dx\\
&= \int_{\mathbb{R}^N}\left(-g(x,u)w^- + V(x)uw^-\right)\, dx\\
&\leq \int_{\mathbb{R}^N\setminus B_R(0)}V(x)\left( \frac{\vert u\vert}{k} + u\right)w^-\, dx \leq 0.
\end{align*}
It follows from the above estimates that $w^{\pm}\equiv 0$ and, consequently, $\vert u\vert\leq z$ on $\mathbb{R}^N\setminus B_R (0)$. The proof of Lemma \ref{decay}
is complete.
\end{proof}
\section{Existence of a positive solution}\label{proofsthm12}
In this section, we present the proofs of Theorems \ref{thm} and \ref{thm2} and the corresponding results when hypothesis $(f_3)$ holds with $S_0=0$.
\medskip
\noindent{\textbf{Proof of Theorem \ref{thm}}}.
In view of Corollary \ref{2kd}, (\ref{E-D12}) and the estimate $\vert u\vert^2_{L^{2^*}(\Omega)}\leq S^{-1}\int_{\mathbb{R}^N}\vert\nabla u\vert^2\, dx$, the equation (\ref{AP}) has a positive solution $u\in E$ satisfying
\begin{equation}\label{4.1}
\vert u\vert_{L^{2^*}(\mathbb{R}^N)} \leq \widehat{C} := \left[ K^{-1}( S - \alpha \vert\Omega\vert^{2/N})^{-1} (d + C \vert B_R(0)\vert)\right]^{1/2},
\end{equation}
with $C$, $K$ and $d$ given by (\ref{estamrab}), (\ref{aux_K}) and (\ref{supmpl}), respectively.
Since $g$ defined by (\ref{defg}) satisfies the hypotheses $(g_1)$ and $(g_2)$, with $k = 2\theta/(\theta -2)$ and $R$,
$a_1$, $p$ and $\theta$ given by $(V_2)$, $(f_2)$ and $(f_3)$, in view of (\ref{4.1}) and Lemma \ref{decay}, we have
\begin{equation}\label{4.2}
\vert u(x)\vert \leq \widehat{M}\left(\frac{R}{\vert x\vert}\right)^{N-2}, \ \mbox{for every} \ x \in \mathbb{R}^N,\ \vert x\vert\geq R,
\end{equation}
where
\begin{equation}\label{4.3}
\widehat{M} =
\left(C_1\widehat{C}^{p-2} + C_2\widehat{C}^{p-1} + C_3\right)^{1/(2^* -p)}\left( 1 + \widehat{C}\right),
\end{equation}
for constants $C_1$, $C_2$, and $C_3$ given by Lemma \ref{infinity}.
Next, using the hypotheses $(f_1)$ and $(f_2)$, we find a constant $C>0$ such that
\begin{equation}\label{4.4}
\vert f(x,s)\vert \leq C\vert s\vert^{q-1},\ \mbox{for every}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}, \ \vert s\vert\leq \widehat{M}.
\end{equation}
Therefore, from (\ref{4.1})--(\ref{4.4}),
\[
\vert f(x,u(x))\vert\leq C \widehat{M} ^{(q-2)}\left(\frac{R}{\vert x\vert}\right)^{(N-2)(q-2)}\vert u(x)\vert,\ \mbox{for every}\ x\in \mathbb{R}^N, \ \vert x\vert\geq R.
\]
Taking $\Lambda^* = k C \widehat{M} ^{(q-2)}R^{(N-2)(q-2)}$, we may invoke the above estimate and $(V_2)$ to conclude that, for every $\Lambda \geq \Lambda^* >0$, we have
\[
\vert f(x,u(x))\vert \leq \frac{1}{k}V(x)\vert u(x)\vert,\ \mbox{for every}\ x \in \mathbb{R}^N,\ \vert x\vert\geq R.
\]
Thus, $\tilde{f}(x,u(x)) = f(x, u(x))$ for every $x\in \mathbb{R}^N$ such that $\vert x\vert \geq R$, and $g(x,u(x)) = f(x, u(x))$, for every $x\in \mathbb{R}^N$. This implies that $u$ is a positive solution of (\ref{eq1}). The proof of Theorem \ref{thm} is complete. \hfill $\Box$
\medskip
As observed in the introduction, when $(f_3)$ holds with $S_0=0$, we may provide a relation between the parameter in hypothesis $(V_2)$ and the value of $R$ that, in particular, generalizes the result in \cite{Alves-Souto} when $q>2^*$.
\begin{theorem} \label{resultalso}
Suppose $V$ satisfies $({V_1})$ and $({V_3})$, and $f$ satisfies $({f_1})$-$({f_2})$, and $({f_3})$ with $S_0 = 0$. Then there is $\widetilde{\Lambda}^*>0$ such that (\ref{eq1}) possesses a positive solution for every $\Lambda \geq \widetilde{\Lambda}^*$.
\end{theorem}
\begin{proof}
Since $f$ satisfies $(f_3)$ with $S_0 =0$, we may invoke
Remark \ref{hypalso} to conclude that the constant $\widehat{C}$, given by (\ref{4.1}),
does not depend on the value of $R$. Therefore, from Remark \ref{obsCinfty} and (\ref{4.3}), the estimate
(\ref{4.4}) holds with the constants $C$ and $\widehat{M}$ being independent of the values of $\Lambda$ and $R$. Consequently, supposing that $(V_3)$ holds, the argument used in the proof of Theorem \ref{thm} implies that
(\ref{eq1}) possesses a positive solution for every $\Lambda \geq \widetilde{\Lambda}^* = C \widehat{M}^{(q-2)}$.
\end{proof}
\medskip
\noindent{\textbf{Proof of Theorem \ref{thm2}}}.
Since $(\widehat{f_1})$ implies that $(f_1)$ holds, we may invoke Corollary \ref{2kd}, Lemma \ref{decay} and the argument used in the proof of Theorem \ref{thm} to conclude that equation (\ref{AP}) has a positive solution $u\in E$ satisfying the estimate (\ref{4.2}).
Next, we fix $0< \widehat{a} < a$. From $(\widehat{f_1})$ and $(f_2)$ we may find $C>0$ such that
\[
\vert f(x,s)\vert \leq Ce^{-\widehat{a}/ \vert s\vert^q}\vert s\vert,\ \mbox{for every}\ (x,s)\in \mathbb{R}^N\times\mathbb{R},\ \vert s\vert\leq \widehat{M},
\]
where $\widehat{M}$ is given by (\ref{4.3}). Consequently, by (\ref{4.2}), we may write
\[
\vert f(x,u(x))\vert\leq C e^{-\mu^* \vert x\vert^{(N-2)q}}\vert u(x)\vert, \ \mbox{for every}\ x\in \mathbb{R}^N, \ \vert x\vert\geq R,
\]
where $\mu^* = \widehat{a}/ (\widehat{M}R^{(N-2)})^q$. Therefore, taking $\Lambda^* = k C$, from $(V_4)$, for every $\Lambda \geq \Lambda^*$ and $0< \mu \leq \mu^* $, we get
\[
\vert f(x,u(x))\vert \leq \frac{1}{k}V(x)\vert u(x)\vert,\ \mbox{for every}\ x \in \mathbb{R}^N,\ \vert x\vert\geq R.
\]
This implies that $u$ is a positive solution of (\ref{eq1}) since, from the above estimate and (\ref{defg}), $g(x,u(x)) = f(x,u(x))$ for every
$x\in \mathbb{R}^N$. The proof of Theorem \ref{thm2} is complete. \hfill $\Box$
\medskip
An argument similar to that used for Theorem \ref{thm} shows that $\Lambda^*$ in Theorem \ref{thm2} depends on $R$ given in $({V}_4)$ if $S_0 >0$ in $(f_3)$. If $(f_3)$ holds with $S_0=0$ and $V$ satisfies
\begin{description}
\item [($V_6$)] there are constants $\Lambda > 0$, $\mu >0$, and $R>0$ ($R> \vert x_0\vert + r_0$, for $r_0>0$ and $x_0\in \mathbb{R}^N$ given by (\ref{V12}), if $V\geq 0$) such that
\[
\inf_{\vert x\vert\geq R}e^{\mu (\frac{\vert x\vert}{R})^{(N-2)q}}V (x) \geq \Lambda,
\]
with $q$ given by ($\widehat{f_1}$),
\end{description}
then we may find $\widehat{\Lambda}^* > 0$, independent of $R$, such that (\ref{eq1}) has a positive solution for every $\Lambda \geq \widehat{\Lambda}^*$. More precisely, using the arguments employed in the proof of Theorem \ref{thm2}, we may also state the following version of Theorem \ref{resultalso}:
\begin{theorem}\label{thm4.2}
Suppose $V$ satisfies $({V_1})$ and $({V_6})$, and $f$ satisfies $(\widehat{f_1})$, $({f_2})$, and $({f_3})$ with $S_0=0$.
Then there are constants $\widehat{\Lambda}^*, \widehat{\mu}^* >0$ such that (\ref{eq1}) possesses a positive solution for every $\Lambda \geq \widehat{\Lambda}^*$ and $0< \mu \leq \widehat{\mu}^*$.
\end{theorem}
\section{Existence of multiple solutions}\label{section5}
In this section, we present proofs of Theorem \ref{thm3} and Proposition \ref{prop4}. Here we also state a version of Proposition \ref{prop4} under hypothesis $(f_3)$ with
$S_0>0$ and corresponding results under hypothesis $(\widehat{f_1})$ and versions of $(V_4)$.
Before proving Theorem \ref{thm3}, we recall an abstract result that provides the existence of multiple pairs of nontrivial solutions for even functionals defined on Banach spaces (see
\cite{BBF}).
\begin{theorem}\label{simmpt}
Suppose $X= X_1\oplus X_2$, with $\dim{X_1} < \infty$, is a Banach space and that $I\in C^1(X,\mathbb{R})$ is an even functional satisfying $I(0)=0$, the Palais-Smale condition, and
\begin{description}
\item[$(I_1)$] there exist $\beta, \rho >0$ such that $I(u)\geq \beta$, for every $u\in X_2$ such that $\|u\| = \rho$;
\item[$(I_2)$] there exist a finite dimensional subspace $W$ of $X$ and $\gamma >0$ such that $I(u) \leq \gamma$, for every $u\in W$.
\end{description}
Then, if $\dim{W} >\dim{X_1}$, the functional $I$ possesses $l$ pairs of nontrivial critical points $\{\pm u_1, \cdots , \pm u_l \}$, where $l = \dim{W} - \dim{X_1}$. Furthermore
$\beta\leq I(u_i)\leq \gamma$, for every $i\in \{ 1, \cdots , l\}$.
\end{theorem}
In order to apply Theorem \ref{simmpt}, we consider $\hat{g} :\mathbb{R}^N\times \mathbb{R}\rightarrow \mathbb{R}$, the odd extension of the function defined by (\ref{defg}) for every $(x,s)\in \mathbb{R}^N \times [0,\infty)$. We note that $\hat{g}$ is a Carath\'{e}odory function satisfying
\begin{equation}
\begin{cases}
\hat{g}(x,s) = f(x,s), \ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R},\ \vert x\vert \leq R;\label{5.1} \\
\vert\hat{g}(x,s)\vert \leq \vert f(x,s)\vert, \ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}; \\
\vert\hat{g}(x,s)\vert \leq \frac{1}{k}V(x)\vert s\vert,\ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}, \ \vert x\vert > R.\\
\end{cases}
\end{equation}
Moreover, defining $\hat{G}(x,s) := \int_0^s\hat{g}(x,t)\, dt$, we have
\begin{equation}
\begin{cases}
\hat{G}(x,s) = F(x,s),\ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}, \ \vert x\vert \leq R; \\
\hat{G}(x,s) \leq \frac{1}{2k}V(x)s^2,\ \mbox{for}\ (x,s)\in \mathbb{R}^N\times \mathbb{R}, \ \vert x\vert > R; \label{5.2}
\end{cases}
\end{equation}
Next, we consider the symmetric version of the auxiliary equation (\ref{AP}):
\begin{equation}\label{ODDAP}
\begin{cases}
-\Delta u + V(x)u = \hat{g}(x,u),\quad x\in \mathbb{R}^N\\
\quad u\in E.
\end{cases}
\end{equation}
We also observe that any solution $u$ of (\ref{ODDAP}) that satisfies $\vert f(x,u)\vert\leq V(x)\vert u\vert/k$, for $\vert x\vert \geq R$, is a solution of (\ref{eq1}).
As in the proofs of Theorems \ref{thm} and \ref{thm2}, the functional $\hat{J}: E \rightarrow \mathbb{R}$, given by
\[
\hat{J}(u) = \frac12\int_{\mathbb{R}^N}\left(\vert\nabla u\vert^2 + V(x)u^2\right)\, dx - \int_{\mathbb{R}^N}\hat{G}(x,u)\, dx
\]
associated with (\ref{ODDAP}), is of class $C^1$ in $(E, \| \cdot\|)$, and critical points of $\hat{J}$ are weak solutions of equation (\ref{ODDAP}). Furthermore, $\hat{J}$ is an even functional and $\hat{J}(0)=0$.
The next result provides a version of Proposition \ref{mpg} for the functional $\hat{J}$.
\begin{proposition}\label{oddmpg}
Suppose $V$ satisfies $(V_1)$-$(V_2)$ and $f$ satisfies $({f_1})$-$({f_3})$. Then
\begin{enumerate}
\item there exist constants $\beta, \rho >0$ such that $\hat{J}(u) \geq \beta$ for every $u\in E$ such that $\| u\|= \rho$;
\item given $l\in \mathbb{N}$, there exist $\{\phi_1, \cdots , \phi_l\} \subset E$ and $D_l>0$ such that $\hat{J}(u) < D_l $ for every $u\in W_l := \langle \phi_1, \cdots, \phi_l\rangle$.
\end{enumerate}
\end{proposition}
\begin{proof} Item $1$ has actually been proved in Proposition \ref{mpg}. To prove item $2$, as in the proof of Proposition \ref{mpg}, we consider $V_1\geq 0$ and $B_0 := B_{r_0}(x_0) \subset B_R(0)$ such that $V(x) \leq V_\infty$ for every $x \in B_0$. Next we take functions $\phi_i \in E\setminus \{0\}$, $1\leq i\leq l$, such that $\mathrm{supp}(\phi_i) \subset B_{r_0}(x_0)$, for every $i\in \{1,\cdots, l\}$, and $\mathrm{supp}(\phi_i)\cap\mathrm{supp}(\phi_j) = \emptyset$, for $i\neq j\in \{1,\cdots, l\}$.
Defining $d_i$, $i\in\{1, \cdots, l\}$, by
\[
d_i :=
\sup_{\tau \in \mathbb{R}}\left[ \frac{\tau^2}{2}\int_{B_0}(\vert\nabla \phi_i\vert^2 + V_\infty\phi_i^2)dx - C_1\vert\tau\vert^{\theta}\int_{B_0}\vert\phi_i\vert^{\theta}dx + C_2 \vert B_0\vert\right],
\]
with $C_1$ and $C_2$ given by (\ref{7}), we obtain
$\hat{J}(t\phi_i) \leq d_i <\infty$ for every $t\in \mathbb{R}$. Since $\mathrm{supp}(\phi_i)\cap\mathrm{supp}(\phi_j) = \emptyset$, for $i\neq j\in \{1,\cdots, l\}$, we may assert that item $2$ is satisfied with $D_l = \max\{d_i; \ 1\leq i\leq l\}$. The proposition is proved.
\end{proof}
We are now ready to present the proofs of Theorem \ref{thm3} and Proposition \ref{prop4}:
\medskip
\noindent{\textbf{Proof of Theorem \ref{thm3}}}.
Observing that the argument employed in the proof of Lemma \ref{ps} may be easily adapted to verify that $\hat{J}$ satisfies the Palais-Smale condition, we may invoke Theorem \ref{simmpt} and Proposition \ref{oddmpg} to verify that
$\hat{J}$ possesses $l$ pairs of nontrivial critical points $\{\pm u_1, \cdots , \pm u_l \}$. Moreover
$\beta \leq \hat{J}(u_i) \leq D_l$, and
\begin{equation}\label{estimateui}
\|u_i\| \leq K^{-1} \left( D_l + C \vert B_R(0)\vert\right), \ \mbox{for every} \ i\in \{1,\cdots, l\},
\end{equation}
where $C$, $K$ and $D_l$ are given by (\ref{estamrab}), (\ref{aux_K}) and Proposition \ref{oddmpg}-item 2, respectively. The proof of Theorem \ref{thm3} is concluded by using the estimate provided by Lemma \ref{decay} and arguing as in the proof of Theorem \ref{thm}. \hfill $\Box$
\medskip
\noindent{\textbf{Proof of Proposition \ref{prop4}}}. Noting that the estimate (\ref{estimateui}) holds with $C=0$ (see Remark \ref{hypalso}), we may argue as in the proof of Theorem
\ref{resultalso} to conclude that, under the hypotheses of Theorem \ref{thm3} with $f$ satisfying $(f_3)$ with $S_0=0$, we may find $l$ pairs of nontrivial solutions whenever
$V$ satisfies $(V_5)$ with $\Lambda_j / R_j^{(N-2)(q-2)}$ sufficiently large, independently of the value of $R_j$. This fact and the hypothesis $\limsup_{j\to\infty}\Lambda_j / R_j^{(N-2)(q-2)}= \infty$ conclude the proof of Proposition \ref{prop4}. \hfill $\Box$
It is not difficult to verify that, as a direct consequence of Theorem \ref{thm3}, a version of Proposition \ref{prop4} holds under $(f_3)$ without supposing $S_0=0$:
\begin{proposition}\label{prop5.3}
Suppose $V$ satisfies $(V_1)$ and $(V_5)$ and $f$ satisfies $(f_1)$-$(f_4)$. Then there exists a sequence $(\Lambda^*_{j,l})\subset (0,\infty)$, $(j,l)\in \mathbb{N}^2$, such that
(\ref{eq1}) possesses infinitely many pairs of nontrivial solutions whenever $(\Lambda_j)$ has a subsequence, $(\Lambda_{j_i})$, satisfying $\Lambda_{j_i}\geq \Lambda^*_{j_i,l_i}$, for every $i\in \mathbb{N}$, with $l_i\to\infty$, as $i\to\infty$.
\end{proposition}
\medskip
We also state, without proof, a version of Theorem \ref{thm3} under the hypotheses of Theorem \ref{thm2} and $(f_4)$.
\begin{theorem}\label{5.4}
Suppose $V$ satisfies $({V_1})$ and $({V_4})$, and $f$ satisfies $(\widehat{f_1})$, and $({f_2})$-$({f_4})$.
Then, given $l\in \mathbb{N}$, there are constants $\Lambda^*_l, \mu^*_l >0$ such that (\ref{eq1}) possesses $l$ pairs of nontrivial solutions for every $\Lambda \geq \Lambda^*_l$ and $0< \mu \leq \mu^*_l$.
\end{theorem}
\begin{remark}\label{5.5}
Finally, we would like to emphasize that, assuming condition $(f_4)$, it is not difficult to derive versions of Propositions \ref{prop4} and \ref{prop5.3} under the hypothesis
$(\widehat{f_1})$ and conditions related to $(V_4)$ and $(V_5)$.
\end{remark}
\paragraph{Acknowledgments}
This work was financed in part by the CNPq/Brazil. Sergio H. M. Soares wishes to thank the Department of Mathematics of the University of Brasília, where part of this article was written, for the hospitality.
Since the first experimental realizations of Bose--Einstein condensation in cold atomic gases in 1995 \cite{WieCor-95,Ket-95}, the rigorous understanding of condensation from the basic laws of quantum physics has become a major problem in mathematical physics. In the present paper, we investigate this issue for a system of trapped bosons and provide a quantitative justification of the condensation for the low-lying eigenstates.
We consider a system of $N$ bosons in $\mathbb{R}^3$ described by the Hamiltonian
\begin{equation} \label{eq:HN}
H_N= \sum\limits_{j = 1}^N (-\Delta_{x_j} + V_{\rm ext}(x_j)) + \sum\limits_{1 \leqslant j < k \leqslant N} N^2 V(N(x_j-x_k))
\end{equation}
on the bosonic space $L^2(\mathbb{R}^3)^{\otimes_s N}$. Here the external potential, which satisfies $V_{\rm ext}(x) \to+\infty$ as $|x|\to +\infty$, serves to trap the system. The interaction potential $V$ is non-negative and sufficiently smooth. The Hamiltonian $H_N$ with the core domain $C_c^\infty(\mathbb{R}^3)^{\otimes_s N}$ is bounded from below and can be extended to be a self-adjoint operator by Friedrichs' method.
Note that the range of the interaction is of order $N^{-1}$, much smaller than the average distance of the particles (which is of order $N^{-1/3}$ as the system essentially occupies a volume of order 1). Therefore, any particle interacts with very few others, namely the system is very dilute. On the other hand, the interaction potential is very strong in its range (the strength is of order $N^2$), making the correlation of particles complicated. This so-called {\em Gross--Pitaevskii regime} is relevant to the physical setup in \cite{WieCor-95,Ket-95}, and its mathematical analysis is both interesting and difficult.
To the leading order, the macroscopic properties of the system are well captured by the famous Gross--Pitaevskii theory \cite{Gro-61,Pit-61}. In this theory, a quantum particle is effectively felt by the others as a hard sphere whose radius is the {\em scattering length} of the interaction potential. Recall that the scattering length $\mathfrak{a}$ of the potential $V$ is defined by the variational formula
\begin{equation}\label{eq:var scat}
8\pi \mathfrak{a} = \inf\left\{ \int_{\mathbb{R} ^3} 2 |\nabla f| ^2 + V |f| ^2 , \quad \lim_{|x|\to \infty}f(x)=1 \right\}.
\end{equation}
When $V$ is sufficiently smooth, \eqref{eq:var scat} has a minimizer $0\le f\le 1$ and it satisfies
\begin{equation}\label{eq:scat-intro-1}
(-2\Delta+V)f=0.
\end{equation}
The scattering length can then be recovered from the formula
\begin{equation}\label{eq:scat-intro-2}
8\pi \mathfrak{a} = \int Vf.
\end{equation}
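As a classical illustration of these formulas (not needed for the argument below), consider the hard-sphere interaction of radius $R_0$: formally $V=+\infty$ on $\{|x|\le R_0\}$ and $V=0$ outside, so the constraint in \eqref{eq:var scat} becomes $f=0$ on $\{|x|\le R_0\}$. The optimal profile is then the harmonic function $f(x)=(1-R_0/|x|)_+$, for which
```latex
\int_{\mathbb{R}^3} 2|\nabla f|^2
= 2\int_{|x|\ge R_0} \frac{R_0^2}{|x|^4}\,\mathrm{d}x
= 8\pi R_0^2 \int_{R_0}^{\infty} \frac{\mathrm{d}r}{r^2}
= 8\pi R_0,
```
so the scattering length of a hard sphere equals its radius, $\mathfrak{a}=R_0$.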
By scaling, the scattering length of $V_N=N^2V(N\cdot)$ is $\mathfrak{a} N^{-1}$. If we formally replace the interaction potential $V_N(x-y)$ in $H_N$ by the Dirac-delta interaction $8\pi \mathfrak{a} N^{-1}\delta_0(x-y)$, and insert the ansatz of full condensation $\Psi_N= u^{\otimes N}$, then we obtain the Gross--Pitaevskii approximation for the ground state energy per particle
\begin{align} \label{eq:eGP}
e_{\rm GP}= \inf_{\norm{u}_{L^2(\mathbb{R}^3)}=1} \int_{\mathbb{R} ^3} \left( |\nabla u| ^2 + V_{\rm ext} |u|^2 + 4\pi \mathfrak{a} |u|^4 \right).
\end{align}
It is well-known that the variational problem \eqref{eq:eGP} has a minimizer $\varphi_{\rm GP} \ge 0$. This minimizer is unique (up to a constant phase) and solves the Gross--Pitaevskii equation
\begin{equation} \label{eq:GP-equation}
\left(-\Delta + V_{\rm ext} + 8\pi \mathfrak{a} \varphi_{\rm GP}^2 -\mu \right) \varphi_{\rm GP}= 0, \quad \mu \in \mathbb{R}.
\end{equation}
Note that the expectation of $H_N$ against the uncorrelated wave function $\varphi_{\rm GP}^{\otimes N}$ gives us a formula like \eqref{eq:eGP} but with $4\pi \mathfrak{a}$ replaced by the larger value $(1/2)\int V$. Thus in the Gross--Pitaevskii regime, the particle correlation due to the two-body scattering process plays a crucial role.
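The scaling of the scattering length used above can be verified directly from \eqref{eq:scat-intro-1}--\eqref{eq:scat-intro-2}; the following standard computation is included only as a check. If $f$ is the scattering solution for $V$, then $f_N(x):=f(Nx)$ solves the zero-energy equation for $V_N=N^2V(N\cdot)$:
```latex
(-2\Delta + V_N) f_N(x)
= N^2 \big[(-2\Delta f + V f)(Nx)\big] = 0,
\qquad
\int_{\mathbb{R}^3} V_N f_N
= N^{2}\cdot N^{-3} \int_{\mathbb{R}^3} V f
= \frac{8\pi \mathfrak{a}}{N},
```
so the scattering length of $V_N$ is indeed $\mathfrak{a} N^{-1}$.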
\medskip
The rigorous derivation of the Gross--Pitaevskii theory from the many-body Schr\"o\-ding\-er theory is the subject of many important works in the last two decades; see \cite{LSY-00,LS-02,LS-06,NRS-16,BBCS-18,BBCS-19,BBCS-19b} for low-lying eigenstates, \cite{DSY-19,DS-19} for thermal equilibrium states, and \cite{ESY-09,ESY-10,Pic-15,BOS-14,BS-19} for dynamics. In regard to the ground state energy
\[
E_N := \inf {\rm Spec}(H_N)= \inf_{\Psi\in L^2(\mathbb{R}^3)^{\otimes_s N}, \norm{\Psi}_{L^2}=1} \langle \Psi, H_N \Psi\rangle,
\]
Lieb, Seiringer and Yngvason \cite{LSY-00} proved that
\begin{equation} \label{eq:BEC-0}
\lim_{N\to \infty} \frac{E_N}{N} = e_{\rm GP}.
\end{equation}
Later, Lieb and Seiringer \cite{LS-06} proved that the ground state $\Psi_N$ of $H_N$ exhibits the complete Bose--Einstein condensation, namely
\begin{equation} \label{eq:BEC}
\lim_{N\to \infty} \frac{\gamma_{\Psi_N}^{(1)}} {N} = |\varphi_{\rm GP}\rangle \langle \varphi_{\rm GP}|
\end{equation}
in the trace norm. See also \cite{NRS-16} for a simplified proof of these results. Here the one-body density matrix $\gamma_{\Psi_N}^{(1)}$ of $\Psi_N$ is an operator on $L^2(\mathbb{R}^3)$ with kernel
\[
\gamma_{\Psi_N}^{(1)}(x,y)= N \int_{(\mathbb{R}^3)^{N-1}} \Psi_N(x,x_2,...,x_N) \overline{\Psi_N(y,x_2,...,x_N)} \mathrm{d} x_2 ... \mathrm{d} x_N.
\]
In particular, $\gamma_{u^{\otimes N}}^{(1)}=N|u\rangle \langle u|$ for normalized $u$. Note that \eqref{eq:BEC} may hold even if
$\Psi_N$ and $\varphi_{\rm GP}^{\otimes N}$ are not close in the usual norm topology.
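As a quick consistency check of this normalization (a direct computation from the kernel formula, not taken from the cited works), for a normalized one-body state $u$ the kernel factorizes as
```latex
\gamma_{u^{\otimes N}}^{(1)}(x,y)
= N\, u(x)\,\overline{u(y)} \prod_{j=2}^{N} \int_{\mathbb{R}^3} |u(x_j)|^2 \,\mathrm{d}x_j
= N\, u(x)\,\overline{u(y)},
```
hence $\gamma_{u^{\otimes N}}^{(1)} = N\,|u\rangle\langle u|$ and ${\rm Tr}\,\gamma_{\Psi_N}^{(1)}=N$ for any normalized $\Psi_N$, consistent with the factor $1/N$ in \eqref{eq:BEC}.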
\medskip
A special case of trapped systems is the {\em homogeneous gas}, where $H_N$ acts on $L^2(\mathbb{T}^3)^{\otimes_s N}$ instead of $L^2(\mathbb{R}^3)^{\otimes_s N}$, with $\mathbb{T}^3$ the unit torus in three dimensions, and the external potential is ignored. For this translation-invariant system, it is easy to see that
\[
e_{\rm GP}=4\pi \mathfrak{a}, \quad \varphi_{\rm GP} =1.
\]
In this case, recently Boccato, Brennecke, Cenatiempo and Schlein \cite{BBCS-18,BBCS-19} proved that
\begin{align} \label{eq:BEC-opt}
E_N= N e_{\rm GP} + O(1), \quad \langle \varphi_{\rm GP}, \gamma_{\Psi_N}^{(1)} \varphi_{\rm GP}\rangle = N + O(1),
\end{align}
improving upon the leading order convergence in \cite{LS-02}. These optimal bounds are crucial inputs for the analysis of the excitation spectrum of $H_N$ in \cite{BBCS-19b}. Similar bounds were obtained earlier for the quantum dynamics in \cite{BOS-14,BS-19}.
\medskip
\subsection*{Main result} In the present paper we aim at providing an alternative approach to the optimal condensation \eqref{eq:BEC-opt} and extending it to inhomogeneous trapped gases. Our main result is
\begin{theorem}[Optimal condensation for trapped Bose gases] \label{thm:main} Let $0\le V_{\rm ext}\in C^1(\mathbb{R}^3)$ satisfy $|\nabla V_{\rm ext}(x)|^2 \le 2V_{\rm ext}(x)^{3}+C$ and $V_{\rm ext}(x)\to \infty$ as $|x|\to \infty$. Let $0\le V\in L^1(\mathbb{R}^3)$ be radial with compact support such that its scattering length $\mathfrak{a}$ is small. Let $\varphi_{\rm GP}$ be the Gross--Pitaevskii minimizer for $e_{\rm GP}$ in \eqref{eq:eGP}. Let $E_N$ be the ground state energy of the Hamiltonian $H_N$ in \eqref{eq:HN}. Then we have the energy bound
\begin{align} \label{eq:EN-NeGP}
|E_N - Ne_{\rm GP}| \le C
\end{align}
and the operator bound
\begin{align} \label{eq:HN>=}
H_N \ge N e_{\rm GP} + C^{-1}\sum_{i=1}^N (\1 - |\varphi_{\rm GP}\rangle \langle \varphi_{\rm GP}|)_{x_i} -C \quad \text{ on }\quad L^2(\mathbb{R}^3)^{\otimes_s N}.
\end{align}
Here $C>0$ is a constant independent of $N$.
\end{theorem}
A direct consequence of Theorem \ref{thm:main} is
\begin{corollary} For any wave function $\Psi_N$ in $L^2(\mathbb{R}^3)^{\otimes_s N}$, we have
\begin{align} \label{eq:BEC-opt-thm}
0\le N - \langle \varphi_{\rm GP}, \gamma_{\Psi_N}^{(1)} \varphi_{\rm GP}\rangle \le C (\langle \Psi_N, H_N \Psi_N \rangle - E_N +1).
\end{align}
In particular, if $\Psi_N$ is a ground state of $H_N$, then $N - \langle \varphi_{\rm GP}, \gamma_{\Psi_N}^{(1)} \varphi_{\rm GP} \rangle \le C$.
\end{corollary}
The precise smallness condition on the scattering length needed for our proof is
\begin{align} \label{eq:gap-condition}
\int_{\mathbb{R}^3} \left( |\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + 40 \pi \mathfrak{a} \norm{\varphi_{\rm GP}}_{L^\infty}^2< \inf_{\substack{u\bot \varphi_{\rm GP} \\ \norm{u}_{L^2}=1}} \int_{\mathbb{R}^3} \left( |\nabla u |^2 + V_{\rm ext} |u|^2 \right).
\end{align}
Note that when $\mathfrak{a}$ tends to $0$, the left and right sides of \eqref{eq:gap-condition} converge to the first and second lowest eigenvalues of $-\Delta+V_{\rm ext}$, respectively. Thanks to the spectral gap of the one-body Schr\"odinger operator, \eqref{eq:gap-condition} holds for $\mathfrak{a}>0$ small.
As in \cite{BBCS-18}, the smallness condition on the interaction helps us to simplify the analysis. The case of large interaction is more difficult and left open. Heuristically, the technical smallness assumption could be removed by using the leading order result \eqref{eq:BEC-0}-\eqref{eq:BEC} as an input, plus a localization method on the number of excited particles to carefully factor out the particle correlation. This has been done for the homogeneous case \cite{BBCS-19}, but the extension to the inhomogeneous case would require several additional arguments. On the other hand, our proof below is direct and we do not use the leading order result \eqref{eq:BEC-0}-\eqref{eq:BEC} at all.
Note that our proof of the lower bound \eqref{eq:scat-intro-2} extends to hard-core potentials, because we only need the finiteness of the scattering length (more precisely, we will very soon replace $V$ by $Vf$, which is a bounded measure with $\int Vf=8\pi \mathfrak{a}$). However, the condition $V\in L^1(\mathbb{R}^3)$ is important for the upper bound $E_N\le Ne_{\rm GP}+C$.
\subsection*{Proof strategy} Our proof contains two main steps.
\begin{itemize}
\item First, we factor out the particle correlation by simply `completing the square'. The idea is inspired by recent works of Brietzke--Fournais--Solovej \cite{BFS-18} and Four\-nais--Solovej \cite{FS-19} on the Lee--Huang--Yang formula. This step allows us to bound $H_N$ by a quadratic Hamiltonian on Fock space, up to an energy shift.
\item Second, we estimate the ground state energy of the quadratic Hamiltonian. In the homogeneous case, this step can be done by `completing the square' again, as realized already in 1947 by Bogoliubov \cite{Bog-47}. In the inhomogeneous case, the analysis is significantly more complicated. We will derive a sharp lower bound for general quadratic Hamiltonians, and then apply it to the problem at hand.
\end{itemize}
Our method is different from that of \cite{BBCS-18,BBCS-19}. For the reader's convenience, we will quickly present our proof in the homogeneous case. Then we explain further details in the inhomogeneous case.
We use the Fock space formalism, which is recalled in Section \ref{sec:pre}. Our convention of the Fourier transform is
\[ \widehat g(p)=\int g(x)e^{-ip\cdot x} \mathrm{d} x. \]
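With this convention, the inversion and Plancherel formulas read
\[
g(x)=\int \widehat g(p)\, e^{ip\cdot x}\, \frac{\mathrm{d} p}{(2\pi)^3}, \qquad \int_{\mathbb{R}^3} g\, \overline{h} = \int_{\mathbb{R}^3} \widehat g\, \overline{\widehat h}\, \frac{\mathrm{d} p}{(2\pi)^3};
\]
in particular, every momentum integral below carries the measure $\mathrm{d} p/(2\pi)^3$.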
\subsubsection*{Homogeneous case} In this case the bosons live on the unit torus, so that $\varphi_{\rm GP}\equiv 1$ and the momenta belong to $2\pi \mathbb{Z}^3$. Let us focus on the main estimate \eqref{eq:HN>=}. Let $P=|\varphi_{\rm GP} \rangle \langle \varphi_{\rm GP}|$ and let $f_N(x)=f(Nx)$, where $f$ is the scattering solution of \eqref{eq:scat-intro-1}.
Since $V_N\ge 0$ we have
\begin{equation} \label{eq:key-1-PPfV1-fPP}
(\1- P\otimes P f_N) V_N (\1- f_N P\otimes P) \ge 0
\end{equation}
where $V_N$ and $f_N$ are the multiplication operators by $N^2V(N(x-y))$ and $f(N(x-y))$ on the two-particle space. Expanding \eqref{eq:key-1-PPfV1-fPP} leads to the operator inequality
\begin{align} \label{eq:HN>=HBog}
H_N &\ge \sum_{p\ne 0} \left( |p|^2 a_p^* a_p + \frac{1}{2} \widehat{f_NV_N}(p) a_p^* a_{-p}^* a_0 a_0 + \frac{1}{2} \widehat{f_NV_N}(p) a_0^* a_{0}^* a_p a_{-p} \right) \nonumber\\
&\quad +\frac{1}{2} \left( \int (2f_N-f_N^2)V_N \right) a_0^* a_0^* a_0 a_0.
\end{align}
Here $a_p^*$ and $a_p$ are the creation and annihilation operators of momentum $p\in 2\pi \mathbb{Z}^3$ on Fock space. Note that the form of the `square' in \eqref{eq:key-1-PPfV1-fPP} is slightly different from that of \cite{BFS-18,FS-19} as we factor out completely the `cubic contribution' to make the analysis shorter (in [11] the cubic terms are estimated further by the Cauchy--Schwarz inequality).
Next, recall that by an extension of Bogoliubov's method \cite[Theorem 6.3]{LS-01}, we have
\begin{align*}
A(b_p^* b_p + b_{-p}^* b_{-p})+ B(b_p^* b_{-p}^* + b_{p}b_{-p}) \ge - (A-\sqrt{A^2-B^2}) \frac{[b_p,b_{p}^*]+[b_{-p}, b_{-p}^*]}{2}
\end{align*}
for all constants $A\ge B\ge 0$ and operators $b_p, b_{-p}$ on Fock space. Taking
\[ b_p=N^{-1/2}a_0^* a_p, \quad b_p^* b_p \le a_p^* a_p, \quad [b_p,b_p^*] \le 1, \quad \forall 0\ne p \in 2\pi \mathbb{Z}^3 \]
we find that
\begin{multline} \label{eq:HN>=HBog-1}
\sum_{p\ne 0} \left( (|p|^2-\mu) a_p^* a_p + \frac{1}{2} \widehat{f_NV_N}(p) a_p^* a_{-p}^* a_0 a_0 + \frac{1}{2} \widehat{f_NV_N}(p) a_0^* a_{0}^* a_p a_{-p} \right)\\
\begin{aligned}[b]
&\ge \frac{1}{2} \sum_{p\ne 0} \left( (|p|^2-\mu) (b_p^* b_p+b_{-p}^*b_{-p}) + N\widehat{f_NV_N}(p) b_p^* b_{-p}^* + N\widehat{f_NV_N}(p) b_p b_{-p} \right)\\
&\ge - \frac{1}{2} \sum_{p\ne 0} \left( |p|^2-\mu - \sqrt{(|p|^2-\mu)^2 - |N\widehat{f_NV_N}(p)|^2} \right)
\end{aligned}
\end{multline}
for any constant $0<\mu<4\pi^2-8\pi \mathfrak{a}$ (we used $N \|\widehat {f_NV_N}\|_{L^\infty}\le 8\pi \mathfrak{a}$). It is straightforward to see that the right side of \eqref{eq:HN>=HBog-1} equals
\begin{align} \label{eq:HN>=HBog-2}
- \frac{N^2}{4}\sum_{p\ne 0} \frac{|\widehat{f_NV_N}(p)|^2}{|p|^2} + O(1) &= - \frac{N^2}{4} \int_{\mathbb{R}^3} \frac{|\widehat{f_NV_N}(p)|^2}{|p|^2} \frac{\mathrm{d} p}{(2\pi)^3}+ O(1) \nonumber\\
&= - \frac{N^2}{2}\int_{\mathbb{R}^3} V_N f_N (1-f_N) + O(1).
\end{align}
Here we have used Plancherel's identity and the fact that $\widehat{V_Nf_N}(p)=2|p|^2 \widehat{(1-f_N)}(p)$ which follows from the scattering equation \eqref{eq:scat-intro-1}.
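For the reader's convenience, the identity used here is obtained by taking the Fourier transform of the scattering equation for $\omega_N=1-f_N$:
\[
-2\Delta \omega_N = V_N f_N \quad \Longrightarrow \quad 2|p|^2\, \widehat{\omega_N}(p) = \widehat{V_Nf_N}(p),
\]
which is exactly $\widehat{V_Nf_N}(p)=2|p|^2 \widehat{(1-f_N)}(p)$.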
Finally, inserting \eqref{eq:HN>=HBog-1}-\eqref{eq:HN>=HBog-2} in \eqref{eq:HN>=HBog} we conclude that, for any $\mu<4\pi^2- 8\pi \mathfrak{a}$,
\begin{align} \label{eq:HN>=HBog-3}
H_N &\ge \begin{multlined}[t]
\mu \mathcal{N}_+ -\frac{N^2}{2}\int V_N f_N (1-f_N)\\
+ \frac{1}{2} (N-\mathcal{N}_+)(N-\mathcal{N}_+-1) \int (2f_N-f_N^2)V_N + O(1)
\end{multlined}\nonumber\\
& \ge (\mu-16\pi \mathfrak{a}) \mathcal{N}_+ + \frac{N^2}{2}\int f_N V_N + O(1)= (\mu-16\pi \mathfrak{a}) \mathcal{N}_+ + 4\pi \mathfrak{a} N + O(1)
\end{align}
where $\mathcal{N}_+:= N-a_0^*a_0$. If $\mathfrak{a}<\pi/6$, we can choose $\mu$ such that $16\pi \mathfrak{a}<\mu<4\pi^2- 8\pi \mathfrak{a}$ and conclude the proof of \eqref{eq:HN>=}.
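The exact diagonalization behind the abstract inequality above can be checked numerically on a truncated two-mode Fock space (a sanity check of ours, not part of the proof): for constants $A\ge B\ge 0$ and genuine mode operators with $[a,a^*]=[b,b^*]=1$, the ground state energy of $A(a^*a+b^*b)+B(a^*b^*+ab)$ is exactly $\sqrt{A^2-B^2}-A = -(A-\sqrt{A^2-B^2})$.

```python
import numpy as np

d = 40            # number of Fock states kept per mode (truncation)
A, B = 2.0, 1.0   # require A >= B >= 0

# single-mode annihilation operator: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
num = a.T @ a     # number operator a*a
I = np.eye(d)

# two-mode quadratic Hamiltonian A(a*a + b*b) + B(a*b* + ab)
H = (A * (np.kron(num, I) + np.kron(I, num))
     + B * (np.kron(a.T, a.T) + np.kron(a, a)))

E0 = np.linalg.eigvalsh(H)[0]        # lowest eigenvalue
exact = np.sqrt(A**2 - B**2) - A     # Bogoliubov ground state energy

assert abs(E0 - exact) < 1e-6
```

The truncation error is negligible here because the squeezed ground state has pair amplitudes decaying geometrically at rate $B/(A+\sqrt{A^2-B^2})$.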
\subsubsection*{Inhomogeneous case}
Using \eqref{eq:key-1-PPfV1-fPP} we obtain a lower bound of the form
\begin{align} \label{eq:HN>=first-bound-intro}
H_N &\ge N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2} \int_{\mathbb{R}^3} \left ((((2f_N-f_N^2)V_N)*\varphi_{\rm GP}^2)\varphi_{\rm GP}^2 \right) \nonumber\\
& \quad + \inf {\rm Spec} (\mathbb{H}_{\rm Bog}) + (\mu-\mu_1)\mathcal{N}_+ + O(1)
\end{align}
where
\[
\mathcal{N}_+ =\mathrm{d}\Gamma(Q), \quad \mu_1=\int_{\mathbb{R}^3} \left( |\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right)+ 32\pi \mathfrak{a} \norm{\varphi_{\rm GP}}_{L^\infty}^2,
\]
and $\mathbb{H}_{\mathrm{Bog}}$ is an operator on the excited Fock space $\mathcal{F}(Q L^2(\mathbb{R}^3))$ defined by
\begin{align} \label{eq:H-K-intro}
\mathbb{H}_{\mathrm{Bog}} &= \mathrm{d}\Gamma(H) + \frac{1}{2} \iint K(x,y) (a^*_x a^*_y + a_x a_y) \, \mathrm{d} x \, \mathrm{d} y,\nonumber\\
H &=Q(-\Delta+V_{\rm ext}-\mu)Q,\quad Q=\1-P\\
\intertext{and}
K(x,y) &=(Q\otimes Q \widetilde K (\cdot, \cdot) )(x,y), \quad \widetilde K(x,y)= \varphi_{\rm GP}(x) \varphi_{\rm GP}(y) (NV_Nf_N)(x-y). \nonumber
\end{align}
Note that \eqref{eq:gap-condition} allows us to choose $\mu>\mu_1$ such that $H> \|K\|_{\rm op}$, where $K$ is the operator with kernel $K(x,y)$. Therefore, in principle, the quadratic Hamiltonian $\mathbb{H}_{\mathrm{Bog}}$ can be diagonalized by a Bogoliubov transformation; see \cite{GS-13,BB-15,NNS-16,Der-17} for recent results. However, extracting an explicit lower bound is not straightforward. We will prove the following general lower bound, which is of independent interest,
\begin{equation} \label{eq:HBog>=abs-intro}
\mathbb{H}_{\rm Bog} \ge - \frac{1}{4} {\rm Tr}\,\left(H^{-1} K^2 \right) - C \norm{K}_{\rm op} \mathrm{Tr}(H^{-2} K^{2})
\end{equation}
(see Theorem \ref{thm:general-bound-HBog}). The simpler general lower bound
\begin{equation} \label{eq:Bog-half-lower-bound}
\mathbb{H}_{\rm Bog} \ge - \frac{1}{2} {\rm Tr}\,\left(H^{-1} K^2 \right)
\end{equation}
is well-known; see \cite[Theorem 5.4]{BD-07}, \cite[Theorem 2]{NNS-16} and \cite[Theorem 3.23]{Der-17}. The significance of \eqref{eq:HBog>=abs-intro} is that we get the optimal constant $(-1/4)$ for the main term, which is crucial for our application.
It remains to evaluate the right side of \eqref{eq:HBog>=abs-intro} for $H$ and $K$ in \eqref{eq:H-K-intro}. For a heuristic calculation, let us replace $H$ and $K$ by $-\Delta$ and $\widetilde K$, respectively. We can write
\[
\mathrm{Tr}\left((-\Delta)^{-1} \widetilde{K}^2\right) = N^2 {\rm Tr}\,\left(\varphi_{\rm GP}(x) \widehat{(f_NV_N)} (p) \varphi_{\rm GP}(x) p^{-2} \varphi_{\rm GP}(x) \widehat{(f_NV_N)} (p) \varphi_{\rm GP}(x) \right)
\]
where $\varphi_{\rm GP}(x)$ is a multiplication operator in position space, while $\widehat{(f_NV_N)}(p)$ and $p^{-2}$ are multiplication operators in momentum space. If we could commute $\varphi_{\rm GP}(x)$ and $p^{-2}$, then the above trace would become
\begin{multline*}
N^2 {\rm Tr}\,\left(\varphi_{\rm GP} (x)\widehat{(f_NV_N)} (p) \varphi_{\rm GP}^2(x) |p|^{-2} \widehat{(f_NV_N)} (p) \varphi_{\rm GP}(x) \right) \\
\begin{aligned}
&=2 N^2 {\rm Tr}\,\left(\varphi_{\rm GP}^2 (x)\widehat{(f_NV_N)} (p) \varphi_{\rm GP}^2(x) \widehat{(1-f_N)} (p) \right) \\
&= 2N^2 \iint \varphi_{\rm GP}^2 (x) (V_Nf_N(1-f_N))(x-y) \varphi_{\rm GP} ^2 (y) \mathrm{d} x \mathrm{d} y.
\end{aligned}
\end{multline*}
Here we have used $\widehat{V_Nf_N}(p)=2|p|^2 \widehat{(1-f_N)}(p)$ thanks to the scattering equation \eqref{eq:scat-intro-1} and the equality between the Hilbert--Schmidt norm of operators and the $L^2$-norm of operator kernels. This heuristic calculation can be made rigorous by using the Kato--Seiler--Simon inequality \cite[Theorem 4.1]{Sim-05} to control several commutators. We can also bound $\mathrm{Tr}(H^{-2} K^{2})$ by $O(1)$ in the same way.
In summary, we obtain from \eqref{eq:HN>=first-bound-intro} and \eqref{eq:HBog>=abs-intro} that
\begin{align*}
H_N &\ge \begin{multlined}[t]
N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2} \int_{\mathbb{R}^3} \left(\left(\left(\left(2f_N-f_N^2\right)V_N\right)*\varphi_{\rm GP}^2\right)\varphi_{\rm GP}^2 \right)\\
- \frac{N^2}{2} \int \left(\left(V_Nf_N\left(1-f_N\right)\right)* \varphi_{\rm GP}^2\right) \varphi_{\rm GP}^2 + (\mu-\mu_1)\mathcal{N}_++ O(1)
\end{multlined}\\
&= \begin{multlined}[t]
N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2}\int \left(\left(V_Nf_N\right)*\varphi_{\rm GP}^2\right) \varphi_{\rm GP}^2 \\
+ (\mu-\mu_1)\mathcal{N}_+ + O(1)
\end{multlined}\\
&= N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + 4\pi \mathfrak{a} \int \varphi_{\rm GP}^4 + (\mu-\mu_1)\mathcal{N}_+ + O(1).
\end{align*}
Here we have used $NV_Nf_N\approx 8\pi \mathfrak{a} \delta_0$ in the last estimate. Thus \eqref{eq:HN>=} holds true.
The energy upper bound $E_N\le Ne_{\rm GP}+O(1)$ is a separate issue, which is conceptually easier. It is known that in the Fock space setting, a good trial state is of the form
\[ W(\sqrt{N}\varphi_{\rm GP}) \Gamma' W(\sqrt{N}\varphi_{\rm GP})^* \]
where $W(g)=e^{a(g)-a^*(g)}$ is the Weyl operator and $\Gamma'$ is an appropriate quasi-free state; see Benedikter--Porta--Schlein \cite[Appendix A]{BPS-16} (similar constructions in the homogeneous case can be found in \cite{ESY-08,NRS-18}). This construction can be adapted to the $N$-particle Hilbert space, using the unitary operator $U_N$ introduced by Lewin--Nam--Serfaty--Solovej \cite{LNSS-15} instead of the Weyl operator and a modified version of $\Gamma'$.
\bigskip
\noindent{\bf Organization of the paper.} In Section \ref{sec:pre} we recall some standard facts on the scattering length, Gross--Pitaevskii theory, Fock space formalism and quasi-free states. Then we prove the operator lower bound \eqref{eq:HN>=} in Section \ref{sec:inhom} and the energy upper bound in Section \ref{sec:upp}.
\bigskip
\noindent{\bf Acknowledgments.} We thank S\o ren Fournais, Jan Philip Solovej and Benjamin Schlein for helpful discussions. We thank the referees for constructive comments and suggestions. We received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (EXC-2111-390814868), and from the National Science Centre (NCN project Nr. 2016/21/D/ST1/02430).
\section{Preliminaries} \label{sec:pre}
\subsection{Scattering length} We recall some basic properties of the scattering length from \cite[Appendix C]{LSSY-05}. Under our assumptions on the potential $V$, the scattering problem \eqref{eq:var scat} has a unique minimizer $f$. The minimizer is radially symmetric, $0\le f\le 1$ and
\begin{equation} \label{eq:scattering-int}
(-2\Delta+V(x))f(x)=0, \quad 8\pi \mathfrak{a} = \int V f.
\end{equation}
Moreover, the function $\omega=1-f$ vanishes at infinity, more precisely
$$
0\le \omega(x) \le \frac{C}{|x|+1}, \quad \forall x\in \mathbb{R}^3.
$$
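In fact, since $V$ has compact support, $f$ is explicit outside the range of $V$ (see \cite[Appendix C]{LSSY-05}): if ${\rm supp}\, V \subset \{|x|\le R_0\}$, then $f$ is radial and harmonic there, hence
\[
f(x)=1-\frac{\mathfrak{a}}{|x|}, \qquad \omega(x)=\frac{\mathfrak{a}}{|x|} \qquad \text{for } |x|\ge R_0,
\]
which is consistent with the decay bound above and may serve as a definition of the scattering length $\mathfrak{a}$.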
By scaling, the function $f_N(x)=f(Nx)$ solves the scattering problem for $V_N(x)=N^2V(Nx)$, namely
\begin{equation} \label{eq:scatering-eq-N}
(-2\Delta+V_N(x))f_N(x)=0, \quad \frac{8\pi \mathfrak{a}}{N} = \int V_N f_N.
\end{equation}
Thus the function $\omega_N=1-f_N$ vanishes at infinity,
\begin{equation}\label{eq:bounds_on_w}
0\le \omega_N(x) \le \frac{C}{|Nx|+1},
\end{equation}
and satisfies
\begin{equation} \label{eq:scatering-eq-N-omega}
- 2 \Delta \omega_N = V_Nf_N \quad \text{ in } \mathbb{R}^3.
\end{equation}
\subsection{Gross--Pitaevskii theory} Under our assumption on the external potential $V_{\rm ext}$, the minimization problem \eqref{eq:eGP} has a minimizer $\varphi_{\rm GP} \ge 0$ under the constraint
\[ \varphi_{\rm GP} \in H^1(\mathbb{R}^3), \quad \int |\varphi_{\rm GP}|^2=1, \quad \int V_{\rm ext} |\varphi_{\rm GP}|^2 <\infty. \]
Moreover, the minimizer is unique (up to a constant phase) and satisfies the Euler--Lagrange equation \eqref{eq:GP-equation}; see \cite[Appendix A]{LSY-00} for details.
From \eqref{eq:GP-equation} and the fact $H^1(\mathbb{R}^3)\subset L^6(\mathbb{R}^3)$, we find that $(-\Delta+V_{\rm ext})\varphi_{\rm GP} \in L^2(\mathbb{R}^3)$. Under the extra condition $|\nabla V_{\rm ext}|^2 \le 2V_{\rm ext}^3 + C$ we can show that $\Delta\varphi_{\rm GP}\in L^2(\mathbb{R}^3)$ as follows. Replacing $V_{\rm ext}$ by $V_{\rm ext}+1$ if necessary, we can assume that $V_{\rm ext} \ge 1$. By the IMS formula
\begin{align} \label{eq:IMS-formula}
g^2(x) (-\Delta)+ (-\Delta) g^2(x) = 2g(x) (-\Delta)g(x) - 2 |\nabla g(x)|^2, \quad \forall 0\le g\in H^1,
\end{align}
we can write
\begin{align*}
(-\Delta+V_{\rm ext})^2 &= \Delta^2 + V_{\rm ext}^2 + V_{\rm ext} (-\Delta) + (-\Delta) V_{\rm ext} \\
&= \Delta^2 + V_{\rm ext}^2 + 2\sqrt{V_{\rm ext}} (-\Delta) \sqrt{V_{\rm ext}} - 2 | \nabla \sqrt{V_{\rm ext}}|^2.
\end{align*}
The condition $|\nabla V_{\rm ext}|^2 \le 2V_{\rm ext}^3 + C$ ensures that
$$
2 | \nabla \sqrt{V_{\rm ext}}|^2 = \frac{|\nabla V_{\rm ext}|^2}{2V_{\rm ext}} \le V_{\rm ext}^2 +C.
$$
Therefore, we conclude that
\begin{equation} \label{eq:H^2>Delta^2}
(-\Delta+V_{\rm ext})^2 \ge \Delta^2 -C \quad \text{ on }L^2(\mathbb{R}^3).
\end{equation}
Consequently, from $(-\Delta+V_{\rm ext})\varphi_{\rm GP} \in L^2(\mathbb{R}^3)$ we deduce that $\Delta \varphi_{\rm GP}\in L^2(\mathbb{R}^3)$. In summary, we have $\varphi_{\rm GP} \in H^2(\mathbb{R}^3)\subset L^\infty(\mathbb{R}^3)$.
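As a concrete illustration (not needed in the sequel), the harmonic trap $V_{\rm ext}(x)=|x|^2$ satisfies all of the above assumptions: it is in $C^1(\mathbb{R}^3)$, tends to infinity at infinity, and
\[
|\nabla V_{\rm ext}(x)|^2 = 4|x|^2 \le 2|x|^6 + C = 2V_{\rm ext}(x)^3 + C,
\]
since $t\mapsto 4t-2t^3$ is bounded from above on $[0,\infty)$.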
\begin{remark}
We can replace our assumption $|\nabla V_{\rm ext}|^2 \le 2V_{\rm ext}^3 + C$ by \eqref{eq:H^2>Delta^2}, or slightly more general
$$
(-\Delta+V_{\rm ext})^2 \ge C^{-1}\Delta^2 -C \quad \text{ on }L^2(\mathbb{R}^3).
$$
This kind of condition is natural to ensure that the operator domain $D(-\Delta+V_{\rm ext})$ is a subspace of $H^2(\mathbb{R}^3)$. In general, if $0\le V_{\rm ext}\in L^2_{\rm loc}(\mathbb{R}^d)$, then $-\Delta+V_{\rm ext}$ is essentially self-adjoint with core domain $C_c^\infty(\mathbb{R}^d)$ by Kato's theorem \cite[Theorem X.28]{Reed-Simon-Vol2}, but $D(-\Delta+V_{\rm ext})$ may be different from $D(-\Delta)\cap D(V_{\rm ext})$ \cite[Theorem X.32]{Reed-Simon-Vol2}. See \cite{Glimm-Jaffe,Davies} for further discussions in this direction.
\end{remark}
\subsection{Fock space formalism} Let $\mathfrak{H}=L^2(\mathbb{R}^d)$ (or a closed subspace of $L^2(\mathbb{R}^d)$) be the Hilbert space of one particle. The bosonic Fock space is defined by
\[ \mathcal{F}(\mathfrak{H})= \bigoplus_{n=0}^\infty \mathfrak{H}^{\otimes_s n} \]
where the number of particles can vary. For any $g\in \mathfrak{H}$, we can define the creation and annihilation operators $a^*(g)$, $a(g)$ on Fock space by
\begin{align*}
(a^* (g) \Psi )(x_1,\dots,x_{n+1})&= \frac{1}{\sqrt{n+1}} \sum_{j=1}^{n+1} g(x_j)\Psi(x_1,\dots,x_{j-1},x_{j+1},\dots, x_{n+1}), \\
(a(g) \Psi )(x_1,\dots,x_{n-1}) &= \sqrt{n} \int_{\mathbb{R}^d} \overline{g(x_n)}\Psi(x_1,\dots,x_n) \mathrm{d} x_n, \quad \forall \Psi \in \mathfrak{H}^{\otimes_s n},\, \forall n.
\end{align*}
These operators satisfy the canonical commutation relations
\[
[a(g_1),a(g_2)]=[a^*(g_1),a^*(g_2)]=0,\quad [a(g_1), a^* (g_2)]= \langle g_1, g_2 \rangle, \quad \forall g_1,g_2 \in \mathfrak{H}.
\]
We may also define
the operator-valued distributions $a_x^*$ and $a_x$, with $x\in \mathbb{R}^d$, by
$$
a_x^*= \sum_{n=1}^\infty \overline{f_n(x)} a^*(f_n), \quad a_x= \sum_{n=1}^\infty f_n(x) a(f_n)
$$
where $\{f_n\}_{n=1}^\infty$ is an orthonormal basis of $\mathfrak{H}$ (the definition is independent of the choice of the basis). Equivalently, we have
\[
a^*(g)=\int_{\mathbb{R}^d} g(x) a_x^* \mathrm{d} x, \quad a(g)=\int_{\mathbb{R}^d} \overline{g(x)} a_x \mathrm{d} x, \quad \forall g\in \mathfrak{H}.
\]
The canonical commutation relations can be rewritten as
\[ [a^*_x,a^*_y]=[a_x,a_y]=0, \quad [a_x,a^*_y]=\delta(x-y), \quad \forall x,y\in \mathbb{R}^d. \]
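A minimal numerical illustration of this formalism (ours, with an ad hoc truncation): on a truncated single-mode Fock space, the matrices of $a$ and $a^*$ satisfy the canonical commutation relation exactly below the truncation edge.

```python
import numpy as np

d = 12  # keep the number states |0>, ..., |d-1>

# annihilation operator: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
a_dag = a.T                    # creation operator a*

comm = a @ a_dag - a_dag @ a   # commutator [a, a*]

# [a, a*] = 1 holds exactly away from the truncation edge
assert np.allclose(comm[:d-1, :d-1], np.eye(d-1))

# number operator a*a is diagonal with eigenvalues 0, 1, ..., d-1
assert np.allclose(np.diag(a_dag @ a), np.arange(d))
```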
These creation and annihilation operators can be used to express several important observables. For example, the particle number operator can be written as
\[
\mathcal{N} := \bigoplus_{n=0}^\infty n \1_{\mathfrak{H}^{\otimes_s n}} = \sum_n a^*(u_n) a(u_n) =\int_{\mathbb{R}^d} a_x^* a_x \mathrm{d} x.
\]
Here $\{u_n\}$ is any orthonormal basis for $\mathfrak{H}$. More generally, for any one-body self-adjoint operator $A$ we have
\[
\mathrm{d} \Gamma(A):= \bigoplus_{n=0}^\infty \left( \sum_{i=1}^n A_{x_i} \right) = \sum_{m,n} \langle u_m, A u_n\rangle a^*(u_m) a(u_n) = \int_{\mathbb{R}^d} a^*_x A_x a_x \mathrm{d} x.
\]
For $H_N$ in \eqref{eq:HN}, we can write
\begin{align} \label{eq:2nd-Q}
H_N &= \mathrm{d} \Gamma (-\Delta + V_{\rm ext}) + \frac{1}{2} \sum_{m,n,p,q} \langle u_m\otimes u_n, V_N u_p\otimes u_q\rangle a^*(u_m)a^*(u_n)a(u_p) a(u_q)\nonumber\\
&= \mathrm{d} \Gamma (-\Delta + V_{\rm ext}) + \frac{1}{2}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} V_N(x-y) a_x^* a_y^* a_x a_y \mathrm{d} x \mathrm{d} y.
\end{align}
\subsection{Quasi-free states} Let $\Gamma$ be a (mixed) state on Fock space with finite particle number expectation, namely $\langle \mathcal{N} \rangle_\Gamma = \mathrm{Tr}(\mathcal{N} \Gamma)<\infty$. We call $\Gamma$ a {\em quasi-free state} if it satisfies Wick's Theorem:
\begin{align*}
&\langle a^{\#}(f_{1}) a^{\#}(f_{2}) \cdots a^{\#}(f_{2n}) \rangle_{\Gamma} = \sum_{\sigma} \prod_{j=1}^n \langle a^{\#}(f_{\sigma(2j-1)}) a^{\#}(f_{\sigma(2j)}) \rangle_{\Gamma} , \\%\label{eq:Wick-2}
&\langle a^{\#}(f_{1}) a^{\#}(f_{2}) \cdots a^{\#}(f_{2n-1}) \rangle_{\Gamma} = 0, \quad \forall f_1,\dots,f_{2n}\in \mathfrak{H}, \forall n\in \mathbb{N}.
\end{align*}
Here $a^{\#}$ is either the creation or annihilation operator and the sum is taken over all permutations $\sigma$ satisfying $\sigma(2j-1)<\min\{\sigma(2j),\sigma(2j+1) \}$ for all $j$.
By definition, any quasi-free state is determined uniquely by its one-body density matrices $(\gamma_\Gamma,\alpha_\Gamma)$, where $\gamma_\Gamma: \mathfrak{H}\to \mathfrak{H}$ and $\alpha_\Gamma:\mathfrak{H}\to \mathfrak{H}^* \equiv \overline{\mathfrak{H}}$ are defined by
\[
\left\langle {g_1,{\gamma _\Gamma }g_2} \right\rangle = \left\langle {{a^*}(g_2)a(g_1)} \right\rangle_\Gamma,\quad \left\langle {\overline{g_1}, \alpha _\Gamma {g_2} } \right\rangle = \left\langle {a^*(g_2)a^*(g_1)} \right\rangle_\Gamma, \quad \forall g_1,g_2 \in \mathfrak{H}.
\]
It is well-known (see e.g. \cite[Theorem 3.2]{Nam-11}) that any given operators $(\gamma,\alpha)$, with $\gamma: \mathfrak{H}\to \mathfrak{H}$ and $\alpha: {\mathfrak{H}} \to \mathfrak{H}^* \equiv \overline{\mathfrak{H}}$, are the one-body density matrices of a (mixed) quasi-free state with finite particle number expectation if and only if
\begin{align} \label{eq:1-pdm-quasi}
\gamma\ge 0, \quad \mathrm{Tr} \gamma <\infty, \quad \overline{\alpha}=\alpha^*, \quad \begin{pmatrix}
\gamma & \alpha^* \\
\alpha & 1 + \overline{\gamma}
\end{pmatrix} \ge 0 \quad \text{on } \mathfrak{H} \oplus \mathfrak{H}^*.
\end{align}
Here we write $\overline{A}=JAJ$ for short, with $J$ the complex conjugation, namely $\overline{A} g = \overline{(A \overline{g})}$.
The reader may think of the quasi-free states as ``Gaussian quantum states''. In particular, the contribution of sectors with high particle numbers decays very fast. In fact, if $\Gamma$ is a quasi-free state, then
\begin{align} \label{eq:fluc-N}
\langle \mathcal{N}^\ell \rangle_\Gamma \le C_\ell (1+ \langle \mathcal{N}\rangle_\Gamma )^\ell, \quad \forall \ell\ge 1.
\end{align}
Here the constant $C_\ell$ depends only on $\ell$ (see \cite[Lemma 5]{NN-17}).
\section{Lower bound} \label{sec:inhom}
In this section, we will prove the operator lower bound \eqref{eq:HN>=}.
\begin{lemma}[Lower bound]
\label{lem:lwb} Let $V_{\rm ext}$ and $V$ be as in Theorem~\ref{thm:main}, where the scattering length $\mathfrak{a}$ of $V$ is small so that \eqref{eq:gap-condition} holds true. Then
\[
H_N \ge N e_{\rm GP} + C^{-1}\sum_{i=1}^N Q_{x_i} -C \quad \text{ on }\quad L^2(\mathbb{R}^3)^{\otimes_s N}
\]
with $Q= \1 - |\varphi_{\rm GP}\rangle \langle \varphi_{\rm GP}|$. The constant $C>0$ is independent of $N$.
\end{lemma}
\subsection{Reduction to quadratic Hamiltonian} Denote
\begin{align}
\mu_1 &:= \int_{\mathbb{R}^3} \left( |\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right)+ 32\pi \mathfrak{a} \norm{\varphi_{\rm GP}}_{L^\infty}^2, \label{eq:def-mu1}\\
\mu_2 &:= \inf_{\substack{u\bot \varphi_{\rm GP} \\ \norm{u}_{L^2}=1}} \int_{\mathbb{R}^3} \left( |\nabla u |^2 + V_{\rm ext} |u|^2 \right) - 8\pi \mathfrak{a} \norm{\varphi_{\rm GP}}_{L^\infty}^2. \label{eq:def-mu2}
\end{align}
Note that $\mu_1<\mu_2$ thanks to \eqref{eq:gap-condition}. Our starting point is
\begin{lemma} \label{lem:red-qua} Let $\mu_1<\mu<\mu_2$. Then under the notations in \eqref{eq:H-K-intro} we have
\begin{align} \label{eq:HN>=first-bound}
H_N &\ge N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2} \int_{\mathbb{R}^3} \left ((((2f_N-f_N^2)V_N)*\varphi_{\rm GP}^2)\varphi_{\rm GP}^2 \right) \nonumber\\
& \quad + (\mu- \mu_1) \mathcal{N}_+ + \inf {\rm Spec} (\mathbb{H}_{\rm Bog})- C.
\end{align}
\end{lemma}
\begin{proof} We write $\varphi=\varphi_{\rm GP}$ for short. Denote $P= |\varphi \rangle \langle \varphi|=\1-Q$. Let $f_N$ be the scattering solution of $V_N(x)=N^2V(Nx)$ as in \eqref{eq:scatering-eq-N}.
Expanding the operator inequality \eqref{eq:key-1-PPfV1-fPP}, we obtain
\begin{align} \label{eq:VN>=inhomogeneous}
V_N &\ge P\otimes P \left(2f_N- f_N^2\right) V_N P \otimes P + \left(P\otimes P f_N V_N Q \otimes Q + {\rm h.c.} \right) \nonumber \\
&\quad + \left( P\otimes P f_N V_N P\otimes Q + P\otimes P f_N V_N Q\otimes P + {\rm h.c.} \right).
\end{align}
Let $\{\varphi_n\}_{n=0}^\infty$ be an orthonormal basis for $L^2(\mathbb{R}^3)$ with $\varphi_0=\varphi$ and denote $a_n:=a(\varphi_n)$.
From \eqref{eq:VN>=inhomogeneous} we have the operator inequality in $L^2(\mathbb{R}^3)^{\otimes_s N}$:
\begin{equation} \label{eq:HN>H0H1H2}
H_N\ge \mathcal{H}_0 + \mathcal{H}_1 + \mathcal{H}_2
\end{equation}
where
\begin{align*}
\mathcal{H}_0 & = \int_{\mathbb{R}^{3}} (|\nabla \varphi |^2 + V_{\rm ext} \varphi^2)\, a_0^* a_0 + \frac{1}{2} \int_{\mathbb{R}^{3}} (((2f_N-f_N^2)V_N)*\varphi^2)\varphi^2\, a_0^* a_0^* a_0 a_0,\\
\mathcal{H}_1 & = a^*(Q(-\Delta + V_{\rm ext}) \varphi) a_0 + a^*(Q( (f_N V_N) \ast\varphi^2) \varphi) a_0^* a_0 a_0 + {\rm h.c.},\\
\mathcal{H}_2 & = \frac{1}{2}\sum_{m,n\ge 1} \left( \langle \varphi_m, (-\Delta + V_{\rm ext}) \varphi_n \rangle a_m^* a_n + N^{-1}\langle \varphi_m\otimes \varphi_n, K \rangle a_m^* a_n^* a_0 a_0 + {\rm h.c.}\right).
\end{align*}
\subsubsection*{Analysis of $\mathcal{H}_0$.} Using $a_0^* a_0 = N - \mathcal{N}_+$ on $L^2(\mathbb{R}^3)^{\otimes_s N}$ we have
\begin{align*}
\mathcal{H}_0 & =\begin{multlined}[t]
\int_{\mathbb{R}^{3}} (|\nabla \varphi |^2 + V_{\rm ext} \varphi^2) (N-\mathcal{N}_+)\\
+ \frac{1}{2} \int_{\mathbb{R}^{3}} (((2f_N-f_N^2)V_N)*\varphi^2)\varphi^2 (N-\mathcal{N}_+)(N-\mathcal{N}_+-1)
\end{multlined}\\
&\ge\begin{multlined}[t]
N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi |^2 + V_{\rm ext} \varphi^2 \right) + \frac{N^2-N}{2} \int_{\mathbb{R}^3} \left ((((2f_N-f_N^2)V_N)*\varphi^2)\varphi^2 \right) \\
- \left( \int_{\mathbb{R}^{3}} (|\nabla \varphi |^2 + V_{\rm ext} \varphi^2) + N \int_{\mathbb{R}^3} \left ((((2f_N-f_N^2)V_N)*\varphi^2)\varphi^2 \right) \right) \mathcal{N}_+.
\end{multlined}
\end{align*}
Then using
\[
0\le \int_{\mathbb{R}^{3}} N(((2f_N-f_N^2)V_N)*\varphi^2)\varphi^2 \le 2N \norm{f_NV_N}_{L^1} \norm{\varphi}_{L^4}^4 \le 16 \pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2
\]
and the definition of $\mu_1$ in \eqref{eq:def-mu1}, we obtain
\begin{align} \label{eq:H0>=}
\mathcal{H}_0 &\ge N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi |^2 + V_{\rm ext} \varphi^2 \right) + \frac{N^2}{2} \int_{\mathbb{R}^3} \left ((((2f_N-f_N^2)V_N)*\varphi^2)\varphi^2 \right) \nonumber \\
& \quad + (16\pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2 -\mu_1 ) \mathcal{N}_+ - C.
\end{align}
\subsubsection*{Analysis of $\mathcal{H}_1$.} We have
\begin{align} \label{eq:cH1-0}
\mathcal{H}_1 & = a^*(Q(-\Delta + V_{\rm ext}) \varphi) a_0 + a^*(Q( (f_N V_N) \ast\varphi^2) \varphi) (N-\mathcal{N}_+) a_0 + {\rm h.c.} \nonumber\\
&= a^* (Q (-\Delta + V_{\rm ext}+ (N f_N V_N) \ast\varphi^2 )\varphi) a_0 - a^*(Q( (f_N V_N) \ast\varphi^2) \varphi) \mathcal{N}_+ a_0 + {\rm h.c.}\nonumber\\
&= a^* (Q ((N f_N V_N) \ast\varphi^2 - 8\pi \mathfrak{a} \varphi^2 )\varphi) a_0 - a^*(Q( (f_N V_N) \ast\varphi^2) \varphi) \mathcal{N}_+ a_0 + {\rm h.c.}
\end{align}
Here in the last equality we have used the Gross--Pitaevskii equation \eqref{eq:GP-equation}. For the first term on the right side of \eqref{eq:cH1-0}, denoting
\[ g=Q ((N f_N V_N) \ast\varphi^2 - 8\pi \mathfrak{a} \varphi^2 )\varphi \]
we have
\begin{align*}
\norm{g}_{L^2} & \le \norm{(N f_N V_N) \ast\varphi^2 - 8\pi \mathfrak{a} \varphi^2}_{L^\infty} \\
&\le \norm{N (f_N V_N*\varphi^2)^{\wedge} - 8\pi \mathfrak{a} \widehat{\varphi^2}}_{L^1}= \int_{\mathbb{R}^3} | \widehat {fV}(p/N) - \widehat {fV}(0) | |\widehat{\varphi^2}(p)| \mathrm{d} p \\
&\le \frac{\|\nabla_p \widehat {fV}\|_{L^\infty}}{N} \int_{\mathbb{R}^3} |p| |\widehat{\varphi^2}(p)| \mathrm{d} p \le \frac{1}{N} \norm{|x| fV}_{L^1} \norm{\varphi^2}_{H^{1/2}}\le \frac{C}{N}.
\end{align*}
Therefore, by the Cauchy--Schwarz inequality
\[
\pm ( a^*(g) a_0 + a_0^* a(g)) \le N a^*(g) a(g) + N^{-1} a_0^* a_0 \le N \norm{g}_{L^2}^2 \mathcal{N}_+ + 1 \le C.
\]
For the second term on the right side of \eqref{eq:cH1-0}, we use
\[
\norm{Q ((f_N V_N) \ast\varphi^2)\varphi}_{L^2} \le \norm{(f_N V_N) \ast \varphi^2}_{L^\infty} \le \norm{f_N V_N}_{L^1} \norm{\varphi^2}_{L^\infty} \le \frac{8\pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2 }{N},
\]
and the Cauchy--Schwarz inequality
\begin{multline} \label{eq:cH1-0-second}
\pm \left( a^*(Q( (f_N V_N) \ast\varphi^2) \varphi)) \mathcal{N}_+ a_0 + {\rm h.c.} \right)\\
\begin{aligned}[b]
&\le\varepsilon^{-1} a^*(Q( (f_N V_N) \ast\varphi^2) \varphi) a(Q( (f_N V_N) \ast\varphi^2) \varphi) + \varepsilon a_0^* \mathcal{N}_+^2 a_0\\
&\le \varepsilon^{-1} \norm{Q((f_N V_N) \ast\varphi^2) \varphi}_{L^2}^2 \mathcal{N}_+ + \varepsilon N \mathcal{N}_+^2\\
&\le \varepsilon^{-1} \left( \frac{8\pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2 }{N} \right)^2 \mathcal{N}_+ + \varepsilon N^2 \mathcal{N}_+.
\end{aligned}
\end{multline}
Optimizing over $\varepsilon>0$ we can replace the right side of \eqref{eq:cH1-0-second} by $16\pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2 \mathcal{N}_+$. Thus
\begin{equation} \label{eq:H1>=}
\mathcal{H}_1 \ge - 16\pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2 \mathcal{N}_+ - C.
\end{equation}
\subsubsection*{Analysis of $\mathcal{H}_2$} We will prove that
\begin{equation} \label{eq:H2>=}
\mathcal{H}_2 -\mu \mathcal{N}_+ \ge \inf {\rm Spec} (\mathbb{H}_{\rm Bog}).
\end{equation}
The main difficulty in \eqref{eq:H2>=} is to remove the factors $N^{-1}a^*_0a^*_0$ and $N^{-1}a_0a_0$ in $\mathcal{H}_2$.
Recall the operators $H,K$ defined in \eqref{eq:H-K-intro} and denote $\mathfrak{H}_+:= QL^2(\mathbb{R}^3)$. For any (mixed) state $\Gamma$ on $L^2(\mathbb{R}^3)^{\otimes_s N}$, we can write
\begin{equation*}
\braket{\mathcal{H}_2-\mu \mathcal{N}_+}_{\Gamma} = {\rm Tr}\, (H \gamma) + \Re {\rm Tr}\, (K\alpha)
\end{equation*}
where $K: \mathfrak{H}_+^* \to \mathfrak{H}_+$ is the operator with kernel $K(x,y)$ and $\gamma : \mathfrak{H}_+ \to \mathfrak{H}_+$, $\alpha : \mathfrak{H}_+ \to \mathfrak{H}_+^* \equiv \overline{\mathfrak{H}_+}$ are operators defined by
\begin{equation*}
\braket{g_1,\gamma g_2} = \braket{ a^*(g_2) a(g_1)}_\Gamma, \quad \braket{\overline{g_1}, \alpha g_2} = N^{-1}\braket{ a^*(g_2) a^*(g_1) a_0a_0}_\Gamma, \quad \forall g_1,g_2\in \mathfrak{H}_+.
\end{equation*}
Then we have $\gamma\ge 0$, $\mathrm{Tr} \gamma= \langle \mathcal{N}_+\rangle_{\Gamma}<\infty$ and $\alpha^*=\overline{\alpha}$. Moreover, for all $g_1,g_2\in \mathfrak{H}_+$, by the Cauchy--Schwarz inequality, we have
\begin{align*}
\pm(a^*(g_1)a^*(g_2) a_0 a_0 + {\rm h.c.}) &\le a^*(g_1) a_0 (a^*(g_1)a_0)^*+ (a^*(g_2)a_0)^* a^*(g_2)a_0 \\
&= a^* (g_1) a(g_1) (N-\mathcal{N}_+ +1) + a(g_2) a^*(g_2) (N-\mathcal{N}_+)\\
&\le N a^* (g_1)a(g_1) + N a(g_2) a^*(g_2).
\end{align*}
Here we have used $a^*(g_1)a(g_1)(\mathcal{N}_+-1)\ge 0$. Consequently, for all $g_1,g_2\in \mathfrak{H}_+$,
\begin{multline*}
\left \langle
\begin{pmatrix}
g_1 \\
\overline{g_2}
\end{pmatrix},
\begin{pmatrix}
\gamma & \alpha^* \\
\alpha & 1 + \overline{\gamma}
\end{pmatrix}
\begin{pmatrix}
g_1 \\
\overline{g_2}
\end{pmatrix}
\right\rangle_{\mathfrak{H}_+\oplus \mathfrak{H}_+^*} = \langle g_1, \gamma g_1 \rangle + \langle g_2, (1+\gamma) g_2\rangle + \langle \overline{g_2}, \alpha {g_1} \rangle + \langle g_1, \alpha^* \overline{g_2}\rangle \\
=\left\langle a^*(g_1)a(g_1) + a(g_2) a^*(g_2) + N^{-1} a^*(g_2) a^*(g_1) a_0 a_0 + N^{-1} a_0^* a_0^* a(g_2)a(g_1) \right\rangle_{\Gamma} \ge 0.
\end{multline*}
Thus $(\gamma,\alpha)$ satisfies the conditions in \eqref{eq:1-pdm-quasi}. Hence, there exists a mixed quasi-free state $\Gamma'$ on Fock space $\mathcal{F}(\mathfrak{H}_+)$ such that $(\gamma,\alpha)$ are its one-body density matrices. Therefore,
\[
\braket{\mathcal{H}_2-\mu \mathcal{N}_+}_{\Gamma} = {\rm Tr}\, (H \gamma) + \Re {\rm Tr}\, (K\alpha) = {\rm Tr}\, \left( \mathbb{H}_{\mathrm{Bog}} \Gamma' \right) \ge \inf {\rm Spec} (\mathbb{H}_{\rm Bog}).
\]
Thus \eqref{eq:H2>=} holds true.
\subsubsection*{Conclusion} Inserting \eqref{eq:H0>=}, \eqref{eq:H1>=} and \eqref{eq:H2>=} in \eqref{eq:HN>H0H1H2} we obtain \eqref{eq:HN>=first-bound}.
\end{proof}
\subsection{A general bound for quadratic Hamiltonians} Now we prove a general lower bound on quadratic Hamiltonians on Fock space, which is of independent interest.
\begin{theorem} [Lower bound for quadratic Hamiltonians] \label{thm:general-bound-HBog} Let $\mathfrak{K}$ be a closed subspace of $L^2(\mathbb{R}^d)$. Let $K$ be a self-adjoint bounded operator on $\mathfrak{K}$ with real-valued symmetric kernel $K(x,y)=K(y,x)$. Let $H>0$ be a self-adjoint operator on $\mathfrak{K}$ such that $H$ has compact resolvent with real-valued eigenfunctions, $H^{-1/2}K$ is a Hilbert--Schmidt operator, and $H \ge (1+\varepsilon) \|K\|_{\rm op}$ for a constant $\varepsilon>0$. Then
\begin{equation} \label{eq:quad-Hamil-lwb-general}
\mathrm{d}\Gamma(H) + \frac{1}{2} \iint K(x,y) (a_x^* a_y^* + a_x a_y) \mathrm{d} x \mathrm{d} y \ge - \frac{1}{4} {\rm Tr}\,\left(H^{-1} K^2 \right) - C_\varepsilon \norm{K}_{\rm op} \mathrm{Tr}(H^{-2} K^{2})
\end{equation}
on the Fock space $\mathcal{F}(\mathfrak{K})$. Here the constant $C_\varepsilon>0$ depends only on $\varepsilon$.
\end{theorem}
\begin{remark} The constant $-1/4$ is optimal. In fact, if $H$ and $K$ commute, then the ground state energy of the quadratic Hamiltonian is
\[
\frac{1}{2} \mathrm{Tr} \left(\sqrt{H^2-K^2}-H\right) = - \frac{1}{2} \mathrm{Tr} \left( \frac{K^2}{H+ \sqrt{H^2-K^2}}\right)
\]
which is close to $-(1/4) \mathrm{Tr}(H^{-1}K^2)$ when $H$ is significantly bigger than $K$.
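Indeed, when $H$ and $K$ commute they can be diagonalized simultaneously, and the quadratic Hamiltonian decouples into independent modes of the form
\[
h\, a^*a + \frac{k}{2}\left(a^*a^* + aa\right), \quad h>0,\ |k|< h,
\]
each of which is diagonalized by a Bogoliubov transformation with ground state energy $\frac{1}{2}\big(\sqrt{h^2-k^2}-h\big)$; summing over the modes gives the displayed formula.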
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:general-bound-HBog}] First, let us assume that $K$ is trace class. Then following the analysis of Grech--Seiringer \cite[Section 4]{GS-13}, we see that the ground state energy of the quadratic Hamiltonian in \eqref{eq:quad-Hamil-lwb-general} is $\frac{1}{2}\mathrm{Tr}(E-H)$ where
\[
E:=(D^{1/2}(D+2K) D^{1/2})^{1/2}, \quad D:=H-K \ge 0.
\]
Using the formula
$$x= \frac{2}{\pi} \int_{0}^\infty \frac{x^2}{x^2+t^2} \mathrm{d} t, \quad \forall x\ge 0,$$
and the resolvent identity, we can rewrite
\begin{align} \label{eq:deal-I-II-III}
E-D &= \frac{2}{\pi} \int_{0}^\infty \left(\frac{1}{D^2 + t^2} - \frac{1}{E^2+t^2}\right)t^2 \mathrm{d} t = \frac{2}{\pi} \int_{0}^\infty \frac{1}{D^2 + t^2} D^{1/2} (2K) D^{1/2} \frac{1}{E^2+t^2} t^2 \mathrm{d} t \nonumber\\
&=\begin{multlined}[t]
\frac{2}{\pi} \int_{0}^\infty \frac{1}{D^2 + t^2} D^{1/2} (2K) D^{1/2} \frac{1}{D^2+t^2} t^{2} \mathrm{d} t\\
- \frac{2}{\pi} \int_{0}^\infty \left(\frac{1}{D^2 + t^2} D^{1/2} (2K) D^{1/2} \right)^2 \frac{1}{D^2+t^2} t^{2} \mathrm{d} t\\
+\frac{2}{\pi} \int_{0}^\infty \left( \frac{1}{D^2 + t^2} D^{1/2} (2K) D^{1/2} \right)^3 \frac{1}{E^2+t^2} t^{2} \mathrm{d} t
\end{multlined}\nonumber\\
&=: (\mathrm{I}) + (\mathrm{II}) + (\mathrm{III}).
\end{align}
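For completeness, the integral representation used here follows, for every fixed $x>0$ (and trivially at $x=0$), from the elementary computation
\[
\frac{2}{\pi} \int_{0}^\infty \frac{x^2}{x^2+t^2} \mathrm{d} t = \frac{2}{\pi} \lim_{T\to \infty} x \arctan(T/x) = \frac{2}{\pi} \cdot \frac{\pi x}{2} = x,
\]
and it extends to the operators $D$ and $E$ by the spectral theorem.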
\subsubsection*{Dealing with $(\mathrm{I})$.} Using the cyclicity of the trace and
\begin{equation*}
\frac{2}{\pi}\int_{0}^\infty \frac{x t^{2}}{(x^2+t^2)^2} \mathrm{d} t = \frac{1}{2}, \quad \forall x>0,
\end{equation*}
we have
\begin{equation} \label{eq:deal-I}
\mathrm{Tr}{\rm (I)}= \frac{2}{\pi} \mathrm{Tr} \int_{0}^\infty \frac{Dt^{2}}{(D^2 + t^2)^2} (2K) \mathrm{d} t = \mathrm{Tr}(K).
\end{equation}
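For the reader's convenience, the integral identity behind \eqref{eq:deal-I} can be verified by the substitution $t=x\tan\theta$:
\[
\frac{2}{\pi}\int_{0}^\infty \frac{x t^{2}}{(x^2+t^2)^2} \mathrm{d} t = \frac{2}{\pi}\int_0^{\pi/2} \sin^2\theta \,\mathrm{d} \theta = \frac{2}{\pi}\cdot \frac{\pi}{4} = \frac{1}{2}.
\]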
\subsubsection*{Dealing with $(\mathrm{II})$.} Note that $D=H-K>0$ has compact resolvent (since $H$ has compact resolvent and $K$ is compact). Therefore, we can write
\[ D=\sum_{j} D_j |\varphi_j\rangle \langle \varphi_j| \]
with positive eigenvalues $(D_j)$ and an orthonormal basis of eigenvectors $(\varphi_j)$. Therefore,
\begin{align*}
{\rm Tr}\, (\mathrm{II}) &= - \frac{8}{\pi} \mathrm{Tr} \int_0^\infty \frac{D}{(D^2+t^2)^2} K \frac{D}{D^2+t^2} K t^2 \mathrm{d} t \\
&= - \frac{8}{\pi} \mathrm{Tr} \int_0^\infty \sum_{i,j} \frac{D_j}{(D_j^2+t^2)^2} |\varphi_j \rangle \langle \varphi_j| K \frac{D_i}{D_i^2+t^2} |\varphi_i\rangle \langle \varphi_i| K t^2 \mathrm{d} t \\
&= - \frac{8}{\pi} \sum_{i,j} |\langle \varphi_i, K \varphi_j\rangle|^2 \int_0^\infty \frac{D_iD_j }{(D_j^2+t^2)^2(D_i^2+t^2)} t^2 \mathrm{d} t.
\end{align*}
Using
\[
\frac{8}{\pi} \int_0^\infty \frac{xy }{(x^2+t^2)^2 (y^2+t^2)} t^2 \mathrm{d} t = \frac{2 y}{(x+y)^2} \le \frac{1}{2x}, \quad \forall x,y>0
\]
we find that
\begin{equation} \label{eq:deal-II}
{\rm Tr}\, (\mathrm{II}) \ge - \frac{1}{2} \sum_{ij} |\langle \varphi_i, K \varphi_j\rangle|^2 D_j^{-1} = -\frac{1}{2} {\rm Tr}\, \left(K D^{-1} K\right).
\end{equation}
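In the integral identity above (which can be evaluated by partial fractions or a residue computation), the last inequality is a consequence of the AM--GM bound $(x+y)^2 \ge 4xy$:
\[
\frac{2y}{(x+y)^2} \le \frac{2y}{4xy} = \frac{1}{2x}, \quad \forall x,y>0.
\]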
\subsubsection*{Dealing with $(\mathrm{III})$.} By H\"older's inequality for Schatten norm \cite[Theorem 2.8]{Sim-05},
\begin{align*}
|\mathrm{Tr}{\rm (III)}| &= \frac{16}{\pi} \left| \int_0^\infty \mathrm{Tr} \left( \left( \frac{D}{D^2+t^2} K \right)^3 D^{1/2} \frac{t^2}{E^2+t^2} D^{-1/2} \right) \mathrm{d} t \right|\\
&\le \frac{16}{\pi} \int_0^\infty \norm{\frac{D}{D^2+t^2} K}_{\mathfrak{S}^3}^3 \norm{D^{1/2} \frac{t^2}{E^2+t^2} D^{-1/2}}_{\rm op} \mathrm{d} t.
\end{align*}
Then by the Araki--Lieb--Thirring inequality \cite{LT-76,Ara-90},
\begin{align*}
\norm{ \frac{D}{D^2+t^2} K}_{\mathfrak{S}^3}^3 &= \mathrm{Tr} \left( \left( K \left( \frac{D}{D^2+t^2} \right)^2 K \right)^{3/2} \right) = \mathrm{Tr} \left( \left( \frac{D}{D^2+t^2} K^2 \frac{D}{D^2+t^2} \right)^{3/2} \right) \\
&\le \mathrm{Tr} \left( \left( \frac{D}{D^2+t^2} \right)^{3/2} |K|^3 \left( \frac{D}{D^2+t^2} \right)^{3/2} \right) = \mathrm{Tr} \left( \frac{D^{3}}{(D^2+t^2)^3} |K|^{3} \right).
\end{align*}
Here we have used the fact that $|A|=\sqrt{A^*A}$ and $|A^*|=\sqrt{AA^*}$ have the same non-zero eigenvalues (with multiplicity). On the other hand, using $H\ge (1+\varepsilon) \|K\|_{\rm op}$ we find that
\[ D+2K = H+ K \ge C_\varepsilon^{-1} (H-K)= C_\varepsilon^{-1} D \]
for any large constant $C_\varepsilon$ satisfying $(C_\varepsilon+1)/(C_\varepsilon-1) \le 1+\varepsilon$. Hence,
\[
E^2 = D^{1/2} (D+2K)D^{1/2} \ge C_\varepsilon^{-1} D^2.
\]
Reversely, we also have $D+2K \le C_\varepsilon D$, and hence
\[
E^2 = D^{1/2} (D+2K)D^{1/2} \le C_\varepsilon D^2.
\]
Since the mapping $0\le A\mapsto \sqrt{A}$ is operator monotone, we deduce that $D^{1/2}E^{-1/2}$ and $E^{1/2}D^{-1/2}$ are bounded operators.
Therefore,
\begin{align*}
\norm{D^{1/2} \frac{t^2 }{E^2 +t^2} D^{-1/2}}_{\rm op} &= \norm{D^{1/2} E^{-1/2} \frac{t^2 }{E^2 +t^2} E^{1/2} D^{-1/2}}_{\rm op} \\
&\le \norm{D^{1/2} E^{-1/2}}_{\rm op} \norm{\frac{t^2}{E^2+t^2}}_{\rm op} \norm{E^{1/2}D^{-1/2}}_{\rm op} \le C_\varepsilon.
\end{align*}
We conclude that
\begin{equation} \label{eq:deal-III}
|\mathrm{Tr}{\rm (III)}| \le C_\varepsilon \int_0^\infty \mathrm{Tr} \left( \frac{D^3}{(D^2+t^2)^3} |K|^{3} \right) \mathrm{d} t \le C_\varepsilon \mathrm{Tr}( D^{-2}|K|^3).
\end{equation}
In the last estimate we have used the identity
\[
x^{-2}= \frac{16}{3\pi} \int_0^\infty \frac{x^3}{(x^2+t^2)^3} \mathrm{d} t, \quad \forall x>0.
\]
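This identity follows again from the substitution $t=x\tan\theta$, together with $\int_0^{\pi/2}\cos^4\theta \,\mathrm{d} \theta = 3\pi/16$:
\[
\int_0^\infty \frac{x^3}{(x^2+t^2)^3} \mathrm{d} t = \frac{1}{x^{2}}\int_0^{\pi/2} \cos^4\theta \,\mathrm{d} \theta = \frac{3\pi}{16\, x^{2}}.
\]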
\subsubsection*{Conclusion in the trace class case} Inserting \eqref{eq:deal-I}, \eqref{eq:deal-II}, \eqref{eq:deal-III} in \eqref{eq:deal-I-II-III} we find that
\[
\mathrm{Tr}(E-H) = \mathrm{Tr}(E-D)-\mathrm{Tr}(K) \ge - \frac{1}{2} \mathrm{Tr}(D^{-1}K^2) - C_\varepsilon \mathrm{Tr}( D^{-2}|K|^3).
\]
Let us replace $D=H-K$ by $H$ on the right side. Using $H\ge (1+\varepsilon) \|K\|_{\rm op}$ and the Cauchy--Schwarz inequality we have
\begin{align*}
D^2 &= (H-K)^2 = H^2 + K^2 - HK - KH \ge (1-\eta) H^2 - (\eta^{-1}-1)K^2 \\
&\ge (1-\eta) H^2 - (\eta^{-1}-1) (1+\varepsilon)^{-2} H^2 \ge (C_\varepsilon)^{-1} H^2.
\end{align*}
Here the constant $0<\eta<1$ is chosen sufficiently close to $1$ (depending on $\varepsilon$). Therefore,
\[
\mathrm{Tr}\left( D^{-2}|K|^3\right) \le C_\varepsilon \mathrm{Tr}\left( H^{-2}|K|^3\right) \le C_\varepsilon \norm{K}_{\rm op}\mathrm{Tr}\left( H^{-2}K^2\right).
\]
By the resolvent identity and H\"older's inequality for Schatten norms \cite[Theorem 2.8]{Sim-05},
\begin{align*}
\left| \mathrm{Tr}\left(\left(D^{-1}-H^{-1}\right)K^2\right)\right| &= \left| \mathrm{Tr}\left(D^{-1} K H^{-1}K^2\right) \right| \\
&\le \norm{D^{-1} H}_{\rm op} \norm{H^{-1}K}_{\mathfrak{S}^2}^2 \norm{K}_{\rm op} \le C_\varepsilon \norm{K}_{\rm op}\mathrm{Tr}\left(H^{-2}K^2\right)
\end{align*}
where $\norm{\cdot}_{\mathfrak{S}^2}$ is the Hilbert--Schmidt norm. Thus \eqref{eq:quad-Hamil-lwb-general} holds true:
\begin{align*}
\mathrm{Tr}(E-H) &\ge - \frac{1}{2} \mathrm{Tr}\left(D^{-1}K^2\right) - C_\varepsilon \mathrm{Tr}\left(D^{-2}|K|^3\right)\\
&\ge - \frac{1}{2} \mathrm{Tr}\left(H^{-1}K^2\right) - C_\varepsilon \norm{K}_{\rm op}\mathrm{Tr}\left(H^{-2}K^2\right).
\end{align*}
\subsubsection*{Removing the trace class condition} Finally, let us remove the trace class condition on $K$. Recall that $H$ has compact resolvent. For every $n\in \mathbb{N}$ we introduce the spectral projection
$$
P_n = \1(H\le n), \quad Q_n = \1(H>n)
$$
and decompose
$$
K=K_n^{(1)}+K_n^{(2)}, \quad K_{n}^{(1)}= \frac{1}{2}(P_n K + K P_n),\quad K_n^{(2)}= \frac{1}{2}(Q_n K + K Q_n).
$$
Note that $K_n^{(1)}$ is a self-adjoint finite-rank operator because $P_n$ is finite-rank. Moreover, $K_n^{(1)}$ has a real-valued symmetric kernel because $K$ has the same property and $H$ has real-valued eigenfunctions. By the triangle inequality,
\begin{align*}
\|K_n^{(1)}\|_{\rm op} &\le \frac{1}{2} ( \|P_n K\|_{\rm op} + \|K P_n\|_{\rm op} ) \le \|K\|_{\rm op}, \\
\| H^{-s} K_{n}^{(1)}\|_{\mathfrak{S}^2} &\le \frac{1}{2}( \| H^{-s} P_n K\|_{\mathfrak{S}^2} + \|H^{-s} K P_n\|_{\mathfrak{S}^2}) \le \| H^{-s} K\|_{\mathfrak{S}^2}, \quad \forall s\ge 1/2.
\end{align*}
Take $\eta\in (0,\varepsilon/2)$. Using $H\ge (1+\varepsilon)\|K\|_{\rm op}$ we find that
\[
\left(1-\eta \right) H \ge \left(1-\frac{\varepsilon}{2}\right) (1+\varepsilon) \norm{K}_{\rm op} \ge \left(1 + \frac{\varepsilon(1-\varepsilon)}{2}\right) \| K_n^{(1)}\|_{\rm op}.
\]
Applying \eqref{eq:quad-Hamil-lwb-general} in the trace class case with $(H,K)$ replaced by $((1-\eta) H, K_n^{(1)})$ we get
\begin{align} \label{eq:1-eta-H-K}
\left(1-\eta \right) &\mathrm{d}\Gamma(H) + \frac{1}{2} \iint K_n^{(1)}(x,y) (a_x^* a_y^* + a_x a_y) \mathrm{d} x \mathrm{d} y \nonumber\\
&\ge - \frac{1}{4(1-\eta)} \| H^{-1/2} K_{n}^{(1)}\|_{\mathfrak{S}^2}^2 - C_\varepsilon \norm{K_n^{(1)}}_{\rm op} \| H^{-1} K_{n}^{(1)}\|_{\mathfrak{S}^2}^2 \nonumber\\
&\ge - \frac{1}{4(1-\eta)} \| H^{-1/2} K\|_{\mathfrak{S}^2}^2 - C_\varepsilon \norm{K}_{\rm op} \| H^{-1} K\|_{\mathfrak{S}^2}^2, \quad \forall n\ge 1.
\end{align}
Next, note that $K_n^{(2)}$ is also self-adjoint and has a real-valued symmetric kernel. Moreover, $Q_n \to 0$ strongly as $n\to \infty$ (namely $\|Q_n u\|\to 0$ for all $u\in \mathfrak{K}$) since $H$ has compact resolvent. Since $H^{-1/2}K$ is Hilbert--Schmidt, we deduce that
$$
\| H^{-1/2} K_{n}^{(2)}\|_{\mathfrak{S}^2} \le \frac{1}{2}( \| H^{-1/2} Q_n K\|_{\mathfrak{S}^2} + \|H^{-1/2} K Q_n\|_{\mathfrak{S}^2}) \to 0 \quad \text{ as }n\to \infty.
$$
Recall that from \cite[Theorem 2]{NNS-16}, the bound \eqref{eq:Bog-half-lower-bound} holds true if $\| H^{-1/2} K H^{-1/2}\|_{\rm op}<1$ and $\| H^{-1/2} K\|_{\mathfrak{S}^2}<\infty$. Using \eqref{eq:Bog-half-lower-bound} with $(H,K)$ replaced by $(\eta H, K_n^{(2)})$, which is justified for $n$ large enough since $\| (\eta H)^{-1/2} K_n^{(2)} (\eta H)^{-1/2}\|_{\rm op} \le \eta^{-1} n^{-1/2} \norm{K}_{\rm op} \| H^{-1/2}\|_{\rm op} \to 0$ as $n\to \infty$, we find that
\begin{equation} \label{eq:eta-H-K}
\eta \mathrm{d}\Gamma(H) + \frac{1}{2} \iint K_n^{(2)}(x,y) (a_x^* a_y^* + a_x a_y) \mathrm{d} x \mathrm{d} y \ge - \frac{1}{2\eta} \| H^{-1/2} K_n^{(2)}\|_{\mathfrak{S}^2}^2 \to 0
\end{equation}
as $n\to \infty$. Putting \eqref{eq:1-eta-H-K} and \eqref{eq:eta-H-K} together, we find that
$$
\mathrm{d}\Gamma(H) + \frac{1}{2} \iint K(x,y) (a_x^* a_y^* + a_x a_y) \mathrm{d} x \mathrm{d} y \ge - \frac{1}{4(1-\eta)} \| H^{-1/2} K\|_{\mathfrak{S}^2}^2 - C_\varepsilon \norm{K}_{\rm op} \| H^{-1} K\|_{\mathfrak{S}^2}^2.
$$
Taking $\eta\to 0$ we obtain \eqref{eq:quad-Hamil-lwb-general}. This completes the proof of Theorem~\ref{thm:general-bound-HBog}.
\end{proof}
\subsection{Explicit lower bound for \texorpdfstring{$\mathbb{H}_{\rm Bog}$}{HBog}}
Now we apply Theorem \ref{thm:general-bound-HBog} to compute an explicit lower bound for the quadratic Hamiltonian $\mathbb{H}_{\rm Bog}$ in Lemma \ref{lem:red-qua}.
\begin{lemma} [Lower bound for $\mathbb{H}_{\rm Bog}$]
\label{lem:missing_term}
For $\mathbb{H}_{\rm Bog}$ in Lemma \ref{lem:red-qua} we have
\begin{align} \label{eq:HBog>=fNVN}
\inf {\rm Spec}(\mathbb{H}_{\rm Bog}) \ge -\frac{N^2}{2} \int_{\mathbb{R}^{3}} (V_Nf_N(1-f_N) \ast \varphi_{\rm GP}^2) \varphi_{\rm GP}^2 -C.
\end{align}
\end{lemma}
\begin{proof} We will write $\varphi=\varphi_{\rm GP}$ for short. Recall the notations $H,K$ in \eqref{eq:H-K-intro}.
\subsubsection*{Lower bound by Theorem \ref{thm:general-bound-HBog}}
Since $\varphi\ge 0$, the kernel $K(x,y)$ is symmetric and real-valued. Thus the operator $K$ is symmetric. It is bounded with
$\|K\|_{\rm op}\le 8 \pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2$ because for all $g_1,g_2\in \mathfrak{H}_+$ we have
\begin{align}
\left|\langle g_1,K g_2 \rangle \right| &= \left| \iint \overline{g_1(x)} \varphi(x) (Nf_NV_N(x-y)) \varphi(y) g_2(y) \mathrm{d} x \mathrm{d} y \right| \nonumber \\
&\le \norm{\varphi}_{L^\infty}^2 \norm{g_1}_{L^2} \norm{g_2}_{L^2} \norm{Nf_NV_N}_{L^1}= 8 \pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2 \norm{g_1}_{L^2} \norm{g_2}_{L^2}. \label{eq:K-op}
\end{align}
On the other hand, since $\mu_2>\mu$ we have
\[ H=Q(-\Delta+V_{\rm ext}-\mu)Q \ge \mu_2-\mu + 8\pi \mathfrak{a} \norm{\varphi}_{L^\infty}^2 \ge (1+\varepsilon) \norm{K}_{\rm op}\quad \text{ on } \mathfrak{H}_+ \]
for a small constant $\varepsilon>0$ independent of $N$. Moreover, $H$ has compact resolvent since $V_{\rm ext}(x)\to \infty$ as $|x|\to \infty$. Thus we can apply Theorem \ref{thm:general-bound-HBog} and obtain
\begin{equation} \label{eq:lwb-appl}
\inf {\rm Spec}(\mathbb{H}_{\rm Bog}) \ge -\frac{1}{4} \mathrm{Tr}_{\mathfrak{H}_+}\left(H^{-1}K^2\right) - C \mathrm{Tr}_{\mathfrak{H}_+}\left(H^{-2}K^2\right).
\end{equation}
\subsubsection*{Replacing $H$ and $K$ by $1-\Delta$ and $\widetilde K$} We can interpret $ H = Q(-\Delta+V_{\rm ext}-\mu) Q$ as an operator on $L^2(\mathbb{R}^3)$. Then using $V_{\rm ext} \ge 0$ and $Q=1-|\varphi\rangle \langle \varphi|$ we have
\begin{align} \label{eq:comp-1a}
H \ge Q(-\Delta) Q -\mu &=-\Delta + |\varphi\rangle \langle \Delta \varphi| + |\Delta \varphi\rangle \langle \varphi| + \norm{\nabla \varphi}_{L^2(\mathbb{R}^3)}^2 |\varphi\rangle \langle \varphi| -\mu \nonumber\\
&\ge -\Delta - 2\norm{\Delta \varphi}_{L^2} -\mu \quad \text{ on } L^2(\mathbb{R}^3).
\end{align}
Moreover, by \eqref{eq:H^2>Delta^2} and the Cauchy--Schwarz inequality,
\begin{align} \label{eq:comp-2a}
H^2 &\ge Q(-\Delta+V_{\rm ext}-\mu)^2 Q - \norm{Q(-\Delta+V_{\rm ext}-\mu) \varphi}_{L^2(\mathbb{R}^3)}^2 \nonumber\\
&\ge \frac{1}{2} Q(\Delta)^2 Q - C = \frac{1}{2}(\Delta)^2 - \frac{1}{2} \big(|\varphi\rangle \langle \Delta \varphi| \Delta + \Delta |\Delta\varphi\rangle \langle \varphi| \big) +\frac{1}{2} \norm{\Delta \varphi}^2_{L^2} |\varphi\rangle \langle \varphi| - C \nonumber\\
&\ge \frac{1}{4}(\Delta)^2 - C \quad \text{ on } L^2(\mathbb{R}^3).
\end{align}
Since $H_{|\mathfrak{H}_+}$ is strictly positive, we also have, for any sufficiently large constant $C_0>0$,
\begin{equation} \label{eq:comp-2b}
C_0^2 H^2 \ge H^2+C_0 \quad \text{ on }\mathfrak{H}_+.
\end{equation}
Recall that the mapping $t\mapsto t^{-1}$ is operator monotone for $t>0$. Moreover, if $A$ is a self-adjoint positive operator on $L^2(\mathbb{R}^3)$ that commutes with $Q$, then
\begin{equation} \label{QAQ-A}
Q (QAQ)_{|\mathfrak{H}_+}^{-1} Q = Q A^{-1}Q\quad \text{ on } L^2(\mathbb{R}^3)
\end{equation}
by the spectral theorem. Therefore,
\begin{align} \label{eq:comp-2c}
Q H_{|\mathfrak{H}_+}^{-2} Q &\le C_0^2 Q (H^2+C_0)_{|\mathfrak{H}_+}^{-1} Q \nonumber\\
&\le C Q (H^2+C_0)^{-1} Q \le C Q(1-\Delta)^{-2} Q \quad \text{ on } L^2(\mathbb{R}^3).
\end{align}
Similarly,
\begin{align} \label{eq:comp-1}
QH_{|\mathfrak{H}_+}^{-1}Q &= Q (H+C)_{|\mathfrak{H}_+}^{-1}Q + C Q (H(H+C))_{|\mathfrak{H}_+}^{-1} Q \nonumber\\
&\le Q ( H+C)^{-1} Q + C Q H_{|\mathfrak{H}_+}^{-2} Q \nonumber\\
&\le Q (1-\Delta)^{-1}Q + C Q(1-\Delta)^{-2} Q \quad \text{ on } L^2(\mathbb{R}^3).
\end{align}
Next we replace $K$ by $\widetilde K$. Following \eqref{eq:K-op} we have $\|\widetilde K\|_{\rm op} \le C$, where $\widetilde K$ is the operator on $L^2(\mathbb{R}^3)$ with kernel
$\widetilde K(x,y)$. Using $K^2\le Q\widetilde K^2 Q$ on $\mathfrak{H}_+$ and \eqref{eq:comp-2c} we can estimate
\begin{align}
\mathrm{Tr}_{\mathfrak{H}_+}\left(H^{-2}K^2\right) &\le \mathrm{Tr}_{\mathfrak{H}_+}\left(H^{-2}Q\widetilde K^2Q\right)= \mathrm{Tr}_{L^2(\mathbb{R}^3)}\left( QH^{-2}_{|\mathfrak{H}_+}Q \widetilde K^2\right) \nonumber\\
&\le \begin{multlined}[t]
C \mathrm{Tr}\left(Q (1-\Delta)^{-2}Q \widetilde K^2\right) = C \mathrm{Tr}\left( (1-\Delta)^{-2}Q \widetilde K^2 Q\right)\\
= C\mathrm{Tr}\left( (1-\Delta)^{-2} \left( \widetilde K^2 - |\varphi \rangle \langle \widetilde K^2 \varphi| - |\widetilde K^2\varphi \rangle \langle \varphi| + \|\widetilde K \varphi\|_{L^2(\mathbb{R}^3)}^2 \right)\right)
\end{multlined}\nonumber\\
&\le C \mathrm{Tr}\left( (1-\Delta)^{-2} \widetilde K^2 \right) + C. \label{eq:comp-2}
\end{align}
Similarly, from \eqref{eq:comp-1a} we deduce that
\begin{align}
\mathrm{Tr}_{\mathfrak{H}_+}\left(H^{-1}K^2\right) &\le \mathrm{Tr}_{\mathfrak{H}_+}\left(QH^{-1}Q\widetilde K^2\right) \le \mathrm{Tr}\left( \left( Q (1-\Delta)^{-1}Q + C Q(1-\Delta)^{-2} Q \right) \widetilde K^2\right)\nonumber\\
&\le \mathrm{Tr}\left((1-\Delta)^{-1} \widetilde K^2\right) + C \mathrm{Tr}\left( (1-\Delta)^{-2} \widetilde K^2 \right) + C. \label{eq:comp-1}
\end{align}
Thus \eqref{eq:lwb-appl} reduces to
\begin{equation} \label{eq:lwb-appl-aaa}
\inf {\rm Spec}(\mathbb{H}_{\rm Bog}) \ge -\frac{1}{4}\mathrm{Tr}\left((1-\Delta)^{-1} \widetilde K^2\right) -C \mathrm{Tr}\left( (1-\Delta)^{-2} \widetilde K^2 \right) - C.
\end{equation}
\subsubsection*{Evaluation of traces in \eqref{eq:lwb-appl-aaa}} Note that the operator $\widetilde{K}$ with kernel $\varphi(x) N{f_NV_N}(x-y) \varphi(y)$ can be written as
\begin{equation} \label{eq:K-phi-g}
\widetilde{K}= \varphi(x) N\widehat{f_NV_N}(p) \varphi(x) \quad \text{ on }L^2(\mathbb{R}^3)
\end{equation}
where $\varphi(x)$ and $N\widehat{f_NV_N}(p)$ are understood as multiplication operators in position and momentum space, respectively (the derivation of \eqref{eq:K-phi-g} uses $\widehat{u*v}=\widehat u \widehat v$). Recall the Kato--Seiler--Simon inequality on Schatten norms \cite[Theorem 4.1]{Sim-05}:
\begin{align}\label{eq:KSS}
\norm{u(x) v(p)}_{\mathfrak{S}^r} \le C_{d,r}\norm{u}_{L^r(\mathbb{R}^d)}\norm{v}_{L^r(\mathbb{R}^d)}, \quad 2\le r <\infty.
\end{align}
Consequently,
\begin{align} \label{eq:lwb-appl-1}
{\rm Tr}\,\Big( (1-\Delta)^{-2} &\widetilde K^2\Big) = {\rm Tr}\, \left( \varphi(x) N\widehat{(f_NV_N)}(p) \varphi(x) \frac{1}{(1+p^2)^2} \varphi(x) N\widehat{(f_NV_N)} (p) \varphi(x) \right) \nonumber \\
&\le \norm{\varphi}_{L^\infty(\mathbb{R}^{3})}^2 \|N\widehat{f_N V_N}\|_{L^\infty(\mathbb{R}^{3})}^2 \norm{\varphi(x) (p^2 + 1)^{-1}}^2_{\mathfrak{S}^2} \nonumber \\
&\le C \norm{\varphi}_{L^\infty(\mathbb{R}^{3})}^2 \|N\widehat{f_N V_N}\|_{L^\infty(\mathbb{R}^{3})}^2 \norm{\varphi}_{L^2(\mathbb{R}^3)}^2 \norm{(p^2 + 1)^{-1}}_{L^2(\mathbb{R}^3)}^2\le C.
\end{align}
Here we have used H\"older's inequality for Schatten spaces and $\|N\widehat{f_N V_N}\|_{L^\infty} \le 8\pi \mathfrak{a}$.
Next, consider
\[
\mathrm{Tr}\left((1-\Delta)^{-1}\widetilde K^2\right) = {\rm Tr}\,\left(\varphi(x) N\widehat{(f_NV_N)} (p) \varphi(x) \frac{1}{1+p^2} \varphi(x) N\widehat{(f_NV_N)} (p) \varphi(x) \right).
\]
Let us decompose
\begin{multline*}
2 \varphi(x) (1+p^2)^{-1} \varphi(x) - \varphi^2(x) (1+p^2)^{-1} - (1+p^2)^{-1} \varphi^2(x) \\
\begin{aligned}[t]
&= - \left[ \varphi(x), \left[ \varphi(x), (1+p^2)^{-1}\right]\right] = \left[ \varphi(x), (1+p^2)^{-1} [\varphi(x),p^2] (1+p^2)^{-1}\right] \\
&=\begin{multlined}[t]
\left[ \varphi(x), (1+p^2)^{-1}\right] [\varphi(x),p^2] (1+p^2)^{-1} + (1+p^2)^{-1} \left[ \varphi(x), [\varphi(x),p^2]\right] (1+p^2)^{-1} \\
\qquad+ (1+p^2)^{-1} [\varphi(x),p^2] \left[ \varphi(x), (1+p^2)^{-1}\right]
\end{multlined}\\
&=\begin{multlined}[t]
-2 (1+p^2)^{-1} [\varphi(x),p^2] (1+p^2)^{-1} [\varphi(x),p^2] (1+p^2)^{-1} \\
- 2 (1+p^2)^{-1} |\nabla \varphi(x)|^2 (1+p^2)^{-1},
\end{multlined}
\end{aligned}
\end{multline*}
where we have used
\[ [\varphi(x),(p^2+1)^{-1}] = - (p^2+1)^{-1} [\varphi(x),p^2](p^2+1)^{-1} \]
and the IMS formula \eqref{eq:IMS-formula} for $[\varphi(x), [\varphi(x),p^2]]$. This gives
\begin{multline}\label{eq:1-2-3-second}
\varphi(x) N\widehat{f_NV_N} (p) \varphi(x) \frac{1}{1+p^2} \varphi(x) N\widehat{f_NV_N}(p) \varphi(x) \\
\begin{aligned}[b]
&= \begin{multlined}[t]
\frac{1}{2} \varphi(x) N\widehat{f_NV_N} (p) \left( \varphi^2(x) \frac{1}{1+p^2} + \frac{1}{1+p^2} \varphi^2(x) \right) N\widehat{f_NV_N}(p) \varphi(x)\\
- \varphi(x) N\widehat{f_NV_N} (p) \frac{1}{1+p^2} [\varphi(x),p^2] \frac{1}{1+p^2} [\varphi(x),p^2] \frac{1}{1+p^2} N\widehat{f_NV_N}(p) \varphi(x)\\
- \varphi(x) N\widehat{f_NV_N} (p) \frac{1}{1+p^2} |\nabla \varphi(x)|^2 \frac{1}{1+p^2} N\widehat{f_NV_N}(p) \varphi(x)
\end{multlined}\\
&=: (\mathrm{I}) + (\mathrm{II}) + (\mathrm{III}).
\end{aligned}
\end{multline}
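The commutator formula for the resolvent used in this decomposition is the general identity $[A,B^{-1}]=-B^{-1}[A,B]B^{-1}$ for invertible $B$, which follows from
\[
B^{-1}[A,B]B^{-1} = B^{-1}(AB-BA)B^{-1} = B^{-1}A - AB^{-1} = -[A,B^{-1}],
\]
here applied with $A=\varphi(x)$ and $B=1+p^2$.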
\subsubsection*{Dealing with $(\mathrm{I})$.} For the main term (I), we write
\begin{align*}
{\rm Tr}\, (\mathrm{I})
&= \Re \mathrm{Tr} \left( \varphi^2(x) N\widehat{f_NV_N} (p) \varphi^2(x) \frac{N\widehat{f_NV_N}(p)}{1+p^2} \right)\\
&= \begin{multlined}[t]
\Re \mathrm{Tr} \left( \varphi^2(x) N\widehat{f_NV_N} (p) \varphi^2(x) \frac{N\widehat{f_NV_N}(p)}{p^2} \right) \\
- \Re \mathrm{Tr} \left( N\widehat{f_NV_N} (p) \varphi^2(x) \frac{N\widehat{f_NV_N}(p) }{p^2(1+p^2)} \varphi^2(x) \right).
\end{multlined}
\end{align*}
The first term can be computed exactly using the scattering equation \eqref{eq:scatering-eq-N-omega}
\begin{multline*}
\Re \mathrm{Tr} \left( \varphi^2(x) N\widehat{f_NV_N} (p) \varphi^2(x) \frac{1}{p^2} N\widehat{f_NV_N}(p) \right)\\
= 2 \Re \mathrm{Tr} \left( \varphi^2(x) N\widehat{f_NV_N} (p) \varphi^2(x) N\widehat {(1-f_N)}(p) \right) = 2 N^2 \int_{\mathbb{R}^{3}} (V_Nf_N(1-f_N) \ast \varphi^2) \varphi^2.
\end{multline*}
Here we have used the following identity
\begin{align}\label{eq:tr-int}
\mathrm{Tr}\left(\overline{\varphi_1(x)} \overline{ \widehat{g}_1(p)} \varphi_2(x) \widehat g_2(p) \right) &= \left\langle \widehat{g}_1(p) \varphi_1(x), \varphi_2(x) \widehat g_2(p) \right\rangle_{\mathfrak{S}^2}\nonumber\\
&= \left\langle (\widehat{g}_1(p) \varphi_1(x)) (\cdot,\cdot), (\varphi_2(x) \widehat g_2(p)) (\cdot, \cdot) \right\rangle_{L^2(\mathbb{R}^3\times \mathbb{R}^3)} \nonumber\\
&= \iint \overline{g_1(y-z) \varphi_1(z)} \varphi_2(y) g_2(y-z) \mathrm{d} y \mathrm{d} z.
\end{align}
This is based on the equality between the Hilbert--Schmidt norm of operators and the $L^2$-norm of operator kernels (the kernel of the operator $\widehat{g}_1(p) \varphi_1(x)$ is $g_1(y-z)\varphi_1(z)$, similarly to \eqref{eq:K-phi-g}). In our case $\varphi(x)\ge 0$ and $\widehat{f_NV_N}(p)$ is real-valued since $V_Nf_N$ is radial.
The second term can be estimated by H\"older's and the Kato--Seiler--Simon inequalities:
\begin{align*}
&\left| \mathrm{Tr} \left( N\widehat{f_NV_N} (p) \varphi^2(x) \frac{N\widehat{f_NV_N}(p)}{p^2(1+p^2)} \varphi^2(x) \right) \right| \le \norm{N\widehat{f_NV_N}}_{L^\infty}^2 \norm{\varphi^2(x) \sqrt{\frac{1}{p^2(1+p^2)}}}_{\mathfrak{S}^2}^2 \\
&\qquad\qquad\qquad\qquad\qquad \qquad\qquad \quad \le C\norm{N\widehat{f_NV_N}}_{L^\infty}^2 \norm{\varphi^2}_{L^2}^2 \norm{\sqrt{\frac{1}{p^2(1+p^2)}}}_{L^2}^2 \\
&\qquad\qquad\qquad\qquad\qquad \qquad\qquad \quad \le C\norm{N\widehat{f_NV_N}}_{L^\infty}^2 \norm{\varphi}_{L^4}^4 \norm{\frac{1}{p^2(1+p^2)}}_{L^1} \le C.
\end{align*}
Here we used again $\| N \widehat{f_NV_N} \|_{L^\infty} \le 8\pi \mathfrak{a}$. Thus
\begin{align*}
{\rm Tr}\, (\mathrm{I}) = 2 N^2 \int_{\mathbb{R}^{3}} (V_Nf_N(1-f_N) \ast \varphi^2) \varphi^2 + O(1).
\end{align*}
\subsubsection*{Dealing with $(\mathrm{II})$.} By expanding further
\[ [\varphi(x),p^2] = (\Delta \varphi)(x) + 2 (\nabla \varphi(x)) \cdot \nabla \]
and using the triangle inequality we have
\begin{align*}
& |\mathrm{Tr}(\mathrm{II})| = \left| \mathrm{Tr} \left( \varphi(x) \frac{N\widehat{f_NV_N} (p) }{1+p^2} [\varphi(x),p^2] \frac{1}{1+p^2} [\varphi(x),p^2] \frac{1}{1+p^2} N\widehat{f_NV_N}(p) \varphi(x) \right) \right| \\
&\begin{multlined}[t]
\le\left| \mathrm{Tr} \left( \varphi(x) \frac{N\widehat{f_NV_N} (p) }{1+p^2} (\Delta \varphi(x)) \frac{1}{1+p^2} (\Delta \varphi(x)) \frac{1}{1+p^2} N\widehat{f_NV_N}(p) \varphi(x) \right) \right| \\
+ 4 \left| \mathrm{Tr} \left( \varphi(x) \frac{N\widehat{f_NV_N} (p) }{1+p^2} (\Delta \varphi(x)) \frac{1}{1+p^2} ((\nabla \varphi(x)) \cdot \nabla) \frac{1}{1+p^2} N\widehat{f_NV_N}(p) \varphi(x) \right) \right|\\
+ 4\left| \mathrm{Tr} \left( \varphi(x) \frac{N\widehat{f_NV_N} (p) }{1+p^2} ((\nabla \varphi(x)) \cdot \nabla) \frac{1}{1+p^2} ((\nabla \varphi(x)) \cdot \nabla) \frac{1}{1+p^2} N\widehat{f_NV_N}(p) \varphi(x) \right) \right|.
\end{multlined}
\end{align*}
Then by H\"older's and the Kato--Seiler--Simon inequalities,
\begingroup
\allowdisplaybreaks
\begin{align*}
|\mathrm{Tr}(\mathrm{II})| & \le\norm{\varphi}_{L^\infty}^2 \norm{N \widehat{f_NV_N}}_{L^\infty}^2 \norm{(\Delta \varphi(x)) \frac{1}{1+p^2}}_{\rm \mathfrak{S}^2}^2\\
&\quad + 4 \norm{\varphi}_{L^\infty}^2 \norm{N \widehat{f_NV_N}}_{L^\infty}^2 \norm{\frac{1}{1+p^2} (\Delta \varphi(x))}_{\rm \mathfrak{S}^2} \norm{\frac{1}{1+p^2} |\nabla \varphi(x)|}_{\rm \mathfrak{S}^2} \norm{ |\nabla| \frac{1}{1+p^2}}_{\rm op} \\
&\qquad +4 \norm{ \varphi(x) \frac{1}{\sqrt{1+p^2}}}_{\mathfrak{S}^4} \norm{N \widehat{f_NV_N}}_{L^\infty} \norm{ \frac{1}{\sqrt{1+p^2}} |\nabla \varphi(x)|}_{\mathfrak{S}^4} \norm{ |\nabla| \frac{1}{\sqrt{1+p^2}} }_{\rm op} \times \\
& \qquad\quad \times \norm{ \frac{1}{\sqrt{1+p^2}} |\nabla \varphi(x)|}_{\mathfrak{S}^4} \norm{ |\nabla| \frac{1}{\sqrt{1+p^2}} }_{\rm op} \norm{N \widehat{f_NV_N}}_{L^\infty} \norm{ \frac{1}{\sqrt{1+p^2}} \varphi(x) }_{\mathfrak{S}^4}\\
&\le C\norm{\varphi}_{L^\infty}^2 \norm{N \widehat{f_NV_N}}_{L^\infty}^2 \norm{\Delta \varphi}_{L^2}^2 \norm{ \frac{1}{1+p^2}}_{L^2}^2\\
&\quad + 4 \norm{\varphi}_{L^\infty}^2 \norm{N \widehat{f_NV_N}}_{L^\infty}^2 \norm{ \frac{1}{1+p^2}}_{L^2}^2 \norm{\Delta \varphi}_{L^2} \norm{\nabla \varphi(x)}_{L^2} \norm{ \frac{|p|}{1+p^2}}_{\rm op} \\
&\qquad + 4 \norm{N \widehat{f_NV_N}}_{L^\infty}^2 \norm{\varphi}_{L^4}^2 \norm{\nabla \varphi}_{L^4}^2 \norm{ \frac{1}{\sqrt{1+p^2}}}_{L^4}^4 \norm{ \frac{|p|}{1+p^2}}_{\rm op}^2 \le C.
\end{align*}
\endgroup
\subsubsection*{Dealing with $(\mathrm{III})$.} This term is non-positive, and hence could simply be dropped for an upper bound on the trace. Nevertheless, its absolute value can be bounded by H\"older's and the Kato--Seiler--Simon inequalities,
\begin{align*}
|{\rm Tr}\,({\rm III})| &= \left| \mathrm{Tr} \left( \varphi(x) N\widehat{f_NV_N} (p) \frac{1}{1+p^2} |\nabla \varphi(x)|^2 \frac{1}{1+p^2} N\widehat{f_NV_N}(p) \varphi(x) \right) \right| \\
&\leq \norm{\varphi}_{L^\infty(\mathbb{R}^{3})}^2 \norm{N\widehat{f_NV_N}}_{L^\infty(\mathbb{R}^{3})}^2 \norm{|\nabla \varphi(x)| \frac{1}{p^2+1}}_{\rm \mathfrak{S}^2}^2 \\
& \le \norm{\varphi}_{L^\infty(\mathbb{R}^{3})}^2 \norm{N\widehat{f_NV_N}}_{L^\infty(\mathbb{R}^{3})}^2 \norm{\nabla \varphi}_{L^2(\mathbb{R}^3)}^2 \norm{\frac{1}{p^2+1}}_{L^2(\mathbb{R}^3)}^2 \le C.
\end{align*}
In summary, we deduce from \eqref{eq:1-2-3-second} that
\begin{align} \label{eq:lwb-appl-2}
\mathrm{Tr}((1-\Delta)^{-1} \widetilde K^2) &= \mathrm{Tr} \left( \varphi(x) N\widehat{f_NV_N} (p) \varphi(x) \frac{1}{1+p^2} \varphi(x) N\widehat{f_NV_N}(p) \varphi(x) \right) \nonumber\\
&= 2 N^2 \int_{\mathbb{R}^{3}} (V_Nf_N(1-f_N) \ast \varphi^2) \varphi^2 + O(1).
\end{align}
Inserting \eqref{eq:lwb-appl-1} and \eqref{eq:lwb-appl-2} in \eqref{eq:lwb-appl-aaa} we obtain the desired estimate \eqref{eq:HBog>=fNVN}:
\[
\inf {\rm Spec}(\mathbb{H}_{\rm Bog}) \ge -\frac{1}{2}N^2 \int_{\mathbb{R}^{3}} (V_Nf_N(1-f_N) \ast \varphi^2) \varphi^2 -C.
\]
\end{proof}
\subsection{Conclusion of lower bound}
\begin{proof}[Proof of Lemma \ref{lem:lwb}] From Lemma \ref{lem:red-qua} and Lemma \ref{lem:missing_term}, we have
\begin{align*}
H_N &\ge \begin{multlined}[t]
N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2} \int_{\mathbb{R}^3} \left ((((2f_N-f_N^2)V_N)*\varphi_{\rm GP}^2)\varphi_{\rm GP}^2 \right)\\
+ (\mu- \mu_1) \mathcal{N}_+ + \inf {\rm Spec} (\mathbb{H}_{\rm Bog})- C
\end{multlined}\\
&\ge \begin{multlined}[t]
N\int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2} \int_{\mathbb{R}^3} \left ((((2f_N-f_N^2)V_N)*\varphi_{\rm GP}^2)\varphi_{\rm GP}^2 \right)\\
+ (\mu- \mu_1) \mathcal{N}_+ -\frac{1}{2}N^2 \int_{\mathbb{R}^{3}} (V_Nf_N(1-f_N) \ast \varphi_{\rm GP}^2) \varphi_{\rm GP}^2 -C
\end{multlined}\\
& = \begin{multlined}[t]
N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2} \int_{\mathbb{R}^3} \left (((f_N V_N)*\varphi_{\rm GP}^2)\varphi_{\rm GP}^2 \right) \\
+ (\mu- \mu_1) \mathcal{N}_+ -C.
\end{multlined}
\end{align*}
It remains to show that
\begin{equation} \label{eq:fnvn-delta}
\frac{N^2}{2} \int_{\mathbb{R}^3} \left (((f_N V_N)*\varphi_{\rm GP}^2)\varphi_{\rm GP}^2 \right) = N 4\pi \mathfrak{a} \int_{\mathbb{R}^3} |\varphi_{\rm GP}|^4 + O(1).
\end{equation}
In fact, we have
\begin{multline*}
\left| N \int \left (((f_N V_N)*\varphi_{\rm GP}^2)\varphi_{\rm GP}^2 \right) - 8\pi \mathfrak{a} \int |\varphi_{\rm GP}|^4\right|\\
\begin{aligned}[b]
&= \left| \int N \widehat {f_NV_N}(k) |\widehat {\varphi_{\rm GP}^2}(k)|^2 \mathrm{d} k - \widehat {fV}(0) \int |\widehat {\varphi_{\rm GP}^2}(k)|^2 \mathrm{d} k \right|\\
&= \left| \int \left( \widehat {fV}(k/N) - \widehat {fV}(0) \right) |\widehat {\varphi_{\rm GP}^2}(k)|^2 \mathrm{d} k \right|\\
&\le \norm{\nabla_k \widehat {fV}}_{L^\infty} \int |k/N| |\widehat {\varphi_{\rm GP}^2}(k)|^2 \mathrm{d} k\le C N^{-1} \norm{|x|fV}_{L^1} \norm{\varphi_{\rm GP}^2}_{H^{1/2}}^2 \le C N^{-1}.
\end{aligned}
\end{multline*}
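Here the gradient bound follows by differentiating under the integral sign: up to the normalization convention for the Fourier transform (which only affects the constants),
\[
\nabla_k \widehat{fV}(k) = -\mathrm{i} \int_{\mathbb{R}^3} x\, (fV)(x)\, e^{-\mathrm{i} k \cdot x}\, \mathrm{d} x, \quad \text{so that} \quad \norm{\nabla_k \widehat {fV}}_{L^\infty} \le \norm{|x|fV}_{L^1},
\]
while $\int |k| \,|\widehat {\varphi_{\rm GP}^2}(k)|^2 \mathrm{d} k \le C \norm{\varphi_{\rm GP}^2}_{H^{1/2}}^2$ by the definition of the $H^{1/2}$-norm.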
Thus \eqref{eq:fnvn-delta} holds true. Hence we find that
\begin{multline*}
H_N \ge N \int_{\mathbb{R}^{3}} \left(|\nabla \varphi_{\rm GP} |^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + 4\pi \mathfrak{a} N \int_{\mathbb{R}^3} |\varphi_{\rm GP}|^4\\
+ (\mu- \mu_1) \mathcal{N}_+ -C= N e_{\rm GP} + (\mu- \mu_1) \mathcal{N}_+ -C.
\end{multline*}
Since $\mu>\mu_1$ and $\mathcal{N}_+=\sum_{i=1}^N Q_{x_i}$, the proof of Lemma \ref{lem:lwb} is complete.
\end{proof}
\section{Upper bound} \label{sec:upp}
In this section we prove the missing energy upper bound.
\subsection{Construction of the trial state}
Let us explain the construction of the trial state. In the Fock space setting, it is known \cite[Appendix A]{BPS-16} that we can reach the energy $Ne_{\rm GP} + O(1)$ using trial states of the form
\[ W(\sqrt{N}\varphi_{\rm GP}) \Gamma' W(\sqrt{N}\varphi_{\rm GP})^* \]
where $W(g)=e^{a(g)-a^*(g)}$ is the Weyl operator and $\Gamma'$ is an appropriate quasi-free state. In the following, we will adapt this construction to the $N$-particle Hilbert space. We will use the unitary operator $U_N$ introduced in \cite{LNSS-15} instead of the Weyl operator and modify the quasi-free state slightly. Denote
\[ Q=\1-|\varphi_{\rm GP} \rangle\langle \varphi_{\rm GP}|, \quad \mathfrak{H}_+ = QL^2(\mathbb{R}^3). \]
As explained in \cite{LNSS-15}, any function $\Psi_N\in L^2(\mathbb{R}^3)^{\otimes_s N}$ admits a unique decomposition
\begin{equation*}
\Psi_N = \varphi^{\otimes N} \xi_0 + \varphi^{\otimes N-1} \otimes_s \xi_1 + \varphi^{\otimes N-2} \otimes_s \xi_2 + \dots + \xi_N
\end{equation*}
with $\xi_k \in \mathfrak{H}_+^{\otimes_s k}$ (with the convention that $\xi_0 \in \mathbb{C}$). This defines a unitary map $U_N$ from $L^2(\mathbb{R}^3)^{\otimes_s N}$ to $\mathcal{F}^{\le N}(\mathfrak{H}_+)$, the truncated Fock space with particle number $\mathcal{N} \le N$, by
\begin{equation} \label{eq:def-UN}
U_N \left( \sum_{k} \varphi^{\otimes N-k} \otimes_s \xi_k \right) = \bigoplus_{k=0}^N \xi_k.
\end{equation}
Next, let $k$ be the Hilbert--Schmidt operator on $L^2(\mathbb{R}^3)$ with kernel
\[ k(x,y)= \varphi_{\rm GP}(x) N (1-f_N(x-y)) \varphi_{\rm GP}(y), \]
with $f_N$ the scattering solution in \eqref{eq:scatering-eq-N}. Define $\gamma:\mathfrak{H}_+\to \mathfrak{H}_+$, $\alpha:\mathfrak{H}_+\to \mathfrak{H}_+^*\equiv \overline{\mathfrak{H}_+}$ by
\[ \gamma=Qk^2Q, \quad \text{and} \quad \alpha=\overline{Q} kQ. \]
Then we have $\gamma\ge 0$, $\mathrm{Tr} \gamma\le \mathrm{Tr} k^2 <\infty$, $\alpha^*=Qk\overline{Q}=\overline{\alpha}$ and for all $g_1,g_2\in \mathfrak{H}_+$
\begin{align*}
& \left \langle
\begin{pmatrix}
g_1 \\
\overline{g_2}
\end{pmatrix},
\begin{pmatrix}
\gamma & \alpha^* \\
\alpha & 1 + \overline{\gamma}
\end{pmatrix}
\begin{pmatrix}
g_1 \\
\overline{g_2}
\end{pmatrix}
\right\rangle_{\mathfrak{H}_+\oplus \mathfrak{H}_+^*} = \left\langle g_1, k^2 g_1 \right\rangle + \left\langle g_2, \left(1+k^2\right) g_2\right\rangle + 2\Re \left\langle \overline{g_2}, k {g_1} \right\rangle \ge 0
\end{align*}
by the Cauchy--Schwarz inequality. Thus $(\gamma,\alpha)$ satisfies \eqref{eq:1-pdm-quasi}. Hence, there exists a unique (mixed) quasi-free state $\Gamma$ on the excited Fock space $\mathcal{F}(\mathfrak{H}_+)$ such that $(\gamma,\alpha)$ are its one-body density matrices, namely
\begin{equation} \label{eq:def-G+}
\left\langle g_1, k^2 g_2\right\rangle = \left\langle a^*(g_2)a(g_1) \right\rangle_\Gamma, \quad \left\langle \overline{g_1}, k g_2\right\rangle = \left\langle a^*(g_2)a^*(g_1) \right\rangle_\Gamma, \quad \forall g_1,g_2\in \mathfrak{H}_+.
\end{equation}
In this section we will prove
\begin{lemma}[Upper bound] \label{lem:upp} Let $V_{\rm ext}$ and $V$ be as in Theorem~\ref{thm:main} (but without the technical condition \eqref{eq:gap-condition}). Let $U_N=U_N(\varphi_{\rm GP})$ be as in \eqref{eq:def-UN} and let $\Gamma$ be as in \eqref{eq:def-G+}. Let $\1^{\le N}=\1(\mathcal{N}\le N)$ be
the cut-off on the particle number operator. Then
\[
\Gamma_N := U_N^* \1^{\le N} \Gamma \1^{\le N} U_N
\]
is a non-negative operator on $L^2(\mathbb{R}^3)^{\otimes_s N}$ with $|1- \mathrm{Tr} \Gamma_N| \le C_s N^{-s}$ for any $s \ge 1$, and
\[ \mathrm{Tr}(H_N \Gamma_N) \le N e_{\rm GP} + C \]
with the Hamiltonian $H_N$ in \eqref{eq:HN}. Consequently,
\[ E_N\le \frac{\mathrm{Tr}(H_N \Gamma_N)}{\mathrm{Tr} \Gamma_N} \le Ne_{\rm GP} + C. \]
\end{lemma}
\begin{remark} In the Fock space setting in \cite[Appendix A]{BPS-16}, the quasi-free state $\Gamma'$ is constructed using an explicit Bogoliubov transformation of the form
\begin{equation*}
T_0=\exp\left(\frac{1}{2} \iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}} k(x,y) (a_x^* a_y^* - a_x a_y) \mathrm{d} x \mathrm{d} y \right).
\end{equation*}
Its action on creation and annihilation operators is given for any $g\in L^2(\mathbb{R}^{3})$ by
\begin{equation*}\label{eq:action_of_T_on_a}
T_0^* a^*(g) T_0 = a^*(\mathrm{ch}(k) g) + a(\mathrm{sh}(k) \overline{g}),
\end{equation*}
where
\begin{equation*}
\mathrm{ch}(k) = \sum_{n\geq0} \frac{(k \overline{k})^n}{(2n)!} \quad \textrm{ and } \quad \mathrm{sh}(k) = \sum_{n\geq0} \frac{(k \overline{k})^nk}{(2n+1)!}.
\end{equation*}
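For completeness, these operators satisfy the standard hyperbolic identity which makes $T_0$ a Bogoliubov transformation:
\begin{equation*}
\mathrm{ch}(k)^2 - \mathrm{sh}(k)\,\overline{\mathrm{sh}(k)} = \1, \qquad \overline{\mathrm{sh}(k)}=\mathrm{sh}(\overline{k}),
\end{equation*}
both sides being power series in $k\overline{k}$, exactly as in $\cosh^2 x - \sinh^2 x = 1$.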
The one-body density matrices of $\Gamma'$ can be computed in terms of $\mathrm{ch}(k)$ and $\mathrm{sh}(k)$. Our construction of the quasi-free state $\Gamma$ is slightly different as its one-body density matrices are given exactly in terms of $k$. This makes the energy computation easier.
\end{remark}
We divide the proof of Lemma \ref{lem:upp} into several steps.
\subsection{Operator bound on Fock space} First, we analyse the action of the unitary transformation $U_N$.
\begin{lemma}[Operator bound on Fock space] \label{lem:UHU*} We have the operator inequality
\begin{align}
\1^{\le N} U_N H_N U_N^*\1^{\le N} \le \1_{\mathcal{F}_+} (\mathcal{G}_N + C(\mathcal{N}+1)^6 ) \1_{\mathcal{F}_+} \quad \text{on }\mathcal{F}(\mathfrak{H}_+),
\end{align}
where $\mathcal{G}_N$ is the following operator on the full Fock space $\mathcal{F}(L^2(\mathbb{R}^3))$:
\begin{equation*}
\mathcal{G}_N = \begin{multlined}[t]
N \int \left(|\nabla\varphi_{\rm GP}|^2 + V_{\rm ext} |\varphi_{\rm GP}|^2 \right) + \frac{N^2}{2} \int \left(V_N \ast \varphi_{\rm GP}^2\right) \varphi_{\rm GP}^2 \\
+ \sqrt{N} \left( a \left( \left(-\Delta + V_{\rm ext} + NV_N*\varphi_{\rm GP}^2\right)\varphi_{\rm GP}\right) + {\rm h.c.}\right) \\
+ \mathrm{d}\Gamma (-\Delta+V_{\rm ext}) + \frac{N}{2} \iint V_N (x-y) \varphi_{\rm GP}(x) \varphi_{\rm GP}(y) (a^*_x a^*_y + {\rm h.c.}) \mathrm{d} x\, \mathrm{d} y \\
+ \sqrt{N} \iint V_N(x-y) \varphi_{\rm GP}(x) (a^*_y a_{x} a_{y} + {\rm h.c.} ) \mathrm{d} x \mathrm{d} y \\
+ \frac{1+CN^{-1}}{2} \iint V_N(x-y) a^*_x a^*_y a_{x} a_{y} \mathrm{d} x \, \mathrm{d} y.
\end{multlined}
\end{equation*}
\end{lemma}
\begin{proof} Let us write $\varphi=\varphi_{\rm GP}$ for short. After a straightforward computation as in \cite[Section 4]{LNSS-15} (see also \cite[Appendix B]{LNS-15}) using \eqref{eq:2nd-Q} and the rules
\begin{equation*}
U_N a^*(\varphi)a(g_1) U_N^* = \sqrt{N-\mathcal{N}_+} a(g_1), \quad U_N a^*(g_1)a(g_2) U_N^* = a^*(g_1)a(g_2), \quad \forall g_1,g_2\in \mathfrak{H}_+
\end{equation*}
we obtain
\begin{equation} \label{eq:UHU-wG}
\1^{\le N} U_N H_N U_N^* \1^{\le N} = \1_{\mathcal{F}_+} \1^{\le N} \widetilde{\mathcal{G}}_N \1^{\le N} \1_{\mathcal{F}_+} \quad \text{ on } \mathcal{F}_+,
\end{equation}
where $\widetilde{\mathcal{G}}_N$ is the following operator on the truncated Fock space $\1^{\le N} \mathcal{F}(L^2(\mathbb{R}^3))$:
\begin{align} \label{eq:wGN}
\widetilde{\mathcal{G}}_N &= \left( (N-\mathcal{N}_+) \int \left(|\nabla\varphi|^2 + V_{\rm ext} |\varphi |^2 \right) + \frac{1}{2} (N-\mathcal{N}_+)(N-\mathcal{N}_+-1)\int \left(V_N \ast \varphi ^2\right) \varphi ^2 \right) \nonumber \\
&\qquad + \left( \sqrt{N-\mathcal{N}_+} a((-\Delta + V_{\rm ext})\varphi) + {\rm h.c.} \right) \nonumber\\
&\qquad+ \left( (N-\mathcal{N}_+-1) \sqrt{N-\mathcal{N}_+} a\left(\left(V_N*\varphi^2\right)\varphi\right) + {\rm h.c.} \right)\nonumber \\
&\qquad+ \left( \mathrm{d}\Gamma (-\Delta + V_{\rm ext}) + (N-\mathcal{N}_+) \mathrm{d}\Gamma(V_N*\varphi^2 + N^{-1} K) \right)\nonumber\\
&\qquad+ \left( \frac{1}{2} \iint K(x,y)a^*_x a^*_y \,\mathrm{d} x\, \mathrm{d} y\, \frac{\sqrt{(N-\mathcal{N}_+)(N-\mathcal{N}_+-1)}}{N} + {\rm h.c.} \right)\nonumber\\
&\qquad+ \left( \sqrt{N-\mathcal{N}_+} \iint V_N(x-y) \varphi (x) \, a^*_y a_{x}a_{y}\,\mathrm{d} x\, \mathrm{d} y+ {\rm h.c.} \right)\nonumber\\
&\qquad+ \frac{1}{2} \iint V_N(x-y) a^*_xa^*_y a_{x} a_{y}\,\mathrm{d} x\, \mathrm{d} y\nonumber\\
&=: {\rm (I)} + {\rm (II)} + {\rm (III)} + {\rm (IV)} + {\rm (V)} + {\rm (VI)} + {\rm (VII)}
\end{align}
where $K$ is the operator on $L^2(\mathbb{R}^3)$ with kernel $K(x,y)=\varphi(x) NV_N(x-y) \varphi(y)$. Here, unlike the presentation in \cite{LNSS-15,LNS-15}, we do not insert the projection $Q$ in the expression of $\widetilde{\mathcal{G}}_N$, because we have introduced the projection $\1_{\mathcal{F}_+}$ in \eqref{eq:UHU-wG}.
Now let us simplify $\widetilde{\mathcal{G}}_N$ further; it is a well-defined operator on $\1^{\le N}\mathcal{F}$.
\subsubsection*{Analysis of {\rm (I)}} Using $ |N-\mathcal{N}_+|\le N$, we have
\begin{equation} \label{eq:wGN-1}
{\rm (I)} \le N \int \left(|\nabla\varphi|^2 + V_{\rm ext} |\varphi |^2\right) + \frac{N^2}{2} \int \left(V_N \ast \varphi ^2\right) \varphi ^2.
\end{equation}
\subsubsection*{Analysis of {\rm (II)}} Let us replace $\sqrt{N-\mathcal{N}_+}$ by $\sqrt{N}$. By the Cauchy--Schwarz inequality we have
\begin{multline} \label{eq:wGN-2}
\pm \left( \left(\sqrt{N-\mathcal{N}_+}-\sqrt{N}\right) a((-\Delta + V_{\rm ext})\varphi) + {\rm h.c.} \right)\\
\begin{aligned}[b]
&\le N \left(\sqrt{N-\mathcal{N}_+}-\sqrt{N}\right)^2 + N^{-1} a^*((-\Delta + V_{\rm ext})\varphi) a((-\Delta + V_{\rm ext})\varphi)\\
&\le N \left( \frac{\mathcal{N}_+}{\sqrt{N-\mathcal{N}_+}+\sqrt{N} } \right)^2 + N^{-1} \mathcal{N} \norm{\left(-\Delta + V_{\rm ext}\right)\varphi}_{L^2}^2 \le C(\mathcal{N}^2+1).
\end{aligned}
\end{multline}
Here we have used $a^*(g)a(g)\le \mathcal{N} \|g\|_{L^2}^2$ and the fact that $(-\Delta + V_{\rm ext})\varphi \in L^2$.
\subsubsection*{Analysis of {\rm (III)}} We can replace $(N-\mathcal{N}_+-1)\sqrt{N-\mathcal{N}_+}$ by $N\sqrt{N}$ as
\begin{align} \label{eq:wGN-3}
&\pm \left( \left((N-\mathcal{N}_+-1) \sqrt{N-\mathcal{N}_+} -N\sqrt{N}\right) a\left(\left(V_N*\varphi^2\right)\varphi\right) + {\rm h.c.} \right)\nonumber\\
&\le N^{-1} \left((N-\mathcal{N}_+-1) \sqrt{N-\mathcal{N}_+} -N\sqrt{N} \right)^2 + N a^*\left(\left(V_N*\varphi^2\right)\varphi\right)a\left(\left(V_N*\varphi^2\right)\varphi\right)\nonumber\\
&\le C\left( \mathcal{N}_+^2 +1\right) + N \mathcal{N} \norm{\left(V_N*\varphi^2\right)\varphi}_{L^2}^2 \le C\left(\mathcal{N}^2+1\right).
\end{align}
Here we have used
\[
\norm{\left(V_N*\varphi^2\right)\varphi}_{L^2} \le \norm{V_N*\varphi^2}_{L^\infty} \norm{\varphi}_{L^2} \le \norm{V_N}_{L^1}\norm{\varphi^2}_{L^\infty} \norm{\varphi}_{L^2} \le CN^{-1}.
\]
\subsubsection*{Analysis of {\rm (IV)}} Similarly to \eqref{eq:K-op} we have $\|K\|_{\rm op} \le C$. Combining with the uniform bound $\|V_N*\varphi^2\|_{L^\infty} \le CN^{-1}$ used above, we have
\begin{align} \label{eq:wGN-4}
\pm (N-\mathcal{N}_+) \mathrm{d}\Gamma(V_N*\varphi^2 + N^{-1} K) \le C\mathcal{N}.
\end{align}
\subsubsection*{Analysis of {\rm (V)}} We can replace $N^{-1}\sqrt{(N-\mathcal{N}_+)(N-\mathcal{N}_+-1)}$ by $1$ as
\begin{align} \label{eq:wGN-5}
&\pm \left( \iint K(x,y)a^*_x a^*_y \left( \frac{\sqrt{(N-\mathcal{N}_+)(N-\mathcal{N}_+-1)}}{N} -1 \right) \,\mathrm{d} x\, \mathrm{d} y + {\rm h.c.}\right) \nonumber \\
&\le \iint \left( |K(x,y)| N^2 \left( \frac{\sqrt{(N-\mathcal{N}_+)(N-\mathcal{N}_+-1)}}{N} -1 \right)^2 + \frac{|K(x,y)|}{N^2}a_x^* a_y^* a_x a_y\right) \mathrm{d} x \mathrm{d} y \nonumber\\
&\le C \iint \left( NV_N(x-y) \left(\varphi^2(x)+\varphi^2(y) \right) \mathcal{N}_+^2 + N^{-1} \norm{\varphi}_{L^\infty}^2 V_N(x-y) a_x^* a_y^* a_x a_y\right) \mathrm{d} x \mathrm{d} y \nonumber\\
&\le C\mathcal{N}_+^2 + CN^{-1} \iint V_N (x-y) a_x^* a_y^* a_x a_y \mathrm{d} x \mathrm{d} y.
\end{align}
\subsubsection*{Analysis of {\rm (VI)}} We can replace $\sqrt{N-\mathcal{N}_+}$ by $\sqrt{N}$ as
\begin{multline} \label{eq:wGN-6}
\pm \left( \left(\sqrt{N}- \sqrt{N-\mathcal{N}_+}\right) \iint V_N(x-y) (\varphi(x) a_y^* a_x a_y + {\rm h.c.}) \mathrm{d} x\, \mathrm{d} y \right) \\
\begin{aligned}[b]
&\le \begin{multlined}[t]
N \norm{\varphi}_{L^\infty}^2 \iint |V_N(x-y)| a_y^* \left(\sqrt{N}- \sqrt{N-\mathcal{N}_+}\right)^2 a_y \mathrm{d} x\, \mathrm{d} y \\
+ N^{-1}\iint V_N(x-y) a_x^* a_y^* a_x a_y \mathrm{d} x \mathrm{d} y
\end{multlined}\\
&\le C\mathcal{N}^2 + N^{-1}\iint V_N(x-y) a_x^* a_y^* a_x a_y \mathrm{d} x \mathrm{d} y.
\end{aligned}
\end{multline}
Here we have used $(\sqrt{N}- \sqrt{N-\mathcal{N}_+})^2\le N^{-1}\mathcal{N}_+ \le \mathcal{N}_+$ and
\[
\int a^*_y \mathcal{N}_+ a_y \mathrm{d} y = \int a^*_y a_y (\mathcal{N}_+ +1) \mathrm{d} y = \mathcal{N} (\mathcal{N}_++1)\le 2 \mathcal{N}^2.
\]
\subsubsection*{Conclusion} Inserting \eqref{eq:wGN-1}--\eqref{eq:wGN-6} in \eqref{eq:wGN}, we deduce from \eqref{eq:UHU-wG} that
\begin{equation} \label{eq:GN-1<}
\1^{\le N} U_N H_N U_N^* \1^{\le N} \le \1_{\mathcal{F}_+}\1^{\le N} ( \mathcal{G}_N + C(\mathcal{N}+1)^2 ) \1^{\le N} \1_{\mathcal{F}_+} \quad \text{ on } \mathcal{F}_+.
\end{equation}
Now we remove the cut-off $\1^{\le N}$ on the right side of \eqref{eq:GN-1<}. For all terms which are positive and commute with $\mathcal{N}$, the cut-off $\1^{\le N}$ can be removed for an upper bound. It remains to consider the operator
\begin{multline*}
F := \sqrt{N} \left( a \left( \left(-\Delta + V_{\rm ext} + NV_N*\varphi^2 \right)\varphi \right) + {\rm h.c.}\right) + \frac{1}{2} \iint K(x,y) (a^*_x a^*_y + {\rm h.c.}) \mathrm{d} x\, \mathrm{d} y \\
+ \sqrt{N} \iint V_N(x-y) \varphi(x) (a^*_y a_{x} a_{y} + {\rm h.c.} ) \mathrm{d} x \mathrm{d} y
\end{multline*}
on $\mathcal{F}$. By the Cauchy--Schwarz inequality we can bound
\begin{align} \label{eq:F-F1-CN}
\pm F &\le \begin{multlined}[t]
N + \mathcal{N} \norm{\left(-\Delta + V_{\rm ext} + NV_N*\varphi^2\right)\varphi}_{L^2}^2 + \iint \left(|K(x,y)|^2 + a_x^*a_y^* a_x a_y \right) \mathrm{d} x \mathrm{d} y\\
+ \iint \left(N |V_N(x-y)|^2 |\varphi(x)|^2 a_y^* a_y + a_x^*a_y^* a_x a_y \right) \mathrm{d} x \mathrm{d} y
\end{multlined}\nonumber\\
&\le C(N^3 + \mathcal{N}^2).
\end{align}
Denote
\[ F_1:= F + C_0(N^3 + \mathcal{N}^2) \ge 0 \quad \textrm{ and } \quad \1^{>N}=\1-\1^{\le N}. \]
By the Cauchy-Schwarz inequality and \eqref{eq:F-F1-CN} we can bound
\begin{align*}
\1^{\le N} F \1^{\le N} - F &= - \1^{\le N} F_1 \1^{>N} - \1^{>N} F_1\1^{\le N} - \1^{>N} F\1^{>N}\\
&\le N^{-3} \left(\1^{\le N} F_1 \1^{\le N}\right) + N^3 \left( \1^{>N} F_1 \1^{>N}\right) - \1^{>N} F\1^{>N}\\
&\le CN^{-3} \left(N^3+\mathcal{N}^2\right) \1^{\le N} + C N^{3} \left(N^3+\mathcal{N}^2\right) \1^{>N} \le C (\mathcal{N} +1) ^6.
\end{align*}
Thus in conclusion, we have
\[
\1^{\le N} \mathcal{G}_N\1^{\le N} \le \mathcal{G}_N + C(\mathcal{N}+1)^6.
\]
Inserting this in \eqref{eq:GN-1<} we conclude the proof of Lemma \ref{lem:UHU*}.
\end{proof}
\subsection{Conclusion of upper bound}
\begin{proof}[Proof of Lemma \ref{lem:upp}]
Now consider the mixed state $\Gamma_N=U_N^* \1^{\le N} \Gamma \1^{\le N} U_N$. Again we will write $\varphi=\varphi_{\rm GP}$ for short.
\subsubsection*{Trace normalization} Since
\begin{equation} \label{eq:1>N-tre}
\1^{>N}:=\1-\1^{\le N} \le \mathcal{N}^s N^{-s}, \quad \forall s\ge 1
\end{equation}
we have
\begin{align} \label{eq:trace-Gamma-N}
0 &\le 1- \mathrm{Tr} \Gamma_N = 1- \mathrm{Tr}\left(\1^{\le N}\Gamma\right) = \mathrm{Tr} \left(\1^{>N}\Gamma\right) \nonumber\\
&\le N^{-s} {\rm Tr}\,\left( \mathcal{N}^s \Gamma \right) \le N^{-s} C_s (1+ \mathrm{Tr}\left(\mathcal{N} \Gamma\right))^s \le C_s N^{-s}, \quad \forall s\ge 1.
\end{align}
Here we have used \eqref{eq:fluc-N} and $\mathrm{Tr}(\mathcal{N} \Gamma)= \mathrm{Tr} \gamma\le \mathrm{Tr} k^2 \le C$.
\subsubsection*{Energy expectation}
Thanks to Lemma \ref{lem:UHU*}, we have
\begin{equation} \label{eq:Tr-HN-GN}
\mathrm{Tr}\left(H_N \Gamma_N\right)= \mathrm{Tr}\left( \1^{\le N} U_N H_N U_N^* \1^{\le N} \Gamma\right) \le \mathrm{Tr}\left( \left(\mathcal{G}_N + C(\mathcal{N}+1)^6\right) \Gamma\right).
\end{equation}
Using \eqref{eq:fluc-N} again we have $\mathrm{Tr}((\mathcal{N}+1)^6\Gamma)\le C$. Moreover, since $\Gamma$ is a quasi-free state on $\mathcal{F}_+$ with one-body density matrices $(\gamma,\alpha)$, by Wick's Theorem we have
\begin{align} \label{eq:GN-Gamma}
\mathrm{Tr} (\mathcal{G}_N \Gamma) &= N \int \left(|\nabla\varphi |^2 + V_{\rm ext} |\varphi |^2 \right) + \frac{N^2}{2} \int \left(V_N \ast \varphi ^2\right) \varphi ^2 \nonumber\\
& \quad + \mathrm{Tr}((-\Delta+V_{\rm ext})\gamma)+ \Re N \iint V_N(x-y) \varphi(x) \varphi(y) \alpha(x,y) \mathrm{d} x \mathrm{d} y \\
&\quad + \frac{1+CN^{-1}}{2}\iint V_N(x-y) \left(\gamma(x,x)\gamma(y,y) +|\gamma(x,y)|^2 + |\alpha(x,y)|^2 \right) \mathrm{d} x \mathrm{d} y. \nonumber
\end{align}
It remains to evaluate the right side of \eqref{eq:GN-Gamma} term by term.
\subsubsection*{Kinetic energy} Using $\gamma=Qk^2Q=(\1-P)k^2(\1-P)$ with $P=|\varphi\rangle\langle\varphi|$, we can decompose
\[
\mathrm{Tr}\left(-\Delta \gamma\right)= \mathrm{Tr}\left(-\Delta k^2\right) + 2\Re \mathrm{Tr}\left(\Delta P k^2 \right) + \mathrm{Tr}\left(-\Delta P k^2 P\right).
\]
We have
\begin{align*}
\mathrm{Tr}\left(-\Delta P k^2 P\right)&= \langle \varphi, -\Delta \varphi\rangle \left\langle \varphi, k^2 \varphi\right\rangle \le C, \\
\left|\mathrm{Tr}\left(\Delta P k^2\right)\right| &= \left|\langle \varphi, k^2 (\Delta \varphi)\rangle \right| \le \norm{\varphi}_{L^2} \norm{\Delta\varphi}_{L^2} \norm{k^2}_{\rm op} \le C.
\end{align*}
Now consider the main term $\mathrm{Tr}(-\Delta k^2)$. Similarly to \eqref{eq:K-phi-g}, we write the operator $k$ as
\[ k= -\varphi(x) N\widehat{\omega_N}(p) \varphi(x)\quad \text{ on } L^2(\mathbb{R}^3) \]
where $\omega_N=1-f_N$ and $\varphi(x)$, $\widehat{\omega_N}(p)$ are multiplication operators on the position and momentum spaces. By the IMS formula \eqref{eq:IMS-formula} we can decompose
\begin{align*}
\mathrm{Tr}(-\Delta k^2)&= N^2 {\rm Tr}\,\left( \varphi(x) p^2 \varphi(x) \widehat{\omega_N}(p) \varphi^2(x) \widehat{\omega_N}(p) \right)\\
&=\begin{multlined}[t]
\frac{N^2}{2} \mathrm{Tr} \left( \left(\varphi^2(x) p^2 + p^2 \varphi^2(x)\right) \widehat{\omega_N}(p) \varphi^2(x) \widehat{\omega_N}(p) \right) \\
+ N^2 {\rm Tr}\,\left( |\nabla \varphi(x)|^2 \widehat{\omega_N}(p) \varphi^2(x) \widehat{\omega_N}(p) \right).
\end{multlined}
\end{align*}
The first term can be computed exactly using the scattering equation \eqref{eq:scatering-eq-N-omega} and \eqref{eq:tr-int}:
\begin{multline*}
\frac{N^2}{2} \mathrm{Tr} \left( \left(\varphi^2(x) p^2 + p^2 \varphi^2(x)\right) \widehat{\omega_N}(p) \varphi^2(x) \widehat{\omega_N}(p) \right)\\
=\frac{N^2}{2} \Re \mathrm{Tr} \left( \varphi^2(x) \widehat{V_Nf_N}(p) \varphi^2(x) \widehat{\omega_N}(p) \right) = \frac{N^2}{2} \int N \left(\left(V_Nf_N \omega_N\right)*\varphi^2\right) \varphi^2.
\end{multline*}
The second term can be bounded using \eqref{eq:tr-int}
\[ N^2 {\rm Tr}\,\left( |\nabla \varphi(x)|^2 \widehat{\omega_N}(p) \varphi^2(x) \widehat{\omega_N}(p) \right)= N^2 \int \left(\omega_N^2*\varphi^2\right) |\nabla \varphi|^2 \le C \norm{\nabla \varphi}_{L^2}^4. \]
Here we have used
\begin{equation} \label{eq:omega*phi2}
N^2 \norm{\omega_N^2*\varphi^2}_{L^\infty} \le C \norm{|x|^{-2}* \varphi^2}_{L^\infty} \le C \norm{\nabla \varphi}_{L^2}^2
\end{equation}
by \eqref{eq:bounds_on_w} and Hardy's inequality $|x|^{-2}\le 4(-\Delta)$. Thus
\begin{equation} \label{eq:GN-Gamma-1}
\mathrm{Tr}(-\Delta \gamma)= \frac{N^2}{2} \int N \left(\left(V_Nf_N (1-f_N)\right) *\varphi^2\right) \varphi^2 + O(1).
\end{equation}
\subsubsection*{External potential energy} Using \eqref{eq:omega*phi2} again, we have
\begin{align*}
|(k^2)(x,y)|&= \left| \int \mathrm{d} z k(x,z) k(z,y) \right| = N^2 \left| \int \mathrm{d} z \varphi(x) \omega_N(x-z) \varphi^2(z) \omega_N(z-y) \varphi(y) \right| \\
&\le \varphi(x) \varphi(y) \frac{N^2}{2} \int \mathrm{d} z \left(\omega_N^2(x-z)+ \omega_N^2(z-y)\right) \varphi^2(z) \le C \varphi(x) \varphi(y).
\end{align*}
Hence,
\begin{align} \label{eq:gamma-x-y}
|\gamma(x,y)| &= \left|\left((\1-P) k^2 (\1-P)\right)(x,y)\right| \nonumber\\
&=\begin{multlined}[t]
\left| \left(k^2\right)(x,y) - \varphi(x) \int \mathrm{d} z \varphi(z) \left(k^2\right)(z,y) - \varphi(y) \int \mathrm{d} z \varphi(z) \left(k^2\right)(x,z)\right. \\
\left.+ \varphi(x)\varphi(y) \iint \mathrm{d} z \,\mathrm{d} z' \varphi(z)\varphi(z') \left(k^2\right)(z,z') \right|
\end{multlined}\nonumber\\
&\le C \varphi(x) \varphi(y).
\end{align}
Consequently,
\begin{align} \label{eq:GN-Gamma-2}
\mathrm{Tr}(V_{\rm ext} \gamma) = \int V_{\rm ext}(x) \gamma(x,x) \mathrm{d} x \le C \int V_{\rm ext} \varphi^2 \le C.
\end{align}
\subsubsection*{Bogoliubov pairing energy} Since $\alpha(x,y)=(Q\otimes Q k)(x,y)$, by decomposing $Q=\1-P$ we have
\begin{align} \label{eq:alpha-x-y}
|\alpha(x,y) - k(x,y)| &=
\begin{multlined}[t]
\left| - \varphi(x) \left(\varphi^2*N\omega_N\right)(y) \varphi(y) - \varphi(x) \left(\varphi^2*N\omega_N\right)(x) \varphi(y)\vphantom{\int}\right.\\
\left.+ \varphi(x)\varphi(y) \int \left(N\omega_N* \varphi^2\right)\varphi^2 \right|
\end{multlined}\nonumber\\
&\le C \varphi(x) \varphi(y).
\end{align}
Here we have used
\[ N\norm{\omega_N*\varphi^2}_{L^\infty} \le C \norm{|x|^{-1}*\varphi^2}_{L^\infty}\le C \]
which is similar to \eqref{eq:omega*phi2}. Thus
\begin{align} \label{eq:GN-Gamma-3}
&\Re N \iint V_N(x-y) \varphi(x) \varphi(y) \alpha(x,y) \mathrm{d} x \mathrm{d} y \nonumber\\
&\le \Re N \iint V_N(x-y) \varphi(x) \varphi(y) \big(k(x,y) + C\varphi(x)\varphi(y)\big) \mathrm{d} x \mathrm{d} y \nonumber\\
&= - N^2 \iint (V_N\omega_N)(x-y) \varphi^2(x) \varphi^2(y) \mathrm{d} x \mathrm{d} y + C N \iint V_N(x-y) \varphi^2(x) \varphi^2(y) \mathrm{d} x \mathrm{d} y \nonumber\\
&= - N^2 \int \left((V_N(1-f_N))*\varphi^2\right)\varphi^2 +O(1).
\end{align}
In the last estimate we have used
\begin{equation} \label{eq:GN-Gamma-easyyy}
N \iint \mathrm{d} x \mathrm{d} y V_N(x-y) \varphi^2(x) \varphi(y)^2 \le C \norm{\varphi}_{L^\infty}^2 \norm{\varphi}_{L^2}^2 N\norm{V_N}_{L^1}\le C.
\end{equation}
\subsubsection*{Interaction energy} Using \eqref{eq:gamma-x-y} and \eqref{eq:GN-Gamma-easyyy} we have
\begin{equation} \label{eq:GN-Gamma-4}
\iint \mathrm{d} x \mathrm{d} y V_N(x-y) \left( \gamma(x,x) \gamma(y,y) + |\gamma(x,y)|^2\right) \le CN^{-1}.
\end{equation}
Moreover, by \eqref{eq:alpha-x-y} and the Cauchy--Schwarz inequality,
\[
|\alpha(x,y)|^2 \le (1+N^{-1}) |k(x,y)|^2 + CN |\varphi(x)|^2 |\varphi(y)|^2.
\]
Therefore,
\begin{equation} \label{eq:GN-Gamma-5}
\iint V_N(x-y) |\alpha(x,y)|^2 \mathrm{d} x \mathrm{d} y \le N^2 \int \left(\left(V_N (1-f_N)^2\right) *\varphi^2\right) \varphi^2 + C.
\end{equation}
\subsubsection*{Conclusion} Inserting \eqref{eq:GN-Gamma-1}, \eqref{eq:GN-Gamma-2}, \eqref{eq:GN-Gamma-3}, \eqref{eq:GN-Gamma-4}, \eqref{eq:GN-Gamma-5} in \eqref{eq:Tr-HN-GN}-\eqref{eq:GN-Gamma} we find that
\begin{align*}
\mathrm{Tr}(H_N\Gamma_N)& \le N \int \left(|\nabla\varphi|^2 + V_{\rm ext} \varphi^2\right) + \frac{N^2}{2} \int \left(V_N \ast \varphi^2\right) \varphi^2 \nonumber\\
& \quad + \frac{N^2}{2} \int \left((V_N f_N(1-f_N))*\varphi^2\right)\varphi^2 - N^2 \int \left((V_N (1-f_N))* \varphi^2\right) \varphi^2 \\
&\quad +\frac{N^2}{2}\int \left((V_N (1-f_N)^2) *\varphi^2\right) \varphi^2 + C\\
&= N \int \left(|\nabla\varphi|^2 + V_{\rm ext} \varphi^2 \right) + \frac{N^2}{2} \int \left((V_N f_N) \ast \varphi^2\right) \varphi^2 +C.
\end{align*}
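The last equality is a pointwise computation with $\omega_N = 1-f_N$, which we record for convenience:
\begin{equation*}
\frac{1}{2} + \frac{1}{2} f_N(1-f_N) - (1-f_N) + \frac{1}{2}(1-f_N)^2 = \frac{1}{2}\bigl(1-(1-f_N)\bigr) = \frac{f_N}{2},
\end{equation*}
multiplied by $V_N$ and convolved against $\varphi^2$; here we used $f_N(1-f_N) + (1-f_N)^2 = 1-f_N$.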
Finally, by Young's inequality we have
\begin{align*}
\frac{N^2}{2} \int \left(V_N f_N \ast \varphi^2\right) \varphi^2 &\le \frac{N^2}{2} \norm{(V_Nf_N)*\varphi^2}_{L^2(\mathbb{R}^3)} \norm{\varphi^2}_{L^2(\mathbb{R}^3)}\\
&\le \frac{N^2}{2} \norm{V_Nf_N}_{L^1(\mathbb{R}^3)} \norm{\varphi^2}_{L^2(\mathbb{R}^3)}^2= 4\pi \mathfrak{a} N \int \varphi^4,
\end{align*}
where we have used $\norm{V_Nf_N}_{L^1(\mathbb{R}^3)} = 8\pi \mathfrak{a} N^{-1}$, which follows from the scattering equation \eqref{eq:scatering-eq-N}.
Thus
\begin{align*}
\mathrm{Tr}(H_N\Gamma_N) \le N \int \left(|\nabla\varphi|^2 + V_{\rm ext} \varphi^2 \right) + 4\pi \mathfrak{a} N \int \varphi^4 + C = N e_{\rm GP} +C.
\end{align*}
Finally, by the variational principle
\[
E_N = \inf {\rm Spec}(H_N) \le \frac{\mathrm{Tr}(H_N\Gamma_N) }{\mathrm{Tr} \Gamma_N} \le \frac{N e_{\rm GP} +C}{ 1-CN^{-1}} \le N e_{\rm GP} + O(1).
\]
This ends the proof of Lemma \ref{lem:upp}.
\end{proof}
The proof of Theorem \ref{thm:main} is complete.
\section*{Introduction}
\input{sections/introduction.tex}
\input{sections/methods.tex}
\input{sections/results.tex}
\input{sections/discussion.tex}
\input{sections/acknowledgements.tex}
\section{Acknowledgements}
\label{sec:acknowledgements}
DG acknowledges funding from the University of Birmingham Dynamic Investment Fund and the EPSRC Centre grant EP/N014391/2.
DJH acknowledges funding from the MRC Projects MR/N00275X/1 and MR/S025618/1 and the Diabetes UK Project Grant 17/0005681.
This project has also received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Starting Grant 715884 to DJH).
KCAW acknowledges funding from the MRC Fellowship MR/P01478X/1 and the Hub for Quantitative Modelling in Healthcare EP/T017856/1.
\section{Discussion}
\label{sec:discussion}
In this manuscript, we demonstrated how transitions to globally-coordinated activity are dependent on the degree of sortedness in population excitability.
We used a prototypical model of pancreatic beta cells in which a small population was highly excitable, whilst a larger population was less excitable.
As the global drive to the network was increased, activity across the network transitioned from a globally inactive state to one in which subsets of nodes became active and synchronised their activity.
By perturbing the spatial distribution of the highly excitable population, we showed that the drive strength at which such transitions occur is dependent on the sortedness of the network.
These results have specific implications for insulin secretion in the pancreatic islets of Langerhans, and more general implications regarding transitions to synchrony and other forms of collective dynamics in networks of coupled excitable units.
To perform our study, we developed \Alg{alg:orig}, which perturbs the sortedness of the network in a directed manner.
Whilst our algorithm is tailored towards spherical geometries and local, diffusive coupling, it can be adapted to other geometries and coupling types, since the neighbourhoods can be succinctly encoded in the adjacency matrix.
In addition, although our study focused on conditions in which there are only two different populations, \Sec{sec:methods} discusses how our metrics can be extended to networks with more population types.
Given the growing interest in studying heterogeneous populations in complex networks, we hope that our algorithms will prove useful to other researchers in the future.
\section{Introduction}
\label{sec:intro}
Many nonlinear systems exhibit \textit{excitable} behaviour, whereby
they exhibit large-amplitude oscillations in response to
small-amplitude, transient perturbations. Such excitable dynamics are
observed in semiconductor lasers \cite{Terrien2020,Terrien2021}, social media networks \cite{Mathiesen2013}, epidemiology \cite{Vannucchi2004}, and
wildfires \cite{Punckt2015}. One prominent example is electrically excitable cells, such as neurons \cite{Izhikevich2000a,DeMaesschalck2015a,Wedgwood2021a}, cardiac cells \cite{Majumder2018,Barrio2020}, pituitary cells \cite{Sanchez-Cardenas2010,Hodson2012a} and
pancreatic beta cells \cite{Bertram2007a,McKenna2016a}.
When excitable units are combined into networks, they can generate complex
rhythms \cite{Bittihn2017,Horning2017,Fretter2017}. Interestingly, such networks may also generate
dynamics that occur over low-dimensional manifolds of the full system \cite{Ashwin92,Watanabe1993,Ott2009,Bick2020}.
For example, neurons in the pre-B\"otzinger complex fire synchronously to induce the inspiratory
and expiratory phases during breathing \cite{Wittmeier2008,Gaiteri2011}.
Heterogeneity is ubiquitous in natural systems.
Whilst often portrayed as an undesirable attribute, it can play an important role in governing network dynamics \cite{Manchanda2017,Delgado2018,Lambert2018}.
For example, neurons may coarsely be stratified into excitatory and inhibitory groups, with the former promoting firing behaviour
in other neurons and the latter suppressing it.
When coupled, these neuronal subtypes give rise to a variety of behaviours, including synchronisation, and enable the network to respond differentially to incoming inputs \cite{Borgers2003,Borgers2005,Kopell2010}.
The classification of neuronal subtypes is becoming ever finer \cite{Gouwens2019,Lipovsek2021} and it remains an open question as to how this heterogeneity governs overall brain dynamics.
Even when networks comprise only a single unit type, heterogeneity may still impact the global dynamics.
For example, if the natural frequencies of nodes in a coupled oscillator network are too far apart, the network will be unable to synchronise and will instead display more complex rhythms \cite{Ottino-Loffler2016}.
Here, we explore transitions to synchrony in a locally-coupled
network of heterogeneous, excitable nodes. As a motivating example, we consider
networks of pancreatic beta cells. Individually,
these cells exhibit excitable dynamics akin to the
Hodgkin--Huxley model of nerve cells \cite{Hodgkin1952}. Cells remain at rest
until they receive a sufficiently large electrical impulse or the extracellular concentration of glucose surpasses a threshold value \cite{Ashcroft1989,Braun2008}.
Under sustained suprathreshold stimulation, cells exhibit repetitive bursting-type dynamics comprising epochs of firing activity, followed by periods of rest \cite{Kinard1999b}.
Beta cells are arranged into diffusively and locally coupled networks via
channels known as gap junctions \cite{Rorsman2012,Benninger2011}.
These networks exhibit synchronous bursting activity when exposed to sufficiently high levels of glucose \cite{Markovic2015a}.
Although exogenous factors, such as incretin \cite{Hodson2014a} and paracrine \cite{Caicedo2013} signalling influence this coordinated beta cell response, the importance of intercellular coupling has been highlighted in
several studies that demonstrate that synchronous beta cell rhythms are disrupted when gap junctions are
blocked \cite{Head2012,Benninger2014a}.
Based on empirical evidence from rodents, it has generally been assumed that beta cells form a \textit{syncytium}, such that the activity of the network can be described by that of a single cell \cite{Dolensek2013,Satin2020a,Podobnik2020}.
Recent studies have challenged this perspective, highlighting that some `leader cells' disproportionately influence the activity of an entire network made up primarily of `follower cells' \cite{Johnston2016a,Westacott2017,Salem2019,Benninger2021}.
One hypothesis suggests that islets are composed of a small proportion ($\sim$10\%) of
highly excitable cells, with the remainder being less excitable \cite{Benninger2018b}.
In this study, we explore how the spatial organisation of these two sub-populations affects the propensity of the whole network to oscillate in a synchronous fashion.
The remainder of the manuscript is arranged as follows: In \Sec{sec:methods}, we describe the beta cell model, introduce a metric that captures how sorted a network is with respect to its heterogeneity, and present an algorithm that can generate networks with arbitrary sortedness.
In \Sec{sec:results}, we investigate how dynamical
transitions to synchronous bursting depend on the degree of sortedness in the network, and we end in \Sec{sec:discussion} with concluding remarks.
\section{Methods}
\label{sec:methods}
\subsection{Mathematical model}
We consider a network of $N$ diffusively-coupled excitable cells, based on a model describing electrical activity in pancreatic beta cells in the presence of glucose \citep{Sherman1988a}. These cells exhibit \textit{bursting} dynamics (in voltage $v$) when the glucose level, $G \in [0,1]$, is sufficiently high. The system possesses a slow variable, $c$, representing Ca\textsuperscript{2+} concentration, which oscillates when the cell is active. We arrange $N = 1{,}018$ nodes on a hexagonal close-packed (hcp) lattice embedded within a sphere. Each node is connected to its nearest neighbours via gap-junction coupling. The parameter $\overline{g}_L$ sets the excitability of single cells within the network (Fig.~\ref{fig:single_cell}). We define two sub-populations of nodes distinguished by their excitability: population 1 is highly excitable ($\overline{g}_L = 60$) and population 2 is less excitable ($\overline{g}_L = 100$). We then consider the range of $G$ over which population 1 nodes are intrinsically active, while population 2 nodes are intrinsically inactive. A full description of the mathematical model is provided in \Sec{sec:model}.
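The gap-junction term entering each node's voltage equation is ordinary diffusive coupling over the lattice neighbourhood. As a sketch (not the full model of \Sec{sec:model}; the function name and the coupling strength `g_c` are illustrative):

```python
import numpy as np

def coupling_current(v, adj, g_c=0.1):
    """Gap-junction (diffusive) coupling: for node i, g_c * sum_{j in J_i} (v_j - v_i).

    v   : length-N array of membrane voltages
    adj : (N, N) symmetric 0/1 adjacency matrix of the hcp lattice
    """
    # adj @ v sums neighbour voltages; adj.sum(axis=1) * v subtracts deg(i) * v_i
    return g_c * (adj @ v - adj.sum(axis=1) * v)
```

The same expression can be written as $-g_c L v$, where $L$ is the graph Laplacian of the lattice.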
\subsection{Measuring sortedness}
\label{sec:assort}
To track the degree of sortedness in the network, we define a \textit{node sortedness} measure that, for a given node, measures the proportion of neighbours that are of the same population type. For a general network with nodes attributed to $K \in \mathbb{N}$ populations, the node sortedness, $A_i$, is defined as
\begin{equation}
A_i = \frac{1}{|J_i|}\sum_{j\in J_i} \chi_{ij}, \quad \chi_{ij} =
\sum_{k=1}^K \mu_i^{(k)} \mu_j^{(k)}, \quad i = 1,\dots, N, \quad
\mu_{i}^{(k)} =
\begin{cases}
1, & i \in P_k \\
0, & \text{otherwise}
\end{cases},
\label{eq:node_assort}
\end{equation}
\noindent
where the \textit{population sets} $P_k$ contain the indices of the nodes within population $k = 1,\dots,K$ and form a partition over the node indices $\{ 1, 2, \dots, N\}$, $J_i$ is the set of indices of nodes that are adjacent to node $i$, $\mu_i^{(k)}$ is an indicator function that takes value 1 if $i$ belongs to population $k$ and value 0 otherwise,
and $\chi_{ij}$ is an indicator function that takes value 1 when node $i$ and $j$ belong to the same population and value 0 otherwise.
For each population, the average node sortedness is defined via
\begin{equation}
\overline{A}_k = \frac{1}{|P_k|} \sum_{n \in P_k} A_n, \quad k = 1, 2, \dots, K.
\label{eq:ave_node_assort}
\end{equation}
\noindent
Finally, the \textit{network sortedness} is defined as
\begin{equation}
\mathcal{A} = \frac{1}{K-1}\left(-1 + \sum_{k=1}^{K}{\overline{A}_k}\right),
\label{eq:net_assort}
\end{equation}
\noindent
where $\mathcal{A}\in [{-1}/(K - 1),1]$ and, for the present case with $K=2$, $\mathcal{A}\in [-1,1]$.
For a network in which populations are assigned to nodes following a uniformly random distribution,
$\mathcal{A}\approx 0$, since $\overline{A}_{k}$ is approximately equal to ${N_k}/{N}$, where $N_k$ is the number of nodes in population $k$, and therefore $\sum_{k}\overline{A}_{k} \approx 1$.
An illustration of the computation of the sortedness metrics \eqref{eq:node_assort}-\eqref{eq:net_assort} is shown in Fig.~\ref{fig:assortativity}.
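As an illustration, the metrics \eqref{eq:node_assort}--\eqref{eq:net_assort} can be computed in a few lines. The Python sketch below is not part of our pipeline; representing the network as a dictionary of neighbour lists and using zero-based population labels are implementation choices.

```python
import numpy as np

def network_sortedness(adj, labels):
    """Network sortedness A for a partition of nodes into K populations.

    adj    : dict mapping node index -> iterable of neighbour indices
    labels : sequence of population labels (0..K-1), one per node
    """
    labels = np.asarray(labels)
    K = labels.max() + 1
    N = len(labels)
    # node sortedness A_i: fraction of neighbours sharing node i's label
    A = np.array([np.mean([labels[j] == labels[i] for j in adj[i]])
                  for i in range(N)])
    # population-averaged node sortedness Abar_k
    Abar = np.array([A[labels == k].mean() for k in range(K)])
    # network sortedness, taking values in [-1/(K-1), 1]
    return (Abar.sum() - 1.0) / (K - 1)
```

For a path of four nodes split into two contiguous pairs the value is $0.5$ (the boundary edge lowers it from $1$), while an alternating labelling gives $-1$.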
\subsection{Modifying network sortedness}
\label{sec:algorithm}
Here, we describe our approach for generating networks with different degrees of network sortedness.
The algorithm works by exchanging the population type of nodes from different populations randomly
to increase (or decrease) $\mathcal{A}$.
The algorithm begins by randomly permuting the order of the $N$ indices.
The first $N_1$ indices of the permuted sequence are attributed to $P_1$, with the remaining $N_2$ indices attributed to $P_2$, yielding a distribution of population 1 nodes that is uniformly random in space.
On each iteration, $a$, of the algorithm, pairs of nodes (from different populations) are sampled without replacement from a joint probability density function (pdf)
\begin{equation}
P(X=i, Y=j) = f(i,j), \quad i \in P_1, j \in P_2,
\label{eq:pdf}
\end{equation}
\noindent
where $X$ and $Y$ are random integer variables indicating the node selected from population 1 and 2, respectively.
The population types of these nodes are then exchanged, that is, if $i \in P_1$ and $j \in P_2$, then $i$ is added to $P_2$ and removed from $P_1$ and vice versa for $j$.
The network sortedness \eqref{eq:net_assort} is then recomputed for the adjusted population sets.
If the exchange leads to an increase (decrease) in $\mathcal{A}$, the exchange is accepted and the algorithm proceeds to iteration $a+1$.
If the exchange does not lead to an increase (decrease) in $\mathcal{A}$, the exchange is rejected and indices $i$ and $j$ are placed back in $P_1$ and $P_2$, respectively.
In this case, a new pair of nodes is drawn from $f$ and the process is repeated until either: a pair whose exchange leads to an increase (decrease) in $\mathcal{A}$ is found and the algorithm proceeds to the next iteration; or it is determined that no such pair exists, at which point the algorithm terminates.
An example of one iteration of this algorithm is depicted in Fig.~\ref{fig:alg_iteration}.
We refer to the algorithm in which swaps are accepted only if they lead to an increase (decrease) in $\mathcal{A}$ as the \textit{forward} (\textit{backward}) algorithm.
We define $\mathcal{A}_{a}$ to be the evaluation of $\mathcal{A}$ of the network after $a$ iterations.
Running the algorithm to convergence produces the sets $\mathcal{P}_{k} = \{P_{k}^{a}\}_{a=0}^{a_\text{final}}$ containing the population sets after each iteration.
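A minimal sketch of the forward/backward algorithm is given below. For brevity it samples candidate pairs uniformly at random rather than from the radially reweighted pdf introduced later, and the function and variable names are illustrative.

```python
import random

def sort_network(adj, labels, sortedness_fn, direction=+1, rng=None):
    """Greedy label-swapping sketch of the forward (direction=+1) /
    backward (direction=-1) algorithm with uniform pair sampling.

    Returns the final labels and the history of network-sortedness
    values after each accepted swap.
    """
    rng = rng or random.Random(0)
    labels = list(labels)
    history = [sortedness_fn(adj, labels)]
    while True:
        pop1 = [i for i, l in enumerate(labels) if l == 0]
        pop2 = [i for i, l in enumerate(labels) if l == 1]
        pairs = [(i, j) for i in pop1 for j in pop2]
        rng.shuffle(pairs)  # sample candidate pairs without replacement
        for i, j in pairs:
            labels[i], labels[j] = labels[j], labels[i]
            new = sortedness_fn(adj, labels)
            if direction * (new - history[-1]) > 0:  # accept improving swap
                history.append(new)
                break
            labels[i], labels[j] = labels[j], labels[i]  # revert rejected swap
        else:
            return labels, history  # no improving pair exists -> converged
```

Since every accepted swap strictly improves $\mathcal{A}$ and the number of configurations is finite, the sketch terminates.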
\subsubsection{Modified sortedness metrics}
\label{sec:modified_alg}
Although the algorithm yields well-sorted networks with a small number of clusters of nodes from population 1, these clusters preferentially form at the edges of the domain.
The average node sortedness, as defined in \eqref{eq:ave_node_assort}, for population 1 is maximised when a single cluster of population 1 nodes is coupled to the smallest possible number of population 2 nodes.
This naturally occurs at the edges of the domain, since any cluster of population 1 nodes in the domain interior must be surrounded by population 2 nodes.
We are interested in the dynamics that arise as the small population of highly excitable cells forms clusters within the lattice,
hence, we wish to remove this tendency for clusters to form at the domain boundary.
To overcome this, we use a modified definition of the node sortedness \eqref{eq:node_assort}
\begin{equation}
\widetilde{A}_i = \frac{1}{J} \sum_{j\in J_i} \chi_{ij} + \frac{\mu_i^{(2)}\left(J - |J_i|\right)}{J}, \quad i = 1, \dots, N,
\label{eq:node_assort_rev}
\end{equation}
\noindent
where $J = 12$ is the number of connections that interior lattice nodes possess.
For nodes with $|J_{i}| < J$ (i.e., nodes on the domain boundary) the additional term in \eqref{eq:node_assort_rev} compared to \eqref{eq:node_assort} incorporates a further $J-|J_{i}|$ connections to population 2 nodes for the purposes of calculating node sortedness values.
This procedure is equivalent to assuming that the lattice defining our domain is embedded within a larger lattice of population 2 nodes.
An example of the computation of network sortedness using
\eqref{eq:node_assort_rev} is shown in Fig.~\ref{fig:adjusted_assort}. Pseudocode for the network sortedness manipulation algorithm is provided in \Sec{sec:alg}.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\linewidth]{figures/adjusted_assort_alt.pdf}
\caption{\textbf{Example showing the modified sortedness metric.} The network under consideration is the interior portion of the depicted network with population sets $P_1 = \{1,2,6\}$ (blue) and $P_2 = \{3,4,5,7\}$ (pink). Using the original node sortedness metric \eqref{eq:node_assort}, the network sortedness as computed by \eqref{eq:net_assort} is $\mathcal{A}=-17/72$. The modified node sortedness \eqref{eq:node_assort_rev} assumes that each of the boundary nodes $i \in \{1,2,3,5,6,7\}$ has an additional $J-|J_i|$ connections to population 2 nodes, where $J_i$ is the set of nodes to which node $i$ was originally coupled. These additional connections are depicted by the dashed edges emanating from the boundary nodes. In this planar domain example, each of the boundary nodes has $|J_i|=3$ connections and $J=6$. Using the modified node sortedness metric, the network sortedness has value $\mathcal{A} = -5/9$.
}
\label{fig:adjusted_assort}
\end{figure}
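The boundary adjustment in \eqref{eq:node_assort_rev} can be sketched as follows, using the convention that label 1 denotes population 2 (an implementation choice, not part of the paper's notation).

```python
import numpy as np

def modified_node_sortedness(adj, labels, J=12):
    """Boundary-adjusted node sortedness, Eq. (node_assort_rev).

    Nodes with fewer than J neighbours are treated as if their missing
    J - |J_i| connections attach to population-2 (label 1) nodes, so the
    padding only counts as same-population for population-2 nodes.
    """
    labels = np.asarray(labels)
    A = np.empty(len(labels))
    for i, nbrs in adj.items():
        same = sum(labels[j] == labels[i] for j in nbrs)
        pad = (J - len(nbrs)) if labels[i] == 1 else 0  # extra pop-2 links
        A[i] = (same + pad) / J
    return A
```

For a two-node toy network with $J = 3$, a population-2 node with one population-1 neighbour scores $2/3$ (two padded population-2 links), whereas the population-1 node scores $0$.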
\subsubsection{Node selection probabilities}
\label{sec:prob}
In this section, we formulate the node selection pdf used in the network sortedness adjustment algorithm.
We assume that the selection of node from $P_1$ is independent of the selection of node from $P_2$ so that \eqref{eq:pdf} becomes
\begin{equation}
f(i,j) = f_{P_1}(i) f_{P_2}(j), \quad i \in P_1, j \in P_2.
\end{equation}
\noindent
One choice would set $f_{P_1}$ and $f_{P_2}$ to be uniform over $P_1$ and $P_2$, respectively.
Empirical observations of the algorithm outcome in this case demonstrate that clusters of population 1 nodes tend to form at the edge of the domain (not shown).
As discussed in \Sec{sec:modified_alg}, we wish to avoid this scenario.
The tendency for clusters to form near the edge occurs because of the spherical nature of our lattice domain.
In particular, a uniform choice for $f_{P_1}$ and $f_{P_2}$ means that a sampled node is more likely to lie near the edge of the domain than near its centre, because the number of nodes within a spherical shell grows superlinearly with the shell radius.
Therefore, we derive choices for $f_{P_k}$ that equalise the probability of a node being selected on the basis of its radial coordinate.
The heuristic for generating $f_{P_1}$ will be the same as that for generating $f_{P_2}$ up to the population identity.
Denote the radial distance from the origin of node $i\in \mathbb{N}_N$ by $r_i = (x_i^2 + y_i^2 + z_i^2)^{1/2} \in \mathbb{R}_{\geq 0}$ where $(x_i,y_i,z_i)\in \mathbb{R}^3$ are the Cartesian coordinates of the location of the node.
We define a sequence of intervals, $\mathcal{I}_n = [(n-1) \delta r, \, n\delta r]$, for $n=1,\dots,8$ with $\delta r = r_\text{max}/8$, where $r_\text{max} = \max_i\{ r_i\}$, so that each node is assigned to exactly one interval.
The set of nodes from $P_k$ belonging to a given interval $\mathcal{I}_n$ is given by $R_{n,P_k} = \{ i\in \mathbb{N}_N \mid r_i \in \mathcal{I}_n, \, i \in P_k\}$.
Using these set definitions, the pdf $f_{P_k}$ may be defined as
\begin{equation}
f_{P_k}(i) = \frac{1}{Q|R_{n_i,P_k}|},
\quad i \in \mathbb{N}_N,
\label{eq:selection_weights}
\end{equation}
\noindent
where $R_{n_i,P_k}$ is such that $r_i \in \mathcal{I}_{n_i}$ and $Q$ is a normalisation factor ensuring that $\sum_{i \in P_k} f_{P_k}(i) = 1$.
This choice for $f_{P_k}$ reweights the probability of a given node being selected by a factor proportional to the number of cells from the same population within a spherical annulus with inner and outer radii specified by the boundaries of the intervals $\mathcal{I}_n$.
This reweighting favours selecting nodes closer to the centre of the domain over those more distal.
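The construction of \eqref{eq:selection_weights} can be sketched as follows; the shell assignment and normalisation follow the definitions above, while the array layout is an implementation choice.

```python
import numpy as np

def radial_selection_pdf(coords, members, n_shells=8):
    """Sketch of the radially reweighted selection pdf f_{P_k}.

    coords  : (N, 3) node coordinates; members : indices belonging to P_k.
    Each node's weight is 1 / (number of P_k nodes in its radial shell),
    then weights are normalised to sum to 1 over the population.
    """
    r = np.linalg.norm(coords, axis=1)
    dr = r.max() / n_shells
    # shell index n_i for each node, clipped so r_max lands in the last shell
    shell = np.minimum((r / dr).astype(int), n_shells - 1)
    members = np.asarray(members)
    counts = np.bincount(shell[members], minlength=n_shells)
    w = 1.0 / counts[shell[members]]
    return w / w.sum()  # pdf over the indices in `members`
```

A lone node near the centre thus receives the same total selection probability as all the nodes sharing an outer shell combined.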
\subsection{Evaluation of collective dynamics}
\label{sec:sum_stats}
To characterise the network dynamics, we consider two features based on the Ca\textsuperscript{2+} trajectories across all nodes, namely, the mean number of peaks ($\overline{P}$) and the time-averaged degree of synchronisation ($\overline{R}$) calculated as the average magnitude of the Kuramoto order parameter (\ref{eq:kuramoto}).
The mean number of Ca\textsuperscript{2+} peaks across all nodes is proportional to the \textit{network participation}, that is, the fraction of nodes that undergo oscillation.
The value of $\overline{R}$ captures the \textit{network coordination}, tracking how closely the phases of the Ca\textsuperscript{2+} trajectories stay to one another across the simulation duration.
We additionally define $\overline{P}_{k}$ and $\overline{R}_{k}$ where $k \in \{1,2\}$ to be the mean number of peaks in Ca\textsuperscript{2+} and the time-averaged degree of synchronisation across nodes in population $P_k$, respectively.
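An illustrative computation of these two features is sketched below. The simple local-maximum peak detector and the derivative-based phase proxy are assumptions made for this sketch (the paper's coordination measure is the Kuramoto order parameter), not the exact analysis pipeline.

```python
import numpy as np

def summary_features(traces, thresh=None):
    """Sketch of the participation/coordination features.

    traces : (N, T) array of Ca2+ time series.
    Returns (mean peak count P-bar, time-averaged synchronisation R-bar).
    """
    N, T = traces.shape
    # count interior local maxima, optionally above a threshold
    interior = traces[:, 1:-1]
    is_peak = (interior > traces[:, :-2]) & (interior > traces[:, 2:])
    if thresh is not None:
        is_peak &= interior > thresh
    P_bar = is_peak.sum(axis=1).mean()
    # phase proxy: angle of the mean-centred signal against its derivative
    x = traces - traces.mean(axis=1, keepdims=True)
    dx = np.gradient(x, axis=1)
    phase = np.arctan2(-dx, x)
    # Kuramoto order parameter magnitude at each time, then time average
    R = np.abs(np.exp(1j * phase).mean(axis=0))
    return P_bar, R.mean()
```

Two identical sinusoids give $\overline{R} = 1$, while two antiphase sinusoids give $\overline{R} = 0$, matching the intended interpretation of network coordination.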
\section{Results}
\label{sec:results}
\subsection{Simulating dynamics on the set of networks defined by running the swapping algorithm to convergence}
\label{subsec:single_network}
We ran the swapping algorithm to convergence (in the forward direction and with 10\% of the nodes specified to be from population 1) to produce sets $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$.
For this run, the network configuration converged after $a_{final} = 203$ iterations, with a terminal sortedness of $\mathcal{A}_{final} = 0.69$.
We then simulated the dynamical system \eqref{eq:Veq}-\eqref{eq:coupling} for $10$ equispaced values of $G \in [0.3,0.55]$, as described in \Sec{sec:sims} for $g_{coup} \in \{1,2,10\}$, and each configuration of populations defined by the population sets contained in $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ for $a \in \{1,4,7,\dots,a_{final}\}$.
We ran each simulation for $T_{max} = 360,000$ ms ($6$ minutes), and discarded the initial 90,000 ms of the resulting time series to control for transients.
Each network configuration was simulated three times, once for each of a pre-defined set of three initial conditions.
Finally, we ran simulations once more using the first of these initial conditions
to verify that results were consistent when the simulation duration was increased.
We then calculated the features $\overline{P}$ and $\overline{R}$ for each simulation.
To aid in interpreting the results, we define the following sets. Firstly, the parameter domain over which we evaluated the dynamical system was $\mathcal{D} = \{(\mathcal{A}, G) \mid \mathcal{A} \in[0,\mathcal{A}_{final}], G\in[0.3,0.55]\}$.
Secondly, the level sets $L^{+} = \{(\mathcal{A}, \ G) \mid \overline{P}(\mathcal{A}, \ G)=5\}$, $L^{*} = \{(\mathcal{A}, \ G) \mid \overline{R}(\mathcal{A}, \ G)=0.9\}$, and $L_k^{+} = \{(\mathcal{A}, \ G) \mid \overline{P_k}(\mathcal{A}, \ G)=5\}$ (for $k \in \{1,2\}$) were used to delineate subsets of $\mathcal{D}$ with qualitatively distinct network dynamics, which will be described below.
\subsubsection{Increasing $\mathcal{A}$ lowers the drive $G$ required for a transition to globally synchronised bursting when coupling is strong}
\label{subsubsec:strong_coupling}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{figures/network_set_strong_stripped.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for strong coupling.} \textbf{A)} Plotting $\overline{P}$ averaged over three sets of initial conditions shows that for increasing $\mathcal{A}$, lower drive $G$ is required to activate the network. \textbf{B)} Plotting $\overline{R}$ averaged over three sets of initial conditions shows that for increasing $\mathcal{A}$, lower drive $G$ is required to synchronise the network.}
\label{fig:network_set_strong_stripped}
\end{figure}
We first sought to establish whether there is a relationship between $\mathcal{A}$ and the level of drive $G$ required to activate the network.
Shown in Fig. \ref{fig:phase_transition_example} is an example in which the transition from global quiescence to global activation is dependent on both $G$ and $\mathcal{A}$ for the strongly coupled ($g_{coup} = 10$) case and where population 1 nodes comprise $10\%$ of the network.
The mean Ca$^{2+}$ trajectories for populations 1 and 2 across the network are plotted for several values of $\mathcal{A}$ and $G$, showing that as $\mathcal{A}$ increases, the drive $G$ required to activate the network decreases.
To examine trends across a range of network configurations, we plot the features $\overline{P}$ (Fig.~\ref{fig:network_set_strong_stripped}A, Fig.~\ref{fig:network_set_strong}A), and $\overline{R}$ (Fig.~\ref{fig:network_set_strong_stripped}B, Fig.~\ref{fig:network_set_strong}B) as a function of both $G$ and $\mathcal{A}$.
Each point depicts a value $S(\mathcal{A}, \ G)$, where $S \in \{\overline{P}, \overline{R}\}$, taken to be the median feature across the three simulations (which differ only in their initial condition).
For strong coupling, we found that $\mathcal{D}$ can be separated into a quiescent regime ($\mathcal{D}^{-}$) and an oscillatory one ($\mathcal{D}^{+}$).
The level set curve $L^{+}$ separating these regimes can be parameterised as a non-increasing function of $\mathcal{A}$ (i.e., $G = L^+(\mathcal{A})$), supporting the hypothesis that increasing $\mathcal{A}$ decreases the drive required for network activation (Fig.~\ref{fig:network_set_strong_stripped}A, B white curve).
Similarly, the level set $L^{*}$ can be parameterised as a non-increasing function of $\mathcal{A}$ that also separates the domains $\mathcal{D}^{-}$ and $\mathcal{D}^{+}$ (Fig.~\ref{fig:network_set_strong_stripped}A, B black curve).
To investigate the robustness of the above relationships, we plotted $\overline{P}$ and $\overline{R}$ resulting from each of the three initial conditions (Fig.~\ref{fig:network_set_strong_yi}).
We defined
curves $L^{+}$ (Fig.~\ref{fig:network_set_strong_yi}, white curves) and $L^{*}$ (Fig.~\ref{fig:network_set_strong_yi}, black curves), in the same manner as described above.
These curves are not identical across the choices of initial condition, and both curves are non-monotonic for the third initial condition, suggesting that multi-stability exists for some $(\mathcal{A}, \ G) \in \mathcal{D}$, at least near the transition between regimes $\mathcal{D}^{-}$ and $\mathcal{D}^{+}$.
\subsubsection{A domain with intra-population synchronicity and inter-population resonance exists when coupling is lowered to an intermediate strength}
\label{subsubsec:middle_coupling}
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth]{figures/network_set_middle_stripped.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for middle-strength coupling.} \textbf{A)} Plotting $\overline{P}$ averaged over three sets of initial conditions shows a third regime $\mathcal{D}^*$ bounded by $L_1^+$ and $L_2^+$. \textbf{B)} Plotting $\overline{R}$ averaged over three sets of initial conditions shows that for increasing $\mathcal{A}$, lower drive $G$ is required to synchronise the network.}
\label{fig:network_set_middle_stripped}
\end{figure}
When $g_{coup} = 2$ (intermediate strength coupling), the threshold for activation of the network was lower than in the case of strong coupling, owing to the reduction of the suppressing effect of the less excitable population 2 nodes on the more excitable population 1 nodes.
As in \Sec{subsubsec:strong_coupling}, we plot the features $\overline{P}$ (Fig.~\ref{fig:network_set_middle_stripped}A, Fig.~\ref{fig:network_set_middle}A) and $\overline{R}$ (Fig.~\ref{fig:network_set_middle_stripped}B, Fig.~\ref{fig:network_set_middle}B) within the parameter domain $\mathcal{D}$, taking the median across three initial conditions.
We observed that the domain can be separated into three regimes with qualitatively distinct dynamics.
The first two regimes, $\mathcal{D}^{-}$ and $\mathcal{D}^{+}$, contain dynamics where the majority of nodes are quiescent or active (Fig.~\ref{fig:network_set_middle_stripped}A) and synchronised (Fig.~\ref{fig:network_set_middle_stripped}B), respectively.
Within the third regime, denoted $\mathcal{D}^{*}$, we found high intra-population synchronisation, with population 2 nodes oscillating (w.r.t. Ca\textsuperscript{2+}) at a frequency approximately half that of the population 1 nodes on average (Fig.~\ref{fig:network_set_middle}C, D triangle); i.e., this regime produces inter-population resonance at a 2:1 ratio.
Moreover, between the regimes $\mathcal{D}^*$ and $\mathcal{D}^+$, we found a sliver of the domain with lowered synchronisation (Fig.~\ref{fig:network_set_middle} star), where the population 2 oscillation frequency approaches that of population 1.
For low values of $\mathcal{A}$, the curves $L_{k}^{+}$ nearly overlap one another and separate the regimes $\mathcal{D}^{-}$ and $\mathcal{D}^{+}$, however, for larger values of $\mathcal{A}$, these curves diverge and bound the $\mathcal{D}^{*}$ regime.
Due to the large fraction of nodes being contained in population 2, we find that the curve $L^{+}$, defined as in \Sec{subsubsec:strong_coupling}, approximately overlaps $L_{2}^{+}$.
The curve $L_{1}^{+}$ marks the transition from quiescence to activity, which may or may not be synchronised, and is non-monotonic.
Despite this non-monotonicity, there is still an overall trend linking increases in $\mathcal{A}$ with a reduction in the drive $G$ required to induce activity.
In particular, for larger values of $\mathcal{A}$, where increasing $G$ results in a transition to $\mathcal{D}^{*}$, the required drive to pass through $L_{1}^{+}$ is lowest.
Moreover, when $\mathcal{A}$ is near $\mathcal{A}_{0}$, i.e., at early iterations of the algorithm, the required drive to pass through $L_{1}^{+}$ is highest (Fig.~\ref{fig:network_set_middle_stripped}A, B).
When redefining the curves $L_{k}^{+}$ for $k \in \{1,2\}$ and $L^{+}$ for individual sets of initial conditions, we again found that they were not identical, implying the presence of multi-stability near the transitions between regimes.
In addition, we also found cases of multi-stability within the regime $\mathcal{D}^{+}$ (Fig.~\ref{fig:network_set_middle_yi}G, H).
For example, Fig.~\ref{fig:network_set_middle_yi} shows the plots of $\overline{P}$ (Fig.~\ref{fig:network_set_middle_yi}A, C, E) and $\overline{R}$ (Fig.~\ref{fig:network_set_middle_yi}B, D, F) resulting from each initial condition separately.
For some points $(\mathcal{A},G)$, we observed lower synchronisation ($\overline{R}$) for some initial conditions (Fig.~\ref{fig:network_set_middle_yi} square) relative to the others (Fig.~\ref{fig:network_set_middle_yi} circle).
These points of lowered synchrony persisted when $T_{max}$ was increased, suggesting that this activity was not due to extended transient behaviour (not shown).
\subsubsection{When coupling strength is low, only population 1 activation depends on sortedness}
\label{subsubsec:low_coupling}
\begin{figure}[ht]
\centering
\includegraphics[width=0.55\textwidth]{figures/network_set_weak_stripped.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for weak coupling.} \textbf{A)} Plotting $\overline{P}$ averaged over three sets of initial conditions shows that activation of population 1, but not population 2, is dependent on sortedness. \textbf{B)} Plotting $\overline{R}$ averaged over three sets of initial conditions shows that synchronisation is non-monotonic with respect to $G$, peaking within a $2:1$ resonance regime $\mathcal{D}^*$.}
\label{fig:network_set_weak_stripped}
\end{figure}
When $g_{coup} = 1$ (low coupling strength), we observed a greater variety of parameter regimes supporting distinct dynamics (Fig.~\ref{fig:network_set_weak_stripped}, Fig.~\ref{fig:network_set_weak}) than in either the intermediate-strength or strong coupling cases.
For this coupling strength, there is no regime in which the network is active and synchronised (i.e., regime $\mathcal{D}^{+}$ does not exist).
The region $\mathcal{D}^{-}$, in which the majority of nodes are inactive, exists for low values of $\mathcal{A}$ and $G$, and is bounded above by the curve $L_{1}^{+}$.
We next identified the regime $\mathcal{D}_{1}^{+}$ in which only nodes in $P_{1}$ are active while those in $P_{2}$ remain silent (Fig.~\ref{fig:network_set_weak}, circle).
In this regime, population 1 nodes are active but only weakly coordinated while population 2 nodes are mostly inactive (Fig.~\ref{fig:network_set_weak}D).
This results in a weak global signal (low amplitude oscillations of the average Ca\textsuperscript{2+} signal) (Fig.~\ref{fig:network_set_weak}C).
This regime can be bounded below by $L_{1}^{+}$ and above by $L_{2}^{+}$, and also by $L^{+}$ (not shown).
A region, denoted $\mathcal{D}^{*}$, also exists with similar dynamics to the one defined for $g_{coup} = 2$.
Within this regime, the overall network synchronisation is high (Fig.~\ref{fig:network_set_weak_stripped}B), however, population 2 nodes exhibit oscillatory Ca$^{2+}$ behaviour with approximately half the frequency of that of the population 1 nodes (Fig.~\ref{fig:network_set_weak}C, D triangle).
A final region, $\mathcal{D}^{\&}$, exists for high values of $G$, where network synchronisation decreases (Fig.~\ref{fig:network_set_weak_stripped}B) whilst the average number of peaks continues to increase (Fig.~\ref{fig:network_set_weak_stripped}A).
The level set $L^{*}$ defines two separate curves, labelled $L^{1*}$ and $L^{2*}$, due to the non-monotonicity of the synchronisation index with respect to $G$.
The curve $L^{2*}$ bounds $\mathcal{D}^{*}$ from above and separates it from $\mathcal{D}^{\&}$, whilst $L^{1*}$ is a lower bound for $\mathcal{D}^{*}$ and separates it from $\mathcal{D}_{1}^{+}$.
Fig.~\ref{fig:network_set_weak}C (star) shows the irregular global signal that results from the weak coordination visible in Fig.~\ref{fig:network_set_weak}D (star).
As in the case for $g_{coup} = 2$, the curve $L_{1}^{+}$ marks the transition from quiescence to activity, however, in this case only population 1 nodes become active.
This curve is non-monotonic, however, the overall trend once again links increases in $\mathcal{A}$ with a lower required drive to induce activity.
On the other hand, the curve $L_{2}^{+}$ does not appear to be dependent on $\mathcal{A}$.
Additionally, we found that the curve $G = L^{2*}(\mathcal{A})$, defined by the set $L^{2*}$, shows an increasing trend, which suggests that the range of $G$ for which maximal synchronisation occurs increases with $\mathcal{A}$.
\section{Supplemental material}
\label{sec:supplemental}
\subsection{Mathematical model}
\label{sec:model}
We consider a network of $N$ diffusively coupled excitable cells, each of which is described by the three variable model
\begin{align}
C_m \frac{dV_i}{dt} &= - I_{K}(V_i,n_i) - I_{Ca}(V_i,h_i) - I_{K-Ca}(V_i,c_i) - I_{L}(V_i) - I_{coup,i}, \quad i = 1,\dots, N,\label{eq:Veq} \\
\frac{dn_i}{dt} & = \frac{n_{\infty}(V_i) - n_i}{\tau_{n}(V_i)}, \label{eq:neq} \\
\frac{dc_i}{dt} & = - f (\alpha I_{Ca}(V_i,h_i) + k_{Ca} c_i). \label{eq:ceq}
\end{align}
\noindent
This system was adapted from the Sherman--Rinzel--Keizer model, which describes the dynamics of electrical activity in pancreatic beta cells in the presence of glucose \citep{Sherman1988a}. The intrinsic dynamics of the voltage, $V$, given by \eqref{eq:Veq}, are driven by K\textsuperscript{+} ($I_{K}$), Ca\textsuperscript{2+} ($I_{Ca}$), and Ca\textsuperscript{2+}-activated K\textsuperscript{+} ($I_{K-Ca}$) ionic currents, with a rate governed by the whole-cell capacitance $C_m$.
These currents are described via
\begin{align}
I_K(V,n) &= \overline{g}_{K} n (V - V_{k}), \label{eq:IK} \\
I_{Ca}(V,h) &= \overline{g}_{Ca} m_{\infty}(V)h_{\infty}(V)(V - V_{Ca}), \\
I_{K-Ca}(V,c) &= \overline{g}_{K-Ca} \frac{c}{K_{d} + c} (V-V_{k}), \\
I_{L}(V) &= \overline{g}_{L} (1-G) (V - V_{K}) \label{eq:leak}.
\end{align}
\noindent
In \eqref{eq:IK}-\eqref{eq:leak}, $\overline{g}_X$ denotes the maximal conductance of channel $X \in \{ K, Ca, K-Ca, L\}$, where $L$ signifies a leak channel; $V_X$ are the reversal potentials of the respective channels; $m$ and $n$ are the proportion of open activating gates for the Ca$^{2+}$ and K$^{+}$ channels, respectively; $h$ is the proportion of open inactivating Ca$^{2+}$ channels; $c$ is the cytosolic concentration of Ca$^{2+}$; and $G$ is the extracellular concentration of glucose, which provides a global drive to promote activity and is taken to be homogeneous across the network.
The activation of $I_{K-Ca}$ is a function of free intracellular Ca\textsuperscript{2+} concentration and is defined by a Hill-type function with dissociation constant $K_d$.
The current $I_{coup,i}$ captures the influence of the coupling between cells and will be discussed in \Sec{sec:coupling}.
The dynamics for $n$ and $h$ follow exponential decay to their steady-state values, given by
\begin{equation}
x_{\infty}(V) = \frac{1}{1 + \exp{[(V_{x} - V) / S_{x}]}}, \quad x \in \{h,m,n\}, \label{eq:ststgate}
\end{equation}
\noindent
at a rate given by the voltage-dependent time constant
\begin{equation} \label{eq:taun}
\tau_{n}(V) = \frac{\overline{\tau}}{\exp{[(V-\overline{V}) / \kappa_1]} + \exp{[-(V-\overline{V}) / \kappa_2]}} .
\end{equation}
\noindent
In \eqref{eq:ststgate}, $V_x$ represents the activation (inactivation) thresholds for $m$ and $n$ ($h$) and $S_x$ represents the sensitivity of the channels around this point. Finally, \eqref{eq:ceq} describes the evolution of the concentration of cytosolic Ca$^{2+}$, which decays and is pumped out of the cell following a combined linear process with rate $k_{Ca}$ and enters the cell via the Ca$^{2+}$ ion channel at a rate given by the scale factor $\alpha$. The parameter $f$ specifies the fraction of free to bound Ca$^{2+}$ in the cell, where the bound Ca$^{2+}$ plays no role in the relevant dynamics in our model.
The electrical activity of pancreatic beta cells is proportional to the extracellular concentration of glucose.
For sufficiently high extracellular glucose, the cells exhibit \textit{bursting} dynamics, in which their voltage periodically switches between high frequency oscillations and quiescence.
The high frequency oscillations in voltage are correlated with the secretion of insulin from these cells, so that these bursting dynamics are tightly coupled to the cells' functional role.
To expose the dependence of our system on glucose, we introduced a hyperpolarising leak current given by \eqref{eq:leak} that explicitly depends on the glucose concentration $G$.
For an isolated cell (i.e., without coupling) with the parameters specified in Table~\ref{tbl:parameters}, the system describing each node exhibits steady state behaviour for low $G$ and passes through a bifurcation as $G \in [0,1]$ is increased, as shown in Fig.~\ref{fig:single_cell}.
The bursting dynamics in our model are of the \textit{fold-homoclinic} type under the classification specified in \cite{Izhikevich2000a}.
This classification is based on separation of the full system into a fast subsystem \eqref{eq:Veq}-\eqref{eq:neq} and a slow subsystem \eqref{eq:ceq}, treating the slow subsystem variables (in this case, $c$) as parameters in the fast subsystem.
During each bursting cycle, the slow evolution of $c$ pushes the fast subsystem through bifurcations that initiate and terminate oscillatory behaviour.
In particular, when $c$ decreases to a small enough value, the fast subsystem passes through a fold bifurcation in which a stable steady state and a saddle steady state collide and annihilate one another.
Following this, the system exhibits stable periodic activity, during which $c$ increases according to \eqref{eq:ceq}.
When $c$ increases to a sufficiently large value, the fast subsystem passes through a homoclinic bifurcation that destroys the periodic orbit and the system returns to the original stable steady state.
Following this, $c$ decreases until it once again reaches the fold point and the cycle repeats.
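For concreteness, the single-cell right-hand side \eqref{eq:Veq}-\eqref{eq:ceq}, with the currents \eqref{eq:IK}-\eqref{eq:leak} and gating functions \eqref{eq:ststgate}-\eqref{eq:taun}, can be transcribed as follows. This is an illustrative Python sketch (our simulations use the Matlab code cited below), with $I_{coup} = 0$ and the illustrative choices $\overline{g}_L = 60$ pS and $G = 0.7$.

```python
import numpy as np

# Parameter values from Table 1 (single, uncoupled cell; I_coup = 0).
p = dict(Cm=5310.0, Vm=4.0, Sm=14.0, Vn=-15.0, Sn=5.6, k1=65.0, k2=20.0,
         tau=37.5, Vbar=-75.0, Vh=-10.0, Sh=-10.0, gK=2500.0, gCa=1400.0,
         VK=-75.0, VCa=110.0, Kd=100.0, gKCa=30000.0, f=0.001, kCa=0.03,
         alpha=4.5061e-6, gL=60.0, G=0.7)

def xinf(V, Vx, Sx):
    # steady-state gating function, Eq. (ststgate)
    return 1.0 / (1.0 + np.exp((Vx - V) / Sx))

def rhs(t, y, p=p):
    """Right-hand side of the single-cell model, Eqs. (Veq)-(ceq)."""
    V, n, c = y
    ICa = p['gCa'] * xinf(V, p['Vm'], p['Sm']) * xinf(V, p['Vh'], p['Sh']) \
          * (V - p['VCa'])
    IK = p['gK'] * n * (V - p['VK'])
    IKCa = p['gKCa'] * c / (p['Kd'] + c) * (V - p['VK'])
    IL = p['gL'] * (1.0 - p['G']) * (V - p['VK'])
    tau_n = p['tau'] / (np.exp((V - p['Vbar']) / p['k1'])
                        + np.exp(-(V - p['Vbar']) / p['k2']))
    dV = -(IK + ICa + IKCa + IL) / p['Cm']
    dn = (xinf(V, p['Vn'], p['Sn']) - n) / tau_n
    dc = -p['f'] * (p['alpha'] * ICa + p['kCa'] * c)
    return np.array([dV, dn, dc])
```

At $V = V_K$ with $n = c = 0$, only the inward Ca\textsuperscript{2+} current is non-zero, so both $dV/dt$ and $dc/dt$ are positive, as expected from the signs in \eqref{eq:Veq} and \eqref{eq:ceq}.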
\begin{table}
\centering
\begin{tabular}{|c c | c c | c c |}
\hline
Parameter & Value & Parameter & Value & Parameter & Value \\ [0.5ex]
\hline
$C_{m}$ (fF) & 5310 &
$V_{m}$ (mV) & 4 &
$S_{m}$ (mV) & 14 \\
$V_{n}$ (mV) & -15 &
$S_{n}$ (mV) & 5.6 &
$\kappa_1$ (mV) & 65 \\
$\kappa_2$ (mV) & 20 &
$\overline{\tau}$ (ms) & 37.5 &
$\overline{V}$ (mV) & -75 \\
$V_{h}$ (mV) & -10 &
$S_{h}$ (mV) & -10 &
$\overline{g}_{K}$ (pS) & 2500 \\
$\overline{g}_{Ca}$ (pS) & 1400 &
$V_{K}$ (mV) & -75 &
$V_{Ca}$ (mV) & 110 \\
$K_{d}$ ($\mu$ M) & 100 &
$\overline{g}_{K-Ca}$ (pS) & 30000 &
$f$ & 0.001\\
$k_{Ca}$ (ms$^{-1}$) & 0.03 &
$\alpha$ $\left(\frac{\mu \text{m}^{3} \text{Coul}}{\text{mMol}}\right)$ & $4.5061\times 10^{-6}$ &
$\overline{g}_{coup}$ (pS) & \{varies\} \\
\hline
\end{tabular}
\caption{Parameter values of the oscillator model.}
\label{tbl:parameters}
\end{table}
\subsection{Model simulations}
\label{sec:sims}
Simulations were conducted using Matlab 2019B. The dynamical systems were solved using ode15s, with the relative tolerance set to $10^{-5}$ and explicit Jacobians provided. The code was run on the University of Birmingham BlueBEAR HPC running RedHat 8.3 (x86\_64) (see http://www.birmingham.ac.uk/bear for more details). Each set of simulations ran over 16 cores using a maximum of 128GB RAM (32GB was sufficient in most cases). All code used in the project is freely available for download from: github.com/dgalvis/network\_spatial.
\subsubsection{Initial Conditions}
Initial conditions $y_i(0) = \left(V_{i}(0), n_{i}(0), c_{i}(0)\right)$ for node $i=1,\dots,N$ were sampled independently from the distributions
\begin{equation}
V_i(0) \sim \mathcal{N}(-68, (68/6)^2), \quad n_i(0) = 0, \quad c_{i}(0) \sim \mathcal{N}(0.57, (0.57/6)^2),
\end{equation}
\noindent
where $\mathcal{N}(\mu,\sigma^2)$ represents a normal distribution with mean $\mu$ and variance $\sigma^2$.
Throughout, we use $Y(0)$ to denote the set of initial conditions across the whole network, i.e., $Y(0) = \left( y_1(0), \dots, y_N(0)\right)$.
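The sampling of $Y(0)$ can be sketched as follows; the seed handling is an implementation choice.

```python
import numpy as np

def sample_initial_conditions(N, seed=None):
    """Draw Y(0) as specified above; n_i(0) is deterministically zero.

    Returns an (N, 3) array with columns (V_i(0), n_i(0), c_i(0)).
    """
    rng = np.random.default_rng(seed)
    V0 = rng.normal(-68.0, 68.0 / 6.0, size=N)   # V_i(0) ~ N(-68, (68/6)^2)
    n0 = np.zeros(N)                              # n_i(0) = 0
    c0 = rng.normal(0.57, 0.57 / 6.0, size=N)    # c_i(0) ~ N(0.57, (0.57/6)^2)
    return np.column_stack([V0, n0, c0])
```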
\subsubsection{Excitability and drive in the single-cell model}
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{figures/Fig1Combined}
\caption{\textbf{Excitability of single cells.} The voltage traces from three cells with varying levels of intrinsic excitability ($\overline{g}_{L}$), but the same level of drive ($G = 0.7$). The red, blue, and black traces show decreasing levels of excitability with values of $\overline{g}_{L} = 60$, $120$, and $180$, respectively. More excitable cells have a shorter interburst interval. The solid black line represents a Hopf bifurcation as a function of both $G$ and $\overline{g}_{L}$. At the lowest drive ($G = 0$), the Hopf bifurcation occurs for $\overline{g}_{L} = 45.21$ pS. The dotted lines represent ``level sets'' of the $(\overline{g}_L,G)$ parameter space, along which the excitability of the single cell is identical. Data for the bifurcation diagram were computed using XPP 8.0 \citep{Ermentrout2002}.}
\label{fig:single_cell}
\end{figure}
The ionic current $I_L$ (\ref{eq:leak}) is a hyperpolarising current that can be used to adjust the excitability of each cell and to determine the activation level of the network.
In particular, the maximum conductance $\overline{g}_{L}$ determines the excitability of a cell. As this value increases, the cell becomes less excitable, that is, for a given value of $G$, cells with higher $\overline{g}_L$ are less likely to burst.
This behaviour is summarised in Fig.~\ref{fig:single_cell}, which shows a two-parameter bifurcation diagram of the transition from quiescent to bursting behaviour under simultaneous variation of $(\overline{g}_L,G)$; the transition occurs via a Hopf bifurcation of the full system \eqref{eq:Veq}-\eqref{eq:ceq}.
For $G=0$, this Hopf bifurcation occurs at $\overline{g}_{L} = 45.21$ pS.
For non-zero values of $G$, the bifurcation curve is defined by $(1 - G)\overline{g}_{L} = 45.21$ pS, as can be seen by examining the forms of \eqref{eq:Veq} and \eqref{eq:leak}.
Note that when $G = 1$, system \eqref{eq:Veq}-\eqref{eq:ceq} matches that of \cite{Sherman1988a}.
In the network modelling approach, we use the observations about the link between $\overline{g}_L$ and excitability to partition the network into two sub-populations, one being highly excitable, the other being significantly less excitable.
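This bifurcation relation can be turned into a simple activity test. The sketch below is ours (not the released code) and assumes, consistent with the population choices of the following sections, that an uncoupled cell bursts when $(1 - G)\overline{g}_{L} < 45.21$ pS:

```python
def critical_gL(G, gL_star=45.21):
    """Critical leak conductance (pS) at the Hopf bifurcation for drive G,
    from the relation (1 - G) * gL = 45.21 pS."""
    if G >= 1.0:
        raise ValueError("the relation only holds for G < 1")
    return gL_star / (1.0 - G)

def is_intrinsically_active(gL, G):
    """True if an uncoupled cell with leak conductance gL (pS) bursts
    at drive G, i.e. if gL lies below the bifurcation curve."""
    return gL < critical_gL(G)
```

With these values, population 1 ($\overline{g}_{L} = 60$ pS) activates near $G \approx 0.25$ and population 2 ($\overline{g}_{L} = 100$ pS) near $G \approx 0.55$, matching the drive range considered below.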
\subsubsection{Network structure and coupling}
\label{sec:coupling}
Pancreatic beta cells are arranged into roughly spherical clusters called islets of Langerhans (which also encompass other cell types that are disregarded in our model), each of which contains $\sim$ 1,000 beta cells.
To capture this, we arrange $N = 1,018$ nodes on a hexagonal close packed (hcp) lattice embedded within a sphere.
The dominant form of coupling between beta cells in the islets is through gap junctions, which allow small molecules, including charged ions, to pass directly from a cell to its adjacent neighbours.
Mathematically, this is represented through the inclusion of the diffusive term $I_{coup,i}$ in \eqref{eq:Veq} that factors in the local nature of coupling
\begin{equation}
I_{coup,i} = \overline{g}_{coup} \sum_{j\in {J}_i} (V_i-V_j), \label{eq:coupling}
\end{equation}
\noindent
where ${J}_i$ is the set of all cells to which cell $i$ is coupled.
Each node is connected to all of its nearest neighbours, so that the number of connections of nodes away from the boundary of the sphere is equal to the coordination number 12, whilst nodes on the boundary have fewer connections.
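In matrix form, the coupling term \eqref{eq:coupling} is a graph Laplacian acting on the voltage vector. A short numpy sketch (ours, not the released code):

```python
import numpy as np

def coupling_currents(V, K, g_coup):
    """Gap-junction coupling currents
    I_coup,i = g_coup * sum_{j in J_i} (V_i - V_j),
    where K is the symmetric Boolean adjacency matrix of the lattice."""
    V = np.asarray(V, dtype=float)
    K = np.asarray(K, dtype=float)
    deg = K.sum(axis=1)                 # |J_i|: number of neighbours of node i
    return g_coup * (deg * V - K @ V)   # diffusive (graph-Laplacian) form
```

For a symmetric adjacency matrix the currents sum to zero over the network, so gap-junction coupling redistributes, rather than injects, charge.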
\subsubsection{Heterogeneity}
\label{sec:hetero}
We consider networks consisting of two sub-populations of nodes distinguished by their excitability (i.e., by their $\overline{g}_L$ values).
Population 1 is highly excitable ($\overline{g}_{L} = 60$ pS) and population 2 is less excitable ($\overline{g}_{L} = 100$ pS).
We then consider the range of $G$ over which population 1 nodes are intrinsically active (i.e., active when $\overline{g}_{coup} = 0$) and population 2 nodes are intrinsically quiescent.
Finally, we consider the effects on the collective dynamics of the network of population size (varied via the proportion of the overall network that population 1 nodes account for), the degree of sortedness between the two subpopulations (see \Sec{sec:assort}), global network drive ($G$), and global coupling strength.
\newpage
\subsection{Description of the routines used by \Alg{alg:orig}}
\label{sec:alg}
\begin{algorithm}[H]
\caption{\label{alg:orig} Algorithm for producing networks}
\begin{algorithmic}[1]
\scriptsize
\Require
\Statex $N$: number of nodes in network
\Statex $a$: number of iterations of swapping algorithm to attempt
\Statex $Dir$: signed integer determining whether the algorithm runs forwards (positive) or backwards (negative)
\Statex $\rho$: proportion of population 1 nodes
\Ensure
\Statex $\mathcal{A}$: network sortedness value
\Statex $P_1$: population 1 set
\Statex $P_2$: population 2 set
\Statex $n$: number of swaps performed
\Statex
\Function{GenerateNetwork}{$N$, $a$, $Dir$, $\rho$}
\State $(x,y,z)$, $r$, $K \gets$ \Call{EstablishLattice}{$N$}
\State $N_1$, $N_2$, $P_1$, $P_2$, $\mathcal{A} \gets$ \Call{AssignInitialPopulations}{$N$, $\rho$}
\State $Term \gets $ \False \Comment{Boolean determining whether terminal network state has been reached}
\State $n \gets 0$
\While{($n < a$) and ($Term =$ \False)}
\State $f$, $F$, $Q \gets$ \Call{ComputeSelectionProbabilities}{$r[]$, $N_1$, $N_2$, $P_1$, $P_2$}
\State $m \gets 0$
\State $swap \gets$ \True \Comment{Boolean determining whether to attempt swaps}
\While{($m < N_1 \times N_2$) and ($swap =$ \True)}
\State $\widetilde{P}_1$, $\widetilde{P}_2$, $\mathcal{A}_p$, $k \gets$\Call{NodeSwap}{$f$, $F$, $Q$, $N_1$, $N_2$, $P_1$, $P_2$}
\If{$\text{sgn}(\mathcal{A}_p - \mathcal{A}) = \text{sgn}(Dir)$}
\State $P_1$, $P_2 \gets \widetilde{P}_1$, $\widetilde{P}_2$
\State $\mathcal{A} \gets \mathcal{A}_p$
\State $n \gets n+1$
\State $swap \gets$ \False
\Else \Comment{Reject swap if $\mathcal{A}$ does not change in the desired direction}
\For{$l \gets k$ to $N_1 \times N_2$}
\State $F[l] \gets F[l]-f[k]$
\EndFor
\State $Q \gets Q - f[k]$
\State $m \gets m+1$
\EndIf
\EndWhile
\If{$swap = $ \True}
\State $Term \gets$ \True \Comment{Terminal state has been reached}
\EndIf
\EndWhile
\State \Return $\mathcal{A}$, $P_1$, $P_2$, $n$
\EndFunction
\end{algorithmic}
\end{algorithm}
\Alg{alg:hcp_lattice1}-\Alg{alg:swap_nodes} are used by \Alg{alg:orig} which is described in the main text.
\Alg{alg:hcp_lattice1} returns a set of points in $\mathbb{R}^3$ corresponding to the centres of spheres within a hexagonal close packed (hcp) lattice. The input $r_{ball}$ corresponds to the radius of the spheres within the lattice, which we set to $r_{ball}=0.5$ so that the distance between any two nearest neighbours is $d_{ball}=2r_{ball}=1$. \Alg{alg:hcp_lattice1} produces the hcp lattice via a sequence of scalings and shifts of a square lattice comprising the points $\{(x,y,z) \mid x,y,z \in \{1,\dots,M\}\}$, where $M$ is an integer corresponding to the number of spheres along the length of the lattice.

We sought to embed a larger sphere, $S_{net}$, of radius $R_{net}$ within the resulting hcp lattice, and therefore must choose $M$ such that $S_{net}$ is contained within the lattice. For the square lattice, a natural choice would be $M=2R_{net}$, so that the length of the lattice equals the diameter of the sphere. However, for the hcp lattice, the size of the resulting structure is $(M-1)x_{scale}+d_{ball} = Mx_{scale} = Md_{ball}$ by $(M-1)y_{scale}+d_{ball} > My_{scale}$ by $(M-1)z_{scale}+d_{ball} > Mz_{scale}$ (ignoring the shifts). To counteract this, we use:
\begin{equation}
M = \text{ceil}\left(\frac{2R_{net}}{ \text{min}([x_{scale},y_{scale},z_{scale}])}\right).
\end{equation}
We found that this choice of $M$ generated a lattice which could fully embed the sphere, at least for our selection of $R_{net} = 5.55$ (in particular, we increased $M$ and found that the number of nodes within the sphere did not increase).
\Alg{alg:sphere_lattice} first runs \Alg{alg:hcp_lattice1} to produce an hcp lattice. It then centres the lattice at the origin (i.e., at $(0,0,0)$) and finds all points that lie within a sphere of radius $R_{net}$ centred at the origin, which define the nodes in the network. It also returns $N$, the number of nodes in the spherical hcp lattice ($N = 1,018$ in this work).

\Alg{alg:init_lattice} establishes the Boolean adjacency matrix representing the connections between nodes in the spherical hcp lattice. A connection exists between two nodes if they are at a distance of $d_{ball}$ from one another; in other words, if two spheres (of radius $r_{ball}$) centred at the locations assigned to two nodes would be touching, then a connection exists between them.

\Alg{alg:init_pop} determines the population sets $P_k$ for $k \in \{1,2\}$. It returns the number of nodes $N_k$ in each population, the population membership sets, and the initial network sortedness value $\mathcal{A}$. \Alg{alg:build_f} determines the selection probabilities for every pair of nodes $\{(i,j) \mid i \in P_1$, $j \in P_2\}$. \Alg{alg:swap_nodes} chooses a candidate swap, produces the population sets established by that swap, and calculates $\mathcal{A}$ for the updated population sets.
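The lattice construction of \Alg{alg:hcp_lattice1}-\Alg{alg:init_lattice} can be condensed into a few vectorised lines. The sketch below is ours (not the released Matlab code) and, unlike \Alg{alg:init_lattice}, compares distances within a small floating-point tolerance rather than testing exact equality:

```python
import numpy as np

def hcp_sphere(R_net=5.55, r_ball=0.5):
    """Spherical hcp lattice: node positions P, radial coordinates r,
    and Boolean adjacency matrix K (neighbours at distance d_ball)."""
    d = 2.0 * r_ball
    xs = d                                 # in-row spacing
    ys = np.sqrt(d**2 - r_ball**2)         # row spacing, sqrt(3)/2 * d
    zs = np.sqrt(2.0 / 3.0) * d            # layer spacing
    M = int(np.ceil(2.0 * R_net / min(xs, ys, zs)))
    idx = np.arange(1, M + 1)
    i, j, k = np.meshgrid(idx, idx, idx, indexing="ij")
    x = k * xs + (j % 2 == 0) * r_ball            # shift even rows in x
    y = j * ys - (i % 2 == 0) * d / np.sqrt(3.0)  # shift even layers in y
    z = i * zs
    P = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    P = P - P.mean(axis=0)                 # centre the lattice at the origin
    r = np.linalg.norm(P, axis=1)
    keep = r <= R_net                      # keep nodes inside the sphere
    P, r = P[keep], r[keep]
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    K = np.abs(D - d) < 1e-6               # adjacency, with a tolerance
    return P, r, K
```

With the default parameters this yields roughly 1,000 nodes, each interior node having the hcp coordination number of 12 neighbours at unit distance.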
\newpage
\begin{algorithm}[H]
\caption{\label{alg:hcp_lattice1} Initialising HCP lattice}
\begin{algorithmic}[1]
\scriptsize
\Require
\Statex $R_{net}$: radius of the spherical lattice
\Statex $r_{ball}$: radius of balls around points in the lattice
\Ensure
\Statex $(x_1,y_1,z_1), \dots, (x_N,y_N,z_N)$: $(x,y,z)$ coordinates of nodes
\Statex $N$: number of nodes in hcp-lattice
\Statex
\Function{EstablishHCPLattice}{$R_{net}, r_{ball}$}
\State $d_{ball} \gets 2r_{ball}$
\State $x_{scale} \gets d_{ball}$
\State $y_{scale} \gets \sqrt{d_{ball}^2 - r_{ball}^2}$
\State $z_{scale} \gets \sqrt{\frac{2}{3}}d_{ball}$
\State $x_{shift} \gets r_{ball}$
\State $y_{shift} \gets - \frac{d_{ball}}{\sqrt{3}}$
\State $M \gets \text{ceil}(\frac{2R_{net}}{ \text{min}([x_{scale},y_{scale},z_{scale}])})$
\State $counter \gets 0$
\For{$i \gets 1$ to $M$}
\For{$j \gets 1$ to $M$}
\For{$k \gets 1$ to $M$}
\State $counter \gets counter + 1$
\State $x[counter] \gets k \times x_{scale}$
\State $y[counter] \gets j \times y_{scale}$
\State $z[counter] \gets i \times z_{scale}$
\If{$j$ even}
\State $x[counter] \gets x[counter] + x_{shift}$
\EndIf
\If{$i$ even}
\State $y[counter] \gets y[counter] + y_{shift}$
\EndIf
\EndFor
\EndFor
\EndFor
\State $N \gets M^3$ \Comment{Total number of nodes in the lattice}
\State \Return $(x,y,z)$, $N$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{\label{alg:sphere_lattice} Initialising Sphere lattice}
\begin{algorithmic}[1]
\scriptsize
\Require
\Statex $R_{net}$: radius of the spherical lattice
\Statex $r_{ball}$: radius of points in the lattice
\Ensure
\Statex $(x_1,y_1,z_1), \dots, (x_{N_{net}},y_{N_{net}},z_{N_{net}})$: $(x_{sphere},y_{sphere},z_{sphere})$ coordinates of nodes
\Statex $r_1, \dots, r_{N_{net}}$: $r_{sphere}$ radii of nodes
\Statex $N_{net}$: number of nodes in the spherical lattice
\Statex
\Function{EstablishSphereLattice}{$R_{net}, r_{ball}$}
\State $(x, y, z), N \gets $\Call{EstablishHCPLattice}{$R_{net}, r_{ball}$}
\State $x \gets x - \text{mean}(x)$ \Comment{demean vector x}
\State $y \gets y - \text{mean}(y)$ \Comment{demean vector y}
\State $z \gets z - \text{mean}(z)$ \Comment{demean vector z}
\State $r \gets \sqrt{x^2 + y^2 + z^2}$ \Comment{compute norm over all points}
\State $counter \gets 0$
\For{$i \gets 1$ to $N$}
\If{$r[i] \leq R_{net}$} \Comment{Find members of hcp-lattice within sphere radius $R_{net}$}
\State $counter \gets counter + 1$
\State $x_{sphere}[counter] \gets x[i]$
\State $y_{sphere}[counter] \gets y[i]$
\State $z_{sphere}[counter] \gets z[i]$
\State $r_{sphere}[counter] \gets r[i]$
\EndIf
\EndFor
\State $N_{net} \gets counter$ \Comment{Define number of nodes within the spherical domain}
\State \Return $(x_{sphere},y_{sphere},z_{sphere})$, $r_{sphere}$, $N_{net}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{\label{alg:init_lattice} Initialising lattice}
\begin{algorithmic}[1]
\scriptsize
\Require
\Statex $R_{net}$: radius of the spherical lattice
\Statex $r_{ball}$: radius of points in the lattice
\Ensure
\Statex $(x_1,y_1,z_1), \dots, (x_N,y_N,z_N)$: $(x,y,z)$ coordinates of nodes
\Statex $r_1, \dots, r_N$: $r$ radial coordinate of nodes
\Statex $K \in \{0,1\}^{N \times N}$: connectivity matrix
\Statex
\Function{EstablishLattice}{$R_{net}, r_{ball}$}
\State $(x, y, z), r, N \gets $\Call{EstablishSphereLattice}{$R_{net}, r_{ball}$}
\State $d_{ball} \gets 2 r_{ball}$
\For{$i \gets 1$ to $N$}
\For{$j \gets 1$ to $N$}
\State $dist \gets \sqrt{(x[i] - x[j])^2 + (y[i] - y[j])^2 + (z[i] - z[j])^2}$
\If{$dist = d_{ball}$} \Comment{In practice, equality is tested within a small numerical tolerance}
\State $K[i][j] \gets 1$
\Else
\State $K[i][j] \gets 0$
\EndIf
\EndFor
\EndFor
\State \Return $(x,y,z)$, $r$, $K$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{\label{alg:init_pop} Initialising populations}
\begin{algorithmic}[1]
\scriptsize
\Require
\Statex $N$: number of nodes in network
\Statex $\rho$: proportion of population 1 nodes
\Ensure
\Statex $N_1$, $N_2$: number of nodes in the respective populations
\Statex $P_1$, $P_2$: population sets
\Statex $\mathcal{A}$: network sortedness
\Statex
\Function{AssignInitialPopulations}{$N$, $\rho$}
\State $U \gets \text{random permutation of } \{1,\dots, N\}$
\State $N_1 \gets \text{floor}(\rho N)$
\State $N_2 \gets N - N_1$
\State $P_{1}, P_{2} \gets$ integer array of length $N_1$, integer array of length $N_2$
\For{$k \gets 1$ to $N_1$}
\State $P_1[k] = U[k]$ \Comment{Assign first $N_1$ elements of $U$ to $P_1$}
\EndFor
\For{$k \gets 1$ to $N_2$}
\State $P_2[k] = U[N_1+k]$ \Comment{Assign last $N_2$ elements of $U$ to $P_2$}
\EndFor
\State $\mathcal{A} \gets $ network sortedness value \eqref{eq:net_assort} using $P_1$ and $P_2$
\State \Return $N_1$, $N_2$, $P_1$, $P_2$, $\mathcal{A}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{\label{alg:build_f} Defining node pair selection probabilities}
\begin{algorithmic}[1]
\scriptsize
\Require
\Statex $r_1,\dots,r_N$: radial coordinates of nodes
\Statex $N_1$, $N_2$: number of nodes in the respective population
\Statex $P_1$, $P_2$: population sets
\Ensure
\Statex $f \propto$ probability density function for node pair selection
\Statex $F \propto$ cumulative density function for node pair selection
\Statex $Q$: normalisation constant for $f$
\Statex
\Function{ComputeSelectionProbabilities}{$r[]$, $N_1$, $N_2$, $P_1$, $P_2$}
\State $f \gets $ array of length $N_1\times N_2$
\State $F \gets$ array of length $N_1\times N_2$
\State $k$, $Q \gets 0$
\For{$i \gets 1$ to $N_1$}
\For{$j \gets 1$ to $N_2$}
\State $k \gets k+1$
\State $p \gets 1/R_{n_i,P_1} \times 1/R_{n_j,P_2}$ \Comment{Weight probability of node pair being selected using \eqref{eq:selection_weights}}
\State $f[k] = p$
\State $Q \gets Q + p$
\State $F[k] \gets Q$
\EndFor
\EndFor
\State \Return $f$, $F$, $Q$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{\label{alg:swap_nodes} Node population swapping}
\begin{algorithmic}[1]
\scriptsize
\Require
\Statex $f \propto$ probability density function for node pair selection
\Statex $F \propto$ cumulative density function for node pair selection
\Statex $Q$: normalisation constant for $f$
\Statex $N_1$, $N_2$: number of nodes in the respective populations
\Statex $P_1$, $P_2$: sets of indices of nodes in the respective population
\Ensure
\Statex $\widetilde{P}_1$, $\widetilde{P}_2$: population sets following node population swap
\Statex $\mathcal{A}_p$: network sortedness of network with node populations swapped
\Statex $k$: index of node pair swapped
\Statex
\Function{NodeSwap}{$f$, $F$, $Q$, $N_1$, $N_2$, $P_1$, $P_2$}
\State $u \gets U(0,1)$ \Comment{Sample from unit uniform distribution}
\State $k \gets 1$
\While{$u > F[k]/Q$}
\State $k \gets k + 1$
\EndWhile
\State $i$, $j \gets \lceil k/N_2 \rceil$, $(k-1) \bmod N_2 + 1$ \Comment{Indices of selected population nodes}
\State $\widetilde{P}_1$, $\widetilde{P}_2 \gets P_1$, $P_2$ \Comment{Create copies of $P_1$ and $P_2$}
\State $\widetilde{P}_1(i)$, $\widetilde{P}_2(j) \gets P_2(j)$, $P_1(i)$ \Comment{Trial node population swap}
\State $\mathcal{A}_{p} \gets$ network sortedness value \eqref{eq:net_assort} using $\widetilde{P}_1$ and $\widetilde{P}_2$
\State \Return $\widetilde{P}_1$, $\widetilde{P}_2$, $\mathcal{A}_p$, $k$
\EndFunction
\end{algorithmic}
\end{algorithm}
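The selection step in \Alg{alg:build_f} and \Alg{alg:swap_nodes} is inverse-transform sampling over the pair weights: draw $u \sim U(0,1)$ and return the first index whose cumulative weight reaches $uQ$. A 0-indexed numpy sketch with illustrative weights (ours, not the released code):

```python
import numpy as np

def build_cdf(f):
    """Cumulative weights F and normalisation constant Q (cf. Alg. 6)."""
    F = np.cumsum(np.asarray(f, dtype=float))
    return F, float(F[-1])

def sample_index(F, Q, u):
    """Inverse-CDF draw (cf. Alg. 7): the smallest 0-based index k
    with F[k] >= u * Q."""
    return int(np.searchsorted(F, u * Q, side="left"))

def pair_from_index(k, N2):
    """Map a flat 0-based pair index k back to (i, j), i outer, j inner."""
    return k // N2, k % N2
```

For example, with weights $f = (1, 2, 3)$ and hence $Q = 6$, draws $u = 0.1$, $0.5$, and $0.9$ select indices 0, 1, and 2, respectively; rejecting a pair then amounts to subtracting its weight from the tail of $F$ and from $Q$, exactly as in \Alg{alg:orig}.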
\subsection{Evaluation of collective dynamics}
\label{sec:sum_stats_supp}
For each node, the number of peaks was identified by searching the Ca\textsuperscript{2+} timecourse for maxima exceeding $0.01~\mu$M across the simulation duration (see Fig.~\ref{fig:single_cell}).
For a network with $N$ nodes, the time-dependent Kuramoto order parameter is a complex-valued scalar defined as
\begin{equation}
{z}(t) = R(t) \mathrm{e}^{i\Theta(t)} = \frac{1}{N}\sum_{j=1}^{N}{\mathrm{e}^{i\theta_{j}(t)}},
\label{eq:kuramoto}
\end{equation}
where $\theta_j(t)$ is the phase of the $j$th node, as extracted via a mean-subtracted Hilbert transform of the Ca\textsuperscript{2+} signal for node $j$.
The argument of $z$, $\Theta$, is the mean phase of the network whilst its magnitude, $R$, measures the degree of synchrony across the network.
We sample the Ca\textsuperscript{2+} signal at equispaced time points $t_i = i \delta t$, $i=0,\dots,T-1$, and record the time-averaged degree of synchronisation $\overline{R} = \frac{1}{T} \sum_{i=0}^{T-1} R(t_i)$.
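Given the extracted phases, \eqref{eq:kuramoto} and $\overline{R}$ reduce to a few lines of numpy (the Hilbert-transform phase extraction is omitted in this sketch):

```python
import numpy as np

def kuramoto(theta):
    """Kuramoto order parameter z(t) = R(t) e^{i Theta(t)} from an array of
    phases theta with shape (T, N): time along rows, nodes along columns."""
    z = np.exp(1j * np.asarray(theta)).mean(axis=1)
    return np.abs(z), np.angle(z)     # R(t), Theta(t)

def mean_sync(theta):
    """Time-averaged degree of synchronisation R-bar."""
    return kuramoto(theta)[0].mean()
```

Perfectly in-phase nodes give $\overline{R} = 1$, whereas phases spread evenly around the circle give $\overline{R} \approx 0$.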
\subsection{The swapping algorithm generally converges to a single cluster of population 1 nodes}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/figure2KW}
\caption{\textbf{Examples of the swapping algorithm.} Five examples of the forward swapping algorithm and the associated $\mathcal{A}$ values. The final row of panels shows the final iteration, when no increases in $\mathcal{A}$ are possible. Population 1 nodes are $10\%$ of the total number of nodes and are shown in blue. Population 2 is shown in black. Generally, population 1 forms a single cluster as the algorithm converges, however this is not always the case (see example 5).}
\label{fig:assort_examples}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/five_examples_backward.pdf}
\caption{\textbf{Examples of the backward swapping algorithm.} Five examples of the backward swapping algorithm and the associated $\mathcal{A}$ values. The final row of panels shows the final iteration, when no decreases in $\mathcal{A}$ are possible. Population 1 nodes account for $10\%$ of the total number of nodes and are shown in blue. Population 2 is shown in black. Generally, all population 1 nodes become isolated as the algorithm converges.}
\label{fig:assort_examples_backward}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/sorted_stats.pdf}
\caption{\textbf{Convergence of the swapping algorithm.} \textbf{A)} The relationship between $\mathcal{A}$ and $a$ is shown for a subset of the 1,000 runs of \Alg{alg:orig} (grey lines).
Population 1 was 10\% of the total number of nodes in the network.
The average $\pm$ standard deviation is shown as blue lines.
$\overline{\mathcal{A}}_{init} = -9.37\times 10^{-4}$ was the mean value of $\mathcal{A}$ at $a = 0$ over all runs.
$\overline{\mathcal{A}}_{final} = 0.69$ was the mean value of $\mathcal{A}$ at $a = a_{final}$ over all runs.
$\overline{a}_{final} = 227.75$ was the mean value of $a_{final}$ over all runs.
\textbf{B)} The relationship between population 1 clusters and $a$ is shown for a subset of the 1,000 runs of \Alg{alg:orig} (grey lines).
Population 1 was 10\% of the total number of nodes in the network. The average $\pm$ standard deviation is shown as blue lines.
$\overline{C}_{init} = 56.02$ was the average number of population 1 clusters at $a = 0$ over all runs.
$\overline{C}_{final} = 1.05$ was the average number of population 1 clusters at $a = a_{final}$ over all runs.
\textbf{C)} The relationship between $\mathcal{A}$ and $a$ for the forward and backward algorithm when population 1 was $10\%$ and $20\%$ of the overall network.
The average over all runs is plotted for each case.
Each curve also has a point of the same colour which indicates ($\overline{a}_{final}$, $\overline{\mathcal{A}}_{final}$).
\textbf{D)} The relationship between population 1 clusters and $a$ for the forward and backward algorithm when population 1 was $10\%$ and $20\%$ of the overall network.
The average over all runs is plotted for each case.
Each curve also has a point of the same colour which indicates ($\overline{a}_{final}$, $\overline{C}_{final}$).
\textbf{E)} An inset showing the distribution of $a_{final}$ over the 1,000 runs when population 1 was $10\%$ of the network and the algorithm was run in the forward direction.
}
\label{fig:assort_examples2}
\end{figure}
We first ran \Alg{alg:orig} $1,000$ times in configurations where nodes from population 1 accounted for $10\%$ of the network (i.e., $N_{1} = 102$ and $N_{2} = 916$).
The initial networks ($a = 0$) were uniformly randomly distributed ($\mathcal{A}_{init} = -9.37\times 10^{-4} \pm 0.012$), and the algorithm was run until it reached convergence ($a=a_{final}$). Fig. \ref{fig:assort_examples} shows five examples (one per column) at several iterations between the uniformly random spatial distribution ($a = 0$) and convergence ($a = a_{final}$). We found that convergence took $227.75 \pm 40.34$ iterations (Fig. \ref{fig:assort_examples2}E) and that the final network sortedness was $\mathcal{A}_{final} = 0.69 \pm 0.019$. Fig. \ref{fig:assort_examples2}A shows examples of the relationship between $\mathcal{A}$ and $a$ for individual runs of the swapping algorithm (grey lines), as well as the average $\pm$ standard deviation (blue lines) over all runs.
For each run, we determined the number of population 1 clusters (or connected components) as a function of iterations $a$.
We found that the population 1 nodes were initially separated into $56.02 \pm 4.86$ connected components ($C_{init}$) at $a = 0$.
In $96.3\%$ of cases, population 1 formed a single cluster at $a = a_{final}$.
The first four columns in Fig. \ref{fig:assort_examples} show cases where population 1 converged to a single cluster.
In the remaining $3.7\%$ of cases, the population 1 nodes formed multiple clusters at convergence.
One such example is displayed in the fifth column of Fig. \ref{fig:assort_examples}, in which the final network consisted of three clusters.
Across all runs, we found that the number of population 1 clusters at convergence was two, three, and four in $2.4\%$, $1.2\%$, and $0.1\%$ of runs, respectively.
Fig. \ref{fig:assort_examples2}B shows examples of the relationship between the number of population 1 clusters and $a$ (grey lines), as well as the average $\pm$ standard deviation (blue lines) over all runs.
We next ran the backward algorithm $1,000$ times when population 1 formed $10\%$ of the network nodes. Fig. \ref{fig:assort_examples_backward} shows five examples (one per column) at several iterations between uniform-random spatial distribution ($a = 0$) and convergence ($a = a_{final}$).
We found that convergence took $202.68 \pm 14.58$ iterations and the final network sortedness was $\mathcal{A}_{final} = -0.11 \pm 0.00$.
In addition, the number of connected components at $a_{final}$ was $102$ in each case.
This is because the algorithm always reached a state in which all population 1 cells were isolated from one another (i.e., these nodes were coupled only to nodes from population 2).
Finally, we ran the forward and backward algorithm again $1,000$ times when population 1 comprised $20\%$ of the network (i.e., $N_{1} = 204$ and $N_{2} = 814$).
The statistics for each of these cases are reported in Table \ref{tbl:stats}.
In Fig. \ref{fig:assort_examples2}B, we show the average relationship between $\mathcal{A}$ and $a$ and Fig. \ref{fig:assort_examples2}D shows the average relationship between population 1 clusters and $a$ over all runs for each case.
\begin{table}
\centering
\begin{tabular}{||c c c c c c c||}
\hline
$N_1$ & $N_2$ & direction & $a_{final}$ & $\mathcal{A}_{final}$ & $C_{init}$ & $C_{final}$\\ [0.5ex]
\hline\hline
102 & 916 & $+1$ & $227.75 \pm 40.34$ & $0.69 \pm 0.019$ & $56.02 \pm 4.86$ & $1.05 \pm 0.28$\\
\hline
102 & 916 & $-1$ & $202.68 \pm 14.58$ & $-0.11 \pm 0.00$ & $56.02 \pm 4.86$ & $102 \pm 0$ \\
\hline
204 & 814 & $+1$ & $382.36 \pm 56.12$ & $0.72 \pm 0.0060$ & $46.33 \pm 5.95$ & $1.01 \pm 0.095$\\
\hline
204 & 814 & $-1$ & $401.41 \pm 24.48$ & $-0.22 \pm 0.0029$ & $46.33 \pm 5.95$ & $203.97 \pm 0.29$\\[1ex]
\hline
\end{tabular}
\caption{Swapping algorithm statistics where population 1 comprises $10\%$ and $20\%$ of the network.}
\label{tbl:stats}
\end{table}
\subsection{The relationship between drive and sortedness with respect to network synchronisation and activation across many initial seeds of the sorting algorithm}
In section \ref{subsec:single_network}, we characterised the behaviours displayed by the networks defined by the population sets $\mathcal{P}_1$ and $\mathcal{P}_2$ for $G \in [0.3, 0.55]$ (the interval over which cells in population 1 are intrinsically active, whilst those in population 2 are not).
We found that for strong coupling ($g_{coup}=10$), the threshold for activation and synchronisation of the full network is strongly dependent on $\mathcal{A}$, such that increasing $\mathcal{A}$ decreases the necessary drive $G$ for transition (see \Sec{subsubsec:strong_coupling}).
For $g_{coup} \in \{1,2\}$, we found several regimes of activity, as described in \Sec{subsubsec:middle_coupling} and \Sec{subsubsec:low_coupling}.
Here, we wish to establish if the identified domains of activity persist across general families of networks with similar $\mathcal{A}$ but different membership of the population sets.
To do this, we defined ranges for the extracellular glucose concentration $G \in [0.3,0.55]$ and for the number of network iterations $a\in[0,250]$ ($a\in[0,400]$ when ${N_1}/{N} \approx 0.2$).
We selected $M=2,048$ points in the $(a,G)$ plane over these ranges following a Latin hypercube sampling.
For each realisation $m\in \{1,\dots,M\}$, we ran \Alg{alg:orig} for $a_m$ iterations and recorded the resulting sortedness value $\mathcal{A}_m$.
For maximum coverage of the range of possible values of $\mathcal{A}$, \Alg{alg:orig} was run in either a forward or a backward fashion (see \Sec{sec:algorithm}), with the direction $d_m\in\{-1,1\}$ selected randomly with uniform probability.
Once the algorithm terminated, the dynamics \eqref{eq:Veq}-\eqref{eq:coupling} of the resulting network configuration were simulated using the chosen activation value $G_m$, and the summary statistics described in \Sec{sec:sum_stats} were evaluated.
These summary statistics were then plotted against the set of $(\mathcal{A}_m,G_m)$ values.
We then repeated this process for different values of $g_\text{coup}$ and proportions of population 1 nodes $N_1 = 102$ (${N_1}/{N} \approx 0.1$) (as in the \Sec{subsec:single_network}) and $N_1 = 204$ (${N_1}/{N} \approx 0.2$).
Evaluating the level sets produced complicated curves, owing to the use of different realisations of \Alg{alg:orig} and of different initial conditions. Since the complex nature of these level sets was unrelated to the relationship between sortedness, drive, and network dynamics, and further obfuscated the results, we opted to remove these portions of the level sets from Figs.~\ref{fig:hypercube_set_strong}--\ref{fig:hypercube_set_weak}. As an example for comparison, Fig.~\ref{fig:hypercube_set_weak_raw} includes the full level sets corresponding to Fig.~\ref{fig:hypercube_set_weak}A.
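Latin hypercube sampling partitions each parameter range into $M$ equal strata and places exactly one sample in every stratum of every dimension. A numpy sketch (ours; the function name and the example ranges are illustrative):

```python
import numpy as np

def latin_hypercube(M, ranges, rng=None):
    """Latin hypercube sample of M points over the given (lo, hi) ranges:
    each of the M equal strata of every dimension receives one point."""
    rng = np.random.default_rng(rng)
    d = len(ranges)
    strata = np.tile(np.arange(M), (d, 1))
    strata = rng.permuted(strata, axis=1).T   # independent permutation per dim
    u = (strata + rng.random((M, d))) / M     # one uniform draw per stratum
    lo = np.array([a for a, _ in ranges], dtype=float)
    hi = np.array([b for _, b in ranges], dtype=float)
    return lo + u * (hi - lo)

# e.g. 2,048 points over iterations a in [0, 250] and drive G in [0.3, 0.55]
pts = latin_hypercube(2048, [(0.0, 250.0), (0.3, 0.55)], rng=0)
```

Each marginal is therefore evenly stratified, which gives good coverage of the $(a, G)$ plane with far fewer points than a full grid.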
\subsubsection{Increasing $\mathcal{A}$ lowers the required drive $G$ for a transition to globally synchronised bursting for strong coupling and varying population sizes}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/hypercube_set_strong.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for strong coupling across realisations of the swapping algorithm.}
\textbf{A)} Plotting $\overline{P}$ on a set of realisations of the swapping algorithm, $(\mathcal{A}_m, G_m)$, shows a decreasing trend in the drive necessary for activation with respect to $\mathcal{A}$. In this case, population 1 was $10\%$ of the network. \textbf{B)} Plotting $\overline{R}$ on the same set of realisations shows a decreasing trend in the drive necessary for synchronisation with respect to $\mathcal{A}$. Again, population 1 was $10\%$ of the network. \textbf{C)} As in A, but where population 1 was $20\%$ of the network. \textbf{D)} As in B, but where population 1 was $20\%$ of the network.}
\label{fig:hypercube_set_strong}
\end{figure}
We found that the monotonic decreasing relationship between $\mathcal{A}$ and $G$ discussed in \Sec{subsubsec:strong_coupling} persists when each point $(\mathcal{A}_m,G_m)$ corresponds to a different realisation of the swapping algorithm.
The regimes $\mathcal{D}^-$ and $\mathcal{D}^+$ both exist and can be separated by the same level sets as defined previously: $L^{*} = \{(\mathcal{A}, \ G) \mid \overline{R}(\mathcal{A}, \ G)=0.9\}$, and $L^{+} = \{(\mathcal{A}, \ G) \mid \overline{P}(\mathcal{A}, \ G)=5\}$.
These boundaries show a decreasing trend in $G$ with respect to $\mathcal{A}$ in the transition from global quiescence to globally synchronised oscillations, although because each point represents a different realisation of \Alg{alg:orig} (and a distinct set of initial conditions), the separatrix is now non-monotonic.
Fig.~\ref{fig:hypercube_set_strong} shows $\overline{P}$ and $\overline{R}$ when population 1 nodes account for $10\%$ (Fig.~\ref{fig:hypercube_set_strong}A,B) of the network and for $20\%$ (Fig.~\ref{fig:hypercube_set_strong}C,D) of the network.
We found that increasing the proportion of population 1 nodes did not change the nature of the relationship between $\mathcal{A}$ and $G$, however, the threshold for activation $G$ was decreased over all values of $\mathcal{A}$.
This decrease in threshold is expected as the number of intrinsically active nodes (and hence `intrinsic' network excitability) in the network was doubled.
\subsubsection{The regime of inter-population resonance persists across realisations of the swapping algorithm for middle-strength coupling}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/hypercube_set_middle.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for middle-strength coupling across realisations of the swapping algorithm.} \textbf{A)} Plotting $\overline{P}$ shows that $\mathcal{D}^*$ persists across realisations of the swapping algorithm. In this case, population 1 was $10\%$ of the network. \textbf{B)} Plotting $\overline{R}$ shows that $\mathcal{D}^*$ persists across realisations of the swapping algorithm. In this case, population 1 was $10\%$ of the network. \textbf{C)} As in A, but where population 1 was $20\%$ of the network. \textbf{D)} As in B, but where population 1 was $20\%$ of the network.}
\label{fig:hypercube_set_middle}
\end{figure}
For intermediate-strength coupling ($g_{coup}=2$), we found that the regimes discussed in \Sec{subsubsec:middle_coupling} still exist when each point $(\mathcal{A}_m,G_m)$ corresponds to a different realisation of the swapping algorithm.
In particular, we found the regions $\mathcal{D}^-$, $\mathcal{D}^+$, and $\mathcal{D}^*$, which can be separated by the level sets $L_1^+$ and $L_2^+$, where $L_{k}^{+} =\{(\mathcal{A},G) \mid \overline{P}_{k}(\mathcal{A},G)=5\}$ for $k \in \{1,2\}$.
Moreover, we observed some network simulations which exhibited lowered $\overline{R}$ within $\mathcal{D}^+$, which we conjecture is the result of multi-stability (i.e., different asymptotic dynamics for different initial conditions), as in Fig.~\ref{fig:network_set_middle_yi}.
Figure~\ref{fig:hypercube_set_middle} shows $\overline{P}$ and $\overline{R}$ in the case when population 1 nodes comprise $10\%$ (Fig.~\ref{fig:hypercube_set_middle}A,B) and $20\%$ (Fig.~\ref{fig:hypercube_set_middle}C,D) of the network.
As in the case of strong coupling, each regime is shifted downward with respect to $G$ when the proportion of intrinsically active nodes is increased to $20\%$.
In fact, we found that for high degrees of sortedness, the inter-population resonance regime begins at the lowest value of $G$ that we considered ($G=0.3$).
This shows that for middle-strength coupling, high sortedness, and population 1 nodes comprising $20\%$ of the network, activation of the network occurs for values of $G$ very near the threshold ($G\approx 0.25$) at which isolated population 1 nodes become active.
\subsubsection{Non-monotonicity with respect to synchronisation persists for weak coupling across realisations of the swapping algorithm and for differing population 1 sizes}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/hypercube_set_weak.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for weak coupling across realisations of the swapping algorithm.} \textbf{A)} Plotting $\overline{P}$ shows that the regimes $\mathcal{D}^*$ and $\mathcal{D}^\&$ persist across realisations of the swapping algorithm. In this case, population 1 was $10\%$ of the network. \textbf{B)} Plotting $\overline{R}$ shows that the regimes $\mathcal{D}^*$ and $\mathcal{D}^\&$ persist across realisations of the swapping algorithm. In this case, population 1 was $10\%$ of the network. \textbf{C)} As in A, but where population 1 was $20\%$ of the network. \textbf{D)} As in B, but where population 1 was $20\%$ of the network.}
\label{fig:hypercube_set_weak}
\end{figure}
Finally, we considered weak coupling ($g_{coup}=1$) for $(\mathcal{A}_m, G_m)$ using $M$ realisations of the swapping algorithm.
Figure~\ref{fig:hypercube_set_weak} shows $\overline{P}$ and $\overline{R}$ when population 1 nodes account for $10\%$ (Fig.~\ref{fig:hypercube_set_weak}A,B) and $20\%$ (Fig.~\ref{fig:hypercube_set_weak}C,D) of the network.
We found that non-monotonicity of the boundary to synchronised activity with respect to increasing $G$ was persistent for this weak coupling case.
The upper boundary of the inter-population resonance regime ($\mathcal{D}^*$), given by the set $L^{2*}$, shows an increasing trend with respect to $\mathcal{A}$ both when population 1 nodes comprise $10\%$ (Fig.~\ref{fig:hypercube_set_weak}B) and $20\%$ (Fig.~\ref{fig:hypercube_set_weak}D) of the network.
Moreover, we again found that the activation threshold for population 1 nodes with respect to $G$ decreases as $\mathcal{A}$ increases, which is captured by $L_1^+$, where $L_{k}^{+} =\{(\mathcal{A},G) \mid \overline{P}_{k}(\mathcal{A},G)=5\}$ for $k \in \{1,2\}$.
Interestingly, we found that the activation of population 2 nodes (reflected by $L_2^+$) with respect to $G$ shows a decreasing trend as $\mathcal{A}$ increases, but only for very low values of $\mathcal{A}$.
We conjecture that this relationship was not observed in \Sec{subsubsec:low_coupling} because only positive values of $\mathcal{A}$ (resulting from the forward algorithm) were considered there, whereas here we also include realisations of the backward algorithm (leading to negative values of $\mathcal{A}$ being considered).
Here, we found that the bounds of $\mathcal{D}^*$, those being $L^{1*}$ and $L^{2*}$, needed to be modified depending on the proportion of population 1 nodes in the network.
In particular, when only $10\%$ of the network nodes were from population 1, we defined the level set $L^{*} = \{(\mathcal{A}, \ G) \mid \overline{R}(\mathcal{A}, \ G)=0.9\}$ as in \Sec{subsubsec:low_coupling} which subsequently led to the definition of two curves: the lower bound $L^{1*}$ and the upper bound $L^{2*}$ (Fig.~\ref{fig:hypercube_set_weak}B).
However, when the proportion of population 1 nodes was increased to $20\%$, we instead defined $L^{*} = \{(\mathcal{A}, \ G) \mid \overline{R}(\mathcal{A}, \ G)=0.8\}$ (Fig.~\ref{fig:hypercube_set_weak}D). The thresholds we chose were dependent on the number of population 2 nodes. This is because we sought to define level sets that bounded the 2:1 resonance region. In that region, nodes are synchronised within, but not between, populations. Therefore, $\overline{R}$ is approximately equal to the fraction of nodes in the larger population (i.e., population 2).
\subsection{Additional figures referenced in the manuscript}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/assortativity.pdf}
\caption{\textbf{Illustrative example of network sortedness metric.} The sortedness metrics are computed for the example network comprising $N_1 = 3$ population 1 nodes (blue) and $N_2 = 4$ population 2 nodes (pink) with population sets $P_1 = \{1,4,6\}$ and $P_2 = \{2,3,5,7\}$. The node sortedness values, $A_i$, $i=1,\dots,7$, take the indicated values. For ease of viewing one example calculation, the edges of node 4 are highlighted in the colour corresponding to the population of each of its neighbouring nodes.
The grey box shows the population sortedness evaluations, $\overline{A}_k$, $k\in\{1,2\}$, computed using \eqref{eq:ave_node_assort} and overall network sortedness, $\mathcal{A}$, computed using \eqref{eq:net_assort}.}
\label{fig:assortativity}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{figures/iteration.pdf}
\caption{\textbf{Example of one iteration of the network sorting algorithm in the forward direction.} The initial network with $a=0$ is in the maximally unsorted state so that no two nodes from population 1 (blue) are coupled to one another. Here, the population sets are $P_1 = \{1,5,6\}$ and $P_2 = \{2,3,4,7\}$ and network sortedness is equal to $-5/8$. The algorithm attempts to move node 4 to population 1 and node 5 to population 2 (pink). In the trial configuration shown in the grey box, the network sortedness is equal to $-1/3 > -5/8$ and so the swap is accepted. Thus, the population sets are updated to $P_1 = \{1,4,6\}$ and $P_2 = \{2,3,5,7\}$ and the iteration counter is increased to $a=1$. If the swap were rejected, another pair of nodes would be selected at random and the computation of network sortedness would be repeated. If no possible swap changes $\mathcal{A}$ in the desired direction, the algorithm would terminate without incrementing the iteration number, $a$.}
\label{fig:alg_iteration}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/figure4KW2}
\caption{\textbf{Phase transitions with respect to spatial sortedness.} The mean Ca\textsuperscript{2+} dynamics of population 1 (blue) and population 2 (black) are shown as $\mathcal{A}$ increases. For higher $G$, activation occurs at lower $\mathcal{A}$. Conversely, increasing $\mathcal{A}$ allows weaker $G$ to activate the system.}
\label{fig:phase_transition_example}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/network_set_strong.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for strong coupling.} \textbf{A)} Plotting $\overline{P}$ averaged over three sets of initial conditions shows that for increasing $\mathcal{A}$, lower drive $G$ is required to activate the network. \textbf{B)} Plotting $\overline{R}$ averaged over three sets of initial conditions shows that for increasing $\mathcal{A}$, lower drive $G$ is required to synchronise the network. \textbf{C)} Average Ca\textsuperscript{2+} across population 1 nodes ($\overline{c}_1$) and population 2 nodes ($\overline{c}_2$) for ($\mathcal{A}$, $G$) pairs illustrates a strong global signal in $\mathcal{D}^+$. \textbf{D)} Raster plots showing the strong synchronisation within $\mathcal{D}^+$ for strong coupling. The raster plot is ordered such that nodes whose indices are in $P_1$, i.e. population 1 nodes, are shown in blue at the top of the plot, whilst nodes whose indices are in $P_2$ are shown in black at the bottom of the plot.}
\label{fig:network_set_strong}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{figures/network_set_strong_yi.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for strong coupling (three initial conditions).} \textbf{A)} Plotting $\overline{P}$ shows that for increasing $\mathcal{A}$, lower drive $G$ is required to activate the network (parameter set $Y_1(0)$). \textbf{B)} Plotting $\overline{R}$ shows that for increasing $\mathcal{A}$, lower drive $G$ is required to synchronise the network (parameter set $Y_1(0)$). \textbf{C)} $\overline{P}$ for $Y_2(0)$. \textbf{D)} $\overline{R}$ for $Y_2(0)$. \textbf{E)} $\overline{P}$ for $Y_3(0)$. \textbf{F)} $\overline{R}$ for $Y_3(0)$.}
\label{fig:network_set_strong_yi}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/network_set_middle.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for middle-strength coupling.} \textbf{A)} Plotting $\overline{P}$ averaged over three sets of initial conditions shows a third regime $\mathcal{D}^*$ bounded by $L_1^+$ and $L_2^+$. \textbf{B)} Plotting $\overline{R}$ averaged over three sets of initial conditions shows that for increasing $\mathcal{A}$, lower drive $G$ is required to synchronise the network. \textbf{C)} Average Ca\textsuperscript{2+} across population 1 nodes ($\overline{c}_1$) and population 2 nodes ($\overline{c}_2$) for ($\mathcal{A}$, $G$) pairs illustrates a strong global signal in $\mathcal{D}^+$ and that population 2 nodes are active at half the frequency of population 1 nodes, on average, in $\mathcal{D}^*$. \textbf{D)} Raster plots showing the strong synchronisation within $\mathcal{D}^+$, 2:1 frequency resonance in $\mathcal{D}^*$, and intermediate activity with lowered synchronisation in a band separating the two regimes. The raster plot is ordered such that nodes whose indices are in $P_1$, i.e. population 1 nodes, are shown in blue at the top of the plot, whilst nodes whose indices are in $P_2$ are shown in black at the bottom.}
\label{fig:network_set_middle}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\linewidth]{figures/network_set_middle_yi.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for middle-strength coupling (three initial conditions).} \textbf{A)} $\overline{P}$ for $Y_1(0)$. \textbf{B)} $\overline{R}$ for $Y_1(0)$. \textbf{C)} $\overline{P}$ for $Y_2(0)$. \textbf{D)} $\overline{R}$ for $Y_2(0)$. \textbf{E)} $\overline{P}$ for $Y_3(0)$. \textbf{F)} $\overline{R}$ for $Y_3(0)$. \textbf{G)} Mean Ca\textsuperscript{2+} for $P_1$ ($\overline{c}_1$) and $P_2$ ($\overline{c}_2$) showing multi-stability in the $\mathcal{D}^+$ regime. \textbf{H)} Raster plots of the population 1 (blue) and population 2 (black) nodes showing multi-stability in the $\mathcal{D}^+$ regime. The raster plot is ordered such that nodes whose indices are in $P_1$, i.e. population 1 nodes, are shown at the top of the plot.}
\label{fig:network_set_middle_yi}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/network_set_weak.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for weak coupling.} \textbf{A)} Plotting $\overline{P}$ averaged over three sets of initial conditions shows that activation of population 1, but not population 2, is dependent on sortedness. \textbf{B)} Plotting $\overline{R}$ averaged over three sets of initial conditions shows that synchronisation is non-monotonic with respect to $G$, peaking within a $2:1$ resonance regime $\mathcal{D}^*$. \textbf{C)} Average Ca\textsuperscript{2+} for population 1 nodes ($\overline{c}_1$) and population 2 nodes ($\overline{c}_2$) for ($\mathcal{A}$, $G$) shows that population 1 is active but only generates a weak global signal in $\mathcal{D}_{1}^{+}$. The dynamics exhibit 2:1 resonance within the region $\mathcal{D}^*$, and lowered coordination and an irregular global signal within $\mathcal{D}^\&$. \textbf{D)} Raster plots showing the weak coordination of spiking activity across population 1 in $\mathcal{D}_{1}^{+}$ and weak coordination of spiking activity across the whole network within $\mathcal{D}^\&$. The raster plot is ordered such that nodes whose indices are in $P_1$, i.e. population 1 nodes, are shown in blue at the top of the plot, whilst nodes whose indices are in $P_2$ are shown in black at the bottom.}
\label{fig:network_set_weak}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{figures/network_set_weak_yi.pdf}
\caption{\textbf{Network activity with respect to sortedness and drive for weak coupling (three initial conditions).} \textbf{A)} $\overline{P}$ for $Y_1(0)$. \textbf{B)} $\overline{R}$ for $Y_1(0)$. \textbf{C)} $\overline{P}$ for $Y_2(0)$. \textbf{D)} $\overline{R}$ for $Y_2(0)$. \textbf{E)} $\overline{P}$ for $Y_3(0)$. \textbf{F)} $\overline{R}$ for $Y_3(0)$.}
\label{fig:network_set_weak_yi}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/hypercube_set_weak_raw.pdf}
\caption{\textbf{Raw version of Fig.~\ref{fig:hypercube_set_weak}A.} This figure shows the complete level sets. We only kept the portions of the level sets $L_1^+$, $L_2^+$, $L^{1*}$, and $L^{2*}$ that had analogues in Fig.~\ref{fig:network_set_weak}A.}
\label{fig:hypercube_set_weak_raw}
\end{figure}
\section{Introduction}
The Laplace-Beltrami operator plays a fundamental role in Riemannian
geometry \cite{Rosenberg98}. Discrete Laplace-Beltrami operators on
triangulated surface meshes span the entire spectrum of geometry
processing applications, including mesh parameterization,
segmentation, reconstruction, compression, re-meshing and so on
\cite{Levy06Laplace,Sorkine06Differential,Zhang09Sprectral}.
The Laplace-Beltrami operator is determined by the Riemannian metric.
The heat kernel can be constructed from the eigenvalues and
eigenfunctions of the Laplace-Beltrami operator; conversely, it
fully determines the Riemannian metric (uniquely
up to a scaling). In this work, we prove the
discrete analogue of this fundamental fact: the discrete
Laplace-Beltrami operator and the discrete Riemannian metric
mutually determine each other.
\smallskip
{\textbf{Related Works }}
In real applications, a smooth metric surface is usually represented
as a triangulated mesh. The manifold heat kernel is estimated from
the discrete Laplace operator. The most well-known and widely-used
discrete formulation of Laplace operator over triangulated meshes is
the so-called \emph{cotangent scheme}, which was originally
introduced in \cite{Dodziuk78Sprectral,Pinkall93MinimalSurface}. Xu
\cite{Xu04Convergence} proposed several simple discretization
schemes of Laplace operators over triangulated surfaces, and
established the theoretical analysis on convergence. Wardetzky et
al. \cite{08Nofreelunch} proved the theoretical limitation that the
discrete Laplacians cannot satisfy all natural properties, thus
explaining the diversity of existing discrete Laplace operators; a
family of operators was presented by extending more natural
properties into the existing ones. Reuter et
al. \cite{Reuter06ShapeDNA} computed a discrete Laplace operator
using the finite element method, and exploited the isometry
invariance of the Laplace operator as shape fingerprint for object
comparison. Belkin et al. \cite{Belkin08MeshLaplace} proposed the
first discrete Laplacian that pointwise converges to the true
Laplacian as the input mesh approximates a smooth manifold better.
Tamal et al. \cite{Tamal10Spectra} employed this mesh Laplacian and
provided the first convergence to relate the discrete spectrum with
the true spectrum, and studied the stability and robustness of the
discrete approximation of Laplace spectra. The eigenfunctions of
Laplace-Beltrami operator have been applied for global intrinsic
symmetry detection in \cite{OvsjanikovSG08}. Heat Kernel Signature
was proposed in \cite{SunOG09}, which is concise and characterizes the
shape up to isometry.
\smallskip
\textbf{Our Results }In this work, we prove that the discrete
Laplace-Beltrami operator based on the cotangent
scheme \cite{Dodziuk78Sprectral,Pinkall93MinimalSurface} is
determined by the discrete Riemannian metric, and also determines
the metric uniquely up to a scaling. The proof uses a
variational approach, which also leads to a practical algorithm to
compute a Riemannian metric from a prescribed Laplace-Beltrami
operator.
\smallskip
\textbf{Paper Outline } In Section \ref{sec:overview}, we briefly overview the
fundamental theorem on the smooth heat kernel and state our theoretical claims
for the discrete case. We first treat the simplest case, a single Euclidean
triangle, in Section \ref{sec:triangle}; we then turn to general
Euclidean polyhedral surfaces in Section \ref{sec:polyhedral}. Finally,
in Section \ref{sec:experiment}, we present a variational algorithm
to compute the unique Riemannian metric from a Laplace-Beltrami
matrix. Numerical experiments on triangle meshes of different
topologies support the theoretical results.
\section{Preliminaries and Proof Overview}
\label{sec:overview}
\subsection{Smooth Case}
\label{sec:smooth}
Suppose $(M,\mathbf{g})$ is a complete
Riemannian manifold, $\mathbf{g}$ is the Riemannian metric. $\Delta$ is the
Laplace-Beltrami operator. The eigenvalues $\{\lambda_n\}$ and
eigenfunctions $\{\phi_n\}$ of $\Delta$ are
\[
\Delta \phi_n = -\lambda_n \phi_n,
\]
where $\phi_n$ is normalized to be orthonormal in $L^2(M)$. The
spectrum is given by
\[
0=\lambda_0 < \lambda_1 \le \lambda_2 \le \cdots, ~~~~\lambda_n \to
\infty.
\]
Then there is a heat kernel $K(x,y,t) \in C^\infty(M\times M \times
\mathbb{R}^+)$, such that
\[
K(x,y,t) = \sum_{n=0}^\infty e^{-\lambda_nt} \phi_n(x)\phi_n(y).
\]
The heat kernel encodes all the information of the Riemannian metric
$\mathbf{g}$. The details of the following theorem can be found in
\cite{SunOG09}.
\smallskip
\begin{theorem} Let $f: (M_1,\mathbf{g}_1)\to (M_2,\mathbf{g}_2)$ be a diffeomorphism
between two Riemannian manifolds. If $f$ is an isometry, then
\begin{equation}
K_1(x,y,t) = K_2( f(x),f(y),t),~\forall x,y \in M,~t >0.
\label{eqn:heat_kernel_2}
\end{equation}
Conversely, if $f$ is a surjective map, and
Eqn. (\ref{eqn:heat_kernel_2}) holds, then $f$ is an isometry.
\end{theorem}
\subsection{Discrete Case}
\label{sec:discrete}
In this work, we focus on discrete surfaces, namely polyhedral
surfaces, e.g., triangle meshes piecewise linearly embedded in
$\mathbb{R}^3$.
\smallskip
\begin{definition} [{Polyhedral Surface}] An Euclidean polyhedral surface is a triple
$(S,T,\mathbf{d})$, where $S$ is a closed surface, $T$ is a triangulation of
$S$ and $\mathbf{d}$ is a metric on $S$ whose restriction to each triangle is
isometric to an Euclidean triangle.
\end{definition}
The well-known cotangent edge weight
\cite{Dodziuk78Sprectral,Pinkall93MinimalSurface} on an Euclidean
polyhedral surface is defined as follows:
\smallskip
\begin{definition}[{Cotangent Edge Weight}]
\label{def:edge_weight} Suppose $[v_i,v_j]$ is a boundary edge of
$M$, i.e., $[v_i,v_j] \in
\partial M$; then $[v_i,v_j]$ belongs to a single triangle $[v_i,v_j,v_k]$. Let $\alpha$ denote the angle
against $[v_i,v_j]$ at the vertex $v_k$; then the weight
of $[v_i,v_j]$ is given by $w_{ij} = \frac{1}{2}\cot \alpha$.
Otherwise, if $[v_i,v_j]$ is an interior edge, with the two angles
against it being $\alpha$ and $\beta$, the weight is $w_{ij} =
\frac{1}{2}(\cot \alpha + \cot \beta)$.
\end{definition}
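A minimal sketch of Definition \ref{def:edge_weight} in code (the two-triangle mesh below is made up for illustration):

```python
import numpy as np

def corner_cot(p, q, r):
    """Cotangent of the angle at vertex p in the triangle (p, q, r)."""
    u, v = q - p, r - p
    return np.dot(u, v) / np.linalg.norm(np.cross(u, v))

# Two right triangles sharing the interior edge [v0, v1].
V = np.array([[0.0,  0.0, 0.0],   # v0
              [1.0,  0.0, 0.0],   # v1
              [1.0,  1.0, 0.0],   # v2
              [0.0, -1.0, 0.0]])  # v3

# Interior edge [v0, v1]: half the sum of the two opposite cotangents.
alpha = corner_cot(V[2], V[0], V[1])   # angle at v2 in triangle (v0, v1, v2)
beta  = corner_cot(V[3], V[1], V[0])   # angle at v3 in triangle (v1, v0, v3)
w01 = 0.5 * (alpha + beta)             # both angles are 45 degrees, so w01 = 1

# Boundary edge [v1, v2]: a single opposite angle, at v0.
w12 = 0.5 * corner_cot(V[0], V[1], V[2])
```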
The discrete Laplace-Beltrami operator is constructed from the cotangent
edge weight.
\smallskip
\begin{definition}[{Discrete Laplace Matrix}]
\label{def:laplace} The discrete Laplace matrix $L=(L_{ij})$ for an
Euclidean polyhedral surface is given by
\[
L_{ij} =
\left\{
\begin{array}{ll}
-w_{ij}& i \neq j\\
\sum_k w_{ik} & i = j\\
\end{array}
\right..
\]
\end{definition}
Because $L$ is symmetric, it can be decomposed as
\begin{equation}
L = \Phi \Lambda \Phi^T
\label{eqn:laplace}
\end{equation}
where $\Lambda=diag(\lambda_0,\lambda_1,\cdots,\lambda_n)$,
$0=\lambda_0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$, are the
eigenvalues of $L$, and $\Phi=(\phi_0|\phi_1|\phi_2|\cdots|\phi_n)$, $L
\phi_i = \lambda_i \phi_i$, are the orthonormal eigenvectors, such that
$\phi_i^T\phi_j = \delta_{ij}$.
\smallskip
\begin{definition}[{Discrete Heat Kernel}]
The discrete heat kernel is defined as follows:
\begin{equation}
K(t)=\Phi \exp(-\Lambda t) \Phi^T.
\label{eqn:heat_kernel}
\end{equation}
\end{definition}
The \textbf{Main Theorem}, called \emph{Global Rigidity Theorem}, in this work is as follows:
\smallskip
\begin{theorem}
\label{thm:main} Suppose two Euclidean polyhedral surfaces
$(S,T,\mathbf{d_1})$ and $(S,T,\mathbf{d_2})$ are given,
\[
L_1=L_2,
\]
if and only if $\mathbf{d_1}$ and $\mathbf{d_2}$ differ by a
scaling.
\end{theorem}
\smallskip
\begin{corollary}
\label{cor:main} Suppose two Euclidean polyhedral surfaces
$(S,T,\mathbf{d_1})$ and $(S,T,\mathbf{d_2})$ are given,
\[
K_1(t)=K_2(t), \forall t > 0,
\]
if and only if $\mathbf{d_1}$ and $\mathbf{d_2}$ differ by a
scaling.
\end{corollary}
\smallskip
\begin{proof}
Note that,
\[
\frac{dK(t)}{dt}|_{t=0} = -L.
\]
Therefore, the discrete Laplace matrix and the discrete heat kernel
mutually determine each other.
\end{proof}
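The identity used in this proof can be checked numerically. A minimal sketch (the weight matrix below is arbitrary, not derived from a mesh):

```python
import numpy as np

# A small symmetric weight matrix (illustrative values) and its Laplace matrix.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])
L = np.diag(W.sum(axis=1)) - W

# Spectral decomposition L = Phi Lambda Phi^T.
lam, Phi = np.linalg.eigh(L)

def heat_kernel(t):
    """Discrete heat kernel K(t) = Phi exp(-Lambda t) Phi^T."""
    return Phi @ np.diag(np.exp(-lam * t)) @ Phi.T

# K(0) is the identity, and the derivative of K at t = 0 recovers -L.
h = 1e-6
dK0 = (heat_kernel(h) - heat_kernel(0.0)) / h
err = np.max(np.abs(dK0 + L))   # O(h) for the forward difference
```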
\subsection{Proof Overview for Main Theorem \ref{thm:main}}
The main idea for the proof is as follows. We fix the connectivity
of the polyhedral surface $(S,T)$. Suppose the edge set of $(S,T)$
is sorted as $E=\{e_1, e_2,\cdots, e_m\}$, where $m=|E|$ is the number of
edges, and the face set is denoted by $F$. A triangle $[v_i,v_j,v_k]\in
F$ is also denoted as $\{i,j,k\}\in F$.
By definition, an Euclidean polyhedral metric on $(S,T)$ is given by
its edge length function $d:E\to \mathbb{R}^+$. We denote a metric
as $\mathbf{d}=(d_1,d_2,\cdots,d_m)$, where $d_i=d(e_i)$ is the length of edge $e_i$. Let
\[
E_{\mathbf{d}}(2) = \{(d_1,d_2,d_3)| d_i + d_j > d_k\}
\]
be the space of all Euclidean triangles parameterized by the edge
lengths, where $\{i,j,k\}$ is a cyclic permutation of $\{1,2,3\}$.
In this work, for convenience, we use $u=(u_1,u_2,\cdots,u_m)$ to
represent the metric, where $u_k = \frac{1}{2} d_k^2$.
\smallskip
\begin{definition}[{Admissible Metric Space}]
\label{def:metric_space}Given a triangulated surface $(S,T)$, the
admissible metric space is defined as
\[
\Omega_u = \{(u_1,u_2,u_3\cdots,u_m)| \sum_{k=1}^m u_k = m, (\sqrt{u_i},\sqrt{u_j},\sqrt{u_k})\in E_{\mathbf{d}}(2), \forall \{i,j,k\}\in F\}.
\]
\end{definition}
We show that $\Omega_u$ is a convex domain in $\mathbb{R}^{m}$.
\smallskip
\begin{definition}[{Energy}]
\label{def:energy} An energy $E:\Omega_u \to \mathbb{R}$ is defined
as:
\begin{equation}
E(u_1,u_2 \cdots, u_m ) =
\int^{(u_1,u_2\cdots,u_m)}_{(1,1,\cdots,1)} \sum_{k=1}^m w_k(\mu)
d\mu_k, \label{eqn:energy}
\end{equation}
where $w_k(\mu)$ is the cotangent weight on the edge $e_k$ determined by
the metric $\mu$.
\end{definition}
Next we show this energy is convex in Lemma \ref{lem:convexity_energy}. According to the following
lemma, the gradient of the energy, $\nabla E(\mathbf{u}):\Omega_u\to
\mathbb{R}^m$,
\[
\nabla E: (u_1,u_2,\cdots,u_m)\to (w_1,w_2,\cdots, w_m),
\]
is an embedding. Namely, the metric is determined by the edge weights
uniquely up to a scaling.
\smallskip
\begin{lemma}Suppose $\Omega \subset \mathbb{R}^n$ is an open convex domain in $\mathbb{R}^n$,
$E: \Omega \to \mathbb{R}$ is a strictly convex function with
positive definite Hessian matrix, then $\nabla E:\Omega \to
\mathbb{R}^n$ is a smooth embedding. \label{lem:embedding}
\end{lemma}
\smallskip
\begin{proof}
If $\mathbf{p}\neq \mathbf{q}$ in $\Omega$, let $\gamma(t) =
(1-t)\mathbf{p} + t\mathbf{q} \in \Omega$ for all $t\in [0,1]$. Then
$f(t)=E(\gamma(t)):[0,1]\to \mathbb{R}$ is a strictly convex
function, so that
\[
\frac{d f(t)}{dt} = \nabla E|_{\gamma(t)} \cdot (\mathbf{q}-\mathbf{p}).
\]
Because
\[
\frac{d^2 f(t)}{dt^2} = (\mathbf{q}-\mathbf{p})^T
H|_{\gamma(t)}(\mathbf{q}-\mathbf{p}) > 0,
\]
the derivative $\frac{df}{dt}$ is strictly increasing, so $\frac{df}{dt}(0) \neq \frac{df}{dt}(1)$; therefore
\[
\nabla E(\mathbf{p}) \cdot (\mathbf{q}-\mathbf{p}) \neq
\nabla E(\mathbf{q}) \cdot (\mathbf{q}-\mathbf{p}).
\]
This means $\nabla E(\mathbf{p}) \neq \nabla E(\mathbf{q})$,
therefore $\nabla E$ is injective.
On the other hand, the Jacobi matrix of $\nabla E$ is the Hessian
matrix of $E$, which is positive definite. It follows that $\nabla
E:\Omega \to \mathbb{R}^n$ is a smooth embedding.
\end{proof}
From the discrete Laplace-Beltrami operator (Eqn. (\ref{eqn:laplace}))
or the heat kernel (Eqn. (\ref{eqn:heat_kernel})), we can compute all
the cotangent edge weights; since the edge weights determine
the metric, we obtain the Main Theorem \ref{thm:main}.
\section{Euclidean Triangle}
\label{sec:triangle}
In this section, we show the proof for the simplest case, a Euclidean triangle; in the next
section, we generalize the proof to all types of triangle meshes.
Consider a triangle $\{i,j,k\}$ with corner angles $\{\theta_i,\theta_j,\theta_k\}$
and edge lengths $\{d_i,d_j,d_k\}$, as shown in Fig. \ref{fig:triangle}. In this case, the problem is trivial. Given
$(w_i,w_j,w_k)=(\cot\theta_i,\cot\theta_j,\cot\theta_k)$, we can
compute $(\theta_i,\theta_j,\theta_k)$ by taking the $\arctan$
function. Then the normalized edge lengths are given by
\[
(d_i,d_j,d_k) =
\frac{3}{\sin\theta_i+\sin\theta_j+\sin\theta_k}(\sin\theta_i,\sin\theta_j,\sin\theta_k).
\]
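A minimal sketch of this direct recovery (the input cotangents below are illustrative):

```python
import math

def lengths_from_cotangents(wi, wj, wk):
    """Recover the edge lengths of the triangle with corner cotangents
    (wi, wj, wk), normalized so that d_i + d_j + d_k = 3."""
    # atan2(1, w) = arccot(w) in (0, pi), so obtuse corners (w < 0) are handled.
    th = [math.atan2(1.0, w) for w in (wi, wj, wk)]
    assert abs(sum(th) - math.pi) < 1e-9, "cotangents must come from a triangle"
    s = [math.sin(a) for a in th]
    scale = 3.0 / sum(s)
    return [scale * x for x in s]

# Equilateral: all cotangents equal cot(pi/3) = 1/sqrt(3); recovered lengths are (1,1,1).
d_eq = lengths_from_cotangents(*(1.0 / math.sqrt(3.0),) * 3)

# Right isosceles (45-45-90): cotangents (1, 1, 0).
d_right = lengths_from_cotangents(1.0, 1.0, 0.0)
```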
\begin{figure*}
\centering
\begin{tabular}{c}
\includegraphics[height=1.50in]{../figures/triangle.eps} \\
\end{tabular}
\caption{An Euclidean triangle.} \label{fig:triangle}
\end{figure*}
Although this approach is direct and simple, it cannot be
generalized to more complicated polyhedral surfaces. In the
following, we use a different approach, which can be generalized to
all polyhedral surfaces.
\smallskip
\begin{lemma}Suppose an Euclidean triangle has angles
$\{\theta_i,\theta_j,\theta_k\}$ and edge lengths $\{d_i,d_j,d_k\}$.
Treating the angles as functions of the edge lengths, $\theta_i(d_i,d_j,d_k)$, we have
\begin{equation}
\frac{\partial \theta_i}{\partial d_i} = \frac{d_i}{2A}
\end{equation}
and
\begin{equation}
\frac{\partial \theta_i}{\partial d_j} =
-\frac{d_i}{2A}\cos\theta_k,
\end{equation}
where $A$ is the area of the triangle.
\end{lemma}
\smallskip
\begin{proof}
According to Euclidean cosine law,
\begin{equation}
\cos \theta_i = \frac{d_j^2 + d_k^2 - d_i^2}{2d_j d_k},
\end{equation}
we differentiate both sides with respect to $d_i$,
\[
-\sin\theta_i \frac{\partial \theta_i}{\partial d_i} =\frac{-2d_i}{2d_jd_k}
\]
\begin{equation}
\begin{split}
\frac{\partial \theta_i}{\partial d_i} &=
\frac{d_i}{d_jd_k\sin\theta_i} = \frac{d_i}{2A}
\end{split}
\end{equation}
where $A = \frac{1}{2}d_jd_k \sin\theta_i$ is the area of the
triangle. Similarly,
\[
\frac{\partial}{\partial
d_j}(d_j^2+d_k^2-d_i^2)=\frac{\partial}{\partial d_j}
(2d_jd_k\cos\theta_i)
\]
\[
2 d_j = 2d_k \cos\theta_i -
2d_jd_k\sin\theta_i \frac{\partial \theta_i}{\partial d_j}
\]
\[
2A \frac{\partial \theta_i}{\partial d_j} = d_k\cos\theta_i - d_j
= -d_i \cos\theta_k
\]
We get
\[
\frac{\partial \theta_i}{\partial d_j} = -\frac{d_i\cos\theta_k}{2A}.
\]
\end{proof}
\smallskip
\begin{lemma}
\label{lem:symmetry} In an Euclidean triangle, let $u_i =
\frac{1}{2}d_i^2$ and $u_j=\frac{1}{2}d_j^2$ then
\begin{equation}
\frac{\partial \cot \theta_i }{\partial u_j} = \frac{\partial \cot \theta_j }{\partial u_i}
\end{equation}
\end{lemma}
\smallskip
\begin{proof}
\begin{equation}
\begin{split}
\frac{\partial \cot \theta_i }{\partial u_j}
&=\frac{1}{d_j}\frac{\partial \cot \theta_i }{\partial d_j}=-\frac{1}{d_j}\frac{1}{\sin^2\theta_i}\frac{\partial\theta_i}{\partial
d_j}
=\frac{1}{d_j}\frac{1}{\sin^2\theta_i}\frac{d_i\cos\theta_k}{2A} =\frac{d_i^2 }{\sin^2 \theta_i} \frac{\cos\theta_k}{2A d_id_j}\\
&=\frac{4R^2}{2A} \frac{\cos\theta_k}{d_id_j}
\end{split}
\label{eqn:symmetry}
\end{equation}
where $R$ is the circumradius of the triangle. The
right-hand side of Eqn. (\ref{eqn:symmetry}) is symmetric with respect to the indices $i$ and $j$.
\end{proof}
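Lemma \ref{lem:symmetry} can be confirmed by finite differences. A minimal sketch (the edge lengths $2$, $3$, $4$ are arbitrary):

```python
import math

def cot_opp(da, db, dc):
    """Cotangent of the angle opposite the edge of length da,
    in the triangle with edge lengths (da, db, dc)."""
    c = (db * db + dc * dc - da * da) / (2.0 * db * dc)
    return c / math.sqrt(1.0 - c * c)

di, dj, dk = 2.0, 3.0, 4.0
ui, uj = 0.5 * di * di, 0.5 * dj * dj
h = 1e-7

# d cot(theta_i) / d u_j : perturb u_j only (central difference).
djp, djm = math.sqrt(2.0 * (uj + h)), math.sqrt(2.0 * (uj - h))
lhs = (cot_opp(di, djp, dk) - cot_opp(di, djm, dk)) / (2.0 * h)

# d cot(theta_j) / d u_i : perturb u_i only.
dip, dim = math.sqrt(2.0 * (ui + h)), math.sqrt(2.0 * (ui - h))
rhs = (cot_opp(dj, dip, dk) - cot_opp(dj, dim, dk)) / (2.0 * h)
```

Both differences agree with each other and with the closed form $\frac{4R^2}{2A}\frac{\cos\theta_k}{d_id_j}$ from Eqn. (\ref{eqn:symmetry}).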
\smallskip
\begin{corollary}
\label{cor:closed_1_form} The differential form
\begin{equation}
\omega = \cot \theta_i du_i + \cot \theta_j du_j + \cot \theta_k
du_k
\label{eqn:1_form}
\end{equation}
is a closed 1-form.
\end{corollary}
\smallskip
\begin{proof}
By the above Lemma \ref{lem:symmetry} regarding symmetry,
\[
\begin{split}
d\omega &= (\frac{\partial\cot\theta_j}{\partial
u_i}-\frac{\partial\cot\theta_i}{\partial u_j}) du_i \wedge
du_j+(\frac{\partial\cot\theta_k}{\partial
u_j}-\frac{\partial\cot\theta_j}{\partial u_k}) du_j \wedge du_k
\\&+(\frac{\partial\cot\theta_i}{\partial u_k}-\frac{\partial
\cot\theta_k}{\partial u_i}) du_k
\wedge du_i\\
& = 0.
\end{split}
\]
\end{proof}
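Since $\omega$ is closed and the admissible space is simply connected, its line integrals are path-independent, which can also be checked numerically. A minimal sketch (the endpoints and the intermediate metric are arbitrary admissible points with $u_i+u_j+u_k=3$):

```python
import math

def cot_weights(u):
    """(cot th_i, cot th_j, cot th_k) for the triangle with u_k = d_k^2 / 2."""
    dd = [2.0 * x for x in u]                       # squared edge lengths
    w = []
    for a, b, c in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        cos = (dd[b] + dd[c] - dd[a]) / (2.0 * math.sqrt(dd[b] * dd[c]))
        w.append(cos / math.sqrt(1.0 - cos * cos))
    return w

def line_integral(path, n=4000):
    """Midpoint-rule integral of omega = sum_k cot(theta_k) du_k along a
    piecewise-linear path through the admissible metric space."""
    total = 0.0
    for p, q in zip(path, path[1:]):
        for s in range(n):
            t = (s + 0.5) / n
            mid = [a + t * (b - a) for a, b in zip(p, q)]
            w = cot_weights(mid)
            total += sum(wk * (b - a) / n for wk, a, b in zip(w, p, q))
    return total

u0 = [1.0, 1.0, 1.0]                                # the equilateral metric
u1 = [3.0 * x / 14.5 for x in (2.0, 4.5, 8.0)]      # the 2-3-4 triangle, normalized
via = [1.2, 0.8, 1.0]                               # an intermediate admissible point

I_direct = line_integral([u0, u1])
I_bent = line_integral([u0, via, u1])               # agrees: omega is exact
```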
\smallskip
\begin{definition}[{Admissible Metric Space}]
Let $u_i=\frac{1}{2}d_i^2$, the admissible metric space is defined
as
\[
\Omega_u := \{(u_i,u_j,u_k)|(\sqrt{u_i},\sqrt{u_j},\sqrt{u_k})\in
E_{\mathbf{d}}(2),~u_i+u_j+u_k = 3\}
\]
\end{definition}
\smallskip
\begin{lemma} The admissible metric space $\Omega_u$ is a convex domain in $\mathbb{R}^3$.
\label{lem:convexity_metric_space}
\end{lemma}
\smallskip
\begin{proof}
Suppose $(u_i,u_j,u_k)\in \Omega_u$ and
$(\tilde{u}_i,\tilde{u}_j,\tilde{u}_k)\in \Omega_u$, then from
$\sqrt{u_i} + \sqrt{u_j} > \sqrt{u_k}$, we get $u_i + u_j +
2\sqrt{u_iu_j} > u_k$. Define $(u_i^\lambda,u_j^\lambda,u_k^\lambda)
= \lambda (u_i,u_j,u_k) + (1-\lambda)
(\tilde{u}_i,\tilde{u}_j,\tilde{u}_k)$, where $0<\lambda <1$. Then
\[
\begin{split}
u_i^\lambda u_j^\lambda&=(\lambda u_i +
(1-\lambda)\tilde{u}_i)(\lambda u_j +
(1-\lambda)\tilde{u}_j)\\
&=\lambda^2 u_i u_j + (1-\lambda)^2 \tilde{u}_i\tilde{u}_j +
\lambda(1-\lambda) (u_i\tilde{u}_j+u_j\tilde{u}_i)\\
&\ge\lambda^2 u_i u_j + (1-\lambda)^2 \tilde{u}_i\tilde{u}_j +
2\lambda(1-\lambda)\sqrt{u_iu_j\tilde{u}_i\tilde{u}_j}\\
&=(\lambda\sqrt{u_iu_j}+(1-\lambda)\sqrt{\tilde{u}_i\tilde{u}_j})^2
\end{split}
\]
It follows
\[
\begin{split}
u_i^\lambda + u_j^\lambda + 2\sqrt{u_i^\lambda u_j^\lambda}
&\ge \lambda(u_i + u_j + 2\sqrt{u_iu_j}) + (1-\lambda)(\tilde{u}_i +
\tilde{u}_j + 2\sqrt{\tilde{u}_i\tilde{u}_j})\\&>\lambda u_k + (1-\lambda) \tilde{u}_k = u_k^\lambda
\end{split}
\]
This shows $(u_i^\lambda,u_j^\lambda,u_k^\lambda)\in \Omega_u$.
\end{proof}
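The triangle-inequality part of this convexity claim is easy to probe numerically. A minimal sketch (the sampling range and count are arbitrary; the normalization $\sum u_k = 3$ is dropped here, since the triangle-inequality constraint is the nontrivial part of the proof):

```python
import random

def admissible(u):
    """Do the lengths d = sqrt(2 u) satisfy all three triangle inequalities?"""
    d = sorted((2.0 * x) ** 0.5 for x in u)
    return d[0] + d[1] > d[2]

random.seed(0)
samples = []
while len(samples) < 200:
    u = [random.uniform(0.1, 3.0) for _ in range(3)]
    if admissible(u):
        samples.append(u)

# Midpoints (and hence all convex combinations) of admissible metrics stay admissible.
ok = all(admissible([(a + b) / 2.0 for a, b in zip(u1, u2)])
         for u1 in samples for u2 in samples)
```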
Similarly, we define the edge weight space as follows.
\smallskip
\begin{definition}[{Edge Weight Space}]
The edge weights of an Euclidean triangle form the edge weight space
\[
\Omega_\theta = \{(\cot\theta_i, \cot \theta_j, \cot \theta_k)| 0 <
\theta_i,\theta_j,\theta_k < \pi, \theta_i + \theta_j + \theta_k =
\pi \}.
\]
\end{definition}
Note that,
\[
\cot\theta_k = -\cot(\theta_i+\theta_j) = \frac{1-\cot\theta_i\cot\theta_j}{\cot\theta_i + \cot\theta_j}.
\]
\begin{figure*}
\centering
\begin{tabular}{c}
\includegraphics[height=1.50in]{../figures/hessian.eps} \\
\end{tabular}
\caption{The geometric interpretation of the Hessian matrix. The
incircle of the triangle is centered at $O$, with radius $r$. The perpendiculars $\mathbf{n}_i$,
$\mathbf{n}_j$ and $\mathbf{n}_k$ point from the incenter of the triangle and are orthogonal
to the edges $e_i$, $e_j$ and $e_k$ respectively.}
\label{fig:hessian}
\end{figure*}
\smallskip
\begin{lemma}
The energy $E: \Omega_u \to \mathbb{R}$
\begin{equation}
\label{eqn:energy_triangle} E(u_i,u_j,u_k) = \int_{(1,1,1)}^{(u_i,u_j,u_k)}
\cot\theta_i d\tau_i + \cot \theta_j d\tau_j + \cot \theta_k d\tau_k
\end{equation}
is well defined on the admissible metric space $\Omega_u$ and is
convex. \label{lem:convexity_energy}
\end{lemma}
\smallskip
\begin{proof}
According to Corollary \ref{cor:closed_1_form}, the differential form is
closed. Furthermore, the admissible metric space $\Omega_u$ is a
simply connected domain, so the differential form is exact. Therefore,
the integration is path-independent, and the energy function is well
defined.
Then we compute the Hessian matrix of the energy,
\[
H=-\frac{2R^2}{A} \left[
\begin{array}{ccc}
\frac{1}{d_i^2}&-\frac{\cos\theta_k}{d_id_j}&-\frac{\cos\theta_j}{d_id_k}\\
-\frac{\cos\theta_k}{d_jd_i}&\frac{1}{d_j^2}&-\frac{\cos\theta_i}{d_jd_k}\\
-\frac{\cos\theta_j}{d_kd_i}&-\frac{\cos\theta_i}{d_kd_j}&\frac{1}{d_k^2}\\
\end{array}
\right] = -\frac{2R^2}{A} \left[
\begin{array}{ccc}
(\mathbf{\eta}_i,\mathbf{\eta}_i)&(\mathbf{\eta}_i,\mathbf{\eta}_j)&(\mathbf{\eta}_i,\mathbf{\eta}_k)\\
(\mathbf{\eta}_j,\mathbf{\eta}_i)&(\mathbf{\eta}_j,\mathbf{\eta}_j)&(\mathbf{\eta}_j,\mathbf{\eta}_k)\\
(\mathbf{\eta}_k,\mathbf{\eta}_i)&(\mathbf{\eta}_k,\mathbf{\eta}_j)&(\mathbf{\eta}_k,\mathbf{\eta}_k)\\
\end{array}
\right].
\]
As shown in Fig. \ref{fig:hessian}, $d_i \mathbf{n}_i + d_j
\mathbf{n}_j + d_k \mathbf{n}_k = 0$. We set
\[
\mathbf{\eta}_i = \frac{\mathbf{n}_i}{rd_i}, \mathbf{\eta}_j =
\frac{\mathbf{n}_j}{rd_j}, \mathbf{\eta}_k =
\frac{\mathbf{n}_k}{rd_k},
\]
where $r$ is the radius of the incircle of the triangle. For any vector
$(x_i,x_j,x_k)\in\mathbb{R}^3$, we have
\[
[x_i,x_j,x_k] \left[
\begin{array}{ccc}
(\mathbf{\eta}_i,\mathbf{\eta}_i)&(\mathbf{\eta}_i,\mathbf{\eta}_j)&(\mathbf{\eta}_i,\mathbf{\eta}_k)\\
(\mathbf{\eta}_j,\mathbf{\eta}_i)&(\mathbf{\eta}_j,\mathbf{\eta}_j)&(\mathbf{\eta}_j,\mathbf{\eta}_k)\\
(\mathbf{\eta}_k,\mathbf{\eta}_i)&(\mathbf{\eta}_k,\mathbf{\eta}_j)&(\mathbf{\eta}_k,\mathbf{\eta}_k)\\
\end{array}
\right] \left[
\begin{array}{c}
x_i\\
x_j\\
x_k
\end{array}
\right] = \|x_i\mathbf{\eta}_i + x_j\mathbf{\eta}_j +
x_k\mathbf{\eta}_k\|^2 \ge 0.
\]
If the result is zero, then $(x_i,x_j,x_k) =
\lambda(u_i,u_j,u_k)$ for some $\lambda \in \mathbb{R}$; that is, the null space
of the Hessian matrix is spanned by $(u_i,u_j,u_k)$. In the admissible metric space $\Omega_u$ we have
$u_i+u_j+u_k=3$, and hence $du_i+du_j+du_k=0$. If $(du_i,du_j,du_k)$ belongs to the null space, then $(du_i,du_j,du_k)=\lambda(u_i,u_j,u_k)$, and therefore
$\lambda(u_i + u_j + u_k)=0$. Because $u_i,u_j,u_k$ are positive,
$\lambda=0$; hence the null space intersects the tangent space of $\Omega_u$ trivially. In summary, the energy on $\Omega_u$ is convex.
\end{proof}
\smallskip
\begin{theorem}
The mapping $\nabla E: \Omega_u \to \Omega_\theta, (u_i,u_j,u_k) \to
(\cot\theta_i,\cot\theta_j,\cot\theta_k)$ is a diffeomorphism.
\end{theorem}
\smallskip
\begin{proof}
The energy $E(u_i,u_j,u_k)$ is a convex function defined on the
convex domain $\Omega_u$, according to Lemma \ref{lem:embedding},
$\nabla E: (u_i,u_j,u_k) \to
(\cot\theta_i,\cot\theta_j,\cot\theta_k)$ is a diffeomorphism.
\end{proof}
\section{Euclidean Polyhedral Surface}
\label{sec:polyhedral}
\vspace{2mm}
In this section, we consider the whole polyhedral surface.
\vspace{-2mm}
\subsection{Closed Surfaces}
\label{sec:closed}
Given a polyhedral surface $(S,T,{\mathbf{d}})$, the admissible metric space
and the edge weights are defined as in Section \ref{sec:discrete}.
\smallskip
\begin{lemma}The admissible metric space $\Omega_u$ is convex.
\label{lem:convexity_mesh_metric_space}
\end{lemma}
\smallskip
\begin{proof}
For a triangle $\{i,j,k\}\in F$, define
\[
\Omega_u^{ijk} :=
\{(u_i,u_j,u_k)|(\sqrt{u_i},\sqrt{u_j},\sqrt{u_k})\in E_{\mathbf{d}}(2)\}.
\]
Similar to the proof of Lemma \ref{lem:convexity_metric_space},
$\Omega_u^{ijk}$ is convex. The admissible metric space for the mesh
is
\[
\Omega_u = \bigcap_{\{i,j,k\}\in F} \Omega_u^{ijk}\bigcap
\{(u_1,u_2,\cdots,u_n)\,|\,\sum_{k=1}^n
u_k = n\},
\]
and $\Omega_u$, being an intersection of convex sets, is convex.
\end{proof}
\smallskip
\begin{definition} [{Differential Form}]
The differential form $\omega$ defined on $\Omega_u$ is the
summation of the differential form on each face,
\[
\omega = \sum_{\{i,j,k\}\in F} \omega_{ijk} = \sum_{i=1}^n 2w_i du_i,
\]
where $\omega_{ijk}$ is given in Eqn. (\ref{eqn:1_form}) in
Corollary \ref{cor:closed_1_form}. $w_i$ is the edge weight on
$e_i$.
\end{definition}
\smallskip
\begin{lemma}The differential form $\omega$ is a closed 1-form.
\end{lemma}
\smallskip
\begin{proof}
According to Corollary \ref{cor:closed_1_form},
\[
d\omega = \sum_{\{i,j,k\}\in F} d\omega_{ijk} = 0.
\]
\end{proof}
\smallskip
\begin{lemma}
The energy function
\[
E(u_1,u_2,\cdots, u_n) = \sum_{\{i,j,k\}\in F}
E_{ijk}(u_1,u_2,\cdots, u_n)= \int^{(u_1,u_2,\cdots,
u_n)}_{(1,1,\cdots,1)} \sum_{i=1}^n w_i du_i
\]
is well defined and convex on $\Omega_u$, where $E_{ijk}$ is the
energy on the face, defined in Eqn. (\ref{eqn:energy}).
\label{lem:convexity_mesh_energy}
\end{lemma}
\smallskip
\begin{proof}
For each face $\{i,j,k\}\in F$, the Hessian matrix of $E_{ijk}$
is positive semi-definite; therefore, the Hessian matrix of the total
energy $E$ is positive semi-definite.
Similar to the proof of Lemma \ref{lem:convexity_energy}, the null
space of the Hessian matrix $H$ is
\[
\ker H = \{\lambda(d_1,d_2,\cdots, d_n) : \lambda \in \mathbb{R}\}.
\]
The tangent space of $\Omega_u$ at $u=(u_1,u_2,\cdots, u_n)$ is denoted by
$T\Omega_u(u)$. Assume $(du_1,du_2,\cdots, du_n)\in T\Omega_u(u)$,
then from $\sum_{i=1}^n u_i = n$, we get $\sum_{i=1}^n du_i = 0$.
Therefore,
\[
T\Omega_u(u) \cap \ker H = \{0\},
\]
hence $H$ is positive definite restricted on $T\Omega_u(u)$. So the total energy $E$ is convex on $\Omega_u$.
\end{proof}
\smallskip
\begin{theorem}
\label{thm:closed_surface} The mapping on a closed Euclidean
polyhedral surface $\nabla E: \Omega_u \to \mathbb{R}^n$,
$(u_1,u_2,\cdots, u_n) \to (w_1,w_2,\cdots, w_n)$, is a smooth
embedding.
\end{theorem}
\begin{proof}
The admissible metric space $\Omega_u$ is convex by Lemma
\ref{lem:convexity_mesh_metric_space}, and the total energy is convex by Lemma \ref{lem:convexity_mesh_energy}. According to Lemma
\ref{lem:embedding}, $\nabla E$ is a smooth embedding.
\end{proof}
\subsection{Open Surfaces}
\label{sec:open}
By the double covering technique \cite{Gu03SGP}, we can convert a polyhedral
surface with boundary into a closed surface. First, let
$(\bar{S},\bar{T})$ be a copy of $(S,T)$; then we reverse the
orientation of each face in $\bar{S}$ and glue the two surfaces $S$ and
$\bar{S}$ along their corresponding boundary edges. The resulting
triangulated surface is closed, and we obtain the following corollary.
\smallskip
\begin{corollary}
\label{cor:open_surface} The mapping on a Euclidean polyhedral
surface with boundary, $\nabla E: \Omega_u \to \mathbb{R}^n$,
$(u_1,u_2,\cdots, u_n) \to (w_1,w_2,\cdots, w_n)$, is a smooth
embedding.
\end{corollary}
\smallskip
The cotangent edge weights can, of course, be uniquely obtained from the
discrete heat kernel. By combining Theorem \ref{thm:closed_surface}
and Corollary \ref{cor:open_surface}, we obtain the major result of this work, Theorem \ref{thm:main}, the \emph{Global Rigidity Theorem}.
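As an aside, the double covering construction can be sketched combinatorially (an illustrative sketch, not part of the paper; the indexing scheme, in which interior vertices of the copy receive fresh ids while boundary vertices are shared by both sheets, is our own choice):

```python
def double_cover(faces, boundary):
    """Glue a reversed-orientation copy of a triangulated surface to
    itself along its boundary vertices (the double covering).
    `faces`: list of oriented vertex triples; `boundary`: set of
    boundary vertex ids."""
    n = 1 + max(v for f in faces for v in f)
    # interior vertices of the copy get fresh ids; boundary ones are shared
    dup = {v: (v if v in boundary else v + n) for f in faces for v in f}
    # reversing the vertex order reverses the orientation of each face
    mirrored = [(dup[k], dup[j], dup[i]) for (i, j, k) in faces]
    return faces + mirrored

# a triangulated disk: a fan of four triangles around interior vertex 4
disk = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
doubled = double_cover(disk, boundary={0, 1, 2, 3})
```

Doubling this disk yields a closed surface of sphere topology, which can be checked via the Euler characteristic.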
\section{Numerical Experiments}
\label{sec:experiment}
Based on the theoretical results above, we can design an algorithm that
computes a discrete metric realizing user-prescribed edge weights.
\smallskip
\begin{problem}
Let $(S,T)$ be a triangulated surface and let
$\mathbf{\bar{w}}=(\bar{w}_1,\bar{w}_2,\cdots, \bar{w}_n)$ be the
user-prescribed edge weights. The problem is to find a discrete
metric $\mathbf{u}=(u_1,u_2,\cdots, u_n)$ that induces the desired
edge weights $\mathbf{\bar{w}}$.
\end{problem}
The algorithm is based on the following theorem.
\smallskip
\begin{theorem} Suppose $(S,T)$ is a triangulated surface. If there
exists a metric $\mathbf{\bar{u}}\in \Omega_u$ which induces
$\mathbf{\bar{w}}$, then $\mathbf{\bar{u}}$ is the unique global minimum
of the energy
\begin{equation}
E(\mathbf{u}) = \int_{(1,1,\cdots,1)}^{(u_1,u_2,\cdots,u_n)}
\sum_{i=1}^n (\bar{w}_i - w_i) d\mu_i. \label{eqn:algorithm_energy}
\end{equation}
\end{theorem}
\smallskip
\begin{proof}
The gradient of the energy is $\nabla E(\mathbf{u}) = \bar{\mathbf{w}}
- \mathbf{w}$, so $\nabla E(\mathbf{\bar{u}})=0$ and
$\mathbf{\bar{u}}$ is a critical point. Since the Hessian matrix of
$E(\mathbf{u})$ is positive definite and the domain $\Omega_u$ is
convex, $\mathbf{\bar{u}}$ is the unique global minimum of
the energy.
\end{proof}
In our numerical experiments, as shown in Fig. \ref{fig:meshes}, we tested surfaces of different
topologies: different genera, with and without boundaries. All
discrete polyhedral surfaces are triangle meshes scanned from real
objects. Because the meshes are embedded in $\mathbb{R}^3$, they
carry an induced Euclidean metric, which is used as the desired metric
$\mathbf{\bar{u}}$. From the induced Euclidean metric, the desired
edge weights $\mathbf{\bar{w}}$ can be computed directly. We then set
the initial discrete metric to the constant metric
$(1,1,\cdots,1)$. By optimizing the energy in Eqn. (\ref{eqn:algorithm_energy}), we reach the global minimum and
recover the desired metric, which differs from the induced
Euclidean metric by a scaling.
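The optimization can be illustrated on a single triangle (an illustrative sketch of gradient descent on the energy of Eqn. (\ref{eqn:algorithm_energy}); the relation $d_i=\sqrt{u_i}$ between edge lengths and the discrete metric, the step size, and the iteration count are our own choices for this sketch, not prescribed by the paper):

```python
import numpy as np

def cot_weights(u):
    """Edge weights w_i = cot(theta_i), where theta_i is the corner
    angle opposite edge e_i and the edge lengths are d_i = sqrt(u_i)."""
    d = np.sqrt(u)
    # law of cosines for the angle opposite each edge
    cos = np.array([
        (d[1]**2 + d[2]**2 - d[0]**2) / (2 * d[1] * d[2]),
        (d[2]**2 + d[0]**2 - d[1]**2) / (2 * d[2] * d[0]),
        (d[0]**2 + d[1]**2 - d[2]**2) / (2 * d[0] * d[1]),
    ])
    return cos / np.sqrt(1.0 - cos**2)

def recover_metric(w_bar, steps=2000, lr=0.1):
    """Gradient descent on the energy: grad E(u) = w_bar - w(u),
    projected onto the constraint plane u_i + u_j + u_k = 3."""
    u = np.ones(3)                     # constant initial metric
    for _ in range(steps):
        g = w_bar - cot_weights(u)     # gradient of the energy
        g -= g.mean()                  # keep sum(u) fixed
        u -= lr * g
    return u

u_target = np.array([0.9, 1.0, 1.1])   # normalized target metric (sums to 3)
u_rec = recover_metric(cot_weights(u_target))
```

Since the energy is convex on the constraint plane, the iteration recovers the (normalized) target metric from its cotangent edge weights.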
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[height=1.6in]{../figures/bunny.origin.eps} &\includegraphics[height=1.6in]{../figures/kitten.eps} &
\includegraphics[height=1.6in]{../figures/eg.eps}\\
\includegraphics[height=1.6in]{../figures/bunny.edge.eps} &
\includegraphics[height=1.6in]{../figures/kitten_edge.eps}&
\includegraphics[height=1.6in]{../figures/eg_edge.eps}\\
Genus $0$ &Genus $1$&Genus $2$\\
\end{tabular}
\caption{Euclidean polyhedral surfaces used in the experiments.}
\label{fig:meshes}
\end{figure*}
\section{Future Work}
\label{sec:future}
We conjecture that the Main Theorem
\ref{thm:main} holds for Euclidean polyhedral
manifolds of arbitrary dimension; that is, the discrete Laplace-Beltrami operator (or,
equivalently, the discrete heat kernel) and the discrete metric
of a Euclidean polyhedral manifold of any dimension mutually
determine each other. We will also explore the
possibility of establishing the same theorem for other types of
discrete Laplace-Beltrami operators.
\bibliographystyle{abbrv}
\section{Introduction} \label{Intro}
The Bargmann-Fock space $\mathcal{F}^p:=\mathcal{F}^p(\C^n)$ is the collection of entire functions $f$ on $\C^n$ such that $f(\cdot) e^{- \frac{\abs{\cdot}^2}{2}} \in L^p(\C^n, dv)$. It is well known that $\FF^2$ is a reproducing kernel Hilbert
space with reproducing kernel given by $K_z(w)=e^{\overline{z}w}$. As usual, we denote by $k_z$ the normalized
reproducing kernel at $z$. For a bounded operator $T$ on $\FF^p$, the Berezin transform of $T$ is the function defined by
$$\tilde{T}(z)=\ip{Tk_z}{k_z}_{\mathcal{F}^2}.$$ It was proved recently by Bauer and the first author that the vanishing
of the Berezin transform is sufficient for compactness whenever the operator is in the Toeplitz algebra \cite{BI}. However, it
is generally very difficult to check whether a given operator $T$ is in the Toeplitz algebra, unless $T$ is itself a
Toeplitz operator or a combination of a few Toeplitz operators, and as such one would like a ``simpler'' sufficient
condition to guarantee this.
In the recent and interesting paper \cite{XZ}, Xia and Zheng introduced a class of ``sufficiently localized'' operators on
$\FF^2$ which includes the algebraic closure of the Toeplitz operators. These are the operators $T$ acting on $\FF^2$ such that there exist constants $2n<\beta<\infty$ and $0<C<\infty$ with \begin{equation} \label{SL-Fock}
\abs{\ip{Tk_z}{k_w}_{\mathcal{F}^2}}\leq\frac{C}{\left(1+\abs{z-w}\right)^{\beta}}. \end{equation} It was proved by Xia
and Zheng that every bounded operator $T$ from the $C^*$ algebra generated by sufficiently localized operators whose
Berezin transform vanishes at infinity, i.e., \begin{equation}\label{Ber} \lim_{\abs{z}\to
\infty}\ip{Tk_z}{k_z}_{\mathcal{F}^2}=0 \end{equation} is compact on $\mathcal{F}^2$. One of their main innovations is
providing an easily checkable condition~\eqref{SL-Fock} which is general enough to imply compactness from the seemingly
much weaker condition~\eqref{Ber}.
The aim of this paper is threefold. First, we wish to extend the Xia-Zheng notion of sufficiently localized operators to both a much wider class of weighted Fock spaces (in particular, the class of so-called ``generalized Bargmann-Fock spaces" considered in \cite{SV}) and to a larger class of operators. Note that \eqref{SL-Fock} easily implies $$
\sup_{z\in\C^n}\int_{\C^n}\abs{\ip{Tk_z}{k_w}_{\mathcal{F}^2}} \,dv(w)<\infty; $$ and consequently one should look at
generalizations of sufficiently localized operators that allow for weaker integral conditions. Also, note that the ideas in \cite{XZ} are essentially frame theoretic (see \cite{I} for a discussion of the ideas in \cite{XZ} from this point of view) and therefore one cannot easily extend these ideas to the non-Hilbert space setting. To remedy this, we will provide a simpler,
more direct proof of the main result in \cite{XZ} which follows a more traditional route and which can be extended to other
(not necessarily Hilbert) spaces of analytic functions. In particular, we show that our main result, in an appropriately modified form, holds for the classical Bergman space $A^p$ on the ball (and in Section \ref{ConcRemSec} we will discuss the possibility of extending our results to a very wide class of weighted Bergman spaces).
The extension of the main results in \cite{XZ} to a larger class of operators and to a wider class of weighted Fock spaces is as follows. Let $d^c = \frac{i}{4} (\overline{\partial} - \partial)$ and let $d$ be the usual exterior derivative. For the rest of the paper let $\phi \in C^2(\C^n)$ be a real valued function on $\C^n$ such that \begin{equation*} c \omega_0 < d d^c \phi < C \omega_0 \end{equation*} holds uniformly pointwise on $\C^n$ for some positive constants $c$ and $C$ (in the sense of positive $(1, 1)$ forms) where $\omega_0 = d d^c |\cdot |^2$ is the standard Euclidean K\"{a}hler form. Furthermore, for $0 < p \leq \infty$, define the generalized Bargmann-Fock space $\ensuremath{{\mathcal{F}}_\phi ^p }$ to be the space of entire functions $f$ on $\C^n$ such that $fe^{-\phi} \in L^p(\C^n, dv)$ (for a detailed study of the linear space properties of $\ensuremath{{\mathcal{F}}_\phi ^p }$ see \cite{SV}). For operators $T$ acting on the reproducing kernels $K(z, w)$ of $\ensuremath{{\mathcal{F}}_\phi ^2 }$, we
impose the following conditions. We first assume that \begin{equation}\label{assump1-Fock}
\sup_{z\in\mathbb{C}^n}\int_{\mathbb{C}^n}\abs{\ip{Tk_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}\,dv(w)<\infty, \hspace{.5cm}
\sup_{z\in\C^n}\int_{\C^n}\abs{\ip{T^*k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}\,dv(w)<\infty, \end{equation} which is enough to
conclude that the operator $T$ initially defined on the linear span of the reproducing kernels extends to a bounded
operator on $\ensuremath{{\mathcal{F}}_\phi ^p }$ for $1 \leq p \leq \infty$ (see Section \ref{Fock}). To show that the operator is compact, we impose the following additional assumptions on
$T$: \begin{equation}\label{assump-Fock}
\lim_{r\to\infty}\sup_{z\in\C^n}\int_{D(z,r)^c}\abs{\ip{Tk_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}\,dv(w)=0, \hspace{.5cm}
\lim_{r\to\infty}\sup_{z\in\C^n}\int_{D(z,r)^c}\abs{\ip{T^*k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}\,dv(w)=0.
\end{equation} \begin{definition} \label{sufficient_local} We will say that a linear operator $T$ on $\ensuremath{{\mathcal{F}}_\phi ^p }$ is weakly
localized (and for convenience write $T \in \mathcal{A}_\phi(\C^n)$) if it satisfies the conditions~\eqref{assump1-Fock} and~\eqref{assump-Fock}. \end{definition} Note that every
sufficiently localized operator on $\mathcal{F}^2$ in the sense of Xia and Zheng obviously satisfies~\eqref{assump1-Fock}
and~\eqref{assump-Fock} and is therefore weakly localized in our sense too. Now if $D(z, r)$ is the Euclidean ball with center $z$ and radius $r$, and if $\|T\|_{\text{e}}$ denotes the essential norm of a bounded operator $T$ on $\ensuremath{{\mathcal{F}}_\phi ^p }$ then the following theorem is one of the main results of this paper:
\begin{thm} \label{local-Fock} Let $ 1 < p < \infty$ and let $T$ be an operator on $\ensuremath{{\mathcal{F}}_\phi ^p }$ which belongs to the norm closure of $\mathcal{A}_\phi(\C^n)$. Then there exist $r, C > 0$ (both depending on $T$) such that \begin{equation*} \|T\|_{\text{e}} \leq C \limsup_{|z| \rightarrow \infty} \sup_{w \in D(z, r)} \abs{\ip{Tk_z}{k_w}}. \end{equation*}
In particular, if \begin{equation*} \lim_{|z| \rightarrow \infty} \|Tk_z\|_{\ensuremath{{\mathcal{F}}_\phi ^p }} = 0 \end{equation*} then $T$ is compact on $\ensuremath{{\mathcal{F}}_\phi ^p }$. \end{thm} \noindent
Now if $\AaordFock$ is the class of sufficiently localized operators on $\mathcal{F}^2$ then note that an application of Proposition $1.4$ in \cite{I} in conjunction with Theorem \ref{local-Fock} immediately proves the following theorem, which provides the previously mentioned generalization of the results in \cite{XZ} (see Section \ref{Fock} for more details).
\begin{thm} \label{local-ordinaryFock} Let $1 < p < \infty$ and let $T$ be an operator on $\mathcal{F}^p$ which belongs to the norm closure of $\AaordFock$. If $\lim_{|z| \rightarrow \infty} \abs{\ip{Tk_z}{k_z}_{\mathcal{F}^2}} = 0$ then $T$ is compact. \end{thm}
Let us note that one can easily write the so called ``Fock-Sobolev spaces" from \cite{CZ} as generalized Bargmann-Fock spaces, so that in particular Theorem \ref{local-Fock} immediately applies to these spaces (see \cite{I} for more details).
To state the main result in the Bergman space setting requires some notation. Let $\mathbb{B}_n$ denote the unit ball in $\C^n$
and let the space $A^p:=A^p(\mathbb{B}_n)$ denote the classical Bergman space, i.e., the collection of all holomorphic functions
on $\mathbb{B}_n$ such that $$ \norm{f}_{A^p}^p:=\int_{\mathbb{B}_n}\abs{f(z)}^p\,dv(z)<\infty. $$ The function
$K_z(w):=(1-\overline{z}w)^{-(n+1)}$ is the reproducing kernel for $A^2$ and $$
k_z(w):=\frac{(1-\abs{z}^2)^{\frac{n+1}{2}}}{(1-\overline{z}w)^{(n+1)}} $$ is the normalized reproducing kernel at the
point $z$. We also will let $d\lambda$ denote the invariant measure on $\mathbb{B}_n$, i.e., $$
d\lambda(z)=\frac{dv(z)}{(1-\abs{z}^2)^{n+1}}. $$
Now let $1 < p < \infty$ and let $\frac1p + \frac{1}{p'} = 1$. We are interested in operators $T$ acting on the reproducing kernels of $A^2$ that satisfy the following conditions. First, we assume
that there exists $0 < \delta < \min\{p, p'\}$ such that \begin{equation}\label{assump1}
\sup_{z\in\mathbb{B}_n}\int_{\mathbb{B}_n}\abs{\ip{Tk_z}{k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}\,d\la(w)<\infty,
\hspace{.5cm}
\sup_{z\in\mathbb{B}_n}\int_{\mathbb{B}_n}\abs{\ip{T^*k_z}{k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p(n + 1)}} _{A^2}}\,d\la(w)<\infty.
\end{equation} \noindent These are enough to conclude that the operator $T$ initially defined on the linear span of the
reproducing kernels extends to a bounded operator on $A^p$ (see the comments following the proof of Proposition \ref{MainEst1}). To treat compactness we make the following additional
assumptions on $T$: there exists $0 < \delta < \min\{p, p'\}$ such that \begin{equation}\label{assump}
\sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c}\abs{\ip{Tk_z}{k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}\,d\la(w) \rightarrow 0,
\hspace{.5cm}
\sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c}\abs{\ip{T^*k_z}{k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p(n + 1)}} _{A^2}}\,d\la(w) \rightarrow 0
\end{equation} as $r \rightarrow \infty$.
\begin{definition} \label{sufficient_local_Bergman} We say that a linear operator $T$ on $A^p$ is $p$-weakly localized (which we denote by $T \in \mathcal{A}_p(\mathbb{B}_n)$) if it
satisfies conditions~\eqref{assump1} and~\eqref{assump}. \end{definition} Note that the condition $0 < \delta < \min\{p, p'\}$ implies that both $1 - \frac{2\delta}{p(n + 1)}$ and $1 - \frac{2\delta}{p'(n + 1)}$ are strictly between $\frac{n - 1}{n + 1}$ and $1$. Furthermore, note that when $p = p' = 2$, we have that $ \frac{n - 1}{n + 1} < 1 - \frac{\delta}{(n + 1)} < 1$ precisely when $0 < \delta < 2$. Thus, in this case we can rewrite condition ~\eqref{assump1} in the following simpler way: there exists $\frac{n-1}{n + 1} < a < 1$ where \begin{equation*}
\sup_{z\in\mathbb{B}_n}\int_{\mathbb{B}_n}\abs{\ip{Tk_z}{k_w}_{A^2}}\frac{\norm{K_z}^{a} _{A^2}}{\norm{K_w}^{a} _{A^2}}\,d\la(w)<\infty,
\hspace{.5cm}
\sup_{z\in\mathbb{B}_n}\int_{\mathbb{B}_n}\abs{\ip{T^*k_z}{k_w}_{A^2}}\frac{\norm{K_z}^{a} _{A^2}}{\norm{K_w}^{a} _{A^2}}\,d\la(w)<\infty. \end{equation*}
\noindent Of course, one can similarly rewrite condition ~\eqref{assump} when $p = 2$.
We prove the following result.
\begin{thm} \label{local-Bergman1} Let $1 < p < \infty$ and let $T$ be an operator on $A^p$ which belongs to the norm closure of $\mathcal{A}_p(\mathbb{B}_n)$. If \begin{equation*} \lim_{\abs{z}\to 1}\ip{Tk_z}{k_z}_{A^2}=0 \end{equation*} then $T$ is compact. \end{thm}
It will be clear that the method of proof also works for the weighted Bergman
spaces $A^p _\alpha$, and we leave the verification to the interested reader.
Note that this result is known through the deep work of Su\'arez \cite{Sua} in the case of $A^p$ when the operator $T$ belongs to the Toeplitz algebra generated by $L^\infty$ symbols (see also \cite{MSW} for the case of
weighted Bergman spaces). We will prove below that the Toeplitz algebra on $A^p$ generated by $L^\infty$ symbols is a subalgebra of the norm closure of $\mathcal{A}_p(\mathbb{B}_n)$. In particular, the results of this paper provide a considerably simpler proof of the main results in \cite{MSW, Sua} for the $p \neq 2$ situation (though it should be noted that a similar simplification for $p = 2$ was provided in \cite{MW}).
The structure of this paper is as follows. In Section \ref{Bergman} we provide the extension of the Xia-Zheng
result to the Bergman space on the unit ball $\mathbb{B}_n$, and in particular we prove Theorem~\ref{local-Bergman1}. In Section \ref{Fock} we prove Theorems \ref{local-Fock} and \ref{local-ordinaryFock}, which provide an extension of the Xia-Zheng
result in the case of the generalized Bargmann-Fock spaces. Finally, in Section \ref{ConcRemSec} we briefly discuss some interesting open problems related to these results.
\section{Bergman Space Case} \label{Bergman}
Let $\varphi_z$ be the M\"obius map of $\mathbb{B}_n$ that interchanges $0$ and $z$. It is well known that $$ 1-\abs{\varphi_z(w)}^2=\frac{(1-\abs{z}^2)(1-\abs{w}^2)}{\abs{1-\overline{z}w}^{2}}, $$ and as a consequence we have that
\begin{equation} \label{Magic} \abs{\ip{k_z}{k_w}_{A^2}}=\frac{1}{\norm{K_{\varphi_z(w)}}_{A^2}}. \end{equation}
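Identity \eqref{Magic} can be checked numerically in the one-dimensional case $n = 1$ (an illustrative sketch, not part of the paper):

```python
import numpy as np

def K(z, w, n=1):
    """Bergman reproducing kernel of A^2(B_n), evaluated at scalar arguments."""
    return (1 - np.conj(z) * w) ** (-(n + 1))

def k_inner(z, w, n=1):
    """<k_z, k_w> for normalized kernels, via ||K_z||^2 = K(z, z)."""
    return K(z, w, n) / np.sqrt(K(z, z, n).real * K(w, w, n).real)

def phi(z, w):
    """Mobius involution of the unit disk interchanging 0 and z."""
    return (z - w) / (1 - np.conj(z) * w)

z, w = 0.3 + 0.4j, -0.5 + 0.1j
lhs = abs(k_inner(z, w))                          # |<k_z, k_w>|
rhs = 1 / np.sqrt(K(phi(z, w), phi(z, w)).real)   # 1 / ||K_{phi_z(w)}||
```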
Using the automorphism $\varphi_z$, the pseudohyperbolic and Bergman metrics on $\mathbb{B}_n$ are defined by $$
\rho(z,w):=\abs{\varphi_z(w)}\quad\textnormal{ and }\quad \beta(z,w):=\frac{1}{2}\log\frac{1+\rho(z,w)}{1-\rho(z,w)}. $$
Recall that these metrics are connected by $\rho=\frac{e^{2\beta}-1}{e^{2\beta}+1}=\tanh\beta$ and it is well-known that
these metrics are invariant under the automorphism group of $\mathbb{B}_n$. We let $$ D(z,r):=\{w\in\mathbb{B}_n:\beta(z,w)\leq
r\}=\{w\in\mathbb{B}_n: \rho(z,w)\leq s=\tanh r\} $$ denote the hyperbolic disc centered at $z$ of radius $r$.
Recall also that the orthogonal (Bergman) projection of $L^2(\mathbb{B}_n, dv)$ onto $A^2$ is given by the integral operator $$
P(f)(z):=\int_{\mathbb{B}_n}\ip{K_w}{K_z}_{A^2}f(w)\,dv(w). $$ Therefore, for all $f\in A^2$ we have
\begin{equation} \label{BergResOfId} f(z)=\int_{\mathbb{B}_n}\ip{f}{k_w}_{A^2}k_w(z)\,d\lambda(w).\end{equation}
As usual an important ingredient in our treatment will be the Rudin-Forelli estimates, see \cite{Zhu} or \cite{MW}.
Recall the standard Rudin-Forelli estimates: \begin{equation} \label{propA6} \int_{\mathbb{B}_n}
{\frac{\abs{\ip{K_z}{K_w}_{A^2}}^{\frac{r+s}{2}}}{\norm{K_z}^s_{A^2}\norm{K_w}^r_{A^2}}\,d\la(w)}\leq C = C(r,s) <
\infty, \; \; \forall z \in \mathbb{B}_n \end{equation} for all $r>\kappa>s>0$, where $\kappa=\kappa_n:=\frac{2n}{n+1}$.
We will use these in the following form: For all $\frac{n-1}{n+1}<a<1$ we have that
\begin{equation}\label{rf.n} \int_{\mathbb{B}_n}
\abs{\ip{k_z}{k_w}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)\leq C = C(a) < \infty, \; \; \forall z
\in \mathbb{B}_n. \end{equation} To see that this is true in the classical Bergman space setting, for a given
$\frac{n-1}{n+1}<a<1$ set $r=1+a$ and $s=1-a > 0$. Then $r+s=2$, and since $a>\frac{n-1}{n+1}$ we have that
$r=1+a>\frac{2n}{n+1}$. Furthermore since $0 < a < 1$ we have that $0 < s < 1 \leq \frac{2n}{n + 1}$. By plugging this in \eqref{propA6} we obtain~\eqref{rf.n}.
We will also need the following uniform version of the Rudin-Forelli estimates.
\begin{lm} \label{lm-rfloc} Let $\frac{n-1}{n+1}<a<1$. Then \begin{equation}\label{rfloc}
\lim_{R\to\infty}\sup_{z\in\mathbb{B}_n}\int_{D(z,R)^c}
\abs{\ip{k_z}{k_w}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)= 0. \end{equation} \end{lm}
\begin{proof} Notice first that \begin{eqnarray*} \int_{D(z,R)^c}
\abs{\ip{k_z}{k_w}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)&=&\int_{D(0,R)^c}
\abs{\ip{k_z}{k_{\varphi_z(w)}}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_{\varphi_z(w)}}^a_{A^2}}\,d\la(w)\\ &=& \int_{D(0,R)^c}
\abs{\ip{k_z}{k_w}_{A^2}}^a\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}_{A^2}}\,d\la(w)\\ &=& \int_{D(0,R)^c}
\frac{\abs{\ip{K_z}{K_w}_{A^2}}^a}{\norm{K_w}^{1+a}_{A^2}}\,d\la(w)\\ &=& \int_{D(0,R)^c}
\frac{dv(w)}{\abs{1-\bar{w}z}^{(n+1)a}(1-\abs{w}^2)^{\frac{n+1}{2}(1-a)}}\\
&=&\int_{R'}^{1}\int_{\mathbb{S}_n}\frac{r^{2n-1}d\xi dr}{\abs{1-zr\overline{\xi}}^{(n+1)a}(1-r^2)^{\frac{n+1}{2}(1-a)}}
\end{eqnarray*} \noindent where in the last integral $R=\frac{1}{2}\log\frac{1+R'}{1-R'}$. Notice that $R'\to1$ when $R\to\infty$ and note that the last integral can be written as $$\int_{R'}^{1}I_{(n+1)a-n}(rz)\frac{r^{2n-1} dr}{(1-r^2)^{\frac{n+1}{2}(1-a)}},$$
where $$I_{c}(z):=\int_{\mathbb{S}_n}\frac{d\xi}{\abs{1-z\overline{\xi}}^{c+n}}.$$ By standard estimates (see \cite{Zhu}*{p. 15} for example), we have that
$$I_{(n+1)a-n}(rz)\lesssim
\begin{cases}
1, \hspace{.5cm} \mbox{if } \hspace{.2cm} (n+1)a-n<0 \\
\log\frac{1}{1-|rz|^2}, \hspace{.5cm} \text{if } \hspace{.2cm} (n+1)a-n=0 \\
\frac{1}{(1-|rz|^2)^{(n+1)a-n}}, \hspace{.5cm} \text{if } \hspace{.2cm} (n+1)a-n>0,
\end{cases}$$
which gives us that
$$ \int_{D(z,R)^c}
\abs{\ip{k_z}{k_w}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)\lesssim \begin{cases}
\int_{R'}^1 \frac{r^{2n-1}}{(1-r^2)^{\frac{n+1}{2}(1-a)}}dr, \hspace{.5cm} \mbox{if } \hspace{.2cm} (n+1)a-n<0 \\
\int_{R'}^1 \log\frac{1}{1-r^2}\frac{r^{2n-1}}{(1-r^2)^{\frac12}}dr\, \hspace{.5cm} \text{if } \hspace{.2cm} (n+1)a-n=0 \\
\int_{R'}^1 \frac{r^{2n-1}}{(1-r^2)^{(n+1)a-n+\frac{n+1}{2}(1-a)}}dr, \hspace{.5cm} \text{if } \hspace{.2cm} (n+1)a-n>0
\end{cases}$$
Since $a < 1$, it is easy to see that all the functions appearing on the right hand side are integrable on $(0,1)$. Therefore, we obtain the desired conclusion by taking the limit as $R\to \infty$ (which is the same as $R'\to 1$).
\end{proof}
First, we want to make sure that the class of weakly localized operators is large enough to contain some interesting
operators. This is indeed true since every Toeplitz operator with a bounded symbol belongs to this class.
\begin{prop}\label{T-Berg} Each Toeplitz operator $T_u$ on $A^p$ with a bounded symbol $u(z)$ is in $\mathcal{A}_p(\mathbb{B}_n)$ for any $1 < p < \infty$.
\end{prop} \begin{proof} Clearly it is enough to show that \begin{equation*}
\sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c}\abs{\ip{T_u k_z}{k_w}_{A^2}}\frac{\norm{K_z}^{a} _{A^2}}{\norm{K_w}^{a} _{A^2}}\,d\la(w) \rightarrow 0,
\hspace{.5cm}
\sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c}\abs{\ip{T_{\overline{u}} k_z}{k_w}_{A^2}}\frac{\norm{K_z}^{a} _{A^2}}{\norm{K_w}^{a} _{A^2}}\,d\la(w) \rightarrow 0
\end{equation*} as $r \rightarrow \infty$ for all $\frac{n - 1}{n + 1} < a < 1$.
By definition $$ T_uk_z(w)=P(uk_z)(w)=\int_{\mathbb{B}_n}\ip{K_x}{K_w}_{A^2}u(x)k_z(x)\,dv(x). $$
Therefore, \begin{eqnarray*} \abs{\ip{T_uk_z}{k_w}_{A^2}} & \leq &
\int_{\mathbb{B}_n}\abs{\ip{k_w}{k_x}_{A^2}}\abs{u(x)}\abs{\ip{k_z}{k_x}_{A^2}}\,d\la(x)\\
& \leq & \left\Vert u\right\Vert_{\infty}\int_{\mathbb{B}_n}\abs{\ip{k_w}{k_x}_{A^2}\ip{k_x}{k_z}_{A^2}}\,d\la(x).
\end{eqnarray*} Now for $z,x\in\mathbb{B}_n$, set $$
I_z(x):= \abs{\ip{k_x}{k_z}_{A^2}} \int_{D(z,r)^c}\abs{\ip{k_w}{k_x}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)
$$ First note that \begin{eqnarray*}
\int_{D(z,r)^c}\abs{\ip{T_uk_z}{k_w}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w) &\leq& \left\Vert
u\right\Vert_{\infty}
\int_{D(z,r)^c}\int_{\mathbb{B}_n}\abs{\ip{k_w}{k_x}_{A^2}\ip{k_x}{k_z}_{A^2}}\,d\la(x)\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)\\
&=& \left\Vert u\right\Vert_{\infty}
\int_{\mathbb{B}} \newcommand{\Sn}{{\mathbb{S}_n}} \newcommand{\T}{\mathbb{T}} \newcommand{\R}{\mathbb{R}_n}\int_{D(z,r)^c}\abs{\ip{k_w}{k_x}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)\abs{\ip{k_x}{k_z}_{A^2}}\,d\la(x)\\
& = & \left\Vert u\right\Vert_{\infty} \int_{\mathbb{B}} \newcommand{\Sn}{{\mathbb{S}_n}} \newcommand{\T}{\mathbb{T}} \newcommand{\R}{\mathbb{R}_n} I_z(x) \,d\la(x)\\ &=& \left\Vert
u\right\Vert_{\infty}\left(\int_{D(z,\frac{r}{2})}+\int_{D\left(z,\frac{r}{2}\right)^c}\right)I_z(x)\,d\lambda(x).
\end{eqnarray*} To estimate the first integral notice that for $x\in D\left(z,\frac{r}{2}\right)$ we have
$D(z,r)^c\subset D\left(x,\frac{r}{2}\right)^c$. Therefore, the first integral is no greater than
$$
\int_{D(z,\frac{r}{2})}\int_{D(x,\frac{r}{2})^c}\abs{\ip{k_w}{k_x}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)\abs{\ip{k_x}{k_z}_{A^2}}\,d\la(x).$$
It is easy to see that the last expression is no greater than $C(a)
A\left(\frac{r}{2}\right)$, where $$A(r)=\sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c}
\abs{\ip{k_z}{k_w}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w),$$ and $C(a)$ is just the bound from the
standard Rudin-Forelli estimates~\eqref{rf.n}.
Estimating the second integral is simpler. The second integral is clearly no greater than
$$
\int_{D\left(z,\frac{r}{2}\right)^c}\int_{\mathbb{B}_n}\abs{\ip{k_w}{k_x}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)\abs{\ip{k_x}{k_z}_{A^2}}\,d\la(x).$$
By the standard Rudin-Forelli estimates~\eqref{rf.n} the inner integral is no greater than $$
C(a)\frac{\norm{K_z}^a_{A^2}}{\norm{K_x}^a_{A^2}},$$ where the constant $C(a)$ is independent of $z$ and $x$. So, the
whole integral is bounded by $C(a)A\left(\frac{r}{2}\right)$. Therefore,
$$\sup_{z\in\mathbb{B}_n} \int_{D(z,r)^c}\abs{\ip{T_uk_z}{k_w}_{A^2}}\frac{\norm{K_z}^a_{A^2}}{\norm{K_w}^a_{A^2}}\,d\la(w)\leq \left\Vert
u\right\Vert_{\infty}\left(C(a)A\left(\frac{r}{2}\right)+C(a)A\left(\frac{r}{2}\right)\right).$$ Applying the uniform
Rudin-Forelli estimates~\eqref{rfloc} in Lemma \ref{lm-rfloc} completes the proof since $2C(a)\left\Vert
u\right\Vert_{\infty}A\left(\frac{r}{2}\right)\to 0$ as $r\to\infty$. \end{proof}
We next show that the class of weakly localized operators forms a $*$-algebra.
\begin{prop}\label{C*-Berg} If $1 < p < \infty$ then $\Aa_p(\mathbb{B}_n)$ is an algebra. Furthermore, $\AatwoBerg$ is a $*$-algebra. \end{prop}
\begin{proof} It is trivial that $T\in \AatwoBerg$ implies $T^*\in\AatwoBerg$. It is also easy to see that any linear combination of
two operators in $\Aa_p(\mathbb{B}_n)$ must also be in $\Aa_p(\mathbb{B}_n)$. It remains to prove that if $T, S\in \Aa_p(\mathbb{B}_n)$, then $TS\in \Aa_p(\mathbb{B}_n)$. To that end, we have that
\begin{align*}
\int_{D(z,r)^c} & \abs{\ip{TSk_z}{k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}\,d\la(w) \\ & = \int_{D(z,r)^c}\abs{\ip{Sk_z}{T^*k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}\,d\la(w)\\
&=
\int_{D(z,r)^c}\abs{\int_{\mathbb{B}_n}\ip{Sk_z}{k_x}_{A^2}\ip{k_x}{T^*k_w}_{A^2}\,d\la(x)}\frac{\norm{K_z}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}\,d\la(w)\\
&\leq
\int_{\mathbb{B}_n}\int_{D(z,r)^c}\abs{\ip{k_x}{T^*k_w}_{A^2}}\frac{d\la(w)}{\norm{K_w}^{1 - \frac{2\delta}{p'(n + 1)}} _{A^2}}\abs{\ip{Sk_z}{k_x}_{A^2}}\norm{K_z}^{1 - \frac{2\delta}{p'(n + 1)}}_{A^2}
\,d\la(x). \end{align*} Proceeding exactly as in the proof of the previous proposition and using the conditions
following from $T, S\in \Aa_p(\mathbb{B}_n)$ in the place of the local Rudin-Forelli estimates~\eqref{rfloc} (and replacing $a$ with $1 - \frac{2\delta}{p(n + 1)}$) we obtain that $$
\lim_{r\to\infty}\sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c}\abs{\ip{TSk_z}{k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p(n + 1)}} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p(n + 1)}} _{A^2}}\,d\la(w)=0.
$$ The corresponding condition for $(TS)^*$ is proved in exactly the same way. \end{proof}
We next show that every weakly localized operator can be approximated by infinite sums of well localized pieces. To state
this property we need to recall the following proposition proved in \cite{MW}. \begin{prop} \label{Covering_Bergman} There
exists an integer $N>0$ such that for any $r>0$ there is a covering $\FF_r=\{F_j\}$ of $\mathbb{B}_n$ by disjoint Borel sets
satisfying \begin{enumerate} \item[\label{Finite} \textnormal{(1)}] every point of $\mathbb{B}_n$ belongs to at most $N$ of the
sets $G_j:=\{z\in\mathbb{B}_n: d(z, F_j)\leq r\}$, \item[\label{Diameter} \textnormal{(2)}] $\textnormal{diam}_d\, F_j \leq 2r$
for every $j$. \end{enumerate} \end{prop} We use this to prove the following proposition, which is similar to what
appears in \cite{MW}, but exploits condition \eqref{assump}.
\begin{prop}\label{MainEst1} Let $1 < p < \infty$ and let $T$ be in the norm closure of $\Aa_p(\mathbb{B}_n)$. Then for every $\epsilon > 0$ there
exists $r>0$ such that for the covering $\FF_r=\{F_j\}$ (associated to $r$) from Proposition~\ref{Covering_Bergman}, we have:
\begin{eqnarray*} \norm{ TP-\sum_{j}M_{1_{F_j} }TPM_{1_{G_j} }}_{A^p\to L^p(\mathbb{B}_n, dv) } < \epsilon. \end{eqnarray*}
\end{prop}
\begin{proof} By Proposition~\ref{C*-Berg} in conjunction with Proposition \ref{Covering_Bergman} and a simple approximation argument, we may assume that $T \in \Aa_p(\mathbb{B}_n)$. Now define $$ S=TP-\sum_{j}M_{1_{F_j} }TPM_{1_{G_j} }.$$ Given $\epsilon$, choose $r$ large enough so
that \begin{equation*} \sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c} \abs{ \ip{Tk_z}{k_w}_{A^2}}
\frac{\norm{K_z}^{1 - {\frac{2\delta}{p' (n + 1) }} }_{A^2}}{\norm{K_w}^{1 - {\frac{2\delta}{p' (n + 1) }} }_{A^2}}\,d\lambda(w)<\epsilon \end{equation*} and \begin{equation*} \sup_{z\in\mathbb{B}_n}\int_{D(z,r)^c} \abs{ \ip{T^*k_z}{k_w}_{A^2}}\frac{\norm{K_z}^{1 - \frac{2\delta}{p (n + 1) }} _{A^2}}{\norm{K_w}^{1 - \frac{2\delta}{p (n + 1) }} _{A^2}}
\,d\lambda(w)<\epsilon. \end{equation*} Now for any $z\in\mathbb{B}_n$, pick $j_0$ such that $z \in F_{j_0}$, so that \begin{eqnarray*} \abs{Sf(z)} & \leq &
\int_{\mathbb{B}_n}\sum_{j}1_{F_j}(z)1_{G_j^c}(w) \abs{ \ip{T^*\Kbz}{\Kbw}_{A^2} }\abs{f(w)} \,dv(w)\\
& = & \int_{G_{j_0} ^c} \abs{ \ip{T^*\Kbz}{\Kbw}_{A^2} }\abs{f(w)} \,dv(w)\\
& \leq & \int_{D(z,r)^c} \abs{ \ip{T^*\Kbz}{\Kbw}_{A^2} }\abs{f(w)} \,dv(w).
\end{eqnarray*}
To finish the proof, we will estimate the operator norm of the integral operator on $L^p(\mathbb{B}_n, dv)$ with kernel $1_{D(z, r) ^c} (w)|\langle T^*K_z, K_w\rangle_{A^2}| $ by using the classical Schur test. To that end, let $h(w) = \|K_w\|_{A^2} ^{\frac{2\delta}{p p'(n + 1)}}$ so that \begin{align*} \int_{\mathbb{B}_n} 1_{D(z, r) ^c} (w)|\langle T^* K_z, K_w\rangle_{A^2}| h(w) ^{p'} \, dv(w) & = \int_{D(z, r)^c} |\langle T^* K_z, K_w\rangle_{A^2}| \|K_w\|_{A^2} ^{\frac{2\delta}{p (n + 1) }} \, dv(w) \\ & = \int_{D(z, r)^c} |\langle T^* k_z, k_w\rangle_{A^2}| \|K_z\|_{A^2} \|K_w\|_{A^2} ^{\frac{2\delta}{p(n + 1) } - 1 } \, d\lambda(w) \\ & \leq \epsilon \|K_z\|_{A^2} ^{\frac{2\delta}{p(n + 1)}} = \epsilon h(z) ^{p'}. \end{align*}
Similarly, we have that \begin{equation*} \int_{\mathbb{B}_n} 1_{D(z, r) ^c} (w)|\langle T^* K_z, K_w\rangle_{A^2}| h(z) ^p \, dv(z) \leq \epsilon h(w) ^p \end{equation*} which completes the proof.
\end{proof}
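For the reader's convenience, we record the form of the classical Schur test used in the proof above: if $(X, \mu)$ is a measure space, $H(z, w) \geq 0$ is measurable on $X \times X$, and there exist a positive measurable function $h$ and a constant $C > 0$ such that \begin{equation*} \int_X H(z, w)\, h(w)^{p'} \, d\mu(w) \leq C h(z)^{p'} \quad \text{and} \quad \int_X H(z, w)\, h(z)^{p} \, d\mu(z) \leq C h(w)^{p}, \end{equation*} then the integral operator with kernel $H$ is bounded on $L^p(X, \mu)$ with norm at most $C$.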
It should be noted that a very similar Schur test argument actually proves that condition~\eqref{assump1} implies that $T$ is bounded on $A^p$.
We can now prove one of our main results, whose proof uses the ideas in \cite{MW}*{Theorem 4.3} and \cite{I}*{Lemma 5.3}. First, for any $w \in \mathbb{B}_n$ and $1 < p < \infty$, let $k_w ^{(p)}$ be the ``$p$-normalized reproducing kernel'' defined by \begin{equation*}k_w ^{(p)} (z) = \frac{K(z, w) }{\|K_w\|^\frac{2}{p'} }. \end{equation*} Clearly we have that $k_w ^{(2)} = k_w$ and an easy computation tells us that $\|k_w ^{(p)}\|_{A^p} \approx 1$ (where obviously we have equality when $p = 2$).
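For concreteness, this computation can be carried out using the Rudin-Forelli estimates: assuming, as holds in the range of exponents under consideration, that $\int_{\mathbb{B}_n}\abs{K(z,w)}^p\,dv(z)\approx \norm{K_w}_{A^2}^{2(p-1)}$, one has \begin{equation*} \norm{k_w ^{(p)}}_{A^p}^p = \norm{K_w}_{A^2}^{-\frac{2p}{p'}}\int_{\mathbb{B}_n}\abs{K(z,w)}^p\,dv(z) \approx \norm{K_w}_{A^2}^{2(p-1)-\frac{2p}{p'}} = 1, \end{equation*} since $\frac{p}{p'} = p - 1$.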
\begin{thm}\label{essBerg}
Let $1 < p < \infty$ and let $T$ be in the norm closure of $\Aa_p(\mathbb{B}_n)$. Then there exist $r, C > 0$ (both depending on $T$) such that \begin{equation*} \|T\|_{\text{e}} \leq C \limsup_{|z| \rightarrow 1^{-}} \sup_{w \in D(z, r)} |\langle Tk_z ^{(p)} , k_w ^{(p')} \rangle_{A^2}| \end{equation*} where $\|T\|_{\text{e}}$ is the essential norm of $T$ as a bounded operator on $A^p$.
\end{thm}
\begin{proof}Since $P : L^p(\mathbb{B}_n, dv) \rightarrow A^p$ is a bounded projection, it is enough to estimate the essential norm of $T = TP$
as an operator from $A^p$ to $L^p(\mathbb{B}_n, dv)$.
Clearly if $\|TP\|_\text{e} = 0$ then there is nothing to prove, so assume that $\|TP\|_\text{e} > 0.$ By Proposition~\ref{MainEst1} there exists $r>0$ such that for the covering $\FF_r=\{F_j\}$ associated
to $r$ (from Proposition~\ref{Covering_Bergman}) \begin{eqnarray*} \norm{TP- \sum_{j}M_{1_{F_j} }TPM_{1_{G_j} }}_{A^p\to
L^p(\mathbb{B}_n, dv)} < \frac{1}{2} \|TP\|_\text{e}. \end{eqnarray*}
Since $\sum_{j< m}M_{1_{F_j} }TPM_{1_{G_j} }$ is compact for every $m\in \N$ we have that $\|TP\|_{\text{e}}$ (as an operator from $A^p$ to $L^p(\mathbb{B}_n, dv)$) can be estimated in the following way:
\begin{align*} \norm{TP}_\text{e} &\leq \norm{TP- \sum_{j < m}M_{1_{F_j} }TPM_{1_{G_j} }}_{A^p\to L^p(\mathbb{B}_n, dv)}\\ &\leq
\norm{TP- \sum_{j}M_{1_{F_j} }TPM_{1_{G_j} }}_{A^p\to L^p(\mathbb{B}_n, dv)}+\norm{T_m}_{A^p \rightarrow L^p(\mathbb{B}_n, dv)} \\ & \leq \frac{1}{2} \|TP\|_\text{e} + \norm{T_m}_{A^p \rightarrow L^p(\mathbb{B}_n, dv)}, \end{align*} where
\begin{equation*} T_m = \sum_{j\geq m}M_{1_{F_j} }TPM_{1_{G_j} }. \end{equation*} We will complete the proof by showing that there exists $C > 0$ where \begin{equation*} \limsup_{m\to\infty}\norm{T_m}_{A^p\to
L^p(\mathbb{B}_n, dv)}\lesssim C \limsup_{|z| \rightarrow 1^{-}} \sup_{w \in D(z, r)} |\langle Tk_z ^{(p)}, k_w ^{(p')} \rangle_{A^2}| + \frac{1}{4}\norm{TP}_\text{e}. \end{equation*}
If $f\in A^p$ is arbitrary with norm no greater than $1$, then
\begin{align*} \norm{T_m f}_{A^p}^p &= \sum_{j\geq m}\norm{M_{1_{F_j} }TPM_{1_{G_j} }f}_{A^p}^p\\ &= \sum_{j\geq m}
\frac{\norm{M_{1_{F_j} }TPM_{1_{G_j} }f}_{A^p}^p}{\norm{M_{1_{G_j} }f}_{A^p}^p}\norm{M_{1_{G_j} }f}_{A^p}^p \leq
N\sup_{j\geq m}\norm{M_{1_{F_j} }Tl_j}_{A^p}^p \end{align*} where
$$l_j:=\frac{P M_{1_{G_j} }f}{\norm{M_{1_{G_j} }f}_{A^p}}.$$
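In the last inequality we also used that, by property (1) of Proposition~\ref{Covering_Bergman}, each point of $\mathbb{B}_n$ belongs to at most $N$ of the sets $G_j$, so that for $\norm{f}_{A^p} \leq 1$ \begin{equation*} \sum_{j}\norm{M_{1_{G_j}} f}_{A^p}^p = \sum_{j}\int_{G_j}\abs{f}^p\,dv \leq N\int_{\mathbb{B}_n}\abs{f}^p\,dv \leq N. \end{equation*}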
Therefore, we have that
$$\norm{T_m}_{A^p\to L^p(\mathbb{B}_n, dv)}\lesssim \sup_{j\geq m}\sup_{\norm{f}_{A^p} \leq 1}\left\{\norm{M_{1_{F_j} } Tl_j}_{A^p}:
l_j=\frac{PM_{1_{G_j} }f}{\norm{M_{1_{G_j} }f}_{A^p}}\right\}$$ and hence
$$\limsup_{m\to \infty} \norm{T_m}_{A^p\to L^p(\mathbb{B}_n, dv)}\lesssim \limsup_{j\to\infty}\sup_{\norm{f}_{A^p}\leq
1}\left\{\norm{M_{1_{F_j} } Tl_j}_{A^p}: l_j=\frac{PM_{1_{G_j} }f}{\norm{M_{1_{G_j} }f}_{A^p}}\right\}.$$
Now pick a sequence $\{f_j\}$ in $A^p$ with $\norm{f_j}_{A^p}\leq 1$ such that
$$ \limsup_{j\to \infty}\sup_{\norm{f}_{A^p}\leq 1}\left\{\norm{M_{1_{F_j} } Tg}_{A^p}: g=\frac{PM_{1_{G_j}}f}{\norm{M_{1_{G_j}
}f}_{A^p}}\right\}-\frac{1}{4} \|TP\|_{\text{e}} \leq \limsup_{j\to\infty}\norm{M_{1_{F_j} } Tg_j}_{A^p},$$ where $$g_j=\frac{P M_{1_{G_j}
}f_j}{\norm{M_{1_{G_j}
}f_j}_{A^p}}= \frac{\int_{G_j} \ip{f_j}{ k_w ^{(p')}}_{A^2} k_{w} ^{(p)} \,d\lambda(w)}{\left(\int_{G_j}\abs{\ip{f_j}{ k_u ^{(p')} }_{A^2}} ^p \, d\lambda (u)\right)^{\frac{1}{p}}} = \int_{G_j} \widetilde{a}_j (w) \, k_w ^{(p)} \, d\lambda(w) $$ where \begin{equation*} \widetilde{a}_j (w) = \frac{\ip{f_j}{ k_w ^{(p')}}_{A^2} }{\left(\int_{G_j}\abs{\ip{f_j}{ k_u ^{(p')} }_{A^2}} ^p \, d\lambda (u)\right)^{\frac{1}{p}}}. \end{equation*}
Finally, by the reproducing property and H\"{o}lder's inequality, we have that \begin{align*} \limsup_{j \rightarrow \infty} \norm{M_{1_{F_j} } T g_j}_{A^p} ^p & \leq \limsup_{j \rightarrow \infty} \int_{F_j} \left( \int_{G_j} \abs{\widetilde{a}_j (w) } \abs{Tk_w ^{(p)} (z)} \, d\lambda(w) \right)^p \, dv(z) \\ & = \limsup_{j \rightarrow \infty} \int_{F_j} \left( \int_{G_j} \abs{\widetilde{a}_j (w) } \abs{\ip{Tk_w ^{(p)}}{k_z ^{(p')}}_{A^2}} \, d\lambda(w) \right)^p \, d\lambda(z) \\ & \leq \limsup_{|z| \rightarrow 1^{-} } \sup_{w \in D(z, 3r)} \abs{\ip{Tk_z ^{(p)}}{ k_w ^{(p')}}_{A^2}}^p \left( \sup_j \lambda(G_j) ^p \int_{G_j} \abs{\widetilde{a}_j (w) } ^p \, d\lambda(w) \right) \\ & \leq C(r) \limsup_{|z| \rightarrow 1^{-} } \sup_{w \in D(z, 3r)} \abs{\ip{Tk_z ^{(p)}}{ k_w ^{(p')}}_{A^2}}^p \end{align*} since by Proposition \ref{Covering_Bergman} we have that $z \in F_j$ and $w \in G_j$ implies that $d(z, w) \leq 3r$ and $\lambda(G_j) \leq C(r)$ where $C(r)$ is independent of $j$.
\end{proof}
We will finish this section off with a proof of Theorem \ref{local-Bergman1}. First, for $z\in\mathbb{B}_n$,
define \begin{equation*} U_z ^{(p)} f(w):= f(\varphi_z(w)) (k_z (w))^\frac{2}{p} \end{equation*} which via a simple change of variables argument is clearly an isometry on $A^p$. As was shown in \cite{Sua}, an easy computation tells us that there exists a unimodular function $\Phi(\cdot, \cdot)$ on $\mathbb{B}_n \times \mathbb{B}_n$ where \begin{equation} \label{TransOpForm} (U_z ^{(p)})^* k_w ^{(p')} = \Phi(z, w) k_{\varphi_z (w)} ^{(p')}. \end{equation}
With the help of the operators $U_z ^{(p)}$, we will prove the following general result, which in conjunction with Theorem \ref{essBerg} proves Theorem \ref{local-Bergman1}. Note that the proof is similar to the proof of \cite{I}*{Proposition 1.4}.
\begin{prop} \label{BerVanProp} If $T$ is any bounded operator on $A^p$ for $1 < p < \infty$ then the following are equivalent: \begin{itemize}
\item [(a)] $\lim_{|z| \rightarrow 1^{-}} \sup_{w \in D(z, r)} |\langle Tk_z ^{(p)} , k_w ^{(p')} \rangle_{A^2}| = 0$ for all $r > 0$,
\item [(b)] $\lim_{|z| \rightarrow 1^{-}} \sup_{w \in D(z, r)} |\langle Tk_z ^{(p)} , k_w ^{(p')} \rangle_{A^2}| = 0$ for some $r > 0$,
\item [(c)] $\lim_{|z| \rightarrow 1^{-}} \abs{\ip{Tk_z}{k_z}_{A^2}} = 0 $. \end{itemize}
\end{prop}
\begin{proof} Trivially we have that $(a) \Rightarrow (b)$, and the fact that $(b) \Rightarrow (c)$ follows by definition and setting $z = w$. We will complete the proof by showing that $(c) \Rightarrow (a)$.
Assume to the contrary that $\abs{\ip{Tk_z}{k_z}_{A^2}}$ vanishes as $|z| \rightarrow 1^{-}$ but that \begin{equation*} \limsup_{|z| \rightarrow 1^{-}} \sup_{w \in D(z, r)} \abs{\ip{Tk_z ^{(p)}}{ k_w ^{(p')}}_{A^2}} \neq 0 \end{equation*} for some fixed $r > 0$. Thus, there exist sequences $\{z_m\}, \{w_m\} $ and some $0 < r_0 < 1$ where $\lim_{m \rightarrow \infty} |z_m| = 1$ and $|w_m| \leq r_0$ for any $m \in \N$, and where \begin{equation} \label{BerPropAssump} \limsup_{m \rightarrow \infty} \abs{\ip{ Tk_{z_m} ^{(p)}}{ k_{\varphi_{z_m} (w_m)} ^{(p')}}_{A^2}} > \epsilon\end{equation} for some $\epsilon > 0$. Furthermore, passing to a subsequence if necessary, we may assume that $\lim_{m \rightarrow \infty} w_m = w \in \mathbb{B}_n$. Note that since $|w_m| \leq r_0 < 1$ for all $m$, we trivially have $\lim_{m \rightarrow \infty} k_{w_m} ^{(p')} = k_w ^{(p')} $ where the convergence is in the $A^{p'}$ norm.
Let $\mathcal{B}(A^p)$ be the space of bounded operators on $A^p$. Since the unit ball in $\mathcal{B}(A^p)$ is $\WOT$ compact, we can (passing to another subsequence if necessary) assume that \begin{equation*} \widehat{T} = \WOT - \lim_{m \rightarrow \infty} U_{z_m} ^{(p)} T (U_{z_m} ^{(p')})^*. \end{equation*} Thus, we have that \begin{align*} \limsup_{m \rightarrow \infty} \abs{\ip{ Tk_{z_m} ^{(p)}}{ k_{\varphi_{z_m} (w_m)} ^{(p')}}_{A^2}} & =
\limsup_{m \rightarrow \infty} \abs{\ip{ U_{z_m} ^{(p)} T (U_{z_m} ^{(p')})^* k_{0} ^{(p)} }{ k_{ w_m} ^{(p')}}_{A^2}} \\ & = \limsup_{m \rightarrow \infty} \abs{\ip{ U_{z_m} ^{(p)} T (U_{z_m} ^{(p')})^* k_{0} ^{(p)} }{ k_{ w} ^{(p')}}_{A^2}} \\ & = \abs{\ip{\widehat{T} k_0}{ k_w}_{A^2}} .\end{align*} However, for any $z \in \mathbb{B}_n$ \begin{equation*} \abs{\ip{\widehat{T} k_z ^{(p)}}{ k_z ^{(p')}}_{A^2}} = \lim_{m \rightarrow \infty} \abs{\ip{U_{z_m} ^{(p)} T (U_{z_m} ^{(p')})^* k_z ^{(p)}}{ k_z ^{(p')}}_{A^2} } \approx \lim_{m \rightarrow \infty} \abs{\ip{T k_{\varphi_{z_m} (z) } ^{(p)}}{ k_{\varphi_{z_m}(z)} ^{(p')} }_{A^2}} = 0 \end{equation*} since by assumption $\abs{\ip{Tk_z}{k_z}_{A^2}}$ vanishes as $|z| \rightarrow {1^{-}}$. Thus, since the Berezin transform is injective, we get that $\widehat{T} = 0$, which contradicts (\ref{BerPropAssump}) and completes the proof. \end{proof}
\section{Generalized Bargmann-Fock space case} \label{Fock}
In this section we will prove Theorems \ref{local-Fock} and \ref{local-ordinaryFock}. Some parts of the proofs are essentially identical to the proof of Theorem \ref{local-Bergman1} and so we will only outline the necessary modifications.
For this section, let $$ D(z,r):=\left\{w\in\C^n:\abs{w-z}<r\right\} $$ denote the standard Euclidean disc centered at
the point $z$ of radius $r>0$. For $z\in\C^n$, we define $$ U_z f(w):= f(z-w) k_z(w), $$ which via a simple change of
variables argument is clearly an isometry on $\mathcal{F}^p$ (though note in general that it is not clear whether $U_z$ even maps $\ensuremath{{\mathcal{F}}_\phi ^p }$ into itself). Recall also that the orthogonal projection of $L^2(\C^n,
e^{-2\phi} dv)$ onto $\ensuremath{{\mathcal{F}}_\phi ^2 }$ is given by the integral operator $$
P(f)(z):=\int_{\C^n}\ip{\Kbw}{\Kbz}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}f(w)\,e^{-2\phi(w)} dv(w). $$ Therefore, for all $f\in \ensuremath{{\mathcal{F}}_\phi ^p }$ we have \begin{equation} \label{FockResOfId} f(z)=\int_{\C^n}\ip{f}{\widetilde{\kbw}}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}\widetilde{\kbw}(z)\,dv(w)\end{equation} where $\widetilde{\kbw}(z) := \Kbw(z) e^{-\phi(w)}$. Note that $|K(z, z)| \approx e^{2\phi(z)}$ (see \cite{SV}) so that \begin{equation} \label{EquivOfNormRepKer} |\widetilde{\kbw}(z)| \approx |\kbw(z)|. \end{equation}
The following analog of Lemma~\ref{lm-rfloc} is simpler to prove in this case.
\begin{lm} \begin{equation}\label{rfloc-Fock} \lim_{R\to\infty}\sup_{z\in\mathbb{C}^n}\int_{D(z,R)^c}
\abs{\ip{k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}\,dv(w)= 0. \end{equation} \end{lm} \noindent To prove this, simply note that there exists $\epsilon > 0$ such that $\abs{\ip{k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \leq e^{-\epsilon|z - w|}$ for all $z, w \in \C^n$. The proof of this is then immediate since $$
\int_{D(z,R)^c} \abs{\ip{k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}\,dv(w)\leq \int_{D(0,R)^{c}} e^{-\epsilon|w|} dv(w) $$ which clearly goes to
zero as $R\to\infty$.
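In fact, the decay is explicit: passing to polar coordinates in $\C^n \cong \mathbb{R}^{2n}$, \begin{equation*} \int_{D(0,R)^{c}} e^{-\epsilon|w|}\, dv(w) = c_n \int_R ^\infty t^{2n - 1} e^{-\epsilon t}\, dt \lesssim e^{-\frac{\epsilon}{2} R}, \end{equation*} where $c_n$ is the surface measure of the unit sphere in $\C^n$ and the implied constant depends only on $n$ and $\epsilon$.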
As in the Bergman case, $\AaFock$ contains all Toeplitz operators with bounded symbols. Also, as was stated in the introduction, any $T \in \AaFock$ is automatically bounded on $\ensuremath{{\mathcal{F}}_\phi ^p }$ for all $1 \leq p \leq \infty$. To prove this, note that it is enough to prove that $T$ is bounded on $\ensuremath{\mathcal{F}_\phi ^1 }$ and $\ensuremath{\mathcal{F}_\phi ^\infty }$ by complex interpolation (see \cite{I}). To that end, we only prove that $T$ is bounded on $\ensuremath{\mathcal{F}_\phi ^1 }$ since the proof that $T$ is bounded on $\ensuremath{\mathcal{F}_\phi ^\infty }$ is similar. If $T \in \AaFock$ and $f \in \ensuremath{\mathcal{F}_\phi ^1 }$, then the reproducing property gives us that \begin{align*} \abs{Tf(z)} e^{- \phi(z)} & \approx \abs {\ip{f}{T^*k_z}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \\ & \lesssim \int_{\C^n} \abs{f(u)} \abs{\ip{T^* k_z}{k_u}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \, e^{- \phi(u)} \, dv(u). \end{align*} Thus, by Fubini's theorem, we have that \begin{equation*} \norm{Tf}_{\ensuremath{\mathcal{F}_\phi ^1 }} \leq \int_{\C^n} \abs{f(u)} \left( \int_{\C^n} \abs{\ip{T^* k_z}{k_u}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \, dv(z) \right) e^{- \phi(u)} \, dv(u) \lesssim \norm{f}_{\ensuremath{\mathcal{F}_\phi ^1 }}. \end{equation*}
In addition, $\AaFock$ satisfies the following two properties: \begin{prop} \label{T-Fock} Each Toeplitz operator $T_u$ on $\ensuremath{{\mathcal{F}}_\phi ^p }$ with a bounded
symbol $u(z)$ is weakly localized. \end{prop}
\begin{proof} Since $\abs{\ip{k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \leq e^{-\epsilon|z - w|}$ for some $\epsilon > 0$ we have that \begin{align*} \abs{\ip{T_u k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} & \lesssim \|u\|_{L^\infty} \int_{\C^n} \abs{\ip{k_z}{k_x}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \abs{\ip{k_x}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \, dv(x) \\ & \lesssim \|u\|_{L^\infty} \int_{\C^n} e^{- \epsilon |z - x|} e^{-\epsilon |x - w |} \, dv(x). \end{align*} Now if $|z - w| \geq r$ then by the triangle inequality we have that either $|z - x| \geq r/2$ or $|x - w| \geq r/2$ so that \begin{equation*} \int_{D(z, r)^c} \abs{\ip{T_u {k}_z}{{k}_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \, dv(w) \lesssim e^{-\frac{\epsilon r}{2}} \|u\|_{L^\infty} \int_{D(z, r)^c} \int_{\C^n} e^{- \frac{\epsilon}{2} |z - x|} e^{-\frac{\epsilon}{2} |x - w |} \, dv(x) \,dv(w) \lesssim e^{-\frac{\epsilon r}{2}} \|u\|_{L^\infty}. \end{equation*} \end{proof}
Note that $T_u$ is sufficiently localized even in the sense of Xia and
Zheng by \cite{XZ}*{Proposition 4.1}. Also note that a slight variation of the above argument shows that the Toeplitz operator $T_\mu \in \AaFock$ if $\mu$ is a positive Fock-Carleson measure on $\C^n$ (see \cite{SV} for precise definitions).
\begin{prop} \label{C*-Fock} $\AaFock$ forms a $*$-algebra.
\end{prop} We will omit the proof of this proposition since it is proved in exactly the same way as it is in the Bergman space case (where the only difference is that one uses (\ref{FockResOfId}) in conjunction with (\ref{EquivOfNormRepKer}) instead of (\ref{BergResOfId})).
We next prove that operators in the norm closure of $\AaFock$ can also be approximated by infinite sums
of well localized pieces. To state this property we need to recall the following proposition proved in \cite{MW}.
\begin{prop} \label{Covering} There exists an integer $N>0$ such that for any $r>0$ there is a covering $\FF_r=\{F_j\}$
of $\C^n$ by disjoint Borel sets satisfying \begin{enumerate} \item[\textnormal{(1)}] every point of
$\C^n$ belongs to at most $N$ of the sets $G_j:=\{z\in\mathbb{C}^n: d(z, F_j)\leq r\}$, \item[\textnormal{(2)}]
$\textnormal{diam}_d\, F_j \leq 2r$ for every $j$. \end{enumerate} \end{prop} We use this to prove the following
proposition, which is similar to what appears in \cite{MW}, but exploits condition \eqref{assump-Fock} (and is proved in a manner that is similar to the proof of \cite{I}*{Lemma 5.2}). Note that for the rest of this paper, $\ensuremath{L_\phi ^p}$ will refer to the space of measurable functions $f$ on $\C^n$ such that $f e^{- \phi} \in L^p(\C^n, dv)$.
\begin{prop}\label{MainEst2} Let $1 < p < \infty$ and let $T$ be in the norm closure of $\AaFock$. Then
for every $\epsilon > 0$ there exists $r>0$ such that for the covering $\FF_r=\{F_j\}$ (associated to $r$) from
Proposition \ref{Covering} \begin{eqnarray*} \norm{ TP-\sum_{j}M_{1_{F_j} }TPM_{1_{G_j} }}_{\ensuremath{{\mathcal{F}}_\phi ^p } \to
\ensuremath{L_\phi ^p} } < \epsilon. \end{eqnarray*} \end{prop}
\begin{proof} Again, by an easy approximation argument, we can assume that $T \in \AaFock$. We first prove the proposition for $p = 2$.
Define $$ S=TP-\sum_{j}M_{1_{F_j} }TPM_{1_{G_j} }. $$ Given $\epsilon$, choose
$r$ large enough so that \begin{equation*} \sup_{z\in\C^n}\int_{D(z,r)^c} \abs{\ip{T^*k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}
dv(w)<\epsilon \quad\textnormal{ and }\quad \sup_{z\in\C^n}\int_{D(z,r)^c} \abs{ \ip{Tk_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }}}
dv(w)<\epsilon. \end{equation*} Now for any $z\in\C^n$, pick $j_0$ such that $z \in F_{j_0}$. Then we have that \begin{eqnarray*} \abs{Sf(z)} & \leq &
\int_{\C^n}\sum_{j}1_{F_j} (z)1_{G_j^c}(w) \abs{ \ip{T^*\Kbz}{\Kbw}_{\ensuremath{{\mathcal{F}}_\phi ^2 } }}\abs{f(w)} e^{-2\phi(w)} \, dv(w)\\
& = & \int_{G_{j_0} ^c} \abs{ \ip{T^*\Kbz}{\Kbw}_{\ensuremath{{\mathcal{F}}_\phi ^2 }} }\abs{f(w)} e^{-2\phi(w)} \, dv(w) \\
& \leq & \int_{D(z,r)^c} \abs{ \ip{T^*\Kbz}{\Kbw}_{\ensuremath{{\mathcal{F}}_\phi ^2 }} }\abs{f(w)} e^{-2\phi(w)}\, dv(w).
\end{eqnarray*}
To finish the proof when $p = 2$, we will estimate the operator norm of the integral operator on $\ensuremath{L_\phi ^2}$ with kernel $1_{D(z, r)^c} (w) \abs{ \ip{T^*\Kbz}{\Kbw}_{\ensuremath{{\mathcal{F}}_\phi ^2 }} }$ using the classical Schur test. To that end, let $h(z) = e^{\frac{1}{2} \phi(z)}$ so that
\begin{equation*} \int_{\C^n} 1_{D(z, r)^c} (w) \abs{ \ip{T^*\Kbz}{\Kbw}_{\ensuremath{{\mathcal{F}}_\phi ^2 }} }h(w)^2 e^{-2\phi(w)} \, dv(w) \approx h(z)^2 \int_{D(z, r)^c} \abs{\ip{T^* k_z}{k_w}_{\ensuremath{{\mathcal{F}}_\phi ^2 }} } \, dv(w) \lesssim \epsilon h(z) ^2. \end{equation*} Similarly, we have that \begin{equation*} \int_{\C^n} 1_{D(z, r)^c} (w) \abs{ \ip{T^*\Kbz}{\Kbw}_{\ensuremath{{\mathcal{F}}_\phi ^2 } }}h(z)^2 e^{-2\phi(z)} \, dv(z) \lesssim \epsilon h(w) ^2 \end{equation*} which finishes the proof when $p = 2$.
Now assume that $1 < p < 2$. Since $T$ is bounded on $\ensuremath{\mathcal{F}_\phi ^1 }$, we easily get that \begin{equation*} \norm{\sum_j M_{1_{F_j}} TP M_{1_{G_j}}}_{\ensuremath{\mathcal{F}_\phi ^1 } \rightarrow \ensuremath{L_\phi ^1}} < \infty \end{equation*} which by complex interpolation proves the proposition when $1 < p < 2$. Finally when $2 < p < \infty$, one can similarly get a trivial $\ensuremath{L_\phi ^1} \rightarrow \ensuremath{\mathcal{F}_\phi ^1 }$ operator norm bound on \begin{equation*} \left(\sum_j M_{1_{F_j}} TP M_{1_{G_j}}\right)^* = \sum_j P M_{1_{G_j}} T^* P M_{1_{F_j}} \end{equation*} since $T^*$ is bounded on $\ensuremath{\mathcal{F}_\phi ^1 }$. Since $(\ensuremath{{\mathcal{F}}_\phi ^p })^* = \ensuremath{\mathcal{F}_\phi ^q }$ when $1 < p < \infty$ where $q$ is the conjugate exponent of $p$ (see \cite{SV}), duality and complex interpolation now prove the proposition when $2 < p < \infty$. \end{proof}
Because of (\ref{EquivOfNormRepKer}), the proof of the next result is basically the same as the proof of Theorem~\ref{essBerg} and therefore we skip it.
\begin{thm} \label{essFock} Let $1 < p < \infty$ and let $T$ be in the norm closure of $\AaFock$. Then there exist $r, C > 0$ (both depending on $T$) such that \begin{equation*} \|T\|_{\text{e}} \leq C \limsup_{|z| \rightarrow\infty} \sup_{w \in D(z, r)} \abs{\ip{Tk_z}{ k_w }_{\ensuremath{{\mathcal{F}}_\phi ^2 }}} \end{equation*} where $\|T\|_{\text{e}}$ is the essential norm of $T$ as a bounded operator on $\ensuremath{{\mathcal{F}}_\phi ^p }$.\end{thm}
As was stated in the beginning of this section, the operator $U_z$ for $z \in \C^n$ is an isometry on $\mathcal{F}^p$. Furthermore, since a direct calculation shows that \begin{equation*}\abs{ U_z k_w (u)} \approx \abs{k_{z - w} (u)},\end{equation*} the proof of Theorem~\ref{local-ordinaryFock} now follows immediately by combining Theorem \ref{essFock} with \cite{I}*{Proposition 1.4}.
\section{Concluding remarks} \label{ConcRemSec}
The reader should clearly notice that the proof of Theorem \ref{essBerg} did \textit{not} in any way use the existence of a family of ``translation'' operators $\{U_z ^{(p)} \}_{z \in \mathbb{B}_n}$ on $A^p$ that satisfies \begin{equation} \label{TransOpForm2} \abs{(U_z ^{(p)})^* k_w ^{(p')}} \approx \abs{k_{\varphi_z (w)} ^{(p')}} \end{equation} (and moreover, one can make a similar remark regarding Theorem \ref{essFock}). In particular, a trivial application of H\"{o}lder's inequality in conjunction with the above remark implies that one can prove the so-called ``reproducing kernel thesis'' for operators in the norm closure of $\Aa_p(\mathbb{B}_n)$ (respectively, $\AaFock$) \textit{without} the use of any ``translation'' operators. It would therefore be interesting to know if our results can be proved for the weighted Bergman spaces on the ball that were considered in \cite{BO} for example. Moreover, it would be interesting to know whether one can use the ideas in this paper to modify the results in \cite{MW} to include spaces where condition A.5 on the space of holomorphic functions at hand is not necessarily true (note that it is precisely this condition that allows one to easily cook up ``translation operators'').
It would also be very interesting to know whether ``translation'' operators are in fact crucial for proving Proposition \ref{BerVanProp} and its generalized Bargmann-Fock space analog (again see \cite{I}*{Proposition 1.4}). More generally, it would be fascinating to know precisely how these translation operators fit into the ``Berezin transform implies compactness'' philosophy since at present the answer to this seems rather mysterious.
As was noted earlier, the techniques in \cite{XZ} are essentially frame theoretic, and therefore are rather different from the techniques used in this paper. In particular, a crucial aspect of \cite{XZ} involves a localization result somewhat similar in spirit to Proposition \ref{MainEst2} and which essentially involves treating a ``sufficiently localized'' operator $T$ as a sort of matrix with respect to the frame $\{k_\sigma\}_{\sigma \in \mathbb{Z}^{2n}}$ for $\FF^2$. Also, note that the techniques in \cite{XZ} were extended in \cite{I} to the generalized Bargmann-Fock space setting to obtain results for $\ensuremath{{\mathcal{F}}_\phi ^2 }$ that are similar to (but slightly weaker than) the results obtained in this paper. Because of these considerable differences in localization schemes, it would be interesting to know if one can combine the localization ideas from this paper with that of \cite{I, XZ} to obtain new or sharper results on $\ensuremath{{\mathcal{F}}_\phi ^2 }$ (or even just new or sharper results on $\ensuremath{\mathcal{F}^2}$).
\begin{bibdiv} \begin{biblist}
\bib{BI}{article}{
author={Bauer, W.},
author={Isralowitz, J.},
title={Compactness characterization of operators in the Toeplitz algebra
of the Fock space $F^p_\alpha$},
journal={J. Funct. Anal.},
volume={263},
date={2012},
number={5},
pages={1323--1355}
}
\bib{BC}{article}{
author={Berger, C.},
author={Coburn, L.},
title={Heat flow and Berezin-Toeplitz estimates},
journal={Amer. J. Math.},
volume={116},
date={1994},
number={3},
pages={563--590}}
\bib{BO}{article}{
author={Berndtsson, B.},
author={Ortega-Cerd\`{a}, J.},
title={On interpolation and sampling in Hilbert spaces of analytic functions.},
journal={J. Reine Angew. Math.},
volume={464},
date={1995},
number={5},
pages={109--128}
}
\bib{CZ}{article}{
author={Cho, H. R.},
author={Zhu, K.},
title={Fock-Sobolev spaces and their Carleson measures},
journal={J. Funct. Anal.},
volume={263},
date={2012},
number={8},
pages={2483--2506}
}
\bib{I}{article}{
author={Isralowitz, J.},
title={Compactness and essential norm properties of operators on generalized Fock spaces},
eprint={http://arxiv.org/abs/1305.7475},
status={to appear in J. Operator Theory},
date={2013},
pages={1--28}
}
\bib{MW}{article}{
author={Mitkovski, M.},
author={Wick, B. D.},
title={A Reproducing Kernel Thesis for Operators on Bergman-type Function Spaces},
journal={J. Funct. Anal.},
volume={267},
date={2014},
pages={2028--2055}
}
\bib{MSW}{article}{
author={Mitkovski, M.},
author={Su{\'a}rez, D.},
author={Wick, B. D.},
title={The Essential Norm of Operators on $A^p_\alpha(\mathbb{B}_n)$},
journal={Integral Equations Operator Theory},
volume={75},
date={2013},
number={2},
pages={197--233}
}
\bib{SV}{article}{
author={Schuster, A.},
author={Varolin, D.},
title={Toeplitz operators and Carleson measures on generalized Bargmann-Fock spaces},
journal={Integral Equations Operator Theory},
volume={72},
date={2012},
number={3},
pages={363--392}
}
\bib{Sua}{article}{
author={Su{\'a}rez, D.},
title={The essential norm of operators in the Toeplitz algebra on $A^p(\mathbb{B}_n)$},
journal={Indiana Univ. Math. J.},
volume={56},
date={2007},
number={5},
pages={2185--2232}
}
\bib{XZ}{article}{
author={Xia, J.},
author={Zheng, D.},
title={Localization and Berezin transform on the Fock space},
journal={J. Funct. Anal.},
volume={264},
date={2013},
number={1},
pages={97--117}
}
\bib{Zhu}{book}{
author={Zhu, K.},
title={Spaces of holomorphic functions in the unit ball},
series={Graduate Texts in Mathematics},
volume={226},
publisher={Springer-Verlag},
place={New York},
date={2005},
pages={x+271}
}
\bib{Zhu2}{book}{
author={Zhu, K.},
title={Analysis on Fock spaces},
series={Graduate Texts in Mathematics},
volume={263},
publisher={Springer-Verlag},
place={New York},
date={2012},
pages={x+344}
}
\end{biblist} \end{bibdiv}
\end{document}
Braces appear in connection with the study of set-theoretic solutions of the Yang-Baxter equation.
A {\em set-theoretic solution of the Yang-Baxter equation} is a pair $(X,r)$, where $X$ is a set, $r\colon X\times X\to X\times X$ is a bijection, and $(r\times\mbox{\rm id})(\mbox{\rm id}\times r)(r\times\mbox{\rm id})=(\mbox{\rm id}\times r)(r\times\mbox{\rm id})(\mbox{\rm id}\times r)$
\cite{23}. Set-theoretic solutions of the Yang-Baxter equation appear, for instance, in the study of representations of braid groups, and form a category {\bf SYBE}, whose objects are these pairs $(X,r)$, and morphisms $f\colon (X,r)\to(X',r')$ are the mappings $f\colon X\to X'$ that make the diagram
$$\xymatrix{
X\times X\ar[r]^{f\times f}\ar[d]^r&X'\times X'\ar[d]_{r'}\\
X\times X\ar[r]_{f\times f}&X'\times X'}$$ commute.
One way to produce set-theoretic solutions of the Yang-Baxter equation is using left skew braces.
\medskip
\noindent\textbf{Definition}
{\rm \cite{GV} A {\sl (left) skew brace} is
a triple $(A, *,\circ)$, where $(A,*) $ and $(A,\circ)$ are groups (not necessarily abelian) such that \begin{equation}a\circ (b * c) = (a\circ b)*a^{-*}* ( a\circ c)\label{lsb}\tag{B}\end{equation}
for every $a,b,c\in A$. Here $a^{-*}$ denotes the inverse of $a$ in the group $(A,*)$. The inverse of $a$ in the group $(A,\circ)$ will be denoted by $a^{-\circ}$.}
A brace is sometimes seen as an algebraic structure similar to that of a ring, with distributivity warped in some sense. But a better description of a brace is probably that of an algebraic structure with two group structures out of phase with each other.
\medskip
For every left skew brace $(A, *,\circ)$, the mapping $$r \colon A\times A \to A \times A,\quad r(x,y) = (x^{-*} *(x\circ y),(x^{-*} *(x\circ y))^{-\circ}\circ x\circ y),$$
is a non-degenerate set-theoretic solution of the Yang-Baxter equation (\cite[Theorem~3.1]{GV} and \cite[p.~96]{KSV}). Here ``non-degenerate'' means that the mappings $\pi_1r(x_0,-)\colon A\to A$ and $\pi_2r(-,y_0)\colon A\to A$ are bijections for every $x_0\in A$ and every $y_0\in A$.
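The braid relation can be checked mechanically on a small example. The following Python sketch (ours, purely illustrative) takes the trivial skew brace $(G,*,*)$ on the symmetric group $S_3$, for which the solution above reduces to conjugation, $r(x,y)=(y,\,y^{-*}*x*y)$, and verifies both the braid relation and non-degeneracy by brute force.

```python
from itertools import permutations, product

# Elements of S3 as permutations of (0, 1, 2).
S3 = list(permutations(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))         # composition
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))  # inverse permutation

# For the trivial skew brace (G, *, *), the solution above reduces to
# r(x, y) = (y, y^{-1} * x * y)  (the conjugation solution).
def r(x, y):
    return (y, comp(comp(inv(y), x), y))

def r12(t):  # r x id on triples
    a, b = r(t[0], t[1])
    return (a, b, t[2])

def r23(t):  # id x r on triples
    a, b = r(t[1], t[2])
    return (t[0], a, b)

# Braid relation: (r x id)(id x r)(r x id) = (id x r)(r x id)(id x r).
braid_holds = all(r12(r23(r12(t))) == r23(r12(r23(t)))
                  for t in product(S3, repeat=3))

# Non-degeneracy in the sense recalled above: the indicated component
# maps are bijections for every fixed first (resp. second) argument.
nondeg = (all(len({r(x, y)[0] for y in S3}) == 6 for x in S3)
          and all(len({r(x, y)[1] for x in S3}) == 6 for y in S3))
```

The same exhaustive check works for any finite skew brace once tables for the two operations are supplied.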
The simplest examples of left skew braces are:
(1) For any associative ring $(R,+, \cdot)$, the Jacobson radical $(J(R),+, \circ)$, where $\circ$ is the operation on $J(R)$ defined by $x\circ y=xy+x+y$ for every $x,y\in J(R)$.
(2) For any group $(G,*)$, the left skew braces $(G,*,*)$ and $(G,*,*^{op})$.
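To make example (1) concrete (our illustration, not taken from the paper): in $R=\mathbb{Z}/8\mathbb{Z}$ the Jacobson radical is $J(R)=\{0,2,4,6\}$, and the skew brace axiom (B) for $(J(R),+,\circ)$ can be verified exhaustively in a few lines of Python.

```python
from itertools import product

M = 8
J = [0, 2, 4, 6]                          # Jacobson radical of Z/8Z
circ = lambda x, y: (x * y + x + y) % M   # the adjoint operation x o y

# (J, circ) is a group with unit 0: closure and existence of inverses.
closed = all(circ(x, y) in J for x, y in product(J, J))
has_inverses = all(any(circ(x, y) == 0 for y in J) for x in J)

# Axiom (B): a o (b + c) = (a o b) * a^{-} * (a o c); here (J, +) is
# abelian, so the right-hand side is simply (a o b) - a + (a o c).
axiom_B = all(circ(a, (b + c) % M) == (circ(a, b) - a + circ(a, c)) % M
              for a, b, c in product(J, J, J))
```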
Several non-trivial examples of skew braces can be found in \cite{SV}. A complete classification of braces of low cardinality has been obtained via computer \cite{KSV}.
A homomorphism of skew braces is a mapping which is a group homomorphism for both the operations. This defines the category $\mathsf{SKB}$ of skew braces.
From \cite{GV}, we know that in a skew brace the units of the two groups coincide. So, $\mathsf{SKB}$ appears as a fully faithful subcategory $\mathsf{SKB}\hookrightarrow \mathsf{DiGp}$ of the category $\mathsf{DiGp}$ of digroups, where a digroup is a triple $(G,*,\circ)$ consisting of a set $G$ endowed with two group structures having the same unit. This notion was introduced in \cite{Normal} and devised during discussions between the first author and G. Janelidze.
There are two forgetful functors $U_i:\mathsf{DiGp} \to \mathsf{Gp}$, $i\in\{0,1\}$, associating with a digroup its first and its second group structure, respectively. They both reflect isomorphisms. Since $U_0$ is left exact and reflects isomorphisms, it naturally allows the lifting of the protomodular aspects of the category $\mathsf{Gp}$ of groups to the category $\mathsf{DiGp}$. In turn, the left exact fully faithful embedding $\mathsf{SKB}\hookrightarrow \mathsf{DiGp}$ makes $\mathsf{SKB}$ a pointed protomodular category. The protomodular axiom was introduced in \cite{B0} in order to extract the essence of the homological constructions and in particular to induce an {\em intrinsic notion of
exact sequence}.
In this paper, after recalling the basic facts about protomodular categories, we study the ``protomodular aspects'' of left skew braces, in particular in relation to the category of digroups. We study the notion of commutator of ideals in a left skew brace (in the literature, ``product'' of ideals of skew braces is often considered). We show that Huq=Smith for left skew braces. Notice that Huq${}\ne{}$Smith for digroups and near-rings \cite{Commutators}. We give a set of generators for the commutator of two ideals, and prove that every ideal of a left skew brace has a centralizer.
\section{Basic recalls on protomodular categories}
In this work, any category $ \mathbb{E} $ will be supposed finitely complete, which implies that it has a terminal object $1$. The terminal map from $X$ is denoted $\tau_X: X\to 1$. Given any map $f: X\to Y$, the equivalence relation $R[f]$ on $X$ is produced by the pullback of $f$ along itself. The map $f$ is said to be a {\em regular epimorphism} in $ \mathbb{E} $ when $f$ is the quotient of $R[f]$. When it is the case, we denote it by a double head arrow $X \twoheadrightarrow Y$.
\subsection{Pointed protomodular categories}
The category $ \mathbb{E} $ is said to be pointed when the terminal object $1$ is initial as well. Let us recall that a pointed category $ \mathbb{A} $ is additive if and only if, given any split epimorphism $f: X \rightleftarrows Y, \; fs=1_Y$, the following downward pullback:
$$
\xymatrix@=20pt{
\ensuremath{\mathrm{Ker}} f \ar@{ >->}[rr]^{k_f} \ar@<-4pt>[d]_{} && X \ar@<-4pt>[d]_{f} && \\
1 \ar@{ >->}[rr]_{0_Y} \ar@{ >->}[u]_{0_{K}} & & Y \ar@{ >->}[u]_{s} }
$$
is an upward pushout, namely if and only if $X$ is the direct sum (= coproduct) of $Y$ and $ \ensuremath{\mathrm{Ker}} f$. Let us recall the following:
\begin{definition}{\rm \cite{B0}
A pointed category $ \mathbb{E} $ is said to be {\em protomodular} when, given any split epimorphism as above, the pair $(k_f,s)$ of monomorphisms is jointly strongly epic.}
\end{definition}
This means that the only subobject $u:U \rightarrowtail X$ containing the pair $(k_f,s)$ of subobjects is, up to isomorphism, $1_X$. It implies that, given any pair $(f,g): X \rightrightarrows Z$ of arrows which are equalized by $k_f$ and $s$, they are necessarily equal (take the equalizer of this pair). Pulling back the split epimorphisms along the initial map $0_Y: 1 \rightarrowtail Y$ being a left exact process, the previous definition is equivalent to saying that this process reflects isomorphisms.
The category $\mathsf{Gp}$ of groups is clearly pointed protomodular. This is the case of the category $\mathsf{Rng}$ of rings as well, and more generally, given a commutative ring $R$, of any category $R$-$\mathsf{Alg}$ of any given kind of $R$-algebras without unit, possibly non-associative. This is in particular the case of the category $R$-$\mathsf{Lie}$ of Lie $R$-algebras. Even for $R$ a non-commutative ring, in which case $R$-algebras have a more complex behaviour (they are usually called $R$-rings, see \cite[p.~36]{Bergman} or \cite[p.~52]{libromio}), one has that the category $R$-$\mathsf{Rng}$ of $R$-rings is pointed protomodular, as can be seen from the fact that the forgetful functor $R$-$\mathsf{Rng}\to Ab$ reflects isomorphisms and $Ab$ is protomodular.
The pointed protomodular axiom implies that the category $ \mathbb{E} $ shares with the category $\mathsf{Gp}$ of groups the following well-known {\em Five Principles}:\\
(1) a morphism $f$ is a monomorphism if and only if its kernel $ \ensuremath{\mathrm{Ker}} f$ is trivial \cite{B0};\\
(2) any regular epimorphism is the cokernel of its kernel, in other words any regular epimorphism produces an exact sequence, which determines \emph{an intrinsic notion of exact sequences} in $ \mathbb{E} $ \cite{B0};\\
(3) there is a specific class of monomorphisms $u:U \rightarrowtail X$, the \emph{normal monomorphisms} \cite{B1}, see the next section;\\
(4) there is an intrinsic notion of abelian object \cite{B1}, see section \ref{abob};\\
(5) any reflexive relation in $ \mathbb{E} $ is an equivalence relation, i.e. the category $ \mathbb{E} $ is a Mal'tsev one \cite{B2}.
So, according to Principle (1), a pointed protomodular category is characterized by the validity of the \emph{split short five lemma}. Generally, Principle (5) is not directly exploited in $\mathsf{Gp}$; we shall show in Section \ref{asc} how effectively it operates inside a pointed protomodular category $ \mathbb{E} $. Pointed protomodular varieties of universal algebras are characterized in \cite{BJ}.
\subsection{Normal monomorphisms}
\begin{definition}{\rm \cite{B1}
In any category $ \mathbb{E} $, given a pair $(u,R)$ of a monomorphism $u:U \rightarrowtail X$ and an equivalence relation $R$ on $X$, the monomorphism $u$ is said to be {\em normal to $R$} when the equivalence relation $u^{-1}(R)$ is the indiscrete equivalence relation $\nabla_U=R[\tau_U]$ on $U$ and, moreover, any commutative square in the following induced diagram is a pullback:}
$$\xymatrix@=3pt{
{U\times U\;} \ar@<-2ex>[ddd]_{d_0^U} \ar@{>->}[rrrrr]^{\check u} \ar@<2ex>[ddd]^{d_1^U} &&&&& {R\;} \ar@<-2ex>[ddd]_{d_0^R} \ar@<2ex>[ddd]^{d_1^R} \\
&&&&\\
&&&&\\
{U\;} \ar@{>->}[rrrrr]_{u} \ar[uuu]|{s_0^U} &&&&& X \ar[uuu]|{s_0^R}
}
$$
\end{definition}
In the category $Set$, provided that $U\neq \emptyset$, these two properties characterize the equivalence classes of $R$. By the Yoneda embedding, this implies the following:
\begin{proposition}
Given any equivalence relation $R$ on an object $X$ in a category $ \mathbb{E} $, for any map $x$, the following upper monomorphism $\check x=d_1^R.\bar x$ is normal to $R$:
$$\xymatrix@=3pt{
{I_R^x\;} \ar[ddd] \ar@{>->}[rrrrr]^{\bar x} &&&&& {R\;} \ar[ddd]_{d_0^R} \ar[rrrrr]^{d_1^R} &&&&& X\\
&&&&\\
&&&&\\
{1\;} \ar@{>->}[rrrrr]_{x} &&&&& X
}
$$
\end{proposition}
In a pointed category $ \mathbb{E} $, taking the initial map $0_X: 1 \rightarrowtail X$ gives rise to a monomorphism $\iota_R: I_R \rightarrowtail X$ which is normal to $R$. This construction produces a preorder mapping $\iota^X: \mathsf{Equ}_X \mathbb{E} \to \mathsf{Mon}_X \mathbb{E} $ from the preorder of the equivalence relations on $X$ to the preorder of subobjects of $X$ which preserves intersections. Starting with any map $f: X\to Y$, we get $I_{R[f]}= \ensuremath{\mathrm{Ker}} f$, which says that any kernel map $k_f$ is normal to $R[f]$. Principle (3) above is a consequence of the fact \cite{B1} that in a protomodular category a monomorphism is normal to at most one equivalence relation (up to isomorphism), so that being normal, for a monomorphism $u$, becomes a property in this kind of category. This is equivalent to saying that the preorder homomorphism $\iota^X: \mathsf{Equ}_X \mathbb{E} \to \mathsf{Mon}_X \mathbb{E} $ reflects inclusions; so, the preorder $\mathsf{Norm}_X$ of normal subobjects of $X$ is just the image $\iota^X(\mathsf{Equ}_X)\subset \mathsf{Mon}_X$.
\subsection{Regular context}
Let us recall from \cite{Barr} the following:
\begin{definition} {\rm
A category $ \mathbb{E} $ is {\em regular }when it satisfies the two first conditions, and {\em exact} when it satisfies all the three conditions:\\
(1) regular epimorphisms are stable under pullbacks;\\
(2) any kernel equivalence relation $R[f]$ has a quotient $q_f$;\\
(3) any equivalence relation $R$ is a kernel equivalence relation.}
\end{definition}
Then, in the regular context, given any map $f: X\to Y$, the following canonical factorization $m$ is necessarily a monomorphism:
$$\xymatrix@=3pt{
&&& \mathsf{Im}_f \ar@{ >.>}[dddrrr]^{m} \\
&&&&\\
&&&&\\
{X\;} \ar@{->>}[rrruuu]^{q_f} \ar[rrrrrr]_{f} &&&&&& Y
}
$$
This produces a canonical decomposition of the map $f$ in a monomorphism and a regular epimorphism which is stable under pullbacks. Now, given any regular epimorphism $f:X \twoheadrightarrow Y$ and any subobject $u:U \rightarrowtail X$, the {\em direct image} $f(u): f(U) \rightarrowtail Y$ of $u$ along the regular epimorphism $f$ is given by $f(U)=\mathsf{Im}_{f.u} \rightarrowtail Y$.
Any variety in the sense of Universal Algebra is exact and regular epimorphisms coincide with surjective homomorphisms.
\subsection{Homological categories}
The significance of pointed protomodular categories grows in the regular context since, in this context, the split short five lemma can be extended to any exact sequence. Furthermore, the $3\times 3$ lemma, Noether isomorphisms and snake lemma hold; they are all collected in \cite{BB}. This is the reason why a regular pointed protomodular category $ \mathbb{E} $ is called \emph{homological}.
\section{Protomodular aspects of skew braces}
\subsection{Digroups}
From \cite{Normal}, we get the characterization of normal monomorphisms in $\mathsf{DiGp}$:
\begin{proposition}\label{normal1}
A subobject $i: (G,*,\circ) \rightarrowtail (K,*,\circ)$ is normal in the category $\mathsf{DiGp}$ if and only if the three following conditions hold:\\
{\rm (1)} $i:(G,*) \rightarrowtail (K,*)$ is normal in $\mathsf{Gp}$,\\{\rm (2)} $i:(G,\circ) \rightarrowtail (K,\circ)$ is normal in $\mathsf{Gp}$,\\
{\rm (3)} for all $(x,y)\in K\times K$, $x^{-*}*y\in G$ if and only if $x^{-\circ}\circ y\in G$.
\end{proposition}
\subsection{Skew braces}
The following observation is very important:
\begin{proposition}
Let $(G,*,\circ)$ be any skew brace. Consider the mapping $\lambda:G\times G\to G$ defined by $\lambda(a,u)=a^{-*}*(a\circ u)$. Then:\\
{\rm (1)} for every $a\in G$, $\lambda_{a}=\lambda(a,-)$ is an automorphism of $(G,*)$, and $a\mapsto \lambda_a$ is a group homomorphism $(G,\circ)\to \mathsf{Aut} (G,*)$; this condition is equivalent to {\rm (\ref{lsb})};\\
{\rm (2)} we have \begin{equation}\lambda({a^{-\circ}},u)=(a^{-\circ})^{-*}*(a^{-\circ}\circ u)=a^{-\circ}\circ(a*u).\label{doublech}\end{equation}
\end{proposition}
\proof
For (1), see \cite{GV}. For (2), by (\ref{lsb}) we have $a^{-\circ}\circ(a*u)=(a^{-\circ}\circ a)*(a^{-\circ})^{-*}*(a^{-\circ}\circ u)=(a^{-\circ})^{-*}*(a^{-\circ}\circ u)=\lambda({a^{-\circ}},u)$.
\endproof
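On the radical brace $J=\{0,2,4,6\}\subseteq\mathbb{Z}/8\mathbb{Z}$ of example (1), both statements of the proposition can be tested mechanically (our sketch; here $\lambda_a(u)=a^{-*}*(a\circ u)$ becomes $-a+(a\circ u)$ since the additive group is abelian).

```python
from itertools import product

M = 8
J = [0, 2, 4, 6]
circ = lambda x, y: (x * y + x + y) % M
lam = lambda a, u: (-a + circ(a, u)) % M                    # lambda_a(u)
circ_inv = lambda a: next(y for y in J if circ(a, y) == 0)  # inverse in (J, o)

# Statement (1): each lambda_a is an automorphism of (J, +) ...
additive = all(lam(a, (u + v) % M) == (lam(a, u) + lam(a, v)) % M
               for a, u, v in product(J, J, J))
bijective = all(sorted(lam(a, u) for u in J) == J for a in J)

# ... and a |-> lambda_a is a homomorphism from (J, o):
homomorphism = all(lam(circ(a, b), u) == lam(a, lam(b, u))
                   for a, b, u in product(J, J, J))

# Statement (2): lambda_{a^{-o}}(u) = a^{-o} o (a + u).
identity2 = all(lam(circ_inv(a), u) == circ(circ_inv(a), (a + u) % M)
                for a, u in product(J, J))
```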
\subsection{First properties of skew braces}
The following observation is straightforward:
\begin{proposition}
$\mathsf{SKB}$ is a Birkhoff subcategory of $\mathsf{DiGp}$.
\end{proposition}
This means that any subobject of a skew brace in $\mathsf{DiGp}$ is a skew brace and that, given any surjective homomorphism $f: X \twoheadrightarrow Y$ in $\mathsf{DiGp}$, the digroup $Y$ is a skew brace as soon as so is $X$. In this way, any equivalence relation $R$ in $\mathsf{DiGp}$ on a skew brace $X$ actually lies in $\mathsf{SKB}$ since it determines a subobject $R\subset X\times X$ in $\mathsf{DiGp}$ and, moreover, its quotient in $\mathsf{SKB}$ is its quotient in $\mathsf{DiGp}$. The first part of this last sentence implies that any normal subobject $u:U \rightarrowtail X$ in $\mathsf{DiGp}$ with $X\in \mathsf{SKB}$ is normal in $\mathsf{SKB}$.
We are now going to show that the normal subobjects in $\mathsf{SKB}$ coincide with the ideals of \cite{GV}.
\begin{proposition}\label{normal2}
A subobject $i: (G,*,\circ ) \rightarrowtail (K,*,\circ )$ is normal in the category $\mathsf{SKB}$ if and only if the three following conditions hold:\\
$(1)$ $i: (G,*) \rightarrowtail (K,*)$ is normal in $\mathsf{Gp}$,\\
$(2)$ $i: (G,\circ ) \rightarrowtail (K,\circ )$ is normal in $\mathsf{Gp}$,\\
$(3')$ $\lambda_x(G)=G$ for all $x\in K$.
\end{proposition}
\proof
Suppose (1) and (2). We are going to show $(3)\iff (3')$, with $(3)$ given in Proposition \ref{normal1}.\\
(i) $x^{-\circ}\circ y\in G\Rightarrow x^{-*}*y\in G$ if and only if $\lambda_{x}(G)\subset G$, setting $y=x\circ u, \; u\in G$.\\
(ii) from (\ref{doublech}):
$x^{-*}*y\in G \Rightarrow x^{-\circ}\circ y\in G$ if and only if $\lambda_{x^{-\circ}}(G)\subset G$, setting $y=x*u, \; u\in G$.\\
Finally $\lambda_{x}(G)\subset G$ for all $x$ is equivalent to $\lambda_{x}(G)=G$.
\endproof
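To illustrate the proposition (our example): in the radical brace $J=\{0,2,4,6\}\subseteq\mathbb{Z}/8\mathbb{Z}$, the subset $G=\{0,4\}$ is a subgroup for both operations; since both groups here are abelian, conditions (1) and (2) hold automatically, and condition (3') can be checked directly.

```python
M = 8
J = [0, 2, 4, 6]
circ = lambda x, y: (x * y + x + y) % M
lam = lambda a, u: (-a + circ(a, u)) % M

G = {0, 4}  # candidate ideal of the radical brace (J, +, o)

# G is a subgroup for both operations (both abelian, so normality,
# i.e. conditions (1) and (2) of the proposition, is automatic):
sub_add = all((u + v) % M in G for u in G for v in G)
sub_circ = all(circ(u, v) in G for u in G for v in G)

# Condition (3'): lambda_x(G) = G for every x in J.
cond_3prime = all({lam(x, u) for u in G} == G for x in J)
```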
\begin{corollary}
A subobject $i: (G,*,\circ ) \rightarrowtail (K,*,\circ )$ is normal in the category $\mathsf{SKB}$ if and only if it is an ideal in the sense of \cite{GV}, namely is such that:\\
1) $i: (G,\circ ) \rightarrowtail (K,\circ )$ is normal, 2) $G*a=a*G$ for all $a\in K$, 3) $\lambda_{a}(G)\subset G$ for all $a\in K$.
\end{corollary}
\proof
Straightforward.
\endproof
Being a variety in the sense of Universal Algebra, $\mathsf{SKB}$ is finitely cocomplete; accordingly it has binary sums (called coproducts as well). So, $\mathsf{SKB}$ is a semi-abelian category according to the definition introduced in \cite{JMT}:
\begin{definition}
{\rm A pointed category $ \mathbb{E} $ is said to be {\em semi-abelian} when it is protomodular, exact and has finite sums.}
\end{definition}
From the same \cite{JMT}, let us recall the following observation which explains the choice of the terminology: a pointed category $ \mathbb{E} $ is abelian if and only if both $ \mathbb{E} $ and $ \mathbb{E} ^{op}$ are semi-abelian.
\subsection{Internal skew braces}
Given any category $ \mathbb{E} $, the notion of internal group, digroup and skew brace is straightforward, determining the categories $\mathsf{Gp} \mathbb{E} $, $\mathsf{DiGp} \mathbb{E} $ and $\mathsf{SKB} \mathbb{E} $. Since $\mathsf{Gp} \mathbb{E} $ is protomodular, so are the two others. An important case is produced with $ \mathbb{E} =Top$, the category of topological spaces. Although $Top$ is not a regular category, the category $\mathsf{Gp} Top$ is regular, the regular epimorphisms being the open surjective homomorphisms. So $\mathsf{Gp} Top$ is homological but not semi-abelian.
Now let $f: X\to Y$ be any map in $\mathsf{DiGp} Top$. Let us show that $R[f]$ has a quotient in $\mathsf{DiGp} Top$. Take its quotient $q_{R[f]}: X \twoheadrightarrow Q_f$ in $\mathsf{DiGp}$ and endow $Q_f$ with the quotient topology with respect to $R[f]$; then $q_{R[f]}$ is an open surjective homomorphism since so is $U_0(q_{R[f]})$. Accordingly, a regular epimorphism in $\mathsf{DiGp} Top$ is again an open surjective homomorphism. Moreover, the functor $U_0: \mathsf{DiGp} Top \to \mathsf{Gp} Top$, being left exact and reflecting the homeomorphic isomorphisms, reflects the regular epimorphisms; so, these regular epimorphisms in $\mathsf{DiGp} Top$ are stable under pullbacks. Accordingly the category $\mathsf{DiGp} Top$ is regular. Similarly the category $\mathsf{SKB} Top$ is homological as well, without being semi-abelian. As any category of topological semi-abelian algebras, both $\mathsf{DiGp} Top$ and $\mathsf{SKB} Top$ are finitely cocomplete, see \cite{BC}.
\section{Skew braces and their commutators}
\subsection{Protomodular aspects}
\subsubsection{Commutative pairs of subobjects, abelian objects}\label{abob}
Given any pointed category $ \mathbb{E} $, the protomodular axiom applies to the following specific downward pullback:
$$
\xymatrix@=20pt{
X \ar@{ >->}[rr]^{r_X} \ar@<-4pt>[d]_{} && X\times Y \ar@<-4pt>[d]_{p_Y} && \\
1 \ar@{ >->}[rr]_{0_Y} \ar@{ >->}[u]_{0_{X}} & & Y \ar@{ >->}[u]_{l_Y} }
$$
where the monomorphisms are the canonical inclusions. This is the definition of a \emph{unital category} \cite{B2}. In this kind of category there is an intrinsic notion of \emph{commutative pair of subobjects}:
\begin{definition} {\rm
Let $ \mathbb{E} $ be a unital category.
Given a pair $(u,v)$ of subobjects of $X$, we say that the subobjects $u$ and $v$ {\em cooperate} (or {\em commute}) when there is a (necessarily unique) map $\varphi $, called the {\em cooperator} of the pair $(u,v)$, making the following diagram commute:
$$
\xymatrix@=20pt{
& U \ar@{ >->}[dl]_{l_U} \ar@{ >->}[dr]^{u} & & \\
U \times V \ar@{.>}[rr]_{\varphi} && X \\
& V \ar@{ >->}[ul]^{r_V} \ar@{ >->}[ur]_{v} & &
}
$$
We denote this situation by $[u,v]=0$. A subobject $u:U \rightarrowtail X$ is {\em central} when $[u,1_{X}]=0$. An object $X$ is {\em commutative} when $[1_{X},1_{X}]=0$.}
\end{definition}
Clearly $[1_{X},1_{X}]=0$ gives $X$ a structure of internal unitary magma, which, $ \mathbb{E} $ being unital, is necessarily underlying an internal commutative monoid structure. When $ \mathbb{E} $ is protomodular, this is actually an internal abelian group structure, so that we call $X$ an abelian object \cite{B1}. This gives rise to a fully faithful subcategory $Ab( \mathbb{E} )\hookrightarrow \mathbb{E} $, which is additive and stable under finite limits in $ \mathbb{E} $. From that we can derive:
\begin{proposition}\cite{B1}
A pointed protomodular category $ \mathbb{E} $ is additive if and only if any monomorphism is normal.
\end{proposition}
\subsubsection{Connected pairs $(R,S)$ of equivalence relations}
Since a protomodular category is necessarily a Mal'tsev one, we can transfer the following notions. Given any pair $(R,S)$ of equivalence relations on the object $X$ in $ \mathbb{E} $, take the following rightward and downward pullback:
$$
\xymatrix@=30pt
{
R {\overrightarrow\times}_{\!\! X} S \ar[r]^{p_S} \ar[d]_{p_R} & S \ar[d]_{d_0^S} \ar@<+1,ex>@{ >->}[l]^{r_S} \\
R \ar[r]^{d_1^R} \ar@<-1,ex>@{ >->}[u]_{l_R} & X \ar@<+1,ex>@{ >->}[l]^{s_0^R} \ar@<-1,ex>@{ >->}[u]_{s_0^S}
}
$$
where $l_R$ and $r_S$ are the sections induced by the maps $s_0^R$ and $s_0^S$. Let us recall the following definition from \cite{BG1}:
\begin{definition} {\rm
In a Mal'tsev category $ \mathbb{E} $, the pair $(R,S)$ is said to be {\em connected} when there is a (necessarily unique) morphism
$$
p : R {\overrightarrow\times}_{\!\! X} S \rightarrow X,\; xRySz\mapsto p(xRySz)
$$
such that $pr_S=d_1^S$ and $pl_R=d_0^R$, namely such that the following identities hold: $p(xRySy)=x$ and $p(yRySz)=z$. This morphism $p$ is then called the \emph{connector} of the pair, and we denote the situation by $[R,S]=0$.}
\end{definition}
From \cite{BG2}, let us recall that:
\begin{lemma}\label{func}
Let $ \mathbb{E} $ be a Mal'tsev category, $f: X\to Y$ any map, $(R,S)$ any pair of equivalence relations on $X$, $(\bar R,\bar S)$ any pair of equivalence relations on $Y$ such that $R\subset f^{-1}(\bar R)$ and $S\subset f^{-1}(\bar S)$. Suppose moreover that $[R,S]=0$ and $[\bar R,\bar S]=0$. Then the following diagram necessarily commutes:
$$
\xymatrix@=30pt
{
R {\overrightarrow\times}_{\!\! X} S \ar[rr]^{\tilde f} \ar[d]_{p_{(R,S)}} && \bar R {\overrightarrow\times}_{\!\! Y} \bar S \ar[d]^{p_{(\bar R,\bar S)}} \\
X \ar[rr]_{f} && Y
}
$$
where $\tilde f$ is the natural factorization induced by $R\subset f^{-1}(\bar R)$ and $S\subset f^{-1}(\bar S)$.
\end{lemma}
A pointed Mal'tsev category is necessarily unital. From \cite{BG1}, in any pointed Mal'tsev category $ \mathbb{E} $, we have necessarily
\begin{equation}[R,S]=0 \;\; \Rightarrow \;\;\ [I_R,I_S]=0\label{h=s}\end{equation}
In this way, the ``Smith commutation" \cite{S1976} implies the ``Huq commutation" \cite{H}.
\subsection{Huq=Smith}
The converse is not necessarily true, even if $ \mathbb{E} $ is pointed protomodular, see Proposition \ref{notsh} below. When this is the case, we say that $ \mathbb{E} $ satisfies the {\rm (Huq=Smith)} condition. Any pointed strongly protomodular category satisfies {\rm (Huq=Smith)}, see \cite{BG1}. {\rm (Huq=Smith)} is true for $\mathsf{Gp}$ by the following straightforward:
\begin{proposition}\label{SHGp}
Let $(R,S)$ be a pair of equivalence relations in $\mathsf{Gp}$ on the group $(G,*)$. The following conditions are equivalent:\\
{\rm (1)} $[I_R,I_S]=0$;\\
{\rm (2)} $p(x,y,z)=x*y^{-1}*z$ defines a group homomorphism $p: R {\overrightarrow\times}_{\!\! G} S \to G$;\\
{\rm (3)} $[R,S]=0$.
\end{proposition}
\begin{proposition}\label{notsh}
The category $\mathsf{DiGp}$ of digroups does not satisfy {\rm (Huq=Smith)}.
\end{proposition}
\proof
We can use the counterexample introduced in \cite{Normal} for another purpose. Start with an abelian group $(A,+)$ and an element $a$ such that $-a\neq a$. Then define $\theta: A\times A \to A\times A$ as the involutive bijection which fixes every element $(x,y)$ except $(a,a)$, which is exchanged with $(-a,a)$. Then define the group structure $(A\times A,\circ)$ on $A\times A$ as the transport along $\theta$ of $(A\times A,+)$. So, we get:
$$ (x,z)\circ(x',z')=\theta(\theta(x,z)+\theta(x',z'))$$
Clearly we have $(a,a)^{-\circ}=(a,-a)$. Since the second projection $\pi:A\times A \to A$ is such that $\pi\theta=\pi$, we get a digroup homomorphism $\pi: (A\times A,+,\circ)\to (A,+,+)$ whose kernel map is, up to isomorphism, $\iota_A: (A,+,+) \rightarrowtail (A\times A,+,\circ)$ defined by $\iota_A(x)=(x,0)$. The commutativity of the law $+$ makes $[\iota_A,\iota_A]=0$ inside $\mathsf{DiGp}$. We are going to show that, however, we do not have $[R[\pi],R[\pi]]=0$. If it were the case, according to the previous proposition and considering the images by $U_0$ and $U_1$ of the desired ternary operation, we should have, for any triple $(x,y)R[\pi](x',y)R[\pi](x'',y)$:
$$(x,y)-(x',y)+(x'',y)=(x,y)\circ(x',y) ^{-\circ}\circ(x'',y)$$
namely $(x,y)\circ(x',y) ^{-\circ}\circ(x'',y)=(x-x'+x'',y)$.
Now take $y=a=x'$ and $x,x''\notin\{a,-a\}$, with moreover $x+a+x''\notin\{a,-a\}$ (such a choice is possible as soon as $A$ is large enough, e.g.\ $A=\mathbb{Z}/5\mathbb{Z}$, $a=1$, $x=x''=2$). Then we get:\\ $(x,a)\circ(a,a) ^{-\circ}\circ(x'',a)=(x,a)\circ(a,-a)\circ(x'',a)=(x+a,0)\circ(x'',a)$\\ $=(x+a+x'',a)$. Now, clearly we get $x+a+x''\neq x-a+x''$ since $a\neq -a$.
\endproof
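The counterexample lends itself to a machine check. The sketch below (ours; the concrete choice $A=\mathbb{Z}/5\mathbb{Z}$ and $a=1$ is one convenient instance) builds the digroup $(A\times A,+,\circ)$, confirms that $\circ$ is a group law with the same unit as $+$, and searches for a triple in $R[\pi]$ on which the two candidate Mal'tsev operations disagree.

```python
from itertools import product

M, a = 5, 1                               # A = Z/5Z, a = 1, so -a = 4 != a
A = range(M)

def theta(p):                             # involution exchanging (a,a) and (-a,a)
    if p == (a, a):
        return (-a % M, a)
    if p == (-a % M, a):
        return (a, a)
    return p

add = lambda p, q: ((p[0] + q[0]) % M, (p[1] + q[1]) % M)
circ = lambda p, q: theta(add(theta(p), theta(q)))  # + transported along theta

G = list(product(A, A))

# (G, circ) is a group with the same unit (0, 0) as (G, +):
unit_ok = all(circ((0, 0), p) == p == circ(p, (0, 0)) for p in G)
assoc_ok = all(circ(circ(p, q), s) == circ(p, circ(q, s))
               for p, q, s in product(G, G, G))

def circ_inv(p):                          # inverse in (G, circ), by search
    return next(q for q in G if circ(p, q) == (0, 0))

p_op = lambda u, v, w: ((u[0] - v[0] + w[0]) % M, (u[1] - v[1] + w[1]) % M)
q_op = lambda u, v, w: circ(circ(u, circ_inv(v)), w)

# Triples ((x,y),(x',y),(x'',y)) lie in R[pi]; search for a disagreement.
witness = next(
    ((x, y), (x2, y), (x3, y))
    for x, x2, x3, y in product(A, A, A, A)
    if p_op((x, y), (x2, y), (x3, y)) != q_op((x, y), (x2, y), (x3, y))
)
```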
However we have the following very general observation:
\begin{proposition}
Let $ \mathbb{E} $ be any pointed Mal'tsev category satisfying {\rm (Huq=Smith)}. Then so is any functor category $\mathcal F( \mathbb{C} , \mathbb{E} )$.
\end{proposition}
\proof
Let $(R,S)$ be a pair of equivalence relations on an object $F\in \mathcal F( \mathbb{C} , \mathbb{E} )$. We have $[R,S]=0$ if and only if for each object $C\in \mathbb{C} $ we have $[R(C),S(C)]=0$ since, by Lemma \ref{func}, the naturality follows. In the same way, if $(u,v)$ is a pair of subfunctors of $F$, we have $[u,v]=0$ if and only if for each object $C\in \mathbb{C} $ we have $[u(C),v(C)]=0$. Suppose now that $ \mathbb{E} $ satisfies {\rm (Huq=Smith)}, and that $[I_R,I_S]=0$. So, for each object $C\in \mathbb{C} $ we have $[I_R(C),I_S(C)]=0$, which implies $[R(C),S(C)]=0$. Accordingly $[R,S]=0$.
\endproof
Let $ \mathbb{T} $ be any finitary algebraic theory, and denote by $ \mathbb{T} ( \mathbb{E} )$ the category of internal $ \mathbb{T} $-algebras in $ \mathbb{E} $. Let us recall that, given any variety of algebras $ \mathbb{V} ( \mathbb{T} )$, we have a {\em Yoneda embedding for the internal $ \mathbb{T} $-algebras}, namely a left exact fully faithful factorization of the Yoneda embedding for $ \mathbb{E} $:
\[\xymatrix@C=2pc@R=2pc{ \mathbb{T} ( \mathbb{E} ) \ar@{-->}[rr]^{\bar Y_{ \mathbb{T} }} \ar[d]_{\mathcal U_{ \mathbb{T} }} && \mathcal F(\mathbb E^{op}, \mathbb{V} ( \mathbb{T} )) \ar[d]^{\mathcal F( \mathbb E^{op}, \mathcal U)} \\
\mathbb{E} \ar[rr]_Y && \mathcal F(\mathbb E^{op},Set)}\]
where $\mathcal U: \mathbb{V} ( \mathbb{T} ) \to Set$ is the canonical forgetful functor.
\begin{theorem}\label{TTE}
Let $ \mathbb{T} $ be any finitary algebraic theory
such that the associated variety of algebras $ \mathbb{V} ( \mathbb{T} )$ is pointed protomodular. If $ \mathbb{V} ( \mathbb{T} )$ satisfies {\rm (Huq=Smith)}, so does
any category $ \mathbb{T} ( \mathbb{E} )$.
\end{theorem}
\proof
If $ \mathbb{V} ( \mathbb{T} )$ satisfies {\rm (Huq=Smith)}, so does $\mathcal F(\mathbb E^{op}, \mathbb{V} ( \mathbb{T} ))$ by the previous proposition. Accordingly, $\bar Y_{ \mathbb{T} }$ being left exact and fully faithful, so does $ \mathbb{T} ( \mathbb{E} )$.
\endproof
\subsection{Any category $\mathsf{SKB} \mathbb{E} $ does satisfy {\rm (Huq=Smith)}}
\begin{proposition}\label{U,V}
Given any pair $(U,V)$ of subobjects of $X$ in $\mathsf{SKB}$, the following conditions are equivalent:\\
{\rm (1)} $[U,V]=0$;\\
{\rm (2)} for all $(u,v)\in U\times V$, we get $u\circ v=u*v$ and this restriction is commutative;\\
{\rm (3)} for all $(u,v)\in U\times V, \; \lambda_u(v)=v$, $[U_0(U),U_0(V)]=0$ and $[U_1(U),U_1(V)]=0$.\\
Accordingly, an abelian object in $\mathsf{SKB}$ is necessarily of the form $(A,+,+)$ with $(A,+)$ abelian.
\end{proposition}
\proof
Straightforward, setting $\varphi(u,v)=u+v$ and using an Eckmann-Hilton argument.
\endproof
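The Eckmann-Hilton argument invoked in this proof can also be illustrated by brute force on a small set (this computation is only an illustration and is not part of the proof): any two binary operations sharing a two-sided unit and satisfying the interchange law necessarily coincide and are commutative.

```python
from itertools import product

# Brute-force illustration of the Eckmann-Hilton argument: on a 3-element
# set, any two binary operations sharing the unit e and satisfying the
# interchange law (a o b) * (c o d) = (a * c) o (b * d) coincide and are
# commutative.
S = range(3)
e = 0

def unital_ops():
    # enumerate all binary operations on S admitting e as a two-sided unit
    free = [(a, b) for a in S for b in S if a != e and b != e]
    for vals in product(S, repeat=len(free)):
        op = {**{(e, x): x for x in S}, **{(x, e): x for x in S},
              **dict(zip(free, vals))}
        yield op

ops = list(unital_ops())
ok, pairs = True, 0
for f in ops:
    for g in ops:
        interchange = all(f[g[a, b], g[c, d]] == g[f[a, c], f[b, d]]
                          for a in S for b in S for c in S for d in S)
        if interchange:
            pairs += 1
            ok &= f == g and all(f[a, b] == f[b, a] for a in S for b in S)
print(ok, pairs)
```

The check confirms that every interchange pair consists of a single commutative operation, exactly the Eckmann-Hilton conclusion.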
\begin{proposition}[$\mathsf{SKB}$ does satisfy {\rm (Huq=Smith)}]
Let $R$ and $S$ be two equivalence relations on an object $X\in \mathsf{SKB}$. The following conditions are equivalent:\\
{\rm (1)} $[I_R,I_S]=0$;\\
{\rm (2)} $[U_0(I_R),U_0(I_S)]=0$, $[U_1(I_R),U_1(I_S)]=0$ and
$x*y^{-*}*z=x\circ y^{-\circ}\circ z$ for all $xRySz$;\\
{\rm (3)} $[R,S]=0$.
\end{proposition}
\proof
The identity $x*y^{-*}*z=x\circ y^{-\circ}\circ z$ is equivalent to
$$y^{-\circ}\circ z=x^{-\circ}\circ(x*y^{-*}*z)=(x^{-\circ}\circ x)*(x^{-\circ})^{-*}*(x^{-\circ}\circ(y^{-*}*z))=$$ $$\qquad\qquad\qquad=(x^{-\circ})^{-*}*(x^{-\circ}\circ(y^{-*}*z)),$$ which, in turn, is equivalent to $$\lambda_{x^{-\circ}}(y^{-*}*z)=y^{-\circ}\circ z.$$
\smallskip
Suppose $xRy$. Setting $z=y*v,\; v\in I_S$, this is equivalent to $\lambda_{x^{-\circ}}(v)=y^{-\circ}\circ (y*v)=\lambda_{y^{-\circ}}(v)$ by (\ref{doublech}). This in turn is equivalent to $\lambda_{y}\circ \lambda_{x^{-\circ}}(v)=\lambda_{y\circ x^{-\circ}}(v)=v$, $v\in I_S$.
Setting $y=u \circ x ,\; u\in I_R$, this is equivalent to $\lambda_u(v)=v$, $(u,v)\in I_R\times I_S$.
\smallskip
Now, by Proposition \ref{U,V}, $[I_R,I_S]=0$ is equivalent to: for all $(u,v)\in I_R\times I_S$, we get $\lambda_u(v)=v$, $[U_0(I_R),U_0(I_S)]=0$ and $[U_1(I_R),U_1(I_S)]=0$. So we get $[(1) \iff (2)]$.
\smallskip
Suppose (2). From $[U_0(I_R),U_0(I_S)]=0$, we know by Proposition \ref{U,V} that $p(x,y,z)=x*y^{-*}*z$ is a group homomorphism $(R {\overrightarrow\times}_X S,*)\to (X,*)$, and from $[U_1(I_R),U_1(I_S)]=0$ that $q(x,y,z)=x\circ y^{-\circ}\circ z$ is a group homomorphism $(R {\overrightarrow\times}_X S,\circ)\to (X,\circ)$. If $p=q$, this produces the desired $R {\overrightarrow\times}_{\!\! X} S \to X$ in $\mathsf{SKB}$ showing that $[R,S]=0$. Whence $[(2)\Rightarrow (3)]$. We have already noticed that the last implication $[(3)\Rightarrow (1)]$ holds in any pointed category.
\endproof
According to Theorem \ref{TTE}, we get the following:
\begin{corollary}\label{SHGpE}
Given any category $ \mathbb{E} $, the category $\mathsf{SKB} \mathbb{E} $ satisfies {\rm (Huq= Smith)}. This is the case in particular for the category $\mathsf{SKB} Top$ of topological skew braces.
\end{corollary}
\subsection{Homological aspects of commutators}
\subsubsection{Abstract Huq commutator}
Suppose now that $ \mathbb{E} $ is any finitely cocomplete regular unital category.
In this setting, we gave in \cite{B10}, for any pair $u: U \rightarrowtail X$, $v : V \rightarrowtail X$ of subobjects, the construction of a regular epimorphism $\psi_{(u,v)}$ which universally makes their direct images cooperate. Indeed consider the following diagram, where $Q[u,v]$ is the limit of the plain arrows:
$$
\xymatrix@=25pt{
& U \ar@{ >->}[dl]_{l_U} \ar@{ >->}[dr]^{u} & & \\
U \times V \ar@{.>}[r]_{\bar{\psi}_{(u,v)}} & Q[u,v] & X \ar@{.>}[l]^{\psi_{(u,v)}} \\
& V \ar@{ >->}[ul]^{r_V} \ar@{ >->}[ur]_{v} & &
}
$$
The map $\psi_{(u,v)}$ is necessarily a regular epimorphism and the map $\bar{\psi}_{(u,v)}$ induces the cooperator of the direct images of the pair $(u,v)$ along $\psi_{(u,v)}$. This regular epimorphism $\psi_{(u,v)}$ measures the lack of cooperation of the pair $(u,v)$ in the sense that the map $\psi_{(u,v)}$ is an isomorphism if and only if $[u,v]=0$. We then get a symmetric tensor product: $I_{R[\psi_{(-,-)}]}: \mathsf{Mon}_X\times \mathsf{Mon}_X \to \mathsf{Mon}_X$ of preordered sets.
Since the map $\psi_{(u,v)}$ is a regular epimorphism, its distance from being an isomorphism is its distance from being a monomorphism, which is measured by the kernel equivalence relation $R[\psi_{(u,v)}]$. Accordingly, in the homological context, it is meaningful to introduce the following definition, see also \cite{MM}:
\begin{definition}
Given any finitely cocomplete homological category $ \mathbb{E} $ and any pair $(u,v)$ of subobjects of $X$, their abstract Huq commutator {\rm $[u,v]$} is defined as $I_{R[\psi_{(u,v)}]}$ or equivalently as the kernel map $k_{\psi_{(u,v)}}$.
\end{definition}
By this universal definition, in the category $\mathsf{Gp}$, this $[u,v]$ coincides with the usual $[U,V]$.
\subsubsection{Abstract Smith commutator}\label{asc}
Suppose $ \mathbb{E} $ is a regular category. Then, given any regular epimorphism $f: X \twoheadrightarrow Y$ and any equivalence relation $R$ on $X$, the direct image $f(R) \rightarrowtail Y\times Y$ of $R \rightarrowtail X\times X$ along the regular epimorphism $f\times f: X\times X \twoheadrightarrow Y\times Y$ is reflexive and symmetric, but generally not transitive. Now, when $ \mathbb{E} $ is a regular Mal'tsev category, this direct image $f(R)$, being a reflexive relation, is an equivalence relation.
Suppose moreover that $ \mathbb{E} $ is finitely cocomplete. Let $(R,S)$ be a pair of equivalence relations on $X$,
and consider the following diagram, where $Q[R,S]$ is the colimit of the plain arrows:
$$
\xymatrix@=25pt{
& R \ar@{ >->}[dl]_{l_R} \ar[dr]^{d_{0,R}} & & \\
R \times_X S \ar@{.>}[r]_{\bar{\chi}_{(R,S)}} & Q[R,S] & X \ar@{.>}[l]^{\chi_{(R,S)}} \\
& S \ar@{ >->}[ul]^{r_S} \ar[ur]_{d_{1,S}} & &
}
$$
Notice that, here, in consideration of the pullback defining $R \overrightarrow{\times}_X S$, the roles of the projections $d_0$ and $d_1$ have been interchanged. This map $\chi_{(R,S)}$ measures the lack of connection between $R$ and $S$, see \cite{B10}:
\begin{theorem}
Let $ \mathbb{E} $ be a finitely cocomplete regular Mal'tsev category. Then the map $\chi_{(R,S)}$ is a regular epimorphism and is the universal one which makes the direct images $\chi_{(R,S)}(R)$ and $\chi_{(R,S)}(S)$ connected. The equivalence relations $R$ and $S$ are connected (i.e. $[R,S]=0$) if and only if $\chi_{(R,S)}$ is an isomorphism.
\end{theorem}
Since the map $\chi_{(R,S)}$ is a regular epimorphism, its distance from being an isomorphism is its distance from being a monomorphism, which is exactly measured by its kernel equivalence relation $R[\chi_{(R,S)}]$. Accordingly, we give the following definition:
\begin{definition}
Let $ \mathbb{E} $ be any finitely cocomplete regular Mal'tsev category. Given any pair $(R,S)$ of equivalence relations on $X$, their abstract Smith commutator $[R,S]$ is defined as the kernel equivalence relation $R[\chi_{(R,S)}]$ of the map $\chi_{(R,S)}$.
\end{definition}
In this way, we define a symmetric tensor product $[-,-]=R[\chi_{(-,-)}]: \ensuremath{\mathrm{Equ}}_X\times \ensuremath{\mathrm{Equ}}_X \to \ensuremath{\mathrm{Equ}}_X$ of preordered sets. It is clear that, with this definition, we get $[R,S]=0$ in the sense of connected pairs if and only if $[R,S]=\Delta_X$ (the identity equivalence relation on $X$) in the sense of this new definition. This is coherent since $\Delta_X$ is effectively the $0$ of the preorder $ \ensuremath{\mathrm{Equ}}_X$. Let us recall the following:
\begin{proposition}\label{dim}
Let $ \mathbb{E} $ be a pointed regular Mal'tsev category. Let $f : X \twoheadrightarrow Y$ be a regular epimorphism and $R$ an equivalence relation on $X$. Then the direct image $f(I_R)$ of the normal subobject $I_R$ along $f$ is $I_{f(R)}$.
\end{proposition}
From that, we can assert the following:
\begin{proposition}
Let $ \mathbb{E} $ be a finitely cocomplete homological category. Given any pair $(R,S)$ of equivalence relations on $X$, we have {\em $[I_R,I_S] \subset I_{[R,S]}$}.
\end{proposition}
\proof
From (\ref{h=s}), we get $$ [\chi_{(R,S)}(R),\chi_{(R,S)}(S)]=0 \;\; \Rightarrow \;\; [I_{\chi_{(R,S)}(R)},I_{\chi_{(R,S)}(S)}]=0 $$
By the previous proposition we have: $$0=[I_{\chi_{(R,S)}(R)},I_{\chi_{(R,S)}(S)}]=[ \chi_{(R,S)}(I_R),\chi_{(R,S)}(I_S)].$$
Accordingly, by the universal property of the regular epimorphism $\psi_{(I_R,I_S)}$ we get a factorization:
$$
\xymatrix@=20pt{
X \ar@{->>}[rr]^{\psi_{(I_R,I_S)}} \ar@{->>}[rrd]_{\chi_{(R,S)}}&& Q[I_R,I_S] \ar@{.>}[d]\\
&& Q[R,S]
}
$$
which shows that $[I_R,I_S]\subset I_{[R,S]}$.
\endproof
\begin{theorem}
In a finitely cocomplete homological category $ \mathbb{E} $
the following conditions are equivalent:\\
{\rm (1)} $ \mathbb{E} $ satisfies {\rm (Huq=Smith)};\\
{\rm (2)} {\em $[I_R,I_S]= I_{[R,S]}$} for any pair $(R,S)$ of equivalence relations on $X$.
Under any of these conditions, the regular epimorphisms $\chi_{(R,S)}$ and $\psi_{(I_R,I_S)}$ do coincide.
\end{theorem}
\proof
Suppose (2). Then $[I_R,I_S]=0$ means that $\psi_{(I_R,I_S)}$ is an isomorphism, so that $0=[I_R,I_S]= I_{[R,S]}$. In a homological category $I_{[R,S]}=0$ is equivalent to $[R,S]=0$. Conversely, suppose (1). We have to find a factorization:
$$
\xymatrix@=25pt{
X \ar@{->>}[rr]^{\psi_{(I_R,I_S)}} \ar@{->>}[rrd]_{\chi_{(R,S)}}&& Q[I_R,I_S] \\
&& Q[R,S] \ar@{.>}[u]
}
$$
namely to show that $[\psi_{(I_R,I_S)}(R),\psi_{(I_R,I_S)}(S)]=0$. By (1) this is equivalent to $0=[I_{\psi_{(I_R,I_S)}(R)},I_{\psi_{(I_R,I_S)}(S)}]$, namely to $0=[\psi_{(I_R,I_S)}(I_R),\psi_{(I_R,I_S)}(I_S)]$ by Proposition \ref{dim}. This is true by the universal property of the regular epimorphism $\psi_{(I_R,I_S)}$.
\endproof
\subsection{Skew braces and their commutators}
Since the categories $\mathsf{SKB}$ and $\mathsf{SKB} Top$ are finitely cocomplete homological categories, all the results of the previous section concerning commutators do apply and, in particular, thanks to the {\rm (Huq=Smith)} condition, the two notions of commutator are equivalent. It remains now to make explicit the description of the Huq commutator.
\bigskip
We will determine a set of generators for the Huq commutator of two ideals in a skew brace:
\begin{proposition}\label{2.5} If $I$ and $J$ are two ideals of a left skew brace $(A,*,\circ)$, their Huq commutator $[I,J]$ is the ideal of $A$ generated by the union of the following three sets: \\ \noindent {\rm (1)} the set $\{\, i\circ j\circ(j\circ i)^{-\circ}\mid i\in I,\ j\in J\,\}$, (which generates the commutator $[I,J]_{(A,\circ)}$ of the normal subgroups $I$ and $J$ of the group $(A,\circ)$); \\ \noindent {\rm (2)} the set $\{\, i* j*(j*i)^{-*}\mid i\in I,\ j\in J\,\}$, (which generates the commutator $[I,J]_{(A,*)}$ of the normal subgroups $I$ and $J$ of the group $(A,*))$; and \\ \noindent {\rm (3)} the set $\{\,(i\circ j)*(i* j)^{-*}
\mid i\in I,\ j\in J\,\}$.\end{proposition}
\begin{proof} Assume that the mapping $\mu\colon I\times J\to A/K$, $\mu(i,j)=i*j*K$ is a skew brace morphism for some ideal $K$ of $A$. Then $$\begin{array}{l} (i\circ j)\circ K=(i\circ K)\circ(j\circ K)=(i* K)\circ(j*K)=\\ \qquad=\mu(i,1)\circ\mu(1,j)=\mu((i,1)\circ(1,j))=\mu(i,j)=\mu((1,j)\circ(i,1))=\\ \qquad=\mu(1,j)\circ\mu(i,1)=(j*K)\circ(i*K)=(j\circ K)\circ(i\circ K)=(j\circ i)\circ K.\end{array}$$ This proves that the set $(1)$ is contained in $K$.
Similarly, $$\begin{array}{l}(i* j)* K=(i* K)*(j* K)=\mu(i,1)*\mu(1,j)=\mu((i,1)*(1,j))=\mu(i,j)=\\ \qquad=\mu((1,j)*(i,1))=\mu(1,j)*\mu(i,1)=(j*K)*(i*K)=(j*i)*K.\end{array}$$ Thus the set $(2)$ is contained in $K$.
Also, $$\begin{array}{l}(i\circ j)*K=(i\circ j)\circ K=(i\circ K)\circ(j\circ K)=(i*K)\circ(j*K)= \\ \qquad=\mu(i,1)\circ\mu(1,j)=\mu((i,1)\circ(1,j))=\mu(i,j)=\mu((i,1)*(1,j))=\\ \qquad=\mu(i,1)*\mu(1,j)=(i*K)*(j*K)=(i*j)*K.\end{array}$$ Hence the set (3) is also contained in $K$.
Conversely, let $K$ be the ideal of $A$ generated by the union of the three sets. It is then very easy to check that the mapping $\mu\colon I\times J\to A/K$, $\mu(i,j)=i*j*K$ is a skew brace morphism.\end{proof}
In the literature, great attention has been paid to the study of the product $I\cdot J$ of two ideals $I,J$ of a (left skew) brace $(A,*,\circ)$. This product is taken with respect to the operation $\cdot$ in the brace $A$ defined, for every $x,y\in A$, by $x\cdot y=y^{-*}*\lambda_x(y)$. Then, for every $i\in I$ and $j\in J$, $i\cdot j=j^{-*}*\lambda_i(j)=j^{-*}*i^{-*}*(i\circ j)=(i*j)^{-*}*(i\circ j)$. Hence the ideal of $A$ generated by the set $I\cdot J$ of all products $i\cdot j$ coincides with the ideal of $A$ generated by the set (3) in the statement of Proposition~\ref{2.5}.
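As a concrete sanity check, the identity $i\cdot j=(i*j)^{-*}*(i\circ j)$ can be verified computationally in a small brace. The example below, the brace on $\mathbb{Z}/4$ with $a\circ b=a+b+2ab$, is an illustration chosen here and does not come from the text.

```python
# Check a . b = (a*b)^{-*} * (a o b) in a small concrete (two-sided) brace:
# A = Z/4 with * given by addition and a o b = a + b + 2ab (mod 4).
N = 4

def add(a, b):          # the group (A, *)
    return (a + b) % N

def neg(a):             # inverse for *
    return (-a) % N

def circ(a, b):         # the group (A, o)
    return (a + b + 2 * a * b) % N

def lam(a, b):          # lambda_a(b) = a^{-*} * (a o b)
    return add(neg(a), circ(a, b))

def dot(a, b):          # brace product a . b = b^{-*} * lambda_a(b)
    return add(neg(b), lam(a, b))

# the identity holds for every pair; in this brace dot(a, b) = 2ab (mod 4)
ok = all(dot(a, b) == add(neg(add(a, b)), circ(a, b))
         for a in range(N) for b in range(N))
print(ok)
```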
\bigskip
Clearly, for a left skew brace $A$, the Huq commutator $[I,J]$ is equal to the Huq commutator $[J,I]$. Also, $I\cdot J=(J\cdot I)^{-*}$, so that the left annihilator of $I$ in $(A,\cdot)$ is equal to the right annihilator of $I$ in $(A,\cdot)$. Moreover, the condition ``$I\cdot J=0$'' can be equivalently expressed as ``$J$ is contained in the kernel of the group homomorphism $\lambda|^I\colon (A,\circ)\to\mathsf{Aut}(I,*)$''.
\begin{proposition} For an ideal $I$ of a left skew brace $A$, there is a largest ideal of $A$ that centralizes $I$ (the {\em centralizer} of $I$).\end{proposition}
\begin{proof} The zero ideal centralizes $I$ and the union of a chain of ideals that centralize $I$ centralizes $I$. Hence there is a maximal element in the set of all the ideals of $A$ that centralize $I$. Now if $J_1$ and $J_2$ are two ideals of $A$, then $J_1*J_2=J_1\circ J_2$ is the join of $\{J_1,J_2\}$ in the lattice of all ideals of $A$. Now $J_1$ centralizes $I$ if and only if (1) $J_1\subseteq C_{(A,*)}(I)$, the centralizer of the normal subgroup $I$ in the group $(A,*)$; (2) $J_1\subseteq C_{(A,\circ)}(I)$, the centralizer of the normal subgroup $I$ in the group $(A,\circ)$; and (3) $J_1$ is contained in the kernel of the group morphism $\lambda|^I\colon (A,\circ)\to\mathsf{Aut}(I,*)$, which is a normal subgroup of $(A,\circ)$. Similarly for $J_2$. Hence if both $J_1$ and $J_2$ centralize $I$, then $J_1*J_2\subseteq C_{(A,*)}(I)$, and $J_1\circ J_2\subseteq C_{(A,\circ)}(I)\cap\ker\lambda|^I$. Therefore $J_1*J_2=J_1\circ J_2$ centralizes $I$. It follows that the set of all the ideals of $A$ that centralize $I$ is a lattice. Hence the maximal element in the set of all the ideals of $A$ that centralize $I$ is the largest element in that set.\end{proof}
In particular, the centralizer of the improper ideal of a left skew brace $A$ is the {\em center} of $A$.
\bigskip
A description of the free left skew brace over a set $X$ is available, in a language very different from ours, in \cite{Orza}.
\section{Introduction}
In recent years, the tensions between the Standard Model (SM) predictions and the experimental measurements in $b \rightarrow s l^+ l^-$ transitions have kept increasing: a series of measurements have shown $2-3 \sigma$ disagreements with the SM predictions, starting with the angular observables (in particular $P_5^\prime$) in the $B^0 \rightarrow K^{*0} \mu^+ \mu^-$ decay (see e.g. \cite{LHCb:2015svh}), followed by the measurement of ratios testing lepton flavour universality \cite{LHCb:2017avl,LHCb:2021trn,LHCb:2021lvy}. The LHCb collaboration has also recently measured the $B^+ \rightarrow K^{*+} \mu^+ \mu^- $ angular observables using the full Runs 1 \& 2 data set, corresponding to an integrated luminosity of 9 fb$^{-1}$ \cite{LHCb:2020gog}, which confirms the tensions previously observed in the similar neutral decay modes. Model-independent global fits to all available $b \rightarrow s l^+ l^-$ data seem to consistently indicate New Physics (NP) compatible with a single shift in $C_9$ from its SM value by about 25\% (see e.g.~\cite{Alguero:2021anc,Hurth:2021nsi,Geng:2021nhg,Bhom:2020lmk}).
In this context of strong and persisting flavour anomalies, it is pertinent to explore non-universal flavour models. One particularly promising framework is supersymmetry, namely the Minimal Supersymmetric Standard Model (MSSM).
Until now, most studies conducted in the MSSM imposed strong restrictions on its 105 free parameters because of computational challenges. Simplifications were made, for instance, by setting most of the parameters to zero or by assuming the so-called Minimal Flavour Violation (MFV) hypothesis. These approximations of the full MSSM have not proven sufficient to explain the $B$ anomalies so far. Moreover, no signal predicted by models such as the constrained MSSM (cMSSM, 5 free parameters) has been detected at $\sqrt{s} = 13$ TeV at the LHC, or elsewhere.
Therefore, in this study we present a more general approach to $b \rightarrow s l l$ transitions by studying the impact of a more general model, the phenomenological MSSM (pMSSM, 19 free parameters), while in addition evading the MFV assumption through the Mass Insertion Approximation (MIA) approach~\cite{Gabrielli:1995bd}.
We then discuss the contributions obtained and compare them to those of the pMSSM.
\section{Theoretical Set-up}
The pMSSM is known to be unable to shift $C_9$ sufficiently without violating the constraints on $C_7$ coming from $b \rightarrow s \gamma$ \cite{Mahmoudi:2014mja}. To evade this limitation, we relax the MFV hypothesis and use the MIA to turn to Non-Minimal Flavour Violating (NMFV) scenarios in this extended pMSSM, by allowing mixing between squarks of the second and third generations. By doing so, additional 1-loop diagrams contribute to $C_7$, $C_9$ and $C_{10}$, as displayed in Figures~\ref{fig:sfigs} and~\ref{fig:box}.\\
The MIA then gives a simple way of expressing all relevant quantities~\cite{Dedes:2015twa} (amplitudes, Wilson coefficients, etc.) in terms of the flavour-violating parameters: in our case, the off-diagonal entries of the squark soft-breaking mass matrices. \\
Starting from the squark soft breaking mass matrix and following the conventions of~\cite{Allanach:2008qq}:
\begin{equation}
\label{eq:squark_mass_matrices}
\begin{split}
\mathcal{M}^2_{\boldsymbol{\tilde{d}}} ~&=~
\begin{pmatrix}
M^2_{\tilde{Q}} + m^2_{d} + D_{\tilde{d},L}
~~&~~
\frac{v_d}{\sqrt{2}} T_d^\dag - m_d \mu \tan\beta \\
\frac{v_d}{\sqrt{2}} T_d - m_d \mu^* \tan\beta
~~&~~
M^2_{\tilde{D}} + m^2_{d} + D_{\tilde{d},R}
\end{pmatrix} \, \\
\mathcal{M}^2_{\boldsymbol{\tilde{u}}} ~&=~
\begin{pmatrix}
V_{\rm CKM} M^2_{\tilde{Q}} V^{\dag}_{\rm CKM} + m^2_{u} + D_{\tilde{u},L}
~~&~~
\frac{v_u}{\sqrt{2}} T_u^\dag - m_u \frac{\mu}{\tan\beta} \\
\frac{v_u}{\sqrt{2}} T_u - m_u \frac{\mu^*}{\tan\beta}
~~&~~
M^2_{\tilde{U}} + m^2_{u} + D_{\tilde{u},R}
\end{pmatrix}
\end{split}
\end{equation}
we define the dimensionless Mass Insertion (MI) parameters as:
\begin{equation}
(\delta^{\tilde{f}}_{ij})_{AB} \equiv \frac{(\Delta^{\tilde{f}}_{ij})_{AB}}{M^2_{sq}}
\end{equation}
where $(\Delta^{\tilde{f}}_{ij})_{AB}$ is an off-diagonal element of the $\tilde{f}=\tilde{u},\tilde{d}$ squark squared mass matrix, while the indices $(i,j) \in \{2,3\}$ span generation space, $(A,B)\in \{L,R\}$ are chirality indices, and $M_{sq}$ is the first and second generations' average squark mass, following the conventions of~\cite{Lunghi:1999uk}.
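A minimal numerical illustration of why the MI parameters must be bounded (the squark mass-squared matrix has to stay positive definite, i.e. no tachyonic squarks) can be given on a single $2\times2$ block; the block structure and the numerical mass below are assumptions of this sketch, not values taken from the text.

```python
# Toy illustration (assumptions of this sketch: a single 2x2 block of the
# squark mass-squared matrix with degenerate diagonal entries Msq^2 and one
# off-diagonal mass insertion delta; Msq is a hypothetical value).
Msq = 1500.0  # GeV

def squared_masses(delta):
    # eigenvalues of Msq^2 * [[1, delta], [delta, 1]]
    return (Msq**2 * (1.0 - delta), Msq**2 * (1.0 + delta))

# both eigenvalues stay positive (no tachyonic squarks) for |delta| < 1;
# the theoretical bound |delta| <= 0.85 used later in the text sits inside
# this window.
deltas = [i / 100.0 for i in range(-85, 86)]
ok = all(min(squared_masses(d)) > 0.0 for d in deltas)
print(ok)
```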
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.3\linewidth]{MIA_tL_tR_c_chargino_penguin.png}
\hspace{1cm}
\includegraphics[width=0.3\linewidth]{MIA_charg_zgamm_t_c.png}\\
\vspace{1cm}
\includegraphics[width=0.3\linewidth]{MIA_Zgamma_penguin_bL_bR_s_L.png}
\hspace{1cm} \vspace{1cm}
\includegraphics[width=0.3\linewidth]{MIA_Z_penguin_t_s.png}
\caption[]{Some of the relevant penguin diagrams for $b\rightarrow
s \ell^+ \ell^-$. The red cross
indicates a Mass Insertion. Top diagrams are based on chargino interactions.
The bottom ones consider gluino interactions.
}
\protect\label{fig:sfigs}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.3]{MIA_box_tc_charg.png}
\caption[]{Relevant box diagram for $b\rightarrow s \ell^+
\ell^-$. The red cross
indicates a Mass Insertion. }
\protect\label{fig:box}
\end{center}
\end{figure}
\section{Results}
We perform a uniform sampling of all the 19 parameters of the pMSSM and the 9 additional MI parameters. We then compute the spectra at the electroweak scale using \texttt{Softsusy}~\cite{Allanach:2001kg} and the pMSSM observables (including the Wilson coefficients) with the publicly available code \texttt{SuperIso}~\cite{Mahmoudi:2008tp}. The additional Wilson coefficient NMFV contributions are computed using the analytical expressions in~\cite{Lunghi:1999uk} including their corrections in~\cite{Behring:2012mv}, and cross-checked with the full calculations performed with \texttt{MARTY}~\cite{Uhlrich:2020ltd}.
The results for the Wilson coefficients are then run down to the $m_b$ scale. To better understand the effect of the MI parameters, we also compute the pMSSM Wilson coefficients for the same points by turning the MI parameters to zero. This is shown in Figure~\ref{fig:C9C7}. \\
As expected, the pMSSM is not able to significantly shift $C_9$, whereas we can see an impressive oyster-shaped spreading of the $C_9,C_7$ distribution in the NMFV case.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\linewidth]{C9C7_NMFV_oyster_newcolors.png}
\caption{Distribution of the sampled points in the $(C_9,C_7)$ plane. The pMSSM and NMFV distributions are shown in red and blue respectively. Each point in the NMFV is recast to the pMSSM by setting the MIs to zero. In the SM, we have: $C_9^{\rm SM}(\mu_b) = 4.2$.}
\label{fig:C9C7}
\end{figure}
In Figure~\ref{fig:C9bf}, we show the shifts of $C_9$ and $C_7$ from their SM values, imposing the theoretical bounds on the MI parameters that ensure no tachyonic squarks appear in the full spectrum, i.e. all MI parameters must satisfy $ (\delta^{\tilde{f}}_{ij})_{AB} \in [-0.85,0.85]$~\cite{DeCausmaecker:2015yca}. We also impose the LEP constraints on sparticle masses.
The best fit ranges are taken from~\cite{Hurth:2021nsi}. Again, it is clear that the pMSSM fails to fully account for the $b\rightarrow s l l $ anomalies, while the NMFV scenarios are compatible at the 1$\sigma$ level. In Table~\ref{tab:wcf}, we show the Wilson coefficients $C_7, C_9, C_{10}$ for the two best-fit points obtained in our study. These scenarios are compatible with a full explanation of the $B$ anomalies.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth]{C9C7_bf_newcolors.png}
\caption{Distribution of the sampled points in the $(\delta C_9, \delta C_7)$ plane, with $\delta C_i = C_i^{\rm NP} - C_i^{\rm SM}$. The best fit regions are shown as orange vertical and horizontal patches.}
\label{fig:C9bf}
\end{figure}
Following the recent study of the recast of LHC mass limits in the presence of flavour mixing~\cite{Chakraborty:2018rpn}, the corresponding squark and Lightest Supersymmetric Particle (LSP) masses can escape the current mass limits. In addition, severely degenerate LSP scenarios such as the one found here (with $\Delta m(\chi^\pm_1,\chi^0_1) \sim0.14 $ GeV) pose an additional experimental challenge for detection in collider events, due to extremely soft final states. \\
\begin{table}[ht]
\centering
\begin{tabular}{rrr}
\toprule
$C_7(\mu_b)$ & $C_9(\mu_b)$ & $C_{10}(\mu_b)$ \\
\midrule
-0.233 & 3.119 & -3.993 \\
-0.243 & 3.050 & -3.867 \\
\bottomrule
\end{tabular}
\caption{Wilson coefficient values at the $b$ quark mass scale for the best-fit matching points.}
\label{tab:wcf}
\end{table}
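As a quick arithmetic cross-check, the best-fit $C_9$ values in Table~\ref{tab:wcf} indeed correspond to the $\sim$25\% downward shift of $C_9$ quoted in the introduction, taking $C_9^{\rm SM}(\mu_b)=4.2$ from the caption of Figure~\ref{fig:C9C7}:

```python
# Arithmetic cross-check: the best-fit C9 values of Table 1 reproduce the
# ~25% downward shift of C9 from its SM value quoted in the introduction
# (C9_SM(mu_b) = 4.2, taken from the figure caption in the text).
C9_SM = 4.2
best_fit_C9 = [3.119, 3.050]
shifts = [(C9_SM - c9) / C9_SM for c9 in best_fit_C9]
print([round(x, 3) for x in shifts])  # fractional shifts, roughly 0.26-0.27
```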
\section{Conclusion}
We presented a first study of the effect of NMFV scenarios on the pMSSM contributions to the Wilson coefficients relevant for the $b \rightarrow s l l $ anomalies. The NMFV scenarios allow these coefficients to be shifted enough to fully explain the anomalies. Imposing theoretical constraints on the flavour-violating parameters leaves compatible benchmark scenarios to explore further.
This clearly shows the interest of NMFV models with respect to more constrained models such as the cMSSM and the pMSSM. These first results are promising and indicate the necessity of investigating such models in the future, as more and more experimental measurements of lepton flavour universality violating observables are expected.
\section*{\normalsize\bf 1. Introduction}
\hspace{0.6cm} In every normal cell, there is a protective
mechanism against tumoral degeneration. This mechanism is based on
the P53 network. P53, also known as ``the guardian of the genome'', is a gene that codes a protein in order to regulate the cell cycle. The name is due to its molecular mass: it is the 53 kilodalton fraction of cell proteins. The Mdm2 gene plays a very important role in the P53 network. It regulates the levels of intracellular P53
protein concentration through a feedback loop. Under normal
conditions the P53 levels are kept very low. When there is DNA damage, the level of P53 protein rises: a prolonged elevation shifts the cell to apoptosis, while a short elevation arrests the cell cycle and starts the repair process. The first pathway protects the cell from tumoral
transformation when there is massive DNA damage that cannot be repaired, and the second pathway protects a number of important cells (neurons, myocardial cells) from death after DNA damage. In these cells the first pathway, apoptosis, is not an option because they do not divide in adult life and their importance is obvious.
Due to its major implication in cancer prevention and due to the
actions described above, P53 has been intensively studied in the
last two decades.
Over the years, several models describing the interaction between P53 and Mdm2 have been studied. We mention some of them in references [2], [3], [4], [6], [7], [8], [9], [10]. This paper gives a mathematical approach to the model described in [10]. The authors of paper [10] make a molecular energy calculation based on classical force fields, and they also use chemical reaction constants from the literature. Their results, obtained by simulations, are in accordance with the experimental behavior of the P53-Mdm2 complex, but they lack the mathematical analysis which we develop in this paper. We analyze the Hopf bifurcation with the time delay as a bifurcation parameter, using the methods from [1], [5], [7].
The paper is organized as follows. In section 2 we present the mathematical model and study the existence of the stationary state. In section 3, we discuss the local stability of the stationary state of system (2) and investigate the existence of the Hopf bifurcation for system (2), using the time delay as the bifurcation parameter. In section 4, the direction of the Hopf bifurcation is analyzed according to the normal form theory and the center manifold theorem introduced by Hassard [5]. Numerical simulations confirming the theoretical results are presented in section 5. Finally, some conclusions are drawn.
\section*{\normalsize\bf 2. The mathematical model and the stationary state}
\hspace{0.6cm}
The state variables are $y_1(t)$ and $y_2(t)$, the total number of P53 molecules and the total number of Mdm2 proteins, respectively.
The interaction function between P53 and MDM2 is
$f:{\rm{I \! R}}^2_+\rightarrow{\rm{I \! R}}$ given by [10]:
$$f(y_1, y_2)=\displaystyle\frac{1}{2}(y_1+y_2+k-\sqrt
{(y_1+y_2+k)^2-4y_1y_2}).\eqno(1)$$
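Numerically, $f$ in (1) can be checked to be the smaller root of the quadratic $x^2-(y_1+y_2+k)x+y_1y_2=0$, satisfying $0\le f\le \min(y_1,y_2)$, consistent with reading $f$ as the amount of bound P53-Mdm2 complex (this interpretation is ours, not stated explicitly here). A short sketch:

```python
import math, random

# Sketch: f(y1, y2) in (1) is the smaller root of
# x^2 - (y1 + y2 + k) x + y1*y2 = 0 and satisfies 0 <= f <= min(y1, y2),
# as expected for the amount of bound complex (our reading of the model).
def f(y1, y2, k):
    s = y1 + y2 + k
    return 0.5 * (s - math.sqrt(s * s - 4.0 * y1 * y2))

random.seed(0)
ok = True
for _ in range(1000):
    y1, y2 = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    k = random.uniform(0.01, 5.0)
    x = f(y1, y2, k)
    ok &= abs(x * x - (y1 + y2 + k) * x + y1 * y2) < 1e-8  # root of quadratic
    ok &= -1e-12 <= x <= min(y1, y2) + 1e-12               # physical range
print(ok)
```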
The parameters of the model are: $s$ the production rate of P53; $a$ the degradation rate of P53 (through the ubiquitin pathway), which is also the rate at which Mdm2 re-enters the loop; $b$ the spontaneous decay rate of P53; $d$ the decay rate of the Mdm2 protein; $k_1$ the dissociation constant of the complex P53-Mdm2; $c$ the constant of proportionality between the production rate of the Mdm2 gene and the probability that the complex P53-Mdm2 is built. These parameters are positive numbers.
The mathematical model is described by the following differential
system with time delay [10]:
$$\begin{array}{l}
\vspace{0.1cm}
\dot y{}_1(t)=s-af(y_1(t), y_2(t))-by_1(t),\\
\dot y{}_2(t)=cg(y_1(t-\tau),
y_2(t-\tau))-dy_2(t),\end{array}\eqno(2)$$ where $f$ is given by
(1) and $g:{\rm{I \! R}}^2_+\rightarrow{\rm{I \! R}}$, is
$$g(y_1, y_2)=\displaystyle\frac{y_1-f(y_1, y_2)}{k_1+y_1-f(y_1,y_2)}.\eqno(3)$$
For the study of the model (1) we consider the following initial
values:
$$y_1(\theta)=\varphi_1(\theta), \; y_2(\theta)=\varphi_2(\theta), \; \theta\in[-\tau,0],
$$ where $\varphi_1, \varphi_2:[-\tau, 0]\rightarrow{\rm{I \! R}}_+$ are differentiable functions.
In the second equation of (2) there is a delay, because the transcription and translation of Mdm2 take some time after P53 has bound to the gene.
The stationary state $(y_{10}, y_{20})\in {\rm{I \! R}}^2_+$ is given by the
solution of the system of equations:
$$\begin{array}{l}
s-af(y_1, y_2)-by_1=0,\\
cg(y_1,y_2)-dy_2=0.\end{array}\eqno(4)$$
From (1), (3) and (4) we deduce that the stationary state can be
found through the intersection of the curves:
$$y_2=f_1(y_1), \quad y_2=f_2(y_1),\eqno(5)$$
where $f_1, f_2:{\rm{I \! R}}_+\rightarrow{\rm{I \! R}}$ are given by:
$$\begin{array}{l}
f_1(y_1)=\displaystyle\frac{c(y_1(a+b)-s)}{d(k_1a+(a+b)y_1-s)},\\
f_2(y_1)=\displaystyle\frac{(s-by_1)(ka+(a+b)y_1-s)}{a((a+b)y_1-s)}.\end{array}\eqno(6)$$
{\bf Proposition 1}. {\it The functions $f_1, f_2$ from (6) have
the following properties:
(i) $f_1$ is strictly increasing, $f_2$ is strictly decreasing on
$\left (\displaystyle\frac{s}{a+b}, \displaystyle\frac{s}{b}\right )$;
(ii) There is a unique value $y_{10}\in \left (\displaystyle\frac{s}{a+b},
\displaystyle\frac{s}{b}\right )$ so that $f_1(y_{10})=f_2(y_{10})$, where
$y_{10}$ is the solution of the equation $\varphi(x)=0$, from
$\left (\displaystyle\frac{s}{a+b}, \displaystyle\frac{s}{b}\right )$ where}
$$\begin{array}{l}
\varphi(x)\!=\!ac((a\!+\!b)x\!-\!s)^2\!-\!d(k_1a\!+\!(a\!+\!b)x\!-\!s)(ka\!+\!(a\!+\!b)x\!-\!s)(s\!-\!bx).\end{array}\eqno(7)$$
{\bf Proof}. (i) From (6) we have:
$$\begin{array}{l}
f'_1(y_1)=\displaystyle\frac{(a+b)ack_1}{d(k_1a+(a+b)y_1-s)^2},\\
f'_2(y_1)=-\displaystyle\frac{b}{a}-\displaystyle\frac{bk}{(a+b)y_1-s}-\displaystyle\frac{k(a+b)(s-by_1)}{((a+b)y_1-s)^2}.\end{array}\eqno(8)$$
For $y_1\in \left (\displaystyle\frac{s}{a+b}, \displaystyle\frac{s}{b}\right )$, the
relations (8) lead to $f'_1(y_1)>0$, $f'_2(y_1)<0$, so that $f_1$
is strictly increasing and $f_2$ is strictly decreasing.
(ii) By (i) there is $y_{10}\in \left (\displaystyle\frac{s}{a+b},
\displaystyle\frac{s}{b}\right )$ so that $f_1(y_{10})=f_2(y_{10}).$ From (6)
and (7) it results that:
$$\varphi(y_1)=f_{12}(y_1)h(y_1),$$ where
$$\begin{array}{l}
h(y_1)=ad((a+b)y_1-s)((a+b)y_1+k_1a-s)\\
f_{12}(y_1)=f_1(y_1)-f_2(y_1).\end{array}$$ On $\left
(\displaystyle\frac{s}{a+b}, \displaystyle\frac{s}{b}\right )$, the functions $f_{12}$ and
$h$ are strictly increasing and consequently $\varphi$ is strictly
increasing. Because
$\varphi(\displaystyle\frac{s}{a+b})=-\displaystyle\frac{a^3dkk_1s}{a+b}<0$ and
$\varphi(\displaystyle\frac{s}{b})=\displaystyle\frac{a^3cs^2}{b^2}>0$ we can conclude that
equation $\varphi(x)=0$ has a unique solution $y_{10}$ on $\left
(\displaystyle\frac{s}{a+b}, \displaystyle\frac{s}{b}\right )$.
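Proposition 1(ii) can be illustrated numerically: with the endpoint signs $\varphi(s/(a+b))<0<\varphi(s/b)$ established in the proof, a bisection locates $y_{10}$. The parameter values below are hypothetical, chosen only for this sketch.

```python
# Numerical illustration of Proposition 1(ii): locate the unique root y10 of
# phi(x) = 0 on (s/(a+b), s/b) by bisection. All parameter values are
# hypothetical, chosen only for this sketch.
s, a, b, c, d, k, k1 = 1.0, 0.8, 0.3, 1.2, 0.9, 0.5, 0.4

def phi(x):
    u = (a + b) * x - s  # the quantity (a+b)x - s appearing in (7)
    return a * c * u**2 - d * (k1 * a + u) * (k * a + u) * (s - b * x)

lo, hi = s / (a + b), s / b
assert phi(lo) < 0 < phi(hi)  # the endpoint signs computed in the proof
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
y10 = 0.5 * (lo + hi)
print(y10)
```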
\section*{\normalsize\bf{3. The analysis of the stationary state and the existence of the Hopf bifurcation.}}
\hspace{0.6cm}
We consider the following translation:
$$y_1=x_1+y_{10}, y_2=x_2+y_{20}$$ and system (2) can be
expressed as:
$$\begin{array}{l}
\vspace{0.1cm}
\dot x{}_1(t)=s-af(x_1(t)+y_{10}, x_2(t)+y_{20})-b(x_1(t)+y_{10}),\\
\vspace{0.1cm}
\dot x{}_2(t)=cg(x_1(t-\tau)+y_{10}, x_2(t-\tau)+y_{20})-d(x_2(t)+y_{20}).\\
\end{array}\eqno(9)$$
System (9) has a unique stationary state $(0,0)$. To investigate
the local stability of the equilibrium state we linearize system
(9). Let $u_1(t)$ and $u_2(t)$ be the linearized system variables.
Then (9) is rewritten as:
$$\dot U(t)=AU(t)+BU(t-\tau),\eqno(10)$$
where
$$A\!\!=\!\!\left(\!\!\!\!\begin{array}{cc}
\vspace{0.2cm}
-(b+a\rho_{10}) & -a\rho_{01} \\
\vspace{0.2cm}
0 & -d\\
\end{array}\!\!\!\!\right)\!\!, B\!\!=\!\!\left(\!\!\!\!\begin{array}{cc}
\vspace{0.2cm}
0 & 0 \\
\vspace{0.2cm} c\gamma_{10} &
c\gamma_{01}\end{array}\!\!\!\!\right)\eqno(11)$$ with
$U(t)=(u_1(t), u_2(t))^T$ and $\rho_{10}$, $\rho_{01}$, $\gamma_{10}$, $\gamma_{01}$ are the values of the first order partial derivatives (with respect to the first and the second argument, respectively) of the functions:
$$f(x,y)=\displaystyle\frac{1}{2}(x+y+k-\sqrt{(x+y+k)^2-4xy})$$
and $$g(x,y)=\displaystyle\frac{x-f(x,y)}{k_1+x-f(x,y)}$$ evaluated at
$(y_{10}, y_{20})$.
The characteristic equation corresponding to system (10) is
$\Delta(\lambda, \tau)=\det(\lambda I-A-e^{-\lambda \tau}B)=0$
which leads to:
$$\lambda^2+p_1\lambda+p_0-(q_1\lambda+q_0) e^{-\lambda \tau}=0,\eqno(12)$$
where
$$\begin{array}{l}
p_1=b+d+a\rho_{10}, \quad p_0=d(b+a\rho_{10}), \quad q_1=c\gamma_{01},\\
q_0=c\gamma_{01}(b+a\rho_{10})-ac\rho_{01}\gamma_{10}
.\end{array}$$
If there is no delay, the characteristic equation (12) becomes:
$$\Delta(\lambda, 0)=\lambda^2+(p_1-q_1)\lambda+p_0-q_0.\eqno(13)$$
Then, the stationary state $(0,0)$ is locally asymptotically
stable if
$$p_1-q_1>0, \quad p_0-q_0>0.\eqno(14)$$
When $\tau>0$, the stationary state is asymptotically stable if
and only if all roots of equation (12) have a negative real part.
We determine the interval $[0, \tau_0)$ on which the
stationary state remains asymptotically stable.
In what follows we study the existence of the Hopf bifurcation for
equation (10) choosing $\tau$ as the bifurcation parameter. We are
looking for the values $\tau_0$ of $\tau$ so that the stationary
state $(0,0)$ changes from local asymptotic stability to
instability or vice versa. We need the pure imaginary solutions of
equation (12). Let $\lambda=\pm i\omega_0$ be these solutions and
without loss of generality we assume $\omega_0>0$. Replacing
$\lambda=i\omega_0$ and $\tau=\tau_0$ in (12) we obtain:
$$\begin{array}{l}
q_0\cos\omega_0\tau_0+q_1\omega_0\sin\omega_0\tau_0=p_0-\omega_0^2\\
q_0\sin\omega_0\tau_0-q_1\omega_0\cos\omega_0\tau_0=-\omega_0p_1,\end{array}$$
which implies that
$$\tau_0\!=\!\displaystyle\frac{1}{\omega_0}\!\left (\!(2k\!+\!1)\pi\!+\!\arcsin\displaystyle\frac{p_1\omega_0}
{\sqrt{(p_0\!-\!\omega_0^2)^2\!+\!\omega_0^2p_1^2}}\!+\!\arcsin\displaystyle\frac{q_1\omega_0}
{\sqrt{(p_0\!-\!\omega_0^2)^2\!+\!\omega_0^2p_1^2}}\!\right )\eqno(15)$$
where $\omega_0$ is a solution of the equation obtained by squaring and
adding the two previous relations, which eliminates $\tau_0$:
$$\omega^4+(p_1^2-2p_0-q_1^2)\omega^2+p_0^2-q_0^2=0.$$
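As a numerical sanity check (ours, not part of the paper; the coefficient values below are illustrative and do not come from the model), one can recover $\omega_0$ and $\tau_0$ directly from the two trigonometric relations and verify that $\lambda=i\omega_0$ is indeed a root of the characteristic equation (12):

```python
import cmath
import math

# Illustrative coefficients (NOT the paper's values): chosen so that
# p0^2 - q0^2 < 0, which guarantees a positive root omega_0^2.
p1, p0, q1, q0 = 1.0, 1.0, 0.5, 2.0

# omega_0^2 solves u^2 + (p1^2 - 2*p0 - q1^2) u + p0^2 - q0^2 = 0,
# obtained by squaring and adding the two trigonometric relations.
B = p1 ** 2 - 2 * p0 - q1 ** 2
C = p0 ** 2 - q0 ** 2
u = (-B + math.sqrt(B ** 2 - 4 * C)) / 2
w = math.sqrt(u)

# Solve the 2x2 linear system in cos(w*tau), sin(w*tau) by Cramer's rule
# and take the smallest positive tau on this branch.
det = q0 ** 2 + (q1 * w) ** 2
cos_wt = (q0 * (p0 - u) + q1 * w * w * p1) / det
sin_wt = (-q0 * w * p1 + q1 * w * (p0 - u)) / det
tau = math.atan2(sin_wt, cos_wt) / w
if tau < 0:
    tau += 2 * math.pi / w

# lambda = i*w must annihilate the characteristic function at tau = tau_0.
lam = 1j * w
residual = lam ** 2 + p1 * lam + p0 - (q1 * lam + q0) * cmath.exp(-lam * tau)
```

The residual is zero up to rounding, confirming that the pair $(\omega_0,\tau_0)$ produced this way solves (12) on the imaginary axis.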
Now we have to calculate $Re\left (\displaystyle\frac{d\lambda}{d\tau}\right )$
evaluated at $\lambda=i\omega_0$ and $\tau=\tau_0$. We have:
$$\displaystyle\frac{d\lambda}{d\tau}|_{\lambda=i\omega_0,\tau=\tau_0}=M+iN$$ where
$$M=\displaystyle\frac{q_1^2\omega_0^6+2q_0^2\omega_0^4+(p_1^2q_0^2-p_0^2q_1^2-2p_0q_0^2)\omega_0^2}
{l_1^2+l_2^2}\eqno(16)$$ and
$$\begin{array}{lll}
N & = &
\displaystyle\frac{-q_1^2\tau_0\omega_0^7+\omega_0^5(q_0q_1-p_1q_1^2\!+\!\tau_0(2p_0q_1^2-p_1^2q_1^2-q_0^2))}{l_1^2+l_2^2}+\\
& + &
\displaystyle\frac{\omega_0^3(-p_1q_0^2\!-\!2p_0q_0q_1+p_1^2q_0q_1-p_0p_1q_1^2\!+\!\tau_0(\!-\!p_1^2q_0^2\!-\!q_1^2p_0^2\!+\!2p_0q_0^2))}{l_1^2+l_2^2}+\\
& + &
\displaystyle\frac{\omega_0(-p_0p_1q_0^2+p_0^2q_0q_1\!-\!\tau_0p_0^2q_0^2)}{l_1^2+l_2^2}
\end{array}\eqno(17)$$
with
$$\begin{array}{l}
l_1=-q_1\omega_0^2\!+\!p_1q_0\!-\!q_1p_0+\tau_0(-q_1p_1\omega_0^2\!-\!q_0\omega_0^2\!+\!q_0p_0),\\
l_2=2\omega_0q_0\!+\!\tau_0(-q_1\omega_0^3\!+\!p_0q_1\omega_0\!+\!p_1q_0\omega_0).\end{array}$$
We conclude with:
{\bf Theorem 1.} {\it If there is no delay, under condition (14)
system (10) has an asymptotically stable stationary state. If $\tau
>0$ and $p_1^2q_0^2-q_1^2p_0^2-2p_0q_0^2>0$ then there is
$\tau=\tau_0$ given by (15) so that $Re\left
(\displaystyle\frac{d\lambda}{d\tau}\right )_{\lambda=i\omega_0, \tau=\tau_0}>0$ and
therefore a Hopf bifurcation occurs at $(y_{10}, y_{20})$.}
\section*{{\normalsize\bf 4. Direction and stability of the Hopf bifurcation}}
In this section we describe the direction, stability and the
period of the bifurcating periodic solutions of system (2). The
method we use is based on the normal form theory and the center
manifold theorem introduced by Hassard [5]. Taking into account
the previous section, if $\tau=\tau_0$ then all roots of equation
(12) other than $\pm i\omega_0$ have negative real parts, and any
root of equation (12) of the form $\lambda(\tau)=\alpha(\tau)\pm
i\omega(\tau)$ satisfies $\alpha(\tau_0)=0$, $\omega(\tau_0)=\omega_0$ and
$\displaystyle\frac{d\alpha(\tau_0)}{d\tau}\neq0$. For notational convenience let
$\tau=\tau_0+\mu, \mu\in{\rm{I \! R}}$. Then $\mu=0$ is the Hopf bifurcation
value for system (2).
Expanding the right-hand sides of (9) in a Taylor series about
$(y_{10}, y_{20})$ up to the third order leads to:
$$\dot x(t)=Ax(t)+Bx(t-\tau)+F(x(t), x(t-\tau)),\eqno(18)$$ where
$x(t)=(x_1(t), x_2(t))^T$, $A$, $B$ are given by (11) and
$$F(x(t),x(t-\tau))=(F^1(x(t)),F^2(x(t-\tau)))^T,\eqno(19)$$ where
$$\begin{array}{lll}
F^1(x_1(t),x_2(t)) & = & -\displaystyle\frac{a}{2}[\rho_{20}x_1^2(t)+2\rho_{11}x_1(t)x_2(t)+\rho_{02}x_2^2(t)]-\\
& - &
\displaystyle\frac{a}{6}[\rho_{30}x_1^3(t)+3\rho_{21}x_1^2(t)x_2(t)+3\rho_{12}x_1(t)x_2^2(t)+\rho_{03}x_2^3(t)]\end{array}$$
$$\begin{array}{lll}
F^2(x_1(t-\tau),x_2(t-\tau)) & = & \displaystyle\frac{c}{2}[\gamma_{20}x_1^2(t-\tau)+2\gamma_{11}x_1(t-\tau)x_2(t-\tau)+\\
& + & \gamma_{02}x_2^2(t-\tau)]+\\
& + &
\displaystyle\frac{c}{6}[\gamma_{30}x_1^3(t-\tau)+3\gamma_{21}x_1^2(t-\tau)x_2(t-\tau)+\\
& + &
3\gamma_{12}x_1(t-\tau)x_2^2(t-\tau)+\gamma_{03}x_2^3(t-\tau)]\end{array}$$
and $\rho_{ij}$, $\gamma_{ij}$, $i,j=0,1,2,3$ are the values of
the second and third order derivatives of the functions $f(x,y)$
and $g(x,y)$, evaluated at $(y_{10}, y_{20})$.
Define the space of
continuous functions $C=C([-\tau_0,0],{\rm{I \! R}}^2).$
In $\tau=\tau_0+\mu, \mu\in{\rm{I \! R}}$, we regard $\mu$ as the
bifurcation parameter. For $\Phi\in C$ we define a linear
operator:
$$L(\mu)\Phi=A\Phi(0)+B\Phi(-\tau)$$ where A and B are given by
(11) and a nonlinear operator $F(\mu, \Phi)=F(\Phi(0)$,
$\Phi(-\tau))$, where $F(\Phi(0), \Phi(-\tau))$ is given by (19).
According to the Riesz representation theorem, there is a matrix
whose components are bounded variation functions, $\eta(\theta,
\mu)$ with $\theta\in[-\tau_0, 0]$ so that:
$$L(\mu)\Phi=\int\limits_{-\tau_0}^0d\eta(\theta,\mu)\Phi(\theta),
\quad \Phi\in C.$$
For $\Phi\in C^1([-\tau_0, 0], {\rm{I \! R}}^{2})$ we define:
$${\cal A}(\mu)\Phi(\theta)=\left\{\begin{array}{ll} \vspace{0.2cm}
\displaystyle\frac{d\Phi(\theta)}{d\theta}, & \theta\in[-\tau_0,0)\\
\int\limits_{-\tau_0}^0d\eta(t,\mu)\Phi(t), &
\theta=0,\end{array}\right.$$
$$R(\mu)\Phi(\theta)=\left\{\begin{array}{ll} \vspace{0.2cm}
0, & \theta\in[-\tau_0,0)\\
F(\mu, \Phi), & \theta=0.\end{array}\right.$$
We can rewrite (18) in the following vector form:
$$\dot u_t={\cal A}(\mu)u_t+R(\mu)u_t\eqno(20)$$
where $u=(u_1, u_2)^T$, $u_t=u(t+\theta)$ for $\theta\in[-\tau_0,
0]$.
For $\Psi\in C^1([0,\tau_0], {\rm{I \! R}}^{*2})$, we define the adjoint
operator ${\cal A}^*$ of ${\cal A}$ by:
$${\cal A}^*\Psi(s)=\left\{\begin{array}{ll} \vspace{0.2cm}
-\displaystyle\frac{d\Psi(s)}{ds}, & s\in(0, \tau_0]\\
\int\limits_{-\tau_0}^0d\eta^T(t,0)\Psi(-t), &
s=0.\end{array}\right.$$ We define the following bilinear form:
$$<\Psi(\theta),
\Phi(\theta)>=\bar\Psi^T(0)\Phi(0)-\int_{-\tau_0}^0\int_{\xi=0}^\theta\bar\Psi^T(\xi-\theta)d\eta(\theta)\Phi(\xi)d\xi,$$
where $\eta(\theta)=\eta(\theta,0)$.
We assume that $\pm i\omega_0$ are eigenvalues of ${\cal A}(0)$. Thus,
they are also eigenvalues of ${\cal A}^*$. We can easily obtain:
$$\Phi(\theta)=ve^{\lambda_1\theta},\quad \theta\in[-\tau_0, 0]\eqno(21)$$
where $v=(v_1, v_2)^T$,
$$v_1=-a\rho_{01}, v_2=\lambda_1+b+a\rho_{10},$$
is the eigenvector of ${\cal A}(0)$ corresponding to $\lambda_1=i\omega_0$
and
$$\Psi(s)=we^{\lambda_2s},\quad s\in[0,\tau_0]$$ where
$w=(w_1, w_2)$,
$$w_1=\displaystyle\frac{w_2c\gamma_{10}e^{-\lambda_1\tau}}{b+\lambda_1+a\rho_{10}}, w_2=\displaystyle\frac{1}{\bar\eta},$$
$$\eta=v_1\displaystyle\frac{c\gamma_{10}e^{-\lambda_2\tau}}{b+\lambda_2+a\rho_{10}}+v_2-
\displaystyle\frac{c\gamma_{10}v_1+c\gamma_{01}v_2}{\lambda_1^2}(-\tau_0\lambda_1e^{-\lambda_1\tau_0}-1+e^{-\lambda_1\tau_0})$$
is the eigenvector of ${\cal A}(0)$ corresponding to
$\lambda_2=-i\omega_0.$
We can verify that: $<\Psi(s), \Phi(s)>=1$, $<\Psi(s),
\bar\Phi(s)>=<\bar\Psi(s), \Phi(s)>=0$, $<\bar\Psi(s),
\bar\Phi(s)>=1.$
Using the approach of Hassard [5], we next compute the coordinates
to describe the center manifold $\Omega_0$ at $\mu=0$. Let
$u_t(\theta)=u(t+\theta)$, $\theta\in[-\tau_0,0]$, be the solution of equation
(20) when $\mu=0$ and
$$z(t)=<\Psi, u_t>,
\quad w(t,\theta)=u_t(\theta)-2Re\{z(t)\Phi(\theta)\}.$$
On the center manifold $\Omega_0$, we have:
$$w(t,\theta)=w(z(t), \bar z(t), \theta)$$ where
$$w(z,\bar z, \theta)=w_{20}(\theta)\displaystyle\frac{z^2}{2}+w_{11}(\theta)z\bar
z+w_{02}(\theta)\displaystyle\frac{\bar z^2}{2}+w_{30}(\theta)\displaystyle\frac{z^3}{6}+\dots$$
in which $z$ and $\bar z$ are local coordinates for the center
manifold $\Omega_0$ in the direction of $\Psi$ and $\bar\Psi$ and
$w_{02}(\theta)=\bar w_{20}(\theta)$. Note that $w$ and $u_t$ are real.
For solution $u_t\in \Omega_0$ of equation (20), as long as
$\mu=0$, we have:
$$\dot z(t)=\lambda_1z(t)+g(z, \bar z)\eqno(22)$$ where
$$\begin{array}{ll}
g(z, \bar z)& =\bar\Psi(0)F(w(z(t),\bar z(t), 0)+2Re(z(t)\Phi(0)))=\\
& =g_{20}\displaystyle\frac{z(t)^2}{2}+g_{11}z(t)\bar z(t)+g_{02}\displaystyle\frac{\bar
z(t)^2}{2}+g_{21}\displaystyle\frac{z(t)^2\bar z(t)}{2}+\dots\end{array}$$
where
$$g_{20}=F^1_{20}\bar w_1+F^2_{20}\bar w_2,
g_{11}=F^1_{11}\bar w_1+F^2_{11}\bar w_2, g_{02}=F^1_{02}\bar
w_1+F^2_{02}\bar w_2,\eqno(23)$$ with
$$\begin{array}{l}F_{20}^1=-a[\rho_{20}v_1^2+2\rho_{11}v_1v_2+\rho_{02}v_2^2],\\
F_{20}^2=c[\gamma_{20}v_1^2e^{-2\lambda_1\tau}+\gamma_{02}v_2^2e^{-2\lambda_1\tau}+2\gamma_{11}v_1v_2e^{-2\lambda_1\tau}],\\
F_{11}^1=-a[\rho_{20}v_1\bar v_1+\rho_{11}(v_1\bar v_2+\bar v_1v_2)+\rho_{02}v_2\bar v_2],\\
F_{11}^2=c[\gamma_{20}v_1\bar v_1+\gamma_{11}(v_1\bar v_2+\bar
v_1v_2)+\gamma_{02}v_2\bar v_2],\\
F_{02}^1= \bar F_{20}^1, F_{02}^2=\bar F_{20}^2,
\end{array}$$ and
$$g_{21}=F^1_{21}\bar w_1+F^2_{21}\bar w_2\eqno(24)$$ where
$$\begin{array}{l}
F_{21}^1= -a[\rho_{20}(2v_1w_{11}^1(0)+\bar
v_1w_{20}^1(0))+2\rho_{11}(v_1w_{11}^2(0)+\\
\displaystyle\frac{\bar v_1w_{20}^2(0)}{2}+\displaystyle\frac{\bar
v_2w_{20}^1(0)}{2}+v_2w_{11}^1(0))+ \rho_{02}(2v_2w_{11}^2(0)+\bar
v_2w_{20}^2(0))-\\\rho_{30}v_1^2\bar v_1+2\rho_{21}v_1v_2\bar
v_1+2\rho_{12}v_1v_2\bar v_2+ \rho_{03}v_2^2\bar
v_2+\rho_{21}v_1^2\bar v_2+\rho_{12}\bar v_1v_2^2]\end{array}$$
$$\begin{array}{l}F_{21}^2=c[\gamma_{20}(2v_1w_{11}^1(-\tau)e^{-\lambda_1\tau}+\bar v_1w_{20}^1(-\tau)e^{\lambda_1\tau})
+2\gamma_{11}(v_1w_{11}^2(-\tau)e^{-\lambda_1\tau}+\\+ \displaystyle\frac{\bar
v_1w_{20}^2(-\tau)e^{\lambda_1\tau}}{2}\!+\!\displaystyle\frac{\bar
v_2w_{20}^1(-\tau)e^{\lambda_1\tau}}{2}\!+\!v_2w_{11}^1(-\tau)e^{-\lambda_1\tau})
\!+\!\gamma_{02}(2v_2w_{11}^2(-\tau)e^{-\lambda_1\tau}+\\+\bar
v_2w_{20}^2(-\tau)e^{\lambda_1\tau}) +\gamma_{30}v_1^2\bar
v_1e^{-\lambda_1\tau}+\gamma_{21}(2v_1\bar
v_1v_2e^{-\lambda_1\tau}+v_1^2\bar v_2e^{-\lambda_1\tau})+\\
+\gamma_{12}(2v_1v_2\bar v_2e^{-\lambda_1\tau}+\bar
v_1v_2^2e^{-\lambda_1\tau})+\gamma_{03}v_2^2\bar
v_2e^{-\lambda_1\tau}].\end{array}$$
The vectors $w_{20}(\theta)$, $w_{11}(\theta)$ with
$\theta\in[-\tau,0]$ are given by:
$$\begin{array}{l}
w_{20}(\theta)=-\displaystyle\frac{g_{20}}{\lambda_1}ve^{\lambda_1\theta}-\displaystyle\frac{\bar
g_{02}}{3\lambda_1}\bar ve^{\lambda_2\theta}+E_1e^{2\lambda_1\theta}\\
w_{11}(\theta)=\displaystyle\frac{g_{11}}{\lambda_1}ve^{\lambda_1\theta}-\displaystyle\frac{\bar
g_{11}}{\lambda_1}\bar ve^{\lambda_2\theta}+E_2\end{array}\eqno(25)$$
where
$$
E_1=-(A+e^{-2\lambda_1\tau_0}B-2\lambda_1I)^{-1}F_{20},\quad
E_2=-(A+B)^{-1}F_{11},$$ where $F_{20}=(F_{20}^1, F_{20}^2)^T$,
$F_{11}=(F_{11}^1, F_{11}^2)^T$.
Based on the above analysis and calculation, we can see that each
$g_{ij}$ in (23), (24) are determined by the parameters and delay
of system (2). Thus, we can explicitly compute the following
quantities:
$$\begin{array}{l}
C_1(0)=\displaystyle\frac{i}{2\omega_0}(g_{20}g_{11}-2|g_{11}|^2-\displaystyle\frac{1}{3}|g_{02}|^2)+\displaystyle\frac{g_{21}}{2},\\
\mu_2=-\displaystyle\frac{Re(C_1(0))}{M}, T_2=-\displaystyle\frac{Im(C_1(0))+\mu_2N}{\omega_0},
\beta_2=2Re(C_1(0)),\end{array}\eqno(26)$$ where $M$ and $N$ are
given by (16) and (17).
In short, this leads to the following result:
{\bf Theorem 2.} {\it In formulas (26), $\mu_2$ determines the
direction of the Hopf bifurcation: if $\mu_2>0 (<0)$, then the
Hopf bifurcation is supercritical (subcritical) and the
bifurcating periodic solutions exist for $\tau>\tau_0 (<\tau_0)$;
$\beta_2$ determines the stability of the bifurcating periodic
solutions: the solutions are orbitally stable (unstable) if
$\beta_2<0 (>0)$; and $T_2$ determines the period of the
bifurcating periodic solutions: the period increases (decreases)
if $T_2>0 (<0)$.}
\vspace{0.6cm}
\section*{\normalsize\bf 5. Numerical example.}
\hspace{0.6cm} In this section we find the waveform plots through
the formula:
$$X(t+\theta)\!=\! z(t)\Phi(\theta)\!+\!\bar
z(t)\bar\Phi(\theta)\!+\!\displaystyle\frac{1}{2}w_{20}(\theta)z^2(t)+w_{11}(\theta)z(t)\bar
z(t)\!+\!\displaystyle\frac{1}{2}w_{02}(\theta)\bar z(t)^2+X_0,$$ where $z(t)$ is
the solution of (20), $\Phi(\theta)$ is given by (21), $w_{20}(\theta),
w_{11}(\theta), w_{02}(\theta)=\bar w_{20}(\theta)$ are given by (25) and
$X_0=(y_{10}, y_{20})^T$ is the equilibrium state.
For the numerical simulations we use Maple 9.5 and the data from
[10]: the degradation rate of p53 through the ubiquitin pathway is
$a=3\times10^{-2}$sec$^{-1}$, the spontaneous degradation rate of p53
is $b=10^{-4}$sec$^{-1}$, the dissociation constant between p53
and the Mdm2 gene is $k_1=28$, the degradation rate of the Mdm2 protein is
$d=10^{-2}$sec$^{-1}$, the p53 protein production rate is
$s=0.01$sec$^{-1}$ and the production rate of Mdm2 is
$c=1$sec$^{-1}$. For these data we consider different values
of the dissociation constant $k$. Changing the dissociation
constant $k$ instantaneously alters the response of the system.
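The reported equilibria can be reproduced outside Maple. The sketch below (ours, not from the paper) solves the stationarity conditions $s-af(y_1,y_2)-by_1=0$ and $cg(y_1,y_2)-dy_2=0$ by Newton's method with a finite-difference Jacobian, starting near the reported equilibrium for $k=17.5$:

```python
import math

# Model functions f and g of the p53-Mdm2 system, as defined in the paper.
def f(x, y, k):
    return 0.5 * (x + y + k - math.sqrt((x + y + k) ** 2 - 4 * x * y))

def g(x, y, k, k1):
    return (x - f(x, y, k)) / (k1 + x - f(x, y, k))

def equilibrium(a, b, c, d, s, k, k1, y1, y2, steps=100):
    """Newton iteration with a finite-difference Jacobian for the system
    s - a*f - b*y1 = 0, c*g - d*y2 = 0 (the stationary state)."""
    h = 1e-8
    for _ in range(steps):
        F1 = s - a * f(y1, y2, k) - b * y1
        F2 = c * g(y1, y2, k, k1) - d * y2
        J11 = (s - a * f(y1 + h, y2, k) - b * (y1 + h) - F1) / h
        J12 = (s - a * f(y1, y2 + h, k) - b * y1 - F1) / h
        J21 = (c * g(y1 + h, y2, k, k1) - d * y2 - F2) / h
        J22 = (c * g(y1, y2 + h, k, k1) - d * (y2 + h) - F2) / h
        det = J11 * J22 - J12 * J21
        # Cramer's rule for the Newton step J * delta = F.
        y1 -= (F1 * J22 - F2 * J12) / det
        y2 -= (J11 * F2 - J21 * F1) / det
    return y1, y2

# Parameter set quoted above from [10], with k = 17.5; the iteration is
# started near the reported equilibrium (1.674..., 4.587...).
y10, y20 = equilibrium(3e-2, 1e-4, 1.0, 1e-2, 0.01, 17.5, 28.0, 1.7, 4.6)
```

The converged values agree with the equilibrium point quoted below for $k=17.5$.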
For $k=17.5$ we obtain: the equilibrium point
$y_{10}=1.674122637$, $y_{20}=4.587857801$, the coefficients which
describe the limit cycle: $\mu_2\!=\! 0.002340098338$,
$\beta_2\!=\!-0.000001452701020$, $T_2\!=\!0.00007793850775$ and
$\omega_0\!=\! 0.04577286901$, $\tau_0\!=\! 60.52296388$. Then the
Hopf bifurcation is supercri\-ti\-cal, the solutions are orbitally
stable and the period of the solution is increasing. The wave
plots are displayed in Fig.~1 and Fig.~2:
\begin{center}
{\small \begin{tabular}{c|c} \hline Fig.1. $(t,y_1(t))$&Fig.2.
$(t,y_2(t))$\\&\\
\cline{1-2} \epsfxsize=5cm
\epsfysize=5cm
\epsffile{fig1.eps} &
\epsfxsize=6cm
\epsfysize=5cm
\epsffile{fig2.eps}
\\
\hline
\end{tabular}}
\end{center}
\medskip
For $k=120$ we obtain: the equilibrium point $y_{10}=3.853801769$,
$y_{20}=11.20502079$, the coefficients which describe the limit
cycle: $\mu_2\!=\!-8.219839362\cdot 10^{-7}$,
$\beta_2\!=\!2.0434666644\cdot 10^{-10}$,
$T_2\!=\!2.243699328\cdot 10^{-7}$ and $\omega_0\!=\!
0.02923152517$, $\tau_0\!=\! 92.68683554$. Then the Hopf
bifurcation is subcri\-ti\-cal, the solutions are orbitally
unstable and the period of the solution is increasing. The wave
plots are presented in Fig.~3 and Fig.~4:
\begin{center}
{\small \begin{tabular}{c|c} \hline Fig.3. $(t,y_1(t))$&Fig.4.
$(t,y_2(t))$\\&\\
\cline{1-2} \epsfxsize=5cm
\epsfysize=5cm
\epsffile{fig3.eps} &
\epsfxsize=6cm
\epsfysize=5cm
\epsffile{fig4.eps}
\\
\hline
\end{tabular}}
\end{center}
\medskip
For $k=1750$ we obtain: the equilibrium point
$y_{10}=14.88816840$, $y_{20}=34.27918463$, the coefficients which
describe the limit cycle: $\mu_2\!=\!1.104273140\cdot 10^{-11}$,
$\beta_2\!=\!-6.4139104\cdot 10^{-16}$, $T_2\!=\!1.924908056\cdot
10^{-11}$ and $\omega_0\!=\! 0.01423761906$, $\tau_0\!=\!
174.4149631$. Then the Hopf bifurcation is supercri\-ti\-cal, the
solutions are orbitally stable and the period of the solution is
increasing. The wave plots are given in Fig.~5 and Fig.~6:
\begin{center}
{\small \begin{tabular}{c|c} \hline Fig.5. $(t,y_1(t))$&Fig.6.
$(t,y_2(t))$\\&\\
\cline{1-2} \epsfxsize=5cm
\epsfysize=5cm
\epsffile{fig5.eps} &
\epsfxsize=6cm
\epsfysize=5cm
\epsffile{fig6.eps}
\\
\hline
\end{tabular}}
\end{center}
\medskip
\section*{\normalsize\bf 6. Conclusions.}
\hspace{0.6cm} For the present model, we obtain an oscillatory
behavior similar to the findings in [10], in accordance with the
qualitative study.
We have proved that a limit cycle exists and it is characterized
by the coefficients from (26). For different values of the
dissociation constant $k$, in the previous section we obtain stable
or unstable periodic solutions with increasing periods, via a Hopf
bifurcation.
\section{Notations, models and applications} \label{sec:model}
\subsection{Notations}
The row vector $(u_1, \ldots, u_q)$ is denoted by $\gvect{u}$,
and the column vector $\transp{(u_1, \ldots, u_q)}$ by $\vect{u}$.
The diagonal matrix $\diag{u}$ has main diagonal $\vect{u}$,
and $\vect{\scriptstyle{1}}$ is the vector with all coefficients equal to $1$.
We adopt the notation $\vect{u}^{\, \vect{v}}$ for the product $\prod_i u_i^{v_i}$.
The functions $\log$ and $\exp$ are applied coefficient-wise to the vectors,
\textit{i.e.} $\log(\gvect{u}) = ( \log(u_1), \ldots, \log(u_q) )$.
When $\sum_i n_i = n$, the multinomial notation $\binom{n}{\vect{n}}$
denotes $n! / \prod_i n_i !$.
The adjugate of a matrix $M$, equal to the transpose of the cofactor matrix, is $\operatorname{adj}(M)$.
Open intervals, closed intervals and integer intervals
are denoted by $]x,y[$, $[x,y]$ and $[a..b]$.
\subsection{Graph model}
The \emph{uniform graph model},
also called \emph{multigraph process},
has been studied using analytic combinatorics
by \cite{FKP89} and \cite{JKLP93}.
This model produces a random
vertex-labelled graph
with $n$ vertices and $m$ edges
by drawing $2m$ vertices
$v_1 w_1 \ldots v_m w_m$
uniformly independently in $[1..n]$,
and adding to the graph the edges $\edge{v_i w_i}$
for $i$ from $1$ to $m$:
\[
\operatorname{edge}(G) = \{ \edge{v_i w_i}\ |\ 1 \leq i \leq m\}.
\]
The graph is \emph{simple} if it contains neither loops nor multiple edges.
If the output of the process is conditioned to be simple, the model reduces
to the classic $G(n,m)$ graph model of Erd\H{o}s and R\'enyi.
The number of ordered \emph{sequences of vertices} $v_1 w_1 \ldots v_m w_m$
that correspond to a graph $G$ is denoted by $\operatorname{seqv}(G)$
\[
\operatorname{seqv}(G) = |
\{ v_1 w_1 \ldots v_m w_m\ |\
\{ \edge{v_i w_i}\ |\ 1 \leq i \leq m\} = \operatorname{edge}(G) \} |.
\]
Observe that a graph~$G$ with $m$ edges is simple
if and only if
its number of sequences of vertices $\operatorname{seqv}(G)$
is equal to $2^m m!$.
For this reason, \cite{JKLP93} introduced
the \emph{compensation factor}
\[
\kappa(G) = \frac{\operatorname{seqv}(G)}{2^m m!}.
\]
The \emph{number of graphs} in a family
is defined as the sum of their compensation factors,
although this quantity needs not be an integer.
However, when the graphs are simple,
the number of graphs is equal
to the actual cardinality of the family.
For example, the total number of multigraphs
with $n$ vertices and $m$ edges is $\frac{n^{2m}}{2^m m!}$.
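These definitions can be checked by brute force on tiny instances. The following sketch (ours, not from the paper) enumerates all vertex sequences for given $n$ and $m$ and tallies $\operatorname{seqv}(G)$ per multigraph; a graph is simple exactly when its tally equals $2^m m!$, and otherwise the compensation factor $\kappa(G)$ is a proper fraction:

```python
from collections import Counter
from itertools import product

def seqv_counts(n, m):
    """Map each multigraph on [1..n] with m edges (encoded as a sorted
    tuple of sorted vertex pairs) to its number seqv(G) of generating
    vertex sequences v1 w1 ... vm wm."""
    counts = Counter()
    for seq in product(range(1, n + 1), repeat=2 * m):
        edges = tuple(sorted(tuple(sorted(seq[2 * i:2 * i + 2]))
                             for i in range(m)))
        counts[edges] += 1
    return counts

# n = 3 vertices, m = 2 edges: 3^4 = 81 sequences in total, and
# 2^m m! = 8 for simple graphs.
counts = seqv_counts(3, 2)
```

For instance, the simple path $\{12, 13\}$ is reached by $8 = 2^2\,2!$ sequences ($\kappa = 1$), while the double edge $\{12, 12\}$ and the loop configuration $\{11, 12\}$ are each reached by $4$ sequences ($\kappa = 1/2$).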
\subsection{Inhomogeneous graph model}
The original \emph{inhomogeneous graph model} was introduced by \cite{S02}
as a generalization of the classic $G(n,p)$ random graph model,
and extended by \cite{BJR07}.
In this model, each vertex receives a \emph{type},
which is an integer in $[1..q]$,
and the probability that a vertex
of type~$i$ and one of type~$j$
are linked is the coefficient~$(i,j)$
of a symmetric $q \times q$ matrix $R$.
We consider in this paper a variant of this model,
introduced by \cite{PR14} and
closer to the uniform graph model.
Its parameters are an irreducible symmetric $q \times q$ matrix $R$
and a vector $\vect{r}$ of size $q$,
both with non-negative coefficients.
We call \emph{inhomogeneous graph}, or \emph{$(R,\vect{r})$-graph},
a labelled graph (loops and multiple edges are allowed) where
\begin{itemize}
\item
each vertex $v$ has a \emph{type} $t(v)$
which is an integer in $[1..q]$
and a \emph{weight} $r_{t(v)}$,
\item
each edge $\edge{vw}$ receives a weight $R_{t(v), t(w)}$.
\end{itemize}
The \emph{weight} $\omega(G)$ of an $(R,\vect{r})$-graph $G$
is the product of
the compensation factor of the underlying graph
(which is equal to $1$ if the graph is simple),
the weights of the vertices
and the weights of the edges
\[
\omega(G) =
\kappa(G)
\prod_{u \in G} r_{t(u)}
\prod_{\edge{vw} \in G} R_{t(v), t(w)}.
\]
One can also think of the parameters $(r_i)$ and $(R_{i,j})$
as variables marking the vertices and the edges
according to their types and the types of their ends.
We define the \emph{number of $(R,\vect{r})$-graphs} in a family
as the sum of their weights.
This convention will be justified by the applications.
An \emph{$(n,m)$-$(R, \vect{r})$-graph} is an $(R,\vect{r})$-graph
with $n$ vertices and $m$ edges.
Let $n(G)$ denote the number
of vertices of a graph $G$
and $\mathcal{F}$ be a family of $(R,\vect{r})$-graphs,
then the \emph{generating function} $F(z)$ of $\mathcal{F}$
is defined by
\[
F(z) = \sum_{G \in \mathcal{F}} \omega(G) \frac{z^{n(G)}}{n(G)!}.
\]
Observe that an $(R,\vect{r})$-graph that contains
an edge of weight zero
has weight zero, and thus
does not contribute to the number of $(R,\vect{r})$-graphs.
The next lemma justifies our assumption for $R$ to be irreducible.
\begin{lemma}
Let $G_{R,\vect{r}}^{*}$ denote the set of $(R,\vect{r})$-graphs
with non-zero weight.
If the matrix $R$ is reducible,
then there exist
a non-trivial partition $T_1 \uplus \cdots \uplus T_k = [1..q]$
of the set of types,
symmetric irreducible matrices $S_1, \ldots, S_k$
and vectors $\vect{s}_1, \ldots, \vect{s}_k$
such that $G_{R,\vect{r}}^{*}$ is in bijection
with the Cartesian product $G_{S_1,\vect{s}_1}^{*} \times \cdots \times G_{S_k,\vect{s}_k}^{*}$.
%
Specifically, for any graph $G$ in $G_{R,\vect{r}}^{*}$\ ,
\begin{itemize}
\item
for all $i \neq j$, there is no edge between
a vertex of type in $T_i$ and one of type in $T_j$,
\item
for all $i$, the graph induced by $G$
on the vertices with types in $T_i$
is in $G_{S_i,\vect{s}_i}^{*}$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $(\vect{e}_1, \ldots, \vect{e}_q)$ denote the canonical basis of $\mathds{R}^q$.
Since $R$ is reducible, there is a partition $T_1, \ldots, T_k$ of $[1..q]$
such that the matrix of $R$ on the basis
\[
( \vect{e}_{T_1[1]}, \vect{e}_{T_1[2]}, \ldots, \vect{e}_{T_2[1]}, \vect{e}_{T_2[2]}, \ldots )
\]
has a block-diagonal shape $\operatorname{diag}(S_1, \ldots, S_k)$.
For each $i$, we set $\vect{s}_i = (r_{T_i[1]}, r_{T_i[2]}, \ldots)$.
There can be no edge between types in $T_i$ and in $T_j$ for $i \neq j$
because its weight would be $0$.
Therefore, any component of $G_{R,\vect{r}}^{*}$
with a vertex of type in $T_i$ has all its types in $T_i$.
By construction, such a component is in $G_{S_i,\vect{s}_i}^{*}$.
\end{proof}
In the following, the matrix $R$ is therefore
always assumed to be irreducible.
In this paper, we analyze asymptotic properties
of $(n,m)$-$(R, \vect{r})$-graphs when $n$ goes to infinity, $m$ is equal to $c n$,
and $R$, $\vect{r}$ and $c$ are fixed.
We look forward to applications that would require
relaxing those conditions, as a guide
toward generalizations of the model.
\subsection{Applications}
Inhomogeneous graphs have been used in~\cite{PR14}
to analyze the phase transition of satisfiability problems.
We present two new applications.
In the \emph{properly $q$-colored} graphs,
each vertex has a color in $[1..q]$
and no edge links two vertices with the same one.
We give a new proof of \cite[Theorem~$3$]{W72} on their enumeration.
This result is not to be confused with
an enumeration of \emph{$q$-colorable} graphs,
a problem expected to be much more difficult
and addressed by \cite{AC08}.
\begin{application}
Let $R^{\operatorname{(col)}}$ denote the $q \times q$ matrix
with all coefficients equal to $1$,
except on the diagonal where they are equal to $0$.
Then the number of properly $q$-colored $(n,m)$-graphs
is equal to the number of $(n,m)$-$(R^{\operatorname{(col)}},\vect{\scriptstyle{1}})$-graphs.
When $\frac{m}{n}$ is in a compact interval of $\mathds{R}_{>0}$, its asymptotics is
\[
\frac{n^{2m}}{2^m m!}
\left( 1 + \frac{2}{q-1} \frac{m}{n} \right)^{-\frac{q-1}{2}}
\left( 1 - \frac{1}{q} \right)^m
q^n
\big( 1 + o(1) \big).
\]
For properly $q$-colored simple graphs, the previous asymptotics
is replaced by
\[
\frac{n^{2m}}{2^m m!}
\left( 1 + \frac{2}{q-1} \frac{m}{n} \right)^{-\frac{q-1}{2}}
\left( 1 - \frac{1}{q} \right)^m
q^n
\exp \big( \left( \frac{m}{n} \right)^2 \frac{q}{q-1} \big)
\big( 1 + o(1) \big).
\]
\end{application}
\begin{proof}
Let us identify the types and the colors.
If an $(R^{\operatorname{(col)}}, \vect{\scriptstyle{1}})$-graph is properly colored,
the product of the weights of its edges is $1$,
otherwise it is $0$,
hence the first assertion of the theorem.
%
The eigenvalues of $R^{\operatorname{(col)}}$ are $q-1$ and $-1$,
with multiplicities $1$ and $q-1$.
The asymptotics is then a direct application
of Theorem~\ref{th:eigenvectorone}
and the remark following Theorem~\ref{th:simple}.
\end{proof}
As a second example, we consider a world
where each person has a number $k$ of topics of interest
among a set of size $t \geq 2k$,
and where two people can only become friends
if they share at least one common topic of interest.
We call \emph{friendship graph}
the graph where each vertex represents a person
and each edge a friendship relation.
The graph is naturally assumed to be simple.
To analyze the friendship graphs, we introduce the following notations.
Let $\sigma$ denote a numbering of the subsets of size $k$ of $[1..t]$,
and $R^{\operatorname{(fs)}}$ the adjacency matrix
of the complement of the \emph{Kneser graph}
(see, for example, \cite{GR01}).
This matrix of dimension $q = \binom{t}{k}$ is defined by
\[
R^{\operatorname{(fs)}}_{i,j} =
\begin{cases}
1 & \text{if } | \sigma(i) \cap \sigma(j) | \geq 1,\\
0 & \text{otherwise}.
\end{cases}
\]
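The matrix $R^{\operatorname{(fs)}}$ is easy to build and probe numerically. The sketch below (ours, not from the paper) constructs it for a small instance and checks two facts used later: the diagonal is all ones, so $\operatorname{Tr}(R^{\operatorname{(fs)}}) = \binom{t}{k}$, and the matrix is regular with constant row sum $\binom{t}{k} - \binom{t-k}{k}$, which for a regular adjacency matrix is the dominant eigenvalue:

```python
from itertools import combinations
from math import comb

def friendship_matrix(t, k):
    """Adjacency matrix of the complement of the Kneser graph K(t, k):
    rows and columns are indexed by the k-subsets of [1..t], and the
    entry is 1 iff the two subsets intersect."""
    subsets = [frozenset(s) for s in combinations(range(1, t + 1), k)]
    return [[1 if a & b else 0 for b in subsets] for a in subsets]

# Small instance: t = 5 topics, k = 2 interests per person,
# so q = binom(5, 2) = 10 types.
t, k = 5, 2
R = friendship_matrix(t, k)
row_sums = [sum(row) for row in R]
```

Here every row sum equals $\binom{5}{2} - \binom{3}{2} = 7$.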
\begin{application}
The number of friendship graphs with $n$ people
and $m$ friendship relations is equal
to the number of simple $(n,m)$-$(R^{\operatorname{(fs)}}, \vect{\scriptstyle{1}})$-graphs.
When $\frac{m}{n}$ is in a compact interval of $]0,\frac{1}{2}[$,
almost all friendship graphs contain no component with more than one cycle.
There is a value $\beta > \frac{1}{2}$ such that
when $\frac{m}{n}$ is in a compact interval of $]0, \beta[$,
the asymptotics of friendship graphs is
\[
\frac{n^{2m}}{2^m m!}
\left(
\binom{t}{k} - \binom{t-k}{k}
\right)^{m}
\binom{t}{k}^{n-m}
C
\big( 1 + o(1) \big)
\]
where the value $C$, bounded with respect to $n$, is
\[
C =
\exp \left(
- \frac{ \binom{t}{k} }
{ \binom{t}{k} - \binom{t-k}{k} }
\frac{m}{n}
\left( 1 + \frac{m}{n} \right)
\right)
\prod_{j=1}^k
\left(
1
- (-1)^j
\frac{2 m}{n}
\frac{ \binom{t-k-j}{k-j} }
{ \binom{t}{k} - \binom{t-k}{k} }
\right)^{- \frac{1}{2} \left( \binom{t}{j} - \binom{t}{j-1} \right)}.
\]
\end{application}
\begin{proof}
Identifying each type $i$
with the set of topics of interest $\sigma(i)$,
the definition of the matrix $R^{\operatorname{(fs)}}$
implies that the weight of a simple $(R^{\operatorname{(fs)}},\vect{\scriptstyle{1}})$-graph
is $1$ if it is a friendship graph, and $0$ otherwise.
The spectrum of the Kneser graph is known, and available in \cite{DL98}.
The spectrum of its complement follows:
the dominant eigenvalue is $\binom{t}{k} - \binom{t-k}{k}$
and for all $j$ from $1$ to $k$, $(-1)^j \binom{t-k-j}{k-j}$
is an eigenvalue with multiplicity $\binom{t}{j} - \binom{t}{j-1}$.
The result is then a consequence of Theorem~\ref{th:eigenvectorone}
and the remark that follows Theorem~\ref{th:simple},
with parameters $\operatorname{Tr}(R^{\operatorname{(fs)}}) = \binom{t}{k}$
and $\operatorname{Tr}((R^{\operatorname{(fs)}})^2) = \binom{t}{k} - \binom{t-k}{k}$.
\end{proof}
Inhomogeneous graphs can as well handle generalizations of the model.
For example, the weight of a friendship could be a real value,
function of the number of common topics of interest.
The parameters of the asymptotics, although still computable,
will become less explicit.
\section{Global enumeration} \label{sec:global}
In this section, we reduce the problem of deriving the asymptotics
of $(n,m)$-$(R, \vect{r})$-graphs to the location of the minima of a function
parameterized by $R$, $\vect{r}$ and $\frac{m}{n}$.
We start with an explicit formula.
\begin{theorem} \label{th:exact}
The number of $(n,m)$-$(R, \vect{r})$-graphs is
\begin{equation} \label{eq:Gexact}
G_{R, \vect{r}}(n,m) =
\frac{1}{2^m m!}
\sum_{\{ \vect{n} \in \mathds{N}^q\ |\ \gvect{1} \vect{n} = n \}}
\binom{n}{\vect{n}}
\vect{r}^{\ \vect{n}}
\left( \gvect{n} R \vect{n} \right)^m.
\end{equation}
\end{theorem}
\begin{proof}
Let us consider a fixed partition of the set of labels $[1..n]$
into $q$ sets $V_1, \ldots, V_q$ of sizes $n_1, \ldots, n_q$,
and denote by $G(\vect{n},m)$ the set of $(n,m)$-$(R, \vect{r})$-graphs
where the type of the vertices in $V_i$ is $i$.
Then
the number of graphs in $G(\vect{n}, m)$ is expressed
by summation over all sequences of vertices as
\[
\sum_{G \in G(\vect{n},m)} \omega(G)
=
\frac{1}{2^m m!}
\prod_{i=1}^q r_i^{n_i}
\sum_{v_1,w_1, \ldots, v_m, w_m \in [1..n]^{2m}}
\prod_{i=1}^m R_{t(v_i), t(w_i)}.
\]
Switching the sum and the product, the previous equation becomes
\[
\sum_{G \in G(\vect{n},m)} \omega(G)
=
\frac{1}{2^m m!}
\prod_{i=1}^q r_i^{n_i}
\Bigg( \sum_{1 \leq i, j \leq q} n_i n_j R_{i,j} \Bigg)^m
=
\vect{r}^{\ \vect{n}}
\frac{(\gvect{n} R \vect{n})^m}{2^m m!}.
\]
Equation~\eqref{eq:Gexact} is obtained by summation
over all possible partitions $V_1 \uplus \cdots \uplus V_q = [1..n]$.
\end{proof}
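Equation~\eqref{eq:Gexact} can be cross-checked by brute force on a toy instance. The sketch below (ours, not from the paper; all names are our own) evaluates the exact formula of Theorem~\ref{th:exact} and, independently, the direct weighted sum over type assignments and vertex sequences used in the proof:

```python
from itertools import product
from math import factorial

def G_formula(R, r, n, m):
    """Right-hand side of the exact formula: sum over type-count
    vectors n_1 + ... + n_q = n."""
    q = len(r)
    total = 0.0
    for ns in product(range(n + 1), repeat=q):
        if sum(ns) != n:
            continue
        multi = factorial(n)
        vweight = 1.0
        quad = 0.0
        for i in range(q):
            multi //= factorial(ns[i])
            vweight *= r[i] ** ns[i]
            for j in range(q):
                quad += ns[i] * R[i][j] * ns[j]
        total += multi * vweight * quad ** m
    return total / (2 ** m * factorial(m))

def G_bruteforce(R, r, n, m):
    """Direct weighted count: sum over all type assignments and all
    sequences of 2m vertices, as in the proof."""
    q = len(r)
    total = 0.0
    for types in product(range(q), repeat=n):
        vweight = 1.0
        for ty in types:
            vweight *= r[ty]
        for seq in product(range(n), repeat=2 * m):
            w = vweight
            for i in range(m):
                w *= R[types[seq[2 * i]]][types[seq[2 * i + 1]]]
            total += w
    return total / (2 ** m * factorial(m))

# Proper 2-colorings: zero diagonal, unit weights (the first application).
R2, r2 = [[0, 1], [1, 0]], [1.0, 1.0]
```

For $q=2$, $n=3$, $m=2$ both computations give $12$: the three properly $2$-colored labelled paths ($3 \times 2$ colorings, $\kappa = 1$) plus the three double-edge multigraphs ($3 \times 4$ colorings, $\kappa = \tfrac12$).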
To obtain the asymptotics of $G_{R, \vect{r}}(n,m)$,
we will apply in the proof of Theorem~\ref{th:global}
a multivariate Laplace method.
This method requires recasting the previous expression
in a more suitable form.
\begin{lemma} \label{th:laplace_setting}
Let $\mathcal{S}$ denote the set
$\{ \vect{x} \in \mathds{R}^q_{\geq 0}\ |\ \gvect{\scriptstyle{1}} \vect{x} = 1\}$.
The number of $(n,m)$-$(R, \vect{r})$-graphs is
\begin{equation} \label{eq:Glaplace}
G_{R, \vect{r}}(n,m) =
\frac{n^{2m}}{2^m m!}
\frac{1}{(2 \pi n)^{\frac{q-1}{2}}}
\sum_{\{\vect{n} \in \mathds{N}^q\ |\ \gvect{\scriptstyle{1}} \vect{n} = n\}}
A_n \left( \frac{\vect{n}}{n} \right)
e^{- n \Phi_{\frac{m}{n}} \left( \frac{\vect{n}}{n} \right)},
\end{equation}
where, with the usual conventions $0 \log(0) = 0$ and $0^0 = 1$,
the functions $A_n$ and $\Phi_{c}$
are defined on $\mathcal{S}$ by
\begin{align*}
A_n(\vect{x})
&=
\frac{n!}{n^n e^{-n} \sqrt{2 \pi n}}
\prod_{i=1}^q
\frac{(n x_i)^{n x_i} e^{-n x_i} \sqrt{2 \pi n x_i}}{\Gamma(n x_i + 1)}
\frac{1}{\sqrt{x_i}},
\\
\Phi_c(\vect{x})
&=
\left( \log(\gvect{x}) - \log(\gvect{r}) \right) \vect{x}
- c \log \left( \gvect{x} R \vect{x} \right).
\end{align*}
\end{lemma}
\begin{proof}
We introduce in Expression~\eqref{eq:Gexact}
the Stirling approximations
of the factorials of the multinomial coefficient,
and rescale each $n_i$ by a factor $1/n$
\[
G_{R, \vect{r}}(n,m) =
\frac{n^{2m}}{2^m m!}
\frac{1}{(2 \pi n)^{\frac{q-1}{2}}}
\sum_{\substack{\vect{n} \in \mathds{N}^q\\ \gvect{\scriptstyle{1}} \vect{n} = n}}
\frac{n!}{n^n e^{-n} \sqrt{2 \pi n}}
\prod_{i=1}^q
\frac{n_i^{n_i} e^{-n_i} \sqrt{2 \pi n_i}}{n_i! \sqrt{n_i/n}}
\left(
\frac{ \vect{r}^{\ \vect{n} / n} }
{ \left( \vect{n} / n \right)^{ \vect{n} / n } }
\right)^n
\left( \frac{\gvect{n}}{n} R \frac{\vect{n}}{n} \right)^m.
\]
The functions $A_n$ and $\Phi_c$ are then introduced
to simplify this expression.
\end{proof}
Let $E$ denote a $q \times (q-1)$ matrix
with left-kernel of dimension $1$
containing the vector $\gvect{\scriptstyle{1}}$, \textit{e.g.}
\[
E =
\smat{1 & 0 & \cdots & 0 \\
0 & 1 & {\scriptstyle{\ddots}} & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & 1 \\
-1 & \cdots & \cdots & -1}.
\]
If two vectors $\vect{u}$ and $\vect{v}$ both belong to $\mathcal{S}$,
then there is a vector $\vect{\epsilon}$ of dimension $q-1$
for which
$
\vect{u} = \vect{v} + E \vect{\epsilon}.
$
The following lemma provides tools to locate
the minima of the function $\Phi_{c}$.
\begin{lemma} \label{th:beta}
For all $c > 0$, the Taylor expansion of $\Phi_{c}$ near any point $\vect{x}$
in the interior of $\mathcal{S}$ is
\[
\Phi_{c}(\vect{x} + E \vect{\epsilon})
=
\Phi_{c}(\vect{x})
+
\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{x})} \vect{\epsilon}
+
\frac{1}{2}
\gvect{\epsilon} \mathcal{H}_{\ff_{\mn}(\vect{x})} \vect{\epsilon}
+
\mathcal{O}(\| \vect{\epsilon} \|^3),
\]
where the gradient vector and the Hessian matrix
have dimension $q-1$ and are defined by
\begin{align*}
\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{x})} &=
\left(
\log (\gvect{x})
- \log (\gvect{r})
- \frac{2c}{\gvect{x} R \vect{x}} \gvect{x} R
\right) E,
\\
\mathcal{H}_{\ff_{\mn}(\vect{x})} &=
\transp{E}
\left(
\diag{x} \phantom{}^{-1}
+
\frac{2c}{\gvect{x} R \vect{x}}
\left(
\frac{2}{\gvect{x} R \vect{x}}
R \vect{x}\, \gvect{x} R
- R
\right)
\right)
E.
\end{align*}
If $\vect{\varphi}$ is a minimum of $\Phi_{c}$,
then $\vect{\varphi}$ is in the interior of $\mathcal{S}$,
$\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{\varphi})} = \gvect{\scriptstyle{0}}$ and $\mathcal{H}_{\ff_{\mn}(\vect{\varphi})}$ is positive-semidefinite.
\end{lemma}
\begin{proof}
Let $\Psi_{c}$ denote the function
$
\vect{x} \to
\left( \log(\gvect{x}) - \log(\gvect{r}) \right) \vect{x}
- c \log \left( \gvect{x} R \vect{x} \right)
$
from $[0,1]^q \setminus \{\vect{\scriptstyle{0}}\}$ to $\mathds{R}$.
Its restriction to $\mathcal{S}$ is equal to $\Phi_{c}$
and its Taylor expansion starts with
\[
\Psi_c(\vect{x} + \vect{\epsilon})
=
\Psi_c(\vect{x})
+
\gvect{\nabla}_{\Psi_{c}(\vect{x})} \vect{\epsilon}
+
\frac{1}{2}
\gvect{\epsilon}
\mathcal{H}_{\Psi_{c}(\vect{x})}
\vect{\epsilon}
+
\mathcal{O}(\| \vect{\epsilon} \|^3)
\]
where the gradient $\gvect{\nabla}_{\Psi_{c}(\vect{x})}$
and the Hessian matrix $\mathcal{H}_{\Psi_{c}(\vect{x})}$
of $\Psi_{c}$ are computed using partial derivations.
It follows that the Taylor expansion of $\Phi_{c}$
near any point $\vect{x}$ in the interior of $\mathcal{S}$ is
\[
\Phi_c(\vect{x} + E \vect{\epsilon})
=
\Phi_c(\vect{x})
+
\gvect{\nabla}_{\Psi_{c}(\vect{x})} E \vect{\epsilon}
+
\frac{1}{2}
\gvect{\epsilon} \transp{E}
\mathcal{H}_{\Psi_{c}(\vect{x})}
E \vect{\epsilon}
+
\mathcal{O}(\| \vect{\epsilon} \|^3).
\]
Observing the limit of the gradient $\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{x})}$ of $\Phi_{c}$
when one coordinate of~$\vect{x}$ vanishes,
we conclude that the local minima of $\Phi_{c}$
cannot be reached on the boundary of $\mathcal{S}$,
and must therefore cancel $\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{x})}$.
\end{proof}
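The gradient formula of the lemma can be compared against a finite difference of $\Phi_{c}$ along a column of $E$. A small Python sketch for $q = 2$, where $E = \transp{(1, -1)}$ (the test values of $c$, $\vect{r}$, $R$, and $\vect{x}$ are arbitrary):

```python
import math

def Phi(c, x, r, R):
    """Phi_c(x) = (log(x) - log(r)) . x - c log(x^T R x)."""
    q = len(x)
    xRx = sum(x[i] * R[i][j] * x[j] for i in range(q) for j in range(q))
    return sum(xi * math.log(xi / ri) for xi, ri in zip(x, r)) - c * math.log(xRx)

def grad_formula(c, x, r, R):
    """Gradient of Phi_c contracted with E = (1, -1)^T for q = 2, per the lemma."""
    q = len(x)
    xRx = sum(x[i] * R[i][j] * x[j] for i in range(q) for j in range(q))
    xR = [sum(x[i] * R[i][k] for i in range(q)) for k in range(q)]
    g = [math.log(x[k]) - math.log(r[k]) - 2 * c * xR[k] / xRx for k in range(q)]
    return g[0] - g[1]

c, r, R = 0.3, (1.0, 2.0), ((2.0, 1.0), (1.0, 2.0))
x, h = (0.4, 0.6), 1e-6
# central difference along the direction E = (1, -1)^T, staying on the simplex
numeric = (Phi(c, (x[0] + h, x[1] - h), r, R)
           - Phi(c, (x[0] - h, x[1] + h), r, R)) / (2 * h)
assert abs(numeric - grad_formula(c, x, r, R)) < 1e-6
```

The constant vector coming from the derivative of $\vect{x} \log \vect{x}$ is annihilated by $E$, which is why it does not appear in the stated gradient.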
Gathering the previous results, we can finally apply the multivariate Laplace method.
\begin{theorem} \label{th:global}
Let $[a,b]$ be a compact interval such that the function
$(c,\vect{x}) \to \det(\mathcal{H}_{\ff_{\mn}(\vect{x})})$ from $[a,b] \times \mathcal{S}$ to $\mathds{R}$
does not vanish,
and let $\vect{\varphi}_{c,1}, \ldots, \vect{\varphi}_{c,s}$ denote the solutions
of $\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{\varphi})} = \gvect{\scriptstyle{0}}$ that correspond to local minima of $\Phi_{c}$,
then when $c = \frac{m}{n}$ is in $[a,b]$, the asymptotics of $(n,m)$-$(R, \vect{r})$-graphs is
\[
G_{R, \vect{r}}(n,m) \sim
\frac{n^{2m}}{2^m m!}
\sum_{ \vect{\varphi} \in \{ \vect{\varphi}_{c,1}, \ldots, \vect{\varphi}_{c, s} \} }
\left(
\frac{ \vect{r}^{\, \vect{\varphi} }}
{ \vect{\varphi}^{\, \vect{\varphi} }}
\right)^n
\frac{ \left( \gvect{\varphi} R \vect{\varphi} \right)^m }
{ \sqrt{\det(\mathcal{H}_{\Phi_{c}(\vect{\varphi})}) \prod_{i=1}^q \varphi_i} }.
\]
\end{theorem}
\begin{proof}
We inject in the integral representation of the sum~\eqref{eq:Glaplace}
the following relations
\[
A_n(\vect{\varphi} + E \vect{\epsilon}) \sim
\prod_{i=1}^q \varphi_i^{-1/2}
+ \mathcal{O}(\|\vect{\epsilon}\|),
\
\quad
e^{-n \Phi_{c}(\vect{\varphi} + E \vect{\epsilon})} =
\left(
\frac{ \vect{r}^{\, \vect{\varphi} }}
{\vect{\varphi}^{\, \vect{\varphi} }}
\right)^n
\left( \gvect{\varphi} R \vect{\varphi} \right)^m
e^{- \frac{1}{2} n \gvect{\epsilon} \mathcal{H}_{\ff_{\mn}(\vect{\varphi})} \vect{\epsilon} + \mathcal{O}(n \| \vect{\epsilon} \|^3)},
\]
valid for any minimum $\vect{\varphi}$ of $\Phi_{c}$,
and apply the multivariate Laplace method,
presented in \cite[Chapter~$5$]{PW13}.
\end{proof}
Applying the previous theorem requires locating the minima of $\Phi_{c}$,
and avoiding the values of $c$ for which those minima cross or merge.
Even when the dimension of the matrix $R$ is $2$,
those minima can exhibit interesting behaviors.
For example,
for $R = \smat{2&1\\1&2}$,
$\Phi_{c}$ has a unique minimum when $c \leq 1/6$
and two local minima when $c > 1/6$.
It would be interesting to investigate if this transformation
in the analytic properties of $\Phi_{c}$ matches
an evolution in the typical structure of $(R,\vect{r})$-graphs.
Theorems~\ref{th:small_mn} and \ref{th:eigenvectorone}
provide two examples of families of parameters $R$, $\vect{r}$ and $\frac{m}{n}$
for which $\Phi_{c}$ has a unique minimum and
a more explicit asymptotics for the number of $(n,m)$-$(R, \vect{r})$-graphs can be derived.
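For small $c$ the entropy term of $\Phi_{c}$ dominates and, in the symmetric example above, the unique minimum sits at the barycentre of $\mathcal{S}$. A brute-force grid search in Python illustrates how the minima of $\Phi_{c}$ can be located in practice (here $q = 2$, $\vect{r} = (1,1)$, and the value $c = 0.1 < 1/6$ is arbitrary):

```python
import math

def Phi(c, t, R):
    """Phi_c on the simplex of dimension 2, parametrised by x = (t, 1-t), r = (1, 1)."""
    x = (t, 1.0 - t)
    xRx = sum(x[i] * R[i][j] * x[j] for i in range(2) for j in range(2))
    return sum(xi * math.log(xi) for xi in x) - c * math.log(xRx)

R = ((2.0, 1.0), (1.0, 2.0))
ts = [k / 1000 for k in range(1, 1000)]       # interior grid on the simplex
t_min = min(ts, key=lambda t: Phi(0.1, t, R))
# for small c the entropy term dominates and the unique minimum is symmetric
assert abs(t_min - 0.5) < 1e-3
```

The same grid search, run for larger values of $c$, is a simple way to observe the appearance of additional local minima.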
\section{Trees and unicycles} \label{sec:UV}
An \emph{$(R,\vect{r})$-tree} is a connected $(R,\vect{r})$-graph that contains no cycle.
Such a graph is \emph{rooted} if one vertex,
called the \emph{root}, is marked.
An \emph{$(R,\vect{r})$-unicycle} is a connected $(R,\vect{r})$-graph
with exactly one cycle.
A classic result of \cite{ER60}
states that almost all $(n,m)$-graphs with $\frac{m}{n} < \frac{1}{2}$
contain only trees and unicycles.
In this section, we derive a similar result for $(R,\vect{r})$-graphs.
We also give a more explicit asymptotics for the number of $(n,m)$-$(R, \vect{r})$-graphs
than in Theorem~\ref{th:global}
when $\frac{m}{n}$ is smaller than a value $\beta$, defined in Theorem~\ref{th:small_mn}.
\begin{lemma}
Let $T_i(z)$, $U(z)$ and $V(z)$ denote the generating functions
of $(R,\vect{r})$-rooted trees with root of type $i$, $(R,\vect{r})$-trees and $(R,\vect{r})$-unicycles,
and let $\vect{T}(z)$ denote the vector $\transp{(T_1(z), \ldots, T_q(z))}$,
then
\[
\vect{T}(z)
= z \diag{r} \exp \left( R \vect{T}(z) \right), \quad
U(z) =
\gvect{\scriptstyle{1}} \vect{T}(z) - \frac{1}{2} \gvect{T}(z) R \vect{T}(z), \quad
V(z) =
- \frac{1}{2} \log \left( \det \left( I - \diag{T}(z) R \right) \right).
\]
\end{lemma}
\begin{proof}
An $(R,\vect{r})$-rooted tree is a root with a set of sons which
are themselves $(R,\vect{r})$-rooted trees.
Let $i$ denote the type of the root,
and $j$ the type of the root of one of those sons,
then the root has weight $r_i$ and
the weight of the edge linking the root to the son
is the coefficient $R_{i,j}$.
%
Using the \emph{Symbolic Method}
(see for example the book of \cite{FS09})
the previous combinatorial description translates into
the first relation on $\vect{T}(z)$.
%
The expression for $U(z)$ is obtained
using the \emph{Dissymmetry Theorem} presented by \cite{BLL97}.
%
The proof of the expression of $V(z)$
is a variation of~\cite[Proposition~$V.6$]{FS09}.
\end{proof}
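The fixed-point characterisation $\vect{T}(z) = z \diag{r} \exp(R \vect{T}(z))$ can be iterated numerically for $z$ below the dominant singularity. A minimal Python sketch; for $q = 1$, $R = (1)$, $\vect{r} = (1)$, the function $T(z)$ reduces to the classical Cayley tree function $T = z e^{T}$, which provides a convenient check:

```python
import math

def T_of_z(z, r, R, iters=200):
    """Solve T = z * diag(r) * exp(R T) by fixed-point iteration (z below the singularity)."""
    q = len(r)
    T = [0.0] * q
    for _ in range(iters):
        T = [z * r[i] * math.exp(sum(R[i][j] * T[j] for j in range(q)))
             for i in range(q)]
    return T

# q = 1, R = (1), r = (1): T(z) is the Cayley tree function, T = z e^T
T = T_of_z(0.2, (1.0,), ((1.0,),))[0]
assert abs(T - 0.2 * math.exp(T)) < 1e-12   # fixed point reached
assert 0.25 < T < 0.27                      # T(0.2) is about 0.259
```

The iteration is a contraction for $z$ strictly below the singularity $\rho$, which is all that is needed here.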
\begin{lemma}
The vector of generating functions $\vect{T}(z)$
has the following singular expansion
\[
\vect{T}(z) =
\vect{\tau}
- \vect{\gamma} \sqrt{1 - z/\rho}
+ \mathcal{O}(1-z/\rho)
\]
where the value $\rho$ and the vectors $\vect{\tau}$ and $\vect{\gamma}$
have positive coefficients and are characterized by
the system
\begin{equation} \label{eq:tau_rho_gamma}
\vect{\tau} =
\rho \diag{r} \exp \left( R \vect{\tau} \right),
\quad
\left( I - \diag{\tau} R \right) \vect{\gamma} = 0,
\quad
\frac{1}{2} \gvect{\gamma} R \diag{\gamma} R \vect{\gamma} = \gvect{\scriptstyle{1}} \vect{\gamma}.
\end{equation}
\end{lemma}
\begin{proof}
The square-root singular expansion of $\vect{T}(z)$
is a consequence of \cite[Proposition~$3$]{D99}.
The constraints on its coefficients are obtained
by injection of this expansion into the relation $\vect{T}(z) = z \diag{r} \exp(R \vect{T}(z))$
and identification of the coefficients corresponding to the same power of $\sqrt{1-z/\rho}$.
%
\end{proof}
We can now build $(R,\vect{r})$-graphs
that contain only trees and unicycles.
\begin{theorem} \label{th:UV}
We set $\alpha = \frac{1}{2} \frac{\gvect{\tau} R \vect{\tau}}{\gvect{\scriptstyle{1}} \vect{\tau}}$.
For $c = \frac{m}{n}$ in any closed interval
of $] 0, \alpha [$, let $\zeta_{c}$ and $\vect{\varphi}_{c}$ be characterized by
\begin{equation} \label{eq:UV_saddle_point}
\frac{1}{2}
\frac{\gvect{T}(\zeta_{c}) R \vect{T}(\zeta_{c})}
{\gvect{\scriptstyle{1}} \vect{T}(\zeta_{c})}
=
c
\quad
\text{and }
\quad
\vect{\varphi}_{c} = \frac{\vect{T}(\zeta_{c})}{\gvect{\scriptstyle{1}} \vect{T}(\zeta_{c})},
\end{equation}
then the number of $(n,m)$-$(R, \vect{r})$-graphs that contain
only trees and unicycles is
\begin{align}
&G^{(U, V)}_{R, \vect{r}}(n,m)
\sim
\frac{n^{2m}}{2^m m!}
C_{c, \vect{\varphi}_{c}}^{-1/2}
\left(
\frac{\vect{r}^{\, \vect{\varphi}_{c}}}
{\vect{\varphi}_{c}^{\, \vect{\varphi}_{c}}}
\right)^n
\left( \gvect{\varphi}_{c} R \vect{\varphi}_{c} \right)^m, \nonumber
\\
\text{where } \quad&
\label{eq:C}
C_{c, \vect{x}} =
\frac{1}{c}
\left(
\left(1-c \right) \gvect{\scriptstyle{1}}
\Big( I - \frac{2 c}{\gvect{x} R \vect{x}} \diag{x} R \Big)^{-1}
\vect{x}
- 1
\right)
\det \left( I - \frac{2 c}{\gvect{x} R \vect{x}} \diag{x} R \right).
\end{align}
\end{theorem}
\begin{proof}
A tree with $n$ vertices has $n-1$ edges,
and a unicycle with $n$ vertices has $n$ edges.
Therefore, an $(n,m)$-$(R, \vect{r})$-graph that contains only trees and unicycles
is a set of $n-m$ trees and a set of unicycles
\[
G^{(U, V)}_{R, \vect{r}}(n,m) =
n! [z^n] \frac{U(z)^{n-m}}{(n-m)!} e^{V(z)}.
\]
We apply \cite[Theorem VIII.8]{FS09}
to extract its asymptotics.
The saddle-point equation is Equation~\eqref{eq:UV_saddle_point}.
We then introduce the notation $\vect{\varphi}_{c}$, which satisfies the relation
\[
\vect{T}(\zeta_{c}) =
\frac{2 c}{\gvect{\varphi}_{c} R \vect{\varphi}_{c}}
\vect{\varphi}_{c},
\]
and apply Stirling approximations to rearrange the expression.
\end{proof}
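The saddle-point equation~\eqref{eq:UV_saddle_point} can be solved by bisection when its left-hand side is monotone in $\zeta$, which holds in the $q = 1$ specialisation below. There, with $R = (1)$ and $\vect{r} = (1)$, the equation reduces to $T(\zeta)/2 = c$, hence $\zeta = 2c\, e^{-2c}$, which gives an explicit check:

```python
import math

def T1(z, iters=300):
    """Cayley tree function T = z e^T (the case q = 1, R = (1), r = (1))."""
    T = 0.0
    for _ in range(iters):
        T = z * math.exp(T)
    return T

def solve_zeta(c, lo=1e-9, hi=math.e ** -1 - 1e-9):
    """Bisection on the saddle-point equation; for q = 1 it reads T(zeta)/2 = c."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if T1(mid) / 2 < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = 0.2
zeta = solve_zeta(c)
# analytically T(zeta) = 2c, hence zeta = 2c * exp(-2c)
assert abs(zeta - 2 * c * math.exp(-2 * c)) < 1e-6
```

For $q > 1$ the same bisection applies, with $\vect{T}(z)$ computed by the fixed-point iteration of the previous lemma.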
In a longer version of this article,
we plan to enumerate connected $(n,m)$-$(R, \vect{r})$-graphs
when $m-n \geq 1$ is fixed,
and to prove that such components
appear with a non-zero probability
when $m = \alpha n + \mathcal{O}(n^{2/3})$.
This would extend to $(R,\vect{r})$-graphs
the result obtained for graphs by \cite{JKLP93}
with $\alpha = \frac{1}{2}$.
Therefore, we conjecture that $\frac{m}{n} = \alpha$
is the threshold for the emergence of components
with at least two cycles.
Following the approach of \cite{FKP89},
one could as well derive the limit law
of the number of edges when the first cycle appears
in a random $(n,m)$-$(R, \vect{r})$-graph.
The following lemma links the determinant of the Hessian matrix~$\mathcal{H}_{\ff_{\mn}(\vect{x})}$
to the value $C_{c, \vect{x}}$.
\begin{lemma} \label{th:detEHE_equal_C}
Let $\vect{\tau}$, $C_{c, \vect{x}}$ and $\mathcal{H}_{\ff_{\mn}(\vect{x})}$
be defined by Equations~\eqref{eq:tau_rho_gamma},
\eqref{eq:C} and Lemma~\ref{th:beta}, and set $\alpha$ to $\frac{1}{2} \frac{\gvect{\tau} R \vect{\tau}}{\gvect{\scriptstyle{1}} \vect{\tau}}$,
then
for all $c \in [0,\alpha[$ and $\vect{x} \in \mathcal{S}$, the following identity holds
\[
C_{c, \vect{x}}
=
\det(\mathcal{H}_{\ff_{\mn}(\vect{x})}) \prod_{i=1}^q x_i.
\]
\end{lemma}
\begin{proof}
We introduce the matrix $M = \frac{\gvect{x} R \vect{x}}{2 c} \diag{x} \phantom{}^{-1} - R$
and the vector $\vect{v} = \sqrt{ 2 / (\gvect{x} R \vect{x}) } \vect{x}$.
Since $E$ is a $q \times (q-1)$ matrix,
it becomes a square matrix $F$ if we concatenate to its right
the column vector $\vect{v}$.
The determinant of $\transp{F} M F$ can be expressed as $\det(F)^2 \det(M)$
or using a block-determinant formula.
The equality between those two expressions is
\begin{equation} \label{eq:detEHE_equal_C}
\frac{2}{\gvect{x} R \vect{x}} \det(M)
=
(\gvect{v} M \vect{v} + 1)
\det(\transp{E} M E)
-
\det(\transp{E}(M + M \vect{v}\, \gvect{v} M) E).
\end{equation}
The properties of the matrix $E$ imply
\[
\det(\transp{E} M E) = \det(M) \gvect{\scriptstyle{1}} M^{-1} \vect{\scriptstyle{1}}
\quad \text{and } \quad
\transp{E}(M + M \vect{v}\, \gvect{v} M) E = \frac{\gvect{x} R \vect{x}}{2 c} \mathcal{H}_{\ff_{\mn}(\vect{x})}.
\]
The result is obtained by injection of those relations in Equation~\eqref{eq:detEHE_equal_C}
and rearrangement of the terms.
\end{proof}
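The identity of Lemma~\ref{th:detEHE_equal_C} can also be verified numerically. For $q = 2$ the matrix $E$ reduces to the column $\transp{(1,-1)}$, so the Hessian is a scalar and everything fits in a few lines of pure Python ($R$ is taken symmetric and the test values of $c$ and $\vect{x}$ are arbitrary):

```python
import math

def C_and_detH(c, x, R):
    """Evaluate C_{c,x} and det(H) * prod(x) for q = 2, where E = (1,-1)^T."""
    xRx = sum(x[i] * R[i][j] * x[j] for i in range(2) for j in range(2))
    kappa = 2 * c / xRx
    # M' = I - kappa * diag(x) R  (2x2)
    M = [[(1.0 if i == j else 0.0) - kappa * x[i] * R[i][j] for j in range(2)]
         for i in range(2)]
    detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    # solve M' u = x by Cramer's rule
    u0 = (x[0] * M[1][1] - x[1] * M[0][1]) / detM
    u1 = (x[1] * M[0][0] - x[0] * M[1][0]) / detM
    C = (1 / c) * ((1 - c) * (u0 + u1) - 1) * detM
    # Hessian of Phi_c contracted with E = (1,-1)^T (R symmetric, so R x = x^T R)
    xR = [sum(x[i] * R[i][k] for i in range(2)) for k in range(2)]
    B = [[(1 / x[i] if i == j else 0.0)
          + kappa * ((2 / xRx) * xR[i] * xR[j] - R[i][j]) for j in range(2)]
         for i in range(2)]
    H = B[0][0] - B[0][1] - B[1][0] + B[1][1]
    return C, H * x[0] * x[1]

C, detHx = C_and_detH(0.1, (0.5, 0.5), ((2.0, 1.0), (1.0, 2.0)))
assert abs(C - detHx) < 1e-12
```

Both sides evaluate to the same number, as the lemma asserts.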
\section{Global enumeration when $\Phi_{c}$ has a unique minimum} \label{sec:unique_min}
In this section, we present two cases where
the result of Theorem~\ref{th:global}
can be made more specific.
\begin{theorem} \label{th:small_mn}
Let $\beta$ denote the value
$\sup \{ c\ |\ \forall \vect{x} \in \mathcal{S},\ \det(\mathcal{H}_{\ff_{\mn}(\vect{x})}) > 0 \}$,
then $\beta$ is greater than $\alpha$.
The equation $\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{\varphi})} = \gvect{\scriptstyle{0}}$ defines implicitly on $[0, \beta[$
a unique solution $\vect{\varphi}_{c}$,
given by Equation~\eqref{eq:UV_saddle_point}
when $c \in ]0,\alpha]$.
Finally, when $\frac{m}{n}$ is in a compact interval of $]0, \beta[$,
the asymptotics of $(n,m)$-$(R, \vect{r})$-graphs is
\[
G_{R, \vect{r}}(n,m)
\sim
\frac{n^{2m}}{2^m m!}
\left(
\frac{ \vect{r}^{\, \vect{\varphi}_{c} }}
{ \vect{\varphi}_{c}^{\, \vect{\varphi}_{c} }}
\right)^n
\frac{ \left( \gvect{\varphi}_{c} R \vect{\varphi}_{c} \right)^m }
{ \sqrt{\det(\mathcal{H}_{\ff_{\mn}(\vect{\varphi}_{\mn})}) \det(\diag{\varphi}_{c})} }.
\]
\end{theorem}
\begin{proof}
When $c = 0$, for all $\vect{x} \in \mathcal{S}$
the matrix $\mathcal{H}_{\Phi_0(\vect{x})}$ is positive-definite.
By continuity of its eigenvalues with respect to $c$ and $\vect{x}$,
$\mathcal{H}_{\ff_{\mn}(\vect{x})}$ stays positive-definite for all $c \in [0, \beta[$ and $\vect{x} \in \mathcal{S}$,
so the function $\Phi_{c}$ is convex.
According to Lemma~\ref{th:beta},
$\Phi_{c}$ has no minimum on the boundary of $\mathcal{S}$,
so it has a unique minimum, which cancels its gradient $\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{x})}$.
The asymptotics is then a consequence of Theorem~\ref{th:global}.
For $c \in ]0, \alpha[$, let us define
$\vect{\varphi}_{c}$ as in Equation~\eqref{eq:UV_saddle_point}.
A direct computation shows that
$\log(\gvect{\varphi}_{c}) - \log(\gvect{r}) - 2 c \gvect{\varphi}_{c} R / (\gvect{\varphi}_{c} R \vect{\varphi}_{c})$
is collinear to $\gvect{\scriptstyle{1}}$, so $\vect{\varphi}_{c}$ cancels $\gvect{\scriptstyle{\nabla}}_{\ff_{\mn}(\vect{x})}$.
We extend continuously $\vect{\varphi}_{c}$ for $c = 0$ and $c = \alpha$ with
\[
\vect{\varphi}_0
= \lim_{z \to 0} \frac{\vect{T}(z)}{\gvect{\scriptstyle{1}} \vect{T}(z)}
= \frac{\vect{r}}{\gvect{\scriptstyle{1}} \vect{r}},
\qquad
\vect{\varphi}_{\alpha}
= \lim_{z \to \rho} \frac{\vect{T}(z)}{\gvect{\scriptstyle{1}} \vect{T}(z)}
= \frac{\vect{\tau}}{\gvect{\scriptstyle{1}} \vect{\tau}}.
\]
The last point we need to prove is that $\beta > \alpha$.
According to Lemma~\ref{th:detEHE_equal_C},
$\det(\mathcal{H}_{\ff_{\mn}(\vect{x})})$ vanishes only if $C_{c, \vect{x}}$ does,
and Theorem~\ref{th:UV} implies in particular $C_{c, \vect{\varphi}_{c}} > 0$
when $c \in ]0, \alpha[$.
Observe that $\mathcal{H}_{\ff_{\mn}(\vect{x})}$ is independent of $\vect{r}$
and that for each $\vect{x} \in \mathcal{S}$,
there is a vector $\vect{r} \in \mathds{R}_{>0}^q$ such that $\vect{\varphi}_{c} = \vect{x}$.
This proves $\beta \geq \alpha$.
For $c = \alpha$,
$\vect{\varphi}_{\alpha}$ is equal to $\vect{\tau} / (\gvect{\scriptstyle{1}} \vect{\tau})$.
The definitions~\eqref{eq:tau_rho_gamma} of $\vect{\tau}$
and~\eqref{eq:C} of $C_{c, \vect{\varphi}_{c}}$ then imply
\[
\det \left( I - \frac{2 \alpha}{\gvect{\varphi}_{\alpha} R \vect{\varphi}_{\alpha}} \diag{\varphi}_{\alpha} R \right)
= 0
\quad \text{and } \quad
C_{\alpha, \vect{\varphi}_{\alpha}}
=
\frac{1-\alpha}{\alpha} \gvect{\scriptstyle{1}} \operatorname{adj}(I - \diag{\tau} R) \frac{\vect{\tau}}{\gvect{\scriptstyle{1}} \vect{\tau}}.
\]
The value of the generating function of unrooted $(R,\vect{r})$-trees
$\gvect{\scriptstyle{1}} \vect{T}(z) - \gvect{T}(z) R \vect{T}(z) / 2$
is positive at its dominant singularity $\rho$,
and $\vect{\tau} = \vect{T}(\rho)$,
so $\alpha = \gvect{\tau} R \vect{\tau} / (2 \gvect{\scriptstyle{1}} \vect{\tau})$
is smaller than $1$.
By definition of $\vect{\tau}$, the irreducible matrix $\diag{\tau} R$
has dominant eigenvalue $1$ of dimension $1$ with eigenvector $\vect{\gamma}$,
defined in Equation~\eqref{eq:tau_rho_gamma}.
Therefore, $\operatorname{adj}(I - \diag{\tau} R)$ is proportional to $\vect{\gamma}\, \gvect{\gamma}$,
which has positive coefficients.
This implies $C_{\alpha, \vect{\varphi}_{\alpha}} > 0$,
so $\det(\mathcal{H}_{\Phi_{\alpha}(\vect{x})})$
is positive for all $\vect{x}$ in $\mathcal{S}$ and $\Phi_{\alpha}$ is still strictly convex.
Therefore, $\beta$ is greater than $\alpha$.
\end{proof}
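The construction of the theorem can be replayed numerically: solve the saddle-point equation~\eqref{eq:UV_saddle_point} for $\zeta_{c}$ by bisection, set $\vect{\varphi}_{c} = \vect{T}(\zeta_{c}) / (\gvect{\scriptstyle{1}} \vect{T}(\zeta_{c}))$, and check that the gradient of $\Phi_{c}$ vanishes there. A Python sketch with arbitrary asymmetric weights ($q = 2$; the bracket for the bisection is chosen below the singularity and straddles the target value of $c$):

```python
import math

def T_of_z(z, r, R, iters=500):
    """Fixed-point iteration for T = z * diag(r) * exp(R T), q = 2."""
    T = [0.0, 0.0]
    for _ in range(iters):
        T = [z * r[i] * math.exp(sum(R[i][j] * T[j] for j in range(2)))
             for i in range(2)]
    return T

def edge_ratio(z, r, R):
    """Left-hand side of the saddle-point equation at z."""
    T = T_of_z(z, r, R)
    tRt = sum(T[i] * R[i][j] * T[j] for i in range(2) for j in range(2))
    return 0.5 * tRt / (T[0] + T[1])

r, R, c = (1.0, 2.0), ((2.0, 1.0), (1.0, 2.0)), 0.05
lo, hi = 1e-9, 0.05               # bracket straddling the root, below the singularity
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if edge_ratio(mid, r, R) < c:
        lo = mid
    else:
        hi = mid
T = T_of_z(0.5 * (lo + hi), r, R)
phi = [Ti / (T[0] + T[1]) for Ti in T]
# the gradient of Phi_c contracted with E = (1,-1)^T should vanish at phi
xRx = sum(phi[i] * R[i][j] * phi[j] for i in range(2) for j in range(2))
g = [math.log(phi[k]) - math.log(r[k])
     - 2 * c * sum(phi[i] * R[i][k] for i in range(2)) / xRx for k in range(2)]
assert abs(g[0] - g[1]) < 1e-6
```

This mirrors the "direct computation" invoked in the proof: the quantity $\log(\gvect{\varphi}_{c}) - \log(\gvect{r}) - 2c \gvect{\varphi}_{c} R / (\gvect{\varphi}_{c} R \vect{\varphi}_{c})$ is collinear to $\gvect{\scriptstyle{1}}$ and is therefore annihilated by $E$.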
The following corollary is obtained by comparison
of the asymptotics of $G^{(U, V)}_{R, \vect{r}}(n,m)$ from Theorem~\ref{th:UV}
and $G_{R, \vect{r}}(n,m)$ from Theorem~\ref{th:small_mn},
using the relation
$C_{c, \vect{x}} = \det(\mathcal{H}_{\ff_{\mn}(\vect{x})}) \prod_{i=1}^q x_i$
from Lemma~\ref{th:detEHE_equal_C}.
\begin{corollary}
When $c = \frac{m}{n}$ is in a closed interval of $]0, \alpha[$,
then almost all $(n,m)$-$(R, \vect{r})$-graphs contain only trees and unicycles.
\end{corollary}
When $R$ is the adjacency matrix of a regular graph,
$\vect{\scriptstyle{1}}$ is an eigenvector of $R$ and $\vect{\varphi}_{c}$ becomes explicit.
\begin{theorem} \label{th:eigenvectorone}
Let $\lambda_1 \geq \cdots \geq \lambda_q$ denote the eigenvalues of $R$,
and assume that $\vect{\scriptstyle{1}}$ is an eigenvector of $R$,
then
when $\frac{m}{n}$ is in a compact interval of $]0, \beta[$,
the asymptotics of $(n,m)$-$(R,\vect{\scriptstyle{1}})$-graphs is
\[
G_{R, \vect{\scriptstyle{1}}}(n,m)
\sim
\frac{n^{2m}}{2^m m!}
\lambda_1^m
q^{n-m}
\prod_{i=2}^q
\left( 1 - 2 \frac{\lambda_i}{\lambda_1} \frac{m}{n} \right)^{-1/2}.
\]
If $\lambda_2$ is positive, $\beta < \frac{\lambda_1}{2\lambda_2}$,
otherwise the previous asymptotics
holds for $\frac{m}{n}$ in any compact interval of $\mathds{R}_{> 0}$.
\end{theorem}
\begin{proof}
$R$ has non-negative coefficients and is irreducible.
The Perron-Frobenius theorem implies that its dominant eigenvalue
is positive and corresponds to the unique eigenvector with positive coefficients,
thus $R \vect{\scriptstyle{1}} = \lambda_1 \vect{\scriptstyle{1}}$.
The first assertion of the theorem is then a consequence of Theorem~\ref{th:small_mn}
with $\vect{\varphi}_{c} = \vect{\scriptstyle{1}} / q$,
\[
\gvect{\scriptstyle{\nabla}}_{\Phi_{c}(\vect{\scriptstyle{1}} / q)} = \gvect{\scriptstyle{0}}
\quad \text{and } \quad
\det(\mathcal{H}_{\Phi_{c}(\vect{\scriptstyle{1}}/q)})
=
\frac{q^q}{1-2c}
\det \left( I - \frac{2c}{\lambda_1} R \right)
=
q^q
\prod_{i=2}^q
\left( 1 - 2 c \frac{\lambda_i}{\lambda_1} \right).
\]
Observe that $\det(\mathcal{H}_{\Phi_{c}(\vect{\scriptstyle{1}}/q)})$ vanishes at $c = \frac{\lambda_1}{2 \lambda_2}$.
Let us now consider the case $\lambda_2 \leq 0$.
The function $\log(\gvect{x}) \vect{x}$
reaches its unique minimum on $\mathcal{S}$ at $\vect{\scriptstyle{1}} / q$.
To prove that the function
$\Phi_{c}(\vect{x}) = \log(\gvect{x}) \vect{x} - c \log(\gvect{x} R \vect{x})$
does the same, it is then sufficient to prove that
$\gvect{x} R \vect{x}$ reaches at $\vect{\scriptstyle{1}}/q$ its global maximum.
Since $\vect{x}$ is in $\mathcal{S}$, there is a vector $\vect{y} \in \mathds{R}^{q-1}$
that satisfies $\vect{x} = \vect{\scriptstyle{1}} /q + E \vect{y}$.
Since $\gvect{\scriptstyle{1}} E$ is equal to $\gvect{\scriptstyle{0}}$, we obtain
\[
\gvect{x} R \vect{x} = \frac{\lambda_1}{q} + \gvect{y} \transp{E} R E \vect{y}.
\]
The symmetry of $R$ implies the existence of an orthogonal matrix $Q$
with first row $\gvect{\scriptstyle{1}}/\sqrt{q}$ such that
$R = \transp{Q} \operatorname{diag}(\lambda_1, \ldots, \lambda_q) Q$.
The first row of $Q E$ is $\gvect{\scriptstyle{0}}$.
Let $P$ denote the $(q-1) \times (q-1)$ matrix in the lower block of $Q E$.
Then $P$ is invertible, and
\[
\transp{E} \transp{Q} \operatorname{diag}(\lambda_1, \ldots, \lambda_q) Q E
= \transp{P} \operatorname{diag}(\lambda_2, \ldots, \lambda_q) P.
\]
We set $\vect{z} = P \vect{y}$ and obtain
$
\gvect{x} R \vect{x} = \frac{\lambda_1}{q} + \sum_{i=2}^q \lambda_i z_i^2.
$
Therefore, $\gvect{x} R \vect{x}$ reaches its global maximum $\lambda_1 / q$
when $\vect{z} = \vect{\scriptstyle{0}}$, which implies $\vect{y} = \vect{\scriptstyle{0}}$ and $\vect{x} = \vect{\scriptstyle{1}}/q$.
\end{proof}
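When $R$ is the adjacency matrix of a regular graph, the correction factor in the theorem only involves the spectrum of $R$. A Python sketch for the $4$-cycle, whose spectrum is $(2, 0, 0, -2)$:

```python
import math

# adjacency matrix of the 4-cycle: 2-regular, so the all-ones vector is an eigenvector
R = ((0, 1, 0, 1), (1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 1, 0))
assert all(sum(row) == 2 for row in R)   # regularity: R 1 = 2 * 1

eig = (2.0, 0.0, 0.0, -2.0)              # spectrum of the 4-cycle, lambda_1 = 2

def correction(c, eig):
    """The product over the non-dominant eigenvalues appearing in the theorem."""
    lam1 = eig[0]
    return math.prod((1 - 2 * (lam / lam1) * c) ** -0.5 for lam in eig[1:])

# only the eigenvalue -2 contributes: the product reduces to (1 + 2c)^{-1/2}
assert abs(correction(0.25, eig) - 1.5 ** -0.5) < 1e-12
```

Since $\lambda_2 = 0$ here, the asymptotics holds for $\frac{m}{n}$ in any compact interval of $\mathds{R}_{>0}$, as stated at the end of the theorem.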
A future direction of research
is the expansion of the family of parameters $R$, $\vect{r}$ and $c$
for which we can link the asymptotics of $(n,m)$-$(R, \vect{r})$-graphs
to the spectrum of $R$.
Information on the location of the minima of $\Phi_{c}$
for $c$ greater than $\beta$ would also be instructive.
\section{Simple $(R,\vect{r})$-Graphs} \label{sec:simple}
The previous sections focused on $(R,\vect{r})$-graphs
where loops and multiple edges were allowed.
We now extend those results to simple $(R,\vect{r})$-graphs,
starting with a theorem similar to Theorem~\ref{th:exact}.
\begin{theorem}
The number of simple $(n,m)$-$(R, \vect{r})$-graphs is
\begin{equation} \label{eq:SGexact}
\mathit{SG}_{R, \vect{r}}(n,m) =
[w^m] \sum_{\gvect{\scriptstyle{1}} \vect{n} = n}
\binom{n}{\vect{n}}
\vect{r}^{\, \vect{n}}
\prod_{1 \leq i < j \leq q}
( 1 + R_{i,j} w)^{n_i n_j}
\prod_{i=1}^q
(1 + R_{i,i} w)^{n_i(n_i-1) / 2}.
\end{equation}
\end{theorem}
\begin{proof}
We consider a partition $V_1 \uplus \cdots \uplus V_q = [1..n]$
of the set of vertices
and define $n_i = |V_i|$ for all~$i$.
Let $\mathit{SG}(\vect{n},m)$ denote the set of simple $(n,m)$-$(R, \vect{r})$-graphs
where each vertex of $V_i$ has type $i$.
When $i \neq j$, there are $n_i n_j$ available edges between vertices of $V_i$ and $V_j$,
and for all $i$, there are $n_i (n_i-1)/2$ possible edges between vertices of $V_i$.
Therefore, the number of graphs
in $\mathit{SG}(\vect{n}, m)$ is
\[
\sum_{G \in \mathit{SG}(\vect{n},m)}
\omega(G)
=
[w^m]
\vect{r}^{\, \vect{n}}
\prod_{1 \leq i < j \leq q}
( 1 + R_{i,j} w)^{n_i n_j}
\prod_{i=1}^q
(1 + R_{i,i} w)^{n_i(n_i-1) / 2}.
\]
The result of the theorem is obtained by summation
over all partitions $V_1 \uplus \cdots \uplus V_q = [1..n]$.
\end{proof}
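Expression~\eqref{eq:SGexact} can be validated against a direct enumeration over type assignments and edge subsets for small parameters. A Python sketch (arbitrary symmetric integer weights, $q = 2$, $n = 4$, $m = 3$):

```python
import math
from itertools import combinations, product

R = ((1, 2), (2, 1))     # arbitrary symmetric integer weights
r = (1, 1)
n, m, q = 4, 3, 2

def pmul(p, s):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(s) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(s):
            out[i + j] += a * b
    return out

# formula: sum over type counts of multinomial * r^n * [w^m](edge polynomial)
formula = 0
for n1 in range(n + 1):
    counts = (n1, n - n1)
    poly = [1]
    for i in range(q):
        for j in range(i, q):
            # number of available edge slots between (or within) the type classes
            e = counts[i] * counts[j] if i < j else counts[i] * (counts[i] - 1) // 2
            for _ in range(e):
                poly = pmul(poly, [1, R[i][j]])
    coeff = poly[m] if m < len(poly) else 0
    formula += (math.comb(n, n1)
                * math.prod(ri ** ci for ri, ci in zip(r, counts)) * coeff)

# brute force: total weight over all type assignments and all m-edge subsets
pairs = list(combinations(range(n), 2))
brute = sum(
    math.prod(r[t] for t in types) * math.prod(R[types[u]][types[v]] for u, v in E)
    for types in product(range(q), repeat=n)
    for E in combinations(pairs, m))

assert formula == brute
```

Both computations perform the same weighted sum; the formula merely groups the type assignments by their type counts, which is exactly the content of the proof above.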
\begin{theorem} \label{th:simple}
With the notations of Theorem~\ref{th:global},
for all $c=\frac{m}{n}$ in $[a,b]$,
the asymptotics of simple $(n,m)$-$(R, \vect{r})$-graphs is
\[
\mathit{SG}_{R, \vect{r}}(n,m) \sim
\frac{n^{2m}}{2^m m!}
\sum_{ \vect{\varphi} \in \{ \vect{\varphi}_{c,1}, \ldots, \vect{\varphi}_{c, s} \} }
\left(
\frac{ \vect{r}^{\, \vect{\varphi} }}
{ \vect{\varphi}^{\, \vect{\varphi} }}
\right)^n
\left( \gvect{\varphi} R \vect{\varphi} \right)^m
\frac{ e^{
- \frac{c}{\gvect{\varphi} R \vect{\varphi}} \operatorname{Tr} (\diag{\varphi} R)
- \left( \frac{c}{\gvect{\varphi} R \vect{\varphi}} \right)^2 \operatorname{Tr} ((\diag{\varphi} R)^2) }}
{ \sqrt{\det(\mathcal{H}_{\Phi_{c}(\vect{\varphi})}) \prod_{i=1}^q \varphi_i} }.
\]
\end{theorem}
\begin{proof}
Starting with the exact expression~\eqref{eq:SGexact},
we replace $w$ with $n w$
\begin{align} \label{eq:SGR1}
&\mathit{SG}_{R, \vect{r}}(n,m) =
n^{m}
\sum_{\gvect{\scriptstyle{1}} \vect{n} = n}
\binom{n}{\vect{n}}
\vect{r}^{\, \vect{n}}
[w^m]
e^{F_{n}(\vect{n}/n, w)},
\\
\text{where } \quad
&
F_{n}(\vect{x}, w)
=
\log \Bigg(
\prod_{1 \leq i < j \leq q}
\left( 1 + R_{i,j} \frac{w}{n} \right)^{n x_i n x_j}
\prod_{i=1}^q
\left( 1 + R_{i,i} \frac{w}{n} \right)^{n x_i (n x_i-1) / 2}
\Bigg). \nonumber
\end{align}
An expansion of the logarithm reduces this expression to
\[
F_{n}(\vect{x}, w)
=
n \gvect{x} R \vect{x} \frac{w}{2}
- \frac{1}{2} \operatorname{Tr}(\diag{x} R) w
- \frac{1}{4} \operatorname{Tr}((\diag{x} R)^2) w^2
+ \mathcal{O}(n^{-1}).
\]
With $c = \frac{m}{n}$ bounded,
we apply \cite[Theorem~VIII.8]{FS09}
with saddle-point $\zeta = 2 c /(\gvect{x} R \vect{x})$:
\[
[w^m] e^{F_n(\vect{x},w)}
=
\frac{n^m}{2^m m!}
(\gvect{x} R \vect{x})^m
\exp \left(
- \frac{1}{2} \operatorname{Tr}(\diag{x} R) \frac{2 c}{\gvect{x} R \vect{x}}
- \frac{1}{4} \operatorname{Tr}((\diag{x} R)^2) \left(\frac{2 c}{\gvect{x} R \vect{x}}\right)^2
\right)
(1 + \mathcal{O}(n^{-1}))
\]
holds uniformly for $\vect{x} \in \mathcal{S}$.
Adopting the notation $\Phi_{c}$ of Lemma~\ref{th:laplace_setting},
Equation~\eqref{eq:SGR1} then becomes
\begin{align*}
&\mathit{SG}_{R, \vect{r}}(n,m) =
\frac{n^{2m}}{2^m m!}
\frac{1}{(2 \pi n)^{\frac{q-1}{2}}}
\sum_{\{\vect{n} \in \mathds{N}^q\ |\ \gvect{\scriptstyle{1}} \vect{n} = n\}}
\mathit{SA}_n \left( \frac{\vect{n}}{n} \right)
e^{- n \Phi_{\frac{m}{n}} \left( \frac{\vect{n}}{n} \right)},
\\ \text{where } \quad &
\mathit{SA}_n(\vect{x})
=
\prod_{i=1}^q
\frac{(n x_i)^{n x_i} e^{-n x_i} \sqrt{2 \pi n x_i}}{\Gamma(n x_i + 1)}
\frac{1}{\sqrt{x_i}}
e^{
- \operatorname{Tr}(\diag{x} R) \frac{c}{\gvect{x} R \vect{x}}
- \operatorname{Tr}((\diag{x} R)^2) \left(\frac{c}{\gvect{x} R \vect{x}}\right)^2
}
(1+\mathcal{O}(n^{-1})).
\end{align*}
The end of the proof is the same as in Theorem~\ref{th:global}.
\end{proof}
Theorems~\ref{th:small_mn} and~\ref{th:eigenvectorone}
extend to simple $(R,\vect{r})$-graphs in the same way.
\begin{corollary}
When $\frac{m}{n}$ is in a closed interval of $]0, \alpha[$,
almost all simple $(n,m)$-$(R, \vect{r})$-graphs contain only trees and unicycles.
\end{corollary}
\begin{proof}
We verify that the asymptotics
of simple $(n,m)$-$(R, \vect{r})$-graphs containing only trees and unicycles
is equal to the asymptotics of all simple $(n,m)$-$(R, \vect{r})$-graphs, derived in Theorem~\ref{th:simple}.
The generating function $U(z)$ of $(R,\vect{r})$-trees is the same for graphs and simple graphs.
The generating function $\mathit{SV}(z)$ of simple $(R,\vect{r})$-unicycles
becomes
\[
\mathit{SV}(z) =
V(z)
- \frac{1}{2} \operatorname{Tr}( \diag{T}(z) R )
- \frac{1}{4} \operatorname{Tr} \left( (\diag{T}(z) R)^2 \right)
\]
to avoid loops and double edges in the cycle.
The end of the proof is the same as in Theorem~\ref{th:UV}.
\end{proof}
\bibliographystyle{abbrvnat}
\label{sec:intro}
Since the hot, X-ray emitting haloes -- or intracluster medium (ICM) -- pervading galaxy clusters, groups, and massive ellipticals are in collisional ionisation equilibrium (CIE), abundances of the chemical elements they retain over Gyrs can be robustly measured. These elements include oxygen (O), neon (Ne), and magnesium (Mg), mainly produced by core-collapse supernovae (SNcc), as well as chromium (Cr), manganese (Mn), iron (Fe), and nickel (Ni), mainly produced by Type Ia supernovae (SNIa), and silicon (Si), sulfur (S), argon (Ar), calcium (Ca) coming from both SNIa and SNcc \citep[for recent reviews, see e.g.][]{2008SSRv..134..337W,2013ARA&A..51..457N}.
Interestingly, these two types of supernovae originate from very different progenitors with different lifetimes and, consequently, should not enrich their surroundings in the same way and with the same delays. Whereas SNcc result from the explosion of a massive star and occur a few million years after the formation of their progenitor, SNIa are thought to occur with a substantial delay, i.e. when a white dwarf (WD) gains enough material from a companion object (either a main-sequence star or another degenerate core remnant) to ignite explosive nucleosynthesis. Consequently, accurate measurements of the chemical composition of the ICM (or of any other astrophysical system) provide invaluable clues to understand its enrichment, as well as its subsequent evolution \citep[for previous studies, see e.g.][]{1996ApJ...466..686M,2002A&A...381...21F,2005ApJ...620..680B,2007A&A...465..345D,2007ApJ...667L..41S}.
Recently, we compiled deep \textit{XMM-Newton} EPIC and RGS observations of 44 nearby cool-core ellipticals, galaxy groups, and clusters (the CHEERS\footnote{CHEmical Enrichment Rgs Sample} catalogue) in order to measure accurately the O/Fe, Ne/Fe, Mg/Fe, Si/Fe, S/Fe, Ar/Fe, Ca/Fe, Cr/Fe, Mn/Fe, and Ni/Fe ratios and to derive a complete abundance pattern, representative of the nearby ICM \citep[][hereafter Paper I]{2016A&A...592A.157M}. This pattern was found to be consistent with solar values for the seven former ratios while, on the contrary, Cr/Fe, Mn/Fe, and Ni/Fe were measured higher than solar. In a second paper \citep[][hereafter Paper II]{2016A&A...595A.126M}, we compared the measured X/Fe abundance ratios with recent or commonly used SNIa and SNcc yield models, in order to provide reliable constraints on SN explosions and/or their progenitors. In particular, we showed that an accurate determination of the Ni/Fe is extremely important as it may dramatically alter our astrophysical interpretations of the ICM enrichment.
Some of these results, however, appeared to be in tension with recent measurements of the Perseus cluster core obtained with the \textit{Hitomi} observatory \citep[][hereafter H17]{2017Natur.551..478H}. Unlike our super-solar values measured for Cr/Fe, Mn/Fe, and Ni/Fe in Paper I, the excellent spectral resolution offered by the micro-calorimeter SXS showed, instead, a chemical composition that is remarkably close to that of the Solar neighbourhood for all the investigated ratios. These discrepancies have been recently confirmed by \citet[][hereafter S18]{2018arXiv180600932S}, who revisited the Perseus abundance ratios by combining \textit{Hitomi} SXS and \textit{XMM-Newton} RGS instruments. Although these three studies (Paper I; H17; S18) are currently the state-of-the-art of our knowledge of the chemical composition of the ICM, the discrepancies noted above may find various origins, since the observations were performed on different systems (44 systems vs. Perseus only), with different instruments (CCDs vs. micro-calorimeter) and with different spectral codes (see below). Two questions particularly arise: (i) To which extent are CCD instruments reliable to constrain abundance ratios? (ii) Is the chemical composition of Perseus rather unique, or do most other clusters exhibit a solar chemical composition as well?
Spectral codes, like \textsc{apec} \citep{2012ApJ...756..128F} and \textsc{spex} \citep[][]{1996uxsa.conf..411K} have had major updates in preparation for \textit{Hitomi}. Although the codes still differ \citep{2018PASJ...70...12H}, they have converged considerably in recent years.
Between 2016 and 2017, however, a major update of the atomic data included in the spectral fitting package \textsc{spex} was publicly released. Whereas H17 used the new version of the code, Papers I and II appeared beforehand. It was recently shown that this update has a major impact on measuring the Fe abundance in groups and ellipticals \citep{2018MNRAS.478L.116M} and could thus potentially affect the abundance of other elements \citep[see also figure~9 in][]{2017A&A...603A..80M}. By construction, the estimated abundances of a CIE plasma depend on the input atomic calculations in the spectral model that is used. Therefore, comparing the CHEERS results with the best measurements of the Perseus cluster (H17; S18) in a consistent way is essential for setting the most accurate constraints on the chemical composition of the ICM.
In this Letter, we explore the effects of such model improvements on our previously measured abundance ratios in the CHEERS sample (Paper I). We assume $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$ = 0.3, and $\Omega_\Lambda$= 0.7. Unless otherwise stated, the error bars are given within a 68\% confidence interval. All the abundances mentioned in this work are given with respect to the proto-solar values (referred to as "solar" for convenience) derived from \citet{2009LanB...4B...44L}.
\section{Methods}
\label{sec:methods}
\subsection{The sample}
\label{sec:the_sample}
The sample, the data reduction, and the spectral analysis and strategies are all described in detail in Paper I. Like our present work, that previous study focused on the \textit{XMM-Newton} observations of 44 nearby ($z < 0.1$) cool-core clusters, groups, and ellipticals, all being part of the CHEERS project \citep[see also][]{2015A&A...575A..38P,2017A&A...607A..98D}. The main selection criterion of the sample is the detection of the \ion{O}{viii} emission line by the RGS instrument at >5$\sigma$ significance. In this way, we ensure the selection of clusters with prominent metal lines in their cores, which allows a robust determination of most of the abundances with the EPIC instruments as well.
As explained in Paper I, by selection, the 24 hottest systems of the sample ($kT > 1.7$ keV; "clusters") are investigated within $0.2 r_{500}$, while the remaining 20 cooler systems ($kT < 1.7$ keV; "groups/ellipticals") could only be studied within $0.05 r_{500}$. The only exception is the very nearby elliptical galaxy M\,87, whose central temperature is $\sim$2 keV but whose $0.2 r_{500}$ limit extends beyond the EPIC field of view. This system is thus studied within $0.05 r_{500}$. In Paper I, we showed that adopting these different extraction radii has a negligible impact on our final results.
\subsection{Reanalysis of our data}\label{sec:SPEX2vs3}
Since 1996, the original \textsc{mekal} code, used to model thermal plasmas \citep{1972A&A....20..215M,1985A&AS...62..197M,1986A&AS...65..511M}, has been developed independently within the \textsc{spex} spectral fitting package \citep{1996uxsa.conf..411K} and gradually improved. Up to the version 2.06, the code made use of an atomic database and a collection of routines that are all referred to \textsc{spexact} (\textsc{spex} Atomic Code and Tables) v2.
Since 2016, however, a major update (version 3.04, hereafter \textsc{spexact} v3) has been released on both the atomic database and the corresponding routines. The main changes, including updates in line emissivities \citep[e.g.][]{2016A&A...587A..84M,2017A&A...601A..85U} and the incorporation of $\sim$400 times more transitions, are described in \citet{2017A&A...607A..98D} and \citet{2018MNRAS.478L.116M}.
In Paper I, the O/Fe and Ne/Fe ratios were already corrected to their \textsc{spexact} v3 estimates; therefore, there is no need to reconsider them further and we devote this Letter to the EPIC measurements of Mg/Fe, Si/Fe, S/Fe, Ar/Fe, Ca/Fe, Cr/Fe, Mn/Fe, and Ni/Fe. The general fitting procedure, including (i) the estimation of the hydrogen column density $n_\text{H}$, (ii) the treatment of the EPIC background, and (iii) the approach of refitting K-shell lines locally, is described extensively in Paper I. The only exception regarding (iii) is M\,87, which shows significantly different "full band" and "local" Fe values, and for which we adopt the latter (since its Fe-L complex may suffer from unexpected biases, such as a complex temperature structure). In addition, we account for the multi-temperature state of the plasma by adopting the approach described in \citet[][3-T, with $kT_\text{mean}$ the temperature of the main component]{2018MNRAS.478L.116M}.
\section{Results}
\label{sec:results}
In Fig.~\ref{fig:kT_XFe}, we report the Mg/Fe, Si/Fe, and S/Fe ratios of the CHEERS systems as a function of their $kT_\text{mean}$. We choose to report the EPIC MOS and pn results individually because these instruments are known to have slight but significant discrepancies in their best-fit parameters \citep[][]{2015A&A...575A..30S,2015A&A...575A..37M}. It clearly appears that these ratios remain very similar over the full range of $kT_\text{mean}$ considered here. In fact, when averaging these ratios over the "clusters" and "groups/ellipticals" subsamples (Fig.~\ref{fig:kT_XFe}, dashed lines and filled areas), we find that, except for a moderate ($\sim$26\%) decrease of Mg/Fe from "clusters" to "groups/ellipticals" in the MOS measurements, the mean ratios are either consistent within 1$\sigma$ or differ by less than 15\%, with no systematic trend. Since Mg, Si, and S may be considered reliable tracers of SNcc products while Fe originates predominantly from SNIa (e.g. Paper II), the similar chemical composition in all our systems strongly suggests that SNIa and SNcc enrich the hot atmospheres of clusters, groups, and ellipticals with the same relative importance. Similar conclusions were obtained by \citet{2009A&A...508..565D} and \citet{2007A&A...465..345D}, and in Paper I (albeit with more restricted samples and/or outdated spectral codes).
\begin{figure}
\includegraphics[trim=0.3cm 0.6cm 1.5cm 1.9cm, clip=true,width=\columnwidth]{fig_kT_XFe.pdf}
\caption{Abundance ratios (\textit{top:} Mg/Fe; \textit{middle:} Si/Fe; \textit{bottom:} S/Fe) as a function of the mean temperature of the CHEERS systems (using \textsc{spexact} v3). EPIC MOS and pn measurements are shown separately. The vertical dotted line ($kT_\text{mean} = 1.7$ keV) separates the "groups/ellipticals" and the "clusters" subsamples. The blue and red horizontal dashed lines (and filled areas) indicate the mean values (and errors) averaged over these two sub-samples for MOS and pn, respectively (see text).}
\label{fig:kT_XFe}
\end{figure}
The remarkable similarity of the X/Fe ratios in all these systems also allows us to average them over the entire sample in order to obtain a combined abundance pattern, representative of the (nearby cool-core) ICM as a whole. These CHEERS average abundance ratios are shown in Fig.~\ref{fig:abun_CHEERS} for MOS and pn independently (along with the RGS measurements of O/Fe and Ne/Fe from Paper I). Because in most cases the statistical uncertainties are lower than the respective MOS-pn discrepancies, it is very important to account for the latter in our total uncertainties. In order to be conservative, for each ratio we consider the two extreme values reached by the individual MOS and pn measurements (and their 1$\sigma$ statistical uncertainties) as the total MOS-pn cross-calibration uncertainties (green darker boxes in Fig.~\ref{fig:abun_CHEERS}). This approach allows us to define mean values for the X/Fe ratios and their associated conservative limits ($\sigma_\text{cons}$; including the statistical uncertainties and, when applicable, the MOS-pn discrepancies), as provided in Table~\ref{table:systematics_SPEX3}.
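To make the construction of these conservative limits concrete, the following minimal Python sketch symmetrizes the MOS-pn envelope into a mean value and a half-width $\sigma_\text{cons}$. The function name and the symmetric treatment of the $1\sigma$ statistical errors are our own illustrative assumptions, not part of the original pipeline:

```python
def conservative_limits(mos, mos_err, pn, pn_err):
    """Envelope spanned by the MOS and pn measurements and their
    1-sigma statistical errors, symmetrized into mean +/- sigma_cons."""
    lo = min(mos - mos_err, pn - pn_err)
    hi = max(mos + mos_err, pn + pn_err)
    return 0.5 * (lo + hi), 0.5 * (hi - lo)
```

For instance, hypothetical MOS and pn ratios of $1.00 \pm 0.05$ and $0.90 \pm 0.04$ would give a mean of 0.955 with $\sigma_\text{cons} = 0.095$.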
\begin{table}
\begin{centering}
\caption{Average abundance ratios re-estimated from the CHEERS sample (using \textsc{spexact} v3), as well as their systematic and total uncertainties. An absence of value ($-$) means that no further uncertainty was required.}
\label{table:systematics_SPEX3}
\setlength{\tabcolsep}{9pt}
\begin{tabular}{c| c| c c c}
\hline \hline
Element & Mean value& $\sigma_\text{cons}$ & $\sigma_\text{int}$ & $\sigma_\text{tot}$ \\
\hline
O/Fe & $0.817$ & $0.018$ & $0.174$ & $0.175$ \\
Ne/Fe & $0.724$ & $0.028$ & $0.130$ & $0.133$ \\
Mg/Fe & $0.937$ & $0.071$ & $0.013$ & $0.072$ \\
Si/Fe & $0.949$ & $0.034$ & $0.051$ & $0.061$ \\
S/Fe & $1.004$ & $0.021$ & $-$ & $0.021$ \\
Ar/Fe & $0.980$ & $0.085$ & $-$ & $0.085$ \\
Ca/Fe & $1.272$ & $0.103$ & $-$ & $0.103$ \\
Cr/Fe & $0.986$ & $0.188$ & $-$ & $0.188$ \\
Mn/Fe & $1.557$ & $0.774$ & $-$ & $0.774$ \\
Ni/Fe & $0.959$ & $0.073$ & $0.375$ & $0.382$ \\
\hline
\end{tabular}
\par\end{centering}
\end{table}
\begin{figure}
\includegraphics[trim=0.3cm 0.7cm 0.5cm 0.4cm, clip=true,width=\columnwidth]{fig_abun_CHEERS.pdf}
\caption{Average abundance ratios re-estimated from the entire CHEERS sample (using \textsc{spexact} v3), for EPIC MOS and pn separately (blue and red data points), then adopting conservative limits accounting for the MOS-pn discrepancies (green darker boxes) and possible additional scatter (green lighter boxes). For Ni/Fe, only the MOS measurements are considered in these conservative limits (see text). The O/Fe and Ne/Fe ratios (grey boxes) are adopted from Paper I (already corrected from \textsc{spexact} v3). The same conservative limits obtained when excluding Perseus from the sample are shown by the black contours.}
\label{fig:abun_CHEERS}
\end{figure}
The Ni/Fe ratio deserves some extra attention. In particular, the large discrepancy between MOS and pn measurements (Paper I) suggests that this ratio is very sensitive to the instrumental background. In fact, in pn spectra we note the presence of two strong and unstable fluorescent lines, at $\sim$7.5 keV (Ni K$\alpha$) and $\sim$8.0 keV (Cu K$\alpha$). Since these instrumental lines are partly blended with the Ni-K complex from the source, they strongly bias the pn measurements of the Ni abundance. By contrast, no instrumental feature is reported in MOS spectra within this band. This explains the inconsistency between the MOS and pn values, which previously led to large systematic error bars (figure~6, right, of Paper I). For this reason, in the rest of this Letter we rely only on the MOS values of Ni/Fe.
In addition, individual measurements may also have a (limited) intrinsic scatter, due to the slight differences in the chemical histories of each system. This additional uncertainty, $\sigma_\text{int}$, is evaluated as described in Paper I. In short, the combined MOS+pn measurements are fit with a constant. If $\chi^2/\text{d.o.f.} > 1$, we increment $\sigma_\text{int}$, which is added in quadrature to all the individual measurements, until $\chi^2/\text{d.o.f.}$ reaches unity. This scatter (green lighter boxes in Fig.~\ref{fig:abun_CHEERS}) is then added in quadrature to the MOS-pn discrepancies (or, for the Ni/Fe ratio, to the MOS results only) to obtain our final uncertainties, $\sigma_\text{tot}$ (see Table~\ref{table:systematics_SPEX3}).
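The $\sigma_\text{int}$ recipe above can be sketched in a few lines of Python. The function name, the step size, and the use of a simple inverse-variance weighted constant fit are illustrative assumptions:

```python
import numpy as np

def intrinsic_scatter(values, stat_err, step=1e-3, max_sigma=5.0):
    """Increase sigma_int, added in quadrature to every statistical
    error, until a constant fit reaches chi^2/d.o.f. <= 1."""
    values = np.asarray(values, dtype=float)
    stat_err = np.asarray(stat_err, dtype=float)
    dof = len(values) - 1
    sigma_int = 0.0
    while sigma_int < max_sigma:
        err = np.sqrt(stat_err**2 + sigma_int**2)
        weights = 1.0 / err**2
        best = np.sum(weights * values) / np.sum(weights)  # constant fit
        if np.sum(((values - best) / err) ** 2) / dof <= 1.0:
            break
        sigma_int += step
    return sigma_int
```

With mutually consistent measurements the function returns zero; otherwise it returns the smallest scatter that reconciles the data with a constant.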
Our CHEERS abundance pattern is found to be remarkably consistent with the solar ratios. However, because (i) Perseus is the brightest object of the CHEERS sample and (ii) the abundance pattern of that system is already known to be solar (H17; S18), one might wonder how strongly this particular system weighs on the results of our entire sample. In Fig.~\ref{fig:abun_CHEERS} (black contours), we show how the conservative limits presented above change when the Perseus measurements are excluded from the sample. The excellent consistency between the two patterns clearly illustrates that, like Perseus, most other systems also show a chemical composition that is surprisingly close to solar.
\section{Comparison with previous measurements}
\label{sec:Ni_bias}
We have shown that, when applying the latest atomic database to the CHEERS sample, the chemical composition of the ICM is found to be remarkably similar to that of the Solar neighbourhood. As shown in Fig.~\ref{fig:abun_CHEERS_comparison}, this result is in excellent agreement with the recent measurements of several abundance ratios in the Perseus core by \textit{Hitomi} (H17; see also S18), although the measurements were done on different systems (the CHEERS sample vs. Perseus) and with different instruments (EPIC CCDs vs. SXS micro-calorimeter). The solar Cr/Fe and Ni/Fe ratios found in this work differ from those in Paper I. We conclude that the previous discrepancies in Cr/Fe and Ni/Fe reported by H17 (see their figure~2) were due to the use of different spectral code versions rather than the moderate spectral resolution of EPIC. These solar abundance ratios were also obtained in Perseus using the up-to-date version of \textsc{apec}, with limited differences compared to \textsc{spexact} v3 \citep[S18; see also][]{2018PASJ...70...12H}.
\begin{figure}
\includegraphics[trim=0.3cm 0.7cm 0.5cm 0.4cm, clip=true,width=\columnwidth]{fig_abun_CHEERS_comparison.pdf}
\caption{Comparison of our updated (\textsc{spexact} v3) averaged abundance ratios (blue squares; thick error bars include statistical uncertainties and MOS-pn discrepancies while thin error bars include additional scatter) with recent results from the literature. For comparison, current uncertainties on the solar ratios \citep{2009LanB...4B...44L} are shown by the grey boxes.}
\label{fig:abun_CHEERS_comparison}
\end{figure}
Among the \textsc{spexact} v2--v3 differences seen in Fig.~\ref{fig:abun_CHEERS_comparison}, the Ni/Fe ratio is the most striking. To better understand the reasons for such a decrease, we show in Fig.~\ref{fig:Ni_models} the CIE emission calculated successively with these two versions for a moderately hot plasma ($kT = 3$ keV, solar abundances).
Within the Ni-K energy band ($\sim$7.5--8 keV), only Fe and Ni ions produce emission lines, and we separate the transitions of these two elements in the upper and lower panels of Fig.~\ref{fig:Ni_models}, respectively.
\begin{figure}
\includegraphics[trim=0.2cm 0cm 0.2cm 0.6cm, clip=true,width=\columnwidth]{fig_Ni_line_total.pdf}
\caption{Comparison between \textsc{spexact} v2 and \textsc{spexact} v3 for a $kT = 3$ keV plasma, zoomed on the Ni-K complex ($\sim$7.5--8 keV). The transitions of the Fe and Ni ions are shown separately in the upper and lower panels, respectively.}
\label{fig:Ni_models}
\end{figure}
Although the emissivities of many Ni lines have been notably revised with the latest update of \textsc{spexact}, we note that the overall equivalent width (EW) of all the Ni-K lines remains comparable between v2 and v3 (Fig.~\ref{fig:Ni_models}, lower panel). On the contrary, while \textsc{spexact} v2 includes only one Fe transition in the Ni-K complex (\ion{Fe}{xxv} at $\sim$7.88 keV), \textsc{spexact} v3 shows that many more Fe lines (mostly from \ion{Fe}{xxiii}, \ion{Fe}{xxiv}, and \ion{Fe}{xxv}) contaminate this energy band (Fig.~\ref{fig:Ni_models}, upper panel). Assuming that \textsc{spexact} v3 realistically reproduces all the transitions that contribute to the Ni-K bump observed with EPIC, the incorrectly high Ni/Fe ratio measured in Paper I is naturally explained: in order to compensate for the total EW of the unaccounted Fe lines in the Ni-K complex, \textsc{spexact} v2 incorrectly raised the Ni parameter until the Ni-K bump was fully fitted.
Unlike the other ratios measured with EPIC, the Cr/Fe and Mn/Fe ratios mentioned in Paper I were not calculated with \textsc{spexact} v2 but with a precursor of \textsc{spexact} v3.00. Between that intermediate version and the one used in this work (v3.04), significant transitions have been included and/or updated. In particular, \ion{Cr}{xxii} transitions contribute significantly to the total EW of the Cr-K complex, but were only incorporated later on, in the up-to-date version. The absence of such transitions in previous models explains why Cr/Fe was significantly overestimated. Similarly, the EWs of the Mn-K transitions have been revised upward, which explains why the MOS average Mn/Fe ratio is now remarkably consistent with the solar value. Its pn counterpart, however, is measured to be significantly larger than solar (though with large error bars). The origin of this discrepancy is, unfortunately, difficult to identify, since, unlike the case of Ni, the Mn-K band does not contain instrumental lines that may affect one specific detector. Therefore, Mn cannot be well constrained with the EPIC instruments (see also the discussion in S18).
Another ratio of interest is Ca/Fe. Compared to Paper I, this ratio remains essentially unchanged and appears somewhat in tension with the (lower) value reported by S18. This discrepancy was already pointed out in Perseus only (S18; see their figure~9); nevertheless this ratio is left unchanged when discarding Perseus from our analysis (Fig.~\ref{fig:abun_CHEERS}). Interestingly, nucleosynthesis models are currently unable to reproduce the \textit{XMM-Newton} measurements of Ca/Fe \citep[][Paper II]{2007A&A...465..345D}. Several possibilities have been investigated, among which alternative SNIa models reproducing the spectral features of the Tycho supernova \citep{2007A&A...465..345D} or a significant contribution of Ca-rich gap transients to the ICM enrichment \citep[][Paper II]{2014ApJ...780L..34M}. However, the significantly lower Ca/Fe measurements of Perseus in both \textit{Hitomi} and \textit{Suzaku} suggest an issue that could be specific to the EPIC instruments.
\section{Implications for the chemical enrichment of the central ICM}
\label{sec:implications}
More than demonstrating the relative reliability of CCD instruments in deriving ICM abundances, our results strongly suggest that a solar chemical composition is a feature common to most (if not all) nearby cool-core systems, and not specific to Perseus only.
As already discussed by S18, such a "simple" chemical composition of the central ICM is not trivial to explain. Stellar populations of massive early-type galaxies, which are often found in the centres of groups and clusters, exhibit super-solar $\alpha$/Fe ratios, probably associated with relatively short star formation timescales \citep{2014ApJ...780...33C}. In fact, if the bulk of SNIa explode and enrich their surroundings with a significant delay compared to the end of the parent star formation, their products are not efficiently incorporated in the stellar population we see today (S18). While enrichment by SNIa dominates in BCGs -- where SNcc are rarely, if ever, observed -- these $\alpha$-enriched stars might also enrich the surrounding ICM via their stellar winds \citep[e.g.][]{2009A&A...493..409S}. It would be a surprising coincidence, however, if for nearly all the CHEERS systems these two sources of ICM enrichment compensated each other to reach exactly the solar ratios within their current uncertainties.
In addition, the remarkable uniformity in the chemical composition of the ICM from the very core to the outskirts \citep[][]{2015ApJ...811L..25S,2017A&A...603A..80M} may seriously question the role of the BCG in the central enrichment of clusters and groups.
In summary, our results strengthen the "ICM solar composition paradox" already reported by H17 and S18, and extend it to a large number of cool-core clusters, groups, and ellipticals. Interesting ways of exploring this paradox in the future are to constrain (i) the redshift evolution of $\alpha$/Fe ratios and (ii) the abundance of other (in particular odd-$Z$) elements in the ICM. While sodium (Na) or aluminium (Al) are crucial to constrain the initial metallicities of SNcc progenitors \citep{2013ARA&A..51..457N}, lighter elements such as carbon (C) or nitrogen (N) are mostly produced by asymptotic giant branch stars, and their C/Fe and N/Fe ratios are not necessarily expected to be solar \citep[][Mao et al. submitted]{2006A&A...459..353W,2011A&A...531A..15G}. Extending abundance studies to all these unexplored ratios will definitely help to understand how and when the ICM has been enriched.
Evidently, this is beyond the scope of this Letter, as deeper observations with micro-calorimeter instruments onboard future missions (e.g. \textit{XARM}, \textit{Athena}) are required.
\section*{Acknowledgements}
F.M. is supported by the Lend\"ulet LP2016-11 grant awarded by the Hungarian Academy of Sciences. This work is partly based on the \textit{XMM-Newton} AO-12 proposal ``\emph{The XMM-Newton view of chemical enrichment in bright galaxy clusters and groups}'' (PI: de Plaa), and is a part of the CHEERS (CHEmical Evolution Rgs cluster Sample) collaboration. This work is based on observations obtained with \textit{XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA member states and the USA (NASA). The SRON Netherlands Institute for Space Research is supported financially by NWO, the Netherlands Organisation for Scientific Research.
\bibliographystyle{mnras}
\section{Introduction}
Massive MIMO (multiple-input multiple-output) is an emerging technology for cellular networks, where each base station (BS) is equipped with hundreds of antennas and able to spatially multiplex tens of users on the same time-frequency resource \cite{Marzetta2010a,massivemimobook}. Massive MIMO increases the spectral efficiency (SE) by orders of magnitude compared to conventional cellular networks \cite{massivemimobook}. In the case of i.i.d.~Rayleigh fading channels, Massive MIMO achieves nearly optimal SE using linear detection techniques such as maximum ratio combining (MRC) or zero-forcing \cite{Bjornson2016b}. Since many users interfere with each other in Massive MIMO, power control is essential to achieve high SE \cite{Bjornson2016b}. The channels are estimated from uplink pilots, thus the powers of both the uplink pilot and data signals can be optimized \cite{Victor2017a}. An important feature of Massive MIMO is that the SE expressions can be obtained in closed form, making it possible to efficiently optimize the powers based on the large-scale fading coefficients instead of the small-scale fading realizations, so that the power control exploits diversity against fading instead of inverting the fading.
Deep learning (DL) has become a popular way to solve problems in a data-driven fashion and has shown great performance in applications such as image restoration and pattern recognition. DL can provide simple and efficient solutions, provided that the (often complicated) design and training phases are successful. DL has recently been applied to power-control problems in wireless communications.
The authors of \cite{sun2017learning} construct a deep neural network for optimizing the sum SE in a system serving a few tens of users. The neural network structure is reused in \cite{zappone2018model} for solving an energy-efficiency problem. These works utilize classical fully connected feed-forward networks with many layers. When these networks are used instead of directly solving the original problems, there is a substantial performance loss: $5\%$ for a system serving $10$ users in \cite{zappone2018model} and $16\%$ in the case of $30$ users in \cite{sun2017learning}.
Moreover, the optimization in these prior works is based on the instantaneous channel realizations, which requires the problems to be solved every few milliseconds and combats bad channel realizations rather than exploiting the long-term benefits of fading. Such methods are not scalable in Massive MIMO, where the number of channel coefficients is proportional to the numbers of antennas and users.
In this paper, we first formulate a joint data and pilot power control for maximizing the ergodic sum SE. The non-convexity of this problem is overcome by proposing a new algorithm, inspired by the weighted MMSE approach, that finds a stationary point instead of seeking the global optimum with exponential computational complexity. We then explore the possibility of using DL to train a neural network to solve the power control problem in sub-millisecond runtime. To this end, we model the power optimization as a supervised learning problem where the transmit powers should be predicted based on an input of large-scale fading coefficients. Instead of using a fully-connected feed-forward network as in \cite{sun2017learning,zappone2018model}, we use a deep convolutional neural network (CNN) and show that it achieves very high accuracy in the power prediction. Our proposed deep CNN is named PowerNet and utilizes the state-of-the-art residual structure \cite{he2016deep, zhang2018a} and dense connection \cite{huang2017}.
The loss in SE from using PowerNet, instead of solving the power control problem directly, is only about $2\%$ in a network with $9$ cells and $10$ users per cell, while the runtime is 0.03\,ms.
\textit{Notation}: Upper (lower) case bold letters are used to denote matrices (vectors). $\mathbb{E} \{ \cdot \}$ denotes the expectation of a random variable, $\| \cdot \|_F$ denotes the Frobenius norm, and $\mathcal{CN}(\cdot, \cdot)$ denotes the circularly symmetric complex Gaussian distribution.
\section{System Model} \label{Section: System Model}
We consider the uplink of a multi-cell Massive MIMO system comprising $L$ cells, each having a BS equipped with $M$ antennas and serving $K$ users.
Since the propagation channels vary over time and frequency, we divide the time-frequency resources into coherence intervals of $\tau_c$ symbols where the channels are static and frequency flat \cite{massivemimobook}. The channel between user~$t$ in cell~$i$ and BS~$l$ is denoted as $\mathbf{h}_{i,t}^l \in \mathbb{C}^{M}$ and follows an i.i.d.~Rayleigh fading distribution:
\begin{equation}
\mathbf{h}_{i,t}^l \sim \mathcal{CN} \left(\mathbf{0}, \beta_{i,t}^l \mathbf{I}_{M} \right),
\end{equation}
where $\beta_{i,t}^l$ is the large-scale fading coefficient that models geometric pathloss and shadow fading. The distributions are known at the BSs, but the realizations are unknown and estimated independently in every coherence interval using a pilot phase.
\subsection{Uplink Pilot Transmission Phase}
We assume that a set of $K$ mutually orthogonal $K$-length pilot signals is used in the system.
User~$k$ in each cell uses the pilot $\pmb{\psi}_{k} \in \mathbb{C}^{K}$ with $\|\pmb{\psi}_{k} \|^2 = K$. The channel estimation in cell~$l$ is therefore interfered by the users in other cells, an effect known as pilot contamination. The received signal $\mathbf{Y}_l \in \mathbb{C}^{M \times K}$ at BS~$l$ from the pilot transmission is
\begin{equation}
\mathbf{Y}_l = \sum_{i=1 }^L \sum_{t=1}^K \sqrt{\hat{p}_{i,t}}\mathbf{h}_{i,t}^l \pmb{\psi}_{t}^H + \mathbf{N}_l,
\end{equation}
where $\mathbf{N}_l \in \mathbb{C}^{M \times K }$ is the additive noise with independent $\mathcal{CN}(0, \sigma_{\mathrm{UL}}^2)$ elements. Meanwhile, $\hat{p}_{i,t}$ is the pilot power that user~$t$ in cell~$i$ allocates to its pilot transmission. By applying MMSE estimation, a BS obtains estimates of the channels from its users and uses these when receiving the uplink data.
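For concreteness, the MMSE estimate of the channels sharing pilot $k$ can be written down explicitly. The estimator below is the standard expression for this signal model; since it is not spelled out in this section, it should be read as our reconstruction (its denominator matches the term $K \sum_i \hat{p}_{i,k} \beta_{i,k}^l + \sigma_{\mathrm{UL}}^2$ that appears in the SINR of Lemma~\ref{Lemma1}):

```python
import numpy as np

def mmse_estimate(Y, Psi, k, p_hat_k, beta_k, sigma2):
    """MMSE estimates, at one BS, of the channels of all users sharing
    pilot k. Y: (M, K) received pilot block; Psi: (K, K) matrix whose
    columns are mutually orthogonal pilots with ||psi_k||^2 = K;
    p_hat_k[i], beta_k[i]: pilot power and large-scale fading of user k
    in cell i (towards this BS). Returns an (L, M) array of estimates."""
    K = Psi.shape[0]
    y_k = Y @ Psi[:, k]  # despreading removes all other pilots
    denom = K * np.sum(p_hat_k * beta_k) + sigma2
    coeff = np.sqrt(p_hat_k) * beta_k / denom
    return coeff[:, None] * y_k[None, :]
```

A quick sanity check: in a noise-free single-cell setting the estimate recovers the channel exactly.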
\subsection{Uplink Data Transmission Phase}
During the uplink data transmission phase, user~$t$ in cell~$i$ transmits data symbol $s_{i,t}$ with $\mathbb{E} \{ |s_{i,t}|^2\} =1$. The received signal $\mathbf{y}_{l} \in \mathbb{C}^{M}$ at BS~$l$ is the superposition of the signals transmitted from all users and is given by
\begin{equation}
\mathbf{y}_{l} = \sum_{i=1}^L \sum_{t=1}^K \sqrt{p_{i,t}} \mathbf{h}_{i,t}^l s_{i,t} + \mathbf{n}_{l},
\end{equation}
where $p_{i,t}$ is the power that user~$t$ in cell~$i$ allocates to the data transmission and $\mathbf{n}_{l} \sim \mathcal{CN} \left(\mathbf{0}, \sigma_{\mathrm{UL}}^2 \mathbf{I}_{M} \right)$ is the additive noise.
We assume that each BS uses MRC (based on MMSE channel estimates) to detect the desired signals from its users. Using the standard Massive MIMO methodology, the following closed-form lower bounds on the ergodic capacities are obtained.
\begin{lemma}[\!\!\cite{Chien2017a}, Corollary~$1$] \label{Lemma1}
If the BSs use MRC for data detection, an achievable ergodic SE of user $k$ in cell $l$ is
\begin{equation} \label{eq:ULRate}
R_{l,k} = \left(1 -\frac{K}{\tau_c} \right) \log_2 \left( 1 + \mathrm{SINR}_{l,k}\right),
\end{equation}
where the effective SINR value of this user is
\begin{equation} \label{eq:SINRlk}
\mathrm{SINR}_{l,k} = M K p_{l,k} \hat{p}_{l,k} (\beta_{l,k}^l)^2 / \mathit{D}_{l,k}
\end{equation}
and the interference and noise term $\mathit{D}_{l,k}$ is given by
\begin{equation}
\begin{split}
&\mathit{D}_{l,k} = M K \sum\limits_{\substack{i=1, i \neq l}}^L p_{i,k} \hat{p}_{i,k} (\beta_{i,k}^l)^2 \\
& + \left( K \sum\limits_{i=1}^L \hat{p}_{i,k} \beta_{i,k}^l + \sigma_{\mathrm{UL}}^2 \right)\left( \sum\limits_{i=1}^L \sum\limits_{t=1}^K p_{i,t} \beta_{i,t}^l + \sigma_{\mathrm{UL}}^2 \right).
\end{split}
\end{equation}
\end{lemma}
In \eqref{eq:SINRlk}, the numerator contains the array gain which is proportional to the number of antennas $M$ at the receiving BS. The first part in the denominator is the effect of the pilot contamination and it is also proportional to $M$. The remaining terms are non-coherent mutual interference and noise, which are independent of $M$ and thus are negligible when the number of antennas is very large.
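Lemma~\ref{Lemma1} is straightforward to evaluate numerically. The sketch below implements \eqref{eq:ULRate}--\eqref{eq:SINRlk} directly; the array layout and the function name are our own choices:

```python
import numpy as np

def ergodic_se(beta, p, p_hat, M, tau_c, sigma2):
    """Evaluate the closed-form SE of Lemma 1. beta[l, i, t] is the
    large-scale fading from user t in cell i to BS l; p and p_hat are
    (L, K) data and pilot powers. Returns the (L, K) matrix of R_{l,k}."""
    L, K = p.shape
    R = np.zeros((L, K))
    for l in range(L):
        for k in range(K):
            signal = M * K * p[l, k] * p_hat[l, k] * beta[l, l, k] ** 2
            contam = M * K * sum(p[i, k] * p_hat[i, k] * beta[l, i, k] ** 2
                                 for i in range(L) if i != l)
            noncoh = (K * np.sum(p_hat[:, k] * beta[l, :, k]) + sigma2) \
                   * (np.sum(p * beta[l]) + sigma2)
            R[l, k] = (1 - K / tau_c) * np.log2(1 + signal / (contam + noncoh))
    return R
```

For a single cell with one user, $M = 100$, $\beta = 1$, $p = \hat{p} = 1$, and $\sigma_{\mathrm{UL}}^2 = 1$, this yields $\mathrm{SINR} = 100/4 = 25$.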
\section{Joint Pilot and Data Power Control for Sum Spectral Efficiency Maximization}
\label{section:joint-optimization}
In this section, we formulate a sum SE maximization problem where the pilot and data powers are jointly optimized. This important problem has an inherently non-convex structure, unlike the single-cell case in \cite{Victor2017a}, which admits an efficient solution. Hence, our first contribution is to derive an iterative algorithm that obtains a stationary point in polynomial time by solving a series of convex sub-problems.
\subsection{Problem Formulation}
\setcounter{eqnback}{\value{equation}} \setcounter{equation}{14}
\begin{figure*}
\begin{equation} \label{eq:rhohatlk}
\hat{\rho}_{l,k}^{(n)} = \min \left( \frac{ \sqrt{M K} \rho_{l,k}^{(n-1)} u_{l,k}^{(n)} w_{l,k}^{(n)} \beta_{l,k}^l}{(\rho_{l,k}^{(n-1)})^2 M K \sum\limits_{i=1}^L w_{i,k}^{(n)} (u_{i,k}^{(n)})^2 (\beta_{l,k}^i)^2 + K \sum\limits_{j=1}^L w_{j,k}^{(n)} (u_{j,k}^{(n)})^2 \beta_{l,k}^j \left( \sum\limits_{i=1}^L \sum\limits_{t=1}^K (\rho_{i,t}^{(n)})^2 \beta_{i,t}^j + \sigma_{\mathrm{UL}}^2 \right) } , \sqrt{P}\right)
\end{equation} \vspace*{-0.3cm}
\begin{equation} \label{eq:rholk}
\rho_{l,k}^{(n)} = \min \left( \frac{\sqrt{MK} \hat{\rho}_{l,k}^{(n)} u_{l,k}^{(n)} w_{l,k}^{(n)} \beta_{l,k}^l }{ (\hat{\rho}_{l,k}^{(n)})^2 M K \sum\limits_{i=1}^L w_{i,k}^{(n)} (u_{i,k}^{(n)})^2 (\beta_{l,k}^i)^2 + \sum\limits_{i=1}^L \sum\limits_{t=1}^K w_{i,t}^{(n)} (u_{i,t}^{(n)})^2 \beta_{l,k}^i \left( K \sum\limits_{j=1}^L (\hat{\rho}_{j,t}^{(n)})^2 \beta_{j,t}^i + \sigma_{\mathrm{UL}}^2\right) }, \sqrt{P}\right)
\end{equation}
\hrule
\vspace*{-0.45cm}
\end{figure*}
\setcounter{eqncnt}{\value{equation}}
\setcounter{equation}{\value{eqnback}}
We want to maximize the sum SE of the $LK$ users under constraints on the power per pilot and data symbol:
\begin{equation} \label{Prob:SumRate}
\begin{aligned}
& \underset{\{ \hat{p}_{l,k}, p_{l,k} \geq 0 \} }{\textrm{maximize}}
&& \sum_{l=1}^{L} \sum_{k=1}^K R_{l,k}\\
& \,\,\textrm{subject to}
&& \hat{p}_{l,k} \leq P, \; \forall l,k,\\
&&& p_{l,k} \leq P, \; \forall l,k,
\end{aligned}
\end{equation}
where $P$ is the maximum power that each user can supply to its transmitted symbols. Plugging \eqref{eq:ULRate} into \eqref{Prob:SumRate} and removing the constant pre-log factor, we obtain the equivalent problem
\begin{equation} \label{Prob:SumRatev2}
\begin{aligned}
& \underset{\{ \hat{p}_{l,k}, p_{l,k} \geq 0 \} }{\textrm{maximize}}
&& \sum_{l=1}^{L} \sum_{k=1}^K \log_2 \left(1 + \mathrm{SINR}_{l,k} \right)\\
& \,\,\textrm{subject to}
&& \hat{p}_{l,k} \leq P, \; \forall l,k,\\
&&& p_{l,k} \leq P, \; \forall l,k.
\end{aligned}
\end{equation}
This problem is independent of the small-scale fading due to the SINR expression in \eqref{eq:SINRlk}. Hence, its solution can be used for a long period of time, if the users are continuously active and there is no large-scale user mobility. However, in practical systems, some users are moving quickly (such that $\beta_{i,t}^{l}$ changes) and new scheduling decisions are made every few milliseconds based on the users' data traffic. It is important to be able to solve \eqref{Prob:SumRatev2} very quickly to adapt to these changes.\footnote{Note that the ergodic SE is a reasonable performance metric also in this scenario, since long codewords can span over the frequency domain and the channel hardening makes the channel after MRC almost deterministic.}
Inspired by the weighted MMSE methodology \cite{Christensen2008a}, we will now propose an iterative algorithm to find a stationary point. To this end, we define $\hat{\rho}_{l,k} = \sqrt{\hat{p}_{l,k} }$ and $\rho_{l,k} = \sqrt{p_{l,k} }$, $\forall l,k,$ as new optimization variables; the following theorem then provides a problem formulation that is equivalent to \eqref{Prob:SumRatev2}.
\begin{theorem} \label{Theorem:WMMSE}
The following optimization problem is equivalent to problem \eqref{Prob:SumRate}:
\begin{equation} \label{Prob:WMMSEv1}
\begin{aligned}
& \underset{\substack{ \{ w_{l,k} \geq 0, u_{l,k} \}, \\ \{ \hat{\rho}_{l,k}, \rho_{l,k} \geq 0 \} }}{\mathrm{minimize}}
&& \sum_{l=1}^{L} \sum_{k=1}^K w_{l,k} e_{l,k} - \ln (w_{l,k}) \\
& \,\,\,\mathrm{subject\,to}
&& \hat{\rho}_{l,k}^2 \leq P, \; \forall l,k,\\
&&& \rho_{l,k}^2 \leq P, \; \forall l,k,
\end{aligned}
\end{equation}
where $e_{l,k}$ is given by
\begin{align} \notag
& e_{l,k} = MK u_{l,k}^2 \sum_{i=1}^L \rho_{i,k}^2 \hat{\rho}_{i,k}^2 (\beta_{i,k}^l)^2 - 2 \sqrt{M K} \rho_{l,k} \hat{\rho}_{l,k} u_{l,k} \beta_{l,k}^l \\ &+ u_{l,k}^2 \!\left( K \sum_{i=1}^L \hat{\rho}_{i,k}^2 \beta_{i,k}^l + \sigma_{\mathrm{UL}}^2 \right) \!\! \left( \sum_{i=1}^L \sum_{t=1}^K \rho_{i,t}^2 \beta_{i,t}^l + \sigma_{\mathrm{UL}}^2 \right)
\!+\!1. \label{Prob:WMMSEv2}
\end{align}
More precisely, if $\{u_{l,k}^{\ast}, w_{l,k}^{\ast}, \hat{\rho}_{l,k}^{\ast}, \rho_{l,k}^{\ast} \}$ is a global optimum to \eqref{Prob:WMMSEv1}, then $\{ (\hat{\rho}_{l,k}^{\ast})^2, (\rho_{l,k}^{\ast})^2 \}$ is a global optimum to \eqref{Prob:SumRate}.
\end{theorem}
\begin{IEEEproof}
The main procedure is similar to \cite{Chien2018a}, where only the data powers were optimized while the pilot powers were constant. The proof for the extension to joint data and pilot power control can be done in two main steps. We first derive the mean square error $e_{l,k}$ for user $k$ in cell $l$, considering a single-input single-output (SISO) communication system with deterministic channels having the same SE as in Lemma~\ref{Lemma1}, where $u_{l,k}$ is the beamforming coefficient utilized in such a SISO system and $w_{l,k}$ is the weight value in the receiver. After that, the equivalence of the problems \eqref{Prob:SumRate} and \eqref{Prob:WMMSEv1} is obtained by finding the optimal solution of $u_{l,k}$ and $w_{l,k},\forall l,k,$ given the other optimization variables. The detailed proof is omitted due to space limitations.
\end{IEEEproof}
The new problem formulation in Theorem~\ref{Theorem:WMMSE} is still non-convex, but it has an important desirable property: the objective function of \eqref{Prob:WMMSEv1} is convex with respect to each of the variable sets $\{ u_{l,k}\}$, $\{w_{l,k} \}$, $\{\hat{\rho}_{l,k} \}$, and $\{\rho_{l,k} \}$ when the other three variable sets are treated as constants. In fact, we can find closed-form solutions by equating the first derivatives to zero. We exploit this property to derive an iterative algorithm that finds a local optimum (stationary point) to \eqref{Prob:WMMSEv1} in the following subsection.
\begin{figure*}[t]
\centering
\includegraphics[trim=0.5cm 7.2cm 22cm 0.38cm, clip=true, width=5.3in]{FigResDenseNetCh.pdf} \vspace*{-0.2cm}
\caption{The proposed PowerNet for the joint pilot and data power control from a given set of large-scale fading coefficients. }
\label{FigCNN}
\vspace*{-0.5cm}
\end{figure*}
\subsection{Iterative Algorithm}
The next theorem provides an iterative algorithm to obtain a stationary point to problem \eqref{Prob:WMMSEv1} by alternatingly updating the optimization variables.
\begin{theorem} \label{Theorem:IterativeAl}
From an initial point $\{ \hat{\rho}_{l,k}^{(0)}, \rho_{l,k}^{(0)} \}$ satisfying the constraints, a stationary point to problem~\eqref{Prob:WMMSEv1} is obtained by updating $\{ u_{l,k}, w_{l,k}, \hat{\rho}_{l,k}, \rho_{l,k} \}$ in an iterative manner. At iteration~$n$, the variables $\{ u_{l,k}, w_{l,k}, \hat{\rho}_{l,k}, \rho_{l,k} \}$ are updated as follows:
\begin{enumerate}[leftmargin=*]
\item The variables $u_{l,k}$, for all $l,k$, are updated as
\begin{equation} \label{eq:ulk}
u_{l,k}^{(n)} = \sqrt{M K} \rho_{l,k}^{(n-1)} \hat{\rho}_{l,k}^{(n-1)} \beta_{l,k}^l / \tilde{u}_{l,k}^{(n-1)},
\end{equation}
where $\tilde{u}_{l,k}^{(n-1)}$ is given by
\begin{equation} \label{eq:tildeulk}
\begin{split}
& M K \sum\limits_{i=1}^L (\rho_{i,k}^{(n-1)})^2 (\hat{\rho}_{i,k}^{(n-1)})^2 (\beta_{i,k}^l)^2 + \left( \sum\limits_{i=1}^L (\hat{\rho}_{i,k}^{(n-1)})^2 \right. \\
&\times K \beta_{i,k}^l + \sigma_{\mathrm{UL}}^2 \Bigg) \left( \sum\limits_{i=1}^L \sum\limits_{t=1}^K (\rho_{i,t}^{(n-1)})^2 \beta_{i,t}^l + \sigma_{\mathrm{UL}}^2 \right).
\end{split}
\end{equation}
\item The variables $w_{l,k}$, for all $l,k$, are updated as
\begin{equation} \label{eq:wlk}
w_{l,k}^{(n)} = 1/e_{l,k}^{(n)},
\end{equation}
where $e_{l,k}^{(n)}$ is
\begin{equation} \label{eq:elkn}
e_{l,k}^{(n)} = (u_{l,k}^{(n)})^2 \tilde{u}_{l,k}^{(n-1)} - 2 \sqrt{M K} \rho_{l,k}^{(n-1)} \hat{\rho}_{l,k}^{(n-1)} u_{l,k}^{(n)} \beta_{l,k}^l +1.
\end{equation}
\item The variables $\hat{\rho}_{l,k}$, for all $l,k$, are updated as in \eqref{eq:rhohatlk}.
\item The variables $\rho_{l,k}$, for all $l,k$, are updated as in \eqref{eq:rholk}.
\end{enumerate}
This iterative process converges to a stationary point $\{ u_{l,k}^{\mathrm{opt}}, w_{l,k}^{\mathrm{opt}}, \hat{\rho}_{l,k}^{\mathrm{opt}}, \rho_{l,k}^{\mathrm{opt}}\}$ to problem \eqref{Prob:WMMSEv1} and $\{ (\hat{\rho}_{l,k}^{\mathrm{opt}})^2, (\rho_{l,k}^{\mathrm{opt}})^2 \}$ is also a stationary point to problem \eqref{Prob:SumRate}.
\end{theorem}
\begin{IEEEproof}
The closed-form expressions for the optimization variables are shown in \eqref{eq:ulk}--\eqref{eq:rholk}. The expressions for $w_{l,k}$ and $u_{l,k}$ are obtained by setting the first derivative of the objective function to zero, which is possible since there are no constraints on $w_{l,k}$ and $u_{l,k}$. The expressions for the power variables are obtained by taking the first derivative of the Lagrangian function of \eqref{Prob:WMMSEv1} and equating it to zero.
The convergence of the iterative process in Theorem~\ref{Theorem:IterativeAl} to a stationary point follows from the convexity of the Lagrangian with respect to each of the four sets of optimization variables when the others are held constant \cite{Weeraddana2012a}. The solution $\{ \hat{\rho}_{l,k}^{\mathrm{opt}}, \rho_{l,k}^{\mathrm{opt}}\}$ obtained after convergence is a stationary point to~\eqref{Prob:WMMSEv1} and \eqref{Prob:SumRate} as a consequence of the chain rule \cite{Chien2018a}. The detailed proof is omitted due to space limitations.
\end{IEEEproof}
Theorem~\ref{Theorem:IterativeAl} provides an iterative algorithm to obtain a local optimum with relatively low computational complexity because each subproblem is solved in closed-form.
From any feasible initial set of powers $\{ \hat{\rho}_{l,k}^{(0)}, \rho_{l,k}^{(0)} \}$, in each iteration we update every optimization variable according to \eqref{eq:ulk}--\eqref{eq:rholk}, improving the objective function at each step. The iterative process is terminated when the variation between two consecutive iterations is small. For instance, for a given accuracy $\epsilon > 0$, the stopping condition may be defined as
\setcounter{eqnback}{\value{equation}} \setcounter{equation}{16}
\begin{equation} \label{eq:Stoping}
\left| \sum_{l=1}^L \sum_{k=1}^K R_{l,k}^{(n)} - \sum_{l=1}^L \sum_{k=1}^K R_{l,k}^{(n-1)} \right| \leq \epsilon.
\end{equation}
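For concreteness, the closed-form $u$- and $w$-updates (steps 1 and 2 of the iterative algorithm) can be implemented directly from \eqref{eq:ulk}, \eqref{eq:tildeulk}, \eqref{eq:wlk}, and \eqref{eq:elkn}. The following NumPy sketch is ours, not the authors' code; the power updates \eqref{eq:rhohatlk} and \eqref{eq:rholk} are omitted since their expressions are not reproduced here.

```python
import numpy as np

def update_u_w(rho, rho_hat, beta, M, sigma2):
    """Steps 1-2 of the iterative algorithm: closed-form updates of
    u_{l,k} and w_{l,k} = 1/e_{l,k} for fixed power variables.
    rho, rho_hat: (L, K) arrays of square roots of data/pilot powers.
    beta: (L, L, K) array with beta[l, i, t] the large-scale fading
    coefficient between user t in cell i and BS l."""
    L, K = rho.shape
    u = np.zeros((L, K))
    w = np.zeros((L, K))
    for l in range(L):
        for k in range(K):
            b = beta[l, :, k]  # beta_{i,k}^l for all cells i
            # tilde_u_{l,k}: the denominator in the u-update
            t1 = M * K * np.sum(rho[:, k] ** 2 * rho_hat[:, k] ** 2 * b ** 2)
            t2 = (K * np.sum(rho_hat[:, k] ** 2 * b) + sigma2) \
                 * (np.sum(rho ** 2 * beta[l]) + sigma2)
            tilde_u = t1 + t2
            u[l, k] = np.sqrt(M * K) * rho[l, k] * rho_hat[l, k] \
                      * beta[l, l, k] / tilde_u
            # e_{l,k}^{(n)} from the simplified closed form, then w = 1/e
            e = u[l, k] ** 2 * tilde_u \
                - 2 * np.sqrt(M * K) * rho[l, k] * rho_hat[l, k] \
                  * u[l, k] * beta[l, l, k] + 1.0
            w[l, k] = 1.0 / e
    return u, w
```

At the updated $u_{l,k}$, the error satisfies $e_{l,k} \in (0,1)$, so the weights $w_{l,k}$ are always larger than one.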
From Theorem~\ref{Theorem:IterativeAl}, we further observe the following relationship between the data and pilot powers when a user is out of service.
\begin{remark} \label{Remark:phatlkandplk}
In order to maximize the sum SE, the system may reject some users from service. A particular user~$t$ in cell~$i$ is not served if $\hat{p}_{i,t}^{\mathrm{opt}} = 0$ and $p_{i,t}^{\mathrm{opt}} = 0$; such a user transmits neither in the pilot nor in the data phase.
\end{remark}
\section{A Low-Complexity Implementation Using a Convolutional Neural Network} \label{Sec:CNN}
In this section, we explore the feasibility of using a CNN to learn how to perform joint optimal pilot and data power control, in order to achieve an extremely low-complexity implementation. The input to the CNN is only the large-scale fading coefficients and the output is the data and pilot powers. This is fundamentally different from the previous works \cite{sun2017learning,zappone2018model} that use deep learning to predict the data power allocation based on perfect instantaneous CSI (i.e., small-scale fading).
\subsection{Convolutional Neural Network Architecture}
As the second main contribution of this paper, we introduce a deep learning framework for power allocation in cellular Massive MIMO systems, which uses supervised learning to mimic the power control solution developed in Section~\ref{section:joint-optimization}. We stress that for non-convex optimization problems, supervised learning with high prediction accuracy provides a good baseline for further refinements, e.g., serving as a warm start for unsupervised learning to improve the performance of the testing phase \cite{lee2018deep}.
Among the neural network structures in the literature, the CNN is currently the most popular family, since it achieves higher performance than conventional fully-connected feed-forward deep neural networks.
We will use a CNN to learn how to perform power control in a given setup, where the BSs locations are fixed and the active users change over time.
Before going further, we need to make an assumption on how the user locations and large-scale fading coefficients are generated in different realizations of the network.
\begin{assumption} \label{Proposition:UserLocation}
The locations of all users are drawn as independent and identically distributed realizations from a given user location distribution, from which the large-scale fading coefficients are then obtained.
\end{assumption}
This assumption indicates that a user should be treated equally irrespective of its index in the cell. A CNN can utilize this property to construct a unified structure for all training samples and to reduce the number of parameters compared to a conventional fully-connected network.\footnote{In this paper, the main task is to design an efficient CNN that obtains performance close to the stationary point with a lower runtime than the iterative algorithm in Theorem~\ref{Theorem:IterativeAl}. This CNN may not have the minimal number of parameters; the CNN with the lowest cost is left for future work.} The proposed deep CNN is named PowerNet and is designed to provide good power control solutions in multi-cell Massive MIMO systems. In particular, we define a tensor $\mathsf{I} \in \mathbb{R}_{+}^{L\times L \times K}$ containing all the large-scale fading coefficients. We let $\mathsf{O}_d^{\textrm{opt}} \in \mathbb{R}_{+}^{L\times 1 \times K}$ denote the tensor with the optimal data powers and $\mathsf{O}_p^{\textrm{opt}} \in \mathbb{R}_{+}^{L\times 1 \times K}$ denote the tensor with the optimal pilot powers. PowerNet is used to learn the mapping
\begin{equation}
\mathcal{F}(\mathsf{I}, \mathsf{O}_d^{(0)}, \mathsf{O}_p^{(0)} ) = \{ \mathsf{O}_d^{\textrm{opt}} , \mathsf{O}_p^{\textrm{opt}} \},
\end{equation}
where $\mathsf{O}_d^{(0)}$ and $\mathsf{O}_p^{(0)}$ are the initial sets of data and pilot powers, respectively, and $\mathcal{F}(\cdot, \cdot, \cdot)$ is the continuous mapping from the large-scale fading coefficients and the initial power tensors $\mathsf{O}_d^{(0)}, \mathsf{O}_p^{(0)}$ to the optimal powers.\footnote{The proposed neural network applies a series of operations, such as convolutions and ReLUs, to predict the pilot and data power variables of the sum SE problem, as demonstrated in Fig.~\ref{FigCNN}. A comprehensive step-by-step mathematical description of the modules is omitted due to space limitations.} We adopt the state-of-the-art residual dense block (ResDense) \cite{zhang2018a}, which consists of densely connected convolutions \cite{huang2017} with residual learning \cite{he2016deep}. As shown in Fig.~\ref{FigCNN}, a ResDense block inherits the densely connected block in \cite{huang2017} with a residual connection similar to \cite{he2016deep}. Compared with the ResDense block in \cite{zhang2018a}, we add a rectified linear unit (ReLU) activation after the residual connection.
Our proposed PowerNet is constructed from $N$ sequentially connected ResDense blocks to extract better features from the large-scale fading coefficients (we observed that $N=5$ strikes a good balance between prediction accuracy and computational complexity for our problem). The input and output sizes of the neural network are different; we therefore use multiple 1D convolutions to make the sizes match. To exploit the correlation in both the horizontal and vertical directions, both horizontal and vertical 1D convolutions are used. A transpose layer is applied after the vertical 1D convolution to ensure a data size of $L \times 1 \times K$. The outputs of these two 1D convolutions are summed to obtain the final prediction. This prediction is used for both the pilot and data powers, as depicted in Fig.~\ref{FigCNN}. When training PowerNet, we use a loss function based on the Frobenius norm as
\begin{equation} \label{eq:Loss}
\begin{split}
f(\Theta) =& \| \mathsf{O}_{d} - \mathsf{O}_{d}^{\mathrm{opt}} \|_F^2 + \| \mathsf{O}_{p} - \mathsf{O}_{p}^{\mathrm{opt}} \|_F^2,
\end{split}
\end{equation}
where $\Theta$ includes all convolution kernels and biases used in our neural network.
The loss in \eqref{eq:Loss} is applied for one realization of user locations, so the ultimate loss is averaged over the training data set.
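For concreteness, the per-realization loss \eqref{eq:Loss} and its batch average can be written in a few lines of NumPy (our sketch, not the authors' code; per-sample tensors have shape $L \times 1 \times K$):

```python
import numpy as np

def powernet_loss(O_d, O_p, O_d_opt, O_p_opt):
    """Per-realization loss: squared Frobenius norms of the data- and
    pilot-power prediction errors."""
    return np.sum((O_d - O_d_opt) ** 2) + np.sum((O_p - O_p_opt) ** 2)

def batch_loss(pred_d, pred_p, opt_d, opt_p):
    """Ultimate training loss: per-sample losses averaged over the
    training batch. Tensors have shape (batch, L, 1, K)."""
    per_sample = np.sum((pred_d - opt_d) ** 2, axis=(1, 2, 3)) \
               + np.sum((pred_p - opt_p) ** 2, axis=(1, 2, 3))
    return per_sample.mean()
```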
\subsection{Data Set \& Training Process}
In order to train PowerNet, we generated a training set with more than $1\,000\,000$ realizations of user locations (i.e., large-scale fading coefficients) and the corresponding outputs $\mathsf{O}_{p}^{\mathrm{opt}}, \mathsf{O}_{d}^{\mathrm{opt}}$ obtained by the algorithm in Theorem~\ref{Theorem:IterativeAl}. We generated $3000$ and $100$ mini-batches of size $L \times L \times K \times 512$ for the training and testing phases, respectively. The number of epochs was selected as a function of the network size, e.g., $200$ epochs for $L=4, K=5$ and $1000$ epochs for $L=9, K=10$. We use a momentum of $0.99$ and babysitting of the learning rate (varying from $10^{-3}$ to $10^{-5}$) to obtain the best prediction performance while minimizing the training time. In particular, the learning rate is reduced by approximately a factor of $3$ if the test loss remains the same for $50$ consecutive epochs. The Adam optimizer \cite{kingma2014adam} was used to train PowerNet.
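The learning-rate rule described above (reduce by roughly $3\times$ when the test loss is flat for $50$ consecutive epochs, floored at $10^{-5}$) can be sketched as a simple plateau scheduler. This is our illustration; the relative tolerance used to decide whether the loss ``remains the same'' is an assumption.

```python
class PlateauLR:
    """Reduce the learning rate by ~3x when the test loss has not
    improved for `patience` consecutive epochs (50 in the paper).
    The relative tolerance `tol` is our assumption."""
    def __init__(self, lr=1e-3, min_lr=1e-5, factor=3.0,
                 patience=50, tol=1e-4):
        self.lr, self.min_lr = lr, min_lr
        self.factor, self.patience, self.tol = factor, patience, tol
        self.best, self.stall = float("inf"), 0

    def step(self, test_loss):
        # An improvement resets the stall counter; otherwise count up
        # and reduce the learning rate once the patience is exhausted.
        if test_loss < self.best * (1.0 - self.tol):
            self.best, self.stall = test_loss, 0
        else:
            self.stall += 1
            if self.stall >= self.patience:
                self.lr = max(self.lr / self.factor, self.min_lr)
                self.stall = 0
        return self.lr
```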
\begin{figure*}[t]
\noindent\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[trim=0.55cm 0cm 0.3cm 0cm, clip=true, width=2.45in]{FigPerUserRateL4K5} \vspace*{-0.5cm}
\caption{CDF of SE per user [b/s/Hz] with $L=4$ and $K = 5$.}
\label{FigPower}
\vspace*{-0.45cm}
\end{minipage}
\hfill
\noindent\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[trim=0.55cm 0cm 0.3cm 0cm, clip=true, width=2.4in]{FigSumRateL4K5} \vspace*{-0.5cm}
\caption{CDF of sum SE per cell [b/s/Hz] with $L=4$ and $K = 5$.}
\label{FigCDFL4K5}
\vspace*{-0.45cm}
\noindent \end{minipage}
\hfill
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[trim=0.55cm 0cm 0.3cm 0cm, clip=true, width=2.4in]{FigSumRateL9K10} \vspace*{-0.5cm}
\caption{CDF of sum SE per cell [b/s/Hz] with $L=9$ and $K = 10$.}
\label{FigCDFL9K10}
\vspace*{-0.45cm}
\end{minipage}
\end{figure*}
\section{Numerical Results}
For simulations, we consider a multi-cell Massive MIMO system with $L$ square cells ($L =4, 9$) in a square area of $1$~km$^2$. In every cell, the BS is equipped with $200$ antennas and located at the center. $K$ users are uniformly distributed in each cell at distances to the BS that are larger than $35$\,m. To simulate interference in real cellular systems, the coverage area is wrapped around. The system uses a bandwidth of $20$ MHz and the noise power is $-96$ dBm. Each coherence interval consists of $200$ symbols. The $3$GPP pathloss model from \cite{LTE2010b} is used to compute the large-scale fading coefficients as
\begin{equation}
\beta_{i,t}^l \mbox{[dB]} = -148.1 - 37.6\log_{10} (d_{i,t}^l / 1\,\mbox{km}) + z_{i,t}^l,
\end{equation}
where $d_{i,t}^l$ is the distance between user~$t$ in cell~$i$ and BS~$l$, and $z_{i,t}^l$ is the corresponding shadow fading, which follows a Gaussian distribution with zero mean and standard deviation $7$~dB. The maximum power level is $P=200$~mW. The following benchmarks are considered for comparison:
\begin{itemize}[leftmargin=4mm]
\item[$1)$] \textit{Fixed power level}: The system allocates an equal power level of $200$~mW to both pilot and data symbols. It is denoted as ``FP'' in the figures.
\item[$2)$] \textit{Data power optimization only}: The system uses a simplification of Theorem~\ref{Theorem:IterativeAl} to perform data power control, while the pilot power is fixed at the maximum of $200$ mW. It is denoted as ``DPOO'' in the figures.
\item[$3)$] \textit{Joint pilot and data power optimization}: The system uses Theorem~\ref{Theorem:IterativeAl} to find the optimal pilot and data powers for all users. It is denoted as ``JPDPO'' in the figures.
\item[$4)$] \textit{Joint pilot and data power optimization based on deep learning}: The system uses the proposed CNN described in Section \ref{Sec:CNN} to find the pilot and data powers for all users. It is denoted as ``PowerNet'' in the figures.
\end{itemize}
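The large-scale fading model used in this setup can be sketched as follows; this is our NumPy illustration of the pathloss formula quoted above from \cite{LTE2010b}.

```python
import numpy as np

def large_scale_fading(d_km, rng, shadow_std_db=7.0):
    """3GPP pathloss model from the simulation setup: beta in dB equals
    -148.1 - 37.6*log10(d / 1 km) plus zero-mean Gaussian shadow fading
    (standard deviation 7 dB); returns the coefficient in linear scale.
    d_km: array of BS-user distances in km."""
    shadow_db = rng.normal(0.0, shadow_std_db, size=np.shape(d_km))
    beta_db = -148.1 - 37.6 * np.log10(d_km) + shadow_db
    return 10.0 ** (beta_db / 10.0)
```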
\subsection{Power Prediction Performance \& Sum Spectral Efficiency}
The accuracy of the proposed neural network in predicting the SE per user [b/s/Hz] is shown in Fig.~\ref{FigPower} for a multi-cell Massive MIMO system with $L=4$ and $K=5$. The predicted SEs produced by PowerNet are almost equal to those obtained by Theorem~\ref{Theorem:IterativeAl}. The average prediction error is $1\%$.
Fig.~\ref{FigCDFL4K5} displays the cumulative distribution function (CDF) of the sum SE per cell [b/s/Hz] in the same setup as in Fig.~\ref{FigPower}. Because of the high prediction accuracy for the power coefficients, the sum SE per cell obtained by PowerNet is only about $1\%$ lower than what is obtained by using Theorem~\ref{Theorem:IterativeAl} directly. We observe that FP performs much worse than the cases with optimized power, while DPOO, which only optimizes the data powers, achieves $98\%$ of the sum SE produced by joint pilot and data power control. Fig.~\ref{FigCDFL9K10} considers a larger network with $9$ cells serving $90$ users in total. The gap between FP and the cases with optimized powers is now even larger. In particular, jointly optimizing the pilot and data powers yields $33\%$ higher sum SE. Applying power control to both the data and pilot powers gives about $4\%$ higher sum SE than the counterpart that only optimizes the data powers. PowerNet achieves $98\%$ of the sum SE of Theorem~\ref{Theorem:IterativeAl}.
\subsection{Runtime}
\begin{table}[t]
\caption{Average runtime of different methods in milliseconds.}
\centerline{
\begin{tabular}{|c|c|c|c|c|}
\hline
\diagbox[width=10em]{System \\ specifications}{Benchmark} &JPDPO & DPOO & \thead{ PowerNet \\ (CPU) }& \thead{PowerNet \\ (GPU)} \\
\hline
$L=4,$ $ K=5$ & $42.24$ & $30.90$ & $2.99$ & $0.0177$ \\
\hline
$L=9 , K=10$ & $491.08$ & $269.33$ & $14.90$ & $0.0283$ \\
\hline
\end{tabular}
} \vspace*{-0.5cm}
\label{tablerunningitme}
\end{table}
To numerically evaluate the computational complexity of all benchmarks, we implement the testing phase using MatConvNet \cite{vedaldi2015matconvnet} on a Windows~$10$ computer with an AMD Ryzen $1950$X $16$-core CPU (3.40~GHz) and an Nvidia Titan~XP GPU. The proposed neural network is tested both when using only the CPU and when using both the CPU and GPU.
For comparison, Theorem~\ref{Theorem:IterativeAl} was also implemented in MATLAB.
The average runtimes are given in Table~\ref{tablerunningitme}, where the accuracy value $\epsilon$ in \eqref{eq:Stoping} is $0.01$. We implemented Theorem~\ref{Theorem:IterativeAl} in a sequential fashion, but the runtime is nevertheless low. For a system with $4$ cells, each serving $5$ users, the runtime of Theorem~\ref{Theorem:IterativeAl} is $42.24$~ms, while it is $30.90$~ms if we only consider the data powers as optimization variables. If there are $9$ cells and $10$ users per cell, the runtimes increase by $12\times$ and $8.72\times$, respectively.
These numbers can potentially be reduced by a factor $LK$ with an ideal parallelized implementation.
When PowerNet runs on the CPU only, the runtime is $2.99$~ms with $L=4, K=5$, and $14.90$~ms with $L=9, K = 10$. When using the GPU, the runtime of PowerNet is reduced to $0.0177$~ms and $0.0283$~ms, respectively. Since the typical duration of a coherence interval is around $1$~ms, this means that PowerNet implemented on a GPU can be used for real-time power control in multi-cell Massive MIMO systems.
\section{Conclusion}
This paper has investigated joint pilot and data power control for sum SE maximization in uplink Massive MIMO systems. This is a non-convex problem, but we proposed a new algorithm, inspired by the weighted MMSE approach, to find a stationary point. Joint pilot and data power optimization can obtain $30\%$ higher sum SE than equal power transmission. We used the proposed algorithm to construct a neural network, called PowerNet, that predicts both data and pilot powers very well, leading to only about a $2\%$ loss in sum SE in a multi-cell system serving $90$ users. PowerNet uses only the large-scale fading coefficients to predict the power control, making it scalable to Massive MIMO systems with many antennas.
It has a runtime far below $1$\,ms, meaning that it enables real-time power control in systems where new power control coefficients need to be obtained at the millisecond level due to changes in the scheduling decisions or user mobility. This demonstrates the feasibility of using deep learning for real-time power control in Massive MIMO, but we stress that other methods can potentially reach the same or better runtime.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Modern neural networks, despite their great success in various applications \cite{he2016deep,devlin2019bert}, typically suffer from a severe drawback of lacking adversarial robustness. In classification tasks, given an image ${\bm{x}}$ correctly classified by a neural network, there often exists a small adversarial perturbation $\boldsymbol\delta$, making the perturbed image ${\bm{x}} + \boldsymbol\delta$ look indistinguishable to humans but fool the network to predict a wrong class with high confidence \cite{szegedy2013intriguing,biggio2013evasion}.
It is well-known that the adversarial robustness of a neural network is closely related to its Lipschitz continuity \citep{cisse2017parseval,tsuzuku2018lipschitz} (see Section \ref{sec:preliminaries}). Accordingly, training neural networks with bounded Lipschitz constant has been considered a promising way to address the problem. A variety of works studied Lipschitz architectures for the ordinary Euclidean norm \citep{tsuzuku2018lipschitz,trockman2021orthogonalizing,leino21gloro,singla2021skew}, and recent works even established state-of-the-art (deterministic) certified $\ell_2$ robustness \citep{singla2022improved,meunier2022dynamical}. However, when it comes to the more critical (realistic) $\ell_\infty$ perturbation setting, the progress seems to be rather limited. In fact, standard Lipschitz ReLU networks have been shown to perform poorly in terms of $\ell_\infty$ robustness \citep{tsuzuku2018lipschitz,huster2018limitations,anil2019sorting}. While other more advanced Lipschitz networks have been proposed \citep{anil2019sorting,cohen2019universal}, the achieved results are still not satisfactory even on simple datasets like MNIST. Until recently, Zhang et al. \citep{zhang2021towards,zhang2022boosting} designed a quite \emph{unusual} Lipschitz network based on a heuristic choice of the \emph{$\ell_\infty$-distance function}, which surprisingly established state-of-the-art certified $\ell_\infty$ robustness on multiple datasets over prior works. Yet, it remains unclear why previous Lipschitz networks typically failed, and what is the essential reason behind the success of this particular $\ell_\infty$-distance network structure.
\textbf{Theoretical contributions}. In this work, we systematically investigate how to design expressive Lipschitz neural networks (w.r.t. $\ell_\infty$-norm) through the novel lens of representing discrete Boolean functions, which provides a deep understanding on the aforementioned problems. Specifically, we first figure out a fundamental limitation of standard Lipschitz networks in representing a class of logical operations called \emph{symmetric Boolean functions} (SBF), which comprises the basic logical AND/OR as special cases. We prove that for any non-constant SBF of $d$ variables, there exists a finite dataset of size $\mathcal O(d)$ such that the certified $\ell_\infty$ robust radius must vanish as $\mathcal O(1/d)$ for any classifier induced by a standard Lipschitz network. Remarkably, since logical AND/OR operations correspond to perhaps the most basic classifiers, our result indicates an intrinsic difficulty of such networks in fitting high-dimensional real-world datasets with guaranteed certified $\ell_\infty$ robustness.
Our analysis can be readily extended into the Lipschitz function approximation setting. We point out the relationship between \emph{monotonic} SBF and the \emph{order statistics} (which are 1-Lipschitz functions), and then prove that any $d$-dimensional order statistic (including the max/min function) on a compact domain cannot be approximated by standard Lipschitz networks with error $\mathcal O(1-1/d)$, regardless of the network size. This impossibility result is significant in that: $(\mathrm{i})$ it applies to all Lipschitz activations (thus extending prior works \citep{anil2019sorting,huster2018limitations}), $(\mathrm{ii})$ it resolves an open problem raised recently in \citep{neumayer2022approximation}, and $(\mathrm{iii})$ a \emph{quantitative} lower bound of approximation error is established.
Equipped by the above impossibility results, we proceed to examine two advanced Lipschitz architectures: the GroupSort network \citep{anil2019sorting} and the recently proposed $\ell_\infty$-distance net \citep{zhang2021towards,zhang2022boosting}. We find that besides the linear operation, both networks incorporate other Lipschitz aggregation operations into the neuron design, especially the order statistic functions, thus shedding light on how they work. However, for the MaxMin network \citep{anil2019sorting} --- a computationally efficient version of the GroupSort network implemented in practice, representing Boolean functions and order statistics is possible only when the network is very \emph{deep}. In particular, we prove that representing certain $d$-dimensional Boolean functions requires a depth of $\Omega(d)$, implying that shallow MaxMin networks are not Lipschitz-universal function approximators. In contrast, we show a \emph{two-layer} $\ell_\infty$-distance net suffices to represent any order statistic function on a compact domain or even all Boolean functions. This strongly justifies the empirical success of $\ell_\infty$-distance net over GroupSort (MaxMin) networks.
\textbf{Practical contributions}. Our theoretical insights can also guide the design of better Lipschitz network architectures. Inspired by the importance of order statistics, we propose a general form of Lipschitz network, called SortNet, that extends both GroupSort and $\ell_\infty$-distance networks and incorporates them into a unified framework. Yet, the full-sort operation is computationally expensive and leads to optimization difficulties (as with the GroupSort network). We further propose a specialized SortNet that can be efficiently trained, by assigning each weight vector ${\bm{w}}$ using a \emph{geometric series}, i.e., $w_i$ proportional to $\rho^{i}$ for some $0\le \rho<1$. This leads to a restricted version of SortNet that still covers the $\ell_\infty$-distance net as a special case. For this particular SortNet, we skillfully derive a stochastic estimation that gives an unbiased approximation of the neuron output without performing sorting operations explicitly. This eventually yields an efficient training strategy with a cost similar to that of training standard networks, thus making certified robust training free. Extensive experiments demonstrate that the proposed SortNet is scalable, efficient, and consistently achieves better certified robustness than prior Lipschitz networks across multiple datasets and perturbation radii. In particular, our approach even scales to a variant of ImageNet, and surpasses the best-known result \citep{xu2020automatic} with a 22-fold decrease in training time thanks to our ``free'' certified training approach.
The contribution and organization of this paper can be summarized as follows:
\begin{itemize}[topsep=0pt,leftmargin=30pt]
\setlength{\itemsep}{0pt}
\item We develop a systematic study for the expressive power of Lipschitz neural networks using the tools of Boolean function theory. We prove the impossibility results of standard Lipschitz networks in two settings: a) certified $\ell_\infty$ robustness on discrete datasets (Section \ref{sec:discrete}); b) Lipschitz function approximation (Section \ref{sec:function_approximation}).
\item We provide insights into how recently proposed networks can bypass the impossibility results. In particular, we show that a \emph{two-layer} $\ell_\infty$-distance net can precisely represent any Boolean functions, while \emph{shallow} GroupSort networks cannot (Section \ref{sec:order_statistic}).
\item We propose SortNet, a Lipschitz network that generalizes GroupSort and $\ell_\infty$-distance net. For a special type of SortNet, we derive a stochastic training approach that bypasses the difficulties in calculating sorting operations explicitly and makes certified training free (Section \ref{sec:sortnet}).
\item Extensive experiments demonstrate that SortNet exhibits better certified robustness on several benchmark datasets over baseline methods with high training efficiency (Section \ref{sec:experiments}).
\end{itemize}
\section{Related Work}
Extensive studies have been devoted to developing neural networks with certified robustness guarantees. Existing approaches can be mainly divided into the following three categories.
\textbf{Certified defenses for standard networks}. A variety of works focus on establishing certified robustness for standard neural networks. However, exactly calculating the certified radius of a standard ReLU network is known to be NP-hard \citep{katz2017reluplex}. Researchers thus developed a class of relaxation-based approaches that provide a tight lower bound estimate of the certified robustness efficiently. These approaches typically use convex relaxation to calculate a bound of the neuron outputs under input perturbations layer by layer \citep{wong2018provable,wong2018scaling,weng2018towards,singh2018fast,mirman2018differentiable,gehr2018ai2,wang2018efficient,zhang2018efficient}. See also \citep{balunovic2020Adversarial,krishnamurthy2018dual,dvijotham2020efficient,raghunathan2018certified,raghunathan2018semidefinite,xiao2019training,croce2019provable,lee2020lipschitz,wang2021beta,huang2021training,shi2022efficient} for more advanced approaches. However, most of these works suffer from high computational costs and are hard to scale up to large datasets. Currently, the only scalable convex relaxation approach is based on \emph{interval bound propagation} (IBP) \citep{mirman2018differentiable,gowal2018effectiveness,zhang2020towards,xu2020automatic,shi2021fast}, but the produced bound is known to be loose \citep{salman2019convex}, and a recent study showed that IBP cannot achieve enough certified robustness on simple datasets for any standard ReLU network \citep{mirman2021fundamental}.
\textbf{Certified defenses using Lipschitz networks}. On the other hand, Lipschitz networks inherently imply certified robustness, resulting in a \emph{much simpler} certification process based on the output margin (see Proposition \ref{thm:lipschitz}). Yet, most prior works can only handle the $\ell_2$-norm Lipschitz situation by leveraging specific mathematical properties such as the spectral norm \citep{cisse2017parseval,yoshida2017spectral,gouk2018regularisation,tsuzuku2018lipschitz,farnia2019generalizable,qian2019lnonexpansive,anil2019sorting,leino21gloro,meunier2022dynamical} or orthogonality of weight matrices \citep{li2019preventing,trockman2021orthogonalizing,singla2021skew,singla2022improved}. For the $\ell_\infty$-norm, standard Lipschitz networks were shown to give only a vanishingly small certified radius \citep{tsuzuku2018lipschitz}. Huster et al. \citep{huster2018limitations} found that standard Lipschitz ReLU networks cannot represent certain simple functions such as the absolute value, which inspired the first expressive Lipschitz architecture called the GroupSort network \citep{anil2019sorting}. Since then, GroupSort has been extensively investigated \citep{cohen2019universal,tanielian2021approximating}, but its performance is still much worse than the above relaxation-based approaches even on MNIST. Recently, Zhang et al. \citep{zhang2021towards,zhang2022boosting} first proposed a practical 1-Lipschitz architecture w.r.t. $\ell_\infty$-norm based on a special neuron called the $\ell_\infty$-distance neuron, which can scale to TinyImageNet with state-of-the-art certified robustness over relaxation-based approaches. However, despite its practical success, it is rather puzzling how such a simple architecture can work while prior approaches all failed. Answering this question may require an in-depth re-examination of Lipschitz networks (w.r.t. $\ell_\infty$-norm), which is the focus of this paper.
\textbf{Certified defenses via randomized smoothing}. As a rather different and parallel research line, randomized smoothing typically provides \emph{probabilistic} certified $\ell_2$ robustness guarantees. Due to the wide applicability, randomized smoothing has been scaled up to ImageNet and achieves state-of-the-art certified accuracy for $\ell_2$ perturbations \citep{lecuyer2019certified,li2019certified,cohen2019certified,salman2019provably,zhai2019macer,jeong2020consistency}. However, certifying robustness with high probability requires sampling a large number of noisy inputs (e.g., $10^5$) for a single image, leading to a high computational cost at inference. Moreover, theoretical results pointed out that it cannot achieve non-trivial certified $\ell_\infty$ robustness if the perturbation radius is larger than $\Omega(d^{-1/2})$ where $d$ is the input dimension \citep{yang2020randomized,blum2020random,kumar2020curse,wu2021completing}.
\section{The Expressive Power of Lipschitz Neural Networks}
\label{sec:theory}
\subsection{Preliminaries}
\label{sec:preliminaries}
\textbf{Notations}. We use boldface letters to denote vectors (e.g., ${\bm{x}}$) or vector functions (e.g., ${\bm{f}}$), and use $x_i$ (or $f_i$) to denote its $i$-th element. For a unary function $\sigma$, $\sigma({\bm{x}})$ applies $\sigma(\cdot)$ element-wise on vector ${\bm{x}}$. The $\ell_p$-norm ($p\ge 1$) and $\ell_\infty$-norm of a vector ${\bm{x}}$ are defined as $\|{\bm{x}}\|_p=(\sum_i |x_i|^p)^{1/p}$ and $\|{\bm{x}}\|_\infty=\max_i |x_i|$, respectively. The matrix $\infty$-norm is defined as $\|\mathbf W\|_\infty=\max_i \|\mathbf W_{i,:}\|_1$ where $\mathbf W_{i,:}$ is the $i$-th row of the matrix $\mathbf W$. The $k$-th largest element of a vector ${\bm{x}}$ is denoted as $x_{(k)}$. We use $[n]$ to denote the set $\{1,\cdots,n\}$, and use ${\bm{e}}_i$ to denote the unit vector with the $i$-th element being one. We adopt the big O notations by using $\mathcal O(\cdot)$, $\Omega(\cdot)$, and $\Theta(\cdot)$ to hide universal constants.
\textbf{Lipschitzness}. A mapping ${\bm{f}}:\mathbb R^n\to \mathbb R^m$ is said to be $L$-Lipschitz continuous w.r.t. norm $\|\cdot\|$ if for any pair of inputs ${\bm{x}}_1,{\bm{x}}_2\in\mathbb R^n$,
\begin{equation}
\label{eq:lipschitz}
\|{\bm{f}}({\bm{x}}_1)-{\bm{f}}({\bm{x}}_2)\|\le L \|{\bm{x}}_1-{\bm{x}}_2\|.
\end{equation}
If the mapping ${\bm{f}}$ represented by a neural network has a small Lipschitz constant $L$, then (\ref{eq:lipschitz}) implies that the change of network output can be strictly controlled under input perturbations, resulting in \emph{certified} robustness guarantees as shown in the following proposition.
\begin{proposition}
\label{thm:lipschitz}
\emph{(Certified robustness of Lipschitz networks)} For a neural network ${\bm{f}}$ with Lipschitz constant $L$ under $\ell_p$-norm $\|\cdot\|_p$, define the resulting classifier $g$ as $g({\bm{x}}):=\arg\max_k f_k({\bm{x}})$ for an input ${\bm{x}}$. Then $g$ is provably robust under perturbations $\|\boldsymbol\delta\|_p< \frac c L\operatorname{margin}({\bm{f}}({\bm{x}}))$, i.e.
\begin{equation}
\setlength{\belowdisplayskip}{4pt}
\setlength{\abovedisplayskip}{6pt}
\label{eq:margin_certification}
g({\bm{x}}+\boldsymbol\delta)=g({\bm{x}}) \quad\text{ for all }\bm\delta \text{ with } \|\boldsymbol\delta\|_p< c/L\cdot \operatorname{margin}({\bm{f}}({\bm{x}})).
\end{equation}
Here $c=\sqrt[p] 2/2$ is a constant depending only on the norm $\|\cdot\|_p$, which is $1/2$ for the $\ell_\infty$-norm, and $\operatorname{margin}({\bm{f}}({\bm{x}}))$ is the margin between the largest and second largest output logits.
\end{proposition}
\vspace{-5pt}
The proof of Proposition \ref{thm:lipschitz} is simple and can be found in Appendix \ref{sec:proof_lipschitz} or \citep[Appendix P]{li2019preventing}. It can be seen that the robust radius is inversely proportional to the Lipschitz constant $L$.
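As a minimal illustration of Proposition \ref{thm:lipschitz}, the following NumPy sketch (our own illustration, not part of any cited implementation) computes the certified radius from the output margin of a Lipschitz network:

```python
import numpy as np

def certified_radius(logits, L, p=np.inf):
    """Certified l_p radius from the proposition: c / L * margin(f(x)),
    where c = 2^{1/p} / 2 (so c = 1/2 for the l_inf norm)."""
    top2 = np.sort(logits)[-2:]              # (second largest, largest)
    margin = top2[1] - top2[0]
    c = 0.5 if np.isinf(p) else 2.0 ** (1.0 / p) / 2.0
    return c / L * margin

# hypothetical logits of a 1-Lipschitz network: margin 2.0 -> radius 1.0
print(certified_radius(np.array([3.0, 1.0, 0.5]), L=1.0))
```

In particular, for a 1-Lipschitz network w.r.t. $\ell_\infty$, the certified radius is simply half the output margin.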
\textbf{Standard Lipschitz networks}. Throughout this paper, we refer to standard neural networks as neural networks formed by affine layers (e.g., fully-connected or convolutional layers) and element-wise activation functions. Based on the Lipschitz property of composite functions, most prior works enforce the 1-Lipschitzness of a multi-layer neural network by constraining each layer to be a 1-Lipschitz mapping. For the $\ell_\infty$-norm, it is further equivalent to constraining the weight matrices to have bounded $\infty$-norm, plus using Lipschitz activation functions \citep{anil2019sorting}, which can be formalized as
\begin{equation}
\setlength{\belowdisplayskip}{4pt}
\setlength{\abovedisplayskip}{4pt}
\label{eq:standatd_lipschitz}
{\bm{x}}^{(l)}=\sigma^{(l)}(\mathbf W^{(l)}{\bm{x}}^{(l-1)}+{\bm{b}}^{(l)})\quad \text{s.t. }\|\mathbf W^{(l)}\|_\infty\le 1 \text{ and } \sigma^{(l)} \text{ being 1-Lipschitz},\ l\in [M].
\end{equation}
Here $M$ is the number of layers and usually $\sigma^{(M)}(x)=x$ is the identity function. The network takes ${\bm{x}}^{(0)}:={\bm{x}}$ as the input and outputs ${\bm{x}}^{(M)}$. For $K$-class classification problems, ${\bm{x}}^{(M)}\in \mathbb R^K$ and the network predicts the class $g({\bm{x}}):=\arg\max_{k\in [K]} x^{(M)}_k$. We refer to the resulting network as a standard Lipschitz network.
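A minimal NumPy sketch of the constraint in (\ref{eq:standatd_lipschitz}) follows (a simplified illustration with ReLU activations; actual implementations typically enforce the constraint during training, e.g. by projection or reparameterization):

```python
import numpy as np

def project_inf_norm(W):
    """Rescale each row so that ||W||_inf = max_i ||W_{i,:}||_1 <= 1."""
    row_l1 = np.abs(W).sum(axis=1, keepdims=True)
    return W / np.maximum(row_l1, 1.0)

def standard_lipschitz_forward(x, weights, biases):
    """Forward pass of a standard Lipschitz network: each layer is
    1-Lipschitz w.r.t. l_inf, hence so is the composition."""
    for l, (W, b) in enumerate(zip(weights, biases)):
        x = W @ x + b
        if l < len(weights) - 1:             # identity activation on top layer
            x = np.maximum(x, 0.0)
    return x

# empirical check of 1-Lipschitzness w.r.t. l_inf
rng = np.random.default_rng(0)
Ws = [project_inf_norm(rng.standard_normal((8, 4))),
      project_inf_norm(rng.standard_normal((2, 8)))]
bs = [np.zeros(8), np.zeros(2)]
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
y1, y2 = standard_lipschitz_forward(x1, Ws, bs), standard_lipschitz_forward(x2, Ws, bs)
assert np.abs(y1 - y2).max() <= np.abs(x1 - x2).max() + 1e-9
```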
\subsection{Certified robustness on discrete Boolean datasets}
\label{sec:discrete}
In this section, we will construct a class of counterexamples for which certified $\ell_\infty$ robustness can be arbitrarily poor using standard Lipschitz networks. We focus on the \emph{Boolean dataset}, a discrete dataset where both inputs and labels are Boolean-valued and the relationship between inputs and their labels $({\bm{x}}^{(i)}, y^{(i)})\in \{0,1\}^d\times \{0,1\}$ can be described using a Boolean function $g^\text{B}:\{0,1\}^d\to \{0,1\}$. The key motivation lies in the finding that Boolean vectors correspond to the vertices of a $d$-dimensional hypercube, and thus are geometrically related to the $\ell_\infty$-distance metric. In particular, the $\ell_\infty$-distance between any two different data points in a Boolean dataset is always 1, which means that the dataset is \emph{well-separated}. This yields the following proposition, stating that it is always possible to achieve \emph{optimal} certified $\ell_\infty$ robustness on Boolean datasets by using Lipschitz classifiers.
\begin{proposition}
\label{thm:nearest_neighbor}
For any Boolean dataset $\mathcal D=\{({\bm{x}}^{(i)},y^{(i)})\}_{i=1}^n$, there exists a classifier $\hat g:\mathbb R^d\to \{0,1\}$ induced by a 1-Lipschitz mapping $\hat {\bm{f}}:\mathbb R^d\to \mathbb R^2$, such that $\hat g$ can fit the whole dataset with $\operatorname{margin}(\hat {\bm{f}}({\bm{x}}^{(i)}))=1$ $\forall i\in[n]$, thus achieving a certified $\ell_\infty$ radius of $1/2$ by Proposition \ref{thm:lipschitz}.
\end{proposition}
\vspace{-5pt}
The key observation in the proof (Appendix \ref{sec:proof_nearest_neighbor}) is that one can construct a so-called \emph{nearest neighbor classifier} that achieves a large margin on the whole dataset and is \emph{1-Lipschitz}. Based on Proposition \ref{thm:nearest_neighbor}, it is natural to ask whether standard Lipschitz networks of the form (\ref{eq:standatd_lipschitz}) can perform well on Boolean datasets. Unfortunately, we show it is not the case, even on a class of simple datasets constructed using \emph{symmetric} Boolean functions.
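To make the nearest-neighbor idea concrete, the following sketch (a simplified rendering of the idea; the exact construction in Appendix \ref{sec:proof_nearest_neighbor} may differ) achieves margin 1 on the XOR Boolean dataset with a 1-Lipschitz mapping:

```python
import numpy as np

def nearest_neighbor_logits(x, X, y):
    """f_c(x) = -min_{i: y_i = c} ||x - x_i||_inf; each component is a
    minimum of 1-Lipschitz functions, hence 1-Lipschitz w.r.t. l_inf."""
    d = np.abs(X - x).max(axis=1)            # l_inf distances to all points
    return np.array([-d[y == 0].min(), -d[y == 1].min()])

# Boolean dataset realizing the 2-input XOR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

margins = []
for xi, yi in zip(X, y):
    f = nearest_neighbor_logits(xi, X, y)
    margins.append(float(f[yi] - f[1 - yi]))
print(margins)   # every margin equals 1.0 -> certified l_inf radius 1/2
```

Because any two distinct Boolean points are at $\ell_\infty$-distance exactly 1, the correct-class logit is $0$ and the other is $-1$ at every data point, giving margin 1 everywhere.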
\begin{definition}
\label{def:stmmetric_boolean}
A Boolean function $g^\text{B}:\{0,1\}^d\to \{0,1\}$ is symmetric if it is invariant under input permutation, i.e. $g^\text{B}(x_1,\cdots,x_d)=g^\text{B}(x_{\pi(1)},\cdots,x_{\pi(d)})$ for any $ {\bm{x}}\in\{0,1\}^d$ and $\pi\in S_d$.
\end{definition}
\begin{example}
Two of the most basic operations in Boolean algebra are the logical AND/OR, both of which belong to the class of symmetric Boolean functions. Other important examples include the exclusive-or (XOR, also called the parity function), NAND, NOR, and the majority function (or generally, the threshold functions). See Appendix \ref{sec:symmetirc_Boolean} for a detailed description of these examples.
\end{example}
\begin{theorem}
\label{thm:discrete}
For any non-constant symmetric Boolean function $g^\text{B}:\{0,1\}^d\to \{0,1\}$, there exists a Boolean dataset with labels $y^{(i)}=g^\text{B}({\bm{x}}^{(i)})$, such that no standard Lipschitz network can achieve a certified $\ell_\infty$ robust radius larger than $1/2d$ on the dataset.
\end{theorem}
\vspace{-5pt}
\textbf{Implications}. Theorem \ref{thm:discrete} shows that the certified radius of standard Lipschitz networks must vanish as dimension $d$ grows, which is in stark contrast to the constant radius given by Proposition \ref{thm:nearest_neighbor}. Also, note that the logical AND/OR functions are perhaps the most basic classifiers (which simply make predictions based on the existence of binary input features). It is thus not surprising to see that standard Lipschitz networks perform poorly on real-world datasets (e.g., even the simple MNIST dataset where input pixels are almost Boolean-valued (black/white) \citep{tsuzuku2018lipschitz}).
We give an elegant proof of Theorem \ref{thm:discrete} in Appendix \ref{sec:proof_discrete}, where we also prove that the bound of $1/2d$ is \emph{tight} in Proposition \ref{thm:discrete_tight}. Moreover, we discuss the sample complexity by proving that a dataset of size $\mathcal O(d)$ already suffices to give an $\mathcal O(1/d)$ certified radius (Corollary \ref{thm:discrete_efficient}). To the best of our knowledge, Theorem \ref{thm:discrete} is the first impossibility result that targets the certified $\ell_\infty$ robustness of standard Lipschitz networks with a \emph{quantitative} upper bound on the certified radius. In the next section, we will extend our analysis to the function approximation setting and discuss connections with literature results \citep{huster2018limitations,anil2019sorting}.
\subsection{Lipschitz function approximation}
\label{sec:function_approximation}
Classic approximation theory has shown that standard neural networks are universal function approximators \citep{cybenko1989approximation,leshno1993multilayer}, in that they can approximate any continuous function on a compact domain arbitrarily well. For 1-Lipschitz neural networks, an analogous question is whether they can approximate all 1-Lipschitz functions accordingly. Unfortunately, the result in Section \ref{sec:discrete} already implies a negative answer. Indeed, by combining Proposition \ref{thm:nearest_neighbor} and Theorem \ref{thm:discrete}, $\hat {\bm{f}}$ is clearly a 1-Lipschitz function that cannot be approximated by any standard Lipschitz network.
To gain further insights into the structure of unrepresentable 1-Lipschitz functions, let us consider the \emph{continuousization} of discrete Boolean functions. For the symmetric case, one needs to find a class of 1-Lipschitz continuous functions that are also invariant under permutations. A simple class of candidates is the \emph{order statistics}, i.e. the $k$-th largest element of a vector. One can check that the $k$-th order statistic $x_{(k)}$ is indeed 1-Lipschitz and is precisely the continuousization of the $k$-threshold Boolean function defined as $g^{\text{B},k}({\bm{x}}):=\mathbb I(\sum_i x_i\ge k)$. In particular, $x_{(1)}=\max_i x_i$ and $x_{(d)}=\min_i x_i$ correspond to the logical OR/AND functions, respectively. Importantly, note that any Boolean function that is both symmetric and \emph{monotonic} is a $k$-threshold function, and vice versa (Appendix \ref{sec:symmetirc_Boolean}). Therefore, $k$-threshold functions can be regarded as the most elementary Boolean functions, suggesting that the ability to express order statistics is necessary for neural networks.
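The correspondence between order statistics and threshold Boolean functions can be checked directly (a self-contained numerical sanity check, not taken from the paper's code):

```python
import numpy as np
from itertools import product

def order_statistic(x, k):
    """k-th largest element x_{(k)}; 1-Lipschitz w.r.t. l_inf."""
    return np.sort(x)[-k]

def k_threshold(x, k):
    """g^{B,k}(x) = 1 iff at least k coordinates equal 1."""
    return float(x.sum() >= k)

# On Boolean inputs, x_{(k)} coincides with the k-threshold function
d, k = 4, 2
for bits in product([0.0, 1.0], repeat=d):
    x = np.array(bits)
    assert order_statistic(x, k) == k_threshold(x, k)

# and x_{(k)} is 1-Lipschitz on continuous inputs:
rng = np.random.default_rng(0)
x1, x2 = rng.random(d), rng.random(d)
assert abs(order_statistic(x1, k) - order_statistic(x2, k)) <= np.abs(x1 - x2).max() + 1e-12
```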
Unfortunately, using a similar analysis as the previous section, we have the following theorem:
\begin{theorem}
\label{thm:function_approximation}
Any standard Lipschitz network $f:\mathbb R^d\to \mathbb R$ cannot approximate the simple 1-Lipschitz function ${\bm{x}}\mapsto x_{(k)}$ for any $k\in [d]$ on a bounded domain $\mathcal K= [0,1]^d$ if $d\ge 2$. Moreover, there exists a point $\widehat{\bm{x}}\in \mathcal K$, such that
\begin{equation}
\setlength{\belowdisplayskip}{1pt}
\setlength{\abovedisplayskip}{2pt}
\label{eq:function_approximation_error}
|f(\widehat{\bm{x}})-\widehat x_{(k)}|\ge \frac 1 2 - \frac 1 {2d}.
\end{equation}
\end{theorem}
\vspace{-2pt}
We give a proof in Appendix \ref{sec:proof_function_approximation}. The above theorem indicates that order statistics cannot be \emph{uniformly} approximated using any standard Lipschitz network regardless of the network size. Moreover, note that the trivial constant function $\tilde f({\bm{x}})=1/2$ already achieves an approximation error of $1/2$ uniformly on $\mathcal K$, implying that standard Lipschitz networks can hardly improve upon trivial solutions.
\begin{remark}
Theorem \ref{thm:function_approximation} can be easily generalized to weaker forms of non-uniform approximation, e.g., using the $\ell_p$-norm as distance metrics \citep{pinkus1999approximation}, by proving that there exists a hypercube $\mathcal B_\infty(\widehat{\bm{x}})$ centered at $\widehat {\bm{x}}$ with length $\Theta(1)$, such that $|f(\tilde{\bm{x}})-\tilde x_{(k)}|\ge \Theta(1)$ holds for all $\tilde {\bm{x}}\in \mathcal B_\infty(\widehat {\bm{x}})$ when $d\ge 2$. See Corollary \ref{thm:function_approximation_corollary} for details.
\end{remark}
\vspace{-4pt}
\textbf{Discussion with prior works} \citep{anil2019sorting,huster2018limitations,neumayer2022approximation}.
The work of Anil et al. also gave negative results on the expressive power of standard Lipschitz networks\footnote{The result is described w.r.t. $\ell_2$-norm, but with some effort, it can be extended to the $\ell_\infty$-norm case.} \citep[Theorem 1]{anil2019sorting}. They proved a different (weaker) version of Theorem \ref{thm:function_approximation}, showing that if the activation function $\sigma$ is \emph{monotonic}, the network cannot \emph{precisely} represent non-linear 1-Lipschitz functions whose gradient norm is 1 almost everywhere (e.g., the absolute value function proved before by \citep{huster2018limitations}). They did not give a quantitative approximation error. The intuition is that any monotonic non-linear 1-Lipschitz activation (e.g., ReLU) must have regions with slopes less than 1, leading to \emph{gradient attenuation} during backpropagation. The authors thus attributed the reason to the activation function, which is not \emph{gradient-norm preserving} (GNP). However, such an explanation is still not fully satisfactory, as GNP can be simply achieved using a non-monotonic activation (e.g., $\sigma(x)=|x|$). Consequently, one may expect that a standard Lipschitz network built on a suitable (non-monotonic) activation function can have sufficient expressive power. This idea was recently explored in \citep{neumayer2022approximation}, where the authors proved that using a general 1-Lipschitz piecewise linear activation with 3 linear regions, the corresponding network achieves the maximum expressive power compared with other Lipschitz activations and can approximate any \emph{one-dimensional} 1-Lipschitz function. They posed the high-dimensional setting as an open problem.
Unfortunately, Theorem \ref{thm:function_approximation} answers this open problem in the negative, showing that such networks are not expressive even in the two-dimensional setting. It also implies that GNP is not \emph{sufficient} to explain the failure of standard Lipschitz networks. Instead, we draw a rather different conclusion, arguing that the lack of expressiveness is due to the inability of \emph{weight-constrained} affine transformations to perform basic Boolean operations (even in two dimensions). A further justification is given in Section \ref{sec:order_statistic}. Finally, compared with Anil et al. \citep{anil2019sorting}, the form of Theorem \ref{thm:function_approximation} is more fundamental, in the sense that it does not make assumptions on the activation function, and it gives a quantitative error bound on the approximation that is arbitrarily close to the plain fit $\tilde f({\bm{x}})=1/2$.
\vspace{-1.5pt}
\subsection{Investigating more advanced Lipschitz networks}
\vspace{-1.5pt}
\label{sec:order_statistic}
Given the above impossibility results, we now examine two representative (non-standard) Lipschitz networks in the literature: the GroupSort network \citep{anil2019sorting} and the recently proposed $\ell_\infty$-distance net \citep{zhang2021towards,zhang2022boosting}. Notably, both networks are Lipschitz-universal function approximators and thus fully expressive. The GroupSort network makes minimal changes to standard Lipschitz networks (\ref{eq:standatd_lipschitz}), replacing the element-wise activation $\sigma$ with GroupSort layers. GroupSort partitions the input vector into groups, sorts the sub-vector of each group in descending order, and finally concatenates the resulting sub-vectors. Since sorting is computationally expensive, the authors considered a practical version of GroupSort with a group size of 2, called MaxMin, which simply calculates the maximum and minimum pair by pair \citep{chernodub2016norm}. $\ell_\infty$-distance net, on the other hand, is fundamentally different from standard Lipschitz networks. Each neuron in an $\ell_\infty$-distance net is designed based on the $\ell_\infty$-distance function $y=\|{\bm{x}}-{\bm{w}}\|_\infty+b$ (with parameters $\{{\bm{w}},b\}$). Despite the somewhat unusual structure, $\ell_\infty$-distance net has been shown to substantially outperform GroupSort (MaxMin) in terms of certified $\ell_\infty$ robustness according to \citep{zhang2021towards}, a puzzling phenomenon that calls for explanation.
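For reference, the GroupSort operation and its MaxMin special case can be sketched in a few lines (our own simplified rendering, not the cited implementations):

```python
import numpy as np

def groupsort(x, group_size):
    """Partition x into consecutive groups and sort each in descending order."""
    assert x.size % group_size == 0
    g = x.reshape(-1, group_size)
    return -np.sort(-g, axis=1).reshape(-1)

def maxmin(x):
    """GroupSort with group size 2: outputs (max, min) pair by pair."""
    return groupsort(x, 2)

print(maxmin(np.array([1.0, 3.0, 2.0, 0.0])))   # [3. 1. 2. 0.]
```

Since GroupSort merely permutes its input, it is 1-Lipschitz and gradient-norm preserving.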
We provide a possible explanation for this. We find that both networks incorporate order statistics into the neuron design, either explicitly (GroupSort) or implicitly ($\ell_\infty$-distance net), thus bypassing the impossibility result in Theorem \ref{thm:function_approximation}. Indeed, the sorting operations in GroupSort explicitly calculate order statistics. As for $\ell_\infty$-distance net, we show its basic neuron can implicitly represent the max function on a bounded domain, by assigning the weight ${\bm{w}}=-c\mathbf 1$ and the bias $b=-c$ with a sufficiently large constant $c$:
\begin{equation}
\setlength{\belowdisplayskip}{5pt}
\setlength{\abovedisplayskip}{5pt}
\textstyle y=\|{\bm{x}}-{\bm{w}}\|_\infty+b=\max_i |x_i-(-c)|-c=\max_i x_i\quad \text{if }c\ge \max_i -x_i,
\end{equation}
and thus can represent the logical OR operation. In general, we have the following theorem:
\begin{theorem}
\label{thm:ell_inf_net}
A two-layer $\ell_\infty$-distance net can exactly represent the following functions: $(\mathrm{i})$ any discrete Boolean function; $(\mathrm{ii})$ any continuous order-statistic function on a compact domain.
\end{theorem}
\vspace{-4pt}
We give a proof in Appendix \ref{sec:proof_ell_inf_net}. Our proof uses the fundamental result in Boolean algebra that any Boolean function can be written in its disjunctive normal form (DNF, see Appendix \ref{sec:dnf}), which can be further reduced to using only the composition of logical OR operations of \emph{literals} and thus be realized by a two-layer $\ell_\infty$-distance net. To represent order statistics, we formulate them as nested max-min functions, which can also be realized by a two-layer $\ell_\infty$-distance net. Therefore, the construction in our proof provides a novel understanding of the mechanism behind the success of $\ell_\infty$-distance nets, since \emph{each $\ell_\infty$-distance neuron can be regarded as a basic ``logical gate'' and the whole network can simulate any Boolean circuit}.
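The ``logical gate'' view can be verified numerically: with weights $-c\mathbf 1$ and bias $-c$, a single $\ell_\infty$-distance neuron computes the maximum, and hence the logical OR on Boolean inputs (a sanity check of the construction above, written by us for illustration):

```python
import numpy as np

def linf_neuron(x, w, b):
    """Basic neuron of the l_inf-distance net: y = ||x - w||_inf + b."""
    return np.abs(x - w).max() + b

c = 1.0                                      # any c >= max_i(-x_i) works
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=5)
assert np.isclose(linf_neuron(x, -c * np.ones(5), -c), x.max())

# on Boolean inputs the neuron acts as an OR gate
for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(bits, dtype=float)
    assert linf_neuron(x, -c * np.ones(2), -c) == float(any(bits))
```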
For GroupSort networks with a group size $G\ge d$, a similar result holds. However, it is not the case for practically used MaxMin networks ($G=2$), where we have the following impossibility results:
\begin{theorem}
\label{thm:groupsort}
An $M$-layer MaxMin network $f:\mathbb R^d\to \mathbb R$ cannot approximate any $k$-th order statistic function on a bounded domain $\mathcal K= [0,1]^d$ if $M\le \lceil \log_2 d\rceil$ (no matter how wide the network is). Moreover, there exists a point $\widehat {\bm{x}}\in \mathcal K$, such that
\begin{equation}
\setlength{\belowdisplayskip}{5pt}
\setlength{\abovedisplayskip}{2pt}
|f(\widehat{\bm{x}})- \widehat x_{(k)}|\ge \frac 1 2 -\frac {2^{M-2}} d\ge \frac 1 4\quad \text{if }M\le \lfloor \log_2 d\rfloor.
\end{equation}
\end{theorem}
\begin{theorem}
\label{thm:groupsort_boolean}
Let $M_d$ be the minimum depth such that an $M_d$-layer MaxMin network can represent any (discrete) $d$-dimensional Boolean function. Then $M_d=\Omega(d)$.
\end{theorem}
\vspace{-4pt}
The above theorems show that MaxMin networks must be \emph{very deep} in order to represent Boolean functions and order statistics, which is in stark contrast to Theorem \ref{thm:ell_inf_net}, as a constant depth is sufficient for $\ell_\infty$-distance nets. Based on Theorem \ref{thm:groupsort_boolean}, we directly have the following corollary:
\begin{corollary}
\label{thm:groupsort_universal}
The function class induced by $M_d$-layer MaxMin networks is not a universal approximator to the $d$-dimensional 1-Lipschitz functions if $M_d=o(d)$.
\end{corollary}
The proofs of Theorems \ref{thm:groupsort} and \ref{thm:groupsort_boolean} are deferred to Appendix \ref{sec:proof_groupsort} and \ref{sec:proof_groupsort_boolean}, respectively. In particular, the proof of Theorem \ref{thm:groupsort_boolean} is non-trivial and makes elegant use of Boolean circuit theory, so we present a proof sketch here. The key insight is that for any Boolean function, if it can be expressed by some MaxMin network $f$, then it can be expressed by a special MaxMin network with the same topology as $f$ such that all the weight vectors ${\bm{w}}$ are sparse with at most one non-zero element, either $1$ or $-1$ (Corollary \ref{thm:groupsort_weight_sparse}). This implies that \emph{weight vectors have no use} in representing Boolean functions and thus MaxMin networks reduce to \emph{2-ary Boolean circuits}, i.e. directed acyclic graphs whose internal nodes are logical gates including NOT and the 2-ary AND/OR. Note that for a 2-ary Boolean circuit that has $M$ layers and outputs a scalar, the number of nodes will not exceed $2^{M+1}-1$ (achieved by a complete binary tree). However, the classic result in Boolean circuit theory (Shannon 1942) showed that for most Boolean functions of $d$ variables, a lower bound on the minimum size of 2-ary Boolean circuits is $\Omega(2^d/d)$, which thus yields $M=\Omega(d)$ and concludes the proof.
In the appendix, we also discuss the tightness of the above theorems. We prove that a depth of $\mathcal O(\log_2 d)$ is sufficient to represent any order statistic function using Boolean circuit theory (Theorem \ref{thm:groupsort_deep}), and a straightforward construction using DNF shows that a depth of $\mathcal O(d)$ is sufficient to represent any Boolean function (Proposition \ref{thm:groupsort_upper_bound_boolean}). Thus both theorems are tight.
Unfortunately, training deep MaxMin networks is known to be challenging due to optimization difficulties \citep{cohen2019universal}. Consequently, prior works only use a shallow MaxMin network with no more than 4 layers \citep{anil2019sorting,cohen2019universal}, which severely lacks expressive power. One possible solution is to increase the group size, and several works explored this aspect using toy examples and observed significant benefits empirically \citep{cohen2019universal,tanielian2021approximating}. However, a large group size involves computationally expensive sorting operations and makes the network hard to train \citep{anil2019sorting}, limiting its value in practice.
\section{A Unified Framework of Lipschitz Neural Networks}
\label{sec:sortnet}
The above theoretical results have justified order statistics as a crucial component in representing a class of Boolean functions, shedding light on how GroupSort and $\ell_\infty$-distance net work. Based on these insights, in this section, we will propose a unified framework of Lipschitz networks that combines the respective strengths of prior Lipschitz architectures, and then give a practical (specialized) version that enables efficient training.
Consider a general Lipschitz network constructed using the following three basic types of 1-Lipschitz operations: $(\mathrm{i})$ norm-bounded affine transformations, e.g. $y={\bm{w}}^{\mathrm{T}}{\bm{x}}$ ($\|{\bm{w}}\|_1\le 1$) and $y=x+b$; $(\mathrm{ii})$ 1-Lipschitz unary activation functions $\sigma$; $(\mathrm{iii})$ order statistics. The first two types are extensively used in standard Lipschitz networks, while the last type is motivated by Section \ref{sec:theory} and is of crucial importance. We propose the following network which naturally combines the above components:
\begin{definition}
\label{def:sortnet}
(SortNet) Define an $M$-layer fully-connected SortNet ${\bm{f}}$ as follows. The network takes ${\bm{x}}={\bm{x}}^{(0)}$ as input, and the $k$-th unit in the $l$-th hidden layer $x^{(l)}_k$ is computed by
\begin{equation}
\setlength{\belowdisplayskip}{3pt}
\setlength{\abovedisplayskip}{3pt}
\label{eq:sortnet_neuron}
\begin{aligned}
x^{(l)}_k=({\bm{w}}^{(l,k)})^{\mathrm{T}}\operatorname{sort}(\sigma({\bm{x}}^{(l-1)}+{\bm{b}}^{(l,k)})),\quad
\text{s.t.}\ \|{\bm{w}}^{(l,k)}\|_1\le 1,\quad l\in [M], k \in [d_l]
\end{aligned}
\end{equation}
where $d_l$ is the size of the $l$-th layer, and $\operatorname{sort}({\bm{x}}):=(x_{(1)},\cdots,x_{(d)})^{\mathrm{T}}$ calculates all order statistics of ${\bm{x}}\in\mathbb R^d$. The network outputs ${\bm{f}}({\bm{x}})={\bm{x}}^{(M)}+{\bm{b}}^{\text{out}}$. Here $\{{\bm{w}}^{(l,k)}\}$, $\{{\bm{b}}^{(l,k)}\}$ and ${\bm{b}}^{\text{out}}$ are parameters.
\end{definition}
\vspace{-2pt}
It is easy to see that SortNet is 1-Lipschitz w.r.t. $\ell_\infty$-norm. We now show that SortNet is a general architecture that extends both GroupSort and $\ell_\infty$-distance networks into a unified framework.
\begin{proposition}
Any GroupSort network with an arbitrary group size on a compact input domain can be represented by a SortNet with the same topological structure using activation $\sigma(x)=x$.
\end{proposition}
\begin{proposition}
Any $\ell_\infty$-distance net can be represented by a SortNet with the same topological structure by fixing the weights ${\bm{w}}^{(l,k)}={\bm{e}}_1$ and using the absolute-value activation $\sigma(x)=|x|$.
\end{proposition}
\vspace{-3pt}
See Appendix \ref{sec:special_sortnet} for a proof. As a result, SortNet can exploit the respective advantages of these Lipschitz networks. Compared with GroupSort, SortNet can freely use activation functions such as the absolute value, thus easily addressing the problem raised in \citep{huster2018limitations,anil2019sorting}. Moreover, unlike GroupSort, the bias vector ${\bm{b}}^{(l,k)}$ in SortNet (\ref{eq:sortnet_neuron}) can be assigned diversely for different neurons in the same layer. In this way, one can control the sorting behavior of each neuron individually by varying the bias value without disturbing the output of other neurons, which is very beneficial (see Appendix \ref{sec:special_sortnet_groupsort} for details). Compared with $\ell_\infty$-distance net, SortNet adds linear transformations and incorporates all order statistics (rather than only the maximum), and can thus represent certain functions more effectively.
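A single SortNet unit from Definition \ref{def:sortnet} can be sketched directly from Eq. (\ref{eq:sortnet_neuron}) (a minimal NumPy illustration, not the actual implementation; fixing ${\bm{w}}={\bm{e}}_1$ with $\sigma(x)=|x|$ recovers an $\ell_\infty$-distance neuron):

```python
import numpy as np

def sortnet_neuron(x, w, b, sigma=np.abs):
    """One SortNet unit: w^T sort(sigma(x + b)), requiring ||w||_1 <= 1."""
    assert np.abs(w).sum() <= 1.0 + 1e-9
    z = np.sort(sigma(x + b))[::-1]          # all order statistics, descending
    return w @ z

rng = np.random.default_rng(0)
x, b = rng.standard_normal(4), rng.standard_normal(4)

# w = e_1 with absolute-value activation gives an l_inf-distance neuron:
w = np.zeros(4); w[0] = 1.0
assert np.isclose(sortnet_neuron(x, w, b), np.abs(x + b).max())
```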
\textbf{A practical version of SortNet}. As with GroupSort networks, we also design a practical (specialized) version of SortNet which enjoys efficient training. But different from the MaxMin network that reduces the group size, we keep the full-dimensional order statistics as they are crucial for the expressive power (Section \ref{sec:order_statistic}). The key observation is that in (\ref{eq:sortnet_neuron}), the only required computation is the linear combination of order statistics (i.e. ${\bm{w}}^{\mathrm{T}} \operatorname{sort}(\cdot)$), rather than the entire sorting results (i.e. $\operatorname{sort}(\cdot)$). We find that for certain carefully designed choices of the weight vector ${\bm{w}}$, there exist efficient approximation algorithms that can give a good estimate of ${\bm{w}}^{\mathrm{T}} \operatorname{sort}(\cdot)$. In particular, we propose an assignment of the weight vector that follows a geometric series, i.e. $w_i$ proportional to $\rho^{i-1}$, in which case we have the following result:
\begin{proposition}
\label{thm:dropout}
Let ${\bm{w}}\in \mathbb R^d$ be a vector satisfying $w_i=(1-\rho)\rho^{i-1}, i\in[d]$ for some $0\le \rho<1$. Then for any vector ${\bm{x}}\in\mathbb R_+^d$ with non-negative elements,
\begin{equation}
\setlength{\belowdisplayskip}{2pt}
\setlength{\abovedisplayskip}{2pt}
\label{eq:expectation_dropout}
{\bm{w}}^{\mathrm{T}}\operatorname{sort}({\bm{x}})=\mathbb E_{{\bm{s}}\sim\operatorname{Ber(1-\rho)}}[\max_i s_i x_i].
\end{equation}
Here ${\bm{s}}$ is a random vector with i.i.d. Bernoulli entries that equal 1 with probability $1-\rho$ and 0 with probability $\rho$.
\end{proposition}
\vspace{-8pt}
\begin{proof}
Without loss of generality, assume $x_1,\cdots,x_d$ are pairwise distinct. Denote $j_1,\cdots,j_d$ as the sorting indices such that $\operatorname{sort}({\bm{x}})=(x_{j_1},\cdots,x_{j_d})$. Then
\begin{align*}
\textstyle\mathbb E_{{\bm{s}}\sim\operatorname{Ber(1-\rho)}}[\max_i s_i x_i]
=&\textstyle\sum_{k\in [d]} \operatorname{Pr}_{{\bm{s}}\sim\operatorname{Ber(1-\rho)}}\left[\max_{i} s_i x_i=x_{j_k}\right] x_{j_k}\\
=&\textstyle\sum_{k\in [d]} \operatorname{Pr}_{{\bm{s}}\sim\operatorname{Ber(1-\rho)}}\left[s_{j_k}=1 \text{ and }s_{j_i}=0\ \forall 1\le i<k\right] x_{j_k}\\
=&\textstyle\sum_{k\in [d]} (1-\rho)\rho^{k-1} \cdot x_{(k)}={\bm{w}}^{\mathrm{T}}\operatorname{sort}({\bm{x}}).
\end{align*}
\vspace{-25pt}
\end{proof}
\vspace{-5pt}
It is easy to check that the weight ${\bm{w}}$ in the above proposition satisfies $\|{\bm{w}}\|_1\le 1$, which guarantees the Lipschitzness. The non-negative condition on ${\bm{x}}$ in Proposition \ref{thm:dropout} holds when using a suitable activation function in neuron (\ref{eq:sortnet_neuron}), such as the absolute value function.
Proposition \ref{thm:dropout} suggests that one can use $\max_i s_i x_i$ to give an \emph{unbiased} estimate of ${\bm{w}}^{\mathrm{T}} \operatorname{sort}({\bm{x}})$. In this way, the expensive sorting operation is avoided and replaced by a max operation, thus significantly reducing the computational cost in training. We give an efficient GPU implementation for training SortNet in Appendix \ref{sec:gpu_implementaion}. Note that ${\bm{s}}$ is a random Bernoulli vector, so the above calculation is similar to applying a mask on the input of each neuron, like dropout \citep{srivastava2014dropout}. It means that the introduced stochasticity may further prevent overfitting and benefit generalization performance.
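The unbiased estimator of Proposition \ref{thm:dropout} is easy to verify empirically (a Monte-Carlo sanity check we wrote for illustration, not the GPU implementation from the appendix):

```python
import numpy as np

def stochastic_sort_estimate(x, rho, n_samples, rng):
    """Monte-Carlo estimate of w^T sort(x) for w_i = (1-rho) rho^{i-1},
    using E[max_i s_i x_i] with s_i ~ Bernoulli(1-rho); assumes x >= 0."""
    s = rng.random((n_samples, x.size)) < (1.0 - rho)    # keep mask
    return (s * x).max(axis=1).mean()

rho, d = 0.3, 6
rng = np.random.default_rng(0)
x = rng.random(d)                                        # non-negative inputs
w = (1.0 - rho) * rho ** np.arange(d)
exact = w @ np.sort(x)[::-1]
approx = stochastic_sort_estimate(x, rho, 200_000, rng)
assert abs(exact - approx) < 1e-2                        # unbiased estimator
```

With $\rho=0$ the mask keeps every coordinate and the estimator reduces to $\max_i x_i$, matching the $\ell_\infty$-distance net case discussed below.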
\textbf{Regarding the value of $\rho$}. When $\rho=0$, only the maximum value enters the computation and the resulting network can recover the $\ell_\infty$-distance net by choosing the activation function $\sigma(x)=|x|$. This means the specialized SortNet still extends $\ell_\infty$-distance net and thus has sufficient expressive power. When $\rho>0$, all order statistics are utilized. A simple way of selecting $\rho$ is to regard it as a hyper-parameter and set its value by cross-validation, which is adopted in our experiments. One can also consider treating $\rho$ as a learnable parameter for each neuron that participates in the optimization process, but this involves calculating the gradient of $\rho$, which may be complicated due to the stochastic sampling procedure (\ref{eq:expectation_dropout}). We leave this study as future work.
\section{Experiments}
\label{sec:experiments}
In this section, we perform extensive empirical evaluations of the proposed SortNet architecture as well as various prior works in the certified $\ell_\infty$ robustness area. To show the scalability of different approaches, we consider a variety of benchmark datasets, including MNIST \citep{lecun1998mnist}, CIFAR-10 \citep{krizhevsky2009learning}, TinyImageNet \citep{le2015tiny}, and ImageNet ($64\times 64$) \citep{chrabaszcz2017downsampled}. Due to space limitations, a complete training recipe is given in Appendix \ref{sec:exp_details}. Our code and trained models are released at \texttt{\href{https://github.com/zbh2047/SortNet}{https://github.com/zbh2047/SortNet}}.
\subsection{Experimental setting}
\textbf{SortNet model configuration}. Since SortNet generalizes the $\ell_\infty$-distance net, we simply follow the same model configurations as \citep{zhang2021towards} and consider two types of models. The first one is a simple SortNet consisting of $M$ fully-connected layers with a hidden size of 5120, which is used in MNIST and CIFAR-10. Like \citep{zhang2021towards}, we choose $M=5$ for MNIST and $M=6$ for CIFAR-10. Since SortNet is Lipschitz, we directly apply the margin-based certification method to calculate the certified accuracy (Proposition \ref{thm:lipschitz}). To achieve the best results on ImageNet-like datasets, in our second type of model we consider using a composite architecture consisting of a base SortNet backbone and a prediction head (denoted as SortNet+MLP). Following \citep{zhang2021towards}, the SortNet backbone has 5 layers with a width of 5120 neurons, which serves as a robust feature extractor. The top prediction head is a lightweight 2-layer perceptron with 512 hidden neurons (or 2048 for ImageNet), which takes the robust features as input to give classification results. We also try a larger SortNet backbone, denoted as SortNet+MLP (2x), that has roughly four times the training cost (see Appendix \ref{sec:models} for architectural details). We use the same approach as \citep{zhang2021towards} to train and certify these models, i.e. by combining margin-based certification for the SortNet backbone and interval bound propagation for the top MLP \citep{gowal2018effectiveness}.
\textbf{Baseline methods and metrics}. We compare SortNet with representative approaches from the literature, including relaxation-based certification (for standard networks), margin-based certification (using Lipschitz networks), and mixed-integer linear programming (MILP) \citep{tjeng2019evaluating}. In Appendix \ref{sec:randomized_smoothing}, we also discuss randomized smoothing approaches \citep{cohen2019certified,salman2019provably}, which provide probabilistic guarantees rather than deterministic ones. For each method, we report five metrics: $(\mathrm{i})$ training efficiency, measured by the wall-clock time per training epoch; $(\mathrm{ii})$ certification efficiency, measured by the time needed to calculate the certified accuracy on the test dataset; $(\mathrm{iii})$ the clean test accuracy without perturbation (denoted as Clean); $(\mathrm{iv})$ the robust test accuracy under a 100-step PGD attack (denoted as PGD); $(\mathrm{v})$ the certified robust test accuracy (denoted as Certified). For a fair comparison, we reproduce most of the baseline methods using the official code and report wall-clock times measured on the same NVIDIA RTX 3090 GPU. These results are presented in Tables \ref{tab:results_mnist}, \ref{tab:results_cifar10} and \ref{tab:results_imagenet}.
In Appendix \ref{sec:full_result}, we also show the training variance of each setting by running 8 sets of experiments independently; full results (including the median performance) are reported in Tables \ref{tab:results_full_clean} and \ref{tab:results_full_certified}.
\subsection{Experimental results}
\textbf{Performance on MNIST.} The results are presented in Table \ref{tab:results_mnist}. Following common practice, we consider both a small perturbation radius $\epsilon=0.1$ and a larger one $\epsilon=0.3$. It can be seen that the SortNet models achieve \textbf{98.14\%} ($\epsilon=0.1$) and \textbf{93.40\%} ($\epsilon=0.3$) certified accuracy, respectively, both of which outperform all previous baseline methods. In contrast, the GroupSort network only achieves trivial certified accuracy for $\epsilon=0.3$. This matches our theory in Section \ref{sec:order_statistic}, indicating that the expressive power of shallow MaxMin networks is insufficient in real-world applications.
\textbf{Performance on CIFAR-10.}
The results are presented in Table \ref{tab:results_cifar10}. Following common practice, we consider two perturbation radii: $\epsilon=2/255$ and $\epsilon=8/255$. Our models achieve \textbf{56.94\%} ($\epsilon=2/255$) and \textbf{40.39\%} ($\epsilon=8/255$) certified accuracy, respectively. Moreover, the training approach proposed in Section \ref{sec:sortnet} is very efficient, e.g., with a training time of \textbf{13$\sim$14} seconds per epoch. For both radii, our models perform the best among all existing approaches that can be certified in a reasonable time. Compared with relaxation-based methods, the certified accuracy of SortNet models is much higher (typically $+3\sim+6$ points for both radii), despite our training speed being several times faster. Such results may indicate that certified $\ell_\infty$ robustness is better achieved by designing suitable Lipschitz models than by devising relaxation procedures for non-Lipschitz models.
\textbf{Performance on TinyImageNet and ImageNet.}
To demonstrate the scalability of SortNet models, we finally run experiments on two large-scale datasets: Tiny-ImageNet and ImageNet ($64\times 64$). Notably, the ImageNet dataset has 1000 classes and contains 1.28 million images for training and 50,000 images for testing. Due to both the large size and the huge number of classes, achieving certified $\ell_\infty$ robustness on the ImageNet level has long been a challenging task.
Table \ref{tab:results_imagenet} presents our results along with existing baselines. We achieve \textbf{18.18\%} certified accuracy on TinyImageNet and \textbf{9.54\%} on ImageNet, both of which establish state-of-the-art results. The gap is most prominent on ImageNet, where our small SortNet+MLP model already outperforms the largest model of \citep{xu2020automatic} while being \textbf{22} times faster to train. Even for the largest model (SortNet+MLP 2x), training is still 7 times faster, resulting in a training overhead of 4 days using two GPUs. We suspect that continuing to increase the model size will yield better results, given the noticeable improvement of the larger model over the smaller one.
\textbf{Comparing with $\ell_\infty$-distance net}. As can be seen, SortNet models consistently achieve better certified accuracy than $\ell_\infty$-distance nets for all different datasets and perturbation levels, and the performance gap is quite prominent compared with the original work \citep{zhang2021towards}. Very recently, a follow-up paper \citep{zhang2022boosting} significantly improved the performance of $\ell_\infty$-distance net using a carefully designed training strategy, creating a strong baseline on CIFAR-10. However, we find their approach does not suit the ImageNet-like datasets when the number of classes is large (see Appendix \ref{sec:reproducing_baseline}). In contrast, SortNet models enjoy great scalability ranging from MNIST to ImageNet and consistently outperform \citep{zhang2022boosting}. The improvement is also remarkable for ${\epsilon}=2/255$ on CIFAR-10 ($+7.11\%$ and $+2.82\%$ in clean / certified accuracy).
In Appendix \ref{sec:ablation}, we conduct ablation studies on CIFAR-10 by varying the value of $\rho$ and comparing SortNet models ($\rho>0$) with the $\ell_\infty$-distance net ($\rho=0$), under \emph{the same training strategy} in this paper without additional tricks. We observe a large gain in certified accuracy when switching from the $\ell_\infty$-distance net to general SortNet. This empirically indicates that incorporating other order statistics brings extra benefits for certified $\ell_\infty$ robustness compared with using only the maximum (the first order statistic).
\begin{table*}[t]
\small
\vspace{-12pt}
\caption{Comparison of our results with existing methods on MNIST dataset.}
\label{tab:results_mnist}
\vspace{2pt}
\begin{adjustwidth}{-.5in}{-.5in}
\centering
\begin{tabular}{cc||c||c|c|c|c|c|c}
\Xhline{0.75pt}
\multicolumn{2}{c||}{\multirow{2}{*}{Method}} & Train & \multicolumn{3}{c|}{MNIST ($\epsilon=0.1$)} & \multicolumn{3}{c}{MNIST ($\epsilon=0.3$)} \\ \cline{4-9}
\multicolumn{2}{c||}{} & \hspace{-4pt }Time (s)\hspace{-4pt } & \hspace{1pt }Clean\hspace{1pt } & \hspace{2pt }PGD\hspace{2pt } & \hspace{-4pt }Certified\hspace{-4pt } & \hspace{1pt }Clean\hspace{1pt } & \hspace{2pt }PGD\hspace{2pt } & \hspace{-4pt }Certified\hspace{-4pt }\\ \Xhline{0.6pt}
\multicolumn{1}{c|}{\multirow{3}{*}{Relaxation}}
& IBP \citep{gowal2018effectiveness} & 17.5 & 98.92 & 97.98 & 97.25 & 97.88 & 93.22 & 91.79 \\
\multicolumn{1}{c|}{} & IBP \citep{shi2021fast} & 34.7 & 98.84 & -- & 97.95 & 97.67 & -- & 93.10 \\
\multicolumn{1}{c|}{} & CROWN-IBP \citep{zhang2020towards} & 60.3 & 98.83 & 98.19 & 97.76 & 98.18 & 93.95 & 92.98 \\ \hline
\multicolumn{1}{c|}{\multirow{5}{*}{Lipschitz}}
& GroupSort (MaxMin) \citep{anil2019sorting} & -- & 97.0 & 84.0 & 79.0 & 97.0 & 34.0 & 2.0 \\
\multicolumn{1}{c|}{} & $\ell_\infty$-dist Net \citep{zhang2021towards} & 17.2 & 98.66 & 97.85 & 97.73 & 98.54 & 94.62 & 92.64 \\
\multicolumn{1}{c|}{} & $\ell_\infty$-dist Net+MLP \citep{zhang2021towards} & 17.2 & 98.86 & 97.77 & 97.60 & 98.56 & 95.05 & 93.09 \\
\multicolumn{1}{c|}{} & $\ell_\infty$-dist Net \citep{zhang2022boosting} & 17.0 & 98.93 & 98.03 & 97.95 & 98.56 & 94.73 & 93.20 \\
\multicolumn{1}{c|}{} & SortNet & \textbf{10.6} & 99.01 & 98.21 & \textbf{98.14} & 98.46 & 94.64 & \textbf{93.40} \\ \hline
\multicolumn{1}{c|}{\multirow{1}{*}{MILP}}
& COLT \citep{balunovic2020Adversarial} & -- & 99.2 & -- & 97.1 & 97.3 & -- & 85.7 \\
\Xhline{0.75pt}
\end{tabular}
\end{adjustwidth}
\caption{Comparison of our results with existing methods on CIFAR-10 dataset.}
\label{tab:results_cifar10}
\vspace{2pt}
\begin{adjustwidth}{-.5in}{-.5in}
\centering
\begin{tabular}{cc||r|r||c|c|c|c|c|c}
\Xhline{0.75pt}
\multicolumn{2}{c||}{\multirow{2}{*}{Method}} & \multicolumn{2}{c||}{ Time (s)} & \multicolumn{3}{c|}{$\epsilon=2/255$} & \multicolumn{3}{c}{$\epsilon=8/255$} \\ \cline{3-10}
\multicolumn{2}{c||}{} & \hspace{1pt }Train\hspace{1pt } & \hspace{-2pt }Certify\hspace{-2pt } & \hspace{1pt }Clean\hspace{1pt } & \hspace{2pt }PGD\hspace{2pt } & \hspace{-4pt }Certified\hspace{-4pt } & \hspace{1pt }Clean\hspace{1pt } & \hspace{2pt }PGD\hspace{2pt } & \hspace{-4pt }Certified\hspace{-4pt } \\ \Xhline{0.6pt}
\multicolumn{1}{c|}{\multirow{5}{*}{\hspace{-3pt}Relaxation\hspace{-3pt}}}
& CAP \citep{wong2018scaling} & 659.0 & 7,570 & 68.28 & -- & 53.89 & 28.67 & -- & 21.78 \\
\multicolumn{1}{c|}{} & IBP \citep{gowal2018effectiveness} & 19.0 & 2.74 & 61.46 & 50.28 & 44.79 & 50.99 & 31.27 & 29.19 \\
\multicolumn{1}{c|}{} & IBP \citep{shi2021fast} & 70.4 & 4.02 & 66.84 & -- & 52.85 & 48.94 & -- & 34.97 \\
\multicolumn{1}{c|}{} & CROWN-IBP \citep{zhang2020towards} & 87.2 & 7.01 & 71.52 &59.72 & 53.97 & 45.98 &34.58 & 33.06 \\
\multicolumn{1}{c|}{} & CROWN-IBP \citep{xu2020automatic} & 45.0 & 4.02 & -- & -- & -- & 46.29 & 35.69 & 33.38 \\ \hline
\multicolumn{1}{c|}{\multirow{5}{*}{Lipschitz}}
& $\ell_\infty$-dist Net \citep{zhang2021towards} & 19.7 & 1.73 & 60.33 & 51.55 & 50.94 & 56.80 & 36.19 & 33.30 \\
\multicolumn{1}{c|}{} & \hspace{-3pt}$\ell_\infty$-dist Net+MLP \citep{zhang2021towards}\hspace{-3pt} & 19.7 & 1.74 & 65.62 & 51.47 & 51.05 & 50.80 & 36.51 & 35.42 \\
\multicolumn{1}{c|}{} & $\ell_\infty$-dist Net \citep{zhang2022boosting} & 18.9 & 1.73 & 60.61 & 54.28 & 54.12 & 54.30 & 41.84 & 40.06 \\
\multicolumn{1}{c|}{} & SortNet & 14.0 & 8.00 & 65.96 & 57.03 & 56.67 & 54.84 & 41.50 & \textbf{40.39} \\
\multicolumn{1}{c|}{} & SortNet+MLP & \textbf{13.4} & 8.01 & 67.72 & 57.83 & \textbf{56.94} & 54.13 & 41.58 & 39.99 \\
\hline
\multicolumn{1}{c|}{\multirow{1}{*}{MILP}}
& COLT \citep{balunovic2020Adversarial} & 252.0 & $\sim 10^5$ & 78.4 & -- & 60.5 & 51.7 & -- & 27.5 \\
\Xhline{0.75pt}
\end{tabular}
\end{adjustwidth}
\caption{Comparison of our results with existing methods on TinyImageNet and ImageNet datasets.}
\label{tab:results_imagenet}
\vspace{2pt}
\begin{adjustwidth}{-.5in}{-.5in}
\centering
\begin{tabular}{c||r|c|c|c||r|c|c|c}
\Xhline{0.75pt}
\multirow{2}{*}{Method} & \multicolumn{4}{c||}{TinyImageNet ($\epsilon=1/255$)} & \multicolumn{4}{c}{ImageNet $64\times 64$ ($\epsilon=1/255$)} \\ \cline{2-9}
& \hspace{-4pt }Time (s)\hspace{-4pt } & \hspace{1pt }Clean\hspace{1pt } & \hspace{2pt }PGD\hspace{2pt } & \hspace{-4pt }Certified\hspace{-4pt } & \hspace{-4pt }Time (s)\hspace{-4pt } & \hspace{1pt }Clean\hspace{1pt } & \hspace{2pt }PGD\hspace{2pt } & \hspace{-4pt }Certified\hspace{-4pt } \\ \Xhline{0.6pt}
IBP \citep{gowal2018effectiveness} & 735 & 26.46 & 20.60 & 14.85 & 11,026 & 15.96 & 9.12 & 6.13 \\
IBP \citep{shi2021fast} & 284 & 25.71 & -- & 17.64 & -- & -- & -- & -- \\
CROWN-IBP \citep{xu2020automatic} & 1,256 & 27.82 & 20.52 & 15.86 & 16,269 & 16.23 & 10.26 & 8.73 \\
$\ell_\infty$-dist Net+MLP \citep{zhang2021towards} & 55 & 21.82 & -- & 16.31 & -- & -- & -- & -- \\
$\ell_\infty$-dist Net \citep{zhang2022boosting} & 55 & 12.57 & 11.09 & 11.04 & -- & -- & -- & -- \\
SortNet+MLP & \textbf{39} & 24.17 & 20.57 & 17.92 & \textbf{715} & 13.48 & 10.93 & 9.02 \\
SortNet+MLP (2x larger) & 156 & 25.69 & 21.57 & \textbf{18.18} & 2,192 & 14.79 & 11.93 & \textbf{9.54} \\ \Xhline{0.75pt}
\end{tabular}
\end{adjustwidth}
\vspace{-10pt}
\end{table*}
\section{Conclusion}
In this paper, we study certified $\ell_\infty$ robustness from the novel perspective of representing Boolean functions. Our analysis points out an inherent problem in the expressive power of standard Lipschitz networks, and provides novel insights on how recently proposed Lipschitz networks resolve the problem. We also answer several previous open problems, such as $(\mathrm{i})$ the expressive power of standard Lipschitz networks with general activations \citep{neumayer2022approximation} and $(\mathrm{ii})$ the lower bound on the depth of MaxMin networks to become universal approximators \citep{tanielian2021approximating,neumayer2022approximation}. Finally, guided by the theoretical results, we design a new Lipschitz network with better empirical performance than prior works.
\section*{Limitations, Open Problems, and Broader impact}
\textbf{Regarding the $\ell_p$-norm}. One major limitation of this work is that we focus only on $\ell_\infty$ robustness. While such results may shed light on general $\ell_p$-norm settings when $p$ is large, they do not apply to the standard $\ell_2$-norm. In particular, in this case MaxMin is \emph{equivalent} to the absolute value activation function in terms of expressive power \citep{anil2019sorting}, which contrasts with the $\ell_\infty$-norm case, for which MaxMin networks are strictly more expressive. Moreover, empirical results suggest that these $\ell_2$ Lipschitz networks may have sufficient expressive power \citep{singla2022improved} (although it remains a fascinating open problem to prove that MaxMin networks with bounded matrix 2-norm are universal approximators).
Based on the above finding, this work reflects an interesting ``phase transition'' in the expressive power of standard Lipschitz networks when $p$ is switched from 2 to a large number. Coincidentally, a similar limitation is also proved when using randomized smoothing, which suffers from the curse of dimensionality when $p>2$ \citep{yang2020randomized}. This raises an interesting question of why the effect of $p$ is very similar for both methods and how things change as $p$ increases.
\textbf{Beyond standard Lipschitz networks}. Another limitation is that our results apply only to standard Lipschitz networks. When the Lipschitz constant is constrained using carefully designed bounding methods \citep{raghunathan2018certified,fazlyab2019efficient,latorre2020lipschitz,shi2022efficient} (rather than a simple stacking of 1-Lipschitz layers), robustness certification becomes less efficient, but the resulting networks are likely to bypass the impossibility results in this paper. It is an interesting direction to study whether \emph{standard networks} equipped with a carefully designed Lipschitz bounding method can achieve good certified robustness while retaining adequate efficiency.
\textbf{Other promising directions}. On the application side, it is interesting to study how to design efficient training approaches for the general SortNet models with learnable weights or learnable $\rho$. Another meaningful question is how to encode inductive biases into these Lipschitz networks (e.g., designing convolutional architectures) to better suit image tasks.
\textbf{Broader impact}.
Interestingly, our theoretical results point out a surprising connection between MaxMin/$\ell_\infty$-distance networks and Boolean circuits. We believe the value of this paper may go beyond the certified robustness community and link to the field of theoretical computer science.
\section*{Acknowledgement}
This work is supported by National Science Foundation of China (NSFC62276005), The Major Key Project of PCL (PCL2021A12), Exploratory Research Project of Zhejiang Lab (No. 2022RC0AN02), and Project 2020BD006 supported by PKUBaidu Fund. Bohang Zhang would like to thank Ruichen Li and Yuxin Dong for helpful discussions. We also thank all the anonymous reviewers for the very careful and detailed reviews as well as the valuable suggestions. Their help has further enhanced our work.
\bibliographystyle{plain}
\label{Sec:Intro}
Many studies of the dynamical evolution of high-multiplicity, tightly-packed planetary systems (known as Kepler Multis) have shown that systems similar to those observed in the Kepler sample may experience planet-planet instabilities that result in physical collisions (\citealt{2015ApJ...807...44P}; \citealt{2015ApJ...806L..26V}; \citealt{2017MNRAS.470.4145H}, hereafter referred to as {\it Paper 1}).
Studies of generic planetary systems show that post-disk dynamical interactions may be important in shaping the observed structure of Kepler Multis (\citealt{1996Icar..119..261C}; \citealt{2008ApJ...686..603J}; \citealt{2008ApJ...686..621F}; \citealt{2008ApJ...686..580C}), and additional studies (\citealt{2008Sci...321..814T}; \citealt{2010ApJ...725.1995M}) confirm that planet-planet interactions occurring after the planetesimal disk dissipates are consistent with the observed orbits.
\citet{2012ApJ...761...92F} find many Kepler Multis are dynamically packed, in that no additional planets may be added in between neighboring orbits without leading to planet-planet interactions, suggesting many observed systems are on the cusp of dynamical instability.
Studies of the structure of the planets in these systems have shown most of these planets are made up of a high-density core with a tenuous gas envelope dominating the volume (e.g., \citealt{2015ApJ...806..183W}; \citealt{2014ApJ...783L...6W}; \citealt{2015ApJ...801...41R}).
This highly differentiated structure suggests that the outcomes of planet-planet collisions in these systems are very sensitive to the details of the collision, and that the sticky-sphere approximation, ubiquitous in dynamical integrators, may not be valid for many of these collisions.
The sticky-sphere approximation assumes a collision occurs in the integration when the planets have a minimum separation less than the sum of the physical radii, and results in a single surviving planet with mass equal to the sum of the two colliding planets' masses, conserving center of mass, linear momentum, and angular momentum.
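For reference, the sticky-sphere prescription described above can be written down in a few lines. This is a minimal sketch with a hypothetical function name; units are arbitrary as long as they are consistent:

```python
def sticky_sphere_merge(m1, r1, v1, m2, r2, v2):
    """Sticky-sphere outcome: a single surviving planet with the combined
    mass, placed at the center of mass and moving with the center-of-mass
    velocity, so that mass and linear momentum are conserved exactly (any
    residual angular momentum is implicitly absorbed as spin of the remnant).
    r1, v1, r2, v2 are position and velocity 3-vectors."""
    m = m1 + m2
    r = [(m1 * a + m2 * b) / m for a, b in zip(r1, r2)]
    v = [(m1 * a + m2 * b) / m for a, b in zip(v1, v2)]
    return m, r, v
```

The hydrodynamic calculations in this paper are precisely an attempt to quantify when this simple prescription breaks down.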
We perform five sets of detailed calculations, varying the mass ratio and core mass fractions, to better understand and predict the outcomes of planet-planet collisions that may be instrumental in shaping both the orbits and planet structures of these systems \citep{2016ApJ...817L..13I}.
For our primary suite of integrations, we use Kepler-36 as a nominal remnant of a previous planet-planet collision.
To examine how the mass ratio and planet structures affect the outcomes of these collisions, we also perform calculations between planets of mass ratio $q=1$ and $q=1/3$, where the less-massive planet has a mass of $4\ M_\mathrm{E}$, and core mass fractions of $m_\mathrm{c}/m=0.85$ and $m_\mathrm{c}/m=0.95.$
Kepler-36 is a 2-planet system discovered by the Kepler mission \citep{2012Sci...337..556C}, where Kepler-36b and Kepler-36c (henceforth referred to as {\it b} and {\it c}) have an orbital separation of less than $0.015$ AU, and a density ratio of $\rho_\mathrm{b}/\rho_\mathrm{c}=0.12$.
{\it b} has a density consistent with little or no gas envelope ($\rho_\mathrm{b}=7.5\ \mathrm{g}\ \mathrm{cm}^{-3}$, \citealt{2012Sci...337..556C}), while {\it c} has a density consistent with a gas mass fraction of $12\%$ and a gas radius fraction of $55\%$ ($\rho_\mathrm{c}=0.89\ \mathrm{g}\ \mathrm{cm}^{-3}$).
\citet{2012ApJ...755L..21D} show Kepler-36 likely undergoes short timescale dynamical chaos and provide best-fit densities of $\rho_b = 7.65\ \mathrm{g}\ \mathrm{cm}^{-3}$ and $\rho_c = 0.93\ \mathrm{g}\ \mathrm{cm}^{-3}$.
\citet{2013AN....334..992N} explore the possibility of additional planets in the system and use long-term stability to constrain the semi-major axes of potential additional planets to $a<0.1\ \mathrm{AU}$ and $a>0.14\ \mathrm{AU}$.
The large density difference and small dynamical separation motivate many studies of the formation of Kepler-36.
Many formation channels have been explored to explain the observed properties of Kepler-36.
\citet{2013MNRAS.434.3018P} show migration in a turbulent disc can result in systems very similar to Kepler-36.
\citet{2013ApJ...776....2L} and \citet{2016ApJ...819L..10O} explore the possibility that the density difference can be explained by evaporation of the envelopes, and constrain the cooling timescale and initial structures of {\it b} and {\it c}.
Without migration, the planets' structures are unlikely to have formed in situ, given the combination of small orbital separation and extreme density ratio; the system may instead be the remnant of a previous planet-planet collision that depleted the gas envelope of the smaller planet.
We use Kepler-36 as a potential collision remnant to guide our choice of initial conditions for our primary set of collision calculations.
For this set of calculations we provide a generic analytic calculation of the orbits and amount of conservative mass-transfer required to create an initially unstable system that becomes stable after a collision resulting in two surviving planets (see \S\ref{A:Analytic}).
Many studies have been conducted to understand the details of planet-planet collisions and close encounters.
Much of the early work studies potential Moon-formation collisions, specifically between a proto-earth and an impactor, varying the planet structures (\citealt{1975Icar...24..504H}, \citealt{1986Icar...66..515B}; \citealt{2000orem.book..133C}; \citealt{2001Natur.412..708C}; \citealt{2004ARA&A..42..441C}; \citealt{2013Icar..222..200C}; \citealt{2014DPS....4650109C}) and using various tabulated equations of state to handle the abrupt changes in density and phase transitions of the impacted rock.
\citet{2015ApJ...812..164L} present calculations of direct collisions between a rocky, terrestrial planet and a gas giant, treating the rocky core with a multi-phase equation of state (Tillotson 1962) and the gas as a polytrope with $\gamma=5/3$.
\citet{2010arXiv1007.1418P} discuss the possibility of tidal dissipation of orbital energy leading to potentially stable gas-giant binaries.
\citet{2015MNRAS.448.1751I} and \citet{2016ApJ...817L..13I} combine 1-D hydrodynamic calculations and a thermal evolution model to examine the effect of collisions resulting in mergers during the giant-impact phase of planet formation, and find that these collisions result in a large range of planet densities, similar to the observed densities found in Kepler Multis.
In this work we explore the outcomes of grazing collisions between sub-Neptunes close to the host star, varying the mass ratio and gas mass fractions.
We present several suites of collision calculations from initial conditions sourced from dynamical integrations and find several distinct outcomes, including scatterings, mergers, and a potential planet-planet binary (formed from energy dissipation due to the physical collision).
We aggregate the results and develop a fit predicting the outcomes and changes in mass from such collisions for use in dynamical integrators.
The rest of the paper is organized as follows:
In \S\ref{Sec:Methods} we discuss the initial conditions and numerical methods used for our dynamical integrations of planetary systems, and our method for creating 3-D planet models and numerical methods used in the subsequent hydrodynamic calculations.
In \S\ref{Sec:Outcomes} we present the results of our hydrodynamic calculations, classifying the collision outcomes as bound planet-pairs, two surviving planets in stable or unstable orbits, or mergers, and explore the stability and potential observable indicators of collisions in both the orbits and planet structures.
In \S\ref{Sec:Predictions} we present a prescription, aggregating the results from all sets of collision calculations, for predicting the outcomes and modeling the mass loss of planet-planet collisions for use in dynamical integrators.
Finally, we summarize our conclusions in \S\ref{Sec:Conclusions}.
\section{Numerical Methods}
\label{Sec:Methods}
We perform five sets of collision calculations, varying the mass ratio and core mass fractions of the two-planet system.
We choose our combinations of mass ratios and core mass fractions to broadly cover the properties of neighboring planets seen in Kepler Multis.
Table~\ref{TBL:q_mc} shows the planet masses, radii, and core properties, and host star properties used in each set, where \S\ref{SSec:MassTransfer} describes how we assign the masses and core mass fractions for the Kepler-36 progenitor calculations.
Sub-Neptune structures are more dependent on the core mass fraction than the total mass; despite having three times more gas, the $12\ M_\mathrm{E}$ models have a physical size similar to the $4\ M_\mathrm{E}$ models of identical gas mass fraction.
For each set we generate a suite of dynamical integrations and use a subset of the resultant collisions as initial conditions for later detailed hydrodynamic calculations.
\begin{deluxetable*}{lccccccccccccccc}
\tabletypesize{\footnotesize}
\tablewidth{15.5cm}
\tablecolumns{11}
\tablecaption{System Properties\label{TBL:q_mc}}
\tablehead{
\colhead{Set} & \colhead{$m_1$} & \colhead{$m_2$} & \colhead{$m_\mathrm{1,c}/m_1$} & \colhead{$m_\mathrm{2,c}/m_2$} & \colhead{$R_1$} & \colhead{$R_2$} & \colhead{$R_\mathrm{1,c}/R_1$} & \colhead{$R_\mathrm{2,c}/R_2$} & \colhead{$M_*$} & \colhead{$T_*$}}
\startdata
$ Kepler-36\ progenitor $ & $ 4.67 $ & $ 7.87 $ & $ 0.95 $ & $ 0.91 $ & $ 2.65 $ & $ 3.37 $ & $ 0.56 $ & $ 0.49 $ & $ 1.071 $ & $ 5911 $ & \\
$ q=1;\ m_\mathrm{c}=0.85 $ & $ 4.00 $ & $ 4.00 $ & $ 0.85 $ & $ 0.85 $ & $ 4.21 $ & $ 3.96 $ & $ 0.33 $ & $ 0.35 $ & $ 1.0 $ & $ 5778 $ & \\
$ q=1;\ m_\mathrm{c}=0.95 $ & $ 4.00 $ & $ 4.00 $ & $ 0.95 $ & $ 0.95 $ & $ 2.84 $ & $ 2.72 $ & $ 0.50 $ & $ 0.52 $ & $ 1.0 $ & $ 5778 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.85 $ & $ 4.00 $ & $ 12.00 $ & $ 0.85 $ & $ 0.85 $ & $ 4.21 $ & $ 4.01 $ & $ 0.33 $ & $ 0.45 $ & $ 1.0 $ & $ 5778 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.95 $ & $ 4.00 $ & $ 12.00 $ & $ 0.95 $ & $ 0.95 $ & $ 2.84 $ & $ 2.94 $ & $ 0.50 $ & $ 0.63 $ & $ 1.0 $ & $ 5778 $ & \\
\enddata
\tablenotetext{}{$Set$ designates the set name, $m_1$ and $m_2$ are the masses of the inner and outer planet in Earth masses, $m_\mathrm{1,c}/m_1$ and $m_\mathrm{2,c}/m_2$ are the core mass fractions of the inner and outer planet, $R_1$ and $R_2$ are the radii of the inner and outer planet in Earth radii, $R_\mathrm{1,c}/R_1$ and $R_\mathrm{2,c}/R_2$ are the core radius fractions of the inner and outer planet, and $M_*$ and $T_*$ are the mass and temperature of the host star in solar masses and K.}
\end{deluxetable*}
\subsection{Dynamical Integrations}
\label{Sec:Nbody}
For each set of systems, we use a Monte Carlo method to generate 1000 realizations in the point-mass limit, where we set the planets' density to an arbitrarily large value, using the Bulirsch-Stoer integrator within the N-body dynamics package {\it Mercury} 6.2 \citep{1999MNRAS.304..793C}.
We later impose planet radii to generate collisions.
Table~\ref{TBL:a_e} summarizes the initial orbits used in each set of calculations, where we set the initial period ratio $P_2/P_1=1.3$ and assign the eccentricities such that the total angular momentum is low enough to result in orbit crossings.
The results from \S\ref{A:Analytic} are used to generate the initial orbits for the Kepler-36 progenitor calculations, using the observed Kepler-36 properties to estimate the initial masses and semi-major axes.
In the more generic sets of calculations, the inner planet's semi-major axis is set to $a_1 = 0.1\ \mathrm{AU}$, and the initial eccentricities are randomly assigned, enforcing $a_1(1+e_1)=a_2(1-e_2)$ to ensure fast orbit crossing.
The inclinations are randomly drawn from a distribution uniform in $\cos(i)$, where we choose a small, but non-zero inclination, $-\theta\le i\le\theta;\ \theta=0.017$, ensuring that $a\sin(\theta)>R$, where $R$ is the radii of the planets, to simulate a nearly coplanar system.
Finally, we randomly sample the argument of periapsis, ascending node, and true anomaly from a uniform distribution from $0$ to $2\pi$.
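For concreteness, one realization of the generic initial conditions can be drawn as follows. This is a sketch with hypothetical names, not the actual {\it Mercury} setup; it uses Kepler's third law, $a_2=a_1(P_2/P_1)^{2/3}$, and solves the crossing condition $a_1(1+e_1)=a_2(1-e_2)$ for $e_2$:

```python
import math
import random

def draw_initial_orbits(a1=0.1, period_ratio=1.3, e1_max=0.18,
                        theta=0.017, seed=None):
    # One Monte Carlo realization of the generic two-planet initial orbits.
    rng = random.Random(seed)
    # Kepler's third law (same stellar mass): a2 = a1 * (P2/P1)**(2/3).
    a2 = a1 * period_ratio ** (2.0 / 3.0)
    # Draw e1, then solve a1*(1 + e1) = a2*(1 - e2) for e2 (fast crossing).
    e1 = rng.uniform(0.0, e1_max)
    e2 = 1.0 - a1 * (1.0 + e1) / a2
    # Inclination uniform in cos(i) over the narrow range |i| <= theta.
    i = math.acos(rng.uniform(math.cos(theta), 1.0))
    if rng.random() < 0.5:
        i = -i
    # Argument of periapsis, ascending node, and true anomaly for each planet.
    angles = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(6)]
    return {"a1": a1, "a2": a2, "e1": e1, "e2": e2, "i": i, "angles": angles}

orb = draw_initial_orbits(seed=42)
```

With these choices, $a_2\simeq0.119$ AU and $e_2$ falls in roughly $[0.01,0.16]$, matching the ranges listed in Table~\ref{TBL:a_e}.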
\begin{deluxetable*}{lccccccccccc}
\tabletypesize{\footnotesize}
\tablewidth{10.0cm}
\tablecolumns{7}
\tablecaption{Initial Orbits\label{TBL:a_e}}
\tablehead{
\colhead{Set} & \colhead{$a_1$} & \colhead{$a_2$} & \colhead{$e_{1,\mathrm{min}}$} & \colhead{$e_{1,\mathrm{max}}$} & \colhead{$e_{2,\mathrm{min}}$} & \colhead{$e_{2,\mathrm{max}}$}}
\startdata
$ Kepler-36\ progenitor $ & $ 0.111 $ & $ 0.132 $ & $ 0.11 $ & $ 0.11 $ & $ 0.07 $ & $ 0.07 $ & \\
$ q=1 $ & $ 0.100 $ & $ 0.119 $ & $ 0.00 $ & $ 0.18 $ & $ 0.01 $ & $ 0.16 $ & \\
$ q=1/3 $ & $ 0.100 $ & $ 0.119 $ & $ 0.00 $ & $ 0.19 $ & $ 0.00 $ & $ 0.16 $ & \\
\enddata
\tablenotetext{}{$Set$ designates the set name, $a_1$ and $a_2$ are the semi-major axes of the inner and outer planet in AU, and $e_{1,\mathrm{min}}$, $e_{1,\mathrm{max}}$, $e_{2,\mathrm{min}}$, and $e_{2,\mathrm{max}}$ are the minimum and maximum eccentricities of the inner and outer planets. The initial orbits are identical for the sets of runs with a given mass ratio.}
\end{deluxetable*}
A collision occurs in the integration when the planets have a minimum separation less than the sum of the physical radii, which are calculated using {\it MESA}, assuming an Earth-like core composition with a core density obtained by interpolating the core-mass--core-density tables of \citet{2014ApJ...787..173H}.
For each calculation, the first planet-planet collision is characterized by the distance of closest approach, $d_\mathrm{min}$, in units of the sum of the planets' physical radii, $R_1$ and $R_2$, and the collision energy, $E_\mathrm{c}=E_\mathrm{k}+E_\mathrm{g}$ in units of the binding energy of both planets up to a coefficient, \begin{equation}\label{EQ:binding_energy}E_\mathrm{b}=\sum_{k=1,2}\frac{Gm_k^2}{R_k},\end{equation} where $E_\mathrm{k}$ and $E_\mathrm{g}$ are the kinetic and gravitational potential energy of the planets in the center of mass frame, ignoring the host star.
The degree of contact, \begin{equation}\label{EQ:Degree_of_Contact}\eta=\frac{d_\mathrm{min}}{R_1+R_2},\end{equation} characterizes the depth of the collision, where $\eta=1$ describes an extremely grazing collision and $\eta=0$ describes a head-on collision.
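The diagnostics defined above ($E_\mathrm{b}$, $E_\mathrm{c}$, and $\eta$) reduce to a few lines of code; the sketch below (illustrative only, SI units, hypothetical function names) evaluates them from the planets' masses, radii, and velocity vectors:

```python
G = 6.674e-11  # gravitational constant (SI); all inputs are hypothetical examples

def binding_energy(m1, R1, m2, R2):
    """E_b = G m_1^2 / R_1 + G m_2^2 / R_2 (binding energy up to a coefficient)."""
    return G * m1**2 / R1 + G * m2**2 / R2

def degree_of_contact(d_min, R1, R2):
    """eta = d_min / (R_1 + R_2): eta = 0 is head-on, eta = 1 is extremely grazing."""
    return d_min / (R1 + R2)

def collision_energy(m1, v1, m2, v2, separation):
    """E_c = E_k + E_g in the two-planet center-of-mass frame, ignoring the
    host star; v1 and v2 are velocity vectors given as 3-tuples."""
    mu = m1 * m2 / (m1 + m2)                        # reduced mass
    v_rel2 = sum((a - b)**2 for a, b in zip(v1, v2))
    # kinetic energy in the center-of-mass frame equals mu * v_rel^2 / 2
    return 0.5 * mu * v_rel2 - G * m1 * m2 / separation
```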
Table~\ref{TBL:NBody_Stats} shows the distribution of collisions for each set of calculations, categorizing the number of collisions that result in a direct impact of the two cores, grazing collisions, and near misses, defined as close encounters where $1<d_\mathrm{min}/(R_1+R_2)<1.2.$
We note that due to tidal bulging, many of the ``near misses'' would also result in physical contact between the planets.
We find that the number of collisions does not decrease significantly for systems with planets of the same mass but smaller physical size, and the fraction of collisions that result in a direct impact between the planets' cores increases with larger core mass fractions.
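The categorization used in Table~\ref{TBL:NBody_Stats} can be sketched as follows; the direct-impact threshold `d_cores` (the sum of the two core radii) is a hypothetical input here, whereas in the paper it follows from the assumed core mass fraction and the interpolated core densities:

```python
def classify_encounter(d_min, R1, R2, d_cores):
    """Classify a close encounter by its distance of closest approach.

    d_min   : distance of closest approach
    R1, R2  : physical radii of the two planets
    d_cores : sum of the two core radii (direct-impact threshold)
    """
    if d_min < d_cores:
        return "direct impact"
    elif d_min < (R1 + R2):
        return "grazing"
    elif d_min < 1.2 * (R1 + R2):
        return "near miss"
    return "no encounter"
```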
Figure~\ref{Fig:Collision_Parameters} shows the distribution of the first planet-planet collisions in each set of integrations.
In contrast to {\it Paper 1}, we find that the number of initial collisions decreases at higher distances of closest approach, likely due to the lower multiplicity of our systems; since higher-multiplicity systems can go unstable with wider orbital spacings (\citealt{1993Icar..106..247G}, \citealt{1996Icar..119..261C}), crossing orbits will extend to higher eccentricities---and hence, higher relative velocities.
In \S\ref{SSec:Hydro}, we describe how we perform detailed hydrodynamic calculations exploring how the distance of closest approach and collision energy affect the outcomes, specifically if the cores merge, if the planets survive, and how much gas is retained by the remnant planet(s) \citep{2008ApJ...686..580C}.
\begin{deluxetable*}{lccccccccc}
\tabletypesize{\footnotesize}
\tablewidth{9.0cm}
\tablecolumns{5}
\tablecaption{Statistics of N-body Collisions\label{TBL:NBody_Stats}}
\tablehead{
\colhead{Set} & \colhead{$N_\mathrm{total}$} & \colhead{$N_\mathrm{direct\ impact}$} & \colhead{$N_\mathrm{grazing}$} & \colhead{$N_\mathrm{near\ miss}$}}
\startdata
$ Kepler-36\ progenitor $ & $ 502 $ & $ 290 $ & $ 148 $ & $ 64 $ & \\
$ q=1;\ m_\mathrm{c} = 0.85 $ & $ 578 $ & $ 245 $ & $ 266 $ & $ 67 $ & \\
$ q=1;\ m_\mathrm{c} = 0.95 $ & $ 572 $ & $ 303 $ & $ 181 $ & $ 81 $ & \\
$ q=1/3;\ m_\mathrm{c} = 0.85 $ & $ 708 $ & $ 360 $ & $ 256 $ & $ 92 $ & \\
$ q=1/3;\ m_\mathrm{c} = 0.95 $ & $ 701 $ & $ 447 $ & $ 171 $ & $ 83 $ & \\
\enddata
\tablenotetext{}{$Set$ designates the set name, $N_\mathrm{total}$ is the number of collisions that occur within $1000$ years, $N_\mathrm{direct\ impact}$ is the number of collisions resulting in a direct impact between the planets' cores, $N_\mathrm{grazing}$ is the number of collisions that do not result in a direct impact between the planets' cores, and $N_\mathrm{near\ miss}$ is the number of near misses, defined as close encounters where $1<d_\mathrm{min}/(R_1+R_2)<1.2.$}
\end{deluxetable*}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f1a.eps}
\color{white}
\rule{8.0cm}{5.0cm}
\color{black} \\
\includegraphics[width=8.0cm]{./f1b.eps}
\includegraphics[width=8.0cm]{./f1c.eps}
\end{tabular}
\end{center}
\caption{Scatter plots showing the distance of closest approach, $d_\mathrm{min}$, in units of the sum of the planets' physical radii, and collision energy, $E_\mathrm{c}=E_\mathrm{k}+E_\mathrm{g}$ in units of the binding energy of both planets up to a coefficient, $E_\mathrm{b}=Gm_1^2/R_1+Gm_2^2/R_2$, where $E_\mathrm{k}$ and $E_\mathrm{g}$ are the kinetic and gravitational potential energy of the planets in the center of mass frame, ignoring the host star, for the first collision or near miss in each N-body integration. The histograms above the scatter plots show the distribution of distances of closest approach and the histograms to the right of the scatter plots show the distribution of collision energies for the first collision or near miss in each integration. We show the distribution of collisions with a small enough distance of closest approach to result in a direct impact between the two cores (red), using the nominal core radii given by \citet{2013ApJ...770..131L} for the Kepler-36 progenitor integrations (top left), and using the nominal core radii calculated using the assigned core mass fraction and a core density from \citet{2014ApJ...787..173H} for the $q=1$ (bottom left) and $q=1/3$ (bottom right) integrations. We also show the distribution of grazing collisions (blue), and near misses (green), defined as close encounters where $1<d_\mathrm{min}/(R_1+R_2)<1.2$. For the more generic calculations, collisions are calculated assuming a core mass fraction of $85\%$. Collisions in between planets with a higher core mass fraction require a smaller distance of closest approach, and we show the minimum distance for a core mass fraction of $95\%$ (dotted black line).}
\label{Fig:Collision_Parameters}
\end{figure*}
\subsection{Hydrodynamics}
\label{SSec:Hydro}
We perform 102 hydrodynamic calculations sampled from each set of dynamical integrations described in \S\ref{Sec:Nbody}, using an SPH code, {\it StarSmasher}\footnote{{\it StarSmasher} is available at https://jalombar.github.io/starsmasher/.} (previously {\it StarCrash}, originally developed by \citealt{1991PhDT........11R} and later updated as described in \citealt{1999JCoPh.152..687L} and \citealt{2000PhRvD..62f4012F}), to treat the hydrodynamics.
{\it StarSmasher} implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards as described in \citet{2010MNRAS.402..105G}.
Using direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed \citep{2010ProCS...1.1119G}.
The code uses a cubic spline \citep{1985A&A...149..135M} for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch \citep{1995JCoPh.121..357B} to prevent unphysical inter-particle penetration, specifically described in \citet{2015ApJ...806..135H}.
We sample collisions at varying degrees of contact and collision energies, using the position and velocity coordinates of the planets many dynamical times (defined as the free-fall time of a test particle around the planet) prior to the close encounter, where we treat the host star as a point-mass particle that interacts only gravitationally.
We preferentially sample collisions to explore the boundary between merger and scattering results.
\subsection{Planet Models and Equations of State}
Following the method described in $\S A.3$ in {\it Paper 1}, we first use {\it MESA} to generate gas envelopes with a constant density core of mass, $m_\mathrm{c}$, where the core density is determined using tables from \citet{2014ApJ...787..173H} assuming an Earth-like core composition ($67.5\%$ silicate mantle and $32.5\%$ iron core).
We irradiate the planet at the chosen semi-major axis and host-star temperature (shown in Tables~\ref{TBL:q_mc} and \ref{TBL:a_e}) for $4.5\times10^9$ years.
We then replace the isothermal and constant-density cores {\it MESA} generates with a differentiated core, with equations of state from \citet{2007ApJ...669.1279S}, using the core mass and radius as boundary conditions to solve for the iron core and silicate-mantle mass fractions, recovering solutions very close to the Earth-like input values.
{\bf Planet structures are very time-sensitive.
\citet{2016ApJ...831..180C} and \citet{2013ApJ...776....2L} show the time-dependent structure of Kepler-36b and Kepler-36c (Figure 1 in both papers), where Kepler-36b contracts by a factor of 7 and Kepler-36c contracts by a factor of 3 between 100 Myr and 4.5 Gyr.
The age of planets when orbital instability first develops is not well understood, in part because we do not know what triggers the instability: whether it is just the internal secular evolution of the system (e.g. \citealt{2008ApJ...686..580C}; \citealt{2008ApJ...686..603J}; \citealt{2012ApJ...755L..21D}), or an external trigger (e.g., \citealt{2004AJ....128..869Z}).
In general, orbital instability timescales in chaotic systems have very broad distributions (e.g., \citealt{2008ApJ...686..580C}; \citealt{2007ApJ...666..423Z}; \citealt{2010A&A...516A..82F}), so here we take a conservative assumption, assuming the planets are old and therefore their radii have shrunk to near equilibrium.
With younger, more inflated planets, collisions happen even faster, but this would make our initial conditions very arbitrary (very sensitive to exact assumptions about the age of the planets at the time they collide).
Collisions occurring between younger planets would be more dramatic, due to lower density and hotter envelopes, and our calculations should be treated as a lower limit.}
{\it StarSmasher} uses the combined gas and core profiles and, because we are interested in the evolution of the low-density gas envelope, creates planet models with equal-number-density, unequal-mass particle distributions \citep{2006MNRAS.365..991M}, using $\sim10^5$ particles per planet.
Gas particles follow a semi-analytic equation of state fit to a grid of {\it MESA} models, \begin{equation}\label{EQ:gasEoS_SPH}p_i=\frac{\rho_iu_i}{\beta_i}+K_\mathrm{e}\rho_i^{\gamma_\mathrm{e}}\left(1-\frac{1}{\beta_i(\gamma_\mathrm{e}-1)}\right),\end{equation}\begin{equation}u_i=\frac{K_\mathrm{e}\rho_i^{\gamma_\mathrm{e}-1}}{\gamma_\mathrm{e}-1}+\frac{\beta_ik_\mathrm{B}T_i}{\mu_im_H},\end{equation} where we use $\gamma_\mathrm{e}=3$, while $K_\mathrm{e}$ and $\beta_i$ are fitted parameters.
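The gas equation of state in Eq.~\eqref{EQ:gasEoS_SPH} can be evaluated per particle as in the sketch below; the parameter values in the test are placeholders, since in practice $K_\mathrm{e}$ and $\beta_i$ are fitted to the {\it MESA} grid:

```python
def gas_pressure(rho, u, beta, K_e, gamma_e=3.0):
    """p = rho*u/beta + K_e * rho^gamma_e * (1 - 1/(beta*(gamma_e - 1)))."""
    return rho * u / beta + K_e * rho**gamma_e * (1.0 - 1.0 / (beta * (gamma_e - 1.0)))

def gas_internal_energy(rho, T, beta, K_e, mu,
                        gamma_e=3.0, k_B=1.380649e-23, m_H=1.6735575e-27):
    """u = K_e * rho^(gamma_e-1)/(gamma_e-1) + beta * k_B * T / (mu * m_H)."""
    return K_e * rho**(gamma_e - 1.0) / (gamma_e - 1.0) + beta * k_B * T / (mu * m_H)
```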
Core particles follow an equation of state from \citet{2007ApJ...669.1279S}, \begin{equation}p_i=\frac{u_i(\gamma_\mathrm{c}-1)\rho_i\left(1-\frac{\rho_{\rm c}^\prime}{\rho_i}\right)^{\gamma_\mathrm{c}}}{{}_2F_1\left(1-\gamma_\mathrm{c},-\gamma_\mathrm{c},2-\gamma_\mathrm{c},\frac{\rho_{\rm c}^\prime}{\rho_i}\right)},\end{equation} where the internal energy is initially set as \begin{equation}u_i=\frac{c\rho_i^{\gamma_\mathrm{c}-1}{}_2F_1\left(1-\gamma_\mathrm{c},-\gamma_\mathrm{c},2-\gamma_\mathrm{c},\frac{\rho_{\rm c}^\prime}{\rho_i}\right)}{\gamma_\mathrm{c}-1},\end{equation} with $c$, $\gamma_\mathrm{c}$, and $\rho_{\rm c}^\prime$ constants determined by the composition; ${}_2F_1\left(1-\gamma_\mathrm{c},-\gamma_\mathrm{c},2-\gamma_\mathrm{c},\frac{\rho_{\rm c}^\prime}{\rho_i}\right)$ is the ordinary hypergeometric function.
Expressions for the mantle are analogous to those for the core, but with coefficients appropriate to a silicate composition.
Particles near the interfaces of the gas envelope, silicate mantle, and iron core are treated as mixed-composition in order to resolve the high density-gradient between components.
For more details of how we generate our planet models, including how we fit the equation of state used for the gas envelope, and the algorithms used to handle mixed-composition particles, see \S$A.3$ in {\it Paper 1}.
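For the core equation of state, the ordinary hypergeometric function can be evaluated with a library routine such as {\tt scipy.special.hyp2f1}, or with a short Gauss series as in the dependency-free sketch below. The composition constants in the test are hypothetical placeholders; the series converges for $\rho_{\rm c}^\prime/\rho_i<1$ (particle densities above $\rho_{\rm c}^\prime$) and requires $2-\gamma_\mathrm{c}$ not to be a non-positive integer:

```python
def hyp2f1(a, b, c, z, tol=1e-14, max_terms=500):
    """Gauss series for 2F1(a, b; c; z), valid for |z| < 1.

    Note: fails if c is a non-positive integer (zero denominator)."""
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def core_pressure(rho, u, gamma_c, rho_c):
    """Pressure of a core particle from the hypergeometric fit in the text."""
    F = hyp2f1(1.0 - gamma_c, -gamma_c, 2.0 - gamma_c, rho_c / rho)
    return u * (gamma_c - 1.0) * rho * (1.0 - rho_c / rho)**gamma_c / F
```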
Figures~\ref{Fig:Radial_Profiles_b} and \ref{Fig:Radial_Profiles_c} compare the initial profiles from {\it MESA} and the semi-analytic core equations of state to the planet profiles after 2000 code timescales in {\it StarSmasher}, relaxed in isolation both to test the stability of the models and to minimize the initial spurious noise. A code timescale is within order unity of the free-fall time at solar density; for the planets, $t_\mathrm{free-fall}\sim\left(\frac{\rho_\odot}{\rho_\mathrm{planet}}\right)^{1/2}t_\mathrm{code}$.
The planet models are very stable at the end of the relaxation; the radial acceleration on the particles is near zero throughout the profile.
The models for {\it b} and {\it c} for the Kepler-36 progenitor calculations have 12114 and 8190 core particles and 87840 and 91764 gas particles, respectively.
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f2a.eps}
\includegraphics[width=8.0cm]{./f2b.eps}\\
\includegraphics[width=8.0cm]{./f2c.eps}
\includegraphics[width=8.0cm]{./f2d.eps}\\
\end{tabular}
\end{center}
\caption{Radial profiles at the end of 2000 code timescales for an isolated model of a Kepler-36b progenitor, where we enable a relaxation force (see \citealt{2015ApJ...806..135H} for details) for the first 100 code timescales.
In each plot we mark the transitions from core to mantle and from mantle to gas envelope (vertical dashed lines).
Density (top left) and internal energy (top right) profiles for the input profile (blue), generated using {\it MESA} and the equations of state from \citet{2007ApJ...669.1279S}, the initial values assigned to each particle (black), and the values after 2000 code timescales (red).
Particle composition, $x$, profile (bottom left) after 2000 code timescales, where $x$ represents the mass fraction of the particle belonging to the heavier composition near the interface.
Equilibrium profile (bottom right) after 2000 code timescales, showing the radial acceleration, {\bf in code units}, from the hydrostatic force (red), the gravitational force (green), and the total force (black).
These figures show how we assigned the particle composition and redistributed internal energy based on the smoothed densities.
The model relaxes into a very stable hydrostatic equilibrium, stable for at least the timescales used for the dynamical calculations.}
\label{Fig:Radial_Profiles_b}
\end{figure*}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f3a.eps}
\includegraphics[width=8.0cm]{./f3b.eps}\\
\includegraphics[width=8.0cm]{./f3c.eps}
\includegraphics[width=8.0cm]{./f3d.eps}\\
\end{tabular}
\end{center}
\caption{Like Fig.~\ref{Fig:Radial_Profiles_b} but for an isolated model of a Kepler-36c progenitor.}
\label{Fig:Radial_Profiles_c}
\end{figure*}
We note that much of the previous literature exploring the mass-radius relationship of sub-Neptunes performs calculations with both homogeneous and differentiated equations of state (\citealt{2014ApJ...787..173H} and \citealt{2007ApJ...669.1279S}).
We emphasize that here we are focused on differentiated cores.
Homogeneous cores would be denser and smaller, which would impact the results of collision simulations.
\subsection{Analysis of Hydro Calculations}
For each output snapshot from the hydrodynamic calculations, we build the planets starting from the dense iron core.
For each particle, in ascending order of distance from the center of mass of the planet, we calculate a Jacobi Constant, \begin{equation}C_\mathrm{J}=\frac{x^2+y^2}{a^2}+2a\left(\frac{k_1}{r_1}+\frac{k_2}{r_2}\right)-\frac{v_x^2+v_y^2+v_z^2}{n^2a^2},\end{equation} where $n$ is the orbital frequency, $k_1 = M_*/(m_\mathrm{p}+M_*)$ and $k_2 = \mu_2 = m_\mathrm{p}/(m_\mathrm{p}+M_*)$ are the mass fractions of the star and planet, with $k_1+k_2=1$, $r_1$ and $r_2$ are the distances from the particle to the host star and planet, and the position and velocity vectors are defined in the frame corotating with the planet and the star.
We calculate the Jacobi Constant at $L_1$, assuming a circular orbit \citep{1999ssd..book.....M}, \begin{equation}C_{L_1}\simeq3+3^{4/3}\mu_2^{2/3}-10\frac{\mu_2}{3},\end{equation} and assign the particle to the planet if $C_\mathrm{J} > C_{L_1}$, updating the planet's mass, position, and velocity.
For each particle initially assigned to both planets, we assign it to the planet with the lower relative orbital energy with respect to the particle, \begin{equation}E=m\left(\frac{v^2}{2}-\frac{Gm_\mathrm{p}}{r}\right),\end{equation} where $m_\mathrm{p}$ is the mass of the planet, $m$ is the mass of the particle, and $r$ and $v$ are the scalar distance and relative velocity between the particle and the planet.
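The assignment test (Jacobi constant versus its value at $L_1$, plus the orbital-energy tie-break) can be sketched as follows; this is a simplified illustration with hypothetical names, taking positions and velocities as 3-tuples in the corotating frame:

```python
def jacobi_constant(pos, vel, a, n, k1, k2, r1, r2):
    """Dimensionless C_J = (x^2 + y^2)/a^2 + 2a(k1/r1 + k2/r2) - v^2/(n^2 a^2)."""
    x, y, z = pos
    v2 = sum(c * c for c in vel)
    return (x * x + y * y) / a**2 + 2.0 * a * (k1 / r1 + k2 / r2) - v2 / (n * a)**2

def jacobi_at_L1(mu2):
    """C_L1 ~= 3 + 3^(4/3) mu2^(2/3) - 10 mu2/3, assuming a circular orbit;
    mu2 is the planet's mass fraction (k2 in the text)."""
    return 3.0 + 3.0**(4.0 / 3.0) * mu2**(2.0 / 3.0) - 10.0 * mu2 / 3.0

def bound_to_planet(pos, vel, a, n, k1, k2, r1, r2):
    """A particle is assigned to the planet if C_J > C_L1."""
    return jacobi_constant(pos, vel, a, n, k1, k2, r1, r2) > jacobi_at_L1(k2)

def relative_orbital_energy(m, m_p, r, v, G=6.674e-11):
    """E = m (v^2/2 - G m_p / r): tie-break for doubly-assigned particles."""
    return m * (0.5 * v * v - G * m_p / r)
```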
We categorize the collision outcomes by examining the relative orbits of the two planets, ignoring the host star, where the semi-major axis, \begin{equation}a=\frac{-Gm_1m_2}{2E_\mathrm{orb}},\end{equation} and the eccentricity, \begin{equation}e=\left|\left(\frac{{\bf v}^2}{G(m_1+m_2)}-\frac{1}{|{\bf r}|}\right){\bf r}-\frac{{\bf r}\cdot{\bf v}}{G(m_1+m_2)}{\bf v}\right|,\end{equation} where \begin{equation}E_\mathrm{orb}=\frac{1}{2}\mu{\bf v}^2-\frac{Gm_1m_2}{|{\bf r}|}\end{equation} is the relative orbital energy, $\mu$ is the reduced mass, ${\bf v} = {\bf v}_1-{\bf v}_2,$ and ${\bf r}={\bf r}_1-{\bf r}_2$.
We categorize the outcome as a merger if the relative periapsis of the two planets is less than twice the sum of the core radii (to account for the observed tidal deformation of the cores), as a bound planet-pair if the relative orbital energy results in an apoapsis less than the mutual Hill radius of the planets, and as a scattering if the two planets leave their mutual Hill sphere, where the mutual Hill radius is \begin{equation}\label{EQ:RHill}R_\mathrm{H} = a\left(\frac{\mu}{3M_*}\right)^{1/3}.\end{equation}
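This categorization can be written compactly from the relative state vector of the two planets; the sketch below ignores the host star, computes $a$ from the standard relation $E_\mathrm{orb}=-Gm_1m_2/(2a)$ and $e$ from the eccentricity vector, and all names and thresholds are illustrative:

```python
G = 6.674e-11  # SI; all inputs are hypothetical examples

def _sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def relative_elements(m1, m2, r1, r2, v1, v2):
    """Semi-major axis and eccentricity of the planet-planet relative orbit."""
    r, v = _sub(r1, r2), _sub(v1, v2)
    rmag = _dot(r, r) ** 0.5
    gm = G * (m1 + m2)
    mu_red = m1 * m2 / (m1 + m2)                    # reduced mass
    E_orb = 0.5 * mu_red * _dot(v, v) - G * m1 * m2 / rmag
    a = -G * m1 * m2 / (2.0 * E_orb)
    # eccentricity vector: (v^2/GM - 1/|r|) r - (r.v/GM) v
    coef = _dot(v, v) / gm - 1.0 / rmag
    rv = _dot(r, v) / gm
    e_vec = tuple(coef * ri - rv * vi for ri, vi in zip(r, v))
    return a, _dot(e_vec, e_vec) ** 0.5

def categorize(a, e, core_radii_sum, r_hill):
    """Merger if periapsis < 2*(sum of core radii); bound pair if the orbit is
    bound with apoapsis < mutual Hill radius; otherwise a scattering."""
    peri = a * (1.0 - e)          # positive for both bound and hyperbolic orbits
    if peri < 2.0 * core_radii_sum:
        return "merger"
    if a > 0 and a * (1.0 + e) < r_hill:
        return "bound pair"
    return "scattering"
```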
Figure~\ref{Fig:Orbit_Examples} shows the orbital evolution of two collisions after the initial contact, one resulting in a scattering and the other a merger.
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f4a.eps}
\includegraphics[width=8.0cm]{./f4b.eps} \\
\includegraphics[width=8.0cm]{./f4c.eps}
\includegraphics[width=8.0cm]{./f4d.eps}
\end{tabular}
\end{center}
\caption{Relative planet-planet orbits (top) and final snapshots (bottom) of collisions resulting in a scattering (left) and a merger (right).
The semi-major axes (solid blue line), and periastron and apoastron (dotted blue lines) are calculated using the relative separation (dashed black line) and velocity vectors of the two planets.
The calculation is categorized as a merger if the periapsis is less than twice the sum of the core radii (dotted red line), as a potential planet-planet binary if the apoapsis does not exceed the mutual Hill radius between the two planets, and as a scattering if the planets leave their mutual Hill sphere. The snapshots show the logarithm of column density plots in the orbital plane, centered on the host star (left) and the planet-planet merger (right). The axes are scaled to $R_\odot$, and the column density has units of $M_\odot R_\odot^{-2}$.
\label{Fig:Orbit_Examples}
\end{figure*}
\section{Collisional Outcomes}
\label{Sec:Outcomes}
Tables~\ref{TBL:Results_Kepler36}-\ref{TBL:Results_q3} summarize the results of our collision calculations, showing the changes in mass and energy lost in the collision as a function of the N-body collision parameters.
We find that the distances of closest approach and relative collision energies are, in general, larger than the values from the N-body calculations, due to resolving the colliding atmospheres.
The calculations have adequate energy conservation, with the maximum fractional change in total energy $\Delta E_\mathrm{tot}/E_\mathrm{tot,i}<2\times10^{-5}$.
Figure~\ref{Fig:Collision_Results} shows the collision outcomes as a function of the distance of closest approach and collision energy, as calculated in {\it Mercury 6.2}, and in general, scatterings occur more often at larger distances of closest approach and collision energies, and planet-planet captures (bound planet-pairs or mergers) occur at smaller distances of closest approach and energies.
The same initial orbits are used for the two sets of calculations at a given mass ratio but different core mass fractions, and the results are very similar, with the only categorically different outcome occurring in the $q=1/3$ calculations; specifically, a merger in the $m_\mathrm{c}/m = 0.85$ calculations results instead in a scattering in the $m_\mathrm{c}/m = 0.95$ calculations due to less energy being dissipated in the latter collision.
In total we find 62 scatterings, 39 mergers, and 1 potential bound-planet-pair.
\begin{deluxetable*}{ccc|cccccccc}
\tabletypesize{\footnotesize}
\tablewidth{14.5cm}
\tablecolumns{9}
\tablecaption{Results of Kepler-36 Progenitor SPH Calculations\label{TBL:Results_Kepler36}}
\tablehead{
\colhead{} & \multicolumn{2}{c}{N-body} & \multicolumn{2}{c}{SPH} & \\
\cline{2-3} \cline{4-5}
\colhead{Run} & \colhead{$d_\mathrm{min}$} & \colhead{$E_\mathrm{c}$} & \colhead{$d_\mathrm{min}$} & \colhead{$E_\mathrm{c}$} & \colhead{$\Delta m_\mathrm{1}$} & \colhead{$\Delta m_\mathrm{2}$} & \colhead{$\Delta E_\mathrm{orb}$} & \colhead{$\epsilon_\mathrm{acc}$} & \colhead{outcome}}
\startdata
$ 1 $ & $ 0.576 $ & $ -0.019 $ & $ 0.641 $ & $ 0.019 $ & $ -0.067 $ & $ 0.009 $ & $ 0.0404 $ & $ 2.55\times10^{-6} $ & $ bound $ & \\
$ 2 $ & $ 0.628 $ & $ 0.023 $ & $ 0.606 $ & $ 0.023 $ & $ -1.310 $ & $ -0.082 $ & $ 0.0491 $ & $ 6.37\times10^{-6} $ & $ merged $ & \\
$ 3 $ & $ 0.615 $ & $ -0.054 $ & $ 0.669 $ & $ -0.025 $ & $ -0.604 $ & $ -0.030 $ & $ 0.0275 $ & $ 5.09\times10^{-7} $ & $ merged $ & \\
$ 4 $ & $ 0.633 $ & $ -0.023 $ & $ 0.668 $ & $ 0.011 $ & $ -0.067 $ & $ 0.003 $ & $ 0.0356 $ & $ 5.37\times10^{-5} $ & $ merged $ & \\
$ 5 $ & $ 0.557 $ & $ -0.071 $ & $ 0.647 $ & $ -0.025 $ & $ -1.104 $ & $ -0.293 $ & $ 0.0439 $ & $ 1.27\times10^{-6} $ & $ merged $ & \\
$ 6 $ & $ 0.522 $ & $ 0.014 $ & $ 0.548 $ & $ 0.044 $ & $ -0.851 $ & $ -0.704 $ & $ 0.0635 $ & $ 3.06\times10^{-6} $ & $ merged $ & \\
$ 7 $ & $ 0.557 $ & $ -0.010 $ & $ 0.553 $ & $ 0.011 $ & $ -1.288 $ & $ -0.629 $ & $ 0.0433 $ & $ 7.64\times10^{-7} $ & $ merged $ & \\
$ 8 $ & $ 0.562 $ & $ -0.010 $ & $ 0.610 $ & $ 0.005 $ & $ -1.012 $ & $ -0.503 $ & $ 0.0455 $ & $ 5.09\times10^{-7} $ & $ merged $ & \\
$ 9 $ & $ 0.529 $ & $ -0.041 $ & $ 0.613 $ & $ -0.012 $ & $ -1.029 $ & $ -0.077 $ & $ 0.0267 $ & $ 2.55\times10^{-7} $ & $ merged $ & \\
$ 10 $ & $ 0.503 $ & $ 0.004 $ & $ 0.542 $ & $ 0.035 $ & $ -0.968 $ & $ -0.047 $ & $ 0.0592 $ & $ 1.02\times10^{-6} $ & $ merged $ & \\
$ 11 $ & $ 0.746 $ & $ -0.043 $ & $ 0.768 $ & $ -0.017 $ & $ -0.779 $ & $ -0.216 $ & $ 0.0157 $ & $ 4.58\times10^{-6} $ & $ merged $ & \\
$ 12 $ & $ 0.566 $ & $ -0.025 $ & $ 0.586 $ & $ -0.023 $ & $ -1.023 $ & $ -0.041 $ & $ 0.0193 $ & $ 2.55\times10^{-7} $ & $ merged $ & \\
$ 13 $ & $ 0.543 $ & $ 0.001 $ & $ 0.539 $ & $ 0.017 $ & $ -1.158 $ & $ -0.742 $ & $ 0.0512 $ & $ 1.27\times10^{-6} $ & $ merged $ & \\
$ 14 $ & $ 0.800 $ & $ -0.036 $ & $ 0.791 $ & $ -0.017 $ & $ -0.008 $ & $ -0.006 $ & $ 0.0086 $ & $ 1.43\times10^{-5} $ & $ merged $ & \\
$ 15 $ & $ 0.767 $ & $ -0.039 $ & $ 0.772 $ & $ -0.022 $ & $ -0.021 $ & $ -0.043 $ & $ 0.0103 $ & $ 9.17\times10^{-6} $ & $ merged $ & \\
$ 16 $ & $ 0.563 $ & $ 0.035 $ & $ 0.596 $ & $ 0.052 $ & $ -0.104 $ & $ 0.002 $ & $ 0.0589 $ & $ 2.09\times10^{-5} $ & $ unstable $ & \\
$ 17 $ & $ 0.623 $ & $ 0.012 $ & $ 0.671 $ & $ 0.041 $ & $ -0.049 $ & $ -0.001 $ & $ 0.0379 $ & $ 8.91\times10^{-6} $ & $ unstable $ & \\
$ 18 $ & $ 0.664 $ & $ 0.019 $ & $ 0.682 $ & $ 0.040 $ & $ -0.013 $ & $ -0.004 $ & $ 0.0256 $ & $ 2.52\times10^{-5} $ & $ unstable $ & \\
$ 19 $ & $ 0.587 $ & $ 0.013 $ & $ 0.691 $ & $ 0.029 $ & $ -0.017 $ & $ -0.004 $ & $ 0.0257 $ & $ 2.04\times10^{-6} $ & $ unstable $ & \\
$ 20 $ & $ 0.547 $ & $ 0.013 $ & $ 0.664 $ & $ 0.045 $ & $ -0.112 $ & $ -0.007 $ & $ 0.0537 $ & $ 9.42\times10^{-6} $ & $ unstable $ & \\
$ 21 $ & $ 0.652 $ & $ 0.039 $ & $ 0.737 $ & $ 0.044 $ & $ -0.001 $ & $ -0.002 $ & $ 0.0129 $ & $ 3.06\times10^{-6} $ & $ unstable $ & \\
$ 22 $ & $ 0.666 $ & $ 0.033 $ & $ 0.771 $ & $ 0.047 $ & $ -0.001 $ & $ -0.002 $ & $ 0.0111 $ & $ 4.58\times10^{-6} $ & $ unstable $ & \\
$ 23 $ & $ 0.807 $ & $ 0.021 $ & $ 0.771 $ & $ 0.047 $ & $ -0.001 $ & $ -0.002 $ & $ 0.0111 $ & $ 7.89\times10^{-6} $ & $ unstable $ & \\
$ 24 $ & $ 0.570 $ & $ 0.050 $ & $ 0.667 $ & $ 0.055 $ & $ -0.095 $ & $ 0.002 $ & $ 0.0565 $ & $ 9.42\times10^{-6} $ & $ unstable $ & \\
$ 25 $ & $ 0.589 $ & $ 0.030 $ & $ 0.736 $ & $ 0.054 $ & $ -0.008 $ & $ -0.004 $ & $ 0.0238 $ & $ 2.47\times10^{-5} $ & $ unstable $ & \\
$ 26 $ & $ 0.671 $ & $ 0.006 $ & $ 0.737 $ & $ 0.015 $ & $ -0.005 $ & $ -0.003 $ & $ 0.0182 $ & $ 2.55\times10^{-6} $ & $ unstable $ & \\
$ 27 $ & $ 0.748 $ & $ -0.005 $ & $ 0.751 $ & $ 0.015 $ & $ -0.001 $ & $ -0.003 $ & $ 0.0129 $ & $ 2.04\times10^{-6} $ & $ unstable $ & \\
$ 28 $ & $ 0.656 $ & $ 0.011 $ & $ 0.564 $ & $ 0.057 $ & $ -0.187 $ & $ -0.013 $ & $ 0.0595 $ & $ 1.27\times10^{-6} $ & $ unstable $ & \\
$ 29 $ & $ 0.647 $ & $ -0.005 $ & $ 0.693 $ & $ 0.029 $ & $ -0.043 $ & $ 0.000 $ & $ 0.0337 $ & $ 1.53\times10^{-6} $ & $ unstable $ & \\
$ 30 $ & $ 0.765 $ & $ 0.023 $ & $ 0.754 $ & $ 0.048 $ & $ -0.001 $ & $ -0.002 $ & $ 0.0130 $ & $ 3.31\times10^{-6} $ & $ unstable $ & \\
$ 31 $ & $ 0.763 $ & $ 0.028 $ & $ 0.776 $ & $ 0.045 $ & $ -0.001 $ & $ -0.002 $ & $ 0.0101 $ & $ 1.53\times10^{-6} $ & $ unstable $ & \\
$ 32 $ & $ 0.646 $ & $ 0.020 $ & $ 0.828 $ & $ 0.053 $ & $ -0.001 $ & $ -0.001 $ & $ 0.0074 $ & $ 2.80\times10^{-6} $ & $ unstable $ & \\
$ 33 $ & $ 0.704 $ & $ 0.053 $ & $ 0.740 $ & $ 0.059 $ & $ -0.003 $ & $ -0.003 $ & $ 0.0180 $ & $ 1.53\times10^{-6} $ & $ unstable $ & \\
\enddata
\tablenotetext{}{$Run$ designates the run number, $d_\mathrm{min}$ and $E_\mathrm{c}$ are the distance of closest approach and energy of each collision from the N-body and SPH calculations in units of the combined physical radii, $R_1+R_2$, and binding energy, $E_\mathrm{b}$, described in \eqref{EQ:binding_energy}, $\Delta m_1$ and $\Delta m_2$ are the changes in mass for the inner and outer planet in Earth masses, $\Delta E_\mathrm{orb}$ is the change in relative planet-planet orbital energy during the collision in units of the binding energy, $\epsilon_\mathrm{acc}$ is the fractional change in the total energy, and outcome designates the result of the collision.}
\end{deluxetable*}
\begin{deluxetable*}{ccc|cccccccc}
\tabletypesize{\footnotesize}
\tablewidth{14.5cm}
\tablecolumns{9}
\tablecaption{Results of $q=1$ SPH Calculations\label{TBL:Results_q1}}
\tablehead{
\colhead{} & \multicolumn{2}{c}{N-body} & \multicolumn{2}{c}{SPH} & \\
\cline{2-3} \cline{4-5}
\colhead{Run} & \colhead{$d_\mathrm{min}$} & \colhead{$E_\mathrm{c}$} & \colhead{$d_\mathrm{min}$} & \colhead{$E_\mathrm{c}$} & \colhead{$\Delta m_\mathrm{1}$} & \colhead{$\Delta m_\mathrm{2}$} & \colhead{$\Delta E_\mathrm{orb}$} & \colhead{$\epsilon_\mathrm{acc}$} & \colhead{outcome}}
\startdata
$ m_\mathrm{c}/m = 0.85$ & \\
$ 1 $ & $ 0.394 $ & $ 0.086 $ & $ 0.408 $ & $ 0.087 $ & $ -0.142 $ & $ -0.167 $ & $ 0.1333 $ & $ 3.36\times10^{-7} $ & $ merged $ & \\
$ 2 $ & $ 0.376 $ & $ -0.031 $ & $ 0.382 $ & $ -0.030 $ & $ -1.213 $ & $ -1.100 $ & $ 0.0744 $ & $ 6.71\times10^{-7} $ & $ merged $ & \\
$ 3 $ & $ 0.692 $ & $ -0.047 $ & $ 0.690 $ & $ -0.007 $ & $ -0.013 $ & $ -0.011 $ & $ 0.0102 $ & $ 6.72\times10^{-7} $ & $ merged $ & \\
$ 4 $ & $ 0.452 $ & $ -0.010 $ & $ 0.451 $ & $ 0.032 $ & $ -0.020 $ & $ -0.021 $ & $ 0.0502 $ & $ 2.35\times10^{-6} $ & $ unstable $ & \\
$ 5 $ & $ 0.737 $ & $ 0.050 $ & $ 0.680 $ & $ 0.067 $ & $ -0.001 $ & $ -0.001 $ & $ 0.0047 $ & $ 6.71\times10^{-7} $ & $ unstable $ & \\
$ 6 $ & $ 0.649 $ & $ 0.024 $ & $ 0.608 $ & $ 0.042 $ & $ -0.001 $ & $ -0.001 $ & $ 0.0126 $ & $ 3.36\times10^{-7} $ & $ unstable $ & \\
$ 7 $ & $ 0.746 $ & $ -0.023 $ & $ 0.755 $ & $ -0.011 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0040 $ & $ 1.34\times10^{-6} $ & $ unstable $ & \\
$ 8 $ & $ 0.451 $ & $ 0.078 $ & $ 0.470 $ & $ 0.082 $ & $ -0.021 $ & $ -0.024 $ & $ 0.0539 $ & $ 3.36\times10^{-7} $ & $ unstable $ & \\
$ 9 $ & $ 0.396 $ & $ 0.065 $ & $ 0.384 $ & $ 0.093 $ & $ -0.063 $ & $ -0.063 $ & $ 0.1045 $ & $ 2.01\times10^{-6} $ & $ unstable $ & \\
$ 10 $ & $ 0.534 $ & $ 0.106 $ & $ 0.561 $ & $ 0.136 $ & $ -0.003 $ & $ -0.003 $ & $ 0.0185 $ & $ 2.35\times10^{-6} $ & $ unstable $ & \\
$ 11 $ & $ 0.654 $ & $ 0.105 $ & $ 0.742 $ & $ 0.127 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0034 $ & $ 1.68\times10^{-6} $ & $ unstable $ & \\
$ 12 $ & $ 0.558 $ & $ 0.067 $ & $ 0.554 $ & $ 0.117 $ & $ -0.003 $ & $ -0.003 $ & $ 0.0187 $ & $ 3.36\times10^{-7} $ & $ unstable $ & \\
$ 13 $ & $ 0.804 $ & $ 0.042 $ & $ 0.795 $ & $ 0.081 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0027 $ & $ 3.36\times10^{-6} $ & $ unstable $ & \\
$ 14 $ & $ 0.481 $ & $ 0.027 $ & $ 0.538 $ & $ 0.103 $ & $ -0.003 $ & $ -0.004 $ & $ 0.0210 $ & $ 1.01\times10^{-6} $ & $ unstable $ & \\
$ 15 $ & $ 0.502 $ & $ 0.063 $ & $ 0.531 $ & $ 0.082 $ & $ -0.004 $ & $ -0.003 $ & $ 0.0234 $ & $ <10^{-7} $ & $ unstable $ & \\
$ 16 $ & $ 0.675 $ & $ -0.039 $ & $ 0.676 $ & $ -0.010 $ & $ -0.001 $ & $ -0.000 $ & $ 0.0055 $ & $ 1.01\times10^{-6} $ & $ unstable $ & \\
$ 17 $ & $ 0.586 $ & $ 0.107 $ & $ 0.571 $ & $ 0.115 $ & $ -0.029 $ & $ -0.036 $ & $ 0.0017 $ & $ 2.69\times10^{-6} $ & $ unstable $ & \\
$ 18 $ & $ 0.523 $ & $ 0.050 $ & $ 0.564 $ & $ 0.064 $ & $ -0.003 $ & $ -0.002 $ & $ 0.0122 $ & $ 1.01\times10^{-6} $ & $ unstable $ & \\
$ 19 $ & $ 0.493 $ & $ 0.050 $ & $ 0.474 $ & $ 0.099 $ & $ -0.017 $ & $ -0.016 $ & $ 0.0485 $ & $ 1.68\times10^{-6} $ & $ unstable $ & \\
\hline \\
$ m_\mathrm{c}/m = 0.95$ & \\
$ 1 $ & $ 0.394 $ & $ 0.086 $ & $ 0.578 $ & $ 0.060 $ & $ -0.046 $ & $ -0.036 $ & $ 0.0876 $ & $ 1.11\times10^{-6} $ & $ merged $ & \\
$ 2 $ & $ 0.376 $ & $ -0.031 $ & $ 0.540 $ & $ -0.021 $ & $ -1.112 $ & $ -1.152 $ & $ 0.0417 $ & $ 7.39\times10^{-7} $ & $ merged $ & \\
$ 3 $ & $ 0.452 $ & $ -0.010 $ & $ 0.640 $ & $ 0.022 $ & $ -0.005 $ & $ -0.005 $ & $ 0.0375 $ & $ 7.39\times10^{-7} $ & $ unstable $ & \\
$ 4 $ & $ 0.737 $ & $ 0.050 $ & $ 0.999 $ & $ 0.046 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0009 $ & $ 5.91\times10^{-6} $ & $ unstable $ & \\
$ 5 $ & $ 0.649 $ & $ 0.024 $ & $ 0.888 $ & $ 0.029 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0042 $ & $ 4.43\times10^{-6} $ & $ unstable $ & \\
$ 6 $ & $ 0.746 $ & $ -0.023 $ & $ 1.107 $ & $ -0.007 $ & $ -0.000 $ & $ 0.000 $ & $ 0.0008 $ & $ 4.80\times10^{-6} $ & $ unstable $ & \\
$ 7 $ & $ 0.451 $ & $ 0.078 $ & $ 0.690 $ & $ 0.056 $ & $ -0.005 $ & $ -0.006 $ & $ 0.0390 $ & $ 3.69\times10^{-7} $ & $ unstable $ & \\
$ 8 $ & $ 0.396 $ & $ 0.065 $ & $ 0.551 $ & $ 0.063 $ & $ -0.032 $ & $ -0.031 $ & $ 0.0721 $ & $ 1.11\times10^{-6} $ & $ unstable $ & \\
$ 9 $ & $ 0.534 $ & $ 0.106 $ & $ 0.816 $ & $ 0.093 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0079 $ & $ 4.06\times10^{-6} $ & $ unstable $ & \\
$ 10 $ & $ 0.654 $ & $ 0.105 $ & $ 1.090 $ & $ 0.087 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0007 $ & $ 5.17\times10^{-6} $ & $ unstable $ & \\
$ 11 $ & $ 0.558 $ & $ 0.067 $ & $ 0.803 $ & $ 0.080 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0082 $ & $ 2.22\times10^{-6} $ & $ unstable $ & \\
$ 12 $ & $ 0.804 $ & $ 0.042 $ & $ 1.168 $ & $ 0.055 $ & $ -0.000 $ & $ 0.000 $ & $ 0.0005 $ & $ 6.65\times10^{-6} $ & $ unstable $ & \\
$ 13 $ & $ 0.481 $ & $ 0.027 $ & $ 0.779 $ & $ 0.070 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0094 $ & $ 7.39\times10^{-7} $ & $ unstable $ & \\
$ 14 $ & $ 0.502 $ & $ 0.063 $ & $ 0.767 $ & $ 0.056 $ & $ -0.001 $ & $ -0.001 $ & $ 0.0112 $ & $ 3.69\times10^{-6} $ & $ unstable $ & \\
$ 15 $ & $ 0.675 $ & $ -0.039 $ & $ 0.992 $ & $ -0.007 $ & $ -0.000 $ & $ 0.000 $ & $ 0.0006 $ & $ 4.06\times10^{-6} $ & $ unstable $ & \\
$ 16 $ & $ 0.586 $ & $ 0.107 $ & $ 1.090 $ & $ 0.087 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0007 $ & $ 5.17\times10^{-6} $ & $ unstable $ & \\
$ 17 $ & $ 0.523 $ & $ 0.050 $ & $ 0.823 $ & $ 0.043 $ & $ -0.000 $ & $ -0.000 $ & $ 0.0049 $ & $ 2.59\times10^{-6} $ & $ unstable $ & \\
$ 18 $ & $ 0.493 $ & $ 0.050 $ & $ 0.693 $ & $ 0.067 $ & $ -0.003 $ & $ -0.003 $ & $ 0.0325 $ & $ 3.32\times10^{-6} $ & $ unstable $ & \\
$ 19 $ & $ 0.692 $ & $ -0.047 $ & $ 1.013 $ & $ -0.004 $ & $ -0.000 $ & $ 0.000 $ & $ 0.0036 $ & $ 3.69\times10^{-6} $ & $ unstable $ & \\
\enddata
\tablenotetext{}{$Run$ designates the run number, $d_\mathrm{min}$ and $E_\mathrm{c}$ are the distance of closest approach and energy of each collision from the N-body and SPH calculations in units of the combined physical radii, $R_1+R_2$, and binding energy, $E_\mathrm{b}$, described in \eqref{EQ:binding_energy}, $\Delta m_1$ and $\Delta m_2$ are the changes in mass for the inner and outer planet in Earth masses, $\Delta E_\mathrm{orb}$ is the change in relative planet-planet orbital energy during the collision in units of the binding energy, $\epsilon_\mathrm{acc}$ is the fractional change in the total energy, and outcome designates the result of the collision.}
\end{deluxetable*}
\begin{deluxetable*}{ccc|cccccccc}
\tabletypesize{\footnotesize}
\tablewidth{14.5cm}
\tablecolumns{9}
\tablecaption{Results of $q=1/3$ SPH Calculations\label{TBL:Results_q3}}
\tablehead{
\colhead{} & \multicolumn{2}{c}{N-body} & \multicolumn{2}{c}{SPH} & \\
\cline{2-3} \cline{4-5}
\colhead{Run} & \colhead{$d_\mathrm{min}$} & \colhead{$E_\mathrm{c}$} & \colhead{$d_\mathrm{min}$} & \colhead{$E_\mathrm{c}$} & \colhead{$\Delta m_\mathrm{1}$} & \colhead{$\Delta m_\mathrm{2}$} & \colhead{$\Delta E_\mathrm{orb}$} & \colhead{$\epsilon_\mathrm{acc}$} & \colhead{outcome}}
\startdata
$ m_\mathrm{c}/m = 0.85$ & \\
$ 1 $ & $ 0.512 $ & $ -0.018 $ & $ 0.530 $ & $ -0.018 $ & $ -1.135 $ & $ -0.016 $ & $ 0.0185 $ & $ 6.02\times10^{-6} $ & $ merged $ & \\
$ 2 $ & $ 0.445 $ & $ 0.003 $ & $ 0.530 $ & $ 0.020 $ & $ -1.066 $ & $ -0.264 $ & $ 0.0000 $ & $ 3.19\times10^{-6} $ & $ merged $ & \\
$ 3 $ & $ 0.449 $ & $ -0.043 $ & $ 0.485 $ & $ -0.014 $ & $ -1.177 $ & $ -0.172 $ & $ 0.0000 $ & $ 4.07\times10^{-6} $ & $ merged $ & \\
$ 4 $ & $ 0.475 $ & $ -0.004 $ & $ 0.532 $ & $ -0.002 $ & $ -0.814 $ & $ -0.102 $ & $ 0.0102 $ & $ 4.78\times10^{-6} $ & $ merged $ & \\
$ 5 $ & $ 0.498 $ & $ -0.022 $ & $ 0.530 $ & $ -0.018 $ & $ -0.644 $ & $ -0.038 $ & $ 0.0092 $ & $ 6.37\times10^{-6} $ & $ merged $ & \\
$ 6 $ & $ 0.432 $ & $ -0.022 $ & $ 0.519 $ & $ -0.015 $ & $ -1.238 $ & $ -0.139 $ & $ 0.0235 $ & $ 6.72\times10^{-6} $ & $ merged $ & \\
$ 7 $ & $ 0.679 $ & $ -0.023 $ & $ 0.683 $ & $ -0.021 $ & $ -1.171 $ & $ 0.088 $ & $ 0.0052 $ & $ 5.13\times10^{-6} $ & $ merged $ & \\
$ 8 $ & $ 0.469 $ & $ -0.038 $ & $ 0.489 $ & $ -0.018 $ & $ -1.104 $ & $ -0.151 $ & $ 0.0000 $ & $ 4.78\times10^{-6} $ & $ merged $ & \\
$ 9 $ & $ 0.538 $ & $ -0.019 $ & $ 0.566 $ & $ -0.019 $ & $ -1.061 $ & $ -0.088 $ & $ 0.0160 $ & $ 5.84\times10^{-6} $ & $ merged $ & \\
$ 10 $ & $ 0.382 $ & $ 0.023 $ & $ 0.440 $ & $ 0.041 $ & $ -1.324 $ & $ -0.286 $ & $ 0.0105 $ & $ 1.24\times10^{-6} $ & $ merged $ & \\
$ 11 $ & $ 0.800 $ & $ -0.005 $ & $ 0.887 $ & $ -0.005 $ & $ -0.014 $ & $ -0.002 $ & $ 0.0002 $ & $ 3.01\times10^{-6} $ & $ unstable $ & \\
$ 12 $ & $ 0.822 $ & $ -0.012 $ & $ 0.850 $ & $ 0.002 $ & $ -0.019 $ & $ 0.015 $ & $ 0.0017 $ & $ 6.37\times10^{-6} $ & $ unstable $ & \\
$ 13 $ & $ 0.737 $ & $ -0.018 $ & $ 0.737 $ & $ -0.005 $ & $ -0.032 $ & $ 0.019 $ & $ 0.0048 $ & $ 4.96\times10^{-6} $ & $ unstable $ & \\
$ 14 $ & $ 0.643 $ & $ 0.025 $ & $ 0.678 $ & $ 0.027 $ & $ -0.039 $ & $ 0.013 $ & $ 0.0088 $ & $ 4.60\times10^{-6} $ & $ unstable $ & \\
$ 15 $ & $ 0.608 $ & $ 0.039 $ & $ 0.631 $ & $ 0.039 $ & $ -0.057 $ & $ 0.014 $ & $ 0.0125 $ & $ 5.13\times10^{-6} $ & $ unstable $ & \\
$ 16 $ & $ 0.668 $ & $ 0.005 $ & $ 0.668 $ & $ 0.020 $ & $ -0.036 $ & $ 0.013 $ & $ 0.0052 $ & $ 1.42\times10^{-6} $ & $ unstable $ & \\
\hline \\
$ m_\mathrm{c}/m = 0.95$ & \\
$ 1 $ & $ 0.512 $ & $ -0.018 $ & $ 0.744 $ & $ -0.013 $ & $ -0.938 $ & $ 0.006 $ & $ 0.0000 $ & $ 1.77\times10^{-6} $ & $ merged $ & \\
$ 2 $ & $ 0.445 $ & $ 0.003 $ & $ 0.741 $ & $ 0.014 $ & $ -1.133 $ & $ -0.036 $ & $ 0.0075 $ & $ 9.03\times10^{-6} $ & $ merged $ & \\
$ 3 $ & $ 0.449 $ & $ -0.043 $ & $ 0.666 $ & $ -0.010 $ & $ -1.186 $ & $ -0.074 $ & $ 0.0000 $ & $ 1.18\times10^{-6} $ & $ merged $ & \\
$ 4 $ & $ 0.498 $ & $ -0.022 $ & $ 0.738 $ & $ -0.013 $ & $ -0.693 $ & $ -0.022 $ & $ 0.0000 $ & $ 3.93\times10^{-7} $ & $ merged $ & \\
$ 5 $ & $ 0.737 $ & $ -0.018 $ & $ 1.045 $ & $ -0.003 $ & $ -0.018 $ & $ -0.015 $ & $ 0.0018 $ & $ 1.81\times10^{-5} $ & $ merged $ & \\
$ 6 $ & $ 0.432 $ & $ -0.022 $ & $ 0.739 $ & $ -0.011 $ & $ -0.852 $ & $ -0.016 $ & $ 0.0001 $ & $ 1.77\times10^{-6} $ & $ merged $ & \\
$ 7 $ & $ 0.679 $ & $ -0.023 $ & $ 0.971 $ & $ -0.016 $ & $ -0.018 $ & $ -0.002 $ & $ 0.0017 $ & $ 1.37\times10^{-6} $ & $ merged $ & \\
$ 8 $ & $ 0.469 $ & $ -0.038 $ & $ 0.678 $ & $ -0.013 $ & $ -0.987 $ & $ -0.058 $ & $ 0.0000 $ & $ 9.03\times10^{-6} $ & $ merged $ & \\
$ 9 $ & $ 0.538 $ & $ -0.019 $ & $ 0.796 $ & $ -0.013 $ & $ -1.017 $ & $ -0.019 $ & $ 0.0102 $ & $ 4.91\times10^{-6} $ & $ merged $ & \\
$ 10 $ & $ 0.475 $ & $ -0.004 $ & $ 0.766 $ & $ -0.002 $ & $ -0.845 $ & $ -0.036 $ & $ 0.0000 $ & $ 2.94\times10^{-6} $ & $ merged $ & \\
$ 11 $ & $ 0.800 $ & $ -0.005 $ & $ 1.046 $ & $ -0.003 $ & $ -0.004 $ & $ 0.003 $ & $ 0.0007 $ & $ 2.16\times10^{-6} $ & $ unstable $ & \\
$ 12 $ & $ 0.822 $ & $ -0.012 $ & $ 1.204 $ & $ 0.002 $ & $ -0.000 $ & $ 0.000 $ & $ 0.0003 $ & $ 2.75\times10^{-6} $ & $ unstable $ & \\
$ 13 $ & $ 0.643 $ & $ 0.025 $ & $ 0.962 $ & $ 0.020 $ & $ -0.006 $ & $ 0.005 $ & $ 0.0042 $ & $ 2.16\times10^{-6} $ & $ unstable $ & \\
$ 14 $ & $ 0.608 $ & $ 0.039 $ & $ 0.893 $ & $ 0.028 $ & $ -0.012 $ & $ 0.006 $ & $ 0.0090 $ & $ 3.93\times10^{-6} $ & $ unstable $ & \\
$ 15 $ & $ 0.668 $ & $ 0.005 $ & $ 0.951 $ & $ 0.014 $ & $ -0.005 $ & $ 0.004 $ & $ 0.0018 $ & $ 1.77\times10^{-6} $ & $ unstable $ & \\
\enddata
\tablenotetext{}{$Run$ designates the run number, $d_\mathrm{min}$ and $E_\mathrm{c}$ are the distance of closest approach and energy of each collision from the N-body and SPH calculations in units of the combined physical radii, $R_1+R_2$, and binding energy, $E_\mathrm{b}$, described in \eqref{EQ:binding_energy}, $\Delta m_1$ and $\Delta m_2$ are the changes in mass for the inner and outer planet in Earth masses, $\Delta E_\mathrm{orb}$ is the change in relative planet-planet orbital energy during the collision in units of the binding energy, $\epsilon_\mathrm{acc}$ is the fractional change in the total energy, and outcome designates the result of the collision.}
\end{deluxetable*}
Tables~\ref{TBL:Observable_Kepler36}--\ref{TBL:Observable_q3} summarize the orbits and planet properties of the remnant planets, where the angular momentum deficit is used to quantify the stability of the systems, discussed in more detail in \S\ref{SSec:Scatterings}, and the densities are estimated using the same method presented in {\it Paper 1}, where we match the total mass and core mass fraction of the remnant planets to a grid of {\it MESA} models.
The initial gas mass fractions and densities are listed on the first row for each set of calculations.
We examine each set of outcomes in more detail and discuss the orbits and planet structures of the remnants.
In \S\ref{Sec:Predictions} we use these calculations to develop prescriptions for predicting the outcomes and modeling the changes in mass during these collisions.
\begin{deluxetable*}{c|ccccccc|cccccc}
\tabletypesize{\footnotesize}
\tablewidth{14.cm}
\tablecolumns{11}
\tablecaption{Post-Collision Orbits and Planet Properties of Kepler-36 Progenitor Calculations\label{TBL:Observable_Kepler36}}
\tablehead{
\colhead{} & \multicolumn{7}{c}{Orbit Properties} & \multicolumn{5}{c}{Planet Properties} & \\
\cline{2-8} \cline{9-13}
\colhead{Run} & \colhead{$a_1$} & \colhead{$e_1$} & \colhead{$a_2$} & \colhead{$e_2$} & \colhead{$P_2/P_1$} & \colhead{$C$} & \colhead{$C_\mathrm{min}$} & \colhead{$m_\mathrm{gas,1}$} & \colhead{$m_\mathrm{gas,2}$} & \colhead{$\rho_1$} & \colhead{$\rho_2$} & \colhead{$\rho_1/\rho_2$}}
\startdata
$ $ & $ $ & $ $ & $ $ & $ $ & $ $ & \multicolumn{2}{l}{Initial Properties} & $ 0.233 $ & $ 0.708 $ & $ 1.12 $ & $ 1.29 $ & $ 1.15 $ & \\
\hline
$ 1 $ & $ 0.121 $ & $ 0.028 $ & $ 0.125 $ & $ 0.042 $ & $ 1.055 $ & $ 0.003 $ & $ 0.003 $ & $ 0.024 $ & $ 0.086 $ & $ 1.875 $ & $ 1.376 $ & $ 0.734 $ & \\
$ 2 $ & $ 0.137 $ & $ 0.115 $ & $ 0.118 $ & $ 0.061 $ & $ 1.251 $ & $ 0.010 $ & $ 0.005 $ & $ 0.003 $ & $ 0.051 $ & $ 2.263 $ & $ 1.672 $ & $ 0.739 $ & \\
$ 3 $ & $ 0.120 $ & $ 0.045 $ & $ 0.127 $ & $ 0.081 $ & $ 1.083 $ & $ 0.011 $ & $ 0.010 $ & $ 0.015 $ & $ 0.065 $ & $ 2.041 $ & $ 1.571 $ & $ 0.770 $ & \\
$ 4 $ & $ 0.127 $ & $ 0.148 $ & $ 0.123 $ & $ 0.023 $ & $ 1.046 $ & $ 0.019 $ & $ 0.019 $ & $ 0.021 $ & $ 0.086 $ & $ 1.970 $ & $ 1.372 $ & $ 0.696 $ & \\
$ 5 $ & $ 0.138 $ & $ 0.109 $ & $ 0.119 $ & $ 0.097 $ & $ 1.000 $ & $ 0.018 $ & $ 0.013 $ & $ 0.008 $ & $ 0.049 $ & $ 2.216 $ & $ 1.744 $ & $ 0.787 $ & \\
$ 6 $ & $ 0.104 $ & $ 0.199 $ & $ 0.140 $ & $ 0.114 $ & $ 1.000 $ & $ 0.036 $ & $ 0.010 $ & $ 0.008 $ & $ 0.036 $ & $ 2.484 $ & $ 2.182 $ & $ 0.879 $ & \\
$ 7 $ & $ 0.141 $ & $ 0.143 $ & $ 0.116 $ & $ 0.069 $ & $ 1.000 $ & $ 0.014 $ & $ 0.006 $ & $ 0.002 $ & $ 0.033 $ & $ 2.897 $ & $ 2.270 $ & $ 0.784 $ & \\
$ 8 $ & $ 0.116 $ & $ 0.193 $ & $ 0.132 $ & $ 0.059 $ & $ 1.000 $ & $ 0.022 $ & $ 0.018 $ & $ 0.004 $ & $ 0.031 $ & $ 2.681 $ & $ 2.355 $ & $ 0.878 $ & \\
$ 9 $ & $ 0.132 $ & $ 0.115 $ & $ 0.119 $ & $ 0.022 $ & $ 1.167 $ & $ 0.007 $ & $ 0.004 $ & $ 0.008 $ & $ 0.054 $ & $ 2.290 $ & $ 1.645 $ & $ 0.718 $ & \\
$ 10 $ & $ 0.134 $ & $ 0.133 $ & $ 0.120 $ & $ 0.057 $ & $ 1.000 $ & $ 0.014 $ & $ 0.011 $ & $ 0.004 $ & $ 0.068 $ & $ 2.585 $ & $ 1.553 $ & $ 0.601 $ & \\
$ 11 $ & $ 0.118 $ & $ 0.135 $ & $ 0.127 $ & $ 0.053 $ & $ 1.122 $ & $ 0.014 $ & $ 0.012 $ & $ 0.012 $ & $ 0.050 $ & $ 2.053 $ & $ 1.639 $ & $ 0.798 $ & \\
$ 12 $ & $ 0.136 $ & $ 0.121 $ & $ 0.118 $ & $ 0.056 $ & $ 1.230 $ & $ 0.012 $ & $ 0.007 $ & $ 0.007 $ & $ 0.056 $ & $ 2.327 $ & $ 1.635 $ & $ 0.702 $ & \\
$ 13 $ & $ 0.124 $ & $ 0.134 $ & $ 0.126 $ & $ 0.088 $ & $ 1.000 $ & $ 0.017 $ & $ 0.017 $ & $ 0.002 $ & $ 0.023 $ & $ 3.274 $ & $ 2.421 $ & $ 0.739 $ & \\
$ 14 $ & $ 0.167 $ & $ 0.249 $ & $ 0.109 $ & $ 0.167 $ & $ 1.000 $ & $ 0.099 $ & $ 0.026 $ & $ 0.040 $ & $ 0.088 $ & $ 1.633 $ & $ 1.340 $ & $ 0.821 $ & \\
$ 15 $ & $ 0.099 $ & $ 0.265 $ & $ 0.149 $ & $ 0.171 $ & $ 1.000 $ & $ 0.099 $ & $ 0.032 $ & $ 0.034 $ & $ 0.079 $ & $ 1.729 $ & $ 1.464 $ & $ 0.847 $ & \\
$ 16 $ & $ 0.120 $ & $ 0.008 $ & $ 0.126 $ & $ 0.004 $ & $ 1.071 $ & $ 0.000 $ & $ -0.001 $ & $ 0.013 $ & $ 0.081 $ & $ 2.451 $ & $ 1.451 $ & $ 0.592 $ & \\
$ 17 $ & $ 0.119 $ & $ 0.057 $ & $ 0.126 $ & $ 0.006 $ & $ 1.092 $ & $ 0.003 $ & $ 0.001 $ & $ 0.026 $ & $ 0.087 $ & $ 1.843 $ & $ 1.357 $ & $ 0.736 $ & \\
$ 18 $ & $ 0.121 $ & $ 0.036 $ & $ 0.125 $ & $ 0.030 $ & $ 1.047 $ & $ 0.002 $ & $ 0.002 $ & $ 0.038 $ & $ 0.088 $ & $ 1.682 $ & $ 1.332 $ & $ 0.792 $ & \\
$ 19 $ & $ 0.128 $ & $ 0.017 $ & $ 0.121 $ & $ 0.045 $ & $ 1.093 $ & $ 0.003 $ & $ 0.002 $ & $ 0.036 $ & $ 0.088 $ & $ 1.706 $ & $ 1.331 $ & $ 0.780 $ & \\
$ 20 $ & $ 0.122 $ & $ 0.034 $ & $ 0.124 $ & $ 0.004 $ & $ 1.025 $ & $ 0.001 $ & $ 0.001 $ & $ 0.010 $ & $ 0.080 $ & $ 2.957 $ & $ 1.464 $ & $ 0.495 $ & \\
$ 21 $ & $ 0.121 $ & $ 0.040 $ & $ 0.125 $ & $ 0.042 $ & $ 1.057 $ & $ 0.004 $ & $ 0.003 $ & $ 0.042 $ & $ 0.089 $ & $ 1.554 $ & $ 1.321 $ & $ 0.850 $ & \\
$ 22 $ & $ 0.121 $ & $ 0.055 $ & $ 0.125 $ & $ 0.027 $ & $ 1.045 $ & $ 0.004 $ & $ 0.003 $ & $ 0.043 $ & $ 0.089 $ & $ 1.549 $ & $ 1.320 $ & $ 0.852 $ & \\
$ 23 $ & $ 0.132 $ & $ 0.083 $ & $ 0.119 $ & $ 0.027 $ & $ 1.157 $ & $ 0.007 $ & $ 0.003 $ & $ 0.043 $ & $ 0.089 $ & $ 1.545 $ & $ 1.320 $ & $ 0.855 $ & \\
$ 24 $ & $ 0.120 $ & $ 0.003 $ & $ 0.125 $ & $ 0.006 $ & $ 1.061 $ & $ 0.000 $ & $ -0.001 $ & $ 0.015 $ & $ 0.083 $ & $ 2.325 $ & $ 1.432 $ & $ 0.616 $ & \\
$ 25 $ & $ 0.134 $ & $ 0.081 $ & $ 0.118 $ & $ 0.043 $ & $ 1.202 $ & $ 0.008 $ & $ 0.002 $ & $ 0.040 $ & $ 0.088 $ & $ 1.634 $ & $ 1.330 $ & $ 0.814 $ & \\
$ 26 $ & $ 0.128 $ & $ 0.032 $ & $ 0.121 $ & $ 0.048 $ & $ 1.091 $ & $ 0.004 $ & $ 0.003 $ & $ 0.041 $ & $ 0.088 $ & $ 1.603 $ & $ 1.327 $ & $ 0.828 $ & \\
$ 27 $ & $ 0.121 $ & $ 0.041 $ & $ 0.125 $ & $ 0.036 $ & $ 1.049 $ & $ 0.003 $ & $ 0.003 $ & $ 0.043 $ & $ 0.089 $ & $ 1.546 $ & $ 1.324 $ & $ 0.857 $ & \\
$ 28 $ & $ 0.130 $ & $ 0.039 $ & $ 0.120 $ & $ 0.024 $ & $ 1.127 $ & $ 0.002 $ & $ -0.000 $ & $ 0.008 $ & $ 0.079 $ & $ 3.680 $ & $ 1.472 $ & $ 0.400 $ & \\
$ 29 $ & $ 0.122 $ & $ 0.027 $ & $ 0.124 $ & $ 0.030 $ & $ 1.034 $ & $ 0.002 $ & $ 0.002 $ & $ 0.029 $ & $ 0.087 $ & $ 1.795 $ & $ 1.353 $ & $ 0.753 $ & \\
$ 30 $ & $ 0.135 $ & $ 0.086 $ & $ 0.117 $ & $ 0.058 $ & $ 1.236 $ & $ 0.011 $ & $ 0.003 $ & $ 0.043 $ & $ 0.089 $ & $ 1.547 $ & $ 1.320 $ & $ 0.853 $ & \\
$ 31 $ & $ 0.130 $ & $ 0.076 $ & $ 0.120 $ & $ 0.030 $ & $ 1.129 $ & $ 0.006 $ & $ 0.004 $ & $ 0.043 $ & $ 0.089 $ & $ 1.547 $ & $ 1.319 $ & $ 0.852 $ & \\
$ 32 $ & $ 0.120 $ & $ 0.044 $ & $ 0.126 $ & $ 0.048 $ & $ 1.084 $ & $ 0.005 $ & $ 0.004 $ & $ 0.043 $ & $ 0.089 $ & $ 1.544 $ & $ 1.315 $ & $ 0.852 $ & \\
$ 33 $ & $ 0.135 $ & $ 0.092 $ & $ 0.118 $ & $ 0.048 $ & $ 1.232 $ & $ 0.011 $ & $ 0.003 $ & $ 0.042 $ & $ 0.089 $ & $ 1.580 $ & $ 1.324 $ & $ 0.838 $ & \\
\enddata
\tablenotetext{}{$Run$ designates the run number, $a$, $e$, $m_\mathrm{gas}$, and $\rho$ are the semi-major axis (in AU), eccentricity, gas mass (in Earth masses), and density (in CGS) of the inner and outer planets, $P_2/P_1$ is the period ratio, $C$ and $C_\mathrm{min}$ are the angular momentum deficit and minimum possible angular momentum deficit, normalized as described in \S\ref{SSec:Scatterings}, and $\rho_1/\rho_2$ is the density ratio.
The initial gas mass fractions and densities are listed on the first row of each set of calculations.}
\end{deluxetable*}
\begin{deluxetable*}{c|ccccccc|cccccc}
\tabletypesize{\footnotesize}
\tablewidth{14.cm}
\tablecolumns{11}
\tablecaption{Post-Collision Orbits and Planet Properties of $q=1$ Calculations\label{TBL:Observable_q1}}
\tablehead{
\colhead{} & \multicolumn{7}{c}{Orbit Properties} & \multicolumn{5}{c}{Planet Properties} & \\
\cline{2-8} \cline{9-13}
\colhead{Run} & \colhead{$a_1$} & \colhead{$e_1$} & \colhead{$a_2$} & \colhead{$e_2$} & \colhead{$P_2/P_1$} & \colhead{$C$} & \colhead{$C_\mathrm{min}$} & \colhead{$m_\mathrm{gas,1}$} & \colhead{$m_\mathrm{gas,2}$} & \colhead{$\rho_1$} & \colhead{$\rho_2$} & \colhead{$\rho_1/\rho_2$}}
\startdata
$ m_\mathrm{c}/m = 0.85 $ & $ $ & $ $ & $ $ & $ $ & $ $ & \multicolumn{2}{l}{Initial Properties} & $ 0.600 $ & $ 0.600 $ & $ 0.509 $ & $ 0.509 $ & $ 1.00 $ & \\
\hline
$ 1 $ & $ 0.102 $ & $ 0.060 $ & $ 0.118 $ & $ 0.148 $ & $ 1.000 $ & $ 0.016 $ & $ 0.011 $ & $ 0.079 $ & $ 0.067 $ & $ 0.829 $ & $ 0.856 $ & $ 1.033 $ & \\
$ 2 $ & $ 0.117 $ & $ 0.145 $ & $ 0.102 $ & $ 0.048 $ & $ 1.000 $ & $ 0.006 $ & $ 0.005 $ & $ 0.006 $ & $ 0.009 $ & $ 1.705 $ & $ 1.962 $ & $ 1.150 $ & \\
$ 3 $ & $ 0.127 $ & $ 0.109 $ & $ 0.096 $ & $ 0.194 $ & $ 1.000 $ & $ 0.032 $ & $ 0.013 $ & $ 0.139 $ & $ 0.139 $ & $ 0.503 $ & $ 0.504 $ & $ 1.001 $ & \\
$ 4 $ & $ 0.112 $ & $ 0.033 $ & $ 0.105 $ & $ 0.078 $ & $ 1.107 $ & $ 0.005 $ & $ 0.004 $ & $ 0.136 $ & $ 0.135 $ & $ 0.499 $ & $ 0.498 $ & $ 0.998 $ & \\
$ 5 $ & $ 0.113 $ & $ 0.042 $ & $ 0.105 $ & $ 0.062 $ & $ 1.110 $ & $ 0.004 $ & $ 0.002 $ & $ 0.144 $ & $ 0.144 $ & $ 0.508 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 6 $ & $ 0.104 $ & $ 0.095 $ & $ 0.114 $ & $ 0.022 $ & $ 1.141 $ & $ 0.006 $ & $ 0.004 $ & $ 0.144 $ & $ 0.143 $ & $ 0.508 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 7 $ & $ 0.106 $ & $ 0.066 $ & $ 0.112 $ & $ 0.082 $ & $ 1.078 $ & $ 0.007 $ & $ 0.007 $ & $ 0.144 $ & $ 0.144 $ & $ 0.509 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 8 $ & $ 0.113 $ & $ 0.107 $ & $ 0.105 $ & $ 0.053 $ & $ 1.117 $ & $ 0.009 $ & $ 0.008 $ & $ 0.135 $ & $ 0.134 $ & $ 0.498 $ & $ 0.496 $ & $ 0.996 $ & \\
$ 9 $ & $ 0.111 $ & $ 0.041 $ & $ 0.106 $ & $ 0.042 $ & $ 1.075 $ & $ 0.002 $ & $ 0.002 $ & $ 0.117 $ & $ 0.116 $ & $ 0.474 $ & $ 0.473 $ & $ 0.999 $ & \\
$ 10 $ & $ 0.105 $ & $ 0.067 $ & $ 0.112 $ & $ 0.092 $ & $ 1.103 $ & $ 0.009 $ & $ 0.007 $ & $ 0.143 $ & $ 0.143 $ & $ 0.508 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 11 $ & $ 0.106 $ & $ 0.096 $ & $ 0.112 $ & $ 0.054 $ & $ 1.083 $ & $ 0.008 $ & $ 0.007 $ & $ 0.144 $ & $ 0.144 $ & $ 0.509 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 12 $ & $ 0.101 $ & $ 0.127 $ & $ 0.118 $ & $ 0.076 $ & $ 1.271 $ & $ 0.014 $ & $ 0.008 $ & $ 0.143 $ & $ 0.143 $ & $ 0.508 $ & $ 0.508 $ & $ 0.999 $ & \\
$ 13 $ & $ 0.105 $ & $ 0.040 $ & $ 0.113 $ & $ 0.091 $ & $ 1.122 $ & $ 0.007 $ & $ 0.005 $ & $ 0.144 $ & $ 0.144 $ & $ 0.509 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 14 $ & $ 0.117 $ & $ 0.103 $ & $ 0.101 $ & $ 0.059 $ & $ 1.251 $ & $ 0.009 $ & $ 0.004 $ & $ 0.143 $ & $ 0.142 $ & $ 0.507 $ & $ 0.507 $ & $ 0.999 $ & \\
$ 15 $ & $ 0.102 $ & $ 0.017 $ & $ 0.117 $ & $ 0.144 $ & $ 1.232 $ & $ 0.014 $ & $ 0.010 $ & $ 0.143 $ & $ 0.142 $ & $ 0.507 $ & $ 0.507 $ & $ 1.000 $ & \\
$ 16 $ & $ 0.111 $ & $ 0.092 $ & $ 0.106 $ & $ 0.042 $ & $ 1.065 $ & $ 0.007 $ & $ 0.006 $ & $ 0.144 $ & $ 0.144 $ & $ 0.508 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 17 $ & $ 0.106 $ & $ 0.082 $ & $ 0.113 $ & $ 0.082 $ & $ 1.099 $ & $ 0.009 $ & $ 0.008 $ & $ 0.132 $ & $ 0.128 $ & $ 0.493 $ & $ 0.488 $ & $ 0.990 $ & \\
$ 18 $ & $ 0.107 $ & $ 0.054 $ & $ 0.111 $ & $ 0.104 $ & $ 1.058 $ & $ 0.009 $ & $ 0.009 $ & $ 0.143 $ & $ 0.143 $ & $ 0.508 $ & $ 0.508 $ & $ 1.000 $ & \\
$ 19 $ & $ 0.110 $ & $ 0.056 $ & $ 0.107 $ & $ 0.057 $ & $ 1.049 $ & $ 0.004 $ & $ 0.004 $ & $ 0.137 $ & $ 0.137 $ & $ 0.501 $ & $ 0.501 $ & $ 1.000 $ & \\
\hline
$ m_\mathrm{c}/m = 0.95 $ & $ $ & $ $ & $ $ & $ $ & $ $ & \multicolumn{2}{l}{Initial Properties} & $ 0.200 $ & $ 0.200 $ & $ 0.977 $ & $ 0.977 $ & $ 1.00 $ & \\
\hline
$ 1 $ & $ 0.114 $ & $ 0.034 $ & $ 0.104 $ & $ 0.136 $ & $ 1.000 $ & $ 0.013 $ & $ 0.010 $ & $ 0.025 $ & $ 0.028 $ & $ 1.648 $ & $ 1.605 $ & $ 0.974 $ & \\
$ 2 $ & $ 0.106 $ & $ 0.064 $ & $ 0.112 $ & $ 0.105 $ & $ 1.000 $ & $ 0.004 $ & $ 0.004 $ & $ 0.003 $ & $ 0.002 $ & $ 2.119 $ & $ 2.225 $ & $ 1.050 $ & \\
$ 3 $ & $ 0.112 $ & $ 0.039 $ & $ 0.106 $ & $ 0.072 $ & $ 1.084 $ & $ 0.004 $ & $ 0.004 $ & $ 0.045 $ & $ 0.044 $ & $ 1.305 $ & $ 1.314 $ & $ 1.007 $ & \\
$ 4 $ & $ 0.112 $ & $ 0.041 $ & $ 0.105 $ & $ 0.062 $ & $ 1.107 $ & $ 0.004 $ & $ 0.003 $ & $ 0.047 $ & $ 0.047 $ & $ 1.198 $ & $ 1.208 $ & $ 1.008 $ & \\
$ 5 $ & $ 0.104 $ & $ 0.098 $ & $ 0.114 $ & $ 0.022 $ & $ 1.152 $ & $ 0.007 $ & $ 0.004 $ & $ 0.047 $ & $ 0.047 $ & $ 1.200 $ & $ 1.209 $ & $ 1.008 $ & \\
$ 6 $ & $ 0.106 $ & $ 0.069 $ & $ 0.111 $ & $ 0.079 $ & $ 1.069 $ & $ 0.007 $ & $ 0.007 $ & $ 0.047 $ & $ 0.047 $ & $ 1.198 $ & $ 1.208 $ & $ 1.009 $ & \\
$ 7 $ & $ 0.114 $ & $ 0.113 $ & $ 0.104 $ & $ 0.046 $ & $ 1.142 $ & $ 0.010 $ & $ 0.008 $ & $ 0.044 $ & $ 0.044 $ & $ 1.322 $ & $ 1.345 $ & $ 1.018 $ & \\
$ 8 $ & $ 0.111 $ & $ 0.040 $ & $ 0.106 $ & $ 0.042 $ & $ 1.071 $ & $ 0.002 $ & $ 0.002 $ & $ 0.032 $ & $ 0.031 $ & $ 1.568 $ & $ 1.579 $ & $ 1.007 $ & \\
$ 9 $ & $ 0.105 $ & $ 0.066 $ & $ 0.112 $ & $ 0.091 $ & $ 1.104 $ & $ 0.008 $ & $ 0.007 $ & $ 0.046 $ & $ 0.046 $ & $ 1.210 $ & $ 1.219 $ & $ 1.007 $ & \\
$ 10 $ & $ 0.106 $ & $ 0.097 $ & $ 0.112 $ & $ 0.053 $ & $ 1.086 $ & $ 0.008 $ & $ 0.007 $ & $ 0.047 $ & $ 0.047 $ & $ 1.198 $ & $ 1.208 $ & $ 1.009 $ & \\
$ 11 $ & $ 0.101 $ & $ 0.128 $ & $ 0.118 $ & $ 0.076 $ & $ 1.277 $ & $ 0.014 $ & $ 0.008 $ & $ 0.047 $ & $ 0.046 $ & $ 1.206 $ & $ 1.216 $ & $ 1.008 $ & \\
$ 12 $ & $ 0.105 $ & $ 0.041 $ & $ 0.113 $ & $ 0.092 $ & $ 1.123 $ & $ 0.007 $ & $ 0.005 $ & $ 0.047 $ & $ 0.047 $ & $ 1.198 $ & $ 1.208 $ & $ 1.009 $ & \\
$ 13 $ & $ 0.118 $ & $ 0.104 $ & $ 0.101 $ & $ 0.061 $ & $ 1.260 $ & $ 0.010 $ & $ 0.004 $ & $ 0.046 $ & $ 0.046 $ & $ 1.211 $ & $ 1.219 $ & $ 1.007 $ & \\
$ 14 $ & $ 0.101 $ & $ 0.015 $ & $ 0.117 $ & $ 0.147 $ & $ 1.245 $ & $ 0.015 $ & $ 0.010 $ & $ 0.046 $ & $ 0.046 $ & $ 1.213 $ & $ 1.222 $ & $ 1.007 $ & \\
$ 15 $ & $ 0.112 $ & $ 0.073 $ & $ 0.105 $ & $ 0.072 $ & $ 1.103 $ & $ 0.007 $ & $ 0.006 $ & $ 0.047 $ & $ 0.047 $ & $ 1.198 $ & $ 1.208 $ & $ 1.008 $ & \\
$ 16 $ & $ 0.106 $ & $ 0.097 $ & $ 0.112 $ & $ 0.053 $ & $ 1.086 $ & $ 0.008 $ & $ 0.007 $ & $ 0.047 $ & $ 0.047 $ & $ 1.198 $ & $ 1.208 $ & $ 1.009 $ & \\
$ 17 $ & $ 0.107 $ & $ 0.051 $ & $ 0.111 $ & $ 0.107 $ & $ 1.063 $ & $ 0.009 $ & $ 0.009 $ & $ 0.047 $ & $ 0.046 $ & $ 1.206 $ & $ 1.215 $ & $ 1.008 $ & \\
$ 18 $ & $ 0.111 $ & $ 0.064 $ & $ 0.106 $ & $ 0.052 $ & $ 1.074 $ & $ 0.005 $ & $ 0.004 $ & $ 0.045 $ & $ 0.045 $ & $ 1.271 $ & $ 1.281 $ & $ 1.008 $ & \\
$ 19 $ & $ 0.111 $ & $ 0.051 $ & $ 0.106 $ & $ 0.077 $ & $ 1.075 $ & $ 0.006 $ & $ 0.005 $ & $ 0.047 $ & $ 0.047 $ & $ 1.198 $ & $ 1.208 $ & $ 1.008 $ & \\
\enddata
\tablenotetext{}{$Run$ designates the run number, $a$, $e$, $m_\mathrm{gas}$, and $\rho$ are the semi-major axis (in AU), eccentricity, gas mass (in Earth masses), and density (in CGS) of the inner and outer planets, $P_2/P_1$ is the period ratio, $C$ and $C_\mathrm{min}$ are the angular momentum deficit and minimum possible angular momentum deficit, normalized as described in \S\ref{SSec:Scatterings}, and $\rho_1/\rho_2$ is the density ratio.
The initial gas mass fractions and densities are listed on the first row of each set of calculations.}
\end{deluxetable*}
\begin{deluxetable*}{c|ccccccc|cccccc}
\tabletypesize{\footnotesize}
\tablewidth{14.cm}
\tablecolumns{11}
\tablecaption{Post-Collision Orbits and Planet Properties of $q=1/3$ Calculations\label{TBL:Observable_q3}}
\tablehead{
\colhead{} & \multicolumn{7}{c}{Orbit Properties} & \multicolumn{5}{c}{Planet Properties} & \\
\cline{2-8} \cline{9-13}
\colhead{Run} & \colhead{$a_1$} & \colhead{$e_1$} & \colhead{$a_2$} & \colhead{$e_2$} & \colhead{$P_2/P_1$} & \colhead{$C$} & \colhead{$C_\mathrm{min}$} & \colhead{$m_\mathrm{gas,1}$} & \colhead{$m_\mathrm{gas,2}$} & \colhead{$\rho_1$} & \colhead{$\rho_2$} & \colhead{$\rho_1/\rho_2$}}
\startdata
$ m_\mathrm{c}/m = 0.85 $ & $ $ & $ $ & $ $ & $ $ & $ $ & \multicolumn{2}{l}{Initial Properties} & $ 0.200 $ & $ 0.600 $ & $ 0.509 $ & $ 1.204 $ & $ 2.37 $ & \\
\hline
$ 1 $ & $ 0.153 $ & $ 0.288 $ & $ 0.106 $ & $ 0.095 $ & $ 1.000 $ & $ 0.046 $ & $ 0.022 $ & $ 0.006 $ & $ 0.140 $ & $ 1.849 $ & $ 1.224 $ & $ 0.662 $ & \\
$ 2 $ & $ 0.117 $ & $ 0.211 $ & $ 0.114 $ & $ 0.112 $ & $ 1.000 $ & $ 0.039 $ & $ 0.039 $ & $ 0.005 $ & $ 0.108 $ & $ 1.893 $ & $ 1.252 $ & $ 0.661 $ & \\
$ 3 $ & $ 0.142 $ & $ 0.246 $ & $ 0.108 $ & $ 0.009 $ & $ 1.509 $ & $ 0.019 $ & $ 0.006 $ & $ 0.003 $ & $ 0.112 $ & $ 1.821 $ & $ 1.254 $ & $ 0.688 $ & \\
$ 4 $ & $ 0.109 $ & $ 0.152 $ & $ 0.115 $ & $ 0.052 $ & $ 1.083 $ & $ 0.015 $ & $ 0.014 $ & $ 0.012 $ & $ 0.127 $ & $ 1.667 $ & $ 1.228 $ & $ 0.737 $ & \\
$ 5 $ & $ 0.110 $ & $ 0.068 $ & $ 0.115 $ & $ 0.088 $ & $ 1.075 $ & $ 0.018 $ & $ 0.017 $ & $ 0.021 $ & $ 0.136 $ & $ 1.307 $ & $ 1.226 $ & $ 0.938 $ & \\
$ 6 $ & $ 0.148 $ & $ 0.234 $ & $ 0.107 $ & $ 0.141 $ & $ 1.000 $ & $ 0.055 $ & $ 0.038 $ & $ 0.002 $ & $ 0.119 $ & $ 2.101 $ & $ 1.237 $ & $ 0.589 $ & \\
$ 7 $ & $ 0.125 $ & $ 0.139 $ & $ 0.111 $ & $ 0.113 $ & $ 1.200 $ & $ 0.032 $ & $ 0.029 $ & $ 0.006 $ & $ 0.148 $ & $ 1.813 $ & $ 1.223 $ & $ 0.674 $ & \\
$ 8 $ & $ 0.133 $ & $ 0.196 $ & $ 0.109 $ & $ 0.074 $ & $ 1.000 $ & $ 0.023 $ & $ 0.016 $ & $ 0.008 $ & $ 0.124 $ & $ 2.050 $ & $ 1.226 $ & $ 0.598 $ & \\
$ 9 $ & $ 0.133 $ & $ 0.116 $ & $ 0.110 $ & $ 0.147 $ & $ 1.000 $ & $ 0.047 $ & $ 0.039 $ & $ 0.008 $ & $ 0.132 $ & $ 2.122 $ & $ 1.224 $ & $ 0.577 $ & \\
$ 10 $ & $ 0.165 $ & $ 0.339 $ & $ 0.104 $ & $ 0.063 $ & $ 2.010 $ & $ 0.040 $ & $ 0.009 $ & $ 0.001 $ & $ 0.109 $ & $ 2.492 $ & $ 1.246 $ & $ 0.500 $ & \\
$ 11 $ & $ 0.095 $ & $ 0.339 $ & $ 0.123 $ & $ 0.065 $ & $ 1.469 $ & $ 0.082 $ & $ 0.056 $ & $ 0.138 $ & $ 0.146 $ & $ 0.502 $ & $ 1.215 $ & $ 2.419 $ & \\
$ 12 $ & $ 0.111 $ & $ 0.103 $ & $ 0.115 $ & $ 0.113 $ & $ 1.043 $ & $ 0.033 $ & $ 0.033 $ & $ 0.136 $ & $ 0.148 $ & $ 0.500 $ & $ 1.212 $ & $ 2.426 $ & \\
$ 13 $ & $ 0.101 $ & $ 0.035 $ & $ 0.119 $ & $ 0.123 $ & $ 1.283 $ & $ 0.032 $ & $ 0.022 $ & $ 0.131 $ & $ 0.149 $ & $ 0.491 $ & $ 1.211 $ & $ 2.464 $ & \\
$ 14 $ & $ 0.127 $ & $ 0.088 $ & $ 0.110 $ & $ 0.062 $ & $ 1.237 $ & $ 0.013 $ & $ 0.006 $ & $ 0.127 $ & $ 0.148 $ & $ 0.487 $ & $ 1.213 $ & $ 2.492 $ & \\
$ 15 $ & $ 0.123 $ & $ 0.111 $ & $ 0.111 $ & $ 0.006 $ & $ 1.170 $ & $ 0.009 $ & $ 0.004 $ & $ 0.119 $ & $ 0.148 $ & $ 0.476 $ & $ 1.212 $ & $ 2.545 $ & \\
$ 16 $ & $ 0.112 $ & $ 0.127 $ & $ 0.114 $ & $ 0.084 $ & $ 1.035 $ & $ 0.025 $ & $ 0.025 $ & $ 0.129 $ & $ 0.148 $ & $ 0.489 $ & $ 1.212 $ & $ 2.480 $ & \\
\hline
$ m_\mathrm{c}/m = 0.95 $ & $ $ & $ $ & $ $ & $ $ & $ $ & \multicolumn{2}{l}{Initial Properties} & $ 0.600 $ & $ 1.800 $ & $ 1.204 $ & $ 2.314 $ & $ 2.37 $ & \\
\hline
$ 1 $ & $ 0.143 $ & $ 0.244 $ & $ 0.107 $ & $ 0.075 $ & $ 1.536 $ & $ 0.035 $ & $ 0.018 $ & $ 0.010 $ & $ 0.046 $ & $ 1.854 $ & $ 2.729 $ & $ 1.472 $ & \\
$ 2 $ & $ 0.136 $ & $ 0.262 $ & $ 0.108 $ & $ 0.054 $ & $ 1.415 $ & $ 0.028 $ & $ 0.018 $ & $ 0.003 $ & $ 0.042 $ & $ 2.064 $ & $ 2.922 $ & $ 1.415 $ & \\
$ 3 $ & $ 0.150 $ & $ 0.288 $ & $ 0.106 $ & $ 0.004 $ & $ 1.669 $ & $ 0.027 $ & $ 0.007 $ & $ 0.002 $ & $ 0.040 $ & $ 2.598 $ & $ 3.000 $ & $ 1.155 $ & \\
$ 4 $ & $ 0.104 $ & $ 0.140 $ & $ 0.118 $ & $ 0.093 $ & $ 1.206 $ & $ 0.026 $ & $ 0.022 $ & $ 0.018 $ & $ 0.044 $ & $ 1.551 $ & $ 2.805 $ & $ 1.808 $ & \\
$ 5 $ & $ 0.083 $ & $ 0.277 $ & $ 0.132 $ & $ 0.210 $ & $ 1.000 $ & $ 0.141 $ & $ 0.053 $ & $ 0.038 $ & $ 0.045 $ & $ 1.484 $ & $ 2.769 $ & $ 1.866 $ & \\
$ 6 $ & $ 0.121 $ & $ 0.174 $ & $ 0.113 $ & $ 0.100 $ & $ 1.113 $ & $ 0.032 $ & $ 0.031 $ & $ 0.012 $ & $ 0.045 $ & $ 1.610 $ & $ 2.787 $ & $ 1.732 $ & \\
$ 7 $ & $ 0.089 $ & $ 0.352 $ & $ 0.127 $ & $ 0.129 $ & $ 1.000 $ & $ 0.111 $ & $ 0.061 $ & $ 0.038 $ & $ 0.047 $ & $ 1.488 $ & $ 2.615 $ & $ 1.758 $ & \\
$ 8 $ & $ 0.115 $ & $ 0.095 $ & $ 0.115 $ & $ 0.083 $ & $ 1.010 $ & $ 0.017 $ & $ 0.017 $ & $ 0.008 $ & $ 0.042 $ & $ 2.195 $ & $ 2.954 $ & $ 1.346 $ & \\
$ 9 $ & $ 0.127 $ & $ 0.105 $ & $ 0.112 $ & $ 0.140 $ & $ 1.000 $ & $ 0.043 $ & $ 0.040 $ & $ 0.005 $ & $ 0.041 $ & $ 1.956 $ & $ 2.987 $ & $ 1.527 $ & \\
$ 10 $ & $ 0.110 $ & $ 0.175 $ & $ 0.116 $ & $ 0.052 $ & $ 1.086 $ & $ 0.017 $ & $ 0.017 $ & $ 0.012 $ & $ 0.044 $ & $ 1.637 $ & $ 2.842 $ & $ 1.736 $ & \\
$ 11 $ & $ 0.118 $ & $ 0.142 $ & $ 0.112 $ & $ 0.104 $ & $ 1.077 $ & $ 0.036 $ & $ 0.035 $ & $ 0.045 $ & $ 0.048 $ & $ 1.289 $ & $ 2.545 $ & $ 1.975 $ & \\
$ 12 $ & $ 0.106 $ & $ 0.138 $ & $ 0.116 $ & $ 0.106 $ & $ 1.146 $ & $ 0.036 $ & $ 0.033 $ & $ 0.047 $ & $ 0.047 $ & $ 1.207 $ & $ 2.585 $ & $ 2.142 $ & \\
$ 13 $ & $ 0.128 $ & $ 0.094 $ & $ 0.110 $ & $ 0.064 $ & $ 1.253 $ & $ 0.014 $ & $ 0.006 $ & $ 0.044 $ & $ 0.048 $ & $ 1.330 $ & $ 2.524 $ & $ 1.897 $ & \\
$ 14 $ & $ 0.124 $ & $ 0.116 $ & $ 0.111 $ & $ 0.008 $ & $ 1.187 $ & $ 0.010 $ & $ 0.005 $ & $ 0.041 $ & $ 0.048 $ & $ 1.432 $ & $ 2.506 $ & $ 1.751 $ & \\
$ 15 $ & $ 0.112 $ & $ 0.131 $ & $ 0.114 $ & $ 0.083 $ & $ 1.034 $ & $ 0.026 $ & $ 0.025 $ & $ 0.044 $ & $ 0.048 $ & $ 1.319 $ & $ 2.529 $ & $ 1.917 $ & \\
\enddata
\tablenotetext{}{$Run$ designates the run number, $a$, $e$, $m_\mathrm{gas}$, and $\rho$ are the semi-major axis (in AU), eccentricity, gas mass (in Earth masses), and density (in CGS) of the inner and outer planets, $P_2/P_1$ is the period ratio, $C$ and $C_\mathrm{min}$ are the angular momentum deficit and minimum possible angular momentum deficit, normalized as described in \S\ref{SSec:Scatterings}, and $\rho_1/\rho_2$ is the density ratio.
The initial gas mass fractions and densities are listed on the first row of each set of calculations.}
\end{deluxetable*}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f5a.eps}
\color{white}
\rule{8.0cm}{5.0cm}
\color{black} \\
\includegraphics[width=8.0cm]{./f5b.eps}
\includegraphics[width=8.0cm]{./f5c.eps} \\
\includegraphics[width=8.0cm]{./f5d.eps}
\includegraphics[width=8.0cm]{./f5e.eps}
\end{tabular}
\end{center}
\caption{Results of collisions as a function of the degree of contact, $\eta = d_\mathrm{min}/(R_1+R_2),$ and the collision energy in units of the binding energy, as described in \eqref{EQ:binding_energy}, from the N-body calculations.
In the Kepler-36 progenitor calculations (top left), we find 14 mergers ($\times$), 1 bound planet-pair ($\star$), and 18 scatterings (circle), in the $q=1;\ m_\mathrm{c}=0.85$ (middle left) calculations we find 3 mergers and 16 scatterings, in the $q=1;\ m_\mathrm{c}=0.95$ (middle right) calculations we find 2 mergers and 17 scatterings, in the $q=1/3;\ m_\mathrm{c}=0.85$ (bottom left) calculations we find 9 mergers and 7 scatterings, and in the $q=1/3;\ m_\mathrm{c}=0.95$ (bottom right) calculations we find 10 mergers and 5 scatterings.
Of the 63 collisions resulting in a scattering, 28 exhibit a flipped orbit (red edge-color), where the initially outer planet becomes the inner planet after the collision.}
\label{Fig:Collision_Results}
\end{figure*}
\subsection{Scatterings}
\label{SSec:Scatterings}
To quantify the effect the collisions have on the orbits, we calculate the angular momentum deficit, following \citet{1997A&A...317L..75L}, \begin{equation}C=\sum_{k=1}^{n}m_k\sqrt{GM_*a_k}\left(1-\sqrt{1-e_k^2}\cos(i_k)\right).\end{equation}
Enforcing conservation of energy and angular momentum, we calculate the angular momentum deficit as a function of the period ratio, and for each set of initial and post-collision orbits, the minimum angular momentum deficit, $C_\mathrm{min},$ where $C_\mathrm{min}\le0$ defines a set of orbits with a circular solution.
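The deficit defined above can be evaluated directly from the osculating elements of each planet. As a minimal illustrative sketch (not the paper's actual pipeline; it assumes units in which $GM_*=1$, whereas the figures normalize $C$ by $C_0=\mathrm{M}_\mathrm{E}(\mathrm{GM}_\odot\,\mathrm{AU})^{1/2}$, and the function name is ours):

```python
import numpy as np

def angular_momentum_deficit(m, a, e, inc, GMstar=1.0):
    """Angular momentum deficit (Laskar 1997):
    C = sum_k m_k sqrt(G M_* a_k) (1 - sqrt(1 - e_k^2) cos i_k).
    Inputs are per-planet masses, semi-major axes, eccentricities,
    and inclinations (radians); units are illustrative."""
    m, a, e, inc = map(np.asarray, (m, a, e, inc))
    circ = m * np.sqrt(GMstar * a)                 # circular, coplanar value
    actual = circ * np.sqrt(1.0 - e**2) * np.cos(inc)
    return float(np.sum(circ - actual))

# Example: two slightly eccentric, coplanar planets near 0.11-0.12 AU.
C = angular_momentum_deficit(m=[3e-5, 3e-5], a=[0.11, 0.12],
                             e=[0.05, 0.03], inc=[0.0, 0.0])
```

For circular, coplanar orbits the deficit vanishes identically, which is why $C_\mathrm{min}\le0$ signals that a circular solution exists.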
Figure~\ref{Fig:Scattering_Plots1} shows the angular momentum, period-ratio solutions for the initial and final orbits of each calculation, with the Hill-stable solution \citep{1993Icar..106..247G} as a reference.
In general, the minimum angular momentum deficits of the post-collision orbits are less than the minimum angular momentum deficits of the initial orbits, and closer to the Hill-stable solution.
We use the final positions and velocities from the hydrodynamic calculations as initial conditions for follow-up dynamical integrations to determine the long-term stability of each system; while every scattering outcome eventually results in a subsequent collision, further collisions may ultimately stabilize the system.
During the collision, the lower-mass planets ($m\sim4.0\ M_\mathrm{E}$) tend to lose some fraction of their gas envelopes; only the largest planets ($m=12.0\ M_\mathrm{E}$) gain mass.
We develop a model for predicting the change in mass during a collision in \S\ref{Sec:Predictions}.
Using the same method as in {\it Paper 1}, we find that the density ratios tend to become more extreme for systems with mass ratio $q\ne1$, with the lower-mass planets losing more mass than their higher-mass companions.
These results suggest that two very tightly-packed planets with a large density ratio (e.g. Kepler-36; \citealt{2012Sci...337..556C}) may be evidence of a previous planet-planet collision (\citealt{2016ApJ...823..162L}, \citealt{2016ApJ...817L..13I}).
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f6a.eps}
\color{white}
\rule{8.0cm}{5.0cm}
\color{black} \\
\includegraphics[width=8.0cm]{./f6b.eps}
\includegraphics[width=8.0cm]{./f6c.eps} \\
\includegraphics[width=8.0cm]{./f6d.eps}
\includegraphics[width=8.0cm]{./f6e.eps}
\end{tabular}
\end{center}
\caption{Solutions of period ratio, $P_2/P_1$, and angular momentum deficit, $C$, scaled by $C_0 = \mathrm{M}_\mathrm{E}(\mathrm{GM}_\odot\ \mathrm{AU})^{1/2},$ before the collision (green) and after the collision (red), along with the coordinates of the first and last snapshots in the SPH calculation (black points). The solution for a Hill-stable orbit is shown for comparison (solid black line) for the pre-collision masses of each calculation, and we see that, in general, systems become more stable after the collision. Angular momentum deficits below zero are unphysical, and orbits with solutions where $C<0$ have period-ratio boundaries at the circular solutions.}
\label{Fig:Scattering_Plots1}
\end{figure*}
\subsection{Mergers}
Collisions that eventually result in the merger of the two cores generally undergo multiple episodes of physical collisions at the periapsis of each planet-planet orbit, losing some relative orbital energy at each periapsis passage.
While our treatment of the cores does not allow us to fully integrate through the merger, we are able to follow the changes in mass and orbital energy in each orbit until the cores physically collide.
In previous versions of these calculations, in which the equations of state of both the core and the gas were approximated simply as polytropes, we found several distinct outcomes, including fragmentation of the less-massive core, where some mass is accreted onto the more-massive core and the remainder becomes bound, forming a planet-planet binary with a more extreme mass ratio.
Preferential shedding of the mantle may lead to remnants where the less-massive planet survives with a much higher iron-core mass fraction, and the more-massive planet has an excess of mantle material.
Follow-up studies using equations of state that more accurately model direct core-core collisions are necessary to fully understand the details of these outcomes.
\subsection{Bound Planet-Pair}
While most collisions resulting in the two planets becoming a bound pair have an orbit with a periapsis that results in a core-core collision, and a likely merger, one calculation shows a potentially long-lived planet-planet binary.
Following \citet{2010arXiv1007.1418P}, we define the inner binary as the two bound planets and the outer binary as the bound planet-pair and the host star, and compare the maximum apoapsis of the inner binary to the mutual Hill radius at the periapsis of the outer binary, \begin{equation}\label{EQ:max_apoapsis}r_\mathrm{1,apo} = a_\mathrm{in}(1+e_\mathrm{in})\left(\frac{m_2}{m_1+m_2}\right),\end{equation} \begin{equation}\label{EQ:R_Hill_periapsis}R_\mathrm{H,peri}=a_\mathrm{out}(1-e_\mathrm{out})\left(\frac{\mu}{3M_*}\right)^{1/3},\end{equation} where $m_2 > m_1$, and $a_\mathrm{in}$, $e_\mathrm{in}$, $a_\mathrm{out}$, and $e_\mathrm{out}$ are the semi-major axes and eccentricities of the inner and outer binary.
We estimate the inner binary's maximum apoapsis by enforcing the maximum distance of a planet from the center of mass of the inner binary be less than the Hill radius at the outer binary's periapsis, \begin{equation}\label{EQ:max_periapsis}r_\mathrm{max, apo} = a_\mathrm{out}(1-e_\mathrm{out})\left(\frac{\mu}{3M_*}\right)^{1/3}\left(\frac{m_2}{m_1+m_2}\right).\end{equation}
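The two bounds in \eqref{EQ:max_apoapsis} and \eqref{EQ:R_Hill_periapsis} are straightforward to evaluate; the short sketch below does so for a hypothetical set of orbital elements (the numerical values are ours, chosen only to illustrate the stability check, not taken from the calculation shown in Figure~\ref{Fig:Bound_Orbits}):

```python
import math

def max_apoapsis_inner(a_in, e_in, m1, m2):
    """Eq. (max_apoapsis): maximum distance of the less-massive planet
    (m1 < m2) from the center of mass of the inner binary."""
    return a_in * (1.0 + e_in) * m2 / (m1 + m2)

def hill_radius_periapsis(a_out, e_out, m1, m2, M_star):
    """Eq. (R_Hill_periapsis): mutual Hill radius evaluated at the
    periapsis of the outer binary, with mu the reduced mass of the pair."""
    mu = m1 * m2 / (m1 + m2)
    return a_out * (1.0 - e_out) * (mu / (3.0 * M_star)) ** (1.0 / 3.0)

# Hypothetical numbers: masses in Earth masses (1 M_sun ~ 333000 M_E),
# lengths in AU.
m1, m2, M_star = 4.0, 12.0, 333000.0
r_apo = max_apoapsis_inner(a_in=0.001, e_in=0.5, m1=m1, m2=m2)
R_H = hill_radius_periapsis(a_out=0.12, e_out=0.1, m1=m1, m2=m2,
                            M_star=M_star)
print(r_apo, R_H, r_apo < R_H)
```

With these inputs the inner binary's maximum apoapsis sits inside the mutual Hill radius, the condition required for the pair to remain bound.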
Figure~\ref{Fig:Bound_Orbits} shows the orbital evolution of the inner and outer binaries of this calculation, and we see that the orbit of the planet-planet binary has a periapsis large enough to prevent the cores from colliding and an apoapsis small enough that the planets remain inside the mutual Hill sphere.
Determining the long-term stability of this planet-planet binary is complex and requires including tides and the decay of the inner binary's orbit due to the gas envelope.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f7a.eps} \\
\includegraphics[width=8.0cm]{./f7b.eps}
\end{tabular}
\end{center}
\caption{Orbital evolution of a collision resulting in a potential planet-planet binary. The planet-star orbits (top), with the semi-major axes (solid line), and periastron and apoastron (dotted lines).
The planet-planet orbit (bottom) after the collision shows the physical separation (dashed black line), semi-major axes (blue), and periastron and apoastron (dotted lines) of the planet-planet binary.
The sum of the physical sizes of the cores (dotted red line) and the maximum periapsis (dotted black line) from \eqref{EQ:max_periapsis} are shown for comparison.}
\label{Fig:Bound_Orbits}
\end{figure}
\section{Recipes to Predict Collisional Outcomes}
\label{Sec:Predictions}
The details of the collision are important in determining both the outcomes and the structures of the remnant planet(s).
The maximum relative energy of a bound planet-planet pair (resulting in either a merger or a long-lived planet-planet binary) can be calculated, as the maximum orbital separation must be less than the mutual Hill radius, \begin{equation}\label{EQ:E_max}E_\mathrm{max} = \frac{-Gm_1m_2}{2a_\mathrm{max}}.\end{equation}
Using \eqref{EQ:max_apoapsis} and \eqref{EQ:R_Hill_periapsis}, \begin{equation}\label{EQ:a_max}a_\mathrm{max} = a_\mathrm{out}\left(\frac{1-e_\mathrm{out}}{1+e_\mathrm{in}}\right)\left(\frac{m_2}{m_1+m_2}\right)\left(\frac{\mu}{3M_*}\right)^{1/3},\end{equation} where $a_\mathrm{out}$, $e_\mathrm{out}$, $a_\mathrm{in}$, and $e_\mathrm{in}$ are the semi-major axes and eccentricities of the outer (planets' center of mass and host star) and inner (planet-planet) binaries, and $m_1$, $m_2$, and $\mu$ are the planet masses and reduced mass after the collision, where $m_1 < m_2$.
In each planet-planet collision, some amount of energy, $E_\mathrm{d},$ is lost from the planets' orbit, and we predict that the remnant will result in a scattering event, where both planets survive, if \begin{equation}E_\mathrm{c} - E_\mathrm{d} > E_\mathrm{max},\end{equation} and may otherwise become a capture, either a merger or bound planet-pair.
\subsection{Fitting Energy Dissipated and Change in Mass}
\label{SSec:Fits}
To predict the outcomes of a generic collision, we examine the energy dissipated and changes in mass during the first periastron passage of the two planets.
Figures~\ref{Fig:Energy_Dissipated}-\ref{Fig:mass_loss2} show the energy dissipated and changes in mass for each planet as a function of the distance of closest approach.
We fit the energy dissipated and change in mass of each planet with a power law as a function of distance of closest approach, $f=Ad^k$, discarding collisions where we do not fully resolve the first passage, or with a distance of closest approach close enough to cause contact between the physical cores, as this introduces additional physics and requires a different fit.
We use only scattering collisions to fit the changes in mass, as we are interested in the masses of each planet after leaving their mutual Hill sphere.
Table~\ref{TBL:Best_fit_values} summarizes the best fit values for each set of calculations.
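A power law $f = A d^k$ is linear in log-log space, so the coefficients can be recovered with an ordinary least-squares fit there; this is one common fitting choice and is not necessarily the procedure used to produce Table~\ref{TBL:Best_fit_values}. The sketch below recovers the $q=1;\ m_\mathrm{c}=0.85$ energy-dissipation coefficients from synthetic, noise-free data:

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = A * x**k performed in log-log space.
    Assumes all x, y > 0. Returns (A, k)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    k = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    A = math.exp(my - k * mx)
    return A, k

# Synthetic 'energy dissipated' data generated from the q=1, m_c=0.85
# fit values (A_E = 1.47e-3, k_E = -4.36): the fit recovers them.
eta = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
Ed = [1.47e-3 * e ** -4.36 for e in eta]
A, k = fit_power_law(eta, Ed)
print(A, k)  # ~ (1.47e-3, -4.36)
```

On real calculation output the same routine would be applied after discarding unresolved first passages and core-contact encounters, as described above.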
We find that higher mass ratios and core mass fractions exhibit a steeper exponent in the power law for both the energy dissipated and change in mass.
The change in mass of the less-massive planet is predicted well by the power-law fit, but we find that the change in mass of the more-massive planet in $q\ne1$ calculations is not as well modeled by a single power-law and likely depends on additional factors, for example, the angle of impact relative to the host star.
\begin{deluxetable}{lccccc}
\tabletypesize{\footnotesize}
\tablewidth{8.0cm}
\tablecolumns{3}
\tablecaption{Best Fits for Energy Dissipated and Change in Mass\label{TBL:Best_fit_values}}
\tablehead{
\colhead{Set} & \colhead{$A$} & \colhead{$k$}}
\startdata
$E_\mathrm{d}/E_\mathrm{b} = A_\mathrm{E}\eta^{k_\mathrm{E}}$ \\
$ Kepler-36\ Progenitor $ & $ 3.57\times10^{-3} $ & $ -5.15 $ & \\
$ q=1;\ m_\mathrm{c}=0.85 $ & $ 1.47\times10^{-3} $ & $ -4.36 $ & \\
$ q=1;\ m_\mathrm{c}=0.95 $ & $ 2.92\times10^{-3} $ & $ -5.43 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.85 $ & $ 1.07\times10^{-3} $ & $ -4.55 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.95 $ & $ 2.11\times10^{-3} $ & $ -6.83 $ & \\
\hline \\
$m_{\mathrm{lost},1}/m_1 = A_{\mathrm{m},1}\eta^{k_{\mathrm{m},1}}$ \\
$ Kepler-36\ Progenitor $ & $ 1.79\times10^{-4} $ & $ -10.56 $ & \\
$ q=1;\ m_\mathrm{c}=0.85 $ & $ 1.36\times10^{-5} $ & $ -8.09 $ & \\
$ q=1;\ m_\mathrm{c}=0.95 $ & $ 6.57\times10^{-6} $ & $ -13.06 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.85 $ & $ 5.23\times10^{-3} $ & $ -3.24 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.95 $ & $ 1.87\times10^{-3} $ & $ -7.15 $ & \\
\hline \\
$m_{\mathrm{lost},2}/m_2 = A_{\mathrm{m},2}\eta^{k_{\mathrm{m},2}}$ \\
$ Kepler-36\ Progenitor $ & $ 3.80\times10^{-5} $ & $ -6.81 $ & \\
$ q=1;\ m_\mathrm{c}=0.85 $ & $ 1.62\times10^{-5} $ & $ -7.92 $ & \\
$ q=1;\ m_\mathrm{c}=0.95 $ & $ 9.45\times10^{-6} $ & $ -12.40 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.85 $ & $ -3.08\times10^{-3} $ & $ 0.62 $ & \\
$ q=1/3;\ m_\mathrm{c}=0.95 $ & $ -5.10\times10^{-4} $ & $ -4.29 $ & \\
\enddata
\tablenotetext{}{$Set$ designates the set name, and $A$ and $k$ are dimensionless variables used in models predicting the energy dissipated and changes in mass as a function of the distance of closest approach.}
\end{deluxetable}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f8a.eps}
\color{white}
\rule{8.0cm}{5.0cm}
\color{black} \\
\includegraphics[width=8.0cm]{./f8b.eps}
\includegraphics[width=8.0cm]{./f8c.eps} \\
\includegraphics[width=8.0cm]{./f8d.eps}
\includegraphics[width=8.0cm]{./f8e.eps}
\end{tabular}
\end{center}
\caption{Change of orbital energy of the planets as a function of the degree of contact, $\eta = d_\mathrm{min}/(R_1+R_2)$, where the symbols designate the outcomes as described in Figure~\ref{Fig:Collision_Results}.
We show the fits, $E_\mathrm{d}/E_\mathrm{b} = A_\mathrm{E}\eta^{k_\mathrm{E}},$ (dashed black lines) where the best-fit values are reported in Table~\ref{TBL:Best_fit_values}.}
\label{Fig:Energy_Dissipated}
\end{figure*}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f9a.eps}
\includegraphics[width=8.0cm]{./f9b.eps} \\
\includegraphics[width=8.0cm]{./f9c.eps}
\includegraphics[width=8.0cm]{./f9d.eps} \\
\includegraphics[width=8.0cm]{./f9e.eps}
\includegraphics[width=8.0cm]{./f9f.eps}
\end{tabular}
\end{center}
\caption{Fractional change in mass after the first close encounter, of the inner (left) and outer (right) planet as a function of the degree of contact, $\eta = d_\mathrm{min}/(R_1+R_2)$, for the Kepler-36 progenitor (top), $q=1;\ m_\mathrm{c}=0.85$ (middle), and $q=1;\ m_\mathrm{c}=0.95$ (bottom) calculations, where the symbols designate the outcomes as described in Figure~\ref{Fig:Collision_Results}.
We show the fits, $m_{\mathrm{lost},i}/m_i = A_{\mathrm{m},i}\eta^{k_{\mathrm{m},i}},$ (dashed black lines) where the best-fit values are reported in Table~\ref{TBL:Best_fit_values}.}
\label{Fig:mass_loss1}
\end{figure*}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f10a.eps}
\includegraphics[width=8.0cm]{./f10b.eps} \\
\includegraphics[width=8.0cm]{./f10c.eps}
\includegraphics[width=8.0cm]{./f10d.eps}
\end{tabular}
\end{center}
\caption{Fractional change in mass after the first close encounter, of the inner (left) and outer (right) planet as a function of the degree of contact, $\eta = d_\mathrm{min}/(R_1+R_2)$, for the $q=1/3;\ m_\mathrm{c}=0.85$ (top), and $q=1/3;\ m_\mathrm{c}=0.95$ (bottom) calculations.
Symbols and fits are as in Fig.~\ref{Fig:mass_loss1}.}
\label{Fig:mass_loss2}
\end{figure*}
\subsection{Prescription}
For each set of collisions, we develop models using the initial energy, distance of closest approach, masses, and orbits to predict the outcome and final masses and orbits of the remnant planet(s).
Using \eqref{EQ:E_max} and \eqref{EQ:a_max}, the critical initial collision energy required for a scattering may be expressed, \begin{equation}\label{EQ:E}E_\mathrm{c} = E_\mathrm{esc} + E_\mathrm{d},\end{equation} where \begin{equation}\label{EQ:E_esc}E_\mathrm{esc}=\frac{-Gm_1(m_1+m_2)}{2a_\mathrm{out}}\frac{1+e_\mathrm{in}}{1-e_\mathrm{out}}\left(\frac{\mu}{3M_*}\right)^{-1/3},\end{equation} and $m_1$, $m_2$, and $E_\mathrm{d}$ may be estimated as a function of $d_\mathrm{min}$ using the fits discussed in the previous subsection.
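The resulting classification can be written down in a few lines; in the sketch below we work in code units with $G=1$, and the masses, orbital elements, and energies are hypothetical inputs chosen only to exercise both branches of the criterion:

```python
import math

G = 1.0  # gravitational constant in code units; illustrative only

def e_max(m1, m2, a_out, e_out, e_in, M_star):
    """Eqs. (E_max)/(E_esc): orbital energy of the pair at a separation
    equal to the mutual Hill radius. Pairs more energetic than this can
    leave the mutual Hill sphere."""
    mu = m1 * m2 / (m1 + m2)
    return (-G * m1 * (m1 + m2) / (2.0 * a_out)
            * (1.0 + e_in) / (1.0 - e_out)
            * (mu / (3.0 * M_star)) ** (-1.0 / 3.0))

def predict_outcome(E_c, E_d, m1, m2, a_out, e_out, e_in, M_star):
    """Scattering if the collision energy minus the dissipated energy
    still exceeds the escape threshold; otherwise a capture."""
    if E_c - E_d > e_max(m1, m2, a_out, e_out, e_in, M_star):
        return "scattering"
    return "capture"

# Hypothetical inputs: Earth masses, AU, G = 1. An energetic encounter
# scatters; a colder one is captured.
args = dict(m1=4.0, m2=12.0, a_out=0.12, e_out=0.1, e_in=0.5,
            M_star=333000.0)
print(predict_outcome(E_c=-1.0e4, E_d=5.0e3, **args))
print(predict_outcome(E_c=-3.0e4, E_d=5.0e3, **args))
```

In practice $E_\mathrm{d}$ and the remnant masses entering this test come from the power-law fits of Table~\ref{TBL:Best_fit_values}.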
Figure~\ref{Fig:Outcome_Predictions} shows, for each set of calculations, the critical initial collision energy in excess of the energy required for a planet to leave the mutual Hill sphere (to normalize the distance from the host star), as a function of the distance of closest approach, separating the predicted captures and scatterings.
We see that the potential planet-planet binary is very close to the critical energy separating captures and scatterings.
The model accurately predicts 98 out of 102 outcomes, and we examine closely the outcomes that are misclassified.
Of the four misclassified outcomes, two are mergers that occur after initially leaving the mutual Hill sphere, and can be attributed to a second, separate close encounter, and one is a collision very close to the critical collision energy, resulting in a potentially long-lived planet-planet binary.
This model may be adapted for a generic collision between sub-Neptunes using the fits from the calculations most similar to the target system.
Figure~\ref{Fig:Collision_Parameters_Predictions} shows the distribution of collisions from our dynamical integrations discussed in \S\ref{Sec:Nbody} with the predicted critical collision energy for separating grazing collisions resulting in mergers from scatterings.
Table~\ref{TBL:Predicted_Nbody} summarizes the outcomes of grazing collisions and near misses, where $72\%$--$96\%$ of grazing collisions result in a scattering, motivating an improvement on the sticky-sphere approximation.
Based on the predicted outcome, the final properties of the collision remnant may be estimated, specifically, the amount of gas retained by a merger remnant, and the energy and gas lost during a scattering.
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f11a.eps}
\color{white}
\rule{8.0cm}{5.0cm}
\color{black} \\
\includegraphics[width=8.0cm]{./f11b.eps}
\includegraphics[width=8.0cm]{./f11c.eps} \\
\includegraphics[width=8.0cm]{./f11d.eps}
\includegraphics[width=8.0cm]{./f11e.eps}
\end{tabular}
\end{center}
\caption{The outcomes of each collision, designated as described in Figure~\ref{Fig:Collision_Results}, as a function of the degree of contact, $\eta = d_\mathrm{min}/(R_1+R_2),$ and the collision energy in units of the binding energy, as described by eq.~\eqref{EQ:binding_energy}, from the SPH calculations.
The collision energy is normalized by the escape energy as described by eq.~\eqref{EQ:E_esc}.
We show the predicted critical collision energy separating scattering collisions from captures (dashed black lines), and we see that the model accurately predicts the outcome in 98 out of 102 collisions, where 2 of the misclassified outcomes are the result of a second, separate close encounter, and 1 results in the only bound-planet pair found in our set of calculations.}
\label{Fig:Outcome_Predictions}
\end{figure*}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.0cm]{./f12a.eps}
\color{white}
\rule{8.0cm}{5.0cm}
\color{black} \\
\includegraphics[width=8.0cm]{./f12b.eps}
\includegraphics[width=8.0cm]{./f12c.eps} \\
\includegraphics[width=8.0cm]{./f12d.eps}
\includegraphics[width=8.0cm]{./f12e.eps}
\end{tabular}
\end{center}
\caption{Distribution of collisions characterized as in Figure~\ref{Fig:Collision_Parameters}, with the predicted critical initial collision energy described by eq.~\eqref{EQ:E} (dashed black line) separating grazing collisions resulting in mergers (blue) and scatterings (green).
Table~\ref{TBL:Predicted_Nbody} summarizes the predicted outcomes, where $72\%$ to $96\%$ of the grazing collisions result in a scattering, and are poorly modeled by the sticky-sphere approximation.}
\label{Fig:Collision_Parameters_Predictions}
\end{figure*}
\begin{deluxetable*}{lcccccccc}
\tabletypesize{\footnotesize}
\tablewidth{6.0cm}
\tablecolumns{3}
\tablecaption{Predicted Outcomes of Grazing Collisions\label{TBL:Predicted_Nbody}}
\tablehead{
\colhead{Set} & \colhead{$N_\mathrm{grazing}$} & \colhead{$N_\mathrm{scattering}$}}
\startdata
$ Kepler-36\ progenitor $ & $ 212 $ & $ 167 $ & \\
$ q=1;\ m_\mathrm{c} = 0.85 $ & $ 333 $ & $ 320 $ & \\
$ q=1;\ m_\mathrm{c} = 0.95 $ & $ 269 $ & $ 255 $ & \\
$ q=1/3;\ m_\mathrm{c} = 0.85 $ & $ 348 $ & $ 295 $ & \\
$ q=1/3;\ m_\mathrm{c} = 0.95 $ & $ 254 $ & $ 183 $ & \\
\enddata
\tablenotetext{}{$Set$ designates the set name, $N_\mathrm{grazing}$ is the number of grazing collisions (including near misses described in Table~\ref{TBL:NBody_Stats}) that occur within $1000$ years and $N_\mathrm{scattering}$ is the number of grazing collisions that result in a scattering, where the planets leave their mutual Hill radii without merging.}
\end{deluxetable*}
\subsubsection{Scatterings}
We find that a majority of grazing collisions where the cores do not physically come into contact result in a scattering with both planets leaving their mutual Hill sphere.
During the scattering, both planets lose mass and orbital energy, which is deposited into unbinding gas and also converted into internal energy.
The change in mass and orbital energy during a scattering collision can significantly affect the dynamical evolution of the system.
The prescription for treating scattering collisions is as follows, using as inputs the degree of contact, masses, mass ratio, and core mass fractions of the planets:
\begin{itemize}\setlength{\itemsep}{-4pt}
\item [1)] Integrate until reaching the projected minimum separation.
\item [2)] Use the fits presented in Table~\ref{TBL:Best_fit_values} to estimate the mass fraction, $m_{\mathrm{lost},i}$, and orbital energy, $E_\mathrm{d}$, lost from both planets. The best fit values may be chosen either from the set of collisions most similar to the two colliding planets {\bf or using a linear interpolation.}
\item [3)] Calculate the remnant masses for both planets \begin{equation}m^\prime_i = m_i(1-m_{\mathrm{lost},i}(\eta)).\end{equation} For cases where the mass ratio of the two planets departs significantly from unity ($q<2/3$), the change in mass of the larger planet is not as well characterized by our fits, likely depending on details such as the exact trajectory of the collision; the observed change in mass for such large planets is relatively small, and can be ignored.
\item [4)] Decrease the magnitude of the relative velocity between the two planets to account for the decrease in orbital energy, conserving energy (accounting for the energy lost), \begin{equation}\frac{v^\prime}{v}=\sqrt{\frac{1}{\mu^\prime}\left[\frac{2G}{d_\mathrm{min}v^2}\left(m_1m_2-m^\prime_1m^\prime_2\right)+\mu-\frac{E_\mathrm{d}(\eta)}{v^2}\right]},\end{equation} where $\mu^\prime$ and $v^\prime$ are the remnant reduced-mass and magnitude of relative velocity.
\item [5)] Numerically solve for the remnant velocities of each planet, conserving specific angular momentum, \begin{equation}v^\prime=|{\bf v^\prime}_1-{\bf v^\prime}_2|,\end{equation}\begin{equation}\frac{m_1{\bf r}_1\times{\bf v}_1+m_2{\bf r}_2\times{\bf v}_2}{m_1+m_2}=\frac{m^\prime_1{\bf r}_1\times{\bf v^\prime}_1+m^\prime_2{\bf r}_2\times{\bf v^\prime}_2}{m^\prime_1+m^\prime_2},\end{equation} where ${\bf v^\prime}_i=v^\prime_i{\bf \hat{v}}_i$ is the remnant velocity vector parallel to the initial velocity vector.
\item [6)] (Optional) Adjust the radii of both planets to account for the loss of gas. This step assumes that subsequent collisions occur after the planets have relaxed back into equilibrium.
\end{itemize}
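Steps 2--4 of this prescription can be sketched as follows; the fit coefficients in the example are hypothetical, and for simplicity $E_\mathrm{d}$ is taken directly in code energy units rather than in units of the binding energy:

```python
import math

G = 1.0  # code units; purely illustrative

def scattering_update(m1, m2, v, d_min, eta, fits):
    """Steps 2-4 of the scattering prescription: estimate the mass and
    orbital energy lost from power-law fits in eta, then rescale the
    relative speed to account for the changed masses and dissipated
    energy. `fits` maps each quantity to its (A, k) pair."""
    A1, k1 = fits["m1"]
    A2, k2 = fits["m2"]
    AE, kE = fits["Ed"]
    E_d = AE * eta ** kE                      # step 2: energy dissipated
    m1p = m1 * (1.0 - A1 * eta ** k1)         # step 3: remnant masses
    m2p = m2 * (1.0 - A2 * eta ** k2)
    mu = m1 * m2 / (m1 + m2)
    mup = m1p * m2p / (m1p + m2p)
    ratio = math.sqrt((1.0 / mup) * (         # step 4: velocity rescaling
        2.0 * G / (d_min * v * v) * (m1 * m2 - m1p * m2p)
        + mu - E_d / (v * v)))
    return m1p, m2p, v * ratio

# Hypothetical fit coefficients, equal-mass planets in code units.
fits = {"m1": (1.0e-5, -8.0), "m2": (1.0e-5, -8.0), "Ed": (1.0e-3, -5.0)}
m1p, m2p, v_new = scattering_update(1.0, 1.0, v=1.0, d_min=1.0,
                                    eta=0.8, fits=fits)
print(m1p, m2p, v_new)
```

Step 5, solving for the individual remnant velocities under conservation of specific angular momentum, would then distribute the rescaled relative speed between the two planets.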
Using this prescription should result in a material difference in dynamical integrations that exhibit scattering collisions, generally driving the planets into more stable orbits, and in cases with a subsequent merger, less-massive remnants.
\subsubsection{Mergers}
While we do not integrate collisions resulting in a physical collision between the cores through the merger due to the computational cost, we find that the short term change in mass is largely determined by the planets' core masses.
$4\ M_\mathrm{E}$ planets tend to lose a majority of their envelope in addition to some mantle, while $12\ M_\mathrm{E}$ planets do not exhibit significant changes in mass.
Our method for treating merger events is the same as in {\it Paper 1}, ejecting the gas from both planets and using the sticky-sphere approximation assuming most of the core mass ($\sim95\%-100\%$) is retained.
\citet{2016ApJ...817L..13I} provide a model for estimating the final gas mass fraction of the remnant planet after a merger as a function of the impact velocity; because the collisions in this study generally have energies comparable to the escape energy, our prediction that most of the gas envelope is ejected in a merger is consistent with this study.
\section{Conclusions}
\label{Sec:Conclusions}
We conduct a detailed study on collisions between two planets in initially unstable orbits, varying the mass ratios and core mass fractions of the planets to sample a range of typical neighbors in Kepler Multis, and summarize our findings as follows:
\begin{itemize}\setlength{\itemsep}{-4pt}
\item[1)]102 collision calculations result in 62 scatterings, 39 mergers, and 1 bound planet-pair.
\item[2)]While every scattering remnant eventually resulted in a crossing orbit during post-collision dynamical integrations, we find that collisions tend to stabilize the system. Further collisions may eventually result in a stable system with two surviving planets.
\item[3)]The collisions that result in mergers tend to eject a majority of the gas from the less-massive planet.
Future calculations with a better treatment of the core are required to fully resolve planet-planet mergers.
\item[4)]One collision results in a potentially long-lived planet-planet binary with an eccentric orbit such that the periapsis avoids collision of the planets' cores and the apoapsis is small enough that the planets remain inside the mutual Hill sphere.
\item[5)]The outcome of a collision depends very sensitively on the distance of closest approach and the relative energy between the two planets at the collision; specifically, high relative energies and large distances of closest approach lead to more scatterings and low relative energies and low distances of closest approach are more likely to result in a merger or bound planet-pair.
\item[6)]After a collision, the density ratio of $q\ne1$ planets tends to become more extreme, with the higher-mass core retaining more of, or even accreting, the disrupted gas, and the lower-mass core losing a higher fraction of gas; we found a minimum density ratio of $\rho_2/\rho_1 = 0.4$ in our Kepler-36 progenitor calculations with a pre-collision density ratio, $\rho_2/\rho_1 = 1.15$.
\item[7)]Many interesting outcomes provide motivation for further study; specifically the possibility of a long-lived planet-planet binary, and potential moon-forming collisions encourage development of models capable of accurately modeling violent collisions of the dense core material and resolving the gas envelope.
\end{itemize}
We aggregate the collision calculations to create fits for the energy dissipated and mass lost during a collision as a function of the distance of closest approach and planet properties.
We use these fits to develop a prescription for use in N-body integrators to predict and model the outcomes of such collisions between sub-Neptunes.
We find that collisions resulting in two surviving planets leaving their mutual Hill sphere tend to increase the dynamical stability of the system and reduce the overall gas mass fraction.
The amount of gas lost depends strongly on the core masses of the planets, and a low-mass, gas-poor planet neighboring a high-mass, gas-rich planet with a very small dynamical separation may be evidence of one or more previous planet-planet collisions.
The collisions that result in direct contact between the two cores are not fully resolved through the likely eventual merger in our simulations, and motivate further study to understand the details of these outcomes.
Previous calculations of these collisions with simple polytropic equations of state showed many distinct outcomes including fragmentation and partial accretion of the less-massive core.
If the lighter mantle material were preferentially stripped off of the less-massive core and accreted by the more-massive core, the remnant planets would exhibit very different core compositions: the less-massive planet with an excess iron-core mass fraction and the more-massive planet with a higher silicate-mantle mass fraction.
\subsection{Future Work}
Given our choice of equations of state to model the core, which were chosen to provide a stable boundary condition, we focused on studying grazing collisions and were not able to integrate core-core impacts through the entire merger.
Improving the algorithms used to resolve interfaces between components, and improving our core equations of state to better model shocks and mixing, will allow the study of deeper impacts, for example the long-term evolution of mergers.
In this work we focused on performing many calculations to better understand the outcomes of collisions after the first periastron passage and we do not resolve collisions past a few orbital periods of the innermost planet.
Calculations that integrate scattering collisions for long timescales are important in fully understanding the behavior of the disrupted gas, particularly if the gas eventually falls back onto one of the planets.
A study of the long-term evolution of the gas will require additional physics such as a model to handle stellar winds.
We studied in detail five sets of 2-planet systems, made up of two sub-Neptunes, where both planets have a tenuous gas envelope that dominates the volume, varying the mass ratios and gas mass fractions.
However, there still remains much work to be done to better understand the likely common collisions in high-multiplicity, tightly-packed systems.
{\bf Performing similar calculations at lower planet-ages will increase the size of the planet envelopes, resulting in collisions at larger distances of closest approach, and the change in structure will likely lead to qualitatively different outcomes.
Increasing the semi-major axes of the planets will lower the impact of the host star, and including terrestrial planets and gas giants will also expand the scope of this type of study.}
The existence of long-lived planet-planet binaries is an interesting problem, and the absence of an observation allows us to place an upper limit on the occurrence rate.
A detailed treatment of the stability of such a system could provide insight to the types of collisions that are able to create these phenomena.
\subsection{Acknowledgements}
This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University, which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
FAR and JAH were supported by NASA Grant NNX12AI86G.
JAH was also supported by an NSF GK-12 Fellowship funded through NSF Award DGE-0948017 to Northwestern University.
JCL was supported by NSF grant number AST-1313091.
JHS was supported by NASA grants NNX16AK08G and NNX16AK32G.
We thank Joshua Fixelle for useful discussions while developing the equations of state used in the SPH calculations and Francesca Valsecchi for help with using {\it MESA} to generate sub-Neptune envelopes.
We thank the anonymous referee for providing very thorough and helpful feedback leading to major improvements in this manuscript.
This work used the SPLASH visualization software \citep{2007PASA...24..159P}.
\pagebreak
\section{Introduction}
Nanopore technology has opened a completely new window for probing the
properties of polymers in general and biopolymers in
particular~\cite{KasianowitzNanopores1996,MellerDNANanopore2001,NanoporeExperimentsDekkerGroup}.
In a nanopore setup two macroscopic chambers filled with a buffer
solution are separated from each other by a wall. Embedded into this
wall is a single nanopore, i.e., a hole with a diameter in the few
nanometer range, connecting the two chambers. When charged polymers
are added into one chamber, an electric field applied across the
nanopore can drive these polymers through the pore one by one. Drops
in the induced counter ion current due to the occlusion of the pore by
the translocating polymer allow the translocation dynamics of
individual polymers to be observed. In recent years, this technique
has been applied extensively to study
DNA~\cite{KasianowitzNanopores1996,MellerDNANanopore2001,NanoporeExperimentsDekkerGroup,MellerSequenceDependence2000,DeamerHairpinsNanopore2001,MellerHairpinUnzipping2004,SauerBudgeHairpinUnzipping2003,MatheArinsteinMeller2006,KeyserDirectForceMeasurement,MellerSolidState2008,MellerOrientationDependence2008} and
RNA~\cite{AkesonRNATranslocation1999} molecules as well as
proteins~\cite{StefureacPeptideTranslocation2006,GoodrichPeptideTranslocation}.
The emergence of this new experimental technique has also spurred a
lot of activity on the theoretical side. There has been particular
interest in understanding the nonequilibrium statistical mechanics
associated with the translocation of unstructured, linear polymers,
e.g., single-stranded DNA in which all nucleotides are the
same~\cite{SungParkTranslocation1996,DiMarzioMandell1997,MuthukumarTranslocation1999,MuthukumarPRL2001,ChenWangLuoTranslocation,SlonkinaKolomeisky2003,MilchevBinderBhattacharya2004,MellerTranslocationDynamicsReview,ChuangKantorKardarAnomalousDynamics}. The
quantities of interest are the (experimentally measurable)
distribution of translocation times, and the asymptotic behavior of
the typical translocation time as the polymers become very long.
On the simplest level of description, the translocation of a linear
polymer is hindered by an entropic barrier. An entropic barrier
emerges since the wall separating the two chambers effectively
separates the polymer into two sections: the {\it trans} section which
has already translocated and the {\it cis} section which yet has to
translocate. Each of these sections is constrained in its motion by
the wall, and the constraint is most severe when the polymer has
translocated half way through the pore. More quantitatively, if a
polymer with sequence length $N$ is divided into sections of length
$\baseinpore$ and $N-\baseinpore$, respectively, the total number of
configurations available to this polymer is reduced (compared to a
free polymer) by the power law factors $\baseinpore^{-\gamma_u}$ and
$(N-\baseinpore)^{-\gamma_u}$~\cite{EisenrieglerPolymers}. Here, the exponent $\gamma_u$ depends on
the asymptotic statistical properties of the polymer that are affected
only by the spatial dimensionality and a possible self-avoidance
interaction ($\gamma_u=1/2$ for an ideal, noninteracting chain). As a
consequence, the entropic barrier experienced by the translocating
polymer (i.e., the difference in free energy between a polymer
that just entered the pore ($\baseinpore=1$) and a polymer with
$\baseinpore$ bases on the \textit{trans} side) has the shape
\begin{equation}\label{eq_logarithmiclandscape}
F(\baseinpore) = \gamma \, k_{B}T \, \ln [(N-\baseinpore)\baseinpore/N]
\end{equation}
with $\gamma=\gamma_u\gtrapprox1/2$. The maximum of this barrier at
$\baseinpore=N/2$ depends logarithmically on $N$, with $\gamma_u$ as a
prefactor. Modeling the translocation process as a one-dimensional
diffusion across this entropic barrier is an appropriate description,
if the translocation process is adiabatically slow, e.g. due to
friction at the pore, such that the polymer ends on each side sample
many different configurations during the time required to translocate
a macroscopic portion of the polymer. It has been established that if
entropy reduction is the only barrier, translocation is purely
diffusive (i.e., the translocation times scale as $N^2$) in the limit
of zero voltage and ballistic (i.e., the translocation times scale as
$N$ for long polymers) at finite voltages independent of the
characteristics of the polymer model (i.e., independent of the precise
value of the exponent
$\gamma_u$)~\cite{ChuangKantorKardarAnomalousDynamics}. However, there
is still an ongoing debate about what effect the actual polymer dynamics may
have on the translocation time distributions under conditions where
the adiabatic approximation breaks down, such that the polymer
dynamics is directly coupled to the translocation
dynamics~\cite{MuthukumarPRL2001,MilchevBinderBhattacharya2004,MetzlerKlafterAnomalousTranslocation2003,KantorKardar2004,MatsuyamaTranslocationKinetics2004,PanjaBarkemaTranslocationScaling2007,ChatelainKantorKardar2008}.
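The shape of the barrier in Eq.~(\ref{eq_logarithmiclandscape}) is easy to verify numerically; the sketch below (with $\gamma=\gamma_u=1/2$ and $k_BT=1$, and an arbitrary chain length chosen by us) confirms that the maximum sits at $\baseinpore=N/2$ with height $\gamma_u\ln(N/4)$:

```python
import math

def entropic_barrier(m, N, gamma=0.5, kT=1.0):
    """Eq. (logarithmic landscape): F(m) = gamma * kT * ln((N - m) m / N),
    the entropic free-energy barrier for m translocated monomers,
    1 <= m <= N - 1."""
    return gamma * kT * math.log((N - m) * m / N)

N = 1000
# Locate the barrier maximum by direct enumeration.
m_star = max(range(1, N), key=lambda m: entropic_barrier(m, N))
print(m_star, entropic_barrier(m_star, N))  # N/2, gamma*ln(N/4)
```

The logarithmic $N$-dependence of this maximum is what makes the barrier only weakly rate-limiting for long chains, in contrast to the extensive barriers generated by base pairing discussed below.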
Here, we focus on a different, but similarly challenging theoretical
question, which arises when the translocating polymers are structured
heteropolymers. This issue has obtained some
experimental~\cite{DeamerHairpinsNanopore2001,MellerHairpinUnzipping2004,SauerBudgeHairpinUnzipping2003}
and theoretical~\cite{SauerBudgeHairpinUnzipping2003,GerlandTranslocation,BundschuhCoupledDynamics} attention but
far less than the case of unstructured molecules.
In particular, we consider a polynucleotide, an RNA or
a single-stranded DNA, consisting of a specific sequence of individual
nucleotides, i.e. A, C, G, and U for the case of RNA. For simplicity,
we will loosely use `RNA' to refer to both RNA and single-stranded DNA
in this article, as the biochemical difference between these
polynucleotides is insignificant for the questions we address. RNA
molecules have a strong propensity to form intramolecular
Watson-Crick, i.e., G--C and A--U, base pairs. The formation of such
base pairs forces the molecules to fold into sequence-dependent
structures, which are characterized by their basepairing pattern. The
naturally evolved sequences of structural RNA's, e.g. ribosomal RNA,
are biased to stably fold into particular, functional structures,
whereas the sequences of many other RNA's, e.g. most messenger RNA's,
primarily encode information, not structure. The structural features
of this latter class can be modelled via the ensemble of random RNA
sequences~\cite{HiggsRandomRNA1996,HiggsReview,BundschuhStatisticalMechanicsRNA}. Here, we characterize the translocation dynamics of
this class, focusing on the slow translocation limit. We identify
nontrivial translocation behavior, and study the physical origin of
this behavior.
Even with a random sequence, a single RNA molecule may spend most of
the time in a dominant basepairing pattern (`glassy
behavior'~\cite{HiggsRandomRNA1996}). Or else it may sample a
promiscuous array of alternative structures with different
shapes~\cite{deGennesMoltenRNA}. The transition between these two
types of behavior occurs as a function of temperature, with low
temperatures favoring glassy
behavior~\cite{BundschuhStatisticalMechanicsRNA,PagnaniParisiRNAGlass2000,BundschuhPhasesRNA}. It
is interesting to ask whether this transition is reflected also in the
translocation behavior, and if so, how?
Generally, if a folded molecule is to translocate through a very
narrow pore that allows only single strands to pass, it has to break
its base pairs in the process. This yields a coupling between the
observed translocation dynamics and the base pairing properties of the
molecule~\cite{BundschuhCoupledDynamics}. In this system, the
separation of the polymer into a {\it cis} and a {\it trans} section
has an additional effect, namely that bases on each side of the pore
can only pair with bases on the same side of the pore, thus limiting
the possible pairing partners. On average, this restriction in the
base pairing pattern again is believed to lead to a free energy
barrier that is logarithmic in the length of the polymer (see below
and~\cite{BundschuhStatisticalMechanicsRNA}). Thus, at least
superficially, the problem of a structured RNA molecule translocating
through a nanopore is mathematically similar to the problem of
homopolymer translocation, even though the physical origin of the
logarithmic barrier is completely different in nature. However, the
problem is deeper than this analogy suggests: while the logarithmic
barrier is insignificant for the translocation of homopolymers (see
above), we will see below that for structured heteropolymers the
translocation dynamics is drastically affected. This is a consequence
of the fact that in the structured case, the prefactor $\gamma$ of the
logarithmic barrier is both bigger in magnitude (such that it exceeds
a critical threshold) and dependent on temperature.
The rest of this manuscript is organized as follows. In
Sec.~\ref{sec-methods}, we lay out our model assumptions and the
general theoretical framework used here to describe the translocation
dynamics, review the relevant aspects of the statistical physics of
RNA folding, and then link the folding and translocation
characteristics of random RNA. In
Sec.~\ref{sec_results} we first explore the translocation dynamics of
random RNA sequences numerically, and identify an anomalous scaling of
the typical translocation time with the length of the RNA. Then, we
provide some theoretical insight into the origin of this anomalous
scaling in the discussion. Sec.~\ref{sec_conclusion} summarizes our
results and provides an outlook to future work.
\begin{figure}[tb]
\begin{indented}
\item[]\includegraphics[width=8cm]{figure1}
\end{indented}
\caption{Sketch of a structured RNA molecule translocating through a
narrow pore, which allows single but not double strands to
pass. Translocation can be driven by an applied voltage acting on
the negative charges of the RNA backbone. An appropriate reaction
coordinate for the translocation process is the number of bases $m$
that have reached the {\it trans} side. If the translocation is
sufficiently slow, for instance due to molecular friction at the
pore or energetic barriers caused by basepairing, $m$ becomes the
only relevant degree of freedom. In this slow translocation limit,
there is sufficient time for the base-pairing patterns on the {\it
cis} and {\it trans} sides to reoptimize whenever $m$ changes.}
\label{fig1}
\end{figure}
\section{Materials and methods}
\label{sec-methods}
\subsection{Translocation dynamics: general framework}
\label{sec-translocation-1}
As illustrated in figure~\ref{fig1}, we consider a polynucleotide
translocating from the {\it cis} to the {\it trans} side of a pore in
a membrane. The pore is so narrow that only a single strand of the
polynucleotide can pass through, and hence only unpaired bases can
enter the pore. If an external electric voltage $V$ is applied across
the pore, translocation is biased towards the positive terminal, since
RNA has a negatively charged backbone. The translocation process has a
natural ``reaction coordinate'': the number of bases $\baseinpore(t)$
that have reached the {\it trans} side at time $t$. For simplicity we
will consider an ideal pore with a negligible depth, i.e. we assume
that the remaining $N-\baseinpore$ bases are all exposed on the {\it
cis} side and none reside within the pore. In general, the
translocation process cannot be described solely by a dynamic equation
for the coordinate $m$, since the spatial and basepairing degrees of
freedom of the polymer are coupled to $\baseinpore(t)$,
see~\cite{BundschuhCoupledDynamics}. However, under conditions where
the translocation process is sufficiently slow, the translocation
dynamics becomes effectively one-dimensional, as $\baseinpore(t)$
reduces to a stochastic hopping process in an appropriate
one-dimensional free energy landscape $F(\baseinpore)$. Such a
description is appropriate if the base-pairing patterns
on the {\it cis} and {\it trans} sides have sufficient time to
reoptimize whenever $\baseinpore$ changes. Slow translocation arises
when the molecular friction at the pore is large, the voltage bias $V$
is small, and the energetic barriers due to basepairing are
significant. Throughout the present paper, we focus entirely on this
slow translocation limit.
With the above assumptions, the stochastic translocation process is described by a master equation for $P(m,t)$, the probability to find an RNA molecule with a given sequence in a state with $m$ bases on the {\it trans} side at time $t$. This master equation takes the general form
\begin{eqnarray}
\label{master-eq}
\partial_{t} P(\baseinpore,t) &=&
k_{+}(\baseinpore\!-\!1)\,P(\baseinpore\!-\!1,t) \\
& & { } + k_{-}(\baseinpore\!+\!1)\,P(\baseinpore\!+\!1,t) \nonumber \\
& & { } - \left[k_{+}(\baseinpore) + k_{-}(\baseinpore)\right] P(\baseinpore,t) \nonumber
\end{eqnarray}
with a set of ``hopping'' rates $k_{+}(\baseinpore)$ and $k_{-}(\baseinpore)$ that depend explicitly on the translocation coordinate $m$. Here, $k_{+}(\baseinpore)$ is the rate to translocate the base with index $\baseinpore+1$ from the {\it cis} to the {\it trans} side, whereas $k_{-}(\baseinpore)$ is the rate at which base $m$ translocates back from the {\it trans} to the {\it cis} side.
The hopping rates also depend on the voltage bias $V$, the temperature $T$, and the nucleotide sequence of the RNA. In other words, at a given voltage bias and temperature, we need to obtain a set of $2N$ hopping rates for each RNA sequence, such that (\ref{master-eq}) describes the translocation dynamics. We then want to characterize the translocation behavior for the ensemble of random sequences.
If the $\baseinpore$-dependence of the hopping rates is dropped, $k_{+}(\baseinpore)\equiv k_{+}$ and $k_{-}(\baseinpore)\equiv k_{-}$, (\ref{master-eq}) describes a homogeneous drift-diffusion process and becomes equivalent to the Fokker-Planck equation
\begin{equation}
\partial_{t} P(x,t) = D\,\partial_{x}^2 P(x,t) - v\,\partial_{x} P(x,t)
\label{Eq_Drift_Diffusion}
\end{equation}
in the continuum limit, where $\baseinpore$ is replaced by a
continuous reaction coordinate $0<x<N$. Here, $D$ and $v$ are the
effective diffusion constant and drift velocity, respectively. As was
shown by Lubensky and Nelson~\cite{LubenskyNelsonDrivenTranslocation},
past translocation experiments with unstructured single-stranded
polynucleotides are quantitatively consistent
with~(\ref{Eq_Drift_Diffusion}): The experimental distribution of
translocation times $p(\tau)$ is well described by the corresponding
distribution from (\ref{Eq_Drift_Diffusion}), which is determined by
the probability current into the absorbing boundary at $x\!=\!N$.
For structured RNA's, we express the hopping rates of (\ref{master-eq}) more explicitly in the form
\begin{eqnarray}
\label{hopping-rates}
k_{+}(\baseinpore) &=& k_{0} \cdot w_{cis}(\baseinpore) \cdot \exp\left(\eta\,\frac{q_{\rm eff}V}{k_{B}T}\right) \\
k_{-}(\baseinpore) &=& k_{0} \cdot w_{trans}(\baseinpore) \cdot \exp\left((\eta-1)\,\frac{q_{\rm eff}V}{k_{B}T}\right) \nonumber \;.
\end{eqnarray}
Here $k_{0}$ denotes the basic ``attempt'' rate for the translocation
of a single unpaired base, while $w_{cis}(\baseinpore)$ and
$w_{trans}(\baseinpore)$ denote the probability that the base
attempting to translocate is indeed not paired. The exponential
(Arrhenius) factors account for the voltage bias $V$ across the pore,
which acts on the effective charge $q_{\rm eff}$ of a
nucleotide~\cite{KeyserDirectForceMeasurement,ShklovskiiEffectiveCharge}
(note that the applied voltage drops primarily directly across the
pore, while the nucleotides do not experience a significant
electrostatic force on either side). The dimensionless factor $\eta$
is a measure of the position of the microscopic transition state that
limits the rate for the crossing of a single nucleotide. More
precisely, $\eta$ is the relative distance of this transition state
from the entrance of the pore; for a symmetric pore, $\eta=1/2$ (see
figure~\ref{fig_etasketch}).
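As a concrete illustration, the rates (\ref{hopping-rates}) can be assembled from precomputed unpaired-base probabilities. The following Python sketch (helper and variable names are ours, not from any standard package) expresses the voltage bias through the dimensionless combination $q_{\rm eff}V/k_{B}T$:

```python
import numpy as np

def hopping_rates(w_cis, w_trans, qV_over_kT, eta=0.5, k0=1.0):
    """Assemble the rate arrays of (\\ref{hopping-rates}) from precomputed
    unpaired-base probabilities w_cis[m] and w_trans[m] (m = 0 .. N-1;
    w_trans[0] is unused because of the reflecting boundary at m = 0)."""
    w_cis = np.asarray(w_cis, dtype=float)
    w_trans = np.asarray(w_trans, dtype=float)
    k_plus = k0 * w_cis * np.exp(eta * qV_over_kT)
    k_minus = k0 * w_trans * np.exp((eta - 1.0) * qV_over_kT)
    return k_plus, k_minus
```

Note that the ratio $k_{+}(m)/k_{-}(m)$ carries the full factor $e^{q_{\rm eff}V/k_{B}T}$, while $\eta$ only controls how the bias is split between the two rates.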
\begin{figure}
\begin{indented}
\item[]\includegraphics[width=0.8\columnwidth]{figure2}
\end{indented}
\caption{Illustration of the voltage-dependence of the translocation
  rates in (\ref{hopping-rates}). Even in the absence of secondary
structure, the translocation of a single base is envisaged as a
barrier crossing process. The coarse-grained, discrete reaction
coordinate $m$ (the number of translocated bases) then corresponds
to the minima of a continuous microscopic free energy landscape. The
distance of the minima reflects the base-to-base distance $b$ of the
RNA. The position of the transition state, at a fractional distance
$\eta$ from the minimum to the left, is an unknown microscopic
parameter which determines how the biasing effect of the applied
voltage is split between the forward and reverse translocation
rates: When the unbiased landscape of (a) is tilted by the applied
voltage as shown in (b), the reduction in the free energy barrier
for forward translocation is proportional to $\eta$, while the
increase in the barrier for reverse translocation is proportional to
$(1-\eta)$.\label{fig_etasketch}}
\end{figure}
For a fixed but arbitrary set of hopping rates $k_{+}(\baseinpore)$,
$k_{-}(\baseinpore)$, the (thermal) average of the translocation time
can be calculated analytically using the mean first passage time
formalism~\cite{GardinerStochasticMethods}. One obtains
\begin{equation}
\label{MFPT}
\langle\tau(\baseinpore_{0})\rangle = \sum_{m=\baseinpore_{0}}^{N-1}\sum_{l=0}^{m} \frac{1}{k_{+}(l)} \prod_{j=l+1}^{m} \frac{k_{-}(j)}{k_{+}(j)} \;.
\end{equation}
This equation assumes that at time $t=0$, the translocation process
has already proceeded to the translocation coordinate
$\baseinpore(0)=\baseinpore_{0}$. While the entire translocation
process consists of an entrance stage (with possible failed attempts),
followed by a passage stage, our focus here is only on the
latter. More precisely, we are interested in the detailed passage
dynamics of the successful translocation events. Equation~(\ref{MFPT})
assumes reflecting boundary conditions at $\baseinpore=0$, i.e. the
molecule is only allowed to exit the pore on the {\it trans} side, as
in previous theoretical studies~\cite{MuthukumarPRL2001,ChuangKantorKardarAnomalousDynamics,MetzlerKlafterAnomalousTranslocation2003,KantorKardar2004}. Experimentally, this
corresponds to a situation where, e.g., a protein or a small bead is
attached to the {\it trans} end of the molecule, preventing exit to
the {\it cis} side. In particular at low driving voltages, such a
``road block'' will be experimentally required, since otherwise it
would not be possible to separate failed translocation attempts from
full translocation events to the {\it trans} side. At larger driving
voltages, the boundary condition at $\baseinpore=0$ is expected to be
less relevant, since molecules are then unlikely to exit the pore on
the {\it cis} side once they are inserted into the pore. At the other
end, $\baseinpore=N$, (\ref{MFPT}) assumes an absorbing boundary,
i.e., the translocation time $\tau$ is defined as the time when the
state $\baseinpore=N$ is first reached.
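For any fixed set of rates, the double sum in (\ref{MFPT}) can be evaluated in $O(N^2)$ operations by accumulating the product of rate ratios backwards. A minimal Python sketch (function and variable names are ours):

```python
def mean_first_passage_time(k_plus, k_minus, m0=0):
    """Evaluate the mean first passage time (\\ref{MFPT}) for arbitrary
    hopping rates k_plus[m], k_minus[m] (m = 0 .. N-1), with a reflecting
    boundary at m = 0 and an absorbing boundary at m = N = len(k_plus)."""
    N = len(k_plus)
    tau = 0.0
    for m in range(m0, N):
        inner = 1.0 / k_plus[m]      # l = m term (empty product)
        ratio_prod = 1.0
        for l in range(m - 1, -1, -1):
            # extend the product of k_-(j)/k_+(j) down to j = l+1
            ratio_prod *= k_minus[l + 1] / k_plus[l + 1]
            inner += ratio_prod / k_plus[l]
        tau += inner
    return tau
```

Two limits provide quick checks: for purely forward hopping the result is $N/k_{+}$, and for unbiased unit rates it reduces to $N(N+1)/2$, the diffusive scaling mentioned in the introduction.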
To determine the hopping rates (\ref{hopping-rates}) for an RNA molecule with a given sequence, we first need to calculate the probabilities $w_{cis}(\baseinpore)$ and $w_{trans}(\baseinpore)$. To this end, we review in the following section the physics of RNA folding and the characteristics of random RNA sequences, before we return to link these characteristics to the translocation dynamics in Sec.~\ref{sec-translocation-2}.
\subsection{Folding of random RNA sequences}
\label{sec-random-RNA}
In this section, we will review the aspects of the statistical physics
of structures of random RNA molecules that are relevant for our
study. We will follow the bulk of the previous literature and
exclusively focus on RNA secondary
structures~\cite{HiggsRandomRNA1996,HiggsReview,PagnaniParisiRNAGlass2000,BundschuhPhasesRNA,MarinariRNAZeroTemperature2002,KrzakalaMezardMuller2002,LeihanRNAGlass2006}. An
RNA secondary structure is the collection of all base pairs formed by
a molecule. Formally, it can be described as a set
$\structure=\{(i_1,j_1),\ldots,(i_n,j_n)\}$ of all pairs of indices
$(i_k,j_k)$ (with $i_k<j_k$) of bases that are paired. A pairing
configuration is only considered to be a valid secondary structure if
it fulfils two conditions: (i) Each base is paired with at most one
other base. (ii) If $(i,j)$ is a base pair and $(k,l)$ is a base pair
with $i<k$, they have to be either nested, i.e., fulfil $i<k<l<j$, or
independent, i.e., fulfil $i<j<k<l$. Forbidden base pairing
configurations with $i<k<j<l$ are called pseudo-knots. Restricting the
allowable secondary structures to only those that contain neither base
triplets nor pseudo-knots is an approximation since both structural
elements do occur in actual structures. However, the approximation is
reasonable since base triplets and pseudo-knots are believed to be
rare in natural structures and can be effectively suppressed by
performing experiments in the absence of multi-valent ions, such as
Mg$^{2+}$~\cite{TinocoHowRNAFolds}.
The energy $E[\structure]$ of a given structure $\structure$ depends
on the sequence $\base_1\ldots\base_N$ of the RNA molecule. For
quantitative analyses such as the prediction of the actual secondary
structure of an RNA
molecule~\cite{HofackerVienna1994,MfoldServer2003}, very detailed
energy models with hundreds of parameters have been
developed~\cite{MathewsRNAParameters1999}. Since we are interested in
more generic questions such as the scaling behavior of translocation
times, we will use a strongly simplified energy model that focuses on
the base pairing alone. More specifically, we will assign an energy
solely derived from the base pairs formed in the structure, i.e.,
\begin{equation}
E[\structure]=\sum_{(i,j)\in \structure}\varepsilon_{i,j}
\end{equation}
where $\varepsilon_{i,j}$ is the energy for the formation of a base
pair between base $b_i$ and
$b_j$. Following~\cite{BundschuhStatisticalMechanicsRNA,BundschuhPhasesRNA}
we will even ignore the differences between the stability of different
Watson-Crick base pairs and use the simplest possible model
\begin{equation}\label{eq_matchmismatchenergies}
\varepsilon_{i,j}=\left\{\begin{array}{ll}-\ematch&\mbox{$b_i$ and
$b_j$ are a Watson-Crick
pair}\\\emismatch&\mbox{otherwise}\end{array}
\right.
\end{equation}
where the match and mismatch energies $\ematch$ and $\emismatch$ are positive constants. Such a simplified energy model clearly is not suitable for the quantitative prediction of the behavior of an individual RNA molecule. However, the universal properties of the RNA folding problem, such as the thermodynamic phases, the topology of the phase diagram, and the critical exponents characterizing these phases in the thermodynamic limit are expected to be correctly captured.
For this as well as other more complicated energy models, the
partition function of an RNA molecule of a given sequence
$\base_1\ldots\base_N$ can be calculated exactly in polynomial
time~\cite{McCaskillRNAPartitionFunction}. This can be done by
introducing as an auxiliary quantity the partition function $Z_{i,j}$
for the substrand $\base_i\ldots\base_j$ of the original molecule. The
$j$th base can either be unpaired or paired with the $k$th base, where
$k$ takes all of the possibilities from $i$ to $j-1$. If the $j$th
base is unpaired, the allowable structures are exactly the allowable
structures for the substrand $\base_i\ldots\base_{j-1}$. If the $j$th
base is paired with the $k$th base, the exclusion of pseudo-knots
implies that in the presence of the $(k,j)$ base pair, any structure
is possible on the substrand $\base_i\ldots\base_{k-1}$ and on the
substrand $\base_{k+1}\ldots\base_{j-1}$ but base pairs between these
two substrands are forbidden. That yields the recursion equation
\begin{equation}
\label{eq_rnarecursion}
Z_{i,j}=Z_{i,j-1}+\sum_{k=i}^{j-1}Z_{i,k-1}e^{-\beta\varepsilon_{k,j}}Z_{k+1,j-1}
\end{equation}
where $\beta=(k_BT)^{-1}$. Since the substrands referred to on the
right hand side of this equation are shorter than the substrands
referred to on the left hand side, this recursion equation can be used
to start from the trivial single and two base substrands and calculate
the partition functions for the increasingly larger substrands. The
partition function $Z_{1,N}$ is then the partition function of the
whole molecule. Since in this process $O(N^2)$ of the $Z_{i,j}$ have
to be calculated with each calculation requiring one summation over
the index $k$, the total computational complexity for this calculation
is $O(N^3)$.
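As an illustration, the recursion (\ref{eq_rnarecursion}) with the energy model (\ref{eq_matchmismatchenergies}) can be coded directly. The sketch below uses our own naming conventions and a 0-based, half-open substrand indexing; it omits the low-temperature rescaling of the $Z_{i,j}$ that a production implementation would need to avoid overflow:

```python
import numpy as np

def pair_energy(a, b, e_match=1.0, e_mismatch=1.0):
    # simplified pairing energies (\ref{eq_matchmismatchenergies}):
    # -e_match for a Watson-Crick pair, +e_mismatch otherwise
    watson_crick = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    return -e_match if (a, b) in watson_crick else e_mismatch

def partition_functions(seq, beta=1.0):
    """Fill the table of substrand partition functions via the recursion
    (\\ref{eq_rnarecursion}).  Indexing is 0-based and half-open: Z[i][j]
    is the partition function of bases seq[i:j]; empty and single-base
    substrands have Z = 1.  Total cost is O(N^3)."""
    N = len(seq)
    Z = np.ones((N + 1, N + 1))
    for length in range(2, N + 1):       # grow substrand length
        for i in range(N - length + 1):
            j = i + length               # substrand seq[i:j]
            total = Z[i][j - 1]          # last base (index j-1) unpaired
            for k in range(i, j - 1):    # last base paired with base k
                total += (Z[i][k]
                          * np.exp(-beta * pair_energy(seq[k], seq[j - 1]))
                          * Z[k + 1][j - 1])
            Z[i][j] = total
    return Z
```

For the two-base sequence GC the table reproduces $Z=1+e^{\beta\ematch}$ by hand: the molecule is either unpaired or forms a single G--C pair.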
Through various numerical and analytical arguments it has been
established that RNA secondary structures undergo a glass transition
between a high temperature molten and a low temperature glassy
phase~\cite{HiggsRandomRNA1996,BundschuhStatisticalMechanicsRNA,PagnaniParisiRNAGlass2000,BundschuhPhasesRNA,KrzakalaMezardMuller2002,HartmannGlassComment2001,PagnaniReply2001}. In
the (high temperature) molten phase the energetic differences between
different structures become irrelevant and configurational entropy is
the main contributor to the free energy of the structural
ensemble~\cite{deGennesMoltenRNA} (it is to be noted that our
simplified model of RNA secondary structures does not show a
denaturation transition where base pairing itself becomes unfavorable
and the molecule becomes completely unstructured. Thus, ``high
temperature'' in terms of real RNA molecules refers to temperatures
still below the denaturation temperature, but close enough so that the
energetic differences between different base pairs are smeared
out). In the glassy phase, one or a few structures (determined by the
specific sequence of the molecule) become dominant in the thermal
ensemble --- the molecule ``freezes'' into those structures.
The molten (high temperature) phase of RNA secondary structures is
completely understood analytically~\cite{deGennesMoltenRNA}. Since in
the molten phase by definition the base pairing energetics do not play
a role any more, the behavior of the molten phase can be determined by
setting all base pairing energies equal, i.e., by choosing
$\varepsilon_{i,j}=-\varepsilon_0$ with some positive
$\varepsilon_0$. Under this choice the partition functions $Z_{i,j}$
no longer depend on the nucleotide sequence and thus become
translationally invariant, i.e., $Z_{i,j}\equiv Z(j-i+1)$. The
recursion equation~(\ref{eq_rnarecursion}) then simplifies
to
\begin{equation}
Z(N+1)=Z(N)+q\sum_{k=1}^{N}Z(k-1)Z(N-k)
\end{equation}
where $q\equiv\exp(\beta\varepsilon_0)$ is the Boltzmann factor
associated with a base pair. This recursion equation can be solved in
the limit of large $N$ and yields
\begin{equation}
\label{Z-molten}
Z(N)\approx A N^{-\gamma_m} z_0^N
\end{equation}
where $A$ and $z_0$ depend on the Boltzmann factor $q$. The exponent $\gamma_m=3/2$, however, is universal and is characteristic of the molten phase.
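The universality of $\gamma_m=3/2$ is easy to verify numerically: iterate the molten-phase recursion for the rescaled weights $Y(N)=Z(N)/z_0^N$, with the known growth rate $z_0=1+2\sqrt{q}$ (obtained from the generating function of the recursion), and extract the exponent from the ratio $Y(N)/Y(N/2)=2^{-\gamma_m}$. A Python sketch (our construction, used here only as a consistency check):

```python
import numpy as np

def molten_gamma(q=1.0, N=2000):
    """Estimate gamma_m from the molten-phase recursion
    Z(N+1) = Z(N) + q * sum_{k=1}^{N} Z(k-1) Z(N-k).
    Working with Y(N) = Z(N)/z0^N, z0 = 1 + 2 sqrt(q), avoids overflow;
    asymptotically Y(N) ~ A N^{-gamma_m}."""
    z0 = 1.0 + 2.0 * np.sqrt(q)
    Y = np.zeros(N + 1)
    Y[0] = 1.0            # empty strand
    Y[1] = 1.0 / z0       # single base, Z(1) = 1
    for n in range(1, N):
        # sum_{k=1}^{n} Y(k-1) Y(n-k) as a dot product with the reversed array
        conv = np.dot(Y[0:n], Y[n - 1::-1])
        Y[n + 1] = Y[n] / z0 + (q / z0**2) * conv
    # effective exponent from Y(N)/Y(N/2) = 2^{-gamma}
    return -np.log(Y[N] / Y[N // 2]) / np.log(2.0)
```

Already at $N$ of a few thousand the estimate lies close to the exact value $3/2$, independent of the Boltzmann factor $q$.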
\subsection{Translocation of random RNA sequences}
\label{sec-translocation-2}
In the context of polymer translocation, it is necessary to determine
what effect the pore has on the possible secondary structures of the
molecule. If direct interactions with the pore are ignored, the only
effect of the pore is that it divides the molecule into two segments,
namely the \textit{trans} part with $\baseinpore$ bases and the
\textit{cis} part with $N-\baseinpore$ bases. Each part of the
molecule can still form RNA secondary structures, but base pairs
between a base on the \textit{trans} side and a base on the
\textit{cis} side become impossible. This constraint results in a free
energy cost. In the entropically dominated molten phase a reduction in
the number of possibilities for base pairing will decrease the
entropy; in the energetically dominated glassy phase, a reduction in
the number of possibilities to find well matching substrands will
increase the energy. In both cases, the free energy cost provides a barrier to the translocation process, and we refer to the cost as the pinch free energy $F(\baseinpore)$. The pinch free energy depends explicitly on our reaction coordinate $\baseinpore$ and hence constitutes a free energy landscape for the translocation process\footnote{To keep the notation concise, we suppress the dependence of the pinch free energy $F(\baseinpore)$ on the total sequence length $N$.}.
With the help of the partition function $Z_{i,j}$ introduced in the previous section, the pinch free energy can be easily calculated:
The partition function for the RNA molecule at position $\baseinpore$ in the pore has the product form $Z_{1,m}Z_{m+1,N}$ (the structures on the \textit{cis} and \textit{trans} sides are uncorrelated), whereas the partition function of the unconstrained RNA in solution is $Z_{1,N}$. The free energy difference between these states is the pinch free energy,
\begin{equation}
\label{pinchFE}
F(\baseinpore) = -k_{B}T \left[\ln \left(Z_{1,m}Z_{m+1,N}\right)-\ln Z_{1,N}\right] \;.
\end{equation}
Using the definition of the partition function, we can also establish the explicit link of the pinch free energy landscape to the translocation dynamics model of Sec.~\ref{sec-translocation-1}. To this end, we need to determine the probabilities $w_{cis}(\baseinpore)$ and $w_{trans}(\baseinpore)$ in (\ref{hopping-rates}). Since $Z_{i,j}$ represents the total statistical weight of all permitted basepairing patterns for the RNA substrand from base $i$ to base $j$, the probability $w_{cis}(\baseinpore)$ for the base immediately in front of the pore on the {\it cis} side to be unpaired is given by
\begin{equation}
\label{wcis}
w_{cis}(\baseinpore) = \frac{Z_{\baseinpore+2,N}}{Z_{\baseinpore+1,N}} \;.
\end{equation}
Similarly, the probability for the base immediately in front of the pore on the {\it trans} side to be unpaired is given by
\begin{equation}
\label{wtrans}
w_{trans}(\baseinpore) = \frac{Z_{1,\baseinpore-1}}{Z_{1,\baseinpore}} \;.
\end{equation}
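Given a precomputed table of substrand partition functions, e.g. from the recursion (\ref{eq_rnarecursion}), the pinch free energy (\ref{pinchFE}) and the probabilities (\ref{wcis}), (\ref{wtrans}) are simple ratios. A sketch in Python, assuming a 0-based, half-open indexing convention for the table (our choice, not standard):

```python
import numpy as np

def pinch_free_energy(Z, m, kT=1.0):
    """Pinch free energy (\\ref{pinchFE}) from a table Z[i][j] of substrand
    partition functions, where Z[i][j] covers bases i .. j-1 and an empty
    substrand has Z = 1."""
    N = Z.shape[0] - 1
    return -kT * (np.log(Z[0][m] * Z[m][N]) - np.log(Z[0][N]))

def w_cis(Z, m):
    """Probability (\\ref{wcis}) that the base in front of the pore on the
    cis side is unpaired."""
    N = Z.shape[0] - 1
    return Z[m + 1][N] / Z[m][N]

def w_trans(Z, m):
    """Probability (\\ref{wtrans}) that the base in front of the pore on
    the trans side is unpaired (requires m >= 1)."""
    return Z[0][m - 1] / Z[0][m]
```

By construction $F(0)=F(N)=0$: the pinch free energy vanishes when the whole molecule sits on one side of the pore.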
Together, (\ref{master-eq}), (\ref{hopping-rates}), (\ref{eq_rnarecursion}), (\ref{wcis}), and (\ref{wtrans}) fully specify the translocation dynamics of structured RNA molecules within our model. The general form (\ref{MFPT}) for the average translocation time then simplifies to~\cite{BundschuhCoupledDynamics}
\begin{eqnarray}
\label{EQmeantau}
k_{0} \langle\tau\rangle &=& e^{-\eta\,\frac{q_{\rm eff}V}{k_{B}T}}
\sum_{m=\baseinpore_{0}}^{N-1} \sum_{\ell=0}^{m} e^{-(m-\ell)\frac{q_{\rm eff}V}{k_{B}T}}
\frac{Z_{1,\ell}Z_{\ell+1,N}}{Z_{1,m}Z_{m+1,N}} \nonumber \\
&=& e^{-\eta\,\frac{q_{\rm eff}V}{k_{B}T}}
\sum_{m=\baseinpore_{0}}^{N-1} \sum_{\ell=0}^{m}
e^{\frac{F(m)-F(\ell)-(m-\ell) q_{\rm eff}V}{k_{B}T}}
\end{eqnarray}
using the free energy $F(\baseinpore)$ as defined in (\ref{pinchFE}).
It is now evident that the translocation dynamics of
Sec.~\ref{sec-translocation-1} corresponds to a random walk in the pinch free energy landscape which is tilted by the applied voltage.
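Beyond the mean (\ref{MFPT}), the full distribution of translocation times over this tilted landscape can be sampled directly. A minimal kinetic Monte Carlo (Gillespie) sketch for drawing individual passage times from given rate arrays (our helper, not part of the model definition):

```python
import numpy as np

def sample_translocation_time(k_plus, k_minus, m0=0, rng=None):
    """Draw one stochastic passage time for the hopping process
    (\\ref{master-eq}), with a reflecting boundary at m = 0 and an
    absorbing boundary at m = N = len(k_plus)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(k_plus)
    m, t = m0, 0.0
    while m < N:
        kp = k_plus[m]
        km = k_minus[m] if m > 0 else 0.0    # reflecting boundary at m = 0
        total = kp + km
        t += rng.exponential(1.0 / total)    # exponential waiting time
        m += 1 if rng.random() < kp / total else -1
    return t
```

Feeding this sampler the rates (\ref{hopping-rates}) yields the empirical distribution $p(\tau)$, not just its mean, which is what translocation experiments actually record.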
Equation (\ref{pinchFE}) can be used to compute the free energy landscape for a specific RNA sequence. To characterize the typical translocation behavior of structured RNA molecules, we need to generate such landscapes for a large sample from the ensemble of random sequences. We will take this numerical approach in section \ref{sec_numerics}. However, using (\ref{Z-molten}), we can analytically determine the typical form $F_{\mathrm{molten}}(\baseinpore)$ of the landscape in the molten phase,
\begin{eqnarray}
\label{Fmolten}
F_{\mathrm{molten}}(\baseinpore)&=&
-k_BT\ln\frac{Z(\baseinpore)Z(N-\baseinpore)}{Z(N)} \nonumber \\
&\approx&-k_BT\ln\frac{\baseinpore^{-\gamma_m}(N-\baseinpore)^{-\gamma_m}}{N^{-\gamma_m}} \nonumber \\
&=&\gamma_m k_B T\ln[\baseinpore(N-\baseinpore)/N] \;.
\end{eqnarray}
This is formally the same logarithmic free energy landscape as for the translocation of unstructured polymers, (\ref{eq_logarithmiclandscape}). However, its physical origin is completely different (namely, the structural entropy of base pairing configurations rather than the positional entropy of the backbone), and its prefactor $\gamma_m=3/2$ is larger, which will be important below.
It is interesting to note that the logarithmic behavior of
(\ref{Fmolten}) and the value of the prefactor can be physically
understood by realizing that the ensemble of secondary structures in
the molten phase corresponds to the ensemble of (rooted) branched
polymers: The number of possible configurations of a rooted branched
polymer of molecular weight $\baseinpore$ is known to scale like
$\baseinpore^{-3/2}$~\cite{LubenskyObukhovBranchedPolymers1981} (in
addition to the non-universal extensive factor) and thus the pinch
free energy landscape of a translocating RNA molecule in the molten
phase is the same as the landscape generated by cutting a branched
polymer of molecular weight $N$ into two rooted branched polymers of
molecular weights $\baseinpore$ and $N-\baseinpore$,
respectively~\cite{BundschuhStatisticalMechanicsRNA}.
\begin{figure}
\begin{indented}
\item[]\includegraphics[width=0.8\columnwidth]{figure3}
\end{indented}
\caption{Numerically determined prefactor \protect$\gamma(T)$ of the
logarithmic free energy landscape
(\ref{eq_logarithmiclandscape}) as a function of temperature
(most of the data from~\cite{BundschuhStatisticalMechanicsRNA}). The
statistical error of the data is on the order of the symbol size.
It can be seen that the prefactor takes the constant value $\frac{3}{2}$ in the high
temperature (molten) phase. In the low temperature (glassy) phase
the prefactor becomes temperature dependent and diverges. The
prefactors were determined by generating many random RNA sequences
with equal probability of the four bases of lengths $N=160$, $320$,
$640$, and $1280$, calculating the restricted partition functions
for the energy model (\ref{eq_matchmismatchenergies}) with
$\ematch=\emismatch$ via (\protect\ref{eq_rnarecursion}) and
extracting the pinch free energies at $\baseinpore=N/2$ via
(\ref{pinchFE}). The prefactor $a(T)$ of the logarithmic law for
the pinch free energy was determined by fitting such a logarithmic
law to the numerical data and the corresponding prefactor of the
logarithmic free energy landscape $\gamma(T)$ was calculated from
(\protect\ref{eq_gammafroma}).
\label{fig_prefactorvstemperature}}
\end{figure}
In the glassy phase the situation is much less clear, since there are
no analytical calculations of the typical pinch free energy for the
ensemble of random RNA sequences. Furthermore, different numerical
studies~\cite{BundschuhStatisticalMechanicsRNA,KrzakalaMezardMuller2002,LeihanRNAGlass2006},
which examined the maximal pinch (at $\baseinpore=N/2$), disagree on
whether $F(N/2)$ scales logarithmically or as a small power with the
sequence length $N$. One numerical argument in favor of a logarithmic
dependence is that different choices of the sequence disorder yield
different prefactors of the logarithm or different exponents in the
power law. While different prefactors of the logarithm are not
problematic, exponents that depend on the choice of the disorder
contradict the notion that exponents should be universal.
In~\cite{BundschuhStatisticalMechanicsRNA} the
dependence of the maximal pinch free energy on sequence length and
temperature was studied in detail and it was found that the dependence
of the maximal pinch free energy on sequence length can be described
rather well by a logarithmic law for all temperatures. At high
temperatures, the prefactor $a(T)$ of the logarithmic dependence is
$\frac{3}{2}k_B T$, as expected. However, at low temperatures this
prefactor ceases to be proportional to temperature and converges
toward a finite value at zero temperature. Thus, if we assume that
the entire averaged pinch free energy landscape still has the
logarithmic form of (\ref{eq_logarithmiclandscape}) in the glassy
phase, the logarithmic dependence of its maximum on sequence length
implies that the effective prefactor
\begin{equation}
\label{eq_gammafroma}
\gamma(T)=\frac{a(T)}{k_B T}
\end{equation}
equals $3/2$ at high temperatures and diverges as the temperature is lowered below the glass transition temperature. For random sequences with equal probability for all four bases and energies $\ematch=\emismatch$ this behavior is numerically
illustrated in figure~\ref{fig_prefactorvstemperature}.
\section{Results and discussion}
\label{sec_results}
\subsection{Numerical analysis}
\label{sec_numerics}
From the arguments in section~\ref{sec-methods}, one may expect that the translocation of structured
RNA molecules can be described as a one-dimensional diffusion process
in the logarithmic energy landscape
(\ref{eq_logarithmiclandscape}) with a temperature dependent,
potentially large prefactor $\gamma$. Of course, the scaling of the
translocation time with sequence length in such a landscape can be
derived analytically as a function of the prefactor $\gamma$.
However, there are several uncertainties in this description. First,
different numerical studies of the pinch free energy of random RNA
molecules do not even agree on the question of whether the maximum of the
landscape at $\baseinpore=N/2$ scales logarithmically or with a small
power of the sequence length in the glass
phase~\cite{BundschuhStatisticalMechanicsRNA,KrzakalaMezardMuller2002,LeihanRNAGlass2006}. Second,
even if the maximum of the landscape scales logarithmically, it has
not been established that the whole average landscape follows the
simple shape (\ref{eq_logarithmiclandscape}). Third, even if the
{\em average} landscape has the suggested shape, the landscape of any
given RNA molecule can differ significantly from the average
landscape. Thus, it is not obvious how the ensemble of translocation
times of actual landscapes is related to the translocation time over
the average landscape.
To clarify these points, we perform a detailed numerical study of the
translocation dynamics of random RNA molecules on the basis of the
model defined in sections \ref{sec-translocation-1} and
\ref{sec-translocation-2}. We generated free energy landscapes for
$2500$ different RNA molecules of length $N=1600$ using the partition
function recursion\footnote{During the calculation, the auxiliary
partition functions $Z_{i,j}$ are rescaled to avoid numerical
overflows due to the large exponential factors at low
temperatures.}, (\ref{eq_rnarecursion}), and the definition
(\ref{pinchFE}) of the pinch free energy. The RNA sequences are random
and uncorrelated, with equal probabilities of $1/4$ for each of the
four bases. We use the energy model (\ref{eq_matchmismatchenergies})
with $\ematch=\emismatch$ and quote all energies in units of
$\ematch$.
\begin{figure}
\begin{indented}
\item[]\includegraphics[width=0.8\columnwidth]{figure4}
\end{indented}
\caption{Numerically determined average free energy landscapes for
translocation of RNA molecules through a nanopore. For clarity, the
numerically determined free energies are averaged over ranges of the
reaction coordinate $\baseinpore$ of size $40$. The statistical
errors on the data are smaller than the size of the symbols. It can
be seen that the average free energy landscape follows the
logarithmic shape (\protect\ref{eq_logarithmiclandscape}) with
prefactors \protect$\gamma k_BT$ where \protect$\gamma$ is taken from
figure~\protect\ref{fig_prefactorvstemperature} up to irrelevant
additive constants.\label{fig_landscapes}}
\end{figure}
First of all, our ensemble of free energy landscapes allows us to
directly inspect the shape of the average
landscape. Figure~\ref{fig_landscapes} shows the pinch free energy
landscape $F(\baseinpore)$ averaged
over the $2500$ realizations of the random sequences (symbols).
Superimposed as lines are logarithmic energy landscapes as given by
(\ref{eq_logarithmiclandscape}) with prefactors $\gamma$ that are
directly obtained by multiplying the values shown in
figure~\ref{fig_prefactorvstemperature} by $k_BT$. These energy
landscapes are shifted by fitted offsets which reflect the behavior of
the pinching free energy at very small $\baseinpore$ and which are
irrelevant for the scaling behavior of the translocation dynamics. The
comparison indicates that the overall shape of the average free energy
landscapes is indeed the logarithmic one, even in the glassy
temperature regime of figure~\ref{fig_prefactorvstemperature} where
$\gamma(T)$ is significantly larger than $\frac{3}{2}$.
\begin{figure}
\begin{indented}
\item[]\includegraphics[width=0.8\columnwidth]{figure5}
\end{indented}
\caption{Histogram of thermally averaged translocation times for
$2500$ random sequences of length $N=1600$ at temperatures
$k_BT/\ematch=0.2$ and $k_BT/\ematch=1$. Note that counts in the
histogram for $k_BT/\ematch=1$
are rescaled by a factor of $10$ to fit into the same plot as the
histogram for $k_BT/\ematch=0.2$. It can
be seen that the distribution already at $k_BT/\ematch=1$ spans
several decades. At the low temperature $k_BT/\ematch=0.2$ the
distribution develops a very fat tail consisting of few sequences
with very long translocation times.\label{fig_tauhist}}
\end{figure}
Next, we examine the translocation times. For each sequence, we
calculate the thermal average of the translocation time
$\langle\tau\rangle$ using the exact expression (\ref{EQmeantau}). The
most straightforward quantity to extract from the 2500 translocation
times thus obtained would be the ensemble averaged translocation time
$\overline{\langle\tau\rangle}$, where the bar denotes averaging over
the sequence ensemble. However, as frequently observed in disordered
systems, the distribution of characteristic times develops a fat tail
at low temperatures, which renders the ensemble averaged translocation
time ill defined (see figure~\ref{fig_tauhist}). Instead, we must use a
definition for the typical value that does not rely on the existence
of the mean value. For instance, the median or the average of the
logarithm both provide a well-defined typical value even for fat
tailed distributions. Here, we use the average of the logarithm of the
translocation time, which can be interpreted as a typical effective
energy barrier.
Figure~\ref{fig_logscaling} shows the resulting ensemble averages
of the logarithms of the translocation times for different sequence
lengths of $N=50$, $100$, $200$, $400$, $800$, and $1600$ and for
temperatures of $k_B T/\ematch=0.1$, $0.13$, $0.17$, $0.2$, $0.3$, $0.6$, $0.8$, and $1.0$. Two features of figure~\ref{fig_logscaling} are immediately obvious. First, at all temperatures, the double logarithmic plot of translocation times as a function of sequence length is perfectly linear over the whole range of sequence lengths studied. Second, the slope of these lines is independent of temperature for large temperatures and increases sharply as the temperature is lowered.
To quantify the power laws and their temperature dependence, we
apply linear regression to our logarithmic data. The resulting slopes (i.e., exponents of the power law) are shown in table~\ref{tab_exponents}. It can be seen that {\em all} exponents are larger than two, i.e. the trivial diffusive scaling $\tau\sim N^2$ does not describe the translocation dynamics. In the high temperature regime, the exponent is clearly independent of temperature, while it becomes large and very sensitive to temperature in the low temperature regime. This salient feature in the translocation behavior of structured RNA molecules implies highly anomalous sub-diffusive dynamics for the translocation process. In the next section, we will discuss this behavior from a theoretical perspective.
\begin{figure}
\begin{indented}
\item[]\includegraphics[width=0.8\columnwidth]{figure6}
\end{indented}
\caption{Dependence of the typical translocation time on sequence
length for several different temperatures. To avoid problems with
fat tails of the distribution of the translocation times at low
temperatures, the ensemble average of the logarithms of the
translocation times is taken to determine the typical translocation
time. The statistical errors on the translocation times are on the
order of the size of the symbols. It can be seen that the typical
translocation time has a very clean power law dependence over the
whole range of sequence lengths and for all temperatures. The
translocation time is independent of temperature for large
temperatures and becomes very sensitive to temperature at low
temperatures.\label{fig_logscaling}}
\end{figure}
\subsection{Anomalous scaling of the translocation times}
\label{sec_discussion}
Given that the translocation dynamics on a free energy landscape of
the logarithmic form~(\ref{eq_logarithmiclandscape}) was previously
studied, and its scaling behavior was found to be normally
diffusive~\cite{ChuangKantorKardarAnomalousDynamics}, our finding of
anomalous scaling in the present case is surprising. Our numerical
computation of the {\em average} free energy landscape shown in
figure~\ref{fig_landscapes} indeed confirmed that the typical landscape
for the translocation of structured RNA molecules has the logarithmic
shape, as the theoretical arguments of section~\ref{sec-translocation-2}
had suggested. To resolve the apparent contradiction, we now revisit
the arguments of reference~\cite{ChuangKantorKardarAnomalousDynamics}
that led to the diffusive scaling.
\begin{table}
\caption{Exponents of the power law dependence of the typical
translocation time on the length of the molecule. These exponents
are determined by linear regression of the data in
figure~\protect\ref{fig_logscaling}.\label{tab_exponents}}
\lineup
\begin{indented}
\item[]\begin{tabular}{@{}ll}
\br
$k_BT/\ematch$&exponent\\
\mr
$0.1$&$14.05\pm0.08$\\
$0.13$&\0$9.62\pm0.03$\\
$0.17$&\0$6.23\pm0.04$\\
$0.2$&\0$4.79\pm0.06$\\
$0.3$&\0$2.94\pm0.06$\\
$0.6$&\0$2.44\pm0.02$\\
$0.8$&\0$2.43\pm0.01$\\
$1.0$&\0$2.44\pm0.01$\\
\br
\end{tabular}
\end{indented}
\end{table}
Chuang, Kantor, and Kardar considered a continuum description of the
translocation process, based on a Fokker-Planck equation similar to
(\ref{Eq_Drift_Diffusion}), but with the drift velocity $v$
replaced by the local gradient of the free energy landscape,
\begin{displaymath}
\frac{\partial}{\partial t} P = D\,\frac{\partial^2}{\partial x^2} P + \frac{D}{k_B T}
\frac{\partial}{\partial x} \left(P \,\frac{\partial}{\partial x} F(x)\right)\;,
\end{displaymath}
with $F(x)=\gamma\, k_BT \ln [(N-x)x/N]$. They note that the polymer length $N$ and
the diffusion constant can be eliminated from this equation by introducing a rescaled time $\tau = t D/N^2 $ and translocation coordinate $s = x/N$,
\begin{equation}
\label{dimlessFP}
\frac{\partial p}{\partial \tau} = \frac{\partial^{2}p}{\partial s^{2}} + \gamma \frac{\partial}{\partial s} \left( \frac{1-2s}{(1-s)s}\,p \right)\;,
\end{equation}
where $p=p(s,\tau)$ now is the probability distribution in the
rescaled variables. Consequently, the authors then argue that the
solution of this dimensionless equation may be converted back to real
time by multiplying the time axis by $N^2/D$, resulting in a diffusive
scaling of the translocation time. Indeed, as the authors point out,
the argument is independent of the value of $\gamma$. Application of
the argument to the present case, with $\gamma\ge 3/2$, would suggest
that the secondary structure of the RNA is irrelevant in the slow
translocation limit considered here.
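As a check of this rescaling (our own short verification), with $x=Ns$, $t=N^{2}\tau/D$, and $F(x)=\gamma k_BT\ln[(N-x)x/N]$, the gradient of the landscape transforms as

```latex
\frac{\partial F}{\partial x}
= \gamma k_B T\left(\frac{1}{x}-\frac{1}{N-x}\right)
= \frac{\gamma k_B T}{N}\,\frac{1-2s}{(1-s)s}\;,
```

so that, with $\partial/\partial x = N^{-1}\,\partial/\partial s$ and $\partial/\partial t = (D/N^{2})\,\partial/\partial\tau$, every term in the Fokker-Planck equation carries the common factor $D/N^{2}$, which cancels and leaves the dimensionless form (\ref{dimlessFP}).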
However, we will now see that this conclusion cannot be drawn. The
argument rests on the tacit assumption that the probability
distribution $p=p(s,\tau)$ develops no structure on a microscopic
scale. For instance, the continuum Fokker-Planck description breaks
down if most of the probability is localized on one or a few points
along the translocation coordinate $m$. Indeed, such a localization
transition occurs if $\gamma$ exceeds a threshold value of one:
assuming a quasi-stationary solution to (\ref{dimlessFP}) which is
localized at the $s=0$ border is a self-consistent ansatz if $p$
behaves as $p\sim s^{-\gamma}$ for small $s$. For $\gamma>1$ the
integral of this distribution diverges at the boundary, i.e. the free
energy barrier to translocation becomes strong enough for the
quasi-stationary distribution to localize at the boundary.
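This threshold can be made explicit (our short derivation): setting the probability flux in (\ref{dimlessFP}) to zero gives

```latex
\frac{\partial p}{\partial s} + \gamma\,\frac{1-2s}{(1-s)s}\,p = 0
\quad\Longrightarrow\quad
p(s) \propto \bigl[(1-s)s\bigr]^{-\gamma}\;,
```

using $\frac{d}{ds}\ln[(1-s)s]=(1-2s)/[(1-s)s]$. Near the border this behaves as $s^{-\gamma}$, and the integral $\int_0 s^{-\gamma}\,ds$ diverges precisely when $\gamma\ge 1$.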
In the regime $\gamma>1$, where the argument for the independence of
the translocation time from $\gamma$ breaks down, the correct scaling
behavior of the translocation time can be obtained using the standard
Kramers rate theory for thermally-induced barrier
crossing~\cite{GardinerStochasticMethods,KramersProblem}. In the
present case, this approach yields
\begin{equation}
\label{eq_Kramers}
\tau\sim \frac{e^{F(N/2)/k_BT}}{\sqrt{F''(N/2)}} \sim N^{\gamma+1} \;.
\end{equation}
It is important to note that the barrier height itself according
to (\ref{eq_logarithmiclandscape}) only yields a power law
of $N^\gamma$ and that the additional power of $N$ results from the
prefactor which is often ignored in applications of Kramers rate theory.
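Both powers of $N$ in (\ref{eq_Kramers}) can be checked by inserting the landscape (\ref{eq_logarithmiclandscape}) (our short calculation):

```latex
F(N/2) = \gamma k_B T \ln\frac{N}{4}
\;\Rightarrow\;
e^{F(N/2)/k_B T} = \Bigl(\frac{N}{4}\Bigr)^{\gamma},
\qquad
F''(N/2) = -\frac{8\,\gamma k_B T}{N^{2}}
\;\Rightarrow\;
\bigl|F''(N/2)\bigr|^{-1/2} \sim N\;,
```

so that indeed $\tau\sim N^{\gamma}\cdot N = N^{\gamma+1}$.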
\begin{figure}
\begin{indented}
\item[]\includegraphics[width=0.8\columnwidth]{figure7}
\end{indented}
\caption{Comparison of the landscape prefactors $\gamma(T)$ from
figure~\ref{fig_prefactorvstemperature} and the numerically
determined translocation time exponents from
table~\ref{tab_exponents} for the temperatures $k_B T/\ematch$ where
RNA is expected to be in the glassy phase. It can be seen that
the observed translocation time exponents empirically behave like
$2\gamma(T)$.\label{fig_transexp}}
\end{figure}
If we apply (\ref{eq_Kramers}) to the molten phase where
$\gamma=\frac{3}{2}$, this yields $\tau\sim N^{2.5}$ in good agreement with
our numerical estimates, see table~\ref{tab_exponents}. Furthermore,
(\ref{eq_Kramers}) rationalizes the sharp increase of the scaling
exponent as the temperature is lowered into the glass phase, i.e.,
for $k_B T/\ematch\le0.2$.
However, in that regime the quantitative comparison of the translocation time exponents
in table~\ref{tab_exponents} and the barrier heights in
figure~\ref{fig_prefactorvstemperature} shown in figure~\ref{fig_transexp}
reveals that the translocation
time exponents increase even more dramatically than the increase of
the landscape prefactors suggests, namely approximately like
$2\gamma(T)$. This indicates that the typical translocation of individual
molecules in the glass phase of RNA is {\em not} well approximated by
translocation in the average landscape but rather must be dominated by
the fluctuations of the free energy landscape around the average.
\section{Conclusion and outlook}
\label{sec_conclusion}
In conclusion, we see that our numerical observation that the scaling
of the typical translocation time is drastically affected by the
secondary structure is qualitatively in accord with theoretical
expectations but quantitatively even exceeds the magnitude of the
effect expected from the theory. For translocation in a logarithmic
landscape it is clear from (\ref{eq_Kramers}) that
$\gamma=1$ constitutes a threshold for a change in the translocation
behavior: The regime $\gamma<1$ is marked by an insignificant barrier,
diffusive translocation, and failure of the Kramers approximation,
which assumes a ``reaction-limited'' process and ignores the time
required to diffuse from the starting to the end point. In contrast,
for $\gamma>1$ the barrier dominates the translocation dynamics and
leads to the sub-diffusive scaling (\ref{eq_Kramers}). Importantly,
for unstructured polymers where the logarithmic free energy landscape
is only due to the configurational entropy of the polymer, we have
$\gamma<1$ even if self-avoidance is included. Here, we found that the
case of structured RNA molecules is always in the opposite regime of
$\gamma>1$. Thus, despite the similarity in the form of the free
energy landscape, (\ref{eq_logarithmiclandscape}), the
translocation behavior of unstructured and structured polynucleotides
is quite different.
The anomalous scaling of translocation times found in our study is
only observable in the absence of an external voltage. In the presence
of an external voltage the gain in electrostatic energy due to moving
$N/2$ bases into the pore is linear in $N$ and thus for large $N$
always overcomes the logarithmic barrier
(\ref{eq_logarithmiclandscape}) leading to a linear dependence of
the translocation time on sequence length. However, for finite but
small voltages the anomalous scaling could still be observed in an
intermediate regime of sequence lengths where the electrostatic
energy gain $Nq_{\rm eff}V/2$ remains smaller than the logarithmic
barrier $\gamma(T)\,k_B T\ln(N/4)$.
Our empirical finding of even stronger anomalous scaling in the glassy
phase than expected from the average free energy landscape indicates
that translocation in the glassy phase is strongly affected by the
fluctuations and the free energy landscapes of the individual RNA
molecules. Understanding these fluctuations and the origin of the
intriguing empirically found $2\gamma(T)$ law for the translocation
time exponent will be the subject of future research.
\ack
We gratefully acknowledge stimulating discussions with Yariv Kafri and
Julius Lucks. This work was supported in part by the Petroleum
Research Fund of the American Chemical Society through Grant
No.~42555-G9 to RB, by the National Science Foundation through grant
No.~DMR-0706002 to RB, and by the Deutsche Forschungsgemeinschaft via
SFB 486 to UG.
\section*{References}
\section{Introduction}
In natural language understanding, it is important to understand how sentences are used to argue for or against particular topics in conversations and articles. This is relevant not only to automatic debates \cite{Slonim2021-bg} but also to detecting fake news in media and enabling colorful personas in robots \cite{Kucuk2020-fa}. Stance detection, or stance classification \cite{Bar-Haim2017-hv,Kucuk2020-fa}, is the task of deciding whether a given claim supports or opposes a given topic, or whether the two are unrelated in terms of argumentation. The support and against relations between topics and claims are usually more abstract than the corresponding relations in opinion mining, because, instead of directly affirming or refuting a topic, a claim is usually a piece of evidence or a logical consequence following a stance towards the topic, which makes detecting the stance of such claims difficult and knowledge-intensive. This problem is partially tackled by approaches that train topic-specific models. Such models obviously generalize poorly to new topics, because new models have to be trained with annotated data for each new topic, and the possible topics in real-life scenarios are numerous. Generalizability is also a problem for machine learning models from the standpoint of training data, because common stance detection datasets contain only a couple of hundred topics but thousands of claims, allowing such models to easily overfit to the topics in the training data.
We propose to address the generalizability issue as well as the knowledge-intensive nature of the task with knowledge-rich pretrained models. Pretrained models have shown good performance in a variety of natural language understanding tasks which require both linguistic and commonsense knowledge. Such knowledge is invaluable to the stance detection task. In order to further improve model performance, we extract a noisy training dataset from large quantities of unlabeled text, following the intuition that discourse relations are indicative of stance in general. For example, the relationship between a topic, such as ``大数据带来了更多的好处 (big data brings more good than bad)'', and a supporting argument, such as ``生成的大数据可作为预测工具和预防策略 (the generated big data can be used as a predictive tool and preventive strategy)'', may be rewritten as a causal relation:
\begin{enumerate}
\item 因为生成的大数据可作为预测工具和预防策略,所以大数据带来了更多的好处。(Because the generated big data can be used as a predictive tool and preventive strategy, big data brings more good than bad.),
\end{enumerate}
and the same topic and an against argument, such as ``大数据的准确性难以确保 (the accuracy of big data is hard to be sure of)'', may be rewritten as a contradiction relation:
\begin{enumerate}[resume]
\item 虽然大数据带来了更多的好处,大数据的准确性难以确保。(Although big data brings more good than bad, the accuracy of big data is hard to be sure of.),
\end{enumerate}
which suggests that raw sentences in such relations may be in turn used as noisy training instances for the stance detection task.
Training neural network models with such noisy datasets improves the robustness of the model, greatly reduces the chance of overfitting, allows the model to adapt to the task-specific data format, and provides opportunities to learn additional knowledge for the stance detection task. Experiments on the development data show large improvements over baselines that do not use such noisy data. Among the 26 teams participating in the Claim Stance Classification for Debating track of the Argumentative Text Understanding for AI Debater shared task, our approach ranks first, with a 2.3\% absolute performance improvement over the runner-up approach.
\section{Related work}
Stance classification has been a subject of research in many different environments, such as congressional debates \cite{Thomas2006-dc}, online debates on social media \cite{Somasundaran2009-id,Conforti2020-kc} and company-internal discussions \cite{Murakami2010-lf}. Previous approaches focus on learning topic-specific models to classify stances of related claims with machine learning models \cite{Anand2011-qu,Hasan2013-ek,Sobhani2016-gd} as well as deep learning models \cite{Sun2018-ea,Dey2018-xa,Ghosh2019-ou,Popat2019-nf,Sirrianni2020-lt,Yu2020-mo}. Previous work has also looked at doing stance classification at challenging situations such as zero-shot \cite{Allaway2020-ua} and unsupervised settings \cite{Somasundaran2009-id,Ghosh2018-rx,Kobbe2020-nq}. Since stance classification has been thought of as a subtask of sentiment analysis \cite{Kucuk2020-fa}, the use of sentiment lexicon is popular in previous work.
Compared to previous work, our approach does not rely on any sentiment lexicon, which is a linguistic resource difficult to construct. Our approach also does not require topic-specific model training, which improves generalizability of a trained model to unseen topics and claims.
\section{Unsupervised Data Preparation}
We follow the intuition that the Support relation in stance classification between claims and topics can be categorized as a causal or conditional relation, because one should be able to deduce the topic from the claim if the claim supports the topic. Similarly, the Against relation between claims and topics can be categorized as a contradiction relation, where the claim does not naturally follow from the topic. If a claim and a topic are to be connected by discourse connectives, connectives of the corresponding discourse relations need to be used in order to preserve discourse coherence. Sentences with such discourse relations could better prepare the pretrained language models for finetuning with gold data and help the language models fight against overfitting. We first present a few different sets of data we extract from raw text with no supervision, and then explain how they are used in our finetuning framework.
\subsection{Data $D_1$ Extraction for Distant Finetuning}
\label{sec:data1_extraction}
A dataset for unsupervised distant finetuning is extracted from the large text corpus CLUE \cite{xu-etal-2020-clue}\footnote{\url{https://github.com/CLUEbenchmark/CLUE}} based on discourse relations. Table \ref{tab:connectives} shows examples of discourse connectives used for extracting sentences with particular discourse relations. A pair of sentences is kept when the second sentence starts with a multiple-line connective, following the pattern ``$S_1\text{。} c_1 S_2\text{。}$'', where $S_i$ is a sentence or a sentence fragment and $c_i$ is a discourse connective. For Support, $S_1$ is the topic and $S_2$ is the claim; the opposite assignment is adopted for Against.
For single sentences, one sentence is kept if it contains a pair of single line connectives where the second connective is in a sentence fragment directly after a comma, which follows this pattern ``$S_1 c_1 S_2\text{,} S_3 c_2 S_4\text{。}$''. In the case of single sentences, for Support, $S_1c_1S_2$ is a topic and $S_3c_2S_4$ is a claim, where the opposite is adopted for Against. Candidate sentences are discarded when they contain non-Chinese characters, exceed 100 characters, or contain pronouns. The discourse connectives are deleted from the sentences to remove obvious and easy cues to the relation classes. The sentence pairs with the Neutral label are selected randomly from sentences in the same article which are close to the topic sentence. The final $D_1$ dataset includes 1.2 million data points labeled as Support, 0.7 million labeled as Against, and 1.9 million labeled as Neutral. Table \ref{tab:example_sentences} shows examples of extracted sentences with different silver labels.
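The multiple-line pattern matching can be sketched as follows (a minimal illustration only: the connective lists are small subsets of those in Table~\ref{tab:connectives}, and the function name is ours):

```python
# Small subsets of the connective lists in Table 1 (illustrative only).
MULTI_SUPPORT = ["因此", "因而", "所以"]   # "therefore"-type connectives
MULTI_AGAINST = ["但是", "然而", "可是"]   # "however"-type connectives

def extract_pair(text):
    """Match the multiple-line pattern S1。c1 S2。 and return a
    silver-labeled topic/claim pair, or None if no connective fires.
    The connective itself is deleted to remove an obvious cue."""
    sentences = [s for s in text.split("。") if s]
    if len(sentences) != 2:
        return None
    s1, s2 = sentences
    for connectives, label in ((MULTI_SUPPORT, "Support"),
                               (MULTI_AGAINST, "Against")):
        for c in connectives:
            if s2.startswith(c):
                s2_clean = s2[len(c):].lstrip(",，")
                if label == "Support":   # S1 is the topic, S2 the claim
                    return {"topic": s1, "claim": s2_clean, "label": label}
                # for Against, the role assignment is swapped
                return {"topic": s2_clean, "claim": s1, "label": label}
    return None
```

In the full pipeline, candidates containing non-Chinese characters, more than 100 characters, or pronouns would additionally be discarded.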
\begin{table}[]
\caption{Example Chinese discourse connectives used in extraction.}\label{tab:connectives}
\centering
\begin{tabular}{|l|l|l|}
\hline
Relation & Type & Connectives \\
\hline
\multirow{2}{*}{Support} & Multiple line & 因此, 因而, 所以\\
& Single line & 因为...所以..., 只要...就..., 要是...就..., 之所以...是因为... \\
\hline
\multirow{2}{*}{Against} & Multiple line & 但是, 然而, 可是 \\
& Single line & 虽然...但是..., 虽然...可是..., 尽管...但是... \\
\hline
\end{tabular}
\end{table}
\begin{table}[]
\caption{Examples of extracted sentence pairs from raw text.}\label{tab:example_sentences}
\centering
\begin{tabular}{|l|l|l|}
\hline
Relation & Role & Sentence \\
\hline
\multirow{2}{*}{Support} & Topic & 常将弹性工时与变形工时相互混淆\\
& Claim & 国内学界对于弹性工时概念未有统一解释 \\
\hline
\multirow{2}{*}{Against} & Topic & 选择不同作用机制的癫痫药物,才可能获得疗效的叠加 \\
& Claim & 如果两种癫痫药物有相同的不良反应,就不能联合使用 \\
\hline
\multirow{2}{*}{Neutral} & Topic & 其中的人数是最基本的数据 \\
& Claim & 人口数据是一个国家和地区的基本数据 \\
\hline
\end{tabular}
\end{table}
\subsection{Low-noise Finetuning Data $D_2$ Extraction}
\label{sec:data2_extraction}
Although the distant finetuning data prepared in Section \ref{sec:data1_extraction} can provide a training signal to further pretrain language models, it may be too noisy for final finetuning purposes. The Conditional relation does not always correspond to Support, as illustrated by the example ``只要小明去,小张就会去。(If Xiao Ming goes, Xiao Zhang goes too.)'', in which the condition has only an arbitrary connection to the result. Similarly, the Contradiction relation is not always Against, as shown in the example ``虽然兔毛可以抵御严寒,但是兔子也怕热。(Although rabbit fur can be good for rigid cold, rabbits are also prone to overheating.)'', where the two facts are more supplementary than contradictory to each other. Further filtering is needed to reduce the noise level within the extracted pretraining dataset.
A list of high-frequency topic indicators is used to find sentences that are most likely to be statements of positions on certain issues, which are the best candidates for topics. The list includes words such as ``应 (should)'' and ``最 (most)''. More importantly, we consider the Entailment and Contradiction relations from the natural language inference (NLI) task to be very close to the Support and Against relations in stance detection, and therefore employ an NLI model for data selection. Specifically, a Chinese BERT with a classification layer is finetuned on the XNLI dataset over all available languages, and the best model is chosen based on evaluation on the Chinese NLI portion of the XNLI evaluation dataset. This model is then used to predict NLI labels for all data points in $D_1$. Finally, 30,000 data points that are either labeled Support by the connectives and Entailment by the XNLI model, or Against and Contradiction, or Neutral and No Entailment, are randomly sampled from $D_1$, resulting in a low-noise finetuning dataset $D_2$ with 30,000 data points in total, which is about 5 times the size of the gold training set.
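The agreement-based filtering can be sketched as follows (a simplified sketch: \texttt{nli\_predict} stands in for the finetuned XNLI classifier, and the NLI label strings are hypothetical placeholders):

```python
import random

# Connective-based silver label -> NLI prediction required to keep the pair.
AGREEMENT = {
    "Support": "entailment",
    "Against": "contradiction",
    "Neutral": "no_entailment",
}

def refine(d1, nli_predict, sample_size, seed=0):
    """Keep only pairs whose connective-based silver label agrees with
    the NLI model's prediction, then sample a fixed-size low-noise D2."""
    kept = [ex for ex in d1
            if AGREEMENT[ex["label"]] == nli_predict(ex["topic"], ex["claim"])]
    rng = random.Random(seed)
    return rng.sample(kept, min(sample_size, len(kept)))
```

With the real XNLI model plugged in, sampling 30,000 agreeing pairs from $D_1$ yields $D_2$.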
\subsection{Stance Detection Data in other languages}
Datasets for stance detection also exist in other languages such as English. With a pretrained language model able to take multilingual input, we expect such datasets to help the model learn the concepts of Support and Against more robustly. The multilingual stance detection dataset XArgMining \cite{Toledo-Ronen2020-tm} from the IBM Debater project contains human-authored data points for stance detection in English, as well as these data points translated into 5 other languages: Spanish, French, Italian, German and Dutch. With both human-authored and machine-translated data points combined, the dataset used for training has 400,000 data points. The dataset $D_x$ is the concatenation of these datasets.
\section{Staged Training with Noisy Finetuning}
Our model used for the task follows the standard pretraining-finetuning paradigm. A base transformer-based language model pretrained on large quantities of unlabeled data is used as an encoder to encode the topic and the claim. The contextualized embedding of the [CLS] token is used for classification, which goes through a linear layer to generate the logits for the three labels.
In order to utilize the large amount of noisy data to improve our model, we design a novel training process in which datasets with different noise levels are used at different stages to finetune the model, as shown in Fig.~\ref{fig:train}. There are three stages in the whole finetuning process: the first stage uses $D_1$ and $D_x$ for distant finetuning, the second stage uses the low-noise refined dataset and the back-translated gold dataset for noisy finetuning, and the final stage uses the gold data with a small portion of noisy data for final finetuning.
\begin{figure}
\includegraphics[width=\textwidth]{graphs/debater.pdf}
\caption{The training process with noisy datasets.} \label{fig:train}
\end{figure}
\subsection{Distant finetuning}
\label{sec:distant_finetuning}
In this finetuning stage, datasets with high noise level $D_1$ or with data points in other languages $D_x$ are used as training data. There are two training objectives used in this stage: conditional masked language modeling and classification. For each batch of training data points, one training objective is randomly chosen. For the conditional masked language modeling objective, the topic sentence and the claim sentence are first concatenated and tokenized by a tokenizer from a pretrained language model, and then the [CLS] token at the beginning of the tokenized sequence is replaced by a special token indicating the label of the pair.
Part of either the topic or the claim, chosen randomly, is masked with a special [MASK] token and predicted by the language model. For the classification objective, the concatenated sequence without any modification is encoded by the language model, and the [CLS] token is used for classifying the pair. The classification objective is identical to the one used in a common clean finetuning setup for a classification task. In a noise-free scenario, using the classification objective alone may be enough for finetuning the language model. However, the conditional masked language modeling objective allows the model to learn how a topic and a claim interact conditioned on a noisy relation label, without forgetting how to do language modeling. Preliminary experiments show that this objective is very important for ensuring model performance. As shown in Section \ref{sec:data1_extraction}, the $D_1$ dataset is imbalanced, with a large number of data points labeled as Neutral or Support. Random sampling with small weights on Support and Neutral is performed on this dataset so that there are 0.7 million data points for each class, ensuring balanced training across all labels.
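The input construction for the conditional masked language modeling objective can be sketched as follows (a simplified sketch: the label-token names [SUP]/[AGN]/[NEU] and the 15\% mask rate are our assumptions, not specified above):

```python
import random

# Hypothetical special tokens replacing [CLS] to condition on the label.
LABEL_TOKENS = {"Support": "[SUP]", "Against": "[AGN]", "Neutral": "[NEU]"}

def conditional_mlm_input(topic_tokens, claim_tokens, label, mask_topic,
                          mask_rate=0.15, rng=None):
    """Replace the leading [CLS] by a label token and mask a random
    fraction of either the topic or the claim; the model predicts the
    masked tokens conditioned on the (noisy) relation label."""
    rng = rng or random.Random(0)
    targets = ([LABEL_TOKENS[label]] + topic_tokens + ["[SEP]"]
               + claim_tokens + ["[SEP]"])
    start = 1 if mask_topic else 2 + len(topic_tokens)
    length = len(topic_tokens) if mask_topic else len(claim_tokens)
    n_mask = max(1, int(mask_rate * length))
    masked = list(targets)
    for pos in rng.sample(range(start, start + length), n_mask):
        masked[pos] = "[MASK]"
    return masked, targets   # model input and prediction targets
```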
\subsection{Noisy and clean finetuning}
\label{sec:clean_finetuning}
After distant finetuning, the encoder from the pretrained language model is ready for a finetuning stage where training data is less noisy and more similar to data used in the downstream task. At this stage, the refined noisy dataset described in Section \ref{sec:data2_extraction}, combined with the original gold dataset and a dataset with gold data points back-translated from English, is used for training. Only the classification objective is used in this stage, resembling the common finetuning process. After two epochs, the encoder is ready for clean finetuning with the gold training set. In order to increase robustness of the model and regularize learning, a small portion of $D_2$ equal to 8\% of the gold training set is added into the gold training set for the final clean finetuning.
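The final-stage data mixing can be sketched as follows (the function name is ours):

```python
import random

def clean_finetune_set(gold, d2, noisy_fraction=0.08, seed=0):
    """Final-stage training set: the gold data plus a small noisy
    sample (8% of the gold size) drawn from D2 for regularisation."""
    rng = random.Random(seed)
    n_noisy = int(noisy_fraction * len(gold))
    return gold + rng.sample(d2, n_noisy)
```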
\subsection{Ensembling}
Due to the small size of gold training data, different random seeds yield models with varying performances. Randomness caused by the noisy data sampling process also causes models to be trained with different training sets thus having different performances. We propose to ensemble best-performing models trained with different configurations together, which leads to a final composite model with high robustness. The final prediction probabilities are calculated as the product of the prediction probabilities from all the models:
\begin{equation}
p_{\text{final}}(\mathbf{y}|\mathbf{x}_i) = \prod_{j} p_{j}(\mathbf{y}|\mathbf{x}_i)
\end{equation}
where $i$ indexes the inputs $\mathbf{x}_i$, $\mathbf{y}$ denotes the output label, and $j$ indexes the models in the ensemble.
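A minimal sketch of this product ensembling, assuming each model exposes its class probabilities as a NumPy array of shape (n\_examples, n\_classes):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Combine per-model class probabilities by elementwise product
    (the equation above); argmax over the product gives the final label."""
    product = np.ones_like(prob_list[0])
    for p in prob_list:
        product *= p
    return product.argmax(axis=1)
```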
\section{Experiments}
The datasets provided in the shared task include a training set with 6,416 data points and a development set with 990 data points, which are used for model development and hyperparameter tuning. As the base pretrained encoder of our classifier we use the XLM-RoBERTa large model \cite{Conneau2020-kz}, which has 24 hidden layers and 16 attention heads, with an intermediate embedding size of 4096 and a final hidden embedding size of 1024. Dropout for all layers is set to 0.1.
The classifier is first trained with the distant-finetuning setup on the $D_1$ and $D_x$ datasets for 58,500 steps with a batch size of 8 per step. A gradient update is performed every 4 steps, making the effective batch size 32. The learning rate for this stage is set to $8 \times 10^{-6}$. The mix ratio between $D_1$ and $D_x$ is 4:1, meaning that 80\% of the time a batch is sampled from $D_1$. After a batch is sampled from a dataset, a training objective is chosen randomly between classification and language modeling.
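The per-step choice of dataset and objective might look as follows; this is a sketch under the stated 4:1 mix ratio, not the actual training loop.

```python
import random

def sample_batch(d1_batches, dx_batches, rng, mix_ratio=0.8):
    """Pick the source dataset (D1 with probability 0.8, i.e. a 4:1 mix) and
    then the training objective (classification vs. masked LM) uniformly."""
    source = d1_batches if rng.random() < mix_ratio else dx_batches
    batch = rng.choice(source)
    objective = rng.choice(["classification", "mlm"])
    return batch, objective
```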
For the noisy and clean finetuning, the number of epochs is chosen to be 2. The learning rate is $6 \times 10^{-6}$ and the batch size is 32.
AdamW \cite{Loshchilov2017-ms} is used as the optimizer at all stages. The top classification layer is re-initialized between stages. Performance of the different experimental setups is reported as accuracy on the development set, because the test set is not released.
\subsection{Encoders}
We first examine the performance of different pretrained language models as the base encoder in clean finetuning. The goal of this experiment is to measure model performance when finetuning with gold data only. Table \ref{tab:encoders} shows the results of finetuning a number of popular Chinese pretrained models \cite{Cui2020-pg} as well as the XLM-RoBERTa model. Interestingly, the only model not trained entirely on Chinese data, XLM-RoBERTa large, is the best-performing model of all. This indicates that multilingual training is helpful even when the downstream task is in a single language. The Electra model, which has been reported to reach state-of-the-art performance on many language-understanding tasks, is unable to outperform either RoBERTa large or XLM-RoBERTa. Finally, there is a substantial performance gap between the smaller BERT base models and the larger XLM-RoBERTa models, showcasing the importance of both the amount of pretraining data and the objectives used in pretraining.
\begin{table}
\caption{Performance of different encoders with finetuning on the development set}\label{tab:encoders}
\begin{tabular}{|l|l|}
\hline
Encoder Type & Development accuracy \\
\hline
Chinese BERT wwm base & 76.22 \\
Chinese BERT wwm ext & 78.78 \\
Chinese Electra 180g large & 80.80 \\
Chinese RoBERTa wwm ext large & 82.61 \\
\bf XLM-RoBERTa large & \bf 85.24 \\
\hline
\end{tabular}
\end{table}
\subsection{Distant finetuning}
With XLM-RoBERTa large chosen as the encoder of the model, we explore the number of steps needed for the best performance with distant finetuning. Table \ref{tab:distant} shows the model performance on the development set with distant finetuning only, with no gold training set used at all. Model performance increases steadily as the number of steps increases. A pretrained encoder with a randomly initialized classification layer reaches 32.72 accuracy, but with distant finetuning the model is able to reach 70.49, close to how Chinese BERT base performs with finetuning. This shows that the training signal in the dataset used for distant finetuning is very strong: the model learns robustly to detect the stances of sentences despite having seen no gold training data and despite the style difference between the noisy dataset from the internet and the human-authored gold training data. Finally, the model trained for 58,500 steps is used for clean finetuning because of time constraints in the shared task, but further improvement could likely be obtained with even more training steps.
\begin{table}
\caption{Performance of the model in distant finetuning on the development set with XLM-RoBERTa large}\label{tab:distant}
\begin{tabular}{|l|l|}
\hline
Number of distant finetuning steps & Development accuracy \\
\hline
0 & 32.72\\
16500 & 68.28\\
38500 & 70.20\\
\textbf{58500} & \textbf{70.49} \\
\hline
\end{tabular}
\end{table}
\subsection{Stages of finetuning}
Good performance from distant finetuning can be further improved by finetuning the model with gold training data. As described in Section \ref{sec:clean_finetuning}, two finetuning stages follow the distant finetuning, both involving gold training data. Table \ref{tab:clean_finetune} shows how different combinations of finetuning stages affect model performance. The 3-stage finetuning is the most effective in improving model performance and robustness, as it further increases model accuracy by 1.52 points compared to applying clean finetuning directly after distant finetuning. Although a large amount of automatically generated data is used in noisy finetuning, model performance is only slightly lower than with clean finetuning, showing both the high quality of the noisy data and the high robustness of the model.
\begin{table}
\caption{Model performance with different combinations of finetuning stages.}\label{tab:clean_finetune}
\begin{tabular}{|l|l|}
\hline
Stage & Development accuracy \\
\hline
Distant finetuning & 70.49 \\
Distant + Noisy finetuning & 89.80 \\
Distant + Clean finetuning & 90.20 \\
\bf Distant + Noisy + Clean finetuning & \bf 91.72 \\
\hline
\end{tabular}
\end{table}
\subsection{Added noisy samples in finetuning}
We also examine whether adding noisy samples to the clean training set during clean finetuning helps the model improve its performance, most likely by regularizing model training. Different numbers of noisy training data points from $D_2$ are randomly sampled and added to the gold training set, as shown in Table \ref{tab:noisy_in_clean}. Model performance averaged over 50 seeds is reported. Using no noisy data in the final clean finetuning yields the lowest performance in general, while adding a small amount of noisy data does help. Compared to the whole gold training set of more than 6,000 training instances, adding 500 noisy data points does not introduce too much noise, and the regularizing effect of the noisy data points helps the model to be more robust to test items not seen in training.
\begin{table}
\caption{Model performance with different number of noisy data points added into the gold training set in the final clean finetuning. Performance numbers are average accuracy over 50 random seeds.}\label{tab:noisy_in_clean}
\begin{tabular}{|l|l|}
\hline
Number of samples from $D_2$ & Development accuracy \\
\hline
0 & 89.25 \\
250 & 89.49 \\
\bf 500 & \bf 89.51 \\
1000 & 89.35 \\
\hline
\end{tabular}
\end{table}
\end{CJK*}
\section{Conclusion}
In this paper, we have proposed a new method to extract data with silver labels from raw text to finetune a system for stance classification. Relying on specific discourse relations during data extraction ensures that the extracted silver topic and claim pairs are of high quality and that the relations between the extracted pairs are relevant to the stance classification task. To make use of such silver data, we have also proposed a 3-stage training scheme in which the noise level of the training data decreases from stage to stage, going from most noisy to least noisy. We show through detailed experiments that the automatically annotated dataset as well as the 3-stage training improve model performance in stance classification. Our approach ranks 1$^{\text{st}}$ among 26 competing teams in the stance classification track of the NLPCC 2021 shared task Argumentative Text Understanding for AI Debater, which confirms its effectiveness.
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:Intro}
The excitonic insulator (EI) is a longstanding problem in condensed matter physics. Although first theoretical work dates back almost half a century,~\cite{Mo61, Kno63, KK64, *KK65, JRK67, HR68a} the experimental realization of the EI phase has proven to be quite challenging. In recent years a number of mixed-valent rare-earth chalcogenide and transition-metal dichalcogenide materials have been investigated,~\cite{BSW91, CMCBDGBAPBF07, WSTMANTKNT09} which are promising in this respect and have renewed the interest in the EI also from the theoretical side.~\cite{BF06, IPBBF08, MCCBSDGBABFP09, ZIBF10, PBF10, ZIBF12}
In particular, the mechanism of the formation of the EI has been analyzed in detail.~\cite{BF06,IPBBF08,PBF10,SEO11,ZIBF12,EKOF14} In the weak coupling, semimetallic regime the Coulomb-driven EI formation reveals a formal analogy to the BCS theory of superconductivity. In the strong coupling, semiconducting regime, on the other hand, the transition to the anticipated EI phase is a Bose-Einstein condensation (BEC) of preformed excitons. Then, within the EI, a smooth crossover from a BCS- to a BEC-like state should occur.
An EI instability can be triggered by the Coulomb interaction between electrons and holes. Therefore, the theoretical modeling typically focuses on a purely electronic mechanism. First attempts to include a coupling to the lattice degrees of freedom have been made quite recently, motivated by several experiments indicating that the lattice is involved at the phase transition to the anticipated EI phase.~\cite{MBCAB11,KTKO13,KTKO13err,ZFBMB13,PBF13}
For example, in the TmSe$_{0.45}$Te$_{0.55}$ compound a drop of the specific heat and an increase of the lattice constant have been interpreted as a strong coupling between excitons and phonons.~\cite{WBM04}
Furthermore, in 1$T$-TiSe$_2$ there is a longstanding debate whether the charge-density wave and the concomitant structural phase transition observed in this material are the results of an excitonic~\cite{CMCBDGBAPBF07, MCCBSDGBABFP09} or a lattice instability.~\cite{Hu77,RKS02} A combination of both instabilities was also proposed.~\cite{KMCC02,WNS10} Without any doubt, lattice effects are crucial in this material.
Finally, at the transition to the suggested EI phase in Ta$_2$NiSe$_5$ the lattice structure changes from orthorhombic to monoclinic, although the charge does not modulate.~\cite{SCFWDSI86,KTKO13,KTKO13err} Therefore, the electron-phonon interaction seems non-negligible in this material as well.
Motivated by these findings, we analyze the EI formation in the framework of a rather generic two-band model that comprises both the Coulomb interaction and an explicit electron-phonon coupling. Besides its relevance to the materials under study, this model raises some fundamental theoretical questions. We therefore address the electron-hole pair spectrum and the nature of the ordered ground state.
The paper is organized as follows. In Sec.~\ref{sec:model} we introduce our model.
A mean-field treatment in terms of the electron Green functions is given in Sec.~\ref{sec:meanfield}.
In Sec.~\ref{sec:selfEnergy} we calculate the electronic self-energies using a Kadanoff-Baym approach. From this, we argue that the considered electron-phonon interaction does not lead to a qualitative modification of the single-particle spectra.
The electron-hole pair spectrum, on the other hand, indicates a strong influence of the phonons. This is shown in Sec.~\ref{sec:ehpair}. How the lattice dynamics affects the electron-hole pairing is analyzed in the framework of the Kadanoff-Baym scheme. We present some numerical results in Sec.~\ref{sec:results} and show that the purely electronic model possesses an acoustic mode, whereas the collective mode becomes massive if phonons participate.
In Sec.~\ref{sec:condensate} we discuss the problem of off-diagonal long-range order.
A short summary of our results is given in Sec.~\ref{sec:conclusion}.
\section{Model}
\label{sec:model}
For our analysis, we start from a two-band model with interband Coulomb interaction and an explicit electron-phonon coupling,
\begin{equation}
H=H_{\rm e} + H_{\rm e-e} + H_{\rm ph} + H_{\rm e-ph}.
\label{H}
\end{equation}
The noninteracting band-electron contribution is given by
\begin{equation}
H_{\rm e} = \sum_{\bf k} \varepsilon_{{\bf k} v} c_{{\bf k} v}^\dagger c_{{\bf k} v}^{\phantom{\dagger}}
+ \sum_{\bf k} \varepsilon_{{\bf k} c} c_{{\bf k} c}^\dagger c_{{\bf k} c}^{\phantom{\dagger}} \,,
\label{H_e}
\end{equation}
where $c_{{\bf k}\sigma}^{(\dagger)}$ is the annihilation (creation) operator for an electron with momentum ${\bf k}$ in the valence band (band index $\sigma=v$) or in the conduction band ($\sigma=c$).
The corresponding band dispersions are denoted as $\varepsilon_{{\bf k}\sigma}$.
We consider a valence band (conduction band) with a single, nondegenerate maximum (minimum). Moreover, the electron-electron interaction is supposed to be
\begin{equation}
H_{\rm e-e} = \sum_{{\bf k},{\bf k}',{\bf q}} \frac{V({\bf q})}{N} c_{{\bf k} c}^\dagger c_{{\bf k}+{\bf q} c}^{\phantom{\dagger}}
c_{{\bf k}' v}^\dagger c_{{\bf k}'-{\bf q} v}^{\phantom{\dagger}} ,
\label{H_ee}
\end{equation}
where $V({\bf q})$ is the effective Coulomb repulsion. $N$ is the number of unit cells.
In harmonic approximation, the phonon Hamiltonian reads
\begin{equation}
H_{\rm ph} = \sum_{\bf q} \omega_{\bf q} b_{\bf q}^\dagger b_{\bf q}^{\phantom{\dagger}} \,,
\label{H_ph}
\end{equation}
where $\omega_{\bf q}$ is the bare phonon frequency, and $b_{\bf q}^{(\dagger)}$ is the annihilation (creation) operator for a phonon with momentum ${\bf q}$. Throughout this paper we set $\hbar=1$.
If the electron-phonon interaction is assumed to be
\begin{eqnarray}
H_{\rm e-ph} &=& \sum_{{\bf k},{\bf q}} \bigg( \frac{g_{-{\bf q}}}{\sqrt{N}}
(b_{-{\bf q}}^\dagger +b_{\bf q}^{\phantom{\dagger}}) c_{{\bf k}+{\bf q} c}^\dagger c_{{\bf k} v}^{\phantom{\dagger}}
\nonumber \\
&&+ \frac{g_{\bf q}}{\sqrt{N}} (b_{\bf q}^\dagger + b_{-{\bf q}}^{\phantom{\dagger}} ) c_{{\bf k} v}^\dagger c_{{\bf k}+{\bf q} c}^{\phantom{\dagger}} \bigg) ,
\label{H_eph}
\end{eqnarray}
the phonon directly couples to an electron-hole pair with the (real) coupling constant $g_{\bf q}$.
Then, the annihilation of a phonon is inevitably connected with a transfer of an electron from the valence band to the conduction band and vice versa. Such a coupling of phonons to excitons may look rather specific, but it is relevant for materials near the semimetal-semiconductor (SM-SC) transition.
In order to model the SM-SC transition, we consider the case of half-filling,
\begin{equation}
n_c + n_v = 1,
\label{halfFilling}
\end{equation}
where $n_\sigma = \frac{1}{N} \sum_{\bf k} \langle c_{{\bf k}\sigma}^\dagger c_{{\bf k}\sigma}^{\phantom{\dagger}} \rangle$.
\section{Mean-field Green functions}
\label{sec:meanfield}
The electron-phonon coupling~\eqref{H_eph} may cause a deformation of the lattice at sufficiently low temperatures.\cite{MMAB12b}
A static lattice distortion is characterized by
\begin{equation}
\delta_{\bar{\bf Q}} = \frac{2}{\sqrt{N}} g_{\bar{\bf Q}} \langle b_{\bar{\bf Q}}^\dagger \rangle ,
\label{phOP}
\end{equation}
where the ordering vector of the dimerized phase is denoted as $\bar{\bf Q}$. Working at half-filling, we assume that $\bar{\bf Q}$ is either zero or half a reciprocal lattice vector. Then $b_{\bar{\bf Q}}^\dagger=b_{-\bar{\bf Q}}^\dagger$.
As a consequence, the parameter $\delta_{\bar{\bf Q}}$ is a real number that measures the amplitude of the static lattice distortion.
For charge-density-wave systems with more complex lattice deformations, e.g., the chiral charge-density wave in the transition metal-dichalcogenide 1$T$-TiSe$_2$, $\delta_{\bar{\bf Q}}$ might be complex.\cite{ZFBMB13} Nevertheless, since $\delta_{\bar{\bf Q}}^\ast = \delta_{-\bar{\bf Q}}$, the static lattice distortion---in real space---is a real quantity.
Adopting the frozen phonon approximation, we replace the phonon operators by their averages. Then, the Hamiltonian~\eqref{H} describes an effective electronic system.
Applying subsequently a Hartree-Fock decoupling scheme, our model reduces to
\begin{align}
H^{\rm MF} =& \sum_{\bf k} \bar\varepsilon_{{\bf k} v} c_{{\bf k} v}^\dagger c_{{\bf k} v}^{\phantom{\dagger}}
+\sum_{\bf k} \bar\varepsilon_{{\bf k}+\bar{\bf Q} c} \, c_{{\bf k}+\bar{\bf Q} c}^\dagger c_{{\bf k}+\bar{\bf Q} c}^{\phantom{\dagger}}
\nonumber \\
&+\sum_{\bf k} \left( x_{{\bf k} \bar{\bf Q}} c_{{\bf k}+\bar{\bf Q} c}^\dagger c_{{\bf k} v}^{\phantom{\dagger}}
+ x_{{\bf k} \bar{\bf Q}}^\ast c_{{\bf k} v}^\dagger c_{{\bf k}+\bar{\bf Q} c}^{\phantom{\dagger}} \right)
+C_{\rm dec} \,,
\label{H_MF}
\end{align}
with renormalized dispersions $\bar\varepsilon_{{\bf k} \sigma} = \varepsilon_{{\bf k} \sigma} + V(0) n_{-\sigma}$.
In Eq.~\eqref{H_MF},
\begin{equation}
x_{{\bf k}\bar{\bf Q}} = \delta_{\bar{\bf Q}} - \Delta_{{\bf k}\bar{\bf Q}}
\label{x_kQ}
\end{equation}
is the gap parameter,
\begin{equation}
\Delta_{{\bf k}\bar{\bf Q}} = \frac{1}{N}\sum_{{\bf k}'} V({\bf k}'-{\bf k}+\bar{\bf Q})
\langle c_{{\bf k}' v}^\dagger c_{{\bf k}'+\bar{\bf Q} c}^{\phantom{\dagger}} \rangle
\label{Delta_kQ}
\end{equation}
is the Coulomb-induced hybridization between the valence band and the conduction band, and
\begin{eqnarray}
C_{\rm dec} &=& \frac{1}{N} \sum_{{\bf k},{\bf k}'} V({\bf k}'-{\bf k}+\bar{\bf Q})
\langle c_{{\bf k}+\bar{\bf Q} c}^\dagger c_{{\bf k} v}^{\phantom{\dagger}} \rangle
\langle c_{{\bf k}' v}^\dagger c_{{\bf k}'+\bar{\bf Q} c}^{\phantom{\dagger}} \rangle
\nonumber \\
&& + \frac{N}{4} \frac{\omega_{\bar{\bf Q}}}{|g_{\bar{\bf Q}}|^2} \delta_{\bar{\bf Q}}^2
- N V(0) n_c n_v .
\label{MF_decouplingC}
\end{eqnarray}
For an undistorted lattice $\Delta_{{\bf k}\bar{\bf Q}}$ serves as the EI order parameter, whose phase is undetermined and can be chosen arbitrarily.~\cite{ZFB10,ZFBMB13}
A finite electron-phonon interaction removes this freedom.
The gap equation that determines $\Delta_{{\bf k}\bar{\bf Q}}$ and the conservation of the particle number [Eq.~\eqref{halfFilling}] are valid on both sides of the SM-SC transition, i.e., these relations hold in the BCS as well as BEC regimes.\cite{ZIBF12}
In mean-field approximation the electronic Green functions become
\begin{eqnarray}
G_v({\bf k}, z_1) &=&\langle\langle c_{{\bf k} v}^{\phantom{\dagger}} ; c_{{\bf k} v}^\dagger \rangle \rangle
\nonumber \\
&=& v_{\bf k}^2 G_A({\bf k},z_1) + u_{\bf k}^2 G_B({\bf k},z_1) ,
\label{Gv_HF}
\end{eqnarray}
\begin{eqnarray}
G_c({\bf k}+\bar{\bf Q}, z_1) &=& \langle\langle c_{{\bf k}+\bar{\bf Q} c}^{\phantom{\dagger}} ;
c_{{\bf k}+\bar{\bf Q} c}^\dagger \rangle\rangle
\nonumber \\
&=& u_{\bf k}^2 G_A({\bf k},z_1) + v_{\bf k}^2 G_B({\bf k},z_1) ,
\label{Gc_HF}
\end{eqnarray}
\begin{eqnarray}
F({\bf k},z_1) &=& \langle\langle c_{{\bf k}+\bar{\bf Q} c}^{\phantom{\dagger}} ; c_{{\bf k} v}^\dagger \rangle \rangle
\nonumber \\
&=& -u_{\bf k} v_{\bf k} \big[ G_B({\bf k},z_1)-G_A({\bf k},z_1) \big]
\nonumber \\
&=& \langle\langle c_{{\bf k} v}^{\phantom{\dagger}} ; c_{{\bf k}+\bar{\bf Q} c}^\dagger \rangle\rangle
=F^\dagger({\bf k},z_1) ,
\label{F_HF}
\end{eqnarray}
where $z_1$ denotes fermionic Matsubara frequencies, and
\begin{equation}
G_{A/B}({\bf k},z_1) = \frac{1}{z_1-E_{{\bf k} A/B}}\;,
\label{GaB_HF}
\end{equation}
\begin{eqnarray}
E_{{\bf k} A/B} &=& \frac{1}{2}(\bar\varepsilon_{{\bf k}+\bar{\bf Q} c}+\bar\varepsilon_{{\bf k} v})
\nonumber \\
&&\pm \sqrt{\frac{1}{4} (\bar\varepsilon_{{\bf k}+\bar{\bf Q} c}-\bar\varepsilon_{{\bf k} v})^2 +
|x_{{\bf k} \bar{\bf Q}}| ^2} \;,
\label{quasEn_HF}
\end{eqnarray}
\begin{equation}
u_{\bf k}^2 / v_{\bf k}^2 = \frac{1}{2}\pm \frac{\frac{1}{4}(\bar\varepsilon_{{\bf k}+\bar{\bf Q} c}
-\bar\varepsilon_{{\bf k} v})}{\sqrt{\frac{1}{4} (\bar\varepsilon_{{\bf k}+\bar{\bf Q} c}-\bar\varepsilon_{{\bf k} v})^2 +
|x_{{\bf k}\bar{\bf Q}}|^2}}\;.
\label{cohFac_1}
\end{equation}
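For illustration, the mean-field quasiparticle energies $E_{{\bf k} A/B}$ and coherence factors $u_{\bf k}^2$, $v_{\bf k}^2$ can be evaluated directly from Eqs.~\eqref{quasEn_HF} and \eqref{cohFac_1}; the following is a small numerical sketch, not part of the formal derivation.

```python
import numpy as np

def quasiparticle_bands(eps_v, eps_c, x_gap):
    """Mean-field quasiparticle energies E_A/E_B and coherence factors
    u^2, v^2 for arrays of renormalized band energies and gap values."""
    mean = 0.5 * (eps_c + eps_v)
    root = np.sqrt(0.25 * (eps_c - eps_v) ** 2 + np.abs(x_gap) ** 2)
    E_A, E_B = mean + root, mean - root
    u2 = 0.5 + 0.25 * (eps_c - eps_v) / root
    v2 = 1.0 - u2
    return E_A, E_B, u2, v2
```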
One can easily show that $|\Delta_{{\bf k}\bar{\bf Q}}| \propto |\delta_{\bar{\bf Q}}|$.~\cite{ZFBMB13}
Moreover, $\delta_{\bar{\bf Q}}$ and $\Delta_{{\bf k}\bar{\bf Q}}$ couple to the same set of operators and, therefore, enter the quasiparticle dispersion in an equal manner. Hence, at the mean-field level of approximation we cannot discriminate between a Coulomb-driven or a phonon-driven phase transition.
\section{Electronic self-energy}
\label{sec:selfEnergy}
We now analyze self-energy effects.
To this end, we use the technique developed by Kadanoff and Baym and determine the self-energy of the electrons.~\cite{KB62}
The imaginary-time Green functions are defined as
\begin{eqnarray}
G_v({\bf k},t-t') &=& -i\langle T[c_{{\bf k} v}^{\phantom{\dagger}}(t) c_{{\bf k} v}^\dagger(t')] \rangle ,
\label{Gv_time} \\
G_c({\bf k},t-t') &=& -i\langle T[c_{{\bf k} c}^{\phantom{\dagger}}(t) c_{{\bf k} c}^\dagger(t')] \rangle ,
\label{Gc_time} \\
F({\bf k},t-t') &=& -i\langle T[c_{{\bf k}+\bar{\bf Q} c}^{\phantom{\dagger}}(t) c_{{\bf k} v}^\dagger(t')] \rangle ,
\label{F_time} \\
F^\dagger({\bf k},t-t') &=&-i\langle T[c_{{\bf k} v}^{\phantom{\dagger}}(t) c_{{\bf k}+\bar{\bf Q} c}^\dagger(t')]\rangle,
\label{Fdagger_time}
\end{eqnarray}
with imaginary-time variables $t$ and $t'$.
We start from the equation of motion (EOM) for the valence-electron Green function,
\begin{align}
\bigg( i\frac{\partial}{\partial t}-\varepsilon_{{\bf k} v} \bigg) & G_v({\bf k},t-t')
=\delta(t-t')
\nonumber \\
&- i\sum_{\bf q} \frac{g_{\bf q}}{\sqrt{N}} G_2^P({\bf k},{\bf q},t,t')
\nonumber \\
&-i\sum_{{\bf k}',{\bf q}} \frac{V_c({\bf q})}{N} G_2^V({\bf k},{\bf k}',{\bf q},t,t') ,
\label{EOM_Gv}
\end{align}
where
\begin{equation}
G_2^V({\bf k},{\bf k}',{\bf q},t,t') = \left\langle T[c_{{\bf k}-{\bf q} v}^{\phantom{\dagger}}(t) c_{{\bf k}' c}^\dagger(t)
c_{{\bf k}'+{\bf q} c}^{\phantom{\dagger}}(t) c_{{\bf k} v}^\dagger(t') ] \right\rangle ,
\label{G2V}
\end{equation}
\begin{equation}
G_2^P({\bf k},{\bf q},t,t') = \left\langle T[(b_{\bf q}^\dagger(t)+b_{-{\bf q}}^{\phantom{\dagger}}(t))
c_{{\bf k}+{\bf q} c}^{\phantom{\dagger}}(t) c_{{\bf k} v}^\dagger(t') ] \right\rangle ,
\label{G2P}
\end{equation}
and proceed as follows: The auxiliary correlation functions~\eqref{G2V} and \eqref{G2P} are expanded up to first order in the interactions they couple to, i.e., $G_2^V({\bf k},{\bf k}',{\bf q},t,t')$ is expanded up to linear order in $V_c({\bf q})$, and $G_2^P({\bf k},{\bf q},t,t')$ is expanded up to linear order in $g_{\bf q}$. Subsequently, we decouple the correlation functions taking only electron-hole fluctuations into account.
Straightforward calculation yields
\begin{align}
\bigg( i\frac{\partial}{\partial t}-\bar\varepsilon_{{\bf k} v} \bigg)& G_v({\bf k},t-t')
=\delta(t-t') + x_{{\bf k}\bar{\bf Q}} F({\bf k},t-t')
\nonumber \\
&-\int_0^{-i\beta} d\!\tau\, \sigma_{vv}({\bf k},t-\tau) G_v({\bf k},\tau-t')
\nonumber \\
&- \int_0^{-i\beta} d\!\tau\, \sigma_{vF}({\bf k},t-\tau) F({\bf k},\tau-t')
\label{EOM_Gv_final}
\end{align}
(here $\beta$ is the inverse temperature),
with the self-energies
\begin{align}
\sigma_{vv}({\bf k},t&-\tau) = \frac{1}{N^2} \sum_{{\bf q},{\bf q}',{\bf Q}} V_c({\bf q}) V_c({\bf q}') G_c({\bf k}+{\bf Q},t-\tau)
\nonumber \\
&\times G_2({\bf Q},{\bf k}+{\bf q}',{\bf k}-{\bf q},\tau-t)
\nonumber \\
&-\frac{i}{N}\sum_{\bf q} |g_{\bf q}|^2 D({\bf q},\tau-t) G_c({\bf k}+{\bf q},t-\tau) ,
\label{sigma_vv}
\end{align}
\begin{align}
\sigma_{vF}&({\bf k},t-\tau) = \frac{1}{N^2}\sum_{{\bf q},{\bf q}',{\bf Q}} V_c({\bf q}) V_c({\bf q}') F({\bf k}+{\bf Q}+\bar{\bf Q},t-\tau)
\nonumber \\
&\times F_2({\bf Q},{\bf k}+\bar{\bf Q}-{\bf q}'+{\bf Q},{\bf k}-{\bf q},\tau-t)
\nonumber \\
&-\frac{i}{N}\sum_{\bf q} |g_{\bf q}|^2 D({\bf q},\tau-t) F({\bf k}+{\bf q}+\bar{\bf Q},t-\tau) .
\label{sigma_vF}
\end{align}
The electron-hole pair correlation functions are defined as
\begin{equation}
G_2({\bf Q},{\bf k},{\bf k}',t-t') = -\left\langle T[c_{{\bf k} v}^\dagger(t) c_{{\bf k}+{\bf Q} c}^{\phantom{\dagger}}(t)
c_{{\bf k}'+{\bf Q} c}^\dagger(t') c_{{\bf k}' v}^{\phantom{\dagger}}(t') ]\right\rangle ,
\label{G2_1}
\end{equation}
\begin{equation}
F_2({\bf Q},{\bf k},{\bf k}',t-t') = -\left\langle T[c_{{\bf k}-{\bf Q} c}^\dagger(t) c_{{\bf k} v}^{\phantom{\dagger}}(t)
c_{{\bf k}'+{\bf Q} c}^\dagger(t') c_{{\bf k}' v}^{\phantom{\dagger}}(t') ]\right\rangle .
\label{F2_1}
\end{equation}
The phonon Green function is given by
\begin{equation}
D({\bf q},t-t') = -i\left\langle T[(b_{-{\bf q}}^\dagger (t)+b_{\bf q}^{\phantom{\dagger}} (t)) (b_{\bf q}^\dagger (t') + b_{-{\bf q}}^{\phantom{\dagger}} (t')) ]\right\rangle .
\label{D}
\end{equation}
With the same procedure we obtain the EOM of the conduction-electron Green function and the EOM of the anomalous Green function. These equations can be found in Appendix~\ref{app:EOMs}.
Note that both the electron-electron interaction and the electron-phonon interaction couple different species (valence electrons, conduction electrons, and electrons in the hybridized state) to each other.
The structure of the self-energies shows that the one-particle spectrum cannot be used, at least at this level of approximation, to decide whether the ordered ground state is the effect of the Coulomb interaction alone or whether phonons contribute. Let us therefore analyze the electron-hole pair spectrum in the following.
\section{Electron-hole pair spectrum}
\label{sec:ehpair}
In the Bethe-Salpeter equation, describing the correlations of electron-hole pairs, the Coulomb interaction is treated in ladder approximation.~\cite{KKER86} In the vicinity of the SM-SC transition, the small number of free electrons and holes makes two-particle collisions the dominant process. The ladder approximation takes the sequence of these collisions into account and is suitable to describe both the build-up of excitons and the formation of the EI.~\cite{ZIBF12}
We now work out the influence of $H_{\rm e-ph}$ [Eq.~\eqref{H_eph}] on the electron-hole pairs.
The four-time electron-hole pair correlation functions are defined as
\begin{align}
G_2({\bf Q} & ,{\bf k},{\bf k}',t_1,t_2,t_3,t_4) =
\nonumber \\
& -\left\langle T[ c_{{\bf k} v}^\dagger(t_1) c_{{\bf k}+{\bf Q} c}^{\phantom{\dagger}}(t_2)
c_{{\bf k}'+{\bf Q} c}^\dagger(t_4) c_{{\bf k}' v}^{\phantom{\dagger}} (t_3) ]\right\rangle,
\label{G2_fourtime}
\end{align}
\begin{align}
F_2({\bf Q} & ,{\bf k},{\bf k}',t_1,t_2,t_3,t_4) =
\nonumber \\
&-\left\langle T[ c_{{\bf k}-{\bf Q} c}^\dagger(t_1)c_{{\bf k} v}^{\phantom{\dagger}}(t_2)
c_{{\bf k}'+{\bf Q} c}^\dagger (t_4) c_{{\bf k}' v}^{\phantom{\dagger}}(t_3) ]\right\rangle.
\label{F2_fourtime}
\end{align}
The relations to the two-time electron-hole pair correlation functions, occurring in Sec.~\ref{sec:selfEnergy}, are $G_2({\bf Q},{\bf k},{\bf k}',t-t') = G_2({\bf Q},{\bf k},{\bf k}',t,t,t',t')$ and $F_2({\bf Q},{\bf k},{\bf k}',t-t') = F_2({\bf Q},{\bf k},{\bf k}',t,t,t',t')$.
In order to analyze the effects of the phonons within the Kadanoff-Baym scheme,~\cite{KB62} we expand the correlation functions~\eqref{G2_fourtime} and \eqref{F2_fourtime} to leading order in the electron-phonon coupling.
Restricting ourselves to the study of electron-hole pairs, there are no incoming or outgoing phonon branches. Hence, the phonons must be created and annihilated in one diagram, and the first non-vanishing contribution is of second order in the electron-phonon coupling constant $g_{\bf q}$.
The many-particle correlation functions that occur in the leading-order expansion of $G_2({\bf Q},{\bf k},{\bf k}',t_1,t_2,t_3,t_4)$ and $F_2({\bf Q},{\bf k},{\bf k}',t_1,t_2,t_3,t_4)$ are subsequently decoupled into electron-hole pair correlation functions, electron Green functions, and phonon Green functions. We identify two effects of $H_{\rm e-ph}$: Excitons can be created (annihilated) by the annihilation (creation) of a phonon, and phonons may change the individual momenta of the electron and the hole in the bound state without modifying the momentum of the exciton. This is illustrated by the diagrams depicted in Fig.~\ref{fig:diagrams}.
The explicit equations for the electron-hole pair correlation functions can be found in Appendix~\ref{app:correlationFuncs}.
\begin{figure}[h]
\centering
\subfigure{\includegraphics[width=0.3\linewidth]{fig_1a}}
\hspace{0.2\linewidth}
\subfigure{\includegraphics[width=0.3\linewidth]{fig_1b}}
\\
\vspace{0.6cm}
\subfigure{\includegraphics[width=0.49\linewidth]{fig_1c}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{fig_1d}}
\\
\subfigure{\includegraphics[width=0.49\linewidth]{fig_1e}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{fig_1f}}
\\
\subfigure{\includegraphics[width=0.49\linewidth]{fig_1g}}
\hfill
\subfigure{\includegraphics[width=0.49\linewidth]{fig_1h}}
\caption{Diagrams occurring in the equations for the electron-hole pair correlation functions. First row: Single-particle Green functions $G_v$ [$G_c$] (left-hand side) and $F$ [$F^\dagger$] (right-hand side). Second row: Ladder approximation for the Coulomb interaction. Third row: Ring diagrams including the electron-phonon interaction. Fourth row: Ladder diagrams including the electron-phonon interaction. The dashed lines with the vertex points represent the Coulomb interaction, the wavy lines represent the phonon Green function, and the vertex squares represent our electron-phonon interaction.}
\label{fig:diagrams}
\end{figure}
Following Ref.~\onlinecite{JRK67}, we analyze the collective modes by finding poles of the ``phase'' correlation function
\begin{equation}
P({\bf Q},z_\nu) = X({\bf Q},z_\nu) - Y({\bf Q},z_\nu) ,
\label{Phase_func}
\end{equation}
where
\begin{align}
X({\bf Q},z_\nu) =& \left( \frac{1}{-i\beta}\right)^{2}
\frac{i}{N} \sum_{{\bf k},{\bf k}'} \sum_{z_{2}, z_{3}}
G_2({\bf Q},{\bf k},{\bf k}',z_\nu-z_{2},z_{2},z_{3}),
\label{G2_oneTime} \\
Y({\bf Q},z_\nu) =& \left( \frac{1}{-i\beta}\right)^{2}
\frac{i}{N} \sum_{{\bf k},{\bf k}'} \sum_{z_{2}, z_{3}}
F_2({\bf Q},{\bf k},{\bf k}',z_\nu-z_{2},z_{2},z_{3}).
\label{F2_oneTime}
\end{align}
\section{Results and Discussion}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=0.4\linewidth]{fig_2a}
\hspace{0.1\linewidth}
\includegraphics[width=0.4\linewidth]{fig_2b}
\caption{(Color online) Electron-hole excitation spectrum at zero temperature without electron-phonon coupling. Black solid lines show the phase mode, red dashed lines display the lower boundary of the electron-hole continuum. Results are given for the BCS-type pairing regime with $U=3.03$ (left panel) and the BEC-type pairing regime with $U=5.03$ (right panel).}
\label{fig:modes_EI}
\end{figure*}
In the numerical evaluation of the equations derived so far we work at zero temperature and assume a local Coulomb potential [$V({\bf q})=U$], a momentum-independent electron-phonon coupling ($g_{{\bf q}}=g_{\bar{\bf Q}}$), and dispersionless Einstein phonons ($\omega_{\bf q} = \omega_{\bar{\bf Q}}$). We furthermore consider a direct band-gap situation, i.e., the valence-band maximum and the conduction-band minimum are located at the Brillouin-zone center. Then, the ordering vector of the low-temperature phase is $\bar{\bf Q}=0$.
For $\bar{\bf Q}\neq 0$ the EI with lattice deformation is accompanied by a charge-density wave. Apart from that, the situation for a finite ordering vector corresponds to the situation considered here.
To keep the numerics manageable, we consider a two-dimensional (square) lattice with the bare band dispersions $\varepsilon_{{\bf k}\sigma}= E_\sigma -2t_\sigma [\cos(k_x)+\cos(k_y)]$ ($\sigma=v,c$), where $t_c$ sets the unit of energy. Typical model parameters are $E_v=-2.4$, $E_c=0$, $t_v=-0.8$, and $\omega_{\bar{\bf Q}}=0.01$.
Let us emphasize that the present analytical calculations and the scenario discussed below hold for both a bare semimetallic and a bare semiconducting band structure. Furthermore, the two-dimensional (square) lattice is used for convenience only; the results obtained below remain valid for three-dimensional systems (and in that case also at finite temperatures).
Performing the analytic continuation $z_\nu\rightarrow \omega+i\delta$ we take $\delta=2\cdot 10^{-3}$. Moreover, we utilize the Hartree-Fock single-particle Green functions in the calculation.
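As a quick sanity check of this setup (not part of the original text; the grid size is an arbitrary choice), the following sketch evaluates the bare dispersions for the quoted parameters and confirms that the valence-band maximum and the conduction-band minimum both sit at the Brillouin-zone center, so that $\bar{\bf Q}=0$:

```python
import numpy as np

# Bare square-lattice dispersions eps_{k,sigma} = E_sigma - 2 t_sigma (cos kx + cos ky),
# with the model parameters quoted in the text (t_c = 1 sets the energy unit).
E_v, E_c = -2.4, 0.0
t_v, t_c = -0.8, 1.0

n = 101  # k-grid points per direction (odd, so k = 0 is on the grid)
k = np.linspace(-np.pi, np.pi, n)
kx, ky = np.meshgrid(k, k, indexing="ij")
form = np.cos(kx) + np.cos(ky)

eps_v = E_v - 2.0 * t_v * form   # valence band
eps_c = E_c - 2.0 * t_c * form   # conduction band

# Locate the valence-band maximum and the conduction-band minimum.
iv = np.unravel_index(np.argmax(eps_v), eps_v.shape)
ic = np.unravel_index(np.argmin(eps_c), eps_c.shape)

print("valence max   :", eps_v[iv], "at k =", (kx[iv], ky[iv]))
print("conduction min:", eps_c[ic], "at k =", (kx[ic], ky[ic]))
# Both extrema lie at the zone center, hence the ordering vector is Q-bar = 0.
```

For these parameters the bands overlap (valence maximum at $0.8$, conduction minimum at $-4$ in units of $t_c$), i.e., a semimetallic starting situation.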
\subsection{Vanishing electron-phonon coupling}
\label{sec:Phasemode}
We start our analysis with a system in which the phonons are neglected ($g_{\bar{\bf Q}}=0$). In this case, the correlation function~\eqref{Phase_func} can be calculated according to
\begin{align}
P({\bf Q},z_\nu) &=\frac{ X^{(0)}({\bf Q},z_\nu)\left[ 1+ a(-{\bf Q},-z_\nu)+b({\bf Q},z_\nu)\right]} {L({\bf Q},z_\nu)}
\nonumber \\
&-\frac{Y^{(0)}({\bf Q},z_\nu) \left[1+a({\bf Q},z_\nu)+b({\bf Q},z_\nu)\right]}{ L({\bf Q},z_\nu)} ,
\label{calc_P}
\end{align}
where
\begin{align}
a({\bf Q},z_\nu) =& UX^{(0)}({\bf Q},z_\nu) ,
\label{a} \\
b({\bf Q},z_\nu) =& UY^{(0)}({\bf Q},z_\nu) ,
\label{b}
\end{align}
and the denominator reads
\begin{align}
L({\bf Q},z_\nu) =& \left[1+a({\bf Q},z_\nu)\right] \left[1+a(-{\bf Q},-z_\nu)\right]
-\left[b({\bf Q},z_\nu)\right]^2 .
\label{denom}
\end{align}
The definitions of $X^{(0)}$ and $Y^{(0)}$ are analogous to Eqs.~\eqref{G2_oneTime} and \eqref{F2_oneTime}, respectively, except that we use the bare electron-hole pair correlation functions \eqref{G2_0} and \eqref{F2_0}, transformed according to Eq.~\eqref{FourierTrafo}.
Figure~\ref{fig:modes_EI} shows the so-called ``phase mode" for weak and strong couplings. Obviously, there exists a gapless phase mode in the EI state, i.e., $\omega({\bf Q})\rightarrow 0$ for ${\bf Q}\rightarrow 0$.~\cite{KM65a, *KM65b, KM65c, *KM66a, JRK67, LEKMSS04} The appearance of this mode can be attributed to the $U(1)$ symmetry of the underlying electronic model $H=H_{\rm e}+H_{\rm e-e}$.~\cite{La66b} Because of this symmetry the phase of $\Delta_{{\bf k}\bar{\bf Q}}$ can be chosen arbitrarily, which results in such an acoustic mode.
\begin{figure*}
\centering
\includegraphics[width=0.4\linewidth]{fig_3a}
\hspace{0.1\linewidth}
\includegraphics[width=0.4\linewidth]{fig_3b}
\caption{(Color online) Electron-hole excitation spectrum for a distorted lattice at zero temperature. The black, solid lines show the phase mode and the red, dashed lines show the lower boundary of the electron-hole continuum.}
\label{fig:modes_dist}
\end{figure*}
Figure~\ref{fig:modes_EI} furthermore reveals the different character of the phase mode in the weak- and strong-coupling situations. In the weak-coupling, BCS-type pairing regime ($U=3.03$), $\omega({\bf Q})$ exhibits a steep increase for small momenta and, as a result, quickly enters the electron-hole continuum, which it leaves again close to the Brillouin-zone corner. The lower boundary of the electron-hole continuum is given by
\begin{equation}
\omega_C({\bf Q}) = \min_{{\bf k}} \left( E_{{\bf k}+{\bf Q}\,A}-E_{{\bf k} B} \right) ,
\label{continuum}
\end{equation}
where $E_{{\bf k} A}$ and $E_{{\bf k} B}$ ($E_{{\bf k} A}>E_{{\bf k} B}$) are the renormalized quasiparticle energies in the ordered ground state. In Hartree-Fock approximation $E_{{\bf k} A,B}$ follow from Eq.~\eqref{quasEn_HF}.
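To illustrate Eq.~\eqref{continuum}, the sketch below evaluates $\omega_C({\bf Q})$ on a discrete grid. Since Eq.~\eqref{quasEn_HF} is not reproduced here, the quasiparticle energies are replaced by the generic mean-field form $E_{{\bf k}A,B}=\varepsilon^{+}_{\bf k}\pm\sqrt{(\varepsilon^{-}_{\bf k})^{2}+\Delta^{2}}$ with $\varepsilon^{\pm}_{\bf k}=(\varepsilon_{{\bf k}c}\pm\varepsilon_{{\bf k}v})/2$ and a constant gap $\Delta$; this form and the value of $\Delta$ are illustrative assumptions, not the self-consistent solution:

```python
import numpy as np

# Hypothetical BCS-like quasiparticle energies standing in for Eq. (quasEn_HF):
# E_{k,A/B} = eps_plus(k) +/- sqrt(eps_minus(k)^2 + Delta^2), constant gap Delta.
E_v, E_c, t_v, t_c, Delta = -2.4, 0.0, -0.8, 1.0, 0.3

n = 64
k = 2.0 * np.pi * np.arange(n) / n       # periodic grid, so k + Q wraps around
kx, ky = np.meshgrid(k, k, indexing="ij")
form = np.cos(kx) + np.cos(ky)
eps_v = E_v - 2.0 * t_v * form
eps_c = E_c - 2.0 * t_c * form
ep = 0.5 * (eps_c + eps_v)
em = 0.5 * (eps_c - eps_v)
E_A = ep + np.sqrt(em**2 + Delta**2)
E_B = ep - np.sqrt(em**2 + Delta**2)

def omega_C(iQx, iQy):
    """Lower continuum boundary omega_C(Q) = min_k [E_{k+Q,A} - E_{k,B}]."""
    E_A_shift = np.roll(np.roll(E_A, -iQx, axis=0), -iQy, axis=1)
    return np.min(E_A_shift - E_B)

print("omega_C(0,0)   =", omega_C(0, 0))          # bounded below by 2*Delta
print("omega_C(pi,pi) =", omega_C(n // 2, n // 2))
```

Consistent with the text, the boundary stays at finite energies; at ${\bf Q}=0$ it is bounded below by $2\Delta$ in this toy parametrization.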
The momentum dependence of the excitation energy of the mode changes remarkably when the boundary to the electron-hole continuum is crossed. By contrast, in the strong-coupling, BEC-type pairing regime, the collective phase mode lies entirely below the electron-hole continuum and is a smooth function of momentum.~\cite{LEKMSS04}
The existence of an acoustic phase mode can be understood as follows. The static uniform limit of the noninteracting phase correlation function is well defined, i.e.,
\begin{align}
&\lim_{\omega\rightarrow 0} \left[ \lim_{{\bf Q}\rightarrow 0} P^{(0)}(\pm{\bf Q},\pm\omega) \right]
= \lim_{{\bf Q}\rightarrow 0} \left[ \lim_{\omega\rightarrow 0} P^{(0)}(\pm{\bf Q},\pm\omega) \right]
\nonumber \\
&= \lim_{\omega,{\bf Q}\rightarrow 0}P^{(0)}(\pm{\bf Q},\pm\omega) = P^{(0)}(0,0) .
\label{SUlimit1}
\end{align}
According to Eq.~\eqref{SUlimit1} and since we consider only interband correlations, the static, uniform limit of $P({\bf Q},\omega)$ exists, contrary to the case when additional intraband correlations are taken into account.~\cite{LP14}
We find for the static, uniform phase correlation function
\begin{equation}
P(0,0) = \frac{P^{(0)}(0,0)}{ 1+U P^{(0)}(0,0) } .
\label{PhasecorrSceneA}
\end{equation}
The (Hartree-Fock) gap equation~\eqref{x_kQ} is
\begin{equation}
1+U P^{(0)}(0,0)=0 .
\label{GapEqSceneA}
\end{equation}
Comparing Eq.~\eqref{PhasecorrSceneA} with Eq.~\eqref{GapEqSceneA} shows that $P(0,0)$ exhibits a pole; hence, the phase mode is acoustic.
\subsection{Static electron-phonon coupling}
\label{sec:dist_Coulomb}
Let us now discuss the behavior of the phase mode if the lattice deforms at the EI phase transition, i.e., we have $\delta_{\bar{\bf Q}}\neq 0$.
Owing to the strong coupling between electron-hole pair fluctuations and phonons, the phonon frequency is significantly renormalized at the SM-SC transition and may even vanish at low temperatures, leading to a static deformation of the lattice.~\cite{MMAB12b}
The lattice distortion is contained in the electron Green functions but does not explicitly appear in the Bethe-Salpeter equation. Hence, the phase correlation function is determined by Eq.~\eqref{calc_P}.
Figure~\ref{fig:modes_dist} shows that the phase mode is massive in this case, i.e., $\omega({\bf Q})\propto ({\bf Q}^2+C)$ for ${\bf Q}\rightarrow 0$ (with a constant $C>0$). Apart from the uniform limit, the spectrum resembles the result for the undistorted lattice since the influence of the phonons is weak for large excitation energies.
\begin{figure*}
\centering
\includegraphics[width=0.4\linewidth]{fig_4a}
\hspace{0.1\linewidth}
\includegraphics[width=0.4\linewidth]{fig_4b}
\caption{(Color online) Electron-hole excitation spectrum for a dynamical electron-phonon coupling in instantaneous approximation at zero temperature. The black, solid lines show the phase mode and the red, dashed lines show the lower boundary of the electron-hole continuum.}
\label{fig:modes_dyn}
\end{figure*}
The absence of the acoustic phase mode can be shown analytically. The phase correlation function exhibits a pole at $z_\nu=0$ and ${\bf Q}=0$ if the denominator of Eq.~\eqref{PhasecorrSceneA} vanishes.
For a deformed lattice the (Hartree-Fock) gap equation takes the form
\begin{equation}
0 = 1 + \left( U + 4\frac{|g_{0}|^2}{\omega_0} \right) P^{(0)}(0,0) .
\label{GapEq2}
\end{equation}
The condition for an acoustic phase mode significantly differs from Eq.~\eqref{GapEq2}. The static lattice distortion explicitly breaks the $U(1)$ symmetry of the model and removes the phase invariance of $\Delta_{{\bf k}\bar{\bf Q}}$. As a consequence, any phase-mode excitation requires a finite energy; hence, the phase mode is massive.
\subsection{Dynamical electron-phonon coupling}
\label{sec:Undist_CoulombPhonon}
As just shown, the softening of a phonon mode and the accompanying lattice deformation lead to a massive phase mode. Let us now analyze the effect of dynamical phonons that do not become soft but offer a way to transfer electrons from the valence band to the conduction band. To this end, we include the phonons in the Bethe-Salpeter equations, Eqs.~\eqref{G2_fourtime_calc} and \eqref{F2_fourtime_calc}, and take into account the self-energies resulting from the coupling to the lattice in the single-particle Green functions.
In particular, we ask whether the phase mode in the ordered ground state is acoustic or not. To this end, we investigate the static, uniform limit of the phase correlation function with respect to its pole structure. We note that the electron-phonon coupling leads to an effective electron-electron interaction that is nonlocal in (imaginary) time. This complicates the numerical evaluation considerably. We therefore only consider the limiting cases of slow phonons and fast phonons in comparison to the time scale of the electron transport. For these two limits, we ask whether the additional electron-phonon interaction supports electron-hole pairing or not. To this end, we analyze the phonon contribution in the gap equations taking the following bare phonon contribution into account:
\begin{equation}
D({\bf q},z_\nu) = -\frac{2\omega_{\bf q}}{z_\nu^2 - \omega_{\bf q}^2} .
\label{barePhononGF}
\end{equation}
First, we assume the phonons to be much slower than the electrons.
We then neglect the frequencies $z_4$ and $z_5$, which appear in the phonon Green function, in the electron-hole pair correlation functions, since they can only attain small values.
In this limit, the equations determining $X({\bf Q},z_\nu)$ and $Y({\bf Q},z_\nu)$, which occur in Eq.~\eqref{Phase_func}, are given in Appendix~\ref{App:slow_quantities}.
The corresponding gap equation reads
\begin{equation}
1 = \frac{1}{-i\beta} \sum_{z_1} \frac{ U R(z_1) }{ 1-|g_{0}|^2 \bar D R(z_1) } ,
\label{GapEq_slow}
\end{equation}
where
\begin{equation}
R(z_1) = \frac{1}{N}\sum_{\bf k} \frac{i}{\Omega({\bf k},z_1)} ,
\label{def_R}
\end{equation}
\begin{eqnarray}
\Omega({\bf k},z_1) &=& \left[ z_1-\bar\varepsilon_{{\bf k} v}-\sigma_{vv}({\bf k},z_1)\right]
\left[ z_1-\bar\varepsilon_{{\bf k} c}-\sigma_{cc}({\bf k},z_1)\right]
\nonumber \\
&&- \left|\Delta_{{\bf k}\bar{\bf Q}} + \sigma_{Fv}({\bf k},z_1) \right|^2 ,
\label{def_Nkz}
\end{eqnarray}
\begin{equation}
\bar D = \frac{1}{-i\beta} \sum_{z_\mu} D(0,z_\mu) .
\label{D_bar}
\end{equation}
The $z_1$ ($z_\mu$) are fermionic (bosonic) Matsubara frequencies.
The structure of the phase correlation function remains complicated in this case. We note, however, that the phonon contribution simply modifies the Coulomb interaction strength in Eqs.~\eqref{appendix_eq1} and \eqref{appendix_eq2}. In the gap equation~\eqref{GapEq_slow}, on the other hand, the phonon contribution enters in a qualitatively different way. This suggests that $P(0,0)$ does not exhibit a pole and, consequently, the phase mode is massive.
In the gap equation for slow phonons, Eq.~\eqref{GapEq_slow}, we find $\bar D = -2p(\omega_0)-1<0$, where $p(x)$ is the Bose distribution function. Since
$\frac{1}{-i\beta}\sum_{z_1} R(z_1) >0$, we conclude that the local Coulomb potential is weakened. Evidently, slow phonons give rise to retardation effects
and thereby induce an effective long-ranged electron-hole interaction potential that reduces the effect of the local Coulomb attraction.
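The value $\bar D = -2p(\omega_0)-1$ rests on the standard bosonic Matsubara-sum identity. The sketch below verifies this identity numerically; the overall sign convention attached to the $1/(-i\beta)$ prefactor is taken from the text and not re-derived here:

```python
import math

# Verify the bosonic Matsubara-sum identity behind Eq. (D_bar):
#   (1/beta) * sum_m  2*omega0 / (nu_m^2 + omega0^2)  =  coth(beta*omega0/2)
#                                                     =  2 p(omega0) + 1,
# with nu_m = 2*pi*m/beta.  With the sign convention of the text, this gives
# |D_bar| = 2 p(omega0) + 1, i.e. D_bar = -(2 p(omega0) + 1) < 0.
beta, omega0 = 8.0, 0.5
M = 200000  # Matsubara cutoff; the tail falls off like 1/m^2

s = (2.0 / omega0) / beta                    # m = 0 term: 2*omega0/omega0^2
for m in range(1, M + 1):
    nu = 2.0 * math.pi * m / beta
    s += 2.0 * (2.0 * omega0 / (nu**2 + omega0**2)) / beta  # +m and -m combined

p = 1.0 / (math.exp(beta * omega0) - 1.0)    # Bose distribution function
print("Matsubara sum:", s)
print("coth(bw/2)   :", 1.0 / math.tanh(beta * omega0 / 2.0))
print("2 p + 1      :", 2.0 * p + 1.0)
```

The truncated sum agrees with $\coth(\beta\omega_0/2)=2p(\omega_0)+1$ to well below $10^{-4}$ for this cutoff.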
In the opposite limit, when the phonons are much faster than the electrons, we can in principle integrate out the lattice degrees of freedom (instantaneous approximation). This limit is technically rather than physically motivated, since in most materials the phonon frequency is much smaller than the characteristic electronic energy scale. Because the qualitative behavior of the phase mode is mainly determined by the underlying symmetry of the state, the instantaneous approximation is nevertheless instructive. In this limit, we can replace the phonon Green function according to $D({\bf q},\tau-\tau')=D({\bf q},0) \delta(\tau-\tau')$. Then, the phase correlation function in the static, uniform limit becomes
\begin{equation}
P(0,0) = \frac{P^{(0)}(0,0)}{1+\left[U-|g_0|^2 D(0,0)\right] P^{(0)}(0,0)} ,
\label{PhaseMode_instant}
\end{equation}
and the gap equation is given by
\begin{equation}
1 = \left[U+|g_0|^2 D(0,0)\right] \frac{1}{-i\beta} \sum_{z_1} R(z_1)
\label{GapEq_instant}
\end{equation}
(again $z_1$ are fermionic Matsubara frequencies).
Obviously, the instantaneous phonons lead to a static renormalization of the Coulomb interaction. However, in the phase correlation function~\eqref{PhaseMode_instant} the phonon contribution $|g_0|^2 D(0,0)$ enters with a negative sign, while it enters with a positive sign in the gap equation~\eqref{GapEq_instant}. This discrepancy rules out a pole of $P(0,z_\nu)$ at $z_\nu=0$. Consequently, the phase mode is massive; see Fig.~\ref{fig:modes_dyn}.
In this limit there are obviously no retardation effects at all, and, because $D(0,0)=2/\omega_0>0$, the phonons enhance the strength of the local Coulomb interaction [cf. Eq.~\eqref{GapEq_instant}].
That is, if the lattice is not deformed statically the phonons affect the electrons in two ways: They enhance the effective masses of the electrons and the holes (thereby modifying the band structure) and renormalize the Coulomb interaction. The former effect is less important for the basic mechanism of exciton condensation. The latter effect, on the other hand, is crucial, since it generates an effective electron-electron interaction that explicitly breaks the $U(1)$ symmetry. This is demonstrated by the diagrams shown in Fig.~\ref{fig:diagrams}. Here, the incoming and outgoing branches at the vertices, i.e., at $\tau$ and $\tau'$, describe the effective two-particle interaction. For the Coulomb interaction, diagramed in the second row of Fig.~\ref{fig:diagrams}, there is one incoming and outgoing branch for the valence electrons (labeled with $v$ and $v^\dagger$, respectively) and one incoming and outgoing branch for the conduction electrons (labeled with $c$ and $c^\dagger$, respectively). Hence, the interaction $V_{\rm Coul} \propto c_{{\bf k}_1 c}^\dagger c_{{\bf k}_2 c}^{\phantom{\dagger}} c_{{\bf k}_3 v}^\dagger c_{{\bf k}_4 v}^{\phantom{\dagger}}$. In the ladder terms arising from the electron-phonon coupling (fourth row in Fig.~\ref{fig:diagrams}) there are two incoming branches of conduction electrons and two outgoing branches of valence electrons (or vice versa), which establish an effective electron-electron interaction
\begin{equation}
V_{\rm ph} \propto c_{{\bf k}_1 c}^\dagger c_{{\bf k}_2 v}^{\phantom{\dagger}} c_{{\bf k}_3 c}^\dagger c_{{\bf k}_4 v}^{\phantom{\dagger}} + c_{{\bf k}_1 v}^\dagger c_{{\bf k}_2 c}^{\phantom{\dagger}} c_{{\bf k}_3 v}^\dagger c_{{\bf k}_4 c}^{\phantom{\dagger}}
\propto \cos(2\phi) .
\label{form_phonInt}
\end{equation}
Here, $\phi$ denotes the phase of $\Delta_{{\bf k}\bar{\bf Q}}$.
An electron-electron interaction of identical form might appear if exchange terms are considered.~\cite{GK72,*GK73,LZ96} Such an interaction fixes $\phi$ and, consequently, destroys the acoustic phase mode.
Let us note that if the electron-phonon interaction is neglected, and the Coulomb interaction is of the form~\eqref{H_ee}, the free energy is independent of $\phi$, which leads to a gapless electron-hole excitation spectrum.~\cite{JRK67}
Without loss of generality the order parameter $\Delta_{{\bf k}\bar{\bf Q}}$ can then be assumed to be real.~\cite{ZFB10, ZFBMB13}
Taking the electron-phonon interaction into account, a possible static lattice distortion (but also the coupling of electrons and holes to dynamical phonons without lattice dimerization) induces a phase fixation and therefore gives rise to a massive phase mode. Of course, a more complicated form of the electron-electron interaction may also lead to a gapped electron-hole excitation spectrum. The phase $\phi$ is determined by extremizing the free energy with respect to $\phi$ (in this regard, the case of a static lattice distortion has been analyzed in Ref.~\onlinecite{ZFBMB13}).
If (the momentum-space quantity) $\delta_{\bar{\bf Q}}$ is real, the phase of $\Delta_{{\bf k}\bar{\bf Q}}$ is pinned to zero or $\pi$, i.e., both $\Delta_{{\bf k}\bar{\bf Q}}$ and the gap parameter $x_{{\bf k}\bar{\bf Q}}$ are real. A dynamical electron-phonon interaction does not necessarily fix the phase of $\Delta_{{\bf k}\bar{\bf Q}}$ to zero or $\pi$; accordingly, $\Delta_{{\bf k}\bar{\bf Q}}$ and $x_{{\bf k}\bar{\bf Q}}$ are, in general, complex.
The phase stiffness is obtained from the second derivative of the free energy with respect to $\phi$. It corresponds to the phase-mode excitation energy for ${\bf Q}=0$. That is, $\omega(0)$ can be taken as a measure for the phase fixation.
\section{Discussion of off-diagonal long-range order of electron-hole pairs}
\label{sec:condensate}
The EI is a promising candidate for observing a BCS-BEC crossover in an equilibrium situation.\cite{BF06,PBF10,ZIBF12} Since both BCS-type superconductors and Bose-Einstein condensates exhibit off-diagonal long-range order (ODLRO),~\cite{Pe51,PO56,Ya62} the question naturally arises whether the EI ground state shows ODLRO as well. Here we follow (in form) the treatment of ODLRO for BCS superconductors (see Annett's textbook,~\onlinecite{An04} Chap.~5.7) and test for possible ODLRO of electron-hole pairs.\cite{HH74}
The one-particle density matrix for bound electron-hole pairs $\rho_1^{\rm X}({\bf R}-{\bf R}')$ is related to the two-particle density matrix for electrons and holes by
\begin{equation}
\rho_1^{\rm X}({\bf R}-{\bf R}') = \int d{\bf r} \int d{\bf r}' \Psi({\bf r}) \Psi({\bf r}')
\rho_2^{\rm e-h} ({\bf r},{\bf r}',{\bf R},{\bf R}'),
\label{singleDM_twoDM}
\end{equation}
where ${\bf R}$ and ${\bf R}'$ denote the center-of-mass coordinates of the excitons, ${\bf r}$ and ${\bf r}'$ are the relative coordinates of the (bound) electron and hole in the exciton, respectively, and $\Psi({\bf r})$ denotes the excitonic wave function. The two-particle density matrix for electrons and holes in Eq.~\eqref{singleDM_twoDM} is given by
\begin{align}
\rho&_2^{\rm e-h} ({\bf r},{\bf r}',{\bf R},{\bf R}') =
\nonumber \\
&\frac{1}{N} \left\langle c_{c}^\dagger ({\bf R}+{\bf r}/2)
c_{v}^{\phantom{\dagger}} ({\bf R}-{\bf r}/2) c_{v}^\dagger ({\bf R'}+{\bf r'}/2)
c_{c}^{\phantom{\dagger}} ({\bf R'}-{\bf r'}/2) \right\rangle .
\label{densMatrix}
\end{align}
ODLRO is present if the one-particle density matrix for electron-hole pairs $\rho_1^{\rm X}({\bf R}-{\bf R}')$ remains finite for arbitrarily large separated pairs. That is, $\rho_2^{\rm e-h} ({\bf r},{\bf r}',{\bf R},{\bf R}')$ [Eq.~\eqref{densMatrix}] stays finite for $|{\bf R}-{\bf R}'|\rightarrow \infty$.
Fourier transformation of $\rho_2^{\rm e-h}$ yields
\begin{eqnarray}
\rho_2^{\rm e-h} &=& \frac{1}{N^2} \sum_{{\bf k},{\bf k}',{\bf q}} \langle c_{{\bf k}+{\bf q}/2 \,c}^\dagger
c_{{\bf k}-{\bf q}/2 \,v}^{\phantom{\dagger}} c_{{\bf k}'-{\bf q}/2 \,v}^\dagger c_{{\bf k}'+{\bf q}/2\, c}^{\phantom{\dagger}} \rangle
\nonumber \\
&&\times e^{i{\bf k}{\bf r}} e^{i{\bf k}'{\bf r}'} e^{i{\bf q}({\bf R}-{\bf R}')} .
\label{rho_FT}
\end{eqnarray}
At this point we stop following Ref.~\onlinecite{An04}, because the order parameter $\Delta_{{\bf k}\bar{\bf Q}}$ gives no deeper insight into the nature of the excitonic ground state: $\Delta_{{\bf k}\bar{\bf Q}}$ is finite at low temperatures regardless of the specific mechanism that drives the phase transition and establishes long-range order (BCS-type electron-hole pairing or condensation of tightly bound, preformed excitons). This is different from BCS superconductors and Bose-Einstein condensates, where the (mean-field) order parameters unambiguously characterize superconductivity and superfluidity, respectively. That is, a decoupling of Eq.~\eqref{rho_FT} that expresses $\rho_2^{\rm e-h}$ in terms of the order parameter $\Delta_{{\bf k}\bar{\bf Q}}$ would be too crude an approximation in our case. We therefore relate the density matrix to the pair correlation functions, which contain valuable information about the forces driving the electron-hole pairing and condensation process.
The extent of the excitons, given by $|{\bf r}|$ and $|{\bf r}'|$, is of the order of the electron-hole pair coherence length, which is small compared with the system size. We therefore neglect the ${\bf r}$ and ${\bf r}'$ dependencies in the following and write
\begin{eqnarray}
\rho_2^{\rm e-h} &=&-\frac{1}{N\beta} \sum_{\bf q} \sum_{z_\nu} X({\bf q},z_\nu)
e^{i{\bf q}({\bf R}-{\bf R}')}
\nonumber \\
&=&-\sum_{\bf q} e^{i{\bf q}({\bf R}-{\bf R}')} I_{\bf q} ,
\label{rho_calc}
\end{eqnarray}
with $X({\bf q},z_\nu)=\frac{i}{N}\sum_{{\bf k},{\bf k}'} G_2({\bf q},{\bf k},{\bf k}',z_\nu)$ ($z_\nu$ are bosonic Matsubara frequencies).
The condition for ODLRO can only be satisfied if $\rho_2^{\rm e-h}$ contains averages $I_{\bf q}$ of the order of unity.~\cite{Ya62}
Since we have found only one pole for a given momentum in our numerics, in what follows we restrict ourselves to the case that $X({\bf q},z_\nu)$
exhibits a single pole (the generalization to multiple poles would be straightforward). We have
\begin{equation}
I_{\bf q} = \frac{1}{N\beta} \sum_{z_\nu} X({\bf q},z_\nu) = \frac{1}{N} R({\bf q},\omega_X) ,
\label{I_q}
\end{equation}
where $R({\bf q},\omega_X)$ is the residue of the pole $\omega_X$ at momentum ${\bf q}$. For sufficiently low-lying poles we find $R({\bf q},\omega_X) \propto p(\omega_X)$ (note that the boundary to the electron-hole continuum is located at finite energies).\cite{ZIBF12} For $R({\bf q},\omega_X)$ to be of order $N$, $\omega_X$ must vanish. That is, the presence of ODLRO of electron-hole pairs implies a gapless electron-hole excitation spectrum.
Since any finite electron-phonon coupling introduces a gap, in our model ODLRO is only present if the EI phase transition is driven by the electronic correlations caused by the Coulomb interaction of type Eq.~\eqref{H_ee}.
Regardless of the particular driving mechanism, $\Delta_{{\bf k}\bar {\bf Q}}$ serves as an order parameter for the low-temperature long-range ordered phase.
Phase fluctuations of $\Delta_{{\bf k}\bar{\bf Q}}$ may destroy the ordered state, e.g., in one-dimensional systems or two-dimensional systems at finite temperatures.\cite{Ho67} The lattice degrees of freedom may suppress these fluctuations, supporting thereby long-range order.
In this connection, we would like to emphasize that the nature of the ordered low-temperature phase in the purely electronic model, which exhibits a $U(1)$ symmetry in the normal phase, differs significantly from that in the model including the coupling to the lattice, where the $U(1)$ symmetry is absent even in the normal phase. For the latter, ODLRO is absent (see the discussion above), and we therefore suppose that a finite $\Delta_{{\bf k}\bar{\bf Q}}$ is not indicative of any kind of ``electron-hole pair condensate" with ``supertransport" properties. To date, the identification of a measurable quantity verifying ODLRO in the materials considered as potential candidates for realizing the EI phase is, to the best of our knowledge, an open problem.
\section{Conclusions}
\label{sec:conclusion}
In this work we have revisited the conditions under which an excitonic insulator (EI) forms. In particular, we have analyzed the effects of an explicit electron-phonon interaction $H_{\rm e-ph}$. The potential EI state may then possess a static lattice distortion. We have shown that $H_{\rm e-ph}$ does not change the single-particle spectra qualitatively, even if self-energy effects are taken into account. However, $H_{\rm e-ph}$ significantly modifies the electron-hole pair spectrum. To demonstrate this, we have calculated the contributions of the electron-phonon interaction to electron-hole pairing within the Kadanoff-Baym approach, including ring and ladder diagrams. When the electron-phonon coupling is neglected, the phase mode is acoustic. Electron-lattice coupling destroys the acoustic mode regardless of whether it causes a static lattice distortion or renormalizes the effective electron-electron interaction.
We pointed out that an acoustic phase mode implies the presence of off-diagonal long-range order (ODLRO) and therefore indicates---in a strongly coupled electron-hole system---an exciton ``condensate". This applies to the EI phase in purely electronic models such as the extended Falicov-Kimball model.~\cite{Ba02b, BGBL04, IPBBF08,ZIBF12} Since the lattice degrees of freedom play a non-negligible role in most of the (potential) EI materials considered so far, they should, according to the reasoning of this paper, prevent the appearance of an acoustic phase mode. Hence these materials are better described as unusual (gapped) charge-density-wave systems than as true exciton condensates with supertransport properties (cf. the remark by Kohn in the supplementary discussion of Ref.~\onlinecite{HR68a}).
To realize an exciton condensate in equilibrium experimentally, bilayer systems, such as graphene double layers and bilayers,~\cite{LY76, LY77,LS08,DH08,MBSM08,KE08,PF12,PNH13} are the most promising candidates at present. Since the interband tunneling processes can be suppressed by suitable dielectrics, an acoustic collective mode, and hence ODLRO, may emerge.
In these systems electrons and holes occupy different layers, and the exciton condensation is presumably accompanied by counterpropagating supercurrents in the two layers~\cite{LY76} or, equivalently, by a dipolar supercurrent.~\cite{BJL04}
Let us finally emphasize that the numerical results presented in this work are obtained using rather crude approximations. A more elaborate numerical treatment is therefore highly desirable.
A possible next step is to calculate the dynamical structure factor, which is accessible experimentally by electron energy-loss spectroscopy.~\cite{WSKKBBB11} Collective modes show up there as peaks, so the acoustic phase-mode problem could be addressed directly.
The phase invariance leading to the acoustic phase mode might also be reflected in Josephson-like phenomena induced by tunneling excitons.
Moreover, the behavior of the plasmon mode in the low-temperature state has not been elaborated yet. This mode is generated by intraband correlations and shows an acoustic behavior in the normal phase.~\cite{GU73}
We mentioned that the inclusion of exchange terms in the Coulomb interaction destroys the phase invariance, just as the electron-phonon interaction considered in this work does.~\cite{GK73, LZ96} However, electron-electron and electron-phonon interactions need not favor the same value of the phase of $\Delta_{{\bf k}\bar{\bf Q}}$; it is therefore interesting to analyze the consequences of their interplay. In particular, the realized phase may change upon cooling the system.
Another worthwhile continuation concerns the possible formation and condensation of ``polaron excitons," i.e., the buildup of a condensate of excitons which are dressed by a phonon cloud.
\section*{Acknowledgements}
This work was supported by the DFG through SFB 652, project B5.
\bibliographystyle{apsrev}
\section{Level-Wise Algorithm}
\label{sec:approximation-algorithms}
The \prob{Level-wise} algorithm solves the CECD problem using a greedy approach.
It returns a design whose concepts all come from the same level
of the input taxonomy.
For each level, our algorithm finds
the design with maximum queriability using the algorithm
proposed in \cite{Termehchy:SIGMOD:14}, called
approximate popularity maximization ({\it APM} for short),
which finds a cost-effective subset of
a given set of concepts.
It eventually delivers the
design with the largest queriability across all levels
of the taxonomy.
Precisely, let $\mC[i]$ be the set of all concepts of
depth $i$ in $\mX=(R, \mC, \mR)$.
For any concept $C\in \mC[i]$,
we define its popularity $u(C)$
to be the total popularity of its descendant
leaf concepts in $\mX$.
The level-wise algorithm calls the APM algorithm
to find the cost-effective
subset of concepts for every $\mC[i]$.
It also computes the queriability of the design
that contains only the most popular leaf concept, i.e.,
the leaf concept with the maximum $u$ value.
It then compares the designs selected across the $\mC[i]$s
and returns the one with maximum queriability
as its solution for the problem of CECD over taxonomy $\mX$.
Figure~\ref{fig:level-or-max} illustrates the
level-wise algorithm.
Let $\card{\mC}$ denote the
number of concepts in taxonomy $\mX$ and $h$ its height.
The APM algorithm runs in $O(\card{\mC} \log\card{\mC})$ time.
Thus, the time complexity of the level-wise algorithm over
taxonomy $\mX$ is $O(h \card{\mC}\log\card{\mC})$.
In addition to being efficient,
the level-wise algorithm also has a bounded and reasonably
small worst-case approximation ratio for an
interesting case of the CECD problem.
It is sometimes easier
to use and manage designs whose
concepts are not subclasses/superclasses of each other.
We call such a design a {\it disjoint design}.
Our empirical results in Section~\ref{sec:experiment}
show that this strategy returns effective
designs when the budget is relatively small.
In this case, we restrict
the feasible solutions of the CECD problem to be disjoint.
We call this case of CECD {\it disjoint CECD}.
Recent empirical results suggest
that the distribution of concept frequencies over a
large collection generally follows a \emph{power law} distribution
\cite{Probase:Wu:12}.
We show that the \prob{level-wise} algorithm
has a bounded and reasonably small worst-case
approximation ratio for disjoint CECD,
given that the distribution of concept
frequencies follows a power law.
The following lemma bounds the queriability
obtained from the free concepts in any
solution under this assumption.
\begin{lemma}
\label{lemma:costly-intell}
Let $C_{\max}$ be the leaf concept in
taxonomy $\mX=(R, \mC, \mR)$ with the maximum $u$ value, and assume that the distribution of $u$ over leaf concepts follows a \emph{power law} distribution.
Let $\mS$ be any schema. Then,
\begin{align*}
QU(\free(\mS)) \leq 2u(C_{\max})\log\card{\mC}.
\end{align*}
\end{lemma}
\begin{proof}
We have:
$$\sum_{C\in \free(\mS)} u(C)d(C) \leq u(C_{\max}) \sum_{C\in \free(\mS)}d(C).$$
Since the frequencies of leaf concepts in $\mX$ follow a ``power law''
distribution,
\begin{align*}
\sum_{C \in \leaf(\mC)} d(C) \leq 1+\log(\card{\leaf(\mC)}),
\end{align*}
where $\leaf(\mC)$ is the set of leaf concepts in $\mC$
and $\card{\leaf(\mC)}$ is the number of such concepts.
Since $\card{\leaf(\mC)} \leq \card{\mC}$,
$$QU(\free(\mS)) \leq \sum_{C\in \free(\mS)} u(C)d(C) \leq (1 + \log\card{\mC})\, u(C_{\max}) \leq 2u(C_{\max})\log\card{\mC}.$$
\end{proof}
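The harmonic-sum estimate behind the power-law bound $\sum_{C} d(C) \leq 1+\log\card{\leaf(\mC)}$ can be checked quickly. The Zipf-like frequencies $d(C)=1/\mathrm{rank}(C)$ used below are an illustrative instance of the assumption, not data from the paper:

```python
import math

# Illustrative check of the harmonic-sum bound used in the lemma: for Zipf-like
# frequencies d(C) = 1/rank(C), the total mass sum_C d(C) = H_n is at most
# 1 + ln(n).  (The ranks and frequencies here are hypothetical.)
for n in (10, 100, 10000):
    harmonic = sum(1.0 / r for r in range(1, n + 1))  # H_n = sum_C d(C)
    bound = 1.0 + math.log(n)
    print(f"n={n:6d}  sum d(C) = {harmonic:.4f}  <=  1 + ln n = {bound:.4f}")
    assert harmonic <= bound
```

The bound is the classical estimate $H_n \leq 1+\ln n$ for the $n$-th harmonic number.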
\begin{figure}[h]
\begin{algo}
\\ \underline{\textbf{\prob{Level-wise}} \;\; \Comment{Input: $\left<\mX\right>$}}
\\
\\ $\sol_{\tt level} \leftarrow 0$ and $\sol_{\tt max} \leftarrow 0$
\\
\\ \Comment{\textbf{Return the output of the best level}}
\\ For $i=0$ to $h$ do
\\\> $\mC_i \leftarrow \emptyset$
\\\> For each concept $C$ at distance $i$ from the root
\\\>\> $\mC_i \leftarrow \mC_i \cup \set{C}$
\\\> $\sol_{i} \leftarrow \text{approximate solution over } \mC_i$
\\\> $\sol_{\tt level} \leftarrow \max(\sol_{\tt level}, \sol_{i})$
\\
\\ \Comment{\textbf{Most Popular Leaf Concept Only}}
\\ Let $C_{\max}$ be the leaf concept with the largest $u$ value.
\\ $\sol_{\tt max} \leftarrow u(C_{\max}) + \sum_{C\in\free(C_{\max})} u(C)d(C)$
\\
\\ Return the best of $\sol_{\tt level}$ and $\sol_{\tt max}$
\\
\end{algo}
\vspace{-0.5 cm}
\caption{Level-wise algorithm.}
\label{fig:level-or-max}
\end{figure}
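For concreteness, the procedure in the figure can be sketched as follows. The data representation, the black-box per-level solver, and the treatment of $\free(C_{\max})$ as all remaining leaf concepts are illustrative assumptions, not part of the algorithm's specification:

```python
# Sketch of the level-wise algorithm.  A taxonomy is modeled as a dict
# mapping depth i to the list of concepts at distance i from the root;
# each concept carries popularity u, frequency d, and a leaf flag.
# solve_level is an assumed black-box approximate solver for the
# single-level subproblem.
def level_wise(taxonomy, budget, solve_level):
    sol_level = 0.0
    leaves = []
    for depth in sorted(taxonomy):
        concepts = taxonomy[depth]           # C_i: concepts of depth i
        sol_level = max(sol_level, solve_level(concepts, budget))
        leaves.extend(c for c in concepts if c.is_leaf)

    # Fall-back: annotate only the leaf concept with the largest u value;
    # the other leaves (approximating free(C_max)) contribute u(C) * d(C).
    c_max = max(leaves, key=lambda c: c.u)
    sol_max = c_max.u + sum(c.u * c.d for c in leaves if c is not c_max)

    return max(sol_level, sol_max)
```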
\begin{theorem}
\label{theorem:approx}
Let $\mX = (R, \mC, \mR)$ be a taxonomy with height $h$
and minimum accuracy $\pr_{\min}$ $=\min_{C\in \mC}{\pr(C)}$.
The \prob{Level-wise} algorithm is an
$O({h+\log\card{\mC}\over\pr_{\min}})$-approximation for the CECD problem with disjoint solutions on $\mX$ and budget $B$,
given that the distribution of frequencies in $\mC$
follows a power law distribution.
\end{theorem}
\begin{proof}
Let $\mS^*$ be a disjoint schema over $\mX$ with total
cost at most $B$ that maximizes the $QU$ function.
Let $\mS^*[i]$ be the set of concepts in $\mS^*$ of depth $i$.
By the definition of disjointness,
$\part(\mS^*[i])$ $\cap$ $\part(\mS^*[j]) = \emptyset$,
for all $1\leq i < j\leq h$. It follows:
\[
QU(\mS^*) = \sum_{1\leq i\leq h}{QU(\mS^*[i])} + QU(\free(\mS^*)),
\]
where $QU(\free(\mS^*)) = {\sum_{C\in \free(\mS^*)} u(C)d(C)}$
is the queriability obtained from the
free concepts in $\mS^*$.
We consider two possible cases.
First, assume that $\sum_{i=1}^h$ $QU(\mS^*[i])$ $\geq QU(\free(\mS^{*}))$.
Then $\sum_{i=1}^h QU(\mS^*[i]) \geq QU(\mS^*)/2$, so some level $i$ satisfies $QU(\mS^*[i]) \geq QU(\mS^*)/(2h)$, and the output of the \prob{level-wise} algorithm is a $(2h/\pr_{\min})$-approximation. In the other case, in which $QU(\free(\mS^{*})) \geq$ $\sum_{1\leq i\leq h} QU(\mS^*[i])$, Lemma~\ref{lemma:costly-intell} implies that extracting the
concept with the maximum $u$ value gives a $(4\log(\card{\mC})/\pr_{\min})$-approximation.
These two cases together imply an $O({h+\log\card{\mC}\over\pr_{\min}})$-approximation.
\end{proof}
\noindent
The value of $\pr_{\min}$ is generally large because concept annotation algorithms are reasonably accurate \cite{ChiticariuLRR10,McCALLUM:ACMQueue:05}.
\begin{comment}
Let $\mX = (R, \mC, \mR)$ be a taxonomy with height $h$
and the minimum accuracy of $\pr_{\min}$ $=\min_{C\in \mC}{\pr(C)}$.
an algorithm that return the best of level-wise algorithm's output and the algorithm annotating only the most popular concept (see Figure~\ref{fig:level-or-max}) is an
$\max(2h, 4\log(\card{\mC}))\over\pr_{\min}$-approximation for the CESD problem with disjoint solution on $\mX$ and budget $B$.
\begin{figure}[h]
\begin{algo}
\\ \underline{\textbf{\prob{Level-wise}} \;\; \Comment{Input: $\left<\mX\right>$}}
\\
\\ $\sol_{\tt level} \leftarrow 0$ and $\sol_{\tt max} \leftarrow 0$
\\
\\ \Comment{\textbf{Level-wise}: Return the output of the best level}
\\ For $i=0$ to $h$ do
\\\> $R_i \leftarrow \emptyset$ \Comment{Root of the corresponding tree of level $i$}
\\\> For each concept $C$ in distance $i$ from the root
\\\>\> $\mC_i \leftarrow \mC_i \cup C$
\\\>\> $\mR_i \leftarrow (R_i, C)$
\\\> $\mX_i \leftarrow (R_i,\mC_i,\mR_i)$
\\\> $\sol_{i} \leftarrow \prob{APM-Algo}(\mX_i)$
\\\> $\sol_{\tt level-wise} \leftarrow \max(\sol_{\tt level-wise}, \sol_{i})$
\\
\\ \Comment{\textbf{Most Popular Concept Only}}
\\ Let $C_{\max}$ be the concept with the largest $u$ value.
\\ $\sol_{\tt max} \leftarrow u(C_{\max}) + \sum_{C: C \neq C_{\max}} u(C)d(C)$
\\
\\ Return the best of $\sol_{\tt level-wise}$ and $\sol_{\tt max}$
\\
\end{algo}
\vspace{-0.5 cm}
\caption{Return the best of level-wise approach and annotating the concept with maximum $u$ value only.}
\label{fig:level-or-max1}
\end{figure}
\subsection{Dynamic Programming Method}
Let $C$ be any concept of the tree taxonomy $\mX = (R, \mC, \mR)$.
Let $QU(C)$ be the queriability we obtain by picking a design whose sole member is $C$, and $QU_{\max}$ be the maximum queriability
achieved over $\mX$
value of $QU(\cdot)$ over .
Finally let $b(C)$ be the budget we need to choose $C$.
The subproblem $P(C, q)$ asks for the minimum budget of any disjoint design in the sub-taxonomy rooted at $C$ with queriability $q$.
We would like to find the maximum $q$ for which $P(R, q)$ is not larger than $B$, where $R$ is the root of $\mX$.
We start by describing a recursive solution for $P(C,q)$. $C_{\ell}$ and $C_r$ are left and right children of $C$, respectively.
\[
P(C, q) = \min{\left( b(C), \min_{0\leq q_{\ell}\leq q}{\left( P(C_{\ell}, q_{\ell}) + P(C_r, q-q_{\ell}) \right)} \right)}
\]
A dynamic programming algorithm can be built based on this recursive relation.
To that end, we build a table of size $q_{\max} \cdot n$, and fill it based on our recursive relation.
Having all subproblems solved $P(C,q)$ can be computed in $O(q) = O(q_{\max})$ time, because we need to consider $q$ possibilities for $q_{\ell}$.
Therefore, the total running time of the algorithms is $O(q_{\max}^2 n)$.
We follow the classic method of designing FPTAS for the regular knapsack problem.
First, we define $K = \varepsilon\cdot q_{\max}/n$. We scale the queribilities by $K$ for any concept $C\in\mC$.
\[
q'(C) = \lfloor q(C)/K \rfloor
\]
As a result the maximum value of $q'(\cdot)$ is bounded by $n/{\varepsilon}$.
Therefore, the pseudo-polynomial algorithm works in $O(n^3\cdot \varepsilon^{-2})$ time.
It remains, to show that the the output is close to the optimal solution.
To that end, let $S'$ be the output of our scaled pseudo-polynomial algorithm, and let $S^*$ be the optimal solution.
We use $q(S)$ and $q'(S)$ for any set $S$ of concepts to denote its queribilaty based on the functions $q(\cdot)$ and $q'(\cdot)$, respectively.
We have:
\begin{equation}
\label{eqn:fptasOne}
q(S') \geq K\cdot q'(S') \geq K\cdot q'(S^*).
\end{equation}
The first inequality is because of the floor in the definition of $q'(\cdot)$, and the second one is because $S'$ is the optimal solution with respect to $q'(\cdot)$.
By the definition of $q'(\cdot)$, and because $S^*$ is composed of at most $n$ items, we have:
\begin{equation}
\label{eqn:fptasTwo}
q'(S^*) \geq \frac{q(S^*)}{K} - n.
\end{equation}
Putting (\ref{eqn:fptasOne}) and (\ref{eqn:fptasTwo}) together, we obtain:
\[
q(S') \geq q(S^*) - nK = q(S^*) - \varepsilon\cdot q_{\max} \geq (1-\varepsilon)q(S^*).
\]
\subsubsection{Taxonomies With Equally Costly Concepts}
If there is not any reliable
cost estimation for concepts in a taxonomy,
an enterprise may assume that it is almost equally costly
to develop and maintain
programs to identify and organize entities of different concepts.
This is an special case of CESD problem
where the concepts are equally costly.
In other words, given taxonomy $\mX$, we like to find
schema $\mS$ with the most queriability and
minimum number of concepts.
\begin{theorem}
\label{th:equalcost}
Level-wise algorithm is $O(\log(|\mC|))$-approximation
on taxonomy $\mX = (R, \mC, \mR)$ with equally costly concepts.
\end{theorem}
\begin{proof}
Without loss of generality, we assume that each concept has
a unit cost and the budget is $K \geq 1$.
Let $\mS^*$ be the optimal schema.
We have $QU(\mS^*) = QU(\free(\mS^*)) + QU(\part(\mS^*))$,
where $\part(\mS^*)$ is the set of partitions in $\mS^*$.
It is reasonable to assume that the accuracy of
extracting a concept is larger than random selection \cite{IEModel:Downey}. More formally, if $L$ is a final concept,
we have $u(L)\pr(L)$ $\geq u(L) d(L)\over d(p) \pr(p)$.
The queriability of a partition that contains a single
leaf node $L$ is greater than the queriability of
every partition whose root is an ancestor of $L$.
Let $C_i$, $1 \leq i \leq K$ denote the leaf nodes with
maximum $u$ values. We have:
$$\sum_{i=1}^{K} u(C_i) \geq QU(\part(\mS^*))$$
According to Lemma~\ref{lemma:costly-intell}, we have
$$u(C_{max}) \geq \frac{QU(\free(\mS^*))}{(1+\log(|\mC|))}$$.
Hence, an algorithm that picks the
top $K$ popular final concepts in $\mX$
has an approximation ratio of $O(\log(|\mC|))$.
\end{proof}
We call the queriability of schema $\mS$ over taxonomy $\mX$ without
considering the queriability of the free concepts
the {\it costly queriability} of $\mS$,
denoted as $CQU(\mS)$.
Given a fixed budget, the schemas with
maximum total queriability and maximum
costly queriability over $\mX$
may be different. The following
lemma establishes a relationship between the total and costly
queriabilities of these schemas.
\begin{lemma}
\label{lemma:costly-intell-mx}
Given a fixed budget $B$, let $\mS^*$ and $\mS^*_{cost}$
be the schema with maximum total
and costly queriabilities over
taxonomy $\mX$, respectively.
We have $QU(\mS^*)$ $\leq 2 QU(\mS^*_{cost})$.
\end{lemma}
\begin{proof}
Let $\free(\mS^*)$ denote the set of free
concepts in the optimal solution $\mS^*$.
The queriability of partition $\free(\mS^*)$,
$QU(\free(\mS^*))$, is:
$${\sum_{C\in \free(\mS^*)} u(C)d(C)}.$$
Let $C_{max}$ be the final concept in $\mX$
with maximum $u$ value.
Because we have:
$${\sum_{C\in \free(\mS^*)} u(C)d(C) \leq u(C_{max})} ,$$
the queriability of the set of free concepts
in the optimal solution $\mS^*$
is at most equal to $u(C_{max})$.
On the other hand,
$\mS^*_{cost}$ maximizes the value of
costly queriability over $\mX$, therefore,
$u(C_{max}) \leq$ $CQU(\mS^*_{cost})$.
Thus, we have $QU(fr(\mS^*)) \leq$ $CQU(\mS^*_{cost})$.
Because $QU(\mS^*) = $ $CQU(\mS^*)$ $+ QU(fr(\mS^*))$,
we have $QU(\mS^*)$ $\leq 2 QU(\mS^*_{cost})$.
\end{proof}
\end{comment}
\iffalse
\subsection{Multidimensional Knapsack-based approach}
By Lemma~\ref{lemma:costly-intell}, if we ignore the queriability
of free concepts and design an
$\alpha$-approximation for the
modified problem we will have a
$2\alpha$-approximation for the original problem.
Since the concepts in the selected schema, $\mS$,
should be pair-wise disjoint,
the queriability obtained from the
partition corresponds to a concept $C \in \mT$ is
$$\sum_{C'\in \ell(T_C)} u(C') d(C') / \sum_{C'\in \ell(T_C)} d(C'),$$
no matter what other concepts in $\mS$ are.
\iffalse
Thus, for each concept in the taxonomy,
we define the queriability of set of final concepts $T_i$
in $\mX$, $QU(T_i)$, to be equal to
$$\sum_{C\in T_i} u(C_i) d(C_i) / \sum_{C\in T_i} d(C_i).$$
\fi
As we described, the selected concepts, $\mS$,
are required to be disjoint.
This means that on any $(r,\ell_i)$-path
$p_i$ from the root of $T$ to a
leaf of $T$, $\ell_i$, we should pick at most one concept of $p_i$.
In other words, for each $\ell_i\in \ell(T)$, $\card{p_i \cap \mS} \leq 1$.
Since in the modified problem each node of $T$ has a fixed benefit and for
each leaf there exists a set of disjointness constraints and the goal is to maximize the total
benefit, we can apply an known algorithm for \textsc{Knapsack} problem with
multiple constraints. In the following we presented the problem in the
form of \textsc{Knapsack} problem with multiple constraints.
\begin{center}
\begin{boxedminipage}{0.4\textwidth}
\qquad\quad \Comment{$\set{T_1,\cdots,T_n}$ are nodes of $T$}
\vspace{-0.2in}
\begin{align*}
\\ \max \quad & \sum_{T_i \in T} p(T_i) x(T_i) &\\
\text{s.t.} \quad & \sum_{T_i\in T} w(T_i) x(T_i) \leq B \\
& \sum_{T_i\in \ell} x(T_i) \leq 1 & \forall \ell \in \ell(T)\\
& x(T_i) \in \set{0,1} & \forall T_i \in T
\end{align*}
\end{boxedminipage}
\end{center}
\textsc{Knapsack} with multiple constraints is an special case of \textsc{Packing Integer Program} (PIP) problem.
\begin{definition}
Consider $A \in [0,1]^{d\times n}$, $b\in[1,\infty)^d$, and $c\in [0,1]^n$ such that $\max_j {c_j} \leq 1$, the goal of \textsc{Packing Integer Program} (PIP) is to maximize $c^Tx$ subject to $x\in\set{0,1}^n$ and $Ax \leq b$. Moreover if values of $A_{i,j}$ are integral, $b$ is assumed to be integral too. Let $B = \min_i b_i$.
\end{definition}
Srinivasan \cite{Srinivasan99} proved that in polynomial time we can find an integral solution of value $\Omega(\opt^{B+1\over B}/d^{1/B})$. In our instance of \textsc{Knapsack} problem with multiple constraint, we are working with an instance of PIP where $B=1$ and $d=n$. Thus Srinivasan approach gives a solution of value $\Omega(\opt^2/n)$. \\
Moreover via the standard randomized rounding technique, we can obtain an integral $O(\log n)$-approximate solution of the LP-relaxation of the CESS with disjoint solutions. First we solve the LP-relaxation to find an optimal fractional solution $x$ and then we pick each concept $T_i$ with probability $T_i/\log n$. It is straightforward to verify that with high probability $x$ we construct a feasible solution and further the benefit of the solution is within an $O(\log n)$ factor of the optimal solution.
\iffalse
Following result for Knapsack has been shown in \cite{KellererPP04},
\begin{lemma}\label{lem:ptas-multiple-constraint-knapsack}
There exists a PTAS for Knapsack with multiple constraints with running time $O(n^{\ceil{d/\eps}-d})$ where $d$ is the number of constraints.
\end{lemma}
Hence, by Lemma~\ref{lemma:costly-intell} the problem of CESD where the optimal distortion is a disjoint schema admits an almost
2-approximation algorithm .
\begin{theorem}\label{thm:disjoint-2-approx}
There is a $2+\eps$-approximation algorithm for the disjoint-MIS problem on a tree that runs in poly$(n,1/\eps)$.
\end{theorem}
\fi
\fi
\section{Related Work}
\label{sec:background}
Researchers have noticed the overheads and costs
of curating and organizing large data sets \cite{Dong:2012:LMS:2448936.2448938,Resource:Kanani,OptimizeSQLText:Jain}.
For example, some researchers have recently considered
the problem of selecting data sources for fusion such that
the marginal cost of acquiring a new data source
does not exceed its marginal gain, where cost and gain are
measured using the same metric, e.g., US dollars
\cite{Dong:2012:LMS:2448936.2448938}.
Our work extends this line of research by
finding cost-effective designs over unstructured
or semi-structured data sets, which help users query
and explore these data sets more easily.
We also use a different model, in which the cost and
benefit of annotating concepts can be measured
in different units.
There is a large body of work on building
large-scale data management systems for
annotating and extracting entities and relationships
from unstructured and semi-structured data sources
\cite{ChiticariuLRR10,webconcept:ragu}.
In particular, researchers have proposed several techniques
to optimize the running time,
required computational power, and/or storage consumption
of concept annotation programs by processing only
a subset of the underlying collection that is
more likely to contain mentions to entities
of a given concept
\cite{SearchCrawl:Ipeirotis,OptimizeSQLText:Jain,PrioritizationIE:Huang,Resource:Kanani}.
Our work complements these efforts by
finding a cost-effective set of concepts for annotation
in the design phase.
Further, our framework can handle
costs of creating and maintaining an
annotated data set beyond computational overheads.
Researchers have examined the problem of
selecting a cost effective subset of concepts
from a set of concepts for annotation \cite{Termehchy:SIGMOD:14}.
Concepts in many real-world domains, however, are maintained
in taxonomies rather than unorganized sets.
We build on this line of work by
considering the superclass/subclass relationships
between concepts in taxonomies to find
cost-effective designs.
Because taxonomies have richer structures
than sets of concepts, they present new
opportunities for finding cost-effective designs.
For instance, an enterprise may not have
sufficient budget to annotate a concept $C$ in
a dataset, but have adequate resources to
annotate occurrences of a superclass of $C$, such as $D$,
in the dataset. Hence,
to answer queries about entities of $C$,
the query interface may examine only
the documents that contain mentions to the entities of $D$.
As the query interface does not need to
consider all documents in the data set,
it is more likely that it returns
relevant answers for queries about $C$.
Because the algorithms proposed in \cite{Termehchy:SIGMOD:14}
do not consider superclass/subclass relationships between
concepts, one cannot use them to find cost-effective
designs over taxonomies.
Moreover, as we prove in this paper, it is more
challenging and harder to find cost-effective designs
over taxonomies than over sets of concepts.
Researchers have proposed methods to
semi-automatically construct or expand taxonomies by
discovering new concepts from large text collections \cite{Clarka:IPM:2012}.
We, however, focus on the problem of annotating instances
of the concepts in a given taxonomy over an unstructured
or semi-structured data set.
Conceptual design has been an important problem
in data management from its early days \cite{DBBook}.
Generally, conceptual designs have been created manually
by experts who identify the relevant concepts in
a domain of interest.
Because an enterprise may not be able to afford
annotating the instances of all relevant concepts
in a domain, this approach cannot be applied to
large-scale concept annotation. In fact,
our empirical studies indicate that
adopting this approach does not generally return
cost-effective conceptual designs for annotation.
Researchers have studied the problem of
predicting the costs of developing or maintaining
pieces of software \cite{Boehm:SoftwareCost}.
Our work is orthogonal to the methods used for estimating
the costs of creating and maintaining concept
annotation modules.
\begin{comment}
Taxonomies and ontologies have been used in some areas of
data management, such as data integration and
query refinement \cite{Calvanese:CM:2009,BolinDing:SIGMOD:2012,Lenzerini:CIKM:2011}.
We extend this line of research by using taxonomies in schema design.
There are some efforts to formally model the process
of information extraction
over text documents from a data management perspective \cite{Spanners:Fagin:13}.
Our framework is orthogonal to the process of
information extraction and
finds a schema for the organized data set that can answer
input queries most effectively given a fixed budget.
Researchers have proposed methods to summarize
complex conceptual, relational, and XML schemas in to
fixed number of clusters
\cite{Summarizing-Relational,Summarizing-XML,GraghSummarySchema:Yang:11,TopicalStructureSchema:Wu:08,ERClustering:Tavanaa:07}.
Each cluster in a summary may be represented by an element of the original schema
(e.g. relational table) \cite{Summarizing-Relational,Summarizing-XML,QueryComplexDB:Yu:2007}.
A cost effective schema, however, generally contains concepts other than
the ones in the ideal schema and may not cover all elements of the original schema.
The problems of schema summarization and cost
effective schema selection
have also different objectives.
Schema summarizations mainly help users to browse and learn
the full schema more easily, but
cost effectively schemas selection maximize the effectiveness of
queries where data set or queries
are represented according to the selected schemas.
Furthermore, the problem of schema summarization
assumes that all clusters are equal costly.
Since each entity in a concept also belongs to its
superclass concepts, one may consider a concept as a view over
its subclass concepts. There is a large body of work on using a set of
materialized views instead of the original database schema
to answer some queries more efficiently. An excellent survey on these
efforts can be found in \cite{ViewsSurvey:Halevy:01}.
As opposed to this problem, we like to maximize the effectiveness of queries in the
cost effective schema selection problem as our
{\it views} (concepts) may contain a large number of
non-relevant answers to the queries.
We also have to answer all queries over the selected schema.
In some setting, e.g. if the original database is not available,
views may not contain all answers to the queries \cite{ComplexityViews:Abiteboul:98,ViewsSurvey:Halevy:01}.
In the problem of cost effective schema selection,
the general concepts contains sufficient
amount of information to answer the queries.
Researchers have proposed methods to discover the schema of the
noisy outputs of information extraction modules \cite{SchemaDiscovery:Cafarella}.
We, however, find the set of concepts in which one should organize information and/ or express queries
given their associated costs.
\end{comment}
\section{Cost-Effective Design for DAG Taxonomies}
\label{sec:dag-taxonomy}
\subsection{Directed Acyclic Graph Taxonomies}
While taxonomies are traditionally in the form of trees,
many of them have evolved into \emph{directed acyclic graphs} (DAGs)
to model more involved subclass/superclass
relationships between concepts
in their domains. Figure~\ref{fig:schema-org} shows fragments of
the {\it schema.org} taxonomy. Some concepts in
this taxonomy are included in multiple superclasses.
For example,
a {\it hospital} is both a {\it place} and an {\it organization};
a tree structure therefore cannot represent these relationships.
Formally, a {\it directed acyclic graph taxonomy}
$\mX = $ $(R, \mC, \mR)$, ({\it DAG taxonomy} for short),
is a DAG, with vertex set $\mC$, edge set $\mR$, and \textit{root} $R$.
$\mC$ is a set of concepts, and
$(D, C) \in \mR$ iff $D, C \in \mC$ and
$D$ is a superclass of $C$. Finally, $R$ is a node
in $\mX$ without any superclass.
A concept $C \in \mC$ is a {\it leaf concept} iff it has no subclass in $\mX$; i.e., there is
no node $D \in \mC$ such that $(C, D) \in \mR$.
The definitions of {\it child}, {\it ancestor},
and {\it descendant}
over tree taxonomies naturally extend to DAG taxonomies.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[->,>=stealth',scale=0.84]
\tikzstyle{every node}=[circle,draw,fill=black!50,inner sep=1pt,minimum width=3pt,font=\scriptsize]
\tikzstyle{every label}=[rectangle,draw=none,fill=none,font=\scriptsize]
\node (thing) [label=above right:$thing$] at (1.25,3) {};
\node (place) [label=left:$place$] at (0,2) {};
\node (org) [label=left:$organization\ \ \ $] at (5,2) {};
\node (airline) [label=below:$airline$] at (5.5,1) {};
\node (NGO) [label=right:$NGO$] at (6,1) {};
\node (local-business) [label= left:$local\ business\ \ \ $] at (5,1) {};
\node (movie-theater) [label=left:$movie\ theater$] at (-1,1) {};
\node (hospital) [label=left:$hospital\ \ $] at (2,0.2) {};
\draw [->] (thing) -- (place);
\draw [->] (place) -- (movie-theater);
\draw [->] (place) -- (hospital);
\draw [->] (thing) -- (org);
\draw [->] (org) -- (local-business);
\draw [->] (local-business) -- (hospital);
\draw [->] (org) -- (airline);
\draw [->] (org) -- (NGO);
\end{tikzpicture}
\caption{Fragments of {\it schema.org} taxonomy}
\label{fig:schema-org}
\end{figure}
\subsection{Design Queriability}
A design $\mS$ over a DAG taxonomy $\mX = $ $(R, \mC, \mR)$ is
a non-empty subset of $\mC - \set{R}$.
Due to their richer structure,
designs over DAG taxonomies may improve the effectiveness
of answering queries in more ways than
designs over tree taxonomies.
For example, let data set $DS$ be in the domain of
the DAG taxonomy in Figure~\ref{fig:schema-org}, and $\mS_1= $ $\{place,$ $organization\}$ be a design.
The query interface will examine
the documents that are organized under {\it organization}
in $DS$ to answer queries about concept {\it airline}.
As the query interface does not have sufficient information
to pinpoint the entities of concept {\it airline} in $DS$,
it may return some non-relevant answers for these queries,
e.g., entities that are NGOs.
On the other hand, because concept {\it hospital}
is a subclass of both {\it place} and
{\it organization}, its entities in $DS$ are annotated
by both concepts {\it place} and {\it organization}.
By examining the entities that are annotated by both
{\it place} and {\it organization}, the query interface
is able to identify the instances of {\it hospital} in $DS$.
Thus, it will not return entities that belong to other concepts
when answering queries about instances of {\it hospital}.
Generally, the query interface may pinpoint instances of some concepts in the data set by considering the
intersections of multiple concepts in a design
over a DAG taxonomy. Hence, subsets of a design may
create partitions in a DAG taxonomy.
Next, we extend the notion of partitions for designs
over DAG taxonomies.
\begin{definition}
\label{def:direct-ans}
Let $\mS$ be a design over DAG taxonomy $\mX = (R, \mC, \mR)$, and let $C \in \mC$ be a leaf concept.
An ancestor $A$ of $C$ in $\mS$ is a {\it direct ancestor} of $C$ iff one of the following properties holds.
\squishlisttwo
\item $A = C$.
\item For each $D\in\mS$, if $D$ is an ancestor of $C$ then $D$ is not a descendant of $A$.
\end{list}
\end{definition}
The {\it full-ancestor-set} of $C$ is the set of {\it all its direct ancestors}.
For instance, the set $\{place,$ $organization\}$ is the full-ancestor-set of the
concept {\it hospital} in design $\mS_1 =$ $\{place,$ $organization\}$, and the set
$\{place,$ $local~business\}$ is the full-ancestor-set of the concept {\it hospital} in design
$\mS_2 =$ \{{\it place, organization, local business}\}
over the taxonomy in Figure~\ref{fig:schema-org}.
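The computation behind these examples can be sketched as follows; the parent-map representation and the concept names are illustrative assumptions based on the schema.org fragment in Figure~\ref{fig:schema-org}:

```python
# Sketch: full-ancestor-set of a leaf concept in a DAG taxonomy given as
# a dict mapping each concept to the set of its direct superclasses.
def ancestors(parents, c):
    seen, stack = set(), list(parents.get(c, ()))
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(parents.get(a, ()))
    return seen

def full_ancestor_set(parents, design, leaf):
    if leaf in design:              # the leaf itself is its direct ancestor
        return {leaf}
    anc = ancestors(parents, leaf) & set(design)
    # Keep only the direct ancestors: drop A whenever another selected
    # ancestor D of the leaf is a descendant of A (A is among D's ancestors).
    return {a for a in anc
            if not any(a in ancestors(parents, d) for d in anc if d != a)}
```

On the schema.org fragment, the design $\set{place, organization}$ yields the full-ancestor-set $\set{place, organization}$ for {\it hospital}, while adding {\it local business} to the design replaces {\it organization} by {\it local business}, matching the examples above.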
\begin{definition}
\label{def:partitionDAG}
Given design $\mS$ over DAG taxonomy $\mX =$ $(R, \mC, \mR)$,
the partition of a set of concepts $\mathcal{D} \subseteq \mS$ is
a set of leaf concepts $\mathcal{L} \subseteq \mC$ such that for every leaf concept
$L \in \mathcal{L}$, $\mathcal{D}$ is the full-ancestor-set of $L$.
\end{definition}
\noindent
For instance, {\it hospital} belongs to
the partition of $\set{place, organization}$
in $\mS_1$, but it does not belong to the partition of
$\set{place}$, since $\set{place}$ is not the full-ancestor-set of {\it hospital}.
The definitions of functions $\part$
and $\free$ over DAG taxonomies
extend from their definitions over tree taxonomies.
Similar to tree taxonomies, we define the frequency of a partition $P$, denoted by $d(P)$,
as the frequency of the intersection of the concepts in its full-ancestor-set.
Using an analysis similar to the one in Section~\ref{sec:design-queriability}, we
define the queriability of a conceptual design $\mS$ over
DAG taxonomy $\mX = $ $(R, \mC, \mR)$ as follows.
\begin{equation}
QU(\mS) = \sum_{P \in \allparts(\mS)}{\sum_{C\in P} u(C)d(C) \over d(P)} + {\sum_{C\in \free(\mS)} u(C)d(C)}.
\label{eq:queriability-DAG}
\end{equation}
The function $\allparts(\mS) \subseteq 2^{\mS}$ returns the collection of all full-ancestor-sets of $\mS$ in $\mX$.
We remark that the size of $\allparts(\mS)$ is linear, since there is at most one new partition per leaf concept in $\mX$.
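Equation~(\ref{eq:queriability-DAG}) can be evaluated as in the following sketch, which assumes the partitions and the free concepts have already been identified and uses illustrative $u$ and $d$ values:

```python
# Sketch of the queriability function QU over a DAG design, assuming the
# partitions (a dict mapping each full-ancestor-set, as a frozenset, to
# its member leaf concepts), the partition frequencies d(P), and the free
# leaf concepts are given.
def queriability(partitions, part_freq, free, u, d):
    covered = sum(sum(u[c] * d[c] for c in leaves) / part_freq[p]
                  for p, leaves in partitions.items())
    return covered + sum(u[c] * d[c] for c in free)
```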
\noindent
\subsection{Hardness of Cost-Effective Design Over DAG Taxonomies}
We define the CECD problem over DAG taxonomies similarly to the
CECD problem over tree taxonomies.
It follows from the $\NP$-hardness results for the CECD problem
over tree taxonomies that the
CECD problem over DAG taxonomies is also $\NP$-hard.
In this section, we prove that finding
an approximation algorithm
with a reasonably small bound on its approximation ratio
for the CECD problem over DAG taxonomies
is itself hard. Unfortunately, this is true even for
the special cases where the concepts in the taxonomy
have equal costs or the design is disjoint.
We show that the CECD problem
over a DAG taxonomy generalizes
a hard problem in the approximation algorithms literature: \prob{Densest-$k$-Subgraph} \cite{Khot:FOCS:04}.
Given a graph $G=(V,E)$, in the
\prob{Densest-$k$-Subgraph}
problem the goal is to compute a subset $U\subseteq V$ of size $k$ that
maximizes the number of edges in the subgraph induced by $U$.
It is known that, unless $\mathbf{P}=\NP$, no polynomial-time approximation
scheme (PTAS) exists for the \prob{Densest-$k$-Subgraph} problem~\cite{Khot:FOCS:04}.
Moreover, there is strong evidence that
\prob{Densest-$k$-Subgraph} does not admit any approximation
guarantee better than a polylogarithmic factor \cite{Bhaskara:SODA:12,AroraDensest10}.
We first prove that, in the taxonomy constructed from $G=(V,E)$ in the reduction described in the proof of Theorem~\ref{thm:log-apx} below, annotating an additional non-leaf concept never decreases the queriability of a design.
\begin{lemma}\label{lem:annotate-improve}
Let $\mS$ be a design over the taxonomy $\mX=(R,\mC,\mR)$ constructed from input $G=(V,E)$ in the reduction below. Let $S_v\in \mC\setminus \mS$ be a non-leaf concept. Then $QU(\mS\cup \set{S_v}) \geq QU(\mS)$.
\end{lemma}
\begin{proof}
After annotating a non-leaf concept $S_v$, each leaf concept $C$ is contained in a partition of either smaller or the same size. The contribution of a leaf concept $C$ to $QU$ depends only on the partition that contains $C$, and this contribution does not decrease when the partition shrinks. Hence, after annotating $S_v$, the contribution of $C$ to $QU$ either increases or remains unchanged, and therefore $QU(\mS \cup \set{S_v}) \geq QU(\mS)$.
\end{proof}
\begin{figure}[htb]
\centering
\includegraphics[height=1.15in]{dagHardness}
\caption{Reducing the \prob{Densest-$k$-Subgraph} problem to CECD over DAG taxonomies, where colors show correspondences in the reduction. The input graph for the densest $k$-subgraph problem is shown on the left and its corresponding DAG taxonomy on the right. Colored vertices are leaf concepts and white vertices are non-leaf concepts in the DAG taxonomy.}
\label{fig:drawing}
\end{figure}
\noindent
The main result of this section is the following theorem.
\begin{theorem}
\label{thm:log-apx}
A $(\log m)$-approximation algorithm for
the CECD problem over a DAG taxonomy with $m$ concepts implies
an algorithm for the \prob{Densest-$k$-Subgraph} problem on $G=(V,E)$
with $n$ vertices that returns an $O(\log n)$-approximate solution.
\end{theorem}
\begin{proof}
Given $G$ and $k$, we build an instance of the CECD over
a DAG taxonomy as follows.
For each edge $e \in E$, we introduce a leaf concept $a_e$,
and for each vertex $v \in V$,
we introduce a leaf concept $a_v$ and a non-leaf concept $S_v$
such that $S_v$ is the superclass of $a_v$ and of all the
concepts corresponding to the edges incident to $v$ in $G$.
Further, we set the budget $B$ to $k$, the cost of each non-leaf
concept to $1$, and the cost of each leaf concept to $k+1$.
Note that if we select $S_v$ and $S_u$ in the design and $(u,v) \in E$, then $a_{e}$ will be a singleton partition.
We also set the popularities and frequencies of all concepts in the taxonomy respectively to the same fixed values $u$ and $d$. Let $m$ be the number of edges in $G$ (or equivalently the number of leaf concepts in $\mC$) and $n$ be the number of vertices in $G$ (or equivalently the number of non-leaf concepts in $\mC$). For each partition $p\in \part(\mS)$ we set $d(p) = 1/(m\log n)$ if $\card{p} =1$ and $d(p)=1$ otherwise.
By Lemma~\ref{lem:annotate-improve}, annotating a non-leaf concept does not decrease the queriability of the design.
Since the leaf concepts are not affordable and annotating a non-leaf concept does not decrease the total queriability, there exists an optimal design that annotates exactly $k$ non-leaf concepts.
Note that in any design $\mS$ of size $k$, the contribution of any leaf concept in a non-singleton partition (partition of size greater than one) is exactly $u\cdot d$.
In what follows, we show that a $(\log n)$-approximation algorithm for the CECD problem implies an $O(\log n)$-approximation for the \prob{Densest-$k$-Subgraph} problem. To this end, let $\mA$ be a $(\log n)$-approximation algorithm for the CECD problem.
Let $H_{\mS}$ be the set of vertices in $G$ whose corresponding non-leaf concepts in $\mC$ are annotated in design $\mS$, and let $E(H_{\mS})$ denote the set of edges with both endpoints in $H_{\mS}$, which corresponds to the set of edge concepts in $\mC$ both of whose endpoint concepts are annotated by $\mS$.
Let $\mS_{\opt}$ be an optimal solution of the CECD problem. Suppose that $QU(\mS_{\opt}) = (t + r)\cdot m\log n + (m-t + n-r)$, where $t$ denotes the number of edges in $E(H_{\mS_{\opt}})$ and $r$ denotes the number of vertices in $H_{\mS_{\opt}}$ all of whose incident edges are in $E(H_{\mS_{\opt}})$. It is straightforward to see that the leaf concepts corresponding to edges in $E(H_{\mS_{\opt}})$ and to vertices with all incident edges in $E(H_{\mS_{\opt}})$ are the only singleton partitions with respect to design $\mS_{\opt}$.
\iffalse
Assuming $k < n$ and since $G$ is connected any concept annotation
of total benefit $k+t$ corresponds to $k$ vertices whose induced subgraph
has $t$ edges.
\fi
Now, let $\mS_{\mA}$ be the design returned by $\mA$ and similarly assume that $QU(\mS_{\mA}) = (t'+r')\cdot m\log n + (m-t' + n-r')$. Since $\mA$ is a $\log n$-approximation algorithm of the CECD problem, $(t + r)\cdot m\log n + (m-t + n-r)$ is at most $\log n\cdot ( (t'+r')\cdot m\log n + (m-t' + n-r'))$. Thus,
\begin{align*}
t(m\log n -1) \leq t'(m\log^2n -1) + r'm\log^2 n + (m+n)\log n.
\end{align*}
Note that since the size of a feasible design is $k$, $r'\leq k$. Thus with some simplifications,
\begin{align*}
{tm\log n\over 2} \leq t'(m\log^2n) + km\log^2 n + 2m\log n,
\end{align*}
which implies that
\begin{align}\label{eq:approx-ratio}
t \leq 2t'\log n + 2k\log n + 4 \leq 5\log n \cdot \max\set{k,t'}.
\end{align}
Now consider the greedy approach to the \prob{Densest-$k$-Subgraph} problem that, in each step, picks the vertex $v$ with the maximum number of edges incident to the already selected set of vertices $S$ and adds it to $S$. It is easy to see that this greedy approach guarantees at least $k/2$ edges. Note that if the input graph has fewer than $k/2$ edges, we can solve the \prob{Densest-$k$-Subgraph} problem optimally by picking the endpoints of all edges.
Using this simple greedy approach together with the result returned by $\mA$, we can find a set of $k$ vertices whose induced subgraph has at least $\max\set{k/2,t'}$ edges. Thus, by (\ref{eq:approx-ratio}), we can find an $O(\log n)$-approximate solution of the \prob{Densest-$k$-Subgraph} problem, which completes the proof.
\end{proof}
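As an illustration, the greedy heuristic for \prob{Densest-$k$-Subgraph} used in the final step of the proof can be sketched as follows (a minimal sketch; function and variable names are ours, and ties are broken arbitrarily):

```python
def greedy_densest_k_subgraph(vertices, edges, k):
    """Greedily pick k vertices; in each step, add the vertex with the
    most edges into the already-selected set S (ties broken arbitrarily)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    selected = set()
    for _ in range(min(k, len(vertices))):
        best = max((v for v in vertices if v not in selected),
                   key=lambda v: len(adj[v] & selected))
        selected.add(best)
    # edges of the subgraph induced by the selected vertices
    induced = [(u, v) for (u, v) in edges if u in selected and v in selected]
    return selected, induced
```

On a small example, `greedy_densest_k_subgraph([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (0, 2)], 3)` selects the triangle `{0, 1, 2}` with its three induced edges.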
\iffalse
Theorem~\ref{thm:log-apx} gives strong evidences
that the existence of any poly-logarithmic
approximation guarantee for CECD over DAG taxonomies
is unlikely. Further, the following corollary
immediately results from Theorem~\ref{thm:log-apx}.
\begin{corollary}
\label{corollary:DAG}
If there is a pseudo polynomial algorithm for
CECD over DAG taxonomies, there is a constant approximation
algorithm with polynomial time complexity for the \prob{Densest $k$-Subgraph} problem.
\end{corollary}
\noindent
Hence, there are strong evidences that it is not possible to
find a pseudo polynomial algorithm for CECD over DAG taxonomies.
\fi
Since the concepts in the instance of the
CECD problem discussed in the proof of
Theorem~\ref{thm:log-apx} have equal costs and
its optimal solution is disjoint, i.e., there is no
directed path between any two concepts in the design,
the hardness result of Theorem~\ref{thm:log-apx}
holds even for the special cases of the CECD problem over DAG taxonomies where the {\it concepts are equally costly} and/or
{\it the problem has disjoint solutions}.
Figure~\ref{fig:DAG-instance} illustrates a simple example for which the \prob{level-wise} algorithm is arbitrarily
worse than the optimal solution over DAG taxonomies. For the sake of
simplicity, let the $d$ and $u$ values be positive integers.
Let $u(C_4) = 4$, $d(C_4) = 1$, $u(C_5) = 1$, $d(C_5) = M$, $u(C_6) = M$, $d(C_6) = 1$, $u(C_7) = 1$, and $d(C_7) = M$.
Also, let $w(C_1) = w(C_2) = w(C_3) = 1$ and $B=2$.
The greedy algorithm first picks $C_1$ because of its high immediate queriability, and then $C_2$ or $C_3$ (but not both of them),
so its total queriability is $5$.
On the other hand, by picking $C_2$ and $C_3$, one
acquires $C_6$ for free, whose queriability is $M$.
Since $M$ can be chosen to be any number,
the optimal solution can be arbitrarily better than the solution
delivered by the greedy approach.
Intuitively, the situation can be exacerbated further
if the subset with large queriability
can only be obtained by intersecting more than two concepts.
\begin{figure}
\centering
\begin{tikzpicture}[->,>=stealth',scale=0.84]
\tikzstyle{every node}=[circle,draw,fill=black!50,inner sep=1pt,minimum width=3pt,font=\scriptsize]
\tikzstyle{every label}=[rectangle,draw=none,fill=none,font=\scriptsize]
\node (C0) [label=above:$C_0$] at (2,3) {};
\node (C1) [label=above:$C_1$] at (0,2) {};
\node (C2) [label=left:$C_2$] at (2,2) {};
\node (C3) [label=right:$C_3$] at (4,2) {};
\node (C4) [label=left:$C_4$; u:4 d:1] at (0,1) {};
\node (C5) [label=left:$C_5$; u:1 d:M] at (2,1) {};
\node (C6) [label=below:$C_6$; u:M d:1] at (3,1) {};
\node (C7) [label=right:$C_7$; u:1 d:M] at (4,1) {};
\draw [->] (C0) -- (C1);
\draw [->] (C0) -- (C2);
\draw [->] (C0) -- (C3);
\draw [->] (C1) -- (C4);
\draw [->] (C2) -- (C5);
\draw [->] (C2) -- (C6);
\draw [->] (C3) -- (C6);
\draw [->] (C3) -- (C7);
\end{tikzpicture}
\caption{An instance of the CECD problem over a DAG taxonomy}
\label{fig:DAG-instance}
\end{figure}
\begin{comment}
We define a {\it complex concept} $O$ in conceptual design $\mS$
over taxonomy $\mX=$ $(R, \mC, \mR)$ as a non-empty subset of $\mS$
such that none of its concept is descendant
of another concept in $O$. For example, $\set{Place, Organization}$
is a complex concept in $\mS_1$. Obviously, every
single concept in a conceptual design is also a complex concept.
A descendant of complex concept $O$
is the descendant of all members of $O$.
For instance, {\it Hospital} is a descendant
of complex concepts $\set{Place, Organization}$ and
$\set{Place}$ in $\mS_1$.
{\bf Concept $C$ is a {\it direct descendant}
of complex concept $O$
iff it is a descendant of $O$ and for all $A \in O$,
there is not any concept in the path from
$A$ to $C$ that belongs to $\mS - O$. (Ali: Do we need this notation?)} For example,
{\it Hospital} is a direct descendant of complex
concepts $\set{Place, Organization}$
and $\set{Place}$ in conceptual design $\mS_1$. It is not, however,
the direct descendant of $\set{Place, Organization}$ in
conceptual design $\mS_2 =$ \{ {\it Place, Organization, LocalBusiness}\}.
If $C$ is a direct descendant of $O$,
$O$ is called a {\it direct ancestor} of $C$.
{\bf A {\it maximal ancestor} of concept $C$ is the ancestor of $C$ that is
superset or equal to every direct ancestor of $C$.
For example, $\set{Place, Organization}$ is the maximal
ancestor of {\it Hospital} in $\mS_1$. (Ali: What does it mean to be the superset of a direct ancestor of $C$?)}
Next, we extend the notion partitions of a conceptual design for
DAG taxonomies.
\begin{definition}
\label{def:partitionDAG}
Given conceptual design $\mS$ over DAG taxonomy $\mX =$ $(R, \mC, \mR)$,
the partition of complex concept $O$ in $\mS$ is
a set of final concepts $P \subset \mC$ such that
\begin{itemize}
\item If $O$ contains a single final concept $C$, $P= $ $\{ C \}$.
\item Otherwise, $O$ is {\bf the maximal?} ancestor of every concept $C \in P$.
\end{itemize}
\end{definition}
\noindent
Consider an instance of CESS problem where the input
taxonomy is a bi-partite directed acyclic
graph $G =$ $(\mC, \mQ, \mR)$ such that $\mQ$
and $\mC$ denote the sets of final and non-final
concepts and $\mR$ is a set of edges where
for each $(C,D) \in \mR$, concept $C$ is a
subclass of concept $D$.
In this section, we provide a formal framework to
compute and compare the amount and effectiveness
of queries over various conceptual designs selected from a tree taxonomy.
In this section, we assume that data sets and corpora contain the information
about entities in a domain whose concepts are modeled by taxonomy $\mX = $ $(\mC, \mR)$.
Without loss of generality, we assume that conceptual design $\mS$ is equal to the set of final concepts in $\mX$.
We like to compare the amount of effectiveness achieved by distortions of $\mS$ using $\mX$.
Let $DS$ denote the data set organized according to $\mS$ from $CO$.
Since the concepts in $\mS$ are final, the set $L$ in every tuple $(L, a, doc) \in DS$
contains a single concept. Given query $q = $ $(C, k)$, the query interface
ranks tuples $(L, a, doc)$ in $DS$ using a ranking function \cite{Chu-Carroll06,OptimizingAnnotation:Chakrabarti}.
The ranking function may use the similarity between $k$ and $a$ and possibly some
information about $doc$ such as its PageRank score \cite{IRBook}.
Our framework is orthogonal to the choice of such a ranking function.
From the ranked list, the query interface retains only the tuples
whose concepts contain $C$ and return their documents to users.
The total cost of organizing $CO$ and maintaining the organized data set
according to $\mS$ may exceed the available budget $B$.
One may decide to reduce the cost of organizing $CO$ by {\it distorting} $\mS$ to another conceptual design
$\hat{\mS}$, which may eliminate some of the concepts in $S$ and/or generalize
some others according to $\mX$.
\begin{definition}
\label{def:distortion}
Given conceptual design $\mS$, a distortion of $\mS$ ( using taxonomy $\mX$),
denoted as $\hat{S}$, is a subset of $\mC$ which is not equal to $\mS$.
\end{definition}
\noindent
For instance, conceptual design $\hat{\mS}_1 = $ $\{person, university,club\}$ is a distortion of conceptual design
$\mS_1 = $
{\it \{athlete, scientist, university, field, club\}}
using the taxonomy shown in figure~\ref{fig:t}.
We denote the power set of $\hat{\mS}$, which does not include the empty set, as $\mathcal P ({\hat{\mS}})$.
\begin{definition}
\label{def:distortionFunction}
Given conceptual designs $\mS$ and $\hat{\mS}$,
the function $ds_{\mS}^{\hat{\mS}}:\mS \rightarrow$ $\mathcal P ({\hat{\mS}})$ is a distortion
function if and only if $ds_{\mS}^{\hat{\mS}}(C) = $ $\{ C\}$ or
for all $\hat{C} \in ds_{\mS}^{\hat{\mS}}(C)$, there is a path from $\hat{C}$ to $C$
in $\mX$ that does not contain any other concept $\hat{D} \in \hat{\mS}$.
\end{definition}
\noindent
For instance, a distortion function from conceptual design $\mS_{1}$ to $\hat{\mS}_{1}$
maps concept $athlete$ to $person$.
A distortion function may not be total, therefore, there may be some concepts in $\mS$ that are
not mapped to any subset of $\hat{\mS}$.
The concept {\it field} in $\mS_{1}$ is not mapped to any sets of concepts in $\hat{\mS}_{1}$.
We call these concepts, {\it free concepts}.
A distortion function models the way $\hat{\mS}$ modifies $\mS$:
it may leave some concepts intact, eliminate some concepts (i.e. free concepts), and
generalize others. The distorted conceptual design $\hat{\mS}$ generalizes concept $C$
to the set of most specific concepts $\gamma \subset \hat{\mS}$ such that
$C \subset \cap_{R \in \gamma} R$. If $\mX$ is a rooted tree, $\hat{\mS}$ will generalize $C$
to its most specific ancestor in $\mX$.
There is one and only one distortion function from a conceptual design to its distortion.
Let $\hat{DS}$ denote the data set with conceptual design $\hat{\mS}$.
If the query concept $C \in \hat{\mS}$,
the query interface will follow the same process as the case
that the conceptual design of the data set is $\mS$ to answer the query over $\hat{DS}$.
Otherwise, if $C$ is not a free concept, the query interface
ranks tuples $(L, a, doc) \in \hat{DS}$, where $ds(C) = L$, as
they are the only tuples that contain information about entities in $C$, and returns
the ranked list of their documents to users.
If $C$ is a free concept, since the tuples in $\hat{DS}$ do not contain any
information about entities in $C$, the query interface has to search and rank
the documents in $CO$. It may rank document $doc$ according to its similarity to
the terms in the query $k$ \cite{IRBook}.
The query interface may also use the concept information in the query as terms to search over the documents in $CO$.
Nonetheless, the tuples in $\hat{DS}$ are
still helpful in eliminating non-relevant documents from the results.
The query interface can find tuples in $(L, a, doc) \in \hat{DS}$ where $a$ matches the terms in the query.
Because entities from different concepts and similar names do not appear in the a document,
the query interface can remove
the documents in the retrieved tuples of $\hat{DS}$
from the list of documents returned from $CO$.
If the concept of a query $C \in$ $\mS - \hat{\mS}$, the query interface has to search
through and rank a larger number of candidate answers over $\hat{DS}$ than the the number of candidate answers over
$DS$. Hence, it may not deliver as effective results for the query over $\hat{DS}$ as $DS$.
We like to measure reduction in the amount of effectiveness if the corpus organized
according to $\hat{\mS}$ instead of $\mS$.
Since most information needs over structured data are precision oriented \cite{SemTag:IBM,Chu-Carroll06,Classification-Enhanced:Bennett},
we use {\it precision at $k$}, which is the fraction of relevant answers in the top $k$ returned answer
(i.e. precision) to measure effectiveness of answering queries over a data set \cite{IRBook}.
One may use a database conceptual design to diversify the results of the query \cite{IRBook}.
Selecting conceptual designs based on other objective functions such as diversifying
query results is an interesting subject for future work.
\begin{definition}
\label{def:popularity}
Given corpus $CO$ and conceptual design $\mS$, the {\it popularity } of concept $C \in \mS$,
denoted as $u(C)$, is the probability that the concept of a query over $CO$ is $C$.
\end{definition}
\noindent
The information required to compute the popularity of a concept, e.g. the set of queries, submitted to
$CO$, may not be always available. Methods such as sampling can be used to effectively estimate the
popularities of concepts \cite{IRBook}.
Our framework is orthogonal to such methods.
\begin{definition}
\label{def:distortionmetric}
Given conceptual designs $\mS$ and $\hat{\mS}$, corpus $CO$, let $DS$
and $\hat{DS}$ be the organized data sets over $CO$ with conceptual design $\mS$ and $\hat{\mS}$, respectively.
The reconstruction probability of concept $C \in \mS$ given $ds_{\mS}^{\hat{\mS}}(C)$,
denoted by $r(C \mid ds_{\mS}^{\hat{\mS}}(C))$, is the probability that
$(\{C\}, a, doc) \in DS$ provided that $(ds_{\mS}^{\hat{\mS}}(C), a, doc) \in \hat{DS}$.
\end{definition}
\noindent
The larger the reconstruction probability for concept $C \in \mS$ is, the more likely it is that
an answer returned over a data set with conceptual design $\hat{S}$ contains information about entities of $C$
and is relevant to queries with concept $C$.
Given all other conditions are the same, the more general
concepts a concept in $\mS$ is mapped to by a distortion function,
the smaller their reconstruction probabilities will be.
A query interface is more effective if it answers more queries more effectively.
Hence, we define the degree of effectiveness achieved by distorting $\mS$ as follows.
\begin{definition}
\label{def:benefit}
Give corpus $CO$ and conceptual design $\mS$,
the benefit of $\hat{\mS}$ is $I(\hat{\mS}) =$ $\sum_{C \in \mS} u(C) r(C \mid ds(C))$.
\end{definition}
We like to compute the reconstruction probabilities for concepts in conceptual design $\mS$
given its distortion $\hat{\mS}$ over a corpus, without
organizing the corpus based on
$\mS$ and $\hat{\mS}$.
\begin{definition}
\label{def:partition}
Given conceptual designs $\mS$ and $\hat{\mS}$,
partition $G$ is a subset of $\mS$ such that for all $C, D \in G$,
we have $ds_{\mS}^{\hat{\mS}}(C) =$ $ ds_{\mS}^{\hat{\mS}}(D)$
or all concepts in $G$ are free concepts.
\end{definition}
\noindent
We call the partition of $\mS$ that contains free concepts, {\it free partition}.
Each conceptual design has at most one free partition.
Given distortion $\hat{\mS}$, let function $g_{\hat{\mS}}$ maps each concept $C \in \mS$ to its partition.
If the corpus is organized using $\hat{\mS}$,
the tuples whose concepts are in
$g_{\hat{\mS}}$ are stored as instances of the same set of concepts in $\hat{\mS}$.
Thus, the query interface does not have any way of recognizing the concepts of these entities in $\mS$.
It is reasonable to estimate the probability of picking an tuple whose
concept is $C$ by $\frac{d(C)}{d(g_{\hat{\mS}}(C))}$, where $d(C)$ is the probability
that the concept of an entity appearance is $C$, and $d(g_{\hat{\mS}}(C))$ is the probability that the concept
of an entity appearance belongs to $g(C)$. Since concepts in $\mS$ are mutually exclusive, we have
$d(g_{\hat{\mS}}(C)) = $ $\sum_{E \in g_{\hat{\mS}}(C)}d(E)$. Previous empirical results have validated this estimation
method \cite{CostEffectiveDesign:Termehchy:13}. The values of $d(C)$, $C \in \mS$, can also be effectively estimated
using a small sample of the corpus \cite{CostEffectiveDesign:Termehchy:13}.
According to this estimation for reconstruction probabilities, we can rewrite the benefit of the distortion of $\mS$ as:
\begin{equation}
\label{set-intelligbility}
I(\hat{\mS}) = \sum_{C \in \mS} \frac{ u(C) d(C)}{\sum_{E \in g_{\hat{\mS}}(C)}d(E)}.
\end{equation}
\noindent
For brevity, we call $d(C)$ the {\it frequency} of concept $C$.
Annotation and extraction programs may make mistakes in identifying and extracting entities \cite{SemTag:IBM}.
In this paper, we assume that
the annotators are sufficiently accurate and
the impact of their errors on the effectiveness of answering queries is negligible compared to the impact of ignoring or
generalizing concepts.
Extending our framework to consider
the errors of extractors is an interesting future work.
One can use the same procedure to derive
the benefit of the effectiveness achieved by formulating queries using a distorted conceptual design $\hat{\mS}$
over a data set organized in conceptual design $\mS$.
\end{comment}
\section{Experiments}
\label{sec:experiment}
\subsection{Experiment Setting}
\label{sec:expsetting}
\noindent{\bf Taxonomies}:
We have selected five taxonomies from the YAGO ontology
version 2008-w40-2 \cite{Suchanek:YAGO}
to validate our model and evaluate the effectiveness
and efficiency of our proposed algorithms.
YAGO organizes its concepts using superclass/subclass
relationships in a DAG with a single root.
We have selected our taxonomies from levels 3 to 7 of
YAGO. We did not select any concept from higher levels,
as they are very abstract; the concepts below
level 7 in YAGO are too specific and rarely
have any instance in our query workload.
To validate our model, we have to
compute and compare the effectiveness of
answering queries using
every feasible conceptual design over a taxonomy.
Thus, we need taxonomies with a
relatively small number of concepts for our
validation experiments.
We have extracted three taxonomy
trees with relatively small numbers of
nodes, called {\em T1},
{\em T2}, and {\em T3}, to use in our validation experiments.
T1 has a small number of concepts and is not balanced.
T2 is a more balanced tree in which each internal
(i.e., non-leaf and non-root)
concept has at least two children.
T3 is quite similar to T2 but
slightly deeper. We have further picked two
taxonomies with larger numbers of concepts, denoted
{\em T4} and {\em T5}, from the YAGO ontology.
We use all five taxonomies to evaluate
the effectiveness of our proposed algorithms, and
{\em T4} and {\em T5} to study their efficiency.
Table~\ref{tab:expinfo} reports
the information about these taxonomies, and
Table~\ref{tab:exconcepts} shows some sample
concepts from each taxonomy.
\medskip
\noindent {\bf Dataset}:
We have used the collection of English Wikipedia articles from the
Wikipedia dump of October 8, 2008, which is
annotated with concepts from the YAGO ontology
\cite{Suchanek:YAGO}.
This collection contains 2,666,190 articles.
For each taxonomy in our set of taxonomies,
we have extracted a subset of the
original Wikipedia collection in which each document contains
at least one mention of an entity of a concept in the taxonomy.
We use each data set in the experiments over its corresponding
taxonomy. Table~\ref{tab:expinfo} shows
the properties of these five data sets.
The annotation accuracies of the concepts in the selected taxonomies
over these data sets are between 0.8 and 0.95.
\medskip
\noindent {\bf Query Workload}:
We use a subset of the MSN query log whose
target URLs, i.e., relevant answers, are Wikipedia
articles.
Each query contains between 1 and 6 keywords and has
one or two relevant answers, with most queries
having one relevant answer. Because the query log does not
record the concepts behind its queries, we adopt an
automatic approach to find the concept associated with each query:
we label each query with the concept of the matching instance
in its relevant answer(s).
Using this method, we create a query workload for
each of our data sets.
It is well known that the effectiveness of
answering some queries may not improve by
annotating the data set~\cite{MoreSenses:Sanderson}.
For instance, all candidate answers for a query
may contain mentions of entities of the query concept.
In order to evaluate our algorithms reasonably,
we have ignored the queries whose rankings remain the same
over the unannotated version of the data set and
the version in which all concepts in
the taxonomy are annotated.
Table~\ref{tab:expinfo} shows the information about
the query workloads.
We use two-fold cross validation to calculate the
popularities, $u$,
of the concepts in each taxonomy over their corresponding
query workloads.
Because some concepts in a taxonomy
may not appear in its query workload,
we smooth the popularities of concepts using the Bayesian
m-estimate method~\cite{IRBook}:
$\hat{u}(C) = \frac{\hat{P}(C|QW)+mp}{m+\sum_C{\hat{P}(C|QW)}}$,
where $\hat{P}(C|QW)$ is the probability that $C$ occurs
in the query workload and $p$ denotes the prior
probability. We set the value of
the smoothing parameter, $m$, to 1 and use a
uniform distribution for all the prior probabilities, $p$.
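As an illustration, the m-estimate smoothing above can be computed in a few lines (a minimal sketch with hypothetical concept counts; it assumes $m=1$ and a uniform prior, as in our setup):

```python
def smooth_popularities(raw_counts, m=1.0):
    """Bayesian m-estimate smoothing of concept popularities.

    raw_counts maps each concept in the taxonomy to its number of
    occurrences in the query workload (possibly zero)."""
    total = sum(raw_counts.values())
    p = 1.0 / len(raw_counts)  # uniform prior probability
    # empirical probability of each concept in the workload
    p_hat = {c: (n / total if total else 0.0) for c, n in raw_counts.items()}
    norm = m + sum(p_hat.values())
    return {c: (p_hat[c] + m * p) / norm for c in raw_counts}
```

For instance, with counts `{'a': 3, 'b': 1, 'c': 0}`, the unseen concept `c` receives a small positive popularity and the smoothed values still sum to one.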
\medskip
\noindent {\bf Query Interface}:
We index our data sets using Lucene (\emph{lucene.apache.org}).
Given a query, we rank its candidate answers using the
BM25 ranking formula, which has been shown to be more effective than
other similar document ranking methods~\cite{IRBook}.
Then, we apply the information about the concepts
in the query and the documents to return only the answers
whose matching instances have the same concept as
the concept of the query. If the concept in the query has
not been annotated in the collection, the query interface
returns the list of documents ranked by BM25 without any
modification. We have implemented our query interface and
algorithms in Java 1.7 and performed our experiments
on a Linux server with 100 GB of main memory and two quad-core processors.
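The concept-based post-processing of the BM25 ranking described above can be sketched as follows (an illustrative sketch, not our Java implementation; identifiers are hypothetical):

```python
def filter_by_concept(ranked_answers, query_concept, annotated_concepts):
    """Keep only answers whose matching instance has the query's concept.

    ranked_answers: list of (doc_id, concept) pairs already ranked by BM25.
    If the query's concept was not annotated in the collection,
    return the BM25 ranking unmodified."""
    if query_concept not in annotated_concepts:
        return [doc for doc, _ in ranked_answers]
    return [doc for doc, concept in ranked_answers if concept == query_concept]
```

For example, with a BM25 list `[('d1', 'person'), ('d2', 'place'), ('d3', 'person')]`, a query with concept `person` keeps `d1` and `d3`, while a query whose concept was never annotated gets the full BM25 list back.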
\medskip
\noindent {\bf Effectiveness Metric}:
All queries in our query workloads have one or two relevant answers; thus, we measure the effectiveness of answering queries
over a data set using precision at 3 ($p@3$) and the mean reciprocal
rank (MRR) \cite{IRBook}.
Since our theory focuses more on precision, we mainly discuss the results based on $p@3$;
however, the results for both $p@3$ and MRR generally follow similar trends.
We measure the statistical significance of our results using the paired $t$-test at a significance level of 0.05.
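For reference, the two metrics can be computed per query as follows (a minimal sketch using the standard definitions; names are ours):

```python
def precision_at_k(ranked, relevant, k=3):
    """Fraction of the top-k returned answers that are relevant."""
    top = ranked[:k]
    return sum(1 for doc in top if doc in relevant) / k

def reciprocal_rank(ranked, relevant):
    """1 / rank of the first relevant answer; 0 if none is returned."""
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / i
    return 0.0
```

The reported $p@3$ and MRR values are these quantities averaged over all queries in the workload.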
\medskip
\noindent {\bf Cost Models}:
We use two models for generating the costs of concept annotation
in our experiments.
First, we assign a randomly generated cost to each
concept in a taxonomy.
The results reported for this model are
averaged over 20 sets of random cost assignments per budget.
We call this the {\it random cost} model.
If no reliable estimate of
the cost of annotating concepts is available,
an enterprise may assume that
all concepts are equally costly. Hence, in our second
cost model, we assume that all concepts in the
input taxonomy have equal cost. We name this the
{\it uniform cost} model.
We use a range of budgets between 0 and 1 with
a step size of 0.1, where 1 means a budget sufficient
to annotate all concepts
in a taxonomy and 0 means no budget is available.
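The two cost models and the budget normalization can be sketched as follows (an illustrative sketch; the fixed seed merely stands in for the 20 random assignments averaged per budget, and the cost range is hypothetical):

```python
import random

def uniform_costs(concepts):
    """Uniform cost model: every concept is equally costly."""
    return {c: 1.0 for c in concepts}

def random_costs(concepts, seed=0):
    """Random cost model: each concept gets a random positive cost."""
    rng = random.Random(seed)
    return {c: rng.uniform(0.1, 1.0) for c in concepts}

def absolute_budget(costs, b):
    """A fractional budget b in [0, 1] means b times the total cost of
    annotating every concept in the taxonomy."""
    return b * sum(costs.values())
```

Under the uniform model with four concepts, a fractional budget of 0.5 translates to an absolute budget of 2.0, i.e., enough to annotate half of the concepts.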
\begin{table}
\begin{tabular}{ccl}
T1 & : & plant, animal, person, rich person, advocate\\
T2 & : & document, association, club, institute, facility \\
T3 & : & music, speech, literary composition, adaptation \\
T4 & : & event, show, contest, group, ethnic\_group\\
T5 & : & person, location, language, character, accident \\
\end{tabular}
\vspace{-2mm}
\caption{Sample concepts from taxonomies T1, T2, T3, T4, and T5}
\label{tab:exconcepts}
\end{table}
\begin{table}
\centering
{\tiny
\begin{tabular}{r|cc|rr|r}
Taxonomy&\#Concept & Depth&\#Distinct Queries&\#Total Queries&\#Documents \\
\hline\hline
T1 & 10 & 2 & 388 & 648 & 68982 \\
T2 & 17 & 2 & 156 & 256 & 267653 \\
T3 & 17 & 3 & 98 & 146 & 88479 \\
T4 & 56 & 4 & 1308 & 2028 & 955795 \\
T5 & 78 & 6 & 2800 & 4700 & 1470661 \\
\end{tabular}}
\caption{The sizes and depths of
taxonomies and the sizes of their corresponding
query workloads and data sets.}
\label{tab:expinfo}
\end{table}
\subsection{Validating Queriability Function}
\label{sec:expvalidate}
In this set of experiments, we evaluate
how accurately the queriability formula measures the
amount by which a design improves the
effectiveness of answering queries.
We use the following three algorithms in these experiments.
\noindent
{\bf Oracle:} Given a fixed budget, Oracle enumerates all
feasible designs over the input taxonomy. For each design,
it computes the average $p@3$ over all queries in the
query workload on the data set annotated by the design.
It then picks the design with the maximum value of
average $p@3$. Since Oracle does not use
any heuristic to predict the amount
of improvement in $p@3$ by a design, we use it to
evaluate the accuracy of the other methods, which predict the
amount of improvement in $p@3$ achieved by a design.
We must note that, due to time limitations, some results of Oracle are omitted.
\noindent
{\bf Popularity Maximization ({\it PM}):}
Following the traditional approach to conceptual design
for databases, one may select the concepts in a design that are
{\it more important} to users \cite{DBBook}.
Hence, we implement an algorithm,
called {\em PM}, that, given a budget, enumerates all feasible
designs $\mS$ over the taxonomy and
selects the one with the maximum value of
$$\sum_{p \in \part(\mS)} \sum_{C \in p} u(C) \pr(p).$$
This design contains the concepts that are
more frequently queried by users and also annotated more accurately.
\noindent
{\bf Queriability Maximization ({\it QM}):}
QM enumerates all feasible designs
over the input taxonomy and returns the one with the maximum
queriability as computed in
Section~\ref{sec:design-queriability}.
Because we would like to explore how accurately PM and QM predict
the amount of improvement in the effectiveness of answering queries
achieved by a design, we assume that these algorithms have complete
information about the popularities and frequencies of concepts.
As these algorithms enumerate all
feasible designs, it is not possible to run them over
large taxonomies. Hence, we run these
algorithms over the small taxonomies
T1, T2, and T3.
Further, Oracle has to evaluate every query in the
query workload for each feasible design. Because
each result under the random cost model
is the average of 20 different runs of the algorithm,
running Oracle for this cost model takes an extremely long
time. Thus, some results
of Oracle under the random cost model are omitted.
\begin{table}
\centering
{\scriptsize
\begin{tabular}{r|c|ccc|ccc}
\multirow{2}{*}{Taxonomy} & \multirow{2}{*}{Budget} & \multicolumn{3}{c}{Uniform Cost} & \multicolumn{3}{|c}{Random Cost} \\
\cline{3-8}
& & Oracle & PM & QM & Oracle & PM & QM \\
\hline\hline
\multirow{7}{*}{T1}
& 0.0 & \multicolumn{6}{c}{0.088} \\ \cline{3-8}
& 0.1 & 0.149 & 0.089 & \mb{0.149} & 0.128 & 0.098 & \mb{0.128} \\
& 0.2 & 0.168 & 0.091 & \mb{0.168} & 0.163 & 0.097 & \mb{0.162} \\
& 0.3 & 0.183 & 0.106 & \mb{0.177} & 0.179 & 0.103 & \mb{0.177} \\
& 0.4 & 0.192 & 0.166 & \mb{0.192} & 0.188 & 0.137 & \mb{0.185} \\
& 0.5 & 0.194 & 0.185 & \mb{0.193} & 0.193 & 0.174 & \mb{0.193} \\
& 0.6 & 0.195 & 0.194 & 0.195 & 0.194 & 0.188 & \mb{0.194} \\
& 0.7 & 0.195 & 0.195 & 0.195 & 0.195 & 0.195 & 0.195 \\
& 0.8 & 0.195 & 0.195 & 0.195 & 0.195 & 0.195 & 0.195 \\
& 0.9 & 0.195 & 0.195 & 0.195 & 0.195 & 0.195 & 0.195 \\
\hline
\multirow{7}{*}{T2}
& 0.0 & \multicolumn{6}{c}{0.200} \\ \cline{3-8}
& 0.1 & 0.241 & 0.234 & 0.232 & - & 0.245 & \mb{0.259} \\
& 0.2 & 0.303 & 0.247 & \mbi{0.285} & - & 0.249 & \mb{0.292} \\
& 0.3 & 0.318 & 0.250 & \mb{0.315} & - & 0.259 & \mb{0.314} \\
& 0.4 & 0.320 & 0.258 & \mb{0.318} & - & 0.282 & \mb{0.320} \\
& 0.5 & 0.326 & 0.297 & \mb{0.324} & - & 0.310 & 0.324 \\
& 0.6 & 0.326 & 0.326 & 0.326 & - & 0.325 & 0.325 \\
& 0.7 & 0.326 & 0.326 & 0.326 & - & 0.326 & 0.326 \\
& 0.8 & 0.326 & 0.326 & 0.326 & - & 0.326 & 0.326 \\
& 0.9 & 0.326 & 0.326 & 0.326 & - & 0.326 & 0.326 \\
\hline
\multirow{5}{*}{T3}
& 0.0 & \multicolumn{6}{c}{0.171} \\ \cline{3-8}
& 0.1 & 0.221 & 0.208 & 0.210 & 0.254 & 0.252 & 0.242 \\
& 0.2 & 0.281 & 0.258 & 0.269 & 0.287 & 0.268 & \mb{0.278} \\
& 0.3 & 0.304 & 0.288 & \mb{0.304} & 0.303 & 0.291 & \mb{0.301} \\
& 0.4 & 0.306 & 0.299 & 0.304 & 0.303 & 0.304 & 0.305 \\
& 0.5 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 \\
& 0.6 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 \\
& 0.7 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 \\
& 0.8 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 \\
& 0.9 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 & 0.306 \\
\end{tabular}}
\caption{Average $p@3$ for Oracle, PM, and QM. Statistically significant differences between PM and QM, and between Oracle and QM are marked in bold and italic, respectively.}
\label{tab:model-result}
\end{table}
\begin{table}
\centering
{\scriptsize
\begin{tabular}{r|c|ccc|ccc}
\multirow{2}{*}{Taxonomy} & \multirow{2}{*}{Budget} & \multicolumn{3}{c}{Uniform Cost} & \multicolumn{3}{|c}{Random Cost} \\
\cline{3-8}
& & Oracle & PM & QM & Oracle & PM & QM \\
\hline\hline
\multirow{9}{*}{T1}
& 0.1 & 0.362 & 0.197 & \mb{0.362} & 0.299 & 0.215 & \mb{0.296} \\
& 0.2 & 0.415 & 0.203 & \mb{0.406} & 0.401 & 0.218 & \mb{0.398} \\
& 0.3 & 0.459 & 0.227 & \mb{0.459} & 0.446 & 0.230 & \mb{0.442} \\
& 0.4 & 0.492 & 0.400 & \mb{0.492} & 0.478 & 0.316 & \mb{0.477} \\
& 0.5 & 0.501 & 0.444 & \mb{0.501} & 0.497 & 0.421 & \mb{0.497} \\
& 0.6 & 0.507 & 0.497 & \mb{0.507} & 0.503 & 0.468 & \mb{0.503} \\
& 0.7 & 0.507 & 0.507 & 0.507 & 0.507 & 0.503 & 0.507 \\
& 0.8 & 0.507 & 0.507 & 0.507 & 0.507 & 0.507 & 0.507 \\
& 0.9 & 0.507 & 0.507 & 0.507 & 0.507 & 0.507 & 0.507 \\
\hline
\multirow{9}{*}{T2}
& 0.1 & - & 0.504 & 0.479 & - & 0.536 & 0.540 \\
& 0.2 & - & 0.574 & \mb{0.629} & - & 0.582 & \mb{0.641} \\
& 0.3 & - & 0.586 & \mb{0.729} & - & 0.613 & \mb{0.720} \\
& 0.4 & - & 0.615 & \mb{0.745} & - & 0.663 & \mb{0.749} \\
& 0.5 & - & 0.686 & 0.757 & - & 0.720 & \mb{0.760} \\
& 0.6 & - & 0.761 & 0.764 & - & 0.763 & 0.763 \\
& 0.7 & - & 0.764 & 0.764 & - & 0.764 & 0.764 \\
& 0.8 & - & 0.764 & 0.764 & - & 0.764 & 0.764 \\
& 0.9 & - & 0.764 & 0.764 & - & 0.764 & 0.764 \\
\hline
\multirow{9}{*}{T3}
& 0.1 & 0.469 & 0.453 & 0.469 & 0.580 & 0.570 & 0.562 \\
& 0.2 & 0.680 & 0.600 & \mb{0.679} & 0.695 & 0.632 & \mb{0.688} \\
& 0.3 & 0.734 & 0.685 & \mb{0.734} & 0.744 & 0.707 & \mb{0.737} \\
& 0.4 & 0.754 & 0.741 & 0.754 & 0.759 & 0.754 & 0.758 \\
& 0.5 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 \\
& 0.6 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 \\
& 0.7 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 \\
& 0.8 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 \\
& 0.9 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 & 0.760 \\
\end{tabular}}
\caption{Average $MRR$ for Oracle, PM, and QM. Statistically significant differences between PM and QM are marked in bold.}
\label{tab:model-result-mrr}
\end{table}
Table~\ref{tab:model-result} shows the average $p@3$
achieved by Oracle, PM, and QM over taxonomies T1,
T2, and T3 under the uniform and random cost models
over various budgets.
The value of $p@3$ shown for $B = 0$
is the one achieved by pure BM25 ranking, without
annotating any concept in the data sets.
Over all taxonomies and cost models, the designs picked by
QM deliver $p@3$ values closer to those of the designs
selected by Oracle than the designs picked by PM do.
In particular, for many budgets over taxonomies T1 and T3,
QM delivers the same design as Oracle.
The only case where the results of QM are significantly worse than
the results of Oracle is for budget 0.2 over taxonomy T2.
In this case, QM picks a design that consists of
{\em dramatic composition} and {\em literary composition},
which are leaf concepts.
Oracle, however, selects {\em writing}, which is
the parent of {\em dramatic composition},
{\em literary composition}, and a couple more concepts in T2.
The design selected by QM is not able to improve the effectiveness
of answering queries over the other children of {\em writing}.
This observation suggests that when
the budget is relatively small, it is sometimes better
to annotate more general concepts.
With this choice, the resulting design can improve
the effectiveness of answering a larger number of queries.
Although the improvement per query
is not large, it still delivers a higher average
$p@3$ over all queries.
This result, however, does not hold in general,
as QM can deliver
the same designs as Oracle, or designs that improve the
effectiveness of answering queries close to the
ones selected by Oracle.
QM also delivers designs that improve the
$p@3$ of answering queries more than the ones picked by PM.
Overall, PM annotates more general concepts from the taxonomy
in order to improve the effectiveness of a larger number of
queries. Hence, to answer a query,
the query interface often has to examine
the documents annotated by an ancestor of the query concept.
As this set of documents contains many answers whose
concepts differ from the query concept,
the query interface is usually not able to improve
the value of $p@3$ for a query significantly.
QM, on the other hand, selects designs with
less ambiguous concepts. Although its designs may not improve
the ranking quality for most queries,
they significantly improve the ranking quality of
a relatively large number of queries.
For example, for budget 0.3 over taxonomy T3,
PM picks a design of
{\em written communication}, {\em music}, and {\em message},
which are relatively general concepts. QM, however,
selects {\em statement}, which is a child of {\em message},
as well as {\em literature} and {\em dramatic composition},
which are descendants of {\em written communication}.
\subsection{Effectiveness of Proposed Algorithms}
\label{sec:approxalg}
The queriability formula requires the frequency ($d$)
of each concept in the input taxonomy over the data
set. Nonetheless, it is not possible to find the
exact frequencies of concepts without annotating
the mentions of their entities in
the data set. Similar to \cite{Termehchy:SIGMOD:14},
we estimate the concept frequencies by sampling a small
subset of randomly selected documents from the data set.
We compute the frequency of each concept
using an estimation error rate of 5\% under the 95\% confidence
level, which amounts to about 384 documents for all data sets.
We also smooth the sampled frequencies using Bayesian
$m$-estimates with a smoothing parameter of 1 and uniform priors.
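The estimation step above can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not our actual implementation: the names (`docs`, `estimate_frequencies`) and the representation of a document as a set of concepts are assumptions.

```python
import random

def required_sample_size(error=0.05, z=1.96, p=0.5):
    # Worst-case (p = 0.5) sample size for estimating a proportion
    # within `error` at the confidence level implied by `z`
    # (z = 1.96 corresponds to 95%); this evaluates to about 384.
    return z * z * p * (1 - p) / (error * error)

def estimate_frequencies(docs, concepts, m=1, seed=0):
    """Estimate per-concept document frequencies from a random sample
    and smooth them with a Bayesian m-estimate under uniform priors."""
    n = min(round(required_sample_size()), len(docs))
    sample = random.Random(seed).sample(sorted(docs), n)
    prior = 1.0 / len(concepts)  # uniform prior over concepts
    return {
        c: (sum(1 for d in sample if c in docs[d]) + m * prior) / (n + m)
        for c in concepts
    }
```

With $m = 1$ and uniform priors, a concept never receives a frequency of exactly 0, even if it does not occur in the sample.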
In the remainder of the paper, we denote the
\prob{level-wise} algorithm by {\em LW}
and the dynamic programming algorithm by {\em DP} for brevity.
{\em LW} and {\em DP} sometimes do not exhaust all
the available budget. In these cases, we select the
remaining concepts from the taxonomy in descending order
of the ratio of their popularities to their costs
until there is no budget left.
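One way to realize this budget-filling heuristic is the following sketch; the function and variable names are illustrative assumptions, and costs and popularities are assumed to be stored in dictionaries keyed by concept.

```python
def fill_remaining_budget(design, concepts, popularity, cost, budget):
    """Spend any budget left over by LW or DP greedily: add concepts in
    descending order of popularity-to-cost ratio while they still fit."""
    remaining = budget - sum(cost[c] for c in design)
    ranked = sorted((c for c in concepts if c not in design),
                    key=lambda c: popularity[c] / cost[c], reverse=True)
    for c in ranked:
        if cost[c] <= remaining:
            design.append(c)
            remaining -= cost[c]
    return design
```

In this sketch, a concept that is too expensive for the remaining budget is skipped and cheaper, lower-ratio concepts are still considered.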
Since DP assumes popularity, frequency, and
cost to be positive integers, we
use a standard scaling technique to convert the
values of popularity, frequency, and cost of every concept
in the input taxonomy to positive integers \cite{Vazirani:Book:Approx}.
More precisely, let $u_{\max}$ be the maximum
popularity of leaf concepts in the taxonomy
and let $\eps < 1$; we scale $u(C)$ as
$\hat{u}(C) = \lfloor {u(C)\over{\eps\cdot u_{\max}}}\rfloor$.
We use similar techniques to scale the values of $d(C)$ and
$w(C)$. Intuitively, the smaller the value of $\eps$,
the more exact the result DP delivers. However,
the algorithm takes longer to run
for smaller values of $\eps$, as the ranges of
$U$, $D$, and $B_{\tt total}$ become larger.
We set the value of $\eps$ to 0.1 for the
experiments in this section. We report the sensitivity
of the results of DP to the choice of $\eps$ in Section~\ref{sec:efficiency}.
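The scaling step mirrors the formula above. The sketch below applies it to a dictionary of values; the name `scale` and the dictionary representation are illustrative assumptions.

```python
import math

def scale(values, eps):
    """Convert positive values to integers for the DP via
    v_hat = floor(v / (eps * v_max)). A smaller eps preserves more
    precision but enlarges the ranges the DP table must cover."""
    v_max = max(values.values())
    return {k: math.floor(v / (eps * v_max)) for k, v in values.items()}
```

Note that values below $\eps \cdot v_{\max}$ map to 0 under this scaling.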
Tables~\ref{tab:approxt1t2t3-result} and~\ref{tab:approxt4t5-result} show
the values of average $p@3$ for LW and DP over all taxonomies
and cost models.
The values of average $p@3$ for
budgets greater than 0.7 over T5 are equal
to the values reported for the budget of 0.7.
Overall, the designs returned by DP improve the effectiveness
of answering queries more than the designs returned by LW
for all taxonomies except T5.
Because DP explores more feasible designs,
it has a better chance of finding more effective
designs. LW, however, returns designs that
deliver larger values of $p@3$ over T4 when the budget is relatively
small.
Given a small budget, it is more reasonable to annotate
disjoint concepts to improve the effectiveness
of a larger number of queries.
Nevertheless, if the budget is relatively large,
there are more choices of designs,
and the more effective designs are not necessarily disjoint.
Thus, DP finds more effective designs than
LW, as shown in Table~\ref{tab:approxt4t5-result} for T4.
LW also delivers designs
with larger values of average $p@3$ for all budgets over T5.
The popularities of leaf concepts
in T5 follow a very skewed distribution, in which the
concept of more than 65\% of queries is {\it person}.
Since the distribution of concept frequencies over
the data set is not very skewed, the designs that contain
the most popular concepts generally deliver more effective
answers to queries. Because of its greedy approach,
LW finds the most popular concept(s).
Since DP has to use scaling, it cannot
explore all feasible designs and may miss some
very popular concepts. Nevertheless,
if the budget is relatively large,
DP is able to find designs that are as effective as the
designs delivered by LW, as shown in Table~\ref{tab:approxt4t5-result} for T5.
\begin{table}
\centering
{\scriptsize
\begin{tabular}{r|c|cc|cc}
\multirow{2}{*}{Taxonomy} & \multirow{2}{*}{Budget} & \multicolumn{2}{|c}{Uniform Cost} & \multicolumn{2}{|c}{Random Cost} \\
\cline{3-6}
& & LW & DP & LW & DP \\
\hline\hline
\multirow{9}{*}{T1}
& 0.1 & 0.091 & \mb{0.103} & 0.089 & \mb{0.103} \\
& 0.2 & 0.091 & \mb{0.103} & 0.097 & \mb{0.126} \\
& 0.3 & 0.091 & \mb{0.164} & 0.094 & \mb{0.135} \\
& 0.4 & 0.106 & \mb{0.183} & 0.112 & \mb{0.171} \\
& 0.5 & 0.166 & \mb{0.192} & 0.144 & \mb{0.187} \\
& 0.6 & 0.185 & \mb{0.193} & 0.177 & \mb{0.193} \\
& 0.7 & 0.194 & 0.195 & 0.187 & \mb{0.194} \\
& 0.8 & 0.195 & 0.195 & 0.194 & 0.195 \\
& 0.9 & 0.195 & 0.195 & 0.195 & 0.195 \\
\hline
\multirow{9}{*}{T2}
& 0.1 & 0.234 & 0.232 & 0.235 & \mb{0.259} \\
& 0.2 & 0.247 & \mb{0.285} & 0.251 & \mb{0.296} \\
& 0.3 & 0.250 & \mb{0.315} & 0.258 & \mb{0.306} \\
& 0.4 & 0.258 & \mb{0.318} & 0.274 & \mb{0.312} \\
& 0.5 & 0.297 & \mb{0.323} & 0.304 & \mb{0.318} \\
& 0.6 & 0.326 & 0.326 & 0.323 & 0.322 \\
& 0.7 & 0.326 & 0.326 & \mb{0.326} & 0.324 \\
& 0.8 & 0.326 & 0.326 & 0.326 & 0.325 \\
& 0.9 & 0.326 & 0.326 & 0.326 & 0.326 \\
\hline
\multirow{9}{*}{T3}
& 0.1 & 0.208 & 0.215 & 0.242 & 0.240 \\
& 0.2 & 0.265 & 0.269 & 0.268 & 0.277 \\
& 0.3 & 0.281 & \mb{0.304} & 0.279 & \mb{0.300} \\
& 0.4 & 0.281 & \mb{0.304} & 0.283 & \mb{0.305} \\
& 0.5 & 0.281 & \mb{0.306} & 0.283& \mb{0.306} \\
& 0.6 & 0.281 & \mb{0.306} & 0.283 & \mb{0.306} \\
& 0.7 & 0.281 & \mb{0.306} & 0.288 & \mb{0.306} \\
& 0.8 & 0.295 & \mb{0.306} & 0.297 & \mb{0.306} \\
& 0.9 & 0.304 & 0.306 & 0.304 & 0.306 \\
\end{tabular}}
\caption{Average $p@3$ for LW and DP ($\eps=0.1$) over T1, T2, and T3.
Statistically significant differences between LW and DP are marked in bold.}
\label{tab:approxt1t2t3-result}
\end{table}
\begin{table}
\centering
{\scriptsize
\begin{tabular}{r|c|cc|cc}
\multirow{2}{*}{Taxonomy} & \multirow{2}{*}{Budget} & \multicolumn{2}{|c}{Uniform Cost} & \multicolumn{2}{|c}{Random Cost} \\
\cline{3-6}
& & LW & DP & LW & DP \\
\hline\hline
\multirow{9}{*}{T4}
& 0.1 & 0.221 & 0.223 & 0.229 & 0.228 \\
& 0.2 & \mb{0.274} & 0.250 & \mb{0.271} & 0.255 \\
& 0.3 & \mb{0.283} & 0.261 & \mb{0.283} & 0.270 \\
& 0.4 & \mb{0.285} & 0.278 & \mb{0.285} & 0.282 \\
& 0.5 & 0.285 & \mb{0.291} & 0.285 & \mb{0.291} \\
& 0.6 & 0.285 & \mb{0.291} & 0.285 & \mb{0.291} \\
& 0.7 & 0.285 & \mb{0.291} & 0.285 & \mb{0.292} \\
& 0.8 & 0.285 & \mb{0.292} & 0.285 & \mb{0.292} \\
& 0.9 & 0.285 & \mb{0.292} & 0.285 & \mb{0.292} \\
\hline
\multirow{9}{*}{T5}
& 0.1 & 0.211 & 0.210 & \mb{0.217} & 0.212 \\
& 0.2 & \mb{0.237} & 0.225 & \mb{0.237} & 0.226 \\
& 0.3 & \mb{0.244} & 0.233 & \mb{0.245} & 0.235 \\
& 0.4 & \mb{0.247} & 0.239 & \mb{0.247} & 0.242 \\
& 0.5 & \mb{0.248} & 0.246 & \mb{0.248} & 0.246 \\
& 0.6 & 0.248 & 0.247 & 0.248 & 0.247 \\
& 0.7 & 0.248 & 0.248 & 0.248 & 0.248 \\
& 0.8 & 0.248 & 0.248 & 0.248 & 0.248 \\
& 0.9 & 0.248 & 0.248 & 0.248 & 0.248 \\
\end{tabular}}
\caption{Average $p@3$ for LW and DP ($\eps=0.1$) over T4 and T5.
Statistically significant differences between LW and DP are marked in bold.}
\label{tab:approxt4t5-result}
\end{table}
\begin{table}
\centering
{\scriptsize
\begin{tabular}{r|c|cc|cc}
\multirow{2}{*}{Taxonomy} & \multirow{2}{*}{Budget} & \multicolumn{2}{|c}{Uniform Cost} & \multicolumn{2}{|c}{Random Cost} \\
\cline{3-6}
& & LW & DP & LW & DP \\
\hline\hline
\multirow{9}{*}{T1}
& 0.1 & 0.203 & \mb{0.220} & 0.195 & \mb{0.222} \\
& 0.2 & 0.203 & \mb{0.221} & 0.215 & \mb{0.284} \\
& 0.3 & 0.203 & \mb{0.394} & 0.209 & \mb{0.317} \\
& 0.4 & 0.227 & \mb{0.438} & 0.243 & \mb{0.424} \\
& 0.5 & 0.440 & \mb{0.492} & 0.340 & \mb{0.469} \\
& 0.6 & 0.444 & \mb{0.501} & 0.433 & \mb{0.497} \\
& 0.7 & 0.497 & 0.507 & 0.473 & \mb{0.503} \\
& 0.8 & 0.507 & 0.507 & 0.503 & 0.507 \\
& 0.9 & 0.507 & 0.507 & 0.507 & 0.507 \\
\hline
\multirow{9}{*}{T2}
& 0.1 & 0.504 & 0.479 & 0.506 & \mb{0.541} \\
& 0.2 & 0.574 & 0.616 & 0.581 & \mb{0.616} \\
& 0.3 & 0.586 & \mb{0.641} & 0.607 & \mb{0.647} \\
& 0.4 & 0.615 & \mb{0.670} & 0.646 & \mb{0.683} \\
& 0.5 & 0.685 & 0.713 & 0.709 & \mb{0.721} \\
& 0.6 & 0.761 & 0.753 & \mb{0.755} & 0.749 \\
& 0.7 & 0.762 & 0.757 & \mb{0.763} & 0.759 \\
& 0.8 & 0.763 & 0.763 & 0.763 & 0.762 \\
& 0.9 & 0.764 & 0.764 & 0.764 & 0.764 \\
\hline
\multirow{9}{*}{T3}
& 0.1 & 0.453 & 0.470 & 0.542 & 0.555 \\
& 0.2 & 0.624 & \mb{0.679} & 0.622 & \mb{0.682} \\
& 0.3 & 0.654 & \mb{0.734} & 0.649 & \mb{0.735} \\
& 0.4 & 0.654 & \mb{0.754} & 0.664 & \mb{0.757} \\
& 0.5 & 0.654 & \mb{0.760} & 0.664 & \mb{0.760} \\
& 0.6 & 0.654 & \mb{0.760} & 0.661 & \mb{0.760} \\
& 0.7 & 0.654 & \mb{0.760} & 0.683 & \mb{0.760} \\
& 0.8 & 0.703 & \mb{0.760} & 0.721 & \mb{0.760} \\
& 0.9 & 0.758 & 0.760 & 0.758 & 0.760 \\
\end{tabular}}
\caption{Average $MRR$ for LW and DP ($\eps=0.1$) over T1, T2, and T3.
Statistically significant differences between LW and DP are marked in bold.}
\label{tab:approxt1t2t3-result-mrr}
\end{table}
\begin{table}
\centering
{\scriptsize
\begin{tabular}{r|c|cc|cc}
\multirow{2}{*}{Taxonomy} & \multirow{2}{*}{Budget} & \multicolumn{2}{|c}{Uniform Cost} & \multicolumn{2}{|c}{Random Cost} \\
\cline{3-6}
& & LW & DP & LW & DP \\
\hline\hline
\multirow{9}{*}{T4}
& 0.1 & 0.527 & 0.523 & \mb{0.547} & 0.530 \\
& 0.2 & \mb{0.606} & 0.576 & \mb{0.609} & 0.576 \\
& 0.3 & \mb{0.624} & 0.605 & \mb{0.636} & 0.610 \\
& 0.4 & \mb{0.644} & 0.623 & \mb{0.644} & 0.630 \\
& 0.5 & \mb{0.646} & 0.642 & 0.646 & \mb{0.641} \\
& 0.6 & 0.646 & 0.646 & 0.646 & 0.646 \\
& 0.7 & 0.646 & 0.648 & 0.646 & 0.648 \\
& 0.8 & 0.646 & \mb{0.649} & 0.646 & \mb{0.649} \\
& 0.9 & 0.646 & \mb{0.649} & 0.646 & \mb{0.649} \\
\hline
\multirow{9}{*}{T5}
& 0.1 & 0.527 & 0.523 & \mb{0.547} & 0.530 \\
& 0.2 & \mb{0.606} & 0.576 & \mb{0.609} & 0.576 \\
& 0.3 & \mb{0.634} & 0.605 & \mb{0.636} & 0.610 \\
& 0.4 & \mb{0.644} & 0.623 & \mb{0.644} & 0.630 \\
& 0.5 & \mb{0.646} & 0.642 & \mb{0.646} & 0.641 \\
& 0.6 & 0.646 & 0.646 & 0.646 & 0.648 \\
& 0.7 & 0.646 & 0.648 & 0.646 & 0.248 \\
& 0.8 & 0.646 & \mb{0.649} & 0.646 & \mb{0.649} \\
& 0.9 & 0.646 & \mb{0.649} & 0.646 & \mb{0.649} \\
\end{tabular}}
\caption{Average $MRR$ for LW and DP ($\eps=0.1$) over T4 and T5.
Statistically significant differences between LW and DP are marked in bold.}
\label{tab:approxt4t5-result-mrr}
\end{table}
\subsection{Efficiency of Proposed Algorithms}
\label{sec:efficiency}
Because the efficiencies of LW and DP
do not depend on any specific cost model,
we analyze them using the uniform cost model over
the larger taxonomies, i.e., T4 and T5.
Table~\ref{tab:time} shows the average running time
of LW and DP for T4 and T5 over budgets 0.1 to 0.9,
using scaling factors $\eps$ of 0.05, 0.1, 0.2, and 0.3
for DP. Both LW and DP with a reasonably small value of
$\eps$ ($\eps \geq 0.1$) are efficient for a
design-time task.
Overall, LW is more efficient than DP, but DP is almost
as efficient as LW when $\eps \geq 0.2$.
Both algorithms take longer to run over larger
taxonomies, with the exception of DP with $\eps = 0.05$,
for reasons we explain later in this section.
Also, DP takes longer to run
as the value of $\eps$ becomes smaller.
These observations confirm our theoretical
analysis of the time complexities of these algorithms.
The running time of DP increases significantly as the
value of $\eps$
changes from $\eps = 0.1$ to $\eps = 0.05$. Because
the matrix required by the DP algorithm becomes substantially
large for $\eps = 0.05$, it occupies
most of the available main memory
and significantly slows down the program.
In addition, the Java garbage collector
spends a lot of time managing the available memory,
which slows the program down even further.
Interestingly, DP with $\eps = 0.05$
is faster on T5 than on T4.
After scaling the $u$ and $d$ values in the DP
algorithm, we remove the concepts with $u$ or $d$ equal to
0, because these concepts cannot increase the
queriability of any conceptual design. The distribution of
$u$ values in T5 is very skewed and has a long tail of
concepts with very small $u$ values; hence, the popularity
of many of these concepts becomes 0 after scaling. The gap
between T4 and T5 in the number of concepts with a popularity
of 0 after scaling widens for smaller values of $\eps$:
using a small value of $\eps$ for scaling,
T5 has considerably more such concepts.
As T4 thus retains more concepts with non-zero popularities
than T5, DP takes longer to run over T4 than over T5 for $\eps=0.05$.
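The pruning of zero-valued concepts described above can be sketched as follows, operating on already-scaled values; the name `surviving_concepts` and the dictionary inputs are illustrative assumptions.

```python
def surviving_concepts(u_hat, d_hat):
    """Concepts kept after scaling: those whose scaled popularity and
    scaled frequency are both nonzero; the rest cannot increase the
    queriability of any conceptual design and are removed."""
    return [c for c in u_hat if u_hat[c] > 0 and d_hat[c] > 0]
```

In a skewed taxonomy like T5, many long-tail concepts fail this check after scaling, which shrinks the DP's search space.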
Table~\ref{tab:eps-result} shows the effectiveness of the
conceptual designs returned by DP for different values of $\eps$.
Overall, we observe that the effectiveness of the designs
returned by DP consistently improves as the value
of $\eps$ decreases. These results also indicate that DP
delivers effective designs using reasonably large values of
$\eps$; therefore, it can be used effectively and efficiently
over large taxonomies.
\begin{table}
\centering
{\scriptsize
\begin{tabular}{r|c|cccc}
\multirow{2}{*}{Taxonomy} & \multicolumn{5}{|c}{Average Running Time (minutes)} \\
\cline{2-6}
& LW & DP ($0.05$) & DP ($0.1$) & DP ($0.2$) & DP ($0.3$) \\
\hline
T4 & 1 & 403 & 3 & 2 & 2 \\
T5 & 5 & 144 & 7 & 5 & 5 \\
\end{tabular}}
\caption{Average running time of LW and DP with different values of $\eps$ over T4 and T5.}
\label{tab:time}
\end{table}
\begin{table}
\centering
{\scriptsize
\begin{tabular}{c|c|cccc}
Taxonomy & Budget & DP ($0.05$) & DP ($0.1$) & DP ($0.2$) & DP ($0.3$) \\
\hline\hline
\multirow{9}{*}{T4}
& 0.1 & 0.220 & 0.223 & 0.206 & 0.202 \\
& 0.2 & 0.251 & 0.250 & 0.247 & 0.242 \\
& 0.3 & 0.264 & 0.261 & 0.261 & 0.267 \\
& 0.4 & 0.277 & 0.278 & 0.282 & 0.287 \\
& 0.5 & 0.291 & 0.291 & 0.291 & 0.291 \\
& 0.6 & 0.291 & 0.291 & 0.291 & 0.291 \\
& 0.7 & 0.291 & 0.291 & 0.291 & 0.291 \\
& 0.8 & 0.292 & 0.292 & 0.292 & 0.292 \\
& 0.9 & 0.292 & 0.292 & 0.292 & 0.292 \\
\hline
\multirow{9}{*}{T5}
& 0.1 & 0.208 & 0.210 & 0.211 & 0.200 \\
& 0.2 & 0.220 & 0.225 & 0.221 & 0.221 \\
& 0.3 & 0.233 & 0.233 & 0.233 & 0.233 \\
& 0.4 & 0.239 & 0.239 & 0.239 & 0.239 \\
& 0.5 & 0.246 & 0.246 & 0.246 & 0.246 \\
& 0.6 & 0.247 & 0.247 & 0.247 & 0.247 \\
& 0.7 & 0.248 & 0.248 & 0.248 & 0.248 \\
& 0.8 & 0.248 & 0.248 & 0.248 & 0.248 \\
& 0.9 & 0.248 & 0.248 & 0.248 & 0.248 \\
\end{tabular}}
\caption{Average $p@3$ of DP using different values of $\eps$ over T4 and T5.}
\label{tab:eps-result}
\end{table}
\begin{comment}
When given large budget, DP delivers a more effective conceptual designs
result than LW as shown in table~\ref{tab:approxT5-result}.
This is because when there are enough budget, the algorithm can exhaust the budget on concepts that are not disjoint to maximize the ranking quality as much as possible and also maximize the number of queries to optimize the ranking at the same. LW is limited in this case as it is forced to pick less important concept because of the disjoint design result restriction.
For example, although LW selects {\em organism}, it can also selects {\em person} instead of selecting other concepts with zero popularity.
On the other hand, when the database is small, the schema that can deliver high $Prec@3$ are usually not disjoint schema. Thus LW is less effective than DP as shown by a superior result of DP over LW as shown in table~\ref{tab:approxt1t2t3-result}.
On a side note, we observe that the popularity distribution of concepts in T5 follows Zipf's law with long tail that has no significant popularity. Given budget 0.6 or more, DP can exhaust the budget by picking most of all concepts with significant popularity in user queries. Thus we observe no improvement of ranking quality for DP after budget 0.5.
While the distribution in T4 follows the Zipf's law as in T5, the tail concepts in T4 has significantly large popularity than in T5 and so we still observe improvement of ranking quality when given large budgets.
The reason that LW performs better than DP for
larger taxonomy is that the algorithm always
returns a disjoint schema while DP is not.
For example, with budget 0.2 over T5, LW selects a schema including {\em organism} and no other descendant or ancestor concepts of {\em organism}. However, DP selects a schema that includes both {\em object} and {\em person} in which the former is the ancestor concept of the latter.
Intuitively, picking more than one concepts in the same path from root of taxonomy can waste the budget. This is because the number of queries that can be improved are not increased by picking the concept in the lower level of tree once the one on the top is already picked.
Furthermore, for the concepts like {\em object} and {\em person}, selecting {\em object} to answers other {\em object} queries that are not {\em person} may not benefit the improvement of overall $Prec@3$ as selecting other concepts that are not {\em person} but still a descendant of {\em object} to maximize the ranking precision.
LW picks {\em organism} which is a parent concept of {\em person} and a descendant of {\em object}. It also picks {\em artifact*} which is also a descendant of {\em object}, too.
Since more than $90\%$ of all {\em organism} documents and queries are also {\em person} , the schema selects by LW can deliver the same ranking quality for {\em person} queries, and can also accurately answer {\em artifact*} queries than DP.
Table~\ref{tab:model-result} also shows that QM perform better than PM on all taxonomy trees except S1.
In general, QM does not aim or try to select schema that can help improve all queries as much as possible as PM does.
However, QM can provide higher ranking quality because for some query concept, with annotation of too general concept in the collection, the query is still hard to answer.
For example, given budget 0.3 on T3, PM selects a schema consisting of {\em written communication}, {\em music}, and {\em message} so that the schema can help improve all queries.
Instead, QM selects {\em statement} which is one of the three children of {\em message}, and selects {\em literature} and {\em dramatic composition} which are descendants of {\em written communication}.
With high document frequency, it is not easy to answer all queries by just annotating the collection with {\em message}. {\em statement}, in fact, has less than half document frequency of {\em message} while 70\% of {\em message} queries are {\em statement}. Hence, the average $Prec@3$ of {\em message} queries can be much higher for QM instead of PM.
The schema selected by QM cannot improve any of {\em music} queries. However, with the knowledge of the concept frequencies in the dataset, the {\em music} queries are not easy to answer even with the annotation because more than 36\% of documents in the collection are {\em music}. So QM can spend this budget to picks other concept that can help improve overall ranking quality more in this case {\em literature} and {\em dramatic composition}.
Furthermore, while 55\% of all queries in this taxonomy are {\em written communication}, annotating {\em written communication} alone cannot provide large improvement to the overall ranking quality because of the same reason as {\em message}.
Instead, QM can pick both {\em literary composition} and {\em dramatic composition} which are descendants of {\em written communication} where together they form almost the same popularity as {\em written communication}. Therefore, QM can provide a better queries ranking quality.
The only exception is budget
0.6 over taxonomy $T_2$ under random cost model.
Interestingly, in this case QM selects a schema that contains
relatively more specific concepts than the
one selected by UM.
Nevertheless, there is still a case where PM can perform better than QM especially when budget is enough to only annotate 1 or 2 concepts. As shown in T2 that the heuristic of PM is similar to that of Oracle and so it deliver a better result than QM.
However, in most cases, QM provides a more accurate estimation of improvement in query ranking quality than PM and can achieve the optimal ranking quality as Oracle on average.
\end{comment}
\section{Introduction}
\label{sec:introduction}
\subsection{Concept Annotation}
Unstructured and semi-structured data sets,
such as HTML documents,
contain an enormous amount of information about
named entities, such as people and
products \cite{ChiticariuLRR10,webconcept:ragu}.
Users normally explore
these data sets using keyword queries
to find information about their entities of interest.
Unfortunately, as keyword queries are generally ambiguous,
query interfaces may not return the relevant answers for these
queries. For example, consider the excerpts of the
Wikipedia ({\it wikipedia.org}) articles in
Figure~\ref{fig:wikipedia1}. Assume that a user would like
to find information about {\it John Adams}, the politician,
over this data set. If she
submits query $Q_1$:{\it John Adams},
the query interface may return the articles about
{\it John Adams}, the artist, or {\it John Adams},
the school, as relevant answers.
Users can further disambiguate their queries
by adding appropriate keywords. Nonetheless,
it is not easy to find such keywords \cite{YomTov:Difficulty}.
For instance, if one refines $Q_1$ to {\it John Adams Ohio},
the query interface may return the article about
{\it John Adams}, the high school, as the answer.
It will not help either to add keyword {\it Congressman}
to $Q_1$ as this keyword does not appear in the article
about {\it John Adams}, the politician. Formulating
the appropriate keyword query requires some knowledge
about the sought-after entity and the data that
most users do not usually possess.
\begin{figure}
\centering
\scriptsize{
\begin{alltt}
<article>
John Adams has been a former member of the Ohio House of
Representatives from 2007 to 2014. ...
</article>
<article>
John Adams is a composer whose music is inspired by nature, ...
</article>
<article>
John Adams is a public high school located on the east side of
Cleveland, Ohio, ...
</article>
\end{alltt}
}
\vspace{-0.6cm}
\caption{Wikipedia article excerpts}
\label{fig:wikipedia1}
\end{figure}
\begin{figure}
\scriptsize{
\begin{alltt}
<article>
{\bf<politician>} John Adams {\bf</politician>} has been a former member
of the {\bf<legislature>} Ohio House of Representatives {\bf</legislature>}
from 2007 to 2014. ...
</article>
<article>
{\bf<artist>} John Adams {\bf</artist>} is a composer whose music is inspired
by nature, ...
</article>
<article>
{\bf<school>} John Adams {\bf</school>} is a public high school located on
the east side of {\bf<city>}Cleveland{\bf</city>}, {\bf<state>}Ohio{\bf</state>}, ...
</article>
\end{alltt}
}
\vspace{-0.5cm}
\caption{Annotated Wikipedia article excerpts}
\label{fig:wikipedia2}
\end{figure}
To make querying unstructured and semi-structured data sets
easier, data management researchers have proposed methods to
identify the mentions to entities
in these data sets and annotate them by their concepts \cite{ChiticariuLRR10,webconcept:ragu}.
Figure~\ref{fig:wikipedia2} shows excerpts of
the annotated Wikipedia articles whose original versions are
shown in Figure~\ref{fig:wikipedia1}.
Because entities in an annotated
data set are disambiguated by their concepts,
the query interface can answer queries
over these data sets more effectively.
Moreover, as the list of concepts used to annotate the data set
is available to users, they can further clarify their
queries by mentioning the concepts of
entities in these queries.
For example, a user who would like to retrieve article(s)
about {\it John Adams}, the politician, over
the annotated Wikipedia data set in Figure~\ref{fig:wikipedia2}
may mention the concept of {\it politician} in her query.
The set of annotated concepts in a data set is
the {\it conceptual design} for the data set \cite{Termehchy:SIGMOD:14}.
For example, the conceptual
design of the data fragment in Figure~\ref{fig:wikipedia2}
is $D_1$ = \{{\it politician, legislature, artist, school, state, city}\}. Using $D_1$, the query interface is
able to disambiguate all entities in this data fragment.
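To make the role of a conceptual design concrete, the following minimal sketch (hypothetical data structures, not part of any system described in this paper) shows how a query interface can filter annotated mentions by the concept a user specifies:

```python
# Hypothetical sketch: each article carries (entity string, concept)
# annotations, and a concept-qualified query keeps only articles whose
# matching mention has the requested concept.
articles = [
    {"id": 1, "mentions": [("John Adams", "politician"),
                           ("Ohio House of Representatives", "legislature")]},
    {"id": 2, "mentions": [("John Adams", "artist")]},
    {"id": 3, "mentions": [("John Adams", "school"),
                           ("Cleveland", "city"), ("Ohio", "state")]},
]

def answer(keyword, concept):
    """Return ids of articles mentioning `keyword` annotated as `concept`."""
    return [a["id"] for a in articles
            if (keyword, concept) in a["mentions"]]

print(answer("John Adams", "politician"))  # -> [1]: only the politician article
```

With the finer design $D_1$, each of the three {\it John Adams} entities resolves to a different article; under a coarser design the same lookup would return several candidates.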
\subsection{Costs of Concept Annotation}
Ideally, an enterprise would like to annotate
all relevant concepts from a data set to answer
all queries effectively.
Nonetheless, an enterprise has to spend significant
time, financial and computational resources,
and manual labor to accurately extract entities of a concept
in a large data set \cite{Anderson:CIDR:2013,IEMaintenance:Gulhane,ChiticariuLRR10,Termehchy:SIGMOD:14,OptimizeSQLText:Jain,Shen:SIGMOD:08,PrioritizationIE:Huang,Resource:Kanani}.
An enterprise usually has to develop or obtain a
complex program, called a {\it concept annotator}, to annotate
entities of a concept from a collection of documents \cite{McCALLUM:ACMQueue:05}.
Enterprises develop concept annotators using {\it rule-based}
or {\it machine learning} approaches.
In the rule-based approach, developers have to design and write
{\it hand-tuned programming rules}
to identify and annotate entities of a given concept.
For example, one rule to annotate
entities of concept {\it person} is that
they start with a capital letter.
It is not uncommon for a rule-based concept annotator to
have thousands of programming rules, which
takes a great deal of resources
to design, write, and debug \cite{McCALLUM:ACMQueue:05}.
One may also use machine learning algorithms
to develop an extractor for a
concept \cite{McCALLUM:ACMQueue:05}. In this approach,
developers have to find a set of
{\it relevant features} for the learning algorithm.
Unfortunately, as the specifications of relevant features are
usually unclear, developers have to find the
relevant features through a time-consuming and
labor-intensive process
\cite{Anderson:CIDR:2013,Anderson:PVLDB:2014}.
First, they have to inspect the data set to
find some candidate features.
For each candidate feature, developers have to write a
program to extract the value(s) of the feature
from the data set. Finally,
they have to train and test the concept annotator
using the set of selected features.
If the concept annotator is not sufficiently
accurate, developers have to explore the data set
for new features. As a concept annotator normally uses
hundreds of features, developers have to iterate these
steps many times to find a set of reasonably
effective features, where each iteration usually takes a considerable amount of time \cite{Anderson:CIDR:2013,Anderson:PVLDB:2014}.
The overheads of feature engineering and computation
have been well recognized in the machine learning community
\cite{Weiss:NIPS:2013}.
Moreover, if concept annotators use supervised learning algorithms,
developers have to collect or create training data, which
requires additional time and manual labor.
It is more resource-intensive to develop annotators for
concepts in specific domains, such as biology,
as it requires expensive communication between domain
experts and developers.
Current studies indicate that these communications are often not
successful and developers have to slog through the data set
to find relevant features for concept annotators in these domains \cite{Anderson:CIDR:2013}.
Unfortunately, the overheads of developing a concept
annotator are not one-time costs. Because the structure and content
of underlying data sets evolve over time,
annotators should be regularly rewritten and repaired \cite{IEMaintenance:Gulhane}.
Recent studies show that many concept annotators
need to be rewritten on average
about every two months \cite{IEMaintenance:Gulhane}.
Thus, the enterprise often has to repeat
the resource-intensive steps of developing a concept annotator
to maintain an up-to-date annotated data set.
After developing concept annotators, the enterprise
executes them over the data set to generate the annotated
collection. As most concept annotators perform complex
text analysis, such as deep natural language parsing,
it may take them days to process a large data set \cite{OptimizeSQLText:Jain,Shen:SIGMOD:08,PrioritizationIE:Huang,Resource:Kanani}. As the content of the data set evolves,
extractors often have to be rerun to create an updated
annotated collection.
\subsection{Cost-Effective Conceptual Design}
\begin{figure}
\centering
\begin{tikzpicture}[->,>=stealth',scale=0.84]
\tikzstyle{every node}=[circle,draw,fill=black!50,inner sep=1pt,minimum width=3pt,font=\scriptsize]
\tikzstyle{every label}=[rectangle,draw=none,fill=none,font=\scriptsize]
\node (CR) [label=right:$athlete$] at (0,0) {};
\node (thing) [label=above right:$thing$] at (1.25,3) {};
\node (agent) [label=left:$agent$] at (0,2) {};
\node (work) [label=below left:$place$] at (4,2) {};
\node (artwork) [label=left:{\it populated place}] at (4.5,1) {};
\node (sculpture) [label=left:$state$] at (4.2,0) {};
\node (painting) [label=right:$city$] at (5.2,0) {};
\node (person) [label=left:$person$] at (-1,1) {};
\node (org) [label=left:$organization\ $] at (2,1) {};
\node (JB) [label=below:$politician$] at (-1.25,0) {};
\node (artist) [label=left:$artist$] at (-3,0) {};
\node (BAFTA) [label=below left:{\it school}] at (1.25,-0.3) {};
\node (club) [label=below right:$legislature$] at (2.5,-0.3) {};
\path [->] (person) edge (CR);
\draw [->] (thing) -- (agent);
\draw [->] (agent) -- (person);
\draw [->] (agent) -- (org);
\draw [->] (person) -- (JB);
\draw [->] (person) -- (artist);
\draw [->] (org) -- (BAFTA);
\draw [->] (org) -- (club);
\draw [->] (thing) -- (work);
\draw [->] (work) -- (artwork);
\draw [->] (artwork) -- (sculpture);
\draw [->] (artwork) -- (painting);
\end{tikzpicture}
\caption{Fragments of DBpedia taxonomy from {\it dbpedia.org}}
\label{fig:DBpedia}
\end{figure}
\begin{figure}
\scriptsize{
\begin{alltt}
<article>
{\bf<person>} John Adams {\bf</person>} has been a former member
of the {\bf<organization>} Ohio House of Representatives {\bf</organization>}
from 2007 to 2014. ...
</article>
<article>
{\bf<person>} John Adams {\bf</person>} is a composer whose music is inspired
by nature, ...
</article>
<article>
{\bf<organization>} John Adams {\bf</organization>} is a public high school
located on the east side of {\bf<city>}Cleveland{\bf</city>}, {\bf<state>}Ohio{\bf</state>},
...
</article>
\end{alltt}
}
\vspace{-0.5cm}
\caption{Wikipedia article excerpts organized in more general concepts}
\label{fig:wikipedia-general}
\end{figure}
Because the available financial or computational
resources of an enterprise are limited, it may not afford
to develop, deploy, and maintain annotators for all concepts
in a domain. Also, many users
may need an annotated data set quickly
and cannot wait days for an (updated) annotated collection
\cite{Shen:SIGMOD:08,OptimizeSQLText:Jain}.
For example, a reporter who pursues breaking news,
a stock broker who studies the relevant news and documents
about companies, and
an epidemiologist who follows the
pattern of a potential new pandemic on the Web
and social media all need relevant answers to
their queries quickly.
Hence, the enterprise may afford to annotate only a subset of
concepts in a domain.
Concepts in many domains are organized
in taxonomies \cite{WebDataManage:Abiteboule:11}.
Figure~\ref{fig:DBpedia} depicts
fragments of the DBpedia ({\it dbpedia.org}) taxonomy, where nodes
are concepts and edges show superclass/subclass relationships.
An enterprise can use the information in a taxonomy to find
a conceptual design whose associated costs do not exceed
its budget and that delivers reasonably effective answers to
queries. For example, assume that
because an enterprise has to develop in-house annotators for
concepts {\it politician} and {\it artist},
the total cost of
annotating concepts in conceptual design
$D_1$ =
\{{\it politician, artist, legislature, school, state, city}\}
over the original Wikipedia
collection exceeds its budget.
As some free and reasonably accurate annotators
are available for concept {\it person},
e.g., {\it nlp.stanford.edu/software/CRF-NER.shtml},
the enterprise may annotate concept {\it person}
using a smaller amount of resources than concepts
{\it politician} and {\it artist}.
Hence, it may afford to annotate concepts
$D_2$ =
\{{\it person, organization, state, city}\}
from this collection.
Thus, the enterprise may choose to annotate the data set
using $D_2$ instead of $D_1$.
Figure~\ref{fig:wikipedia-general} demonstrates
the annotated version of the excerpts of Wikipedia articles
in Figure~\ref{fig:wikipedia1} using conceptual design $D_2$.
Intuitively, a query interface can disambiguate fewer
queries over the data fragment in Figure~\ref{fig:wikipedia-general}
than the one in Figure~\ref{fig:wikipedia2}.
For instance, if a user asks for
information about {\it John Adams}, the politician,
over Figure~\ref{fig:wikipedia-general},
the query interface may return the document that
contains information about {\it John Adams}, the artist,
as an answer, since both entities are annotated as {\it person}.
Nonetheless, the annotated data set in Figure~\ref{fig:wikipedia-general}
can still help the query interface to disambiguate some queries.
For example, the query interface can distinguish the
occurrence of entity {\it John Adams}, the school,
from the people named {\it John Adams} in
Figure~\ref{fig:wikipedia-general}. Thus, it can
answer queries about the school entity over this data fragment
effectively.
Clearly, an enterprise would like to select a
conceptual design whose required time and/or resources
for extraction do not exceed its budget
and that most improves the effectiveness of answering queries.
We call such a conceptual design
a {\it cost-effective conceptual design} for the data set.
\subsection{Our Contributions}
Currently, concept annotation experts use their intuitions
to discover cost-effective conceptual designs
from taxonomies. Because most taxonomies
contain hundreds of concepts \cite{Suchanek:YAGO},
this approach does not scale for real-world applications.
In this paper, we introduce and formalize the
problem of finding cost-effective conceptual designs
from taxonomies and
propose algorithms to solve the problem in general and interesting
special cases.
To this end, we make the following contributions.
\squishlisttwo
\item We develop a theoretical framework
that quantifies the amount of improvement in
the effectiveness of answering queries by annotating
a subset of concepts from a taxonomy.
Our framework takes into account possibility of error in concept
annotation.
\item We introduce and formally define the problem
of cost-effective
conceptual design over tree-shaped taxonomies and show
it to be NP-hard.
\item We propose an efficient approximation algorithm, called
the level-wise algorithm, and prove that it has
a bounded worst-case approximation ratio in an interesting
special case of the problem. We also propose an
exact algorithm for the problem with pseudo-polynomial
running time.
\item We further define the problem over taxonomies that
are directed acyclic graphs and
prove that given a generally accepted hypothesis,
there is no approximation algorithm
with reasonably small approximation ratio and
no algorithm with pseudo-polynomial running time for this problem.
We show that these results hold even for some restricted cases
of the problem, such as the case where all concepts are equally costly.
\item We evaluate the accuracy of our formal framework
using a large scale real-world data set, Wikipedia,
real-world taxonomies \cite{Suchanek:YAGO},
and a sample of a real-world query workload.
Our results indicate that the formal framework
accurately measures the amount of improvement
in the effectiveness of answering queries
using a subset of concepts from a taxonomy.
\item We perform extensive empirical studies
to evaluate the accuracy and efficiency of the proposed
algorithms over real-world data sets, taxonomies, and query workload.
Our results indicate that the pseudo-polynomial algorithm
is generally able to deliver more effective conceptual designs
than the level-wise algorithm in reasonable amounts of time.
They further show that the level-wise algorithm provides more effective
conceptual designs than the pseudo-polynomial algorithm if the distribution of concepts in queries is skewed.
\end{list}
The paper is organized as follows. Section~\ref{sec:background} reviews the related work.
Section~\ref{sec:cost-effective-design}
formalizes the problem of cost-effective
conceptual design over a tree-shaped taxonomy
and shows that it is NP-hard.
Section~\ref{sec:approximation-algorithms} describes an efficient
approximation algorithm with
bounded approximation ratio in an interesting special case of the problem.
Section~\ref{sec:pseudo-polynomial}
proposes a pseudo-polynomial algorithm for the problem in
the general case.
Section~\ref{sec:dag-taxonomy} defines the problem over
taxonomies that are directed acyclic graphs and
provides interesting hardness results for this setting.
Section~\ref{sec:conclusion} concludes the paper.
The proofs for the theorems of the paper are in the appendix.
\section{Pseudo-polynomial Time Algorithm}
\label{sec:pseudo-polynomial}
In this section we describe a pseudo-polynomial time
algorithm for the CECD problem over tree taxonomies.
As with many other optimization problems over trees, one approach is to find an
optimal solution \emph{bottom-up} using the dynamic programming technique.
The main idea is to define the CECD problem over all subtrees of the given taxonomy
$\mX=(R, \mC, \mR)$. We then show that, in order to solve the subproblem
defined over the subtree rooted at $C$, it is enough to solve the subproblems defined over
the subtrees rooted at the children of $C$.
Let $\child(C)$ be the set of all children of the concept
$C$ in $\mX$.
Moreover, let $\mX_C$ be the subtree of $\mX$ rooted
at $C$.
Formally, given a budget $B_C$, the subproblem over $\mX_C$ is to find a design $\mS_C \subseteq \mX_C$ whose total cost is at most $B_C$ and whose partitions have the maximum queriability. Note that after annotating $\mS_C$ in $\mX_C$, there may exist leaf concepts in $\mX_C$ that do not belong to $\part(S)$ for any $S\in \mS_C$. Let $\nullpart(\mS_C, C)$ denote the leaf concepts of $\mX_C$ that are not assigned to any partition of $\mS_C$.
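As an illustration of this partition semantics, the following sketch (our own reading, over a toy subtree with hypothetical concept names) assigns each leaf to its lowest annotated ancestor-or-self in the design and collects unassigned leaves into $\nullpart$:

```python
# Our own reading of the partition semantics (toy subtree; hypothetical
# concept names): each leaf is assigned to its lowest annotated
# ancestor-or-self in the design; leaves with no annotated ancestor
# fall into nullpart.
tree = {                       # child -> parent, rooted at "C"
    "person": "C", "org": "C",
    "politician": "person", "artist": "person",
    "school": "org", "legislature": "org",
}

def leaves(t):
    parents = set(t.values())
    return [n for n in t if n not in parents]

def partition(design):
    parts = {s: [] for s in design}
    null_part = []
    for leaf in leaves(tree):
        node, assigned = leaf, None
        while node is not None:        # walk up toward the root
            if node in design:
                assigned = node
                break
            node = tree.get(node)      # None once we pass the root
        (parts[assigned] if assigned is not None else null_part).append(leaf)
    return parts, null_part

parts, null_part = partition({"person"})
# part(person) = [politician, artist]; school and legislature stay unassigned
```

Here annotating only {\it person} covers the two person leaves, while the leaves under {\it org} end up in $\nullpart$, which is exactly the quantity the parameter $N_C$ tracks.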
In order to compute the maximum queriability of the best design in $\mX_C$, one of the cases we should consider is the one in which $C$ is annotated. To apply dynamic programming in this case, we need to evaluate the queriability of $\part(C)$, which is $\sum_{Ch \in \child(C)}\sum_{C'\in \nullpart(\mS_{Ch}, Ch)} u(C')d(C')$. Thus, besides the total queriability of partitions in $\mX_C$, we should compute the value of $\sum_{C' \in \nullpart(\mS_{Ch}, Ch)} u(C')d(C')$. Altogether, we are required to solve the subproblem $Q$ defined over the subtree rooted at $C$ with parameters $B_C$ and $N_C$, where $B_C$ denotes the available budget for annotating concepts in $\mX_C$ and $N_C$ denotes the value of $\sum_{C'\in \nullpart(\mS_C, C)} u(C')d(C')$.
Further, we assume that $u(C)$, $d(C)$, and $w(C)$ are positive integers for each $C \in \mC$.
In Section~\ref{sec:experiment}, we show that the algorithm
can handle real values using scaling techniques, at the expense
of reporting a near-optimal solution instead of an optimal one.
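One standard way such scaling works (a sketch with a hypothetical $\epsilon$, not necessarily the exact scheme of Section~\ref{sec:experiment}) is to divide each real value by $\epsilon$ and round, so the dynamic program can index integer values; the precision lost is proportional to $\epsilon$:

```python
# Sketch of the scaling trick (hypothetical epsilon): divide each real
# value by epsilon and round, producing integers the DP can index by.
def scale(values, epsilon):
    return [int(round(v / epsilon)) for v in values]

u_real = [0.23, 0.41, 0.07]      # e.g. real-valued leaf popularities
u_int = scale(u_real, 0.1)       # -> [2, 4, 1]
```

Smaller $\epsilon$ gives a solution closer to optimal but enlarges the integer ranges $U$ and $D$ that the running time depends on.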
We define $D = \sum_{C \in \leaf(\mC)} d(C)$,
$U =$ $\sum_{C \in \leaf(\mC)} u(C)$. Let
$B_{\tt total}$ denote the total available budget.
We propose an algorithm whose time complexity is polynomial
in $U$, $D$, $B_{\tt total}$, and $\card{\mC}$.
We define $Q[C,B,N]$ to denote the
maximum queriability we can obtain, subject
to the budget $B$, from the partitions
in $\mX_C$,
such that $N=\sum_{C'\in \nullpart(\mS_C, C)} u(C')d(C')$.
Note that $Q$ does not include the queriability corresponding to the free concepts in $\mX_C$, which is $N$.
We have the following
recursive rules for the non-leaf concepts in $\mC$, based on the values of $Q$ for their children.
\begin{figure}[htb]
\centering
\includegraphics[height=1.50in]{dynamic-0}
\caption{The concepts in red are the ones picked in the design. $(a)$, $(b)$, and $(c)$ show the three types of subproblems that must be solved in order to compute $Q[C,B,0]$.}
\label{fig:dynamic-0}
\end{figure}
\begin{align*}
Q[C,B,0] = \max\{&\max_{\mB,\mN} (\sum_{Ch\in \child(C)} Q[Ch,\mB(Ch),\mN(Ch)] + \frac{\pr(C)}{d(C)} {\sum_{Ch \in \child(C)} \mN(Ch)}),\\
&\max_{\mB'}\sum_{Ch\in \child(C)} Q[Ch,\mB'(Ch),0]\}
\end{align*}
For each $Ch$, $\mB(Ch)$, $\mB'(Ch)$ and $\mN(Ch)$ are integer
values satisfying the following conditions:
(1) $B = w(C) + \sum_{Ch\in\child(C)}\mB(Ch)$,
(2) $B = \sum_{Ch\in\child(C)}\mB'(Ch)$, and
(3) $U D \geq \sum_{Ch\in\child(C)}\mN(Ch)$.
The first term in the recursive rule corresponds to the case in which
we select concept $C$ in the output design ($(b)$ and $(c)$ in Figure~\ref{fig:dynamic-0}), and
the second term corresponds to the case in which
$\nullpart(Ch)=\emptyset$ for every child $Ch$ of $C$ ($(a)$ in Figure~\ref{fig:dynamic-0}).
In a design $\mS_C$ in $\mX_C$ with maximum queriability, empty $\nullpart$, and total cost $B$, either $C$ is selected in the design and the budget $B-w(C)$ is divided among the children of $C$ (first term of the above rule), or the whole budget $B$ is divided among the children of $C$ and every leaf concept of $\mX_C$ is assigned to a proper descendant of $C$ in the design (second term of the above rule).
Similarly, for the case in which $N\neq 0$ we have the following recursive rule:
\begin{align*}
Q[C,B,N] &= \max_{\mB,\mN} \sum_{Ch\in \child(C)} Q[Ch,\mB(Ch),\mN(Ch)]
\end{align*}
where $\mB=\set{\mB(Ch)|Ch\in\child(C)}$ and $\mN=$ $\{\mN(Ch)|$ $Ch\in\child(C)\}$ such that
$B = \sum_{Ch\in\child(C)}\mB(Ch)$ and
$N =\sum_{Ch\in\child(C)}\mN(Ch)$.
For each leaf concept $C_\ell$ in $\mC$, we have the following.
\squishlisttwo
\item{$Q[C_\ell,B,N] = 0 \text{ if } N=u(C_\ell)d(C_\ell) \text{ and } -\infty \text{ otherwise }$}
\item{$Q[C_\ell,B,0] = \pr(C_{\ell})u(C_\ell) \text{ if } B \geq w(C_\ell) \text{ and } -\infty \text{ otherwise}.$}
\end{list}
\noindent
The maximum value of the queriability on $\mX=(R, \mC, \mR)$ is
\begin{align}
\max_{N} Q[R,B_{\tt total},N] + N,
\label{eq:pseudo-final-recursion}
\end{align}
where $B_{\tt total}$ is the total available budget. The first term, $Q[R,B_{\tt total},N]$, denotes the profit obtained from the partitions of an optimal design, and the second term corresponds to the profit obtained from the free concepts with respect to the output design.
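To make the recursive rules concrete, the following Python sketch evaluates $Q[C,B,N]$ over a small hard-coded binary taxonomy. The node names, costs, popularities, and frequencies are invented for illustration; the root is given cost $B_{\tt total}+1$ so that it is never annotated, mirroring the treatment of dummy nodes. This is a brute-force memoized version meant only to illustrate the recursion, not the optimized table-filling algorithm.

```python
NEG = float("-inf")

class Node:
    """A concept in a binary taxonomy; u and d are used only at leaves."""
    def __init__(self, name, w, pr, u=0, d=0, left=None, right=None):
        self.name, self.w, self.pr = name, w, pr
        self.u, self.d = u, d
        self.left, self.right = left, right

def leaves(C):
    if C.left is None:
        return [C]
    return leaves(C.left) + leaves(C.right)

def solve(root, budget):
    """max_N Q[root, budget, N] + N, following the recursive rules above."""
    UD = sum(l.u * l.d for l in leaves(root))
    memo = {}

    def Q(C, B, N):
        key = (C.name, B, N)
        if key not in memo:
            memo[key] = compute(C, B, N)
        return memo[key]

    def compute(C, B, N):
        if C.left is None:                       # leaf base cases
            if N == C.u * C.d:
                return 0.0
            if N == 0:
                return C.pr * C.u if B >= C.w else NEG
            return NEG
        L, R, best = C.left, C.right, NEG
        if N == 0:
            # annotate C: children's null-part leaves form C's partition
            dC = sum(l.d for l in leaves(C))
            for b in range(B - C.w + 1):
                for nL in range(UD + 1):
                    for nR in range(UD - nL + 1):
                        q = Q(L, b, nL) + Q(R, B - C.w - b, nR)
                        best = max(best, q + C.pr * (nL + nR) / dC)
            # or: do not annotate C; no leaf below C is left uncovered
            for b in range(B + 1):
                best = max(best, Q(L, b, 0) + Q(R, B - b, 0))
        else:
            for b in range(B + 1):
                for nL in range(N + 1):
                    best = max(best, Q(L, b, nL) + Q(R, B - b, N - nL))
        return best

    return max(Q(root, budget, N) + N for N in range(UD + 1))

# Toy instance: two leaf concepts under a root that is never annotated.
A = Node("A", w=1, pr=1.0, u=2, d=3)
B = Node("B", w=1, pr=1.0, u=1, d=2)
R = Node("R", w=99, pr=1.0, left=A, right=B)
```

The triple loop over the budget split and the split of $N$ between the two children mirrors the $O(B_{\tt total} U D)$ work per cell discussed below.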
To compute the running time of the algorithm we need to give an
upper bound on the number of cells in $Q$ and the time required
to compute the value of each cell.
The time to compute a single cell in $Q$ is
exponential in terms of the maximum degree of the taxonomy.
Consequently, the algorithm runs much faster
if the maximum degree in $\mX$ is bounded by a small constant.
As we show next,
we can modify the taxonomy $\mX$ to obtain
taxonomy $\mX'$ such that each concept $C$ in
$\mX'$ has at most two children and the number of nodes in $\mX'$
is at most twice the number of nodes in $\mX$.
Since each node in $\mX'$ has two children,
the required amount of time to compute a single cell in $Q$ is
$O(B_{\tt total} U D)$; at most $B_{\tt total}$ ways to divide the budget between the two children and at most $UD$ ways to divide $N$ between the two children.
Since the first argument of $Q$ can be any of the concepts
in $\mC$, $N \leq U D$, and $B \leq B_{\tt total}$,
there are $O(\card{\mC}\, B_{\tt total}UD)$ cells to evaluate in order
to compute the design with maximum queriability.
Thus, the total time for computing all cells
in $Q$ is $O(\card{\mC} (B_{\tt total} U D)^{2})$.
Next, we explain how to transform an arbitrary taxonomy to
a binary taxonomy.
Let $C$ be a non-leaf concept in $\mX$.
We replace the subtree induced by $\set{C}\cup\child(C)$ with a
full binary tree $\mX_C'$ whose root is $C$ and whose
leaves are $\child(C)$, as shown in Figure~\ref{fig:binary-transform}.
Some internal nodes of $\mX_C'$ do not
correspond to any node in $\mX$. We refer to such
internal nodes as {\it dummy nodes},
and set their cost to $B_{\tt total}+1$ to make sure that
our algorithm does not include them in
the output design.
\begin{figure}[htb]
\centering
\includegraphics[height=1.25in]{binary}
\caption{Transforming an input taxonomy $\mX$ into a binary taxonomy.
Blue square nodes correspond to dummy nodes.}
\label{fig:binary-transform}
\end{figure}
Applying the mentioned transformation to all
nodes of $\mX$, we obtain a binary taxonomy
$\mX'= (R, \mC', \mR')$.
The number of nodes in $\mC'$ is at most twice
the number of nodes in $\mC$.
It follows that the running time of our pseudo-polynomial algorithm
on the input $\mX'$ is
$O(\card{\mC} (B_{\tt total} U D)^{2})$.
Since this transformation does not change the subset of
leaf concepts in the subtree rooted at any internal node,
any solution in $\mX$ corresponds to a
solution in $\mX'$ with the same cost and queriability.
Since dummy nodes are too expensive to be chosen,
they do not introduce any new solutions to the
set of feasible solutions.
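The transformation can be sketched as follows. The adjacency-list representation and helper names are our own, and we assume every internal concept has at least two children; dummy nodes receive cost $B_{\tt total}+1$ so that no feasible design can ever select them.

```python
def binarize(children, cost, budget):
    """Make every internal node binary by inserting dummy nodes.

    children: dict mapping each concept to the list of its children
    cost:     dict mapping each concept to its annotation cost
    """
    out_children, out_cost = {}, dict(cost)
    fresh = 0
    for name, kids in children.items():
        if not kids:                 # leaf concept: nothing to do
            continue
        nodes = list(kids)
        while len(nodes) > 2:        # pair up children level by level
            merged = []
            for i in range(0, len(nodes) - 1, 2):
                dummy = f"_dummy{fresh}"
                fresh += 1
                out_cost[dummy] = budget + 1   # never affordable
                out_children[dummy] = [nodes[i], nodes[i + 1]]
                merged.append(dummy)
            if len(nodes) % 2:       # odd child carried to the next level
                merged.append(nodes[-1])
            nodes = merged
        out_children[name] = nodes
    return out_children, out_cost

def leaf_set(children, node):
    """Leaf concepts below `node`; unchanged by the transformation."""
    kids = children.get(node, [])
    if not kids:
        return {node}
    return set().union(*(leaf_set(children, k) for k in kids))
```

For a node with four children, two dummy nodes are inserted and the leaf set below the node is preserved, consistent with the size bound above.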
\begin{theorem}
There is an algorithm to solve the CECD problem over
taxonomy $\mX = (R, \mC, \mR)$ with budget $B$ in
$O(\card{\mC} B^2 U^2 D^2)$.
\end{theorem}
\iffalse
Next, as we mentioned earlier in applications $u$ and $d$ values are positive real.
To solve the problem with real $u$ and $d$ values, we define:
\begin{align*}
\hat{u}(C) &= \floor{u(C) \over (\eps u_{\max}/n)}, \text{ and } \\
\hat{d}(C) &= \floor{d(C) \over (\eps d_{\max}/n)}
\end{align*}
The accuracy of the dynamic programming algorithm depends on
values of $\eps_d$ and $\eps_u$. If these values are sufficiently
small, the algorithm finds the optimal schema.
Small values of these parameters, however, may result in
large $U$ and $D$ values and increase the running
time of the algorithm considerably.
Hence, one may choose relatively larger values for
$\eps_d$ and $\eps_u$ to obtain a schema whose
queriability is close to the one of the optimal schema
in a shorter time.
\fi
Table~\ref{table:results} presents a summary of
proposed algorithms for the CECD problem.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Algorithm & Approximation ratio & Running time \\
\hline
Level-wise &
$O((h+\log\card{\mC})/\pr_{\min})$ (Disjoint CECD)& $O(h \card{\mC} \log(\card{\mC}))$\\
\hline
Dynamic Programming & Pseudo-polynomial & $O(\card{\mC} B^ 2 U^2 D^{2})$ \\
\hline
\end{tabular}
\end{center}
\vspace{-0.55cm}
\caption{Algorithms for the CECD problem.}
\label{table:results}
\end{table}
\begin{comment}
\iffalse
We state the recursive relation to compute $Q[C,B,N]$ as follows.
\begin{align}
Q[C,B,N] &= \min_{\mQ, \mN}
(\sum_{Ch\in \child(C)} B[Ch,Q_{Ch},N_{Ch}]),
\label{eq:pseudo-main-recursion}
\end{align}
where $\mQ=\set{Q_{Ch}|Ch\in\child(C)}$ and $\mN=\set{N_{Ch}|Ch\in\child(C)}$
such that $Q= \sum_{Q_{Ch}\in\mQ}Q_{Ch}$ and
$N = \sum_{N_{Ch}\in\mN}N_{Ch}$.
Because $Q$ is the queriability that we can get by selecting
some concepts in $\mX_C$, it does not exceed
$U_C = $$\sum_{E \in \final(\mX_C)} \hat{u}(E)$,
where $\final(\mX_C)$
is the set of final concepts in $\mX_C$. Hence, we choose
$\mQ$ to contain values between $0$ and $U_C$.
Similarly, $N$ does not exceed $U_C\ D_C$, where
$D_C = $$\sum_{E \in \final(\mX_C)} \hat{d}(E)$. Thus,
we set $\mN$ to contain values between $0$ and $U_C \ D_C$.
\fi
\iffalse
Our algorithm first uses a standard scaling technique to convert the
values of popularity and frequency of every concept
in $\mC$ to positive integers \cite{Vazirani:Book:Approx}.
More precisely, let $u_{\max}$ be the maximum
popularity of final concepts in $\mC$
and $\eps_{u} < 1$ be sufficiently small
such that $\hat{u}(C) =$ $u(C)\over{\eps_u\cdot u_{\max}}$ for
all final concepts in $\mC$ are positive integers.
Also, let $d_{max}$ be the maximum frequency of all
concepts $C \neq R$ in $\mC$ and
$\eps_d < 1$ be sufficiently small
such that $\hat{d}(C) =$ $d(C)\over{\eps_d\cdot d_{\max}}$
for all concepts $C \neq R$ in $\mC$
are positive integers.
\fi
We define $B[C,P,N,F]$ to denote the minimum required budget to obtain profit $P$ from the partitions entirely in $T_C$ such that $N=\sum_{C\in M_{i}} u(C)d(C)$ and $F=\sum_{C\in M_{i}} d(C)$ where $M_i$ is the set of concrete concepts in $T_C$ that do not belong to any partition which is entirely in $T_C$. We can state the recursive relation of $B[C,P,N,F]$ as follows.
\begin{align*}
B[C,P,N,F] &= \min_{\mP,\mN,\mF} (\sum_{D\in \child(C)} B[D,P_{D},N_{D},F_{D}]),
\end{align*}
where $\mP=\set{P_{D}|D\in\child(C)}$, $\mN=\set{N_{D}|D\in\child(C)}$, and $\mF=\set{F_{D}|D\in\child(C)}$ such that $P= \sum_{P_{D}\in\mP}P_{D}$, $N = \sum_{N_{D}\in\mN}N_{D}$, and $F = \sum_{F_{D}\in\mF}F_{D}$ (Note that by definition $N \leq F$ and accordingly $N_{D} \leq F_{D}$ for any $D$). And,
\begin{align*}
B[C,P,0,0] &= \min\{\min_{\mP,\mN,\mF} (\sum_{D\in \child(C)} P[D,P_{D},N_{D},F_{D}] \\
&+ w(C)),\\ &\min_{\mP'}\sum_{D\in \child(C)} B[D,P'_{D},0,0]\},
\end{align*}
where $\mP=\set{P_{D}|D\in\child(C)}$, $\mP'=\set{P'_{D}|D\in\child(C)}$, $\mN=\set{N_{D}|D\in\child(C)}$, and $\mF=\set{F_{D}|D\in\child(C)}$ such that
$P = \sum_{P'_{D}\in\mP'}P'_{D}$ and $P= \sum_{P_{D}\in\mP}P_{D} +\frac{\sum_D N_D}{\sum_D F_D}$. The first term corresponds the case that we pick concept $C$ in the output design and the second term corresponds the case that we pick all children of $C$ in the output design.
For each concept $C_\ell \in \ell(T)$,
\begin{itemize}
\item{$B[C_\ell,P,N,F] = 0 \quad \text{ if } P \leq N/F, N=u(C_\ell)d(C_\ell)\\ \text{ and } F = d(C_\ell), $}
\item{$B[C_\ell,P,0,0] = w(C_\ell) \quad \text{ if } P \leq u(C_\ell).$}
\end{itemize}
The maximum value of the benefit on $T=(\mC,\mR)$ is
\begin{align*}
\max_{B[\root(T),P,N,F]\leq budget} P + {N \over F}.
\end{align*}
To compute the running time of the algorithm we need to give an upper bound on the number of cells in $B$ we need to evaluate. For now assume that we know the exact value of $\opt$ in advance (later via some scaling technique we provide an upper bound for \opt).
First argument, $C$, can be any of $n$ concepts in $\mC$. Moreover, $N \leq \sum_{C\in\mC} u(C)d(C)$ and $F \leq \sum_{C\in \mC} d(C)$.
As we prove next, we can modify the taxonomy $\mT$ to obtain $\mT'$ such that each concept $C$ in the taxonomy $\mT'$ has at most two children.
Since each node in $\mT'$ has two children, the required amount of time to evaluate a single cell is $O(1)$. Thus the total running time is $O(n(|\opt| UD^2)^{3})$ where $D= \sum_{C\in \mC}d(C)$ and $U= \sum_{C\in \mC}u(C)$.
Note that we have assumed that for each concept $C\in \mC$, $d(C)$ and $u(C)$ are positive integers.
Let $C\in\mC$, and let $\child(C) = \set{C_1,\cdots, C_{k}}$, where $k = cn(C)$.
We replace the induced subtree of $C\cup\child(C)$ with a full binary tree $T_C'$ whose root is $C$ and whose leaves are $\child(C)$ (see Figure~\ref{fig:binary-transform}).
Note that the internal vertices of $T_C'$ do not correspond to the vertices of $T$. We refer to such internal vertices as dummy vertices, and we set their cost of annotation to $B+1$ to make sure that our algorithm does not select any of them.
Applying the mentioned transformation to all nodes of $T$ we obtain a binary tree $T'$. Observe that the number of nodes in $T'$ is at most twice the number of nodes in $T$, and the height of $T'$ is at most $\lceil\log n\rceil$ times larger than the height of $T$. It follows that our pseudopolynomial algorithm runs on the input $T'$ in $O(n(\card{\opt}UD^2)^{3})$ time.
Since our transformation does not change the subset of leaves (concrete concepts) in the subtree of any internal vertex (abstract concept) each solution in $T$ corresponds to a solution in $T'$ with the same cost and benefit.
On the other hand, since dummy vertices are too expensive to be chosen they do not introduce any new solution to the set of feasible solutions.
\begin{theorem}
There is an algorithm to solve the CECD problem on a tree taxonomy of size $n$ with budget $B$ in $O(n(\card{\opt}UD^2)^{3})$ time, where $D$ and $U$ are total frequency and total popularity of all concepts, respectively.
\end{theorem}
Next we apply the standard scaling trick used in Knapsack type problem and we previously applied in a similar problem \cite{}. The scaling helps us to upper bound the value of $|\opt|$ and improve the running time of the algorithm in expense of finding an approximated solution instead.
Let $\lambda = {\eps M \over n}$ where $M =\max_{C\in \mC} u(C)$.
We define $\hat{u}(C) = \lfloor {u(C) \over \lambda} \rfloor$. This implies that
\begin{align*}
\lambda \hat{u}(C) \leq u(C) \leq \lambda(\hat{u}(C)+1)
\end{align*}
We show that if we worked with $\hat{u}()$ instead of $u()$ the running time will improved while we only loose a factor of $1+\eps$ in approximation factor.
Let $\mS_{\scaled}$ be the schema selected by our algorithm considering $\hat{u}$ and let $\mS_{\org}$ be the optimal solution considering the original popularity values, $u()$. Thus,
\begin{align*}
&\sum_{p \in \part(\mS_{\scaled})} {\sum_{C\in p} u(C) d(C) \over \sum_{C\in p} d(C)} + {\sum_{c\in \free(\mS)} u(C) d(C) \over \sum_{c\in \mC} d(C)}\\
&>\lambda \sum_{p \in \part(\mS_{\scaled})} {\sum_{C\in p} \hat{u}(C) d(C) \over \sum_{C\in p} d(C)} + {\sum_{c\in \free(\mS)} \hat{u}(C) d(C) \over \sum_{c\in \mC} d(C)}\\
&>\lambda \sum_{p \in \part(\mS_{\org})} {\sum_{C\in p} \hat{u}(C) d(C) \over \sum_{C\in p} d(C)} + {\sum_{c\in \free(\mS)} \hat{u}(C) d(C) \over \sum_{c\in \mC} d(C)}\\
&>\sum_{p \in \part(\mS_{\org})} {\sum_{C\in p} (u(C) -\lambda) d(C) \over \sum_{C\in p} d(C)} + {\sum_{c\in \free(\mS)} (u(C)-\lambda) d(C) \over \sum_{c\in \mC} d(C)}\\
&= \sum_{p \in \part(\mS_{\org})} {\sum_{C\in p} u(C) d(C) \over \sum_{C\in p} d(C)} + {\sum_{c\in \free(\mS)} u(C) d(C) \over \sum_{c\in \mC} d(C)}\\
&- \lambda (\card{\part(\mS_{\org})}+1)\\
&> \opt - \lambda n = \opt - \eps M > (1-\eps) \opt.
\end{align*}
\begin{lemma}
The optimal solution of CESS problem with $\hat{u}$ values (scaled $u$ values) returns a $(1-\eps)$ approximate solution of the original CESS problem.
\end{lemma}
Nest we show that the scaling idea improve the running time. First we bound the value of $\opt_{\scaled}$ when we consider $\hat{u}()$.
\begin{align*}
\opt_{\scaled} &= \sum_{p \in \part(\mS_{\scaled})} {\sum_{C\in p} \hat{u}(C) d(C) \over \sum_{C\in p} d(C)} + {\sum_{c\in \free(\mS)} \hat{u}(C) d(C) \over \sum_{c\in \mC} d(C)}\\
&< \sum_{C \in \mC} \hat{u}(C) = \sum_{C \in \mC} {u(C) \over \eps M}\\
&< n {M \over {\eps M \over n}} = {n^2 \over \eps}.
\end{align*}
Moreover, by the same calculation, $\hat{U} = \sum_{C \in \mC} \hat{u}(C) \leq {n^2 \over \eps}$. Thus the total running time of the algorithm working with $\hat{u}()$ is $O({n^{13} D^6 \over \eps^6})$ and it returns a $(1+\eps)$-approximate solution.
\end{comment}
\def\part{{\tt part}}
\def\allparts{{\tt all-parts}}
\def\free{{\tt free}}
\def\final{{\tt final}}
\def\leaf{{\tt leaf}}
\def\nullpart{{\tt nullPart}}
\def\scaled{{\tt scl}}
\def\org{{\tt org}}
\def\pr{{\tt pr}}
\def\sol{\mathrm{sol}}
\def\prob#1{\textup{\text{#1}}\xspace}
\addto{\Affilfont}{\small}
\renewcommand{\Authfont}{\normalsize}
\def\Comment#1{\textsl{$\langle\!\langle$#1\/$\rangle\!\rangle$}}
\newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}}
\usepackage{array}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\renewcommand\floatpagefraction{.9}
\renewcommand\topfraction{.9}
\renewcommand\bottomfraction{.9}
\renewcommand\textfraction{.1}
\newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{example}[theorem]{Example}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{property}[theorem]{Property}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{problem}[theorem]{Problem}
\title{Cost-Effective Conceptual Design Using Taxonomies}
\author[1]{Ali Vakilian {\it [email protected]}}
\author[2]{Yodsawalai Chodpathumwan {\it [email protected]}}
\author[3]{Arash Termehchy {\it [email protected]}}
\author[3]{Amir Nayyeri {\it [email protected]}}
\affil[1]{CSAIL, Department of EECS, MIT, Cambridge, MA, USA}
\affil[2]{Department of Computer Science, University of Illinois, Urbana, IL, USA}
\affil[3]{School of EECS, Oregon State University, Corvallis, OR, USA}
\renewcommand\Authands{ and }
\begin{document}
\date{}
\maketitle
\begin{abstract}
It is known that annotating
named entities in unstructured and semi-structured
data sets by their concepts improves the effectiveness
of answering queries over these data sets.
Ideally, one would like to annotate entities of
all concepts in a given domain in a data set;
however, it takes substantial time and computational
resources to do so over a large data set.
As every enterprise has a limited budget of time
or computational resources, it has to annotate
a subset of concepts in a given domain
whose costs of annotation do not exceed the budget.
We call such a subset of concepts a
{\it conceptual design} for the annotated data set.
We focus on finding a conceptual design that provides
the most effective answers to queries over the annotated data set, i.e., a
{\it cost-effective conceptual design}.
Since it is often less time-consuming and costly
to annotate a small number of general concepts,
such as {\it person}, than
a large number of specific concepts,
such as {\it politician} and {\it artist}, we use
information on superclass/subclass relationships
between concepts in taxonomies to find a cost-effective
conceptual design.
We quantify the amount by which
a conceptual design with concepts from a taxonomy
improves the effectiveness of answering queries over an
annotated data set.
If the taxonomy is a tree, we prove that the problem is
NP-hard and propose an efficient approximation algorithm
and an exact pseudo-polynomial time algorithm for
the problem.
We further prove that if the taxonomy is a directed
acyclic graph, given some generally accepted hypothesis,
it is not possible to find any approximation algorithm
with reasonably small approximation ratio or a
pseudo-polynomial algorithm for the problem.
Our empirical study using real-world data sets,
taxonomies, and query workloads shows that our framework
effectively quantifies the amount by which
a conceptual design improves
the effectiveness of answering queries.
It also indicates that our
algorithms are efficient for a
design-time task, with the pseudo-polynomial algorithm
being generally more effective than
the approximation algorithm.
\end{abstract}
\input{introduction.tex}
\input{background.tex}
\input{TreeTaxonomy.tex}
\input{ApproximationAlgorithms.tex}
\input{PseudoPolyAlgorithm.tex}
\input{DAGTaxonomy.tex}
\input{experiments.tex}
\section{Conclusion and Future Work}
\label{section:conclusion}
\label{sec:conclusion}
Annotating entities in large unstructured or
semi-structured data sets improves the effectiveness
of answering queries over these data sets.
It takes significant amounts of
financial and computational resources
and/or manual labor to annotate entities
of a concept. Because an enterprise normally has limited
resources, it has to choose a subset of
affordable concepts in its domain of interest for annotation.
In this paper, we introduced
the problem of cost-effective conceptual design using taxonomies,
where given a taxonomy, one would like to find a
subset of concepts in the taxonomy whose
total cost does not exceed a given budget and improves the
effectiveness of answering queries the most.
We proved the problem is NP-hard and proposed an efficient approximation algorithm, called the \prob{level-wise}
algorithm, and an exact algorithm with
pseudo-polynomial running time for the problem
over tree taxonomies.
We also proved that it is not possible
to find any approximation algorithm with reasonably
small approximation ratio or pseudo-polynomial
time exact algorithm for the problem
when the taxonomy is a directed acyclic graph.
We showed that our formalization framework effectively estimates
the amount by which a design improves the effectiveness of
answering queries through extensive experiments over real-world
datasets, taxonomies, and queries. Our empirical studies
also indicated that our algorithms are efficient
for a design-time task,
with the pseudo-polynomial algorithm delivering more effective
designs in most cases.
\bibliographystyle{abbrv}
\section{Cost-Effective Conceptual Design}
\label{sec:cost-effective-design}
\subsection{Basic Definitions}
Similar to previous works, we do not rigorously define the notion
of named entity \cite{WebDataManage:Abiteboule:11}.
We define a named entity (entity for short) as a unique name in some (possibly infinite) domain.
A concept is a set of entities, i.e., its instances.
Some examples of concepts are {\it person} and {\it country}. An entity of concept
{\it person} is {\it Albert Einstein} and an entity of concept {\it country} is {\it Jordan}.
Concept $C$ is a {\it subclass} of concept
$D$ iff we have $C \subset D$.
In this case, we call $D$ a {\it superclass} of $C$.
For example, {\it person} is a superclass of {\it scientist}.
If an entity belongs to a concept $C$, it also belongs to all
superclasses of $C$.
A taxonomy organizes concepts in a domain of interest \cite{WebDataManage:Abiteboule:11}.
We first investigate the properties of tree-shaped
taxonomies and later in Section~\ref{sec:dag-taxonomy}
we will explore the taxonomies that are
directed acyclic graphs.
Formally, we define {\it taxonomy} $\mX = $ $(R, \mC, \mR)$
as a rooted tree, with {\it root concept} $R$, vertex set $\mC$ and edge set $\mR$.
$\mC$ is a finite set of concepts.
For $C,D\in \mC$ we have $(C,D) \in \mR$ iff
$D$ is a subclass of $C$.
Every concept in $\mC$ that is not a superclass of any other
concept in $\mC$ is a {\it leaf concept}.
The leaf concepts are leaf nodes in taxonomy $\mX$.
For instance, concepts {\it athlete} and {\it artist} are leaf concepts in Figure~\ref{fig:DBpedia}.
Let $\child(C)$ denote the children of concept $C$.
For the sake of simplicity, we assume that
$\cup_{D \in \child(C)} D = C$ for every non-leaf concept $C$ in a
taxonomy.
Each data set is a set of documents.
Data set $DS$ is in the domain of taxonomy $\mX$ iff
some entities of concepts in $\mX$ appear in
some documents in $DS$.
For instance, the set of documents in
Figure~\ref{fig:wikipedia1} are in the domain of
the taxonomy shown in Figure~\ref{fig:DBpedia}.
An entity in $\mX$ may appear in
several documents in a data set.
For brevity, we refer to the occurrences of entities
of a concept in a data set as the occurrences of
the concept in the data set.
A query $q$ over $DS$ is a pair $(C, T)$,
where $C \in \mC$ and $T$ is a set of terms.
Some example queries are
{\it (person, \{Michael Jordan\})} or {\it (location, \{Jordan\})}.
This type of queries has been widely used to
search and explore annotated data sets
\cite{OptimizingAnnotation:Chakrabarti,Chu-Carroll06,KeywordStructuredQuery:Pound}.
Empirical studies on real world query logs indicate that the majority of
entity centric queries refer to a single entity \cite{MoreSenses:Sanderson}.
In this paper, we consider queries that refer to a single entity.
Considering more complex queries that seek information
about relationships between several entities requires
more sophisticated models and algorithms than space permits here;
it is an interesting topic for future work.
\subsection{Conceptual Design}
{\it Conceptual design} $\mS$ over taxonomy $\mX = $ $(R, \mC, \mR)$ is a
non-empty subset of $\mC - \set{R}$. For brevity, in the rest of the paper,
we refer to conceptual design as {\it design}.
A design divides the set of leaf nodes in $\mC$ into some partitions,
which are defined as follows.
\begin{definition}
\label{def:partition}
Let $\mS$ be a design over taxonomy $\mX = (R, \mC, \mR)$, and let $C\in\mS$.
We define the partition of $C$ as a subset of leaf nodes of
$\mC$ with the following property.
A leaf node $D$ is in the partition of $C$ iff $D=C$ or $C$ is the lowest ancestor of $D$ that belongs to $\mS$.
\end{definition}
\noindent
Let function $\part$ map each concept into its partition.
\begin{example}
Consider the taxonomy described in Figure~\ref{fig:annotation}. Let
design $\mS$ be $\{agent, person\}$.
The partitions of $\mS$ are $\{artist,$ $politician,$ $athlete\}$
and $\set{school, legislature}$.
Also, $\part(\text{person})$ = $\{artist,$ $politician,$ $athlete\}$
and $\part(\text{agent})$ = $\{school,$
$legislature\}$.
\end{example}
For each design $\mS$, the set of \emph{leaf concepts} that do not belong to any partition is
called the set of {\it free concepts} and denoted as $\free(\mS)$.
These concepts neither belong to $\mS$ nor are descendants
of a concept in $\mS$.
\begin{figure}[htb]
\centering
\includegraphics[height=1.75in]{annotation}
\caption{The concepts in red, {\it agent} and {\it person}, denote the design. The blue curves denote the partitions created after annotating the design, and the dashed curve shows the free concepts of the selected design.}
\label{fig:annotation}
\end{figure}
\begin{comment}
content...
\begin{figure}
\scriptsize{
\begin{alltt}
<article>
{\bf<person>} John Adams {\bf</person>} has been a former member
of the {\bf<agent>} Ohio House of Representatives {\bf</agent>}
from 2007 to 2014. ...
</article>
<article>
{\bf<person>} John Adams {\bf</person>} is a composer whose music is inspired
by nature, ...
</article>
\end{alltt}
}
\vspace{-0.5cm}
\caption{Wikipedia article excerpts annotated by design \{{\it person, organization}\}}
\label{fig:wikipedia-general-1}
\end{figure}
\end{comment}
\begin{example}
Again consider design $\mS = \{person,$ $agent\}$
over the taxonomy described in Figure~\ref{fig:annotation}.
The free concepts of $\mS$ are $\{state,$ $city\}$, as
they are not in any partition of $\mS$.
\end{example}
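The partitions and free concepts of a design can be computed by a single top-down walk over the taxonomy, as the following sketch shows. The taxonomy literals below are our reconstruction of the example around Figure~\ref{fig:annotation} and may differ from the actual figure.

```python
def partitions_and_free(children, root, design):
    """Assign every leaf concept to the partition of its lowest
    design ancestor (the definition of partition above), or to the
    free set if no such ancestor exists."""
    part = {P: set() for P in design}
    free = set()

    def walk(node, lowest):          # lowest design ancestor seen so far
        if node in design:
            lowest = node
        kids = children.get(node, [])
        if not kids:                 # leaf concept
            (part[lowest] if lowest is not None else free).add(node)
            return
        for k in kids:
            walk(k, lowest)

    walk(root, None)
    return part, free

# Assumed taxonomy shape, modeled on the running example.
children = {
    "root": ["agent", "place"],
    "agent": ["person", "organization"],
    "person": ["artist", "politician", "athlete"],
    "organization": ["school", "legislature"],
    "place": ["state", "city"],
}
part, free = partitions_and_free(children, "root", {"agent", "person"})
```

On this instance, {\it person} captures its three leaf descendants, {\it agent} captures the remaining leaves below it, and {\it state} and {\it city} are free.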
Let $DS$ be a data set in the domain of taxonomy
$\mX =$$(R, \mC, \mR)$ and $\mS$ be a design over $\mX$.
$\mS$ is the design of data set $DS$ iff for all concept $C\in \mS$,
all occurrences of concepts in the partition of $C$ are annotated by $C$.
In this case, we say $DS$ is an {\it instance} of $\mS$.
For example, consider the design
$\mT = $ \{{\it person, organization}\}
over the taxonomy in Figure~\ref{fig:DBpedia}.
The data set in Figure~\ref{fig:wikipedia-general}
is an instance of $\mT$ as all instances of concepts
{\it athlete}, {\it artist} and {\it politician}, that belong
to the partition of {\it person}, are annotated by
{\it person} and all instances of concepts
{\it school} and {\it legislature}, that constitute
the partition of {\it organization},
are annotated by {\it organization} in the data set.
\subsection{Design Queriability}
\label{sec:design-queriability}
Let $\mQ$ be a set of queries over data set $DS$.
Given design $\mS$ over taxonomy
$\mX =$$(R, \mC, \mR)$, we would like to measure the
degree by which $\mS$ improves the effectiveness of
answering queries in $\mQ$ over $DS$.
The value of this function should be larger for the
designs that help the query interface to answer a
larger number of queries in $\mQ$ more effectively.
As most entity-centric information needs are precision-oriented \cite{Classification-Enhanced:Bennett,Chu-Carroll06},
we use the standard metric of
{\it precision at $k$} ($p@k$ for short)
to measure the effectiveness of answering
queries over structured data sets \cite{IRBook}.
The value of $p@k$ is
the fraction of relevant answers in the
top $k$ returned answers for the query.
We average the values of $p@k$
over queries in $\mQ$ to measure the amount of
effectiveness in answering queries in $\mQ$.
The problem of design in order to maximize
other objective functions, such as recall,
is an interesting subject for future work.
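For reference, $p@k$ and its average over a workload can be computed as follows; the function and variable names are ours.

```python
def precision_at_k(returned, relevant, k):
    """Fraction of the top-k returned answers that are relevant."""
    return sum(1 for a in returned[:k] if a in relevant) / k

def mean_precision_at_k(result_lists, relevant_sets, k):
    """Average p@k over a query workload."""
    scores = [precision_at_k(r, rel, k)
              for r, rel in zip(result_lists, relevant_sets)]
    return sum(scores) / len(scores)
```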
Let $Q:(C,T)$ be a query in $\mQ$ such that
$C$ belongs to the partition of $P \in \mS$.
The query interface may consider only the documents that
contain information about entities annotated by
$P$ to answer $Q$.
For instance, consider query $Q_1 = $ {\it (politician, \{John Adams\})}
over the data set fragment in Figure~\ref{fig:wikipedia-general}
whose design is \{{\it person, organization}\}. The query interface
may examine only the entities annotated by {\it person}
in this data set to answer $Q_1$.
Thus, the query interface will avoid
non-relevant results that otherwise may have been placed in
the top $k$ answers for $Q$.
It may further rank them according to
its ranking function, such as the traditional TF-IDF
scoring methods \cite{IRBook}.
Our model is orthogonal to the method used to rank the
candidate answers for the query.
The query interface still has to examine all documents that
contain some mentions of the entities annotated by
concept $P$ to answer $Q:(C,T)$.
Nevertheless, only a fraction
of these documents may contain
information about entities of $C$. For instance,
to answer query {\it (politician, \{John Adams\})}
over the data set fragment in
Figure~\ref{fig:wikipedia-general}, the query interface has to examine all documents that contain instances
of concept {\it person}.
Some documents in this set have matching
entities from concepts other
than {\it politician}, such as John Adams, the artist.
We would like to estimate the fraction of the results for $Q:(C,T)$
that contain a matching entity of concept $C$. Given that all other
conditions are the same, the larger this fraction is, the more
likely it is that the query interface delivers more relevant answers,
and therefore a larger value of $p@k$ for $Q$.
Let $d_{DS}(C)$ denote the fraction of
documents that contain entities of concept $C$ in data set $DS$.
We call $d_{DS}(C)$ the {\it frequency} of $C$ over $DS$.
When $DS$ is clear from the context,
we denote the frequency of $C$ as $d(C)$.
We want to compute the fraction of the returned
answers for query $Q:(C,T)$ that
contain a matching instance of concept $C$.
These entities are annotated by concept
$P$, such that $C$ is in the partition of $P$.
Let $d(P)$ be the total frequency of leaf concepts
in the partition of $P$. The fraction of these documents
that contain information about $C$ is $\frac{d(C)}{d(P)}$.
The larger this fraction is, the
more likely it is that query interface returns more
documents about entities of concept $C$
for query $Q:(C,T)$. Thus, it is more
likely for query interface to return relevant answers
for $Q$ and improve its $p@k$.
For instance, assume that
the mentions of the entities of concept {\it artist}
appear more frequently
in data set $DS$ than the ones of concept {\it politician}.
Also assume that we only annotate {\it person} from $DS$.
Given query {\it (politician, \{John Adams\})},
it is more likely for articles about John Adams, the artist,
to appear in the top-ranked answers than about John Adams, the politician.
We call the fraction of queries in
$\mQ$ whose concept is $C$ the {\it popularity} of $C$ in
$\mQ$. Let $u_{\mQ}$ be the function that maps concept
$C$ to its popularity in $\mQ$.
When $\mQ$ is clear from the context, we simply use $u$ instead of $u_{\mQ}$. The degree of improvement in the value of $p@k$
in answering queries of concept $C$ over $DS$
is proportional to $\frac{u(C)\ d(C)}{d(P)}$.
Hence, the amount of the contribution of queries of the concepts
in partition of $P$ to the value of $p@k$ will be:
$$\sum_{C \in \part(P)} \frac{u(C)\ d(C)}{d(P)}.$$
Given that all other conditions are the same, the larger this value
is, the more likely it is that the query interface will achieve a larger $p@k$ value over queries in $\mQ$.
Annotators, however, may make mistakes in identifying
the correct concepts of entities in a collection \cite{Chu-Carroll06}.
An annotator may recognize some appearances of entities
from concepts that are not $P$ as the
occurrences of entities in $P$.
For instance, the annotator of concept {\it person}
may identify {\it Lincoln}, the movie, as a person.
The {\it accuracy} of annotating concept $P$ over $DS$ is
the number of correct annotations of $P$ divided by
the number of all annotations of $P$ in $DS$.
We denote the accuracy of annotating concept $P$
over $DS$ as $\pr_{DS}(P)$. When $DS$ is clear from the
context, we write $\pr_{DS}(P)$ as $\pr(P)$.
Hence, we refine our estimate to the following.
\begin{equation}
\sum_{C \in \part(P)}\frac{u(C)\ d(C)}{d(P)}\ \pr(P).
\label{eq:queriability2}
\end{equation}
\noindent
Next, we compute the amount of improvement
that $\mS$ provides for queries whose concepts do not belong
to any partition, i.e., free concepts.
If concept $C$ is a free concept with regard to design
$\mS$, the query interface has to examine all documents in the
collection to answer $Q:(C,T)$.
Thus, if $C$ is a free concept,
the fraction of returned answers
for $Q$ that contain a matching instance of
concept $C$ is $d(C)$.
Using equation~\ref{eq:queriability2}, we formally
define the function that estimates the likelihood of
improvement for the
value of $p@k$ for all queries in a query workload
over a data set annotated by design $\mS$.
\begin{definition}
\label{def:queriability}
The queriability of design $\mS$ from taxonomy $\mX$
over data set $DS$ is
\begin{equation}
QU(\mS) = \sum_{P \in \mS} {\sum_{C\in \part(P)} \frac{u(C)\ d(C)\ \pr(P)}{d(P)}}\ + {\sum_{C\in \free(\mS)} u(C)d(C)}.
\label{eq:queriability}
\end{equation}
\noindent
\end{definition}
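As an illustration of Definition~\ref{def:queriability}, the queriability score can be computed directly from the popularity, frequency, and accuracy functions. The following sketch is not part of the original formulation; the dictionary names mirror the paper's symbols ($u$, $d$, $\pr$, $\part$), and all numeric values in the toy instance are invented.

```python
# Hypothetical sketch of the queriability score of Equation (eq:queriability).
# Names mirror the paper's symbols (u, d, pr, part); the toy values are invented.

def queriability(design, partitions, free_concepts, u, d, pr):
    """design: annotated concepts P; partitions: P -> list of concepts in part(P);
    free_concepts: concepts in free(S); u: popularity; d: frequency; pr: accuracy."""
    total = 0.0
    for P in design:
        # Queries on concepts covered by P improve in proportion to pr(P)/d(P).
        total += sum(u[C] * d[C] for C in partitions[P]) * pr[P] / d[P]
    # Free concepts force a scan of the whole collection: contribution u(C) d(C).
    total += sum(u[C] * d[C] for C in free_concepts)
    return total

design = ["person"]
partitions = {"person": ["politician", "artist"]}
free_concepts = ["movie"]
u = {"politician": 0.5, "artist": 0.3, "movie": 0.2}
d = {"person": 0.4, "politician": 0.1, "artist": 0.3, "movie": 0.2}
pr = {"person": 0.9}
qu = queriability(design, partitions, free_concepts, u, d, pr)
```

In this made-up instance, annotating only {\it person} yields $QU \simeq 0.355$, with the free concept {\it movie} contributing $u(C)\,d(C) = 0.04$.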
Similar to other optimization problems in data management, such
as query optimization \cite{DBBook},
the complete information about the
parameters of the objective function, i.e.,
the frequencies and popularities of concepts,
may not be available at design time.
Nevertheless, our empirical results in
Section~\ref{sec:experiment} indicate that one
can effectively estimate these parameters
using a small sample of the full data set.
For instance, we show that
the frequencies of concepts over a collection of more than
a million documents can be effectively estimated using a
sample of about three hundred documents.
\begin{comment}
Similar to the formula for annotation benefit in equation~\ref{eq:AnnotationBenefit},
the first term of the annotation benefit in equation~\ref{eq:AnnotationBenefitNoCons}
reflects the group of queries for which the query interface
returns only the candidate answers with instances matching to
their concepts.
The second term of the annotation benefit in
equation~\ref{eq:AnnotationBenefitNoCons}, however, is different
from the second term in equation~\ref{eq:AnnotationBenefit}
and represents the impact of
the frequency of a concept that is not in $\mS$ on the likelihood
of the precisions of its queries.
Our experimental results
in Section~\ref{opt:sec:sub:objfunc} indicate that in spite of this simplification,
our objective function effectively captures the degree of improvement
delivered by a design over a collection.
This portion of answers will stay in the list of results for
$Q$ after the query interface eliminates all candidate answers
with matching instances from concepts in $\mS$.
Hence, the fraction of the candidate answers that contain a
matching instance of concept $C$ in the list of answers for
a query in $\mQ$ is $\frac{d(C)}{\sum_{E \notin \mS} d(E)}$.
\end{comment}
\subsection{Cost-Effective Design Problem}
Given taxonomy $\mX=$ $(\mC, \mR)$ and data set $DS$
in domain of $\mX$, the
function $w_{DS}: \mC \rightarrow \mathbb{R}^+$ maps each concept
$C$ to a real number that reflects the amount of
resources used to annotate mentions of entities in
$C$ from data set $DS$. When the data set is clear from the
context, we simply denote the cost function as $w$.
The enterprise may predict the costs of
development and maintenance of annotation programs using
available methods for predicting costs of software
development and maintenance \cite{Boehm:SoftwareCost}.
If the cost is running time, the enterprise may use
current methods of estimating the
execution time of concept annotators \cite{OptimizeSQLText:Jain}.
If there is not sufficient information to estimate the costs for concepts,
the enterprise may assume that all concepts are equally costly.
We will show in Sections~\ref{sec:approximation-algorithms},
\ref{sec:pseudo-polynomial}, and \ref{sec:dag-taxonomy} that
finding cost-effective designs is still challenging
in the cases where concepts are equally costly.
Similar to previous works on cost-effective concept
annotation \cite{Termehchy:SIGMOD:14},
we assume that annotating certain concepts
does not affect the cost and accuracies of other concepts.
The reasons behind this assumption are two-fold.
First, it usually takes a significant amount of resources to
develop, execute, and maintain a concept annotator,
even when it is paired with other annotators.
For instance, developers have to discover
a large number of distinct features for each concept
to annotate it accurately.
Second, it may require an exponential
number of cost values to express the relationships
between costs of concepts in a taxonomy,
which is not realistic and makes the problem
extremely complex to express.
However, finding a simplified framework that
can effectively express the problem with
relationships between the costs of annotating different concepts
is an interesting subject for future work.
The cost of annotating a data set under
design $\mS$ is the sum of the costs of the concepts in $\mS$.
Budget $B$ is a positive real number that
represents the amount of available
resources for organizing the data set.
Next, we formally define the problem of Cost-Effective
Conceptual Design ({\it CECD} for short) as follows.
\begin{problem}
Given taxonomy $\mX$, data set $DS$ in the domain of $\mX$,
and budget $B$, we would like to find a design $\mS$ over $\mX$
such that $\sum_{C \in \mS} w(C) \leq B$ and
$\mS$ delivers the maximum queriability over $\mX$.
\end{problem}
\noindent
\iffalse
Extracting concepts may require different types of
resources, such as time and manual labor,
that cannot be aggregated into a single cost metric.
In these cases, the enterprise may solve a different
instance of CECD problem for each type of such resources.
\fi
Unfortunately, the CECD problem cannot be solved in
polynomial time in terms of input size unless $\mathbf{P} = \NP$.
\begin{theorem}
\label{theorem:NPHard}
The problem of CECD is $\NP$-hard.
\end{theorem}
\begin{proof}
The problem of choosing cost-effective concepts from
a set of concepts, which is $\NP$-hard \cite{Termehchy:SIGMOD:14},
reduces to CECD: given a set of concepts, create a
taxonomy $\mX = (R, \mC, \mR)$ in which all
nodes except the root $R$ are leaf concepts, i.e., leaves.
Hence, CECD is $\NP$-hard.
\end{proof}
\noindent
Because CECD is $\NP$-hard, we propose and study
efficient approximation and
pseudo-polynomial algorithms to solve it.
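Because CECD is $\NP$-hard, any exact method is exponential in the worst case. As a hedged illustration of the budgeted-selection structure of the problem (this is a toy baseline, not one of the approximation or pseudo-polynomial algorithms developed later), a brute-force search over all feasible designs of a tiny instance looks as follows; the score function here is a flat stand-in for the full queriability objective.

```python
from itertools import combinations

def best_design_bruteforce(concepts, w, B, score):
    """Exhaustive search over all designs S with total cost <= B, maximizing
    score(S). Exponential in len(concepts); for illustration only."""
    best, best_val = frozenset(), score(frozenset())
    for k in range(1, len(concepts) + 1):
        for S in combinations(concepts, k):
            if sum(w[C] for C in S) <= B:
                v = score(frozenset(S))
                if v > best_val:
                    best, best_val = frozenset(S), v
    return best, best_val

# Toy instance with invented costs and values; budget B = 5.
w = {"a": 2, "b": 3, "c": 4}
val = {"a": 3, "b": 4, "c": 5}
S, v = best_design_bruteforce(["a", "b", "c"], w, 5, lambda S: sum(val[C] for C in S))
```

With these made-up numbers, the budget excludes the single most valuable pair containing {\tt c}, and the optimum is the design $\{{\tt a},{\tt b}\}$ with total value $7$, illustrating the knapsack-like tension between cost and benefit.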
\begin{comment}
Entities from mutually exclusive concepts and
similar names rarely appear in the same document.
For example, the animal and the car named {\it Jaguar} do not usually occur together in
a document. Hence, we assume that the
entities of mutually exclusive concepts and
similar names do not appear in a document.
\end{comment}
\section{Introduction}
\label{sec:intro}
Galaxy clusters, as the largest gravitationally bound objects formed in
the universe, play a fundamental role in our understanding of cosmology
and structure formation. A key ingredient for cluster-based cosmology is
the distribution of dark matter (DM) in and around cluster halos. In
this context, the standard $\Lambda$ cold dark matter ($\Lambda$CDM)
model and its variants, such as self-interacting DM
\citep[SIDM,][]{Spergel+Steinhardt2000} and wave DM
\citep[$\psi$DM,][]{Schive2014psiDM},
provide distinct, observationally testable predictions.
For the case of collisionless DM, high-resolution $N$-body simulations exhibit an
approximately ``universal'' form for the spherically averaged density
profile of halos in gravitational quasi-equilibrium
\citep[][hereafter NFW]{1997ApJ...490..493N},
$\rho(r) \propto (r/r_\mathrm{s})^{-1}(1+r/r_\mathrm{s})^{-2}$,
where $r_\mathrm{s}$ is the characteristic scale radius at which the
logarithmic density slope $d\ln{\rho}/d\ln{r}$ equals $-2$.
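As a quick numerical illustration (not part of the original text), the NFW form and the defining property of $r_\mathrm{s}$ can be verified directly; the normalization and scale radius below are arbitrary.

```python
import math

def rho_nfw(r, rho_s, r_s):
    # NFW profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def log_slope(rho, r, eps=1e-6):
    # Numerical d ln(rho) / d ln(r) via a central difference.
    return (math.log(rho(r * (1 + eps))) - math.log(rho(r * (1 - eps)))) / (2 * eps)

nfw = lambda r: rho_nfw(r, 1.0, 1.0)  # arbitrary units: rho_s = r_s = 1
```

The logarithmic slope runs from $-1$ in the center through $-2$ at $r = r_\mathrm{s}$ to $-3$ at large radii, as the analytic form requires.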
In this context, the halo concentration,
$c_\mathrm{vir}\equiv r_\mathrm{vir}/r_\mathrm{s}$,
is a key quantity that characterizes the structure of a halo (all
relevant symbols are defined in detail at the end of this section).
In the hierarchical $\Lambda$CDM picture of structure formation,
concentration is predicted to correlate with halo mass,
$M_\mathrm{vir}$, because $r_\mathrm{s}$ stays nearly constant
after an early phase of rapid accretion, whereas $r_\mathrm{vir}$ continues to
grow through a mixture of physical accretion and pseudo-evolution
\citep{bullock01profiles, wechsler02haloassembly, cuesta08infall,
zhao09mah, diemer13pe}. As cluster halos are, on average, still
actively growing today, they are expected to have relatively low concentrations,
$c_\mathrm{vir}\sim 4$--$5$
\citep{Bhatt+2013,Dutton+Maccio2014,Diemer+Kravtsov2015}. These general
trends are complicated by the large scatter in halo growth histories,
which translates into a significant diversity in their density profiles
\citep{Ludlow+2013}.
Recently, closer examination of the outer halo density profiles in collisionless
$\Lambda$CDM simulations has revealed systematic deviations from the
universal NFW or \citet{Einasto1965} form
\citep[][hereafter DK14]{Diemer+Kravtsov2014}.
In particular, the profiles exhibit a sharp drop in density,
a feature associated with the last shell that has reached the
apocenter of its first orbit after accreting onto a halo
in spherical collapse models
\citep{gunn72sphericalcollapse, fillmore84, bertschinger85}.
The location of this ``splashback radius,'' $R_\mathrm{sp}^\mathrm{3D}$, is within a factor
of two of $r_\mathrm{200m}$ and depends on the mass accretion rate of halos, with a
secondary dependence on redshift \citep[][]{Diemer+Kravtsov2014,
Adhikari2014, More2015splash, Shi2016, More2016splash, adhikari16, mansfield17}. The splashback
radius constitutes a physically motivated halo boundary because (at least in the
spherical case) material outside $R_\mathrm{sp}^\mathrm{3D}$ is on its first infall into the
halo, whereas material inside of it is orbiting in the halo potential.
\citet{More2016splash} first observed the splashback feature
in stacked galaxy surface density profiles around clusters
(see also \citealt{tully15}; \citealt{patej16}).
Their measured $R_\mathrm{sp}^\mathrm{3D}$ is somewhat smaller than expected from the
numerical calibration of \citet{More2015splash}. This intriguing
disagreement could be due to subtle effects in the analysis,
errors in the numerical calculation,
baryonic physics affecting cluster member galaxies, or hitherto
undetected properties of the DM itself, such as self-interaction.
Given this wide range of possible reasons for the disagreement, other
observational probes are of great interest. In particular, we are
looking for a test that is subject to different systematic uncertainties
than cluster member density profiles, but is still applicable to
high-mass galaxy clusters where the splashback signal is expected to be
strongest \citep[][]{Diemer+Kravtsov2014,Adhikari2014}. One potential
probe that fulfills these requirements is gravitational lensing, because
it measures the total mass profile rather than the distribution of
subhalos, while the signal is strongest in galaxy clusters.
Cluster gravitational lensing offers a well-established method for
testing halo structure, through observations of weak shear lensing
\citep[e.g.,][]{WtG1,WtG3,Gruen2014,Umetsu2014clash,Hoekstra2015CCCP,Okabe+Smith2016,Melchior2016des},
weak magnification lensing
\citep[e.g.,][]{Hildebrandt+2011,Umetsu+2011,Coupon+2013,Ford2014cfhtlens,Jimeno2015,Chiu2016magbias,Ziparo2016locuss},
strong gravitational lensing \citep[e.g.,][]{2005ApJ...621...53B,Zitrin+2009CL0024,Coe+2010,Jauzac2014,Diego2015a1689},
and the combination of all these effects
\citep[e.g.,][]{BTU+05,UB2008,Umetsu+2011stack,Umetsu+2012,Umetsu2015A1689,Umetsu2016clash,Coe+2012A2261,Medezinski+2013}. Over the last decade, cluster lensing observations
\citep{BTU+05,Okabe+2010WL,Okabe+2013,Umetsu+2011stack,Umetsu2016clash,Newman+2013a} have
established that the projected total mass distribution within individual
and stacked clusters is well described by a family of density profiles
predicted for cuspy DM-dominated halos, such as the
NFW \citep{1997ApJ...490..493N}, Einasto \citep{Einasto1965}, and
DARKexp \citep{Hjorth+2010DARKexp,DARKexp2} models.
Subsequent systematic studies targeting lensing-unbiased cluster samples
\citep[e.g.,][]{Okabe+2013,Umetsu2014clash,Umetsu2016clash,Merten2015clash,Du2015}
show that the cluster lensing measurements are also in agreement with the
theoretical $c$--$M$ relation that is calibrated for
recent $\Lambda$CDM cosmologies with a relatively high normalization
\citep{Bhatt+2013,Dutton+Maccio2014,Meneghetti2014clash,Diemer+Kravtsov2015}.
In principle, any feature in the density profiles predicted by
numerical simulations is directly accessible by lensing observations
of a large sample of galaxy clusters
\citep[e.g.,][]{Okabe+2010WL,Okabe+2013,Umetsu2014clash,Umetsu2016clash,Miyatake2016bias}.
In reality, however, two effects make such measurements
difficult. First, projecting the density profile into two dimensions
smooths out features because any given sightline crosses a range of
three-dimensional cluster radii. Second, in order to average lensing
observations of individual clusters, we need to stack their density
profiles using some radial scale. Conventionally, physical length units
are chosen for this rescaling
\citep[e.g.,][]{Okabe+2010WL,Sereno2015s8}. If the cluster sample spans
a wide range of masses, such a stacking procedure is likely to smooth
out the intrinsic density profiles of the clusters, and sharp features
such as the splashback radius in particular. Instead, we wish to rescale
the profiles by a halo radius in units of which the features we are
interested in are universal, i.e. they appear at the same rescaled radius.
Numerical simulations show that this choice of scaling radius is far
from trivial: while the inner profiles ($r\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} r_\mathrm{vir}$) are
most universal with halo radii that scale with the critical density of
the universe (such as $r_\mathrm{200c}$), the outer profiles are most
universal when expressed in units of radii that scale with the mean
cosmic density, such as $r_\mathrm{200m}$
\citep[][]{Diemer+Kravtsov2014,Lau2015}. These predictions have not
hitherto been tested observationally.
In this paper, we develop new methods for scaling and modeling
stacked cluster lensing profiles, and undertake the first investigation
of the splashback radius based on lensing observations. We use the
data presented in \citet[][hereafter U16]{Umetsu2016clash}, who
performed a joint analysis of strong-lensing, weak-lensing shear and
magnification data sets for 20 high-mass clusters targeted in the
Cluster Lensing And Supernova survey with Hubble
\citep[CLASH,][]{Postman+2012CLASH,Umetsu2014clash,Umetsu2016clash,Zitrin2015clash,Merten2015clash}. Their
analysis combines constraints from 16-band {\em Hubble Space Telescope}
({\em HST}) observations and wide-field multicolor imaging taken
primarily with Suprime-Cam on the Subaru telescope. Such a joint
analysis of multiple lensing probes allows us not only to improve the
precision of mass reconstructions, but also to calibrate systematic
errors inherent in each probe \citep{Rozo+Schmidt2010,Umetsu2013}. The
large radial range covered by the combination of weak- and strong-lensing
data allows us to explore a range of scaling overdensities, and to
investigate their impact on the stacked ensemble fit. Thanks to the
improved stacking procedure, we derive tighter constraints on halo
concentration than in U16, and put a lower limit on the splashback
radius of the stacked CLASH density profile.
The paper is organized as follows.
In Section \ref{sec:methodology} we summarize the characteristics
of the CLASH sample and describe the data used in this study.
We then outline our procedure for modeling the
cluster lensing profiles, and test its robustness using synthetic
CLASH weak-lensing data.
In Section \ref{sec:results} we apply our methodology to the CLASH
lensing data and fit them with NFW, Einasto, and DK14 profiles.
We discuss the results in Section \ref{sec:discussion}.
Finally, a summary is given in Section \ref{sec:summary}.
Throughout this paper, we adopt a spatially flat $\Lambda$CDM cosmology
with
$\Omega_\mathrm{m}=0.27$,
$\Omega_\Lambda=0.73$,
and a Hubble constant
$H_0 = 100h$\,km\,s$^{-1}$\,Mpc$^{-1}$
with $h=0.7$.
We denote the mean matter density of the universe as
$\rho_\mathrm{m}$ and the critical density as $\rho_\mathrm{c}$.
We use the standard notation $M_{\Delta_\mathrm{c}}$ or
$M_{\Delta_\mathrm{m}}$
to denote the mass enclosed within a sphere of radius
$r_{\Delta_\mathrm{c}}$ or $r_{\Delta_\mathrm{m}}$,
within which the mean overdensity equals
$\Delta_\mathrm{c} \times \rho_\mathrm{c}(z)$
or
$\Delta_\mathrm{m} \times \rho_\mathrm{m}(z)$
at a particular redshift $z$, such that
$M_{\Delta\mathrm{c}}=(4\pi/3)\Delta_\mathrm{c}\rho_\mathrm{c}(z)r_{\Delta\mathrm{c}}^3$
and
$M_{\Delta\mathrm{m}}=(4\pi/3)\Delta_\mathrm{m}\rho_\mathrm{m}(z)r_{\Delta\mathrm{m}}^3$.
We generally denote three-dimensional cluster radii as $r$, and reserve
the symbol $R$ for projected clustercentric distances. We define the
splashback radius of a three-dimensional density profile, $R_\mathrm{sp}^\mathrm{3D}$, as the
radius where the logarithmic slope of the profile is steepest.
Similarly, we use $R_\mathrm{sp}^\mathrm{2D}$ to denote the splashback
radius derived from the steepest slope of the projected profile.
We compute the virial mass and radius, $M_\mathrm{vir}$ and $r_\mathrm{vir}$,
using an expression for $\Delta_\mathrm{vir}(z)$ based on the
spherical collapse model \citep[Appendix A of][]{1998PASJ...50....1K}.
For a given overdensity $\Delta$, the concentration parameter is
defined as $c_\Delta=r_\Delta/r_\mathrm{s}$.
All quoted errors are $1\sigma$ confidence limits (CL) unless otherwise
stated.
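Given these definitions, converting between $M_\Delta$ and $r_\Delta$ is a one-line inversion of $M_\Delta = (4\pi/3)\,\Delta\,\rho_\mathrm{ref}\,r_\Delta^3$. A minimal sketch, assuming any internally consistent unit system (the numerical values used to exercise it are arbitrary):

```python
import math

def r_delta(M_delta, delta, rho_ref):
    """Invert M = (4 pi / 3) * delta * rho_ref * r^3 for the overdensity
    radius r. rho_ref is rho_c(z) or rho_m(z) depending on whether a
    critical- or mean-density overdensity definition is used."""
    return (3.0 * M_delta / (4.0 * math.pi * delta * rho_ref)) ** (1.0 / 3.0)
```

A round trip through the forward formula recovers the input radius exactly, which is a convenient sanity check when mixing overdensity definitions such as $200\mathrm{c}$ and $200\mathrm{m}$.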
\section{Data and Methodology}
\label{sec:methodology}
\subsection{CLASH Sample and Data}
\label{subsec:data}
\input{table1.tex}
The CLASH survey \citep{Postman+2012CLASH} is a 524-orbit {\em HST}
Multi-Cycle Treasury program targeting 25 high-mass galaxy clusters.
Of these, 20 CLASH clusters were selected to be X-ray hot ($T_X>5$\,keV)
and to have a high degree of regularity in their X-ray morphology,
with no lensing information used a priori.
Another subset of five clusters were selected by their
high-magnification properties. These high-magnification clusters often
turn out to be complex, massive merging systems
\citep[e.g.,][]{Zitrin+2013M0416,Medezinski+2013}.
A complete definition of the CLASH sample is given in
\citet{Postman+2012CLASH}.
In this work, we shall focus on the analysis of the X-ray-selected
subsample to simplify the interpretation of our results. Numerical
simulations suggest that this subsample is mostly composed of relaxed
clusters ($\sim 70\%$) and largely free of orientation bias
\citep{Meneghetti2014clash}. Specifically, we use a lensing-unbiased
subset of 16 CLASH X-ray-selected clusters taken from
U16, who performed a comprehensive analysis of the
strong-lensing, weak-lensing shear and magnification data.
Our cluster sample lies in the redshift range
$0.19\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z_\mathrm{l}\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.69$ and over a mass range
$5\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} M_\mathrm{vir}/(10^{14}h^{-1}\,M_\odot)\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}
30$
(Table \ref{tab:sample}),
spanning a factor of $\sim 6$ in halo mass $M_\Delta$, or a factor of
$\sim 1.8$ in $r_\Delta\propto M_\Delta^{1/3}$.
A full description of the data used in this work is given by
U16. Here, we provide only a brief summary of the
most relevant aspects of the lensing reconstructions.
The U16 analysis
uses the cluster lensing mass inversion (\textsc{clumi}) code developed
by \citet{Umetsu+2011} and \citet{Umetsu2013}, in which lensing
constraints are combined a posteriori in the form of azimuthally
averaged radial profiles. U16 used constraints spanning the radial range
$10\arcsec$--$960\arcsec$ obtained from 16-band {\em HST} observations
\citep{Zitrin2015clash} and wide-field multicolor imaging
\citep{Umetsu2014clash} taken primarily with Subaru/Suprime-Cam.
The position of the brightest cluster galaxy (BCG) is adopted as the
cluster center (Table 1 in U16).
\citet{Umetsu2014clash} obtained weak-lensing shear and magnification
measurements in 10 logarithmically spaced radial bins ($N_\mathrm{WL}=10$) over the
range $0.9\arcmin$--$16\arcmin$ for all clusters observed with Subaru,
and $0.9\arcmin$--$14\arcmin$ for RX~J2248.7$-$4431 observed with ESO/WFI.
\citet{Zitrin2015clash} constructed detailed mass models for each
cluster core from a joint analysis of {\em HST} strong and weak-shear
lensing data.
U16 constructed enclosed projected mass constraints
for a set of four equally-spaced integration radii
($10\arcsec$--$40\arcsec$, $N_\mathrm{SL}=4$)
from the {\em HST} lensing analysis of \citet{Zitrin2015clash}.
U16 combined these full lensing constraints for
individual clusters in their joint likelihood analysis to reconstruct
binned surface mass density profiles
$\bSigma=\{\Sigma(R_i)\}_{i=1}^{N_\mathrm{bin}}$
measured in a set of clustercentric radial bins,
$\mbox{\boldmath $R$}=\{R_i\}_{i=1}^{N_\mathrm{bin}}$ with
$N_\mathrm{bin}=N_\mathrm{SL}+N_\mathrm{WL}+1$. We have
$N_\mathrm{bin}=15$ bins for all clusters,
except $N_\mathrm{bin}=11$ for RX\,J1532.9$+$3021 with $N_\mathrm{SL}=0$,
for which no secure identification of multiple images was made
\citep{Zitrin2015clash}. The $\bSigma$ profiles used in this work are
shown in Figure 11 of U16.
U16 accounted for various sources of errors.
Their analysis includes four terms in the total covariance matrix
$C_{ij}$ of the $\bSigma$ profile,
\begin{equation}
\label{eq:Ctot}
C = C^\mathrm{stat} + C^\mathrm{sys} + C^\mathrm{lss} + C^\mathrm{int},
\end{equation}
where
$C^\mathrm{stat}$ represents statistical observational errors,
$C^\mathrm{sys}$ contains systematic uncertainties due primarily to
the residual mass-sheet uncertainty \citep{Umetsu2014clash},
$C^\mathrm{lss}$ is the cosmic-noise covariance matrix due to
projected uncorrelated large-scale structures
\citep{2003MNRAS.339.1155H,Umetsu+2011stack},
and
$C^\mathrm{int}$ accounts for the intrinsic variations of the projected
cluster lensing signal at fixed mass due to variations in halo
concentration, cluster asphericity, and the presence of correlated halos
\citep{Gruen2015}.
Overall, the reconstruction uncertainty is dominated by the
$C^\mathrm{stat}$ term (Figure 1 of U16).
The relative contribution from the $C^\mathrm{int}$ term becomes
increasingly important at small cluster radii, especially at
$\theta\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2\arcmin$.
The impact of the $C^\mathrm{lss}$ term is most important at large
cluster radii, where the cluster signal is small.
Table \ref{tab:sample} summarizes CLASH lensing determinations of the
mass and concentration parameters $(M_\Delta,c_\Delta)$
for our 16 clusters based on the full lensing
analysis of U16.
These values were obtained from spherical NFW fits to individual cluster
$\bSigma$ profiles using the total covariance matrix $C$ (Equation
(\ref{eq:Ctot})),
restricting the fitting range to $R\le 2h^{-1}\,\mathrm{Mpc}$
($\sim 2r_\mathrm{500c}\sim r_\mathrm{200m}$
for the CLASH sample) to avoid systematic effects
\citep{Becker+Kravtsov2011,Meneghetti2014clash}.
In Table \ref{tab:sample}, we also list effective overdensity masses
$M_\Delta^\mathrm{eff}$ of the sample,
which were obtained by U16 from a spherical NFW fit
to the stacked surface mass density profile of the 16 CLASH clusters.
The stacked ensemble has an effective halo mass
$M_\mathrm{vir}^\mathrm{eff}=(11.99\pm 0.93)\times 10^{14}h^{-1}\,M_\odot$ and
an effective halo concentration
$c_\mathrm{vir}^\mathrm{eff}=4.69\pm 0.35$
($c_\mathrm{200c}^\mathrm{eff}=3.76\pm 0.28$; see Table \ref{tab:sample}),
and lies at a sensitivity-weighted average redshift of
$z_\mathrm{l}^\mathrm{eff}=0.337$, close to the median
redshift of $\overline{z}_\mathrm{l} = 0.352$.
U16 quantified potential sources of
systematic uncertainty in their mass calibration (see their Section 7.1),
such as the effect of dilution of the weak-lensing signal by cluster
members (2.4\%), photometric-redshift bias (0.27\%),
shear calibration uncertainty (5\%),
and
projection effects of prolate halos (3\%).
Combining them in quadrature, the total systematic uncertainty in the
absolute mass calibration was estimated to be $\simeq 6\%$.
This is in close agreement with the value $\sim 8\%$ empirically
estimated from the shear-magnification consistency test of
\citet{Umetsu2014clash}.
Another potential source of systematic errors is smoothing of the
central lensing signal from miscentering effects
\citep{Johnston+2007a,Umetsu+2011stack,Du+Fan2014}.
On average, the sample exhibits a small positional offset between the
BCG and the X-ray peak, characterized by an rms offset of
$\sigma_\mathrm{off}\simeq 11h^{-1}\,\mathrm{kpc}$ \citep[][]{Umetsu2014clash,Umetsu2016clash},
which is much smaller than the typical resolution limit of our {\em HST}
lensing data ($\theta_\mathrm{min}=10\arcsec$, corresponding to
$\simeq 35h^{-1}\,\mathrm{kpc}$ at $\overline{z}_\mathrm{l}=0.35$).
Hence, the miscentering effects are not expected to significantly affect
our ensemble lensing analysis.
\subsection{Radial Scaling of the Profiles}
\label{subsec:scaling}
One of our main goals in this work is to investigate how the radial
scaling of stacked profiles influences the fit results. Ideally, one
would like to stack profiles as a function of the exact halo radius
where a particular feature is expected, e.g. the NFW scale radius or the
splashback radius. However, since their locations are {\em a priori}
unknown,
we need to resort to an alternative halo radius that can be measured for
individual clusters, generally a spherical overdensity radius
$r_\Delta$. Now, the goal is to choose a definition in which the
location of the feature in question is {\em universal}, i.e. independent
of halo mass, and possibly of redshift.
Unfortunately, there is no guarantee that any one definition will be
ideal for multiple features. In fact, DK14 discovered that halo
density profiles in $N$-body simulations prefer different scaling radii
in different regions of the profile: while the inner profiles (and thus
concentrations) are most universal when expressed in units of halo radii
that scale with the critical density $\rho_\mathrm{c}(z)$ of the
universe, such as $r_\mathrm{200c}$, the outer profiles (and thus the
splashback radius) are most universal in units of halo radii that scale
with the mean cosmic density $\rho_\mathrm{m}(z)$, such as
$r_\mathrm{200m}$. These scalings were confirmed in hydrodynamical
simulations of galaxy clusters \citep[][see also \citealt{Shi2016}]{Lau2015}. As we wish to investigate
both the inner and outer profiles, we repeat our analysis using a number
of different scalings covering a range between $2500\rho_\mathrm{c}$ and
$200\rho_\mathrm{m}\simeq 94\rho_\mathrm{c}$ at
$z_\mathrm{l}^\mathrm{eff}= 0.337$.
To construct the scaled surface mass density profiles for the CLASH
sample, we use our full lensing constraints on the NFW
parameters $M_\Delta$ and $c_\Delta$ of each individual cluster as obtained by U16 (Section
\ref{subsec:data}; see Table \ref{tab:sample}),
and normalize their observed $\Sigma(R)$ profiles to a given overdensity
$\Delta$ of interest.
We stress that we do not rely on scaling relations
(e.g., the $c$--$M$ relation), but use observational lensing constraints
on $r_\Delta$ and $\Sigma(r_\Delta)$ for each cluster.
The high-quality, multiscale weak- and strong-lensing
constraints from the CLASH survey enable us to explore the wide range of
overdensities listed above.
We choose the NFW model for the scaling because recent cluster lensing
observations show that the ``projected total'' matter distribution in
the intracluster region ($R\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} r_\mathrm{200m}$) is in excellent
agreement with the NFW form
\citep[][]{Umetsu+2011stack,Umetsu2014clash,Umetsu2016clash,Silva+2013,Newman+2013a,Okabe+2013,Niikura2015,Okabe+Smith2016},
as predicted for collisionless DM-dominated halos in $N$-body
cosmological simulations \citep{Oguri+Hamana2011,Meneghetti2014clash}. We
demonstrate the consistency of this choice with our results in Section \ref{sec:results}.
\subsection{Fitting Functions}
\label{subsec:DK14}
The second goal of this work (besides investigating radial
scalings) is to constrain the splashback radius of the CLASH cluster
sample. While NFW and Einasto fitting functions were sufficient for the
radial rescaling, they describe only the 1-halo term and do not take the
steepening due to the splashback radius into account. Thus, we also fit
the cluster lensing profiles with the more flexible fitting function of
DK14 which was calibrated to a suite of $\Lambda$CDM $N$-body
simulations.
We emphasize that the quality of the data is insufficient to distinguish
among the different profile models, which all describe the data very
well (see Section \ref{subsec:fit_quality}).
Nevertheless, in order to constrain the location of the splashback
radius, we {\em assume} that the $\Lambda$CDM simulations of DK14
describe real cluster halos and use the DK14 profile as a
fitting function in conjunction with generic priors.
Furthermore, we note that the ``true'' location of the splashback radius
is not strictly equivalent to a particular location in the spherically
averaged density profile. However, we follow \citet{More2015splash} in
defining $R_\mathrm{sp}^\mathrm{3D}$ as the radius where the logarithmic slope of the
three-dimensional density profile is steepest. According to this
definition, $R_\mathrm{sp}^\mathrm{3D}$ is expected to lie within a factor of two of
$r_\mathrm{200m}$ \citep{More2015splash}.
Furthermore, the steepest slope would need to be steeper than that
expected from the sum of the Einasto profile and the 2-halo term at
large scales if a detection were to be claimed \citep{More2016splash}.
The DK14 fitting formula is described by eight parameters, and is
sufficiently flexible to reproduce a range of fitting functions for the
DM density profile, such as the halo model
\citep{Oguri+Hamana2011,Hikage2013}. We use the publicly available code
\textsc{colossus}
\citep{colossus} for many of the calculations relating to density
profiles. The DK14 model is given by
\begin{equation}
\label{eq:DK14}
\begin{aligned}
\Delta\rho(r) &= \rho(r)-\rho_\mathrm{m} =\rho_\mathrm{inner}\times f_\mathrm{trans}+\rho_\mathrm{outer},\\
\rho_\mathrm{inner} &= \rho_\mathrm{Einasto} = \rho_\mathrm{s}
\exp\left\{-\frac{2}{\alpha}\left[
\left(\frac{r}{r_\mathrm{s}}\right)^\alpha-1\right]\right\},\\
f_\mathrm{trans} &= \left[1+
\left(\frac{r}{r_\mathrm{t}}\right)^\beta\right]^{-\frac{\gamma}{\beta}},\\
\rho_\mathrm{outer}&=\frac{b_\mathrm{e}\rho_\mathrm{m}}{\Delta_\mathrm{max}^{-1}
+ (r/r_\mathrm{piv})^{s_\mathrm{e}}},\\
\end{aligned}
\end{equation}
with
$r_\mathrm{piv}=5r_\mathrm{200m}$ and
$\Delta_\mathrm{max} = 10^3$.
Here $\Delta_\mathrm{max}$ has been introduced as a maximum cutoff
density of the outer term to avoid a spurious contribution at small halo
radii \citep{colossus}.
The Einasto profile $\rho_\mathrm{Einasto}$ describes the intracluster
mass distribution, with the shape parameter $\alpha$ describing the
degree of profile curvature and $r_\mathrm{s}$ the scale radius at which
the logarithmic slope is $-2$.
The transition term $f_\mathrm{trans}$ characterizes the steepening
around a truncation radius, $r_\mathrm{t}$.
The outer term $\rho_\mathrm{outer}$, given by a softened power law, is
responsible for the correlated matter distribution around clusters, also
known as the 2-halo term, $\rho_\mathrm{2h}$.
DK14 found that this fitting function provides a precise description
($\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 5\%$) of their simulated DM density profiles at
$r\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 9r_\mathrm{vir}$.
At larger radii, the outer term is expected to follow a shape
proportional to the matter correlation function
\citep[e.g.,][]{Oguri+Takada2011}.
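For concreteness, Equation (\ref{eq:DK14}) can be transcribed directly into code. The following is a minimal sketch with arbitrary illustrative parameter values, independent of the \textsc{colossus} implementation used in our analysis:

```python
import numpy as np

def dk14_delta_rho(r, rho_s, r_s, alpha, r_t, beta, gamma,
                   b_e, s_e, rho_m, r_200m, delta_max=1e3):
    """DK14 profile Delta-rho(r) = rho(r) - rho_m.

    All radii share one (arbitrary) length unit; densities share the
    unit of rho_s. Parameter names follow the text.
    """
    r = np.asarray(r, dtype=float)
    # Einasto inner profile
    rho_inner = rho_s * np.exp(-2.0 / alpha * ((r / r_s) ** alpha - 1.0))
    # Transition (steepening) term
    f_trans = (1.0 + (r / r_t) ** beta) ** (-gamma / beta)
    # Softened power-law outer term with maximum cutoff delta_max
    r_piv = 5.0 * r_200m
    rho_outer = b_e * rho_m / (1.0 / delta_max + (r / r_piv) ** s_e)
    return rho_inner * f_trans + rho_outer
```

The cutoff $\Delta_\mathrm{max}$ enters only through the softening term in the denominator, so the outer contribution saturates at $b_\mathrm{e}\rho_\mathrm{m}\Delta_\mathrm{max}$ at small radii.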
To scale out the mass dependence of the density profile, we use an arbitrary
overdensity radius $r_\Delta$ as a pivot radius, and define
a scaled version of $\Delta\rho(r)$ as a function of $r=r_\Delta x$, namely
\begin{equation}
\begin{aligned}
\Delta\rho(r)
\propto&
\exp\left[ -\frac{2}{\alpha}c_\Delta^\alpha(x^\alpha-1) \right]
\left[1+ \left(\frac{x}{\tau_\Delta}\right)^\beta\right]^{-\frac{\gamma}{\beta}}\\
&+ \frac{B_\Delta}{\epsilon_{\Delta} + x^{s_\mathrm{e}}} \equiv f_{\Delta}(x),
\end{aligned}
\end{equation}
with $\epsilon_{\Delta}=\Delta_\mathrm{max}^{-1}\times(5r_\mathrm{200m}/r_\Delta)^{s_\mathrm{e}}$.\footnote{For $\Delta=\mathrm{200m}$, $\epsilon_\Delta$ is constant for all clusters,
$\epsilon_\mathrm{200m}\simeq0.11$ at $s_\mathrm{e}=1.5$; otherwise, the
actual value of $\epsilon_\Delta$ depends on the ratio
$r_\mathrm{200m}/r_\Delta$ of each individual cluster.}
This scaled DK14 model is described by a set of seven dimensionless parameters,
\begin{equation}
\mbox{\boldmath $p$}=\{c_\Delta,\alpha,\tau_\Delta,B_\Delta,s_\mathrm{e},\beta,\gamma\},
\end{equation}
namely the halo concentration $c_\Delta=r_\Delta/r_\mathrm{s}$,
the Einasto shape parameter $\alpha$,
the dimensionless truncation radius
$\tau_\Delta=r_\mathrm{t}/r_\Delta$,
the relative normalization of the outer term $B_\Delta\propto b_\mathrm{e}$,
and three additional shape parameters ($s_\mathrm{e},\beta,\gamma$) for
the transition and outer terms. The model reduces to the Einasto model
(specified by $c_\Delta$ and $\alpha$)
when $f_\mathrm{trans}=1$ ($\tau_\Delta\to \infty$) and
$\rho_\mathrm{outer}=0$ ($B_\Delta=0$). We refer the reader to Appendix \ref{appendix:dk14}
for a more detailed description of the scaled DK14 profile function.
Similarly, we define a scaled version of the NFW profile,
$\Delta\rho(r)\propto [c_\Delta x (1+c_\Delta x)^2]^{-1}$ with $r=r_\Delta x$,
described by the concentration parameter $c_\Delta$.
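The scaled profile $f_\Delta(x)$ is equally direct to evaluate. In this sketch, the default shape parameters and $\epsilon_\Delta\simeq 0.11$ correspond to the $\Delta=\mathrm{200m}$, $s_\mathrm{e}=1.5$ case; all numerical values are illustrative:

```python
import numpy as np

def f_delta(x, c, alpha, tau, B, s_e=1.5, beta=6.0, gamma=4.0, eps=0.11):
    """Scaled DK14 profile f_Delta(x), with x = r / r_Delta.

    eps = Delta_max^{-1} (5 r_200m / r_Delta)^{s_e}; for Delta = 200m
    and s_e = 1.5, eps ~ 0.11 for all clusters.
    """
    x = np.asarray(x, dtype=float)
    einasto = np.exp(-2.0 / alpha * c ** alpha * (x ** alpha - 1.0))
    f_trans = (1.0 + (x / tau) ** beta) ** (-gamma / beta)
    outer = B / (eps + x ** s_e)
    return einasto * f_trans + outer
```

With $B_\Delta=0$ and $\tau_\Delta\to\infty$ this reduces to the scaled Einasto profile, normalized to unity at $x=1$.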
By projecting $\Delta\rho(r)$ along the line of sight, we derive the
scaled surface mass density, which is a lensing observable,
\begin{equation}
\label{eq:ydef}
y_\Delta(x) := \frac{\Sigma(R=r_\mathrm{\Delta}x)}{\Sigma(r_\mathrm{\Delta})},
\end{equation}
normalized as $y_\Delta(x)=1$ at $x=1$, where
\begin{equation}
\Sigma(R)
= 2\int_{R}^\infty \frac{\Delta\rho(r)rdr}{\sqrt{r^2-R^2}}.
\end{equation}
The projected $y_\Delta(x)$ profile is modeled in terms of the scaled
density profile $f_{\Delta}$ as
\begin{equation}
\begin{aligned}
y_\Delta(x|\mbox{\boldmath $p$}) = \left(\int_x^\infty\!\frac{f_{\Delta}(\xi|\mbox{\boldmath $p$})\xi
d\xi}{\sqrt{\xi^2-x^2}}\right)
\left(\int_1^\infty\!\frac{f_{\Delta}(\xi|\mbox{\boldmath $p$})\xi
d\xi}{\sqrt{\xi^2-1}}\right)^{-1}.
\end{aligned}
\end{equation}
We note that it is straightforward to generalize our approach to
shear-only weak-lensing observations where the {\em differential}
surface mass density $\Delta\Sigma(R)=\Sigma(<R)-\Sigma(R)$ is a direct
observable in the weak-lensing limit \citep[e.g.,][]{2001PhR...340..291B}.
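The line-of-sight projection above is an Abel-type integral with an inverse-square-root singularity at $\xi=x$; substituting $\xi=\sqrt{x^2+t^2}$ removes it. A minimal numerical sketch, using the scaled NFW profile as input for simplicity (the analysis itself uses the scaled DK14 form):

```python
import numpy as np
from scipy.integrate import quad

def f_nfw(x, c):
    """Scaled NFW profile: Delta-rho proportional to [c x (1 + c x)^2]^{-1}."""
    return 1.0 / (c * x * (1.0 + c * x) ** 2)

def y_delta(x, c, t_max=1e3):
    """Scaled surface mass density y_Delta(x) = Sigma(r_Delta x) / Sigma(r_Delta).

    The substitution xi = sqrt(x^2 + t^2) maps the Abel integral
    int_x^inf f(xi) xi dxi / sqrt(xi^2 - x^2) to
    int_0^inf f(sqrt(x^2 + t^2)) dt, removing the singularity at xi = x.
    The upper limit t_max truncates the rapidly converging tail.
    """
    def sigma(x0):
        val, _ = quad(lambda t: f_nfw(np.sqrt(x0 ** 2 + t ** 2), c), 0.0, t_max)
        return val
    return sigma(x) / sigma(1.0)
```

By construction $y_\Delta(1)=1$, matching the normalization in Equation (\ref{eq:ydef}).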
\subsection{Parameter Inference}
\label{subsec:bayesian}
We use a Bayesian approach to infer the shape and structural parameters
of the mass distribution of the CLASH sample.
We restrict the optimization to ($c_\Delta, \alpha, \tau_\Delta, B_\Delta$),
the primary model parameters that describe the scaled DK14 model, and
marginalize over the remaining parameters, $(s_\mathrm{e},\beta,\gamma)$,
using priors based on the $N$-body simulations of DK14.
In particular, following \citet{More2016splash}, we adopt
Gaussian priors of
$\log_{10}\beta=\log_{10}{6}\pm 0.2$ and
$\log_{10}\gamma=\log_{10}{4}\pm 0.2$\footnote{This corrects
typographical errors in Section 2 of \citet{More2016splash}.
See also Section 3.1 of \citet{More2015splash}.},
allowing a wide, representative range of values.
We assume a Gaussian prior on $s_\mathrm{e}$ of $1.5\pm 0.1$, centered
around the value found by DK14.
We use unconstraining flat priors on $(c_\Delta, \alpha, B_\Delta)$.
For $\tau_\Delta$, we assume $\tau_\mathrm{200m}\in [0,5]$, where the
upper bound corresponds approximately to
$r_\mathrm{t}=5r_\mathrm{200m}\simeq 10h^{-1}\,\mathrm{Mpc}$ for the CLASH
sample (Table \ref{tab:sample}),
which is larger than the maximum data radius of
$\sim 5h^{-1}\,\mathrm{Mpc}$.
We translate this prior to a given overdensity $\Delta$ using the
effective $r_\Delta$ radius of the sample (Section \ref{subsec:data} and
Table \ref{tab:sample}) as
$\tau_\Delta\in[0, 5 (r_\mathrm{200m}^\mathrm{eff}/r_\Delta^\mathrm{eff})]$.
We use a Gaussian log-likelihood
$-\ln{\cal L}(\mbox{\boldmath $p$})=\chi^2(\mbox{\boldmath $p$})/2+\mathrm{const.}$ with a $\chi^2$
function given by
\begin{equation}
\begin{aligned}
\chi^2 = \sum_{n=1}^{N_\mathrm{halo}}\sum_{i,j=1}^{N_\mathrm{bin}}&
\left[y_{\Delta,i}^{(n)}-y_\Delta(x^{(n)}_i|\mbox{\boldmath $p$})\right]
\left[C^{(n)}_{\Delta}\right]^{-1}_{ij}\\
\times& \left[y_{\Delta,j}^{(n)}-y_\Delta(x^{(n)}_j|\mbox{\boldmath $p$})\right],
\end{aligned}
\end{equation}
where
$\mbox{\boldmath $y$}_\Delta=\{y_{\Delta,i}\}_{i=1}^{N_\mathrm{bin}}$
is the scaled data vector (see Equation (\ref{eq:ydef})) for each cluster
sampled at $\mbox{\boldmath $x$}=\{x_i\}_{i=1}^{N_\mathrm{bin}}=\{R_i/r_\Delta\}_{i=1}^{N_\mathrm{bin}}$,
$C_{\Delta}$ is the total covariance matrix of the scaled data
$\mbox{\boldmath $y$}_\Delta$,
and $y_\Delta(x_i|\mbox{\boldmath $p$})$ represents the model for $y_{\Delta,i}$
with parameters $\mbox{\boldmath $p$}$.
Combining all 16 clusters in our sample, we have a total of $236$ data
points (Section \ref{subsec:data}).
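For a single cluster, the quadratic form entering the $\chi^2$ function can be sketched as follows; the full likelihood sums this over all 16 clusters, and solving the linear system avoids an explicit inversion of the covariance matrix:

```python
import numpy as np

def chi2_single(y_data, y_model, cov):
    """Gaussian chi^2 term for one cluster: d^T C^{-1} d, d = y_data - y_model."""
    d = np.asarray(y_data, dtype=float) - np.asarray(y_model, dtype=float)
    # np.linalg.solve(cov, d) computes C^{-1} d without forming C^{-1}
    return float(d @ np.linalg.solve(cov, d))
```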
We use a Markov chain Monte Carlo (MCMC) approach with
Metropolis--Hastings sampling to sample from the posterior distribution
of the parameters,
$(c_\Delta,\alpha,\tau_\Delta,B_\Delta,s_\mathrm{e},\log_{10}\beta,\log_{10}\gamma)$,
given the data and the priors stated above.
We largely follow the sampling procedure of \citet{Dunkley+2005} but
employ the Gelman--Rubin statistic \citep{Gelman+Rubin1992}
as a convergence criterion of the generated chains. Once convergence to
a stationary distribution is achieved, we run a long, final chain of
$10^5$ sampled points, which is used for our parameter estimation and
error analysis.
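For reference, the Gelman--Rubin statistic compares the between-chain variance $B$ with the within-chain variance $W$ of each parameter, with values close to unity indicating convergence. A common form of the computation (a sketch; the convergence threshold, e.g. $\hat{R}<1.1$, is an implementation choice not specified in the text):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for one parameter.

    chains: array of shape (m, n) -- m chains with n samples each.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # Between-chain variance B and mean within-chain variance W
    B = n * chain_means.var(ddof=1)
    W = chains.var(axis=1, ddof=1).mean()
    # Pooled estimate of the posterior variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)
```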
We also use the \textsc{minuit} package from the CERN program libraries
to find the global maximum a posteriori estimate of the joint
probability distribution. This procedure allows a further refinement of
the {\em best-fit} solution with respect to the one obtained from the
MCMC sampling \citep[see discussions
in][]{Planck2014XVI,Penna-Lima2016}.
\subsection{Tests with Synthetic Data}
\label{subsec:mock}
\begin{figure*}[!htb]
\begin{center}
$
\begin{array}
{c@{\hspace{.3in}}c}
\includegraphics[scale=0.45, angle=0,clip]{Figs/f1a.pdf} &
\includegraphics[scale=0.45, angle=0,clip]{Figs/f1b.pdf}\\
\includegraphics[scale=0.45, angle=0,clip]{Figs/f1c.pdf} &
\includegraphics[scale=0.45, angle=0,clip]{Figs/f1d.pdf}
\end{array}
$
\end{center}
\caption{\label{fig:mock}
Test of our forward-modeling method. We create 50 realizations of
CLASH-like weak-lensing data by computing synthetic shear and
magnification catalogs from analytically modeled cluster lenses.
We simulate two configurations of the outer density profile,
one without a splashback feature (top panels) and one with a
splashback-like feature (bottom panels, see Appendix
\ref{appendix:mock} for details on the lens models).
The left panels show the scaled surface mass density
$\Sigma(R)/\Sigma(r_\mathrm{200m})$ of the synthetic observations, where
the black solid line represents the noise-free, sensitivity-weighted
profile of the 16 clusters in the synthetic sample.
The blue shaded region indicates the $1\sigma$ bounds on the ensemble
of DK14 fits to the synthetic data.
The gray lines show scaled $\Sigma(R)$ profiles of individual
clusters reconstructed from each particular source realization.
The right panels show the logarithmic slope of the three-dimensional
DK14 profiles, $d\ln\Delta\rho(r)/d\ln{r}$, as a function of
$r/r_\mathrm{200m}$. For the lens model with a splashback-like feature,
the range from the 16th to the 84th percentile of $R_\mathrm{sp}^\mathrm{3D}$
(gray vertical shaded area)
inferred from the synthetic data is consistent with the
sensitivity-weighted expectation value of the sample
(red vertical line), $\langle R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}\rangle\simeq 0.87$.
In each panel, the blue dotted lines indicate the errors scaled to
match the overall CLASH weak-lensing sensitivity.
This test demonstrates the robustness of our fitting method, and in
particular that fitting the DK14 profile does not introduce a spurious
splashback feature. See Section \ref{subsec:mock} and Appendix
\ref{appendix:mock} for details.
}
\end{figure*}
\input{table2.tex}
Given the simulation results of DK14, we expect the signature of the
splashback radius in lensing data to be weak. Thus, one concern is that
fitting with the DK14 profile function might introduce a spurious
``detection'' of a splashback feature due to systematics or overfitting
in the presence of noise. In order to address this potential issue, we
have tested our procedure (including data analysis, mass reconstruction,
stacking, and the fitting process) on simulated lensing data.
We focus on the recovery of
the lensing signal in the noisy outer regions around $r_\mathrm{200m}$
where the splashback feature is expected. Hence, we consider only the
wide-field weak-lensing observables, namely the shear and magnification
effects in the subcritical regime. To this end, we create 50 source
realizations of synthetic shear and
magnification catalogs for our 16 CLASH clusters,
each modeled as an NFW halo specified by its redshift $z_\mathrm{l}$
(Table \ref{tab:sample}) and ($M_\mathrm{200c},c_\mathrm{200c}$)
parameters fixed to the observed central values (Table 2
of U16).
For each NFW cluster, we consider two configurations of the outer density
profile, one with and one without a splashback-like feature. For technical
reasons, we do not use a DK14 profile for the synthetic cluster lenses
and instead substitute a profile that introduces a similar density
drop (see Appendix \ref{appendix:mock} for details).
The results of applying our methods to the synthetic weak-lensing data
are summarized in Figure \ref{fig:mock}. The lower and upper panels
correspond to the simulations with and without a splashback-like feature,
respectively. The blue shaded region in the left panels shows the mean
and standard deviation of the best-fit DK14 profiles inferred from 50
realizations of the synthetic data for each configuration.
On average, this fit is in excellent agreement with the noise-free,
sensitivity-weighted averaged input profile (black solid curve).
In the right panels, we show the corresponding logarithmic
density slope $d\ln\Delta\rho(r)/d\ln{r}$.
For the model with a splashback-like feature, the range from
the 16th to the 84th percentile of $R_\mathrm{sp}^\mathrm{3D}$
($0.74 \le R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m} \le 1.27$; vertical shaded area) inferred
from the synthetic data is consistent with the sensitivity-weighted
expectation value of the sample,
$\langle R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}\rangle\simeq 0.87$.
In Figure \ref{fig:mock}, we also indicate the errors scaled to the
CLASH weak-lensing sensitivity (dotted lines). The errors are computed
by matching the synthetic (NFW) to the observed (CLASH) total
signal-to-noise ratio (S/N) of the $\bSigma$ profiles, where
$(\mathrm{S/N})^2=\sum_{n=1}^{N_\mathrm{halo}}\left(\bSigma^t C^{-1}\bSigma\right)_n$.
This comparison suggests that the overall uncertainty in our synthetic
observations is underestimated by $\simeq 36\%$.\footnote{The
underestimation is partly due to the assumed ellipticity dispersion of
source galaxies, $\sigma_g=0.3$ (Appendix \ref{appendix:mock}).
This is $\simeq 29\%$ lower than the
observed value, $\sigma_g\simeq 0.42$, which includes contributions from
both intrinsic shape and measurement noise. The rest ($\sim 20\%$) can be
accounted for by the intrinsic clustering and other error contributions
in the weak-lensing magnification measurements
\citep{Umetsu2014clash,Umetsu2016clash} as well as by the cosmic noise
contribution, $C^\mathrm{lss}$.}
Nevertheless, the $1\sigma$ uncertainty after this correction is small
compared to the absolute value of the slope at $r \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2r_\mathrm{200m}$.
Even though the determination becomes noisy at radii around and beyond
$r_\mathrm{200m}$,
the steepening relative to the NFW profile is marginally identified at
the $1.7\sigma$ level at the expected location, $r/r_\mathrm{200m}\sim 0.9$.
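The total S/N used in this matching is the quadrature sum of per-cluster quadratic forms, as defined above; a minimal sketch:

```python
import numpy as np

def total_sn(profiles, covs):
    """Total significance: (S/N)^2 = sum_n Sigma_n^T C_n^{-1} Sigma_n.

    profiles: list of binned Sigma vectors, one per halo.
    covs: matching list of covariance matrices.
    """
    sn2 = 0.0
    for s, C in zip(profiles, covs):
        s = np.asarray(s, dtype=float)
        sn2 += float(s @ np.linalg.solve(C, s))
    return np.sqrt(sn2)
```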
This test demonstrates that our analysis methods are able to
accurately reproduce the input sensitivity-weighted density profile and
its logarithmic gradient even at realistic noise levels.
The results also show that the priors adopted (Section
\ref{subsec:bayesian}) are generic and flexible enough to reproduce the
NFW-like shape of the profile, as well as a splashback feature.
Importantly, we note that our analysis pipeline does not introduce spurious
gradients that mimic the characteristic splashback feature,
namely a steepening followed by an upturn due to the contribution from
the 2-halo term.
\begin{figure*}[!htb]
\begin{center}
$
\begin{array}
{c@{\hspace{.3in}}c}
\includegraphics[scale=0.45, angle=0, clip]{Figs/f2a.pdf} &
\includegraphics[scale=0.45, angle=0, clip]{Figs/f2b.pdf}
\end{array}
$
\end{center}
\caption{
\label{fig:surfmass}
Upper panels: scaled surface mass density $\Sigma/\Sigma(r_\Delta)$ of
the CLASH sample as a function of $R/r_\Delta$, where the projected
clustercentric distance $R$ is expressed in units of two different
overdensity radii, $\Delta=\mathrm{200m}$ (left) and $\mathrm{2500c}$ (right).
In each panel, the blue thick solid line and the blue shaded area show
the best-fit DK14 profile and its $1\sigma$ uncertainty derived from a
simultaneous ensemble fit to the scaled surface mass density
profiles of the 16 CLASH clusters (gray lines).
The corresponding NFW (black dashed) and Einasto (black solid) fits
are also shown.
The scale on the top axis denotes $R$ in physical length units converted
with the effective overdensity radius $r_\Delta$ of the sample.
The average $\Sigma$ profile of the CLASH sample stacked in physical
units \citep[][U16]{Umetsu2016clash} is shown in rescaled units (green
circles with error bars).
For each $\Delta$, the lower panel shows deviations (in units of
$\sigma$) of the observed cluster profiles from the best-fit DK14
profile.
}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
$
\begin{array}
{c@{\hspace{.3in}}c}
\includegraphics[scale=0.45, angle=0, clip]{Figs/f3a.pdf} &
\includegraphics[scale=0.45, angle=0, clip]{Figs/f3b.pdf}
\end{array}
$
\end{center}
\caption{
\label{fig:Gamma3d}
Same as Figure \ref{fig:surfmass}, but for the gradient of the profiles.
Upper panels: logarithmic gradient of the inferred three-dimensional
density profile as a function of the scaled cluster radius $r/r_\Delta$.
As in Figure \ref{fig:surfmass}, the results are shown for
$\Delta=\mathrm{200m}$ (left) and $\mathrm{2500c}$ (right).
The blue solid line and the blue shaded area represent the best-fit DK14
model and its $1\sigma$ uncertainty, and are compared to the NFW (black
dashed) and Einasto (black solid) fits.
The gray vertical shaded area indicates the range from the
16th to the 84th percentile
of the marginalized posterior distribution of the splashback
radius, $R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ (see Figure \ref{fig:posterior}).
Lower panels: same as the upper panels, but showing the logarithmic
slope of the surface mass density profiles. The best-fit DK14 profile
is shown as blue dots at the locations of the data points.
For comparison, the slope of the conventionally stacked $\Sigma$
profile (U16) is shown in rescaled units (green circles with error
bars).
}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.75\textwidth,angle=0,clip]{Figs/f4.pdf}
\end{center}
\caption{
\label{fig:DK14fit}
Constraints on the seven dimensionless parameters
$\mbox{\boldmath $p$}=\{c_\mathrm{200m},\alpha,\tau_\mathrm{200m},B_\mathrm{200m},s_\mathrm{e},\beta,\gamma\}$
of the scaled DK14 model obtained from a simultaneous fit to
the surface mass density profiles
of 16 CLASH clusters (left panel of Figure \ref{fig:surfmass}), showing
marginalized posterior one-dimensional distributions and
two-dimensional 68\% and 95\% limits.
Blue solid lines indicate the best-fit (global maximum of the posterior)
values of the parameters.
For all parameters, the global best-fit values coincide well with their
respective peak values of the marginalized distributions.
For ($\tau_\mathrm{200m},s_\mathrm{e},\log_{10}\beta,\log_{10}\gamma$),
the prior distributions are shown by red dashed lines. For the other
parameters, the priors are flat and nonrestrictive.
}
\end{figure*}
\section{Results}
\label{sec:results}
Our main results are the best-fit NFW, Einasto, and DK14 profiles
resulting from a simultaneous fit to the scaled $\bSigma$ profiles of 16
CLASH clusters (Table \ref{tab:models} and Figures \ref{fig:surfmass},
\ref{fig:Gamma3d}, and \ref{fig:DK14fit}). Table \ref{tab:models} lists
the best-fit parameters, their uncertainties, and the $\chi^2$ values
for each of five pivot overdensities with which the analysis was
performed ($\Delta=\mathrm{2500c}, \mathrm{500c}, \mathrm{200c}$,
virial, and $\mathrm{200m}$).
In Figure \ref{fig:surfmass}, we show the projected density profile
$y_\Delta(x)=\Sigma(r_\Delta x)/\Sigma(r_\Delta)$ of the individual
CLASH clusters (gray lines), as well as the best-fit DK14 (blue), NFW
(dashed black), and Einasto (solid black) fits. The results are shown
for two different choices of the pivot overdensity, namely 200m (left
panel) and 2500c (right panel). In the upper panels of Figure
\ref{fig:Gamma3d}, we show the corresponding three-dimensional logarithmic slope
$d\ln\Delta\rho(r)/d\ln{r}$ of the DK14 fit with $1\sigma$ errors
(shaded area), as well as the NFW and Einasto slopes.
Similarly, the lower panels show the logarithmic
slope $d\ln\Sigma(R)/d\ln{R}$ of the best-fit surface mass density
profiles. Finally, Figure \ref{fig:DK14fit} shows the one- and two-dimensional marginalized
posterior distributions for the complete set of scaled DK14
parameters with $\Delta=\mathrm{200m}$,
$\mbox{\boldmath $p$}=\{c_\mathrm{200m},\alpha,\tau_\mathrm{200m},B_\mathrm{200m},s_\mathrm{e},\beta,\gamma\}$.
For all parameters, the global best-fit values coincide well with their respective peak values
of the one-dimensional marginalized posterior distributions.
In the following sections, we discuss the fit quality as well as the
inferred parameters for the inner ($r \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} r_\mathrm{vir}$) and
outer regions of the profile.
\subsection{Fit Quality}
\label{subsec:fit_quality}
The ensemble mass profile in projection is remarkably well described by
an NFW or Einasto profile out to
$R\sim 1.2r_\mathrm{200m}$
or
$R\sim 4.5 r_\mathrm{2500c}$ (Figures \ref{fig:surfmass} and
\ref{fig:Gamma3d}),
beyond which the data exhibit a flattening that is not modeled by those
fitting functions.
Note that to calculate $r_\Delta$ and $\Sigma(r_\Delta)$ for each
individual cluster (Section \ref{subsec:data}), we employed the
spherical NFW fits of U16 obtained with a restricted fitting
range of $R\le 2h^{-1}\,\mathrm{Mpc} \sim r_\mathrm{200m}$. The results shown here thus
ensure the self-consistency of our analysis.
As the outer profiles are expected to be most universal with respect to
$\Delta=\mathrm{200m}$ (DK14), that definition is of particular
relevance for the splashback radius. We thus use the DK14 model with
$\Delta=\mathrm{200m}$ as a baseline model.
This model has the best-fit $\chi^2$ of 180 for 232 degrees of freedom,
corresponding to a probability of 99.5\% to exceed the observed $\chi^2$
value, assuming the standard $\chi^2$ probability distribution
function. The model is therefore in good agreement with the data.
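The quoted probability to exceed follows directly from the $\chi^2$ survival function; as a check (using \textsc{scipy} here, an implementation choice):

```python
from scipy.stats import chi2

# Probability to exceed the observed chi^2 = 180 with 232 degrees of freedom
pte = chi2.sf(180.0, df=232)
print(f"PTE = {pte:.3f}")  # ~0.995, consistent with the quoted 99.5%
```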
However, as we will discuss in Section
\ref{sec:discussion:profile_form}, the improvement in the fit is not
significant compared to the NFW or Einasto fit, implying that the parameters
that describe the transition region and outer terms are not well
constrained by the data.
This is not surprising, because Figure \ref{fig:surfmass} shows that the
CLASH lensing data do not resolve the profile curvature in the
transition region particularly well. Hence, the shape of the gradient
feature at $r\sim r_\mathrm{200m}$, which locally deviates from the
three-dimensional NFW and Einasto profiles (Figure \ref{fig:Gamma3d}),
is specific to the assumed DK14 profile form.
We note that the $\chi^2$ values in Table \ref{tab:models} decrease with
increasing overdensity $\Delta$, independent of the fitting
function. The reason for this trend is that the inner $\bSigma$ profiles
are more tightly constrained by the data, especially from the {\em HST}
lensing analysis, so that scaling the $\bSigma$ profiles to higher
overdensities reduces the overall scatter, which is dominated by the
inner regions.
We also note that the reduced $\chi^2$ values in Table
\ref{tab:models} are systematically smaller than unity, which may
indicate that the number of degrees of freedom is overestimated owing to
the effects of nonlinear modeling \citep{Andrae2010} and/or that the
errors are conservatively overestimated. Since U16 found the reduced
$\chi^2$ for their fits to the same input data to be $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1$ (their Table 4),
it is unlikely that the errors are significantly overestimated.
\subsection{The Inner Mass Profile: Shape and Self-similarity}
\label{subsec:models}
The best-fit values of concentration shown in Table \ref{tab:models}
agree well between the different fitting models. This similarity is also
apparent in Figures \ref{fig:surfmass} and \ref{fig:Gamma3d}, as the
models have very similar shapes for both scaling overdensities
shown. Furthermore, the best-fit values for the shape parameter $\alpha$
agree between the Einasto and DK14 fits, and lie in the range
$0.18\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} \alpha\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.22$ with a typical $1\sigma$ uncertainty of
$0.06$ for the DK14 model,
and $0.17\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}\alpha\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.21$ for the Einasto model.
An Einasto density profile with $\alpha\sim 0.2$ closely resembles
an NFW profile \citep[e.g.,][]{Ludlow+2013}.
The uncertainties on $c$ and $\alpha$ allow us to assess the impact of
the scaling overdensity. If the profiles are most universal as a
function of a particular radius definition, we expect the fractional
uncertainty on the fit to be smallest in that definition. DK14
investigated the universality of halo density profiles, and found that
the inner profiles are most universal in units of
$r_\mathrm{200c}$. However, this statement refers primarily to the
redshift scaling of different definitions, which we cannot test here
owing to the limited redshift range of the CLASH sample. At fixed
redshift and fixed mass, we expect any overdensity within a range around
$r_\mathrm{200c}$ to lead to reasonably universal inner profiles,
whereas for very extreme definitions the scatter in the profiles might
increase.
These expectations are borne out in our results. The fractional
uncertainties on NFW-$c$ and Einasto-$\alpha$
(the primary parameter that determines the shape of the profile for each model)
are smallest for the 200m and virial scalings, and increase toward the
highest overdensities (despite a lower $\chi^2$).
However, the uncertainties on Einasto-$c$ are slightly lower at somewhat
higher overdensities such as $\Delta=\mathrm{500c}$.
Overall, it appears that a rescaling with densities
around $\Delta = \mathrm{200c}$ leads to relatively low
uncertainties. Regardless of the profile model, scaling with
$\Delta=\mathrm{2500c}$ results in significantly higher uncertainties.
Since the determination of spherical overdensity radii (or masses) is
not less certain at high overdensities (U16, their Section 4.2), we
conclude that the increased uncertainty arises because the inner
profiles are less universal at very high overdensities.
Furthermore, we note that the relative insensitivity of the inner
profiles to $\Delta$ is likely in part due to the sample selection based
on X-ray regularity (Section \ref{subsec:data}), which is understood to
significantly reduce the scatter in concentration
\citep{Meneghetti2014clash}. The results have been confirmed by the
CLASH full lensing analysis of U16 (see their Section 6.2, as well as
\citealt{Merten2015clash}).
Most importantly, we find that scaling the profiles by {\em any}
overdensity radius (except for $\Delta = \mathrm{2500c}$) improves the
constraints on the best-fit parameters. As a result, the limits on
concentration are tighter than those of U16 who scaled the profiles in
physical units and found uncertainties of 8\% on NFW-$c$, 11\% on
Einasto-$c$, and 18\% on Einasto-$\alpha$ (see their Table 4).
\begin{figure}
\begin{center}
$
\begin{array}
{c@{\hspace{.1in}}c}
\includegraphics[scale=0.25, angle=0, clip]{Figs/f5a1.pdf} &
\includegraphics[scale=0.25, angle=0, clip]{Figs/f5b1.pdf}\\
\includegraphics[scale=0.25, angle=0, clip]{Figs/f5a2.pdf} &
\includegraphics[scale=0.25, angle=0, clip]{Figs/f5b2.pdf}
\end{array}
$
\end{center}
\caption{\label{fig:posterior}
Marginalized one-dimensional posterior probability distributions of the
three-dimensional splashback radius (top panels) and mass (bottom
panels) in scaled units, $R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ and $M_\mathrm{sp}/M_\Delta$,
for two different values of the chosen pivot overdensity,
$\Delta=\mathrm{200m}$ (left panels) and $\mathrm{2500c}$ (right
panels). The red vertical dashed lines indicate the
16th, 50th, and 84th percentiles of the distributions. The global
best-fit model values of $R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ and $M_\mathrm{sp}/M_\Delta$
are marked by black vertical solid lines. The scales on the top axes
denote $R_\mathrm{sp}^\mathrm{3D}$ and $M_\mathrm{sp}$ in physical units, converted using
the effective overdensity radius $r_\Delta$ and mass $M_\Delta$ of the
sample.
}
\end{figure}
\subsection{The Outer Profile and Splashback Radius}
\label{subsec:Rsp}
We now turn toward the outer profiles and particularly the inferred
gradient profiles shown in Figure \ref{fig:Gamma3d}. A comparison of the
three- and two-dimensional slopes highlights why detecting the
splashback radius in surface density profiles is challenging in
practice: even though there is a noticeable steepening in the
three-dimensional slope, the two-dimensional slope drops very little,
owing to projection effects (see Section
\ref{sec:discussion:projection}).
Nevertheless, we can derive a splashback radius and mass from each of
the MCMC samples of the DK14 parameters shown in Figure
\ref{fig:DK14fit} (Equations \ref{eq:Rsp} and
\ref{eq:Msp}). In Figure
\ref{fig:posterior}, we show the corresponding one-dimensional
marginalized posterior distributions for $R_\mathrm{sp}^\mathrm{3D}$ and $M_\mathrm{sp}\equiv
M(<R_\mathrm{sp}^\mathrm{3D})$, both in scaled units.
The results are shown for two different values of the chosen pivot
overdensity, $\Delta=\mathrm{200m}$ (left panels) and 2500c (right panels).
In each panel, the vertical dashed lines indicate the 16th, 50th, and 84th
percentiles of the marginalized posterior distribution.
The locations of
$R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ and
$M_\mathrm{sp}/M_\Delta$
derived from the best-fit model (Table \ref{tab:models})
are indicated by vertical solid lines in Figure \ref{fig:posterior}, and
are in close agreement with the respective median values of the
distributions.
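Operationally, for each MCMC sample $R_\mathrm{sp}^\mathrm{3D}$ is the radius at which the logarithmic gradient of the three-dimensional profile attains its minimum; a sketch of this step, evaluated on an illustrative (not best-fit) DK14-like parameter set:

```python
import numpy as np

def f_example(x, c=5.0, alpha=0.2, tau=1.0, B=0.1,
              s_e=1.5, beta=6.0, gamma=4.0, eps=0.11):
    """Illustrative scaled DK14-like profile (arbitrary parameters)."""
    einasto = np.exp(-2.0 / alpha * c ** alpha * (x ** alpha - 1.0))
    f_trans = (1.0 + (x / tau) ** beta) ** (-gamma / beta)
    return einasto * f_trans + B / (eps + x ** s_e)

def steepest_slope(f, x_lo=0.1, x_hi=5.0, n=2000):
    """Location and value of the minimum logarithmic slope of f(x).

    The slope d ln f / d ln x is evaluated by finite differences on a
    uniform grid in ln x.
    """
    lnx = np.linspace(np.log(x_lo), np.log(x_hi), n)
    lnf = np.log(f(np.exp(lnx)))
    slope = np.gradient(lnf, lnx)
    i = int(np.argmin(slope))
    return np.exp(lnx[i]), slope[i]

x_sp, g_min = steepest_slope(f_example)
```

For a well-defined steepening ($\tau_\Delta$ of order unity) the minimum lies near the truncation radius, whereas for large $\tau_\Delta$ it migrates out of the fitting range, which is why such samples yield only weak upper bounds.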
The posterior distributions show a tail extending toward large positive
values of $R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ and $M_\mathrm{sp}/M_\Delta$,
associated with large values of the truncation parameter,
$\tau_\Delta=r_\mathrm{t}/r_\Delta$ (Figure \ref{fig:DK14fit}).
A large $\tau_\Delta$ indicates a profile without a well-defined
steepening feature.
We thus place uninformative upper bounds on $R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ and
$M_\mathrm{sp}/M_\Delta$.
On the other hand, we obtain tighter lower bounds on these parameters
because the inner $\bSigma$ profiles of the clusters are well
constrained by the combination of strong-lensing, weak-lensing shear and
magnification data (U16).
\input{table3.tex}
Table \ref{tab:Rsp} summarizes the 68\% confidence lower limits and
best-fit model values (in parentheses) on the splashback radius and
mass. We also list $R_\mathrm{sp}^\mathrm{3D}$ and $M_\mathrm{sp}$
in physical length units converted with the
effective overdensity radius ($r_\Delta^\mathrm{eff}$) and mass
($M_\Delta^\mathrm{eff}$) of the sample
(Section \ref{subsec:data}).
Using the fiducial scaling overdensity of $\Delta=\mathrm{200m}$, these lower limits are
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}>0.89$
and
$M_\mathrm{sp}/M_\mathrm{200m}>0.93$,
corresponding to
$R_\mathrm{sp}^\mathrm{3D} > 1.83h^{-1}\,\mathrm{Mpc}$
and
$M_\mathrm{sp} > 1.21\times 10^{15}h^{-1}\,M_\odot$.
We note that the location of the steepest slope in projection,
$R_\mathrm{sp}^\mathrm{2D}/r_\Delta$,
is smaller than $R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ (Figure
\ref{fig:Gamma3d}) because of projection effects
\citep{More2016splash}. Using our best-fit base DK14 model
($\Delta=\mathrm{200m}$ in Table \ref{tab:models}), we find
$R_\mathrm{sp}^\mathrm{2D}/R_\mathrm{sp}^\mathrm{3D}\simeq0.8$. We
discuss the difference between the two- and three-dimensional splashback
radii further in Section \ref{sec:discussion:projection}.
From a comparison of the lower bounds on the splashback radius and mass in
Table \ref{tab:Rsp}, we see that the steepening feature is most pronounced
when the cluster profiles are scaled by $r_\mathrm{200m}$, and is smeared
out when scaled to higher overdensities, resulting in less strict lower limits
(see also the right panels of Figure \ref{fig:Gamma3d}).
This trend is consistent with the prediction of DK14 that
halos reveal self-similar behavior in their outskirts when their
profiles are expressed in units of spherical overdensity radii defined
with respect to the mean density of the universe, especially
$r_\mathrm{200m}$.
\section{Discussion}
\label{sec:discussion}
In this section, we compare our results for the shape of the profile,
concentration, and splashback radius with predictions from $N$-body
simulations. We discuss observational and simulation
effects that could potentially complicate our analysis.
\subsection{The Impact of Priors}
\label{sec:discussion:tau}
We imposed a number of priors on the
parameters of the DK14 profile, namely
on the slopes $\beta$ and $\gamma$, on the steepening radius
$\tau_\Delta$, and on the outer profile slope $s_\mathrm{e}$ (see Figure
\ref{fig:DK14fit}).
These priors were based on the results of DK14 for the median profiles
of halo samples spanning a wide range of masses and mass accretion
rates, and chosen conservatively, i.e., allowing a much larger range of
parameter values than found for high-mass cluster halos in DK14.
We find that imposing more constraining theoretical priors would lead
to an inflated sensitivity to the splashback feature.
As stated in Section \ref{subsec:DK14}, we use the DK14 profile as a
flexible fitting function to determine the location of the steepest
slope, $R_\mathrm{sp}^\mathrm{3D}$, which is an observable quantity.
In this context,
$\mbox{\boldmath $p$}=\{c_\Delta,\alpha,\tau_\Delta,B_\Delta,s_\mathrm{e},\beta,\gamma\}$
are considered to be merely fitting parameters, and we allow them to take
on values not expected from simulations, such as very large $\alpha$.
Nevertheless, one might worry that our inferences regarding $R_\mathrm{sp}^\mathrm{3D}$
are informed by the priors because they clearly inform the posterior of
some parameters (Figure \ref{fig:DK14fit}).
In particular, the asymptotic slope of the 1-halo term, $\gamma$,
is essentially unconstrained by the fit and relatively steep due to the
prior (for example, $\gamma = 0$ is effectively excluded). However, it
is important to note that $\gamma$ (as well as $\beta$) has no impact on
the DK14 profile if $\tau_\Delta$ is large, because the steepening then
moves out of the observed region of the profile. Thus, the most critical
prior is that on $\tau_\Delta$.
Our prior allows values of $\tau_\Delta$ up to 5, placing the steepening
at $r_\mathrm{t}\lesssim 10h^{-1}\,\mathrm{Mpc}$ which lies far outside the maximum
radius of our data ($R\simeq 5h^{-1}\,\mathrm{Mpc}$).
Thus, profiles with $\tau_\mathrm{200m} \gtrsim 2.5$ effectively reduce
to an Einasto profile with a 2-halo term ($f_\mathrm{trans}\approx 1$)
in the observed radial range.
We have already demonstrated that a fit with these priors can reproduce
a profile without steepening in the analysis of synthetic weak-lensing
data (Figure \ref{fig:mock}).
However, we note that, in combination with a higher value of $\alpha$
and a positive value of $B_\Delta$, even profiles with
$\tau_\mathrm{200m}>2.5$ can reproduce a splashback feature in
the observed radial range (with an inner profile steeper than the best-fit
NFW/Einasto profile). We have confirmed that the resulting profiles
are, in fact, a reasonable description of the data. Our priors also allow
a profile with a negative 2-halo normalization,
$B_\Delta\le 0$ (i.e., underdense regions), which can produce a
steepening gradient without an upturn feature. Here, the ensemble fits
yield positive $\Delta\rho$ (i.e., $\rho>\rho_\mathrm{m}$) in the
observed range because $\bSigma$ is constrained to be positive (U16).
In order to confirm that the $\tau_\Delta$ prior does not significantly
affect our results, we have performed a fit with a flat prior of $\tau_\mathrm{200m}<20$,
corresponding to $r_\mathrm{t}\lesssim 40h^{-1}\,\mathrm{Mpc}$ for the CLASH sample. With
this relaxed $\tau_\Delta$ prior, we find the same best-fit solution and
a 68\% CL lower bound of $R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}>0.90$, compared to
$>0.89$ obtained with $\tau_\mathrm{200m}<5$.
This difference represents a $1\%$ change, which is not significant
given the current sensitivity. Moreover, we find no noticeable changes
in the constraints on the density and gradient profiles shown in Figures
\ref{fig:surfmass} and \ref{fig:Gamma3d}. Therefore, we conclude that
our results are sufficiently robust against the choice of the
$\tau_\Delta$ prior.
\subsection{Which Density Profile Does the CLASH Sample Prefer?}
\label{sec:discussion:profile_form}
A robust outcome of our analysis is that, regardless of the
pivot overdensity $\Delta$ chosen, the ensemble CLASH mass profile in
projection is in full agreement with the NFW or Einasto profile out to
$\sim r_\mathrm{200m}$,
consistent with previous lensing results \citep[e.g.,][]{Umetsu+2011stack,Umetsu2014clash,Umetsu2016clash,Newman+2013a,Okabe+2013,Niikura2015,Okabe+Smith2016}.
This also ensures the self-consistency of our analysis, in which we used
the NFW fits of U16 to calculate $r_\Delta$ and $\Sigma(r_\Delta)$ for
individual clusters, where the fitting range was restricted to
$R\le 2h^{-1}\,\mathrm{Mpc} \sim r_\mathrm{200m}$.
Our base DK14 model with $\Delta=\mathrm{200m}$ gives a slightly better
fit than the corresponding NFW or Einasto model in terms of the best-fit
$\chi^2$ values (Table \ref{tab:models}).
The relative improvements are
$\Delta\chi^2=\chi^2_\mathrm{NFW}-\chi^2_\mathrm{DK14}\simeq 1$
and
$\Delta\chi^2=\chi^2_\mathrm{Einasto}-\chi^2_\mathrm{DK14}\simeq 1$
for three and two additional free parameters, respectively.
Hence, the improvement is not statistically significant, implying that
our inference of the outskirt feature depends on the choice of the
fitting function and priors.
As discussed in Section \ref{subsec:DK14} and by \citet{More2016splash},
we would apply two requirements to claim a detection of the splashback radius
with a DK14 model, namely (1) that the location of the steepest slope in three
dimensions with respect to $r_\Delta$ can be identified at high
significance and
(2) that this steepening is greater than that expected from a DK14 model with
$f_\mathrm{trans} = 1$ (i.e., $\tau_\Delta\gg 1$, reducing to an Einasto profile).
The second criterion is important to ensure that the steepening is
actually associated with a density caustic rather than the transition to
the 2-halo term.
Given these criteria, we do not find sufficient evidence for the
existence of a splashback feature in the CLASH lensing data
because the data do not have the
sensitivity necessary to resolve the profile curvature in the transition region.
This result is not surprising, as demonstrated in our simulated
experiment (Section \ref{subsec:mock}).
On the other hand,
assuming the DK14 profile form and generic priors calibrated
with numerical simulations, we have placed lower
limits on the splashback radius of the CLASH clusters (Table
\ref{tab:Rsp}), if it exists. Since we cannot rule out models with $f_\mathrm{trans}=1$, it is
possible that the observed gradient feature in the outskirts is a
statistical fluctuation. In Section \ref{subsec:mock}, we showed that
our analysis pipeline produces unbiased results in terms of both
$\bSigma$ profiles and ensemble DK14 fits, and does not create spurious
gradient features. Hence, the inferred outskirt feature is unlikely to
arise from systematic errors.
An additional source of uncertainties in our scaling analysis is
the statistical errors on the NFW parameters of individual clusters,
which propagate into uncertainties in their scaling radius $R_\Delta$
and scaling density $\Sigma(R_\Delta)$. In this work, these scaling
parameters of individual clusters are fixed to their best-fit values
from the NFW fits of U16. Hence, although our scaling analysis has led
to significant improvements in the constraints on the inner profiles
relative to the conventional stacking (see Section
\ref{sec:discussion:sims}), these errors are likely to have smeared out
local features to some level \citep{Niikura2015}. The degree of smearing
can be assessed by marginalizing over uncertainties in the NFW
parameters for all clusters, and such effects need to be accounted for in
future studies with a larger statistical sample of clusters and with a
higher statistical sensitivity.
\begin{figure*}[!htb]
\begin{center}
$
\begin{array}
{c@{\hspace{.3in}}c}
\includegraphics[scale=0.45, angle=0, clip]{Figs/f6a.pdf} &
\includegraphics[scale=0.45, angle=0, clip]{Figs/f6b.pdf}
\end{array}
$
\end{center}
\caption{
\label{fig:Rsp_Gamma}
Comparison of our CLASH lensing constraints on $R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}$
against $\Lambda$CDM predictions at $z=0.337$ \citep[gray lines,][]{More2015splash}.
Left panel: the relation between mass accretion rate and splashback
radius. The blue horizontal line and the shaded area represent,
respectively, the best-fit model value and the 68\% confidence interval
of the DK14 parameter $R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}$ inferred for the CLASH
sample (Figure \ref{fig:posterior}).
The CLASH lensing constraints on the splashback radius overlap with a
broad representative range of mass accretion rates predicted for DM
halos.
Right panel: the relation between peak height and splashback
radius. Similarly, the CLASH results (blue square with error bars) are
in agreement with the theoretical expectation
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}\simeq 0.97$ evaluated at the effective peak
height of the CLASH sample, $\nu_\mathrm{200m} = 4.0\pm 0.1$.
The blue vertical dashed lines mark the range of $\nu_\mathrm{200m}$ peak
heights covered by our sample.
The observational results of \citet{More2016splash} for their full
sample at $z=0.24$ are shown as a filled circle (see Section
\ref{sec:discussion:obs}). Owing to the CLASH X-ray selection, there
could be a bias toward higher values of $R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}$ for our
sample.
}
\end{figure*}
\subsection{Comparison with Simulation Results}
\label{sec:discussion:sims}
We begin by making sure that our results for the standard profile
parameters, namely the concentration and Einasto shape parameters, are
congruent with expectations from simulations. The best-fit values for
concentration are relatively independent of the fitting function chosen
(Table \ref{tab:models}): the NFW fit results in
$c_\mathrm{200c} = 3.66$, the Einasto fit in
$c_\mathrm{200c} = 3.30$, and the DK14 fit in
$c_\mathrm{200c} = 3.58$. These values are in excellent agreement with
the model of \citet{Diemer+Kravtsov2015} which estimates the mean
concentration to be
$c_\mathrm{200c} = 3.65$ for
$M_\mathrm{200c} = 10.11 \times 10^{14}h^{-1}\,M_\odot$,
$z=0.337$, and the cosmology assumed in this paper. The model of
\citet{Diemer+Kravtsov2015} is based on NFW concentrations,
whereas Einasto concentrations are expected to be about 10\% lower at
those masses and redshifts, in excellent agreement
with our results
\citep[Figure 5 of \citealt{Dutton+Maccio2014},][]{Meneghetti2014clash,Sereno2016einasto}.
Our NFW constraints on the halo concentration
$c_\mathrm{200c}=3.66\pm 0.11$ (with 16 clusters) are also in agreement
with the expectations for the CLASH sample, namely a mean value of 3.87
and a standard deviation of 0.61, accounting for both selection and
projection effects \citep{Meneghetti2014clash}.
Since the DK14 profile assumes an Einasto profile for the 1-halo term,
the Einasto parameters are of particular interest. The Einasto profile
varies in steepness with halo radius, at a rate given by a shape
parameter $\alpha$. \citet{Gao+2008} showed that this parameter is, to a
good approximation, a function of only the peak height, $\nu$ \citep[see
also][]{Dutton+Maccio2014}. The CLASH mass corresponds to a peak
height of $\nu_\mathrm{200c} = 3.76$ and thus $\alpha = 0.29$,
significantly higher than the values of about $0.2$ found in our
fits. While the results of neither \citet{Gao+2008} nor
\citet{Dutton+Maccio2014} are well constrained at such extreme peak
heights, it is clear that $\alpha$ in their simulations exceeds $0.2$
significantly at high $\nu$. This tension already emerged as a
$\approx 1\sigma$ difference in the analysis of
U16. With the improved radial rescaling applied in
this paper, the significance of the difference increases to $3.5
\sigma$.
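As a consistency check on the numbers quoted above, the peak-height dependence of the Einasto shape parameter can be evaluated with the quadratic fit of \citet{Gao+2008}; the sketch below is illustrative, with the function name being ours:

```python
def einasto_alpha_of_nu(nu):
    # Gao et al. (2008) fit: alpha = 0.155 + 0.0095 * nu^2
    return 0.155 + 0.0095 * nu**2

# At the CLASH peak height nu_200c = 3.76 this gives alpha ~ 0.29,
# noticeably above the ~0.2 preferred by our lensing fits.
```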
We have tested whether the steepening due to the splashback radius
can bias the fitted $\alpha$ high compared to the value preferred
by the inner profile. We find that such a bias can indeed appear
depending on the fitted radial range (a larger range leads to
larger bias) and the weights given to each radial bin. However,
\citet{Dutton+Maccio2014} fitted out to $1.2 r_\mathrm{vir}$ and
weighted the bins by the number of particles, and for these
parameters the bias is smaller than $1\%$.
Another possible explanation is that the X-ray-selected CLASH
clusters are preferentially relaxed systems compared to the
average population \citep{Meneghetti2014clash}. However, the
relation of
\citet{Gao+2008} and \citet{Dutton+Maccio2014} for $\alpha$ is
based on halo samples that exclude unrelaxed halos, although this
selection may not capture the entire effect present in the CLASH
sample. On the other hand,
taking into account baryonic effects in nonradiative hydrodynamical
$N$-body simulations, \citet{Meneghetti2014clash} find that the Einasto
shape parameter for cluster-size halos
lies in the range $\alpha = 0.21 \pm 0.07$,
indicating that our measurement is only moderately in tension with
simulations.
Finally, we compare our inferences regarding the splashback radius
$R_\mathrm{sp}^\mathrm{3D}$ with the simulation results of \citet{More2015splash} who
predicted that the ratio of $R_\mathrm{sp}^\mathrm{3D}$ and $r_\mathrm{200m}$ depends
primarily on the mass accretion rate of halos, with a less
important dependence on redshift. In the left panel of Figure
\ref{fig:Rsp_Gamma}, we compare the ratio
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}=1.23^{+2.33}_{-0.34}$
(blue shaded band), inferred
for our sample at $z_\mathrm{l}^\mathrm{eff}=0.337$, with the mean
relation in simulations (gray band),
\begin{equation}
\frac{R_\mathrm{sp}^\mathrm{3D}}{r_\mathrm{200m}} =
0.58\left[1+0.63\Omega_\mathrm{m}(z)\right]
\left(1+1.08 \exp\left[-\frac{\Gamma}{2.26}\right]\right),
\end{equation}
where $\Gamma\equiv d\ln{M_\mathrm{vir}}/d\ln{a}$ is the mass accretion rate
\citep{More2015splash}.
This comparison shows that the CLASH lensing constraints on the
splashback radius overlap with a broad range of mass accretion rates
$\Gamma$.
Our lower $1\sigma$ (16th percentile) limit of
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m} \gtrsim 0.89$ translates into an upper limit on the accretion
rate of $\Gamma\lesssim 4.0$.
This limit is not particularly informative, since only a very small fraction
of halos experience such rapid accretion (DK14).
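The translation from the lower limit on $R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}$ to an upper limit on $\Gamma$ amounts to inverting the monotonically decreasing mean relation above. A minimal numerical sketch, assuming a flat $\Lambda$CDM cosmology with $\Omega_\mathrm{m,0}=0.27$ (so that $\Omega_\mathrm{m}(z=0.337)\approx 0.47$); the function names are ours:

```python
import math

def omega_m(z, om0=0.27):
    # Matter density parameter in flat LambdaCDM (assumed Om0 = 0.27)
    a3 = (1.0 + z) ** 3
    return om0 * a3 / (om0 * a3 + 1.0 - om0)

def rsp_ratio_from_gamma(gamma, z=0.337):
    # More et al. (2015) mean relation quoted above
    return 0.58 * (1.0 + 0.63 * omega_m(z)) * (1.0 + 1.08 * math.exp(-gamma / 2.26))

def gamma_upper_limit(rsp_ratio_min, z=0.337, hi=20.0):
    # Invert the monotonically decreasing relation by bisection
    lo = 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rsp_ratio_from_gamma(mid, z) > rsp_ratio_min:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With our 16th-percentile limit of 0.89, this recovers $\Gamma\lesssim 4$.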
Since we cannot directly measure the mass accretion rate of the
CLASH cluster sample, we cannot independently verify whether this
prediction is congruent with observations. For this reason,
\citet{More2015splash} also provide an approximate relation for the mean
splashback radius as a function of peak height and redshift,
\begin{equation}
\frac{R_\mathrm{sp}^\mathrm{3D}}{r_\mathrm{200m}} =
0.81\left(
1+0.97\exp
\left[
-\frac{\nu_\mathrm{200m}}{2.44}
\right]
\right) \,.
\end{equation}
The right panel of Figure \ref{fig:Rsp_Gamma} shows a comparison of this
fitting function with our results for the splashback radius and the peak
height of the CLASH cluster sample. Our inferred range of possible
splashback radii covers the entire range of values suggested by the
simulation results, showing that our results are generally compatible with
simulations. While the results of \citet{More2015splash} were based on
DM-only simulations, \citet{Lau2015} broadly confirmed the results of
DK14 in hydrodynamical simulations of individual clusters. Thus, we have
currently no reason to assume that baryons affect the location of the
splashback radius significantly.
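For reference, the peak-height fitting function above can be evaluated directly at the effective CLASH peak height; a one-line sketch:

```python
import math

def rsp_ratio_from_nu(nu_200m):
    # More et al. (2015) mean splashback radius vs. peak height
    return 0.81 * (1.0 + 0.97 * math.exp(-nu_200m / 2.44))

# At the effective CLASH peak height nu_200m = 4.0 this gives ~0.97,
# well inside our 68% interval R_sp/r_200m = 1.23 (+2.33 / -0.34).
```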
\subsection{Projection Effects in Halo Profiles}
\label{sec:discussion:projection}
In the previous sections, we defined $R_\mathrm{sp}^\mathrm{3D}$ as the halo radius where the
logarithmic slope of the three-dimensional density profile is
steepest. However, the quantity actually measured is the two-dimensional
density profile, both when observing cluster member density profiles as
in \citet{More2016splash} or the lensing signal of clusters as in this
work. Thus, it is important to understand the relation between the
location of the radii of the steepest slope in three dimensions and in
projection, $R_\mathrm{sp}^\mathrm{3D}$ and $R_\mathrm{sp}^\mathrm{2D}$.
We have investigated this relation using the simulated halo sample
of \citet{More2015splash}. In all cases, the steepening of the
two-dimensional mass profile is less pronounced than that of the
three-dimensional profile, as highlighted in Figure 14 of DK14. At high
mass accretion rates ($\Gamma \geq 3$),
$R_\mathrm{sp}^\mathrm{2D} / R_\mathrm{sp}^\mathrm{3D}$ approaches a fixed ratio of about
$0.8$, in agreement with our measurements (Section \ref{subsec:Rsp}).
At lower mass accretion rates, however, the $R_\mathrm{sp}^\mathrm{2D}$
derived from the profiles exhibits huge scatter and a seemingly random
pattern.
The difficulty in deriving a valid $R_\mathrm{sp}^\mathrm{2D}$ from
projected measurements can be understood by considering a few
realizations of the DK14 profile with a power-law outer profile
representing the 2-halo term (see DK14 for details).
The location of the steepest slope in
three dimensions is a trade-off between the steepening 1-halo term and
the 2-halo term. The steepening is less pronounced for halos with
lower mass accretion rates. Thus, in projection, the 2-halo term has a
substantial impact on the apparent location of
$R_\mathrm{sp}^\mathrm{2D}$, which emerges at a much smaller radius that
is unrelated to the steepening term (Figure \ref{fig:Gamma3d}), a
problem that becomes more serious at small peak heights.
These results highlight the importance of forward-modeling the effects
of the steepening based on the underlying three-dimensional density
profile, rather than attempting to derive $R_\mathrm{sp}^\mathrm{3D}$ from
$R_\mathrm{sp}^\mathrm{2D}$ directly (e.g., using Gaussian process
modeling).
\subsection{Compatibility with Measurements from Cluster Member Density Profiles}
\label{sec:discussion:obs}
The splashback radius was first unambiguously detected in observations by
\citet{More2016splash} who stacked surface number density profiles of
cluster member galaxies for a large number of clusters.
Their clusters were split into two subsamples with high and low
concentrations of member galaxies ($c_\mathrm{gal}$) at fixed richness
and redshift \citep{Miyatake2016bias}.
These high- and low-$c_\mathrm{gal}$ samples were expected to represent
populations of high and low $\Gamma$, respectively. However, \citet{Zu2016}
have shown that the parameter $c_\mathrm{gal}$ used in \citet{More2016splash}
is strongly contaminated by projection effects and is likely sensitive to
the large-scale environment of the clusters rather than their internal structure.
We thus limit our comparison to the full sample of
\citet{More2016splash} whose inferred splashback radius is
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}=0.837\pm 0.031$, with a weak-lensing mass of
$M_\mathrm{200m}\simeq 1.87\times 10^{14} h^{-1}\,M_\odot$ at $z=0.24$ (S. More
2016, private communication).
Figure \ref{fig:Rsp_Gamma} shows that our lower limit on $R_\mathrm{sp}^\mathrm{3D}$ is
higher than this value, but overlaps with the full-sample measurement
at the $\sim 1\sigma$ level. We note that we expect the average mass
accretion rate of the CLASH sample to be low due to a high fraction of
relaxed objects \citep{Meneghetti2014clash}. Hence, owing to the selection
effects, there could be a bias toward higher values of
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}$ in our sample.
\section{Summary}
\label{sec:summary}
We have developed methods for modeling averaged cluster lensing
profiles scaled to a chosen halo overdensity $\Delta$, which can be
optimized for the extraction of features that are local in radius,
in particular the steepening due to the splashback radius in the
outskirts of collisionless DM halos.
We have examined the ensemble mass distribution of 16 CLASH
X-ray-selected clusters by forward-modeling the gravitational lensing
data obtained by \citet{Umetsu2016clash}. Our main conclusions are as
follows.
\begin{itemize}
\item Regardless of the overdensity chosen, the CLASH
ensemble mass profile in projection is remarkably well described by a
scaled NFW or Einasto density profile out to
$R\sim 1.2r_\mathrm{200m} \sim 2.5h^{-1}\,\mathrm{Mpc}$, beyond which the
data exhibit a flattening with respect to the NFW or Einasto profile.
\item We constrain the NFW halo concentration to
$c_\mathrm{200c}=3.66\pm 0.11$ at
$M_\mathrm{200c}\simeq 1.0\times 10^{15}h^{-1}\,M_\odot$,
consistent with previous work based on the same input data
\citep{Umetsu2016clash}. Our new analysis using scaled profiles
provides tighter constraints on the halo shape and structural
parameters ($c$ and $\alpha$) than the conventional
stacking.
\item
We do not find statistically significant evidence for the existence
of the splashback radius in the CLASH lensing data.
At the current sensitivity, this result is in line with expectations
from simulated, synthetic observations.
Assuming the DK14 profile form and generic priors calibrated with
simulations, we have placed a lower limit on the splashback radius
of the clusters, if it exists, of
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}>0.89$ or
$R_\mathrm{sp}^\mathrm{3D}>1.83h^{-1}\,\mathrm{Mpc}$
at 68\% confidence. This constraint is in agreement with
$\Lambda$CDM predictions.
\item The gradient feature in the outskirts is most pronounced for a
scaling with $r_\mathrm{200m}$, consistent with simulation results of
\citet{Diemer+Kravtsov2014} and \citet{Lau2015}.
\end{itemize}
The results obtained here are generally favorable in terms of the
standard explanation for DM as effectively collisionless and
nonrelativistic on sub-megaparsec scales and beyond, with an excellent match
between lensing data and $\Lambda$CDM predictions for high-mass
clusters.
This study represents a first step toward using cluster gravitational
lensing to examine detailed predictions from collisionless $\Lambda$CDM
simulations regarding the shape and universality of the outer density
profiles.
Such predictions can, in principle, be unambiguously tested across a
wide range of halo masses, redshifts, and accretion rates, with large
statistical samples of clusters from ongoing and planned lensing surveys
such as the Subaru Hyper Suprime-Cam survey \citep{Miyazaki2015}, the
Dark Energy Survey, and the {\em WFIRST} and {\em Euclid} missions.
\acknowledgments
This work was made possible in part by the availability of high-quality
lensing data produced by the CLASH team. We are grateful to the CLASH
team who enabled us to carry out the work. We thank all authors of
\citet{Umetsu2014clash,Umetsu2016clash} and \citet{Zitrin2015clash} for
their contributions to the lensing analyses used here.
We thank our referee for his valuable suggestions that
improved the paper significantly.
We thank Andrey Kravtsov and Surhud More for important suggestions
and detailed comments on a draft of this paper.
We acknowledge very fruitful discussions with
Nobuhiro Okabe,
Ho Seong Hwang,
Arman Shafieloo,
Zuhui Fan,
and
Congyao Zhang.
This work is partially supported by the Ministry of Science and
Technology of Taiwan under the grants
MOST 103-2112-M-001-030-MY3
and
MOST 103-2112-M-001-003-MY3.
\input{ms.bbl}
\clearpage
\begin{appendix}
\section{Scaled DK14 Model}
\label{appendix:dk14}
We express the scaled DK14 density profile as
\begin{equation}
f_{\Delta}(x) = f_\mathrm{inner}(x)\,f_\mathrm{trans}(x) + f_\mathrm{outer}(x)
\end{equation}
with
\begin{equation}
\begin{aligned}
f_\mathrm{inner}(x) &= \exp\left[ -\frac{2}{\alpha}c_\Delta^\alpha(x^\alpha-1) \right],\\
f_\mathrm{trans}(x) &= \left[1+ \left(\frac{x}{\tau_\Delta}\right)^\beta\right]^{-\frac{\gamma}{\beta}},\\
f_\mathrm{outer}(x) &= \frac{B_\Delta}{\epsilon_{\Delta} + x^{s_\mathrm{e}}},
\end{aligned}
\end{equation}
where $c_\Delta=r_\Delta/r_\mathrm{s}$,
$\tau_\Delta=r_\mathrm{t}/r_\Delta$,
$B_\Delta=b_\mathrm{e}(\rho_\mathrm{m}/\rho_\Delta)(5r_\mathrm{200m}/r_\Delta)^{s_\mathrm{e}}$
with $\rho_\Delta\equiv\rho_\mathrm{Einasto}(r_\Delta)=\rho_\mathrm{s}\exp\left[-(2/\alpha)(c_\Delta^\alpha-1)\right]$,
and
$\epsilon_\Delta=\Delta_\mathrm{max}^{-1}(5r_\mathrm{200m}/r_\Delta)^{s_\mathrm{e}}$.
The (unscaled) DK14 density profile is obtained as $\Delta\rho(r)=\rho_\Delta f_{\Delta}(r/r_\Delta)$. The derivatives of the inner, transition, and outer terms are
\begin{equation}
\begin{aligned}
-x df_\mathrm{inner}/dx &= 2c_\Delta^\alpha x^\alpha f_\mathrm{inner},\\
-x df_\mathrm{trans}/dx &= \gamma\frac{(x/\tau_\Delta)^\beta}{1+(x/\tau_\Delta)^\beta}f_\mathrm{trans},\\
-x df_\mathrm{outer}/dx &=s_\mathrm{e} \frac{x^{s_\mathrm{e}}}{\epsilon_\Delta+x^{s_\mathrm{e}}}f_\mathrm{outer}.
\end{aligned}
\end{equation}
The logarithmic gradient of the DK14 density profile $\Delta\rho(r)$
with $r=r_\Delta x$ is thus given by
\begin{eqnarray}
\frac{d\ln\Delta\rho}{d\ln{r}}
=\frac{d\ln{f_{\Delta}}}{d\ln{x}}
&=& \left(
x\frac{df_\mathrm{inner}}{dx} f_\mathrm{trans} +
xf_\mathrm{inner}\frac{df_\mathrm{trans}}{dx} +
x\frac{df_\mathrm{outer}}{dx}
\right) \Big/ f_{\Delta}\\
&=&
-\left[
2(c_\Delta x)^\alpha f_\mathrm{inner}f_\mathrm{trans}
+
\gamma\frac{(x/\tau_\Delta)^\beta}{1+(x/\tau_\Delta)^\beta}
f_\mathrm{inner}f_\mathrm{trans}
+
s_\mathrm{e} \frac{x^{s_\mathrm{e}}}{\epsilon_\Delta+x^{s_\mathrm{e}}}f_\mathrm{outer}
\right] \Big/ f_{\Delta}.
\end{eqnarray}
For the Einasto model ($f_\mathrm{trans}=1$, $f_\mathrm{outer}=0$),
$d\ln{\Delta\rho}/d\ln{r}=-2(c_\Delta x)^\alpha=-2(r/r_\mathrm{s})^\alpha$.
In this work, the splashback radius $R_\mathrm{sp}^\mathrm{3D}$ is
defined as the location of the steepest slope of the three-dimensional
mass distribution $\Delta\rho(r)$.
For a given set of the DK14 model parameters, we find the scaled
splashback radius $x_\mathrm{sp}\equiv R_\mathrm{sp}^\mathrm{3D}/r_\Delta$ from
\begin{equation}
\label{eq:Rsp}
x_\mathrm{sp} = \mathop{\rm arg~min}\limits_x \frac{d\ln{f_{\Delta}}}{d\ln{x}}.
\end{equation}
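The scaled profile, its logarithmic slope, and the minimization in Equation (\ref{eq:Rsp}) can be sketched numerically as follows (a minimal NumPy implementation of the expressions above; the parameter values used in the test are illustrative, not fits from this work):

```python
import numpy as np

def f_delta(x, c, alpha, tau, beta, gamma, B, s_e, eps):
    # Scaled DK14 profile: f_inner * f_trans + f_outer
    f_inner = np.exp(-(2.0 / alpha) * c**alpha * (x**alpha - 1.0))
    f_trans = (1.0 + (x / tau)**beta) ** (-gamma / beta)
    return f_inner * f_trans + B / (eps + x**s_e)

def dlnf_dlnx(x, c, alpha, tau, beta, gamma, B, s_e, eps):
    # Analytic logarithmic gradient from the term-by-term derivatives above
    f_inner = np.exp(-(2.0 / alpha) * c**alpha * (x**alpha - 1.0))
    f_trans = (1.0 + (x / tau)**beta) ** (-gamma / beta)
    f_outer = B / (eps + x**s_e)
    num = (2.0 * (c * x)**alpha * f_inner * f_trans
           + gamma * (x / tau)**beta / (1.0 + (x / tau)**beta) * f_inner * f_trans
           + s_e * x**s_e / (eps + x**s_e) * f_outer)
    return -num / (f_inner * f_trans + f_outer)

def x_splashback(params, x_min=0.1, x_max=5.0, n=2000):
    # Grid search for the location of the steepest slope (Eq. for x_sp)
    x = np.logspace(np.log10(x_min), np.log10(x_max), n)
    return x[np.argmin(dlnf_dlnx(x, **params))]
```

In the Einasto limit (large $\tau_\Delta$, $B_\Delta=0$) the slope reduces to $-2(c_\Delta x)^\alpha$ as stated above, which serves as a useful unit test of any implementation.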
The ratio of the splashback mass
$M_\mathrm{sp}^\mathrm{3D}=M(<R_\mathrm{sp}^\mathrm{3D})$
to the overdensity mass $M_\Delta=M(<r_\Delta)$ is given by
\begin{equation}
\label{eq:Msp}
\frac{M_\mathrm{sp}^\mathrm{3D}}{M_\Delta}=\frac{\int_0^{x_\mathrm{sp}}\!dx\,x^2f_{\Delta}(x)}{\int_0^1\!dx\,x^2f_{\Delta}(x)}.
\end{equation}
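The enclosed-mass ratio in Equation (\ref{eq:Msp}) reduces to two one-dimensional quadratures; a self-contained sketch using simple trapezoidal integration (the default parameter values are illustrative, not fits from this work):

```python
import numpy as np

def f_delta(x, c=4.0, alpha=0.2, tau=1.2, beta=6.0, gamma=4.0,
            B=0.01, s_e=1.5, eps=1e-3):
    # Scaled DK14 profile defined above
    f_inner = np.exp(-(2.0 / alpha) * c**alpha * (x**alpha - 1.0))
    f_trans = (1.0 + (x / tau)**beta) ** (-gamma / beta)
    return f_inner * f_trans + B / (eps + x**s_e)

def mass_ratio(x_sp, n=4000):
    # M_sp / M_Delta as the ratio of two radial quadratures of x^2 f_Delta(x)
    def enclosed(x_hi):
        x = np.linspace(1e-6, x_hi, n)
        y = x**2 * f_delta(x)
        return 0.5 * np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]))  # trapezoid rule
    return enclosed(x_sp) / enclosed(1.0)
```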
\section{Synthetic Weak-lensing Data}
\label{appendix:mock}
We create a total of 50 source realizations of synthetic shear and
magnification catalogs for our sample of 16 CLASH clusters, each of
which is modeled as a spherical NFW halo specified by its redshift
$z_\mathrm{l}$ (Table \ref{tab:sample}) and its parameters
$M_\mathrm{200c}$ and $c_\mathrm{200c}$, which were fixed to the observed
central values (Table 2 of U16).
For each NFW cluster, we consider two configurations of the outer density
profile, one with and one without a splashback-like feature. For the latter,
the total density profile is given by a single NFW profile,
$\Delta\rho(r) = \rho_\mathrm{NFW}(r|M_\mathrm{200c},c_\mathrm{200c},z)$.
For the former, we employ a composite lens model that
produces an approximate splashback feature. We cannot use the DK14 profile directly
because it is not implemented in the \textsc{glafic} software used to perform ray-tracing simulations
as described below. However, the code does implement a steepening 1-halo term
given by the truncated NFW profile of \citet[][BMO]{BMO}, $\rho_\mathrm{BMO}(r)$.
We approximate the 2-halo term by a softened isothermal (SI) profile,
$\rho_\mathrm{SI}(r)$:
$\Delta\rho=\rho_\mathrm{BMO} + \rho_\mathrm{SI}$.
The BMO density profile is expressed as
$\rho_\mathrm{BMO}(r)=\rho_\mathrm{NFW}(r|M_\mathrm{200c},c_\mathrm{200c},z)\times f_\mathrm{trans}(r|\beta,\gamma,r_\mathrm{t})$
with $\beta=2$ and $\gamma=4$.
The truncation radius $r_\mathrm{t}$ is set to
$r_\mathrm{t}=1.1r_\mathrm{vir}\approx r_\mathrm{200m}$.
We take the outer SI profile to be
$\rho_\mathrm{SI}(r)=\rho_\mathrm{c}/[1+(r/r_\mathrm{c})^2]$
with
$\rho_\mathrm{c}=1.5\times 10^{12}h^2\,M_\odot\,\mathrm{Mpc}^{-3}$
and
$r_\mathrm{c}=2.2h^{-1}\,\mathrm{Mpc}$,
so as to give a splashback feature at
$R_\mathrm{sp}^\mathrm{3D}/r_\mathrm{200m}\sim 0.9$ for our CLASH sample.
Here, the normalization of the SI profile is chosen to be three times
lower than that of the 2-halo term in projection
\citep[][see their Figure 7]{Umetsu2014clash},
because the standard halo model
($\Delta\rho =\rho_\mathrm{BMO}+\rho_\mathrm{2h}$), by design, does not
produce a steepening relative to the NFW profile
\citep[][see their Figure 2]{Oguri+Hamana2011}.
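The BMO+SI composite lens can be sketched numerically to verify that it produces a steepening relative to the NFW profile in the expected radial range. In the sketch below, $\rho_\mathrm{c}$ and $r_\mathrm{c}$ follow the text, while the NFW normalization ($\rho_\mathrm{s}$, $r_\mathrm{s}$) and $r_\mathrm{t}$ are illustrative values chosen to roughly resemble the CLASH ensemble, not values from this paper:

```python
import numpy as np

# Illustrative NFW parameters (roughly M200c ~ 1e15 Msun/h, c ~ 3.7)
RHO_S, R_S = 1.65e15, 0.40     # h^2 Msun Mpc^-3, Mpc/h (assumed)
R_T = 1.9                      # ~ 1.1 r_vir ~ r_200m in Mpc/h (assumed)
RHO_C, R_C = 1.5e12, 2.2       # SI parameters from the text

def rho_nfw(r):
    x = r / R_S
    return RHO_S / (x * (1.0 + x)**2)

def rho_bmo(r):
    # BMO truncation with (beta, gamma) = (2, 4): [1 + (r/r_t)^2]^-2
    return rho_nfw(r) * (1.0 + (r / R_T)**2) ** (-2.0)

def rho_composite(r):
    return rho_bmo(r) + RHO_C / (1.0 + (r / R_C)**2)

r = np.logspace(-1, np.log10(5.0), 1000)   # 0.1-5 Mpc/h
slope = np.gradient(np.log(rho_composite(r)), np.log(r))
r_steep = r[np.argmin(slope)]
```

For these choices, the steepest logarithmic slope of the composite falls below $-3$ at $r\sim r_\mathrm{200m}$, whereas the pure NFW slope never reaches $-3$.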
In general, source galaxy catalogs used for the shear and the
magnification analysis are different because we apply different size,
magnitude, and color cuts in source selection for measuring the
shear and the magnification effects \citep{Umetsu2014clash}.
We assume, for simplicity, that the two galaxy samples are identical.
We ignore the cosmic noise contribution from projected
uncorrelated large-scale structures here because it is subdominant in
the total error budget (see Figure 1 of U16).
For all clusters, we assume the same survey parameters and source
properties as described below.
To produce synthetic magnification-bias data sets, we perform
ray-tracing simulations
with both NFW and BMO+SI lenses
\citep{2000ApJ...534...34W,BMO,Oguri+Hamana2011}
using the public package \textsc{glafic} \citep{glafic}.
We assume a maximally depleted sample of sources with
$s\equiv d\log_{10}N(<m)/dm=0$, for which the effect of
magnification bias is purely geometric\footnote{
For a depleted population of sources with
$s<0.4$, the net effect of magnification bias is
dominated by the geometric area distortion \citep{Umetsu2013}, and is
insensitive to the intrinsic source luminosity function. This is the
case for the $BR_\mathrm{C}z'$-selected red galaxy samples with
$\langle s\rangle \sim 0.15$
used for the magnification bias measurements of
\citet{Umetsu2014clash}.}
and their lensed source counts can be inferred from the lensed image
positions.
We randomly distribute $N_\mathrm{s}=14,336$ source galaxies over an area
of $32\times 32$\,arcmin$^2$ in the source plane centered on the cluster.
This corresponds to an unlensed source density of
$\overline{n}_\mathrm{s}=14$ galaxies per arcmin$^{2}$,
matched to the typical (median) value found in the CLASH weak-lensing
observations of \citet[][their Table 4]{Umetsu2014clash}.
The source plane is placed at a redshift of $z_\mathrm{s}=1.0$, the
median depth of their magnification samples \citep{Umetsu2014clash}.
For creation of synthetic shear catalogs, we draw $N_\mathrm{s}$ random
source ellipticities from the Gaussian intrinsic ellipticity
distribution given by Equation (12) of \citet{Schneider+2000}, with the
rms intrinsic ellipticity assumed to be $0.3$.
For each galaxy, we transform the source ellipticity into the
image ellipticity at the image position according to Equation (1) of
\citet{Schneider+2000}.
Our simulations include the following major steps of data analysis,
signal reconstruction, and modeling processes:
\begin{enumerate}
\item Measurements of the reduced tangential shear and magnification
bias profiles in $N_\mathrm{WL}=10$ log-spaced radial bins over the radial range
$\theta\in [0.9\arcmin,16\arcmin]$ (Section \ref{subsec:data})
from the respective input source catalogs, following the analysis
procedures described in \citet{Umetsu2014clash} and U16.
\item Reconstructions of the projected mass profile ($\bSigma$) from the binned
shear and magnification constraints obtained in the first step.
\item Ensemble characterization of the cluster $\bSigma$ profiles using
the scaled DK14 model and the priors described in Sections
\ref{subsec:DK14} and \ref{subsec:bayesian}.
\end{enumerate}
For the second step, we use the cluster lensing mass inversion
(\textsc{clumi}) code \citep{Umetsu+2011,Umetsu2013}
as implemented in U16 but without using inner strong-lensing
constraints, and assume perfect knowledge of the source properties,
namely
$z_\mathrm{s}=1.0$,
$\overline{n}_\mathrm{s}=14$ galaxies arcmin$^{-2}$,
and $s=0$.
Otherwise,
synthetic data are processed in the same manner as the CLASH data
described in Section \ref{subsec:data}.
In the third step, we find, for each source realization, the global
maximum a posteriori
estimate of the joint posterior distribution to infer the best-fit
DK14 parameters.
\end{appendix}
\end{document}
\section{Introduction}
\label{sec:intro}
Access to daily news content is challenging for blind, low-vision, and otherwise print-disabled individuals~\cite{manik}. Online news websites are not screen-reader friendly: they are cluttered with menus, ads, popups, and sidebars, so reading through them consumes minutes before a screen reader reaches the actual article content, and this must be repeated for each and every article. News on social media is often of low quality, and print newspapers are not accessible at all.
We attempted to solve the accessible news problem by applying computer vision to print newspapers, specifically segmenting articles and performing OCR; however, we discovered that existing approaches produced poor-quality results. The primary problem was the lack of ground-truth at a scale suitable for training. We used automated means to collect additional ground-truth, which was of lower quality than expected. Section~\ref{sec:experiments:data} details these challenges and the data cleaning process we ultimately devised to build a high-quality newspaper article segmentation dataset.
Training a segmentation model on this dataset and performing OCR still resulted in high Word and Character Error Rates (WER/CER). We traced this high error rate to two contributors. The first was the inherent WER/CER of the OCR engine, which was fairly low given the nature of newspaper print. The predominant factor was small errors in the segmentation boundary, sometimes of even a single pixel, which would render characters on the \emph{edges} of the article unrecognizable. This is fundamentally because the cross-entropy mask loss, which underlies most segmentation models, is largely insensitive to \emph{boundary} pixels as long as there is high area overlap. Section~\ref{sec:methodology:edgemask} details the problem with the mask loss in this regard, and our enhanced loss function, EdgeMask, that addresses it.
Overall, this work makes three contributions. First, we present our approach to scale ground-truth for newspaper digitization. Second, we propose the EdgeMask loss function for news article segmentation to help improve accuracy of downstream OCR tasks. And finally, we report on experimental results of a 32.5\% reduction in WER in newspaper digitization.
\section{Background}
\label{sec:background}
Past work has extracted visual content from and digitized historical newspapers for search and archival purposes~\cite{newspinflib, googlenewspaper, navigator}. The Google Newspaper Search project~\cite{googlenewspaper} was one of the largest efforts to digitize newspapers, index them, and make article contents discoverable via a search engine. \cite{navigator} proposed a pipeline for extracting and searching visual content from historic print newspaper scans. Fully convolutional segmentation approaches have been applied by \cite{fcnarticle} and \cite{dnnnews} to extract content blocks from newspaper images.
Besides newspapers, there have been similar efforts to digitize general PDF documents and make them accessible. \cite{historyocr} developed an OCR system for historical documents. \cite{pdfaccessible} proposed various techniques to improve the accessibility of scanned PDFs for visually impaired readers.
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\linewidth]{plots/pipeline.png}
\caption{Illustration of the newspaper digitization pipeline}
\label{fig:newspipeline}
\end{figure*}
In the newspaper space, there have been a number of efforts to digitize entire newspapers, or some of their elements, for search-related purposes. In the general print document space, there has been work on both digitization and accessibility for people with vision impairments. To our knowledge, however, there has been no prior work on making print newspapers accessible to print-disabled people, including blind and low-vision readers.
\section{Methodology}
\label{sec:methodology}
This section details the newspaper digitization process and the problem with the mask loss of Mask-RCNN-based architectures. It also describes our proposed EdgeMask loss function for mitigating digitization issues and the metrics used for evaluation.
\subsection{Newspaper digitization}
\label{sec:methodology:newsdigi}
The process of digitizing a print newspaper is divided into three steps: first, understanding the layout of the newspaper to segment news article blocks; second, detecting elements such as headlines, images, graphical illustrations, and paragraph text within a news article; and third, applying OCR to extract text and structuring it based on the semantics of the content. A newspaper page contains different blocks of content, such as news articles, ads, illustrations, headers, and footers. Among these, the most important is the news article content, which is all we digitize at present.
The news articles are cropped out using an instance segmentation model based on the Mask RCNN architecture. A typical article contains a headline, article text, and images or graphical illustrations. After segmentation, the headlines and graphical illustrations are detected and masked out as white pixels. The remainder is the article text, which is then extracted using an OCR engine. Similarly, the headline is extracted by cropping the detected headline region and applying OCR. The digitized article is generated in markdown format, with the headline as a header and the article text as paragraph text. A digitized, accessible newspaper is produced by collating all articles into a single file, with all headlines in a table of contents at the top of the page. A sample output is shown in Fig. \ref{fig:htmlnews}, and Fig. \ref{fig:newspipeline} illustrates the end-to-end digitization pipeline. Section \ref{sec:experiments:training} details the complete process of training both the news article segmentation model and the headline detection model.
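The collation step can be sketched as follows (illustrative only; function and field names are ours, not the production code, and the markdown structure simply mirrors the description above):

```python
def collate_articles(paper_name, date, articles):
    """Collate (headline, body) pairs into a single markdown document:
    newspaper info on top, then a table of contents, then each article
    with its headline as a header and its text as paragraph text."""
    lines = [f"# {paper_name}, {date}", "", "## Contents"]
    lines += [f"- {headline}" for headline, _ in articles]
    for headline, text in articles:
        lines += ["", f"## {headline}", "", text]
    return "\n".join(lines)
```

The resulting single file lets a screen-reader user jump from the table of contents directly to any article without wading through page clutter.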
\subsection{EdgeMask loss}
\label{sec:methodology:edgemask}
We used the Mask-RCNN framework and adapted its architecture for training the news article segmentation model. We observed high word and character error rates after OCR, primarily due to text getting chopped off at the segmentation boundaries. The original loss function of Mask-RCNN is defined as a multi-task loss, as shown in Eq.~\ref{maskrcnnloss}:
\begin{equation}
L = L_{cls} + L_{box} + L_{mask}
\label{maskrcnnloss}
\end{equation}
\noindent where $L_{cls}$ is the classification loss, $L_{box}$ is the bounding box regression loss, and $L_{mask}$ is the average binary cross-entropy loss over all pixels in the mask. This formulation works well for general object instance segmentation because even if the predictions are off by a few pixels at the boundary, the end result of identifying an object is not affected. Newspaper digitization, on the other hand, is very sensitive to such errors: if the predicted boundary falls inside the ground-truth boundary, it misses the last lines of text, and if it extends slightly beyond, it includes text from neighboring articles. Both scenarios are undesirable for our end task of digitizing the news article contents.
We tackled this problem by proposing a new loss function, the EdgeMask loss, for the Mask RCNN framework. The mask loss $L_{mask}$ is modified to improve the accuracy of the segmentation at the boundary: pixels at the boundary of the mask are weighted more heavily than pixels in the interior, so that misclassifying a boundary pixel is penalized more. The EdgeMask loss is defined as follows:
\begin{equation}
\label{eq:edgemask}
L_{EdgeMask} = \frac{1}{m^2} [\sum_{\substack{(i,j) \in I}}^{} h_{ij} + \lambda \sum_{(p,q) \in B}^{} h_{pq}]
\end{equation}
where $B$ is the set of points at the boundary of the mask within a window of size $k$, $I$ is the set of points interior to the mask, $\lambda$ is the weight for the boundary pixels, and $h_{ij}$ is the binary cross-entropy for a pixel $(i,j)$. For each pixel in the RoI of size $m \times m$, the binary cross-entropy is defined as shown in Eq.~\ref{eq:pixelbce}:
\begin{equation}
\label{eq:pixelbce}
h_{ij} = - [y_{ij} \log \hat{y_{ij}} + (1-y_{ij}) \log (1-\hat{y_{ij}})]
\end{equation}
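A minimal pure-Python sketch of Eq.~\ref{eq:edgemask} follows (illustrative only; in training the weighting is applied inside the Mask-RCNN mask head, and here we read $B$ as the set of pixels whose $k$-window contains both mask and background ground-truth pixels, which is one plausible reading of the definition):

```python
import math

def is_boundary(y_true, i, j, k):
    """Pixel (i, j) lies on the mask boundary if its k-window contains
    both mask (1) and background (0) ground-truth pixels."""
    m = len(y_true)
    window = {y_true[a][b]
              for a in range(max(0, i - k), min(m, i + k + 1))
              for b in range(max(0, j - k), min(m, j + k + 1))}
    return len(window) > 1

def edgemask_loss(y_true, y_pred, lam=100.0, k=1, eps=1e-7):
    """Average per-pixel binary cross-entropy over an m x m RoI,
    with boundary pixels up-weighted by lam."""
    m = len(y_true)
    total = 0.0
    for i in range(m):
        for j in range(m):
            p = min(max(y_pred[i][j], eps), 1.0 - eps)  # clip for stability
            y = y_true[i][j]
            h = -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
            total += lam * h if is_boundary(y_true, i, j, k) else h
    return total / (m * m)
```

With $\lambda=1$ this reduces to the vanilla mask loss; larger $\lambda$ makes boundary misclassifications increasingly expensive.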
\subsection{Metrics}
\label{sec:methodology:metrics}
All standard metrics for object detection or segmentation tasks rest on the fundamental concept of average precision (AP), computed as the area under the precision-recall curve, where true and false positives are determined by an IoU threshold between the predicted and ground-truth boxes. Common metrics include AP, [email protected], and AP@[.5:.05:.95] for each class, along with mAP averaged over all classes.
While IoU-based metrics work fine for generic object detection or segmentation tasks, they are not suitable for our application, where the objective is to segment each article region independently. As mentioned in Section \ref{sec:methodology:edgemask}, the digitization problem is sensitive to predictions at the boundary of the mask, and typical IoU metrics do not capture this. Consider a scenario in which the predicted mask is just a few pixels shorter than the ground-truth mask, missing the last one or two lines of the article text: although the IoU is still very high, the digitized text is incomplete and may be rendered meaningless. This calls for an alternative metric that is representative of the task at hand.
In our experiments, we use Character Error Rate (CER) and Word Error Rate (WER) to measure the correctness of the final digitized text. We apply OCR to the region given by the predicted segmentation mask and compare the extracted text with the ground-truth text.
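A minimal reference implementation of these metrics (edit distance at the word and character level; the production code may use standard packages instead):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via a rolling-row DP."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def wer(ref, hyp):
    """Word Error Rate: word-level edit distance over reference length."""
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def cer(ref, hyp):
    """Character Error Rate: char-level edit distance over reference length."""
    return edit_distance(ref, hyp) / len(ref)
```

Both rates are reported as percentages in Tables \ref{tab:cerwer}.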
\section{Experiments}
\label{sec:experiments}
This section describes the dataset, the process used to generate good-quality ground-truth, and the training of the vision models in our digitization pipeline. We also discuss results for various hyperparameters and illustrate both the segmentation output and the final digitized, accessible news.
\subsection{Data}
\label{sec:experiments:data}
The data consists of 5600 images of print newspapers with two types of ground-truth: 1) segmentation masks for five categories of elements (article, ad, header, headline, and graphical illustration), and 2) the text string for each article in the newspaper. The primary data challenge was the lack of readily available ground-truth, so we scraped the internet for newspaper images with any kind of bounding box or segmentation annotations.
For the majority of the data, the ground-truth segmentations either did not exist or were of low quality, with overlapping masks that did not span the actual content area. In addition, the masks lacked labels identifying the category of each element. We improved the ground-truth segmentations and added labels through a combination of manual labeling, heuristics, and labeling by vision models.
First, we created a high-quality seed dataset by manually labeling 100 images, which was then used to fine-tune a Faster-RCNN object detection model pre-trained on the COCO dataset. Using the fine-tuned model, we generated labels and bounding box annotations for all the newspaper images. The quality of the annotations was further improved using text bounding box outputs from an OCR engine: we intersect the bounding boxes predicted by the fine-tuned Faster-RCNN model with the text bounding boxes from the OCR engine to get a tighter, high-quality rectangular fit for every news article. A heuristic based on font size and the OCR bounding box results distinguishes headlines from article text. With a combination of such heuristics, very small-scale manual labeling, and large-scale labeling by vision models, we generated good-quality ground-truth for the digitization task.
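The box-tightening step described above can be sketched as follows (coordinates and helper names are illustrative, not the production code): the detector box is shrunk to the hull of the OCR text boxes it overlaps.

```python
def intersect(a, b):
    """Intersection of two (x1, y1, x2, y2) boxes, or None if disjoint."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def tighten(det_box, ocr_boxes):
    """Shrink a detector box to the hull of the OCR text boxes it overlaps;
    fall back to the original box if no text box overlaps it."""
    hits = [h for h in (intersect(det_box, t) for t in ocr_boxes) if h]
    if not hits:
        return det_box
    return (min(h[0] for h in hits), min(h[1] for h in hits),
            max(h[2] for h in hits), max(h[3] for h in hits))
```

Because OCR text boxes hug the printed glyphs, the tightened rectangle excludes whitespace and neighboring-column bleed that the raw detector box may include.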
\begin{figure}
\centering
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=1.1\linewidth]{plots/before.jpg}
\caption{vanilla Mask-RCNN loss}
\label{fig:short-a}
\end{subfigure}
\hspace{0.3in}
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=1.1\linewidth]{plots/after.jpg}
\caption{EdgeMask loss, $\lambda=100$}
\label{fig:short-b}
\end{subfigure}
\caption{Segmentation output with the original mask loss (a) and the improved article boundaries obtained with the EdgeMask loss (b).}
\label{fig:short}
\end{figure}
\subsection{Training}
\label{sec:experiments:training}
The digitization pipeline works as follows: given a newspaper page, an instance segmentation model crops out all newspaper articles. Each cropped article is passed through a detection model to detect two classes of elements, headlines and images. These elements are then masked, and the article image is fed through an OCR engine to extract the article text. Similarly, the headline region is cropped and digitized using OCR. The news article is then reconstructed from the digitized headline and article text.
We trained two vision models for our digitization pipeline: 1) an instance segmentation model based on the Mask-RCNN architecture for segmenting news articles, and 2) an object detection model based on the Faster-RCNN architecture for detecting headlines and images within news article blocks.
We adopted the Mask-RCNN framework from the Detectron2 \cite{detectron2} model zoo for the identification and segmentation of news article blocks from images of print newspapers. The architecture was based on the ResNet50 feature pyramid network (FPN) base config, and the loss criterion was modified with our proposed EdgeMask loss. The model was trained for 50k iterations on 5600 print newspaper images. We experimented with multiple values of the EdgeMask hyperparameter, $\lambda$ = 1, 30, 100, 1000, and 10000, and found $\lambda=100$ to be optimal.
After cropping the news articles, we use a Faster-RCNN object detection model to detect headlines and images. This model was also adopted from the Detectron2 model zoo and was trained on 260,000 news article block images.
We evaluated the performance of our pipeline using the CER and WER metrics. Table \ref{tab:cerwer} shows WER and CER values for different values of $\lambda$, computed on boundary text. Table \ref{tab:apscore} shows various AP-based metrics for both vision models.
\begin{table}
\centering
\begin{tabular}{c|c|c}
\toprule
$\lambda$ & WER \% & CER \% \\
\midrule
1 & 8.89 & 3.19 \\
10 & 8.10 & 2.82 \\
30 & 7.17 & 2.35 \\
\bf{100} & \bf{6.00} & \bf{1.77} \\
1000 & 15.66 & 4.81 \\
\bottomrule
\end{tabular}
\caption{Word and Character Error Rate (WER/CER) of boundary text for various $\lambda$}
\label{tab:cerwer}
\end{table}
\begin{table}
\centering
\setlength{\tabcolsep}{4pt}
\begin{tabular}{@{}c|ccccc@{}}
\toprule
\footnotesize Model & \footnotesize AP & \footnotesize AP@$.50$ & \footnotesize AP@$.75$ & \footnotesize AP$_{m}$ & \footnotesize AP$_l$ \\
\midrule
\footnotesize Article segmentation & \footnotesize 86.11 & \footnotesize 93.46 & \footnotesize 91.85 & \footnotesize 30.30 & \footnotesize 86.71 \\
\footnotesize Headline detection & \footnotesize 84.31 & \footnotesize 91.79 & \footnotesize 89.53 & \footnotesize 62.63 & \footnotesize 85.53 \\
\bottomrule
\end{tabular}
\caption{AP-based metrics for the article segmentation and headline detection models}
\label{tab:apscore}
\end{table}
\begin{figure}
\centering
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=1.1\linewidth]{plots/news_headline.jpg}
\label{fig:html-a}
\end{subfigure}
\hspace{0.3in}
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=1.1\linewidth]{plots/news_article.jpg}
\label{fig:html-b}
\end{subfigure}
\caption{Digitized newspaper in HTML}
\label{fig:htmlnews}
\end{figure}
\section{Discussion}
\label{sec:discussion}
In Fig. \ref{fig:htmlnews}, we show the HTML file generated by our newspaper digitization pipeline. The top of the page contains information about the newspaper, such as its name and date, followed by a table of contents. The digitized version was generated by collating text from all the article blocks independently. In a few newspapers, we noticed that some articles are split across two pages, with only a gist of the information on the first page. Our pipeline currently treats such cases as two separate articles; since this design is fairly common across newspapers, we plan to handle it in future work.
A newspaper contains various kinds of graphical elements and visual illustrations, e.g., images, tables, plots, and comics. Our pipeline detects such illustrations but currently ignores them during digitization. To make a newspaper truly digitally accessible, one of the most important directions to pursue is automatically generating alt-text for visual illustrations and meaningful descriptions for plots, tables, and comics \cite{alt1, alt2, alt3, alt4}.
\section{Conclusion}
\label{sec:conclusion}
In this work, we focus on digitizing print newspapers and thereby making them accessible to print-disabled individuals. We adopted state-of-the-art computer vision techniques and proposed modifications suited to our use-case. In particular, we proposed the EdgeMask loss function for the Mask-RCNN framework to improve segmentation mask prediction at the boundary. This significantly improved the final digitized news article text, with a 32.5\% reduction in word error rate compared to the vanilla loss criterion. We also shared how we tackled data challenges such as the lack of ground-truth, using a combination of manual labeling, heuristics, and labeling by a vision model. This is our first stepping stone towards our broader goal of making all kinds of printed content accessible.
{\footnotesize
\bibliographystyle{ieee_fullname}
\section{Introduction}
Spurred by the discovery of topological insulators, topological phases
have become a vital part of condensed matter physics over the last
decade\cite{QiSCZhangTIReview,HasanKaneReview,QiSCZhangTITSCReview,HasanMooreReview}.
Even in the absence of interactions, a wide variety of gapped topological
phases of fermions are now known, ranging from the quantum Hall\cite{HaldaneQHE88,TKNN}
and the quantum spin Hall\cite{KaneMeleQSH,KaneMeleZ2,RoyQSH3D,FuKaneMeleTI3D,Bernevig2006,BernevigZhangHgTe,Roy2DZ2}
insulators among insulators to the chiral $p$-wave superconductor\cite{ReadGreenP+ipFQHE00,KallinSrRuOreview}
and the $B$ phase of Helium-3\cite{OsheroffHe3SF,LeggettHe3SF} among
superconducting phases. All these phases share some common features:
as long as certain symmetry conditions are upheld, they have a bulk
band structure that cannot be deformed into that of an atomic insulator
-- a \emph{trivial} insulator by definition -- without closing the
band gap along the way. Moreover, they all have robust surface states
that mediate unusual transport immune to symmetry-respecting disorder. This leads one to wonder whether all gapped phases of free fermions can be unified within a common mathematical framework.
Two different approaches have been developed to provide a unified characterization of gapped phases of free fermions. In the topological band theory approach \cite{KitaevClassification,Ryu2010,SFRLClassification}, homotopy theory and $K$-theory are applied to classify free fermion Hamiltonians in a given spatial dimension and symmetry class. Topological band theory provides a complete topological classification of free fermion gapped states in all dimensions and all the $10$ Altland-Zirnbauer symmetry classes\cite{altland1997}. However, it does not directly describe physical properties of the topological states. In comparison, the topological response theory approach\cite{QiHughesZhangTFT,EssinMPAxion,wang2011,ryu2012,qi2013} describes topological phases by topological terms in their response to external gauge fields and gravitational fields. The advantage of this approach is that the topological phases are characterized by physically observable topological effects, so that the robustness of the topological phase is explicit and more general than in the topological band theory. Since it is insensitive to details of the microscopic Hamiltonian, a response-theory-based classification
scheme can be further extended to strongly interacting systems\cite{WangInteractingTI}.
Recently, the advent of Weyl semimetals (WSMs) has triggered interest
in gapless topological phases of free fermions\cite{PyrochloreWeyl,WeylMultiLayer,BurkovNodalSemimetal,TurnerTopPhases,VafekDiracReview,KrempaWeyl,HosurWeylReview}.
These phases are topological in the sense that they cannot be gapped
out perturbatively as long as momentum and charge are conserved. In
this regard, ordinary metals are also topological since their Fermi
surfaces are robust in the absence of instabilities towards density
waves or superconductivity. Additionally, gapless topological phases
may have non-trivial surface states such as Fermi arcs\cite{PyrochloreWeyl,HosurFermiArcs,PotterFermiArcOsc,HaldaneFermiArc}
and flat bands\cite{VolovikFlatBands}. Teo and Kane\cite{TeoKaneTopologicalDefects}
applied homotopy arguments to classify topological defects such as
vortices and dislocations in gapped phases; Matsuura \emph{et al.}\cite{Matsuura2013} used an analogous prescription to
classify gapless phases by observing that gapless regions in momentum
space such as Fermi surfaces and Dirac nodes can be viewed as topological
defects in momentum space in a gapped system. Thus, a common mathematical
formalism to describe the Bloch Hamiltonians of gapless phases was
derived.
Unlike their gapped counterparts, however, it is not clear whether
the response theories of gapless phases are amenable to a unified
description. For gapped systems, the path from the Hamiltonian
to the response theory is conceptually straightforward: the fermions
are coupled to gauge fields and integrated out to get the low energy
effective field theory, which describes the topological response properties.
In contrast, the low energy theory of gapless phases contains fermions
as well as gauge fields, and is distinct from the response theory
which contains only gauge fields. Thus, it is not obvious how the
topological properties of the Hamiltonian affect the response. The
response depends on system details in general and therefore, recognizing
its universal features and then unifying the responses of various
gapless phases is a non-trivial task. A few cases of topological response properties of gapless fermions have been studied. One example is the intrinsic anomalous Hall effect of a two-dimensional Fermi gas\cite{karplus1954,HaldaneAHE,jungwirth2002,XiaoBerryReview,nagaosa2010}. A generalization of this effect in three-dimensional doped topological insulators has been discussed\cite{barkeshli2011}. Another example is the topological response of Weyl semimetals, which has been described in the form of the axial anomaly\cite{adler1969,bell1969,NielsenFermionDoubling1,NielsenFermionDoubling2,NielsenABJ,ZyuninBurkovWeylTheta,ChenAxionResponse,AjiABJAnomaly,QiWeylAnomaly,HosurWeylReview,BasarTriangleAnomaly,SonSpivakWeylAnomaly,LandsteinerAnomaly,GoswamiFieldTheory,GrushinWeyl}. This refers to the apparent charge conservation violation that occurs for each Weyl fermion branch in the presence of parallel electric and magnetic fields, although the net charge of the system must still be conserved. Recently, these ideas were generalized to find the topological responses of point Fermi surfaces in arbitrary dimensions\cite{HughesTopoSemimetal}. The DC conductivity of metals has also been proposed to be related to a phase space topological quantity \cite{HetenyiDrude}. However, a general theory that describes the topological properties of gapless fermions in a unified framework has not been developed yet.
In this work, we achieve the above goals for free fermions: we show that gapless systems have universal features, independent of system details, and derive
a unified description of their response. Remarkably, this description
also captures the response of gapped systems. In particular, the response
of gapped phases arises from the bulk response of a certain parent
topological phase, while the universal features of the gapless phases
correspond to its edge anomalies. We elucidate this idea below.
The backbone of our construction is a mapping from $n$-dimensional gapped \emph{or} gapless systems to a \emph{gapped} quantum Hall (QH) system which lives in $2n$-dimensional phase space. Such a phase space system has both bulk responses, given by a $2n$-dimensional Chern-Simons (CS) theory, and boundary (axial) anomalies. We identify the bulk responses with topological responses of insulators. However, the key insight that allows us to include gapless systems is to identify a Fermi surface in real space with a phase space boundary in the momentum directions. Likewise, real space excitations near the Fermi surface are identified with the gapless edge excitations in phase space. The universal features of the response of gapless systems are thus the anomalies associated with these phase space edges.
There is an important technical point required in order to bestow the $2n$-dimensional QH system with the interpretation of phase space. Specifically, we must choose the QH system to consist of a
massive Dirac fermion under $n$ uniform magnetic fields of strength
$B_{0}$ in $n$ orthogonal planes, then project to the zeroth Landau level (ZLL) of the total field. In this case, the projected
operators for pairs of dimensions acquire the usual canonical commutation
relations that relate ordinary real and momentum space (up to an overall
factor of $B_{0}$). This allows us to interpret the ZLL of the $2n$-dimensional QH system as phase space for the $n$-dimensional physical space.
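As a concrete illustration of this point (in units $\hbar=e=1$, with the sign fixed by the orientation of the field), projecting the coordinates of a single plane, say $(x_{1},x_{2})$, onto the ZLL of the field $B_{0}$ in that plane leaves guiding-center coordinates obeying
\begin{equation}
[\hat{x}_{1},\hat{x}_{2}]=\frac{i}{B_{0}},
\end{equation}
so that $\hat{k}_{1}\equiv B_{0}\hat{x}_{2}$ satisfies the canonical relation $[\hat{x}_{1},\hat{k}_{1}]=i$. In this sense $\hat{x}_{2}$ plays the role of the momentum conjugate to $\hat{x}_{1}$, up to the overall factor of $B_{0}$ mentioned above.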
We interpret additional perturbations in the phase space gauge fields as physical quantities such as the $n$-dimensional system's electromagnetic (EM)
field, strain field, Berry curvatures, and Hamiltonian. Topological defects, such as monopoles, in the phase space gauge fields allow us to generalize to systems with dislocations and with point Fermi surfaces such as graphene and Weyl semimetals. These ideas are summarized in Table \ref{table:dictionary}.
\begin{table*}
\begin{centering}
\begin{tabular}{|c|c|}
\hline
\textbf{Phase space} & \textbf{Real space} \tabularnewline
\hline
Bulk responses & Quantized insulator responses\tabularnewline
\hline
Anomalies from momentum direction edges & Gapless response\tabularnewline
\hline
Anomalies from real direction edges & Real edge anomalies \tabularnewline
\hline
Gauge field strength & EM field strength/$k$-space Berry curvature/Strain \tabularnewline
\hline
Monopole in gauge field & Magnetic monopole/Weyl node/dislocation \tabularnewline
\hline
\end{tabular}\caption{Dictionary for interpreting phase space quantities in real space.}
\par\end{centering}
\label{table:dictionary}
\end{table*}
This construction enables us to systematically enumerate all possible
intrinsic, topological responses to electromagnetic and strain fields
in the DC limit in any given dimension. One simply has to write down
the Chern-Simons action in phase space, vary it with respect to each
gauge field, and consider each boundary to obtain all the bulk, boundary
and gapless responses in real space. Following
this procedure, we show carefully that screw dislocations in Weyl semimetals
trap chiral modes which are well-localized around the dislocation
at momentum values away from the Weyl nodes. A related but different effect has been studied previously \cite{jianWSMdisloc}. However, our framework provides a unified and natural description of this effect and other topological effects.
It is crucial that the $2n$-dimensional system be gapped even in the absence of the background magnetic fields
of strength $B_{0}$. This ensures that its response theory contains
terms depending on $B_{0}$ in addition to fluctuations in the gauge
fields. In $n$ dimensions, we will see that the $B_{0}$-dependent
terms translate into quasi-lower-dimensional responses, such as the
polarization of a system of coupled chains along the chains. If the
$2n$-dimensional system is gapless in the absence of the background
fields, such responses will be missed by the unified theory.
A caveat is that our construction does not capture responses to spatial,
momentum-space, and temporal variations in the field strengths, such as the
gyrotropic effect, which is an electric response to a spatially varying
electric field. Note that the regular Maxwell response, given by $j^{\mu}=\partial_{\nu}F^{\nu\mu}$,
is a response to a variation in the field strength. Another caveat
is that in phase space dimensions equal to $4$ and above, the Maxwell
term in the action is equally or more relevant than the Chern-Simons
term and hence will, in general, dominate the DC response. However,
our central objective is to demonstrate that there exists a theory
which unifies the responses of gapped and gapless systems, namely,
the phase space Chern-Simons theory.
The rest of this paper is structured as follows. In Section \ref{section:ZLL},
we review the key property of the ZLL which provides the physical
justification for our construction. In Section \ref{section:CSExample}
we explain the interpretation of the gauge fields in our mapping and
give an example illustrating the validity of the CS theory. In Section
\ref{section:diracModel} we write down an explicit model with a CS
response theory and show the precise way in which it behaves as the
phase space response theory of a lower-dimensional model. In Sections
\ref{section:responses} and \ref{section:anomalies} we explain the
responses and anomalies (respectively) that come from the CS theory
in various dimensions, applying our framework to describe spectral
flow in Weyl semimetals with dislocations. Finally, in Section \ref{section:discussion}
we summarize our work and suggest extensions of our theory to more
nontrivial systems.
\section{A Review of the Algebra of the Zeroth Landau Level}
\label{section:ZLL}
One of the key features that we use in the intuition for our approach
is the fact that projecting position operators to the ZLL yields nonzero
commutators between those operators. We now review this fact, for
concreteness as well as for later convenience, for the case of Dirac
electrons in a uniform magnetic field in two spatial dimensions in
Landau gauge. Although we consider the ZLL of Dirac electrons here,
the non-commutativity of position operators is simply a consequence
of minimal coupling and Landau quantization of cyclotron orbits and
hence is true for other dispersions as well as for other Landau levels
for a Dirac dispersion.
Consider a 2D massive Dirac Hamiltonian in a uniform magnetic field
\begin{equation}
H=(p_{x}-eBy)\sigma_{x}+p_{y}\sigma_{y}+m\sigma_{z}
\end{equation}
Here $\sigma_{i}$ are the Pauli matrices. We have set the Fermi velocity
to unity, written the electron charge as $-e$, and chosen the Landau
gauge $\mathbf{A}=-By\mathbf{\hat{x}}$ with $B>0$ for definiteness.
Note that $p_{x}$ commutes with the Hamiltonian, so we may replace
it by its eigenvalue. We can define an annihilation operator $a=(p_{x}-eBy-ip_{y})/\sqrt{2eB}$,
which has $[a,a^{\dagger}]=1$, and the Hamiltonian becomes
\begin{equation}
H=\begin{pmatrix}m & \sqrt{2eB}a\\
\sqrt{2eB}a^{\dagger} & -m
\end{pmatrix}
\end{equation}
It is straightforward to show that the eigenstates are labeled by
an eigenvalue $n\geq0$ of the number operator $a^{\dagger}a$, with
dispersion $\pm\sqrt{2eBn+m^{2}}$ for $n\neq0$. For $n=0$, the
eigenvalue is $-m$, the spin state is $\begin{pmatrix}0\\
1
\end{pmatrix}$, and the state is annihilated by $a$. This is the expected result
that the kinetic energy is quenched and the spectrum becomes discrete,
highly degenerate Landau levels.
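As a quick consistency check (ours, not part of the original argument), squaring the Hamiltonian makes the Landau level structure manifest:

```latex
% Squaring H, using [a,a^\dagger]=1, gives a spin-diagonal operator:
\begin{equation}
H^{2}=\begin{pmatrix}m^{2}+2eB\,aa^{\dagger} & 0\\
0 & m^{2}+2eB\,a^{\dagger}a
\end{pmatrix}
=\begin{pmatrix}m^{2}+2eB(\hat{n}+1) & 0\\
0 & m^{2}+2eB\,\hat{n}
\end{pmatrix},\qquad\hat{n}=a^{\dagger}a
\end{equation}
% The eigenvalues of H are therefore \pm\sqrt{2eBn+m^{2}}; the n=0 level
% appears only in the lower spinor component, which is why the ZLL is unpaired.
```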
Let $\ket{k_{x}}$ be the state in the ZLL with $p_{x}$ eigenvalue
$k_{x}$. Then the projection operator to the ZLL is
\begin{equation}
P=\int dk_{x}\frac{L}{2\pi}\ket{k_{x}}\bra{k_{x}}
\end{equation}
with $L$ the system length in the $x$ direction. Writing $y=\left(p_{x}/\sqrt{eB}-(a+a^{\dagger})/\sqrt{2}\right)/\sqrt{eB}$,
the projected $y$ operator becomes
\begin{equation}
PyP=\int dk_{x}\frac{L}{2\pi}\frac{k_{x}}{eB}\ket{k_{x}}\bra{k_{x}}
\end{equation}
where we have used the fact that $a$ and $a^{\dagger}$ describe inter-Landau
level processes and thus vanish under projection onto the ZLL. Next,
using the fact that $\ket{k_{x}}$ is an eigenstate of $p_{x}$, we
find
\begin{equation}
PxP=\int dk_{x}\frac{L}{2\pi}i\partial_{k_{x}}\ket{k_{x}}\bra{k_{x}}
\end{equation}
The commutator can then easily be computed to be
\begin{equation}
\left[PxP,PyP\right]=\frac{i}{eB}\label{eq:[x,y]=00003D00003D00003Di}
\end{equation}
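Explicitly (our unpacking of the one-line computation): in the $p_{x}$ eigenbasis of the ZLL, $PxP$ acts as $i\partial_{k_{x}}$ and $PyP$ as multiplication by $k_{x}/eB$, so on any ZLL wavefunction $\psi(k_{x})$,

```latex
\begin{equation}
\left[PxP,PyP\right]\psi(k_{x})
=i\partial_{k_{x}}\!\left(\frac{k_{x}}{eB}\psi\right)-\frac{k_{x}}{eB}\,i\partial_{k_{x}}\psi
=\frac{i}{eB}\,\psi(k_{x})
\end{equation}
```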
Hence if we absorb the factor of $eB$ into $y$, then $PxP$ and
$PyP$ have the correct commutator structure for us to imbue them
with the interpretation of the position and momentum operators, respectively,
of a 1D system. This interpretation is the primary physical motivation
for the construction which follows. As mentioned earlier, other dispersions
will also result in commutation relations similar to Eq. (\ref{eq:[x,y]=00003D00003D00003Di})
and thus imbue $x$ and $y$ with interpretations of position and
momentum of a 1D system. However, a massive Dirac dispersion is ideal
for deriving the unified response theory because it does not miss
any quasi-lower-dimensional responses, as mentioned earlier and detailed
later.
In higher dimensions, the Dirac model is $H=\sum_{i}(p_{i}+eA_{i})\Gamma_{i}$,
where the $\Gamma_{i}$ are anticommuting elements of the Clifford
algebra of $2n$ by $2n$ matrices. If we apply constant magnetic
fields $F_{ij}$ for disjoint pairs $(i,j)$ of coordinates, we can
form an annihilation operator for each such pair. Annihilation operators
from different pairs commute, and the analysis above carries through
so that the position operators within each pair no longer
commute after projection.
\section{Phase Space Chern-Simons Theory}
\label{section:CSExample}
The key idea of our construction is to represent a (possibly gapless)
$n$-dimensional system by a gapped $2n$-dimensional phase space
system, specifically a massive Dirac model coupled to a gauge field.
As we just showed, we can interpret a $2n$-dimensional system
as living in phase space by adding background magnetic fields between disjoint pairs of spatial directions and projecting to the ZLL. Moreover, since
the phase space system is gapped, we can immediately write down a
response theory for it, the topological part of which can be proved
to be a CS theory\cite{QiHughesZhangTFT, YaoLeeTIChernSimons, Zhang4DQH,Bernevig4DQH}. Note that in a CS theory,
real and momentum space gauge fields enter the action in similar ways,
analogous to our idea of treating position and momentum on the same
footing in phase space.
Before proceeding, we fix some notation and conventions. We will always
use the Einstein summation convention where repeated indices are summed.
Phase space coordinates will be labelled by $x,y,z$ and $\bar{x},\bar{y},\bar{z}$.
After projection, $x,y,z$ will be interpreted as the corresponding
real space coordinates, while $\bar{x},\bar{y},\bar{z}$ will be interpreted
as momentum space coordinates $k_{x},k_{y},k_{z}$ respectively. In
phase space, we will refer to the $U(1)$ background gauge field which
generates the real space commutator structure as $A_{\mu}$ with its nonzero field strengths being $F_{i \bar{i}} = B_0$ for $i=(x,y,z)$. We denote all other contributions to the gauge field by $a_{\mu}$ and the total gauge field by $\mathcal{A}_{\mu}=A_{\mu}+a_{\mu}$.
Likewise, we write $f_{\mu \nu}$ and $\mathcal{F}_{\mu\nu} = F_{\mu \nu} + f_{\mu \nu}$ for the non-background and total field strengths respectively.
The (non-Abelian) field strengths are as usual defined by $f_{\mu\nu}=\partial_{\mu}a_{\nu}-\partial_{\nu}a_{\mu}-[a_{\mu},a_{\nu}]$.
We will also abbreviate the CS Lagrangian by $\epsilon a\partial a\equiv\epsilon^{\mu\nu\sigma}a_{\mu}\partial_{\nu}a_{\sigma}$,
where $\epsilon$ is the totally antisymmetric Levi-Civita tensor,
with an analogous abbreviation for higher-dimensional CS terms. Finally,
we set $e=\hbar=1$, and also assume that $\sqrt{B_{0}}\sim1/l_{B}$
is very large compared to all other wavenumbers in the problem.
\subsection{Interpretation of Phase Space Gauge Fields}
\label{subsection:interpretation}
Our prescription is that the non-background contributions $a_{\mu}$
to the phase space gauge field should be interpreted as the Berry
connection for the lower-dimensional system:
\begin{equation}
a_{\mu}^{\alpha\beta}=i\bra{u_{k\alpha}}\partial_{\mu}\ket{u_{k\beta}}
\end{equation}
where $\ket{u_{k\alpha}}$ is the (local) Bloch wavefunction at momentum $k$
for the $\alpha$ band. Here $\partial_{\bar{x},\bar{y},\bar{z}}$
should be interpreted as $B_{0}\partial_{k_{x},k_{y},k_{z}}$. We can think of the physical EM vector potential as a Berry connection, which means that it is included in the real-space components of $a$.
As such, we will often use the following heuristic interpretations
in order to more clearly see the physics: $-a_{t}$ is the lower-dimensional
band Hamiltonian plus the physical EM scalar potential, $a_{x,y,z}$ is the
physical EM vector potential, and $a_{\bar{x},\bar{y},\bar{z}}$ is the momentum-space
Berry connection. The field strengths which do not mix real and momentum
space then have natural interpretations as the physical EM field strengths
and Berry curvatures.
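To make this dictionary concrete, consider as an illustration (ours; signs and factors depend on conventions) a 1D band insulator with dispersion $\varepsilon(k)$, EM potentials $(\phi,A_{x}^{\mathrm{EM}})$, and momentum-space Berry connection $\mathcal{A}(k)$. A natural assignment of the phase space gauge field is

```latex
% Heuristic dictionary for 2D phase space, with \bar{x}=k_{x}/B_{0}:
\begin{equation}
-a_{t}(x,\bar{x})=\varepsilon(B_{0}\bar{x})+\phi(x),\qquad
a_{x}(x,\bar{x})=A_{x}^{\mathrm{EM}}(x),\qquad
a_{\bar{x}}(x,\bar{x})=B_{0}\,\mathcal{A}(B_{0}\bar{x})
\end{equation}
% The factor of B_{0} in a_{\bar{x}} compensates \partial_{\bar{x}}=B_{0}\partial_{k},
% so that \int d\bar{x}\,a_{\bar{x}}=\int dk\,\mathcal{A}(k)=2\pi P_{x}.
```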
The physical interpretation of ``mixed'' field strengths such as $f_{x\bar{y}}$ (in 4 or higher
dimensional phase space) is less obvious. Here we present two ways
to think about them. First consider a gauge where $\partial_{\bar{y}}a_{x}=0$.
We find that
\begin{equation}
\int d\bar{y}f_{x\bar{y}}=\partial_{x}\int d\bar{y}a_{\bar{y}}=2\pi\partial_{x}P_{y}
\end{equation}
where $P_{y}$ is the one-dimensional polarization of the system\cite{Vanderbilt1993}.
A spatially varying polarization can be thought of as strain of the
electron wavefunction, which can come from either mechanical strain
or some other spatial variation of the parameters entering the band
structure.
To make the connection of $f_{x\bar{y}}$ to mechanical strain more
explicit, we change the gauge to set $\partial_{x}a_{\bar{y}}=0$. An
intuitive way to think about a nonzero $f_{x\bar{y}}$ in this gauge
is in terms of dislocations. In particular, adiabatically moving a
particle around a real space dislocation leads to a translation, but
if the particle can locally be treated as a Bloch wave, then that
translation is equivalent to the accumulation of a phase. This (Berry)
phase is equal to $\mathbf{k}\cdot\mathbf{b}$, with $\mathbf{b}$
the Burgers vector of the dislocation. In particular, this is a \emph{momentum-dependent}
Berry phase resulting from adiabatic motion in real space. Hence $f_{x\bar{y}}$
is nonzero. It can be shown explicitly\cite{XiaoBerryReview} in the
perturbative regime that strain typically leads to such a Berry phase.
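Schematically (a hedged sketch; signs depend on conventions), for a single dislocation with Burgers vector $\mathbf{b}$ the statement above reads

```latex
% Berry phase on a real-space loop C encircling the dislocation:
\begin{equation}
\gamma(\mathbf{k})=\oint_{C}a_{i}\,dx^{i}=\mathbf{k}\cdot\mathbf{b}
\qquad\Longrightarrow\qquad
\oint_{C}\partial_{\bar{y}}a_{i}\,dx^{i}=B_{0}\,b_{y}
\end{equation}
% using \partial_{\bar{y}}=B_{0}\partial_{k_{y}}. Since a_{\bar{y}} is single-valued,
% \oint\partial_{i}a_{\bar{y}}\,dx^{i}=0, so the left-hand side equals
% \oint f_{\bar{y}i}\,dx^{i}: the mixed field strength must be nonzero in any
% region enclosing the dislocation core.
```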
\subsection{Example: 2D Phase Space}
\label{section:2DExample}
We first consider the case where our real space theory consists of
a single filled band living in 1D, and show that a 2D CS response
term in phase space with a background field describes the expected
responses. We will, for simplicity, only consider Abelian physics in this example.
Consider the CS action
\begin{equation}
S_{CS}=\frac{1}{4\pi}\int dtd^{2}xC(\bar{x},x)\epsilon\mathcal{A}\partial\mathcal{A}\label{eqn:ChernSimons2D}
\end{equation}
(In Section \ref{section:diracModel}, we will show in an explicit model how Eq. \ref{eqn:ChernSimons2D}, with this (quantized) coefficient, appears, but for now we simply assume that it is the relevant response theory.) Here $C(\bar{x},x)$ accounts for the filling at different points;
for example, if the system occupies $x>0$, then $C(\bar{x},x)$ will
be proportional to $\Theta(x)$ with $\Theta$ the Heaviside step
function, as shown in Fig. \ref{fig:realEdge}. Likewise, if the system
has a Fermi momentum $k_{F}$, then $C(\bar{x},x)$ will be proportional
to $\left(\Theta(\bar{x}+k_{F}/B_{0})-\Theta(\bar{x}-k_{F}/B_{0})\right)$,
as shown in Fig. \ref{fig:FSEdge}.
\begin{figure}
\subfigure[ ]{ \includegraphics[width=4cm]{realEdge} \label{fig:realEdge}
} \subfigure[ ]{ \includegraphics[width=4cm]{FSEdge} \label{fig:FSEdge}
} \caption{Phase space realization of (a) real-space edges (b) Fermi points for
a 1D real space system. Arrows indicate the direction of the edge
modes.}
\end{figure}
Let us assume that there are no edges so that $C=1$ everywhere. Then,
the responses for this action, given by $j^{\mu}=\delta S/\delta\mathcal{A}_{\mu}$,
are
\begin{equation}
j_{2D}^{\mu}=\frac{1}{4\pi}\epsilon^{\mu\nu\sigma}\mathcal{F}_{\nu\sigma}
\end{equation}
where $\mathcal{F}_{\nu\sigma}=\partial_{\nu}\mathcal{A}_{\sigma}-\partial_{\sigma}\mathcal{A}_{\nu}$
is the field strength tensor corresponding to $\mathcal{A}$. Let us consider
each component, assuming for conciseness that the background field
is in Landau gauge $A_{\bar{x}}=B_{0}x$.
First we examine the real-space response $j_{2D}^{x}=\mathcal{F}_{\bar{x}t}/2\pi$.
This current in general depends on $\bar{x}$, which we interpret
as $k_{x}/B_{0}$; the observable current should then be given by
integrating the 2D current with respect to $\bar{x}$, as the real-space
current has contributions from all occupied momenta. The resulting
1D current is
\begin{equation}
j_{1D}^{x}=\frac{1}{2\pi}\left(\int d\bar{x}\partial_{\bar{x}}a_{t}-\partial_{t}\int d\bar{x}a_{\bar{x}}\right)\label{eq:jx1D}
\end{equation}
Interpreting the $\bar{x}$-dependent part of $a_{t}$ as the dispersion,
the first integral generically gives zero. The second integral is,
for a gapped system, exactly the time derivative of the polarization $P_{x}=\frac{1}{2\pi}\int a_{\bar{x}}\mathrm{d}\bar{x}$,
which is the expected 1D real-space current response $j_{1D}^{x}=-\partial_{t}P_{x}$.
Similarly, the $k$-space response is $j_{2D}^{\bar{x}}=\mathcal{F}_{tx}/2\pi$.
Interpreting $j^{\bar{x}}_{2D}$ as the phase-space density $1/2\pi$ times $dk/dt$, we recover the real space semiclassical
equation of motion $dk/dt=E$ with $E$ the electric field.
Finally, the charge response is given by
\begin{equation}
\rho_{1D}=\frac{1}{2\pi}\int\mathrm{d}\bar{x}\left(F_{x\bar{x}}+\partial_{x}a_{\bar{x}}-\partial_{\bar{x}}a_{x}\right)\label{eq:rho1D}
\end{equation}
In units $B_{0}=1$, the first term simply gives the total charge
in the occupied band, which can be thought of as a quasi-0D response.
The second term of Eq. (\ref{eq:rho1D}), in a gauge where $a_{x}=0$,
becomes $\partial_{x}P_{x}$ for a gapped system. This is again intuitive;
if, say, the system is strained, then the polarization and hence the
charge density will change accordingly.
If we now impose a pair of edges at $\bar{x}=\pm k_{F}$, two things
happen. Firstly, the 1D system lies between a pair of momentum points
$\pm k_{F}$, so the integrals in (\ref{eq:jx1D}) and (\ref{eq:rho1D})
run from $-k_{F}$ to $+k_{F}$ instead of the full Brillouin zone.
Consequently, the background charge becomes $\rho_{1D}^{bg}=\frac{1}{2\pi}\intop_{-k_{F}}^{k_{F}}F_{x\bar{x}}=k_{F}/\pi$,
as expected, while the terms proportional to $f_{x\bar{x}}$ cease
to have a simple interpretation as the polarization but can be non-zero
nonetheless. Secondly, the 2D system develops a chiral anomaly at
the edge, given by
\begin{equation}
\partial_{t}\rho_{2D}+\partial_{x}j_{2D}^{x}=\frac{1}{2\pi}\mathcal{F}_{tx}\partial_{\bar{x}}C(\bar{x})=\frac{E+\partial_{x}\varepsilon}{2\pi}\partial_{\bar{x}}C(\bar{x})\label{eqn:FSanomaly}
\end{equation}
where $C(\bar{x})=\Theta(\bar{x}+k_{F})-\Theta(\bar{x}-k_{F})$. Integrating
over 1D real space under a constant electric field and translational
invariance yields
\begin{equation}
\partial_{t}\int dx\rho_{2D}=\frac{L}{2\pi}\left(\delta(\bar{x}+k_{F})-\delta(\bar{x}-k_{F})\right)E
\end{equation}
where $L$ is the length of the system and $\delta$ is the Dirac
delta function. This is precisely the chiral anomaly in the 1D system:
the electric field tilts the 1D Fermi surface, effectively converting
right-moving charge in the vicinity of one Fermi point into left-moving
charge near the other. Thus, we have derived a property of a gapless
1D band structure from the edge anomaly of the parent 2D QH system.
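Concretely, integrating the above over a small momentum window around each Fermi point gives the charge-transfer rates (with the sign conventions used here):

```latex
\begin{equation}
\partial_{t}Q_{-k_{F}}=+\frac{LE}{2\pi},\qquad
\partial_{t}Q_{+k_{F}}=-\frac{LE}{2\pi}
\end{equation}
% Charge flows between the two chiralities at rate LE/2\pi while the total
% charge is conserved, consistent with the whole Fermi sea drifting uniformly
% under the semiclassical equation of motion dk/dt = E.
```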
Notice also that integration of (\ref{eqn:FSanomaly}) over momentum
space leads to
\begin{equation}
\partial_{t}\rho_{1D}+\partial_{x}j_{1D}^{x}=0
\end{equation}
which correctly tells us that there is no anomaly in the total charge.
The precise value of $\rho_{1D}$ and $j_{1D}^{x}$ depends on system
details; therefore, calculating them in our formalism would require
knowledge of non-universal properties of the 2D QH edge, such as the
velocity of the chiral modes. However, we have shown here that they
still have universal properties that reflect the universal properties
of a higher dimensional topological state.
A different type of anomaly occurs when the system has real-space
edges and a filled band. In this case, the anomaly equation in 2D
is
\begin{equation}
\partial_{t}\rho_{2D}+\partial_{\bar{x}}j_{2D}^{\bar{x}}=-\frac{1}{2\pi}\mathcal{F}_{t\bar{x}}\partial_{x}C(x)
\end{equation}
Integrating the above in $\bar{x}$ yields
\begin{equation}
\partial_{t}\rho_{1D}=\left(\delta(x-x_{0})-\delta(x+x_{0})\right)\partial_{t}P_{x}(x)
\end{equation}
This is the known result\cite{ThoulessChargePump} that charge can be adiabatically pumped from
one edge of the system to the other via a time-dependent local polarization.
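As an illustration (a standard check, not a new result of ours), for a cyclic pump protocol of period $T$ the total charge transferred across the system is

```latex
\begin{equation}
\Delta Q=\int_{0}^{T}dt\,\partial_{t}P_{x}=P_{x}(T)-P_{x}(0)\in\mathbb{Z}
\end{equation}
% P_{x} is only defined modulo 1 (in units of the lattice constant), so a
% cyclic protocol can wind it by an integer: the quantized Thouless pump.
```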
We thus see that the standard responses, including anomalies, that
we expect in a 1D theory are retrieved from the 2D CS theory. However,
detail-dependent edge responses are described in our theory only by
the anomaly (or lack thereof) that they create. We expect the same
procedure to generalize to higher dimensions, and we will show that
the expected topological responses appear in Section \ref{section:responses}.
\section{Explicit Model}
\label{section:diracModel}
\begin{figure}
\centering{}\includegraphics[width=0.95\columnwidth]{DerivationLogicNew}
\caption{Logical flow of the derivation in Section \ref{section:diracModel}.
An $nD$ Hamiltonian $h_{nD}$ can be obtained by projecting a $2nD$
massive Dirac Hamiltonian $H_{Dirac}^M$ in a magnetic field onto the ZLL, denoted
by the thick red bar. For a Fermi level in the Dirac mass gap, the
$M>0$ and $M<0$ ground states differ only in the occupation of the
ZLL, while their response theories $S_{Dirac}^M$ differ by the CS-term in $2nD$.
Thus, the response theory of the ZLL $S_{ZLL_{2nD}}$, which is the real space response theory $S_{nD}^{real}$ of $h_{nD}$, is the phase space $CS$-theory $S_{CS}^{phase}$.\label{fig:derivationLogic}}
\end{figure}
In this section, we elucidate the precise way in which a ZLL behaves
as the phase space of a system in half the number of spatial dimensions.
In particular, we explain why the response theory of the lower dimensional
system should be of CS form in phase space and describe the physical
meaning of projecting onto the ZLL. We also answer the question of
when a CS theory in $2nD$ can be interpreted as a phase space response
theory in $nD$.
To begin, consider a massive Dirac Hamiltonian in $2n$ dimensions
coupled to the gauge field $\mathbf{\mathcal{A}}$ defined in
Sec. \ref{section:CSExample}:
\begin{equation}
H_{2nD}=\sum_{i=1}^{n}\left[\Gamma_{i}(p_{i}-\mathcal{A}_{i})+\Gamma_{\bar{i}}(p_{\bar{i}}-\mathcal{A}_{\bar{i}})\right]+\Gamma_{0}M
\label{eqn:2nDirac}
\end{equation}
$\mathbf{\mathcal{A}}$ corresponds to large constant background
fields $B_{0}$ in $n$ orthogonal planes plus small fluctuations;
thus, $F_{i\bar{i}}=B_{0}\gg f_{ij},f_{i\bar{j}},f_{\bar{i}\bar{j}}$ for all $i,j\in\{1,\dots,n\}$.
The $\Gamma$'s are $2^{n}\times2^{n}$ anticommuting matrices with eigenvalues
$\pm1$ and satisfy $\Gamma_{0}=\prod_{i=1}^{n}\Gamma_{i}\Gamma_{\bar{i}}$ (up to a phase).
To zeroth order in $f$, the spectrum of $H_{2nD}$ can be easily
derived by generalizing the calculation of Sec.~\ref{section:ZLL}; it consists of Landau levels with energies $\pm\sqrt{2kB_{0}+M^{2}}$
for positive integers $k$ together with a ZLL state which has energy $-M$ and a spinor wavefunction that has a $\Gamma_{0}$ eigenvalue of $-1$.
We have two tasks. First, we must isolate the topological response theory of the ZLL of this system, which we expect to be a Chern-Simons theory. Second, we must relate this Hamiltonian, projected onto the ZLL of the total field, to the Hamiltonian of the real space system.
For the first task, note that the ZLL is occupied (unoccupied) in the ground state if $M>0$ ($M<0$),
while the occupation of all the other Landau levels is independent
of the sign of $M$. This should hold for non-zero $f$ as well if
$M\gg\sqrt{B_{0}}$. As a result, the response of the ZLL to $\mathbf{\mathcal{A}}$
is given by the terms in the total response that are \emph{odd} in $M$.
Moreover, it is known that the two signs of $M$ correspond to a topological
and a trivial insulator (which sign corresponds to which phase is determined by the regularization
far away from the Dirac point).
Therefore, the difference between their response theories, which
by definition is the topological part of the effective action, equals
the response of the ZLL. In the absence of vertex corrections, this
is known to be the $2n$-dimensional CS action with coefficient 1 to lowest order in the coupling constant $e$. In short, the response of the ZLL is precisely
the CS action with coefficient 1 in appropriate dimensions under suitable well-controlled
perturbative approximations. We emphasize that this statement is true
even if $H_{2nD}$ is modified at high energies to change the total
($n^{\text{th}}$) Chern numbers of the occupied and unoccupied bands. The
only requirement is that the Chern numbers of the $M>0$ and $M<0$ cases differ by unity; their
actual values are irrelevant for determining the ZLL response.
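For the familiar $2n=2$ case, this counting works as follows (a standard result quoted here for illustration): a single continuum Dirac fermion of mass $M$ contributes a half-quantized Hall response,

```latex
\begin{equation}
\sigma_{xy}^{M}=\frac{\mathrm{sgn}(M)}{2}\cdot\frac{1}{2\pi}
\qquad\Longrightarrow\qquad
\sigma_{xy}^{M>0}-\sigma_{xy}^{M<0}=\frac{1}{2\pi}
\end{equation}
% The M-even, regularization-dependent pieces cancel in the difference,
% leaving exactly one unit of the CS term,
% (1/4\pi)\int\epsilon\mathcal{A}\partial\mathcal{A},
% as the response of the ZLL alone.
```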
Next, we recall that $[x_{i},\bar{x}_{i}]=i$ in the ZLL as shown
in Sec \ref{section:ZLL}, so $x_{i}$ and $\bar{x}_{i}$ can be thought
of as a pair of canonically conjugate position and momentum variables.
Therefore, projecting $H_{2nD}$ onto the ZLL gives an $n$-dimensional
system whose response theory is guaranteed to be of CS form in phase
space. In this response theory, the gauge fields in the ``momentum''
directions are to be reinterpreted as momentum space Berry connections.
This flow of logic is depicted in Fig \ref{fig:derivationLogic} (where
we have renamed $H_{2nD}$ as $H_{Dirac}^{M}$ to make the figure
self-contained).
Having shown that the response of the $n$-dimensional system is given
by the phase space CS theory, we turn to our second task and show in detail how the Hamiltonian
in $n$-dimensions is related to $H_{2nD}$. For clarity, we choose
$n=1$, i.e., we demonstrate this in 1D real space with a $U(1)$
gauge field; the procedure generalizes straightforwardly to more dimensions
and to larger gauge groups. To construct the 2D phase space model,
let $\Gamma_{x}$, $\Gamma_{\bar{x}}$, and $\Gamma_{0}$ be the Pauli
matrices $\sigma_{x}$, $\sigma_{y}$, $\sigma_{z}$. (The $\Gamma$
notation is for consistency with the higher-dimensional generalization in Eq. (\ref{eqn:2nDirac}).) The
appropriate 2D Hamiltonian is
\begin{equation}
H_{2D}=(p_{x}-\mathcal{A}_{x})\Gamma_{x}+(p_{\bar{x}}-\mathcal{A}_{\bar{x}})\Gamma_{\bar{x}}+M\Gamma_{0}+\mathcal{A}_{0}\label{eqn:2dH}
\end{equation}
We are projecting onto the ZLL of the total field $\mathcal{A}$,
so we need to make some approximations to make progress. We assume
that the field fluctuations $f$ are much smaller than $B_{0}$. That is, we identify $1/\sqrt{B_{0}}$ with some microscopic length
scale like a lattice constant for the underlying real space system,
and assume that all the gauge field fluctuations are small over that
length scale. If this is true, then we can make the gauge choice that
$\partial_{\mu} a_{\nu} \ll B_0$ for all $\mu, \nu$. In this case, the Hamiltonian of the ZLL
of $\mathcal{A}$ can be computed by considering $a$ to be a perturbation
on the Hamiltonian with $\mathcal{A}=A$. We implement degenerate
perturbation theory as follows.
Let us write
\begin{equation}
H_{2D}=H_{0}+H'
\end{equation}
with
\begin{align}
H_{0} & =(p_{x}-A_{x})\Gamma_{x}+(p_{\bar{x}}-A_{\bar{x}})\Gamma_{\bar{x}}+M\Gamma_{0}-A_{0}\\
H' & =-a_{x}\Gamma_{x}-a_{\bar{x}}\Gamma_{\bar{x}}-a_{0}
\end{align}
Since $A$ is a constant background field of strength $B_{0}$, we
know how to diagonalize $H_{0}$; let $\ket{n,k}$ be the eigenstates
where $n$ labels the LL and $k$ labels a momentum (in Landau gauge).
Using $\left\langle \bar{x}|0,k\right\rangle \propto e^{-B_{0}(\bar{x}-k/B_{0})^{2}/2}(1,0)$,
where the spinor indicates that the ZLL states are polarized in the
basis of $\Gamma_{0}$ eigenstates, and denoting $k_{\pm}=k\pm q/2$,
first order degenerate perturbation theory gives an effective 1D Hamiltonian
as
\begin{align}
\bra{0,k_{-}}H'\ket{0,k_{+}} & \propto-\int dxd\bar{x}e^{iqx}a_{0}(x,\bar{x})e^{-B_{0}(\bar{x}-k/B_{0})^{2}}e^{-q^{2}/4B_{0}}\nonumber \\
& \propto-a_{0}\left(-i\delta'(q),k/B_{0}\right)\nonumber \\
\implies h_{1d}(k) & \equiv-a_{0}\left(i\partial_{k},k/B_{0}\right)\label{eq:firstorderpert}
\end{align}
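The middle step above is easiest to see in the large-$B_{0}$ limit (our unpacking of the compressed notation):

```latex
% The ZLL Gaussian sharply pins \bar{x}=k/B_{0}:
\begin{equation}
e^{-B_{0}(\bar{x}-k/B_{0})^{2}}\;\longrightarrow\;\sqrt{\frac{\pi}{B_{0}}}\,
\delta\!\left(\bar{x}-\frac{k}{B_{0}}\right),\qquad
e^{-q^{2}/4B_{0}}\longrightarrow1
\end{equation}
% The remaining x-integral is then the Fourier transform of a_{0}(x,k/B_{0})
% in x. An operator whose matrix elements depend only on the momentum transfer
% q and the mean momentum k acts, back in position space, as
% -a_{0}(i\partial_{k},k/B_{0}), as stated.
```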
Thus, the desired 1D Hamiltonian $h_{1d}(x,k)$ can easily be obtained
by choosing $a_{0}(x,\bar{x})=-h_{1d}(x,B_{0}\bar{x})$. Since the
ZLL is spin-polarized, the dependence on $a_{x}$ and $a_{\bar{x}}$
disappears from (\ref{eq:firstorderpert}); these fields only appear
at second order in perturbation theory. Degenerate perturbation theory
tells us that, if $P$ is the projector onto the degenerate subspace,
then the second order correction to the energy is given by the eigenvalues
of\begin{widetext}
\begin{equation}
\bra{\psi_{i}}H_{2D}\ket{\psi_{j}}=\bra{0,k_{i}}\left(H_{0}+H'+(H'-PH'P)(H_{0}-E_{0})^{-1}(H'-PH'P)\right)\ket{0,k_{j}}\equiv\bra{0,k_{i}}H_{2D}+H^{(2)}\ket{0,k_{j}}\label{eqn:pertTheory}
\end{equation}
where
\begin{equation}
\ket{\psi_{i}}=\ket{0,k_{i}}+\sum_{n>0,l}\ket{n,k_{l}}\bra{n,k_{l}}(H_{0}-E_{0})^{-1}(H'-PH'P)\ket{0,k_{i}}
\end{equation}
\end{widetext} is a basis for the perturbed ZLL wavefunctions up
to first order in $H'$. In particular, the unitary transformation
$U$ which takes $H_{2D}$ to $H_{2D}+H^{(2)}$, to second order in
$H'$, is the one which takes $\ket{0,k_{i}}$ to a state living in
the ZLL of the full Hamiltonian, to first order in $H'$.
Therefore, if we find this unitary transformation and then perform
the projection in the ZLL of the background field, we still get our
desired projected Hamiltonian. We write $U=\exp(iS)$ with $S$ Hermitian,
and expand $S=S_{1}+S_{2}+...$ where the subscripts indicate an expansion
in orders of $H'$ (by inspection $S$ can be chosen to have no zeroth
order term). Then we can match, order by order, terms in $e^{iS}H_{2D}e^{-iS}$
with those in $H_{2D}+H^{(2)}$ to find the conditions
\begin{align}
[H_{0},S_{1}] & =0\label{eqn:firstOrderCondition}\\
[H_{0},S_{2}] & =iH^{(2)}
\end{align}
We do not claim to be able to demonstrate explicitly a unitary transformation
which obeys the second of these conditions, as computing $H^{(2)}$
is highly nontrivial. However, we will proceed first by exhibiting
an ansatz for $U$, then showing that the projection onto the ZLL
of $A$ yields the correct Hamiltonian in real space, and finally
giving the physical motivation for the ansatz.
Let us start in the gauge $A_{x}=-B_{0}\bar{x}$, $A_{\bar{x}}=A_{t}=0$.
Then let
\begin{equation}
U=\left(e^{ia_{\bar{x}}p_{x}ds/B_{0}}\right)^{N}e^{-iB_{0}x\bar{x}}\left(e^{-ia_{x}p_{\bar{x}}ds/B_{0}}\right)^{N}e^{iB_{0}x\bar{x}}
\end{equation}
where $ds$ is an infinitesimal parameter and $N\rightarrow\infty$
such that $Nds=1$. This transformation is a gauge transformation,
followed by a translation of $\bar{x}$ by $-a_{x}/B_{0}$, followed by
the reverse gauge transformation, followed by a translation of $x$
by $a_{\bar{x}}/B_{0}$.
By inspection $U$ commutes with $H_{0}$, so Eq. (\ref{eqn:firstOrderCondition})
is satisfied. To second order in $H'$, we now have \begin{widetext}
\begin{equation}
UH_{2D}U^{\dagger}\approx H_{0}-a_{x}\left(x+\frac{a_{\bar{x}}}{B_{0}},\bar{x}-\frac{a_{x}}{B_{0}}\right)\Gamma_{x}-a_{\bar{x}}\left(x+\frac{a_{\bar{x}}}{B_{0}},\bar{x}-\frac{a_{x}}{B_{0}}\right)\Gamma_{\bar{x}}-a_{0}\left(x+\frac{a_{\bar{x}}}{B_{0}},\bar{x}-\frac{a_{x}}{B_{0}}\right)
\end{equation}
\end{widetext} where $a_{i}$ appearing without explicit functional
dependence means $a_{i}(x,\bar{x})$. The terms that we have neglected are ``double-nestings'' of $a/B_0$; our aforementioned approximation that $a$ is slowly varying (which was a gauge choice possible when the corresponding field strengths were weak) allows us to write
\begin{equation}
a_0\left(x+\frac{a_{\bar{x}}}{B_{0}},\bar{x}-\frac{a_{x}\left(x+\frac{a_{\bar{x}}}{B_0},\bar{x}\right)}{B_{0}}\right)\approx a_0\left(x+\frac{a_{\bar{x}}}{B_{0}},\bar{x}-\frac{a_{x}}{B_{0}}\right)
\end{equation}
Let us now perform the projection
on the ZLL of the background field. As before, everything projects
to zero except for the $a_{0}$ term and the mass term of $H_{0}$.
The latter just projects to a constant which we can absorb by a shift
of $a_{0}$. However, we now obtain a different 1D Hamiltonian $h_{1d}'(x,\bar{x})=-a_{0}(x+a_{\bar{x}},\bar{x}-a_{x})$,
which is simply $h_{1d}(x,\bar{x})$ with minimal coupling to the
gauge fields $a_{\bar{x}}$ and $a_{x}$, respectively ($B_{0}$ set
to unity for convenience). We therefore have correctly retrieved the
full 1D Hamiltonian from a projection to the ZLL, as the functional
form of the projected Hamiltonian is correct if we imbue $x$ and
$\bar{x}$ with the interpretations of a parameter tracking a locally
periodic Hamiltonian in space and Bloch momentum respectively.
A major question remains: why, physically, should this choice of $U$
be the correct one? First of all, the projected Hamiltonian, if it
is to describe a real system, must be gauge invariant. Hence the gauge
fields should be minimally coupled, and $U$ indeed accomplishes this
goal.
A more fundamental reason, though, is the following. Consider $H_{2D}$
in some local region over which $a$ is approximately constant, and
for convenience choose a gauge in which $a_{\bar{x}}$ is zero. In
this region, $a_{x}$ functions as a constant shift of the momentum
$p_{x}$ which dictates, in the ZLL of $A$, the wavefunction center
in $\bar{x}$. Hence we should, roughly speaking, identify the (local)
eigenvalue of $p_{x}-a_{x}$ with $\bar{x}$. In the original basis,
then, the variable canonically conjugate to $x$ is identified in
the ZLL with $\bar{x}+a_{x}$. If we are to interpret the commutator
of the projected $x$ and $\bar{x}$ operators in phase space as being
the canonical commutation relation of $x$ and $p$ in real space,
then we need to shift $\bar{x}$ by $-a_{x}$ in order to do so. By
a similar argument in the gauge where $a_{x}=0$, we should shift
$x$ by $a_{\bar{x}}$ to identify $x$ with $p_{\bar{x}}$ in the
ZLL.
Having derived the real space Hamiltonian from an ansatz for the solution
to the phase space one, we now comment on a few details.
First notice that this derivation generalizes easily to higher dimensions,
as the background field only couples $x$ to $\bar{x}$, $y$ to $\bar{y}$,
etc. The primary difference is that in $2n$-dimensional phase space,
the $\Gamma$ matrices must be anticommuting elements of the Clifford
algebra of $2^{n}\times2^{n}$ matrices with $\mathrm{diag}(\Gamma_{i})=0$ for
$i\neq0$.
We next comment on gauge invariance. It may appear that there is extra
gauge invariance in the phase space theory; in particular, it may
seem strange that the Berry connections $a_{\bar{x}}$ can be gauge
transformed into real space gauge fields $a_{x}$ and vice-versa.
We claim that this is simply a reflection of the usual gauge invariance
in the lower-dimensional Hamiltonian. To see this, consider a unitary
operator $U=\exp(if(x,\bar{x}))$ which implements the gauge transformation
$a_{\mu}\rightarrow a_{\mu}+\partial_{\mu}f$, and let the ZLL wavefunctions
be $\ket{n}$ for some set of labels $n$. Since $U$ is a gauge transformation
in the phase-space system, it must commute with the projection operator
$P$ (as $U$ must take states in a given LL to the same LL). Hence
we can project $U$ to get its action on the projected Hamiltonian;
by the same argument we used for projecting the Hamiltonian, we must
have $PUP=\exp(if(x,k))$. To understand the meaning of this operator,
recall that locally, any state can be labeled as a Bloch wavefunction
$\ket{k;x}$ at momentum $k$ for a local Hamiltonian at $x$. Therefore,
a gauge transformation in the higher-dimensional system is equivalent
to a spatially dependent $U(1)$ gauge transformation on the eigenstates
$\ket{k;x}$ of the local Hamiltonian parametrized by $x$.
Finally, after seeing the derivation, we may answer the following
question: when can a CS theory in $2n$D be interpreted as the phase
space response theory of a system in $n$D? The key physical requirement
in our derivation was that the total field in the $2n$D system could
be separated into two parts: a uniform background field, which sets
some length scale, and another portion which varies slowly on that
length scale. When this condition holds, the CS theory may be interpreted
as a phase space theory for some lower-dimensional system.
\section{Enumeration of Bulk Responses}
\label{section:responses}
Having shown that the phase space CS theory is the correct unified
theory, we now systematically enumerate all the bulk responses of
the CS theory for each possible dimensionality of phase space, and
interpret them in real space. To avoid cluttering the notation, we
set $B_{0}=1$.
The 2D responses were discussed in Section \ref{section:2DExample}.
There we showed that the real space current density is the rate of
change of polarization, while the $k$-space current density reflects
the expected relation $dk/dt\sim E$ with $E$ the electric field.
The charge density response is just the band filling corrected for
strain-induced changes in the lattice constant.
We summarize the 2D responses in Table \ref{table:2Dresponses}.
\begin{table*}
\begin{tabular}{|c|c|}
\hline
\textbf{Current component} & \textbf{Response}\tabularnewline
\hline
Real space & Change in polarization\tabularnewline
\hline
$k$-space & Electric force\tabularnewline
\hline
Charge density & Band filling \tabularnewline
\hline
\end{tabular}\caption{Summary of 2D phase space responses.}
\label{table:2Dresponses}
\end{table*}
\subsection{4D Phase Space}
The action is given by
\begin{equation}
S=\frac{1}{24\pi^{2}}\int dtd^{4}x\text{tr}\left[C(x,\bar{x})\epsilon\mathcal{A}\partial\mathcal{A}\partial\mathcal{A}+...\right]
\end{equation}
where $+...$ indicates the non-Abelian terms. We set $C = 1$ uniformly to look at bulk responses.
\noindent \textbf{Spatial components:}
A priori there is no difference between the spatial directions $x$
and $y$, so we focus on the $x$ responses.
\begin{equation}
j_{2D}^{x}=\frac{1}{4\pi^{2}}\int d^{2}\bar{x}\text{tr}\left[\mathcal{F}_{y\bar{y}}\mathcal{F}_{t\bar{x}}+\mathcal{F}_{\bar{x}\bar{y}}\mathcal{F}_{ty}+\mathcal{F}_{y\bar{x}}\mathcal{F}_{\bar{y}t}\right]
\end{equation}
The first term includes the background field; setting $\mathcal{F}_{y\bar{y}}=F_{y\bar{y}}=B_{0}$
turns this into $j_{2D}^{x}=\frac{1}{2\pi}\int d\bar{x}\text{tr}\left[\mathcal{F}_{t\bar{x}}\right]=\partial_{t}P_{x}$
with $P_{x}$ the polarization in the $x$ direction. This is the
same response that appears in 1D, and is illustrated in Fig. \ref{fig:unstrainedPolarization}. The second term is the anomalous
Hall response; in the simple case where $\mathcal{F}_{ty}$ is simply
an electric field, this term gives a current $j^{x}=E_{y}\int d^{2}\bar{x}\text{tr}[\mathcal{F}_{\bar{x}\bar{y}}]/4\pi^{2}=E_{y}C_{1}/2\pi$
with $C_{1}$ the first Chern number of the occupied bands. This formula also applies to systems with open boundaries in the $\bar{x},\bar{y}$ directions, in which case $C_1$ is not quantized but still determines the intrinsic Hall conductivity of the two-dimensional Fermi liquid\cite{karplus1954,HaldaneAHE,jungwirth2002,XiaoBerryReview,nagaosa2010}.
The third term, illustrated in Fig. \ref{fig:strainedPolarization}, says the following. Suppose that there is a change
in time of the polarization in the $y$-direction, i.e. $\mathcal{F}_{\bar{y}t}\neq0$,
without any strain in the system. If we now add shear in the system,
i.e. have $\mathcal{F}_{y\bar{x}}\neq0$, then some of that polarization
change becomes a current along the $x$ direction as defined before
adding strain.
\begin{figure}
\subfigure[ ]{\includegraphics[width=2.8cm]{unstrainedLattice.jpg} \label{fig:unstrainedPolarization}}
\subfigure[ ]{\includegraphics[width=5.5cm]{strainedLatticev2.jpg} \label{fig:strainedPolarization}}
\caption{Cartoon of the polarization response of an (a) unstrained square lattice (b) sheared square lattice. In (b), $F_{y\bar{x}} \neq 0$ because motion along the $y$ lattice vector leads to translation in the unstrained $x$ direction. The directed lines are the flow of charge due to a positive $\partial_t P_y$; the polarization is measured along the lattice vector, which is deformed in (b) due to strain. Only (b) has a nonzero current in the $x$ direction, which is due to this deformation.}
\end{figure}
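The quantization of the anomalous Hall coefficient $C_{1}/2\pi$ can be checked numerically for a concrete lattice model. The following Python sketch (our own illustrative example, not part of the derivation) computes the first Chern number of the occupied band of a hypothetical two-band Qi-Wu-Zhang-type Hamiltonian by summing gauge-invariant Berry fluxes over a discretized Brillouin zone; because the torus is closed, the sum is exactly $2\pi$ times an integer.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band(kx, ky, m):
    """Occupied eigenstate of H(k) = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz."""
    H = np.sin(kx)*SX + np.sin(ky)*SY + (m + np.cos(kx) + np.cos(ky))*SZ
    return np.linalg.eigh(H)[1][:, 0]

def chern_number(m, N=40):
    """First Chern number via gauge-invariant Berry fluxes on an N x N k-grid."""
    ks = 2*np.pi*np.arange(N)/N
    u = np.array([[lower_band(kx, ky, m) for ky in ks] for kx in ks])
    C = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # product of link variables around one plaquette; its phase is the Berry flux
            W = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                 * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            C += np.angle(W)
    return C/(2*np.pi)
```

For this model the gap closes at $m=0,\pm2$; in the intermediate regime (e.g. $m=1$) the sum converges to an integer of unit magnitude, while for $m=3$ it vanishes, so the Hall current $E_{y}C_{1}/2\pi$ is quantized exactly as in the text.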
\noindent \textbf{$k$-space components}:
The $\bar{x}$-direction responses are:
\begin{equation}
j_{4D}^{\bar{x}}=\frac{1}{4\pi^{2}}\text{tr}\left[\mathcal{F}_{tx}\mathcal{F}_{y\bar{y}}+\mathcal{F}_{ty}\mathcal{F}_{\bar{y}x}+\mathcal{F}_{t\bar{y}}\mathcal{F}_{xy}\right]
\end{equation}
The first term is quasi-1D, saying that $dk_{x}/dt$ is proportional
to the electric field $E_{x}$. The second term says that an electric
field $E_{y}$ leads to a change in $k_{x}$ if there is shear in
the system. The third term is, semiclassically, the Lorentz force:
changing the polarization in the $y$ direction ($\mathcal{F}_{t\bar{y}}\neq0$)
leads to a 1D current in the $y$ direction, which then feels the
Lorentz force of the magnetic field ($\mathcal{F}_{xy}\neq0$), causing
$k_{x}$ to change.
\noindent \textbf{Charge component:}
The charge responses are
\begin{equation}
j_{2D}^{t}=\frac{1}{4\pi^{2}}\int d^{2}\bar{x}\text{tr}\left[\mathcal{F}_{xy}\mathcal{F}_{\bar{y}\bar{x}}+\mathcal{F}_{x\bar{x}}\mathcal{F}_{y\bar{y}}-\mathcal{F}_{x\bar{y}}\mathcal{F}_{y\bar{x}}\right]
\end{equation}
The first term is the Hall response for a Chern insulator. Specifically,
if $F_{xy}$ is just the magnetic field, this term becomes $j^{t}=(B_{z}/4\pi^{2})\int d^{2}\bar{x}F_{\bar{y}\bar{x}}=C_{1}B_{z}/2\pi$,
where $C_{1}$ is the first Chern number.
Consider the remaining terms
\begin{equation}
j_{2D}^{t}=\frac{1}{4\pi^{2}}\int d^{2}\bar{x}\text{tr}\left[\mathcal{F}_{x\bar{x}}\mathcal{F}_{y\bar{y}}-\mathcal{F}_{x\bar{y}}\mathcal{F}_{y\bar{x}}\right]\label{eqn:4DStrainTerms}
\end{equation}
In the simplest case of a single featureless, flat band, $f_{i\bar{j}}\propto\partial_{i}u_{j}$
where $\mathbf{u}$ is the displacement vector. This is because infinitesimal
motion $dx_{i}$ in the $i$ direction leads to a translation $dx_{j}=\partial_{i}u_{j}dx_{i}$
in the $j$ direction, which is, for Bloch wavefunctions, the same
as accumulating a $\bar{j}$-dependent phase $B_{0}\bar{j}dx_{j}$.
Hence $a_{i}=\bar{j}\partial_{i}u_{j}$ with our convention of $B_{0}=1$.
In this simple case, then, Eq. (\ref{eqn:4DStrainTerms}) becomes
\begin{equation}
j_{2D}^{t}=\frac{1}{4\pi^{2}}\int d^{2}\bar{x}\left(\left(1+\partial_{x}u_{x}\right)\left(1+\partial_{y}u_{y}\right)-\partial_{x}u_{y}\partial_{y}u_{x}\right)
\end{equation}
The expression inside the parentheses is the determinant of the deformation
gradient, that is, the area of the strained unit cell in units of
the original unit cell area. Hence the non-background terms are just
due to the change in the area of the unit cell. Adding features to
the bands will lead to corrections due to, for example, strain changing
the local density of states.
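As a quick sanity check of the unit-cell-area interpretation, the integrand above is term by term the determinant of the $2\times2$ deformation gradient $F_{ij}=\delta_{ij}+\partial_{j}u_{i}$. A minimal Python check with an arbitrarily chosen, purely illustrative displacement gradient:

```python
import numpy as np

# Illustrative (assumed) constant displacement gradients du_i/dx_j
dux_dx, dux_dy = 0.05, 0.02
duy_dx, duy_dy = 0.03, -0.01

# Deformation gradient F_ij = delta_ij + du_i/dx_j
F = np.array([[1 + dux_dx, dux_dy],
              [duy_dx, 1 + duy_dy]])

# Integrand of the charge response: (1 + dux/dx)(1 + duy/dy) - (dux/dy)(duy/dx)
integrand = (1 + dux_dx)*(1 + duy_dy) - dux_dy*duy_dx

print(abs(np.linalg.det(F) - integrand) < 1e-12)  # prints True
```

The two expressions agree identically, confirming that the non-background terms measure the strained unit cell area in units of the original one.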
We summarize the 4D responses in Table \ref{table:4Dresponses}.
\begin{table*}
\begin{tabular}{|c|c|}
\hline
\textbf{Current density component} & \textbf{Response}\tabularnewline
\hline
Real space & Change in polarization\tabularnewline
& Hall response\tabularnewline
& Change in polarization with strain\tabularnewline
\hline
$k$-space & Electric force\tabularnewline
& Sheared response to electric field\tabularnewline
& Lorentz force\tabularnewline
\hline
Charge density & Hall response\tabularnewline
& Change in unit cell area due to strain \tabularnewline
\hline
\end{tabular}\caption{Summary of responses from 4D phase space.}
\label{table:4Dresponses}
\end{table*}
\subsection{6D Phase Space}
\label{section:6+1Dresponses}
The action is given by
\begin{equation}
S=\frac{1}{192\pi^{3}}\int dtd^{6}x\text{tr}\left[C(x,\bar{x})\epsilon\mathcal{A}\partial\mathcal{A}\partial\mathcal{A}\partial\mathcal{A}+...\right]
\end{equation}
with $+...$ again representing the terms for a non-Abelian gauge
field. For simplicity of exposition and interpretation, we will assume
that the momentum-space and time components of $\mathcal{A}$ are
$U(N)$ and the real-space components are $U(1)$, that is, the latter
couple to all the bands in the same way. This assumption is not necessary
for our theory, however. We again assume that $C = 1$ uniformly to look at bulk responses.
There are 15 different responses in each component. If we separate
$\mathcal{F}_{x\bar{x}}$ into its background and non-background components,
for the spatial and momentum components we get an extra 7 terms for
a total of 22. For the charge component, there are 28. We sort them,
neglecting relative minus signs between the groups.
\noindent \textbf{Spatial components:}
Quasi-1D response (5 terms):
\begin{equation}
j_{3D}^{x}=\frac{1}{8\pi^{3}}\int d^{3}\bar{x}\text{tr}\left[\mathcal{F}_{t\bar{x}}\left(\left(F_{y\bar{y}}+f_{y\bar{y}}\right)\left(F_{z\bar{z}}+f_{z\bar{z}}\right)-\mathcal{F}_{y\bar{z}}\mathcal{F}_{z\bar{y}}\right)\right]
\end{equation}
By the same computation that was done for the charge response in 4D,
$\mathcal{F}_{y\bar{y}}\mathcal{F}_{z\bar{z}}-\mathcal{F}_{y\bar{z}}\mathcal{F}_{z\bar{y}}$
is the change in area perpendicular to the current. This response thus has the
form of the 1D real space response (time-varying polarization) times
the change in area perpendicular to the current.
Layered Chern insulator response (2 terms):
\begin{align}
j_{3D}^{x}=\frac{1}{8\pi^{3}}\int d^{3}\bar{x}\text{tr}\left[\mathcal{F}_{ty}\mathcal{F}_{\bar{x}\bar{y}}F_{z\bar{z}}+(y\leftrightarrow z)\right]
\end{align}
where $(y\leftrightarrow z)$ means to switch $y$ and $z$ as well
as $\bar{y}$ and $\bar{z}$. This is the Hall response corresponding
to thinking of the 3D system as 2D systems layered in momentum space.
Note that this includes the Hall response of a Weyl semimetal (WSM)\cite{ZyuninBurkovWeylTheta, ChenAxionResponse,HosurWeylReview,RanQHWeyl,QiWeylAnomaly}
appearing from its monopoles of $\mathcal{F}_{\bar{x}\bar{y}}$. This
can be seen by thinking of the (2-node) WSM as stacks of 2D insulators
parametrized by the momentum direction along which the Weyl nodes
are split; as shown in Fig. \ref{fig:FermiArcs}, each insulator lying between the nodes is a Chern insulator
and thus contributes to $\mathcal{F}_{\bar{x}\bar{y}}$ (for $k_{z}$-direction
Weyl node splitting). In this special case, integration yields
\begin{equation}
j_{3D}^{x}=\frac{1}{4\pi^{2}}\left(E_{y}\Delta k_{z}-E_{z}\Delta k_{y}\right)
\end{equation}
where $\Delta k_{i}$ is the splitting of the Weyl points in the $k_{i}$
direction.
\begin{figure}
\begin{centering}
\includegraphics[width=0.95\columnwidth]{FermiArcs}
\par\end{centering}
\caption{Slab of WSM with Weyl nodes separated along $\bar{x}$. Each slice in momentum space with fixed $\bar{x}$ can be characterized by a Chern number $C_1(\bar{x})$, which changes by unity across the Weyl nodes. Thus, the region between the nodes in the above figure is a series of Chern insulators. The edge states of these Chern insulators constitute the Fermi arcs, marked as thick red lines with an irregular shape. Note that the cones are only present as a cartoon to depict the position of the Weyl nodes; the vertical direction in the figure is $z$, and should not be confused with energy. Figure adapted from Ref. \cite{HosurWeylReview}.\label{fig:FermiArcs}}
\end{figure}
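The momentum-space-layering picture can be made concrete numerically. The Python sketch below (a hypothetical two-band lattice Weyl model of our own choosing, with nodes at $(0,0,\pm\pi/2)$) computes the slice Chern number $C_{1}(k_{z})$ at fixed $k_{z}$. The slices between the nodes (here the segment passing through the zone boundary) carry $|C_{1}|=1$ and contribute to the Hall response, while the remaining slices are trivial, reproducing the mechanism of Fig. \ref{fig:FermiArcs}.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def occupied(kx, ky, kz):
    """Lower band of an illustrative lattice Weyl model with nodes at (0,0,+-pi/2)."""
    H = (np.sin(kx)*SX + np.sin(ky)*SY
         + (2 + np.cos(kz) - np.cos(kx) - np.cos(ky))*SZ)
    return np.linalg.eigh(H)[1][:, 0]

def slice_chern(kz, N=30):
    """Chern number C1(kz) of the fixed-kz slice via Berry fluxes on an N x N grid."""
    ks = 2*np.pi*np.arange(N)/N
    u = np.array([[occupied(kx, ky, kz) for ky in ks] for kx in ks])
    C = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            W = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                 * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            C += np.angle(W)
    return C/(2*np.pi)
```

For this model, slices with $|k_{z}|>\pi/2$ return $C_{1}=\pm1$ and slices with $|k_{z}|<\pi/2$ return $0$; summing the nonzero slices reproduces the node splitting $\Delta k_{z}$ that enters the Hall current formula above.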
Topological magnetoelectric effect (3 terms):
\begin{equation}
j_{3D}^{x}=\frac{1}{8\pi^{3}}\int d^{3}\bar{x}\text{tr}\left[\left(\mathcal{F}_{\bar{x}t}\mathcal{F}_{\bar{y}\bar{z}}-\mathcal{F}_{\bar{x}\bar{y}}\mathcal{F}_{t\bar{z}}+\mathcal{F}_{\bar{x}\bar{z}}\mathcal{F}_{t\bar{y}}\right)\mathcal{F}_{yz}\right]\label{eqn:unsimplifiedTME}
\end{equation}
Assuming that $\mathcal{F}_{yz}$ does not depend on momentum for
simplicity, this term is an $x$-direction current proportional to $B_{x}$.
Indeed, if we assume that the real space system is gapped so that
there are no monopoles of Berry curvature, simple but tedious manipulations
(see Appendix \ref{app:TME}) turn Eq. (\ref{eqn:unsimplifiedTME})
into
\begin{align}
j_{3D}^{x} & =-\frac{1}{16\pi^{3}}\partial_{t}\int d^{3}\bar{x}\epsilon^{IJK}\text{tr}\left[a_{I}\partial_{J}a_{K}+\frac{2}{3}a_{I}a_{J}a_{K}\right]B_{x}\nonumber \\
& \equiv-\frac{1}{2\pi}(\partial_{t}P_{3})B_{x}\label{eqn:TME}
\end{align}
where $I,J,K$ run over $\bar{x},\bar{y},\bar{z}$ and $P_{3}$ is
the 3-dimensional analog of charge polarization. Eq. (\ref{eqn:TME})
is precisely the contribution of topological magnetoelectric effect
to $j^{x}$.\cite{QiHughesZhangTFT}
Topological insulator (TI)-type anomalous Hall response (6 terms):
\begin{align}
j_{3D}^{x}=\frac{1}{8\pi^{3}}\int d^{3}\bar{x}\text{tr}\big[\mathcal{F}_{tz}\left(\mathcal{F}_{\bar{x}\bar{y}}\mathcal{F}_{y\bar{z}}-\mathcal{F}_{\bar{y}\bar{z}}\mathcal{F}_{\bar{x}y}\right)\nonumber \\
+\mathcal{F}_{ty}\mathcal{F}_{\bar{x}\bar{y}}f_{z\bar{z}}+(y\leftrightarrow z)\big]\label{eqn:strainedChernResp}
\end{align}
If we choose a gauge such that the real space Berry connections do
not depend on momentum, then by similar logic to the topological magnetoelectric
effect terms (and with similar assumptions), we can manipulate this
contribution into the form
\begin{equation}
j_{3D}^{x}=-\frac{1}{2\pi}\left(E_{y}\partial_{y}-E_{z}\partial_{z}\right)P_{3}
\end{equation}
Here $E$ is the real space electric field. This is the anomalous
Hall effect that appears in a 3D TI.\cite{QiHughesZhangTFT} It differs
from the Hall effect that appears as a quasi-2D response in that it
does not originate from having a nonzero total Chern number at each
2D slice of momentum space.
Sheared polarization responses (6 terms):
\begin{align}
j_{3D}^{x} & =\frac{1}{8\pi^{3}}\int d^{3}\bar{x}\text{tr}\left[\mathcal{F}_{t\bar{y}}\left(\mathcal{F}_{\bar{x}z}\mathcal{F}_{\bar{z}y}+\mathcal{F}_{\bar{x}y}\left(F_{z\bar{z}}+f_{z\bar{z}}\right)\right)\right.\nonumber \\
 & \left.+(y\leftrightarrow z)\right]
\end{align}
The first term here corresponds to a current flowing in $y$ due to
a changing polarization, but that current being redirected into the
$z$ and then the $x$ direction by shear. The second term is the
same current in $y$ being redirected into the $x$ direction together
with a change in the perpendicular area due to uniaxial strain. These are the 3D real space analogues of the 2D real space response illustrated in Fig. \ref{fig:strainedPolarization}.
\noindent $\mathbf{k}$\textbf{-space components:}
Quasi-1D response (5 terms):
\begin{align}
j^{\bar{x}}=\frac{1}{8\pi^{3}}\text{tr}\left[\mathcal{F}_{tx}\left((F_{y\bar{y}}+f_{y\bar{y}})(F_{z\bar{z}}+f_{z\bar{z}})-\mathcal{F}_{y\bar{z}}\mathcal{F}_{z\bar{y}}\right)\right]
\end{align}
As in the real space response, this is the 1D response accounting
for changes in the area perpendicular to the current.
Strained electric forces (6 terms):
\begin{align}
j^{\bar{x}}=\frac{1}{8\pi^{3}}\text{tr} & \left[\mathcal{F}_{ty}\left(\mathcal{F}_{x\bar{z}}\mathcal{F}_{\bar{y}z}+\mathcal{F}_{x\bar{y}}(F_{z\bar{z}}+f_{z\bar{z}})\right)+(y\leftrightarrow z)\right]
\end{align}
These terms correspond to the usual electric force in the $y$ direction,
which is then redirected to the $x$ direction by shear, corrected
for the change in the area perpendicular to the current.
Strained polarization/Lorentz force responses (8 terms):
\begin{align}
j^{\bar{x}}=\frac{1}{8\pi^{3}}\text{tr} & \left[\mathcal{F}_{t\bar{y}}\left(\mathcal{F}_{zx}\mathcal{F}_{y\bar{z}}+\mathcal{F}_{x\bar{z}}\mathcal{F}_{yz}+\mathcal{F}_{xy}(F_{z\bar{z}}+f_{z\bar{z}})\right)\right.\nonumber \\
& \left.+(y\leftrightarrow z)\right]
\end{align}
The first two terms say that if the polarization changes in the $y$
direction, then either shear or the Lorentz force can change this
into a current in the $z$ direction. That current can then be redirected
by the Lorentz force or shear (respectively) to the $x$ direction.
The third term is the same polarization-induced current, redirected
into the $x$ direction by the Lorentz force, corrected for the
change in the area perpendicular to the new current.
WSM-type $\mathbf{E}\cdot\mathbf{B}$ charge pumping (3 terms):
\begin{equation}
j^{\bar{x}}=\frac{1}{8\pi^{3}}\text{tr}\left[\left(\mathcal{F}_{tx}\mathcal{F}_{yz}+\mathcal{F}_{ty}\mathcal{F}_{zx}+\mathcal{F}_{tz}\mathcal{F}_{xy}\right)\mathcal{F}_{\bar{y}\bar{z}}\right]
\end{equation}
Assuming the real-space field strengths are $k$-independent, then
this term is $\mathbf{E}\cdot\mathbf{B}$ times the Berry curvature.
If we integrate over $\bar{y}$ and $\bar{z}$, we find
\begin{equation}
\int d\bar{y}d\bar{z}j^{\bar{x}}=\frac{1}{4\pi^{2}}C_{1}(\bar{x})\mathbf{E}\cdot\mathbf{B}
\end{equation}
where $C_{1}(\bar{x})$ is the Chern number of the slice of the Brillouin
zone at fixed $\bar{x}$. For the case of a WSM with Weyl points split
along the $\bar{x}$ direction, $C_{1}(\bar{x})$ is nonzero between
the Weyl points (see Fig. \ref{fig:FermiArcs}), so this response is a current from one Weyl point
to the other. This is exactly the chiral anomaly\cite{ZyuninBurkovWeylTheta,ChenAxionResponse, HosurWeylReview,RanQHWeyl,QiWeylAnomaly,NielsenABJ,VolovikBook}, which says that
an $\mathbf{E}\cdot\mathbf{B}$ field in a WSM pumps charge from one
Weyl point to the other.
\noindent \textbf{Charge component:}
All 16 terms which only contain mixed field strengths $\mathcal{F}_{i\bar{j}}$
combine to form the unit cell volume, corrected for strain. The other
two types of response are:
Layered Chern insulator Hall response (3 terms):
\begin{equation}
j^{0}=\frac{1}{8\pi^{3}}\int d^{3}\bar{x}\text{tr}\left[\mathcal{F}_{xz}\mathcal{F}_{\bar{z}\bar{x}}F_{y\bar{y}}+\text{ perm.}\right]
\end{equation}
where ``perm.'' indicates terms created by cyclically permuting
$(x,y,z)$ and $(\bar{x},\bar{y},\bar{z})$. This is the charge density
counterpart to the spatial layered Chern insulator Hall response;
it analogously comes from adding the Hall response of each subsystem
at fixed $k_{i}$ to a magnetic field $B_{i}$.
TI-type Hall response (9 terms):
\begin{align}
j^{0}=\frac{1}{8\pi^{3}}\int d^{3}\bar{x}\text{tr} & \left[\mathcal{F}_{xz}\left(\mathcal{F}_{\bar{y}\bar{z}}\mathcal{F}_{y\bar{x}}+\mathcal{F}_{\bar{x}\bar{y}}\mathcal{F}_{y\bar{z}}+\mathcal{F}_{\bar{z}\bar{x}}f_{y\bar{y}}\right)\right.\nonumber \\
& \left.+\text{ perm.}\right]
\end{align}
By the same methods used to derive Eq. (\ref{eqn:TME}), these terms
can be massaged into the form
\begin{equation}
j^{0}=\frac{1}{2\pi}\mathbf{B}\cdot\nabla P_{3}
\end{equation}
This is exactly the charge component of the Hall response that appears
in a TI\cite{QiHughesZhangTFT}.
We summarize the 6D responses in Table \ref{table:6Dresponses}.
\begin{table*}
\begin{tabular}{|c|c|}
\hline
\textbf{Current component} & \textbf{Response}\tabularnewline
\hline
Real space & Quasi-1D responses\tabularnewline
& Quasi-2D (layered Chern insulator) response\tabularnewline
& Topological magnetoelectric effect\tabularnewline
& TI-like anomalous Hall response\tabularnewline
& Change in polarization with strain\tabularnewline
\hline
$k$-space & Quasi-1D response\tabularnewline
& Electric fields with strain\tabularnewline
& Change in polarization plus Lorentz force with strain\tabularnewline
& WSM-like $\mathbf{E}\cdot\mathbf{B}$ charge pumping\tabularnewline
\hline
Charge density & Density response to change in unit cell volume \tabularnewline
& Layered Chern insulator Hall response \tabularnewline
& TI-like anomalous Hall response\tabularnewline
\hline
\end{tabular}\caption{Summary of 6D phase space responses.}
\label{table:6Dresponses}
\end{table*}
\section{Anomalies}
\label{section:anomalies}
In the previous section, we enumerated the phase space bulk responses,
which, as we have seen, correspond to the topological responses of
filled states in real space. This includes the responses of insulators and semimetals, as well as those responses of metals that involve all the occupied states, such as the anomalous Hall effect. We now wish to describe the
topological features of Fermi surfaces and real space system edges. We will
also approach the response of semimetals from another perspective.
All these features take the form of anomalies in phase space.
Given the CS theory for a phase space system, such as that which appears
in Eq. (\ref{eqn:ChernSimons2D}) or its higher-dimensional and/or
non-Abelian generalization, suppose that $\partial_{i}C\neq0$ for
some coordinate $i$. This means that the phase space system contains
an edge, that is, the real space system has a Fermi surface or an
edge. Then in general, the responses of the system will depend on
details; for example, edge currents of quantum Hall systems depend
on the non-universal edge mode velocity. However, there will be a
universal anomaly (or lack thereof) along such edges. We see this
from the anomaly resulting from the phase-space CS term in $2n$D:
\begin{equation}
\sum_{\mu\neq i}\partial_{\mu}j^{\mu}=\frac{1}{n!2^{2n}\pi^{n}}\partial_{i}C\epsilon^{i\alpha_{1}\alpha_{2}...\alpha_{2n}}\text{tr}\left[\mathcal{F}_{\alpha_{1}\alpha_{2}}...\mathcal{F}_{\alpha_{2n-1}\alpha_{2n}}\right]\label{eq:anomaly-general}
\end{equation}
From here, integration over the appropriate phase space directions
determines the anomalies in the real system. We list some physically
interesting effects below.
\subsection{Fermi Surface Anomalies}
In Section \ref{section:2DExample}, we computed the chiral anomaly
in 1D momentum space by imposing edges at $\bar{x}=\pm k_{F}$ in
2D phase space. Integrating over $x$ yielded the fact that an electric
field pumps electrons in states near $+k_{F}$ into states near $-k_{F}$,
or vice versa. At a down-to-earth level, this simply corresponds to
tilting of the 1D Fermi surface in an electric field due to semiclassical
motion of the electrons.
This idea is straightforward to generalize to higher dimensions. For
instance, an open Fermi surface in 2D is obtained by confining the
4D phase space system between $\bar{x}=\pm k_{F}$ while leaving the
other three directions infinite, while a 3D spherical Fermi surface
results from making the $\bar{x}$, $\bar{y}$ and $\bar{z}$ directions
in 6D phase space finite under the constraint $\bar{x}^{2}+\bar{y}^{2}+\bar{z}^{2}=k_{F}^{2}$
and leaving the $x$, $y$ and $z$ directions unconstrained. For
each phase space geometry, the corresponding anomaly characterizes
properties of the resultant Fermi surface. Importantly, if a special
object such as a Dirac or a Weyl point is buried under the Fermi surface,
its observable effects in local transport phenomena should emerge
from the anomaly equation.
We demonstrate this first for a 3D spherical Fermi surface, which
carries a Chern number in general and exhibits a chiral anomaly proportional
to the Chern number and the electromagnetic field $\mathbf{E}\cdot \mathbf{B}$.
The most well-known occurrence of this phenomenon is in Weyl semimetals.
We begin with the anomaly equation in 6D phase space. In the absence
of any strains and ignoring quasi-lower-dimensional terms (i.e., terms
such as $F_{x\bar{x}}$ that contain the background field), it reads
\begin{alignat}{1}
& \sum_{\mu}\partial_{\mu}j^{\mu}-\partial_{\bar{r}}j^{\bar{r}}=\nonumber \\
& \,\,\,\frac{1}{8\pi^{3}}\delta(\bar{r}-k_{F})\mathcal{F}_{\bar{\theta}\bar{\phi}}\left(\mathcal{F}_{tx}\mathcal{F}_{yz}+\mathcal{F}_{ty}\mathcal{F}_{zx}+\mathcal{F}_{tz}\mathcal{F}_{xy}\right)
\end{alignat}
where $(\bar{r},\bar{\theta},\bar{\phi})$ are the spherical
coordinates corresponding to $(\bar{x},\overline{y},\overline{z})$.
Integrating over the barred coordinates immediately yields the chiral
anomaly in Weyl semimetals:
\begin{equation}
\sum_{\mu=t,x,y,z}\partial_{\mu}j_{3D}^{\mu}=\frac{1}{4\pi^2}C_{FS}\mathbf{E}\cdot\mathbf{B}\label{eq:Weyl-anomaly}
\end{equation}
where $C_{FS}=\frac{1}{2\pi}\oint\mathrm{d}\overline{\theta}\mathrm{d}\overline{\phi}\sin\overline{\theta}\mathcal{F}_{\overline{\theta}\overline{\phi}}\in\mathbb{Z}$
is the Chern number of the Fermi surface and equals the total chirality
of all Weyl points enclosed by it.
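The Fermi-surface Chern number $C_{FS}$ can be computed numerically by tiling the sphere $|\mathbf{k}|=k_{F}$ with plaquettes and summing the Berry fluxes of the filled band; because the surface is closed, the lattice sum is exactly quantized. The Python sketch below assumes, purely for illustration, a single ideal Weyl node $H=\mathbf{k}\cdot\boldsymbol{\sigma}$, and returns $\pm1$, the chirality of the enclosed node up to orientation conventions.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def occupied(k):
    """Filled band of a single hypothetical Weyl node H = k . sigma."""
    H = k[0]*SX + k[1]*SY + k[2]*SZ
    return np.linalg.eigh(H)[1][:, 0]

def fermi_surface_chern(kF=1.0, N=30):
    """Total Berry flux through the Fermi sphere |k| = kF, in units of 2*pi."""
    thetas = np.linspace(0.0, np.pi, N + 1)
    phis = 2*np.pi*np.arange(N)/N
    u = np.empty((N + 1, N, 2), dtype=complex)
    for i, t in enumerate(thetas):
        for j, p in enumerate(phis):
            u[i, j] = occupied(kF*np.array([np.sin(t)*np.cos(p),
                                            np.sin(t)*np.sin(p),
                                            np.cos(t)]))
    flux = 0.0
    for i in range(N):
        for j in range(N):
            jp = (j + 1) % N
            # Berry flux through one plaquette of the spherical grid
            W = (np.vdot(u[i, j], u[i, jp]) * np.vdot(u[i, jp], u[i+1, jp])
                 * np.vdot(u[i+1, jp], u[i+1, j]) * np.vdot(u[i+1, j], u[i, j]))
            flux += np.angle(W)
    return flux/(2*np.pi)
```

Every interior link appears in two plaquettes with opposite orientation, and the links at the poles are trivial, so the result is an integer to machine precision: the total chirality enclosed by the Fermi surface, as in Eq. (\ref{eq:Weyl-anomaly}).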
Unlike in 3D, in 2D systems the one-dimensional Fermi surface carries a non-quantized Berry phase
instead of a Chern number. An analogous analysis, i.e.,
starting with the anomaly equation in 4D phase space with a $(\overline{x},\overline{y})$
boundary that satisfies $\overline{x}^{2}+\overline{y}^{2}=k_{F}^{2}$
and integrating over $(\overline{x},\overline{y})$ gives
\begin{equation}
\sum_{\mu=t,x,y}\partial_{\mu}j_{2D}^{\mu}=\frac{1}{4\pi^2}B_{z}\partial_{t}\gamma\label{eq:AH-metal}
\end{equation}
ignoring strain and quasi-lower-dimensional terms, where $\gamma=\ointop_{FS}\mathbf{a_{\bar{r}}}\cdot\mathrm{d}\mathbf{\bar{r}}$
is the Berry phase on the Fermi surface. Eq. (\ref{eq:AH-metal}) is
the statement that adiabatically changing the Hall conductivity of
an anomalous Hall metal in a magnetic field creates charged excitations
bound to the field.
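The Fermi-surface Berry phase $\gamma$ entering Eq. (\ref{eq:AH-metal}) can be evaluated as a discrete Wilson loop of wavefunction overlaps around the Fermi circle. The Python sketch below uses a hypothetical massive Dirac model of our own choosing, $H=k_{x}\sigma_{x}+k_{y}\sigma_{y}+m\sigma_{z}$, for which the analytic Berry phase of the occupied band is $\pi(1-m/\sqrt{k_{F}^{2}+m^{2}})$ modulo $2\pi$ and up to orientation.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def berry_phase_fermi_circle(kF=1.0, m=0.5, N=2000):
    """Wilson-loop Berry phase of the occupied band around the circle |k| = kF."""
    phis = 2*np.pi*np.arange(N)/N
    states = [np.linalg.eigh(kF*np.cos(p)*SX + kF*np.sin(p)*SY + m*SZ)[1][:, 0]
              for p in phis]
    W = 1.0 + 0.0j
    for n in range(N):
        # arbitrary phases of individual eigenvectors cancel in the closed product
        W *= np.vdot(states[n], states[(n + 1) % N])
    return np.angle(W)
```

For $m\to0$ the phase tends to $\pm\pi$, the familiar Dirac-point Berry phase, and for $m\to\infty$ it vanishes; in between it is non-quantized, as emphasized in the text.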
In the presence of strains, both (\ref{eq:Weyl-anomaly}) and (\ref{eq:AH-metal})
contain more terms on their right hand sides. We will encounter these
terms in the next subsection when we discuss the effects of dislocations.
Before moving on, however, we wish to stress that the physical anomaly
in a given dimension is independent of the topology of the
Fermi surface. However, certain topologies are more convenient for
studying a given physical anomaly. For instance, the chiral anomaly
in Weyl metals is easier to see for a spherical Fermi surface, but
it can equally well be derived for open Fermi surfaces that span
one or two directions in the Brillouin zone.
\subsection{Anomalies in Real Space}
\subsubsection{Real space edge}
The simplest example of a real space edge anomaly was derived in Sec
\ref{section:2DExample}, where we imposed $x$-direction edges in
2D phase space and showed that a time-dependent polarization in 1D
can be used to pump charge across the length of the chain. As a more
non-trivial example, consider a real, single-band 2D Chern insulator
which occupies $x>0$. Then in 4D phase space, the anomaly equation
(\ref{eq:anomaly-general}) with $C(x)=\Theta(x)$ reads
\begin{align}
\partial_{t}\rho & +\partial_{y}j^{y}+\partial_{\bar{x}}j^{\bar{x}}+\partial_{\bar{y}}j^{\bar{y}}=\nonumber \\
& \frac{1}{4\pi^{2}}\delta(x)\left(\mathcal{F}_{ty}\mathcal{F}_{\bar{x}\bar{y}}-\mathcal{F}_{t\bar{x}}\mathcal{F}_{y\bar{y}}+\mathcal{F}_{t\bar{y}}\mathcal{F}_{y\bar{x}}\right)
\end{align}
Integrating $\partial_{\bar{x}}j^{\bar{x}}+\partial_{\bar{y}}j^{\bar{y}}$
over momentum space gives zero since there is no boundary in those
directions. Hence, integrating the previous equation over momentum
space gives
\begin{equation}
\partial_{t}\rho_{2D}+\partial_{y}j_{2D}^{y}=\frac{1}{4\pi^{2}}\delta(x)\int d^{2}k\mathcal{F}_{\bar{x}\bar{y}}E_{y}=\frac{1}{2\pi}\delta(x)C_{1}E_{y}\label{eqn:ChernAnomaly}
\end{equation}
with $C_{1}$ the first Chern number of the occupied band of the 2D
Hamiltonian. We have ignored quasi-1D terms and terms containing strain.
Eq. (\ref{eqn:ChernAnomaly}) is recognizable as the usual anomaly
for a 2D Chern insulator where an electric field parallel to the edge
builds up a charge density along that edge.
\subsubsection{Dislocations}
\textbf{4D phase space:} The simplest example of a dislocation is
an edge dislocation in 2D real space. The key feature of the dislocation,
as we discussed in Section \ref{subsection:interpretation}, is that,
far from the dislocation line itself, electrons accumulate a Berry
phase of $\mathbf{k}\cdot\mathbf{b}$ upon encircling the
dislocation. We can thus model the dislocation by a Berry connection
$(a_{r},a_{\theta})=(0,\frac{\mathbf{b}\cdot\mathbf{k}}{2\pi})$,
leading to a $\mathbf{k}$-independent Berry curvature $\mathcal{F}_{\theta\bar{i}}=b_{i}/2\pi r$.
Our theory breaks down at the dislocation itself because the system
changes quickly on the scale of a lattice constant. We can avoid this
problem by keeping the Berry connection but surrounding the dislocation
by a finite size puncture in the system of radius $r_{0}$, i.e. choose
$C=\Theta(r-r_{0})$ with $r$ the radial coordinate in the $xy$-plane.
The resulting anomaly equation reads
\begin{equation}
\sum_{\mu}\partial_{\mu}j^{\mu}-\partial_{r}j^{r}=\frac{1}{8\pi^3}\left(\mathcal{F}_{t\overline{x}}b_{y}-\mathcal{F}_{t\overline{y}}b_{x}\right)
\end{equation}
plus quasi-1D terms on the right-hand-side, which we ignore. Integrating
over $\overline{x},\overline{y}$ and $\theta$ gives the charge radiating
from the core of an edge dislocation in the presence of a time-dependent
polarization:
\begin{equation}
\partial_{t}\rho_{2D}=\hat{\mathbf{z}}\cdot\left(\mathbf{b}\times\partial_{t}\mathbf{P}\right)
\end{equation}
This is shown in Fig. \ref{fig:2D-edge-disloc}, which makes the physical
picture of the anomaly clear in the limit of weakly coupled chains
perpendicular to $\mathbf{b}$; the core of the dislocation is
the end of such a chain, so polarizing that chain adds charge to its end.
The non-trivial result is that the extra charge remains bound to the
dislocation core and does not leak into other chains even when they
are strongly coupled.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{2D_Edge_Disloc}
\par\end{centering}
\caption{An edge dislocation in 2D with Burgers vector $\mathbf{b}$.
In the presence of a polarization $\mathbf{P}\perp\mathbf{b}$,
charge gets accumulated at the core of the dislocation, shown by the
red dot.\label{fig:2D-edge-disloc}}
\end{figure}
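As a minimal numerical illustration of this response (with an assumed Burgers vector and polarization rate, not values from the text), the charge accumulation rate at the core is the $z$-component of $\mathbf{b}\times\partial_{t}\mathbf{P}$, so only the component of $\partial_{t}\mathbf{P}$ perpendicular to $\mathbf{b}$ binds charge:

```python
import numpy as np

b  = np.array([1.0, 0.0, 0.0])   # illustrative Burgers vector, one unit along x
dP = np.array([0.0, 0.2, 0.0])   # assumed rate of change of polarization, along y

# Charge accumulation rate at the dislocation core: z . (b x dP/dt)
rate = np.cross(b, dP)[2]
print(rate)  # prints 0.2
```

A polarization rate parallel to $\mathbf{b}$ gives zero, consistent with the weakly-coupled-chain picture in which only chains perpendicular to $\mathbf{b}$ terminate at the core.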
\textbf{6D phase space: }A similar analysis for a dislocation in 3D
real space running along $\hat{\mathbf{z}}$ and with Burgers vector
$\mathbf{b}$ gives
\begin{align}
& \partial_{t}\rho+\frac{1}{r}\partial_{\theta}j^{\theta}+\partial_{z}j^{z}\nonumber \\
& =\frac{\delta(r-r_{0})}{8\pi^{3}}\int d^{3}\bar{x}\left(\mathcal{F}_{\bar{x}\bar{y}}(r\mathcal{F}_{\theta\bar{z}})+\mathcal{F}_{\bar{y}\bar{z}}(r\mathcal{F}_{\theta\bar{x}})+\mathcal{F}_{\bar{z}\bar{x}}(r\mathcal{F}_{\theta\bar{y}})\right)\mathcal{F}_{tz}\nonumber \\
& =-\frac{\delta(r-r_{0})}{8\pi^{3}}E_{z}\int d^{3}\bar{x}\mathbf{\Omega}\cdot\mathbf{b}\label{eq:3D-disloc-anomaly}
\end{align}
where $\Omega_{i}=\frac{1}{2}\epsilon_{ijk}\mathcal{F}_{\bar{j}\bar{k}}$
is the Berry curvature of the bands in the plane perpendicular to
$i$.
To understand (\ref{eq:3D-disloc-anomaly}), let us first consider
a layered Chern insulator, that is, a system composed of layers of
Chern insulators stacked along a certain direction. The integral in
(\ref{eq:3D-disloc-anomaly}) then gives the Chern number of the layers
in each direction, so
\begin{equation}
\partial_{t}\rho+\frac{1}{r}\partial_{\theta}j^{\theta}+\partial_{z}j^{z}=-\frac{\delta(r-r_{0})}{\pi}E_{z}\mathbf{C}\cdot\mathbf{b}\label{eq:Chern-layers-disloc}
\end{equation}
where $C_{i}=\frac{1}{8\pi^{2}}\int\mathrm{d}^{3}\overline{x}\Omega_{i}$.
Now, add a dislocation running along $\hat{\mathbf{z}}$ with Burgers
vector $\mathbf{b}$. Edge dislocations are defined by $\mathbf{b}\perp\hat{\mathbf{z}}$
whereas screw dislocations have $\mathbf{b}\parallel\hat{\mathbf{z}}$.
The two scenarios are shown in Fig \ref{fig:3D-disloc}. Eq (\ref{eq:Chern-layers-disloc})
says that in either case, there exists a chiral mode along the dislocation
that participates in an anomaly in response to $E_{z}$. We can understand
this as follows.
An edge dislocation can be thought of as a semi-infinite sheet perpendicular
to $\mathbf{b}$ and unbounded along $\hat{\mathbf{z}}$ inserted
into the 3D lattice. If the sheet has a Chern number, we expect it
to have a chiral mode along $\hat{\mathbf{z}}$. For weakly coupled
sheets, this is precisely the chiral mode along the edge dislocation.
For a screw dislocation, the existence of a chiral dislocation mode
follows from an argument adapted from one that predicts helical dislocation
modes in weak topological insulators\cite{VishwanathWeakDisloc, HughesMajoranaDisloc}.
Suppose that our system is of finite size in the $z$ direction. Then
on each surface, there is a semi-infinite edge emerging from the dislocation.
However, this edge must carry a chiral mode since the surface layer
is a Chern insulator. By charge conservation, this chiral mode cannot
terminate at the dislocation, so the chiral mode must proceed along
the dislocation to the other surface. Moreover, in each case, the
chiral mode is expected to survive for strongly coupled layers as
well, where the system is better thought of as stacked sheets in momentum
space and is typically termed an axion insulator. This is because
layered Chern insulators and axion insulators are actually the same
phase -- there is no phase transition as the interlayer coupling is
strengthened -- so their topological defects such as dislocations
have qualitatively similar behavior. Indeed, the presence of a chiral
mode was shown explicitly for an axion insulator created from a charge
density wave instability of a WSM in Ref. \onlinecite{Wang2013}.
\begin{figure}
\centering{}\includegraphics[clip,width=0.5\columnwidth,height=4.5cm]{DislocModeGraphic}\includegraphics[width=0.5\columnwidth]{3D_Edge_Disloc}\caption{Screw (left) and edge (right) dislocations in 3D. Dislocations in
an axion insulator harbor chiral modes, denoted by red lines in both
figures. In the screw dislocation, thick black lines represent the
standard chiral edge mode. The screw dislocation geometry with Weyl
nodes split along the screw axis was used for the numerical results
presented in Fig \ref{fig:Disloc-numerics}.\label{fig:3D-disloc}}
\end{figure}
\textbf{Spectral flow due to dislocations in Weyl semimetals:} Having
seen examples of anomalies being the universal feature of gapless
systems, we use our theory's anomaly machinery to derive a new result:
prediction of an anomaly at dislocation lines in a WSM. This is closely
related to the case of a layered Chern insulator (axion insulator)
just discussed.
Consider a WSM with two nodes split by $\mathbf{K}$ (and thus
having broken time reversal symmetry) with a dislocation along $z$.
In contrast to the layered Chern insulators or axion insulators, WSMs
have a gapless bulk and thus, cannot support localized modes the same
way that the former do. However, our theory allows us to confirm that there is indeed an
anomaly at the dislocation in a WSM. The anomaly calculation is identical
to the axion insulator case, except that $\frac{1}{4\pi}\int\mathrm{d}^{3}x\Omega_{i}=K_{i}$
instead of $2\pi C_{i}$. The result is that the anomaly is $\delta(r-r_{0})E_{z}\mathbf{K}\cdot\mathbf{b}/2\pi$,
which reflects the fact that chiral modes appear only in the region
of momentum space between the Weyl nodes, where the Chern number of
the layers is $\pm1$. From now on, we assume $\mathbf{K}=K\hat{\mathbf{z}}$
for concreteness.
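The claim that the anomaly-carrying modes come only from the momentum-space region between the Weyl nodes, where the layer Chern number is $\pm1$, can be checked directly on a lattice. The sketch below is our own illustration (the two-band model and all parameters are our choices, not taken from the text): it computes the Chern number of fixed-$k_{z}$ slices of $H(\mathbf{k})=\sin k_{x}\,\sigma_{x}+\sin k_{y}\,\sigma_{y}+(2-\cos k_{x}-\cos k_{y}-\cos k_{z})\sigma_{z}$, which hosts Weyl nodes at $k_{z}=\pm\pi/2$, using the Fukui--Hatsugai lattice algorithm.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def h(kx, ky, kz):
    # Two-band lattice Weyl Hamiltonian; nodes sit at k = (0, 0, +/- pi/2).
    mass = 2.0 - np.cos(kx) - np.cos(ky) - np.cos(kz)
    return np.sin(kx) * SX + np.sin(ky) * SY + mass * SZ

def slice_chern(kz, n=24):
    # Fukui-Hatsugai lattice Chern number of the occupied band of the
    # 2D Brillouin-zone slice at fixed kz.
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h(kx, ky, kz))[1][:, 0]  # lower band
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    flux = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            flux += np.angle(link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                             * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
    return int(round(flux / (2 * np.pi)))
```

Slices with $|k_{z}|<\pi/2$ (between the nodes) return Chern number $\pm1$, while slices with $|k_{z}|>\pi/2$ return $0$, so a dislocation threads chiral modes only from the former region.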
The physical interpretation of this anomaly is more subtle for the
WSM than the axion insulator. In the latter case, due to the bulk
gap, the anomaly means that there is a chiral zero mode on the dislocation.
In the WSM case, there is no bulk gap. Furthermore, if the region
carrying a nonzero Chern number is near $k_{z}=0$, then that region
sees only small perturbations from the dislocation because the dislocation
acts like a flux proportional to $k_{z}$. Hence we should not necessarily
expect a zero mode on the dislocation. On the other hand, if this region is located near $k_{z}=\pi$, then there may be such a zero
mode. In general, however, the existence of a localized zero mode
is not guaranteed.
Since the anomaly need not imply a localized zero mode, we numerically
solved a simple $\mathbf{k}\cdot\mathbf{p}$ model for a WSM in the
presence of a dislocation in order to directly verify the anomaly.
The Hamiltonian we used is
\begin{align}
H= & \left[M_{0}+M_{1}k_{z}^{2}+M_{2}(k_{x}^{2}+k_{y}^{2})\right]\Gamma_{5}+L_{1}k_{z}\Gamma_{4}\nonumber \\
& +L_{2}\left(k_{y}\Gamma_{1}-k_{x}\Gamma_{2}\right)+U_{0}\Gamma_{12}\label{eqn:WSMHam}
\end{align}
Here the anticommuting $\Gamma$ matrices are defined by $\Gamma_{1,2,3}=\sigma_{x,y,z}\tau_{x}$,
$\Gamma_{4}=\tau_{y}$, $\Gamma_{5}=\tau_{z}$, and $\Gamma_{ij}=[\Gamma_{i},\Gamma_{j}]/2i$
where $\sigma$ is a spin index and $\tau$ is an orbital index. This
model leads to Weyl points at $\mathbf{k}=\pm|U_{0}|/L_{1}\mathbf{\hat{z}}$
when the quadratic term is neglected. This model has been previously
investigated in a radial geometry\cite{QiWeylAnomaly} with no dislocation.
The only effect of a screw dislocation at $r=0$ with Burgers vector
$b\mathbf{\hat{z}}$ is that the dependence of the components of the
wavefunction on the in-plane angle $\theta$ changes from $e^{in\theta}$
to $e^{i(l+bk_{z}/2\pi)\theta}$, where the half-integer $l$ is the
eigenvalue of $L_{z}$ in the absence of the dislocation.
\begin{figure}
\subfigure[ ]{ \includegraphics[width=4cm]{WSMNoDisloc_new.jpg}
\label{fig:WSMNoDisloc} } \subfigure[ ]{ \includegraphics[width=4cm]{WSMDisloc_new.jpg}
\label{fig:WSMDisloc}} \subfigure[ ]{ \includegraphics[width=4cm]{AxionInsulatorDisloc_new.jpg}
\label{fig:axionInsulatorDisloc} } \subfigure[ ]{ \includegraphics[width=4cm]{WSMNearPi_new.jpg}
\label{fig:WSMDislocNearPi}} \caption{Band structure of the lattice regularized version of Eqn. (\ref{eqn:WSMHam})
in a cylindrical geometry. Color corresponds to $\langle r\rangle$
with the dislocation at $r=0$; red indicates localization on the
dislocation and blue is localization on the outer boundary. Parameters
are $M_{0}=0$, $M_{1}=0.342$ eV \AA{}$^{2}$, $M_{2}=18.25$ eV
\AA{}$^{2}$, $L_{1}=1.33$ eV \AA{}, $L_{2}=2.82$ eV \AA{}, $R=120$
radial sites, $l=1/2$ angular momentum unless otherwise stated. Note
that due to the dislocation the system is not periodic in $k_{z}$
at fixed angular momentum. (a) WSM phase ($U_{0}=1.3$ eV), no dislocation.
(b) Same as (a), but with dislocation. (c) Axion insulator phase ($U_{0}=1.7$
eV) with dislocation. (d) WSM phase ($U_{0}=-1.3$ eV, $M_{1}=-0.342$
eV \AA{}$^{2}$, $M_{0}=1.4$ eV) with topologically nontrivial
BZ slices centered at $k_{z}=\pi$ and a dislocation. \label{fig:Disloc-numerics}}
\end{figure}
We solved the discretized version of this model for a cylinder of
size $R=120$ sites at fixed angular momentum $l=1/2+k_{z}/2\pi$.
For comparison, we show the band structure in the WSM phase with no
dislocation in Fig. \ref{fig:WSMNoDisloc}. The mode localized near
$r=0$ (in blue) is always at higher energy than the Fermi arc, and
is not topological. Adding the dislocation, we see in Fig. \ref{fig:WSMDisloc}
that now the $r=0$ mode changes from unoccupied to occupied after
crossing the Weyl points; an electron has been pumped from the Fermi
arc (in red) to the dislocation. This is the anomaly that we discussed
above, even though there is no zero energy mode localized on the dislocation.
This system can smoothly evolve, by bringing the Weyl points together
and annihilating them, into the axion insulator in Fig. \ref{fig:axionInsulatorDisloc}.
That state has a single chiral mode localized on the dislocation which
crosses the bandgap without mixing with the outer edge mode, as expected.
In Fig. \ref{fig:WSMDislocNearPi}, we have a WSM with a topologically
nontrivial region centered about $k_{z}=\pi$; here there is a zero
mode localized on the dislocation, and the charge pumping is more
obvious than in Fig. \ref{fig:WSMDisloc}.
The result of charge pumping due to dislocations has been previously predicted\cite{jianWSMdisloc}. However, our picture is different from the one considered there. The claim in Ref. \cite{jianWSMdisloc} is that a chiral magnetic field, which in our case is created by the dislocation, causes a net spontaneous current to flow. As can be seen from our picture, this is not true; an electric field is necessary to have an anomaly and thus a net current. Fundamentally, the total current must vanish in the absence of an electric field. If the current did not vanish, adding an electric field parallel to the current would cause dissipation and lower the system energy, but this is impossible for a system already in its ground state. The difference in Ref. \cite{jianWSMdisloc} stems from neglecting momentum space regions away from the Weyl nodes and the real space boundary in determining the total current. Thus, while the general expression for the current density derived by Ref. \cite{jianWSMdisloc} is correct, the total current vanishes when these contributions are included. For the case which we show in Fig. \ref{fig:WSMDisloc}, the net current due to the dislocation is cancelled by the current along the Fermi arcs. In the case of Fig. \ref{fig:WSMDislocNearPi}, the dislocation mode near one Weyl point connects directly to the mode on the other side through a zero mode, which cancels the net current.
To summarize, dislocations in a WSM indeed cause pumping of charge
to (or from) the dislocation line when an electric field is applied
along the dislocation. Such a charge pumping is smoothly connected
to that which occurs in the axion insulator, but it may or may not,
depending on details, result in a zero mode localized on the dislocation.
Although we presented numerics for a screw dislocation that runs along
the same direction as the Weyl node splitting, (\ref{eq:3D-disloc-anomaly})
holds in general, and hence the qualitative result is valid for edge
dislocations as well as for other directions of the Weyl node splitting.
\section{Discussion and Conclusions}
\label{section:discussion}
We have shown that the responses and anomalies of a gapped or gapless
system living in $n$ spatial dimensions can be described by a single
response theory of a gapped system living in $2n$ spatial dimensions.
Conceptually, this is because adding magnetic fields in the $2n$-dimensional system and projecting onto the zeroth Landau level allows us
to interpret that system as living in phase space. We have used this
theory to reproduce well-understood responses and anomalies in systems
with non-interacting electrons and Abelian real space gauge fields,
as well as to demonstrate the existence of spectral flow due to dislocations
in Weyl semimetals.
There are several interesting fundamental questions about our theory which are at present open. It would be interesting to see how our theory connects to the use of phase space in statistical mechanics; how might the Landau-Boltzmann transport equation, which describes transport in Fermi liquids via Wigner functions, or Liouville's theorem, which describes the time evolution of general classical systems in phase space via a probability density, arise in our context? Both the Wigner function and the phase space density treat real and momentum space on an equal footing; thus, our theory holds promise in capturing these phenomena.
In addition to these fundamental questions, we envision a number of extensions of our theory to more complicated systems. In particular, the responses that we have explicitly discussed have so far been only those of non-interacting systems which only
feel a $U(1)$ real space gauge field, though the $k$-space Berry
connection has been allowed to be non-Abelian. The latter constraint
is not an inherent limitation of the theory; perhaps there are interesting
responses to a larger real-space gauge group. $SU(2)$ groups in 4D and 3D have been studied and shown to give topological insulator- and WSM-like responses, respectively\cite{LiSU2LL}. It is thus conceivable that general gauge groups can lead to other topological responses, possibly of phases with emergent fermions such as partons\cite{WenNonAbelianPartons,*SwingleFTI,*Maciejko3DFTI} or composite fermions\cite{JainCompositeFermions}.
As for interactions, it is not immediately clear if there
are sensible real space systems that are well-described by a phase
space theory with only local interactions. However, if there are such
real space systems, then working in phase space could be very useful
because, for example, in the absence of a magnetic field mean field
theory is more accurate due to the higher dimensionality. This advantage
may be mitigated by the fact that our construction requires gauge
fields, however. Alternatively, it is possible that there is a simple
way to directly incorporate the interactions of the real space system
into the phase space theory.
Another interesting question is if there is an extension of our theory
which describes nodal superconductors. Our theory as written requires
$U(1)$ charge conservation; perhaps there is some way to incorporate
spontaneous breaking of this symmetry. Finally, it could also be interesting
to explicitly incorporate other symmetries of the lower-dimensional
system; this could allow a better understanding of gapless symmetry-protected
phases like Dirac semimetals.
\begin{acknowledgments}
DB is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747. PH is supported by the David and Lucile Packard Foundation and the U.S. DOE, Office of Basic Energy Sciences, contract DEAC02-76SF00515. SCZ is supported by the National Science Foundation under grant No. DMR-1305677. XLQ is supported by the National Science Foundation through the grant No. DMR-1151786.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
Let $\mathcal{X}\subset \mathbf{R}^n$ be non-empty, open. We study the following stochastic differential equation
\begin{align}
\label{eqn:main}
dx^m(t) &= v^m(t) \, dt \\
\nonumber m\, dv^m(t)& = [F(x^m(t)) -\gamma(x^m(t)) v^m(t)] \, dt + \sigma(x^m(t)) \, dB(t)
\end{align}
where $F:\mathcal{X}\rightarrow \mathbf{R}^n$, $\gamma:\mathcal{X}\rightarrow \mathbf{R}^{n\times n}$, $\sigma : \mathcal{X}\rightarrow \mathbf{R}^{n\times k}$, $m>0$ is a constant and $B(t)=(B_1(t), \ldots, B_k(t))^T$ is a $k$-dimensional Brownian motion defined on a probability space $(\Omega, \mathcal{F}, \P)$. Relation~\eqref{eqn:main} is the standard form of Newton's equation for the position $x^m(t)$ of a particle of mass $m$ subject to thermal fluctuations ($\sigma(x^m(t)) \, dB_t$), friction ( $-\gamma(x^m(t)) v^m(t) \, dt$), and a force ($F(x^m(t))\, dt$).
The goal of this note is to strengthen the main result in \cite{HMVW_14} concerning the small-mass limit of the position $x^m(t)$. In essence, provided the friction matrix $\gamma(x)$ is positive-definite for each $x\in \mathcal{X}$, our main result shows that we can still extract convergence of $x^m(t)$ as $m\rightarrow 0$ pathwise on bounded time intervals in probability, without making strong boundedness assumptions on the coefficients $F, \gamma, \sigma$ and their derivatives. These boundedness assumptions were made in the earlier work \cite{HMVW_14} and have also been made previously in other, related works \cite{freidlin2004,freidlin2011,sancho1982}. From a physical standpoint, however, there are many natural model equations that do not satisfy these strong boundedness requirements, and therefore the use of the small-mass approximation of the dynamics above is in question. Such an approximation has been instrumental in estimating chemical reaction rates \cite{kramers1940, smoluchowski1916}, simplifying computations of escape times from potential wells \cite{freidlin,weinan2010}, and answering ergodicity questions \cite{freidlin,weinan2010}.
To see the utility of our general result, we will apply it to three examples describing physically realizable dynamics, including the situation discussed in \cite{volpe2010} (see Section~3). In each of these examples, there is a confining force which grows unboundedly near the boundary of $\mathcal{X}$ (if it is non-empty) and/or near the point at infinity. This unbounded force translates to, at the very least, unboundedness of some of the coefficients in the model equations \eqref{eqn:main}. Making use of our main result, we will be able to establish convergence in each of these three examples.
Compared with existing results on the small-mass limit with positive friction \cite{freidlin2004,freidlin2011,sancho1982,HMVW_14}, the hypotheses of our main result are extraordinarily weak. Specifically, we only assume nominal regularity of the coefficients $F$, $\gamma$, $\sigma$ and that the expected limiting dynamics does not leave the set $\mathcal{X}$ in finite time. It is worth emphasizing that we do not assume that the pair process defined by \eqref{eqn:main} also remains in the natural state space $\mathcal{X}\times \mathbf{R}^n$ for all finite times. This makes our result more readily applicable because, while it is not always easy to control the family of exit times $\{ \tau_{\mathcal{X}\times \mathbf{R}^n}^m\}_{m>0}$, where $\tau_{\mathcal{X}\times \mathbf{R}^n}^m$ denotes the first exit time of $(x^m(t), v^m(t))$ from $\mathcal{X}\times \mathbf{R}^n$, it is more straightforward to control the exit time $\tau_\mathcal{X}$ of the limiting dynamics from $\mathcal{X}$. An additional benefit of structuring the hypotheses in this way is that, as a consequence of our result, we gain control of the exit times $\tau_{\mathcal{X}\times \mathbf{R}^n}^m$ for $m>0$ small, in the sense that we show that $\tau_{\mathcal{X}\times \mathbf{R}^n}^m \rightarrow \infty$ in probability as $m\rightarrow 0$.
The organization of this paper is as follows. In Section~\ref{sec:mainres}, we state our main theoretical result (Theorem~\ref{thm:main}). Section~\ref{sec:examples} gives a few physical, motivating examples for this work. In each example, we will verify that the hypotheses of Theorem~\ref{thm:main} are satisfied using the appropriate Lyapunov methods. As a consequence of Theorem~\ref{thm:main}, we will therefore obtain the desired convergence as $m\rightarrow 0$ in each physical example studied. In Section~\ref{sec:proof}, we prove Theorem~\ref{thm:main}.
\section{Main Results}
\label{sec:mainres}
The limiting dynamics $x(t)$ will satisfy the It\^{o} stochastic differential equation
\begin{align}
\label{eqn:limitsde}
dx(t) = [\gamma^{-1}(x(t)) F(x(t)) + S(x(t)) ] \, dt + \gamma^{-1}(x(t)) \sigma(x(t)) \, dB(t)
\end{align}
where, adopting the Einstein summation convention, the vector-valued function $S$ satisfies
\begin{align*}
S(x)= \left(\partial_{x_l} [\gamma^{-1}_{ij}(x)]J_{ jl}(x) \right)_{i=1}^n
\end{align*}
and the matrix $J$ solves the Lyapunov equation
\begin{align*}
J \gamma^T + \gamma J = \sigma \sigma^T .
\end{align*}
To understand on some level how the equation \eqref{eqn:limitsde} could possibly define the limiting dynamics, we can try and formally set $m=0$ in equation~\eqref{eqn:main}, and solve for $v^0(t) \, dt = dx^0(t)$ using the second part of this equation. This leads us to the following guess for the limiting equation
\begin{align*}
dx^0(t) = \gamma^{-1}(x^0(t))F(x^0(t)) \, dt + \gamma^{-1}(x^0(t)) \sigma(x^0(t)) \, dB(t) ,
\end{align*}
where there is some ambiguity in how $\gamma^{-1}(x^0(t)) \sigma(x^0(t)) \, dB_t $ should be interpreted using the various conventions of stochastic integrals, e.g. It\^{o}, Stratonovich, Anti-It\^{o} \cite{karatzas, oksendal}. The different conventions of the stochastic integral do not coincide because, even assuming $\sigma, \gamma$ are sufficiently smooth, as opposed to $\sigma(x^m(t))$ for $m>0$, $\gamma^{-1}(x^0(t)) \sigma(x^0(t))$ does not vary smoothly in $t$. While one might suspect that the drift term $S(x(t))$ in equation~\eqref{eqn:limitsde} tells one how to interpret $ \gamma^{-1}(x^0(t)) \sigma(x^0(t)) \, dB_t $, this is not quite the case because there can be no relation between the type of stochastic integral and this drift in the most general case \cite{freidlin2011}. Nevertheless this heuristic, first employed by Smoluchowski in \cite{smoluchowski1916} and later by Kramers in \cite{kramers1940}, serves as a good first step in understanding how some parts of \eqref{eqn:limitsde} arise. See \cite{HMVW_14} for further, more specific details in how the noise-induced drift term, i.e. $S(x(t))$, in equation~\eqref{eqn:limitsde} is produced.
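For concrete coefficients, $J$ and the noise-induced drift $S$ can be evaluated numerically. The sketch below is our own illustration (it is not part of the original analysis): it solves the Lyapunov equation by column-major vectorization, $(\mathrm{I}\otimes\gamma+\gamma\otimes \mathrm{I})\,\mathrm{vec}(J)=\mathrm{vec}(\sigma\sigma^{T})$, and evaluates $S$ by central differences.

```python
import numpy as np

def solve_J(gamma, Q):
    # Solve gamma J + J gamma^T = Q.  With column-major vec():
    # vec(gamma J) = (I kron gamma) vec(J), vec(J gamma^T) = (gamma kron I) vec(J).
    n = gamma.shape[0]
    A = np.kron(np.eye(n), gamma) + np.kron(gamma, np.eye(n))
    return np.linalg.solve(A, Q.reshape(-1, order='F')).reshape(n, n, order='F')

def noise_induced_drift(x, gamma, sigma, h=1e-6):
    # S_i(x) = (d/dx_l [gamma^{-1}(x)]_{ij}) J_{jl}(x), summed over j and l.
    n = x.size
    J = solve_J(gamma(x), sigma(x) @ sigma(x).T)
    S = np.zeros(n)
    for l in range(n):
        e = np.zeros(n); e[l] = h
        dGinv = (np.linalg.inv(gamma(x + e)) - np.linalg.inv(gamma(x - e))) / (2 * h)
        S += dGinv @ J[:, l]
    return S
```

As a sanity check, in the one-dimensional fluctuation--dissipation case $\gamma(x)=k_BT/D(x)$, $\sigma(x)=\sqrt{2(k_BT)^2/D(x)}$ this returns $J=k_BT$ and $S(x)=D'(x)$, matching the example of Section~\ref{sec:examples}.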
Throughout the paper, we will make the following assumptions:
\begin{assumption}[Regularity, Positive-Definite Friction]
\label{assump:reg}
\label{assump:mf}
$F\in C^1( \mathcal{X}:\mathbf{R}^n)$, $\gamma\in C^2( \mathcal{X}: \mathbf{R}^{n\times n})$ and $\sigma \in C^1(\mathcal{X}:\mathbf{R}^{n\times k})$. Moreover, for each $x\in \mathcal{X}$ the matrix $\gamma(x)$ is positive-definite; that is, for each $x\in \mathcal{X}$ and each $
y\in \mathbf{R}^n_{\neq 0}$ we have that $$(\gamma(x) y, y) >0.$$
\end{assumption}
\begin{assumption}[Non-explosivity of $x(t)$]
\label{assump:et}
The first exit time $\tau_\mathcal{X}$ of $x(t)$ from $\mathcal{X}$ is $\P$-almost surely infinite for all initial conditions $x_0=x\in \mathcal{X}$; that is, for all $x\in \mathcal{X}$
\begin{align*}
\P_x \{ \tau_\mathcal{X} < \infty\} =0.
\end{align*}
\end{assumption}
The regularity part of Assumption~\ref{assump:mf} assures that all equations in question make sense locally in time. Critical to our main result is the positive-definite assumption made on the friction matrix $\gamma$. This can be seen by taking a glance at equation~\eqref{eqn:limitsde}, for if the matrix $\gamma$ is merely positive semi-definite we expect to get different behavior as $m\rightarrow 0$. See \cite{freidlin2013} for an example of the small-mass limit when $\gamma$ vanishes on a set. Assumption~\ref{assump:et} assures that the presumed limiting dynamics $x(t)$ remains in its domain of definition $\mathcal{X}$ for all finite times $t\geq 0$ almost surely. In contrast to the previous references \cite{freidlin2004,HMVW_14, sancho1982}, we will not assume that the solution of \eqref{eqn:main} is non-explosive or, more importantly, that either $x(t)$ is contained in a compact subset of $\mathcal{X}$ or the coefficients $F, \gamma, \sigma$ are bounded on $\mathcal{X}$. We do, however, need control over an additional derivative of $\gamma$. Nevertheless, this should not be seen as an additional hypothesis, for this is a typical minimalist assumption needed to make sense of the pathwise solution of \eqref{eqn:limitsde} locally in time (see, for example, \cite{khasminskii, reybellet2006}). An additional difference between our result and previous results is that we need not assume that $\gamma$ is uniformly positive definite on $\mathcal{X}$. In some sense, however, the size of the smallest positive eigenvalue of $\gamma$ is controlled by non-explosivity (Assumption~\ref{assump:et}) of the solution of the limiting equation.
Because we will not assume that the process $\{(x_t^m, v_t^m)\}_{t\geq 0}$, $m>0$, remains in $\mathcal{X}\times \mathbf{R}^n$ for all finite times, we will extend this process for times $t\geq \tau_{\mathcal{X}\times \mathbf{R}^n}^m$, where $\tau_{\mathcal{X} \times \mathbf{R}^n}^m$ is the first exit time of $(x_t^m, v_t^m)$ from $\mathcal{X}\times \mathbf{R}^n$. In particular, letting $\Delta$ be some point not in $\mathbf{R}^n$, we set $x_t^m=v_t^m=\Delta$ for all times $t\geq \tau_{\mathcal{X}\times \mathbf{R}^n}^m$. To measure convergence of $x_t^m$ on the enlarged state space $\mathcal{X}\cup \{ \Delta\}$, let $d_\infty: (\mathcal{X} \cup \{ \Delta \}) \times (\mathcal{X} \cup \{ \Delta\}) \rightarrow [0, \infty]$ be given by
\begin{align*}
d_\infty(x,y) = \begin{cases}
|x-y| & \text{ if } x,y \in \mathcal{X}\\
\infty & \text{ if } x=\Delta \text{ or } y = \Delta
\end{cases}.
\end{align*}
Observe that $d_\infty$ is not quite a metric since $d_\infty(\Delta, \Delta)=\infty$; however, $d_\infty$ satisfies the remaining properties of a metric. As we will see, it will serve us well as a slight generalization of a distance.
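In a simulation, the cemetery-state bookkeeping behind $d_\infty$ amounts to freezing a trajectory at a flag value once it leaves $\mathcal{X}$. A minimal sketch (our own convention, with $\mathcal{X}=(a,b)$ in one dimension):

```python
import math

DELTA = object()  # cemetery point, not an element of R^n

def d_inf(x, y):
    # The extended distance: infinite as soon as either path has exited.
    if x is DELTA or y is DELTA:
        return math.inf
    return abs(x - y)  # |x - y| for scalars; use a norm for vectors

def kill_on_exit(path, a, b):
    # Replace every entry from the first exit of (a, b) onward by DELTA.
    out, dead = [], False
    for x in path:
        dead = dead or not (a < x < b)
        out.append(DELTA if dead else x)
    return out
```

With this convention, $\sup_{t}d_\infty(x^m(t),x(t))$ is finite only on the event that the inertial path never exits, which is why pathwise convergence in $d_\infty$ also carries information about the exit times.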
We are now prepared to state our main results:
\begin{theorem}
\label{thm:main}
Suppose that Assumption~\ref{assump:mf} and Assumption~\ref{assump:et} are satisfied. If the process $\{ x(t)\}_{t\geq 0}$ and the extended processes $\{x^m(t)\}_{t\geq 0}$ have the same initial condition $x\in \mathcal{X}$ for all $m>0$, then for every $T, \epsilon >0$
\begin{align*}
\P\bigg\{ \sup_{t\in [0, T]} d_\infty(x^m(t), x(t)) > \epsilon \bigg\} \rightarrow 0\,\,\text{ as }\,\, m \rightarrow 0.
\end{align*}
\end{theorem}
\begin{remark}
To emphasize a remark made earlier, an aspect of the theorem that is particularly striking is that we make no explicit assumptions about the exit times $\{\tau^m_{\mathcal{X} \times \mathbf{R}^n}\}_{m>0}$ yet we still obtain pathwise convergence on compact time intervals in probability in $d_\infty$. As we will see, not making this assumption about the exit times $\{\tau^m_{\mathcal{X} \times \mathbf{R}^n}\}_{m>0}$ is convenient because it is often easier to simply control $\tau_\mathcal{X}$. Another interesting aspect of the result is that $d_\infty$ was constructed so that it penalizes the process $\{(x_t^m, v_t^m)\}_{t\geq 0}$ infinitely if it has exited $ \mathcal{X} \times \mathbf{R}^n$. In particular, as a corollary of the proof of the theorem above, by having control over $\tau_\mathcal{X}$ we can obtain control over $\tau_{\mathcal{X}\times \mathbf{R}^n}^m$ for $m>0$, small.
\end{remark}
\begin{corollary}
\label{cor:nolimexplosion}
Under the hypotheses of Theorem~\ref{thm:main}: For all $T>0$
\begin{align*}
\lim_{m\rightarrow 0}\P\{ \tau^m_{\mathcal{X}\times \mathbf{R}^n} \leq T \}=0.
\end{align*}
In other words, $\tau_{\mathcal{X} \times \mathbf{R}^n}^m \rightarrow \infty$ in probability as $m\rightarrow 0$.
\end{corollary}
\begin{remark}
Under the appropriate moment bounds and non-explosivity of the pair process $(x^m(t), v^m(t))$, one can apply Theorem~\ref{thm:main} to obtain stronger forms of convergence, e.g. convergence in $L^p$ for $p\geq 1$.
\end{remark}
\section{Examples of Newtonian Dynamics with Unbounded Potentials}
\label{sec:examples}
In this section, we apply Theorem~\ref{thm:main} to physical examples realizable in a
laboratory. In the following, $x(t)$ will denote the position of one or more mesoscopic particles
in a liquid at a well-defined temperature $T$ (e.g. a Brownian particle coupled to a heat bath provided by the liquid, such as the ones experimentally studied in \cite{lanccon2001,volpe2010}). The particle is influenced by a force $F$, friction $\gamma$, and noise coefficient $\sigma$. For such a Brownian particle, the fluctuation-dissipation
relation holds:
\begin{equation}
\label{eq:FDrelation}
\gamma(x) \propto \sigma(x)\sigma^T(x).
\end{equation}
Although in each example there is a confining potential force which grows rapidly near the boundary $\partial \mathcal{X}$ and/or the point at infinity, it will be clear that Assumption~\ref{assump:reg} is satisfied. Therefore, we will only need to see that Assumption~\ref{assump:et} is satisfied by showing that the first exit time $\tau_\mathcal{X}$ of the limiting process $x(t)$ is almost surely infinite for all initial conditions $x\in \mathcal{X}$. To show that $\tau_\mathcal{X}$ is almost surely infinite, we will use, by now standard, Lyapunov methods \cite{khasminskii,MTIII_93, reybellet2006}. In particular, in each example we will exhibit a certain type of function $V\in C^2(\mathcal{X}: [0, \infty))$, called a Lyapunov function, which guarantees that
\begin{align}
\label{eqn:noexp}
\P_x \{\tau_\mathcal{X} < \infty \}=0
\end{align}
for all initial conditions $x\in \mathcal{X}$. To be more precise, define a sequence of open subsets $\mathcal{X}_k$, $k\in \mathbf{N}$, of $\mathcal{X}$ by
\begin{align*}
\mathcal{X}_k = \begin{cases}
\{ x\in \mathcal{X} \, : \, \text{distance}(x, \partial \mathcal{X}) > k^{-1}\, \text{ and }\, |x| < k \} & \text{ if } \partial \mathcal{X} \neq \emptyset \\
\{ x\in \mathbf{R}^n \, : \, |x| < k \} & \text{ if } \partial \mathcal{X} = \emptyset
\end{cases}
\end{align*}
and observe that, if $\partial \mathcal{X}=\emptyset$, then $\mathcal{X}=\mathbf{R}^n$ as $\mathcal{X}$ is non-empty and both open and closed. In each example we will exhibit a function $V\in C^2(\mathcal{X} : [0, \infty))$ satisfying the following two properties:
\begin{itemize}
\item[p1)] There exists a sequence of positive constants satisfying $C_k \rightarrow \infty$ as $k\rightarrow \infty$ and
\begin{align*}
V(x) \geq C_k \,\, \text{ for } x\in \mathcal{X}\setminus \mathcal{X}_k.
\end{align*}
\item[p2)] There exist positive constants $C, D$ such that for all $x\in \mathcal{X}$
\begin{align*}
\mathcal{L}V(x) \leq C V(x) + D,
\end{align*}
where $\mathcal{L}$ denotes the infinitesimal generator of the Markov process $x(t)$.
\end{itemize}
The existence of a function $V\in C^2(\mathcal{X}: [0, \infty))$ satisfying p1) and p2) above guarantees that $\P_x\{ \tau_{\mathcal{X}} < \infty\} =0$ for all $x\in \mathcal{X}$ (see, for example, Theorem~2.1 of \cite{MTIII_93}).
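As a toy illustration of p1)--p2) (our own example, unrelated to the specific models below), take $\mathcal{X}=(0,\infty)$, $dx=(1/x-x)\,dt+\sqrt{2}\,dB(t)$, and $V(x)=x^2-\log x$. Then $V\geq 0$, $V\to\infty$ at both $0$ and $\infty$, and a direct computation gives $\mathcal{L}V(x)=5-2x^2\leq 5$, so p2) holds with $C=0$, $D=5$. The sketch below confirms the computation by finite differences:

```python
import numpy as np

def V(x):
    return x**2 - np.log(x)

def drift(x):
    return 1.0 / x - x

def LV(x, h=1e-4):
    # Generator L = b(x) d/dx + (s^2/2) d^2/dx^2 with s = sqrt(2),
    # applied to V by central finite differences.
    Vp = (V(x + h) - V(x - h)) / (2 * h)
    Vpp = (V(x + h) - 2 * V(x) + V(x - h)) / h**2
    return drift(x) * Vp + Vpp
```

For this toy diffusion, $\mathcal{L}V\leq 0\cdot V+5$ together with p1) rules out both escape to infinity and hitting the boundary point $0$ in finite time.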
\subsection{Gravity and electrostatics}
We first prove convergence for the experimental example in
\cite{volpe2010} which originally motivated this work. In \cite{volpe2010}, a Brownian
particle is in a vertical cylinder of finite height $b-a$ filled with water and the horizontal motion of the particle is assumed to be independent of its vertical motion. Therefore, $x(t)$ denotes the (one-dimensional) vertical position of the particle at time $t$ and the natural state space $\mathcal{X}$ of $x(t)$ is given by the open interval $(a,b)$ with $0\leq a< b< \infty$. The conservative forces acting on the particle are
given by the potential function
\begin{equation}
\label{eq:potfun}
U(x) ={B\over\kappa} e^{-\kappa (x-a)}+{B\over\kappa} e^{-\kappa (b-x)}+G_{\rm eff}\,x+ {e^{-\lambda (x-a)}\over (x-a)} + {e^{-\lambda (b-x)}\over (b-x)}.
\end{equation}
The first two terms are due to double layer particle-wall forces, with $\kappa^{-1}$ the Debye length and $B>0$ a prefactor depending on the surface charge densities. The third term accounts for the effective gravitational contribution $G_{\rm eff} = \frac{4}{3}\pi R^3 (\rho_{\rm p} - \rho_{\rm s}) g$, with $g$ the gravitational acceleration constant, $R$ the radius of the particle, $\rho_{\rm p}$ the density of the particle and $\rho_{\rm s}$ the density of the fluid. Note that the value of the first three terms of the potential at $x=a,b$ is finite but very large (as the prefactor $B$ is on the order of thousands of $k_{\rm B}T$); thus, to assure that the particle remains in the cylinder, the last two terms model ``soft walls'' at $x=a$ and $b$ and are fast-decaying away from the boundary with $\lambda \gg \kappa$. The forces are given by
\begin{equation*}
F(x) = -U'(x)
\end{equation*}
and the friction coefficient is
\begin{equation*}
\gamma(x) = \frac{k_BT}{D(x)},
\end{equation*}
where $D(x)$ is a hydrodynamic diffusion coefficient due to effects of particle-wall
interactions. The exact form of $D$ is an infinite sum and can be found in \cite{brenner}. For our analysis, it is enough to know $D(x)\in C^2([a,b]:(0,\infty))$ with $D(a) = D(b) =0$, $D'(x)>0$ for $x\in[a,(a+b)/2)$, $D'(x)<0$ for $x\in((a+b)/2,b]$, and $D'((a+b)/2) = 0$.
Using the fluctuation-dissipation relation, the inertial system is given by
\begin{align*}
dx^m(t) =& v^m(t)\;dt\\
m\, dv^m(t) = &\bigg[F(x^m(t))-
\frac{k_BT}{D(x^m(t))}v^m(t)\bigg ]\;dt +\sqrt{ \frac{2(k_BT)^2}{D(x^m(t))}}\;dB(t),
\end{align*}
where $B(t)$ is a standard, one-dimensional Brownian motion.
The corresponding limiting equation is
\begin{align}
dx(t) =& \frac{F(x(t))D(x(t))}{k_BT}\;dt +
D'(x(t))\;dt +\sqrt{2D(x(t))}\;dB(t).
\end{align}
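As a cross-check of these coefficients, one can carry out the formal overdamped reduction directly (a sketch; the one-dimensional noise-induced-drift formula $S=-\sigma^2\gamma'/(2\gamma^3)$ is imported here from the small-mass-limit literature):

```latex
% Formal overdamped reduction with \gamma(x)=k_BT/D(x) and
% \sigma(x)=\sqrt{2(k_BT)^2/D(x)}: the limiting drift is
% \gamma^{-1}F + S and the limiting noise coefficient is \gamma^{-1}\sigma.
\begin{align*}
\gamma^{-1}(x)F(x) &= \frac{F(x)D(x)}{k_BT},\\
\gamma^{-1}(x)\sigma(x)
&= \frac{D(x)}{k_BT}\sqrt{\frac{2(k_BT)^2}{D(x)}} = \sqrt{2D(x)},\\
S(x) = -\frac{\sigma^2(x)\,\gamma'(x)}{2\gamma^3(x)}
&= -\frac{2(k_BT)^2}{D(x)}\cdot\Big(-\frac{k_BT\,D'(x)}{D^2(x)}\Big)\cdot\frac{D^3(x)}{2(k_BT)^3} = D'(x).
\end{align*}
```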
To prove convergence of $x^m(t)$ to $x(t)$ in the sense described in Theorem~\ref{thm:main}, all we must show is that $\P_x(\tau_{(a,b)}=\infty)=1$ for all $x\in (a,b)$. To do so, we find the appropriate Lyapunov function as described at the beginning of this section. We define our candidate Lyapunov function to be the potential function $U$ and note that $U\in C^2((a,b) :[0, \infty))$ and, moreover, $U$ satisfies p1). To see that p2) is satisfied, first apply the generator $\mathcal{L}$ of $x(t)$ to $U$ to find that
\begin{equation}
\mathcal{L}U(x) = \left (-\frac{(U'(x))^2}{k_BT} + U''(x)\right )D(x)+U'(x)D'(x),
\end{equation}
where we have replaced the force by $F(x) = -U'(x)$. Because $x\mapsto \mathcal{L}U(x)$ is bounded on every compact interval $[c,d]$ with $a<c$ and $d<b$, to produce the required estimate we focus on the behavior of this function near the endpoints $x=a,b$. First fix $c\in (a, (a+b)/2)$. Using the fact that $D(a)=0$, we can apply the mean value theorem to see that there exist constants $c_i>0$ such that for all $x\in (a, c]$
\begin{align*}
\bigg(\frac{(U'(x))^2}{k_B T}-U''(x) \bigg)D(x)&=
\bigg(\frac{(U'(x))^2}{k_B T}-U''(x)\bigg) (D(x)-D(a))\\
&= \bigg(\frac{(U'(x))^2}{k_B T}-U''(x)\bigg) D'(\xi_{x,a}) (x-a)\\
&\geq \frac{c_1}{(x-a)^3}+ \frac{c_2}{(b-x)^3} -c_3,
\end{align*}
where $\xi_{x,a}$ is some point in $[a, c]$.
By fixing $d\in ((a+b)/2, b)$ and adjusting the positive constants $c_i$ above, one can produce the same bound
\begin{align*}
\bigg(\frac{(U'(x))^2}{k_B T}-U''(x) \bigg)D(x)&=
\bigg(\frac{(U'(x))^2}{k_B T}-U''(x)\bigg) (D(x)-D(b))\\
&= \bigg(\frac{(U'(x))^2}{k_B T}-U''(x)\bigg) (-D'(\eta_{x,b}) )(b-x)\\
&\geq \frac{c_1}{(x-a)^3} + \frac{c_2}{(b-x)^3} -c_3,
\end{align*}
where $\eta_{x,b}$ is some point in $[d,b]$, which is satisfied for $x\in [d,b)$.
Additionally, since $D'$ is bounded on $[a,b]$, there exist $C_1, C_2 >0$ such that
\begin{align*}
|U'(x) D'(x)| \leq \frac{C_1}{(x-a)^2} + \frac{C_2}{(b-x)^2}
\end{align*}
for all $x\in (a,b)$. Putting these estimates together we find that $x\mapsto \mathcal{L} U(x)$ is bounded above on $(a,b)$. The bound in p2) then follows immediately.
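To illustrate the mechanism numerically (a sanity check only, not part of the argument), one can evaluate $\mathcal{L}U$ for a concrete admissible diffusion profile. The parabolic profile $D(x)=4D_0(x-a)(b-x)/(b-a)^2$ and all parameter values below are illustrative assumptions, and the generator is taken in the $\sqrt{2D}$-noise convention (with a $\sqrt{D}$ convention the $U''$ coefficient halves, without changing the qualitative picture):

```python
import math

# All constants below are illustrative assumptions (not values from the paper).
a, b = 0.0, 1.0
B, kappa, lam, G, kT, D0 = 5.0, 10.0, 50.0, 1.0, 1.0, 1.0

def U(x):
    # Potential of the form above: double-layer wall terms, gravity, soft walls.
    return (B / kappa * math.exp(-kappa * (x - a))
            + B / kappa * math.exp(-kappa * (b - x))
            + G * x
            + math.exp(-lam * (x - a)) / (x - a)
            + math.exp(-lam * (b - x)) / (b - x))

def D(x):
    # One admissible diffusion profile: D(a) = D(b) = 0, increasing up to the
    # midpoint and decreasing afterwards.
    return 4.0 * D0 * (x - a) * (b - x) / (b - a) ** 2

def Dp(x):
    return 4.0 * D0 * ((a + b) - 2.0 * x) / (b - a) ** 2

def LU(x, h=1e-5):
    # Generator of the overdamped equation applied to U; derivatives of U are
    # computed by central finite differences.
    Up = (U(x + h) - U(x - h)) / (2.0 * h)
    Upp = (U(x + h) - 2.0 * U(x) + U(x - h)) / h ** 2
    return (-Up ** 2 / kT + Upp) * D(x) + Up * Dp(x)

# Near either wall the -(U')^2 D / kT term dominates and LU is very negative,
# while in the bulk it remains of order one.
print(LU(0.02), LU(0.5), LU(0.98))
```

The blow-up of $-(U')^2D/k_BT$ near the walls is what makes $\mathcal{L}U$ bounded above despite the singular potential.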
\subsection{1D interacting particles} We consider two close Brownian particles suspended in a fluid. If the separation between particles, denoted by $d$, is large enough that the Debye-H\"uckel linearization approximation can be made in the electrostatic potential of a system of ions in an electrolyte, then the DLVO theory \cite{derjaguin1941theory,verwey1999theory} gives the potential between colloidal spheres as
\begin{equation}
\label{eq:DLVO}
U_{\rm DLVO}(d) = c
\frac{e^{- d/l}}{d},
\end{equation}
where the positive constants $c$ and $l$ depend on various properties of the two particles and $d$ is the separation distance of the particles. The diffusion coefficient $D=D(d)$ satisfies the following: $d\mapsto D(d)\in C^2([0,\infty):[0,\infty))$, $D(0) = 0$,
$D(d)\rightarrow D_{\rm SE}<\infty$ as $d\rightarrow\infty$, $D'(d) >0$ and $D''(d)<0$ for all $0\leq d<\infty$. Additionally, the two particles are contained in a (common) shallow harmonic potential, $\tfrac{k}{2}x^2$, where $k$ is small compared to the constants in \eqref{eq:DLVO}. The particles' positions are described in one dimension using the potential function
\begin{equation}
\label{eq:potfun2}
U(x_1,x_2) = \frac{k}{2}(x_1^2+x_2^2) + U_{\rm DLVO}(x_2-x_1).
\end{equation}
Defining $d^m(t) = x_2^m(t)-x_1^m(t)$, the system is described by
\begin{align*}
dx_i^m(t) = & v_i^m(t)dt\\
m\, dv_i^m(t) = &\Big [ -\partial_{x_i}U(x_1^m(t),x_2^m(t))-\frac{k_BT}{D(d^m(t))}v_i^m(t)\Big]\;dt+\sqrt{ \frac{2(k_BT)^2}{D(d^m(t))}}\;dB_i(t)
\end{align*}
where $i=1,2$ and $B_1(t),B_2(t)$ are two standard, one-dimensional, independent Brownian motions. The corresponding limiting equation is
\begin{align*}
dx_i(t) =& \left [-\partial_{x_i}U(x_1(t),x_2(t))\frac{D(d(t))}{k_BT} + (-1)^i D'(d(t))
\right ]\;dt + \sqrt{2D(d(t))}\;dB_i(t).
\end{align*}
Here, the natural domain of definition for these processes is $\mathcal{X}\times \mathbf{R}^2$ and $\mathcal{X}$, respectively, where $$ \mathcal{X} = \{(x_1,x_2)\in\mathbf{R}^2: x_1<x_2\}.$$
To apply Theorem~\ref{thm:main}, we again need to see that $\P_x\{\tau_\mathcal{X}=\infty \}=1$ for all initial conditions $x=(x_1, x_2)\in \mathcal{X}$. We define our candidate Lyapunov function to be the potential $U(x_1,x_2)$ as in \eqref{eq:potfun2} and now check to see that p1) and p2) are satisfied. One can readily check that p1) is satisfied. To see p2), apply the generator to $U$ to see that
\begin{align}
\label{eq:generator1}
\mathcal{L}U(x_1,x_2) =& \left (-\frac{(\partial_{x_1}U(x_1,x_2))^2+(\partial_{x_2}U(x_1,x_2))^2}{k_BT} + (\partial^2_{x_1}+\partial^2_{x_2})U(x_1,x_2)\right )D(x_2-x_1)\\
&\qquad +\Big[(\partial_{x_1}+\partial_{x_2})U(x_1,x_2)\Big ]D'(x_2-x_1). \nonumber
\end{align}
The partial derivatives above are given by
\begin{align*}
\partial_{x_i} U(x_1,x_2) =& k x_i + (-1)^{i-1}\frac{c e^{-(x_2-x_1)/l}}{(x_2-x_1)}\left(\frac{1}{l} + \frac{1}{(x_2-x_1)}\right )\\
\partial_{x_i}^2 U(x_1,x_2) =& k +\frac{c e^{-(x_2-x_1)/l }}{(x_2-x_1)}\left (\frac{1}{l^2 } + \frac{2}{l (x_2-x_1)}+\frac{2}{(x_2-x_1)^2}\right )
\end{align*}
and
\begin{equation*}
(\partial_{x_1}+\partial_{x_2})U(x_1,x_2) = k(x_1+x_2).
\end{equation*}
Using the mean value theorem and the fact that $D(0)=0$, there exist constants $c_i>0$ and $\xi_{x_1,x_2} \geq 0$ such that
\begin{align*}
&\left (\frac{(\partial_{x_1}U(x_1,x_2))^2+(\partial_{x_2}U(x_1,x_2))^2}{k_BT} - (\partial^2_{x_1}+\partial^2_{x_2})U(x_1,x_2)\right )D(x_2-x_1)\\
&= \left (\frac{(\partial_{x_1}U(x_1,x_2))^2+(\partial_{x_2}U(x_1,x_2))^2}{k_BT} - (\partial^2_{x_1}+\partial^2_{x_2})U(x_1,x_2)\right )D'(\xi_{x_1,x_2})(x_2-x_1) \\
&\geq - c_1(x_1^2+x_2^2) - c_2
\end{align*}
for all $(x_1, x_2) \in \mathcal{X}$. In the estimate above, we have used the facts that
\begin{align*}
\sup_{\xi \geq 0} D'(\xi) \in (0, \infty) \,\,\text{ and } \,\, \inf_{\xi \in [0, c]} D'(\xi) \in (0, \infty)
\end{align*}
for all $c>0$, as $D''(\xi) < 0$ for $\xi \geq 0$. Combining the above estimate with the bound
\begin{align*}
|(\partial_{x_1} + \partial_{x_2}) U(x_1, x_2) D'(\xi)| \leq k \Big(\sup_{\zeta \geq 0} D'(\zeta)\Big) (|x_1|+|x_2|),
\end{align*}
which is satisfied for all $\xi \in [0, \infty)$ and all $(x_1, x_2) \in \mathcal{X}$, produces the required estimate p2).
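The derivative formulas above are mechanical but easy to get wrong; a finite-difference spot check (parameter values are illustrative assumptions) confirms the first-order expressions:

```python
import math

# Illustrative parameter values (assumptions): stiffness k, DLVO constants c, l.
k, c, l = 0.5, 2.0, 0.7

def U(x1, x2):
    # Harmonic confinement plus DLVO pair potential, d = x2 - x1 > 0.
    d = x2 - x1
    return 0.5 * k * (x1 ** 2 + x2 ** 2) + c * math.exp(-d / l) / d

def dU_analytic(i, x1, x2):
    # Formula from the text:
    # dU/dx_i = k x_i + (-1)^(i-1) * (c e^{-d/l} / d) * (1/l + 1/d).
    d = x2 - x1
    xi = x1 if i == 1 else x2
    return k * xi + (-1) ** (i - 1) * c * math.exp(-d / l) / d * (1.0 / l + 1.0 / d)

def dU_numeric(i, x1, x2, h=1e-6):
    # Central finite difference in the i-th coordinate.
    if i == 1:
        return (U(x1 + h, x2) - U(x1 - h, x2)) / (2.0 * h)
    return (U(x1, x2 + h) - U(x1, x2 - h)) / (2.0 * h)

x1, x2 = 0.3, 1.4
for i in (1, 2):
    print(i, dU_analytic(i, x1, x2), dU_numeric(i, x1, x2))
```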
\subsection{Non-conservative forces}
In the previous two examples, one can easily adapt the arguments given there to show that the pair process $(x^m(t),v^m(t))$ never leaves $\mathcal{X}\times \mathbf{R}^n$ for each $m>0$ by simply taking $U(x) + \frac{1}{2}m|v|^2$ to be our candidate Lyapunov function.
In this example, we introduce non-conservative forces in a 2D system where finding a
Lyapunov function for the system when $m>0$ is difficult. This is because there is no
potential function for all of the external forces.
Adding the rotational force field to the Langevin equations for the Brownian motion of a particle in the $(x_1,x_2)$-plane, the corresponding non-conservative
forces are:
\begin{equation}
\left\{\begin{array}{ccc}
\displaystyle \hat{F}_{x_1}(x_1,x_2) & = & - \gamma\Omega x_2 \\ [12pt]
\displaystyle \hat{F}_{x_2}(x_1,x_2) & = & + \gamma\Omega x_1
\end{array}\right.
\end{equation}
The terms $-\gamma\Omega x_2$ and $+\gamma\Omega x_1$ introduce a coupling between the equations, which becomes apparent in the fact that the cross-correlation is non-zero. This can in fact be realized experimentally by, for example, using transfer of orbital angular momentum to an optically trapped particle \cite{volpe2006torque}. In addition to the non-conservative forces, the particle is confined to a pore, i.e. a well with radius $C$ centered at $(x_1, x_2)=(0,0)$. We now define the radially symmetric potential $U(x_1, x_2)$ and the diffusion gradient. We assume that $U(x_1, x_2)= \mathcal{U}(r^2(x_1,x_2))$ where $r^2(x_1, x_2)=x_1^2+x_2^2$ and $\mathcal{U}\in C^2([0, C^2): [0, \infty))$ satisfies
\begin{align*}
\mathcal{U}(r) = {B\over\kappa (C^2-r)} e^{-\kappa (C^2-r)}
\end{align*}
for $r\in [0, C^2)$. The diffusion gradient is such that for $r\in [0, C)$, $D(r)=\mathcal{D}(r^2)$ where $\mathcal{D}\in C^2([0, C^2]: [0, \infty))$ satisfies $\mathcal{D}(C^2) = 0$ and $\mathcal{D}(r)<D_{\rm SE}$, $-\infty<\mathcal{D}'(r)<0$, $\mathcal{D}''(r)<0$ for $0\leq r\leq C^2$.
Define $r^m(t)^2 ={x_1^m(t)^2+x_2^m(t)^2}$. The full system then becomes
\begin{equation*}
\label{eq:Ex3sys}
\left \{ \begin{array}{rcl}
dx_i^m(t) &=& v_i^m(t)\;dt\\
m\, dv^m_i(t) &= & \left [-(\partial_{x_i} U)(x_1^m(t), x_2^m(t))
-\frac{k_BT}{D(r^m(t))}\Omega x^m_{j}(t)-\frac{k_BT}{D(r^m(t))}v_i^m(t)\right]\;dt + \sqrt{\frac{2(k_BT)^2}{D(r^m(t))}}\;dB_i(t),
\end{array}\right .
\end{equation*}
$i=1,2$, $j\neq i$, and where $B_i(t)$ are two standard, one-dimensional, independent Brownian motions. The corresponding limiting equation is
\begin{align}
\label{eqn:ex3ld}
dx_i(t) =& \left [-\mathcal{U}'(r^2(t))\frac{2x_i(t)\mathcal{D}(r^2(t))}{ k_BT}-\Omega x_j(t)+2x_i(t)\mathcal{D}'(r^2(t))\right ]\;dt + \sqrt{2 \mathcal{D}(r^2(t))}\;dB_i(t).
\end{align}
A suitable choice of a Lyapunov function for the dynamics \eqref{eqn:ex3ld} is the potential function $U(x_1, x_2)$. This choice works intuitively because the non-conservative forces are bounded inside the pore and are dominated by the potential function near the boundary. To see that this intuition is indeed true, note that p1) is clearly satisfied. To see p2), apply the generator of the process $(x_1(t), x_2(t))$ to $U(x_1, x_2)$ to find that
\begin{align}
\label{eq:generator}
\mathcal{L}U(x_1, x_2) =& -\frac{4r^2\left (\mathcal{U}'(r^2)\right)^2\mathcal{D}(r^2)}{k_BT}
+ 4\left ( r^2\mathcal{U}''(r^2)+\mathcal{U}'(r^2)\right )\mathcal{D}(r^2)\\
-&4\Omega x_1x_2\mathcal{U}'(r^2) +4r^2\mathcal{U}'(r^2)\mathcal{D}'(r^2).\nonumber
\end{align}
By assumption $\mathcal{D}(C^2)=0$, and $\mathcal{D}(r)$, $\mathcal{D}'(r)$ are bounded on $[0, C^2]$. Moreover, $\mathcal{D}'(r)<0$ on $[0, C^2]$. Thus using the mean value theorem, we find that there exist constants $c_i >0$ such that
\begin{align*}
&\left (\frac{4r^2\left (\mathcal{U}'(r^2)\right)^2}{k_BT}
- 4\left (r^2\mathcal{U}''(r^2)+\mathcal{U}'(r^2)\right )\right )\mathcal{D}(r^2) \\
&=\left (\frac{4r^2\left (\mathcal{U}'(r^2)\right)^2}{k_BT}
- 4\left (r^2\mathcal{U}''(r^2)+\mathcal{U}'(r^2)\right )\right )(\mathcal{D}(r^2)-\mathcal{D}(C^2))\\
&\geq \frac{c_1}{(C^2-r^2)^3}-c_2.
\end{align*}
Additionally since $\mathcal{D}'$ is bounded on $[0, C^2]$, there exists $C_1,C_2>0$ such that
\begin{equation*}
\left |-4\Omega x_1x_2\mathcal{U}'(r^2) + 4r^2\mathcal{U}'(r^2)\mathcal{D}'(r^2)\right | \leq \frac{C_1}{(C^2-r^2)^2}+C_2.
\end{equation*}
Putting these estimates together we find that $(x_1,x_2)\mapsto \mathcal{L} U(x_1,x_2)$ is bounded above on $\mathcal{X}$. Property p2) now follows easily.
\section{Proof of Main Result}
\label{sec:proof}
In this section we prove Theorem~\ref{thm:main} and Corollary~\ref{cor:nolimexplosion}. The idea underlying the proof of both results is quite natural. First we will see that due to the structure of equation~\eqref{eqn:main}, for each $m>0$ the first exit time $\tau_\mathcal{X}^m$ of $x^m(t)$ from $\mathcal{X}$ coincides with $\tau_{\mathcal{X}\times \mathbf{R}^n}^m$. In other words if the process $(x^m(t), v^m(t))$ exits the domain $\mathcal{X} \times \mathbf{R}^n$, then $x^m(t)$ must have exited $\mathcal{X}$. Once we have control of the stopping times in this way, the goal is to then construct processes $\{ x_k(t)\}_{t \geq 0}$ and $\{ x_k^m(t)\}_{t\geq 0}$, $m>0$ and $k\in \mathbf{N}$, on $(\Omega, \mathcal{F}, \P)$ satisfying the following properties:
\begin{enumerate}
\item $\{ x_k^m(t)\}_{t\geq 0}\subseteq \mathbf{R}^n$ and $\{ x_k(t)\}_{t \geq 0}\subseteq \mathbf{R}^n$, i.e., all processes live in the ambient space $\mathbf{R}^n$ (as opposed to $\mathcal{X}$) for all finite times $t\geq 0$.
\item $\{x_k^m(t)\}_{t\geq 0}$ and $\{ x_k(t)\}_{t\geq 0}$ have continuous sample paths.
\item Letting
\begin{align*}
\mathcal{X}_k = \begin{cases}
\{ x\in \mathcal{X} \, : \, \text{distance}(x, \partial \mathcal{X}) > k^{-1} \text{ and } |x| < k \} & \text{ if } \partial \mathcal{X} \neq \emptyset \\
\{ x\in \mathcal{X}=\mathbf{R}^n \, : \, |x| < k \} & \text{ if } \partial \mathcal{X} = \emptyset
\end{cases}
\end{align*}
and $\tau_{\mathcal{X}_k}^m$, $\tau_{\mathcal{X}_k}$ denote the first exit times of, respectively, $x^m(t)$ and $x(t)$ from $\mathcal{X}_k$:
\begin{align*}
x_k^m(t) \equiv x^m(t)\,\, \text{ for } \,\, 0\leq t< \tau_{\mathcal{X}_k}^m\,\, \text{ and }\,\, x_k(t) \equiv x(t)\,\, \text{ for }0\leq t < \tau_{\mathcal{X}_k} \qquad \P-\text{almost surely}.
\end{align*}
In the definition of $\mathcal{X}_k$ above, we note that if $\partial \mathcal{X}=\emptyset$ then $\mathcal{X}=\mathbf{R}^n$, as $\mathcal{X}$ is non-empty and both open and closed.
\item For every $\epsilon, T>0$, $k\in \mathbf{N}$ and $x_k^m(0)=x_k(0)=x\in \mathcal{X}$
\begin{align*}
\lim_{m\rightarrow 0}\P\Big\{ \sup_{t\in [0, T]} |x_k^m(t) - x_k(t)| > \epsilon\Big\} =0.
\end{align*}
\end{enumerate}
The processes $\{ x_k^m(t) \}_{t\geq 0}$ and $\{ x_k(t) \}_{t\geq 0}$ should be thought of as localizations (in time) of our original processes $\{ x^m(t)\}_{t\geq 0}$ and $\{ x(t) \}_{t\geq 0}$ which satisfy the desired convergence as $m\rightarrow 0$ for each $k\in \mathbf{N}$. Formally taking $k\rightarrow \infty$ and exchanging the order of limits in (4) above we may expect on an intuitive level the convergence to hold. However, performing such an exchange is nontrivial. Nevertheless, due to the way the set $\,\mathcal{X}\,$ is stratified by $\{\mathcal{X}_k\}_{k\in \mathbf{N}}$, we will see at the end of this section that this intuition is indeed correct; that is, we can extract convergence given the existence of such approximate processes. Corollary~\ref{cor:nolimexplosion} will be an easy consequence of the proof of Theorem~\ref{thm:main}.
We begin the section by showing $\tau_\mathcal{X}^m=\tau_{\mathcal{X}\times \mathbf{R}^n}^m$ almost surely for all $m>0$ and by constructing the approximate processes satisfying (1)-(4) above. Afterwards, we will prove Theorem~\ref{thm:main} and Corollary~\ref{cor:nolimexplosion}.
\begin{lemma}
\label{lem:tc}
Suppose that Assumption~\ref{assump:mf} is satisfied. Then for each $m>0$ and each initial condition $(x, v)\in \mathcal{X}\times \mathbf{R}^n$ $$\P\{\tau_\mathcal{X}^m= \tau_{\mathcal{X}\times \mathbf{R}^n}^m\}=1.$$
Moreover, there exist processes $\{ x_k^m(t)\}_{t\geq 0}$ and $\{x_k(t) \}_{t\geq 0}$, $k\in \mathbf{N}$ and $m>0$, on the probability space $(\Omega, \mathcal{F}, \P)$ satisfying properties (1)-(4) above.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:tc}]
We will start by constructing the desired family of processes $\{ x_k^m(t)\}_{t\geq 0}$ and $\{ x_k(t)\}_{t\geq 0}$. The conclusion $\tau_\mathcal{X}^m=\tau_{\mathcal{X}\times \mathbf{R}^n}^m$ $\P$-almost surely will be shown in the process of constructing these approximations.
By the existence of smooth bump functions, for each $k\in \mathbf{N}$ there exists $g_k \in C^\infty(\mathbf{R}^n \, : \,[0, 1])$ satisfying
\begin{align*}
g_k(x) = \begin{cases}
1 & \text{ if } x\in \overline{\mathcal{X}}_k\\
0 & \text{ if } x\in \mathbf{R}^n \setminus \mathcal{X}_{k +1}
\end{cases}.
\end{align*}
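Such cutoff functions $g_k$ exist by a classical mollifier construction; the following one-dimensional sketch (radii and sets are placeholders, not the actual $\mathcal{X}_k$) shows one standard way to build a $C^\infty$ transition:

```python
import math

def f(t):
    # Smooth on R, vanishes to all orders as t -> 0+ and equals 0 for t <= 0.
    return math.exp(-1.0 / t) if t > 0 else 0.0

def smoothstep(t):
    # C-infinity function equal to 0 for t <= 0 and 1 for t >= 1.
    return f(t) / (f(t) + f(1.0 - t))

def g(x, r_in=1.0, r_out=2.0):
    # Equals 1 for |x| <= r_in and 0 for |x| >= r_out, smooth in between;
    # this mimics g_k with inner set {|x| < r_in} and outer set {|x| < r_out}.
    return smoothstep((r_out - abs(x)) / (r_out - r_in))

print(g(0.5), g(1.5), g(3.0))
```

The denominator $f(t)+f(1-t)$ never vanishes, since $t\le 0$ and $1-t\le 0$ cannot hold simultaneously, so the quotient is globally smooth.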
Let $\hat{F}: \mathbf{R}^n \rightarrow \mathbf{R}^n$, $\hat{\sigma}: \mathbf{R}^n \rightarrow \mathbf{R}^{n\times k}$ be $C^\infty$ and have bounded derivatives of all orders, and let $\hat{\gamma}= c \,\text{Id}_{n\times n}$ where $\text{Id}_{n\times n}$ is the $n\times n$ identity matrix and $c>0$ is a fixed, arbitrary constant. For each $k\in \mathbf{N}$, define $F_k, \sigma_k, \gamma_k$ on $\mathbf{R}^n$ by
\begin{align*}
F_k= g_k F + (1-g_k) \hat{F}, \,\,\,\, \sigma_k = g_k \sigma + (1-g_k)\hat{\sigma}, \,\, \,\, \gamma_k = g_k \gamma + (1-g_k) \hat{\gamma}.
\end{align*}
By construction, observe that $F_k, \gamma_k, \sigma_k$ are bounded and globally Lipschitz on $\mathbf{R}^n$. Also, letting
\begin{align*}
c_{k}:=\inf_{\substack{x\in \overline{\mathcal{X}}_{k+1}\\y\in \mathbf{R}^n_{\neq 0}}} \frac{(\gamma(x) y, y)}{|y|^2},
\end{align*}
we note that $c_k>0$ as $\overline{\mathcal{X}}_{k+1}$ is compact. Moreover, $\gamma_k \in C^2(\mathbf{R}^n : \mathbf{R}^{n\times n})$ is uniformly positive definite on $\mathbf{R}^n$ since
\begin{align*}
(\gamma_k(x) y, y) &= g_k(x)( \gamma(x) y, y) + (1-g_k(x))c |y|^2 \\
&\geq g_k(x) c_k|y|^2 + (1-g_k(x)) c|y|^2 \\
& \geq\min\{ c_k, c\} |y|^2 .
\end{align*}
Now consider the family of $\mathbf{R}^n \times \mathbf{R}^n$-valued
SDEs
\begin{align}
\label{eqn:SDEapprox}
d x_k^m(t) & = v_k^m(t) \, dt\\
\nonumber m \, dv^m_k(t)& = [F_{k}(x_k^m(t)) - \gamma_k (x_k^m(t)) v_k^m(t) ] \, dt + \sigma_k(x_k^m(t)) \, dB_t
\end{align}
indexed by the parameters $k\in \mathbf{N}$ and $m>0$ and the family of $\mathbf{R}^n$-valued SDEs given by
\begin{align*}
dx_k(t) = [\gamma_k^{-1}(x_k(t)) F_k(x_k(t)) -S_k(x_k(t))] \, dt + \gamma_{k}^{-1}(x_k(t)) \sigma_k(x_k(t)) \, dB_t,
\end{align*}
where $S_k$ is the noise-induced drift term determined by $\gamma_k,\sigma_k$.
We now show that $\{ (x_k^m(t), v_k^m(t))\}_{t\geq 0} \subset \mathbf{R}^n \times \mathbf{R}^n$. By construction, we saw that the coefficients $F_k, \gamma_k, \sigma_k$ are bounded and globally Lipschitz on $\mathbf{R}^n$. However, the SDE \eqref{eqn:SDEapprox} has only locally Lipschitz coefficients as the term $\gamma_k(x)v$ is a locally Lipschitz function on $\mathbf{R}^n \times \mathbf{R}^n$. Therefore to see that $\{ (x_k^m(t), v_k^m(t))\}_{t\geq 0} \subset \mathbf{R}^n \times \mathbf{R}^n$ we construct the appropriate Lyapunov functions. Pick $h\in C^\infty(\mathbf{R}^n : [0, \infty))$ to satisfy the following two properties:
\begin{itemize}
\item[(a)] $h(x) \rightarrow \infty$ as $|x|\rightarrow \infty$.
\item[(b)] For each $j=1,\ldots, n$, $\partial_{x_j} h$ is a bounded function on $\mathbf{R}^n$.
\end{itemize}
Define $\Phi(x,v)= h(x) + |v|^2$ and let $\mathcal{L}_{k}^m$ denote the infinitesimal generator of the Markov process defined by \eqref{eqn:SDEapprox}. By construction and uniform positivity of the matrix $\gamma_k$, it is not hard to check that for each $m>0$, $k\in \mathbf{N}$ fixed
\begin{align*}
(x,v) \mapsto \mathcal{L}_k^m \Phi(x, v)
\end{align*}
is bounded above on $\mathbf{R}^n \times \mathbf{R}^n$. It now follows easily from standard Lyapunov function theory (see, for example, \cite{reybellet2006}) that for each fixed $m>0, k\in \mathbf{N}$ we have $\{ (x_k^m(t), v_k^m(t))\}_{t\geq 0} \subset \mathbf{R}^n \times \mathbf{R}^n$ almost surely.
Before verifying that the remaining properties in (1)-(4) are satisfied, let us take a moment to see that $\P \{ \tau_\mathcal{X}^m = \tau_{\mathcal{X} \times \mathbf{R}^n}^m \} =1$ for all $m>0$ and all initial conditions $(x,v)\in \mathcal{X}\times \mathbf{R}^n$. Trivially, $\tau_{\mathcal{X}\times \mathbf{R}^n}^m\leq \tau_{\mathcal{X}}^m$ almost surely. Next we prove the opposite inequality. Let $\xi_l^m= \inf\{t>0\,: |v^m(t)| \geq l \}$. Then for all $j, l \in \mathbf{N}$ and all $m>0$, we have the almost sure inequality
\begin{align*}
\tau_{\mathcal{X}_j}^m \wedge \xi_{l}^m \leq \tau_{\mathcal{X}\times \mathbf{R}^n}^m .
\end{align*}
The goal is to show that for all $j\in \mathbf{N}$
\begin{align}
\label{eqn:limitst}
\lim_{l\rightarrow \infty} \tau_{\mathcal{X}_j}^m \wedge \xi_{l}^m = \tau_{\mathcal{X}_j}^m \leq \tau_{\mathcal{X}\times \mathbf{R}^n}^m
\end{align}
almost surely. Taking $j\rightarrow \infty$ in the expression above will then establish the desired conclusion. By construction and pathwise uniqueness, if $(x_j^m(0), v_j^m(0))=(x^m(0), v^m(0))=(x,v) \in \mathcal{X}\times \mathbf{R}^n$, then
\begin{align*}
\P \bigg\{ \sup_{t\in [0, \tau_{\mathcal{X}_j}^m)} |(x_j^m(t), v_j^m(t))-(x^m(t), v^m(t))| =0\bigg\}=1
\end{align*}
as the coefficients defining both pair processes agree on $\mathcal{X}_{j}\times \mathbf{R}^n$. In particular, this establishes \eqref{eqn:limitst}: since $\{v_j^m(t)\}_{t\geq 0}$ is finite for all finite times, $|v^m(t)|$ remains bounded on compact subintervals of $[0, \tau_{\mathcal{X}_j}^m)$, and hence $\xi_l^m \wedge \tau_{\mathcal{X}_j}^m \rightarrow \tau_{\mathcal{X}_j}^m$ almost surely as $l\rightarrow \infty$.
Now we turn our attention to showing the remaining properties in the list (1)-(4). To see that property (1) is satisfied, we have already seen using the Lyapunov function $\Phi(x,v)$ that $\{ x_k^m(t)\}_{t\geq 0}\subseteq \mathbf{R}^n$ for all $m>0$ and $k\in \mathbf{N}$. To see that $\{ x_k(t) \}_{t\geq 0}\subseteq \mathbf{R}^n$ for all $k\in \mathbf{N}$, by construction, the coefficients $\gamma_k^{-1} F_k$ and $\gamma_k^{-1} \sigma_k$ are globally Lipschitz functions on $\mathbf{R}^n$. This can be seen using the derivative formula
\begin{align*}
\frac{\partial \gamma_k^{-1}}{\partial x_l} (x) = - \gamma_k^{-1}(x) \frac{\partial \gamma_k}{\partial x_l}(x)\,\gamma_k^{-1}(x)
\end{align*}
and using the fact that all matrices on the right above are bounded on $\mathbf{R}^n$. Also, one can use the fact that the unique solution $J_k$ of the Lyapunov equation $J_k \gamma_k^T+ \gamma_k J_k = \sigma_k \sigma_k^T$ is given by (see \cite{HMVW_14, ortega})
\begin{align*}
J_k(x)= \int_0^\infty \exp(- t \gamma_k(x)) \sigma_k(x) \sigma^T_k(x) \exp(-t \gamma_k^T(x)) \, dt
\end{align*}
to see that, too, $S_k$ is globally Lipschitz. By the standard pathwise existence and uniqueness theorem for solutions of SDEs, we now see that $\{ x_k(t)\}_{t\geq 0}\subseteq \mathbf{R}^n$.
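In the scalar case this integral representation is easy to verify by hand: for constants $\gamma,\sigma>0$ one gets $J=\int_0^\infty e^{-t\gamma}\sigma^2 e^{-t\gamma}\,dt=\sigma^2/(2\gamma)$, which indeed satisfies $J\gamma+\gamma J=\sigma^2$. A small numerical confirmation (the constants are illustrative):

```python
import math

# Scalar sanity check of the Lyapunov-equation representation:
# J = int_0^inf e^{-t*gamma} * sigma^2 * e^{-t*gamma} dt
# should satisfy J*gamma + gamma*J = sigma^2.
gamma, sigma = 2.0, 3.0

# Truncated trapezoidal quadrature (T large enough that the tail is negligible).
T, n = 10.0, 200_000
dt = T / n
J = 0.0
for i in range(n + 1):
    t = i * dt
    w = 0.5 if i in (0, n) else 1.0
    J += w * math.exp(-2.0 * gamma * t) * sigma ** 2 * dt

print(J, sigma ** 2 / (2.0 * gamma))  # both close to 2.25
```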
Properties (2) and (3) follow immediately by construction. To obtain property (4), apply Theorem~1 of \cite{HMVW_14}.
\end{proof}
We now have all of the tools necessary to prove Theorem~\ref{thm:main} and Corollary~\ref{cor:nolimexplosion}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
For any $T, \epsilon, m>0$ we have that
\begin{align*}
\nonumber \P\Big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \Big\}&=\P \Big\{\sup_{t\in [0,T]} d_\infty(x^m(t), x(t))>\epsilon, \, \tau_{\mathcal{X} \times \mathbf{R}^n}^m \leq T \Big\}\\
&\qquad+\P \Big\{\sup_{t\in [0,T]} d_\infty(x^m(t), x(t))>\epsilon, \, \tau_{\mathcal{X} \times \mathbf{R}^n}^m > T \Big\}\\
&= \P \{ \tau_{\mathcal{X} \times \mathbf{R}^n}^m \leq T\} + \P \Big\{\sup_{t\in [0,T]} |x^m(t)-x(t)|>\epsilon, \, \tau_{\mathcal{X} \times \mathbf{R}^n}^m > T \Big\}
\end{align*}
where on the last line we used the fact that $x(t) \in \mathcal{X}$ for all finite times $t\geq 0$ almost surely. Applying Lemma~\ref{lem:tc}, we see that for any $T, \epsilon, m >0$ and any $k\in \mathbf{N}$
\begin{align*}
\nonumber \P\Big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \Big\}&=\P \{ \tau_{\mathcal{X}}^m \leq T\} + \P \Big\{\sup_{t\in [0,T]} |x^m(t)-x(t)|>\epsilon, \, \tau_{\mathcal{X}}^m > T \Big\}\\
&\leq 2 \P \{ \tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} \leq T\} + \P \Big\{\sup_{t\in [0,T]} |x^m(t)- x(t)| >\epsilon, \, \tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} > T\Big\}\\
&\leq 2 \P \{ \tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k}\leq T\} + \P \Big\{\sup_{t\in [0,T]} |x^m_k(t)- x_k(t)| >\epsilon\Big\}
\end{align*}
where the first inequality was obtained by partitioning each event $A$ in question as
\begin{align*}
A=( A\cap \{ \tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} \leq T\}) \cup (A\cap \{\tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} > T\})
\end{align*}
and estimating their associated probabilities by containment. By property (4), for each $\epsilon>0$ and $k\in \mathbf{N}$:
\begin{align*}
\P \Big\{ \sup_{t\in [0,T]} |x^m_k(t)- x_k(t)|>\epsilon \Big\} \rightarrow 0 \,\, \text{ as } \,\, m \rightarrow 0
\end{align*}
so we turn to bounding $\P\big\{\tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} \leq T\big\}$. Notice that
\begin{align*}
\P\big\{\tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} \leq T\big\}& \leq \P\Big\{\tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} \leq T, \sup_{t\in [0,T]} |x^m_{k+1}(t) - x_{k+1}(t)| \leq \epsilon \Big\}\\
&\qquad +\P \Big\{\sup_{t\in [0,T]} |x_{k+1}^m(t)- x_{k+1}(t)|>\epsilon \Big\}.
\end{align*}
Because we have control of the latter term on the last line above, the crucial observation is that
for all $\epsilon \in (0,1/2)$, $k\geq 2$ $$\Big\{ \tau_{\mathcal{X}_k}^m \wedge \tau_{\mathcal{X}_k} \leq T,\, \sup_{t\in [0,T]} |x_{k+1}^m(t) -x_{k+1}(t)|\leq \epsilon \Big\}\subset \{ \tau_{\mathcal{X}_{N(\epsilon, k)}}\leq T\}$$
for some integer $N(\epsilon, k)\geq 1$ satisfying $\lim_{k \rightarrow \infty} N(\epsilon, k)=N(\epsilon)\in \mathbf{N}\cup\{\infty \}$ and, if $N(\epsilon)< \infty$, $\lim_{\epsilon \rightarrow 0} N(\epsilon) =\infty$. Putting this all together we obtain the following estimate for all $ m, T>0$, all $\epsilon \in (0,1/2)$, $k\geq 2$
\begin{align*}
\P\Big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \Big\}&\leq 2\P\{\tau_{\mathcal{X}_{N(\epsilon, k)}} \leq T\} + 2 \P \Big\{\sup_{t\in [0,T]} |x_{k+1}^m(t)- x_{k+1}(t)|>\epsilon \Big\}\\
&\qquad + \P \Big\{ \sup_{t\in [0,T]} |x^m_k(t)- x_k(t)|>\epsilon \Big\}.
\end{align*}
Thus for all $T>0$, $\epsilon \in (0,1/2)$, $k\geq 2$ we have
\begin{align*}
\limsup_{m\rightarrow 0} \P\Big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \Big\}&\leq 2\P\{\tau_{\mathcal{X}_{N(\epsilon, k)}} \leq T\}.
\end{align*}
Taking $k\rightarrow \infty$ in the above we obtain the following inequality
\begin{align*}
\limsup_{m\rightarrow 0} \P\Big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \Big\}&\leq \begin{cases} 2\P\{\tau_{\mathcal{X}_{N(\epsilon)}} \leq T\} & \text{ if } N(\epsilon) \in \mathbf{N}\\
0 & \text{ otherwise} \end{cases}
\end{align*}
for all $\epsilon \in (0,1/2)$. In particular, the result is proven in the case when $N(\epsilon)=\infty$. If $N(\epsilon) \in \mathbf{N}$, then for $\delta \in (0, \epsilon)$, $\epsilon < 1/2$ we have
\begin{align*}
\limsup_{m\rightarrow 0} \P\Big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \Big\} \leq 2\P\{\tau_{\mathcal{X}_{N(\delta)}} \leq T\}.
\end{align*}
Taking $\delta \downarrow 0$, using the fact that $N(\delta) \rightarrow \infty$ and the fact that $\tau_{\mathcal{X}}= \infty $ almost surely, we obtain the result.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:nolimexplosion}]
This follows easily from Theorem~\ref{thm:main} since we have already seen that
\begin{align*}
\nonumber \P\Big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \Big\}&=\P \{ \tau_{\mathcal{X} \times \mathbf{R}^n}^m \leq T\} + \P \Big\{\sup_{t\in [0,T]} |x^m(t)-x(t)|>\epsilon, \, \tau_{\mathcal{X} \times \mathbf{R}^n}^m > T \Big\}.
\end{align*}
In particular, $\P \{ \tau_{\mathcal{X} \times \mathbf{R}^n}^m \leq T\}$ is bounded above by $\P\big\{ \sup_{t\in [0,T]} d_\infty(x^m(t),x(t))>\epsilon \big\}$, which converges to zero as $m\rightarrow 0$ by Theorem~\ref{thm:main}. Since $T>0$ was arbitrary, the result follows.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
This work deals with some aspects of vortex motion for the classical planar incompressible Euler equations, which can be reformulated in the vorticity/velocity form as follows
\begin{equation}\label{Euler equations}
\left\lbrace\begin{array}{ll}
\partial_{t}\boldsymbol{\omega}+\mathbf{v}\cdot\nabla\boldsymbol{\omega}=0, & \textnormal{in }\mathbb{R}_{+}\times\mathbb{R}^2,\\
\mathbf{v}=\nabla^{\perp}\psi,\\
\Delta\psi=\boldsymbol{\omega},\\
\boldsymbol{\omega}(0,\cdot)=\boldsymbol{\omega}_0.
\end{array}\right.
\end{equation}
The quantity $\mathbf{v}$ represents the {velocity field} of the fluid particles which is supposed to be solenoidal according to the second equation in \eqref{Euler equations} where the notation $\nabla^{\perp}\triangleq \begin{pmatrix}
-\partial_2\\
\partial_1
\end{pmatrix}$ is used. The scalar potential $\boldsymbol{\omega}$ is called the {vorticity} and measures the local rotation effects inside the fluid. It is related to the velocity field by the relation
$$\boldsymbol{\omega}\triangleq\nabla^{\perp}\cdot\mathbf{v}.$$
From the third equation in \eqref{Euler equations}, we can recover the {stream function} $\psi$ from the vorticity $\boldsymbol{\omega}$ through the following integral operator with a logarithmic kernel
\begin{equation}\label{def streamL1}
\psi(t,z)=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}\log(|z-\xi|)\boldsymbol{\omega}(t,\xi)dA(\xi),
\end{equation}
where $dA$ is the $2$-dimensional Lebesgue measure. It is well-known since the work of Yudovich \cite{Y63} that any bounded and integrable initial datum $\boldsymbol{\omega}_0$ generates a unique global in time weak solution of \eqref{Euler equations} which is Lagrangian, namely
$$\boldsymbol{\omega}(t,x)=\boldsymbol{\omega}_0\big(\boldsymbol{\Phi}_{t}^{-1}(x)\big),\qquad\boldsymbol{\Phi}_t(x)=x+\int_{0}^{t}\mathbf{v}\big(s,\boldsymbol{\Phi}_s(x)\big)ds.$$
In particular, if the initial datum is the characteristic function of some bounded domain $D_0$ then
$$\boldsymbol{\omega}(t,\cdot)=\mathbf{1}_{D_t},\qquad D_t\triangleq\boldsymbol{\Phi}_t(D_0)$$
and the resulting solution is called a \textit{vortex patch}. The dynamics of these solutions is entirely described by the evolution of the boundary $\partial D_t.$ The global in time persistence of the boundary regularity of type $C^{1,\alpha}$, with $\alpha\in(0,1)$ was first proved by Chemin in \cite{C93,C95} and later by Bertozzi and Constantin in \cite{BC93}. Notice that the boundary motion can be tracked from the {\it contour dynamics equation} of the patch. Indeed, for any parametrization $z(t):\mathbb{T}\rightarrow\partial D_t$ of the boundary, denoting $\mathbf{n}\big(t,z(t,\theta)\big)\triangleq\ii \partial_\theta z(t,\theta)$ a normal vector to the boundary at the point $z(t,\theta)$, one has
\begin{equation}\label{CDE}
\partial_{t}z(t,\theta)\cdot\mathbf{n}\big(t,z(t,\theta)\big)=\partial_{\theta}\Big[\psi\big(t,z(t,\theta)\big)\Big].
\end{equation}
We refer for instance to \cite{HMV13} for a complete derivation of this equation. In 1858, Rankine observed that any radial initial domain $D_0$ (disc, annulus, etc.) generates a stationary solution to \eqref{Euler equations}. From a dynamical systems point of view it is then of particular interest to explore the local structure of the phase portrait and to know whether periodic solutions may exist around these equilibrium states. This topic turns out to be highly rich, leading to fruitful subjects connecting various areas of mathematics. The first result in this direction is due to Kirchhoff in 1874 \cite{K74} where he proved that an ellipse $\mathcal{E}_{a,b}$ with semi-axes $a$ and $b$ performs a uniform rotation about its center with an angular velocity $\Omega$ if and only if
$$\Omega=\frac{ab}{(a+b)^2}\cdot$$
Actually, the ellipses form a subclass of \textit{relative equilibria} or \textit{V-states} which are solutions keeping the same shape during the motion, from which we derive another subclass given by rotating patches where the domain $D_t$ rotates uniformly about its center (due to the space invariance, we can suppose without any loss of generality that the center is the origin),
$$D_t=e^{\ii\Omega t}D_0,\qquad\Omega\in\mathbb{R}.$$
They form a family of rigid periodic solutions where the domain is not deformed during the motion and keeps its initial shape. Later, in 1978, Deem and Zabusky \cite{DZ78} discovered numerically 3-fold, 4-fold and 5-fold V-states living close to the unit disc. A few years later, Burbea \cite{B82} confirmed these simulations analytically using bifurcation theory. More precisely, he proved that for any integer $\mathtt{m}\geqslant 1,$ one can find a branch of $\mathtt{m}$-fold simply-connected V-states bifurcating from the unit disc at the angular velocity
$$\Omega_{\mathtt{m}}\triangleq\frac{\mathtt{m}-1}{2\mathtt{m}}\cdot$$
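These frequencies are easily tabulated; note also that the $\mathtt{m}=2$ value $\tfrac14$ agrees with the Kirchhoff formula $ab/(a+b)^2$ evaluated at $a=b$, consistent with the $\mathtt{m}=2$ branch consisting of ellipses. A quick check in exact rational arithmetic:

```python
from fractions import Fraction

def burbea(m):
    # Bifurcation angular velocity of the m-fold branch from the unit disc.
    return Fraction(m - 1, 2 * m)

def kirchhoff(a, b):
    # Angular velocity of the rotating Kirchhoff ellipse with semi-axes a, b.
    return Fraction(a * b, (a + b) ** 2)

for m in range(1, 6):
    print(m, burbea(m))

# The m = 2 branch (the ellipses) bifurcates at Omega_2 = 1/4, which is the
# Kirchhoff value in the degenerate limit a = b (the circle).
print(burbea(2), kirchhoff(1, 1))
```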
Actually, the case $\mathtt{m}=1$ corresponds to a translation of the Rankine vortex whereas the branch associated with the mode $\mathtt{m}=2$ gives the Kirchhoff ellipses. Observe that for any $\mathtt{m}\geqslant2,$ the bifurcation frequency $\Omega_{\mathtt{m}}$ lives in the interval $(0,\tfrac{1}{2})$ and the series of works \cite{F00,GPSY20,H15} showed that, outside this interval and in the simply-connected case, the only relative equilibria are the radial ones. The boundary regularity of the V-states and the global bifurcation were analyzed in \cite{CCG16,C-C-G16, HMW20,HMV13}. The second bifurcation from the ellipses has been discussed in \cite{C-C-G16,HM16}. More precisely, if we consider an ellipse $\mathcal{E}_{a,b}$ ($a>b$) described by $$\mathcal{E}_{a,b}=\Big\{\tfrac{a+b}{2}\big(w+\tfrac{\mathcal{Q}}{w}\big),\quad w\in\mathbb{T},\quad\mathcal{Q}\triangleq\tfrac{a-b}{a+b}\in(0,1)\Big\},$$
then for any integer $\mathtt{m}\geqslant3,$ the bifurcation occurs at the angular velocity $\Omega=\frac{1-\mathcal{Q}^2}{4}$, where $\mathcal{Q}$ is a solution to the polynomial equation
\begin{equation}\label{constraint ellipse}
f(\mathtt{m},\mathcal{Q})\triangleq1+\mathcal{Q}^{\mathtt{m}}-\tfrac{1-\mathcal{Q}^2}{2}\mathtt{m}=0.
\end{equation}
The boundary effects on the emergence of V-states have been explored recently in \cite{HHHM15} where the authors proved the existence of V-states when the fluid evolves in the unit disc $\mathbb{D}$.
It was shown that for any integer $\mathtt{m}\geqslant 1$ a family of $\mathtt{m}$-fold implicit curves bifurcates from the disc $b\mathbb{D},$ $b\in(0,1),$ at the angular velocity
$$\Omega_{\mathtt{m}}(b)\triangleq\frac{\mathtt{m}-1+b^{2\mathtt{m}}}{2\mathtt{m}}\cdot$$
In contrast to the flat case $\R^2$, the one-fold curve is no longer trivial here. Moreover, the numerical simulations performed in \cite{HHHM15} show that in some regimes of $b$ the bifurcating curves oscillate with respect to the angular velocity. In the same spirit, Hmidi, de la Hoz, Mateu and Verdera discussed in \cite{HHMV16} the existence of rotating patches with one hole, called doubly-connected V-states. They proved that for a fixed symmetry $\mathtt{m}\geqslant3$ and $b\in(0,1)$, two $\mathtt{m}$-fold curves of doubly-connected V-states bifurcate from the annulus
\begin{equation}\label{annulus-Ab}
A_b\triangleq\big\{z\in\mathbb{C}\quad\textnormal{s.t.}\quad b<|z|<1\big\},
\end{equation}
provided that the following constraint is satisfied
$$f(\mathtt{m},b)<0,$$
where $f$ is given by \eqref{constraint ellipse}. The bifurcation occurs at the angular velocities
\begin{align}\label{Om-doub}
\Omega_{\mathtt{m}}^{\pm}(b)\triangleq\frac{1-b^{2}}{4}\pm\frac{1}{2\mathtt{m}}\sqrt{\left(\frac{1-b^2}{2}\mathtt{m}-1\right)^2-b^{2\mathtt{m}}}.
\end{align}
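As a quick sanity check (an elementary computation, included here for convenience), the constraint $f(\mathtt{m},b)<0$ is precisely what makes the square root in the last display real. Indeed,
```latex
f(\mathtt{m},b)<0
\;\Longleftrightarrow\;
\tfrac{1-b^{2}}{2}\mathtt{m}-1>b^{\mathtt{m}}>0
\;\Longrightarrow\;
\Big(\tfrac{1-b^{2}}{2}\mathtt{m}-1\Big)^{2}-b^{2\mathtt{m}}>0,
```
so that $\Omega_{\mathtt{m}}^{+}(b)$ and $\Omega_{\mathtt{m}}^{-}(b)$ are two distinct real angular velocities.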
It is worth pointing out that the role played by the same function $f$ in the two different cases (bifurcation from the ellipses and from the annulus) is quite mysterious and could be explained through the Joukowsky transformation.
As for the degenerate case where
\begin{equation}\label{deg bif}
f(\mathtt{m},b)=0
\end{equation}
the situation turns out to be more delicate to handle. The solutions to \eqref{deg bif} can be ranged in the form
\begin{align}\label{bn-def11}
\{(2,b),\quad b\in(0,1)\}\qquad\textnormal{or}\qquad\Big\{(n,\underline{b}_n),\quad n\geqslant 3,\quad\underline{b}_{n}\in(0,1)\Big\},
\end{align}
where the sequence $(\underline{b}_{n})_{n\geqslant 3}$ is increasing and tends to $1.$ This problem has been explored by Hmidi and Mateu in \cite{HM16-2}, where they showed that for $b\in(0,1)\setminus\{\underline{b}_{2p},\,p\geqslant2\}$ there is a trans-critical bifurcation of the 2-fold V-states. However, there is no bifurcation with the $\mathtt{m}$-fold symmetry for $b=\underline{b}_{\mathtt{m}},$ $\mathtt{m}\geqslant3.$ Very recently, Wang, Xu and Zhou extended in \cite{WXZ22} the 2-fold trans-critical bifurcation to the cases $b=\underline{b}_{4}$ and $b=\underline{b}_{6}.$
We should also mention that over the past few years there has been a lot of rich activity on the construction of V-states around more general steady shapes (multi-connected patches, Thomson polygons, von K\'arm\'an vortex streets, etc.) and for various active scalar equations (generalized quasi-geostrophic equations, quasi-geostrophic shallow-water equations, Euler-$\alpha$ equations). For more details, we refer to \cite{ADPMW20,CLZ21,CQZZ21,CWWZ21,CJ22,EJ19,EJ20,HHH16,HHHM15,DHR19,G20,G21,GH22,GHS20,GGS20,GS19,HH15,HH21,HW21,HM17,HXX22,R17,R21,R22,T85}.
\smallskip
In the current work, we intend to explore the existence of time quasi-periodic vortex patches for \eqref{Euler equations} close to the annulus. Recall that a \textit{quasi-periodic function} is any map $f:\mathbb{R}\rightarrow\mathbb{R}$ which can be written in the form
$$\forall\, t\in\mathbb{R},\quad f(t)=F(\omega t),$$
with $F:\mathbb{T}^{d}\rightarrow\mathbb{R},$ where $\mathbb{T}^d$ denotes the flat torus of dimension $d\in\mathbb{N}^*$ and $\omega\in\mathbb{R}^{d}$ a non-degenerate frequency vector, namely
\begin{equation}\label{nonresonnace omega}
\forall\, l\in\mathbb{Z}^{d}\setminus\{0\},\quad \omega\cdot l\neq 0.
\end{equation}
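To fix ideas, here is a minimal illustrative example (with $d=2$, not taken from the text):
```latex
f(t)=\cos(t)+\cos\big(\sqrt{2}\,t\big)=F(\omega t),
\qquad
F(\varphi_{1},\varphi_{2})\triangleq\cos(\varphi_{1})+\cos(\varphi_{2}),
\qquad
\omega\triangleq(1,\sqrt{2}),
```
where \eqref{nonresonnace omega} holds because $\sqrt{2}$ is irrational, so $f$ is quasi-periodic but not periodic.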
Observe that the case $d=1$ corresponds to the definition of periodic functions with frequency $\omega\in\mathbb{R}^*.$ Functions of this type are the natural solutions of finite dimensional integrable Hamiltonian systems, where the phase space is foliated by Lagrangian invariant tori supporting quasi-periodic motion. The Kolmogorov-Arnold-Moser (KAM) theory \cite{A63,K54,M62} asserts that, under suitable regularity and non-degeneracy conditions, most of these invariant tori persist, up to a smooth deformation, under a small Hamiltonian perturbation. A typical difficulty in the implementation of the KAM method is linked to the \textit{small divisor problems} preventing some intermediate series from being convergent. The solution, proposed by Kolmogorov, is to introduce Diophantine conditions on the small denominators, which lead to a fixed algebraic loss of regularity. This loss can be treated through a classical Newton method in the analytic regularity framework, as proved by Kolmogorov and Arnold. However, this approach turns out to be more involved in the case of finite differentiability (for example in Sobolev or H\"older spaces). To overcome this technical difficulty, Moser used in \cite{M67} a regularization of the Newton method in the spirit of the ideas of Nash implemented in the isometric embedding problem \cite{N54}. Such a method is now known as the Nash-Moser scheme.
\smallskip
The search for lower dimensional invariant tori is relevant not only for finite dimensional Hamiltonian systems but also for Hamiltonian PDEs, where this question is quite natural. Actually, in the finite dimensional case, this problem has been explored for instance by Moser and P\"{o}schel \cite{M67,P89}, leading to new Diophantine conditions called \textit{first and second order Melnikov conditions}. Later on, the theory has been extended and refined for several Hamiltonian PDEs. For example, it has been implemented for the 1D semi-linear wave and Schr\"odinger equations in the following papers \cite{B94,CY00,CW93,K87,P96,P96-1,W90}. Several results were also obtained for semi-linear perturbations of integrable PDEs \cite{BBP13,BBP14,B05,EK10,GYZ11,KP03,K98,K00,LY11}. However, the case of quasi-linear or fully nonlinear perturbations was only explored very recently in a series of papers \cite{BBM14,BBM16,BBM16-1,BB15,BKM21,FP14}. A typical example in this direction is given by the water-waves equations, which have been the subject of rich and intensive activity over the past few years dealing with periodic and quasi-periodic solutions, see for instance \cite{AB11,BBMH18, BFM21,BFM21-1,BM18,IPT05,PT01}.
\smallskip
Concerning the emergence of quasi-periodic structures for the 2D Euler equations, which are known to be Hamiltonian, few results are known in the literature, although some interesting developments have been made very recently, opening new perspectives on the vortex motion.
One of the results in the smooth case, supplemented with periodic boundary conditions, goes back to Crouseilles and Faou in \cite{CF13}. Their construction of quasi-periodic solutions is based on the superposition of localized traveling solutions without interaction. Notice that no sophisticated tools from KAM theory are required in their approach. Very recently, their result has been extended to higher dimensions by Enciso, Peralta-Salas and Torres de Lizaur in \cite{EPT22}. For the Euler equations on the 3-dimensional torus, Baldi and Montalto \cite{BM21} were able to generate quasi-periodic solutions through small quasi-periodic forcing terms.
\smallskip
Another new and promising topic concerns the construction of quasi-periodic vortex patches for the system \eqref{Euler equations} or for various active scalar equations (generalized surface quasi-geostrophic equations, quasi-geostrophic shallow-water equations and Euler-$\alpha$ equations), which has been partially explored in the recent papers \cite{HHM21,HR21,R22}. All of them deal with simply-connected quasi-periodic patches near Rankine vortices, provided that a suitable external parameter is selected in a massive Cantor set. We emphasize that for the Euler model there is no natural parameter anymore and one has to create an internal one. Two works have been performed in this direction. The first one is due to Berti, Hassainia and Masmoudi \cite{BHM21}, who proved using KAM theory the existence of quasi-periodic patches close to Kirchhoff ellipses, provided that the aspect ratio of the ellipse belongs to a Cantor set.
The second one was obtained by Hassainia and Roulley in \cite{HR21-1}, where the fluid evolves in the unit disc; they proved the existence of quasi-periodic patches close to the Rankine vortices $\mathbf{1}_{b\mathbb{D}}$ when $b$ belongs to a suitable Cantor set in $(0,1)$.
\smallskip
Our main task here is to investigate the emergence of quasi-periodic patches (denoted by (QPP)) near the annulus $A_b$. The motivation behind this is the existence of time periodic patches around the annulus, as stated in \cite{HHMV16}, so that one may get (QPP) at the linear level by mixing a finite number of frequencies. Note that the rigidity of the frequencies \eqref{Om-doub} with respect to the modulus $b$ is an essential element to get the non-degeneracy of the linear torus. One of the difficulties in the construction of (QPP) at the nonlinear level stems from the vectorial structure of the problem, because we are dealing with two coupled interfaces. As we shall see, this leads to more time-space resonances coming in part from the interaction between the transport equations advected by two different speeds.\\
In what follows, we intend to carefully describe the situation around doubly-connected (QPP), then formulate the main result and sketch the principal ideas of the proof. First, we consider a modified polar parametrization of the two interfaces of the patch close to the annulus $A_b$, namely for $k\in\{1,2\}$
$$ z_k(t,\theta)\triangleq R_k(t,\theta)e^{\ii(\theta-\Omega t)},\quad R_{k}(t,\theta)\triangleq\sqrt{b_{k}^{2}+2r_{k}(t,\theta)},\quad b_1\triangleq 1,\quad b_2\triangleq b.$$
The unknown is the pair of functions $r=(r_1,r_2)$ of small radial deformations of the patch. It is worth pointing out that, similarly to \cite{BHM21,HHM21,HR21,R22}, our parametrization is written in a rotating frame with an angular velocity $\Omega>0.$ Nevertheless, we have multiple reasons here for introducing this auxiliary parameter $\Omega$. In the previous works, this parameter was used to remedy the degeneracy of the first equilibrium frequency leading to a trivial resonance. In our setting, this parameter is needed to avoid an exponential accumulation towards a constant of the unperturbed frequencies (eigenvalues) $\{\mathtt{m}\Omega_{\mathtt{m}}^{\pm}(b), \mathtt{m}\geqslant2\}$, see \eqref{Om-doub}. This fact would induce a harmful effect especially related to the second order Melnikov non-resonance condition. Therefore, thanks to the parameter $\Omega$, the eigenvalues grow linearly with respect to the modes $\mathtt{m}$. Another useful property induced by large values of $\Omega$ is the monotonicity of the eigenvalues, see Lemma \ref{lem-asym}, which allows in turn to get R\"ussmann conditions on the diagonal part, see Lemma \ref{lemma transversalityE}-(iv).
\smallskip
One of the major differences with \cite{HHM21,HR21-1,HR21,R22} is the vectorial structure of the system related to the coupling of the interfaces. Despite that, we are able to check the Hamiltonian structure in terms of the contour dynamics equations. In fact, we prove in Lemma \ref{lem eq EDC r} and Proposition \ref{prop HAM eq Edc} that the pair of radial deformations $r=(r_1,r_2)$ solves a system of two coupled nonlinear and nonlocal transport PDEs admitting a Hamiltonian formulation in the form
\begin{equation}\label{Ham eq EDC r1-r2 intro}
\partial_{t}r=\mathcal{J}\nabla H(r),\qquad\mathcal{J}\triangleq\begin{pmatrix}
\partial_{\theta} & 0\\
0 & -\partial_{\theta}
\end{pmatrix},
\end{equation}
where the Hamiltonian $H$ can be recovered from the kinetic energy and the angular momentum. This Hamiltonian is reversible and invariant under space translations.
The linearized operator at a general state $r$ close to the annulus $A_b$ is described in Lemma \ref{lem lin op 1 DCE} and takes the form
$$\partial_{t}\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
=\mathcal{J} \mathbf{M}_r\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix},\qquad \mathbf{M}_r\triangleq \begin{pmatrix}
-V_{1}(r)-L_{1,1}({r}) & L_{1,2}(r)\\
L_{2,1}(r) & V_{2}(r)-{L}_{2,2}(r)
\end{pmatrix},$$
where $V_{k}(r)$ are scalar functions and ${L}_{k,n}({r})$ are nonlocal operators given by \eqref{def Vpm}--\eqref{def mathbfLkn}.
The diagonal terms correspond to the self-induction of each interface. In particular, the operators $L_{k,k},$ for $k\in\{1,2\},$ are of zero order and reflect the planar simply-connected Euler action. For $k\neq n\in\{1,2\},$ the anti-diagonal operators $L_{k,n}$ describe the interaction between the two boundaries and they are smoothing at any order. In Lemma \ref{lem lin op 2 DCE}, we shall prove that at the equilibrium $r=(0,0)$, corresponding to the annulus patch,
each entry of $\mathbf{M}_0$ is a Fourier multiplier and the operator $\mathcal{J}\mathbf{M}_0$ can be written in Fourier expansion as a superposition of $2\times 2$ matrices,
$$\mathcal{J}\mathbf{M}_0\begin{pmatrix}
\rho_{1}\\
\rho_2
\end{pmatrix}=\sum_{j\in\mathbb{Z}^*}M_{j}(b,\Omega)\begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix}\mathbf{e}_{j},\qquad M_{j}(b,\Omega)\triangleq\frac{\ii j}{|j|} \begin{pmatrix}
-|j|\big(\Omega+\tfrac{1-b^2}{2}\big)+\tfrac{1}{2}& -\tfrac{b^{|j|}}{2}\\
\tfrac{b^{|j|}}{2} & -|j|\Omega-\tfrac{1}{2}
\end{pmatrix},$$
for all $\rho_1$ and $\rho_2$ with Fourier expansion
$$
\rho_k=\sum_{j\in\mathtt{m}\mathbb{Z}^*}\rho_{j,k}\mathbf{e}_j\quad\textnormal{s.t.}\quad \rho_{-j,k}=\overline{\rho_{j,k}},\qquad\; \mathbf{e}_{j}(\theta)\triangleq e^{\ii j\theta}.
$$
The spectrum of $M_{j}(b,\Omega)$ is
$$\sigma\big(M_{j}(b,\Omega)\big)=\big\{-\ii\Omega_{j,1}(b),-\ii\Omega_{j,2}(b)\big\},\qquad\Omega_{j,k}(b)\triangleq\frac{j}{|j|}\bigg[\big(\Omega+\tfrac{1-b^2}{4}\big)|j|-\ii^{\mathtt{H}\big(\Delta_{j}(b)\big)} \tfrac{(-1)^{k}}{2}\sqrt{|\Delta_{j}(b)|}\bigg],$$
with $\mathtt{H}\triangleq \mathbf{1}_{[0,\infty)}$ the Heaviside function and
$$\Delta_{j}(b)\triangleq b^{2|j|}-\big(\tfrac{1-b^2}{2}|j|-1\big)^2.$$
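For the reader's convenience, here is a sketch of how these eigenvalues are obtained. Writing $M_{j}(b,\Omega)=\frac{\ii j}{|j|}A_{j}$, with $A_{j}$ the real matrix displayed above, a direct computation gives
```latex
\operatorname{tr}A_{j}=-2|j|\Big(\Omega+\tfrac{1-b^{2}}{4}\Big),
\qquad
\Big(\tfrac{\operatorname{tr}A_{j}}{2}\Big)^{2}-\det A_{j}=-\tfrac{\Delta_{j}(b)}{4},
```
so the eigenvalues of $A_{j}$ are $\lambda_{j}^{\pm}=-|j|\big(\Omega+\tfrac{1-b^{2}}{4}\big)\pm\tfrac{1}{2}\sqrt{-\Delta_{j}(b)}$, and multiplying by $\tfrac{\ii j}{|j|}$ recovers the expression of $-\ii\Omega_{j,k}(b)$; in particular, when $\Delta_{j}(b)<0$ the spectrum of $M_{j}(b,\Omega)$ is purely imaginary.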
At this stage, we shall restrict the discussion to $\mathtt{m}$-fold symmetric structures for some integer $\mathtt{m}\geqslant 3$ large enough. This is done for several reasons. First, the mode $j=2$ corresponds to a double root for any $b\in (0,1),$ because $\Delta_{2}(b)=0,$ implying a nontrivial resonance that we cannot remove using the parameter $b$ but only by imposing higher symmetry on the (QPP). Second, the hyperbolic spectrum, associated with eigenvalues having non-zero real part, which could generate instabilities and time growth, emerges only for lower symmetries. We believe that in this latter configuration one can still hope to construct (QPP) by inserting the hyperbolic modes in the normal directions, as was recently done in \cite{BHM21}. We refer the reader, for instance, to \cite{BB12,LS19,EGK16,G74,PP15,Z75,Z76} for an introduction to hyperbolic KAM theory in finite or infinite dimension.
\smallskip
Now, we fix $b^*\in(0,1)$ and set
\begin{equation}\label{def m star intr}
\mathtt{m}^*\triangleq\min\big\{n\geqslant3\quad\textnormal{s.t.}\quad\underline{b}_n>b^*\big\},
\end{equation}
where the sequence $\underline{b}_n$ is defined in \eqref{bn-def11}.
Then, for any integer $|j|\geqslant\mathtt{m}^*$, we have $\Delta_{j}(b)<0.$ Hence, the quantity $\Omega_{j,k}(b)$ is real and the matrix $M_{j}(b,\Omega)$ has pure imaginary spectrum. The restriction of the Fourier modes to the lattice $\mathbb{Z}_{\mathtt{m}}^*\triangleq\mathtt{m}\mathbb{Z}\setminus\{0\}$ with $\mathtt{m}\geqslant\mathtt{m}^*$ allows us to eliminate the hyperbolic modes.
At this stage, we find it convenient to work with new coordinates in which the linearized operator at the equilibrium state is governed by a diagonal matrix Fourier multiplier operator. This can be done through the diagonalization of each matrix $M_j(b,\Omega).$ To do that, we use the symplectic transformation (with respect to $\mathcal{W}$) $\mathbf{Q}$ taking the form
$$\mathbf{Q}\begin{pmatrix}
\rho_1\\
\rho_2
\end{pmatrix}=\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\mathbf{Q}_j\begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix}\mathbf{e}_{j},\qquad\mathbf{Q}_j\triangleq \tfrac{-1}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
-1 & a_{j} (b) \\
a_{j}(b)& -1
\end{pmatrix},
$$
where
\begin{equation}\label{def coef ajb}
a_{j}(b)\triangleq\frac{b^{|j|}}{\tfrac{1-b^2}{2}|j|-1+ \sqrt{\big(\tfrac{1-b^2}{2}|j|-1\big)^2-b^{2|j|}}}\in(0,1),
\end{equation}
(see Corollary \ref{coro-equilib-freq} for more details on the bound of $a_{j}(b)$). This transformation is such that
$$\mathbf{Q}^{-1}\mathcal{J} \mathbf{M}_0\mathbf{Q}= \mathcal{J} \mathbf{L}_0, \qquad \mathbf{L}_0\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
\triangleq \sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\tfrac{1}{j}\begin{pmatrix}
-\Omega_{j,1}(b) & 0\\
0 & \Omega_{j,2}(b)
\end{pmatrix} \begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix} \mathbf{e}_{j}.$$
Notice that $\ii\Omega_{j,1}(b)$ and $\ii\Omega_{j,2}(b)$ are not complex conjugates, so the dynamics cannot be reduced to one scalar equation associated with a complex variable, unlike the water-waves situation \cite{BBMH18,BFM21,BFM21-1,BM18}.
Through the symplectic transformation $r\mapsto\mathbf{Q}r$, we obtain the new Hamiltonian system
$$\partial_{t}r=\mathcal{J}\nabla K(r),\qquad K(r)\triangleq H(\mathbf{Q}r),$$
whose linearization at the trivial solution has the good normal form
\begin{equation}\label{KL0 intro}
\partial_t \rho=\mathcal{J}\nabla K_{\mathbf{L}_0}(\rho),\qquad K_{\mathbf{L}_0}(\rho)\triangleq \tfrac{1}{2}\big\langle\mathbf{L}_0\rho,\rho\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}=-\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\left(\tfrac{\Omega_{j,1}(b)}{2j}|\rho_{j,1}|^2-\tfrac{\Omega_{j,2}(b)}{2j}|\rho_{j,2}|^2\right).
\end{equation}
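Mode by mode, since $\mathcal{J}$ acts on $\mathbf{e}_{j}$ as $\mathrm{diag}(\ii j,-\ii j)$, the normal form \eqref{KL0 intro} decouples into the elementary system (a direct computation, included here for convenience)
```latex
\partial_{t}\rho_{j,1}=-\ii\,\Omega_{j,1}(b)\,\rho_{j,1},
\qquad
\partial_{t}\rho_{j,2}=-\ii\,\Omega_{j,2}(b)\,\rho_{j,2},
\qquad\textnormal{hence}\qquad
\rho_{j,k}(t)=e^{-\ii\Omega_{j,k}(b)t}\,\rho_{j,k}(0),
```
so each Fourier mode simply rotates with the corresponding equilibrium frequency.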
Consider two disjoint finite sets of Fourier modes
\begin{equation}\label{tang sets intro}
\mathbb{S}_1,\mathbb{S}_2\subset\mathtt{m}\mathbb{N}^*,\qquad\textnormal{with}\qquad |\mathbb{S}_1|=d_1<\infty,\qquad|\mathbb{S}_{2}|=d_2<\infty\qquad\textnormal{and}\qquad \mathbb{S}_1\cap\mathbb{S}_2=\varnothing.
\end{equation}
Then, from Lemma \ref{lemma sol Eq}, we deduce that for any $0<b^*<1,$ $\Omega>0$ and $r_{j,1},r_{j,2}\in\mathbb{R}^*$, and for almost all $b\in[0,b^*],$ any function of the form
$$r(t,\theta)=\sum_{j\in{\mathbb{S}}_1}\tfrac{r_{j,1}}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
1 \\
- a_{j}(b)
\end{pmatrix}\cos\big(j\theta-\Omega_{j,1}(b)t\big)+\sum_{j\in{\mathbb{S}}_2}\tfrac{r_{j,2}}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
-a_{j} (b) \\
1
\end{pmatrix}\cos\big(j\theta-\Omega_{j,2}(b)t\big)$$
is a quasi-periodic solution with frequency
\begin{equation}\label{def omegaEq intro}
\omega_{\textnormal{Eq}}(b)\triangleq\Big(\big(\Omega_{j,1}(b)\big)_{j\in\mathbb{S}_1},\big(\Omega_{j,2}(b)\big)_{j\in\mathbb{S}_2}\Big)
\end{equation}
of the original linearized equation $\partial_{t} r=\mathcal{J}\mathbf{M}_0r$ which is $\mathtt{m}$-fold and reversible, namely $r(-t,-\theta)=r(t,\theta)=r\big(t,\theta+\tfrac{2\pi}{\mathtt{m}}\big).$
Our main result states that these structures persist at the nonlinear level. More precisely, we have the following theorem.
\begin{theo}\label{thm QPS E}
Let $0<b_*<b^*<1$ and fix $\mathtt{m}\in\mathbb{N}$ with $\mathtt{m}\geqslant\mathtt{m}^*,$ where $\mathtt{m}^*$ is defined in \eqref{def m star intr}.
There exists $\Omega_{\mathtt{m}}^{*}\triangleq\Omega(b^*,\mathtt{m})>0$ satisfying
$$\lim_{\mathtt{m}\to\infty}\Omega_{\mathtt{m}}^*=0$$
such that for any $\Omega>\Omega_{\mathtt{m}}^*,$ there exists $\varepsilon_{0}\in(0,1)$ small enough with the following properties: for all amplitudes
$$\mathtt{a}=\big((\mathtt{a}_{j,1})_{j\in\mathbb{S}_1},(\mathtt{a}_{j,2})_{j\in\mathbb{S}_2}\big)\in(\mathbb{R}_{+}^{*})^{d_1+d_2}\qquad\textnormal{satisfying}\qquad|{\mathtt{a}}|\leqslant\varepsilon_{0},$$
there exists a Cantor-like set
$$\mathcal{C}_{\infty}^{\mathtt{a}}\subset(b_{*},b^{*}),\qquad\textnormal{with}\qquad\lim_{{\mathtt{a}}\rightarrow 0}|\mathcal{C}_{\infty}^{\mathtt{a}}|=b^{*}-b_{*},$$
such that for any $b\in\mathcal{C}_{\infty}^{\mathtt{a}}$, the planar Euler equations \eqref{Euler equations} admit an $\mathtt{m}$-fold time quasi-periodic doubly-connected vortex patch solution of the form
$$\boldsymbol{\omega}(t,\cdot)=\mathbf{1}_{D_t},\qquad D_{t}=\Big\{\ell e^{\ii(\theta-\Omega t)},\quad\theta\in[0,2\pi],\quad \sqrt{b^2+2r_2(t,\theta)}<\ell<\sqrt{1+2r_1(t,\theta)}\Big\},$$
where
\begin{align*}
\begin{pmatrix}
r_1\\
r_2
\end{pmatrix}(t,\theta)&=\sum_{j\in{\mathbb{S}}_1}\tfrac{\mathtt{a}_{j,1}\cos(j\theta-\omega_{j,1}(b,\mathtt{a})t)}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
1 \\
- a_{j}(b)
\end{pmatrix}+\sum_{j\in{\mathbb{S}}_2}\tfrac{\mathtt{a}_{j,2}\cos(j\theta-\omega_{j,2}(b,\mathtt{a})t)}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
-a_{j}(b) \\
1
\end{pmatrix}+\mathtt{p}\big(\omega_{\textnormal{\tiny{pe}}}(b,\mathtt{a})t,\theta\big)
\end{align*}
and $a_{j}(b)$ are given by \eqref{def coef ajb}.
This solution is associated with a non-resonant frequency vector $${\omega}_{\tiny{\textnormal{pe}}}(b,{\mathtt{a}})\triangleq \Big(\big(\omega_{j,1}(b,{\mathtt{a}})\big)_{j\in\mathbb{S}_1},\big(\omega_{j,2}(b,{\mathtt{a}})\big)_{j\in\mathbb{S}_2}\Big)\in\mathbb{R}^{d_1+d_2}$$
satisfying the convergence
$$\omega_{\tiny{\textnormal{pe}}}(b,{\mathtt{a}})\underset{\mathtt{a}\rightarrow 0}{\longrightarrow}-\Big(\big(\Omega_{j,1}(b)\big)_{j\in\mathbb{S}_1},\big(\Omega_{j,2}(b)\big)_{j\in\mathbb{S}_2}\Big),$$
where $\Omega_{j,1}(b)$ and $\Omega_{j,2}(b)$ are the equilibrium frequencies.
The perturbation $\mathtt{p}:\mathbb{T}^{d_1+d_2+1}\to\mathbb{R}^2$ is a function satisfying the symmetry properties
$$\forall(\varphi,\theta)\in\mathbb{T}^{d_1+d_2+1},\quad\mathtt{p}(-\varphi,-\theta)=\mathtt{p}(\varphi,\theta)=\mathtt{p}(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}})$$
and for some large index of Sobolev regularity $s$ it satisfies the estimate
$$\|\mathtt{p}\|_{H^{{s}}(\mathbb{T}^{d_1+d_2+1},\mathbb{R}^2)}\underset{{\mathtt{a}}\rightarrow 0}{=}o(|{\mathtt{a}}|).$$
\end{theo}
\begin{remark}
The lower bound restriction $b_*$ is required because the operators $L_{k,n}$ and the functions $V_{k}$ may become singular when $b=0,$ a situation which corresponds to the simply-connected case.
\end{remark}
\begin{remark}
From this theorem, we obtain global nontrivial solutions in patch form which are confined around the annulus. More studies on vortex confinement can be found in \cite{ILN03,ISG99,M94}.
\end{remark}
We shall now briefly describe the main steps of the proof, whose general strategy is borrowed from the Nash-Moser approach for KAM theory developed by Berti-Bolle \cite{BB15} and slightly modified in \cite[Sec. 6]{HHM21}.
Recall that the Nash-Moser scheme requires inverting the linearized operator in a neighborhood of the equilibrium state, and the inverse operator must satisfy suitable tame estimates in the framework of Sobolev spaces. The first step, which we now describe, is to reformulate the problem in terms of embedded tori. Remark that the Hamiltonian system associated with $K$ is a quasi-linear perturbation of its linearization at the equilibrium state, namely
$$\partial_{t}r=\mathcal{J}\mathbf{L}_0r+X_{P}(r),\qquad\textnormal{with}\qquad X_{P}(r)\triangleq \mathcal{J}\nabla K({r})-\mathcal{J}\mathbf{L}_0r=\mathbf{Q}^{-1}X_{H\geqslant 3}(\mathbf{Q}{r}),$$
where
$$X_{H\geqslant 3}({r})\triangleq \mathcal{J}\big(\nabla H(r)-\mathbf{M}_0r\big).$$
Under the rescaling $r\mapsto\varepsilon r$ and the quasi-periodic framework $\partial_{t}\leftrightarrow\omega\cdot\partial_{\varphi}$, the Hamiltonian system becomes
$$\omega\cdot\partial_\varphi r=\mathcal{J}\mathbf{L}_0r+\varepsilon X_{P_{\varepsilon}}(r),$$
where $X_{P_{\varepsilon}}$ is the rescaled Hamiltonian vector field defined by
$X_{P_{\varepsilon}}(r)\triangleq \varepsilon^{-2}X_{P}(\varepsilon r).$
Notice that the previous equation is generated by
the rescaled Hamiltonian
$$\mathcal{K}_{\varepsilon}(r)\triangleq\varepsilon^{-2}K(\varepsilon r)= K_{\mathbf{L}_0}(r)+\varepsilon P_{\varepsilon}(r),$$
with $K_{\mathbf{L}_0}$ as in \eqref{KL0 intro}, while $\varepsilon P_{\varepsilon}(r)$ collects all the terms of order at least cubic.
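Since $K_{\mathbf{L}_0}$ is quadratic, this splitting is elementary bookkeeping; under the expansion used above one may write
```latex
K_{\mathbf{L}_0}(\varepsilon r)=\varepsilon^{2}K_{\mathbf{L}_0}(r)
\qquad\Longrightarrow\qquad
\mathcal{K}_{\varepsilon}(r)=\varepsilon^{-2}K(\varepsilon r)
=K_{\mathbf{L}_0}(r)+\varepsilon P_{\varepsilon}(r),
\qquad
P_{\varepsilon}(r)\triangleq\varepsilon^{-3}\big(K-K_{\mathbf{L}_0}\big)(\varepsilon r),
```
which stays bounded as $\varepsilon\to0$ because $K-K_{\mathbf{L}_0}$ only contains terms of order at least three in $r$.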
We consider two finite sets $\mathbb{S}_1$, $\mathbb{S}_2$ as in \eqref{tang sets intro} and we denote
$d_1\triangleq|\mathbb{S}_1|,\, d_2\triangleq|\mathbb{S}_2|,\, d\triangleq d_1+d_2,$
$$\overline{\mathbb{S}}_k\triangleq\mathbb{S}_k\cup(-\mathbb{S}_k),\qquad \overline{\mathbb{S}}_{0,k}\triangleq\overline{\mathbb{S}}_k\cup\{0\},$$
and set
$$\mathbb{S}\triangleq \mathbb{S}_{1}\cup\mathbb{S}_{2},\quad
\overline{\mathbb{S}}\triangleq \mathbb{S}\cup(-\mathbb{S}), \quad \overline{\mathbb{S}}_0\triangleq \overline{\mathbb{S}}\cup \{0\}.$$
Next, we decompose the phase space into the following orthogonal sum
\begin{equation}\label{decomp prod l2}
L_{\mathtt{m}}^2(\mathbb{T})\times L_{\mathtt{m}}^2(\mathbb{T})=\mathbf{H}_{\overline{\mathbb{S}}}\overset{\perp}{\oplus}\mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp},\quad L_{\mathtt{m}}^2(\mathbb{T})\triangleq \bigg\{f=\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}f_{j}\mathbf{e}_j\quad\textnormal{s.t.}\quad f_{-j}=\overline{f_j},\quad\sum_{j\in\mathbb{Z}^*_{\mathtt{m}}}|f_j|^2<+\infty\bigg\},
\end{equation}
with
\begin{align*}
\mathbf{H}_{\overline{\mathbb{S}}}&\triangleq \left\{\sum_{j\in\overline{\mathbb{S}}_1}v_{j,1}\begin{pmatrix}
1\\
0
\end{pmatrix}\mathbf{e}_{j}+\sum_{j\in\overline{\mathbb{S}}_2}v_{j,2}\begin{pmatrix}
0\\
1
\end{pmatrix}\mathbf{e}_{j},\quad v_{j,k}\in\mathbb{C},\,\quad\overline{v_{j,k}}=v_{-j,k}\right\},\\
\mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp}&\triangleq \left\{\sum_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1}}z_{j,1}\begin{pmatrix}
1\\
0
\end{pmatrix}\mathbf{e}_{j}+\sum_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2}}z_{j,2}\begin{pmatrix}
0\\
1
\end{pmatrix}\mathbf{e}_{j},\quad z_{j,k}\in\mathbb{C},\,\quad\overline{z_{j,k}}=z_{-j,k}\right\}.
\end{align*}
The sets $\mathbf{H}_{\overline{\mathbb{S}}}$ and $\mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp}$ are called \textit{tangential} and \textit{normal subspaces}, respectively. The associated projections are defined through
\begin{equation}\label{proj-nor1}
\Pi_{\overline{\mathbb{S}}}\triangleq\begin{pmatrix}
\Pi_{1} & 0\\
0 & \Pi_{2}
\end{pmatrix}\qquad\textnormal{and}\qquad\Pi_{\overline{\mathbb{S}}_0}^{\perp}\triangleq\begin{pmatrix}
\Pi_{1}^{\perp} & 0\\
0 & \Pi_{2}^{\perp}
\end{pmatrix}
\end{equation}
namely,
\begin{equation}\label{PI1-PI2}
\forall k\in\{1,2\},\quad\Pi_k \sum_{j\in \mathbb{Z}_{\mathtt{m}}^*} v_{j,k}\mathbf{e}_{j}\triangleq\sum_{j\in \overline{\mathbb{S}}_k} v_{j,k}\mathbf{e}_{j}\qquad\textnormal{and}\qquad\Pi_{k}^{\perp}\triangleq\mathbb{I}_{\mathtt{m}}-\Pi_{k},
\end{equation}
where $\mathbb{I}_{\mathtt{m}}$ is the identity map of $L_{\mathtt{m}}^{2}(\mathbb{T}).$ On the tangential set $\mathbf{H}_{\overline{\mathbb{S}}}$, we introduce the action-angle variables
$$\vartheta\triangleq\big((\vartheta_{j,1})_{j\in{\mathbb{S}}_1},(\vartheta_{j,2})_{j\in{\mathbb{S}}_2}\big),\qquad I\triangleq\big((I_{j,1})_{j\in{\mathbb{S}}_1}, (I_{j,2})_{j\in{\mathbb{S}}_2}\big)$$
as follows: fix any amplitudes $(\mathtt{a}_{j,k})_{j\in \overline{\mathbb{S}}_k}$ such that for any $j\in\overline{\mathbb{S}}_k,\,\mathtt{a}_{-j,k}=\mathtt{a}_{j,k}>0$, and set
$$
\forall k\in\{1,2\}, \quad \forall j\in\overline{\mathbb{S}}_k,\quad v_{j,k}\triangleq \sqrt{(\mathtt{a}_{j,k})^2+|j|I_{j,k}}e^{\ii \vartheta_{j,k}},
$$
supplemented with the symmetry properties
$$\forall k\in\{1,2\},\quad\forall j\in\mathbb{S}_k,\quad I_{-j,k}=I_{j,k}\in\mathbb{R}\qquad\textnormal{and}\qquad\vartheta_{-j,k}=-\vartheta_{j,k}\in\mathbb{T}.$$
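These symmetry properties guarantee the reality condition appearing in the definition of $\mathbf{H}_{\overline{\mathbb{S}}}$; a one-line verification:
```latex
v_{-j,k}=\sqrt{(\mathtt{a}_{-j,k})^{2}+|-j|\,I_{-j,k}}\;e^{\ii\vartheta_{-j,k}}
=\sqrt{(\mathtt{a}_{j,k})^{2}+|j|\,I_{j,k}}\;e^{-\ii\vartheta_{j,k}}
=\overline{v_{j,k}}.
```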
Therefore, we have the following decomposition of $r=(r_1,r_2)$,
\begin{equation}\label{aa-coord-00}
r=\mathbf{A}(\vartheta,I,z)\triangleq v(\vartheta,I)+z,
\end{equation}
where $z\in \mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp}$ and
$$v (\vartheta, I)\triangleq \sum_{j\in\overline{\mathbb{S}}_1} \sqrt{(\mathtt{a}_{j,1})^2+|j|I_{j,1}}e^{\ii \vartheta_{j,1}}\begin{pmatrix}
1\\
0
\end{pmatrix}\mathbf{e}_{j}+ \sum_{j\in\overline{\mathbb{S}}_2}\sqrt{(\mathtt{a}_{j,2})^2+|j|I_{j,2}}e^{\ii \vartheta_{j,2}}
\begin{pmatrix}
0\\
1
\end{pmatrix}\mathbf{e}_{j}\in \mathbf{H}_{\overline{\mathbb{S}}}.$$
The transformation $\mathbf{A}$ is $\mathcal{W}$-symplectic and, in the new variables, the new Hamiltonian $K_{\varepsilon}\triangleq \mathcal{K}_{\varepsilon}\circ\mathbf{A}$ reads
$$K_{\varepsilon} =
-\big({\mathtt J}\,{\omega}_{\textnormal{Eq}}(b)\big)\cdot I + \tfrac12 \big\langle\mathbf{L}_0\, z, z \big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})} + \varepsilon \mathcal{ P}_{\varepsilon},\qquad{\mathtt J}\triangleq \begin{pmatrix}
{\rm I}_{d_1}& 0 \\
0 & -{\rm I}_{d_2} \end{pmatrix}, \qquad \mathcal{ P}_{\varepsilon} \triangleq P_\varepsilon \circ \mathbf{A}.$$
Observe that the Poisson structure is associated with $\mathtt{J}$ and will be needed later during the implementation of the Berti-Bolle approach. The corresponding Hamiltonian vector field is
$$X_{K_{\varepsilon}} \triangleq
\big({\mathtt J}\partial_I K_{\varepsilon} , -{\mathtt J} \partial_\vartheta K_{\varepsilon} , \Pi_{\overline{\mathbb{S}}_0}^\bot
\mathcal{J}\nabla_{z} K_{\varepsilon}\big).$$
Therefore, the problem is reduced to looking for embedded invariant tori
$$i :\mathbb{T}^d \rightarrow
\mathbb{R}^d \times \mathbb{R}^d \times \mathbf{H}^{\perp}_{\overline{\mathbb{S}}_0}
\,, \qquad \varphi \mapsto i(\varphi)\triangleq \big(\vartheta(\varphi), I(\varphi), z(\varphi)\big),$$
solution of the equation
$$\omega\cdot\partial_{\varphi}i(\varphi)=X_{K_{\varepsilon}}\big(i(\varphi)\big).
$$
As observed in \cite{BB15,M67}, it turns out to be convenient along the Nash-Moser scheme to work with a free vector-valued parameter $\alpha\in\R^d$, which provides at the end of the scheme a solution to the original problem when it is fixed to $-\mathtt{J}\omega_{\textnormal{Eq}}(b).$ Therefore, we shall consider the following $\alpha$-dependent family of Hamiltonians
$$K_\varepsilon^\alpha \triangleq\alpha\cdot I+\tfrac12\big\langle\mathbf{L}_0\, z, z\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}+\varepsilon\mathcal{P}_{\varepsilon}$$
and we search for the zeros of the following functional
$$\mathcal{F}(i,\alpha,b,\omega,\varepsilon)\triangleq\omega\cdot\partial_{\varphi}i(\varphi)-X_{K_\varepsilon^\alpha}\big(i(\varphi)\big)=\left(\begin{array}{c}
\omega\cdot\partial_{\varphi}\vartheta(\varphi)-\mathtt{J}\big(\alpha-\varepsilon\partial_{I}\mathcal{P}_{\varepsilon}(i(\varphi))\big)\\
\omega\cdot\partial_{\varphi}I(\varphi)+\varepsilon\mathtt{J}\partial_{\vartheta}\mathcal{P}_{\varepsilon}(i(\varphi))\\
\omega\cdot\partial_{\varphi}z(\varphi)-\mathcal{J}\big[\mathbf{L}_0z(\varphi)+\varepsilon\nabla_{z}\mathcal{P}_{\varepsilon}\big(i(\varphi)\big)\big]
\end{array}\right).$$
At each step of the Nash-Moser scheme, we have to linearize this functional at a small reversible embedded torus $i_0:\varphi\mapsto\big(\vartheta_0(\varphi),I_0(\varphi),z_0(\varphi)\big)$ and $\alpha_0\in\R^d,$ and then we need to construct an approximate right inverse of $d_{i,\alpha}\mathcal{F}(i_0,\alpha_0).$ The core of the Berti-Bolle theory is to conjugate the linearized operator by a linear diffeomorphism $G_0$ of the toroidal phase space $\mathbb{T}^d\times\mathbb{R}^d\times \mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp}$ in order to obtain a triangular system in the action-angle-normal variables up to error terms. Notice that, in a similar way to \cite{HHM21}, we do not use isotropic tori. Therefore, in this framework, inverting the triangular system amounts to inverting the linearized operator in the normal directions, denoted by $\widehat{\mathcal{L}}.$ This is analyzed in Section \ref{reduction} and uses KAM reducibility techniques similar to \cite{BBMH18,BM18,HHM21,HR21}, which we shall now explain while extracting the main new difficulties.
According to Proposition \ref{lemma-normal-s}, we can write
$$\widehat{\mathcal{L}}=\Pi_{\overline{\mathbb{S}}_0}^{\perp}\left(\mathcal{L}-\varepsilon\partial_\theta\mathcal{R}\right)\Pi_{\overline{\mathbb{S}}_0}^{\perp},\qquad \mathcal{L}\triangleq\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m}}+\mathfrak{L}_{\varepsilon r},\qquad\mathbf{I}_{\mathtt{m}}\triangleq \begin{pmatrix}
\mathbb{I}_{\mathtt{m}} &0\\
0& \mathbb{I}_{\mathtt{m}}
\end{pmatrix},\qquad \mathcal{R}\triangleq \begin{pmatrix} \mathcal{T}_{J_{1,1}}({r}) & \mathcal{T}_{J_{1,2}}({r})\\
\mathcal{T}_{J_{2,1}}({r}) & \mathcal{T}_{J_{2,2}}({r})
\end{pmatrix},
$$
where for any $(k,\ell)\in\{1,2\}^2$, $\mathcal{T}_{J_{k,\ell}}$ is an $\mathtt{m}$-fold and reversibility preserving integral operator with smooth kernel $J_{k,\ell}$, and $\mathfrak{L}_{\varepsilon r}$ is the linearized operator, given for a general state $r$ by
$$\begin{aligned}
\mathfrak{L}_{r} =& \begin{pmatrix}
\partial_\theta\big(\mathcal{V}_1(r)\, \cdot\big) +\frac{1}{2}\mathcal{H}+\partial_\theta\mathcal{Q}\ast\cdot&0\\
0& \partial_\theta\big(\mathcal{V}_2(r)\, \cdot\big)-\tfrac{1}{2}\mathcal{H} -\partial_\theta \mathcal{Q}\ast\cdot
\end{pmatrix} +\partial_\theta\begin{pmatrix}
\mathcal{T}_{\mathscr{K}_{1,1}}(r)& \mathcal{T}_{\mathscr{K}_{1,2}}(r)\\
\mathcal{T}_{\mathscr{K}_{2,1}}(r) & \mathcal{T}_{\mathscr{K}_{2,2}}(r)
\end{pmatrix},
\end{aligned}$$
where we denote by $\mathcal{H}$ the $2\pi$-periodic Hilbert transform and $\mathcal{V}_{k}(r)$, $k\in\{1,2\}$, are scalar functions. The convolution operator $\mathcal{Q}\ast\cdot$ has even smooth kernel $\mathcal{Q}$.
For $k,n\in\{1,2\},$ the operator $\mathcal{T}_{\mathscr{K}_{k,n}}({r})$ is an integral operator with smooth, $\mathtt{m}$-fold and reversibility preserving kernel $\mathscr{K}_{k,n}(r),$ see Proposition~\ref{prop:conjP}.
First, following the KAM reducibility scheme in \cite{BM21,FGMP19,HR21}, we can reduce the transport part and the zero order part by conjugating with a quasi-periodic symplectic change of variables of the form
$$\mathscr{B}\triangleq\begin{pmatrix}
\mathscr{B}_1 & 0\\
0 & \mathscr{B}_2
\end{pmatrix},\qquad\forall k\in\{1,2\},\quad\mathscr{B}_k\rho(\mu,\varphi,\theta)=\big(1+\partial_{\theta}\beta_k(\mu,\varphi,\theta)\big)\rho\big(\mu,\varphi,\theta+\beta_k(\mu,\varphi,\theta)\big).$$
More precisely, as stated in Propositions \ref{prop strighten} and \ref{prop RTNL}, we can find two functions $\mathtt{c}_1\triangleq\mathtt{c}_1(b,\omega,i_0)$, $\mathtt{c}_2\triangleq\mathtt{c}_2(b,\omega,i_0)$ and a Cantor set
$$\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_0)\triangleq\bigcap_{k\in\{1,2\}\atop\underset{|l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}\setminus\{(0,0)\}}}\Big\{(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad\big|\omega\cdot l+j\mathtt{c}_{k}(b,\omega,i_0)\big|>\tfrac{4\gamma^{\upsilon}\langle j\rangle}{\langle l\rangle^{\tau_1}}\Big\},$$
where $N_n\triangleq N_0^{(\frac{3}{2})^n}$ with $N_0\gg 1,$ in which the following decomposition holds
$${\mathscr{B}}^{-1}\big(\omega\cdot\partial_{\varphi} {\mathbf{I}}_{\mathtt{m}}+\mathfrak{L}_{\varepsilon r}\big) {\mathscr{B}}=\omega\cdot\partial_{\varphi} \mathbf{I}_{\mathtt{m}}+{\mathscr{D}}+{\mathscr{R}}+{\mathscr{E}}_{n},$$
where
$$\mathscr{D}\triangleq \begin{pmatrix}
\mathtt{c}_1\partial_{\theta}\, +\tfrac12\mathcal{H}+\partial_\theta \mathcal{Q}\ast\cdot& 0\\
0 & \mathtt{c}_2\partial_{\theta}\, -\big(\tfrac12\mathcal{H}+\partial_\theta \mathcal{Q}\ast\cdot)
\end{pmatrix},$$
and $\mathscr{R}\triangleq\mathscr{R}(\varepsilon r)$ is a real, $\mathtt{m}$-fold and reversibility preserving Toeplitz in time matrix integral operator enjoying good smallness properties. The operator $\mathscr{E}_n$ is of order one, but with small coefficients decaying fast in $n.$
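Since the entries of $\mathscr{D}$ do not mix spatial frequencies, $\mathscr{D}$ acts diagonally in the Fourier basis. As an illustration (with the sign convention $\mathcal{H}\,e^{\ii j\theta}=-\ii\,\textnormal{sgn}(j)\,e^{\ii j\theta}$ for the periodic Hilbert transform and the normalized convolution of \eqref{average-notation}, to be adjusted to the conventions used in the paper), the first diagonal entry acts on a mode $\mathbf{e}_{l,j}$ as

```latex
\left(\mathtt{c}_1\partial_{\theta}+\tfrac12\mathcal{H}+\partial_\theta\mathcal{Q}\ast\cdot\right)\mathbf{e}_{l,j}
=\ii\Big(j\,\mathtt{c}_1-\tfrac{1}{2}\,\textnormal{sgn}(j)+j\,\widehat{\mathcal{Q}}(j)\Big)\mathbf{e}_{l,j},
\qquad
\widehat{\mathcal{Q}}(j)\triangleq\int_{\mathbb{T}}\mathcal{Q}(\eta)e^{-\ii j\eta}\,d\eta,
```

so that, after this conjugation, the unperturbed part of the dynamics is a space Fourier multiplier with purely imaginary symbol, consistent with the reversible structure.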
The next step deals with the localization effects on the normal modes. We first introduce the operator
$$\mathscr{B}_{\perp}\triangleq\Pi_{\overline{\mathbb{S}}_0}^{\perp}\mathscr{B}\Pi_{\overline{\mathbb{S}}_0}^{\perp}=\begin{pmatrix}
\Pi_{1}^{\perp}\mathscr{B}_1\Pi_1^{\perp} & 0\\
0 & \Pi_2^{\perp}\mathscr{B}_2\Pi_2^{\perp}
\end{pmatrix}.$$
Then, according to Proposition \ref{prop proj nor dir}, we prove by restricting the parameters to the set $\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_{0})$ that for any $n\in\mathbb{N}^{*}$ we have the identity
$$\mathscr{B}_{\perp}^{-1}\widehat{{\mathcal{L}}}{\mathscr{B}}_{\perp}={\mathscr{L}}_{0}+{\mathscr{E}}_{n}^0,\qquad
{\mathscr{L}}_{0}\triangleq \omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+{\mathscr{D}}_{0}+{\mathscr{R}}_{0},$$
where $\mathbf{I}_{\mathtt{m},\perp}\triangleq \Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathbf{I}_{\mathtt{m}}$
and ${\mathscr{D}}_{0}=\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} {\mathscr{D}}_{0}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}$ is an $\mathtt{m}$-fold preserving and reversible matrix Fourier multiplier operator of the form
$${\mathscr{D}}_{0}\triangleq \begin{pmatrix}
\mathscr{D}_{0,1} & 0\\
0 & \mathscr{D}_{0,2}
\end{pmatrix},$$
with
$$\forall k\in\{1,2\},\quad\mathscr{D}_{0,k}\triangleq \left(\ii\mu_{j,k}^{(0)}\right)_{j\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,k}},\qquad\mu_{j,k}^{(0)}(b,\omega,i_{0})\triangleq \Omega_{j,k}(b)+j\big(\mathtt{c}_k(b,\omega)-\mathtt{v}_k(b)\big)$$
and $\mathscr{R}_0=\Pi_{\overline{\mathbb{S}}_0}^{\perp}\mathscr{R}_0\Pi_{\overline{\mathbb{S}}_0}^{\perp}$ is a small real, $\mathtt{m}$-fold preserving and reversible Toeplitz in time matrix remainder whose entries are integral operators with smooth kernels. The error term $\mathscr{E}_n^0$ plays a role similar to that of $\mathscr{E}_n$ above. The next goal is to implement a KAM reduction of the remainder term $\mathscr{R}_0.$ This is done in a new hybrid operator topology treating the diagonal and anti-diagonal terms differently. Along the scheme, the diagonal part is treated as in the scalar situation through the use of the off-diagonal Toeplitz norm, see for instance \cite[Prop. 6.5]{HR21}, whereas the anti-diagonal part, which is smoothing at any order in the spatial variable, is studied in an isotropic topology. We refer to Section \ref{sec matrix op} for more details on this topological framework, in particular \eqref{hyb nor}.
We point out that, thanks to the nice structure of the 2D-Euler equation, the diagonal and anti-diagonal terms of the remainder term $\mathscr{R}_0$ are smoothing at any order in the spatial variable and therefore both can be studied using the isotropic topology. However, this fact is not true for other transport models \cite{HHM21,HR21} where the remainders on the diagonal are not highly smoothing but of negative order. For this reason, we prefer to work in the most general framework.
Proposition \ref{prop RR} states that we can find an operator $\Phi_{\infty}$ such that, in the following Cantor set gathering both diagonal and anti-diagonal second order Melnikov conditions,
\begin{align*}
&\mathscr{O}_{\infty,n}^{\gamma,\tau_{1},\tau_{2}}(i_{0})\triangleq \mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_{0})\\
&\bigcap_{\underset{\,j, j_{0}\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,k}}{ {k\in\{1,2\}}}}\bigcap_{\underset{|l|\leqslant N_{n}}{l\in\mathbb{Z}^{d}\atop(l,j)\neq(0,j_{0})}}\Big\{(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad\big|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{0})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{0})\big|>\tfrac{2\gamma\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}\Big\}\\
&\bigcap_{\underset{\,j_0\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,2}}{ j\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,1}}}\bigcap_{\underset{\langle l,j,j_0\rangle\leqslant N_{n}}{l\in\mathbb{Z}^{d}}}\Big\{(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad\big|\omega\cdot l+\mu_{j,1}^{(\infty)}(b,\omega,i_{0})-\mu_{j_{0},2}^{(\infty)}(b,\omega,i_{0})\big|>\tfrac{2\gamma}{\langle l,j,j_0\rangle^{\tau_2}}\Big\}
\end{align*}
the following decomposition holds
$$\Phi_{\infty}^{-1}\mathscr{L}_0\Phi_{\infty}=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{\infty}+\mathscr{E}_n^1\triangleq \mathscr{L}_{\infty}+\mathscr{E}_n^1,$$
where $\mathscr{D}_{\infty}=\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathscr{D}_{\infty}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\mathscr{D}_{\infty}(b,\omega,i_{0})$ is a diagonal operator with reversible Fourier multiplier entries, namely
$$\mathscr{D}_{\infty}\triangleq\begin{pmatrix}
\mathscr{D}_{\infty,1} & 0\\
0 & \mathscr{D}_{\infty,2}
\end{pmatrix},$$
with
$$\forall k\in\{1,2\},\quad\mathscr{D}_{\infty,k}\triangleq\left(\ii\mu_{j,k}^{(\infty)}\right)_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}},\qquad\mu_{j,k}^{(\infty)}(b,\omega,i_{0})\triangleq\mu_{j,k}^{(0)}(b,\omega,i_{0})+r_{j,k}^{(\infty)}(b,\omega,i_{0})$$
and
$$\sup_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}}|j|\left\| r_{j,k}^{(\infty)}\right\|^{q,\gamma}\lesssim\varepsilon\gamma^{-1}.$$
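For orientation, $\Phi_{\infty}$ is constructed iteratively as a composition of maps $\textnormal{Id}+\Psi$, where $\Psi$ solves a homological equation eliminating the (frequency-truncated) remainder; schematically, suppressing the component indices, the matrix entries satisfy

```latex
\Big(\omega\cdot\partial_{\varphi}\Psi+\big[\mathscr{D}_{0},\Psi\big]\Big)_{j_{0}}^{j}(l)
=\ii\big(\omega\cdot l+\mu_{j}^{(0)}-\mu_{j_{0}}^{(0)}\big)\,\Psi_{j_{0}}^{j}(l)
=-\big(\mathscr{R}_{0}\big)_{j_{0}}^{j}(l),
```

and the second order Melnikov conditions entering $\mathscr{O}_{\infty,n}^{\gamma,\tau_{1},\tau_{2}}(i_0)$ provide exactly the lower bounds on the divisors $\omega\cdot l+\mu_{j}^{(\infty)}-\mu_{j_{0}}^{(\infty)}$ needed to estimate $\Psi$ with a tame loss of derivatives (the conditions are imposed on the final eigenvalues $\mu^{(\infty)}$ and transferred to each step by a standard perturbative argument).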
Notice that, by the monotonicity of the eigenvalues, the difference $\mu_{j,k}^{(\infty)}-\mu_{j_0,k}^{(\infty)}$ does not vanish for $j\neq j_0$ and grows like $|j-j_0|$. This is no longer true for the mixed difference $\mu_{j,1}^{(\infty)}-\mu_{j_0,2}^{(\infty)}$ (coming from the mutual interactions between the interfaces), due to the different transport speeds, leading to a new small divisor problem. To handle this problem, we adjust the geometry of the Cantor sets $\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_0)$ with an isotropic decay in frequency. This explains in part the introduction of the hybrid topology in \eqref{hyb nor} needed in the remainder reduction. Another key observation is that we have no resonances for the off-diagonal part at $j=j_0$, and consequently the associated homological equations can be solved without any residual diagonal terms. Thus, at the end of the KAM scheme we get a diagonal Fourier multiplier operator $\mathscr{D}_{\infty}.$
Now, the final operator $\mathscr{L}_{\infty}$ can be easily inverted by restricting the parameters to the following first order Melnikov conditions
$$\Lambda_{\infty,n}^{\gamma,\tau_1}(i_0)\triangleq\bigcap_{k\in\{1,2\}\atop\underset{|l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})} }\Big\{(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_0)\right|>\tfrac{\gamma\langle j\rangle}{\langle l\rangle^{\tau_1}}\Big\}.$$
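Concretely, restricted to $\Lambda_{\infty,n}^{\gamma,\tau_1}(i_0)$, the diagonal equation $\mathscr{L}_{\infty}h=g$ is solved mode by mode: writing $g=\sum_{l,j}g_{l,j}\,\mathbf{e}_{l,j}$ in the $k$-th component (a standard computation, sketched here at the level $q=0$ and for the frequencies allowed by the truncation),

```latex
h_{l,j}=\frac{g_{l,j}}{\ii\big(\omega\cdot l+\mu_{j,k}^{(\infty)}\big)},
\qquad
|h_{l,j}|\leqslant\gamma^{-1}\,\frac{\langle l\rangle^{\tau_1}}{\langle j\rangle}\,|g_{l,j}|,
```

so the inverse is tame with a loss of $\tau_1$ derivatives in $\varphi$, namely $\|h\|_{s}\lesssim\gamma^{-1}\|g\|_{s+\tau_1}$; the derivatives in the parameters $(b,\omega)$ produce the usual extra losses encoded in the weighted norms.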
As a consequence, we can construct an approximate right inverse of $\widehat{\mathcal{L}}$ provided that we choose $(b,\omega)$ in the set
$$\mathtt{G}_{n}^{\gamma}(i_0)\triangleq\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_0)\cap\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_0)\cap\Lambda_{\infty,n}^{\gamma,\tau_1}(i_0).$$
Therefore, we can perform in Proposition \ref{Nash-Moser} and Corollary \ref{Corollary NM} a Nash-Moser scheme as in \cite{BM18,HHM21,HR21}, with slight modifications due to our particular Poisson structure and the off-diagonal second order Melnikov conditions. Hence, we can find a non-trivial solution $(b,\omega)\mapsto(i_{\infty}(b,\omega),\alpha_{\infty}(b,\omega))$ to the equation $\mathcal{F}(i,\alpha,b,\omega,\varepsilon)=0$, provided that we restrict the parameters $(b,\omega)$ to a Borel set $\mathtt{G}_{\infty}^{\gamma}$ constructed as the intersection of all the Cantor sets encountered along the various reduction schemes. A solution to the original problem is obtained by constructing a frequency curve $b\mapsto\omega(b,\varepsilon)$ solving the implicit equation
$$\alpha_{\infty}\big(b,\omega(b,\varepsilon)\big)=-\mathtt{J}\omega_{\textnormal{Eq}}(b).$$
In this way, we construct a solution for any value of $b$ in
$$\mathcal{C}_{\infty}^{\varepsilon}\triangleq\Big\{b\in(b_*,b^*)\quad\textnormal{s.t.}\quad\big(b,\omega(b,\varepsilon)\big)\in\mathtt{G}_{\infty}^{\gamma}\Big\}.$$
The last step is to check that this final set is non-empty and massive. Actually, we prove in Proposition \ref{lem-meas-es1} the following measure bound
$$(b^*-b_*)-\varepsilon^{\delta}\leqslant|\mathcal{C}_{\infty}^{\varepsilon}|\leqslant(b^*-b_*)\quad\textnormal{for some }\delta\triangleq\delta(q_0,d,\tau_1,\tau_2)>0.$$
The proof is quite standard and based on the perturbation of R\"ussmann conditions, shown to hold at the equilibrium state. We emphasize that the restriction $\Omega>\Omega_{\mathtt{m}}^*$ is required by Lemma \ref{lemma transversalityE}-(iv) and Lemma \ref{lem Ru-pert}-(iv), and the value of $\Omega_{\mathtt{m}}^*$ given in \eqref{expr Omega star} is not necessarily optimal.\\
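To illustrate the measure argument, the excised resonant sets are controlled by a classical R\"ussmann-type lemma, which we recall in schematic form (constants and exponents are indicative): if $f\in C^{q_0}$ satisfies the transversality condition $\displaystyle\max_{0\leqslant k\leqslant q_0}\big|\partial_{b}^{k}f(b)\big|\geqslant\varrho>0$ for all $b$, then

```latex
\Big|\Big\{b\in(b_*,b^*)\quad\textnormal{s.t.}\quad|f(b)|\leqslant\delta\Big\}\Big|
\leqslant C\big(q_0,\|f\|_{C^{q_0}}\big)\,\Big(\tfrac{\delta}{\varrho}\Big)^{\frac{1}{q_0}}.
```

Applying such a bound to each excluded frequency, with $f$ of the form $b\mapsto\omega(b,\varepsilon)\cdot l+\mu_{j,k}^{(\infty)}$ (and with the differences of eigenvalues), and summing over $(l,j)$ using the polynomial decay of the thresholds in $\langle l\rangle$, yields the stated lower bound on $|\mathcal{C}_{\infty}^{\varepsilon}|$.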
\noindent\textbf{Acknowledgments :} The work of Zineb Hassainia has been supported by Tamkeen under the NYU Abu Dhabi Research Institute grant of the center SITE. The work of Taoufik Hmidi has been supported by Tamkeen under the NYU Abu Dhabi Research Institute grant. The work of Emeric Roulley has been partially supported by PRIN 2020XB3EFL, ``Hamiltonian and Dispersive PDEs''.
\section{Function and operator spaces}\label{sec funct set}
This section is devoted to the presentation of the general topological framework for both the function and operator classes. In addition, we shall fix some basic notations and definitions and give some technical results used in this work.
\paragraph{Notations.}
Along this paper we shall make use of the following set notations.
\begin{enumerate}[label=\textbullet]
\item The sets of numbers that will be frequently used are denoted as follows
$$
\mathbb{N}= \{0,1,2,\ldots\},\quad\mathbb{N}^*= \mathbb{N}\setminus\{0\},\quad\mathbb{Z}=\mathbb{N}\cup(-\mathbb{N}),\qquad\mathbb{Z}^*= \mathbb{Z}\setminus\{0\}, \qquad \mathbb{T}= \mathbb{R}/2\pi\mathbb{Z}.
$$
For any $\mathtt{m}\in\mathbb{N}^*,$ we may denote
$$
\mathbb{Z}_{\mathtt{m}}=\mathtt{m}\mathbb{Z},\qquad\mathbb{N}_{\mathtt{m}}= \mathtt{m}\mathbb{N},\qquad \mathbb{Z}_{\mathtt{m}}^*= \mathtt{m}\mathbb{Z}^*, \qquad \mathbb{N}_{\mathtt{m}}^*= \mathtt{m}\mathbb{N}^*,\qquad \mathbb{T}^{\mathtt{m}}= \underbrace{\mathbb{T}\times\cdots \times\mathbb{T}}_{{\mathtt{m}} \textnormal{ times}},
$$
and for any $m,n\in\mathbb{Z},$ such that $m<n$,
$$
\llbracket m,n\rrbracket\triangleq \{m,m+1,\ldots,n-1,n\}.
$$
\item We fix two real numbers $b_*$ and $b^*$ such that
$$0<b_*<b^*<1.$$
The parameter $b$ lies in the interval $(b_{*},b^{*})$ and represents the radius of the annulus $A_b$ in \eqref{annulus-Ab}, corresponding to the equilibrium state.
\item Consider the following parameters, which will be used to construct the Cantor sets and to quantify the regularity of the perturbations,
\begin{align}
d\in\mathbb{N}^*,&\qquad q\in\mathbb{N}^*,\label{setting q}\\
0<\gamma\leqslant1,&\qquad \tau_{2}>\tau_{1}>d,\label{setting tau1 and tau2}\\
S\geqslant s\geqslant &\,s_{0}>\tfrac{d+1}{2}+q+2.\label{init Sob cond}
\end{align}
\item For any $n\in\mathbb{N}^*$ and any periodic function $\rho:\mathbb{T}^n\to\mathbb{C}$, we denote
\begin{equation}\label{average-notation}
\int_{\mathbb{T}^{n}}\rho(\eta)d\eta\triangleq \frac{1}{(2\pi)^n}\int_{[0,2\pi]^n}\rho(\eta)d\eta.
\end{equation}
\item Let $f:X\to Y$ be a map where $X$ is a set and $Y$ is a vector space. For any $r_1,r_2\in X$, we denote
$$\Delta_{12}f=f(r_1)-f(r_2).$$
\end{enumerate}
\subsection{Function spaces}\label{Functionspsaces}
This section is devoted to some functional tools frequently used along this paper. First, we shall introduce the complex Sobolev space $H^{s}(\mathbb{T}^{d +1},\mathbb{C})$ in the periodic setting, with regularity index $s\in\R.$ It is the set
of all the complex periodic functions $\rho:\mathbb{T}^{d +1}\to \mathbb{C}$ with the Fourier expansion
$$
{\rho=\sum_{(l,j)\in\mathbb{Z}^{d+1 }}\rho_{l,j}\,\mathbf{e}_{l,j}},\qquad \mathbf{e}_{l,j}(\varphi,\theta)\triangleq e^{\ii(l\cdot\varphi+j\theta)},\qquad \rho_{l,j}\triangleq\big\langle \rho,\mathbf{e}_{l,j}\big\rangle_{L^{2}(\mathbb{T}^{d+1})}
$$
equipped with the scalar product
$$
\big\langle\rho,\widetilde{\rho}\big\rangle_{H^{s}}\triangleq \sum_{(l,j)\in\mathbb{Z}^{d+1 }}\langle l,j\rangle^{2s}\rho_{l,j}\overline{\widetilde\rho_{l,j}},\qquad\textnormal{with}\qquad \langle l,j\rangle\triangleq \max(1,|l|,|j|),
$$
where $|\cdot|$ denotes the classical $\ell^{1}$ norm in $\mathbb{R}^{d }$. For $s=0$ this space coincides with the standard $ L^{2}(\mathbb{T}^{d+1},\mathbb{C})$ space equipped with the scalar product
$$
\big\langle\rho_{1},\rho_{2}\big\rangle_{L^{2}(\mathbb{T}^{d+1})}\triangleq\bigintssss_{\mathbb{T}^{d+1}}\rho_{1}(\varphi,\theta)\overline{\rho_{2}(\varphi,\theta)}d\varphi d\theta.
$$
We shall make use of {the product Sobolev space}
\begin{equation}\label{def:sob-product}
\mathbf{H}^{s}_{\mathtt{m}}(\mathbb{T}^{d+1},\mathbb{C})\triangleq H_{\mathtt{m}}^{s}(\mathbb{T}^{d+1},\mathbb{C})\times H_{\mathtt{m}}^{s}(\mathbb{T}^{d+1},\mathbb{C}),
\end{equation}
equipped with the scalar product
$$
\big\langle(\rho_1,\rho_2),(\widetilde{\rho}_1,\widetilde{\rho}_2)\big\rangle_{\mathbf{H}^{s}_{\mathtt{m}}(\mathbb{T}^{d+1},\mathbb{C})}\triangleq \sum_{(l,j)\in\mathbb{Z}^{d+1 }}\langle l,j\rangle^{2s}\rho_{l,j}^1\overline{\widetilde\rho_{l,j}^1}+\sum_{(l,j)\in\mathbb{Z}^{d+1 }}\langle l,j\rangle^{2s}\rho_{l,j}^2\overline{\widetilde\rho_{l,j}^2}.
$$
We also simply denote the real space
$$\mathbf{H}_{\mathtt{m}}^s\triangleq H_{\mathtt{m}}^s\times H_{\mathtt{m}}^{s}.$$
As we shall see later, the main enemy in the construction of quasi-periodic solutions is the resonances, in particular the trivial ones, which can fortunately be removed by imposing more symmetry on the solutions. For this aim, we need to work with the following subspace $H_{\mathtt{m}}^{s}(\mathbb{T}^{d +1},\mathbb{C})$, with $\mathtt{m}\in\N^*,$ whose elements enjoy the $\mathtt{m}$-fold symmetry in the variable $\theta,$ that is
$$H_{\mathtt{m}}^{s}(\mathbb{T}^{d +1},\mathbb{C})\triangleq \Big\{\rho\in H^{s}(\mathbb{T}^{d +1},\mathbb{C})\quad\textnormal{s.t.}\quad\forall (\varphi,\theta)\in\mathbb{T}^{d+1},\quad\rho\left(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}}\right)=\rho(\varphi,\theta)\Big\}.
$$
Notice that the $\m$-fold symmetry is equivalent to saying that
\begin{align*}
\forall l\in\mathbb{Z}^{d },\quad \forall j\in \mathbb{Z}\setminus \Z_{\mathtt{m}},\quad {\big\langle \rho,\mathbf{e}_{l,j}\big\rangle_{L^{2}(\mathbb{T}^{d+1})}=0.}
\end{align*}
The real Sobolev space $H^{s}_{\mathtt{m}}(\mathbb{T}^{d+1},\mathbb{R})$ is simply denoted by $ H^{s}_{\mathtt{m}}$ and we define the subspace $$H_{\m}^\infty\triangleq\bigcap_{s\in\R} H_{\m}^s.$$
For $N\in\mathbb{N}$, we define the cut-off frequency projector $\Pi_{N}$ and its orthogonal complement $\Pi_{N}^\perp$ on $H^{s}(\mathbb{T}^{d +1},\mathbb{C})$ as follows
\begin{equation}\label{def projectors PiN}
\Pi_{N}\rho\triangleq\sum_{\underset{\langle l,j\rangle\leqslant N}{(l,j)\in\Z^{d+1}}}\rho_{l,j}\mathbf{e}_{l,j}\qquad\textnormal{and}\qquad \Pi^{\perp}_{N}\triangleq\textnormal{Id}-\Pi_{N}.
\end{equation}
We shall also make use of the following mixed weighted Sobolev spaces with respect to a given parameter $\gamma\in(0,1)$.
{Let $\mathcal{O}$ be an open bounded set of $\mathbb{R}^{d+1}$ and define the Banach spaces }
\begin{align*}
W^{q,\infty,\gamma}(\mathcal{O},H^{s}_{\mathtt{m}})&\triangleq\Big\lbrace \rho:\mathcal{O}\rightarrow H^{s}_{\mathtt{m}}\quad\textnormal{s.t.}\quad\|\rho\|_{s}^{q,\gamma,\mathtt{m}}\triangleq\sum_{\underset{|\alpha|\leqslant q}{\alpha\in\mathbb{N}^{d+1}}}\gamma^{|\alpha|}\sup_{\mu\in{\mathcal{O}}}\|\partial_{\mu}^{\alpha}\rho(\mu,\cdot)\|_{H^{s-|\alpha|}_{\mathtt{m}}}<\infty\Big\rbrace,\\
W^{q,\infty,\gamma}(\mathcal{O},\mathbb{C})&\triangleq\Big\lbrace\rho:\mathcal{O}\rightarrow\mathbb{C}\quad\textnormal{s.t.}\quad\|\rho\|^{q,\gamma}\triangleq\sum_{\underset{|\alpha|\leqslant q}{\alpha\in\mathbb{N}^{d+1}}}\gamma^{|\alpha|}\sup_{\mu\in{\mathcal{O}}}|\partial_{\mu}^{\alpha}\rho(\mu)|<\infty\Big\rbrace.
\end{align*}
Throughout this paper, we shall implicitly use the notation $\|\rho\|_{s}^{q,\gamma,\mathtt{m}}$ even when the function $\rho$ depends on more variables, as with $(\varphi,\theta)\in\mathbb{T}^{d}\times\mathbb{T}^{d^\prime}\mapsto \rho(\varphi,\theta)$; this situation is frequently encountered when we have to estimate the kernels of some operators, in which case the spatial variable can be doubled.\\
In the next lemma we shall collect some useful classical results related {to various actions over weighted Sobolev spaces. The proofs are standard and can be found for instance in \cite{BFM21,BFM21-1,BM18}.}
\begin{lem}\label{lem funct prop}
Let $q\in\N$, $\m\in\N^*$ and $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond}, then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item Frequency growth/decay of projectors : Let $\rho\in W^{q,\infty,\gamma}(\mathcal{O},H^{s}_{\mathtt{m}}),$ then for all $N\in\mathbb{N}^{*}$ and $t>0$,
$$\|\Pi_{N}\rho\|_{s+t}^{q,\gamma,{\mathtt{m}}}\leqslant N^{t}\|\rho\|_{s}^{q,\gamma,{\mathtt{m}}}\qquad\textnormal{and}\qquad\|\Pi_{N}^{\perp}\rho\|_{s}^{q,\gamma,{\mathtt{m}}}\leqslant N^{-t}\|\rho\|_{s+t}^{q,\gamma,{\mathtt{m}}},
$$
where the cut-off projectors are defined in \eqref{def projectors PiN}.
\item Product law :
Let $\rho_{1},\rho_{2}\in W^{q,\infty,\gamma}(\mathcal{O},H^{s}_{\mathtt{m}}).$ Then $\rho_{1}\rho_{2}\in W^{q,\infty,\gamma}(\mathcal{O},H^{s}_{\mathtt{m}})$ and
$$\| \rho_{1}\rho_{2}\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\| \rho_{1}\|_{s_{0}}^{q,\gamma,\mathtt{m}}\| \rho_{2}\|_{s}^{q,\gamma,\mathtt{m}}+\| \rho_{1}\|_{s}^{q,\gamma,\mathtt{m}}\| \rho_{2}\|_{s_{0}}^{q,\gamma,\mathtt{m}}.$$
\item Composition law $1$ : Let $f\in C^{\infty}(\mathcal{O}\times\mathbb{R},\mathbb{R})$ and $\rho_{1},\rho_{2}\in W^{q,\infty,\gamma}(\mathcal{O},H^{s}_{\mathtt{m}})$ such that $$\| \rho_{1}\|_{s}^{q,\gamma,\mathtt{m}},\|\rho_{2}\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C_{0}$$ for an arbitrary constant $C_{0}>0$ and define the pointwise composition $$\forall (\mu,\varphi,\theta)\in \mathcal{O}\times\mathbb{T}^{d+1},\quad f(\rho)(\mu,\varphi,\theta)\triangleq f\big(\mu,\rho(\mu,\varphi,\theta)\big).$$
Then
$$\| f(\rho_{1})-f(\rho_{2})\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C(s,d,q,f,C_{0})\| \rho_{1}-\rho_{2}\|_{s}^{q,\gamma,\mathtt{m}}.$$
\item Composition law $2$ : Let $f\in C^{\infty}(\mathbb{R},\mathbb{R})$ with bounded derivatives. Let $\rho\in W^{q,\infty,\gamma}(\mathcal{O},\mathbb{C}).$ Then $$\|f(\rho)-f(0)\|^{q,\gamma}\leqslant C(q,d,f)\|\rho\|^{q,\gamma}\left(1+\|\rho\|_{L^{\infty}(\mathcal{O})}^{q-1}\right).$$
\item Interpolation inequality : Let $q<s_{1}\leqslant s_{3}\leqslant s_{2}$ and $\overline{\theta}\in[0,1],$ with $s_{3}=\overline{\theta} s_{1}+(1-\overline{\theta})s_{2}.$\\
If $\rho\in W^{q,\infty,\gamma}(\mathcal{O},H^{s_{2}}_{\mathtt{m}})$, then $\rho\in W^{q,\infty,\gamma}(\mathcal{O},H^{s_{3}}_{\mathtt{m}})$ and
$$\|\rho\|_{s_{3}}^{q,\gamma,\mathtt{m}}\lesssim\left(\|\rho\|_{s_{1}}^{q,\gamma,\mathtt{m}}\right)^{\overline{\theta}}\left(\|\rho\|_{s_{2}}^{q,\gamma,\mathtt{m}}\right)^{1-\overline{\theta}}.$$
\end{enumerate}
\end{lem}
The next result is proved in \cite[Lem. 4.2]{HR21} and will be useful later in the study of some regularity aspects for the linearized operator.
\begin{lem}\label{lem triche}
Let $q\in\N$, $\m\in\N^*$, $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond} and $f\in W^{q,\infty,\gamma}(\mathcal{O},H_{\mathtt{m}}^{s}).$\\
We consider the function $g:\mathcal{O}\times\mathbb{T}_{\varphi}^{d}\times\mathbb{T}_{\theta}\times\mathbb{T}_{\eta}\rightarrow\mathbb{C}$ defined by
$$g(\mu,\varphi,\theta,\eta)=\left\lbrace\begin{array}{ll}
\frac{f(\mu,\varphi,\eta)-f(\mu,\varphi,\theta)}{\sin\left(\tfrac{\eta-\theta}{2}\right)}, & \textnormal{if }\theta\neq \eta,\\
2\partial_{\theta}f(\mu,\varphi,\theta),& \textnormal{if }\theta=\eta.
\end{array}\right.$$
Then
$$\|g\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|f\|_{s+1}^{q,\gamma,\mathtt{m}}.$$
\end{lem}
\subsection{Operators}\label{section-ope}
We intend in this section to explore some algebraic and analytical aspects of a large class of operators that fit our context. Firstly, we shall classify them according to their Toeplitz in time structure, real and $\mathtt{m}$-fold symmetry, etc. Secondly, we shall fix some specific norms, such as the off-diagonal/isotropic decay norms, and analyze some of their properties. This part is crucial later in the reduction of the remainder of the linearized operator. Thirdly, particular attention will be paid to operators with kernels, exploring the link between the different norms and the action of suitable quasi-periodic transformations. The last point concerns a short discussion on matrix operators.
\subsubsection{Symmetry}
Consider a smooth family of bounded linear operators acting on the Sobolev spaces $H^s(\mathbb{T}^{d+1}, \C)$,
$$T: \mu\in \mathcal{O}\mapsto T(\mu)\in\mathcal{L}\big(H^s(\mathbb{T}^{d+1},\mathbb{C})\big).$$
The linear operator $T(\mu)$ can be identified with the infinite dimensional matrix $\left(T_{l_{0},j_{0}}^{l,j}(\mu)\right)_{\underset{j,j_{0}\in\mathbb{Z}}{l,l_{0}\in\mathbb{Z}^{d}}}$ with
$$T(\mu)\mathbf{e}_{l_{0},j_{0}}=\sum_{(l,j)\in\mathbb{Z}^{d}\times\Z}T_{l_{0},j_{0}}^{l,j}(\mu)\mathbf{e}_{l,j},\qquad\textnormal{where}\qquad T_{l_{0},j_{0}}^{l,j}(\mu)\triangleq\big\langle T(\mu)\mathbf{e}_{l_{0},j_{0}},\mathbf{e}_{l,j}\big\rangle_{L^{2}(\mathbb{T}^{d+1},\mathbb{C})}.$$
Along this paper the operators and the test functions may depend on the same parameter $\mu$ and thus
the action of the operator $T(\mu)$ on a scalar function $\rho\in W^{q,\infty,\gamma}\big(\mathcal{O},H^{s}(\mathbb{T}^{d+1},\mathbb{C})\big)$ is by convention
defined through
$$(T\rho)(\mu,\varphi,\theta)\triangleq T(\mu)\rho(\mu,\varphi,\theta).$$
We recall the following definitions of Toeplitz, real, reversible, reversibility preserving and $\mathtt{m}$-fold preserving operators, see for instance \cite[Def. 2.2]{BBM14}.
\begin{defin}\label{Def-Rev}
Let $\rho:\mathbb{T}^{d+1}\to\R$ be a periodic function. Define the involution
$$(\mathscr{I}_{0}\rho)(\varphi,\theta)\triangleq\rho(-\varphi,-\theta)$$
and for a given integer $\mathtt{m}\geqslant1$ consider the transformation
$$(\mathscr{I}_{\mathtt{m}}\rho)(\varphi,\theta)\triangleq\rho\left(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}}\right).$$
We say that an operator $T\in \mathcal{L}\big(L^2(\mathbb{T}^{d+1},\mathbb{C})\big)$ is
\begin{enumerate}[label=\textbullet]
\item Toeplitz in time (actually in the variable $\varphi$) if its Fourier coefficients satisfy,
$$\forall(l,l_{0},j,j_{0})\in(\mathbb{Z}^{d})^{2}\times\mathbb{Z}^{2},\quad T_{l_{0},j_{0}}^{l,j}=T_{j_{0}}^{j}(l-l_{0}),\qquad\textnormal{with}\qquad T_{j_{0}}^{j}(l)\triangleq T_{0,j_{0}}^{l,j},$$
\item real if for all $\rho\in L^{2}(\mathbb{T}^{d+1},\mathbb{R}),$ we have
$T\rho$ is real-valued, or equivalently
$$\forall(l,l_{0},j,j_{0})\in(\mathbb{Z}^{d})^{2}\times\mathbb{Z}^{2},\quad T_{-l_{0},-j_{0}}^{-l,-j}=\overline{T_{l_{0},j_{0}}^{l,j}},$$
\item reversible if
$T\circ\mathscr{I}_{0}=-\mathscr{I}_{0}\circ T,$ or equivalently,
$$\forall(l,l_{0},j,j_{0})\in(\mathbb{Z}^{d})^{2}\times\mathbb{Z}^{2},\quad T_{-l_{0},-j_{0}}^{-l,-j}=-T_{l_{0},j_{0}}^{l,j},$$
\item reversibility preserving if
$T\circ\mathscr{I}_{0}=\mathscr{I}_{0}\circ T,$ or equivalently,
$$\forall(l,l_{0},j,j_{0})\in(\mathbb{Z}^{d})^{2}\times\mathbb{Z}^{2},\quad T_{-l_{0},-j_{0}}^{-l,-j}=T_{l_{0},j_{0}}^{l,j},$$
\item $\mathtt{m}$-fold preserving if
$T\circ\mathscr{I}_{\mathtt{m}}=\mathscr{I}_{\mathtt{m}}\circ T,$ or equivalently,
$$\forall(l,l_{0},j,j_{0})\in(\mathbb{Z}^{d})^{2}\times\mathbb{Z}^{2},\quad T_{l_{0},j_{0}}^{l,j}\neq0\quad\Rightarrow\quad j- j_0\in\mathbb{Z}_{\mathtt{m}}.$$
\end{enumerate}
\end{defin}
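As a quick illustration of these notions (elementary checks, not statements taken from the paper): the derivative $\partial_\theta$ is reversible since, for any smooth $\rho$,

```latex
\big(\partial_{\theta}(\mathscr{I}_{0}\rho)\big)(\varphi,\theta)
=-(\partial_{\theta}\rho)(-\varphi,-\theta)
=-\big(\mathscr{I}_{0}(\partial_{\theta}\rho)\big)(\varphi,\theta),
```

that is, $\partial_\theta\circ\mathscr{I}_0=-\mathscr{I}_0\circ\partial_\theta.$ Similarly, the multiplication operator by a function $a$ with $a(-\varphi,-\theta)=a(\varphi,\theta)$ is reversibility preserving, and it is $\mathtt{m}$-fold preserving precisely when $a\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}}\big)=a(\varphi,\theta)$, in agreement with the Fourier characterizations above.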
\subsubsection{Operator topologies}
We shall restrict ourselves to Toeplitz operators and fix different topologies whose use will be motivated later by different applications. Given $\m\in\N^*$, any $\mathtt{m}$-fold preserving Toeplitz operator $T(\mu)$ acting on $\m$-fold symmetric functions $\rho=\displaystyle\sum_{l_{0}\in\mathbb{Z}^{d}\atop j_{0}\in\Z_{\m}}\rho_{l_{0},j_{0}}\mathbf{e}_{l_{0},j_{0}}$ is described by
$$T(\mu)\rho=\sum_{l,l_{0}\in\mathbb{Z}^{d}\atop j,j_{0}\in\mathbb{Z}_{\mathtt{m}}}T_{j_{0}}^{j}(\mu,l-l_{0})\rho_{l_{0},j_{0}}\mathbf{e}_{l,j}.$$
For $q\in\mathbb{N},$ $\gamma\in(0,1]$ and $s\in\mathbb{R},$ {we equip this set of operators} with the off-diagonal norm given by,
\begin{align}\label{Top-NormX}
\| T\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\triangleq\max_{\underset{|\alpha|\leqslant q}{\alpha\in\mathbb{N}^{d+1}}}\gamma^{|\alpha|}\sup_{\mu \in{\mathcal{O}}}\|\partial_{\mu}^{\alpha}(T)(\mu)\|_{\textnormal{\tiny{O-d}},s-|\alpha|},
\end{align}
with
$$\| T(\mu)\|_{\textnormal{\tiny{O-d}},s}\triangleq\sup_{(l,m)\in\mathbb{Z}^{d+1}}\langle l,m\rangle^{s}\sup_{j_0-j=m}|T_{j_0}^{j}(\mu,l)|.$$
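To fix ideas with a simple example (not a statement from the paper): for the multiplication operator $T\rho\triangleq a\rho$ by a function $a=\sum_{(l,j)\in\mathbb{Z}^{d+1}}a_{l,j}\,\mathbf{e}_{l,j}$, one computes $T_{l_0,j_0}^{l,j}=a_{l-l_0,j-j_0}$, so that

```latex
T_{j_{0}}^{j}(l)=a_{l,\,j-j_{0}}
\qquad\textnormal{and}\qquad
\|T\|_{\textnormal{\tiny{O-d}},s}=\sup_{(l,m)\in\mathbb{Z}^{d+1}}\langle l,m\rangle^{s}\,|a_{l,m}|\leqslant\|a\|_{H^{s}}.
```

Thus the off-diagonal norm only records the decay of the matrix entries in the distance $m=j_0-j$ to the diagonal, uniformly in the position along the diagonal, which is the feature exploited in the reduction of the diagonal part of the remainder.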
We define the cut-off frequency operator
$$\left(P_{N}^1T(\mu)\right)\mathbf{e}_{l_{0},j_{0}}\triangleq\sum_{\underset{\langle l-l_{0},j-j_{0}\rangle\leqslant N}{(l,j)\in\mathbb{Z}^{d+1}}}T_{l_{0},j_{0}}^{l,j}(\mu)\mathbf{e}_{l,j}\qquad\mbox{and}\qquad P_{N}^{1,\perp}T\triangleq T-P_{N}^1T,$$
or equivalently
\begin{equation}\label{definition of projections for operators}
\left(P_{N}^1T(\mu)\right)_{j_0}^j(l)=\begin{cases}
T_{j_{0}}^{j}(\mu,l),& \hbox{if}\quad \langle l,j-j_0\rangle\leqslant N,\\
0,&\hbox{if not.}
\end{cases}
\end{equation}
Another norm, used together with the previous one during the reduction of the remainder of the linearized operator, is given by the isotropic frequency decay
\begin{align}\label{Top-NormX2}
\|T\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\triangleq\sup_{\underset{|\alpha|\leqslant q}{\alpha\in\mathbb{N}^{d+1}}}\gamma^{|\alpha|}\sup_{\mu\in{\mathcal{O}}}\|\partial_{\mu}^{\alpha}(T)(\mu)\|_{\textnormal{\tiny{I-D}},s-|\alpha|},
\end{align}
where
$$\|T(\mu)\|_{\textnormal{\tiny{I-D}},s}\triangleq \sup_{l\in\mathbb{Z}^{d}\atop j,j_0 \in\Z_{\m}}\langle l,j_0,j\rangle^{s}|T_{j_0}^{j}(\mu,l)|.$$
The associated cut-off projectors $(P_N^2)_{N\in\mathbb{N}}$ are defined as follows
\begin{equation}\label{definition of projections for operators2}
\left(P_{N}^2T(\mu)\right)\mathbf{e}_{l_{0},j_{0}}\triangleq\begin{cases}
\displaystyle{\sum_{\underset{\langle l-l_{0},j\rangle\leqslant N}{(l,j)\in\mathbb{Z}^{d}\times\Z_{\mathtt{m}}}}}T_{j_{0}}^{j}(\mu,l-l_0)\mathbf{e}_{l,j},& \hbox{if}\quad |j_0|\leqslant N,\\
0,&\hbox{if}\quad |j_0|>N,
\end{cases}
\end{equation}
or equivalently
\begin{equation}\label{definition of projections for operators3}
\left(P_{N}^2T(\mu)\right)_{j_0}^j(l)=\begin{cases}
T_{j_{0}}^{j}(\mu,l),& \hbox{if}\quad \langle l,j,j_0\rangle\leqslant N,\\
0,&\hbox{if not.}
\end{cases}
\end{equation}
We also define the orthogonal projector $ P_{N}^{2,\perp}T\triangleq T-P_{N}^2T.$ The next lemma lists some elementary results related to the off-diagonal and the isotropic norms.
\begin{lem}\label{properties of Toeplitz in time operators}
Let $q\in\N$, $\m\in\N^*$, $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond}, $T$ and $S$ be Toeplitz in time operators.
\begin{enumerate}[label=(\roman*)]
\item Frequency localization : Let $N\in\mathbb{N}^{*}$ and $\mathtt{t}\in\mathbb{R}_{+}.$ Then
$$\|P_{N}^{1}T\|_{\textnormal{\tiny{O-d}},s+\mathtt{t}}^{q,\gamma,\mathtt{m}}\leqslant N^{\mathtt{t}}\|T\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}},\qquad\|P_{N}^{1,\perp}T\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\leqslant N^{-\mathtt{t}}\|T\|_{\textnormal{\tiny{O-d}},s+\mathtt{t}}^{q,\gamma,\mathtt{m}}$$
and
$$\|P_{N}^{2}T\|_{\textnormal{\tiny{I-D}},s+\mathtt{t}}^{q,\gamma,\mathtt{m}}\leqslant N^{\mathtt{t}}\|T\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}},\qquad\|P_{N}^{2,\perp}T\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\leqslant N^{-\mathtt{t}}\|T\|_{\textnormal{\tiny{I-D}},s+\mathtt{t}}^{q,\gamma,\mathtt{m}}.$$
\item Link with the classical operator norm :
\begin{align*}
\|T\rho\|_{s}^{q,\gamma,\mathtt{m}}&\lesssim\|T\|_{\textnormal{\tiny{O-d}},s_{0}}^{q,\gamma,\mathtt{m}}\|\rho\|_{s}^{q,\gamma,\mathtt{m}}+\|T\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}}^{q,\gamma,\mathtt{m}},\\
\|T\rho\|_{s}^{q,\gamma,\mathtt{m}}&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s_0}^{q,\gamma,\mathtt{m}}\|\rho\|_{s}^{q,\gamma,\mathtt{m}}+\|T\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}}^{q,\gamma,\mathtt{m}}.
\end{align*}
In particular,
\begin{align*}
\|T\rho\|_{s}^{q,\gamma,\mathtt{m}}&\lesssim\|T\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\|\rho\|_{s}^{q,\gamma,\mathtt{m}},\\
\|T\rho\|_{s}^{q,\gamma,\mathtt{m}}&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\|\rho\|_{s}^{q,\gamma,\mathtt{m}}.
\end{align*}
\item We have the embedding : for any $s\geqslant 0$
$$\|T\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}.$$
\item Composition law :
\begin{align*}
\|TS\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}&\lesssim\|T\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\|S\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}+\|T\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}\|S\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}
\end{align*}
and
\begin{align*}
\|TS\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}+\|ST\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\|S\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}+\|T\|_{\textnormal{\tiny{I-D}},s_0}^{q,\gamma,\mathtt{m}}\|S\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}.
\end{align*}
\end{enumerate}
\end{lem}
\begin{proof}
\textbf{(i)} and \textbf{(ii)} can be easily obtained using \eqref{Top-NormX}-\eqref{definition of projections for operators3} in a similar way to \cite{BM18}. \\
\noindent \textbf{(iii)} We shall prove the embedding for $q=0$; the case $q\geqslant 1$ is similar. We write by definition
$$|T_{j_0}^{j}(\mu,l)|\leqslant \langle l,j_0,j\rangle^{-s}\|T(\mu)\|_{\textnormal{\tiny{I-D}},s}.$$
Hence
\begin{align*}
\sup_{j_0-j=m}|T_{j_0}^{j}(\mu,l)|&\leqslant \sup_{j}\langle l,j+m,j\rangle^{-s}\|T(\mu)\|_{\textnormal{\tiny{I-D}},s}.
\end{align*}
By direct computations we infer
$$\inf_{x\in\R}\langle l, x+m,x\rangle=\langle l,\tfrac{m}{2}\rangle.$$
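Indeed, assuming the max-type convention $\langle l,j_{0},j\rangle\triangleq\max\big(1,|l|,|j_{0}|,|j|\big)$ for the weight (as is standard in this framework), one has for all $x\in\R$
$$\max\big(|x+m|,|x|\big)\geqslant\tfrac{|x+m|+|x|}{2}\geqslant\tfrac{|m|}{2},$$
with equality at $x=-\frac{m}{2}$, whence $\inf_{x\in\R}\max\big(1,|l|,|x+m|,|x|\big)=\max\big(1,|l|,\tfrac{|m|}{2}\big)=\big\langle l,\tfrac{m}{2}\big\rangle.$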
Therefore
\begin{align*}
\sup_{j_0-j=m}|T_{j_0}^{j}(\mu,l)|&\lesssim \langle l,m\rangle^{-s}\|T(\mu)\|_{\textnormal{\tiny{I-D}},s}.
\end{align*}
It follows that
$$\|T\|_{\textnormal{\tiny{O-d}},s}\lesssim\|T(\mu)\|_{\textnormal{\tiny{I-D}},s}.$$
\textbf{(iv)} We shall prove these tame estimates for $q=0$. The general case $q\geqslant 1$ can be treated in a similar way using the Leibniz formula. One can check that
$$\left(TS\right)_{j_0}^j(l)=\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}T_{j_0}^{j_1}(l_1)S_{j_1}^j(l-l_1).$$
Hence for $s\geqslant 0$ and using the norm definition and the triangle inequality we infer
\begin{align}\label{ST-1}
\langle l,j_0,j\rangle^{s}\big|(TS)_{j_0}^j(l)\big|&\lesssim\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l_1,j_0,j_1\rangle^{s}|T_{j_0}^{j_1}(l_1)S_{j_1}^j(l-l_1)|\nonumber\\
&\quad+\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l-l_1, j-j_1\rangle^{s}|T_{j_0}^{j_1}(l_1)S_{j_1}^j(l-l_1)|.
\end{align}
By definition we get
\begin{align}\label{Ineq-N1}
\langle l-l_1,j- j_1\rangle^{s}|S_{j_1}^j(l-l_1)|\lesssim\|S\|_{\textnormal{\tiny{O-d}},s}.
\end{align}
Consequently,
\begin{align*}
\langle l,j_0,j\rangle^{s}\big|\left(TS\right)_{j_0}^j(l)\big|&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}|S_{j_1}^j(l-l_1)|+ \| S\|_{\textnormal{\tiny{O-d}},s}\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}|T_{j_0}^{j_1}(l_1)|.
\end{align*}
Using \eqref{Ineq-N1} we deduce for $s_0>\frac{d+1}{2}$ that
\begin{align*}
\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}|S_{j_1}^j(l-l_1)|&\leqslant\|S\|_{\textnormal{\tiny{O-d}},s_0}\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l-l_1,j- j_1\rangle^{-s_0}\\
&\lesssim\|S\|_{\textnormal{\tiny{O-d}},s_0}.
\end{align*}
We also have
\begin{align}
\nonumber\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}|T_{j_0}^{j_1}(l_1)|&\leqslant\|T\|_{\textnormal{\tiny{I-D}},s_0} \sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l_1,j_1\rangle^{-s_0}\\
&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s_0}.\label{T-00}
\end{align}
Therefore, we obtain
\begin{align*}
\langle l,j_0,j\rangle^{s}\big|\left(TS\right)_{j_0}^j(l)\big|&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s} \| S\|_{\textnormal{\tiny{O-d}},s_0}+ \| S\|_{\textnormal{\tiny{O-d}},s}\|T\|_{\textnormal{\tiny{I-D}},s_0},
\end{align*}
leading to
\begin{align*}
\|TS\|_{\textnormal{\tiny{I-D}},s}&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}\|S\|_{\textnormal{\tiny{O-d}},s_0}+\|T\|_{\textnormal{\tiny{I-D}},s_0}\| S\|_{\textnormal{\tiny{O-d}},s}.
\end{align*}
Let us now move to the estimate of $ST$. Proceeding as for \eqref{ST-1} we get
\begin{align*}
\langle l,j_0,j\rangle^{s}\big|(ST)_{j_0}^j(l)\big|&\lesssim \sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l_1,j_0-j_1\rangle^{s}| S_{j_0}^{j_1}(l_1)T_{j_1}^j(l-l_1)|\\
&\quad+\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l-l_1, j_1,j\rangle^{s}|S_{j_0}^{j_1}(l_1)T_{j_1}^j(l-l_1)|.
\end{align*}
Applying \eqref{Ineq-N1} together with \eqref{T-00} yields
\begin{align*}
\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l_1,j_0-j_1\rangle^{s}|S_{j_0}^{j_1}(l_1)T_{j_1}^j(l-l_1)|&\lesssim \|S\|_{\textnormal{\tiny{O-d}},s}\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}|T_{j_1}^j(l-l_1)|\\
&\lesssim\|S\|_{\textnormal{\tiny{O-d}},s}\|T\|_{\textnormal{\tiny{I-D}},s_0}.
\end{align*}
Similarly we get
\begin{align*}
\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l-l_1, j_1,j\rangle^{s}|S_{j_0}^{j_1}(l_1)T_{j_1}^j(l-l_1)|&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}\sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}|S_{j_0}^{j_1}(l_1)|\\
&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}\| S\|_{\textnormal{\tiny{O-d}},s_0} \sum_{l_1\in\Z^d\atop j_1\in\Z_\mathtt{m}}\langle l_1,j_1-j_0\rangle^{-s_0}\\
&\lesssim\|T\|_{\textnormal{\tiny{I-D}},s}\| S\|_{\textnormal{\tiny{O-d}},s_0} .
\end{align*}
Putting together the preceding estimates we get
\begin{align*}
\|ST\|_{\textnormal{\tiny{I-D}},s}&\lesssim \|T\|_{\textnormal{\tiny{I-D}},s} \| S\|_{\textnormal{\tiny{O-d}},s_0}+ \|T\|_{\textnormal{\tiny{I-D}},s_0}\|S\|_{\textnormal{\tiny{O-d}},s}.
\end{align*}
This concludes the proof of the lemma.
\end{proof}
\subsubsection{Integral operators}
The main goal in this part is to analyze Toeplitz integral operators and to connect the different norms introduced before to the regularity of the kernel. Consider a Toeplitz integral operator taking the form
\begin{equation}\label{Top-op1}(\mathcal{T}_K\rho)(\mu,\varphi,\theta)\triangleq\int_{\mathbb{T}}K(\mu,\varphi,\theta,\eta)\rho(\mu,\varphi,\eta)d\eta,
\end{equation}
where the kernel function $K(\mu,\varphi,\theta,\eta)$ may be smooth or singular at the diagonal set $\{\theta=\eta\}$. The kernel is called $\mathtt{m}$-fold preserving if
$$(\mathscr{I}_{\mathtt{m},2}K)(\mu,\varphi,\theta,\eta)\triangleq K\left(\mu,\varphi,\theta+\tfrac{2\pi}{\mathtt{m}},\eta+\tfrac{2\pi}{\mathtt{m}}\right)=K(\mu,\varphi,\theta,\eta).$$
We shall need the following lemma whose proof is a consequence of \cite[Lem. 4.4]{HR21}.
\begin{lem}\label{lem sym--rev}
Let $q\in\N$, $\m\in\N^*$, $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond} and $\mathcal{T}_K$ be an integral operator with a real-valued kernel $K$. Then the following assertions hold true.
\begin{enumerate}[label=\textbullet]
\item If $K$ is even in $(\varphi,\theta,\eta)$, then $\mathcal{T}_K$ is reversibility preserving.
\item If $K$ is odd in $(\varphi,\theta,\eta)$,
then $\mathcal{T}_K$ is reversible.
\item If $K$ is $\mathtt{m}$-fold preserving, then $\mathcal{T}_K$ is $\mathtt{m}$-fold preserving.
\end{enumerate}
In addition,
$$\|\mathcal{T}_K\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\lesssim\|K\|_{s}^{q,\gamma,\mathtt{m}}$$
and
\begin{align*}
\| \mathcal{T}_K\rho\|_{s}^{q,\gamma,\mathtt{m}}&\lesssim \|\rho\|_{s_0}^{q,\gamma,\mathtt{m}}\|K\|_{s}^{q,\gamma,\mathtt{m}} +\|\rho\|_{s}^{q,\gamma,\mathtt{m}} \|K\|_{s_0}^{q,\gamma,\mathtt{m}}.
\end{align*}
\end{lem}
\begin{proof}
The reversibility properties have already been proved in \cite[Lem. 4.4]{HR21}. Now, let us prove the $\mathtt{m}$-fold property. Assume that $K$ is $\mathtt{m}$-fold preserving, then
\begin{align*}
\mathcal{T}_{K}(\mathscr{I}_{\mathtt{m}}\rho)(\varphi,\theta)&=\int_{\mathbb{T}}K(\varphi,\theta,\eta)\rho\big(\varphi,\eta+\tfrac{2\pi}{\mathtt{m}}\big)d\eta\\
&=\int_{\mathbb{T}}K\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}},\eta+\tfrac{2\pi}{\mathtt{m}}\big)\rho\big(\varphi,\eta+\tfrac{2\pi}{\mathtt{m}}\big)d\eta\\
&=\int_{\mathbb{T}}K\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}},\eta\big)\rho\big(\varphi,\eta\big)d\eta\\
&=\mathscr{I}_{\mathtt{m}}(\mathcal{T}_{K}\rho)(\varphi,\theta).
\end{align*}
Hence $\mathcal{T}_{K}$ is $\mathtt{m}$-fold preserving. By duality $H_{\mathtt{m}}^{-s}-H_{\mathtt{m}}^{s}$, we have
\begin{align*}
\left|(\mathcal{T}_{K})_j^{j'}(l)\right|&=\left|\int_{\mathbb{T}^{d+2}}K(\varphi,\theta,\eta)e^{\ii(l\cdot\varphi+j\theta-j'\eta)}d\varphi d\theta d\eta\right|\\
&\lesssim\langle l,j,j'\rangle^{-s}\|K\|_{s}^{q,\gamma,\m},
\end{align*}
proving the first estimate. The second one follows easily from Lemma \ref{properties of Toeplitz in time operators}-(ii)-(iii).
\end{proof}
The next task is to introduce some quasi-periodic symplectic change of variables needed later in the reduction of the transport part of the linearized operator. The following lemma is proved in the scalar case $d^\prime=1$ in \cite[Lem. 2.34]{BM18}. The vectorial case $d^\prime\geqslant 2$ can be obtained in a similar way, up to slight modifications.
\begin{lem}\label{Compos1-lemm}
Let $q\geqslant 0 $, $\m, d,d^\prime\geqslant 1$,
$s\geqslant s_0> \tfrac{d+d^\prime}{2}+q+1$
and $\beta_1,\cdots,\beta_{d^\prime}\in W^{q,\infty,\gamma}\big(\mathcal{O},H^{\infty}_{\m}(\mathbb{T}^{d+1})\big)$ such that
\begin{equation}\label{small beta lem}
\max_{k\in\{1,\cdots,d^\prime\}}\|\beta_k \|_{2s_0}^{q,\gamma,\mathtt{m}}\leqslant \varepsilon_0,
\end{equation}
with $\varepsilon_{0}$ small enough. Then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item The function $\widehat{\beta}$ defined by the inverse diffeomorphism
$$y=x+\beta(\mu,\varphi,x)\qquad\Leftrightarrow\qquad x=y+\widehat\beta(\mu,\varphi,y),$$
where
\begin{align}\label{beta-new}
\beta(\mu,\varphi,x)&\triangleq \big(\beta_1(\mu,\varphi,x_1),\cdots, \beta_{d^\prime}(\mu,\varphi,x_{d^\prime})\big), \quad x=(x_1,\cdots,x_{d^\prime}),\\
\widehat{\beta}(\mu,\varphi,y)&\triangleq \big(\widehat{\beta}_1(\mu,\varphi,y_1),\cdots, \widehat{\beta}_{d^\prime}(\mu,\varphi,y_{d^\prime})\big), \,\quad y=(y_1,\cdots,y_{d^\prime}),\notag
\end{align}
satisfies
\begin{equation}\label{beta-hat and beta norm}
\forall s\geqslant s_0,\quad\|\widehat{\beta}\|_{s}^{q,\gamma,\mathtt{m}}\lesssim \|\beta\|_{s}^{q,\gamma,\mathtt{m}}.
\end{equation}
\item The composition operator $\mathcal{B}:W^{q,\infty,\gamma}\big(\mathcal{O},H^{s}_{\m}(\mathbb{T}^{d+d^\prime})\big)\to W^{q,\infty,\gamma}\big(\mathcal{O},H^{s}_{\m}(\mathbb{T}^{d+d^\prime})\big)$, defined by
\begin{equation}\label{def symplctik CVAR}
\mathcal{B}\rho(\mu,\varphi,x)\triangleq \rho\big(\mu,\varphi,x+\beta(\mu,\varphi,x)\big),
\end{equation}
is continuous and invertible, with inverse
$$
\mathcal{B}^{-1} \rho(\mu,\varphi,y)=\rho\big(\mu,\varphi,y+\widehat{\beta}(\mu,\varphi,y)\big).
$$
Moreover, we have the estimates
\begin{align}
\|\mathcal{B}^{\pm1}\rho\|_{s}^{q,\gamma,\mathtt{m}}&\leqslant \|\rho\|_{s}^{q,\gamma,\mathtt{m}}\left(1+C\|\beta\|_{s_{0}}^{q,\gamma,\mathtt{m}}\right)+C\|\beta\|_{s}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}}^{q,\gamma,\mathtt{m}},\nonumber
\\
\|\mathcal{B}^{\pm1}\rho-\rho\|_{s}^{q,\gamma,\mathtt{m}}&\leqslant C\left(\|\rho\|_{s+1}^{q,\gamma,\mathtt{m}}\|\beta\|_{s_0}^{q,\gamma,\mathtt{m}}+\|\rho\|_{s_0}^{q,\gamma,\mathtt{m}}\|\beta\|_{s}^{q,\gamma,\mathtt{m}}\right).\label{e-spe comp B}
\end{align}
\item Let $\beta^{[1]},\beta^{[2]}\in W^{q,\infty,\gamma}\big(\mathcal{O},H_{\m}^{\infty}(\mathbb{T}^{d+d^\prime})\big)$ as in \eqref{beta-new} and satisfying \eqref{small beta lem}. If we denote
$$\Delta_{12}\beta\triangleq\beta^{[1]}-\beta^{[2]}\qquad\textnormal{and}\qquad\Delta_{12}\widehat{\beta}\triangleq\widehat{\beta}^{[1]}-\widehat{\beta}^{[2]},$$
then we have
\begin{equation}\label{Delta12 bh vs Delta12 b}
\forall s\geqslant s_0,\quad\|\Delta_{12}\widehat{\beta}\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C\left(\|\Delta_{12}\beta\|_{s}^{q,\gamma,\mathtt{m}}+\|\Delta_{12}\beta\|_{s_{0}}^{q,\gamma,\mathtt{m}}\max_{\ell\in\{1,2\}}\|\beta^{[\ell]}\|_{s+1}^{q,\gamma,\mathtt{m}}\right).
\end{equation}
\end{enumerate}
\end{lem}
Next, we gather several results related to the action of the transformation \eqref{def symplctik CVAR} on Toeplitz integral operators.
\begin{lem}\label{lem CVAR kernel}
Let $q\in\N$, $\m\in\N^*$ and $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond}. Consider a smooth $\mathtt{m}$-fold preserving kernel
$$K:(\mu,\varphi,\theta_1,\theta_2)\mapsto K(\mu,\varphi,\theta_1,\theta_2).$$
Let $\beta_k: \mathcal{O}\times \mathbb{T}^{d+1}\to \mathbb{T}$, $k\in\{1,2\}$ be odd $\m$-fold symmetric functions and subject to the smallness condition
\begin{equation}\label{assumption smallness beta lem}
\max_{k\in\{1,2\}}\|\beta_k \|_{2s_0}^{q,\gamma,\mathtt{m}}\leqslant \varepsilon_0.
\end{equation}
Consider the quasi-periodic change of variables
$$\forall k\in\{1,2\},\quad \mathscr{B}_k\triangleq (1+\partial_{\theta}\beta_k)\mathcal{B}_k,\qquad\mathcal{B}_k\rho(\mu,\varphi,\theta)=\rho\big(\mu,\varphi,\theta+\beta_k(\mu,\varphi,\theta)\big).$$
Then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item The operator $\mathscr{B}_1^{-1}\mathcal{T}_K\mathscr{B}_2$ is an $\mathtt{m}$-fold preserving integral operator. Moreover, we have
\begin{equation}\label{e-odsBtB}
\|\mathscr{B}_1^{-1}\mathcal{T}_K\mathscr{B}_2\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\lesssim\|K\|_{s}^{q,\gamma,\mathtt{m}}+\|K\|_{s_0}^{q,\gamma,\mathtt{m}}\,\max_{k\in\{1,2\}}\|\beta_{k}\|_{s+1}^{q,\gamma,\mathtt{m}}
\end{equation}
and
\begin{equation}\label{e-odsBtB-diff}
\|\mathscr{B}_1^{-1}\mathcal{T}_K\mathscr{B}_2-\mathcal{T}_K\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\lesssim\|K\|_{s+1}^{q,\gamma,\mathtt{m}}\,\max_{k\in\{1,2\}}\|\beta_{k}\|_{s_0}^{q,\gamma,\mathtt{m}}+\|K\|_{s_0}^{q,\gamma,\mathtt{m}}\,\max_{k\in\{1,2\}}\|\beta_{k}\|_{s+1}^{q,\gamma,\mathtt{m}}.
\end{equation}
\item If $K$ is even in all the variables $(\varphi,\theta_1,\theta_2)$ (resp. odd), then $\mathscr{B}_1^{-1}\mathcal{T}_K\mathscr{B}_2$ is a reversibility preserving (resp. reversible) integral operator.
\item Assume that the kernel $K$ and the functions $\beta_{k}$, $k\in\{1,2\}$, depend smoothly on a functional argument, that is, $r\in W^{q,\infty,\gamma}(\mathcal{O},H_{\mathtt{m}}^{s}\times H_{\mathtt{m}}^{s})\mapsto K(r),\beta_{k}(r)$ are smooth.\\
Consider $r^{[\ell]}=(r_{1}^{[\ell]},r_2^{[\ell]})\in W^{q,\infty,\gamma}(\mathcal{O},H^s_{\mathtt{m}}\times H_{\mathtt{m}}^{s})$, $\ell\in\{1,2\}$. We denote
$$\forall k\in\{1,2\},\quad f^{[k]}\triangleq f(r^{[k]})\qquad\textnormal{and}\qquad\Delta_{12}f\triangleq f^{[1]}-f^{[2]}.$$
Assume that there exists $\varepsilon_0>0$ small enough such that
\begin{equation}\label{assumption smallness beta K lem}
\max_{(k,\ell)\in\{1,2\}^2}\|\beta_{k}^{[\ell]}\|_{2s_0}^{q,\gamma,{\mathtt{m}}}+\max_{\ell\in\{1,2\}}\|K^{[\ell]}\|_{s_0+1}^{q,\gamma,{\mathtt{m}}}\leqslant\varepsilon_0.
\end{equation}
Then, the following estimate holds,
\begin{align}\label{diff12BoxBtB}
\|\Delta_{12}\mathscr{B}_{1}^{-1}\mathcal{T}_{K}&\mathscr{B}_{2}\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\lesssim\|\Delta_{12}K\|_{s}^{q,\gamma,{\mathtt{m}}}+\|\Delta_{12}K\|_{s_0}^{q,\gamma,{\mathtt{m}}}\max_{(k,\ell)\in\{1,2\}^2}\|\beta_{k}^{[\ell]}\|_{s}^{q,\gamma,\mathtt{m}}\\
&\quad+\Big(\max_{\ell\in\{1,2\}}\|K^{[\ell]}\|_{s+1}^{q,\gamma,\mathtt{m}}+\max_{(k,\ell)\in\{1,2\}^2}\|\beta_{k}^{[\ell]} \|_{s+1}^{q,\gamma,\mathtt{m}}\Big)\max_{k\in\{1,2\}}\|\Delta_{12}\beta_{k}\|_{s_0+1}^{q,\gamma,\mathtt{m}}\nonumber\\
&\quad+ \max_{k\in\{1,2\}}\|\Delta_{12}\beta_{k}\|_{s+1}^{q,\gamma,\mathtt{m}}. \nonumber
\end{align}
\end{enumerate}
\end{lem}
\begin{proof}
\textbf{(i)}
Straightforward computations lead to
\begin{equation}\label{mathscrB1}
\mathscr{B}_1^{-1}\rho(\mu,\varphi,y_1)=\Big(1+\partial_y\widehat{\beta}_1(\mu,\varphi,y_1)\Big) \rho\big(\mu,\varphi,y_1+\widehat{\beta}_1(\mu,\varphi,y_1)\big).
\end{equation}
Thus, the conjugation of the operator $\mathcal{T}_K$ takes the form
\begin{align}\label{comp b-1tb}
\mathscr{B}_1^{-1}\mathcal{T}_{K}\mathscr{B}_2\rho(\mu,\varphi,\theta_1)&=\int_{\mathbb{T}}\rho(\mu,\varphi,{\theta_2})\widehat{K}(\mu,\varphi,\theta_1,{\theta_2})d{\theta_2},
\end{align}
with
$$\widehat{K}(\mu,\varphi,\theta_1,{\theta_2})\triangleq \big(1+\partial_{\theta_1}\widehat{\beta}_1(\mu,\varphi,\theta_1)\big)(\mathcal{B}^{-1} K)(\mu,\varphi,\theta_1,{\theta_2})$$
and
$$
\mathcal{B}^{-1} K(\mu,\varphi,\theta_1,\theta_2)=K\big(\mu,\varphi,\theta_1+\widehat{\beta}_1(\mu,\varphi,\theta_1),{\theta_2}+\widehat{\beta}_2(\mu,\varphi,{\theta_2})\big).
$$
Using the product laws in Lemma \ref{lem funct prop}, Lemma \ref{Compos1-lemm} and \eqref{assumption smallness beta lem}, we get
\begin{equation}\label{e-B1B2rho}
\|\widehat{K}\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|K\|_{s}^{q,\gamma,\mathtt{m}}+\|K\|_{s_0}^{q,\gamma,\mathtt{m}}\max_{k\in\{1,2\}}\|\beta_k\|_{s+1}^{q,\gamma,\mathtt{m}}.
\end{equation}
Consequently, we obtain the estimate \eqref{e-odsBtB} by applying Lemma \ref{lem sym--rev}. As for the difference with the original operator, we can write
$$\big(\mathscr{B}_{1}^{-1}\mathcal{T}_{K}\mathscr{B}_2-\mathcal{T}_{K}\big)\rho(\mu,\varphi,\theta_1)=\int_{\mathbb{T}}\rho(\mu,\varphi,{\theta_2})\widetilde{K}(\mu,\varphi,\theta_1,{\theta_2})d{\theta_2},$$
with
$$\widetilde{K}(\mu,\varphi,\theta_1,{\theta_2})\triangleq \partial_{\theta_1}\widehat{\beta}_1(\mu,\varphi,\theta_1)(\mathcal{B}^{-1}K)(\mu,\varphi,\theta_1,{\theta_2})+\big[\mathcal{B}^{-1}K-K\big](\mu,\varphi,\theta_1,{\theta_2}).$$
Therefore, by the product laws in Lemma \ref{lem funct prop}, together with \eqref{e-spe comp B} and \eqref{assumption smallness beta lem}, we infer
$$\|\widetilde{K}\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|K\|_{s+1}^{q,\gamma,\mathtt{m}}\max_{k\in\{1,2\}}\|\beta_k\|_{s_0}^{q,\gamma,\mathtt{m}}+\|K\|_{s_0}^{q,\gamma,\mathtt{m}}\max_{k\in\{1,2\}}\|\beta_k\|_{s+1}^{q,\gamma,\mathtt{m}}.$$
Then, the estimate \eqref{e-odsBtB-diff} follows by applying Lemma \ref{lem sym--rev}.\\
\noindent \textbf{(ii)} The symmetry properties follow immediately from Lemma \ref{lem sym--rev} and the symmetry assumptions.\\
\noindent \textbf{(iii)} By definition and according to \eqref{comp b-1tb} we have
\begin{align*}
\Delta_{12}(\mathscr{B}_1^{-1}\mathcal{T}_K\mathscr{B}_{2})(\rho)(\mu,\varphi,\theta_1)&=\int_{\mathbb{T}}\rho(\mu,\varphi,{\theta_2})\mathbb{K}(\mu,\varphi,\theta_1,{\theta_2})d{\theta_2},
\end{align*}
with
\begin{align*}
\mathbb{K}(\mu,\varphi,\theta_1,{\theta_2})&\triangleq \big(1+\partial_{\theta_1}\widehat{\beta}_1^{[1]}(\mu,\varphi,\theta_1)\big)\mathcal{B}_{[1]}^{-1}K^{[1]}\big(\mu,\varphi,\theta_1,{\theta_2}\big)\\ &\quad-\big(1+\partial_{\theta_1}\widehat{\beta}_1^{[2]}(\mu,\varphi,\theta_1)\big)\mathcal{B}_{[2]}^{-1}K^{[2]}\big(\mu,\varphi,\theta_1,{\theta_2}\big)
\end{align*}
and
$$
\mathcal{B}^{-1}_{[\ell]} f(\mu,\varphi,\theta_1,\theta_2)=f\big(\mu,\varphi,\theta_1+\widehat{\beta}_1^{[\ell]}(\mu,\varphi,\theta_1),{\theta_2}+\widehat{\beta}_2^{[\ell]}(\mu,\varphi,{\theta_2})\big).
$$
This can also be written as
\begin{align*}
\mathbb{K}(\mu,\varphi,\theta_1,{\theta_2})&=\partial_{\theta_1}\Delta_{12}\widehat{\beta}_1(\mu,\varphi,\theta_1)\mathcal{B}_{[1]}^{-1}K^{[1]}\big(\mu,\varphi,\theta_1,{\theta_2}\big)\\
&\quad+\big(1+\partial_{\theta_1}\widehat{\beta}_1^{[2]}(\mu,\varphi,\theta_1)\big)\mathcal{B}_{[1]}^{-1}(\Delta_{12}K)\big(\mu,\varphi,\theta_1,{\theta_2}\big)\\
&\quad+\big(1+\partial_{\theta_1}\widehat{\beta}_1^{[2]}(\mu,\varphi,\theta_1)\big)\Big[\mathcal{B}_{[1]}^{-1}K^{[2]}\big(\mu,\varphi,\theta_1,{\theta_2}\big)
-\mathcal{B}_{[2]}^{-1}K^{[2]}\big(\mu,\varphi,\theta_1,{\theta_2}\big)\Big].
\end{align*}
Then, Taylor's formula implies
\begin{align*}
\mathbb{K}(\mu,\varphi,\theta_1,{\theta_2})&=\partial_{\theta_1}\Delta_{12}\widehat{\beta}_1(\mu,\varphi,\theta_1)\mathcal{B}_{[1]}^{-1}K^{[1]}\big(\mu,\varphi,\theta_1,{\theta_2}\big)\\
&\quad+\big(1+\partial_{\theta_1}\widehat{\beta}_1^{[2]}(\mu,\varphi,\theta_1)\big)\mathcal{B}_{[1]}^{-1}(\Delta_{12}K)\big(\mu,\varphi,\theta_1,{\theta_2}\big)\\
&\quad+\big(1+\partial_{\theta_1}\widehat{\beta}_1^{[2]}(\mu,\varphi,\theta_1)\big)\Big[\Delta_{12}\widehat{\beta}_1(\mu,\varphi,\theta_1)\int_{0}^1\big(\mathcal{B}_{[1],[2]}^\tau(\partial_{\theta_1}K^{[2]})\big)\Big(\mu,\varphi,\theta_1,{\theta_2}\Big)d\tau\\
&\quad+\Delta_{12}\widehat{\beta}_2(\mu,\varphi,{\theta_2})\int_{0}^1\big(\widetilde{\mathcal{B}}_{[1],[2]}^\tau(\partial_{{\theta_2}}K^{[2]})\big)\Big(\mu,\varphi,\theta_1,{\theta_2}\Big)d\tau\Big],
\end{align*}
where we have used the notations
\begin{align*}
\mathcal{B}_{[1],[2]}^\tau f(\mu,\varphi,\theta_1,{\theta_2})&=f\big(\mu,\varphi,\theta_1+\tau\widehat{\beta}_{1}^{[1]}(\mu,\varphi,\theta_1)+(1-\tau)\widehat{\beta}_{1}^{[2]}(\mu,\varphi,\theta_1),{\theta_2}+\widehat{\beta}_2^{[2]}(\mu,\varphi,{\theta_2})\big),\\
\widetilde{\mathcal{B}}_{[1],[2]}^\tau f(\mu,\varphi,\theta_1,{\theta_2})&=f\big(\mu,\varphi,\theta_1+\widehat{\beta}_1^{[2]}(\mu,\varphi,\theta_1),{\theta_2}+\tau\widehat{\beta}_{2}^{[1]}(\mu,\varphi,{\theta_2})+(1-\tau)\widehat{\beta}_{2}^{[2]}(\mu,\varphi,{\theta_2})\big).
\end{align*}
By product laws, \eqref{Delta12 bh vs Delta12 b}, \eqref{e-B1B2rho}, \eqref{assumption smallness beta K lem}, we obtain
\begin{align*}
\|\mathbb{K}\|_{s}^{q,\gamma,\mathtt{m}}&\lesssim\|\Delta_{12}K\|_{s}^{q,\gamma,\mathtt{m}}+\|\Delta_{12}K\|_{s_0}^{q,\gamma,\mathtt{m}}\max_{(k,\ell)\in\{1,2\}^2}\|\beta_{k}^{[\ell]}\|_{s}^{q,\gamma,\mathtt{m}}+\max_{k\in\{1,2\}}\|\Delta_{12}\beta_{k}\|_{s+1}^{q,\gamma,\mathtt{m}}\\
&\quad+\left(\max_{\ell\in\{1,2\}}\|K^{[\ell]}\|_{s+1}^{q,\gamma,\mathtt{m}}+\max_{(k,\ell)\in\{1,2\}^2}\|\beta_{k}^{[\ell]}\|_{s+1}^{q,\gamma,\mathtt{m}}\right)\max_{k\in\{1,2\}}\|\Delta_{12}\beta_k\|_{s_0+1}^{q,\gamma,\mathtt{m}}.
\end{align*}
Finally, combining this estimate with Lemma \ref{lem sym--rev} we conclude \eqref{diff12BoxBtB}. This completes the proof of Lemma~\ref{lem CVAR kernel}.
\end{proof}
Now, we recall the following result stated in \cite[Lemma 2.36]{BM18}, dealing with the conjugation of the Hilbert transform with the quasi-periodic change of coordinates introduced in \eqref{mathscrB1}. Here, and throughout the paper, $ {\mathcal H}$ denotes the standard Hilbert transform in the periodic setting, acting only in the variable $\theta\in\mathbb{T},$ namely,
\begin{equation}\label{def Hilbert}
\mathcal{H}(1)=0,\qquad\forall j\in\mathbb{Z}^*,\quad\mathcal{H}\mathbf{e}_j=-\ii\, \mathtt{sgn}(j)\mathbf{e}_j,
\end{equation}
where $\mathtt{sgn}$ denotes the usual sign function.
\begin{lem} \label{lemma:conjug-Hilbert}
Let $q\in\N$, $\m\in\N^*$, $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond} and $\beta\in W^{q,\infty,\gamma}(\mathcal{O},H_{\mathtt{m}}^{\infty})$ odd in the variables $(\varphi,\theta)$. There exists $\varepsilon_0> 0$ such that, if $\|\beta\|_{2s_0 }^{q,\gamma,\mathtt{m}} \leqslant \varepsilon_0$, then
$$
( {\mathscr B}^{-1} {\cal H} {\mathscr B} - {\cal H}) \rho (\mu, \varphi, \theta)
= \int_{\mathbb{T}} \,K(\mu, \varphi, \theta,\eta) \rho(\mu, \varphi, \eta)\,d\eta,
$$
defines a reversible and $\mathtt{m}$-fold preserving integral operator with the estimates : for all $s\geqslant s_0,$
$$\|K\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C(s,q)\|\beta\|_{s+2}^{q,\gamma,\mathtt{m}},\qquad\|\mathscr{B}^{-1}\mathcal{H}\mathscr{B}-\mathcal{H}\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\leqslant C(s,q)\|\beta \|_{s+2}^{q,\gamma,\mathtt{m}}$$
and
$$\|\Delta_{12}(\mathscr{B}^{-1}\mathcal{H}\mathscr{B}-\mathcal{H})\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\leqslant C(s,q)\Big[\|\Delta_{12}\beta\|_{s+2}^{q,\gamma,\mathtt{m}}+\|\Delta_{12}\beta\|_{s_0+1}^{q,\gamma,\mathtt{m}}\max_{\ell\in\{1,2\}}\|\beta^{[\ell]}\|_{s+3}^{q,\gamma,\mathtt{m}}\Big].$$
\end{lem}
\begin{proof}
We shall use the following classical integral representation of the Hilbert transform
$$\mathcal{H}(\rho)(\theta)=\int_{\mathbb{T}}\rho(\eta)\cot\left(\frac{\theta-\eta}{2}\right)d\eta,$$
where this integral is understood in the principal value sense. Therefore, we have
$$K(\mu,\varphi,\theta,\eta)=\big(1+\partial_{\theta}\widehat{\beta}(\mu,\varphi,\theta)\big)\cot\left(\frac{\theta-\eta+\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}\right)-\cot\left(\frac{\theta-\eta}{2}\right).$$
One can easily check that
$$K(\mu,\varphi,\theta,\eta)=2\partial_{\theta}\left[\log\left(\frac{\sin\left(\frac{\theta-\eta+\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}\right)}{\sin\left(\frac{\theta-\eta}{2}\right)}\right)\right].$$
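Indeed, since $2\partial_{u}\big[\log\sin\big(\tfrac{u}{2}\big)\big]=\cot\big(\tfrac{u}{2}\big)$, the chain rule yields
$$2\partial_{\theta}\left[\log\sin\left(\tfrac{\theta-\eta+\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}\right)\right]=\big(1+\partial_{\theta}\widehat{\beta}(\mu,\varphi,\theta)\big)\cot\left(\tfrac{\theta-\eta+\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}\right)$$
and
$$2\partial_{\theta}\left[\log\sin\left(\tfrac{\theta-\eta}{2}\right)\right]=\cot\left(\tfrac{\theta-\eta}{2}\right),$$
and the difference of the two identities recovers the expression of $K$ above.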
This can also be written as
$$K(\mu,\varphi,\theta,\eta)=2\partial_{\theta}\Big[\log\big(1+g(\mu,\varphi,\theta,\eta)\big)\Big],$$
where
$$g(\mu,\varphi,\theta,\eta)=\cos\left(\tfrac{\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}\right)-1+\cos\left(\tfrac{\theta-\eta}{2}\right)\frac{\sin\left(\frac{\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}\right)}{\sin\left(\frac{\theta-\eta}{2}\right)}\cdot$$
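Indeed, denoting $\Delta\triangleq\frac{\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}$, the sine addition formula gives
$$\frac{\sin\left(\frac{\theta-\eta}{2}+\Delta\right)}{\sin\left(\frac{\theta-\eta}{2}\right)}=\cos(\Delta)+\cos\left(\tfrac{\theta-\eta}{2}\right)\frac{\sin(\Delta)}{\sin\left(\frac{\theta-\eta}{2}\right)}=1+g(\mu,\varphi,\theta,\eta).$$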
The symmetry assumptions on $\beta$ (and thus $\widehat{\beta}$) imply
$$g\big(\mu,\varphi,\theta+\tfrac{2\pi}{\mathtt{m}},\eta+\tfrac{2\pi}{\mathtt{m}}\big)=g(\mu,\varphi,\theta,\eta)=g(\mu,-\varphi,-\theta,-\eta),$$
that is
$$K\big(\mu,\varphi,\theta+\tfrac{2\pi}{\mathtt{m}},\eta+\tfrac{2\pi}{\mathtt{m}}\big)=K(\mu,\varphi,\theta,\eta)=-K(\mu,-\varphi,-\theta,-\eta).$$
Lemma \ref{lem sym--rev} then implies that $\mathscr{B}^{-1}\mathcal{H}\mathscr{B}-\mathcal{H}$ is a reversible and $\mathtt{m}$-fold preserving integral operator.
Using composition laws in Lemma \ref{lem funct prop}, Lemma \ref{lem triche} and \eqref{beta-hat and beta norm}, we get
$$\|g\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\beta\|_{s+1}^{q,\gamma,\mathtt{m}}.$$
Hence, still by composition laws, we infer
$$\|K\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\beta\|_{s+2}^{q,\gamma,\mathtt{m}}$$
and we conclude by applying Lemma \ref{lem sym--rev}. As for the difference, we have
$$\|\Delta_{12}K\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\Delta_{12}g\|_{s+1}^{q,\gamma,\mathtt{m}}.$$
We set
$$h(\mu,\varphi,\theta,\eta)\triangleq \frac{\widehat{\beta}(\mu,\varphi,\theta)-\widehat{\beta}(\mu,\varphi,\eta)}{2}\cdot$$
Then, using Taylor's formula, we can write
\begin{align*}
\Delta_{12}g(\mu,\varphi,\theta,\eta)=&-\Delta_{12}h(\mu,\varphi,\theta,\eta)\int_{0}^{1}\sin\big(h^{[2]}(\mu,\varphi,\theta,\eta)+t\Delta_{12}h(\mu,\varphi,\theta,\eta)\big)dt\\
&+\cos\left(\tfrac{\theta-\eta}{2}\right)\frac{\Delta_{12}h(\mu,\varphi,\theta,\eta)}{\sin\left(\frac{\theta-\eta}{2}\right)}\int_{0}^{1}\cos\big(h^{[2]}(\mu,\varphi,\theta,\eta)+t\Delta_{12}h(\mu,\varphi,\theta,\eta)\big)dt.
\end{align*}
From the identity $\Delta_{12}h(\mu,\varphi,\theta,\eta)=\Delta_{12}\widehat{\beta}(\mu,\varphi,\theta)-\Delta_{12}\widehat{\beta}(\mu,\varphi,\eta)$ together with the product/composition laws combined with Lemma \ref{lem triche} and \eqref{Delta12 bh vs Delta12 b}, we get
$$\|\Delta_{12}g\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\Delta_{12}\beta\|_{s+1}^{q,\gamma,\mathtt{m}}+\|\Delta_{12}\beta\|_{s_0+1}^{q,\gamma,\mathtt{m}}\max_{\ell\in\{1,2\}}\|\beta^{[\ell]}\|_{s+2}^{q,\gamma,\mathtt{m}}.$$
Again we conclude by invoking Lemma \ref{lem sym--rev}.
\end{proof}
The following lemma deals with the kernel structure of iterative operators that will be useful later.
\begin{lem} \label{iter-kerns}
Let $q\in\N$, $\m\in\N^*$, $n\in \N^*$ and $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond} and consider a family of $\mathtt{m}$-fold preserving kernel operators $(\mathcal{T}_{K_i})_{i=1}^n$ as in \eqref{Top-op1}. Then there exists a kernel $K$ such that
$$
\prod_{i=1}^n\mathcal{T}_{K_i}=\mathcal{T}_{K},\qquad\textnormal{with}\qquad
\|K\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C\sum_{i=1}^n\|K_i\|_{s}^{q,\gamma,\mathtt{m}}\prod_{j\neq i}\|K_{j}\|_{s_0}^{q,\gamma,\mathtt{m}}.
$$
In addition, if for some $i_0$ we have $K_{i_0}(\varphi,\theta,\eta)=f(\varphi,\theta)\delta({\theta-\eta}),$ with $f\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}}\big)=f(\varphi,\theta)$, then
$$
\|K\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C\|f\|_{s}^{q,\gamma,\mathtt{m}}\prod_{i\neq i_0}\|K_{i}\|_{s_0}^{q,\gamma,\mathtt{m}}+C\|f\|_{s_0}^{q,\gamma,\mathtt{m}}\sum_{i=1,\atop i\neq i_0}^n\|K_i\|_{s}^{q,\gamma,\mathtt{m}}\prod_{j\neq i, i_0}\|K_{j}\|_{s_0}^{q,\gamma,\mathtt{m}}.
$$
\end{lem}
\begin{proof}
The kernel $K$ is explicit and takes the form,
$$
K(\varphi,\theta,\eta)=\bigintsss_{\mathbb{T}^{n-1}}\prod_{i=1}^{n}K_i(\varphi,\eta_{i-1},\eta_{i}) \prod_{i=1}^{n-1}d\eta_i,
$$
with the convention $\eta_0=\theta$ and $\eta_{n}=\eta.$ The $\mathtt{m}$-fold preserving property of $K$ is inherited from that of the $K_i$. Thus, to get the first result it suffices to use the product laws in Lemma \ref{lem funct prop}. In the second case, the kernel takes the form
$$
K(\varphi,\theta,\eta)=\bigintsss_{\mathbb{T}^{n-2}}f(\varphi,\eta_{i_0})K_{i_0-1}(\varphi,\eta_{i_0-2},\eta_{i_0}) \prod_{i=1\atop i\neq i_0,i_0-1}^{n}K_i(\varphi,\eta_{i-1},\eta_{i}) \prod_{i=1\atop i\neq i_0-1}^{n-1}d\eta_i
$$
and the desired estimate follows once again from the product laws detailed in Lemma \ref{lem funct prop}.
\end{proof}
\subsubsection{Matrix operators}\label{sec matrix op}
For later purposes, related to the reduction of the remainder in transport linear parts with a vectorial structure, we need to introduce $2\times2$ matrices of scalar operators taking the form
\begin{align}\label{Op-Vec1}
\mathbf{T}=\begin{pmatrix}
T_1 & T_3\\
T_4 & T_2
\end{pmatrix},
\end{align}
acting on the product Hilbert space $\mathbf{H}^{s}_{\mathtt{m}}(\mathbb{T}^{d+1},\mathbb{C})$, defined in \eqref{def:sob-product}.
Notice that we shall restrict our discussion to the case where all the $T_i:\mathcal{O}\rightarrow\mathcal{L}\big(H_{\m}^s(\mathbb{T}^{d+1},\mathbb{C})\big)$ are $\mathtt{m}$-fold preserving Toeplitz kernel operators as in \eqref{Top-op1}. The matrix operator $\mathbf{T}$ is said to be real (resp. $\mathtt{m}$-fold preserving, reversible, reversibility-preserving) if all the entries $T_i$ enjoy this property.
The diagonal part $\lfloor\mathbf{T}\rfloor$ of $\mathbf{T}$ is defined as follows,
\begin{equation}\label{def diag-diag}
\lfloor\mathbf{T}\rfloor\triangleq\begin{pmatrix}
\lfloor T_1\rfloor & 0\\
0 & \lfloor T_2\rfloor
\end{pmatrix},
\end{equation}
where for any scalar operator $T$, the notation $\lfloor T\rfloor$ is its diagonal part defined by
\begin{equation}\label{def diag opi}
\forall(l_{0},j_{0})\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}},\quad\lfloor T\rfloor {\bf e}_{l_0,j_0}\triangleq T_{l_0,j_0}^{l_0,j_0}{\bf e}_{l_0,j_0}=\big\langle T{\bf e}_{l_0,j_0}, {\bf e}_{l_0,j_0}\big\rangle_{L^2(\mathbb{T}^{d+1},\mathbb{C})}\,{\bf e}_{l_0,j_0}.
\end{equation}
The next goal is to equip the class of matrix operators with the following hybrid norm
\begin{equation}\label{hyb nor}
\interleave {\bf T}\interleave_{s}^{q,\gamma,\mathtt{m}}\triangleq \|T_1\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}+\|T_2\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}+\|T_3\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}+\|T_4\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}.
\end{equation}
The choice of this norm will be motivated later by the remainder reduction performed in Section \ref{Reduction-Remaind}. Actually, the off-diagonal norm used to measure the diagonal terms $T_1$ and $T_2$ is compatible with the scalar case treated in the papers \cite{BHM21,HHM21,HR21-1,HR21,R22}. However, the isotropic norm used to measure the anti-diagonal terms $T_3$ and $T_4$ is compatible with the smoothing effects of these operators; it is introduced to remedy a new space resonance phenomenon in the second order Melnikov condition, due to the interaction between the diagonal eigenvalues. The cut-off projectors $({\bf P_N})_{N\in\mathbb{N}}$ are defined as follows
\begin{align}\label{proj mat}
{\bf P_N}\mathbf{T}\triangleq\begin{pmatrix}
P_N^1T_1 & P_N^2T_3\vspace{0.1cm}\\
P_N^2T_4 & P_N^1T_2
\end{pmatrix}\qquad\hbox{and}\qquad{\bf P_N^\perp}\mathbf{T}\triangleq\begin{pmatrix}
P_N^{1,\perp}T_1 & P_N^{2,\perp}T_3\vspace{0.1cm}\\
P_N^{2,\perp}T_4& P_N^{1,\perp}T_2
\end{pmatrix},
\end{align}
where $P_N^1$ is defined in \eqref{definition of projections for operators} and $P_N^2$ in \eqref{definition of projections for operators2}. We shall prove the following result.
\begin{cor}\label{cor-hyb-nor}
Let $q\in\N$, $\m\in\N^*$, let $(\gamma,d,s_{0},s)$ satisfy \eqref{setting tau1 and tau2}-\eqref{init Sob cond} and let ${\bf T}$, ${\bf S}$ be two matrix operators as in \eqref{Op-Vec1}. Then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item Projector property: for any $\mathtt{t}\geqslant 0$
$$\interleave{\bf P_N T }\interleave_{s+\mathtt{t}}^{q,\gamma,\mathtt{m}}\leqslant N^{\mathtt{t}}\interleave{\bf T}\interleave_{s}^{q,\gamma,\mathtt{m}}\qquad\textnormal{and}\qquad\interleave{\bf P_N^\perp T }\interleave_{s}^{q,\gamma,\mathtt{m}}\leqslant N^{-\mathtt{t}}\interleave{\bf T}\interleave_{s+\mathtt{t}}^{q,\gamma,\mathtt{m}}.$$
\item Composition law:
$$\interleave {\bf T S}\interleave_{s}^{q,\gamma,\mathtt{m}}\lesssim \interleave {\bf T}\interleave_{s}^{q,\gamma,\mathtt{m}}\interleave {\bf S}\interleave_{s_0}^{q,\gamma,\mathtt{m}}+\interleave {\bf T}\interleave_{s_0}^{q,\gamma,\mathtt{m}}\interleave {\bf S}\interleave_{s}^{q,\gamma,\mathtt{m}}.$$
\item Link with the operator norm: for $\rho=(\rho_1,\rho_{2})\in\mathbf{H}_{\mathtt{m}}^{s},$
$$\|\mathbf{T}\rho\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\interleave\mathbf{T}\interleave_{s_0}^{q,\gamma,\mathtt{m}}\|\rho\|_{s}^{q,\gamma,\mathtt{m}}+\interleave\mathbf{T}\interleave_{s}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_0}^{q,\gamma,\mathtt{m}}.$$
In particular,
$$\|\mathbf{T}\rho\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\interleave\mathbf{T}\interleave_{s}^{q,\gamma,\mathtt{m}}\|\rho\|_{s}^{q,\gamma,\mathtt{m}}.$$
\end{enumerate}
\end{cor}
\begin{proof}
\textbf{(i)} It follows immediately from \eqref{proj mat}, \eqref{hyb nor} and Lemma \ref{properties of Toeplitz in time operators}-(i).\\
\textbf{(ii)} One has
$$\mathbf{TS}=\begin{pmatrix}
T_1 S_1+T_3S_4& T_1S_3+T_3 S_2\\
T_4S_1+T_2S_4 & T_2S_2+T_4S_3
\end{pmatrix}\triangleq \begin{pmatrix}
R_1 & R_3\\
R_4 & R_2
\end{pmatrix}.
$$
Let us estimate $R_1$. From the product laws detailed in Lemma \ref{properties of Toeplitz in time operators}-$($iv$)$, one has
\begin{align*}
\|R_1\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}&\lesssim\|T_1\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\|S_1\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}+\|T_1\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}\|S_1\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\\
&\quad+\|T_3\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\|S_4\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}+\|T_3\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}\|S_4\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}.
\end{align*}
Then using the embedding estimate in Lemma \ref{properties of Toeplitz in time operators}-(iii) together with \eqref{hyb nor}, we get
\begin{align*}
\|R_1\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}&\lesssim\interleave {\bf T }\interleave_{s}^{q,\gamma,\mathtt{m}}\interleave {\bf S}\interleave_{s_0}^{q,\gamma,\mathtt{m}}+\interleave {\bf T }\interleave_{s_0}^{q,\gamma,\mathtt{m}}\interleave {\bf S}\interleave_{s}^{q,\gamma,\mathtt{m}}.
\end{align*}
Let us now estimate $R_3$. Using Lemma \ref{properties of Toeplitz in time operators}-(iv) and \eqref{hyb nor}, we infer
\begin{align*}
\|R_3\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}&\lesssim\|S_3\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\|T_1\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}+\|S_3\|_{\textnormal{\tiny{I-D}},s_0}^{q,\gamma,\mathtt{m}}\|T_1\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\\
&\quad+\|T_3\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\|S_1\|_{\textnormal{\tiny{O-d}},s_0}^{q,\gamma,\mathtt{m}}+\|T_3\|_{\textnormal{\tiny{I-D}},s_0}^{q,\gamma,\mathtt{m}}\|S_1\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\\
&\lesssim\interleave {\bf T }\interleave_{s}^{q,\gamma,\mathtt{m}}\interleave {\bf S}\interleave_{s_0}^{q,\gamma,\mathtt{m}}+\interleave {\bf T }\interleave_{s_0}^{q,\gamma,\mathtt{m}}\interleave {\bf S}\interleave_{s}^{q,\gamma,\mathtt{m}}.
\end{align*}
The terms $R_2$ and $R_4$ can be treated in a similar way.\\
\textbf{(iii)} This point is a direct consequence of \eqref{hyb nor} and Lemma \ref{properties of Toeplitz in time operators}-(ii).
\end{proof}
\section{Hamiltonian reformulation}
Here, we describe the contour dynamics by using a polar parametrization of the two interfaces of the patch near the annulus. We end up with a system of coupled nonlinear and nonlocal transport equations satisfied by the radial deformations, which can be recast as a Hamiltonian system. This structure is crucial to establish quasi-periodic solutions near the stationary annulus patch.
\subsection{Transport system for radial deformations}
Let $D_0=D_1\backslash\overline{D_2}$ be a doubly-connected domain, where $D_1$ and $D_2$ are two simply-connected domains with $\overline{D_{2}}$ strictly embedded in $ D_{1}.$ Consider the initial datum $\boldsymbol{\omega}_0={\bf{1}}_{D_0},$ then the Yudovich solution takes for any $t\geqslant 0$ the form $\boldsymbol{\omega}(t)={\bf{1}}_{D_t},$ with $D_t=D_{t,1}\backslash \overline{D_{t,2}}$ a doubly-connected domain. In addition, $D_{t,1}$ and $D_{t,2}$ are two simply-connected domains with $\overline{D_{t,2}}$ strictly embedded in $D_{t,1}$. For fixed $b\in(0,1),$ if we start with a domain $D_0$ close to the annulus $A_b$ defined in \eqref{annulus-Ab}, then on a short time interval $[0,T]$ the domain $D_t$ remains localized around the same annulus. Therefore, we may use on this time interval the following symplectic polar parametrization of the boundary.
For $k\in\{1,2\},$
\begin{equation}\label{def zk}
z_{k}(t):\begin{array}[t]{rcl}
\mathbb{T} & \mapsto & \partial D_{t,k}\\
\theta & \mapsto & e^{-\ii\Omega t}w_k (t,\theta),
\end{array}\quad \textnormal{where}\qquad\begin{array}{l}
w_k (t,\theta)\triangleq \left(b_k^{2}+2r_k(t,\theta)\right)^{\frac{1}{2}}e^{\ii\theta},\\
b_1\triangleq1, \;\;
b_2\triangleq b.
\end{array}
\end{equation}
Similarly to \cite{BHM21, HHM21,HR21}, the angular velocity $\Omega>0$ is introduced for technical reasons: it is devised to circumvent the trivial resonances associated with the eigenmode $n=1$ and, in the current configuration, to remedy a more delicate phenomenon related to the analytic accumulation of a sequence of eigenvalues in the vectorial case, see Lemma \ref{lem lin op 2 DCE}.
The radial deformations $r_1$ and $r_2$ are assumed to be small, namely
$$|r_1(t,\theta)|+|r_2(t,\theta)|\ll1.$$
In the sequel, for more convenience, we denote
\begin{equation}\label{def Rk}
\forall k\in\{1,2\},\quad R_k(t,\theta)\triangleq \left(b_k^{2}+2r_k(t,\theta)\right)^{\frac{1}{2}}.
\end{equation}
Recall that in this particular case the stream function defined through \eqref{def streamL1} takes the form
\begin{equation}\label{def stream}
\psi(t,z)=\frac{1}{2\pi}\int_{D_{t,1}}\log(|z-\xi|)dA(\xi)-\frac{1}{2\pi}\int_{D_{t,2}}\log(|z-\xi|)dA(\xi).
\end{equation}
The vortex patch equation \eqref{CDE} provides a system of coupled transport-type PDEs satisfied by $r_1$ and $r_2$. This is described in the following lemma.
\begin{lem}\label{lem eq EDC r}
For a short time $T>0,$ the radial deformations $r_1$ and $r_2$ defined through \eqref{def zk} satisfy the following nonlinear coupled system: for all $k\in \{1,2\}$, $(t,\theta)\in[0,T]\times\mathbb{T}$,
\begin{align}
\partial_{t}r_k(t,\theta)+\Omega \partial_\theta r_k (t,\theta)&=-\partial_{\theta}\Big[\psi\big(t,z_k(t,\theta)\big)\Big]\label{Edc eq rkPsi}\\
&=(-1)^{k+1} F_{k,k}[r](t,\theta)+(-1)^{k}F_{k,3-k}[r](t,\theta), \label{Edc eq}
\end{align}
where $r=(r_1,r_2)$ and, for all $k,n\in\{1,2\}$,
\begin{align}\label{Fkn Edc}
F_{k,n}[r](t,\theta)&\triangleq \int_{\mathbb{T}}\log\big(A_{k,n}(t,\theta,\eta)\big)\partial_{\theta\eta}^2\Big(R_k(t,\theta)R_n(t,\eta)\sin(\eta-\theta)\Big)d\eta,\\
A_{k,n}(t,\theta,\eta)&\triangleq \big|R_k(t,\theta)e^{\ii\theta}-R_n(t,\eta)e^{\ii\eta}\big|,\label{Akn}
\end{align}
where we have used the notation \eqref{average-notation}.
\end{lem}
\begin{proof}
For $k\in\{1,2\},$ we denote by $\mathbf{n}_k(t,\cdot)$ an inward normal vector to the boundary $\partial D_{t,k}$ of the patch. According to \cite[p. 174]{HMV13}, the vortex patch equation writes
$$\forall k\in\{1,2\},\quad\partial_{t}z_k(t,\theta)\cdot \mathbf{n}_k\big(t,z_k(t,\theta)\big)=\partial_{\theta}\Big[\psi\big(t,z_k(t,\theta)\big)\Big].$$
Identifying $\mathbb{C}$ with $\mathbb{R}^{2}$
and making the choice $\mathbf{n}_k\big(t,z_k(t,\theta)\big)=\ii\partial_{\theta}z_k(t,\theta)$ we get from \eqref{def zk}
\begin{align*}
\partial_{t}z_k(t,\theta)\cdot \mathbf{n}_k\big(t,z_k(t,\theta)\big)&=\mbox{Im}\left(\partial_{t}z_k(t,\theta)\overline{\partial_{\theta}z_k(t,\theta)}\right)\\ &=-\partial_{t}r_k(t,\theta)-\Omega \partial_\theta r_k (t,\theta).
\end{align*}
Combining the last two identities we obtain \eqref{Edc eq rkPsi}. Next, we intend to use Stokes' theorem in order to transform the integral \eqref{def stream} into an integration on the boundary. This theorem can be recast in the complex form,
\begin{equation}\label{stokes}
2\ii\int_{D}\partial_{\overline{\xi}}f(\xi,\overline{\xi})dA(\xi)=\int_{\partial D}f(\xi,\overline{\xi})d\xi,
\end{equation}
where $f:\overline{D}\to\C$ is a function of class $C^1$, $D$ is a simply-connected bounded domain and $\partial D$ is the boundary of $D$. To make the argument rigorous, we shall mollify the logarithmic kernel by setting, for $\epsilon>0$,
$$f_{\epsilon}(\xi,\overline{\xi})\triangleq (\overline{\xi}-\overline{z})\Big[\log\big(|z-\xi|^2+\epsilon\big)-1\Big].$$
Then, we have
$$\partial_{\overline{\xi}}f_{\epsilon}(\xi,\overline{\xi})=\log\big(|z-\xi|^2+\epsilon\big)-\frac{\epsilon}{|z-\xi|^2+\epsilon}\cdot$$
Applying \eqref{stokes} yields
$$
2\ii\int_{D_{t,k}}\log\big(|z-\xi|^2+\epsilon\big)dA(\xi)-2\ii\int_{D_{t,k}}\frac{\epsilon}{|z-\xi|^2+\epsilon}dA(\xi)=\int_{\partial D_{t,k}}f_\epsilon(\xi,\overline{\xi})d\xi
$$
and taking the limit $\epsilon\to0$ together with \eqref{def stream} allows us to get
$$\psi(t,z)=\frac{1}{8\ii\pi}\int_{\partial D_{t,1}}(\overline{\xi}-\overline{z})\Big[\log\left(|\xi-z|^{2}\right)-1\Big]d\xi-\frac{1}{8\ii\pi}\int_{\partial D_{t,2}}(\overline{\xi}-\overline{z})\Big[\log\left(|\xi-z|^{2}\right)-1\Big]d\xi.$$
Parametrizing the boundaries with \eqref{def zk} and using the notation \eqref{average-notation} we infer \begin{equation}\label{def Psii}
\begin{aligned}
\psi(t,z)&=\frac{1}{4\ii}\int_{\mathbb{T}}(\overline{z}_1(t,\eta)-\overline{z})\Big[\log\big(|z_1(t,\eta)-z|^2\big)-1\Big]\partial_{\eta}z_1(t,\eta)d\eta\\
&\quad-\frac{1}{4\ii}\int_{\mathbb{T}}(\overline{z}_2(t,\eta)-\overline{z})\Big[\log\big(|z_2(t,\eta)-z|^2\big)-1\Big]\partial_{\eta}z_2(t,\eta)d\eta.
\end{aligned}
\end{equation}
As a consequence, we get by differentiating inside the integral
$$\partial_{\overline{z}}\psi(t,z)=-\frac{1}{4\ii}\int_{\mathbb{T}}\log\big(|z_1(t,\eta)-z|^2\big)\partial_{\eta}z_1(t,\eta)d\eta+\frac{1}{4\ii}\int_{\mathbb{T}}\log\big(|z_2(t,\eta)-z|^2\big)\partial_{\eta}z_2(t,\eta)d\eta.$$
Therefore we find through elementary computations
\begin{align}\label{partial-theta-psi}
\nonumber\partial_{\theta}[\psi(t,z_k(t,\theta))]&= 2\textnormal{Re}\Big((\partial_{\overline{z}}\psi)(t,z_k(t,\theta))\partial_{\theta}\overline{z}_k(t,\theta)\Big)
\\ &=
\nonumber -\int_{\mathbb{T}}\log\Big(|z_k(t,\theta)-z_1(t,\eta)|\Big)\partial_{\theta\eta}^{2}\textnormal{Im}\big(z_1(t,\eta)\overline{z}_k(t,\theta)\big)d\eta
\\ &\quad+
\int_{\mathbb{T}}\log\Big(|z_k(t,\theta)-z_2(t,\eta)|\Big)\partial_{\theta\eta}^{2}\textnormal{Im}\big(z_2(t,\eta)\overline{z}_k(t,\theta)\big)d\eta.
\end{align}
Using \eqref{def zk} we obtain
\begin{align*}
\forall\,k,n\in\{1,2\},\quad\textnormal{Im}\big(z_n(t,\eta)\overline{z}_k(t,\theta)\big)&=R_n(t,\eta)R_k(t,\theta)\sin(\eta-\theta),\\
|z_k(t,\theta)-z_n(t,\eta)| &=\big|R_k(t,\theta)e^{\ii\theta}-R_n(t,\eta)e^{\ii\eta}\big|.
\end{align*}
By combining the last two identities with \eqref{Edc eq rkPsi}-\eqref{partial-theta-psi} we conclude the proof of Lemma \ref{lem eq EDC r}.
\end{proof}
\subsection{Hamiltonian structure}
The main purpose is to explore the Hamiltonian structure behind the equations described in \mbox{Lemma \ref{lem eq EDC r}.} First, the kinetic energy associated with the vortex patch $\boldsymbol{\omega}(t,\cdot)={\bf{1}}_{D_t}=\mathbf{1}_{D_{t,1}\backslash \overline{D_{t,2}}}$ is given by
\begin{align}\label{def kinetic energy DCE}
E(r)&\triangleq\frac{1}{2\pi}\int_{D_{t}}\mathbf{\psi}(t,z)dA(z)\nonumber\\
&=\frac{1}{2\pi}\int_{D_{t,1}}\mathbf{\psi}(t,z)dA(z)-\frac{1}{2\pi}\int_{D_{t,2}}\mathbf{\psi}(t,z)dA(z)
\end{align}
and its angular impulse is defined by
\begin{align}\label{def angular impulse DCE}
J(r)&\triangleq\frac{1}{2\pi}\int_{D_{t}}|z|^{2}dA(z)\nonumber\\
&=\frac{1}{2\pi}\int_{D_{t,1}}|z|^{2}dA(z)-\frac{1}{2\pi}\int_{D_{t,2}}|z|^{2}dA(z),
\end{align}
where the stream function $\psi$ is defined in \eqref{def stream}. The main result of this section reads as follows.
\begin{prop}\label{prop HAM eq Edc}
The system \eqref{Edc eq} is Hamiltonian and takes the form
\begin{equation}\label{Hamilt form DCE}
\partial_{t}r=\mathcal{J}\nabla H(r)
\end{equation}
where $r\triangleq(r_{1},r_{2})$,
\begin{equation}\label{def calJ}
\mathcal{J}\triangleq \begin{pmatrix}
\partial_{\theta} & 0\\
0 & -\partial_{\theta}
\end{pmatrix}
\end{equation}
and $\nabla$ is the $L^{2}(\mathbb{T})\times L^{2}(\mathbb{T})$-gradient, and the Hamiltonian $H$ is defined by
\begin{equation}\label{def H}
H(r)\triangleq -\tfrac{1}{2}\big(E(r)+\Omega J(r)\big),
\end{equation}
where $E$ and $J$ are defined in \eqref{def kinetic energy DCE} and \eqref{def angular impulse DCE}.
\end{prop}
\begin{proof}
We shall first compute the $L^{2}(\mathbb{T})\times L^{2}(\mathbb{T})$ gradient of the angular impulse $J$. To this end, we need to write its expression in terms of $r$. Using \eqref{stokes} combined with \eqref{def angular impulse DCE} and \eqref{def zk} yields
\begin{align*}
J(r)&=\frac{1}{8\pi \ii}\int_{\partial D_{t,1}}|z|^{2}\overline{z}dz-\frac{1}{8\pi \ii}\int_{\partial D_{t,2}}|z|^{2}\overline{z}dz\\
&=\frac{1}{4}\int_{\mathbb{T}}\big(1+2r_{1}(t,\theta)\big)^{2}d\theta-\frac{1}{4}\int_{\mathbb{T}}\left(b^{2}+2r_{2}(t,\theta)\right)^{2}d\theta.
\end{align*}
Differentiating in $r=(r_1,r_2)$ one gets for $\rho=(\rho_{1},\rho_{2})\in L^{2}(\mathbb{T})\times L^{2}(\mathbb{T})$,
$$\big\langle\nabla J(r),\rho\big\rangle_{L^{2}(\mathbb{T})\times L^{2}(\mathbb{T})}=\int_{\mathbb{T}}\big(1+2r_{1}(t,\theta)\big)\rho_{1}(\theta)d\theta-\int_{\mathbb{T}}\big(b^{2}+2r_{2}(t,\theta)\big)\rho_{2}(\theta)d\theta.$$
This implies that
\begin{equation}\label{link J and r DCE}
\nabla J(r)=\begin{pmatrix}
1+2r_{1}\\
-b^{2}-2r_{2}
\end{pmatrix}\qquad{\rm and}\qquad \tfrac{1}{2}\Omega\,\mathcal{J}\nabla J(r)=\Omega\,\partial_{\theta}r.
\end{equation}
The next task is to compute the $L^{2}(\mathbb{T})\times L^{2}(\mathbb{T})$ gradient of the kinetic energy $E$ defined in \eqref{def kinetic energy DCE}. Combining \eqref{def stream} with \eqref{def zk} and replacing $\xi$ by $e^{-\ii t\Omega}\xi$ we find
$${\psi}\big(t,e^{-\ii t \Omega} z\big)= \frac{1}{2\pi}\int_{\widetilde D_{t,1}}\log(|z-\xi|)dA(\xi)-\frac{1}{2\pi}\int_{\widetilde D_{t,2}}\log(|z-\xi|)dA(\xi),$$
where $ \widetilde D_{t,k}$ are the domains with boundaries parametrized by
$$w_{k}:\begin{array}[t]{rcl}
\mathbb{T} & \mapsto & \partial \widetilde D_{t,k}\\
\theta & \mapsto & R_k(t,\theta)e^{\ii\theta},\quad R_k(t,\theta)=\left(b_k^{2}+2r_k(t,\theta)\right)^{\frac{1}{2}}.
\end{array}$$
Using a polar change of coordinates, we get after straightforward computations,
\begin{equation}\label{psipol}
\psi\big(t,e^{-\ii \Omega t} z\big)=\int_{\mathbb{T}}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(z,\ell_2e^{\ii\eta}\big)\ell_2d\ell_2d\eta,\qquad G(z,\xi)\triangleq \log(|z-\xi|).
\end{equation}
Coming back to \eqref{def kinetic energy DCE} and using once again a polar change of coordinates gives
$$E(r)=\int_{\mathbb{T}}\int_{\mathbb{T}}\int_{R_2(t,\theta)}^{R_1(t,\theta)}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(\ell_1e^{\ii\theta},\ell_2e^{\ii\eta}\big)\ell_1\ell_2d\ell_1d\ell_2d\theta d\eta.$$
Therefore, the G\^ateaux derivative of $E$ in a given direction $\rho=(\rho_1,\rho_2)$ takes the form
\begin{align*}
\frac{d E(r)\rho}{dr}&=\int_{\mathbb{T}}\int_{\mathbb{T}}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(R_1(t,\theta)e^{\ii\theta},\ell_2e^{\ii\eta}\big)\rho_1(\theta)\ell_2d\ell_2d\theta d\eta\\
&\quad+\int_{\mathbb{T}}\int_{\mathbb{T}}\int_{R_2(t,\theta)}^{R_1(t,\theta)}G\big(\ell_1e^{\ii\theta},R_1(t,\eta)e^{\ii\eta}\big)\rho_1(\eta)\ell_1d\ell_1d\theta d\eta\\
&\quad-\int_{\mathbb{T}}\int_{\mathbb{T}}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(R_2(t,\theta)e^{\ii\theta},\ell_2e^{\ii\eta}\big)\rho_2(\theta)\ell_2d\ell_2d\theta d\eta\\
&\quad-\int_{\mathbb{T}}\int_{\mathbb{T}}\int_{R_2(t,\theta)}^{R_1(t,\theta)}G\big(\ell_1e^{\ii\theta},R_2(t,\eta)e^{\ii\eta}\big)\rho_2(\eta)\ell_1d\ell_1d\theta d\eta.
\end{align*}
Since $G(z,\xi)$ is symmetric in $(z,\xi)$, we obtain, by exchanging $\theta\leftrightarrow \eta$ if necessary,
\begin{align*}
\frac{d E(r)\rho}{dr}&=2\int_{\mathbb{T}}\int_{\mathbb{T}}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(R_1(t,\theta)e^{\ii\theta},\ell_2e^{\ii\eta}\big)\rho_1(\theta)\ell_2d\ell_2d\theta d\eta\\
&\quad-2\int_{\mathbb{T}}\int_{\mathbb{T}}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(R_2(t,\theta)e^{\ii\theta},\ell_2e^{\ii\eta}\big)\rho_2(\theta)\ell_2d\ell_2d\theta d\eta.
\end{align*}
It follows from \eqref{psipol} and \eqref{def zk} that
\begin{equation}\label{link E and r}
\nabla E(r)=\begin{pmatrix}
\displaystyle 2\int_{\mathbb{T}}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(w_1(t,\theta),\ell_2e^{\ii\eta}\big)\ell_2d\ell_2d\eta\vspace{0.1cm}\\
\displaystyle -2\int_{\mathbb{T}}\int_{R_2(t,\eta)}^{R_1(t,\eta)}G\big(w_2(t,\theta),\ell_2e^{\ii\eta}\big)\ell_2d\ell_2d\eta
\end{pmatrix}=\begin{pmatrix}
2\psi\big(t,z_1(t,\theta)\big)\\
-2\psi\big(t,z_2(t,\theta)\big)
\end{pmatrix}.
\end{equation}
Finally, \eqref{link E and r}, \eqref{link J and r DCE} and \eqref{Edc eq rkPsi} give the desired result. This completes the proof of Proposition \ref{prop HAM eq Edc}.
\end{proof}
\subsection{Symplectic structure and invariance}
In this section, we intend to discuss the symplectic structure behind the Hamiltonian formulation already seen in Proposition \ref{prop HAM eq Edc}. We shall also discuss some symmetry properties such as the reversibility and the $\mathtt{m}$-fold persistence.\\
$\blacktriangleright${ \it{Symplectic structure.}} We shall present the symplectic structure associated with the Hamiltonian equation \eqref{Hamilt form DCE}. To do so, we need to fix the phase space. Before that, we record the following fact, which can be derived from \eqref{Hamilt form DCE},
$$\frac{d}{dt}\int_{\mathbb{T}}r(t,\theta)d\theta=0.$$
This means that the area enclosed by the boundaries is conserved in time.
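For the curve $\theta\mapsto R_k(t,\theta)e^{\ii\theta}$ with $R_k=(b_k^2+2r_k)^{1/2}$, the enclosed area equals $\frac{1}{2}\int_0^{2\pi}R_k^2(t,\theta)\,d\theta=\pi b_k^2+\int_0^{2\pi}r_k(t,\theta)\,d\theta$, so a mean-zero radial deformation leaves the area unchanged. A minimal numerical sketch of this identity (the radius $b=0.6$ and the deformation $r(\theta)=\varepsilon\cos(2\theta)$ are illustrative choices, not taken from the text):

```python
import math

def enclosed_area(b, r, n=4096):
    # Area enclosed by theta -> R(theta) e^{i theta} with
    # R(theta) = sqrt(b^2 + 2 r(theta)):
    #   A = (1/2) int_0^{2pi} R(theta)^2 dtheta
    #     = pi b^2 + int_0^{2pi} r(theta) dtheta.
    # Trapezoidal rule on the periodic integrand.
    s = sum(b * b + 2.0 * r(2.0 * math.pi * k / n) for k in range(n))
    return 0.5 * s * (2.0 * math.pi / n)

eps = 0.05
r = lambda t: eps * math.cos(2.0 * t)   # mean-zero deformation
print(abs(enclosed_area(0.6, r) - math.pi * 0.36) < 1e-9)   # -> True
```

The area is independent of $\varepsilon$, consistent with the conservation of $\int_{\mathbb{T}}r(t,\theta)d\theta$ above.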
Therefore, we shall work with the zero space average phase space $L_{*}^2(\mathbb{T})\times L_{*}^2(\mathbb{T})$, where
$$
L_{*}^{2}(\mathbb{T})\triangleq \bigg\{f=\sum_{j\in\mathbb{Z}^*}f_{j}\mathbf{e}_j\quad\textnormal{s.t.}\quad f_{-j}=\overline{f_j},\quad\sum_{j\in\mathbb{Z}^*}|f_j|^2<+\infty\bigg\},\qquad\; \mathbf{e}_{j}(\theta)\triangleq e^{\ii j\theta}.$$
The equation \eqref{Hamilt form DCE} induces on the phase space $L_{*}^2(\mathbb{T})\times L_{*}^2(\mathbb{T})$ a symplectic structure given by the symplectic $2$-form
\begin{align*}
\mathcal{W}(r,h)&\triangleq\big\langle \mathcal{J}^{-1}r,h\big\rangle_{L^{2}(\mathbb{T})\times L^{2}(\mathbb{T})}\\
&=\int_{\mathbb{T}}\partial_{\theta}^{-1}r_1(\theta)h_1(\theta)d\theta-\int_{\mathbb{T}}\partial_{\theta}^{-1}r_2(\theta)h_2(\theta)d\theta,
\end{align*}
where
$$
\partial_{\theta}^{-1}f\triangleq\sum_{j\in\mathbb{Z}^{*}}\tfrac{f_{j}}{\ii j}\mathbf{e}_{j} \quad\quad\textnormal{for }\quad \quad f=\sum_{j\in\mathbb{Z}^{*}}f_{j}\mathbf{e}_{j}.$$
The corresponding Hamiltonian vector field is $X_{H}(r)\triangleq\mathcal{J}\nabla H(r)$ (where $\nabla$ is the $L^{2}\times L^2$-gradient). It is defined as the symplectic gradient of the Hamiltonian $H$ with respect to the symplectic $2$-form $\mathcal{W}$, namely
$$dH(r)[\cdot]=\mathcal{W}(X_{H}(r),\cdot).$$
Decomposing into Fourier series
$$r=(r_1,r_2),\qquad\forall k\in\{1,2\},\quad r_k=\sum_{j\in\mathbb{Z}^{*}}r_{j,k}\mathbf{e}_{j}\qquad\textnormal{with}\qquad r_{-j,k}=\overline{r_{j,k}},$$
then the symplectic form $\mathcal{W}$ becomes
\begin{equation}\label{Symp-F}
\mathcal{W}(r,h)=\sum_{j\in\mathbb{Z}^{*}}\frac{1}{\ii j}\Big[r_{j,1}h_{-j,1}-r_{j,2}h_{-j,2}\Big], \end{equation}
or equivalently,
\begin{align}\label{sympl ref}
\nonumber \mathcal{W}&=\tfrac{1}{2}\sum_{j\in\mathbb{Z}^{*}}\frac{1}{\ii j}\Big[dr_{j,1}\wedge dr_{-j,1}-dr_{j,2}\wedge dr_{-j,2}\Big]\\
&=\sum_{j\in\mathbb{N}^{*}}\frac{1}{\ii j}\Big[dr_{j,1}\wedge dr_{-j,1}-dr_{j,2}\wedge dr_{-j,2}\Big].
\end{align}
\begin{defin} {\bf (Symplectic)}\label{def:sympl}
A linear transformation $\Phi$ of the phase space $L_*^2(\mathbb{T})\times L_*^2(\mathbb{T})$ is symplectic, if $\Phi$
preserves the symplectic $2$-form $\mathcal{W}$, i.e.
$${\mathcal W}(\Phi u,\Phi v)={\mathcal W}(u,v),$$
or equivalently
$$\Phi^\top\circ\mathcal{J}^{-1}\circ\Phi=\mathcal{J}^{-1}.$$
\end{defin}
This allows us to establish the following result, which is useful later and whose proof is straightforward.
\begin{lem}\label{lem: charac symp}
Let $\Phi$ be a matrix space-Fourier multiplier with the form
$$\Phi\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
\triangleq \sum_{j\in \mathbb{Z}^*}\, \Phi_j \begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix} \mathbf{e}_j\, , \qquad
\Phi_j\in M_{2}(\mathbb{R}),$$
and consider the symplectic $2$-form $\mathcal{W}$ defined in \eqref{Symp-F}. Then $\Phi$ is symplectic if and only if
$$\forall j\in \mathbb{Z}^*,\quad \Phi_j^\top \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix} \Phi_j=\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.$$
\end{lem}
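In the setting of Lemma \ref{lem: charac symp}, each block $\Phi_j$ must preserve the indefinite form $\operatorname{diag}(1,-1)$, exactly as a hyperbolic rotation does. A quick numerical illustration of this criterion (the matrices below are illustrative examples, not ones appearing in the text):

```python
import math

def preserves_indefinite_form(P, tol=1e-12):
    # Check P^T diag(1,-1) P == diag(1,-1) entrywise for a
    # real 2x2 matrix P = [[a, b], [c, d]], i.e.
    #   a^2 - c^2 = 1,   a b - c d = 0,   b^2 - d^2 = -1.
    (a, b), (c, d) = P
    return (abs(a * a - c * c - 1.0) < tol and
            abs(a * b - c * d) < tol and
            abs(b * b - d * d + 1.0) < tol)

t = 0.7   # hyperbolic rotation: cosh^2 t - sinh^2 t = 1
H = [[math.cosh(t), math.sinh(t)],
     [math.sinh(t), math.cosh(t)]]
print(preserves_indefinite_form(H))             # -> True
print(preserves_indefinite_form([[2.0, 0.0],
                                 [0.0, 0.5]]))  # -> False
```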
\noindent $\blacktriangleright$ {\it{Reversibility.}} We shall analyze the reversibility property of the equation \eqref{Hamilt form DCE}, which is crucial to reduce the phase space by symmetry and remove most of the trivial resonances.
We consider the involution $\mathscr{S}$ defined on the phase space $L_{*}^{2}(\mathbb{T})\times L_{*}^2(\mathbb{T})$ by
\begin{equation}\label{defin inv scr S}
(\mathscr{S}r)(\theta)\triangleq r(-\theta),
\end{equation}
which satisfies
\begin{equation}\label{prop inv scr S}
\mathscr{S}^{2}=\textnormal{Id}\qquad \mbox{and}\qquad \mathcal{J}\circ\mathscr{S}=-\mathscr{S}\circ\mathcal{J}.
\end{equation}
Using the change of variables $\eta\mapsto-\eta$ and parity arguments, one gets from \eqref{Fkn Edc}
$$\forall\,k,n\in\{1,2\},\quad F_{k,n}\circ\mathscr{S}=-\mathscr{S}\circ F_{k,n}.$$
Then we conclude by Lemma \ref{lem eq EDC r}, \eqref{Hamilt form DCE} and \eqref{prop inv scr S} that
the Hamiltonian vector field $X_H$ satisfies
$$X_H\circ\mathscr{S}=-\mathscr{S}\circ X_H.$$
Therefore, we will focus on quasi-periodic solutions to \eqref{Hamilt form DCE} satisfying the reversibility condition
\begin{equation}\label{reversibility condition r}
r(-t,-\theta)=r(t,\theta).
\end{equation}
\noindent $\blacktriangleright$ {\it The {$\mathtt{m}$}-fold symmetry.} Let $\mathtt{m}\geqslant1$ be an integer and consider the transformation $\mathscr{T}_{\mathtt{m}}$ on the phase space $L_{*}^2(\mathbb{T})\times L_{*}^2(\mathbb{T})$ defined by
\begin{equation}\label{def scr Tm}
(\mathscr{T}_{\mathtt{m}}r)(\theta)\triangleq r\left(\theta+\tfrac{2\pi}{\mathtt{m}}\right).
\end{equation}
Then it is an immediate fact that
$$\mathscr{T}_{\mathtt{m}}^{\mathtt{m}}=\textnormal{Id}\qquad\textnormal{and}\qquad\mathcal{J}\circ\mathscr{T}_{\mathtt{m}}=\mathscr{T}_{\mathtt{m}}\circ\mathcal{J}.$$
Using the change of variables $\eta\mapsto\eta+\tfrac{2\pi}{\mathtt{m}}$ we easily obtain from \eqref{Fkn Edc}
$$\forall\, k,n\in\{1,2\},\quad F_{k,n}\circ\mathscr{T}_{\mathtt{m}}=\mathscr{T}_{\mathtt{m}}\circ F_{k,n}.$$
Therefore,
$$X_{H}\circ\mathscr{T}_{\mathtt{m}}=\mathscr{T}_{\mathtt{m}}\circ X_H.$$
Thus, the solutions that we shall be interested in satisfy the $\mathtt{m}$-fold symmetry
\begin{equation}\label{m-fold symmetry r}
r\left(t,\theta+\tfrac{2\pi}{\mathtt{m}}\right)=r(t,\theta).
\end{equation}
Consequently, we shall work in the closed subspace $L_{\mathtt{m}}^2(\mathbb{T})\times L_{\mathtt{m}}^2(\mathbb{T})$ defined by
\begin{equation}\label{def L2m}
L_{\mathtt{m}}^2(\mathbb{T})\triangleq \bigg\{f=\sum_{j\in\mathbb{Z}^*}f_{j}\mathbf{e}_j\in L_{*}^2(\mathbb{T})\quad\textnormal{s.t.}\quad f_{j}\neq 0\,\Rightarrow\,j\in\mathbb{Z}_{\mathtt{m}}\bigg\}.
\end{equation}
\section{Linearization and symplectic transformation}
In this section we shall compute the linear Hamiltonian equation obtained through the linearization of the equation \eqref{Hamilt form DCE} at any state close to the equilibrium solution $r=0.$ It turns out that at the equilibrium state we find a matrix Fourier multiplier that can be diagonalized in a suitable basis using a linear symplectic change of coordinates; for more details we refer to Lemma \ref{lem: properties P}. However, this procedure requires working with higher $\mathtt{m}$-fold symmetries to avoid the double eigenvalue corresponding to the mode $j=2$ as well as potential hyperbolic directions.
\subsection{Linearized operator}
The main purpose is to explore the structure of the linearized operator which takes the form of a transport system with variable coefficients and subject to compact perturbations.
\begin{lem}\label{lem lin op 1 DCE}
The linearized equation of \eqref{Hamilt form DCE} at a small state $r$ is given by the linear Hamiltonian equation,
\begin{equation}\label{defLr}
\partial_{t}\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
=\mathcal{J} \mathbf{M}_r\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix},\qquad \mathbf{M}_r\triangleq \begin{pmatrix}
-V_{1}(r)-L_{1,1}({r}) & L_{1,2}(r)\\
L_{2,1}(r) & V_{2}(r)-{L}_{2,2}(r)
\end{pmatrix},
\end{equation}
where $V_{k}(r)$ are scalar functions and ${L}_{k,n}({r})$ are nonlocal operators defined by
\begin{align}
V_{k}(r)(t,\theta)&\triangleq \Omega+(-1)^k\big[V_{k,k}(r)(t,\theta)-V_{k,3-k}(r)(t,\theta)\big], \label{def Vpm}\\
V_{k,n}(r)(t,\theta)&\triangleq \int_{\mathbb{T}}\log\big(A_{k,n}({r})(t,\theta,\eta)\big)\partial_{\eta}\Big(\tfrac{R_{n}(t,\eta)}{R_{k}(t,\theta)}\sin(\eta-\theta)\Big)d\eta,\label{def Vkn}\\
{L}_{k,n}(r)\rho(t,\theta)&\triangleq \int_{\mathbb{T}}\rho(t,\eta)\log\big(A_{k,n}({r})(t,\theta,\eta)\big)d\eta\label{def mathbfLkn}
\end{align}
and $A_{k,n}({r})$ and $R_{k}$ are respectively defined by \eqref{Akn} and \eqref{def Rk}. Moreover, if $r$ satisfies \eqref{reversibility condition r} and \eqref{m-fold symmetry r} with $\m\geqslant1$, then the operator $\mathbf{M}_r$ is $\mathtt{m}$-fold reversibility preserving.
\end{lem}
\begin{proof} Throughout the proof, we shall alleviate the notation by removing the time dependence, keeping $r$ only when it is relevant. In view of \eqref{Edc eq rkPsi}, it suffices to linearize the term involving the stream function. All the computations are done at a formal level, but they can be rigorously justified in a classical way in the functional setting introduced in Section \ref{sec funct set}. According to \eqref{def Psii} we can write
\begin{align}
\psi\big(z_k(\theta)\big)&=(-1)^{k+1}\Big[\widetilde{\psi}\big(r_k,z_k(\theta)\big)-\widetilde{\psi}\big(r_{3-k},z_k(\theta)\big)\Big],\label{Psikn recall0}\\
\widetilde{\psi}(r_n,z)&\triangleq \tfrac{1}{4\ii}\int_{\mathbb{T}}(\overline{z}_n(\eta)-\overline{z})\Big[\log\left(|z-z_n(\eta)|^{2}\right)-1\Big]\partial_{\eta}z_n(\eta) d\eta.\label{Psikn recall}
\end{align}
Applying the chain rule yields
\begin{equation}\label{dr psi k-k}
\begin{aligned}
d_{r_k}\Big(\psi\big(z_k(\theta)\big)\Big)[\rho_k](\theta)&=(-1)^{k+1}\Big[d_{r_k}\widetilde{\psi}\big(r_k,z_k(\theta)\big)[\rho_k](\theta)\\ &\quad +2\textnormal{Re}\Big((\partial_{\overline{z}}\widetilde{\psi})\big(r_k,z_k(\theta)\big)d_{r_k}\overline{z}_k(\theta)[\rho_k](\theta)\Big)\\
&\quad -2\textnormal{Re}\Big((\partial_{\overline{z}}\widetilde{\psi})\big(r_{3-k},z_k(\theta)\big)d_{r_k}\overline{z}_k(\theta)[\rho_k](\theta)\Big)\Big],\\
d_{r_{3-k}}\Big(\psi\big(z_k(\theta)\big)\Big)[\rho_{3-k}](\theta)&=(-1)^{k}d_{r_{3-k}}\widetilde{\psi}\big(r_{3-k},z_{k}(\theta)\big)[\rho_{3-k}](\theta).
\end{aligned}
\end{equation}
From \eqref{psipol}, we have the following expression of $\widetilde{\psi}$,
$$\widetilde{\psi}\big(r_k,e^{-\ii\Omega t}z\big)=\int_{\mathbb{T}}\int_0^{R_k(\eta)}\log\big(|z-\ell_2e^{\ii\eta}|\big)\ell_2d\ell_2d\eta.$$
Therefore, differentiating with respect to $r_k$ in the direction $\rho_k$, we obtain
$$d_{r_k}\widetilde{\psi}\big(r_k,e^{-\ii\Omega t}z\big)[\rho_k](\theta)=\int_{\mathbb{T}}\rho_k(\eta)\log\big(|z-R_k(\eta)e^{\ii\eta}|\big)d\eta.$$
It follows that, for any $k,n\in\{1,2\}$, we have by virtue of \eqref{def mathbfLkn}
\begin{align}\label{lkn part}
\nonumber d_{r_k}\widetilde{\psi}\big(r_k,z_{n}(\theta)\big)[\rho_k](\theta)&=\int_{\mathbb{T}}\rho_k(\eta)\log\big(|R_n(\theta)e^{\ii\theta}-R_k(\eta)e^{\ii\eta}|\big)d\eta\\
&={L}_{k,n}(r)\rho_k(\theta).
\end{align}
On the other hand, differentiating \eqref{def zk} leads to
$$d_{r_k}\overline{z}_k(\theta)[\rho_k](\theta)=\tfrac{\rho_k(\theta)}{R_k(\theta)}e^{-\ii(\theta-\Omega t)}.$$
In addition, by virtue of \eqref{Psikn recall}, we have
$$\partial_{\overline{z}}\widetilde{\psi}(r_n,z)=-\tfrac{1}{4\ii}\int_{\mathbb{T}}\log\big(|z-z_n(\eta)|^2\big)\partial_{\eta}z_n(\eta)d\eta.$$
Combining the last two identities we infer, for $k,n\in\{1,2\}$,
\begin{align*}
2\textnormal{Re}\Big((\partial_{\overline{z}}\widetilde{\psi})\big(r_n,z_k(\theta)\big)d_{r_k}\overline{z}_k(\theta)[\rho_k](\theta)\Big)&=-\tfrac{\rho_k(\theta)}{R_k(\theta)}\int_{\mathbb{T}}\log\big(|z_k(\theta)-z_n(\eta)|\big)\partial_{\eta}\textnormal{Im}\Big(z_n(\eta)e^{-\ii(\theta-\Omega t)}\Big)d\eta.
\end{align*}
From \eqref{def zk} we find the identity
$$\textnormal{Im}\Big(z_n(\eta)e^{-\ii(\theta-\Omega t)}\Big)=R_n(\eta)\sin(\eta-\theta).$$
Then, by \eqref{def Vkn} we conclude that
\begin{align}\label{Vkn part}
2\textnormal{Re}\Big((\partial_{\overline{z}}\widetilde{\psi})\big(r_n,z_k(\theta)\big)d_{r_k}\overline{z}_k(\theta)[\rho_k](\theta)\Big)&=-
V_{k,n}(r)(\theta)\rho_k(\theta).
\end{align}
Putting together \eqref{Psikn recall0}, \eqref{dr psi k-k}, \eqref{lkn part} and \eqref{Vkn part} yields
\begin{align*}
d_{r}\Big(\psi\big(z_k(\theta)\big)\Big)[\rho](\theta)&=d_{r_k}\Big(\psi\big(z_k(\theta)\big)\Big)[\rho_k](\theta)+d_{r_{3-k}}\Big(\psi\big(z_k(\theta)\big)\Big)[\rho_{3-k}](\theta)\\ &=(-1)^{k+1}\Big[{L}_{k,k}(r)\rho_k(\theta)-V_{k,k}(r)(\theta)\rho_k(\theta)\\
&\qquad\qquad\qquad +V_{k,3-k}(r)(\theta)\rho_{k}(\theta)-{L}_{k,3-k}(r)\rho_{3-k}(\theta)\Big].
\end{align*}
This gives the expression of $\mathbf{M}_r$ in \eqref{defLr}. Next, assume that $r$ satisfies \eqref{reversibility condition r} and \eqref{m-fold symmetry r}. Then, from \eqref{def Rk}, \eqref{def Vpm} and \eqref{def Vkn}, we get the following symmetry properties
\begin{equation}\label{sym-m Vpm}
V_{k}(r)(-t,-\theta)=V_{k}(r)(t,\theta)=V_{k}(r)\big(t,\theta+\tfrac{2\pi}{\mathtt{m}}\big).
\end{equation}
Similarly, from \eqref{def Rk} and \eqref{Akn}, one has
\begin{equation}\label{sym Akn}
A_{k,n}(r)(-t,-\theta,-\eta)=A_{k,n}(r)(t,\theta,\eta)=A_{k,n}(r)\big(t,\theta+\tfrac{2\pi}{\mathtt{m}},\eta+\tfrac{2\pi}{\mathtt{m}}\big).
\end{equation}
Thus, the symmetry properties of the operator $\mathbf{M}_r$ are immediate consequences of Lemma \ref{lem sym--rev}. The proof of Lemma \ref{lem lin op 1 DCE} is now complete.
\end{proof}
The next goal is to derive the explicit structure of the linearized operator at the equilibrium \mbox{state $r=0$.}
\begin{lem}\label{lem lin op 2 DCE}
The linearized equation of \eqref{Hamilt form DCE} at $r=0$ writes,
\begin{equation}\label{Ham eq-eq DCE}
\partial_{t}\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}=\mathcal{J} \mathbf{M}_0\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}, \qquad \mathbf{M}_0\triangleq \begin{pmatrix}
-V_{1}(0)-\mathcal{K}_1\ast\cdot & \mathcal{K}_b\ast\cdot\\
\mathcal{K}_b\ast\cdot & V_{2}(0)-\mathcal{K}_1\ast\cdot
\end{pmatrix},
\end{equation}
where
\begin{align}
\forall k\in\{1,2\}, \quad \mathtt{v}_k(b)&\triangleq V_k({0})=\Omega+(2-k)\tfrac{1-b^2}{2},
\label{def V10 V20}\\
\label{def mathcalKkn}
\forall x\in(0,1], \quad \mathcal{K}_x(\theta)&\triangleq\log\big|1-xe^{\ii\theta}\big|.
\end{align}
The convolution is understood in the following sense
$$\mathcal{K}_{x}\ast\rho(\theta)=\int_{\mathbb{T}}\mathcal{K}_{x}(\theta-\eta)\rho(\eta)d\eta.$$
Given the space Fourier expansion of the real solutions
$$\forall k\in\{1,2\},\quad \rho_{k}(t,\theta)=\displaystyle\sum_{j\in\mathbb{Z}^{*}}\rho_{j,k}(t)e^{\ii j\theta},\qquad \textnormal{with}\qquad \rho_{-j,k}=\overline{\rho_{j,k}},$$ the system \eqref{Ham eq-eq DCE} is equivalent to the following countable family of linear differential systems
\begin{equation}\label{def MjbO}
\forall j\in\mathbb{Z}^*,\quad\begin{pmatrix}
\dot{\rho}_{j,1}\vspace{0.1cm}\\
\dot{\rho}_{j,2}
\end{pmatrix}= M_{j}(b,\Omega)\begin{pmatrix}
\rho_{j,1}\vspace{0.1cm}\\
\rho_{j,2}
\end{pmatrix},
\quad M_{j}(b,\Omega)\triangleq \frac{\ii j}{|j|} \begin{pmatrix}
-|j|\big(\Omega+\tfrac{1-b^2}{2}\big)+\tfrac{1}{2}& -\tfrac{b^{|j|}}{2}\\
\tfrac{b^{|j|}}{2} & -|j|\Omega-\tfrac{1}{2}
\end{pmatrix}.
\end{equation}
\end{lem}
\begin{proof}
We shall make use of the following formula, which can be found in \cite[Lem. A.3]{CCG16} and \cite[Lem. 3.2]{R21}.
\begin{align}
\forall j\in\mathbb{Z}^*,\quad\forall x\in (0,1],\quad\int_{\mathbb{T}}\log\big(\big|1-xe^{\ii\theta}\big|\big)\cos(j\theta)d\theta&=-\frac{x^{|j|}}{2|j|}\cdot\label{int-2}
\end{align}
First observe that from \eqref{Akn}, one deduces for $r=0$ that
$$A_{k,n}({0})(\theta,\eta)=\left|b_{k}e^{\ii\theta}-b_{n}e^{\ii\eta}\right|=\left(b_{k}^{2}+b_{n}^{2}-2b_{k}b_{n}\cos(\eta-\theta)\right)^{\frac{1}{2}}$$
leading in particular to
$$
A_{1,2}({0})(\theta,\eta)=A_{2,1}({0})(\theta,\eta)=\left|1-be^{\ii(\eta-\theta)}\right|.
$$
Taking $r=0$ in \eqref{def Vkn} and using the change of variables $\eta\mapsto\eta+\theta$ together with \eqref{int-2} yields \begin{align*}
\forall\, k\in\{1,2\},\quad V_{k,k}(0)(\theta) &= \bigintssss_{\mathbb{T}}\log\left(\left|1-e^{\ii\eta}\right|\right)\cos(\eta)d\eta= -\frac{1}{2},
\\
V_{k,3-k}(0)(\theta)
& = \frac{b_{3-k}}{b_{k}}\bigintssss_{\mathbb{T}}\log\left(\left|1-be^{\ii\eta}\right|\right)\cos(\eta)d\eta= -\frac{b_{3-k}b }{2b_{k}}\cdot
\end{align*}
Combining the last identity with \eqref{def Vpm} yields \eqref{def V10 V20}.
Substituting $r=0$ into \eqref{def mathbfLkn} gives, since the space average of $\rho$ is zero,
\begin{align*}
\forall\, k\in\{1,2\},\qquad L_{k,k}({0})\rho
&=\mathcal{K}_1\ast\rho\qquad\textnormal{and}\qquad L_{k,3-k}({0})\rho={\mathcal{K}}_b\ast\rho,
\end{align*}
where the kernels $\mathcal{K}_1$ and $\mathcal{K}_b$ are defined by \eqref{def mathcalKkn}. Applying \eqref{int-2} once again, we get
$$\forall j\in\mathbb{Z}^*,\qquad\mathcal{K}_1\ast\mathbf{e}_{j}=-\frac{1}{2|j|}\mathbf{e}_{j}\qquad\textnormal{and}\qquad{\mathcal{K}}_b\ast\mathbf{e}_{j}=-\frac{b^{|j|}}{2|j|}\mathbf{e}_j,\qquad \mathbf{e}_{j}(\theta)=e^{\ii j\theta}.$$
Finally, gathering the previous computations leads to \eqref{def MjbO}. This achieves the proof of Lemma \ref{lem lin op 2 DCE}.
\end{proof}
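The identity \eqref{int-2}, and hence the Fourier coefficients of $\mathcal{K}_1$ and $\mathcal{K}_b$ computed above, can be checked numerically. The sketch below (Python with \texttt{numpy}; the function name and sample parameters are ours) uses the midpoint rule, which converges rapidly for the smooth periodic integrand when $x<1$, and assumes that $\int_{\mathbb{T}}$ denotes the normalized integral $\tfrac{1}{2\pi}\int_0^{2\pi}$, the convention consistent with $V_{k,k}(0)=-\tfrac12$.

```python
import numpy as np

def k_coeff(x, j, n=8192):
    # midpoint-rule approximation of (1/2pi) * int_0^{2pi} log|1 - x e^{it}| cos(jt) dt
    t = 2 * np.pi * (np.arange(n) + 0.5) / n
    return np.mean(np.log(np.abs(1 - x * np.exp(1j * t))) * np.cos(j * t))

# identity (int-2): the integral equals -x^{|j|} / (2|j|); tested here for 0 < x < 1
for x in (0.3, 0.7):
    for j in (1, 2, 5):
        assert abs(k_coeff(x, j) - (-x**j / (2 * j))) < 1e-10
```

The case $x=1$ is excluded from the check only because of the logarithmic singularity of the integrand at $t=0$; the identity itself extends to $x=1$ as used for $V_{k,k}(0)$.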
\subsection{Diagonalization at the equilibrium state}
In this subsection we shall diagonalize the equilibrium matrix operator appearing in Lemma \ref{lem lin op 2 DCE}. This provides a new Hamiltonian system better suited to the action-angle reformulation. Before that, we establish the following result on the spectral structure of the matrix $M_{j}$ introduced in \eqref{def MjbO}.
\begin{lem}\label{lem lin op 3 DCE}
Let $\Omega>0$,
$j\in\mathbb{Z}^*$ and $b\in(0,1)$. Then the eigenvalues of the matrix $ M_{j}(b,\Omega)$, defined in \eqref{def MjbO}, are given by $-\ii \Omega_{j,k}(b)$, $k\in\{1,2\}$, where
\begin{equation}\label{omega jk b}
\Omega_{j,k}(b)\triangleq \frac{j}{|j|}\bigg[\big(\Omega+\tfrac{1-b^2}{4}\big)|j|-\ii^{\mathtt{H}\big(\Delta_{j}(b)\big)} \tfrac{(-1)^{k}}{2}\sqrt{|\Delta_{j}(b)|}\bigg],
\end{equation}
with $\mathtt{H}\triangleq \mathbf{1}_{[0,\infty)}$ the Heaviside function and
\begin{equation}\label{def delta j}
\Delta_{j}(b)\triangleq b^{2|j|}-\big(\tfrac{1-b^2}{2}|j|-1\big)^2.
\end{equation}
The corresponding eigenspaces are one dimensional and generated by the vectors
$$v_{j,1}(b)\triangleq
\begin{pmatrix}
1 \\
-a_{j}(b)
\end{pmatrix}, \quad v_{j,2}(b)\triangleq
\begin{pmatrix}
-a_{j}(b)\\
1
\end{pmatrix}, \quad a_{j}(b)\triangleq \frac{b^{|j|}}{\tfrac{1-b^2}{2}|j|-1+\ii^{\mathtt{H}\big(\Delta_{j}(b)\big)} \sqrt{|\Delta_{j}(b)|}}\cdot$$
\end{lem}
\begin{proof}
According to \eqref{def MjbO} we have
$$
\forall j\in \mathbb{Z}^*,\quad M_{-j}(b,\Omega)=\overline{M_{j}(b,\Omega)}.
$$
Thus, it suffices to consider the case $j\in \mathbb{N}^*$.
The eigenvalues of the matrix $M_{j}(b,\Omega)$ are solutions of the following second order polynomial equation
\begin{equation}\label{poly2}
X^{2}+\ii\left[\mu_j+\delta_j\right]X-\left[\mu_j\delta_j+\tfrac{b^{2j}}{4}\right]=0, \quad\textnormal{with}\quad\mu_j\triangleq j\big(\Omega+\tfrac{1-b^2}{2}\big)-\tfrac{1}{2}\quad\textnormal{and}\quad\delta_j\triangleq j\Omega+\tfrac{1}{2}.
\end{equation}
The discriminant of the last equation is given by
\begin{align*}
-(\mu_j+\delta_j)^2+b^{2j}+4\mu_j\delta_j &=b^{2j}-\big(\mu_j-\delta_j\big)^2\\
&=\Delta_j(b)
\end{align*}
and the solutions to the equation are $-\ii \Omega_{j,k}(b)$, $k\in\{1,2\}$, where $\Omega_{j,k}(b)$ are given by \eqref{omega jk b}. The expression of the eigenvectors $v_{j,k}(b)$ follows by direct computations. Notice that, for all $j\in \mathbb{N}^*$,
$$a_j(0)=0,\qquad a_j(1)=-1$$
and for all $b\in (0,1)$, $a_j(b)$ is well-defined if $\Delta_j(b)>0$. We shall prove that $a_j(b)$ is still well-defined even when $\Delta_j(b)\leqslant 0$. In view of \eqref{def delta j}, we may write for all $j\in\mathbb{N}^*$
\begin{equation}\label{deltaj}
\Delta_j(b)=\Big(b^{j}-1+\tfrac{1-b^2}{2}j\Big)\Big(b^{j}+1-\tfrac{1-b^2}{2}j\Big).
\end{equation}
In particular we have
$$\forall b\in (0,1),\quad \Delta_1(b)=-\frac14(1-b)^2(1+b)^2<0,\, \qquad \Delta_2(b)=0\qquad\textnormal{and}\qquad a_2(b)=-1.$$
For $j\geqslant 3$, we can easily check that
\begin{align}\label{deltaj1}
\nonumber\forall b\in (0,1),\quad \tfrac{1-b^2}{2}j +b^{j}-1&=(1-b)\Big(\tfrac{j}{2}(1+b)-(1+b)-b^2\big[1+b+\cdots +b^{j-3}\big]\Big)\\
\nonumber&\geqslant (1-b)\Big(\tfrac{j-2}{2}(1+b)-b^2(j-2)\Big)\\
\nonumber&\geqslant \tfrac{j-2}{2}(1-b)^2(1+2b)\\ &>0.
\end{align}
It follows that $\Delta_j(b)\leqslant 0$ if and only if $b^{j}+1-\tfrac{1-b^2}{2}j\leqslant 0$. In this case the denominator of $a_j(b)$ satisfies, for all $b\in (0,1)$,
$$
\tfrac{1-b^2}{2}|j|-1+ \sqrt{-\Delta_{j}(b)}\geqslant \tfrac{1-b^2}{2}|j|-1\geqslant b^{j}>0.
$$
This ends the proof of Lemma \ref{lem lin op 3 DCE}.
\end{proof}
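The spectral formula \eqref{omega jk b} can be cross-checked against a direct numerical eigenvalue computation of the matrix \eqref{def MjbO}. The following sketch (Python with \texttt{numpy}; the value $\Omega=1$ and the sampled pairs $(b,j)$ are our illustrative choices) covers both the elliptic case $\Delta_j(b)<0$ and the hyperbolic case $\Delta_j(b)>0$.

```python
import numpy as np

def M(j, b, Om):
    # the matrix M_j(b, Omega) from (def MjbO)
    aj = abs(j)
    return (1j * j / aj) * np.array(
        [[-aj * (Om + (1 - b**2) / 2) + 0.5, -b**aj / 2],
         [b**aj / 2,                         -aj * Om - 0.5]])

def Delta(j, b):
    # discriminant Delta_j(b) from (def delta j)
    return b**(2 * abs(j)) - ((1 - b**2) / 2 * abs(j) - 1)**2

def Omega_jk(j, b, Om, k):
    # eigenfrequency Omega_{j,k}(b) from (omega jk b)
    aj, D = abs(j), Delta(j, b)
    root = (1j if D >= 0 else 1) * np.sqrt(abs(D))   # i^{H(Delta)} * sqrt(|Delta|)
    return (j / aj) * ((Om + (1 - b**2) / 4) * aj - (-1)**k / 2 * root)

# (0.4, +-3): elliptic (Delta < 0); (0.7, 3): hyperbolic (Delta > 0, since b > 1/2)
for b, j in [(0.4, 3), (0.4, -3), (0.7, 3), (0.3, 8)]:
    ev = np.linalg.eigvals(M(j, b, 1.0))
    for k in (1, 2):
        lam = -1j * Omega_jk(j, b, 1.0, k)   # predicted eigenvalue -i Omega_{j,k}(b)
        assert np.min(np.abs(ev - lam)) < 1e-10
```

In the elliptic samples the eigenvalues come out purely imaginary, in the hyperbolic one they acquire opposite nonzero real parts, in agreement with the dichotomy of the lemma.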
We next study the sign of the discriminant $\Delta_j(b)$ defined in \eqref{def delta j}.
\begin{lem}\label{lem:disper-relation}
There exists a strictly increasing sequence $(\underline{b}_{j})_{j\geqslant 3}\subset(0,1)$ converging to $1$ such that
$$
\Big\{b\in(0,1)\quad\textnormal{s.t.}\quad\exists\, j\in\mathbb{N}\setminus\{0,1,2\}, \quad \Delta_{j}(b)=0\Big\}=\Big\{\underline{b}_{j},\,j\geqslant 3\Big\},
$$
with
\begin{equation}\label{b3b4}
\underline{b}_{3}=\frac12\qquad\textnormal{and}\qquad \underline{b}_{4}=\sqrt{\sqrt{2}-1}.
\end{equation}
Moreover, for any fixed $j_0\geqslant 3$ we have
$$\forall b\in(\underline{b}_{j_0},\underline{b}_{j_0+1}),\quad \begin{cases}
\Delta_{j}(b)>0, & \textnormal{if } j\leqslant j_0, \\
\Delta_{j}(b)<0, & \textnormal{if } j\geqslant j_0+1.
\end{cases}$$
\end{lem}
\begin{proof}
For $j\geqslant 3$, one gets in view of \eqref{deltaj} and \eqref{deltaj1} that the zeros of the function $b\mapsto \Delta_j(b)$ are the zeros of the function $b\mapsto b^{j}+1-\tfrac{1-b^2}{2}j$. To study the zeros of the latter discrete function, let us consider its continuous version
\begin{equation}\label{def:f}
\forall (b,x)\in(0,1)\times[3,\infty),\quad f(b,x)\triangleq b^{x}+1-\tfrac{1-b^2}{2}x.
\end{equation}
Then, for fixed $x\in[3,\infty)$, one has $f(0,x)=1-\tfrac{x}{2}<0,\quad f(1,x)=2\quad$ and
\begin{equation}\label{variation-b}
\forall b\in(0,1),\quad \partial_b f(b,x)=x\big(b^{x-1}+b\big)>0.
\end{equation}
Consequently, by the intermediate value theorem, there exists a unique $\underline{b}_x\in (0,1)$ satisfying
$$
f(\underline{b}_x,x)=0,\qquad \forall b< \underline{b}_x,\quad f(b,x)<0\qquad {\rm and}\qquad \forall b>\underline{b}_x,\quad f(b,x)>0.
$$
Moreover,
\begin{equation}\label{variation-x}
\forall b\in(0,1),\quad \partial_x f(b,x)={b}^{x}\log (b)-\tfrac{1-{b}^2}{2}<0.
\end{equation}
Hence the function $x\mapsto f(b,x)$ is strictly decreasing on $[3,\infty)$, which implies that $x\mapsto \underline{b}_x$ is strictly increasing on $[3,\infty)$.
It follows that for any fixed integer $j_0\geqslant 3$ we have
\begin{equation}\label{deltaj2}
f(\underline{b}_{j_0},{j_0})=0 \qquad {\rm and }\qquad \forall b\in(\underline{b}_{j_0},\underline{b}_{{j_0}+1}),\quad \begin{cases}
f(b,j)>0, & \textnormal{if } j\leqslant j_0, \\
f(b,j)<0, & \textnormal{if } j\geqslant j_0+1.
\end{cases}
\end{equation}
Combining \eqref{deltaj}, \eqref{deltaj1}, \eqref{def:f} and \eqref{deltaj2} we conclude the desired result. Finally, \eqref{b3b4} follows from the identities
\begin{align*}
f(b,3)&=b^{3}+\tfrac{3}{2}b^2-\tfrac{1}{2}\\
&=(b+1)^2\big(b-\tfrac{1}{2}\big)
\end{align*}
and
\begin{align*}
f(b,4)&=b^{4}+2b^2-1\\
&=\big({b^2+1+\sqrt{2}}\big)\left(b+\sqrt{\sqrt{2}-1}\right)\left(b-\sqrt{\sqrt{2}-1}\right).
\end{align*}
This ends the proof of Lemma \ref{lem:disper-relation}.
\end{proof}
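The threshold values \eqref{b3b4} and the strict monotonicity of the sequence $(\underline{b}_j)_{j\geqslant 3}$ can be confirmed numerically by locating the root of $b\mapsto f(b,j)$ by bisection, which is legitimate since $f(\cdot,j)$ is strictly increasing with $f(0,j)<0<f(1,j)$. The Python sketch below is ours; the function names are illustrative.

```python
import math

def f(b, j):
    # f(b, x) = b^x + 1 - (1 - b^2) x / 2, cf. (def:f)
    return b**j + 1 - (1 - b**2) / 2 * j

def b_under(j, tol=1e-12):
    # unique zero of the strictly increasing function b -> f(b, j) on (0, 1)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid, j) < 0 else (lo, mid)
    return (lo + hi) / 2

assert abs(b_under(3) - 0.5) < 1e-10                           # b_3 = 1/2
assert abs(b_under(4) - math.sqrt(math.sqrt(2) - 1)) < 1e-10   # b_4 = sqrt(sqrt(2) - 1)
seq = [b_under(j) for j in range(3, 40)]
assert all(x < y for x, y in zip(seq, seq[1:])) and seq[-1] < 1  # strictly increasing in (0,1)
```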
We shall now focus on the conditions that guarantee the ellipticity of the eigenvalues based on Lemma \ref{lem:disper-relation}.
\begin{cor}\label{coro-equilib-freq}
Let $\Omega>0$, $b^*\in(0,1)\setminus \big\{\underline{b}_{n},\, n\geqslant 3\big\} $ and set
\begin{equation}\label{define-m}
\mathtt{m}^*\triangleq \mathtt{m}^*(b^*)\triangleq \min\big\{n\geqslant 3 \quad\textnormal{s.t.}\quad \underline{b}_{n}>b^*\big\}.
\end{equation}
Then, for all $|j|\geqslant\mathtt{m}^*$ and $b\in[0, b^*]$, the eigenvalues of the matrix $ M_{j}(b,\Omega)$, defined in \eqref{def MjbO}, are simple and purely imaginary, given by $-\ii \Omega_{j,k}(b)$, $k\in\{1,2\}$, with
\begin{align}
\Omega_{j,k}(b)&=\tfrac{j}{|j|}\bigg[\big(\Omega+\tfrac{1-b^2}{4}\big)|j|+ \tfrac{(-1)^{k+1}}{2}\sqrt{\big(\tfrac{1-b^2}{2}|j|-1\big)^2-b^{2|j|}}\bigg]\label{omegajk}\\
&=\tfrac{j}{|j|}\bigg[\Big(\Omega+(2-k)\tfrac{1-b^2}{2}\Big)|j|+ \tfrac{(-1)^k}{2}+(-1)^{k+1} \mathtt{r}_{j}(b)\bigg], \label{ASYFR1+}
\end{align}
where
\begin{align}\label{ASYFR1-}
\forall (n,m)\in\mathbb{N}^2,\quad \forall\alpha\in\mathbb{N}^*,\quad \sup_{{b \in [0,b^*]}\atop |j| \geqslant \mathtt{m}^*}|j|^\alpha|\partial_j^m\partial_ b^n \mathtt{r}_{j}(b)| \leqslant C_{n,m,\alpha}.
\end{align}
The corresponding eigenspaces are real and generated by
\begin{equation}\label{def-aj}
v_{j,1}(b)=
\begin{pmatrix}
1 \\
-a_{j}(b)
\end{pmatrix}, \quad v_{j,2}(b)=
\begin{pmatrix}
-a_{j}(b)\\
1
\end{pmatrix}, \quad a_{j}(b)=\frac{b^{|j|}}{\tfrac{1-b^2}{2}|j|-1+ \sqrt{\big(\tfrac{1-b^2}{2}|j|-1\big)^2-b^{2|j|}}}\cdot
\end{equation}
Moreover, there exists $\delta\triangleq \delta(b^*)>0$ such that for all $|j|\geqslant\mathtt{m}^*$ and $b\in[0, b^*]$,
\begin{equation}\label{bound aj}
0\leqslant a_{j}(b)<1-\delta
\end{equation}
and
\begin{equation}\label{estimate-aj}
\forall n,\alpha\in\mathbb{N}^*,\quad \sup_{{b \in [0,b^*]}\atop |j| \geqslant \mathtt{m}^*}|j|^\alpha|\partial_{b}^{n}a_j(b)|<\infty.
\end{equation}
\end{cor}
\begin{proof}
In view of \eqref{define-m}, \eqref{variation-b}, \eqref{variation-x} and \eqref{deltaj2}, for all $b\in[0,b^*]$ and $|j|\geqslant \mathtt{m}^*$ one has
\begin{equation}\label{deltaj4}
f(b,|j|)\leqslant f(b^*,\mathtt{m}^*)<f(\underline{b}_{\mathtt{m}^*},\mathtt{m}^*)=0.
\end{equation}
Combining \eqref{deltaj}, \eqref{deltaj1}, \eqref{def:f} and \eqref{deltaj4} we find
$$
\forall b\in[0,b^*],\quad\forall |j|\geqslant\mathtt{m}^*,\quad \Delta_j(b)<0.
$$
Then, by Lemma \ref{lem lin op 3 DCE}, we conclude \eqref{omegajk} and \eqref{def-aj}. On the other hand, the inequality \eqref{deltaj4} also implies
\begin{equation}\label{ineq-b-star}
f(b^*,\mathtt{m}^*)=\big(b^*\big)^{\mathtt{m}^*}+1-\tfrac{1-(b^*)^2}{2}\mathtt{m}^*<0.
\end{equation}
This gives in turn, for any $|j|\geqslant\mathtt{m}^*$ and $b\in[0,b^*]$,
\begin{equation}\label{nOme3}
\tfrac{1-b^2}{2}|j|-1 \geqslant\tfrac{1-(b^*)^2}{2}\mathtt{m}^*-1> 0.
\end{equation}
Consequently, for any $|j|\geqslant\mathtt{m}^*$ and $b\in[0,b^*]$ we may write, by \eqref{omegajk},
\begin{align*}
\Omega_{j,k}(b) &=\tfrac{j}{|j|}\Bigg[\Big(\Omega+\tfrac{1-b^2}{4}\Big)|j|+\tfrac{(-1)^{k+1}}{2} \Big(
\tfrac{1-b^2}{2}|j|-1\Big) \sqrt{1-b^{2|j|}\Big(\tfrac{1-b^2}{2}|j|-1\Big)^{-2}}\Bigg]\\ &=\tfrac{j}{|j|}\bigg[\Big(\Omega+\tfrac{1-b^2}{4}\Big)|j|+\tfrac{(-1)^{k+1}}{2} \Big(\tfrac{1-b^2}{2}|j|-1\Big)+(-1)^{k+1} \mathtt{r}_{j}(b)\bigg],
\end{align*}
with
\begin{align}\label{rngpm}
\mathtt{r}_{j}(b)&\triangleq \tfrac{1}{2} \Big(\tfrac{1-b^2}{2}|j|-1\Big)\Bigg[ \sqrt{1-b^{2|j|}\Big(\tfrac{1-b^2}{2}|j|-1\Big)^{-2}}-1\Bigg].
\end{align}
By virtue of \eqref{ineq-b-star} and \eqref{nOme3}, one has for all $|j|\geqslant\mathtt{m}^*$ and $b\in [0,b^*]\subset[0,1],$
\begin{align}\label{est dec}
b^{|j|}\Big(\tfrac{1-b^2}{2}|j|-1\Big)^{-1}\leqslant (b^*)^{|j|}\Big(\tfrac{1-(b^*)^2}{2}\mathtt{m}^*-1\Big)^{-1}\leqslant (b^*)^{\mathtt{m}^*}\Big(\tfrac{1-(b^*)^2}{2}\mathtt{m}^*-1\Big)^{-1}<1.
\end{align}
Thus, expanding the square root in power series and applying the Leibniz rule, we obtain after straightforward computations the bounds for $\mathtt{r}_{j}(b)$ claimed in
\eqref{ASYFR1-}. Next, we shall check the inequalities \eqref{bound aj}.
Using \eqref{def-aj}, \eqref{nOme3} and \eqref{est dec} we conclude the existence of $\delta=\delta(b^*)\in(0,1)$ such that for all $|j|\geqslant\mathtt{m}^*$ and $b\in[0, b^*]$,
$$
0\leqslant a_{j}(b)\leqslant b^{|j|}\Big(\tfrac{1-b^2}{2}|j|-1\Big)^{-1} <1-\delta.$$
Therefore the estimate \eqref{estimate-aj} follows from \eqref{def-aj} and the Leibniz rule.
This achieves the proof of Corollary~\ref{coro-equilib-freq}.
\end{proof}
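The uniform bound \eqref{bound aj} and the fast decay of $a_j(b)$ in $j$ behind \eqref{estimate-aj} can be probed numerically on the elliptic range $b\in[0,b^*]$, $|j|\geqslant\mathtt{m}^*$. In the Python sketch below we take the illustrative value $b^*=0.45$, for which $\mathtt{m}^*(b^*)=3$ since $\underline{b}_3=\tfrac12>b^*$, and compare $a_j(b)$ with the explicit majorant $b^{|j|}\big(\tfrac{1-b^2}{2}|j|-1\big)^{-1}$ used in the proof.

```python
import numpy as np

def a(j, b):
    # a_j(b) from (def-aj); requires the elliptic regime Delta_j(b) < 0
    d = (1 - b**2) / 2 * abs(j) - 1
    return b**abs(j) / (d + np.sqrt(d**2 - b**(2 * abs(j))))

bstar, mstar = 0.45, 3          # b* < b_3 = 1/2, hence m*(b*) = 3
bs = np.linspace(0.0, bstar, 200)
for j in range(mstar, 30):
    vals = a(j, bs)
    assert np.all(0 <= vals) and np.all(vals < 1)      # bound (bound aj)
    # majorant b^{|j|} ((1-b^2)|j|/2 - 1)^{-1} used in the proof, evaluated at b = b*
    assert np.max(vals) <= bstar**j / ((1 - bstar**2) / 2 * j - 1)
```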
As a consequence of Corollary \ref{coro-equilib-freq}, we may restrict the Fourier modes to the lattice $\mathbb{Z}_{\mathtt{m}}$ with $\mathtt{m}\geqslant\mathtt{m}^*$ in order to avoid the hyperbolic spectrum. Therefore, we shall work in the phase space $L^2_{\mathtt{m}}(\mathbb{T})\times L^2_{\mathtt{m}}(\mathbb{T})$ introduced in \eqref{def L2m}. In what follows, we introduce a suitable symplectic transformation $\mathbf{Q}$ used in the diagonalization of the linearized operator at the equilibrium state described by Lemma \ref{lem lin op 2 DCE}. This diagonalization is required later in order to perform the reduction of the remainder term, see Proposition \ref{prop RR}. The linear transformation $\mathbf{Q}$ is defined by its action on any element $(\rho_1,\rho_2)\in L^2_{\mathtt{m}}(\mathbb{T})\times L^2_{\mathtt{m}}(\mathbb{T})$ with the Fourier expansions
$$\forall k\in\{1,2\},\quad \rho_{k}=\displaystyle\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\rho_{j,k}\mathbf{e}_{j},\qquad \textnormal{with}\qquad \rho_{-j,k}=\overline{\rho_{j,k}}, \qquad \mathbf{e}_{j}(\theta)=e^{\ii j\theta}
$$
as follows
\begin{equation}\label{def-P}
\mathbf{Q}\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
\triangleq \sum_{j\in \mathbb{Z}_{\mathtt{m}}^*}\, \mathbf{Q}_j \begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix} \mathbf{e}_{j}\, , \qquad
\mathbf{Q}_j\triangleq \tfrac{1}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
1 & -a_{j} (b) \\
- a_{j}(b)& 1
\end{pmatrix} \, ,
\end{equation}
where $a_{j}(b)$ is given by \eqref{def-aj}.
We have the following properties.
\begin{lem}\label{lem: properties P}
Let $\mathtt{m}\geqslant \mathtt{m}^*,$ $b\in[0,b^*]$, where $\mathtt{m}^*$ and $b^*$ are defined in Corollary $\ref{coro-equilib-freq}.$ Then the following assertions hold true.
\begin{enumerate}
\item $\mathbf{Q}:L^2_{\mathtt{m}}(\mathbb{T})\times L^2_{\mathtt{m}}(\mathbb{T})\to L^2_{\mathtt{m}}(\mathbb{T})\times L^2_{\mathtt{m}}(\mathbb{T})$ is symplectic with respect to the symplectic form \eqref{sympl ref}.\\ In addition $\mathbf{Q}^{\top}=\mathbf{Q}.$
\item $\mathbf{Q}$ is invertible and its inverse is given by
\begin{equation}\label{def P-1}
\mathbf{Q}^{-1}\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
\triangleq \sum_{j\in \mathbb{Z}_{\mathtt{m}}^*}\, \mathbf{Q}_j^{-1} \begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix} \mathbf{e}_{j}\, , \qquad
\mathbf{Q}_j^{-1}=\tfrac{1}{\sqrt{1-a_{j}^2(b)}}\begin{pmatrix}
1 & a_{j}(b) \\
a_{j}(b)& 1
\end{pmatrix}.
\end{equation}
\item The transformations $\mathbf{Q}^{+1}\triangleq \mathbf{Q}$ and $\mathbf{Q}^{-1}$ write
\begin{equation}\label{PP-1}
\mathbf{Q}^{\pm 1}=\begin{pmatrix}
\mathbb{I}_{\mathtt{m}} &0\\
0& \mathbb{I}_{\mathtt{m}}
\end{pmatrix}+\begin{pmatrix}
{P}_{1}\ast \cdot & \mp{P}_{2}\ast \cdot\\
\mp{P}_{2}\ast \cdot& {P}_{1}\ast \cdot
\end{pmatrix},
\qquad {P}_{k}\triangleq\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\tfrac{(2-k)+(-1)^k\sqrt{(2-k)+(-1)^k a_j^2(b)}}{\sqrt{1-a_j^2(b)}}\mathbf{e}_{j}.
\end{equation}
For any $k\in\{1,2\},$ the kernel $P_{k}$ satisfies the symmetry properties
\begin{equation}\label{sym Pk}
P_{k}(-\theta)=P_{k}(\theta)=P_{k}\big(\theta+\tfrac{2\pi}{\mathtt{m}}\big)
\end{equation}
and the estimate
\begin{equation}\label{e-P1P2}
\forall n\in\mathbb{N},\quad\|\partial_{\theta}^n{P}_{k}\ast\rho\|_{s}^{q,\gamma,\m}\lesssim \|\rho\|_{s}^{q,\gamma,\m}.
\end{equation}
\item The transformation $\mathbf{Q}$ diagonalizes the operator $\mathcal{J} \mathbf{M}_0$,
where $\mathbf{M}_0$ is introduced in \eqref{Ham eq-eq DCE}, namely
\begin{equation}\label{def Lbmat}
\mathbf{Q}^{-1}\mathcal{J} \mathbf{M}_0\mathbf{Q}= \mathcal{J} \mathbf{L}_0, \qquad \mathbf{L}_0\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
\triangleq \sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\tfrac{1}{j}\begin{pmatrix}
-\Omega_{j,1}(b) & 0\\
0 & \Omega_{j,2}(b)
\end{pmatrix} \begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix} \mathbf{e}_{j}.
\end{equation}
\item All the real-valued solutions of the linearized contour dynamics equation \eqref{Ham eq-eq DCE} have the form
\begin{equation}\label{lin-sol}
\rho(t,\theta)=\sum_{j\in\mathbb{Z}_\mathtt{m}^*}\tfrac{A_j}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
1 \\
- a_{j}(b)
\end{pmatrix}e^{-\ii \left(\Omega_{j,1}(b)t- j\theta\right)}+\tfrac{B_j}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
-a_{j} (b) \\
1
\end{pmatrix}e^{-\ii \left(\Omega_{j,2}(b)t-j\theta\right)}
\end{equation}
with $\overline{A_j}=A_{-j},\,\overline{B_j}=B_{-j}.$
\end{enumerate}
\end{lem}
\begin{proof}
\textbf{1.} Straightforward computations based on the definition \eqref{def-P} lead to
$$\forall j\in \mathbb{Z}_{\mathtt{m}}^*,\quad \mathbf{Q}_j^\top \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix} \mathbf{Q}_j=\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.$$
Then, using Lemma \ref{lem: charac symp} we conclude the first point. Notice that one also has $$\forall j\in\mathbb{Z}_{\mathtt{m}}^*,\quad\mathbf{Q}_{j}^{\top}=\mathbf{Q}_{j},$$
which implies $\mathbf{Q}^{\top}=\mathbf{Q}.$\\
\textbf{2.} The second point follows easily by direct computations.\\
\textbf{3.} In view of \eqref{def-P} and \eqref{def P-1} we can write
$$
\mathbf{Q}_j^{\pm 1}= \begin{pmatrix}
1 & 0\\
0& 1
\end{pmatrix}+\tfrac{1}{\sqrt{1-a_j^2(b)}} \begin{pmatrix}
1-\sqrt{1-a_j^2(b)} & \mp a_j (b) \\
\mp a_j(b)& 1-\sqrt{1-a_j^2(b)}
\end{pmatrix} \, ,
$$
leading to \eqref{PP-1}. The symmetry properties \eqref{sym Pk} follow either from the fact that $a_{-j}(b)=a_{j}(b)$ (see \eqref{def-aj}) or from the restriction of the Fourier modes in the definition of $P_k.$ The estimate \eqref{e-P1P2} is obtained by applying the Leibniz and chain rules together with \eqref{PP-1} and \eqref{estimate-aj}.
\\
\textbf{4.} Notice, from \eqref{def-P} and \eqref{def-aj}, that
$$
\mathbf{Q}_j=\tfrac{1}{\sqrt{1-a_{j}^2(b)}}\Big(v_{j,1}(b)\quad v_{j,2}(b)\Big).
$$
Then, according to Corollary \ref{coro-equilib-freq}, the matrices $\mathbf{Q}_j$ diagonalize the matrices $M_j(b,\Omega)$, defined in \eqref{def MjbO}, namely
$$\forall j\in\mathbb{Z}_{\mathtt{m}}^*,\quad \mathbf{Q}_j^{-1}M_j(b,\Omega)\mathbf{Q}_j=-\ii\begin{pmatrix}
\Omega_{j,1}(b) & 0\\
0 & \Omega_{j,2}(b)
\end{pmatrix}.$$
Therefore we deduce from Lemma \ref{lem lin op 2 DCE}
\begin{align}\label{linerized-op-diag}
\big(\mathbf{Q}^{-1}\mathcal{J}\mathbf{M}_0\mathbf{Q}\big)\begin{pmatrix}
\rho_{1}\\
\rho_{2}
\end{pmatrix}
&=\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\begin{pmatrix}
-\ii\Omega_{j,1}(b) & 0\\
0 & -\ii\Omega_{j,2}(b)
\end{pmatrix} \begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix} \mathbf{e}_{j}\\ &=\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\ii j\begin{pmatrix}
1& 0\\
0 & -1
\end{pmatrix}\begin{pmatrix}
-\frac{1}{j}\Omega_{j,1}(b) & 0\\
0 & \frac{1}{j} \Omega_{j,2}(b)
\end{pmatrix} \begin{pmatrix}
\rho_{j,1}\\
\rho_{j,2}
\end{pmatrix} \mathbf{e}_{j},\nonumber
\end{align}
which gives in turn \eqref{def Lbmat}.\\
\textbf{5.} It follows immediately from the fourth point when solving the linear differential system \eqref{def MjbO}. This completes the proof of Lemma \ref{lem: properties P}.
\end{proof}
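The algebraic identities behind Lemma \ref{lem: properties P}, namely the symplecticity $\mathbf{Q}_j^{\top}\operatorname{diag}(1,-1)\mathbf{Q}_j=\operatorname{diag}(1,-1)$, the explicit inverse \eqref{def P-1} and the diagonalization \eqref{def Lbmat}, reduce mode by mode to $2\times2$ matrix computations that can be verified numerically. The Python sketch below works in the elliptic regime, with the illustrative choices $b=0.4$, $\Omega=1$ and $|j|\geqslant 3$.

```python
import numpy as np

def a(j, b):
    # a_j(b) from (def-aj), elliptic regime Delta_j(b) < 0
    d = (1 - b**2) / 2 * abs(j) - 1
    return b**abs(j) / (d + np.sqrt(d**2 - b**(2 * abs(j))))

def Q(j, b):
    # the 2x2 block Q_j from (def-P)
    aj = a(j, b)
    return np.array([[1.0, -aj], [-aj, 1.0]]) / np.sqrt(1 - aj**2)

def M(j, b, Om):
    # the matrix M_j(b, Omega) from (def MjbO)
    aj = abs(j)
    return (1j * j / aj) * np.array(
        [[-aj * (Om + (1 - b**2) / 2) + 0.5, -b**aj / 2],
         [b**aj / 2,                         -aj * Om - 0.5]])

def Omega_jk(j, b, Om, k):
    # real eigenfrequencies (omegajk) in the elliptic regime
    aj = abs(j)
    D = ((1 - b**2) / 2 * aj - 1)**2 - b**(2 * aj)   # = -Delta_j(b) > 0
    return (j / aj) * ((Om + (1 - b**2) / 4) * aj - (-1)**k / 2 * np.sqrt(D))

J2 = np.diag([1.0, -1.0])
for j in (3, 4, 7, -3):
    b, Om = 0.4, 1.0
    Qj, aj = Q(j, b), a(j, b)
    assert np.allclose(Qj.T @ J2 @ Qj, J2)            # symplecticity, as in point 1
    Qinv = np.array([[1.0, aj], [aj, 1.0]]) / np.sqrt(1 - aj**2)
    assert np.allclose(Qj @ Qinv, np.eye(2))          # explicit inverse (def P-1)
    Dg = Qinv @ M(j, b, Om) @ Qj                      # diagonalization, cf. (def Lbmat)
    assert np.allclose(Dg, -1j * np.diag([Omega_jk(j, b, Om, 1),
                                          Omega_jk(j, b, Om, 2)]))
```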
\subsection{Symplectic change of coordinates}
In this section we intend to conjugate the nonlinear Hamiltonian system \eqref{Hamilt form DCE} by the symplectic linear transformation $\mathbf{Q}$ introduced in \eqref{def-P}. Notice that this transformation does not depend on the unknown $r$; it depends only on the parameter $b.$\\
Let us consider the symplectic unknown $\tilde{r}\triangleq \mathbf{Q}^{-1}r$. Then the Hamiltonian system \eqref{Hamilt form DCE} writes
\begin{equation}\label{nonlinear-func0}
\partial_t \tilde{r}=X_K(\tilde{r})=\mathcal{J}\nabla K(\tilde{r}), \qquad K(\tilde{r})\triangleq H(\mathbf{Q}\tilde{r}),\qquad \tilde{r}=(\tilde{r}_1,\tilde{r}_2)\in L^2_{\mathtt{m}}(\mathbb{T})\times L^2_{\mathtt{m}}(\mathbb{T}).
\end{equation}
Indeed, on one hand, we have
\begin{equation}\label{symp1}
\partial_{t}\widetilde{r}=\mathbf{Q}^{-1}\partial_{t}r=\mathbf{Q}^{-1}\mathcal{J}\nabla H(r)=\mathbf{Q}^{-1}\mathcal{J}(\nabla H)(\mathbf{Q}\widetilde{r}).
\end{equation}
On the other hand,
\begin{equation}\label{symp2}
\mathcal{J}\nabla K(\widetilde{r})=\mathcal{J}\nabla\big(H(\mathbf{Q}\widetilde{r})\big)=\mathcal{J}\mathbf{Q}(\nabla H)(\mathbf{Q}\widetilde{r}).
\end{equation}
Therefore, if $ \mathbf{Q}\mathcal{J}\mathbf{Q}=\mathcal{J}$ then we have the equivalence
\begin{equation}\label{symp3}
\Big(\partial_{t}r=\mathcal{J}\nabla H(r)\quad\Leftrightarrow\quad\partial_{t}\widetilde{r}=\mathcal{J}\nabla K(\widetilde{r})\Big)
\end{equation}
and this last condition is true since Lemma \ref{lem: properties P}-1 implies that $\mathbf{Q}$ is symplectic and $\mathbf{Q}^{\top}=\mathbf{Q}.$\\
We shall look for time quasi-periodic solutions of \eqref{nonlinear-func0} in the form
$$\tilde{r}(t,\theta)=\hat{r}(\omega t,\theta),$$
where $\hat{r}:(\varphi,\theta)\in \mathbb{T}^{d+1}\to \mathbb{R}^2$ and $\omega\in \mathbb{R}^d$
is a non-resonant frequency vector. In
this setting, the equation \eqref{nonlinear-func0} becomes
$$
\omega\cdot\partial_{\varphi} \hat{r} =\mathcal{J}\nabla K(\hat{r}).
$$
In the sequel, we shall alleviate the notation and denote $\hat{r}$ simply by $r$. Hence, the foregoing equation becomes
\begin{equation}\label{nonlinear-func}
\omega\cdot\partial_{\varphi} {r} =\mathcal{J}\nabla K(r), \qquad K({r})\triangleq H(\mathbf{Q}r),\qquad r=(r_1,r_2)\in L^2_{\mathtt{m}}(\mathbb{T})\times L^2_{\mathtt{m}}(\mathbb{T}).
\end{equation}
The main result of this section reads as follows.
\begin{prop}\label{prop:conjP}
The linearized equation of \eqref{nonlinear-func} at a given small state $r$ takes the form
$$
\omega\cdot\partial_{\varphi}\rho +\mathfrak{L}_{r}\rho=0,\qquad\mathfrak{L}_r\triangleq -d_r\big(\mathcal{J}\nabla K({r})\big),
$$ with
\begin{equation}\label{defLr2}
\begin{aligned}
\mathfrak{L}_{r} =& \begin{pmatrix}
\partial_\theta\big(\mathcal{V}_1(r)\, \cdot\big) +\frac{1}{2}\mathcal{H}+\partial_\theta\mathcal{Q}\ast\cdot&0\\
0& \partial_\theta\big(\mathcal{V}_2(r)\, \cdot\big)-\tfrac{1}{2}\mathcal{H} -\partial_\theta \mathcal{Q}\ast\cdot
\end{pmatrix}
\\
&\qquad +\partial_\theta\begin{pmatrix}
\mathcal{T}_{\mathscr{K}_{1,1}}(r)& \mathcal{T}_{\mathscr{K}_{1,2}}(r)\\
\mathcal{T}_{\mathscr{K}_{2,1}}(r) & \mathcal{T}_{\mathscr{K}_{2,2}}(r)
\end{pmatrix},
\end{aligned}
\end{equation}
where
\begin{enumerate}
\item the scalar functions $\mathcal{V}_{k}(r)\triangleq V_k(\mathbf{Q}r)$, $k\in\{1,2\}$ satisfy
\begin{equation}\label{es-f0}
\|\mathcal{V}_{k}(r)-V_{k}(0)\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|r\|_{s+1}^{q,\gamma,\mathtt{m}},
\end{equation}
with $V_k(r)$ and $V_k(0)$ described in \eqref{def Vpm} and \eqref{def V10 V20}, respectively,
\item the convolution operator $\mathcal{Q}\ast\cdot$ with even kernel $\mathcal{Q}$ is defined through
\begin{align}\label{dcp calDeb0}
\forall\, j\in\mathbb{Z}_{\mathtt{m}}^*,\quad \mathcal{Q}\ast \mathbf{e}_{j}\triangleq \tfrac{\mathtt{r}_{j}(b)}{|j|}\mathbf{e}_{j},
\end{align}
with $\mathtt{r}_{j}(b)$ being introduced in Corollary \ref{coro-equilib-freq},
\item for $k,n\in\{1,2\},$ the operator $\mathcal{T}_{\mathscr{K}_{k,n}}({r})$ is an integral operator of the form \eqref{Top-op1} whose kernel
$\mathscr{K}_{k,n}(r)$ is $\mathtt{m}$-fold reversibility preserving and satisfies the estimates: for any $s\geqslant s_0$
\begin{equation*}
\|\mathscr{K}_{k,n}(r)\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|r\|_{s+1}^{q,\gamma,\mathtt{m}}
\end{equation*}
and
\begin{equation*}
\|\Delta_{12}\mathscr{K}_{k,n}(r)\|_{s}^{q,\gamma,\m}\lesssim\|\Delta_{12}r\|_{s+1}^{q,\gamma,\m}+\|\Delta_{12}r\|_{s_0+1}^{q,\gamma,\m}\max_{\ell\in\{1,2\}}\|r_\ell\|_{s+1}^{q,\gamma,\m}.
\end{equation*}
\end{enumerate}
\end{prop}
\begin{proof}
According to \eqref{symp1}, \eqref{symp2} and \eqref{symp3}, one has
\begin{equation}\label{id:XKXH}
\mathcal{J}\nabla K({r})=\mathbf{Q}^{-1}\mathcal{J}(\nabla H)(\mathbf{Q}{r}).
\end{equation}
Differentiating this identity with respect to $r$ in the direction $\rho$ and using Lemma \ref{lem lin op 1 DCE} lead to
\begin{align}\label{drjk}
\nonumber d_r \big(\mathcal{J}\nabla K({r})\big)\rho&=\mathbf{Q}^{-1}\big(d_r \big(\mathcal{J}\nabla H\big)(\mathbf{Q}{r})\big)\mathbf{Q}\rho
\\ \nonumber &=\mathbf{Q}^{-1}\mathcal{J} \mathbf{M}_{\mathbf{Q}{r}}\mathbf{Q}\rho
\\ &=\mathbf{Q}^{-1}\mathcal{J} \mathbf{M}_0\mathbf{Q}\rho +\mathbf{Q}^{-1}\mathcal{J}\big( \mathbf{M}_{\mathbf{Q}{r}}-\mathbf{M}_0\big)\mathbf{Q}\rho.
\end{align}
By virtue of \eqref{linerized-op-diag}, \eqref{ASYFR1+}, \eqref{def V10 V20} and recalling \eqref{def Hilbert} and \eqref{dcp calDeb0} we may write
\begin{equation}\label{p-1jl0p}
\mathbf{Q}^{-1}\mathcal{J} \mathbf{M}_0\mathbf{Q}=-\begin{pmatrix}
V_1(0)\partial_\theta+\tfrac{1}{2}\mathcal{H}+\partial_\theta\mathcal{Q}\ast\cdot & 0\\
0 & V_2(0)\partial_\theta-\tfrac{1}{2}\mathcal{H}-\partial_\theta\mathcal{Q}\ast\cdot
\end{pmatrix}.
\end{equation}
On the other hand, from Lemma \ref{lem lin op 1 DCE} and Lemma \ref{lem lin op 2 DCE} we deduce that
\begin{equation}\label{lr-l0}
\mathbf{M}_{\mathbf{Q}{r}}-\mathbf{M}_{0}=\begin{pmatrix}
-f_{1}(r) & 0\\
0 & f_{2}(r)
\end{pmatrix}+\begin{pmatrix}
-L_{1,1}(\mathbf{Q}{r})+\mathcal{K}_1\ast\cdot & L_{1,2}(\mathbf{Q}r)-\mathcal{K}_b\ast\cdot \\
L_{2,1}(\mathbf{Q}r)-\mathcal{K}_b\ast\cdot & -L_{2,2}(\mathbf{Q}r)+\mathcal{K}_1\ast\cdot
\end{pmatrix}
\end{equation}
with
\begin{align}\label{f0+-}
\forall k\in\{1,2\},\quad f_{k}(r)\triangleq V_{k}(\mathbf{Q}r)-V_k(0).
\end{align}
Notice that if $r$ satisfies \eqref{reversibility condition r} and \eqref{m-fold symmetry r} then,
by virtue of \eqref{sym-m Vpm}, one gets that $f_k(r)$ itself satisfies the same symmetries,
\begin{align}
f_k(r)(-\varphi,-\theta)=f_k(r)(\varphi,\theta)= f_k(r)\big(\varphi,\theta+\tfrac{2\pi}{\m}\big).\label{sym f0}
\end{align}
We shall now turn to the quantitative estimates. For this aim, we shall first give the following decompositions. According to \eqref{Akn}, we can write
\begin{align*}
A_{k,k}({r})(\varphi,\theta,\eta)&=\left((R_{k}(\varphi,\theta)-R_{k}(\varphi,\eta))^{2}+4R_{k}(\varphi,\theta)R_{k}(\varphi,\eta)\sin^2\left(\tfrac{\eta-\theta}{2}\right)\right)^{\frac{1}{2}}\\
&=2b_k\left|\sin\Big(\tfrac{\eta-\theta}{2}\Big)\right|\Bigg(\bigg(\frac{R_{k}(\varphi,\theta)-R_{k}(\varphi,\eta)}{2b_k\sin\big(\tfrac{\eta-\theta}{2}\big)}\bigg)^{2}+\frac{1}{b_k^2}R_{k}(\varphi,\theta)R_{k}(\varphi,\eta)\Bigg)^{\frac{1}{2}}\\
&\triangleq 2b_k\left|\sin\left(\tfrac{\eta-\theta}{2}\right)\right|v_{k}({r})(\varphi,\theta,\eta).
\end{align*}
The function $v_k(r)$ is smooth with respect to each variable and with respect to $r$. In addition, $v_k(0)=1.$ An application of Lemma \ref{lem triche} and Lemma \ref{lem funct prop}-(iii) gives
\begin{equation}\label{vk-e}
\|v_{k}(r)-1\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|r\|_{s+1}^{q,\gamma,\mathtt{m}},\qquad\|\Delta_{12}v_{k}\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\Delta_{12}r\|_{s+1}^{q,\gamma,\mathtt{m}}+\|\Delta_{12}r\|_{s_0+1}^{q,\gamma,\mathtt{m}}\max_{\ell\in\{1,2\}}\|r_{\ell}\|_{s+1}^{q,\gamma,\mathtt{m}}.
\end{equation}
We can also write
\begin{align*}
A_{k,3-k}^2(r)(\varphi,\theta,\eta)&=R_{k}^2(\varphi,\theta)+R_{3-k}^2(\varphi,\eta)-2R_{k}(\varphi,\theta)R_{3-k}(\varphi,\eta)\cos(\eta-\theta)\\
&=A_{k,3-k}^2(0)(\varphi,\theta,\eta)\big(1+h_k(r)(\varphi,\theta,\eta)\big),
\end{align*}
where
$$h_k(r)(\varphi,\theta,\eta)\triangleq 2\frac{r_{k}(\varphi,\theta)+r_{3-k}(\varphi,\eta)-\cos(\eta-\theta)\big(R_{k}(\varphi,\theta)R_{3-k}(\varphi,\eta)-b_kb_{3-k}\big)}{b_{k}^2+b_{3-k}^2-2b_kb_{3-k}\cos(\eta-\theta)}\cdot$$
By the composition laws in Lemma \ref{lem funct prop}-(iii), the function $h_k$ satisfies
\begin{equation}\label{h-e}
\|h_k(r)\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|r\|_{s}^{q,\gamma,\mathtt{m}},\qquad\|\Delta_{12}h_k\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\Delta_{12}r\|_{s}^{q,\gamma,\mathtt{m}}.
\end{equation}
According to the foregoing decompositions and the estimates \eqref{vk-e}, \eqref{h-e} together with \eqref{f0+-}, \eqref{def Vpm}, \eqref{def Vkn}, the product and composition laws in Lemma \ref{lem funct prop}-(iii) imply
\begin{equation}\label{es-f0-00}
\|f_k(r)\|_{s}^{q,\gamma,\m}\lesssim\|r\|_{s+1}^{q,\gamma,\m}
\end{equation}
and
\begin{equation}\label{es-f0-diff}
\|\Delta_{12}f_k\|_{s}^{q,\gamma,\m}\lesssim\|\Delta_{12}r\|_{s+1}^{q,\gamma,\m}+\|\Delta_{12}r\|_{s_0+1}^{q,\gamma,\m}\max_{\ell\in\{1,2\}}\|r_\ell\|_{s+1}^{q,\gamma,\m}.
\end{equation}
In view of \eqref{def mathbfLkn} and \eqref{def mathcalKkn}, we also have the following decompositions for $k\in\{1,2\}$
\begin{equation}\label{lkk-lk3-k}
L_{k,k}(\mathbf{Q}r)=\mathcal{K}_1\ast\cdot+\mathcal{T}_{\mathbb{K}_{k,k}}(r)\qquad\textnormal{and}\qquad L_{k,3-k}(\mathbf{Q}r)=\mathcal{K}_b\ast\cdot+\mathcal{T}_{\mathbb{K}_{k,3-k}}(r),
\end{equation}
where $\mathcal{T}_{\mathbb{K}_{k,k}}(r)$ and $\mathcal{T}_{\mathbb{K}_{k,3-k}}(r)$ are integral operators with smooth kernels
$$\mathbb{K}_{k,k}(r)(\varphi,\theta,\eta)\triangleq \log\big(v_{k}(\mathbf{Q}r)(\varphi,\theta,\eta)\big)$$
and
$$\mathbb{K}_{k,3-k}(r)(\varphi,\theta,\eta)\triangleq \tfrac{1}{2}\log\big(1+h_k(\mathbf{Q}r)(\varphi,\theta,\eta)\big).$$
Moreover, if $r$ satisfies \eqref{reversibility condition r} and \eqref{m-fold symmetry r} then, for all $k,n\in\{1,2\}$, one can easily check, using in particular \eqref{sym Akn}, that
\begin{align}
\mathbb{K}_{k,n}(r)(-\varphi,-\theta,-\eta)=\mathbb{K}_{k,n}(r)(\varphi,\theta,\eta)= \mathbb{K}_{k,n}(r)\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}},\eta+\tfrac{2\pi}{\mathtt{m}}\big).\label{sym-m Kkn}
\end{align}
It is clear that \eqref{PP-1}-\eqref{e-P1P2} imply the continuity of $\mathbf{Q}$ on $\mathbf{H}_{\mathtt{m}}^s.$ Thus, applying the composition laws in Lemma \ref{lem funct prop}-(iii) together with \eqref{vk-e} and \eqref{h-e}, we infer
\begin{equation}\label{est-Kkn}
\forall (k,n)\in\{1,2\}^2,\quad\|\mathbb{K}_{k,n}(r)\|_{s}^{q,\gamma,\m}\lesssim\|r\|_{s+1}^{q,\gamma,\m}
\end{equation}
and
\begin{equation}\label{est-Kkn-diff}
\forall (k,n)\in\{1,2\}^2,\quad\|\Delta_{12}\mathbb{K}_{k,n}(r)\|_{s}^{q,\gamma,\m}\lesssim\|\Delta_{12}r\|_{s+1}^{q,\gamma,\m}+\|\Delta_{12}r\|_{s_0+1}^{q,\gamma,\m}\max_{\ell\in\{1,2\}}\|r_\ell\|_{s+1}^{q,\gamma,\m}.
\end{equation}
Putting together \eqref{lr-l0} and \eqref{lkk-lk3-k} we deduce that
$$\mathbf{Q}^{-1}\mathcal{J}\big( \mathbf{M}_{\mathbf{Q}{r}}-\mathbf{M}_0\big)\mathbf{Q}=-\mathbf{Q}^{-1}\partial_\theta\begin{pmatrix}
f_1(r)& 0\\
0 & f_2(r)
\end{pmatrix}\mathbf{Q}-\mathbf{Q}^{-1}\partial_\theta\begin{pmatrix}
\mathcal{T}_{\mathbb{K}_{1,1}}(r) & -\mathcal{T}_{\mathbb{K}_{1,2}}(r) \\
\mathcal{T}_{\mathbb{K}_{2,1}}(r) & -\mathcal{T}_{\mathbb{K}_{2,2}}(r)
\end{pmatrix}\mathbf{Q}.
$$
Combining the last identity with \eqref{drjk} and \eqref{p-1jl0p} we find \eqref{defLr2} with
\begin{align*}
\begin{pmatrix}
\mathcal{T}_{\mathscr{K}_{1,1}}(r)& \mathcal{T}_{\mathscr{K}_{1,2}}(r)\\
\mathcal{T}_{\mathscr{K}_{2,1}}(r) & \mathcal{T}_{\mathscr{K}_{2,2}}(r)
\end{pmatrix}&\triangleq \mathbf{Q}^{-1}\begin{pmatrix}
f_1(r)& 0\\
0 & f_2(r)
\end{pmatrix}\mathbf{Q}-\begin{pmatrix}
f_1(r) & 0\\
0 & f_2(r)
\end{pmatrix}+\mathbf{Q}^{-1}\begin{pmatrix}
\mathcal{T}_{\mathbb{K}_{1,1}}(r) & -\mathcal{T}_{\mathbb{K}_{1,2}}(r) \\
\mathcal{T}_{\mathbb{K}_{2,1}}(r) & -\mathcal{T}_{\mathbb{K}_{2,2}}(r)
\end{pmatrix}\mathbf{Q}.
\end{align*}
By virtue of the preceding identity, \eqref{PP-1}, \eqref{e-P1P2}, \eqref{es-f0-00} and \eqref{est-Kkn}, together with Lemma \ref{iter-kerns}, we find through straightforward computations that the kernels $\mathscr{K}_{k,n}(r)$ satisfy the following estimate
\begin{equation*}
\forall (k,n)\in\{1,2\}^2,\quad\|\mathscr{K}_{k,n}(r)\|_{s}^{q,\gamma,\m}\lesssim\|r\|_{s+1}^{q,\gamma,\m}.
\end{equation*}
A similar argument, using in particular \eqref{est-Kkn-diff} and \eqref{es-f0-diff}, implies
\begin{equation*}
\forall (k,n)\in\{1,2\}^2,\quad\|\Delta_{12}\mathscr{K}_{k,n}(r)\|_{s}^{q,\gamma,\m}\lesssim\|\Delta_{12}r\|_{s+1}^{q,\gamma,\m}+\|\Delta_{12}r\|_{s_0+1}^{q,\gamma,\m}\max_{\ell\in\{1,2\}}\|r_\ell\|_{s+1}^{q,\gamma,\m}.
\end{equation*}
As for the symmetry property, it can be obtained easily from the structure of the kernel. Actually, one may check from \eqref{sym-m Kkn}, \eqref{sym f0} and \eqref{sym Pk} that
\begin{align}
r(-\varphi,-\theta)=r(\varphi,\theta)\quad&\Longrightarrow\quad\mathscr{K}_{k,n}(r)(-\varphi,-\theta,-\eta)=\mathscr{K}_{k,n}(r)(\varphi,\theta,\eta),\label{sym scrKkn}\\
r\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}}\big)=r(\varphi,\theta)\quad&\Longrightarrow\quad\mathscr{K}_{k,n}(r)\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}},\eta+\tfrac{2\pi}{\mathtt{m}}\big)=\mathscr{K}_{k,n}(r)(\varphi,\theta,\eta).\label{sym-m scrKkn}
\end{align}
The proof of the desired results is now complete.
\end{proof}
\begin{remark}\label{remark-lin-eq-eq}
The linearized equation of \eqref{nonlinear-func0} at $r=0$ takes the form
\begin{equation}\label{Edc Ham eq0}
\partial_t \rho=\mathcal{J}\nabla K_{\mathbf{L}_0}(\rho),\qquad \rho=(\rho_1,\rho_2)\in L^2_{\mathtt{m}}(\mathbb{T})\times L^2_{\mathtt{m}}(\mathbb{T}),
\end{equation}
where $\mathcal{J}$ is defined in \eqref{def calJ},
$K_{\mathbf{L}_0}$ is the quadratic Hamiltonian
\begin{equation}\label{def KL}
K_{\mathbf{L}_0}(\rho)\triangleq \tfrac{1}{2}\big\langle\mathbf{L}_0\rho,\rho\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}=-\sum_{j\in\mathbb{Z}_{\mathtt{m}}^*}\left(\tfrac{\Omega_{j,1}(b)}{2j}|\rho_{j,1}|^2-\tfrac{\Omega_{j,2}(b)}{2j}|\rho_{j,2}|^2\right)
\end{equation}
and $\mathbf{L}_0$ is the operator defined by \eqref{def Lbmat}. Thus, real-valued oscillating solutions of the linearized contour dynamics equation \eqref{Edc Ham eq0} are given by
$$\rho(t,\theta)=\sum_{j\in\mathbb{Z}_\mathtt{m}^*}A_j \begin{pmatrix}
1 \\
0
\end{pmatrix}e^{-\ii \left(\Omega_{1,j}(b)t- j\theta\right)}+B_j\begin{pmatrix}
0 \\
1
\end{pmatrix}e^{-\ii \left(\Omega_{2,j}(b)t-j\theta\right)},$$
with $\overline{A_j}=A_{-j},\,\overline{B_j}=B_{-j}$.
\end{remark}
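As a purely numerical sanity check (not part of the analysis), one can verify the reality of the superposition above: under $\overline{A_j}=A_{-j}$ and the odd symmetry $\Omega_{-j,k}(b)=-\Omega_{j,k}(b)$ established in Lemma \ref{lem-asym} below, the $j$ and $-j$ modes are complex conjugates, so the sum is real. The sketch below transcribes the explicit frequency formula $\tfrac{\Omega_{j,k}(b)}{j}=\Omega+\tfrac{1-b^2}{4}+\tfrac{(-1)^{k+1}}{2}\sqrt{-\Delta_j(b)/j^2}$ used in the proof of Lemma \ref{lem-asym}; the sample values $\Omega=1$, $b=0.3$ are assumed admissible.

```python
import cmath
import math
import random

def minus_Delta_over_j2(b, j):
    # -Delta_j(b)/j^2 = ((1-b^2)/2 - 1/j)^2 - b^(2j)/j^2  (cf. the proof of Lemma lem-asym, point 2)
    return ((1 - b**2) / 2 - 1 / j) ** 2 - b ** (2 * j) / j ** 2

def Omega_jk(b, Omega, j, k):
    # Omega_{j,k}(b)/j = Omega + (1-b^2)/4 + (-1)^(k+1)/2 * sqrt(-Delta_j/j^2),
    # extended to j < 0 by the odd symmetry Omega_{-j,k} = -Omega_{j,k}.
    s = 1 if j > 0 else -1
    j = abs(j)
    return s * (j * (Omega + (1 - b**2) / 4)
                + (-1) ** (k + 1) * (j / 2) * math.sqrt(minus_Delta_over_j2(b, j)))

b, Omega = 0.3, 1.0  # illustrative values, assumed admissible
rng = random.Random(0)
A = {j: complex(rng.random(), rng.random()) for j in (3, 6, 9)}
A.update({-j: A[j].conjugate() for j in (3, 6, 9)})  # reality condition conj(A_j) = A_{-j}

def rho1(t, theta):
    # first component of the linearized solution, summed over the active modes
    return sum(A[j] * cmath.exp(-1j * (Omega_jk(b, Omega, j, 1) * t - j * theta)) for j in A)

samples = [rho1(t, th) for t in (0.0, 0.7, 1.3) for th in (0.0, 1.1, 2.9)]
print("max imaginary part:", max(abs(z.imag) for z in samples))  # should be ~ machine precision
```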
\subsection{Geometric structure of the equilibrium frequencies}
This section is devoted to some useful properties of the equilibrium frequencies. We first discuss their monotonicity and prove some useful bounds. Then we turn to their non-degeneracy through the study of the transversality conditions. The latter are crucial for the measure estimates of the final Cantor set giving rise to quasi-periodic solutions at the linear and nonlinear levels. We have the following lemma.
\begin{lem}\label{lem-asym}
Let $\Omega>0$ and $ \mathtt{m}^*, b^*$ be defined as in Corollary $\ref{coro-equilib-freq}$. Then the following holds true.
\begin{enumerate}
\item For all $|j|\geqslant \mathtt{m}^*$ and $k\in\{1,2\}$,
$
\Omega_{-j,k}(b)=-\Omega_{j,k}(b).
$
\item The sequence $\Big(-\tfrac{\Delta_{j}(b)}{j^2}\Big)_{j\geqslant \mathtt{m}^*}$ is positive increasing. Recall that $\Delta_{j}(b)$ was defined in \eqref{def delta j}.
\item The sequence $\Big(\tfrac{\Omega_{j,1}(b)}{j}\Big)_{j\geqslant\mathtt{m}^*}$ is positive increasing and the sequence $\Big(\tfrac{\Omega_{j,2}(b)}{j}\Big)_{j\geqslant \mathtt{m}^*}$ is positive decreasing. Moreover, for all $|j|\geqslant \mathtt{m}^*$ and $k\in\{1,2\}$ we have
\begin{equation}\label{lim omega jk}
\lim_{j\to\infty}\tfrac{\Omega_{j,k}(b)}{j}= \Omega +(2-k)\tfrac{1-b^2}{2}
\end{equation}
and
\begin{equation*}
\big|\Omega_{j,k}(b)\big|\geqslant \Omega |j|.
\end{equation*}
\item For all $\mathtt{m}\geqslant \mathtt{m}^*$, there exists $\Omega_{\mathtt{m}}^*=\Omega^*(b^*,\mathtt{m})>0$ satisfying
\begin{equation}\label{lim omega-star}
\lim_{\mathtt{m}\to \infty}\Omega_{\mathtt{m}}^*=0
\end{equation}
such that for all $\Omega>\Omega^*_{\mathtt{m}}$ the sequence $\big(\Omega_{j,2}(b)\big)_{j\geqslant \mathtt{m}}$ is increasing.
\item There exists $c>0$ such that, for all $k\in\{1,2\}$,
\begin{equation*}
\forall\, \Omega>\Omega^*_{\mathtt{m}},\quad \forall\, b\in[0,b^*],\quad \forall\, |j|\geqslant \mathtt{m}^*,\quad \forall\, |j'|\geqslant \mathtt{m}^*,\quad \big|\Omega_{j,k}(b)-\Omega_{j',k}(b)\big|\geqslant c |j- j'|.
\end{equation*}
\item Given ${q}_0\in\N$, there exists $C>0$ such that, for all $k\in\{1,2\}$,
\begin{equation*}
\forall \, |j|\geqslant \mathtt{m}^*,\quad \forall \, |j'|\geqslant \mathtt{m}^*,\quad \max_{q\in\llbracket 0,{q}_0\rrbracket} \sup_{b\in [0,b^*]}\Big|\partial_b^q\big(\Omega_{j,k}(b)-\Omega_{j',k}(b)\big)\Big|\leqslant C |j-j'|.
\end{equation*}
\end{enumerate}
\end{lem}
\begin{proof}
{\bf 1.} It follows immediately from \eqref{omegajk}.\\
{\bf 2.} In order to study the discrete function $j\mapsto -\tfrac{\Delta_{j}(b)}{j^2}$ we shall
consider its continuous version
$$\forall x\geqslant \mathtt{m}^*,\quad g(x)\triangleq -\tfrac{\Delta_{x}(b)}{x^2}=\tfrac{1}{x^2}\big(\tfrac{1-b^2}{2}x-1\big)^2-\tfrac{b^{2x}}{x^2}\cdot$$
Differentiating with respect to $x$ and using \eqref{nOme3}, we conclude that
\begin{align*}
g'(x)&= \tfrac{2}{x^3}\big(\tfrac{1-b^2}{2}x-1+b^{2x}\big)-\tfrac{2b^{2x}}{x^2}\log (b)>0.
\end{align*}
Thus, the mapping $j\mapsto -\Delta_{j}(b)/j^2$ is strictly increasing. \\
{\bf 3.} The monotonicity of the sequences $\Big( \tfrac{\Omega_{j,k}(b)}{j}\Big)_{j\geqslant\mathtt{m}^*} $ follows from the identity
$$
\tfrac{\Omega_{j,k}(b)}{j}=\big(\Omega+\tfrac{1-b^2}{4}\big)+ \tfrac{(-1)^{k+1}}{2}\sqrt{\tfrac{-\Delta_{j}(b)}{j^2}}
$$
and the second point. Moreover, from the last identity we also conclude that
$$
\tfrac{\Omega_{j,1}(b)}{j}\geqslant \Omega.
$$
Next, from \eqref{ASYFR1+}-\eqref{ASYFR1-} we obtain \eqref{lim omega jk}.
Since $\Big( \tfrac{\Omega_{j,2}(b)}{j}\Big)_{j\geqslant\mathtt{m}^*} $ is decreasing, we infer from \eqref{lim omega jk} that
$$\tfrac{\Omega_{j,2}(b)}{j}\geqslant \lim_{j\to\infty} \tfrac{\Omega_{j,2}(b)}{j}=\Omega>0.$$
This ends the proof of the third point.\\
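These monotonicity properties are easy to confirm numerically. The following sketch (an illustration, not a substitute for the proof) evaluates $g(j)=-\Delta_j(b)/j^2$ and $\Omega_{j,k}(b)/j$ for the assumed sample values $\Omega=1$, $b=0.3$, and checks points 2--3 together with the limit \eqref{lim omega jk} and the lower bound $\Omega_{j,k}(b)\geqslant\Omega j$.

```python
import math

b, Omega = 0.3, 1.0  # illustrative values, assumed admissible

def g(b, x):
    # g(x) = -Delta_x(b)/x^2 = ((1-b^2)/2 - 1/x)^2 - b^(2x)/x^2
    return ((1 - b**2) / 2 - 1 / x) ** 2 - b ** (2 * x) / x ** 2

def omega_over_j(b, Omega, j, k):
    # Omega_{j,k}(b)/j = Omega + (1-b^2)/4 + (-1)^(k+1)/2 * sqrt(g(j))
    return Omega + (1 - b**2) / 4 + (-1) ** (k + 1) / 2 * math.sqrt(g(b, j))

js = range(3, 200)
g_vals = [g(b, j) for j in js]
assert all(x > 0 for x in g_vals)                      # -Delta_j(b) > 0
assert all(y > x for x, y in zip(g_vals, g_vals[1:]))  # point 2: increasing

w1 = [omega_over_j(b, Omega, j, 1) for j in js]
w2 = [omega_over_j(b, Omega, j, 2) for j in js]
assert all(y > x for x, y in zip(w1, w1[1:]))          # point 3: (Omega_{j,1}/j) increasing
assert all(y < x for x, y in zip(w2, w2[1:]))          # point 3: (Omega_{j,2}/j) decreasing

# limits (lim omega jk): Omega + (2-k)(1-b^2)/2, and the bound Omega_{j,k}/j >= Omega
assert abs(w1[-1] - (Omega + (1 - b**2) / 2)) < 1e-2
assert abs(w2[-1] - Omega) < 1e-2
assert all(x >= Omega for x in w1 + w2)
```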
{\bf 4.} Consider the continuous extension of the discrete mapping $j\mapsto \Omega_{j,2}(b)$,
\begin{align*}
\forall (b,x)\in[0,b^*]\times(\mathtt{m}^*,\infty),\quad h(b,x)&\triangleq \Omega x+\tfrac{1-b^2}{4}x-\tfrac{1}{2}\sqrt{-\Delta_{x}(b)}.
\end{align*}
Differentiating with respect to $x$ and using \eqref{def delta j} and \eqref{nOme3} lead to
\begin{align*}
\partial_x h(b,x)
&=\tfrac{b^{2x}\log (b)-\tfrac{1-b^2}{2}\big(\tfrac{1-b^2}{2}x-1-\sqrt{-\Delta_{x}(b)}\big)+2\Omega\sqrt{-\Delta_{x}(b)}}{2\sqrt{-\Delta_{x}(b)}}\\
&=\tfrac{b^{2x}\log (b)+\big(\tfrac{1-b^2}{2}x-1\big)\Big[2\Omega \sqrt{1-b^{2x}\big(\tfrac{1-b^2}{2}x-1\big)^{-2}}+\tfrac{1-b^2}{2}\Big(\sqrt{1-b^{2x}\big(\tfrac{1-b^2}{2}x-1\big)^{-2}}-1\Big)\Big]}{2\sqrt{-\Delta_{x}(b)}}\cdot
\end{align*}
According to \eqref{est dec} we have, for all $b<b^*$ and $x\geqslant\mathtt{m}\geqslant\mathtt{m}^*$,
\begin{align}\label{est dec2}
0<\sqrt{1-(b^*)^{2\mathtt{m}}\Big(\tfrac{1-(b^*)^2}{2}\mathtt{m}-1\Big)^{-2}}\leqslant\sqrt{1-b^{2x}\Big(\tfrac{1-b^2}{2}x-1\Big)^{-2}}<1.
\end{align}
Thus, in view of \eqref{nOme3} and \eqref{est dec2} we get
\begin{align*}
2\sqrt{-\Delta_{x}(b)} \partial_x h(b,x)
\geqslant b^{2x}\log (b)+\big(\tfrac{1-b^2}{2}x-1\big)\bigg[&2\Omega \sqrt{1-(b^*)^{2\mathtt{m}}\Big(\tfrac{1-(b^*)^2}{2}\mathtt{m}-1\Big)^{-2}}\\ &+\tfrac{1-(b^*)^2}{2}\Big(\sqrt{1-(b^*)^{2\mathtt{m}}\Big(\tfrac{1-(b^*)^2}{2}\mathtt{m}-1\Big)^{-2}}-1\Big)\bigg].
\end{align*}
Setting
\begin{equation}\label{expr Omega star}
\Omega^*_{\mathtt{m}}\triangleq \tfrac{\tfrac{1-(b^*)^2}{2}\Big(1-\sqrt{1-(b^*)^{2\mathtt{m}}\big(\tfrac{1-(b^*)^2}{2}\mathtt{m}-1\big)^{-2}}\Big)-\displaystyle\min_{b\in [0,b^*]}\big(b^{\mathtt{m}}\log (b)\big)}{2 \sqrt{1-(b^*)^{2\mathtt{m}}\big(\tfrac{1-(b^*)^2}{2}\mathtt{m}-1\big)^{-2}}}>0
\end{equation}
gives
\begin{align*}
\forall \Omega>\Omega^*_{\mathtt{m}}, \quad 2\sqrt{-\Delta_{x}(b)} \partial_x h(b,x)>\big(b^{x}+1-\tfrac{1-b^2}{2}x\big)\min_{b\in [0,b^*]} \big(b^{\mathtt{m}}\log (b)\big)\stackrel{\eqref{def:f}}=f(b,x)\min_{b\in [0,b^*]} \big(b^{\mathtt{m}}\log (b)\big) .
\end{align*}
Then by \eqref{deltaj4} we conclude that
\begin{align*}
\forall \Omega>\Omega^*_{\mathtt{m}}, \quad \partial_x h(b,x)>0 .
\end{align*}
Taking the limit $\mathtt{m}\to\infty$ in \eqref{expr Omega star} gives immediately \eqref{lim omega-star}. This ends the proof of the fourth point.\\
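The monotonicity of $\big(\Omega_{j,2}(b)\big)_{j\geqslant\mathtt{m}}$ itself can likewise be illustrated numerically. In the sketch below we take the sample values $\Omega=1$, $b=0.3$ and assume this choice satisfies $\Omega>\Omega^*_{\mathtt{m}}$.

```python
import math

b, Omega = 0.3, 1.0  # illustrative; we assume Omega > Omega*_m for these values

def Omega_j2(b, Omega, j):
    # Omega_{j,2}(b) = j(Omega + (1-b^2)/4) - (1/2) sqrt(-Delta_j(b)),
    # with -Delta_j(b) = ((1-b^2)j/2 - 1)^2 - b^(2j)
    return (j * (Omega + (1 - b**2) / 4)
            - 0.5 * math.sqrt(((1 - b**2) * j / 2 - 1) ** 2 - b ** (2 * j)))

vals = [Omega_j2(b, Omega, j) for j in range(3, 200)]
assert all(y > x for x, y in zip(vals, vals[1:]))  # point 4: (Omega_{j,2}(b))_j increasing
```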
{\bf 5.} Since the maps $j\mapsto \Omega_{j,k}(b)$, $k\in\{1,2\}$, are odd, it is enough to check the result for $j\in\mathbb{N}^*$. The estimate on the sum $|\Omega_{j,k}(b)+\Omega_{j',k}(b)|$ easily follows from the positivity of the sequences $\big(\Omega_{j,k}(b)\big)_{j\geqslant \mathtt{m}^*}$, $k\in\{1,2\}$, and the third point, namely,
$$
\forall b\in[0,b^*],\quad\big|\Omega_{j,k}(b)+\Omega_{j',k}(b)\big|=\Omega_{j,k}(b)+\Omega_{j',k}(b)\geqslant \Omega(j+j').
$$
Next we shall prove the estimate on the difference. In view of \eqref{ASYFR1+}, for all $j, j'\geqslant\mathtt{m}^*$ with $j\neq j'$, one has
\begin{align}\label{omej-jp}
\Omega_{j,k}(b)- \Omega_{j',k}(b)&=\Big(\Omega+(2-k)\tfrac{1-b^2}{2}\Big)(j-j')+(-1)^{k+1} \big(\mathtt{r}_{j}(b)-\mathtt{r}_{j'}(b)\big).
\end{align}
It follows that
$$\big|\Omega_{j,k}(b)- \Omega_{j',k}(b)\big|\geqslant\Omega |j-j'|- \sup_{b\in[0,b^*]}\big|\mathtt{r}_{j}(b)-\mathtt{r}_{j'}(b)\big|.$$
Using Taylor's formula combined with \eqref{ASYFR1-} gives, for any $b\in[0,b^*],$
\begin{align*}
\big| \mathtt{r}_{j}(b)-\mathtt{r}_{j'}(b)\big| &\leqslant C_0\Big| \int_{j'}^{j}\frac{dx}{x^{2}}\Big|\\ &\leqslant C_0 \tfrac{|j-j'|}{j j'}\cdot
\end{align*}
This implies that
\begin{align}\label{diff-oj-ojp}
\big| \Omega_{j,k}(b)- \Omega_{j',k}(b)\big|
&\geqslant \Big(\Omega-\tfrac{C_0}{jj'}\Big)|j-j'|.
\end{align}
Therefore, there exists $N$ such that if $jj'> N$ the desired inequality holds. For $jj'\leqslant N$ we shall use the one-to-one property of $j\mapsto \Omega_{j,k}(b)$ combined with the continuity of $b\in[0,b^*]\mapsto \Omega_{j,k}(b)-\Omega_{j^\prime,k}(b)$ to get, for any $j\neq j^\prime\in\llbracket \mathtt{m}^*,N\rrbracket$,
$$
\forall \Omega>\Omega^*_{\mathtt{m}},\quad \inf_{b\in[0,b^*]}\big|\Omega_{j,k}(b)-\Omega_{j^\prime,k}(b)\big|\triangleq c_{jj^\prime}^k> 0.
$$
Consequently
$$
\inf_{\substack{j\neq j^\prime\in\llbracket \mathtt{m}^*,N\rrbracket\\ b\in[0,b^*]}}\big|\Omega_{j,k}(b)-\Omega_{j^\prime,k}(b)\big|=\inf_{j\neq j^\prime\in\llbracket \mathtt{m}^*,N\rrbracket}c_{jj^\prime}^k>0.
$$
Taking
$$c\triangleq\frac1N\min\left({\inf_{j\neq j^\prime\in\llbracket \mathtt{m}^*,N\rrbracket}c_{jj^\prime}^k}\,\,\,,\,\,\, N\Omega-{C_0}\right)$$
and combining the last inequality with \eqref{diff-oj-ojp} we get the desired result.\\
{\bf 6.} Differentiating \eqref{omej-jp} gives
\begin{align*}
\partial_b\big(\Omega_{j,k}(b)- \Omega_{j',k}(b)\big)&=-(2-k)b(j-j')+(-1)^{k+1} \partial_b\big(\mathtt{r}_{j}(b)-\mathtt{r}_{j'}(b)\big),\\
\partial_b^2\big( \Omega_{j,k}(b)- \Omega_{j',k}(b)\big)&=-(2-k)(j-j')+(-1)^{k+1}\partial_b^2 \big(\mathtt{r}_{j}(b)-\mathtt{r}_{j'}(b)\big),\\
\forall q\geqslant 3,\quad \partial_b^q\big( \Omega_{j,k}(b)- \Omega_{j',k}(b)\big)&=(-1)^{k+1} \big(\partial_b^q\mathtt{r}_{j}(b)-\partial_b^q \mathtt{r}_{j'}(b)\big).
\end{align*}
By the mean value theorem combined with \eqref{ASYFR1-} we conclude the proof of Lemma \ref{lem-asym}.
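The uniform spectral gap of point 5 can also be probed numerically. The following sketch checks that the incremental ratios $\big|\Omega_{j,k}(b)-\Omega_{j',k}(b)\big|/|j-j'|$ stay bounded below for the assumed sample values $\Omega=1$, $b=0.3$; the threshold $0.5$ is purely illustrative.

```python
import math
from itertools import combinations

b, Omega = 0.3, 1.0  # illustrative; Omega assumed > Omega*_m

def Omega_jk(b, Omega, j, k):
    # Omega_{j,k}(b) = j(Omega + (1-b^2)/4) + (-1)^(k+1)/2 * sqrt(-Delta_j(b))
    d = ((1 - b**2) * j / 2 - 1) ** 2 - b ** (2 * j)   # -Delta_j(b)
    return j * (Omega + (1 - b**2) / 4) + (-1) ** (k + 1) * 0.5 * math.sqrt(d)

for k in (1, 2):
    ratios = [abs(Omega_jk(b, Omega, j, k) - Omega_jk(b, Omega, jp, k)) / abs(j - jp)
              for j, jp in combinations(range(3, 80), 2)]
    # uniform gap |Omega_{j,k} - Omega_{j',k}| >= c|j - j'|, here with c ~ 0.5
    assert min(ratios) > 0.5
```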
\end{proof}
\paragraph{Non-degeneracy and transversality.} Through the rest of this section we shall follow the approach developed in \cite{BBM11,BM18} to discuss the non-degeneracy and the transversality properties of the linear frequencies. Let us first recall the definition of the non-degeneracy for vector-valued functions.
\begin{defin}\label{def-deg}
Let $N\in\mathbb{N}^*$. A function $f \triangleq (f_1, \ldots , f_N ) : [\alpha_1,\alpha_2] \to \mathbb{R}^N$, with $\alpha_1<\alpha_2$, is called non-degenerate if, for any vector $c \triangleq (c_1,\ldots,c_N) \in \mathbb{R}^N \backslash \{0\}$, the scalar function $f \cdot c = f_1c_1 + \cdots+ f_Nc_N$ is not identically zero on the whole interval $[\alpha_1,\alpha_2]$.
\end{defin}
We have the following result.
\begin{lem}\label{lem non-deg}
Let $\Omega>0$ and $ \mathtt{m}^*, b^*$ be defined as in Corollary $\ref{coro-equilib-freq}.$
Fix an integer $\mathtt{m}\geqslant \mathtt{m}^*$ and consider the finite subsets
\begin{align*}
\forall k\in\{1,2\},\quad S_{k}\subset\mathbb{Z}_{\mathtt{m}}\cap\mathbb{N}^*\quad\textnormal{with}\quad |{S}_{k}|<\infty.
\end{align*}
Then the following hold true.
\begin{enumerate}
\item If $|S_1\cap S_2|\leqslant 1$ then the vector valued function
$$[0,b^*] \ni b\mapsto \left(\big(\Omega_{j,1}(b)\big)_{j\in{S}_{1}},\big(\Omega_{j,2}(b)\big)_{j\in{S}_{2}}\right)$$
is non-degenerate.
\item If $|S_1\cap S_2|=0$ then the vector valued functions
\begin{align*}
[0,b^*] \ni b\mapsto &\left(\big(\Omega_{j,1}(b)\big)_{j\in S_{1}},\big(\Omega_{j,2}(b)\big)_{j\in S_{2}},\mathtt{v}_1(b),\mathtt{v}_2(b)\right),
\\
[0,b^*] \ni b\mapsto &\left(\big(\Omega_{j,1}(b)\big)_{j\in S_{1}},\big(\Omega_{j,2}(b)\big)_{j\in S_{2}},\mathtt{v}_k(b)\right), \quad k\in\{1,2\}
\end{align*}
are non-degenerate, where the $\mathtt{v}_k$ are defined in \eqref{def V10 V20}.
\end{enumerate}
\end{lem}
\begin{proof} We point out that the linear frequencies \eqref{omegajk} are very similar to the linear frequencies close to the Kirchhoff ellipses studied in \cite{BHM21}. Thus we shall use the same arguments developed in \cite[Lemma 5.2]{BHM21} with slight modifications. According to \eqref{omegajk}, the functions $b\mapsto\Omega_{j,k}(b)$, $k\in\{1,2\}$, are well defined and analytic in a full neighborhood of $b=0$. Moreover, by \eqref{ASYFR1+} and \eqref{rngpm}, the frequencies $\Omega_{j,k}(b)$ can be written as
\begin{align}
& \Omega_{j,k}(b) =A_{j,k}(z)+{(-1)^{k+1}} B_j(z)\triangleq \widetilde{\Omega}_{j,k}(z), \label{Taylor-fre}\\
\nonumber &z\triangleq b^2,\quad
A_{j,k}(z)\triangleq \Omega j+\frac{2-k}{2}j(1-z)+ \tfrac{(-1)^{k}}{2},\quad
B_j(z)\triangleq \mathtt{r}_j(b)\underset{z\to0}{=}-\frac{z^{j} }{2(j-2)}+O(z^{j+1}).
\end{align}
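The leading behavior $B_j(z)=-\tfrac{z^j}{2(j-2)}+O(z^{j+1})$ can be checked numerically from the closed form $\mathtt{r}_j(b)=\tfrac12\big(\sqrt{-\Delta_j(b)}-(\tfrac{1-b^2}{2}j-1)\big)$, which we reconstruct here by matching \eqref{omegajk} with \eqref{ASYFR1+}; treat this closed form, and the sample values below, as assumptions of the sketch.

```python
import math

def r_j(b, j):
    # r_j(b) = (1/2)(sqrt(-Delta_j(b)) - ((1-b^2)j/2 - 1)),  reconstructed from ASYFR1+,
    # with -Delta_j(b) = ((1-b^2)j/2 - 1)^2 - b^(2j)
    a = (1 - b**2) * j / 2 - 1
    return 0.5 * (math.sqrt(a * a - b ** (2 * j)) - a)

# B_j(z) = r_j(b) with z = b^2; the leading coefficient should approach -1/(2(j-2)) as b -> 0
b = 0.1
for j in (4, 5, 6):
    z = b * b
    ratio = r_j(b, j) / (-z ** j / (2 * (j - 2)))
    assert abs(ratio - 1) < 0.05  # leading-order expansion confirmed to within 5%
```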
\textbf{1.} In view of Definition \ref{def-deg} one has to prove that, for all $c\triangleq \big((c_{j,1})_{j\in S_1},(c_{j,2})_{j\in S_2}\big)\in \mathbb{R}^{|S_1|+|S_2|}\backslash\{0\}$, the function
\begin{align*}
z &\mapsto
\sum_{j\in {S}_1\setminus (S_1\cap S_2)} c_{j,1} \widetilde{\Omega}_{j,1}(z) +\sum_{j\in S_2\setminus (S_1\cap S_2)}c_{j,2} \widetilde{\Omega}_{j,2}(z)+\sum_{j\in S_1\cap S_2} \big(c_{j,1}\widetilde{\Omega}_{j,1}(z)+c_{j,2} \widetilde{\Omega}_{j,2}(z)\big)
\end{align*}
is not identically zero on the interval $[0,(b^*)^2]$.
By contradiction, suppose that there exists $c\triangleq \big((c_{j,1})_{j\in S_1},(c_{j,2})_{j\in S_2}\big)\in\mathbb{R}^{|S_1|+|S_2|}\backslash\{0\}$ such that for any $|z|\leqslant(b^*)^2,$
\begin{equation}\label{rel lin 1}
\sum_{j\in S_1\setminus(S_1\cap S_2)} c_{j,1} \widetilde{\Omega}_{j,1}(z)+\sum_{j\in {S}_2\setminus(S_1\cap S_2)} c_{j,2}\widetilde{\Omega}_{j,2}(z)+\sum_{j\in S_1\cap S_2} \big(c_{j,1}\widetilde{\Omega}_{j,1}(z)+c_{j,2}\widetilde{\Omega}_{j,2}(z)\big)=0.
\end{equation}
Writing
$$
S_1\cup S_2=\{j_1,j_2,\cdots,j_d\},\qquad \textnormal{with}\qquad \mathtt{m}\leqslant j_1<j_2<\cdots<j_d,
$$
then differentiating the identity \eqref{rel lin 1} with respect to $z$, we find, since $\mathtt{m}\geqslant3,$
$$
\begin{cases}
\widetilde{c}_{j_1} D_z^{(j_{1})}B_{j_{1}}(z)
+ \ldots + \widetilde{c}_{j_{d}}D_z^{(j_{1})} B_{j_{d}}(z)= 0, \cr
\ldots \ldots \ldots \cr
\widetilde{c}_{j_1} D_z^{(j_{d})}B_{j_{1}}(z)
+ \ldots + \widetilde{c}_{j_{d}}D_z^{(j_{d})} B_{j_{d}}(z)= 0,
\end{cases}
$$
where
$$\widetilde{c}_{j}\triangleq \begin{cases}
c_{j,1},& \textnormal{if}\quad j\in {S}_1\setminus ({S}_1\cap S_2),\cr
-c_{j,2},& \textnormal{if}\quad j\in {S}_2\setminus ({S}_1\cap S_2),\cr
c_{j,1}-c_{j,2},& \textnormal{if}\quad j\in {S}_1\cap S_2.
\end{cases}$$
The latter is a linear system that can be recast in matrix form as
$$\mathcal{M}(z)\widetilde{c}=0,\qquad
{\mathcal M}(z)\triangleq\begin{pmatrix}
D_z^{(j_{1})}B_{j_{1}}(z) & \dots & D_z^{(j_{1})}B_{j_{d}}(z)\\
\vdots & \ddots & \vdots\\
D_z^{(j_{d})}B_{j_{1}}(z) & \dots & D_z^{(j_{d})}B_{j_{d}}(z)
\end{pmatrix},\qquad\widetilde{c}\triangleq \begin{pmatrix}
\widetilde{c}_{j_1}\\
\vdots\\
\widetilde{c}_{j_d}
\end{pmatrix}.
$$
Note, from \eqref{Taylor-fre}, that
for all $j\geqslant \mathtt{m}$ we have
\begin{equation*}
D^{(j)}_z B_j(0) = -\frac{ j!}{2(j-2)}\qquad\textnormal{and}\qquad \forall\, 2\leqslant m< j, \quad D^{(m)}_z B_{j}(0) =0 .
\end{equation*}
It follows that, for some real constants $ \alpha_{i,j} $, we have
$$
{\mathcal M}(0) =
\begin{pmatrix}
-\frac{j_{1}!}{2(j_{1}-2)} & 0 & 0 & \dots & 0 \\
\alpha_{2,1} &-\frac{j_2!}{2(j_2-2)} & 0 & \dots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\alpha_{d-1,1} & \ldots & \alpha_{d-1,d-2} & -\frac{j_{d-1}!}{2(j_{d-1}-2)} & 0 \\
\alpha_{d,1} & \ldots & \alpha_{d,d-2} & \alpha_{d,d-1} & -\frac{j_{d}!}{2(j_{d}-2)}
\end{pmatrix}
$$
which is a triangular matrix whose determinant is given by
$$
\det {\mathcal M}(0) = (-1)^{d}\prod_{i=1}^{d} \frac{ j_{i}!}{2(j_{i}-2)} \neq 0 \, .
$$
It follows that $\widetilde{c}=0$, i.e.
\begin{equation}\label{cj=0}
\forall j\in ({S}_1\cup {S}_2)\setminus ({S}_1\cap {S}_2), \quad c_{j,k}=0\qquad \textnormal{and}\qquad \forall j\in {S}_1\cap {S}_2, \quad c_{j,1}=c_{j,2}.
\end{equation}
Inserting \eqref{cj=0} into \eqref{rel lin 1} evaluated at $z=0$, we get from \eqref{Taylor-fre} that
$$
\big(2\Omega +\tfrac{1}{2}\big)\sum_{j\in {S}_1\cap S_2} jc_{j,1}=0.
$$
Using the fact that $\Omega>0,$ if ${S}_1\cap S_2=\{j_0\}$ then we have
$$
c_{j_0,1}=0.
$$
This together with \eqref{cj=0} leads to a contradiction, proving the first point.\\
\textbf{2.} Next, we shall prove that the function
$$
[0,b^*] \ni b\mapsto \Big(\big(\Omega_{j,1}(b)\big)_{j\in S_{1}},\big(\Omega_{j,2}(b)\big)_{j\in S_{2}},\mathtt{v}_1(b),\mathtt{v}_2(b)\Big)
$$
is non-degenerate according to the Definition~\ref{def-deg} provided that $S_1\cap S_2=\varnothing$.
Suppose, by contradiction, that there exists
$$c\triangleq \big((c_{j,1})_{j\in S_1},(c_{j,2})_{j\in S_2},c_{0,1},c_{0,2}\big)\in \mathbb{R}^{|S_1|+|S_2|+2} \backslash \{0\}$$ such that for any $|z| \leqslant (b^*)^2,$
\begin{equation}\label{ident3}
c_{0,1}\widetilde{\mathtt{v}}_1(z)+c_{0,2}\widetilde{\mathtt{v}}_2(z)+\sum_{j\in {S}_1}c_{j,1}\widetilde{\Omega}_{j,1}(z) +\sum_{j\in{S}_2}c_{j,2}\widetilde{\Omega}_{j,2}(z)=0,
\end{equation}
where we denote
$$\forall k\in\{1,2\},\quad\widetilde{\mathtt{v}}_{k}(z)\triangleq \mathtt{v}_{k}(b)=\Omega+\frac{2-k}{2}(1-z).$$
Arguing in a similar way to the first case we conclude by a differentiation argument that
$$
\forall j\in {S}_1\cup {S}_2,\quad\forall k\in \{1,2\},\quad c_{j,k}=0.
$$
Plugging these identities into \eqref{ident3} we find
$$
c_{0,1}\widetilde{\mathtt{v}}_1(z)+c_{0,2}\widetilde{\mathtt{v}}_2(z)=0.
$$
That is
$$
\Omega\big(c_{0,1}+c_{0,2}\big)+\frac{1-z}{2}c_{0,1}=0.
$$
The last expression holds for any $|z|\leqslant (b^*)^2$; since $\Omega\neq 0$, we infer
$$
c_{0,1}=c_{0,2}=0.
$$
Thus, the vector $c$ vanishes, which contradicts the assumption. The proof of the non-degeneracy of the function
$$
[0,b^*] \ni b\mapsto \left(\big(\Omega_{j,1}(b)\big)_{j\in S_{1}},\big(\Omega_{j,2}(b)\big)_{j\in S_{2}},\mathtt{v}_k(b)\right), \qquad k\in\{1,2\}
$$
can be obtained from the previous case by choosing
$$
c\triangleq \big((c_{j,1})_{j\in S_1},(c_{j,2})_{j\in S_2},c_{0,k},c_{0,3-k}\big)=\big((c_{j,1})_{j\in S_1},(c_{j,2})_{j\in S_2},c_{0,k},0\big).
$$
This ends the proof of Lemma \ref{lem non-deg}.
\end{proof}
Let $\Omega>0$ and $ \mathtt{m}^*, b^*$ be defined as in Corollary \ref{coro-equilib-freq}.
Fix an integer $\mathtt{m}\geqslant \mathtt{m}^*$ and consider the finite subsets
\begin{align}
\forall k\in\{1,2\},\quad \mathbb{S}_{k}\subset\mathbb{Z}_{\mathtt{m}}\cap\mathbb{N}^*,\qquad\textnormal{with}\qquad d_k\triangleq |\mathbb{S}_{k}|<\infty \qquad\textnormal{and}\qquad \mathbb{S}_{1}\cap \mathbb{S}_{2}=\varnothing.\label{S+}
\end{align}
For all $b\in [0,b^*]$ define the tangential equilibrium frequency vector by
\begin{equation}\label{Eq freq vec Edc}
\omega_{\textnormal{Eq}}(b)\triangleq \big(\omega_{\textnormal{Eq},1}(b),\omega_{\textnormal{Eq},2}(b)\big)\in\mathbb{R}^{d},\quad\textnormal{with}\quad \omega_{\textnormal{Eq},k}(b)\triangleq \big(\Omega_{j,k}(b)\big)_{j\in\mathbb{S}_{k}}\in\mathbb{R}^{d_{k}},\quad d\triangleq d_1+d_2
\end{equation}
and set
$$\mathbb{S}\triangleq \mathbb{S}_{1}\cup\mathbb{S}_{2},\quad
\overline{\mathbb{S}}\triangleq \mathbb{S}\cup(-\mathbb{S}), \quad \overline{\mathbb{S}}_0\triangleq \overline{\mathbb{S}}\cup \{0\},\quad \overline{\mathbb{S}}_k=\mathbb{S}_k \cup(-\mathbb{S}_k)\quad\textnormal{and}\quad\overline{\mathbb{S}}_{0,k}=\overline{\mathbb{S}}_k\cup\{0\}.$$
In the next proposition we deduce some quantitative bounds from the qualitative non-degeneracy condition of Lemma \ref{lem non-deg}, the analyticity of the linear frequencies and their asymptotics.
\begin{lem}{\textnormal{[Transversality]}}\label{lemma transversalityE}
There exist $q_0\in\mathbb{N}$ and $\rho_{0}>0$ such that the following results hold true. Recall that $\mathtt{v}_k(b)$, $\Omega_{j,k}$ and $\omega_{\textnormal{Eq}}$ are defined in \eqref{def V10 V20}, \eqref{omegajk} and \eqref{Eq freq vec Edc}, respectively.
\begin{enumerate}
\item For any $l\in\mathbb{Z}^{d}\setminus\{0\},$ we have
$$
\inf_{b\in[0,b^{*}]}\max_{q\in\llbracket 0, q_{0}\rrbracket}\Big|\partial_{b}^{q}\omega_{\textnormal{Eq}}(b)\cdot l\Big|\geqslant\rho_{0}\langle l\rangle.
$$
\item For any $k\in\{1,2\}$ and $ (l,j)\in(\mathbb{Z}^{d}\times\mathbb{N}_{\mathtt{m}})\setminus\{(0,0)\}$
$$
\quad\inf_{b\in[0,b^{*}]}\max_{q\in\llbracket 0, q_{0}\rrbracket}\Big|\partial_{b}^{q}\big(\omega_{\textnormal{Eq}}(b)\cdot l+ j\mathtt{v}_k(b)\big)\Big|\geqslant\rho_{0}\langle l\rangle.
$$
\item For any $k\in\{1,2\}$ and $ (l,j)\in\mathbb{Z}^{d}\times (\mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_{k})$
$$
\quad\inf_{b\in[0,b^{*}]}\max_{q\in\llbracket 0, q_{0}\rrbracket}\Big|\partial_{b}^{q}\big(\omega_{\textnormal{Eq}}(b)\cdot l+\Omega_{j,k}(b)\big)\Big|\geqslant\rho_{0}\langle l\rangle.
$$
\item We assume the additional constraint $\Omega>\Omega_{\mathtt{m}}^*$, see Lemma $\ref{lem-asym}$-4-5. For any $k\in\{1,2\}$ and $ l\in\mathbb{Z}^{d}, j,j^\prime\in\mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k,$ satisfying the additional condition $(l,j)\neq(0,j^\prime)$, we have
$$\,\quad\inf_{b\in[0,b^*]}\max_{q\in\llbracket 0, q_{0}\rrbracket}\Big|\partial_{b}^{q}\big(\omega_{\textnormal{Eq}}(b)\cdot l+\Omega_{j,k}(b)\pm\Omega_{j^\prime,k}(b)\big)\Big|\geqslant\rho_{0}\langle l\rangle.$$
\item For any $ l\in\mathbb{Z}^{d}, j\in\mathbb{N}^*\setminus\mathbb{S}_1,j^\prime\in\mathbb{N}^*\setminus\mathbb{S}_2,$ we have
$$\,\quad\inf_{b\in[0,b^{*}]}\max_{q\in\llbracket 0, q_{0}\rrbracket}\Big|\partial_{b}^{q}\big(\omega_{\textnormal{Eq}}(b)\cdot l+\Omega_{j,1}(b)\pm\Omega_{j^\prime,2}(b)\big)\Big|\geqslant\rho_{0}\langle l,j,j^\prime\rangle.$$
\end{enumerate}
\end{lem}
\begin{proof}
${\bf{1.}}$
Suppose, by contradiction, that
for all $n\in\N$ there exist $b_n\in [0,b^{*}]$ and $l_n\in\Z^d\backslash \{0\}$ such that
\begin{equation}\label{alphan}
\max_{q\in\llbracket 0, n\rrbracket}\Big|\partial_b^q\Big({\omega}_{\textnormal{Eq}}(b)\cdot \tfrac{l_n}{\langle l_n\rangle}\Big)_{|{b=b_n}}\Big|< \tfrac{1}{n+1}\cdot
\end{equation}
The sequences $(b_n)_n\subset [0,b^{*}]$ and $(c_n)_n\triangleq \big(\frac{l_n}{\langle l_n\rangle}\big)_n\subset \mathbb{R}^d\backslash\{0\}$ are bounded. Up to an extraction we may assume that
$$\lim_{n\to\infty}\tfrac{l_{n}}{\langle l_n\rangle}=\widetilde{c}\neq 0\qquad\hbox{and}\qquad \lim_{n\to\infty}b_{n}=\widetilde{b}.
$$
Taking the limit as $n\to\infty$ in \eqref{alphan}, we deduce that
$$\forall q \in \N,\quad \partial_b^q\big({\omega}_{\textnormal{Eq}}( b)\cdot \widetilde{c}\big)_{|{b=\widetilde b}}=0,\qquad{\rm with}\qquad \widetilde{c}\neq 0.$$ Therefore, the real analytic function $b \to {\omega}_{\textnormal{Eq}}(b)\cdot \widetilde{c}\,$ is identically zero. This contradicts Lemma~\ref{lem non-deg}.\\
${\bf{2.}}$ In the case $l=0$ and $j\in\mathbb{N}^*_{\mathtt{m}}$ we obviously have from \eqref{def V10 V20},
\begin{align*}
\quad\inf_{b\in[0,b^{*}]}\max_{q\in\llbracket 0, q_{0}\rrbracket}\big|\partial_{{b}}^{q}\big( j\mathtt{v}_k(b)\big)\big|&\geqslant \inf_{b\in[0,b^{*}]}\big| \mathtt{v}_k(b)\big|\geqslant \Omega\geqslant\rho_{0}\langle l\rangle,
\end{align*}
for some $\rho_0>0.$ Next, we shall consider the case $j\in\mathbb{N}^*_{\mathtt{m}}$, $l\in\mathbb{Z}^d\setminus\{0\}$.
By the triangle inequality combined with the boundedness of ${\omega}_{\textnormal{Eq}}$ and $\mathtt{v}_k(b)$ we get
$$\big|\omega_{\textnormal{Eq}}({b})\cdot l+j\mathtt{v}_k(b)\big|\geqslant|j|\big|\mathtt{v}_k(b)\big|-\big|\omega_{\textnormal{Eq}}({b})\cdot l\big|\geqslant c|j|-C|l|\geqslant |l|$$
provided that $|j|\geqslant C_{0}|l|$ for some $C_{0}>0.$ Hence, we shall only consider indices $j$ and $l$ satisfying
\begin{equation}\label{parameter condition 10}
|j|\leqslant C_{0}|l|, \qquad j\in\mathbb{N}_{\mathtt{m}}, \qquad l\in\mathbb{Z}^d\setminus\{0\}.
\end{equation}
By contradiction, assume the existence of sequences $\{l_{n}\}\subset \Z^d\backslash\{0\}$, $\{j_n \}\subset {\mathbb{N}}_{\mathtt{m}}$ satisfying \eqref{parameter condition 10} and $\{{b}_{n}\}\subset[0,b^*]$ such that
\begin{equation}\label{Ross-01}
\forall q\in\mathbb{N},\quad\forall n\geqslant q,\quad\left|\partial_{{b}}^{q}\left({\omega}_{\textnormal{Eq}}({b})\cdot\tfrac{l_{n}}{\langle l_{n}\rangle}+\tfrac{{j_{n}}\mathtt{v}_k(b)}{\langle l_{n}\rangle}\right)_{|{b=b_n}}\right|<\tfrac{1}{1+n}\cdot
\end{equation}
The sequences $\{{b}_n\}$, $\{d_n\}\triangleq \big\{\frac{j_n}{\langle l_n\rangle}\big\}$ and $\{c_n\}\triangleq \big\{\frac{l_n}{\langle l_n\rangle}\big\}$ are bounded. Thus, up to an extraction, we may assume that
$$
\lim_{n\to\infty}{b}_{n}=\widetilde{{b}},\qquad \lim_{n\to\infty}d_{n}=\widetilde{d}\geqslant 0\qquad\hbox{and}\qquad \lim_{n\to\infty}c_{n}=\widetilde{c}\neq 0.
$$
Hence, letting $n\rightarrow+\infty$ in \eqref{Ross-01} and using the fact that ${b}\mapsto \mathtt{v}_k(b)$ is smooth we obtain
$$\forall q\in\mathbb{N},\quad\partial_{{b}}^{q}\left({\omega}_{\textnormal{Eq}}({{b}})\cdot\widetilde{ c}+{\widetilde{d}}\,\mathtt{v}_k(b)\right)_{|{b}=\widetilde{{b}}}=0.$$
Consequently, the real analytic function ${b}\mapsto {\omega}_{\textnormal{Eq}}({{b}})\cdot\widetilde{ c}+{\widetilde{d}}\,\mathtt{v}_k(b)$ with $(\widetilde{ c},{\widetilde{d}})\neq (0,0)$ is identically zero
and this is in contradiction with Lemma \ref{lem non-deg}.
\\
${\bf{3.}}$ Let $k\in\{1,2\}$ and consider $(l,j)\in\mathbb{Z}^{d }\times (\mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k)$. By the triangle inequality and Lemma \ref{lem-asym}-${\rm{3}}$, we get
$$\big|\omega_{\textnormal{Eq}}(b)\cdot l+\Omega_{j,k}(b)\big|\geqslant\big|\Omega_{j,k}(b)\big|-\big|\omega_{\textnormal{Eq}}(b)\cdot l\big|\geqslant \Omega j-C|l|\geqslant \tfrac{\Omega}{2}\langle l\rangle$$
provided that $j\geqslant C_{0}| l|$ for some $C_{0}>0.$ Therefore, we shall restrict the proof to integers $j$ with
\begin{equation}\label{parameter condition 1}
0\leqslant j< C_{0} | l|,\qquad j\in\mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k\qquad\hbox{and}\qquad l\in\mathbb{Z}^d\backslash\{0\}.
\end{equation}
By contradiction, for all $n\in\mathbb{N}$, we assume the existence of sequences $\{l_{n}\}\subset \Z^d\backslash\{0\}, \{j_n\} \subset\mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k$ and $\{b_{n}\}\subset [0,{b}^*]$ such that
\begin{equation}\label{Ross-1}
\forall q\in\mathbb{N},\quad\forall n\geqslant q,\quad\Big|\partial_{b}^{q}\Big({\omega}_{\textnormal{Eq}}(b)\cdot\tfrac{l_{n}}{\langle l_{n}\rangle}+\tfrac{\Omega_{j_{n},k}(b)}{\langle l_{n}\rangle}\Big)_{|b={b}_n}\Big|<\tfrac{1}{1+n}\cdot
\end{equation}
Since the sequences $\{b_n\}$ and $\{c_n\}\triangleq \big\{\frac{l_n}{\langle l_n\rangle}\big\}$ are bounded, then by compactness we can assume that
$$
\lim_{n\to\infty}b_{n}=\widetilde{b}\qquad\hbox{and}\qquad \lim_{n\to\infty}c_{n}=\widetilde{c}\neq 0.
$$
We shall distinguish two cases.\\
$\bullet$ {\it Case $1$}: $\{l_{n}\}$ is bounded. From \eqref{parameter condition 1} and up to an extraction the sequences $\{l_{n}\}$ and $\{j_{n}\}$ are stationary. Thus, we can assume that for any $n\in\N$, we have $l_n=\widetilde{l}\in \Z^d\backslash\{0\}$ and $j_n=\widetilde{\jmath}\in \mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k$.
Taking the limit as $n\rightarrow+\infty$ in \eqref{Ross-1} yields
$$\forall q\in\mathbb{N},\quad\partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}({b})\cdot\widetilde{l}+\Omega_{\widetilde{\jmath},k}({b})\right)_{|_{b=\widetilde{b}}}=0.$$
Consequently, the real analytic function $b\mapsto\omega_{\textnormal{Eq}}(b)\cdot\widetilde{l}+\Omega_{\widetilde{\jmath},k}(b)$ with $(\widetilde{l},1)\neq (0,0)$ is identically zero and this contradicts
Lemma \ref{lem non-deg}.\\
$\bullet$ {\it Case $2$}: $\{l_{n}\}$ is unbounded. Up to a subsequence, we assume that $ \displaystyle\lim_{n\to\infty} | l_{n}|=\infty$ and
$\displaystyle \lim_{n\to\infty}\frac{ l_{n}}{\langle l_{n}\rangle} =\widetilde{c}\in \R^d\backslash\{0\}.$
We shall distinguish two sub-cases.\\
$\bullet$ Sub-case \ding{172}. The sequence $\{j_n\}$ is bounded.
Up to an extraction we may assume that this sequence of integers is stationary. Taking the limit $n\rightarrow+\infty$ in \eqref{Ross-1}, we get
$$\forall q\in\mathbb{N},\quad \partial_{b}^{q}{\omega}_{\textnormal{Eq}}({b})_{|_{b=\widetilde{b}}}\cdot\widetilde{c}=0.$$
Thus, the real analytic function $b\mapsto{\omega}_{\textnormal{Eq}}(b)\cdot\widetilde{c}$, with $\widetilde{c}\neq 0$, is identically zero, which contradicts Lemma \ref{lem non-deg}.\\
$\bullet$ Sub-case \ding{173}. The sequence $\{j_{n}\}$ is unbounded. Then up to an extraction we can assume that $\displaystyle \lim_{n\to\infty} j_{n}=\infty$. According to \eqref{ASYFR1+} we have
\begin{equation}\label{quo1}
\tfrac{\Omega_{j_{n},k}(b)}{\langle l_{n}\rangle}=\tfrac{j_n}{\langle l_{n}\rangle}\Big(\Omega+(2-k)\tfrac{1-b^2}{2}\Big)+ \tfrac{(-1)^k}{2\langle l_{n}\rangle}+(-1)^{k+1} \tfrac{\mathtt{r}_{j_n}(b)}{\langle l_{n}\rangle}\cdot
\end{equation}
By \eqref{parameter condition 1}, the sequence $\Big\{\frac{j_{n}}{\langle l_{n}\rangle}\Big\}$ is bounded. Up to a subsequence, it converges to $\widetilde{d}.$ Differentiating, then taking the limit in \eqref{quo1}, we obtain
$$\lim_{n\to+\infty}\tfrac{\partial_{b}^q\Omega_{j_{n},k}(b_{n})}{\langle l_{n}\rangle}=\partial_{b}^q\big(\widetilde{d}\,\mathtt{v}_k(b)\big)_{|_{b=\widetilde{b}}},
$$
having used in the last identity the estimate \eqref{ASYFR1-}.
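For the reader's convenience, this limit can be checked term by term in \eqref{quo1}, under the identification $\mathtt{v}_k(b)=\Omega+(2-k)\tfrac{1-b^2}{2}$ suggested by the expansion \eqref{quo1} itself:
$$\lim_{n\to+\infty}\tfrac{j_n}{\langle l_{n}\rangle}\,\partial_{b}^{q}\Big(\Omega+(2-k)\tfrac{1-b^2}{2}\Big)_{|b=b_n}=\widetilde{d}\,\partial_{b}^{q}\mathtt{v}_k(b)_{|_{b=\widetilde{b}}},\qquad \lim_{n\to+\infty}\tfrac{(-1)^k}{2\langle l_{n}\rangle}=0,\qquad \lim_{n\to+\infty}\tfrac{\partial_{b}^{q}\mathtt{r}_{j_n}(b_{n})}{\langle l_{n}\rangle}=0.$$
The first limit uses $b_n\to\widetilde{b}$ and $\frac{j_n}{\langle l_n\rangle}\to\widetilde{d}$, while the last two use $\langle l_n\rangle\to\infty$ together with the uniform-in-$j$ bounds of \eqref{ASYFR1-}.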
Hence, taking the limit $n\rightarrow+\infty$ in \eqref{Ross-1} gives
$$\forall q\in\mathbb{N},\quad \partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}({b})\cdot\widetilde{c}+\widetilde{d}\,\mathtt{v}_k(b)\right)_{|_{b=\widetilde{b}}}=0.$$
Thus, the real analytic function $b\mapsto{\omega}_{\textnormal{Eq}}(b)\cdot\widetilde{c}+\widetilde{d}\mathtt{v}_k(b)$ is identically zero. This contradicts Lemma~\ref{lem non-deg} as $(\widetilde{c},\widetilde{d})\neq 0$.
\\
\textbf{4.} Let $ l\in\mathbb{Z}^{d }, j,j^\prime\in \mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k$ with $(l,j)\neq(0,j^\prime).$ By the triangle inequality and Lemma \ref{lem-asym}-${5}$, since $\Omega>\Omega_{\mathtt{m}}^*,$ we infer that
$$\big|\omega_{\textnormal{Eq}}(b)\cdot l+\Omega_{j,k}(b)\pm\Omega_{j',k}(b)\big|\geqslant\big|\Omega_{j,k}(b)\pm\Omega_{j',k}(b)\big|-\big|\omega_{\textnormal{Eq}}(b)\cdot l\big|\geqslant c|j\pm j'|-C|l|\geqslant \langle l\rangle$$
provided $|j\pm j'|\geqslant C_{0}\langle l\rangle$ for some $C_{0}>0.$ In this case the desired estimate is trivial. So we shall restrict the proof to integers such that
\begin{equation}\label{parameter condition 2}
|j\pm j'|< C_{0}\langle l\rangle,\qquad l\in\mathbb{Z}^{d }\backslash\{0\},\qquad j,j^\prime\in\mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k.
\end{equation}
Arguing by contradiction, assume that for all $n\in\mathbb{N}$, there exist $b_{n}\in[0,b^{*}]$ and $( l_n,j_n,j_n^\prime)\in \Z^{d+2}$ satisfying \eqref{parameter condition 2}, with $(l_n,j_n)\neq(0,j_n^\prime)$, such that
\begin{equation}\label{Ross-2}
\forall q\in\mathbb{N},\quad\forall n\geqslant q,\quad\left|\partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}(b)\cdot\tfrac{ l_{n}}{\langle l_{n}\rangle}+\tfrac{\Omega_{j_{n},k}(b)\pm\Omega_{j'_{n},k}(b)}{\langle l_{n}\rangle}\right)_{|_{b=b_n}}\right|<\frac{1}{1+n}\cdot
\end{equation}
Since the sequences $\left\{\frac{ l_{n}}{\langle l_{n}\rangle}\right\}_{n}$ and $\{b_{n}\}_{n}$ are bounded, up to an extraction we can assume that $\displaystyle \lim_{n\to\infty}\frac{ l_{n}}{\langle l_{n}\rangle}=\widetilde{c}\neq 0$ and $\displaystyle \lim_{n\to\infty}b_{n}=\widetilde{b}.$ We distinguish two cases:\\
$\bullet$ {\it Case $1$}: $( l_{n})_{n}$ is bounded. We shall only focus on the most delicate case, associated with the difference $\Omega_{j_n,k}-\Omega_{j^\prime_n,k}$. Up to an extraction we may assume that this sequence of integers is stationary, that is, $ l_{n}=\widetilde l.$ Looking at \eqref{parameter condition 2}, we have two sub-cases.
\\
$\bullet$ Sub-case \ding{172}: $(j_{n})_{n}$ and $(j'_{n})_{n}$ are bounded. Up to an extraction we can assume that they are stationary, that is, $j_n=\widetilde{\jmath}$, $j_n^\prime=\widetilde{\jmath}^\prime$. Moreover, by assumption we also have $(\widetilde l, \widetilde{\jmath})\neq(0,\widetilde{\jmath}^\prime)$ and $\widetilde{\jmath},\widetilde{\jmath}^\prime\notin\mathbb{S}_k$.
Hence taking the limit $n\rightarrow+\infty$ in \eqref{Ross-2}, we get
$$\forall q\in\mathbb{N},\quad\partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}({b})\cdot\widetilde{ l}+\Omega_{\widetilde{\jmath},k}({b})-\Omega_{\widetilde{\jmath}^\prime,k}({b})\right)_{|_{b=\widetilde{b}}}=0.$$
Therefore, the real analytic function $b\mapsto\omega_{\textnormal{Eq}}(b)\cdot\widetilde{ l}+\Omega_{\widetilde{\jmath},k}(b)-\Omega_{\widetilde{\jmath}^\prime,k}(b)$ is identically zero. If $\widetilde{\jmath}=\widetilde{\jmath}^\prime$ then this contradicts Lemma \ref{lem non-deg} since $\widetilde{l}\neq 0.$ In the case $\widetilde{\jmath}\neq \widetilde{\jmath}^\prime\in \mathbb{N}_{\mathtt{m}}^*\setminus\mathbb{S}_k$ this still contradicts Lemma~\ref{lem non-deg}, applied with the vector frequency $(\omega_{\textnormal{Eq}},\Omega_{\widetilde{\jmath},k},\Omega_{\widetilde{\jmath}^\prime,k})$.
\\
$\bullet$ Sub-case \ding{173}: $(j_n)_{n}$ and $(j'_{n})_{n}$ are unbounded. Up to an extraction, we assume that $\displaystyle \lim_{n\to\infty}j_{n}= \lim_{n\to\infty}j'_{n}=\infty$.
Assume, without loss of generality, that for a given $n$ we have $j_n\geqslant j_n^\prime$. In view of \eqref{ASYFR1+} we may write
\begin{align}\label{split}
\tfrac{\partial_{b}^{q}\left(\Omega_{j_{n},k}(b)-\Omega_{j'_{n},k}(b)\right)}{\langle l_{n}\rangle}
&=\partial_{b}^{q}\mathtt{v}_k(b)\tfrac{j_n-j_n^\prime}{\langle l_n\rangle}
+\tfrac{(-1)^{k+1}}{\langle l_n\rangle}\partial_{b}^{q}\big(\mathtt{r}_{j_n}(b)-\mathtt{r}_{j_n^\prime}(b)\big).
\end{align}
According to \eqref{parameter condition 2}, up to an extraction, we can assume that $\displaystyle\lim_{n\to\infty}\frac{j_{n}-j'_{n}}{\langle l_{n}\rangle}=\widetilde{d}$. Therefore, combining \eqref{split} and \eqref{ASYFR1-}, we find
$$\lim_{n\to\infty}\partial_{b}^{q}\left(\frac{\Omega_{j_{n},k}(b)-\Omega_{j'_{n},k}(b)}{\langle l_{n}\rangle}\right)_{|_{b=b_n}}=\widetilde{d}\,\partial_{b}^{q}\big(\mathtt{v}_k(b)\big)_{|_{b=\widetilde{b}}}.$$
Taking the limit $n\rightarrow+\infty$ in \eqref{Ross-2} gives
$$\forall q\in\mathbb{N},\quad\partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}({b})\cdot\widetilde{c}+\widetilde{d}\,\mathtt{v}_k(b)\right)_{|_{b=\widetilde{b}}}=0.$$
Then, the real analytic function $b\mapsto{\omega}_{\textnormal{Eq}}({b})\cdot\widetilde{c}+\widetilde{d}\,\mathtt{v}_k(b)$ with $(\widetilde{c},\widetilde{d})\neq (0,0)$ is identically zero. This contradicts Lemma \ref{lem non-deg}.
\\
$\bullet$ {\it Case} $2$: $( l_{n})_{n}$ is unbounded. Up to an extraction we can assume that $\displaystyle \lim_{n\to\infty}|l_{n}|=\infty.$ We shall distinguish three sub-cases.\\
$\bullet$ Sub-case \ding{172}. The sequences $(j_{n})_{n}$ and $(j'_{n})_{n}$ are bounded. Thus, up to an extraction, they converge. Taking the limit in \eqref{Ross-2} leads to
$$
\forall q\in\mathbb{N},\quad\partial_{b}^{q}{\omega}_{\textnormal{Eq}}({b})_{|_{b=\widetilde{b}}}\cdot\widetilde{c}=0,
$$
which gives a contradiction with Lemma \ref{lem non-deg}. \\
$\bullet$ Sub-case \ding{173}. The sequences $(j_{n})_{n}$ and $(j'_{n})_{n}$ are both unbounded. This case is similar to Sub-case \ding{173} of Case 1.\\
$\bullet$ Sub-case \ding{174}. The sequence $(j_{n})_{n}$ is unbounded and $(j'_{n})_{n}$ is bounded. Without loss of generality, we can assume that $\displaystyle \lim_{n\to\infty}j_n=\infty$ and $ j_{n}^\prime=\widetilde{\jmath}.$ By \eqref{parameter condition 2} and up to an extraction one gets $\displaystyle \lim_{n\to\infty}\frac{j_{n}\pm j'_{n}}{\langle l_{n}\rangle}=\widetilde{d}.$
Using Taylor's formula combined with \eqref{ASYFR1-}, we get, for any $b\in[0,b^*],$
\begin{align}\label{estm:dif22}
\big| \partial_{b}^{q}\mathtt{r}_{j_n^\prime}(b)- \partial_{b}^{q}\mathtt{r}_{j_n}(b)\big| &\leqslant C\Big| \int_{j'_n}^{j_n}\frac{dx}{x^{2}}\Big|\nonumber\\ &\leqslant C {|j_n-j_n'|}{(j_n j_n^\prime)^{-1}}\cdot
\end{align}
Using \eqref{ASYFR1+}
combined with \eqref{estm:dif22} and \eqref{ASYFR1-} we get, for any $q\in\mathbb{N},$
\begin{align*}
\lim_{n\to\infty}\langle l_{n}\rangle^{-1}
\partial_b^q\Big(\Omega_{j_n,k}(b)\pm\Omega_{j_{n}^\prime,k}(b)-(j_n\pm j^\prime_{n})\mathtt{v}_k(b)\Big)_{|b=b_n}&=\\
(-1)^{k}\ \lim_{n\to\infty}
\partial_b^q\left(\tfrac{1\pm 1}{2\langle l_n\rangle}-\tfrac{\mathtt{r}_{j_n}(b)\pm \mathtt{r}_{j_n^\prime}(b)}{\langle l_n\rangle}\right)_{|b=b_n}&=0.
\end{align*}
Hence, taking the limit in \eqref{Ross-2} implies
$$\forall q\in\mathbb{N},\quad \partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}({b})\cdot\widetilde{c}+\widetilde{d}\mathtt{v}_k(b)\right)_{b=\widetilde{b}}=0.$$
Thus, the real analytic function ${b}\mapsto{\omega}_{\textnormal{Eq}}({b})\cdot\widetilde{c}+\widetilde{d}\mathtt{v}_k(b)$ is identically zero with $(\widetilde{c},\widetilde{d})\neq0$ leading to a contradiction with Lemma \ref{lem non-deg}. \\
\textbf{5.}
Arguing by contradiction, suppose that for all $n\in\mathbb{N}$, there exist $b_n\in[0,b^*]$ and $( l_n,j_n,j_n^\prime)\in \Z^{d+2}\setminus\{0\}$, with $j_n\in (\mathbb{N}^*\cap \mathbb{Z}_{\mathtt{m}})\setminus\mathbb{S}_1,$ and $j^\prime_n\in (\mathbb{N}^*\cap \mathbb{Z}_{\mathtt{m}})\setminus\mathbb{S}_2$, such that
$$
\max_{q\in\llbracket 0, n\rrbracket}\left|\partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}(b)\cdot\tfrac{ l_{n}}{\langle l_{n},j_n,j_n^\prime\rangle}+\tfrac{\Omega_{j_{n},1}(b)\pm\Omega_{j'_{n},2}(b)}{\langle l_{n},j_n,j_n^\prime\rangle}\right)_{|_{b=b_n}}\right|<\frac{1}{1+n}$$
and therefore
\begin{equation}\label{Rossemann 2-dif0}
\forall q\in\mathbb{N},\quad\forall n\geqslant q,\quad\left|\partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}(b)\cdot\tfrac{ l_{n}}{\langle l_{n},j_n,j_n^\prime\rangle}+\tfrac{\Omega_{j_{n},1}(b)\pm\Omega_{j'_{n},2}(b)}{\langle l_{n},j_n,j_n^\prime\rangle}\right)_{|_{b=b_n}}\right|<\frac{1}{1+n}\cdot
\end{equation}
The sequence $(b_n)_n\subset [0,b^{*}]$ is bounded. Up to an extraction we may assume that
$$
\lim_{n\to\infty}b_{n}=\widetilde{b}\in [0,b^{*}].
$$
We distinguish two cases.\\
$\bullet$ {\it Case $1$}: The sequence $\{\langle l_{n},j_n,j_n^\prime\rangle\}_n$ is bounded. Then up to an extraction we may assume that
$$\lim_{n\to\infty}l_n=\widetilde{c}\in \mathbb{Z}^d,\qquad \lim_{n\to\infty}j_n=\widetilde{\jmath} \in(\mathbb{N}^*\cap \mathbb{Z}_{\mathtt{m}})\setminus\mathbb{S}_1\qquad\hbox{and}\qquad \lim_{n\to\infty}j'_n=\widetilde{\jmath}^\prime \in(\mathbb{N}^*\cap \mathbb{Z}_{\mathtt{m}})\setminus\mathbb{S}_2 .
$$
Taking the limit in \eqref{Rossemann 2-dif0} we find
$$\forall q\in\mathbb{N},\quad \partial_{b}^{q}\left({\omega}_{\textnormal{Eq}}(b)\cdot\widetilde{c}+\Omega_{\widetilde{\jmath},1}(b)\pm\Omega_{\widetilde{\jmath}^\prime,2}(b)\right)_{b=\widetilde{b}}=0.$$
Thus, the real analytic function ${b}\mapsto {\omega}_{\textnormal{Eq}}(b)\cdot\widetilde{c}+\Omega_{\widetilde{\jmath},1}(b)\pm\Omega_{\widetilde{\jmath}^\prime,2}(b)$ is identically zero on the interval $[0,b^*]$. This contradicts Lemma \ref{lem non-deg} if one of the following holds: $$\widetilde{\jmath} \not\in \mathbb{S}_2\qquad\hbox{and}\qquad \widetilde{\jmath}^\prime \not\in \mathbb{S}_1,$$ or
$$\widetilde{\jmath} \in \mathbb{S}_2\qquad\hbox{and}\qquad \widetilde{\jmath}^\prime \not\in \mathbb{S}_1,$$
or
$$\widetilde{\jmath} \not\in \mathbb{S}_2\qquad\hbox{and}\qquad \widetilde{\jmath}^\prime \in \mathbb{S}_1.$$ Thus, it remains to check the case where
\begin{equation}\label{last-case}
\widetilde{\jmath} \in \mathbb{S}_2\qquad\hbox{and}\qquad \widetilde{\jmath}^\prime \in \mathbb{S}_1.
\end{equation}
Denoting $\widetilde{c}=\Big(\big(\widetilde{c}_{j,1}\big)_{j\in\mathbb{S}_1},\big(\widetilde{c}_{j,2}\big)_{j\in\mathbb{S}_2}\Big),$ we then have, for any $z\in [0,(b^*)^2],$
\begin{equation}\label{comb-aj2}
\sum_{j\in \mathbb{S}_1\setminus\{\widetilde{\jmath}^\prime\}} \widetilde{c}_{j,1}\widetilde{\Omega}_{j,1}(z) +\sum_{j\in \mathbb{S}_2\setminus\{\widetilde{\jmath}\}} \widetilde{c}_{j,2} \widetilde{\Omega}_{j,2}(z)+\widetilde{c}_{\widetilde{\jmath}^\prime,1} \widetilde{\Omega}_{\widetilde{\jmath}^\prime,1}(z)\pm\widetilde{\Omega}_{\widetilde{\jmath}^\prime,2}(z) +\widetilde{c}_{\widetilde{\jmath},2} \widetilde{\Omega}_{\widetilde{\jmath},2}(z)+ \widetilde{\Omega}_{\widetilde{\jmath},1}(z)=0.
\end{equation}
Arguing as in the proof of Lemma \ref{lem non-deg} we conclude by a differentiation argument that
$$\forall j\in (\mathbb{S}_1\cup \mathbb{S}_2)\setminus \{\widetilde{\jmath}^\prime, \widetilde{\jmath}\}, \quad \forall k\in\{1,2\},\quad \widetilde{c}_{j,k}=0, \qquad \widetilde{c}_{\widetilde{\jmath},2}=1\qquad \textnormal{and}\qquad \widetilde{c}_{\widetilde{\jmath}^\prime,1}=\pm 1.$$
Substituting these identities into \eqref{comb-aj2} evaluated at $z=0$ and using \eqref{Taylor-fre} we get
$$\big(2\Omega +\tfrac{1}{2}\big)(\widetilde{\jmath}^\prime\pm \widetilde{\jmath})=0.$$
This implies that $\widetilde{\jmath}^\prime=\mp \widetilde{\jmath} $ contradicting \eqref{last-case} and \eqref{S+}.\\
$\bullet$ {\it Case $2$}: The sequence $\{\langle l_{n},j_n,j_n^\prime\rangle\}_n$ is unbounded.
Using \eqref{ASYFR1+} we may write
\begin{equation}\label{Rossemann 2-dif}
\forall q\in\mathbb{N},\quad\forall n\geqslant q,\quad\left|\partial_{b}^{q}\left(\big({\omega}_{\textnormal{Eq}}(b),\Omega+\tfrac{1-b^2}{2},\pm\Omega\big)\cdot\tfrac{ (l_{n},j_n,j_n^\prime)}{\langle l_{n},j_n,j_n^\prime\rangle}+\tfrac{- \tfrac{1}{2}\pm \tfrac{1}{2}+ \mathtt{r}_{j_n}(b)\mp \mathtt{r}_{j_n^\prime}(b)}{\langle l_{n},j_n,j_n^\prime\rangle}\right)_{|_{b=b_n}}\right|<\frac{1}{1+n}\cdot
\end{equation}
The sequence $(c_n)_n\triangleq \big(\tfrac{ (l_{n},j_n,j_n^\prime)}{\langle l_{n},j_n,j_n^\prime\rangle}\big)_n\subset \mathbb{R}^{d+2}\backslash\{0\}$ is bounded.
By compactness and up to an extraction we may assume that
$$\lim_{n\to\infty}\tfrac{ (l_{n},j_n,j_n^\prime)}{\langle l_{n},j_n,j_n^\prime\rangle}=\widetilde{c}\neq 0.
$$
Taking the limit in \eqref{Rossemann 2-dif} and using \eqref{ASYFR1-} we get
$$\forall q\in\mathbb{N},\quad \partial_{b}^{q}\left(\big({\omega}_{\textnormal{Eq}}(b),\Omega+\tfrac{1-b^2}{2},\pm\Omega\big)\cdot\widetilde{c}\right)_{b=\widetilde{b}}=0.$$
Thus, the real analytic function ${b}\mapsto \big({\omega}_{\textnormal{Eq}}(b),\Omega+\tfrac{1-b^2}{2},\pm\Omega\big)\cdot\widetilde{c}$ is identically zero with $\widetilde{c}\neq0$ which contradicts Lemma \ref{lem non-deg}.
This completes the proof of the lemma.
\end{proof}
\paragraph{Linear quasi-periodic solution.} Notice that by selecting only a finite number of frequencies, the sum in \eqref{lin-sol} gives rise to quasi-periodic solutions of the linearized equation \eqref{Ham eq-eq DCE}, provided that the parameter $b$ belongs to a suitable Cantor-like set of full measure.
The following result follows in a similar way to \cite[Lem 3.3]{HHM21}, based on Lemma \ref{lemma transversalityE}-(i) and Lemma \ref{lemma Russmann book}.
\begin{lem}\label{lemma sol Eq}
Let $\Omega>0,$ $\mathbb{S}_1,\mathbb{S}_2\subset\mathbb{N}^*$, as in \eqref{S+} and $b^*$ as in Corollary $\ref{coro-equilib-freq}.$ Then, there exists a Cantor-like set $\mathcal{C}\subset[0,b^*]$ satisfying $|\mathcal{C}|=b^*$ and such that for all $b\in\mathcal{C}$, every function of the form
$$\rho(t,\theta)=\sum_{j\in{\mathbb{S}}_1}\tfrac{\rho_{j,1}}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
1 \\
- a_{j}(b)
\end{pmatrix}\cos\big(j\theta-\Omega_{j,1}(b)t\big)+\sum_{j\in{\mathbb{S}}_2}\tfrac{\rho_{j,2}}{\sqrt{1-a_{j}^2(b)}} \begin{pmatrix}
-a_{j} (b) \\
1
\end{pmatrix}\cos\big(j\theta-\Omega_{j,2}(b)t\big),$$
with $\rho_{j,1},\rho_{j,2}\in\mathbb{R}^*$, is a time quasi-periodic reversible solution to the equation \eqref{Ham eq-eq DCE} with the vector frequency $\omega_{\textnormal{Eq}}(b)$, defined in \eqref{Eq freq vec Edc}.
\end{lem}
\section{Hamiltonian toolkit}
The main purpose of this section is to relate the existence of quasi-periodic solutions of the Hamiltonian equation \eqref{nonlinear-func0} to the construction of invariant tori in a suitable phase space. More precisely, we shall reformulate the problem in terms of embedded tori through the introduction of action-angle variables. Note that, according to Remark \ref{remark-lin-eq-eq}, \eqref{id:XKXH} and \eqref{def Lbmat}, the equation \eqref{nonlinear-func0} can be seen as a quasilinear perturbation of its linear part at the equilibrium state, namely,
\begin{equation}\label{def XP}
\partial_{t}r=\mathcal{J}\mathbf{L}_0r+X_{P}(r),\qquad\textnormal{with}\qquad X_{P}(r)\triangleq \mathcal{J}\nabla K({r})-\mathcal{J}\mathbf{L}_0r=\mathbf{Q}^{-1}X_{H\geqslant 3}(\mathbf{Q}{r}),
\end{equation}
where
\begin{align*}
X_{H\geqslant 3}({r})\triangleq \mathcal{J}\big(\nabla H(r)-\mathbf{M}_0r\big)
\end{align*}
and $\mathbf{Q},$ $H$, $\mathbf{M}_0,$ $\mathbf{L}_0$ are defined in \eqref{def-P}, \eqref{def H}, \eqref{Ham eq-eq DCE}, \eqref{def Lbmat}, respectively. The following lemma summarizes some tame estimates satisfied by the vector field $X_P$. Notice that the structure of the two components of the vector field $X_{H\geqslant 3}({r})$ is very similar to the one obtained in the setting of the Euler equations in the unit disc \cite[eq. (5.1)]{HR21-1}. Moreover, the symplectic change of variables $\mathbf{Q}^{\pm 1}$ depends only on the parameter $b$ (and not on $r$) and acts continuously from $\mathbf{H}_{\mathtt{m}}^s$ into itself for any $s.$ Therefore, one gets, in a similar way to \cite[Lemma 5.2]{HR21-1}, the following estimates.
\begin{lem}\label{tame XP}
Let $b^*, \mathtt{m}^*$ be as in Corollary $\ref{coro-equilib-freq},$ $\mathtt{m}\geqslant \mathtt{m}^*$, and $(\gamma,q,s_{0},s)$ satisfy \eqref{setting q}, \eqref{setting tau1 and tau2} and \eqref{init Sob cond}. There exists $\varepsilon_{0}\in(0,1]$ such that if
$$\| r\|_{s_{0}+2}^{q,\gamma,\mathtt{m}}\leqslant\varepsilon_{0},$$
then the vector field $X_{P}$, defined in \eqref{def XP}, satisfies the following estimates:
\begin{enumerate}[label=(\roman*)]
\item $\| X_{P}(r)\|_{s}^{q,\gamma,\mathtt{m}}\lesssim \| r\|_{s+2}^{q,\gamma,\mathtt{m}}\| r\|_{s_0+1}^{q,\gamma,\mathtt{m}}.$
\item $\| d_{r}X_{P}(r)[\rho]\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\rho\|_{s+2}^{q,\gamma,\mathtt{m}}\| r\|_{s_0+1}^{q,\gamma,\mathtt{m}}+\| r\|_{s+2}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}+1}^{q,\gamma,\mathtt{m}}.$
\item
$\| d_r^{2}X_{P}(r)[\rho_{1},\rho_{2}]\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\rho_{1}\|_{s_{0}+1}^{q,\gamma,\mathtt{m}}\|\rho_{2}\|_{s+2}^{q,\gamma,\mathtt{m}}+\big(\|\rho_{1}\|_{s+2}^{q,\gamma,\mathtt{m}}+\| r\|_{s+2}^{q,\gamma,\mathtt{m}}\|\rho_{1}\|_{s_{0}+1}^{q,\gamma,\mathtt{m}}\big)\|\rho_{2}\|_{s_{0}+1}^{q,\gamma,\mathtt{m}}$.
\end{enumerate}
\end{lem}
Since we shall look for small-amplitude quasi-periodic solutions, it is more convenient to
rescale the solution as $r\mapsto\varepsilon r$, with $r$ bounded in a suitable function space. Hence, the Hamiltonian equation \eqref{nonlinear-func} takes the form
\begin{equation}\label{perturbed hamiltonian}
\omega\cdot\partial_\varphi r=\partial_{\theta}\mathbf{L}_0r+\varepsilon X_{P_{\varepsilon}}(r),
\end{equation}
where $\mathbf{L}_0$ is the operator defined by \eqref{def Lbmat} and $X_{P_{\varepsilon}}$ is the rescaled Hamiltonian vector field defined by
$X_{P_{\varepsilon}}(r)\triangleq \varepsilon^{-2}X_{P}(\varepsilon r).$
Notice that \eqref{perturbed hamiltonian} is the Hamiltonian system generated by
the rescaled Hamiltonian
\begin{align}\label{def calKe}
\nonumber \mathcal{K}_{\varepsilon}(r)&\triangleq\varepsilon^{-2}K(\varepsilon r)\\
&= K_{\mathbf{L}_0}(r)+\varepsilon P_{\varepsilon}(r),
\end{align}
with $K_{\mathbf{L}_0}$ the quadratic Hamiltonian defined in Remark \ref{remark-lin-eq-eq}, and where $\varepsilon P_{\varepsilon}(r)$ collects all the terms of order at least three.
\paragraph{Action-angle-normal variables}\label{subsec act-angl}
We recall the notations introduced in \eqref{def L2m} and \eqref{S+}--\eqref{Eq freq vec Edc}. Given the decomposition \eqref{decomp prod l2} of the phase space $L_{\mathtt{m}}^2(\mathbb{T})\times L_{\mathtt{m}}^2(\mathbb{T})$
and the decomposition in action-angle-normal variables \eqref{aa-coord-00},
the symplectic $2$-form in \eqref{sympl ref} becomes
\begin{equation}\label{sympl_form3}
{\mathcal W}=\sum_{j\in\mathbb{S}_1} d\vartheta_{j,1}\wedge dI_{j,1}-\sum_{j\in \mathbb{S}_2}d\vartheta_{j,2}\wedge dI_{j,2} +\frac{1}{2\ii}\sum_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1}}\frac{1}{j}dr_{j,1}\wedge dr_{-j,1}-\frac{1}{2\ii}\sum_{j\in \mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2}}\frac{1}{j}dr_{j,2}\wedge dr_{-j,2}.
\end{equation}
The Poisson bracket is given by
\begin{equation}\label{poisson-bracket}
\{F,G\}\triangleq\mathcal{W}(X_F,X_G)=\big\langle\nabla F, \mathbf{J}\nabla G\big\rangle,
\end{equation}
where $\langle \cdot,\cdot\rangle$ is the inner product, defined by
$$\big\langle (\vartheta,I,z),(\underline \vartheta,\underline I, \underline z)\big\rangle\triangleq\vartheta\cdot \underline \vartheta+I\cdot\underline I+\big\langle z,\underline z \big\rangle_{L^2 (\mathbb{T})\times L^2 (\mathbb{T})}.$$
The Poisson structure $\mathbf{J}$ corresponding to $\mathcal{W}$, defined by the identity \eqref{poisson-bracket}, is the unbounded operator
$$\mathbf{J}:(\vartheta,I,z)\mapsto({\mathtt J}I,-{\mathtt J}\vartheta,\mathcal{J}z),$$
where $\mathcal{J}$ is given by \eqref{def calJ} and
$${\mathtt J}\triangleq \begin{pmatrix}
{\rm I}_{d_1}& 0 \\
0 & -{\rm I}_{d_2} \end{pmatrix},
$$
with ${\rm I}_{d_k}$ the identity matrix of size ${d_k}$.
Now we shall study the Hamiltonian system generated by the Hamiltonian $ {\mathcal K}_\varepsilon $ in \eqref{def calKe},
in the action-angle-normal variables
$
(\vartheta, I, z) \in \mathbb{T}^d \times \mathbb{R}^d \times \mathbf{H}^{\perp}_{\overline{\mathbb{S}}_0} \, .
$
We consider the Hamiltonian $ K_{\varepsilon} (\vartheta, I, z )$ defined by
\begin{equation}\label{K eps}
K_{\varepsilon} \triangleq\mathcal{K}_{\varepsilon} \circ \mathbf{A}
\end{equation}
where
$\mathbf{A} $ is the map defined in \eqref{aa-coord-00}. Since $\mathbf{L}_0 $ in \eqref{def Lbmat} stabilizes the subspace $ \mathbf{H}^{\perp}_{\overline{\mathbb{S}}_0}$, the quadratic Hamiltonian $ K_{\mathbf{L}_0}$ in \eqref{def KL}, written in the variables $ (\vartheta, I, z) $, reads, up to a constant,
\begin{align}\label{QHAM}
K_{\mathbf{L}_0} \circ \mathbf{A} &= -\sum_{j\in\mathbb{S}_1} \, \Omega_{j,1}(b)I_{j,1} +\sum_{j\in\mathbb{S}_2} \, \Omega_{j,2}(b)I_{j,2}+ \frac12 \big\langle \mathbf{L}_0\, z, z\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}\nonumber\\ & = -\big({\mathtt J}\,{\omega}_{\textnormal{Eq}}(b)\big)\cdot I
+ \frac12 \big\langle\mathbf{L}_0\, z, z \big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})},
\end{align}
where $ {\omega}_{\textnormal{Eq}}(b) \in \mathbb{R}^d $ is the unperturbed
tangential frequency vector.
By \eqref{def calKe} and \eqref{QHAM},
the Hamiltonian $K_{\varepsilon} $ in \eqref{K eps} reads
\begin{equation}\label{cNP-K}
\begin{aligned}
& K_{\varepsilon} =
{\mathcal N} + \varepsilon \mathcal{ P}_{\varepsilon}, \qquad \textnormal{with} \\
&
{\mathcal N} \triangleq -\big({\mathtt J}\,{\omega}_{\textnormal{Eq}}(b)\big)\cdot I + \frac12 \big\langle\mathbf{L}_0\, z, z \big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})},
\quad
\quad \mathcal{ P}_{\varepsilon} \triangleq P_\varepsilon \circ \mathbf{A}.
\end{aligned}
\end{equation}
We look for an embedded invariant torus
$$i :\mathbb{T}^d \rightarrow\mathbb{R}^d \times \mathbb{R}^d \times \mathbf{H}^{\perp}_{\overline{\mathbb{S}}_0}\,, \qquad \varphi \mapsto i(\varphi)\triangleq \big(\vartheta(\varphi), I(\varphi), z(\varphi)\big),$$
where $\vartheta (\varphi)-\varphi $ is a $ (2 \pi)^d $-periodic function, of the Hamiltonian vector field
$$
X_{K_{\varepsilon}} \triangleq
\big({\mathtt J}\partial_I K_{\varepsilon} , -{\mathtt J} \partial_\vartheta K_{\varepsilon} , \Pi_{\overline{\mathbb{S}}_0}^\bot
\mathcal{J}\nabla_{z} K_{\varepsilon}\big)
$$
filled by quasi-periodic solutions with Diophantine frequency
vector $\omega$.
Note that for the value $\varepsilon=0$, the Hamiltonian system
\begin{equation}\label{HS-T}
\omega\cdot\partial_\varphi i (\varphi) = (X_{{\mathcal N}} +\varepsilon X_{\mathcal{P}_{\varepsilon}}) (i(\varphi) )
\end{equation} possesses, for any value of the parameter $b\in [0,b^*]$, the invariant torus $$i_{\textnormal{flat}}(\varphi)\triangleq(\varphi,0,0),
$$
provided that $\omega=-{\omega}_{\textnormal{Eq}}(b).$
Now, in order to construct an invariant torus of the Hamiltonian system \eqref{HS-T} which supports a quasi-periodic motion with frequency vector $\omega$, close to $-{\omega}_{\textnormal{Eq}}(b)$, we shall formulate the
problem as a ``Nash-Moser theorem of hypothetical conjugation'' established in \cite{BM18}. It consists in using the frequencies $\omega\in\mathbb{R}^d$ as parameters and introducing ``counter-terms'' $\alpha\in\mathbb{R}^d$ in the family of Hamiltonians
\begin{equation}\label{K alpha}
\begin{aligned}
K_\varepsilon^\alpha \triangleq {\mathcal N}_\alpha +\varepsilon {\mathcal P}_{\varepsilon} \, , \qquad {\mathcal N}_\alpha \triangleq \alpha \cdot I
+ \tfrac12\big\langle\mathbf{L}_0\, z, z\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}.
\end{aligned}
\end{equation}
The value of $\alpha$ will be adjusted along the iteration in order to control the average of the $I$-component at the linear level of the Hamiltonian equation
\begin{align}
{\mathcal F} (i, \alpha )
& \triangleq {\mathcal F} (i, \alpha, \omega,b, \varepsilon ) \triangleq \omega\cdot\partial_\varphi i (\varphi) - X_{K_\varepsilon^\alpha} ( i (\varphi))
= \omega\cdot\partial_\varphi i (\varphi) - (X_{{\mathcal N}_\alpha} +\varepsilon X_{\mathcal{P}_{\varepsilon}}) (i(\varphi) ) \nonumber \\
& = \left(
\begin{array}{c}
\omega\cdot\partial_\varphi \vartheta (\varphi) - \mathtt{J}\big(\alpha - \varepsilon \partial_I \mathcal{P}_{\varepsilon} ( i(\varphi) ) \big) \\
\omega\cdot\partial_\varphi I (\varphi) + \varepsilon {\mathtt J}\partial_\vartheta \mathcal{P}_{\varepsilon}( i(\varphi) ) \\
\omega\cdot\partial_\varphi z (\varphi)
- \mathcal{J} \mathbf{L}_0 z(\varphi) - \varepsilon \mathcal{J} \nabla_z \mathcal{P}_{\varepsilon} ( i(\varphi) )
\end{array}
\right) =0. \label{operatorF}
\end{align}
This degree of freedom through the parameter $\alpha$ will provide, at the end of the scheme, a solution $(\omega,i)$ of the original problem once it is fixed to $\alpha=-\mathtt{J}{\omega}_{\textnormal{Eq}}(b)$, for any value of $b$ in a suitable Cantor set. Note that the involution $\mathscr{S}$, described in \eqref{defin inv scr S}, becomes
\begin{equation}\label{rev th I z}
\mathfrak{S}:(\vartheta,I,z)\mapsto(-\vartheta,I,\mathscr{S}z)
\end{equation}
{and the operator $\mathscr{T}_{\mathtt{m}}$ in \eqref{def scr Tm} becomes
\begin{equation}\label{mfold th I z}
\mathfrak{T}_{\mathtt{m}}:(\vartheta,I,z)\mapsto(\vartheta,I,\mathscr{T}_{\mathtt{m}}z).
\end{equation}}
Moreover, we can easily check that the Hamiltonian vector field $X_{K_\varepsilon^\alpha}$ is reversible with respect to $\mathfrak{S}$ {and $\mathtt{m}$-fold preserving with respect to $\mathfrak{T}_{\mathtt{m}}$}.
Thus, it is natural to look for $\mathtt{m}$-fold reversible solutions of $ {\mathcal F}(i, \alpha) = 0 $, namely satisfying
\begin{equation}\label{parity solution}
\vartheta(-\varphi) = - \vartheta (\varphi),\qquad\
I(-\varphi) = I(\varphi),\qquad z(-\varphi)=\mathscr{S} z(\varphi){\qquad\textnormal{and}\qquad\mathscr{T}_{\mathtt{m}}z(\varphi)=z(\varphi).}
\end{equation}
In the sequel, we shall denote by
$$\mathfrak{I}(\varphi)\triangleq i(\varphi)-(\varphi,0,0)=\big(\vartheta (\varphi)-\varphi, I(\varphi), z(\varphi)\big)$$
the periodic component of the torus $\varphi\mapsto i(\varphi)$.
We end this section by summarizing some tame estimates satisfied by the Hamiltonian vector field
$$X_{\mathcal{P}_{\varepsilon}}\triangleq\big(\mathtt{J}\partial_{I}\mathcal{P}_{\varepsilon},-\mathtt{J}\partial_{\vartheta}\mathcal{P}_{\varepsilon},\Pi_{\overline{\mathbb{S}}_0}^{\perp}\mathcal{J}\nabla_{z}\mathcal{P}_{\varepsilon}\big),$$
where $\mathcal{P}_{\varepsilon}$ is defined in \eqref{cNP-K}. The proof of the next lemma follows in a similar way to \cite[Lem. 5.1]{BM18} using Lemma \ref{tame XP}.
\begin{lem}\label{tame X per}
Let $b^*, \mathtt{m}^*$ be as in Corollary $\ref{coro-equilib-freq},$ $\mathtt{m}\geqslant \mathtt{m}^*$ and $(\gamma,q,s_{0},s)$ satisfy \eqref{setting q}, \eqref{setting tau1 and tau2} and \eqref{init Sob cond}.
There exists $\varepsilon_0\in(0,1)$ such that if
$$\varepsilon\leqslant\varepsilon_0\qquad\textnormal{and}\qquad\|\mathfrak{I}\|_{s_{0}+2}^{q,\gamma,\mathtt{m}}\leqslant 1,$$
then the perturbed Hamiltonian vector field $X_{\mathcal{P}_{\varepsilon}}$ satisfies the following tame estimates,
\begin{enumerate}[label=(\roman*)]
\item $\| X_{\mathcal{P}_{\varepsilon}}(i)\|_{s}^{q,\gamma,\mathtt{m}}\lesssim 1+\|\mathfrak{I}\|_{s+2}^{q,\gamma,\mathtt{m}}.$
\item $\big\| d_{i}X_{\mathcal{P}_{\varepsilon}}(i)[\,\widehat{i}\,]\big\|_{s}^{q,\gamma,\mathtt{m}}\lesssim \|\,\widehat{i}\,\|_{s+2}^{q,\gamma,\mathtt{m}}+\|\mathfrak{I}\|_{s+2}^{q,\gamma,\mathtt{m}}\|\,\widehat{i}\,\|_{s_{0}+1}^{q,\gamma,\mathtt{m}}.$
\item $\big\| d_{i}^{2}X_{\mathcal{P}_{\varepsilon}}(i)[\,\widehat{i},\widehat{i}\,]\big\|_{s}^{q,\gamma,\mathtt{m}}\lesssim \|\,\widehat{i}\,\|_{s+2}^{q,\gamma,\mathtt{m}}\|\,\widehat{i}\,\|_{s_{0}+1}^{q,\gamma,\mathtt{m}}+\|\mathfrak{I}\|_{s+2}^{q,\gamma,\mathtt{m}}\big(\|\,\widehat{i}\,\|_{s_{0}+1}^{q,\gamma,\mathtt{m}}\big)^{2}.$
\end{enumerate}
\end{lem}
\section{Approximate inverse}\label{sec:Approximate-inverse}
In order to prove Theorem \ref{thm QPS E} using a Nash-Moser scheme, we have to construct an approximate right inverse of the linearized operator associated with the functional $\mathcal{F}$, defined in \eqref{operatorF}, at any $\mathtt{m}$-fold, reversible state $(i_0,\alpha_0)$ close to the flat torus,
\begin{equation}\label{Linearized-op-F-DC}
d_{(i,\alpha)}\mathcal{F}(i_0,\alpha_0)[\widehat{\imath},\widehat{\alpha}\,]=\omega\cdot\partial_\varphi \widehat{\imath} - d_i X_{K_\varepsilon^{\alpha_0}} ( i_0 )[\widehat{\imath}\,]- \left(
\begin{array}{c}
\mathtt{J}\widehat\alpha \\
0
\\
0
\end{array}
\right).
For this aim, we shall use the Berti-Bolle approach for the approximate inverse developed in \cite{BBP14}, which ``approximately'' decouples the linearized equations through a triangular system in the action-angle components and the normal ones. This strategy was slightly simplified in \cite[Section 6]{HHM21} by bypassing the introduction of an intermediate isotropic torus and working directly with the original one $i_0$. Here, we shall closely follow this latter procedure, paying close attention to the differences in the Hamiltonian structure due to the vectorial framework. Thus, for the sake of completeness, we shall reproduce all the algebraic computations and refer the reader to \cite[Section 6]{HHM21} for more details on the analysis, which is very similar.
We first introduce the diffeomorphism
$ G_0 : (\phi, y, w) \mapsto (\vartheta, I, z)$ of the phase space $\mathbb{R}^d \times \mathbb{R}^d \times \mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp}$ given by
\begin{equation}\label{trasformation G}
\begin{pmatrix}
\vartheta \\
I \\
z
\end{pmatrix} \triangleq G_0 \begin{pmatrix}
\phi \\
y \\
w
\end{pmatrix} \triangleq
\begin{pmatrix}
\vartheta_0(\phi) \\
I_0 (\phi) + L_1(\phi) y + L_2(\phi) w \\
z_0(\phi) + w
\end{pmatrix},
\end{equation}
where
\begin{equation}\label{L1-L2}
L_1(\phi)\triangleq \mathtt{J}[\partial_\varphi \vartheta_0(\phi)]^{-\top},\qquad L_2(\phi) \triangleq \mathtt{J}[(\partial_\vartheta \widetilde{z}_0)(\vartheta_0(\phi))]^\top \mathcal{J}^{-1},\qquad \widetilde{z}_0 (\vartheta) \triangleq z_0 (\vartheta_0^{-1} (\vartheta))
\end{equation}
and the transposed operator is defined through the following duality relation:
Given a Hilbert space $\mathtt{H}$ equipped with the inner product $\langle \cdot,\cdot \rangle_{\mathtt{H}}$ and a linear operator $A\in {\mathcal L}({\mathbb R}^d, \mathtt{H})$,
\begin{align*}
\forall \, u\in {\mathtt{H}}\, , \qquad\forall v\in\mathbb{R}^d,\qquad \big\langle A^{\top}u,v\big\rangle_{\mathbb{R}^d} \triangleq\big\langle u, Av\big\rangle_{\mathtt{H}} .
\end{align*}
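In particular, denoting by $(\underline{e}_1,\ldots,\underline{e}_d)$ the canonical basis of $\mathbb{R}^d$, the duality relation yields the explicit component formula
\begin{align*}
\forall\, k\in\{1,\ldots,d\},\qquad \big(A^{\top}u\big)_k=\big\langle A^{\top}u,\underline{e}_k\big\rangle_{\mathbb{R}^d}=\big\langle u, A\,\underline{e}_k\big\rangle_{\mathtt{H}},
\end{align*}
so that $A^{\top}\in\mathcal{L}(\mathtt{H},\mathbb{R}^d)$ is obtained by pairing $u$ with the images of the basis vectors under $A$.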
Note that, in the new coordinates, $i_0$ becomes the trivial
embedded torus $(\phi,y,w)=(\varphi,0,0)$, namely
$$
G_0(\varphi,0,0)=i_0(\varphi).
$$
In what follows, we shall use the following notation:
\begin{itemize}
\item We denote by ${\mathtt u}=(\phi, y,w)$ the coordinates induced by $G_0$ in \eqref{trasformation G}.
\item The mapping $${\mathtt u}_0(\varphi)\triangleq G_0^{-1}
(i_0)(\varphi)=(\varphi,0,0)$$ refers to the trivial torus.
\item We shall denote by
$$
\widetilde{G}_0 ({\mathtt u}, \alpha) \triangleq \big( G_0 ({\mathtt u}), \alpha \big)
$$
the diffeomorphism with the identity on the $ \alpha $-component.
\item We quantify how an embedded torus $i_0(\mathbb{T})$ is approximately invariant for the Hamiltonian vector field $X_{K_\varepsilon^{\alpha_0}}$ in terms of the ``error function''
\begin{equation}\label{def Z}
Z(\varphi) \triangleq (Z_1, Z_2, Z_3) (\varphi) \triangleq {\mathcal F}(i_0, \alpha_0) (\varphi) =
\omega \cdot \partial_\varphi i_0(\varphi) - X_{K^{\alpha_0}_\varepsilon}\big(i_0(\varphi)\big).
\end{equation}
\end{itemize}
\subsection{Linear change of variables and defect of the symplectic structure}
In this subsection we shall conjugate the linearized operator $d_{i,\alpha} {\mathcal F} (i_0,{\alpha}_0)$ in \eqref{Linearized-op-F-DC}, via the linear change of variables
\begin{equation}\label{Differential G0}
D G_0({\mathtt u}_0(\varphi))
\begin{pmatrix}
\widehat \phi \, \\
\widehat y \\
\widehat w
\end{pmatrix}
=
\begin{pmatrix}
\partial_\varphi \vartheta_0(\varphi) & 0 & 0 \\
\partial_\varphi I_0(\varphi) &L_1(\varphi) &
L_2(\varphi) \\
\partial_\varphi z_0(\varphi) & 0 & I
\end{pmatrix}
\begin{pmatrix}
\widehat \phi \, \\
\widehat y \\
\widehat w
\end{pmatrix} ,
\end{equation}
to a triangular system with small errors of size $Z=\mathcal{F}(i_0,\alpha_0)$. Our main result is the following.
\begin{prop}\label{Proposition-Conjugation}
Under the linear change of variables $D G_0({\mathtt u}_0)$ the linearized operator $d_{i,\alpha} {\mathcal F} (i_0,\alpha_0)$ is transformed into
\begin{align}\label{Id-conj}
& [D G_0({\mathtt u}_0)]^{-1} d_{(i,\alpha)} {\mathcal F} (i_0,\alpha_0) D\widetilde G_0({\mathtt u}_0)
[\widehat \phi, \widehat y, \widehat w, \widehat \alpha ]
= \mathbb{D} [\widehat \phi, \widehat y, \widehat w, \widehat \alpha ]+\mathbb{E} [\widehat \phi, \widehat y, \widehat w]
\end{align}
where
\begin{enumerate}
\item the operator $ \mathbb{D}$ has the triangular form
\begin{align*}
\mathbb{D} [\widehat \phi, \widehat y, \widehat w, \widehat \alpha ]\triangleq\left(
\begin{array}{c}
\omega\cdot\partial_\varphi \widehat\phi-\big[K_{20}(\varphi) \widehat y+K_{11}^\top(\varphi) \widehat w+L_1^\top (\varphi)\widehat \alpha\big]
\\
\omega\cdot\partial_\varphi \widehat y+\mathcal{B}(\varphi) \widehat \alpha \\
\omega\cdot\partial_\varphi \widehat w-\mathcal{J}\big[K_{11}(\varphi) \widehat y+K_{02}(\varphi)\widehat w +L_2^{\top}(\varphi) \widehat\alpha \big]
\end{array}
\right),
\end{align*}
$\mathcal{B}(\varphi)$ and $K_{20}(\varphi) $ are $d \times d$ real matrices,
\begin{align*}
\mathcal{B}(\varphi)& \triangleq L_1^{-1} (\varphi)\partial_\varphi I_0(\varphi) L_1^\top (\varphi)+[\partial_\varphi z_0(\varphi)]^{\top} L_2^\top (\varphi) ,
\\
K_{20}(\varphi)&\triangleq \varepsilon L_1^\top(\varphi) ( \partial_{II} \mathcal{P}_\varepsilon)(i_0(\varphi)) L_1(\varphi) \, ,
\end{align*}
$K_{02}(\varphi)$ is a linear self-adjoint operator of $ \mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp}$, given by
\begin{align}\label{def K02}
K_{02}(\varphi)& \triangleq ( \partial_{z}\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) +\varepsilon L_2^\top(\varphi) ( \partial_{II} \mathcal{P}_\varepsilon)(i_0(\varphi)) L_2(\varphi)
\\ &\quad+\varepsilon L_2^\top(\varphi)( \partial_{zI} \mathcal{P}_\varepsilon) (i_0(\varphi)) + \varepsilon (\partial_I\nabla_z \mathcal{P}_\varepsilon) (i_0(\varphi)) L_2(\varphi),\nonumber
\end{align}
and $K_{11}(\varphi) \in {\mathcal L}({\mathbb R}^d, \mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp})$,
\begin{align*}
K_{11}(\varphi)&\triangleq \varepsilon L_2^\top(\varphi) ( \partial_{II} \mathcal{P}_\varepsilon)(i_0(\varphi))L_1(\varphi) +\varepsilon ( \partial_I\nabla_z \mathcal{P}_\varepsilon) (i_0(\varphi)) L_1(\varphi) ,
\end{align*}
\item the remainder $\mathbb{E} $ is given by
\begin{align*}
\mathbb{E} [\widehat \phi, \widehat y, \widehat w] &\triangleq
[D G_0({\mathtt u}_0)]^{-1} \partial_\varphi Z(\varphi) \widehat\phi \\ &\quad + \left(
\begin{array}{c}
0
\\
\mathcal{A}(\varphi)\big[K_{20}(\varphi) \widehat y+K_{11}^\top(\varphi) \widehat w\big]-R_{10}(\varphi) \widehat y -R_{01}(\varphi) \widehat w
\\
0
\end{array}
\right)
\end{align*}
where $\mathcal{A}(\varphi)$ and $R_{10}(\varphi) $ are $d \times d$ real matrices,
\begin{align*}
\mathcal{A}(\varphi)& \triangleq[\partial_\varphi \vartheta_0(\varphi)]^\top{\mathtt J} \partial_\varphi I_0(\varphi)-[\partial_\varphi I_0(\varphi)]^\top {\mathtt J}\partial_\varphi \vartheta_0(\varphi) -[\partial_\varphi z_0(\varphi)]^{\top} \mathcal{J}^{-1} \partial_\varphi z_0(\varphi),
\\
R_{10}(\varphi)&\triangleq [\partial_\varphi Z_1(\varphi)]^{\top} [\partial_\varphi \vartheta_0(\varphi)]^{-\top},
\end{align*}
and $R_{01}(\varphi)\in {\mathcal L}( \mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp},{\mathbb R}^d)$,
\begin{align*}
R_{01}(\varphi)&\triangleq [\partial_\varphi Z_1(\varphi)]^{\top} [(\partial_\vartheta \widetilde{z}_0)(\vartheta_0(\varphi))]^\top \mathcal{J}^{-1}- [\partial_\varphi Z_3(\varphi)]^{\top} \mathcal{J}^{-1}
\, .
\end{align*}
\end{enumerate}
\end{prop}
\begin{proof}
The composition of the nonlinear operator $\mathcal{F}$, in \eqref{operatorF}, with the map $G_0$ is given by
\begin{align}\label{Composition F G0}
{\mathcal F} (G_0({\mathtt u}(\varphi)),\alpha)
= \omega\cdot\partial_\varphi \big( G_0({\mathtt u}(\varphi))\big)- X_{K_\varepsilon^\alpha} \big(G_0({\mathtt u}(\varphi))\big).
\end{align}
Then, by differentiating \eqref{Composition F G0} at $({\mathtt u}_0,\alpha_0)$ in the direction $(\widehat {\mathtt u}, \widehat \alpha)$ we obtain
\begin{align}\label{differential-composition}
&d_{({\mathtt u},\alpha)} ({\mathcal F} \circ G_0)({\mathtt u}_0,\alpha_0)
[(\widehat {\mathtt u}, \widehat \alpha )](\varphi)
= \omega\cdot\partial_\varphi \big( DG_0({\mathtt u}_0)\widehat {\mathtt u} \big)- \partial_\phi\big[ X_{K_\varepsilon^{\alpha_0}} \big(G_0({\mathtt u}(\varphi))\big)\big]_{{\mathtt u}={\mathtt u}_0} \widehat\phi
\\ &\qquad\qquad\qquad\quad - \partial_y\big[ X_{K_\varepsilon^{\alpha_0}} \big(G_0({\mathtt u}(\varphi))\big)\big]_{{\mathtt u}={\mathtt u}_0} \widehat y- \partial_w\big[ X_{K_\varepsilon^{\alpha_0}} \big(G_0({\mathtt u}(\varphi))\big) \big]_{{\mathtt u}={\mathtt u}_0} \widehat w
- \left(
\begin{array}{c}
\mathtt{J}\widehat\alpha \\
0
\\
0
\end{array}
\right).\nonumber
\end{align}
In view of \eqref{Differential G0}, one has
\begin{equation}\label{omg-d-phi-DG}
\omega\cdot\partial_\varphi \big( DG_0({\mathtt u}_0)[\widehat {\mathtt u} ](\varphi)\big)
=DG_0({\mathtt u}_0) \, \omega\cdot\partial_\varphi \widehat {\mathtt u} + \partial_\varphi\big(\omega\cdot\partial_\varphi i_0 \big)\widehat\phi +\left(
\begin{array}{c}
0
\\
(\omega\cdot\partial_\varphi L_1(\varphi))\widehat y+\big(\omega\cdot\partial_\varphi L_2(\varphi)\big) \widehat w
\\
0
\end{array}
\right),
\end{equation}
and from \eqref{L1-L2}--\eqref{def Z} we get
\begin{equation}\label{omg-d-phi-L1}
\begin{aligned}
\omega\cdot\partial_\varphi L_1(\varphi)&=-\mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top}(\omega\cdot\partial_\varphi [\partial_\varphi \vartheta_0(\varphi)]^{\top}) [\partial_\varphi \vartheta_0(\varphi)]^{-\top}\\
&=-\mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top} \Big(\big[\partial_\varphi Z_1(\varphi)\big]^{\top} + \big[ \partial_\varphi\big( ( \partial_{I} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big) \big]^{\top}{\mathtt J}\Big) [\partial_\varphi \vartheta_0(\varphi)]^{-\top}.
\end{aligned}
\end{equation}
Observe, from \eqref{L1-L2}, that we have the identity
\begin{equation}\label{d-phi-z}
\partial_\varphi z_0(\varphi)=(\partial_\vartheta \widetilde{z}_0)(\vartheta_0(\varphi))\partial_\varphi \vartheta_0(\varphi).
\end{equation}
Therefore the operator $L_2(\varphi)$ can be written as
\begin{equation}\label{rewriting L2}
L_2(\varphi)=\mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top}[\partial_\varphi z_0(\varphi)]^{\top}\mathcal{J}^{-1}=L_1(\varphi)[\partial_\varphi z_0(\varphi)]^{\top}\mathcal{J}^{-1}.
\end{equation}
From the last two identities we find
\begin{align}
\omega\cdot\partial_\varphi L_2(\varphi)
&=-\mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top}(\omega\cdot\partial_\varphi [\partial_\varphi \vartheta_0(\varphi)]^{\top}) [\partial_\varphi \vartheta_0(\varphi)]^{-\top}[\partial_\varphi z_0(\varphi)]^{\top}\mathcal{J}^{-1}\notag
\\
& \quad +\mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top} [\partial_\varphi (\omega\cdot\partial_\varphi z_0)(\varphi)]^{\top} \mathcal{J}^{-1}\notag
\end{align}
and by \eqref{def Z} we obtain
\begin{align}\label{omg-d-phi-L2}
\omega\cdot\partial_\varphi L_2(\varphi)
&=-\mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top}\Big(\big[\partial_\varphi Z_1(\varphi)\big]^{\top} + \big[ \partial_\varphi\big( ( \partial_{I} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big) \big]^{\top}\Big) L_2(\varphi) \notag
\\ &\quad +\mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top} \Big( \big[\partial_\varphi Z_3(\varphi)\big]^{\top}-\big[ \partial_\varphi\big(( \nabla_{z} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big) \big]^{\top}\Big) .
\end{align}
Gathering \eqref{omg-d-phi-DG}, \eqref{omg-d-phi-L1} and \eqref{omg-d-phi-L2} gives
\begin{align}\label{omdphdg}
&\omega\cdot\partial_\varphi \big( DG_0({\mathtt u}_0)[\widehat {\mathtt u}](\varphi) \big)=DG_0({\mathtt u}_0) \, \omega\cdot\partial_\varphi \widehat {\mathtt u} + \partial_\varphi\big(\omega\cdot\partial_\varphi i_0 \big)\widehat\phi \nonumber
\\ &\quad - \left(
\begin{array}{c}
0
\\
~ \mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top}\Big(\big[\mathcal{C}_I(\varphi) L_1(\varphi) +R_{10}(\varphi) \big] \widehat y+\big[\mathcal{C}_I(\varphi) L_2(\varphi)+\mathcal{C}_z(\varphi) + R_{01}(\varphi) \big] \widehat w\Big)
\\
0
\end{array}
\right),
\end{align}
where $R_{10}(\varphi)$ and $R_{01}(\varphi)$ are given by {\rm (ii)}
and
\begin{align}
\mathcal{C}_I(\varphi) &\triangleq \big[ \partial_\varphi\big( ( \partial_{I} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big) \big]^{\top}\label{def CI}\\
&=[\partial_\varphi I_0(\varphi)]^\top( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi))+ [\partial_\varphi \vartheta_0(\varphi)]^\top ( \partial_{\vartheta I} K_\varepsilon^{\alpha_0})(i_0(\varphi))+ [\partial_\varphi z_0(\varphi)]^\top ( \partial_{I} \nabla_z K_\varepsilon^{\alpha_0})(i_0(\varphi)),\notag
\\
\mathcal{C}_z(\varphi) & \triangleq \big[ \partial_\varphi\big( ( \nabla_{z} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big) \big]^{\top}\label{def Cz}\\
&=[\partial_\varphi I_0(\varphi)]^\top( \partial_{zI} K_\varepsilon^{\alpha_0})(i_0(\varphi))+ [\partial_\varphi \vartheta_0(\varphi)]^\top ( \partial_{z\vartheta } K_\varepsilon^{\alpha_0})(i_0(\varphi)) + [\partial_\varphi z_0(\varphi)]^\top ( \partial_{z} \nabla_z K_\varepsilon^{\alpha_0})(i_0(\varphi)).\notag
\end{align}
According to \eqref{operatorF} and \eqref{trasformation G}, one may write
\begin{align*}
\partial_\phi\big[ X_{K_\varepsilon^{\alpha_0}} \big(G_0({\mathtt u}(\varphi))\big)\big]_{{\mathtt u}={\mathtt u}_0} \widehat\phi &
= \partial_\varphi\big[ X_{K_\varepsilon^{\alpha_0}} (i_0(\varphi))\big] \widehat\phi,
\\
\partial_y\big[ X_{K_\varepsilon^{\alpha_0}} \big(G_0({\mathtt u}(\varphi))\big)\big]_{{\mathtt u}={\mathtt u}_0} \widehat y &
= \left(
\begin{array}{c}
{\mathtt J}( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi)) L_1(\varphi) \widehat y
\\
-{\mathtt J}( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_1(\varphi) \widehat y
\\
\mathcal{J} \big[ ( \partial_I\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_1(\varphi) \widehat y \big]
\end{array}
\right),
\\
\partial_w\big[ X_{K_\varepsilon^{\alpha_0}} \big(G_0({\mathtt u}(\varphi))\big) \big]_{{\mathtt u}={\mathtt u}_0} \widehat w &
=\left(
\begin{array}{c}
{\mathtt J}( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi))L_2(\varphi) \widehat w+{\mathtt J}( \partial_{zI} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \widehat w
\\
-{\mathtt J}( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_2(\varphi) \widehat w- {\mathtt J}( \partial_{z\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \widehat w
\\
\mathcal{J} \big[( \partial_I\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_2(\varphi) \widehat w + ( \partial_{z}\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \widehat w \big]
\end{array}
\right).
\end{align*}
Therefore inserting \eqref{omdphdg} and the last three identities into \eqref{differential-composition} we get
\begin{align}\label{dfcirg}
&d_{({\mathtt u},\alpha)} ({\mathcal F} \circ G_0)({\mathtt u}_0,\alpha_0)
[(\widehat {\mathtt u}, \widehat \alpha )]= DG_0({\mathtt u}_0) \, \omega\cdot\partial_\varphi \widehat {\mathtt u} + \partial_\varphi\big[{\mathcal F}(i_0(\varphi)) \big] \widehat\phi \nonumber
\\
& + \left(
\begin{array}{c}
-{\mathtt J}( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi)) L_1(\varphi) \widehat y
\\
{\mathtt J}( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_1(\varphi)\widehat y -{\mathtt J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top} [\mathcal{C}_I(\varphi) L_1(\varphi) +R_{10}(\varphi) ]\widehat y
\\
- \mathcal{J} ( \partial_I\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_1(\varphi) \widehat y
\end{array}
\right)
\nonumber
\\
&+\left(
\begin{array}{c}
-{\mathtt J}( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi))L_2(\varphi) \widehat w-{\mathtt J}( \partial_{zI} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \widehat w
\\
\big[{\mathtt J}( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_2(\varphi) + {\mathtt J}( \partial_{z\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big] \widehat w
\\
- \mathcal{J} \big[( \partial_I\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_2(\varphi) \widehat w + ( \partial_{z}\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \widehat w \big]
\end{array}
\right)\nonumber
\\ &- \left(
\begin{array}{c}
0
\\
~ \mathtt{J}[\partial_\varphi \vartheta_0(\varphi)]^{-\top}\big[\mathcal{C}_I(\varphi) L_2(\varphi)+ \mathcal{C}_z(\varphi) +R_{01}(\varphi) \big] \widehat w
\\
0
\end{array}
\right)- \left(
\begin{array}{c}
\mathtt{J}\widehat\alpha \\
0
\\
0
\end{array}
\right).
\end{align}
From \eqref{Differential G0}, \eqref{d-phi-z} and \eqref{rewriting L2}, one may easily check that
\begin{align*}
& [D G_0({\mathtt u}_0)]^{-1}
=
\begin{pmatrix}
[\partial_\varphi \vartheta_0(\varphi)]^{-1} & 0 & 0 \\
-\mathcal{B}(\varphi){\mathtt J} & [\partial_\varphi \vartheta_0(\varphi)]^\top {\mathtt J}&
-[\partial_\varphi z_0(\varphi)]^\top \mathcal{J}^{-1} \\
-(\partial_\vartheta \widetilde{z}_0)(\vartheta_0(\varphi)) & 0 & I
\end{pmatrix},
\end{align*}
where $\mathcal{B}(\varphi)$ is given by {\rm(i)}.
Finally, applying $[D G_0({\mathtt u}_0)]^{-1}$ to \eqref{dfcirg} and using \eqref{def CI}, \eqref{def Cz}
we obtain
\begin{align*}
& [D G_0({\mathtt u}_0)]^{-1} d_{({\mathtt u},\alpha)} ({\mathcal F} \circ G_0)({\mathtt u}_0,\alpha_0)
[\widehat {\mathtt u}, \widehat \alpha ]
= \omega\cdot\partial_\varphi \widehat {\mathtt u}+ [D G_0({\mathtt u}_0)]^{-1} \partial_\varphi\big[{\mathcal F}(i_0(\varphi)) \big] \widehat\phi
\\
& + \left(
\begin{array}{c}
-K_{20}(\varphi)\widehat y
\\
\mathcal{A}(\varphi) K_{20}(\varphi) \widehat y -R_{10}(\varphi) \widehat y \\
-\mathcal{J} K_{11}(\varphi) \widehat y
\end{array}
\right)
+ \left(
\begin{array}{c}
-K_{11}^\top(\varphi) \widehat w
\\
\mathcal{A}(\varphi) K_{11}^\top(\varphi) \widehat w -R_{01}(\varphi) \widehat w \\
-\mathcal{J} K_{02}(\varphi)\widehat w
\end{array}
\right)
+ \left(
\begin{array}{c}
- [\partial_\varphi \vartheta_0(\varphi)]^{-1}\mathtt{J}\widehat \alpha
\\
\mathcal{B}(\varphi) \widehat\alpha \\
~ [(\partial_\vartheta \widetilde{z}_0)(\vartheta_0(\varphi)) ]\mathtt{J}\widehat\alpha
\end{array}
\right),
\end{align*}
where $\mathcal{A}(\varphi)$ is defined in {\rm (ii)} and satisfies
$\mathcal{B}(\varphi)=\mathcal{A}(\varphi) L_1^\top(\varphi) +[\partial_\varphi I_0(\varphi)]^\top,$
and
\begin{align*}
K_{20}(\varphi)&\triangleq L_1^\top(\varphi) ( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi)) L_1(\varphi) \, ,
\\
K_{11}(\varphi)&\triangleq L_2^\top(\varphi)( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi))L_1(\varphi) + ( \partial_I\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_1(\varphi)\, ,
\\
K_{02}(\varphi)& \triangleq ( \partial_{z}\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) +L_2^\top(\varphi) ( \partial_{II} K_\varepsilon^{\alpha_0})(i_0(\varphi)) L_2(\varphi)
+L_2^\top(\varphi)( \partial_{zI} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \\ &\quad + (\partial_I\nabla_z K_\varepsilon^{\alpha_0}) (i_0(\varphi)) L_2(\varphi). \nonumber
\end{align*}
This together with \eqref{K alpha} gives the desired identity, concluding the proof of Proposition~\ref{Proposition-Conjugation}.
\end{proof}
Next, in order to prove that the remainder $\mathbb{E}$ is of size $Z$, we shall prove that the matrix $\mathcal{A}$, defined in Proposition \ref{Proposition-Conjugation}-{\rm (ii)}, vanishes at an exact solution on some Cantor-like set, up to an exponentially small remainder. In particular, we shall prove the following lemma.
\begin{lem}\label{lemma1est}
The coefficients of the matrix $\mathcal{A}$, given by
\begin{align*}
\mathcal{A}_{kj}(\varphi)
\triangleq [\mathtt{J} \partial_{\varphi_j} I_0(\varphi)]\cdot\partial_{\varphi_k} \vartheta_0(\varphi)
-[\mathtt{J} \partial_{\varphi_j} \vartheta_0(\varphi)]\cdot\partial_{\varphi_k} I_0(\varphi)
-\big\langle\mathcal{J}^{-1} \partial_{\varphi_j} z_0(\varphi) ,\partial_{\varphi_k} z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})},
\end{align*}
satisfy, for all $\varphi\in\mathbb{T}^d$,
\begin{align*}
\omega\cdot\partial_\varphi \mathcal{A}_{kj}(\varphi)
&=[{\mathtt J} \partial_{\varphi_j} Z_2(\varphi)]\cdot\partial_{\varphi_k} \vartheta_0(\varphi)
-[{\mathtt J} \partial_{\varphi_j} Z_1(\varphi)] \cdot\partial_{\varphi_k} I_0(\varphi)
-\big\langle\mathcal{J}^{-1} \partial_{\varphi_j} Z_3(\varphi),\partial_{\varphi_k} z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}\nonumber
\\
&\quad+ [{\mathtt J} \partial_{\varphi_j} I_0(\varphi)]\cdot\partial_{\varphi_k} Z_1(\varphi)
-[{\mathtt J} \partial_{\varphi_j} \vartheta_0(\varphi)]\cdot\partial_{\varphi_k} Z_2(\varphi)
-\big\langle\mathcal{J}^{-1} \partial_{\varphi_j} z_0(\varphi) ,\partial_{\varphi_k} Z_3(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}.
\end{align*}
\end{lem}
\begin{proof}
From the expression of the coefficients $\mathcal{A}_{kj}$ one has
\begin{align*}
\omega\cdot\partial_\varphi \mathcal{A}_{kj}(\varphi)
&=\big\langle \mathtt{J}\partial_{\varphi_j} \omega\cdot\partial_\varphi I_0(\varphi),\partial_{\varphi_k} \vartheta_0(\varphi)\big\rangle_{\mathbb{R}^d}+\big\langle \mathtt{J} \partial_{\varphi_j} I_0(\varphi),\partial_{\varphi_k} \omega\cdot\partial_\varphi\vartheta_0(\varphi)\big\rangle_{\mathbb{R}^d}
\\
&
-\big\langle \mathtt{J}\partial_{\varphi_j} \omega\cdot\partial_\varphi\vartheta_0(\varphi),\partial_{\varphi_k} I_0(\varphi)\big\rangle_{\mathbb{R}^d}
-\big\langle \mathtt{J}\partial_{\varphi_j} \vartheta_0(\varphi),\partial_{\varphi_k} \omega\cdot\partial_\varphi I_0(\varphi)\big\rangle_{\mathbb{R}^d}
\\
&
-\big\langle\mathcal{J}^{-1} \partial_{\varphi_j} \omega\cdot\partial_\varphi z_0(\varphi) ,\partial_{\varphi_k} z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}-\big\langle\mathcal{J}^{-1} \partial_{\varphi_j} z_0(\varphi) ,\partial_{\varphi_k} \omega\cdot\partial_\varphi z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}.
\end{align*}
In view of \eqref{def Z} we get
\begin{align}\label{omdphiakj}
\omega\cdot\partial_\varphi \mathcal{A}_{kj}(\varphi)
&=\big\langle \mathtt{J}\partial_{\varphi_j} Z_2(\varphi),\partial_{\varphi_k} \vartheta_0(\varphi)\big\rangle_{\mathbb{R}^d}+\big\langle \mathtt{J}\partial_{\varphi_j} I_0(\varphi),\partial_{\varphi_k} Z_1(\varphi)\big\rangle_{\mathbb{R}^d}
\\
&
\quad-\big\langle \mathtt{J}\partial_{\varphi_j} Z_1(\varphi),\partial_{\varphi_k} I_0(\varphi)\big\rangle_{\mathbb{R}^d}
-\big\langle \mathtt{J}\partial_{\varphi_j} \vartheta_0(\varphi),\partial_{\varphi_k} Z_2(\varphi)\big\rangle_{\mathbb{R}^d}\nonumber
\\
&
\quad-\big\langle \mathcal{J}^{-1} \partial_{\varphi_j} z_0(\varphi) ,\partial_{\varphi_k} Z_3(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}
-\big\langle \mathcal{J}^{-1} \partial_{\varphi_j} Z_3(\varphi) ,\partial_{\varphi_k} z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}\nonumber
\\ &\quad+\mathcal{B}_{kj}^1(\varphi)+\mathcal{B}_{kj}^2(\varphi)+\mathcal{B}_{kj}^3(\varphi),\nonumber
\end{align}
where
\begin{align*}
\mathcal{B}_{kj}^1(\varphi)&\triangleq-\big\langle \partial_{\varphi_j} ( \partial_{I} K_\varepsilon^{\alpha_0}) (i_0(\varphi)),\partial_{\varphi_k} I_0(\varphi)\big\rangle_{\mathbb{R}^d}+\big\langle\partial_{\varphi_j} I_0(\varphi),\partial_{\varphi_k} ( \partial_{I} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big\rangle_{\mathbb{R}^d},
\\
\mathcal{B}_{kj}^2(\varphi)&\triangleq-\big\langle \partial_{\varphi_j} ( \partial_{\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)),\partial_{\varphi_k} \vartheta_0(\varphi)\big\rangle_{\mathbb{R}^d}+\big\langle\partial_{\varphi_j} \vartheta_0(\varphi),\partial_{\varphi_k} ( \partial_{\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big\rangle_{\mathbb{R}^d},
\\
\mathcal{B}_{kj}^3(\varphi)&\triangleq \big\langle \partial_{\varphi_j} z_0(\varphi) ,\partial_{\varphi_k} ( \nabla_{z} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}
-\big\langle \partial_{\varphi_j} ( \nabla_{z} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) ,\partial_{\varphi_k} z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}.
\end{align*}
Straightforward computations lead to
\begin{align*}
\mathcal{B}_{kj}^1(\varphi)&
=-\big\langle ( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_j}\vartheta_0(\varphi),\partial_{\varphi_k} I_0(\varphi)\big\rangle_{\mathbb{R}^d}
-\big\langle ( \partial_{zI} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\partial_{\varphi_j} z_0 (\varphi),\partial_{\varphi_k} I_0(\varphi)\big\rangle_{\mathbb{R}^d}
\\ & \quad
+\big\langle\partial_{\varphi_j} I_0(\varphi), ( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\partial_{\varphi_k}\vartheta_0(\varphi)\big\rangle_{\mathbb{R}^d}
+\big\langle\partial_{\varphi_j} I_0(\varphi),( \partial_{I z} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_k} z_0(\varphi)\big\rangle_{\mathbb{R}^d},\nonumber
\\
\mathcal{B}_{kj}^2(\varphi)&=-\big\langle ( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_j}I_0(\varphi),\partial_{\varphi_k} \vartheta_0(\varphi)\big\rangle_{\mathbb{R}^d}
-\big\langle ( \partial_{z\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\partial_{\varphi_j} z_0 (\varphi),\partial_{\varphi_k} \vartheta_0(\varphi)\big\rangle_{\mathbb{R}^d}
\\ &
\quad+\big\langle\partial_{\varphi_j} \vartheta_0(\varphi), ( \partial_{I\vartheta} K_\varepsilon^{\alpha_0}) (i_0(\varphi))\partial_{\varphi_k}I_0(\varphi)\big\rangle_{\mathbb{R}^d}
+\big\langle\partial_{\varphi_j} \vartheta_0(\varphi),( \partial_{\vartheta z} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_k} z_0(\varphi)\big\rangle_{\mathbb{R}^d},\nonumber
\\
\mathcal{B}_{kj}^3(\varphi)&=
\big\langle (\partial_I \nabla_{z} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_k}I_0(\varphi) ,\partial_{\varphi_j} z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}
\\
&\quad-\big\langle \partial_{\varphi_k} z_0(\varphi) , ( \partial_I \nabla_{z} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_j}I_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}
\\ &\quad+\big\langle( \partial_\vartheta \nabla_{ z} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_k}\vartheta_0(\varphi),\partial_{\varphi_j} z_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}
\\
&\quad-\big\langle \partial_{\varphi_k} z_0(\varphi) ,( \partial_\vartheta \nabla_{ z} K_\varepsilon^{\alpha_0}) (i_0(\varphi)) \partial_{\varphi_j}\vartheta_0(\varphi)\big\rangle_{L^2(\mathbb{T})\times L^2(\mathbb{T})}.\nonumber
\end{align*}
Combining the last three identities we obtain
$$
\mathcal{B}_{kj}^1(\varphi)+\mathcal{B}_{kj}^2(\varphi)+\mathcal{B}_{kj}^3(\varphi)=0.
$$
This with \eqref{omdphiakj} concludes the proof of the lemma.
\end{proof}
We define the sequence $(N_n)_{n\in\N\cup\{-1\}}$ as
\begin{equation}\label{def geo Nn}
N_{-1}\triangleq 1,\qquad \forall n\in\mathbb{N},\quad N_{n}\triangleq N_{0}^{\left(\frac{3}{2}\right)^{n}},\qquad \hbox{with} \qquad N_0\geqslant2.
\end{equation}
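As a quick illustration (a numerical sketch, not needed for the proofs), the sequence \eqref{def geo Nn} grows super-exponentially and satisfies the recursion $N_{n+1}=N_n^{3/2}$ for all $n\in\mathbb{N}$:

```python
# Illustrative check of the sequence N_n = N_0^{(3/2)^n} defined above:
# it satisfies the recursion N_{n+1} = N_n^{3/2}.
def N(n, N0=2):
    """Return N_{-1} = 1, and N_n = N0**((3/2)**n) for n >= 0."""
    if n == -1:
        return 1.0
    return float(N0) ** (1.5 ** n)

# The recursion N_{n+1} = N_n^{3/2} holds (up to rounding) for every n >= 0.
for n in range(6):
    assert abs(N(n + 1) - N(n) ** 1.5) <= 1e-9 * N(n + 1)
```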
The following lemma is proved in \cite[Lemma 5.3]{BM18} and \cite[Lemma 6.2]{HHM21}.
\begin{lem}\label{lem:est-akj}
The coefficients $\mathcal{A}_{kj}$, defined in Lemma $\ref{lemma1est}$, decompose as
\begin{equation}\label{Akj decomposition}
\mathcal{A}_{kj}=\mathcal{A}_{kj}^{(n)}+\mathcal{A}_{kj}^{(n),\perp},\qquad\hbox{with} \qquad \mathcal{A}_{kj}^{(n)}\triangleq \Pi_{N_n}\mathcal{A}_{kj}\qquad \hbox{and}\qquad \mathcal{A}_{kj}^{(n),\perp}\triangleq \Pi_{N_n}^\perp\mathcal{A}_{kj}.
\end{equation}
In addition, the following properties hold true.
\begin{enumerate}
\item The function $\mathcal{A}_{kj}^{(n),\perp}$ satisfies for any $s\in \mathbb{R}$,
\begin{equation*}
\forall \,\mathtt{b}\geqslant 0,\quad\| \mathcal{A}_{k j}^{(n),\perp} \|_{s}^{q,\gamma,\mathtt{m}} \lesssim N_n^{-\mathtt{b}} \| {\mathfrak I}_0 \|_{s+1+\mathtt{b}}^{q,\gamma,\mathtt{m}}.
\end{equation*}
\item There exist functions $\mathcal{A}_{kj}^{(n),\textnormal{ext}}$ defined for any $(b,\omega)\in \mathcal{O}$ and satisfying,
for any $s\geqslant s_0$, the estimate
\begin{equation*}
\| \mathcal{A}_{k j}^{(n),\textnormal{ext}} \|_{s}^{q,\gamma,\mathtt{m}} \lesssim \gamma^{-1}
\big(\| Z \|_{s+\tau_1(q + 1)+1 }^{q,\gamma,\mathtt{m}} + \| Z \|_{s_0+1}^{q,\gamma,\mathtt{m}} \| {\mathfrak I}_0 \|_{s+\tau_1(q + 1) +1}^{q,\gamma,\mathtt{m}}\big)\,.
\end{equation*}
Moreover, $\mathcal{A}_{k j}^{(n),\textnormal{ext}}$ coincides with $\mathcal{A}_{k j}^{(n)}$ on the Cantor set
\begin{equation}\label{DC tau gamma N}
\mathtt {DC}_{N_n} (\gamma, \tau_1) \triangleq \bigcap_{l\in\mathbb{Z}^{d}\setminus\{0\}\atop|l|\leqslant N_n}\Big\{ \omega \in \mathbb{R}^d\quad\textnormal{s.t.}\quad|\omega \cdot l | \geqslant \tfrac{\gamma}{\langle l \rangle^{\tau_1}}\Big\}.
\end{equation}
\end{enumerate}
\end{lem}
\subsection{Construction of an approximate inverse}
According to Proposition \ref{Proposition-Conjugation}-{\rm (ii)} and Lemma \ref{lem:est-akj}, the error term $\mathbb{E}$ is zero at an exact solution, up to an exponentially small remainder on the Cantor set $\mathtt {DC}_{N_n} (\gamma, \tau_1) $. Therefore, in order to find an approximate inverse of the linear operator in \eqref{Id-conj} it is sufficient to almost invert the operator $\mathbb{D}$, which is triangular. More precisely, we first invert the action-component equation, in the linear system $\mathbb{D}[\widehat u]=(g_1,g_2,g_3)$, which is decoupled from the other equations,
$$
\omega\cdot\partial_\varphi \widehat y=g_2-\mathcal{B}(\varphi)\widehat \alpha.
$$
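Schematically, once the right-hand side has zero average, this equation is solved mode by mode: writing $h=\sum_{l\in\mathbb{Z}^d\setminus\{0\}}h_l\,{\bf e}_l$ with ${\bf e}_l(\varphi)\triangleq e^{\mathrm{i}\,l\cdot\varphi}$, one sets
\begin{align*}
\big(\omega\cdot\partial_\varphi\big)^{-1}h\triangleq\sum_{l\in\mathbb{Z}^{d}\setminus\{0\}}\frac{h_{l}}{\mathrm{i}\,\omega\cdot l}\,{\bf e}_{l},
\end{align*}
and the non-resonance conditions $|\omega\cdot l|\geqslant \gamma\langle l\rangle^{-\tau_1}$ of \eqref{DC tau gamma N} control the small divisors, at the cost of a factor $\gamma^{-1}$ and a loss of $\tau_1$ derivatives.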
Then, we shall solve the last normal-component equation
$$
\omega\cdot\partial_\varphi \widehat w-
\mathcal{J} K_{02}(\varphi)\widehat w =g_3+\mathcal{J}\big[K_{11}(\varphi) \widehat y +L_2^{\top}(\varphi)\widehat\alpha \big].
$$
For this aim we need to find an approximate right inverse of the linearized operator in the normal direction
\begin{equation}\label{def hat L}
\widehat{\mathcal{L}}\triangleq \Pi_{\overline{\mathbb{S}}_0}^\bot \big(\omega\cdot \partial_\varphi -
\mathcal{J} K_{02}(\varphi) \big)\Pi_{\overline{\mathbb{S}}_0}^\bot
\end{equation}
when the set of parameters is restricted to a Cantor-like set. Here the projector $\Pi_{\overline{\mathbb{S}}_0}^\bot$ is the one defined in \eqref{proj-nor1}. Finally, we shall solve the first equation in $\mathbb{D}[\widehat u]=(g_1,g_2,g_3)$ after choosing $\widehat \alpha$ in such a way that the right-hand side has zero average.
The following proposition gives a brief statement of the invertibility in the normal direction; the construction of an approximate right inverse of the operator $\widehat{\mathcal{L}}$ is the subject of Section \ref{reduction}, and for a precise statement with a detailed description of the Cantor-like sets we refer to Proposition~\ref{prop inv linfty}.
\begin{prop}\label{thm:inversion of the linearized operator in the normal directions}
Assume the conditions \eqref{setting tau1 and tau2}, \eqref{init Sob cond}, \eqref{p-RR} and \eqref{sml-RR}. Then there exists $\sigma_5\triangleq \sigma_5(\tau_1,\tau_2,q,d)>0$ such that if
\begin{equation*}
\|\mathfrak{I}_0\|_{s_h+\sigma_5}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation*}
then there exists a family of linear operators $\big(\widehat{\mathtt{T}}_{n}\big)_{n\in\mathbb{N}}$ defined in $\mathcal{O}$ and satisfying the estimate
\begin{equation*}
\forall \, s\in\,[ s_0, S],\quad\sup_{n\in\mathbb{N}}\|\widehat{\mathtt{T}}_{n}\rho\|_{s}^{q,\gamma ,\mathtt{m}}\lesssim\gamma^{-1}\left(\|\rho\|_{s+\sigma_5}^{q,\gamma ,\mathtt{m}}+\| \mathfrak{I}_{0}\|_{s+\sigma_5}^{q,\gamma ,\mathtt{m}}\|\rho\|_{s_{0}+\sigma_5}^{q,\gamma,\mathtt{m}}\right)
\end{equation*}
and, for any $n\in\mathbb{N}$, we have the following splitting
$$\widehat{\mathcal{L}}=\widehat{\mathtt{L}}_{n}+\widehat{\mathtt{R}}_{n},\qquad\textnormal{with}\qquad\widehat{\mathtt{L}}_{n}\widehat{\mathtt{T}}_{n}=\textnormal{Id},$$
in a Cantor set
$\mathtt{G}_n\triangleq \mathtt{G}_n(\gamma,\tau_{1},\tau_{2},i_{0})\subset \mathtt {DC}_{N_n} (\gamma, \tau_1) \times (b_*, b^*),$
where the operators $\widehat{\mathtt{L}}_{n}$ and $\widehat{\mathtt{R}}_{n}$ are defined in $\mathcal{O}$ and satisfy
\begin{align*}
\forall s\in[s_{0},S],\quad& \sup_{n\in\mathbb{N}}\|\widehat{\mathtt{L}}_{n}\rho\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\rho\|_{s+1}^{q,\gamma,\mathtt{m}}+\varepsilon\gamma^{-2}\|\mathfrak{I}_{0}\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}+1}^{q,\gamma,\mathtt{m}},\\
\forall s\in[s_{0},S],\quad &\|\widehat{\mathtt{R}}_{n}\rho\|_{s_{0}}^{q,\gamma,\mathtt{m}}\lesssim N_{n}^{s_{0}-s}\gamma^{-1}\left(\|\rho\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}+\varepsilon\gamma^{-2}\|\mathfrak{I}_{0}\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}+\sigma_5}^{q,\gamma,\mathtt{m}}\right)\\
&\qquad\qquad\qquad\quad+\varepsilon\gamma^{-3}N_{0}^{\mu_{2}}N_{n+1}^{-\mu_{2}}\|\rho\|_{s_{0}+\sigma_5}^{q,\gamma,\mathtt{m}}.
\end{align*}
\end{prop}
The main goal is to find an approximate inverse to the operator $[D G_0({\mathtt u}_0)]^{-1} d_{(i,\alpha)} {\mathcal F} (i_0,\alpha_0) D\widetilde G_0({\mathtt u}_0)$ in \eqref{Id-conj}. For this aim, since we require only finitely many non-resonance conditions \eqref{DC tau gamma N}, for any $\omega\in\mathbb{R}^d$, we decompose $\omega\cdot \partial_\varphi$ as
\begin{equation}\label{omeg-phi-decomposition}
\omega\cdot \partial_\varphi=\mathcal{D}_{(n)} +\mathcal{D}_{(n)}^{\perp},\qquad\mathcal{D}_{(n)}\triangleq \omega\cdot \partial_\varphi\,\Pi_{N_n}+ \Pi_{N_n,\mathtt{g}}^\perp, \qquad\mathcal{D}_{(n)}^{\perp}\triangleq \omega\cdot \partial_\varphi\, \Pi_{N_n}^\perp- \Pi_{N_n, \mathtt{g}}^\perp,
\end{equation}
where
$$
\Pi_{N_n, \mathtt{g}}^\perp\sum_{l\in\Z^{d}\setminus\{0\}}h_{l} {\bf{e}}_{l}\triangleq\sum_{l\in\Z^{d}\setminus\{0\}\atop |l|> N_n} \mathtt{g}(l) h_{l} {\bf{e}}_{l}
$$
and the function $\mathtt{g}:\mathbb{Z}^d\setminus \{0\}\to \{-1,1\}$ is defined, for all $l=(l_1,\ldots,l_d)\in \mathbb{Z}^d\setminus \{0\}$, as the sign of the first non-zero component of the vector $l$. Thus, it satisfies
$$\forall l\in \mathbb{Z}^d\setminus \{0\} ,\quad\mathtt{g}(-l)=-\mathtt{g}(l).$$
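For instance, for $l=(0,-2,1)$ the first non-zero component is $-2$, so $\mathtt{g}(l)=-1$ and $\mathtt{g}(-l)=1$. Note also that, since $\Pi_{N_n}+\Pi_{N_n}^{\perp}=\textnormal{Id}$ and the $\mathtt{g}$-corrections cancel, the two pieces in \eqref{omeg-phi-decomposition} indeed recombine into the full transport operator,
$$\mathcal{D}_{(n)}+\mathcal{D}_{(n)}^{\perp}=\omega\cdot\partial_\varphi\big(\Pi_{N_n}+\Pi_{N_n}^{\perp}\big)+\Pi_{N_n,\mathtt{g}}^{\perp}-\Pi_{N_n,\mathtt{g}}^{\perp}=\omega\cdot\partial_\varphi.$$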
The projector $\Pi_{N_n, \mathtt{g}}^\perp$ is used here instead of $\Pi_{N_n}^\perp$ in order to preserve the reversibility property. Then,
according to Proposition~\ref{Proposition-Conjugation}, the identities \eqref{Akj decomposition}-\eqref{omeg-phi-decomposition} and Proposition~\ref{thm:inversion of the linearized operator in the normal directions}, we have the following decomposition
\begin{equation}\label{decomposition conj op}
[D G_0({\mathtt u}_0)]^{-1} d_{(i,\alpha)} {\mathcal F} (i_0,\alpha_0) D\widetilde G_0({\mathtt u}_0)={\mathbb D}_n+{\mathbb E}_n+ {\mathscr P}_n+{\mathscr Q}_n,
\end{equation}
with
\begin{align*}
&{\mathbb D}_n [\widehat \phi, \widehat y, \widehat w, \widehat \alpha ] \triangleq
\left(
\begin{array}{c}
\mathcal{D}_{(n)} \widehat\phi-K_{20}\widehat y-K_{11}^\top \widehat w -L_1^{\top}\widehat \alpha
\\
\mathcal{D}_{(n)} \widehat y+ \mathcal{B} \widehat \alpha \\
\widehat{\mathtt{L}}_{\omega,n} \widehat w -\mathcal{J}\big[K_{11} \widehat y+ L_2^{\top} \widehat\alpha \big]
\end{array}
\right),
\\ &{\mathbb E}_{n} [\widehat \phi, \widehat y, \widehat w] \triangleq
[D G_0({\mathtt u}_0)]^{-1} [\partial_\varphi Z] \widehat\phi+\left(
\begin{array}{c}
0
\\
\mathcal{A}^{(n)}\big[K_{20} \widehat y+K_{11}^\top \widehat w\big]-R_{10}\widehat y-R_{01}\widehat w
\\
0
\end{array}
\right),
\\
&{\mathscr P}_n [\widehat \phi, \widehat y, \widehat w ] \triangleq
\left(
\begin{array}{c}
\mathcal{D}_{(n)}^{\perp} \widehat\phi
\\
\mathcal{D}_{(n)}^{\perp} \widehat y + \mathcal{A}^{(n),\perp}\big[K_{20} \widehat y+K_{11}^\top \widehat w\big]
\\
0
\end{array}
\right),\quad {\mathscr Q}_n [\widehat \phi, \widehat y, \widehat w ] \triangleq
\left(
\begin{array}{c}
0
\\
0
\\
\widehat{\mathtt{R}}_n[ \widehat w]
\end{array}
\right),
\end{align*}
where $\mathcal{A}^{(n)}$ and $\mathcal{A}^{(n),\perp}$ are the matrices with coefficients
$\mathcal{A}_{kj}^{(n)}$ and $\mathcal{A}_{kj}^{(n),\perp}$ respectively, see \eqref{Akj decomposition}.
We define the linear operator $\mathbb{L}_{\textnormal{ext}}$ as
\begin{equation}\label{def Lext}
\mathbb{L}_{\textnormal{ext}}\triangleq \mathbb{D}_n+{\mathbb E}_{n}^{{\rm ext}}+{\mathscr P}_n+{\mathscr Q}_n,
\end{equation}
where the operator ${\mathbb E}_{n}^{{\rm ext}}$ vanishes at exact solutions on the whole set of parameters $\mathcal{O}$ and is given by
\begin{align*}
{\mathbb E}_{n}^{{\rm ext}} [\widehat \phi, \widehat y, \widehat w ] & \triangleq
[D G_0({\mathtt u}_0)]^{-1} [\partial_\varphi Z] \widehat\phi+\left(
\begin{array}{c}
0
\\
\mathcal{A}^{(n),\textnormal{ext}}\big[K_{20}(\varphi) \widehat y+K_{11}^\top \widehat w\big]-R_{10}\widehat y-R_{01}\widehat w
\\
0
\end{array}
\right),
\end{align*}
where $\mathcal{A}^{(n),\textnormal{ext}}$ is the matrix with coefficients
$\mathcal{A}_{kj}^{(n),\textnormal{ext}}$, see \eqref{Akj decomposition}.
The operator $\mathbb{L}_{\textnormal{ext}}$ is defined on the whole set $\mathcal{O}$ and, by construction, coincides with the linear operator in \eqref{decomposition conj op} on the Cantor set $\mathtt{G}_n$,
\begin{equation}\label{lext-f}
\forall (b,\omega) \in \mathtt{G}_n,\quad\mathbb{L}_{\textnormal{ext}}=[D G_0({\mathtt u}_0)]^{-1} d_{(i,\alpha)} {\mathcal F} (i_0,\alpha_0) D\widetilde G_0({\mathtt u}_0).
\end{equation}
The following proposition shows that the principal term $\mathbb{D}_n$ has an exact inverse. Its proof can be found in \cite[Prop. 6.3]{HHM21}.
\begin{prop}\label{prop:decomp-lin}
Assume the conditions \eqref{setting tau1 and tau2}, \eqref{init Sob cond}, \eqref{p-RR} and \eqref{sml-RR} hold. There exists $\sigma_6\triangleq \sigma_6(\tau_1,\tau_2,q,d) >0$ such that if
\begin{equation*}
\|\mathfrak{I}_0\|_{s_h+\sigma_6}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation*}
then there exists a family of operators $\big([{\mathbb D}_n]_{\textnormal{ext}}^{-1}\big)_n$ such that for all $ g \triangleq (g_1, g_2, g_3) $
satisfying the reversibility and $\mathtt{m}$-fold symmetry properties
\begin{equation}\label{symmetry g1 g2 g3}
g_1(\varphi) = g_1(- \varphi),\qquad g_2(\varphi) = - g_2(- \varphi),\qquad g_3(\varphi) = - ({\mathcal S} g_3)(\varphi),\qquad(\mathscr{T}_{\mathtt{m}}g_3)(\varphi)=g_3(\varphi),
\end{equation}
the function
$ [{\mathbb D}_n]_{\textnormal{ext}}^{-1} g $
satisfies the estimate, for all $s_0 \leqslant s \leqslant S$,
\begin{equation*}
\| [{\mathbb D}_n]_{\textnormal{ext}}^{-1}g \|_{s}^{q,\gamma,\mathtt{m}}
\lesssim \gamma^{-1} \big( \| g \|_{s + \sigma_6}^{q,\gamma,\mathtt{m}}
+ \| {\mathfrak I}_0 \|_{s + \sigma_6}^{q,\gamma,\mathtt{m}}
\| g \|_{s_0 + \sigma_6}^{q,\gamma,\mathtt{m}} \big)
\end{equation*}
and for all $(b,\omega) \in \mathtt{G}_n$ one has
$$
{\mathbb D}_n [{\mathbb D}_n]_{\textnormal{ext}}^{-1} =\textnormal{Id}.
$$
\end{prop}
Coming back to the linear operator $d_{i,\alpha}\mathcal{F}(i_{0},\alpha_{0})$, according to \eqref{def Lext} and
\eqref{lext-f}, on the Cantor set $ \mathtt{G}_n, $ we have the decomposition
\begin{equation*}
\begin{aligned}
d_{i,\alpha}\mathcal{F}(i_{0},\alpha_{0})
&=DG_{0}({\mathtt u}_0) \, {\mathbb{D}}_n\, [D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}+ DG_{0}({\mathtt u}_0) \, {\mathbb E}_{n}^{{\rm ext}} \, [D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}\\ &\quad+DG_{0}({\mathtt u}_0)\, {\mathscr P}_n\, [D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}+DG_{0}({\mathtt u}_0) \, {\mathscr Q}_n\, [D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}.
\end{aligned}
\end{equation*}
Applying the operator
\begin{equation}\label{def inverse T}
{\rm T}_0 \triangleq {\rm T}_0(i_0) \triangleq D { \widetilde G}_0({\mathtt u}_0)\, [{\mathbb D}_n]_{\textnormal{ext}}^{-1}\,[D G_0({\mathtt u}_0)]^{-1}
\end{equation}
to the right of the last identity we get for all $(b,\omega)\in \mathtt{G}_n,$
\begin{align*}
d_{i,\alpha}\mathcal{F}(i_{0},\alpha_{0}) {\rm T}_{0}-\textnormal{Id}=\mathcal{E}_1^{(n)}+\mathcal{E}_2^{(n)}+\mathcal{E}_3^{(n)}\quad \textnormal{with}\quad\begin{array}[t]{rcl}
&\mathcal{E}_1^{(n)}\triangleq DG_{0}({\mathtt u}_0) \, {\mathbb E}_{n}^{{\rm ext}} \, [D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}{\rm T}_{0},
\\
&\mathcal{E}_2^{(n)}\triangleq DG_{0}({\mathtt u}_0)\, {\mathscr P}_n\, [D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}{\rm T}_{0},
\\
&\mathcal{E}_3^{(n)}\triangleq DG_{0}({\mathtt u}_0)\, {\mathscr Q}_n\, [D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}{\rm T}_{0}.
\end{array}
\end{align*}
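The cancellation of the principal part behind this identity is immediate: by the definition \eqref{def inverse T} of ${\rm T}_0$ and Proposition \ref{prop:decomp-lin}, for all $(b,\omega)\in\mathtt{G}_n$,
$$DG_{0}({\mathtt u}_0)\,{\mathbb{D}}_n\,[D\widetilde{G}_{0}({\mathtt u}_0)]^{-1}\,{\rm T}_{0}=DG_{0}({\mathtt u}_0)\,{\mathbb{D}}_n\,[{\mathbb D}_n]_{\textnormal{ext}}^{-1}\,[D G_0({\mathtt u}_0)]^{-1}=\textnormal{Id},$$
so only the contributions of ${\mathbb E}_{n}^{{\rm ext}}$, ${\mathscr P}_n$ and ${\mathscr Q}_n$ survive in $d_{i,\alpha}\mathcal{F}(i_{0},\alpha_{0}) {\rm T}_{0}-\textnormal{Id}$.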
Consequently, the operator ${\rm T}_0$ is an approximate right inverse for $d_{i,\alpha} {\mathcal F}(i_0,\alpha_0)$. In particular, we have the following result, whose proof is similar to \cite[Theorem 5.1]{HR21}.
\begin{theo} \label{theo appr inv}
{\bf (Approximate inverse)}
Let $(\gamma,q,d,\tau_{1},s_{0},\mu_2,s_h,S)$ satisfy \eqref{setting tau1 and tau2}--\eqref{init Sob cond} and \eqref{p-RR}--\eqref{sml-RR}. There exists $ { \overline\sigma}= { \overline\sigma}(\tau_1,\tau_2,d,q)>0$ such that if
\begin{equation}\label{bnd frkIn-final}
\|\mathfrak{I}_0\|_{s_h+\overline\sigma}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation}
then for smooth $ g = (g_1, g_2, g_3) $, satisfying \eqref{symmetry g1 g2 g3},
the operator $ {\rm T}_0 $ defined in \eqref{def inverse T} is reversible, $\mathtt{m}$-fold preserving and satisfies
\begin{equation}\label{tame T0}
\forall s\in [s_0,S],\quad \| {\rm T}_0 g\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\gamma^{-1}\left(\|g\|_{s+{\overline\sigma}}^{q,\gamma,\mathtt{m}}+\|\mathfrak{I}_{0}\|_{s+{\overline\sigma}}^{q,\gamma,\mathtt{m}}\|g\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\right).
\end{equation}
Moreover, ${\rm T}_0$ is an almost-approximate right inverse of $d_{i, \alpha}
\mathcal{ F}(i_0, \alpha_0)$ on the Cantor set $ \mathtt{G}_n$. More precisely, for all $(b,\omega)\in \mathtt{G}_n $ one has
\begin{equation}\label{splitting of approximate inverse}
d_{i,\alpha} \mathcal{ F} (i_0,\alpha_0) {\rm T}_0
- {\rm Id} = \mathcal{E}^{(n)}_1+\mathcal{E}^{(n)}_2+\mathcal{E}^{(n)}_3,
\end{equation}
where the operators $\mathcal{E}^{(n)}_1$, $\mathcal{E}^{(n)}_2$ and $\mathcal{E}^{(n)}_3$ are defined in the whole set $\mathcal{O}$ with the estimates
\begin{align}
\|{\mathcal{E}_1^{(n)}} \rho \|_{s_0}^{q,\gamma,\mathtt{m}} & \lesssim \gamma^{-1 } \| \mathcal{ F}(i_0, \alpha_0) \|_{s_0 +\overline\sigma}^{q,\gamma,\mathtt{m}} \|\rho\|_{s_0 + \overline\sigma}^{q,\gamma,\mathtt{m}},\label{calE1}
\\
\forall\, \mathtt{b}\geqslant 0,
\quad \| \mathcal{E}_2^{(n)} \rho \|_{s_0}^{q,\gamma,\mathtt{m}}& \lesssim
\gamma^{-1} N_n^{-\mathtt{b}} \big(\|\rho\|_{s_0 +\overline\sigma +\mathtt{b}}^{q,\gamma,\mathtt{m}}+
\|\mathfrak{I}_{0}\|_{s_0+\overline\sigma+\mathtt{b}}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_0+\overline\sigma}^{q,\gamma,\mathtt{m}}\big),\label{calE2}
\\
\forall\, \mathtt{b}\in [0,S],
\quad \| \mathcal{E}_3^{(n)} \rho \|_{s_0}^{q,\gamma,\mathtt{m}}& \lesssim N_n^{-\mathtt{b}}\gamma^{-2}\Big( \|\rho\|_{s_0+\mathtt{b}+\overline\sigma}^{q,\gamma,\mathtt{m}}+{\varepsilon\gamma^{-2}}\| \mathfrak{I}_{0}\|_{s_0+\mathtt{b}+\overline\sigma}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_0+{\overline\sigma}}^{q,\gamma,\mathtt{m}} \Big)\label{calE3}\\ &\quad\quad+ \varepsilon\gamma^{-4}N_{0}^{{\mu}_{2}}{N_{n}^{-\mu_{2}}} \|\rho\|_{s_0+\overline\sigma}^{q,\gamma,\mathtt{m}}.\nonumber
\end{align}
\end{theo}
\section{Reduction}\label{reduction}
This section is devoted to the reducibility of the linearized operator associated to the nonlinear equation \eqref{nonlinear-func}, whose structure is detailed in Proposition \ref{prop:conjP}. The first main step is to conjugate it into a diagonal matrix Fourier multiplier using a suitable quasi-periodic symplectic change of coordinates as in \cite{HHM21,HR21}. The second part deals with the asymptotic structure of the operator localized on the normal directions. In the last part, we focus on the remainder reduction. To formulate our statements we need to introduce the following parameters.
\begin{equation}\label{param}
\begin{array}{ll}
s_{l}\triangleq s_0+\tau_1 q+\tau_1 +2,\qquad\qquad& \overline{\mu}_2\triangleq 4\tau_1 q+6\tau_1 +3, \\
\overline{s}_{l}\triangleq s_l+\tau_2 q+\tau_2 , & \overline{s}_{h}\triangleq \frac{3}{2}\overline{\mu}_{2}+s_{l}+1
\end{array}
\end{equation}
and
\begin{equation}\label{sigma-F}
\sigma_{1}\triangleq s_0+\tau_1 q+2\tau_1+4,\qquad \sigma_2\triangleq\sigma_{1}+3.
\end{equation}
Throughout this section, we shall work under the following assumption
\begin{align}\label{ouvert-sym}
\mathcal{O}\triangleq(b_*,b^*)\times \mathscr{U},\qquad\hbox{with}\qquad 0<b_*<b^*<1\qquad \hbox{and} \qquad \mathtt{m}\geqslant \mathtt{m}^*,
\end{align}
where $\mathtt{m}^*$ is defined in Corollary \ref{coro-equilib-freq}. The set $\mathscr{U}$ is an open subset of $\mathbb{R}^{d}$ containing the equilibrium frequency vector curve, namely, we choose
\begin{equation}\label{def scrU}
\mathscr{U}\triangleq B(0,R_0)\qquad\textnormal{s.t.}\qquad\omega_{\textnormal{Eq}}\big([b_*,b^*]\big)\subset B\big(0,\tfrac{R_0}{2}\big),\qquad R_0>0.
\end{equation}
We denote
$$\mathbf{H}^s_{\perp,\m}\triangleq\mathbf{H}_{\mathtt{m}}^{s}\cap \mathbf{H}_{\overline{\mathbb{S}}_0}^{\perp}$$
and equip this space with the Sobolev norm.
\subsection{Structure of the linearized operator restricted to the normal directions}
Here, we present the structure of the linearized operator in the normal directions
\begin{equation*}
\widehat{\mathcal{L}}=\widehat{\mathcal{L}}(i_0)= \Pi_{\overline{\mathbb{S}}_0}^\bot \big(\omega\cdot \partial_\varphi -
\mathcal{J} K_{02}(\varphi) \big)\Pi_{\overline{\mathbb{S}}_0}^\bot
\end{equation*} defined through \eqref{def hat L} and \eqref{def K02}, where $i_0=(\vartheta_0,I_0,z_0)$ is an $\mathtt{m}$-fold reversible torus (satisfying \eqref{parity solution}) whose periodic component $\mathfrak{I}_0$ satisfies the smallness condition
$$\|\mathfrak{I}_0\|_{s_{0}+2}^{q,\gamma,\mathtt{m}}\leqslant 1,$$
given in Lemma \ref{tame X per}. The linear operator $\widehat{\mathcal{L}}$ decomposes as a finite rank perturbation of the linearized operator associated with the original problem, as the following shows.
We refer the reader to \cite[Prop. 6.1]{HR21} for a detailed proof that one can adapt to our matrix case. We mention that the $\mathtt{m}$-fold symmetry property can also be easily tracked.
\begin{prop}\label{lemma-normal-s}
Let $(\gamma,q,d,s_{0})$ satisfy \eqref{init Sob cond}.
Then the operator $\widehat{\mathcal{L}}$ defined in \eqref{def hat L} takes the form
$$\widehat{\mathcal{L}}=\Pi_{\overline{\mathbb{S}}_0}^{\perp}\left(\mathcal{L}-\varepsilon\partial_\theta\mathcal{R}\right)\Pi_{\overline{\mathbb{S}}_0}^{\perp},\qquad \mathcal{L}\triangleq\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m}}+\mathfrak{L}_{\varepsilon r},\qquad \mathcal{R}\triangleq \begin{pmatrix} \mathcal{T}_{J_{1,1}}({r}) & \mathcal{T}_{J_{1,2}}({r})\\
\mathcal{T}_{J_{2,1}}({r}) & \mathcal{T}_{J_{2,2}}({r})
\end{pmatrix},
$$
where
$$
\mathbf{I}_{\mathtt{m}}\triangleq \begin{pmatrix}
\mathbb{I}_{\mathtt{m}} &0\\
0& \mathbb{I}_{\mathtt{m}}
\end{pmatrix}
$$
denotes the identity map of $L^2_{\mathtt{m}}(\mathbb{T}^{d+1})\times L^2_{\mathtt{m}}(\mathbb{T}^{d+1})$. The operator $\mathfrak{L}_{\varepsilon r}$ is defined in Proposition \ref{prop:conjP} and from \eqref{aa-coord-00} we have
\begin{align*}r(\varphi)
&=\mathbf{A}\big(\vartheta_{0}(\varphi),\, I_{0}(\varphi),z_0(\varphi)\big),
\end{align*}
with $\mathbf{A}$ as in \eqref{aa-coord-00}, supplemented with the reversibility and $\mathtt{m}$-fold properties
\begin{equation*}
r(-\varphi,-\theta)=r(\varphi,\theta)=r\big(\varphi,\theta+\tfrac{2\pi}{\mathtt{m}}\big).
\end{equation*}
Moreover, for any $k,n\in\{1,2\},$ the operator $\mathcal{T}_{J_{k,n}}({r})$ is an integral operator of the form \eqref{Top-op1}, whose kernel
$J_{k,n}(r)$ is $\mathtt{m}$-fold reversibility preserving. In addition, under the assumption
\begin{equation}\label{frakI0 bnd}
\|\mathfrak{I}_0\|_{s_0}^{q,\gamma,\m}\leqslant 1,
\end{equation} we have for all $s\geqslant s_{0}$,
\begin{enumerate}[label=(\roman*)]
\item the function $r$ satisfies the estimates,
\begin{equation}\label{esti r I0}
\|r\|_{s}^{q,\gamma,\m}\lesssim 1+\|\mathfrak{I}_{0}\|_{s}^{q,\gamma,\m}
\end{equation}
and
\begin{equation}\label{esti r I0d}
\|\Delta_{12}r\|_{s}^{q,\gamma,\m}\lesssim\|\Delta_{12}i\|_{s}^{q,\gamma,\m}+\| \Delta_{12}i\|_{s_0}^{q,\gamma,\m}\max_{\ell\in\{1,2\}}\|\mathfrak{I}_{\ell}\|_{s}^{q,\gamma,\m}.
\end{equation}
\item for any $k,n\in\{1,2\},$ the kernel $J_{k,n}$ satisfies the following estimates
\begin{equation}\label{e-Jkn}
\|J_{k,n}\|_{s}^{q,\gamma,\m}\lesssim 1+\|\mathfrak{I}_{0}\|_{s+3}^{q,\gamma,\m}
\end{equation}
and
\begin{equation}\label{e-Jknd}
\|\Delta_{12}J_{k,n}\|_{s}^{q,\gamma,\m}\lesssim\|\Delta_{12}i\|_{s+3}^{q,\gamma,\m}+\|\Delta_{12}i\|_{s_0+3}^{q,\gamma,\m}\max_{\ell\in\{1,2\}}\|\mathfrak{I}_{\ell}\|_{s+3}^{q,\gamma,\m}.
\end{equation}
Here $\displaystyle\mathfrak{I}_{\ell}(\varphi)=i_{\ell}(\varphi)-(\varphi,0,0), $ and for any function $f$, $\Delta_{12} f\triangleq f(i_1)-f(i_2)$ refers to the difference of $f$ taken at two different states $i_1$ and $i_2$ satisfying \eqref{frakI0 bnd}.
\end{enumerate}
\end{prop}
\subsection{Reduction of the transport part}
The main purpose is to reduce to constant coefficients the transport parts in the linearized operator, described in Proposition \ref{prop:conjP}. Notice that the transport operator is diagonal, therefore we shall reduce each scalar component separately. This was done by a KAM iterative scheme in \cite{HHM21,HR21}, in the same spirit as the papers \cite{BBMH18,BM21,BFM21,FGMP19}. We skip the proof of the following proposition since it is the same as in \cite[Prop. 6.2]{HR21}, where the scheme is initialized by \eqref{f0+-}, \eqref{es-f0} and \eqref{sml-r0}. Moreover, the persistence of the $\mathtt{m}$-fold symmetry property can be easily checked along the scheme.
\begin{prop}\label{prop strighten}
Assume the conditions \eqref{init Sob cond}--\eqref{setting tau1 and tau2} and \eqref{param}--\eqref{ouvert-sym} hold. Let $\upsilon \in(0,\tfrac{1}{q+1}]$. For any $(\mu_2,\mathtt{p},s_h)$ satisfying
\begin{equation}\label{p-trs}
\mu_{2}\geqslant \overline{\mu}_{2}, \qquad\mathtt{p}\geqslant 0,\qquad s_{h}\geqslant\max\left(\tfrac{3}{2}\mu_{2}+s_{l}+1,\overline{s}_{h}+\mathtt{p}\right),
\end{equation}
there exists $\varepsilon_{0}>0$ such that if
\begin{equation}\label{sml-trs}
\varepsilon\gamma^{-1}N_{0}^{\mu_{2}}\leqslant\varepsilon_{0}\qquad\textnormal{and}\qquad \|\mathfrak{I}_{0}\|_{s_{h}+\sigma_{1}}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation}
then for all $k\in\{1,2\}$ there exist
$
\mathtt{c}_k\triangleq \mathtt{c}_k(b,\omega,i_0)\in W^{q,\infty,\gamma }(\mathcal{O},\mathbb{C})$ and $\beta_{k}\triangleq \beta_k(b,\omega,i_0)\in W^{q,\infty,\gamma }(\mathcal{O},H_{\m}^{S})$
such that the following results hold true.
\begin{enumerate}[label=(\roman*)]
\item The constants $\mathtt{c}_k$ satisfy the following estimate,
\begin{equation}\label{sml-r0}
\| \mathtt{v}_k-\mathtt{c}_k\|^{q,\gamma}\lesssim\varepsilon,
\end{equation}
where $\mathtt{v}_k$ is defined in \eqref{def V10 V20}.
\item The transformations $\mathscr{B}_{k}^{\pm 1}$, related to the functions $\beta_{k}$ and $\widehat{\beta}_{k}$ through \eqref{def symplctik CVAR}-\eqref{mathscrB1}, are $\mathtt{m}$-fold reversibility preserving and satisfy the following estimates: for all $s\in[s_{0},S]$
\begin{align}\label{cont Bk}
\|\mathscr{B}_{k}^{\pm 1}\rho\|_{s}^{q,\gamma ,\mathtt{m}}
&\lesssim\|\rho\|_{s}^{q,\gamma ,\mathtt{m}}+\varepsilon\gamma ^{-1}\| \mathfrak{I}_{0}\|_{s+\sigma_{1}}^{q,\gamma ,\m}\|\rho\|_{s_{0}}^{q,\gamma ,\mathtt{m}},
\\
\label{sml betak I0}
\|\widehat{\beta}_{k}\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\beta_{k}\|_{s}^{q,\gamma ,\mathtt{m}}&\lesssim \varepsilon\gamma ^{-1}\left(1+\| \mathfrak{I}_{0}\|_{s+\sigma_{1}}^{q,\gamma ,\mathtt{m}}\right).
\end{align}
\item In the Cantor set
\begin{align}\label{Cant-trs}
\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_{0})\triangleq& \bigcap_{\substack{k\in\{1,2\}\\ (l,j)\in\mathbb{Z}^d\times\mathbb{Z}_{\mathtt{m}}\backslash\{(0,0)\}\\ |l|\leqslant N_{n}}}\left\lbrace(b,\omega)\in \mathcal{O}\quad\textnormal{s.t.}\quad\big|\omega\cdot l+j\mathtt{c}_k(b,\omega)\big|>\tfrac{4\gamma^{\upsilon}\langle j\rangle}{\langle l\rangle^{\tau_1}}\right\rbrace
\end{align}
we have
\begin{equation}\label{Strighten Vk}
\mathscr{B}_k^{-1}\Big(
\omega\cdot\partial_{\varphi} \mathbb{I}_{\mathtt{m}}+\partial_\theta \big(\mathcal{V}_k(\varepsilon r)\, \cdot\big)\Big)\mathscr{B}_k=
\omega\cdot\partial_{\varphi} \mathbb{I}_{\mathtt{m}}+\mathtt{c}_{k}\partial_{\theta} +\mathscr{E}_{n,k},
\end{equation}
where $\mathcal{V}_k$ are defined in Proposition \ref{prop:conjP}-1 and $\mathscr{E}_{n,k}\triangleq \mathscr{E}_{n,k}(b,\omega,i_{0})$ are linear operators satisfying
\begin{equation}\label{scr-Enk}
\|\mathscr{E}_{n,k}\rho\|_{s_{0}}^{q,\gamma,\m}\lesssim\varepsilon N_{0}^{\mu_{2}}N_{n+1}^{-\mu_{2}}\|\rho\|_{s_{0}+2}^{q,\gamma,\m}.
\end{equation}
\item Given two tori $i_{1}$ and $i_{2}$ both satisfying \eqref{sml-trs} (replacing $\mathfrak{I}_{0}$ by $\mathfrak{I}_{1}$ or $\mathfrak{I}_{2}$), we have
\begin{align}\label{diff Vpm}
\|\Delta_{12}\mathtt{c}_k\|^{q,\gamma}&\lesssim\varepsilon\| \Delta_{12}i\|_{\overline{s}_{h}+\sigma_{1}}^{q,\gamma,\mathtt{m}},
\\
\label{diff betak}
\|\Delta_{12}\beta_{k}\|_{\overline{s}_{h}+\mathtt{p}}^{q,\gamma,\mathtt{m}}+\|\Delta_{12}\widehat{\beta}_{k}\|_{\overline{s}_{h}+\mathtt{p}}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\gamma^{-1}\|\Delta_{12}i\|_{\overline{s}_{h}+\mathtt{p}+\sigma_{1}}^{q,\gamma,\mathtt{m}}.
\end{align}
\end{enumerate}
\end{prop}
Define the matrix operators
\begin{equation}\label{def BE}
{\mathscr{B}}\triangleq \begin{pmatrix}
\mathscr{B}_{1} & 0\\
0 & \mathscr{B}_{2}
\end{pmatrix}\qquad\hbox{and} \qquad
{\mathscr{E}}_{n}\triangleq \begin{pmatrix}
\mathscr{E}_{n,1} & 0\\
0 & \mathscr{E}_{n,2}
\end{pmatrix},
\end{equation}
where $\mathscr{B}_{k}$ and $\mathscr{E}_{n,k}$ have been defined in Proposition \ref{prop strighten}. Next, we plan to describe the action of the transformation $\mathscr{B}$ on the linearized operator introduced in Proposition \ref{prop:conjP} and derive some useful estimates.
\begin{prop}\label{prop RTNL}
Assume the conditions \eqref{setting tau1 and tau2}--\eqref{init Sob cond}, \eqref{param}, \eqref{ouvert-sym} and \eqref{p-trs} hold. Then, there exists $\varepsilon_{0}>0$ such that if
\begin{equation}\label{sml-nl}
\varepsilon\gamma^{-1}N_{0}^{\mu_{2}}\leqslant\varepsilon_{0}\qquad\textnormal{and}\qquad \|\mathfrak{I}_{0}\|_{s_{h}+\sigma_{2}}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation}
where $\sigma_2$ is given by \eqref{sigma-F},
then by restricting the parameters to the Cantor set defined in \eqref{Cant-trs} we get
\begin{align}\label{b-1lb}
\mathscr{L}\triangleq {\mathscr{B}}^{-1}\big(\omega\cdot\partial_{\varphi} {\mathbf{I}}_{\mathtt{m}}+\mathfrak{L}_{\varepsilon r}\big) {\mathscr{B}}&=\omega\cdot\partial_{\varphi} \mathbf{I}_{\mathtt{m}}+{\mathscr{D}}+{\mathscr{R}}+{\mathscr{E}}_{n},
\end{align}
where
$$\mathscr{D}\triangleq \begin{pmatrix}
\mathtt{c}_1\partial_{\theta}\, \cdot+\tfrac12\mathcal{H}+\partial_\theta \mathcal{Q}\ast\cdot& 0\\
0 & \mathtt{c}_2\partial_{\theta}\, \cdot-\big(\tfrac12\mathcal{H}+\partial_\theta \mathcal{Q}\ast\cdot\big)
\end{pmatrix},$$
and the operator ${\mathscr{R}}\triangleq {\mathscr{R}}({\varepsilon r})$ is a real, $\mathtt{m}$-fold and reversibility preserving matricial integral operator satisfying
\begin{equation}\label{e-Rnl}
\forall s\in[s_0,S],\quad\interleave{\mathscr{R}}\interleave_{s}^{q,\gamma,\mathtt{m}}\lesssim\varepsilon\gamma^{-1}\Big(1+\|\mathfrak{I}_0\|_{s+\sigma_{2}}^{q,\gamma,\mathtt{m}}\Big).
\end{equation}
Moreover, given two tori $i_{1}$ and $i_{2}$ both satisfying \eqref{sml-nl} (replacing $\mathfrak{I}_{0}$ by $\mathfrak{I}_{1}$ or $\mathfrak{I}_{2}$), we have
\begin{equation}\label{e-Rnld}
\interleave\Delta_{12}{\mathscr{R}}\interleave_{\overline{s}_h+\mathtt{p}}^{q,\gamma,\mathtt{m}}\lesssim\varepsilon\gamma^{-1}\|\Delta_{12}i\|_{\overline{s}_h+\mathtt{p}+\sigma_{2}}^{q,\gamma,\mathtt{m}}.
\end{equation}
\end{prop}
\begin{proof}
From \eqref{def BE} and \eqref{defLr2} we may write
\begin{align*}
{\mathscr{B}}^{-1}\big(\omega\cdot\partial_{\varphi} \mathbf{I}_{\mathtt{m}}+\mathfrak{L}_{\varepsilon r}\big){\mathscr{B}}
&=
\begin{pmatrix}\mathscr{B}_1^{-1}\big(\omega\cdot\partial_{\varphi} \mathbb{I}_{\mathtt{m}}+\partial_\theta \big(\mathcal{V}_1(\varepsilon r)\, \cdot\big)\big)\mathscr{B}_1 &0\\ 0& \mathscr{B}_2^{-1}\big( \omega\cdot\partial_{\varphi} \mathbb{I}_{\mathtt{m}}+\partial_\theta \big(\mathcal{V}_2(\varepsilon r)\, \cdot\big)\big)\mathscr{B}_2\end{pmatrix}
\\
&\quad +\begin{pmatrix}
\tfrac{1}{2}\mathcal{H}+\partial_\theta \mathcal{Q}\ast\cdot & 0\\
0 & -\tfrac{1}{2}\mathcal{H}-\partial_\theta \mathcal{Q}\ast\cdot
\end{pmatrix}+\begin{pmatrix}
\mathscr{R}_{1,1} & \mathscr{R}_{1,2}\\
\mathscr{R}_{2,1} & \mathscr{R}_{2,2}
\end{pmatrix},
\end{align*}
where
\begin{align*}
\quad \mathscr{R}_{k,k'}\triangleq \mathscr{B}_k^{-1}\partial_{\theta}\mathcal{T}_{\mathscr{K}_{k,k'}(\varepsilon r)}\mathscr{B}_{k'}+(-1)^{k+1}\delta_{k,k'}\Big(\tfrac{1}{2}\mathscr{B}_{k}^{-1}\mathcal{H}\mathscr{B}_{k}-\tfrac{1}{2}\mathcal{H}+\mathscr{B}_{k}^{-1}\big(\partial_\theta \mathcal{Q}\ast\cdot\big)\mathscr{B}_{k}-\partial_\theta \mathcal{Q}\ast\cdot\Big)
\end{align*}
and $\delta_{k,k'}$ denotes the usual Kronecker symbol. Putting together \eqref{Strighten Vk} and \eqref{defLr2} yields, in the Cantor set $\mathcal{O}_{\infty,n}^{\gamma,\tau_{1}}(i_{0})$, the decomposition \eqref{b-1lb}
with
\begin{align*}
{\mathscr{R}}&=\begin{pmatrix}
\mathscr{R}_{1,1} & \mathscr{R}_{1,2}\\
\mathscr{R}_{2,1} & \mathscr{R}_{2,2}
\end{pmatrix}.
\end{align*}
The symmetry properties of $\mathcal{Q}$, $\beta$ and $\widehat{\beta}$, given by Proposition \ref{prop:conjP}-2 and Proposition \ref{prop strighten}-2, together with Lemma \ref{lem sym--rev} and Lemma \ref{lemma:conjug-Hilbert} imply that $\mathscr{B}_{k}^{-1}\mathcal{H}\mathscr{B}_{k}-\mathcal{H}$ and $\mathscr{B}_{k}^{-1}\mathcal{Q}\mathscr{B}_{k}-\mathcal{Q}$ are real, reversible and $\mathtt{m}$-fold preserving integral operators. In view of Lemma \ref{lemma:conjug-Hilbert}, \eqref{sml betak I0}, \eqref{diff betak}, \eqref{sml-nl} and \eqref{sigma-F} the operator $\mathscr{B}_{k}^{-1}\mathcal{H}\mathscr{B}_{k}-\mathcal{H}$ is an integral operator and satisfies
\begin{align}\label{e-Box H}
\|\mathscr{B}_{k}^{-1}\mathcal{H}\mathscr{B}_{k}-\mathcal{H}\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\gamma^{-1}\left(1+\|\mathfrak{I}_0\|_{s+2+\sigma_{1}}^{q,\gamma,\mathtt{m}}\right),
\\
\label{e-Box Hd}
\|\Delta_{12}(\mathscr{B}_{k}^{-1}\mathcal{H}\mathscr{B}_{k}-\mathcal{H})\|_{\textnormal{\tiny{I-D}},\overline{s}_h+\mathtt{p}}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\gamma^{-1}\|\Delta_{12}i\|_{\overline{s}_h+\mathtt{p}+2+\sigma_{1}}^{q,\gamma,\mathtt{m}}.
\end{align}
As for the term in $\mathcal{Q}$, we observe according to the notation \eqref{Top-op1}, \eqref{dcp calDeb0} and \eqref{ASYFR1-} that $$\mathcal{Q}\ast\rho=\mathcal{T}_{\widehat{Q}}\,\rho,\qquad\hbox{with}\qquad \widehat{Q}(b,\varphi,\theta,\eta)\triangleq \mathcal{Q}(b,\theta-\eta)$$
and
$$\|\widehat{Q}\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C(q,s),$$
for some constant $C(q,s)>0$. Therefore, applying Lemma \ref{lem CVAR kernel} together with \eqref{sml betak I0}, \eqref{diff betak} and the smallness condition \eqref{sml-nl} yields
\begin{align}\label{e-Box Q}
\|\mathscr{B}_{k}^{-1}(\partial_\theta \mathcal{Q}\ast\cdot)\mathscr{B}_{k}-\partial_\theta \mathcal{Q}\ast\cdot\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\gamma^{-1}\left(1+\|\mathfrak{I}_0\|_{s+\sigma_{1}+1}^{q,\gamma,\mathtt{m}}\right),
\\
\label{e-Box Qd}
\|\Delta_{12}\big(\mathscr{B}_{k}^{-1}(\partial_\theta \mathcal{Q}\ast\cdot)\mathscr{B}_{k}-\partial_\theta \mathcal{Q}\ast\cdot\big)\|_{\textnormal{\tiny{I-D}},\overline{s}_h+\mathtt{p}}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\gamma^{-1}\|\Delta_{12}i\|_{\overline{s}_h+\mathtt{p}+\sigma_{1}+1}^{q,\gamma,\mathtt{m}}.
\end{align}
As for the operator $\mathscr{B}_k^{-1}\partial_{\theta}\mathcal{T}_{\mathscr{K}_{k,k'}(\varepsilon r)}\mathscr{B}_{k'}$, it is first a real, $\mathtt{m}$-fold preserving and reversible Toeplitz in time operator according to Lemma \ref{lem sym--rev} together with \eqref{sym scrKkn}, \eqref{sym-m scrKkn} and the symmetry properties of $\beta,\widehat{\beta}$ given by Proposition \ref{prop strighten}-2. In addition, using \eqref{e-odsBtB}, Proposition \ref{prop:conjP}-3, \eqref{sml betak I0}, \eqref{esti r I0}, \eqref{sml-nl} and \eqref{sigma-F}, we deduce that
\begin{align}\label{e-Box calT}
\|\mathscr{B}_k^{-1}\partial_{\theta}\mathcal{T}_{\mathscr{K}_{k,k'}(\varepsilon r)}\mathscr{B}_{k'}\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}&\lesssim \|\mathscr{K}_{k,k'}(\varepsilon r)\|_{s+1}^{q,\gamma,\mathtt{m}}+\|\mathscr{K}_{k,k'}({\varepsilon r})\|_{s_0}^{q,\gamma,\mathtt{m}}\,\max_{\ell\in\{1,2\}}\|\beta_\ell\|_{s+2}^{q,\gamma,\mathtt{m}}\nonumber\\
&\lesssim\varepsilon\gamma^{-1}\left(1+\|\mathfrak{I}_0\|_{s+\sigma_1+2}^{q,\gamma,\mathtt{m}}\right).
\end{align}
Using Proposition \ref{prop:conjP}-3 supplemented by \eqref{esti r I0} and \eqref{esti r I0d}, we obtain
\begin{align}\label{ed-scrKkner}
\nonumber \|\Delta_{12}\mathscr{K}_{k,k'}({\varepsilon r})\|_{s}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\|\Delta_{12}r\|_{s+1}^{q,\gamma,\m}+\varepsilon\|\Delta_{12}r\|_{s_0+1}^{q,\gamma,\m}\max_{\ell\in\{1,2\}}\|r_\ell\|_{s+1}^{q,\gamma,\m}\\
&\lesssim \varepsilon\Big(\|\Delta_{12}i\|_{s+1}^{q,\gamma,\mathtt{m}}+\|\Delta_{12}i\|_{s_0+1}^{q,\gamma,\mathtt{m}}\max_{\ell\in\{1,2\}}\|\mathfrak{I}_{\ell}\|_{s+1}^{q,\gamma,\mathtt{m}}\Big).
\end{align}
Therefore, applying \eqref{diff12BoxBtB} together with \eqref{ed-scrKkner}, \eqref{sml betak I0}, \eqref{diff betak} and \eqref{sml-nl}, we get
\begin{align}\label{e-Box calTd}
\|\Delta_{12}\mathscr{B}_k^{-1}\partial_{\theta}\mathcal{T}_{\mathscr{K}_{k,k'}(\varepsilon r)}\mathscr{B}_{k'}\|_{\textnormal{\tiny{I-D}},\overline{s}_h+\mathtt{p}}^{q,\gamma,\mathtt{m}}\lesssim\varepsilon\gamma^{-1}\|\Delta_{12}i\|_{\overline{s}_h+\mathtt{p}+2+\sigma_{1}}^{q,\gamma,\mathtt{m}}.
\end{align}
Combining \eqref{e-Box H}, \eqref{e-Box Q} and \eqref{e-Box calT}, we find
\begin{align*}
\|{\mathscr{R}}_{k,k'}\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\gamma^{-1}\left(1+\|\mathfrak{I}_0\|_{s+2+\sigma_{1}}^{q,\gamma,\mathtt{m}}\right).
\end{align*}
Moreover, putting together \eqref{e-Box Hd}, \eqref{e-Box Qd} and \eqref{e-Box calTd} implies
\begin{align*}
\|\Delta_{12}{\mathscr{R}}_{k,k'}\|_{\textnormal{\tiny{I-D}},\overline{s}_{h}+\mathtt{p}}^{q,\gamma,\mathtt{m}}&\lesssim\varepsilon\gamma^{-1}\|\Delta_{12}i\|_{\overline{s}_{h}+\mathtt{p}+2+\sigma_{1}}^{q,\gamma,\mathtt{m}}.
\end{align*}
This proves Proposition \ref{prop RTNL}.
\end{proof}
\subsection{Localization into the normal directions}
We shall focus in this section on the localization effects in the normal directions for the reduction of the transport part. For this aim, we consider the localized quasi-periodic symplectic change of coordinates defined by
$${\mathscr{B}}_{\perp}\triangleq \Pi_{\overline{\mathbb{S}}_{0}}^{\perp}{\mathscr{B}}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\begin{pmatrix}
\Pi_{1}^{\perp}\mathscr{B}_1\Pi_{1}^{\perp} & 0\\
0 & \Pi_{2}^{\perp}\mathscr{B}_2\Pi_{2}^{\perp}
\end{pmatrix},$$
where the projectors are defined in \eqref{proj-nor1}-\eqref{PI1-PI2}.
Then, the main result of this section reads as follows.
\begin{prop}\label{prop proj nor dir}
Let $(\gamma,q,d,\tau_1,s_{0},S,\mathtt{m})$ satisfy \eqref{setting tau1 and tau2}--\eqref{init Sob cond} and \eqref{ouvert-sym}.
Let $(\overline{\mu}_2,s_l,\overline{s}_h,\mu_2,\mathtt{p},s_h)$ satisfy \eqref{param} and \eqref{p-trs}.
There exist $\varepsilon_0>0$ and $\sigma_{3}\triangleq\sigma_{3}(\tau_{1},q,d,s_{0})\geqslant\sigma_{2}$, where $\sigma_2$ is given by \eqref{sigma-F}, such that if
\begin{equation}\label{sml-pnor}
\varepsilon\gamma^{-1}N_0^{\mu_2}\leqslant\varepsilon_0\qquad\textnormal{and}\qquad\|\mathfrak{I}_0\|_{s_h+\sigma_3}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation}
then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item The operators $\mathscr{B}_{\perp}^{\pm 1}$ satisfy the following estimate
\begin{equation}\label{e-vectBnor}
\|{\mathscr{B}}_{\perp}^{\pm 1}\rho\|_{s}^{q,\gamma ,\mathtt{m}}\lesssim\|\rho\|_{s}^{q,\gamma ,\mathtt{m}}+\varepsilon\gamma ^{-1}\| \mathfrak{I}_{0}\|_{s+\sigma_{3}}^{q,\gamma ,\mathtt{m}}\|\rho\|_{s_{0}}^{q,\gamma ,\mathtt{m}}.
\end{equation}
\item For any $n\in\mathbb{N}^{*},$ in the Cantor set $\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_{0})$ introduced in \eqref{Cant-trs}, we have
\begin{equation}\label{reduction on nor}
{\mathscr{B}}_{\perp}^{-1}\widehat{{\mathcal{L}}}{\mathscr{B}}_{\perp}={\mathscr{L}}_{0}+{\mathscr{E}}_{n}^0,\qquad
{\mathscr{L}}_{0}\triangleq \omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+{\mathscr{D}}_{0}+{\mathscr{R}}_{0},
\end{equation}
where $\mathbf{I}_{\mathtt{m},\perp}\triangleq \Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathbf{I}_{\mathtt{m}}$
and ${\mathscr{D}}_{0}=\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} {\mathscr{D}}_{0}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}$ is a reversible Fourier multiplier operator given by
$${\mathscr{D}}_{0}\triangleq \begin{pmatrix}
\mathscr{D}_{0,1} & 0\\
0 & \mathscr{D}_{0,2}
\end{pmatrix},\qquad{\mathscr{D}}_{0,k}\triangleq \left(\ii\mu_{j,k}^{(0)}\right)_{j\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,k}},\qquad\mu_{-j,k}^{(0)}(b,\omega)=-\mu_{j,k}^{(0)}(b,\omega),$$
with
\begin{equation}\label{mu0 r0}
\mu_{j,k}^{(0)}(b,\omega,i_{0})\triangleq \Omega_{j,k}(b)+jr_{k}^{(0)}(b,\omega,i_{0}),\qquad r_{k}^{(0)}(b,\omega,i_{0})\triangleq \mathtt{c}_k(b,\omega)-\mathtt{v}_k(b)
\end{equation}
and such that
\begin{equation}\label{e-ed-r0}
\|r_{k}^{(0)}\|^{q,\gamma}\lesssim \varepsilon \qquad\textnormal{and}\qquad \|\Delta_{12}r_{k}^{(0)}\|^{q,\gamma}\lesssim \varepsilon \| \Delta_{12}i\|_{\overline{s}_{h}+\sigma_{1}}^{q,\gamma,\mathtt{m}}.
\end{equation}
Notice that the frequencies $\Omega_{j,k}(b)$ are defined in \eqref{omega jk b}.
\item The operator ${\mathscr{E}}_{n}^0$ satisfies the following estimate
\begin{equation}\label{e-scrEn0}
\|{\mathscr{E}}_{n}^0\rho\|_{s_{0}}^{q,\gamma,\mathtt{m}}\lesssim \varepsilon N_{0}^{\mu_{2}}N_{n+1}^{-\mu_{2}}\|\rho\|_{s_{0}+2}^{q,\gamma,\mathtt{m}}.
\end{equation}
\item The operator ${\mathscr{R}}_{0}$ is an $\mathtt{m}$-fold preserving and reversible Toeplitz in time matricial operator satisfying
\begin{equation}\label{e-hyb-scrR0}
\forall s\in [s_{0},S],\quad \interleave{\mathscr{R}}_{0}\interleave_{s}^{q,\gamma,\mathtt{m}}\lesssim\varepsilon\gamma^{-1}\left(1+\| \mathfrak{I}_{0}\|_{s+\sigma_{3}}^{q,\gamma,\mathtt{m}}\right)
\end{equation}
and
\begin{equation}\label{ed-hyb-scrR0}
\interleave\Delta_{12}{\mathscr{R}}_{0}\interleave_{\overline{s}_{h}+\mathtt{p}}^{q,\gamma,\mathtt{m}}\lesssim\varepsilon\gamma^{-1}\| \Delta_{12}i\|_{\overline{s}_{h}+\mathtt{p}+\sigma_{3}}^{q,\gamma,\mathtt{m}}.
\end{equation}
\item The operator $\mathscr{L}_{0}$ satisfies
\begin{equation}\label{e-scrL0}
\forall s\in[s_{0},S],\quad\|{\mathscr{L}}_{0}\rho\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\rho\|_{s+1}^{q,\gamma,\mathtt{m}}+\varepsilon\gamma ^{-1}\| \mathfrak{I}_{0}\|_{s+\sigma_{3}}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}}^{q,\gamma ,\mathtt{m}}.
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
\textbf{(i)} It is obtained using \eqref{cont Bk} and Lemma \ref{lem funct prop}-(i).\\
\textbf{(ii)} The first estimate of \eqref{e-ed-r0} follows from \eqref{sml-r0} and the second one from \eqref{diff Vpm}. On the other hand, using the expression of $\widehat{\mathcal{L}}$ detailed in Proposition \ref{lemma-normal-s}, combined with the decomposition $\textnormal{Id}=\Pi_{\mathbb{S}_{0}}+\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}$, we write
\begin{align*}
\mathscr{B}_{\perp}^{-1}\widehat{\mathcal{L}}\mathscr{B}_{\perp}&=\mathscr{B}_{\perp}^{-1}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}({\mathcal{L}}-\varepsilon\partial_{\theta}\mathcal{R})\mathscr{B}_{\perp}
\\
&=\mathscr{B}_{\perp}^{-1}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}{\mathcal{L}}\mathscr{B}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}-\mathscr{B}_{\perp}^{-1}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}{\mathcal{L}}\Pi_{\mathbb{S}_{0}}\mathscr{B}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}-\varepsilon\mathscr{B}_{\perp}^{-1}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\partial_{\theta}\mathcal{R}\mathscr{B}_{\perp}.
\end{align*}
By virtue of
Proposition \ref{prop RTNL}
one has in the Cantor set $\mathcal{O}_{\infty,n}^{\gamma,\tau_{1}}(i_{0})$,
$$
{\mathcal{L}}\mathscr{B}=\mathscr{B}\mathscr{L}
$$
and therefore, using also that $\mathscr{B}_{\perp}^{-1}\Pi_{\overline{\mathbb{S}}_0}^{\perp}=\mathscr{B}_{\perp}^{-1},$ we get
\begin{align*}
\mathscr{B}_{\perp}^{-1}\widehat{\mathcal{L}}\mathscr{B}_{\perp}&=\mathscr{B}_{\perp}^{-1}\mathscr{B}\mathscr{L}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}-\mathscr{B}_{\perp}^{-1}\mathcal{L}\Pi_{\mathbb{S}_{0}}\mathscr{B}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}
-\varepsilon\mathscr{B}_{\perp}^{-1}\partial_{\theta}\mathcal{R}\mathscr{B}_{\perp}.
\end{align*}
Thus, using \eqref{b-1lb} we deduce that
\begin{align*}
\mathscr{B}_{\perp}^{-1}\mathscr{B}\mathscr{L}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}&=\left(\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m}}+\mathscr{D}\right)\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}+{\mathscr{E}}_{n}^0+\mathscr{B}_{\perp}^{-1}\mathscr{B}\mathscr{R}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp},
\end{align*}
with
\begin{equation}\label{def scr En0}
{\mathscr{E}}_{n}^0\triangleq \mathscr{B}_{\perp}^{-1}\mathscr{B}\mathscr{E}_{n}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}.
\end{equation}
Consequently, in the Cantor set $\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_{0})$, one has the following reduction
\begin{align*}
\mathscr{B}_{\perp}^{-1}\widehat{\mathcal{L}}\mathscr{B}_{\perp}&=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_0+\mathscr{R}_0+{\mathscr{E}}_{n}^0,
\end{align*}
where we set
$$\mathscr{D}_0\triangleq \begin{pmatrix}
\mathtt{c}_1\partial_{\theta}\, \cdot+\tfrac{1}{2}\mathcal{H}+\partial_\theta \mathcal{Q}\ast\cdot& 0\\
0 & \mathtt{c}_2\partial_{\theta}\, \cdot-\tfrac{1}{2}\mathcal{H}-\partial_\theta \mathcal{Q}\ast\cdot
\end{pmatrix}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp},$$
\begin{align*}
\mathscr{R}_0&\triangleq -\mathscr{B}_{\perp}^{-1}{\mathcal{L}}\Pi_{\mathbb{S}_{0}}\mathscr{B}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}-\varepsilon\mathscr{B}_{\perp}^{-1}\partial_{\theta}\mathcal{R}\mathscr{B}_{\perp}+\mathscr{B}_{\perp}^{-1}\mathscr{B}\mathscr{R}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}.
\end{align*}
\textbf{(iii)} It can be obtained from \eqref{def scr En0}, \eqref{e-vectBnor}, \eqref{cont Bk} and \eqref{scr-Enk}.\\
\textbf{(iv)}
To get the estimates \eqref{e-hyb-scrR0} and \eqref{ed-hyb-scrR0}, we may refer to \cite[Prop. 6.3 and Lem. 6.3]{HR21} up to very slight modifications corresponding to the hybrid topology introduced in Section \ref{sec funct set}. The computations are long and based on dual representations of $\mathscr{B}_{\perp}^{\pm 1}$ and $\mathscr{B}^{\pm 1}$. In particular, one may use Lemma \ref{lem sym--rev}, \eqref{e-Rnl}, \eqref{e-Rnld}, \eqref{e-Jkn}, \eqref{e-Jknd} and \eqref{diff betak}.\\
\textbf{(v)} This estimate follows from \eqref{reduction on nor}, \eqref{sml-r0}, \eqref{e-hyb-scrR0}, \eqref{sml-pnor} and Corollary \ref{cor-hyb-nor}-(iii).
\end{proof}
\subsection{Reduction of the remainder}\label{Reduction-Remaind}
This section is devoted to the conjugation of the operator $\mathscr{L}_{0}$ defined in Proposition \ref{prop proj nor dir} to a diagonal one, up to a fast decaying small remainder. This is achieved through a standard KAM reducibility technique well-adapted to our operator setting. It is implemented by taking advantage of the exterior parameters, which are restricted to a suitable Cantor set preventing the resonances in the second order Melnikov conditions. Notice that this study also provides estimates on the distribution of the eigenvalues and on their stability with respect to the torus parametrization. This is the key step not only to construct an approximate inverse but also to achieve the Nash-Moser scheme with a final massive Cantor set. We may refer for instance to \cite{BBM14,B19,FP14,HHM21,HR21} for some implementations of this KAM strategy for PDEs.
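Let us briefly sketch the quadratic mechanism behind the scheme. At a heuristic level, neglecting the frequency truncations and the losses of derivatives, each step conjugates an operator of the form $\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}+\mathscr{R}$, with $\mathscr{D}$ diagonal and $\mathscr{R}$ small, by a transformation $\Phi=\mathbf{I}_{\mathtt{m},\perp}+\Psi$ with $\Psi=O(\gamma^{-1}\mathscr{R})$ solving a homological equation. The diagonal part absorbs the diagonal entries of $\mathscr{R}$, while the new remainder collects only quadratic and tail contributions, roughly
$$\interleave\mathscr{R}_{\textnormal{\tiny{next}}}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\lesssim\gamma^{-1}\big(\interleave\mathscr{R}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\big)^{2}+N^{s_{0}-s}\interleave\mathscr{R}\interleave_{s}^{q,\gamma,\mathtt{m}},\qquad s>s_{0},$$
up to powers of $N$ generated by the small divisors. Iterating this step, the remainders converge to zero super-exponentially fast, whereas the diagonal parts converge to a final Fourier multiplier operator.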
\begin{prop}\label{prop RR}
Let $(\gamma,q,d,\tau_1,\tau_2,s_{0},\overline{s}_l,\overline{\mu}_{2},S,\mathtt{m})$ satisfy \eqref{init Sob cond}, \eqref{setting tau1 and tau2}, \eqref{param} and \eqref{ouvert-sym}. For any $(\mu_2,s_h)$ satisfying
\begin{align}\label{p-RR}
\mu_2\geqslant \overline{\mu}_2+2\tau_2q+2\tau_2\qquad\textnormal{and}\qquad s_h\geqslant \frac{3}{2}\mu_{2}+\overline{s}_{l}+1,
\end{align}
there exist $\varepsilon_{0}\in(0,1)$ and $\sigma_{4}\triangleq\sigma_{4}(\tau_1,\tau_2,q,d)\geqslant\sigma_{3}$, with $\sigma_{3}$ defined in Proposition $\ref{prop proj nor dir},$ such that if
\begin{equation}\label{sml-RR}
\varepsilon\gamma^{-2-q}N_{0}^{\mu_{2}}\leqslant \varepsilon_{0}
\end{equation}
and
\begin{equation}\label{bnd frkI0-4}
\|\mathfrak{I}_{0}\|_{s_{h}+\sigma_{4}}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation}
then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item There exists a family of invertible linear operators $\Phi_{\infty}:\mathcal{O}\to \mathcal{L}\big(\mathbf{H}^s_{\perp,\m}\big)$ satisfying the estimates
\begin{equation}\label{cont-Phifty}
\forall s\in[s_{0},S],\quad \|\Phi_{\infty}^{\pm 1}\rho\|_{s}^{q,\gamma ,\m}\lesssim \|\rho\|_{s}^{q,\gamma,\m}+\varepsilon\gamma^{-2}\| \mathfrak{I}_{0}\|_{s+\sigma_{4}}^{q,\gamma,\m}\|\rho\|_{s_{0}}^{q,\gamma,\m}.
\end{equation}
There exists a diagonal operator $\mathscr{L}_\infty\triangleq\mathscr{L}_{\infty}(b,\omega,i_{0})$ taking the form
\begin{align*}\mathscr{L}_{\infty}&=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{\infty}
\end{align*}
where $\mathscr{D}_{\infty}=\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathscr{D}_{\infty}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\mathscr{D}_{\infty}(b,\omega,i_{0})$ is a diagonal operator with reversible Fourier multiplier entries, namely
$$\mathscr{D}_{\infty}\triangleq\begin{pmatrix}
\mathscr{D}_{\infty,1} & 0\\
0 & \mathscr{D}_{\infty,2}
\end{pmatrix},\qquad\mathscr{D}_{\infty,k}\triangleq\left(\ii\mu_{j,k}^{(\infty)}\right)_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}},\qquad\mu_{-j,k}^{(\infty)}(b,\omega)=-\mu_{j,k}^{(\infty)}(b,\omega),$$
with
\begin{align}\label{def mu lim}
\forall j\in\Z_{\m}\backslash \overline{\mathbb{S}}_{0,k},&\quad\mu_{j,k}^{(\infty)}(b,\omega,i_{0})\triangleq\mu_{j,k}^{(0)}(b,\omega,i_{0})+r_{j,k}^{(\infty)}(b,\omega,i_{0})
\end{align}
and
\begin{align}\label{e-rjfty}
\sup_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}}|j|\left\| r_{j,k}^{(\infty)}\right\|^{q,\gamma}\lesssim\varepsilon\gamma^{-1},
\end{align}
such that in the Cantor set
\begin{align*}
&\mathscr{O}_{\infty,n}^{\gamma,\tau_{1},\tau_{2}}(i_{0})\triangleq \mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_{0})\\
&\bigcap_{\underset{\,j, j_{0}\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,k}}{ {k\in\{1,2\}}}}\bigcap_{\underset{|l|\leqslant N_{n}}{l\in\mathbb{Z}^{d}\atop(l,j)\neq(0,j_{0})}}\Big\{(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad\big|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{0})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{0})\big|>\tfrac{2\gamma\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}\Big\}\\
&\bigcap_{\underset{\,j_0\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,2}}{ j\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,1}}}\bigcap_{\underset{\langle l,j,j_0\rangle\leqslant N_{n}}{l\in\mathbb{Z}^{d}}}\Big\{(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad\big|\omega\cdot l+\mu_{j,1}^{(\infty)}(b,\omega,i_{0})-\mu_{j_{0},2}^{(\infty)}(b,\omega,i_{0})\big|>\tfrac{2\gamma}{\langle l,j,j_0\rangle^{\tau_2}}\Big\}
\end{align*}
we have
\begin{align*}
\Phi_{\infty}^{-1}\mathscr{L}_{0}\Phi_{\infty}&=\mathscr{L}_{\infty}+{\mathscr{E}}_{n}^1,
\end{align*}
and the linear operator ${\mathscr{E}}_{n}^1$ satisfies the estimate
\begin{equation}\label{e-scrEn1}
\|{\mathscr{E}}_{n}^1\rho\|_{s_0}^{q,\gamma ,\m}\lesssim \varepsilon\gamma^{-2}N_{0}^{{\mu}_{2}}N_{n+1}^{-\mu_{2}} \|\rho\|_{s_0+1}^{q,\gamma,\m}.
\end{equation}
We refer to \eqref{Cant-trs}, \eqref{reduction on nor} and \eqref{mu0 r0} for the definition of $\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_{0}),$ $\mathscr{L}_{0}$ and $\big(\mu_{j,k}^{(0)}(b,\omega,i_{0})\big)_{j\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,k}}$, respectively.
\item If two tori $i_{1}$ and $i_{2}$ both satisfy \eqref{sml-RR}-\eqref{bnd frkI0-4}, then for $k\in\{1,2\}$ we have
\begin{align}\label{ed-rjkfty}
\forall j\in\Z_{\m}\backslash \overline{\mathbb{S}}_{0,k},&\quad\left\|\Delta_{12}r_{j,k}^{(\infty)}\right\|^{q,\gamma}\lesssim\varepsilon\gamma^{-1}\|\Delta_{12}i\|_{\overline{s}_{h}+\sigma_{4}}^{q,\gamma,\m}
\end{align}
and
\begin{align}\label{ed-mujkfty}
\forall j\in\Z_{\m}\backslash \overline{\mathbb{S}}_{0,k},&\quad\left\|\Delta_{12}\mu_{j,k}^{(\infty)}\right\|^{q,\gamma}\lesssim\varepsilon\gamma^{-1}|j|\| \Delta_{12}i\|_{\overline{s}_{h}+\sigma_{4}}^{q,\gamma,\m}.
\end{align}
\end{enumerate}
\end{prop}
\begin{proof}
{\bf{(i)}} First recall that Proposition \ref{prop proj nor dir} states that in restriction to the Cantor set $\mathcal{O}_{\infty,n}^{\gamma,\tau_1}(i_0)$ the following identity holds
$$\mathscr{B}_{\perp}^{-1}\widehat{\mathcal{L}}\mathscr{B}_{\perp}=\mathscr{L}_{0}+{\mathscr{E}}_{n}^0,$$
where the operator $\mathscr{L}_{0}$ decomposes as follows
\begin{equation*}
\mathscr{L}_0=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_0+\mathscr{R}_0,
\end{equation*}
with
$$\mathscr{D}_0=\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathscr{D}_0\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\begin{pmatrix}
\mathscr{D}_{0,1} & 0\\
0 & \mathscr{D}_{0,2}
\end{pmatrix},\qquad\mathscr{D}_{0,k}=\Big(\ii\mu_{j,k}^{(0)}\Big)_{j\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,k}},\qquad\mu_{-j,k}^{(0)}(b,\omega)=-\mu_{j,k}^{(0)}(b,\omega)$$
and $\mathscr{R}_0$ a real and reversible Toeplitz in time operator of zero order satisfying $\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} \mathscr{R}_0\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} =\mathscr{R}_0.$
Let us define the quantity
$$
\delta_{0}(s)=\gamma ^{-1}\interleave\mathscr{R}_{0}\interleave_{s}^{q,\gamma,\m}.
$$
By virtue of \eqref{e-hyb-scrR0}, we find
\begin{align}\label{edlt0s}
\delta_{0}(s)\leqslant C\varepsilon\gamma^{-2}\left(1+\|\mathfrak{I}_{0}\|_{s+\sigma_{3}}^{q,\gamma,\m}\right).
\end{align}
Thus, combining \eqref{p-RR}, \eqref{sml-RR} and the fact that $\sigma_4\geqslant \sigma_3$ yields
\begin{align}\label{e-dlt0sh init}
\nonumber N_{0}^{\mu_{2}}\delta_{0}(s_{h}) &\leqslant C N_{0}^{\mu_{2}}\varepsilon\gamma^{-2}\\
&\leqslant C\varepsilon_{0}.
\end{align}
The smallness conditions \eqref{edlt0s} and \eqref{e-dlt0sh init} allow us to start a KAM reduction procedure similar to the scalar case \cite[Prop. 6.5]{HR21}. Nevertheless, the following KAM iteration is performed at the matricial level. For this aim, we need to consider the hybrid norm \eqref{hyb nor} to overcome the spatial resonances coming from the anti-diagonal entries when solving the homological equations. To clarify this point, let us first discuss a general KAM step of the procedure.\\
$\blacktriangleright$ \textbf{KAM step.} Now, we explain the typical KAM step used in the reduction of the remainder. Assume that we have a linear operator $\mathscr{L}$ taking the following form when the parameters are restricted to some Cantor set $\mathscr{O}$
$$\mathscr{L}=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}+\mathscr{R},$$
with
\begin{equation}\label{struc scr D}
\mathscr{D}=\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathscr{D}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\begin{pmatrix}
\mathscr{D}_1 & 0\\
0 & \mathscr{D}_2
\end{pmatrix},\qquad\mathscr{D}_{k}=\big(\ii\mu_{j,k}\big)_{j\in\Z_{\m}\backslash \overline{\mathbb{S}}_{0,k}},\qquad\mu_{-j,k}(b,\omega)=-\mu_{j,k}(b,\omega).
\end{equation}
In addition we assume that the matrix operator
$$\mathscr{R}=\begin{pmatrix}
\mathscr{R}_1 & \mathscr{R}_3\\
\mathscr{R}_4 & \mathscr{R}_2
\end{pmatrix}$$
is real, reversible Toeplitz in time of zero order and satisfies
$$\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} \mathscr{R}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} =\mathscr{R}.$$
One may check from \eqref{proj-nor1} that this latter assumption is equivalent to
\begin{equation}\label{cond proj R}
\begin{pmatrix}
\Pi_{1}^{\perp}\mathscr{R}_1\Pi_{1}^{\perp} & \Pi_{1}^{\perp}\mathscr{R}_3\Pi_{2}^{\perp}\vspace{0.3cm}\\
\Pi_{2}^{\perp}\mathscr{R}_4\Pi_{1}^{\perp} & \Pi_{2}^{\perp}\mathscr{R}_2\Pi_{2}^{\perp}
\end{pmatrix}=\begin{pmatrix}
\mathscr{R}_1 & \mathscr{R}_3\\
\mathscr{R}_4 & \mathscr{R}_2
\end{pmatrix}.
\end{equation}
According to Definition \ref{Def-Rev}, the reality and reversibility properties of $\mathscr{R}_k$ are equivalent to the conditions
\begin{equation}\label{coef scrRk}
(\mathscr{R}_k)_{l_{0},j_{0}}^{l,j}\triangleq \ii\,r_{j_{0},k}^{j}(b,\omega,l-l_{0})\in \ii\,\mathbb{R}\qquad\mbox{and}\qquad (\mathscr{R}_k)_{-l_{0},-j_{0}}^{-l,-j}=-(\mathscr{R}_k)_{l_{0},j_{0}}^{l,j}.
\end{equation}
Moreover, the condition \eqref{cond proj R} is equivalent to
\begin{align}
&\forall k\in\{1,2\},\quad\forall \,l\,\in\mathbb{Z}^d,\quad\forall\, j\,\,\hbox{or}\,\,\,j_0\in\overline{\mathbb{S}}_{0,k},\quad r_{j_{0},k}^{j}(b,\omega,l)=0,\label{restri rk}\\
&\forall \ell\in\{3,4\},\quad\forall \,l\,\in\mathbb{Z}^d,\quad\forall\, j\in\overline{\mathbb{S}}_{0,\ell-2}\,\,\hbox{or}\,\,\,j_0\in\overline{\mathbb{S}}_{0,5-\ell},\quad r_{j_{0},\ell}^{j}(b,\omega,l)=0.\label{restri rl}
\end{align}
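Recall from \eqref{proj-nor1} that, in Fourier variables, the projectors act diagonally, that is $\Pi_{k}^{\perp}\mathbf{e}_{l,j}=\mathbf{e}_{l,j}$ if $j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}$ and $\Pi_{k}^{\perp}\mathbf{e}_{l,j}=0$ otherwise. Hence, for instance,
$$\big(\Pi_{1}^{\perp}\mathscr{R}_1\Pi_{1}^{\perp}\big)_{l_{0},j_{0}}^{l,j}=\left\lbrace\begin{array}{ll}
(\mathscr{R}_1)_{l_{0},j_{0}}^{l,j}, & \mbox{if }j,j_{0}\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1},\\
0, & \mbox{otherwise},
\end{array}\right.$$
so that the identity \eqref{cond proj R} amounts to the vanishing of the coefficients stated in \eqref{restri rk}-\eqref{restri rl}.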
Now, consider a linear invertible transformation close to the identity \begin{equation}\label{ansatz Phi reduc rem}
\Phi=\mathbf{I}_{\mathtt{m},\perp}+\Psi:\mathcal{O}\rightarrow\mathcal{L}({\mathbf{H}}_{\perp,\m}^{s}),\qquad\Psi=\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\Psi\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\begin{pmatrix}
\Psi_{1} & \Psi_3\\
\Psi_4 & \Psi_2
\end{pmatrix},
\end{equation}
with $\Psi$ depending on $\mathscr{R}$ and {\it small} in a suitable sense related to the hybrid norm \eqref{hyb nor}. Then, one readily obtains, in restriction to $\mathscr{O},$ the following decomposition
\begin{align*}
\Phi^{-1}\mathscr{L}\Phi & = \Phi^{-1}\Big(\Phi\left(\omega\cdot\partial_{\varphi}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}+\mathscr{D}\right)+\left[\omega\cdot\partial_{\varphi}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}+\mathscr{D},\Psi\right]+\mathscr{R}+\mathscr{R}\Psi\Big)\\
& = \omega\cdot\partial_{\varphi}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}+\mathscr{D}+\Phi^{-1}\Big(\big[\omega\cdot\partial_{\varphi}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}+\mathscr{D}\, ,\, \Psi\big]+\mathbf{P_{N}}\mathscr{R}+\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}+\mathscr{R}\Psi\Big),
\end{align*}
where $\mathbf{P_{N}}\mathscr{R}$ and $\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}$ are defined as in \eqref{proj mat}. We shall select $\Psi$ such that the above expression contains a new remainder $\mathscr{R}_{\textnormal{\tiny{next}}}$ quadratically smaller than the previous one $\mathscr{R}$, up to modifying the diagonal part $\mathscr{D}$ into a new one $\mathscr{D}_{\textnormal{\tiny{next}}}$ with the same structure \eqref{struc scr D}. Therefore, we choose $\Psi$ to solve the following \textit{matricial homological equation}
\begin{equation}\label{hom eq Psi}
\big[\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}\, ,\,\Psi\big]+\mathbf{P_{N}}\mathscr{R}=\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor,
\end{equation}
where $\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor$ is the diagonal part of the matrix operator $\mathbf{P_{N}}\mathscr{R}$ as defined by \eqref{def diag-diag}-\eqref{def diag opi}, namely
$$\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor=\begin{pmatrix}
\lfloor P_{N}^1\mathscr{R}_1\rfloor & 0\\
0 & \lfloor P_{N}^1\mathscr{R}_2\rfloor
\end{pmatrix}.$$
The matricial equation \eqref{hom eq Psi} is equivalent to the following set of four scalar homological equations
\begin{equation}\label{4Homeq}
\left\lbrace\begin{array}{l}
\big[\omega\cdot\partial_{\varphi}\Pi_{1}^{\perp}+\mathscr{D}_1\, ,\,\Psi_1\big]=\lfloor P_{N}^{1}\mathscr{R}_1\rfloor-P_{N}^{1}\mathscr{R}_1,\vspace{0.1cm}\\
\big[\omega\cdot\partial_{\varphi}\Pi_{2}^{\perp}+\mathscr{D}_2\, ,\,\Psi_2\big]=\lfloor P_{N}^{1}\mathscr{R}_2\rfloor-P_{N}^{1}\mathscr{R}_2,\vspace{0.1cm}\\
\big(\omega\cdot\partial_{\varphi}\Pi_{1}^{\perp}+\mathscr{D}_1\big)\Psi_{3}-\Psi_3\big(\omega\cdot\partial_{\varphi}\Pi_{2}^{\perp}+\mathscr{D}_2\big)=-P_{N}^{2}\mathscr{R}_3,\vspace{0.1cm}\\
\big(\omega\cdot\partial_{\varphi}\Pi_{2}^{\perp}+\mathscr{D}_2\big)\Psi_{4}-\Psi_4\big(\omega\cdot\partial_{\varphi}\Pi_{1}^{\perp}+\mathscr{D}_1\big)=-P_{N}^{2}\mathscr{R}_4.
\end{array}\right.
\end{equation}
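Indeed, by the diagonal structure \eqref{struc scr D} of $\mathscr{D}$ and the block decomposition \eqref{ansatz Phi reduc rem} of $\Psi$, the commutator splits blockwise as
$$\big[\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D},\Psi\big]=\begin{pmatrix}
\big[\omega\cdot\partial_{\varphi}\Pi_{1}^{\perp}+\mathscr{D}_1,\Psi_1\big] & \big(\omega\cdot\partial_{\varphi}\Pi_{1}^{\perp}+\mathscr{D}_1\big)\Psi_3-\Psi_3\big(\omega\cdot\partial_{\varphi}\Pi_{2}^{\perp}+\mathscr{D}_2\big)\vspace{0.1cm}\\
\big(\omega\cdot\partial_{\varphi}\Pi_{2}^{\perp}+\mathscr{D}_2\big)\Psi_4-\Psi_4\big(\omega\cdot\partial_{\varphi}\Pi_{1}^{\perp}+\mathscr{D}_1\big) & \big[\omega\cdot\partial_{\varphi}\Pi_{2}^{\perp}+\mathscr{D}_2,\Psi_2\big]
\end{pmatrix},$$
and the diagonal part $\lfloor\mathbf{P_{N}}\mathscr{R}\rfloor$ only contributes to the diagonal blocks, whence the identification entry by entry.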
As we shall see, these equations can be solved provided the parameters $(b,\omega)$ are selected in a Cantor-type set defined through non-resonance conditions.
Let us begin with the diagonal equations on $\Psi_1$ and $\Psi_2$ which can be treated in a similar way. Fix $k\in\{1,2\},$ then we are interested in solving the equation
$$\big[\omega\cdot\partial_{\varphi}\Pi_{k}^{\perp}+\mathscr{D}_k\, ,\,\Psi_k\big]=\lfloor P_{N}^{1}\mathscr{R}_k\rfloor-P_{N}^{1}\mathscr{R}_k.$$
This will be done by using the Fourier expansion of our operators. First notice that, similarly to \eqref{cond proj R}-\eqref{restri rk}, the condition $\Psi_k=\Pi_{k}^{\perp}\Psi_k\Pi_{k}^{\perp}$ is equivalent to saying that the Fourier coefficients of $\Psi_{k}$ satisfy
\begin{equation}\label{restri Psik}
\forall(l,l_0)\in(\mathbb{Z}^{d})^2,\quad\forall j\,\,\textnormal{or}\,\,j_0\in\overline{\mathbb{S}}_{0,k},\quad(\Psi_k)_{l_{0},j_{0}}^{l,j}=0.
\end{equation}
Straightforward computations lead to
$$
\forall (l_0,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}),\quad\big[\omega\cdot\partial_{\varphi}\Pi_{k}^{\perp},\Psi_k\big]\mathbf{e}_{l_{0},j_{0}}=\ii\sum_{(l,j)\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})}(\Psi_k)_{l_{0},j_{0}}^{l,j}\,\,\omega\cdot(l-l_{0})\,\,\mathbf{e}_{l,j}$$
and using \eqref{struc scr D}
$$\forall (l_0,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}),\quad[\mathscr{D}_{k},\Psi_k]\mathbf{e}_{l_{0},j_{0}}=\ii\sum_{(l,j)\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})}(\Psi_k)_{l_{0},j_{0}}^{l,j}\big(\mu_{j,k}(b,\omega)-\mu_{j_0,k}(b,\omega)\big)\mathbf{e}_{l,j}.$$
Consequently $\Psi_k$ is a solution of \eqref{hom eq Psi} if and only if for any $(l_0,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}),$
$$\sum_{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})}(\Psi_k)_{l_{0},j_{0}}^{l,j}\Big(\omega\cdot (l-l_{0})+\mu_{j,k}(b,\omega)-\mu_{j_0,k}(b,\omega)\Big)\mathbf{e}_{l,j}=-\sum_{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})\atop\underset{(l,j)\neq(l_0,j_0)}{\langle l-l_0,j-j_0\rangle\leqslant N}}r_{j_0,k}^{j}(b,\omega,l-l_0)\mathbf{e}_{l,j}.$$
By identification, we deduce that for any $(l,l_0,j,j_0)\in(\mathbb{Z}^d)^2\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2,$ if $\langle l-l_0,j-j_0\rangle>N$, then $(\Psi_{k})_{l_0,j_0}^{l,j}=0,$
otherwise we have
$$(\Psi_k)_{l_{0},j_{0}}^{l,j}\Big(\omega\cdot (l-l_{0})+\mu_{j,k}(b,\omega)-\mu_{j_0,k}(b,\omega)\Big)=\left\lbrace\begin{array}{ll}
-r_{j_{0},k}^{j}(b,\omega,l-l_{0}), & \mbox{if }(l,j)\neq(l_{0},j_{0}),\\
0, & \mbox{if }(l,j)=(l_{0},j_{0}).
\end{array}\right.$$
As a consequence, we have that $\Psi_k$ is a Toeplitz in time operator with $(\Psi_k)_{j_{0}}^{j}(l-l_0)\triangleq (\Psi_k)_{l_{0},j_{0}}^{l,j}$. In addition, for $(l,j,j_{0})\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^{2}$ with $\langle l,j-j_0\rangle\leqslant N,$ one gets
\begin{equation}\label{choice psik}
(\Psi_k)_{j_{0}}^{j}(b,\omega,l)=\left\lbrace\begin{array}{ll}
\frac{-r_{j_{0},k}^{j}(b,\omega,l)}{\omega\cdot l+\mu_{j,k}(b,\omega)-\mu_{j_{0},k}(b,\omega)}, & \mbox{if }(l,j)\neq(0,j_{0}),\\
0, & \mbox{if }(l,j)=(0,j_{0}),
\end{array}\right.
\end{equation}
provided that the denominator does not vanish. This is ensured by selecting the parameters $(b,\omega)$ in the following set
$$\mathscr{O}_{k}=\bigcap_{\underset{|l|\leqslant N}{(l,j,j_{0})\in\mathbb{Z}^{d }\times\left(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}\right)^{2}}\atop (l,j)\neq(0,j_0)}\left\lbrace(b,\omega)\in\mathscr{O}
\quad\textnormal{s.t.}\quad|\omega\cdot l+\mu_{j,k}(b,\omega)-\mu_{j_{0},k}(b,\omega)|>\tfrac{\gamma\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}\right\rbrace.$$
This restriction avoids the resonances and implies that the identity \eqref{choice psik} is well defined. Now, we shall extend $\Psi_k$ to the whole set $\mathcal{O}$ by using a cut-off function $\chi\in C^{\infty}(\mathbb{R},[0,1])$ satisfying
\begin{equation}\label{def cut-off chi}
\chi(x)=\left\lbrace\begin{array}{ll}
1 & \textnormal{if }|x|\geqslant\frac{1}{2}\vspace{0.1cm}\\
0 & \textnormal{if }|x|\leqslant\frac{1}{3}.
\end{array}\right.
\end{equation}
Then, the extension of $\Psi_k$, still denoted $\Psi_k$, is obtained by defining the Fourier coefficients by \eqref{restri Psik} and for $(l,j,j_{0})\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^{2}$ with $\langle l,j-j_0\rangle\leqslant N,$
\begin{equation}\label{coef ext psik}
(\Psi_k)_{j_{0}}^{j}(b,\omega,l)=\left\lbrace\begin{array}{ll}
-\varrho_{j_{0},k}^{j}(b,\omega,l)\,\, r_{j_{0},k}^{j}(b,\omega,l),& \mbox{if }\quad (l,j)\neq(0,j_{0}),\\
0, & \mbox{if }\quad (l,j)=(0,j_{0}),
\end{array}\right.
\end{equation}
with
\begin{align}\label{varro psik}
\varrho_{j_{0},k}^{j}(b,\omega,l)\triangleq \frac{\chi\left((\omega\cdot l+\mu_{j,k}(b,\omega)-\mu_{j_{0},k}(b,\omega))(\gamma\langle j-j_{0}\rangle)^{-1}\langle l\rangle^{\tau_2}\right)}{\omega\cdot l+\mu_{j,k}(b,\omega)-\mu_{j_{0},k}(b,\omega)}\cdot
\end{align}
The extension \eqref{coef ext psik} is smooth and coincides with \eqref{choice psik} for the parameters taken in $\mathscr{O}_{k}.$ In addition, putting together \eqref{coef ext psik}, \eqref{varro psik}, \eqref{coef scrRk} and \eqref{struc scr D} gives $$(\Psi_k)_{j_{0}}^{j}(l)\in\mathbb{R}\qquad\textnormal{and}\qquad (\Psi_k)_{-j_{0}}^{-j}(-l)=(\Psi_k)_{j_{0}}^{j}(l).$$
Therefore, Definition \ref{Def-Rev} implies that $\Psi_k$ is a real and reversibility preserving Toeplitz in time operator. We now turn to the anti-diagonal equations satisfied by $\Psi_3$ and $\Psi_4$ in \eqref{4Homeq}, which can be unified in the following form. Fix $\ell\in\{3,4\}$; then both equations of interest read
\begin{equation}\label{hom eq Psil}
\big(\omega\cdot\partial_{\varphi}\Pi_{\ell-2}^{\perp}+\mathscr{D}_{\ell-2}\big)\Psi_{\ell}-\Psi_{\ell}\big(\omega\cdot\partial_{\varphi}\Pi_{5-\ell}^{\perp}+\mathscr{D}_{5-\ell}\big)=-P_{N}^2\mathscr{R}_{\ell}.
\end{equation}
First notice that similarly to \eqref{cond proj R}-\eqref{restri rl}, the condition $\Psi_\ell=\Pi_{\ell-2}^{\perp}\Psi_\ell\Pi_{5-\ell}^{\perp}$ implies that the Fourier coefficients of $\Psi_{\ell}$ satisfy
\begin{equation}\label{restri Psil}
\forall(l,l_0)\in(\mathbb{Z}^{d})^2,\quad\forall j\in\overline{\mathbb{S}}_{0,\ell-2}\,\,\textnormal{or}\,\,j_0\in\overline{\mathbb{S}}_{0,5-\ell},\quad(\Psi_\ell)_{l_{0},j_{0}}^{l,j}=0.
\end{equation}
One readily has that $\Psi_\ell$ is a solution of \eqref{hom eq Psil} if and only if for any $(l_0,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,5-\ell}),$
\begin{align*}
\sum_{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,\ell-2})}(\Psi_\ell)_{l_{0},j_{0}}^{l,j}\Big(\omega\cdot (l-l_{0})+\mu_{j,\ell-2}(b,\omega)-&\mu_{j_0,5-\ell}(b,\omega)\Big)\mathbf{e}_{l,j}\\
&=-\sum_{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,\ell-2})\atop\langle l-l_0,j,j_0\rangle\leqslant N}r_{j_0,\ell}^{j}(b,\omega,l-l_0)\mathbf{e}_{l,j}.
\end{align*}
By identification, we deduce that for any $(l,l_0,j,j_0)\in(\mathbb{Z}^d)^2\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,\ell-2})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,5-\ell}),$ if $\langle l-l_0,j,j_0\rangle>N$, then $(\Psi_{\ell})_{l_0,j_0}^{l,j}=0,$
otherwise we have
$$(\Psi_\ell)_{l_{0},j_{0}}^{l,j}\Big(\omega\cdot (l-l_{0})+\mu_{j,\ell-2}(b,\omega)-\mu_{j_0,5-\ell}(b,\omega)\Big)=-r_{j_{0},\ell}^{j}(b,\omega,l-l_{0}).$$
As a consequence, we have that $\Psi_\ell$ is a Toeplitz in time operator with $(\Psi_\ell)_{j_{0}}^{j}(l-l_0)\triangleq (\Psi_\ell)_{l_{0},j_{0}}^{l,j}$. In addition, for $(l,j,j_{0})\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,\ell-2})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,5-\ell})$ with $\langle l,j,j_0\rangle\leqslant N,$ one gets
\begin{equation}\label{choice psil}
(\Psi_\ell)_{j_{0}}^{j}(b,\omega,l)=\frac{-r_{j_{0},\ell}^{j}(b,\omega,l)}{\omega\cdot l+\mu_{j,\ell-2}(b,\omega)-\mu_{j_{0},5-\ell}(b,\omega)}
\end{equation}
provided that the denominator does not vanish. This is ensured by selecting the parameters $(b,\omega)$ in the following set
$$\mathscr{O}_{1,2}=\bigcap_{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})\atop\langle l,j,j_0\rangle\leqslant N}\Big\{(b,\omega)\in\mathscr{O}\quad\textnormal{s.t.}\quad\big|\omega\cdot l+\mu_{j,1}(b,\omega)-\mu_{j_0,2}(b,\omega)\big|>\tfrac{\gamma}{\langle l,j,j_0\rangle^{\tau_2}}\Big\}.$$
This implies that the identity \eqref{choice psil} is well defined. Now, the extension of $\Psi_\ell$, still denoted $\Psi_\ell$, is obtained by defining the Fourier coefficients by \eqref{restri Psil} and for $(l,j,j_{0})\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,\ell-2})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,5-\ell})$ with $\langle l,j,j_0\rangle\leqslant N,$
\begin{equation}\label{coef ext psil}
(\Psi_\ell)_{j_{0}}^{j}(b,\omega,l)=-\varrho_{j_{0},\ell}^{j}(b,\omega,l)\,\, r_{j_{0},\ell}^{j}(b,\omega,l)
\end{equation}
with
\begin{align}\label{varro psil}
\varrho_{j_{0},\ell}^{j}(b,\omega,l)\triangleq \frac{\chi\left((\omega\cdot l+\mu_{j,\ell-2}(b,\omega)-\mu_{j_{0},5-\ell}(b,\omega))\gamma^{-1}\langle l,j,j_0\rangle^{\tau_2}\right)}{\omega\cdot l+\mu_{j,\ell-2}(b,\omega)-\mu_{j_{0},5-\ell}(b,\omega)},
\end{align}
where $\chi$ is the cut-off function introduced in \eqref{def cut-off chi}. The extension \eqref{coef ext psil} is smooth and coincides with \eqref{choice psil} for the parameters taken in $\mathscr{O}_{1,2}.$ In addition, putting together \eqref{coef ext psil}, \eqref{varro psil}, \eqref{coef scrRk} and \eqref{struc scr D} gives $$(\Psi_\ell)_{j_{0}}^{j}(l)\in\mathbb{R}\qquad\textnormal{and}\qquad (\Psi_\ell)_{-j_{0}}^{-j}(-l)=(\Psi_\ell)_{j_{0}}^{j}(l).$$
Therefore, Definition \ref{Def-Rev} implies that $\Psi_{\ell}$ is a real and reversibility preserving Toeplitz in time operator. Now, consider
\begin{equation}\label{D-R next}
\mathscr{D}_{\textnormal{\tiny{next}}}\triangleq \mathscr{D}+\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor,\quad \quad\mathscr{R}_{\textnormal{\tiny{next}}}\triangleq \Phi^{-1}\big(-\Psi\,\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor +\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}+\mathscr{R}\Psi\big)
\end{equation}
and
$$\mathscr{L}_{\textnormal{\tiny{next}}}\triangleq \omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{\textnormal{\tiny{next}}}+\mathscr{R}_{\textnormal{\tiny{next}}}.$$
Recall that $\mathscr{D},$ $\mathscr{R}$ and $\Psi$ satisfy the localization properties \eqref{struc scr D}, \eqref{cond proj R} and \eqref{ansatz Phi reduc rem}, respectively. One can easily check that these properties are stable under composition and addition, and therefore
$$\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathscr{D}_{\textnormal{\tiny{next}}}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\mathscr{D}_{\textnormal{\tiny{next}}}\qquad\textnormal{and}\qquad\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\mathscr{R}_{\textnormal{\tiny{next}}}\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}=\mathscr{R}_{\textnormal{\tiny{next}}}.$$
Therefore, in restriction to the Cantor set
$$\mathscr{O}_{\textnormal{\tiny{next}}}^{\gamma}\triangleq \mathscr{O}_{1}\cap\mathscr{O}_{2}\cap\mathscr{O}_{1,2},$$
the above construction implies that
$$\mathscr{L}_{\textnormal{\tiny{next}}}=\Phi^{-1}\mathscr{L}\Phi.$$
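Indeed, inserting the homological equation \eqref{hom eq Psi} into the decomposition of $\Phi^{-1}\mathscr{L}\Phi$ computed above and using the identity $\Phi^{-1}=\mathbf{I}_{\mathtt{m},\perp}-\Phi^{-1}\Psi$, we find
\begin{align*}
\Phi^{-1}\mathscr{L}\Phi&=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}+\Phi^{-1}\Big(\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor+\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}+\mathscr{R}\Psi\Big)\\
&=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}+\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor+\Phi^{-1}\Big(-\Psi\,\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor+\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}+\mathscr{R}\Psi\Big)\\
&=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{\textnormal{\tiny{next}}}+\mathscr{R}_{\textnormal{\tiny{next}}},
\end{align*}
in agreement with \eqref{D-R next}.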
To end this KAM step, we shall now give some quantitative estimates in order to prove the convergence of the scheme. For this aim, we assume that the following estimates hold true.
\begin{equation}\label{ass-mukd}
\forall k\in\{1,2\},\quad\forall (j,j_0)\in(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2,\quad\max_{\alpha\in\mathbb{N}^{d+1}\atop|\alpha|\in\llbracket 0,q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\Big|\partial_{b,\omega}^{\alpha}\Big(\mu_{j,k}(b,\omega)-\mu_{j_0,k}(b,\omega)\Big)\Big|\leqslant C|j-j_0|
\end{equation}
and
\begin{equation}\label{ass-mu12d}
\forall (j,j_0)\in(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2}),\quad\max_{\alpha\in\mathbb{N}^{d+1}\atop|\alpha|\in\llbracket 0,q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\Big|\partial_{b,\omega}^{\alpha}\Big(\mu_{j,1}(b,\omega)-\mu_{j_0,2}(b,\omega)\Big)\Big|\leqslant C\langle j,j_0\rangle.
\end{equation}
We denote
$$A_{l,j,j_{0}}^{k,k'}(b,\omega)\triangleq \omega\cdot l+\mu_{j,k}(b,\omega)-\mu_{j_{0},k'}(b,\omega), \quad a_{l,j,j_{0}}\triangleq (\gamma\langle j-j_0\rangle)^{-1}\langle l\rangle^{\tau_2},\quad \widetilde{a}_{l,j,j_0}\triangleq \gamma^{-1}\langle l,j,j_0\rangle^{\tau_2}.$$
Then, we can write
\begin{align*}
\forall k\in\{1,2\},\quad\varrho_{j_{0},k}^{j}(b,\omega,l)&=a_{l,j,j_{0}}\widehat \chi\left(a_{l,j,j_{0}} A_{l,j,j_{0}}^{k,k}(b,\omega)\right),\\
\forall\ell\in\{3,4\},\quad\varrho_{j_{0},\ell}^{j}(b,\omega,l)&=\widetilde{a}_{l,j,j_{0}}\widehat \chi\left(\widetilde{a}_{l,j,j_{0}} A_{l,j,j_{0}}^{\ell-2,5-\ell}(b,\omega)\right),
\end{align*}
where $\widehat{\chi}(x)\triangleq \frac{\chi(x)}{x}$ is $\mathcal{C}^\infty$ with bounded derivatives. The assumptions \eqref{ass-mukd}-\eqref{ass-mu12d} imply
\begin{align*}
\forall\, (l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2,\quad\max_{\alpha\in\mathbb{N}^{d+1}\atop|\alpha|\in\llbracket 0, q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\Big|\partial_{b,\omega}^{\alpha}A_{l,j,j_{0}}^{k,k}(b,\omega)\Big|\leqslant C\langle l,j-j_0\rangle
\end{align*}
and
\begin{align*}
\forall\, (l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,\ell-2})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,5-\ell}),\quad\max_{\alpha\in\mathbb{N}^{d+1}\atop|\alpha|\in\llbracket 0, q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\Big|\partial_{b,\omega}^{\alpha}A_{l,j,j_{0}}^{\ell-2,5-\ell}(b,\omega)\Big|\leqslant C\langle l,j,j_0\rangle.
\end{align*}
Applying Lemma \ref{lem funct prop}-(iv), we obtain for any $\alpha\in\mathbb{N}^{d+1}$ with $|\alpha|\in\llbracket 0,q\rrbracket,$
\begin{align*}
\forall k\in\{1,2\},\quad\sup_{(b,\omega)\in\mathcal{O}}\Big|\partial_{b,\omega}^{\alpha}\varrho_{j_0,k}^{j}(b,\omega,l)\Big|&\leqslant C\gamma^{-(|\alpha|+1)}\langle l,j-j_0\rangle^{\tau_2 |\alpha|+\tau_2+|\alpha|},\\
\forall\ell\in\{3,4\},\quad\sup_{(b,\omega)\in\mathcal{O}}\Big|\partial_{b,\omega}^{\alpha}\varrho_{j_0,\ell}^{j}(b,\omega,l)\Big|&\leqslant C\gamma^{-(|\alpha|+1)}\langle l,j,j_0\rangle^{\tau_2 |\alpha|+\tau_2+|\alpha|}.
\end{align*}
In a similar way to \cite[Prop. 6.5]{HR21}, using the Leibniz rule implies
\begin{align}
\forall k\in\{1,2\},\quad\|\Psi_{k}\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}&\leqslant C\gamma^{-1}\|P_{N}^{1}\mathscr{R}_{k}\|_{\textnormal{\tiny{O-d}},s+\tau_2 q+\tau_2}^{q,\gamma,\mathtt{m}},\label{e-psik-scrRk}\\
\forall\ell\in\{3,4\},\quad\|\Psi_{\ell}\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}&\leqslant C\gamma^{-1}\|P_{N}^{2}\mathscr{R}_{\ell}\|_{\textnormal{\tiny{I-D}},s+\tau_2 q+\tau_2}^{q,\gamma,\mathtt{m}}.\label{e-psil-scrRl}
\end{align}
Combining \eqref{e-psik-scrRk}, \eqref{e-psil-scrRl}, \eqref{hyb nor} and Corollary \ref{cor-hyb-nor}, we get
\begin{align}\label{e-Psi-scrR-hyb}
\interleave\Psi\interleave_{s}^{q,\gamma,\mathtt{m}}&\leqslant C\gamma^{-1}\interleave\mathbf{P_N}\mathscr{R}\interleave_{s+\tau_2 q+\tau_2}^{q,\gamma,\mathtt{m}}\nonumber\\
&\leqslant C\gamma^{-1}N^{\tau_2q+\tau_2}\interleave\mathscr{R}\interleave_{s}^{q,\gamma,\mathtt{m}}.
\end{align}
Now assume the following smallness condition
\begin{align}\label{ass-sml-scrR}
\gamma ^{-1}N^{\tau_2 q+\tau_2}\interleave \mathscr{R}\interleave_{s_0}^{q,\gamma,\mathtt{m}}\leqslant C\varepsilon_{0}.
\end{align}
Putting together \eqref{e-Psi-scrR-hyb} and \eqref{ass-sml-scrR}, we obtain
\begin{equation}\label{sml-Psi-hyb}
\interleave\Psi\interleave_{s_0}^{q,\gamma,\mathtt{m}}\leqslant C\varepsilon_{0}.
\end{equation}
We deduce that, for $\varepsilon_0$ small enough, the operator $\Phi$ is invertible and its inverse is given by
$$\Phi^{-1}=\displaystyle\sum_{n=0}^{\infty}(-1)^{n}\Psi^{n}\triangleq \textnormal{Id}+\Sigma.$$
According to Corollary \ref{cor-hyb-nor}-(ii), \eqref{e-Psi-scrR-hyb} and \eqref{sml-Psi-hyb}, one obtains
\begin{align}
\displaystyle\interleave\Sigma\interleave_{s}^{q,\gamma,\mathtt{m}} & \leqslant \displaystyle\interleave\Psi\interleave_{s}^{q,\gamma,\mathtt{m}}\left(1+\sum_{n=1}^{\infty}\left(C\interleave\Psi\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\right)^{n}\right)\label{e2-Sigma}\\
&\leqslant \displaystyle C\,\gamma ^{-1}N^{\tau_2 q+\tau_2}\interleave\mathscr{R}\interleave_{s}^{q,\gamma,\mathtt{m}}.\label{el-Sigma}
\end{align}
In particular, \eqref{ass-sml-scrR} implies
\begin{equation}\label{sml-Sigma}
\interleave\Sigma\interleave_{s_0}^{q,\gamma,\mathtt{m}}\leqslant C\gamma ^{-1}N^{\tau_2 q+\tau_2}\interleave\mathscr{R}\interleave_{s_0}^{q,\gamma,\mathtt{m}}\leqslant C\varepsilon_0.
\end{equation}
The second identity in \eqref{D-R next} can also be written as
$$\mathscr{R}_{\textnormal{\tiny{next}}}=\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}+\Phi^{-1}\mathscr{R}\Psi-\Psi\,\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor +\Sigma\big(\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}-\Psi\,\lfloor \mathbf{P_{N}}\mathscr{R}\rfloor\big).$$
Hence, one gets from Corollary \ref{cor-hyb-nor}-(ii), \eqref{e2-Sigma}, \eqref{sml-Psi-hyb} and \eqref{sml-Sigma},
\begin{align}\label{e-scrRnext1}
\nonumber \interleave\mathscr{R}_{\textnormal{\tiny{next}}}\interleave_{s}^{q,\gamma,\mathtt{m}} & \leqslant\interleave\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}\interleave_{s}^{q,\gamma,\mathtt{m}}+C\interleave\Sigma\interleave_{s}^{q,\gamma,\mathtt{m}}\left(\interleave\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}+\interleave\Psi\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\interleave\mathscr{R}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\right)\\
&\quad+C\left(1+\interleave\Sigma\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\right)
\left(\interleave\Psi\interleave_{s}^{q,\gamma,\mathtt{m}}\interleave\mathscr{R}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}+\interleave\Psi\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\interleave\mathscr{R}\interleave_{s}^{q,\gamma,\mathtt{m}}\right).
\end{align}
Using Corollary \ref{cor-hyb-nor}-(i), \eqref{e-Psi-scrR-hyb}, \eqref{ass-sml-scrR}, \eqref{sml-Psi-hyb}, \eqref{el-Sigma}, \eqref{sml-Sigma} and \eqref{e-scrRnext1}, we have for all $S\geqslant\overline{s}\geqslant s\geqslant s_{0}$,
\begin{equation}\label{e-KAM Rnext}
\interleave\mathscr{R}_{\textnormal{\tiny{next}}}\interleave_{s}^{q,\gamma,\mathtt{m}}\leqslant N^{s-\overline{s}}\interleave\mathscr{R}\interleave_{\overline{s}}^{q,\gamma,\mathtt{m}}+C\gamma^{-1}N^{\tau_2 q+\tau_2}\interleave\mathscr{R}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\interleave\mathscr{R}\interleave_{s}^{q,\gamma,\mathtt{m}}.
\end{equation}
One also has
\begin{align*}
\partial_{\theta}\mathscr{R}_{\textnormal{\tiny{next}}}&=\Phi^{-1}\Big(\mathbf{P}_{\mathbf{N}}^{\perp}\partial_{\theta}\mathscr{R}+\partial_{\theta}\mathscr{R}\Psi-\Psi\partial_{\theta}\lfloor\mathbf{P}_{\mathbf{N}}\mathscr{R}\rfloor-[\partial_{\theta},\Psi]\lfloor\mathbf{P}_{\mathbf{N}}\mathscr{R}\rfloor\Big)\\
&\quad+[\partial_{\theta},\Sigma]\Big(\mathbf{P}_{\mathbf{N}}^{\perp}\mathscr{R}+\mathscr{R}\Psi-\Psi\lfloor\mathbf{P}_{\mathbf{N}}\mathscr{R}\rfloor\Big).
\end{align*}
Using the fact that for any scalar operator $T$,
$$\|[\partial_{\theta},T]\|_{\textnormal{\tiny{O-d}},s}^{q,\gamma,\mathtt{m}}\lesssim\|T\|_{\textnormal{\tiny{O-d}},s+1}^{q,\gamma,\mathtt{m}},\qquad\|[\partial_{\theta},T]\|_{\textnormal{\tiny{I-D}},s}^{q,\gamma,\mathtt{m}}\lesssim\|T\|_{\textnormal{\tiny{I-D}},s+1}^{q,\gamma,\mathtt{m}},$$
one has for any matricial operator $\mathbf{T},$
$$\interleave[\partial_{\theta},\mathbf{T}]\interleave_{s}^{q,\gamma,\mathtt{m}}\lesssim\interleave\mathbf{T}\interleave_{s+1}^{q,\gamma,\mathtt{m}}.$$
Thus, in a similar way to \eqref{e-KAM Rnext}, one obtains for any $s_0\leqslant s\leqslant\overline{s}\leqslant S,$
\begin{equation}\label{e-KAM dRnext}
\widehat{\delta}_{\textnormal{\tiny{next}}}(s)\leqslant N^{s-\overline{s}}\,\widehat{\delta}(\overline{s})+C\gamma^{-1}N^{\tau_2 q+\tau_2+1}\widehat{\delta}(s_0)\widehat{\delta}(s),
\end{equation}
where
$$\widehat{\delta}(s)\triangleq \max\Big(\gamma^{-1}\interleave\partial_{\theta}\mathscr{R}\interleave_{s}^{q,\gamma,\mathtt{m}}\,,\,\delta(s)\Big).$$
$\blacktriangleright$ \textbf{Initialization} Now, we shall check the validity of the assumptions \eqref{ass-mukd}, \eqref{ass-mu12d} and \eqref{ass-sml-scrR} for the initial operator $\mathscr{L}=\mathscr{L}_0$ in \eqref{reduction on nor}. It is clear from \eqref{ASYFR1+}-\eqref{ASYFR1-} that
$$\forall k\in\{1,2\},\quad\forall(j,j_{0})\in\mathbb{Z}^{2},\quad\max_{q'\in\llbracket 0,q\rrbracket}\sup_{b\in(b_{*},b^{*})}\big|\partial_{b}^{q'}\big(\Omega_{j,k}(b)-\Omega_{j_{0},k}(b)\big)\big|\leqslant C\,|j-j_{0}|$$
and
$$\forall(j,j_{0})\in\mathbb{Z}^{2},\quad\max_{q'\in\llbracket 0, q\rrbracket}\sup_{b\in(b_{*},b^{*})}\big|\partial_{b}^{q'}\big(\Omega_{j,1}(b)-\Omega_{j_{0},2}(b)\big)\big|\leqslant C\,\langle j,j_{0}\rangle.$$
Consequently, we infer from \eqref{mu0 r0}-\eqref{e-ed-r0},
\begin{align*}
\forall k\in\{1,2\},\quad\forall(j,j_{0})\in\mathbb{Z}^{2},\quad\max_{\alpha\in\mathbb{N}^{d+1}\atop|\alpha|\in\llbracket 0, q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\left|\partial_{b,\omega}^{\alpha}\left(\mu_{j,k}^{(0)}(b,\omega)-\mu_{j_{0},k}^{(0)}(b,\omega)\right)\right|\leqslant C\,|j-j_{0}|
\end{align*}
and
\begin{align*}
\forall(j,j_{0})\in\mathbb{Z}^{2},\quad\max_{\alpha\in\mathbb{N}^{d+1}\atop|\alpha|\in\llbracket 0, q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\left|\partial_{b,\omega}^{\alpha}\left(\mu_{j,1}^{(0)}(b,\omega)-\mu_{j_{0},2}^{(0)}(b,\omega)\right)\right|\leqslant C\,\langle j,j_{0}\rangle.
\end{align*}
This proves the initial assumptions \eqref{ass-mukd}-\eqref{ass-mu12d}. Now let us focus on the assumption \eqref{ass-sml-scrR}. The latter is obtained by gathering \eqref{e-hyb-scrR0}, \eqref{sml-RR} and \eqref{p-RR}. Indeed,
\begin{align*}
\gamma ^{-1}N_0^{\tau_2q+\tau_2}\interleave\mathscr{R}_{0}\interleave_{s_0}^{q,\gamma,\mathtt{m}}&\leqslant C\varepsilon\gamma ^{-2}N_0^{\mu_2}\left(1+\|\mathfrak{I}_{0}\|_{s_h+\sigma_{4}}^{q,\gamma,\mathtt{m}}\right)\\
&\leqslant C\varepsilon_{0}.
\end{align*}
$\blacktriangleright$ \textbf{KAM iteration}. Now, we shall implement the complete KAM reduction scheme. Given $m\in\mathbb{N}$ we assume that we have constructed a linear operator
\begin{align}\label{Op-Lm}
\mathscr{L}_{m}\triangleq \omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{m}+\mathscr{R}_{m},
\end{align}
with
$$\mathscr{D}_m=\begin{pmatrix}
\mathscr{D}_{m,1} & 0\\
0 & \mathscr{D}_{m,2}
\end{pmatrix},\qquad\mathscr{D}_{m,k}=\left(\ii \mu_{j,k}^{(m)}\right)_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}},\qquad\mu_{-j,k}^{(m)}(b,\omega)=-\mu_{j,k}^{(m)}(b,\omega)$$
and $\mathscr{R}_m$ a real and reversible Toeplitz in time matrix operator of zero order satisfying $\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} \mathscr{R}_m\Pi_{\overline{\mathbb{S}}_{0}}^{\perp} =\mathscr{R}_m.$ In addition, we assume that the assumptions \eqref{ass-mukd}, \eqref{ass-mu12d} and \eqref{ass-sml-scrR} hold for $\mathscr{D}_m$ and $\mathscr{R}_m$. Notice that for $m=0$ we take the operator $\mathscr{L}_0$ defined in \eqref{reduction on nor}. Applying the KAM step we can construct a linear invertible operator $\Phi_{m}=\mathbf{I}_{\mathtt{m},\perp}+\Psi_{m}$ with $\Psi_m$ living in $\mathcal{O}$ such that in restriction to the Cantor set
\begin{align}\label{Cantor-SX}
&\mathscr{O}_{m+1}^{\gamma }=
\bigcap_{\underset{\,j, j_{0}\in\Z_{\m}\backslash\overline{\mathbb{S}}_{0,k}}{ {k\in\{1,2\}}}}\bigcap_{\underset{|l|\leqslant N_{m}}{l\in\mathbb{Z}^{d}\atop(l,j)\neq(0,j_{0})}}\left\lbrace(b,\omega)\in\mathscr{O}_m^\gamma\quad\textnormal{s.t.}\quad\left|\omega\cdot l+\mu_{j,k}^{(m)}(b,\omega,i_{0})-\mu_{j_{0},k}^{(m)}(b,\omega,i_{0})\right|>\tfrac{\gamma\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}\right\rbrace\nonumber\\
&\bigcap_{(l,j,j_{0})\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})\atop\langle l,j,j_0\rangle\leqslant N_{m}}\left\lbrace(b,\omega)\in\mathscr{O}_{m}^{\gamma }\quad\textnormal{s.t.}\quad \left|\omega\cdot l+\mu_{j,1}^{(m)}(b,\omega)-\mu_{j_{0},2}^{(m)}(b,\omega)\right|>\tfrac{\gamma}{\langle l,j,j_0\rangle^{\tau_2}}\right\rbrace,
\end{align}
the operator $\Psi_{m}$ satisfies the following homological equation
$$\big[\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{m},\Psi_{m}\big]+\mathbf{P_{N_{m}}}\mathscr{R}_{m}=\lfloor \mathbf{P_{N_{m}}}\mathscr{R}_{m}\rfloor$$
and consequently, the following identity holds in $\mathscr{O}_{m+1}^{\gamma}$
\begin{align}\label{Op-Lm1}
\Phi_{m}^{-1}\mathscr{L}_{m}\Phi_{m}=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{m+1}+\mathscr{R}_{m+1},
\end{align}
with
\begin{align}\label{scr Dm+1-Rm+1}
\mathscr{D}_{m+1}\triangleq \mathscr{D}_{m}+\lfloor \mathbf{P_{N_{m}}}\mathscr{R}_{m}\rfloor\quad\mbox{and}\quad \mathscr{R}_{m+1}\triangleq \Phi_{m}^{-1}\left(-\Psi_{m}\,\lfloor \mathbf{P_{N_{m}}}\mathscr{R}_{m}\rfloor +\mathbf{P_{N_{m}}}^{\perp}\mathscr{R}_{m}+\mathscr{R}_{m}\Psi_{m}\right).
\end{align}
Recall that the operator $\lfloor \mathbf{P_{N_{m}}}\mathscr{R}_{m}\rfloor$ is defined by
$$\lfloor \mathbf{P_{N_{m}}}\mathscr{R}_{m}\rfloor=\begin{pmatrix}
\lfloor P_{N_{m}}^1\mathscr{R}_{m,1}\rfloor & 0\\
0 & \lfloor P_{N_{m}}^1\mathscr{R}_{m,2}\rfloor
\end{pmatrix},$$
with
$$\lfloor P_{N_{m}}^1\mathscr{R}_{m,k}\rfloor=\left(\ii r_{j,k}^{(m)}\right)_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}},\qquad r_{-j,k}^{(m)}(b,\omega)=-r_{j,k}^{(m)}(b,\omega).$$
Observe that the symmetry condition for $r_{j,k}^{(m)}$ is a consequence of the reversibility of $\mathscr{R}_m.$ By construction, we find
\begin{align}\label{spec rec def}
\mu_{j,k}^{(m+1)}=\mu_{j,k}^{(m)}+r_{j,k}^{(m)}.
\end{align}
We point out that working with this extension for $\Psi_m$ allows us to extend both $\mathscr{D}_{m+1}$ and the remainder $\mathscr{R}_{m+1}$, provided that the operators $\mathscr{D}_{m}$ and $\mathscr{R}_{m}$ are defined in the whole range of parameters. Thus the operator defined by the right-hand side in \eqref{Op-Lm1} can be extended to the whole set $\mathcal{O}$, and we denote this extension by $\mathscr{L}_{m+1}$, that is,
\begin{align*}
\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{m+1}+\mathscr{R}_{m+1}\triangleq \mathscr{L}_{m+1}.
\end{align*}
This enables us to construct by induction the sequence of operators $\left(\mathscr{L}_{m+1}\right)$ in the full set $\mathcal{O}$. Similarly, the operator $\Phi_{m}^{-1}\mathscr{L}_{m}\Phi_{m}$ admits an extension in $\mathcal{O}$ induced by the extension of $\Phi_m^{\pm1}$. However, by construction, the identity $\mathscr{L}_{m+1}=\Phi_{m}^{-1}\mathscr{L}_{m}\Phi_{m}$ in \eqref{Op-Lm1} holds in the Cantor set $\mathscr{O}_{m+1}^{\gamma }$ and may fail outside this set. Define
\begin{equation}\label{def dltm dltmh}
\delta_{m}(s)\triangleq \gamma ^{-1}\interleave\mathscr{R}_{m}\interleave_{s}^{q,\gamma,\mathtt{m}}\qquad\textnormal{and}\qquad\widehat{\delta}_{m}(s)\triangleq \max\Big(\delta_m(s)\,,\,\gamma^{-1}\interleave\partial_{\theta}\mathscr{R}_m\interleave_{s}^{q,\gamma,\mathtt{m}}\Big).
\end{equation}
Assume that the following estimates hold
\begin{align}\label{e-mujmk} \forall\,(j,j_{0})\in(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^{2},\quad\max_{|\alpha| \in\llbracket 0,q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\left|\partial_{b,\omega}^{\alpha}\left(\mu_{j,k}^{(m)}(b,\omega)-\mu_{j_{0},k}^{(m)}(b,\omega)\right)\right|\leqslant C\,|j-j_{0}|
\end{align}
and
\begin{align}\label{e-mujm12} \forall\,(j,j_{0})\in(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2}),\quad\max_{|\alpha| \in\llbracket 0,q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\left|\partial_{b,\omega}^{\alpha}\left(\mu_{j,1}^{(m)}(b,\omega)-\mu_{j_{0},2}^{(m)}(b,\omega)\right)\right|\leqslant C\,\langle j,j_{0}\rangle.
\end{align}
Applying the KAM step, we deduce from \eqref{e-KAM Rnext} and \eqref{e-KAM dRnext} the following induction formulae true for any $s_0\leqslant s\leqslant\overline{s}\leqslant S,$
\begin{align*}
\delta_{m+1}(s)&\leqslant N_{m}^{s-\overline{s}}\delta_{m}(\overline{s})+CN_{m}^{\tau_2q+\tau_2}\delta_{m}(s_0)\delta_{m}(s),\\
\widehat{\delta}_{m+1}(s)&\leqslant N_{m}^{s-\overline{s}}\,\widehat{\delta}_{m}(\overline{s})+CN_{m}^{\tau_2q+\tau_2+1}\widehat{\delta}_{m}(s_0)\widehat{\delta}_{m}(s).
\end{align*}
Hence, in a similar way to \cite[Prop. 6.5]{HR21}, our choice of parameters \eqref{param} allows us to prove by induction on $m\in\mathbb{N}$ that
\begin{equation}\label{hyp-ind dltp}
\forall\, m\in\mathbb{N},\quad \delta_{m}(\overline{s}_{l})\leqslant \delta_{0}(s_{h})N_{0}^{\mu_{2}}N_{m}^{-\mu_{2}}\quad \mbox{ and }\quad \delta_{m}(s_{h})\leqslant\left(2-\tfrac{1}{m+1}\right)\delta_{0}(s_{h}),
\end{equation}
and
\begin{equation}\label{hyp-ind dltph}
\forall m\in\mathbb{N},\quad\widehat{\delta}_{m}(s_{0})\leqslant \widehat{\delta}_{0}(s_{h})N_{0}^{\mu_{2}}N_{m}^{-\mu_{2}}\quad \mbox{ and }\quad \widehat{\delta}_{m}(s_{h})\leqslant\left(2-\tfrac{1}{m+1}\right)\widehat{\delta}_{0}(s_{h}).
\end{equation}
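For orientation, the quadratic mechanism behind \eqref{hyp-ind dltp} can be sketched as follows. We assume for this heuristic only, as is standard in such schemes, that the scale grows super-exponentially, say $N_{m+1}=N_{m}^{3/2}$, and that $s_0\leqslant\overline{s}_{l}$; the precise exponents are those fixed in \eqref{param}. Taking $s=s_0$ and $\overline{s}=s_h$ in the first induction formula and inserting the induction hypothesis yields

```latex
% Heuristic sketch (assumptions for this sketch only:
% N_{m+1}=N_m^{3/2} and s_0\leqslant\overline{s}_l)
\begin{align*}
\delta_{m+1}(s_{0})
&\leqslant N_{m}^{s_{0}-s_{h}}\,\delta_{m}(s_{h})
 +C\,N_{m}^{\tau_{2}q+\tau_{2}}\,\delta_{m}(s_{0})^{2}\\
&\leqslant 2\,N_{m}^{s_{0}-s_{h}}\,\delta_{0}(s_{h})
 +C\,\delta_{0}(s_{h})^{2}\,N_{0}^{2\mu_{2}}\,N_{m}^{\tau_{2}q+\tau_{2}-2\mu_{2}}.
\end{align*}
% Each term on the right is bounded by
% \tfrac12\,\delta_{0}(s_h)\,N_{0}^{\mu_2}\,N_{m+1}^{-\mu_2}
% provided s_h-s_0 and \mu_2 are taken large enough and
% \delta_0(s_h) is small, as the parameter conditions guarantee.
```

The smoothing term decays because the gap $s_h-s_0$ dominates the losses measured by $\mu_2$, while the second term is quadratic in $\delta$: this is the classical quadratic convergence of KAM schemes.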
Observe that the first condition in \eqref{hyp-ind dltp}, together with \eqref{e-dlt0sh init} and \eqref{p-RR}, implies that the smallness condition \eqref{ass-sml-scrR} is satisfied for any $m$ (replacing $\mathscr{R}$ by $\mathscr{R}_m$ and $N$ by $N_m$). Using the Toeplitz structure of $\mathscr{R}_{m,k}$ and an integration by parts, we get from \eqref{spec rec def}
\begin{align*}
\left\|\mu_{j,k}^{(m+1)}-\mu_{j,k}^{(m)}\right\|^{q,\gamma}&=\big\|\big\langle P_{N_{m}}^1\mathscr{R}_{m,k}\mathbf{e}_{l,j}\,,\,\mathbf{e}_{l,j}\big\rangle_{L^{2}(\mathbb{T}^{d +1})}\big\|^{q,\gamma}\\
&=\big\|\big\langle P_{N_{m}}^1\mathscr{R}_{m,k}\mathbf{e}_{0,j}\,,\,\mathbf{e}_{0,j}\big\rangle_{L^{2}(\mathbb{T}^{d+1})}\big\|^{q,\gamma}\\
&=\tfrac{1}{|j|}\big\|\big\langle P_{N_{m}}^1\mathscr{R}_{m,k}\mathbf{e}_{0,j}\,,\,\partial_{\theta}\mathbf{e}_{0,j}\big\rangle_{L^{2}(\mathbb{T}^{d+1})}\big\|^{q,\gamma}\\
&=\tfrac{1}{|j|}\big\|\big\langle P_{N_{m}}^1\partial_{\theta}\mathscr{R}_{m,k}\mathbf{e}_{0,j}\,,\,\mathbf{e}_{0,j}\big\rangle_{L^{2}(\mathbb{T}^{d+1})}\big\|^{q,\gamma}.
\end{align*}
Therefore, a duality argument together with \eqref{def dltm dltmh}, \eqref{hyp-ind dltph}, \eqref{edlt0s}, \eqref{sml-RR} and Corollary \ref{cor-hyb-nor}-(iii) imply
\begin{align}\label{bound Cv mujkm}
|j|\left\|\mu_{j,k}^{(m+1)}-\mu_{j,k}^{(m)}\right\|^{q,\gamma}&\leqslant \big\|\partial_{\theta}\mathscr{R}_{m}\mathbf{e}_{0,j}\|_{s_0}^{q,\gamma,\mathtt{m}}\,\,\langle j\rangle^{-s_0}\nonumber\\
&\leqslant C\interleave\partial_{\theta}\mathscr{R}_{m}\interleave_{s_0}^{q,\gamma,\mathtt{m}}\|\mathbf{e}_{0,j}\|_{H^{s_0}}\,\,\langle j\rangle^{-s_0}\nonumber\\
&\leqslant C \gamma\,\widehat{\delta}_{0}(s_{h})N_{0}^{\mu_{2}}N_{m}^{-\mu_{2}}\nonumber\\
&\leqslant C \varepsilon\gamma^{-1}N_0^{\mu_{2}}N_{m}^{-\mu_{2}}.
\end{align}
Now we shall check that the assumptions \eqref{e-mujmk} and \eqref{e-mujm12} are satisfied for the next step. Combining \eqref{bound Cv mujkm} with \eqref{e-mujmk} we infer that for $k\in\{1,2\}$ and $(j,j_{0})\in(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^{2},$
\begin{align*}
\max_{|\alpha| \in\llbracket 0,q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\left|\partial_{b,\omega}^{\alpha}\left(\mu_{j,k}^{(m+1)}(b,\omega)-\mu_{j_{0},k}^{(m+1)}(b,\omega)\right)\right|\leqslant C\big(1+\varepsilon\gamma^{-1-q}N_0^{\mu_2}N_{m}^{-\mu_{2}}\big)\,|j-j_{0}|.
\end{align*}
Now putting together \eqref{bound Cv mujkm} and \eqref{e-mujm12}, we get for $(j,j_{0})\in(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2}),$
\begin{align*}
\max_{|\alpha| \in\llbracket 0,q\rrbracket}\sup_{(b,\omega)\in\mathcal{O}}\left|\partial_{b,\omega}^{\alpha}\left(\mu_{j,1}^{(m+1)}(b,\omega)-\mu_{j_{0},2}^{(m+1)}(b,\omega)\right)\right|\leqslant C\big(1+\varepsilon\gamma^{-1-q}N_0^{\mu_2}N_{m}^{-\mu_{2}}\big)\,\langle j,j_{0}\rangle.
\end{align*}
The convergence of the series $\sum N_m^{-\mu_2}$ implies the desired result with a constant $C$ uniform in $m.$ This completes the induction argument. Observe that the bound \eqref{bound Cv mujkm} implies the convergence of the sequence $\left(\mu_{j,k}^{(m)}\right)_{m\in\mathbb{N}}$ toward some $\mu_{j,k}^{(\infty)}\in W^{q,\infty,\gamma }(\mathcal{O},\mathbb{C})$ given by
\begin{equation}\label{constru mujkfty}
\mu_{j,k}^{(\infty)}=\mu_{j,k}^{(0)}+\sum_{m=0}^\infty\left(\mu_{j,k}^{(m+1)}-\mu_{j,k}^{(m)}\right)\triangleq \mu_{j,k}^{(0)}+r_{j,k}^{(\infty)},
\end{equation}
where $(\mu_{j,k}^{(0)})$, introduced in Proposition \ref{prop proj nor dir}, is given by
$$\mu_{j,k}^{(0)}(b,\omega,i_{0})=\Omega_{j,k}(b)+j\big(c_{k}(b,\omega,i_{0})-\mathtt{v}_k(b)\big).$$
The estimate \eqref{e-rjfty} follows immediately from \eqref{constru mujkfty} and \eqref{bound Cv mujkm}. Define the diagonal operator $\mathscr{D}_{\infty,k}$, acting on the normal modes, by
\begin{align}\label{Dinfty-op}
\forall (l,j)\in\mathbb{Z}^d\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}),\quad \mathscr{D}_{\infty,k} {\bf e}_{l,j}=\ii\mu_{j,k}^{(\infty)}{\bf e}_{l,j}.
\end{align}
By definition of the off-diagonal norm and \eqref{bound Cv mujkm}, we obtain
\begin{align}\label{Cv-od-scrDn}
\|\mathscr{D}_{m,k}-\mathscr{D}_{\infty,k}\|_{\textnormal{\tiny{O-d}},s_{0}}^{q,\gamma,\mathtt{m}}=\sup_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}}\left\|\mu_{j,k}^{(m)}-\mu_{j,k}^{(\infty)}\right\|^{q,\gamma}\leqslant C\, \gamma\,\delta_{0}(s_{h})N_{0}^{\mu_{2}} N_{m}^{-\mu_{2}}.
\end{align}
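Indeed, the bound \eqref{Cv-od-scrDn} follows by telescoping \eqref{bound Cv mujkm}: since $|j|\geqslant 1$, each increment is controlled uniformly in $j$, and (here we use the assumption, inherited from \eqref{param}, that $(N_p)$ grows fast enough so that $\sum_{p\geqslant m}N_{p}^{-\mu_{2}}\lesssim N_{m}^{-\mu_{2}}$) one may write the slightly weaker bound in terms of $\widehat{\delta}_{0}\geqslant\delta_0$:

```latex
% Telescoping the one-step bound; the sum is dominated by its
% first term because N_p grows super-exponentially.
\begin{align*}
\left\|\mu_{j,k}^{(m)}-\mu_{j,k}^{(\infty)}\right\|^{q,\gamma}
&\leqslant\sum_{p=m}^{\infty}
  \left\|\mu_{j,k}^{(p+1)}-\mu_{j,k}^{(p)}\right\|^{q,\gamma}\\
&\leqslant C\,\gamma\,\widehat{\delta}_{0}(s_{h})\,N_{0}^{\mu_{2}}
  \sum_{p=m}^{\infty}N_{p}^{-\mu_{2}}
 \leqslant C\,\gamma\,\widehat{\delta}_{0}(s_{h})\,
  N_{0}^{\mu_{2}}\,N_{m}^{-\mu_{2}}.
\end{align*}
```

Since the right-hand side is independent of $j$, the supremum over $j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}$ obeys the same bound.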
Consider the diagonal operator $\mathscr{L}_{\infty}\triangleq \omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{\infty},$ where $\mathscr{D}_\infty$ is introduced in \eqref{Dinfty-op}. For any $m\in\mathbb{N},$ applying \eqref{Cv-od-scrDn} and \eqref{hyp-ind dltp} yields
\begin{align*}
\interleave\mathscr{L}_{m}-\mathscr{L}_{\infty}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}&\leqslant2\max_{k\in\{1,2\}}\|\mathscr{D}_{m,k}-\mathscr{D}_{\infty,k}\|_{\textnormal{\tiny{O-d}},s_{0}}^{q,\gamma,\mathtt{m}}+\interleave\mathscr{R}_{m}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}\\
&\leqslant C\, \gamma\,\delta_{0}(s_{h})N_{0}^{\mu_{2}} N_{m}^{-\mu_{2}},
\end{align*}
where $\mathscr{L}_{m}$ is given in \eqref{Op-Lm}. As a consequence,
\begin{align*}
\lim_{m\rightarrow\infty} \interleave\mathscr{L}_{m}-\mathscr{L}_{\infty}\interleave_{s_{0}}^{q,\gamma,\mathtt{m}}=0.
\end{align*}
Now we define the sequence $\left(\widehat{\Phi}_m\right)_{m\in\mathbb{N}}$ of the successive transformations as follows
\begin{equation}\label{Def-Phi}
\widehat\Phi_0\triangleq \Phi_0\qquad\textnormal{and}\qquad \forall m\geqslant1,\quad \widehat\Phi_m\triangleq \Phi_0\circ\Phi_1\circ...\circ\Phi_m.
\end{equation}
The identity ${\Phi}_{m}=\hbox{Id}+\Psi_m$ gives
$$\widehat\Phi_{m+1}=\widehat\Phi_{m}+\widehat\Phi_{m}\Psi_{m+1}.$$
Using \eqref{e-Psi-scrR-hyb} and \eqref{hyp-ind dltp}, a completeness argument implies that the series $\sum(\widehat\Phi_{m+1}-\widehat\Phi_{m})$ converges to an element $\Phi_\infty$ which is still close to the identity, hence invertible, and satisfies
\begin{equation}\label{Cv-hyp-Phin}
\interleave \Phi_{\infty}^{-1}-\widehat{\Phi}_{n}^{-1}\interleave_{s_0+1}^{q,\gamma,\mathtt{m}}+\interleave\Phi_{\infty}-\widehat{\Phi}_{n}\interleave_{s_0+1}^{q,\gamma,\mathtt{m}}\lesssim\delta_{0}(s_h)N_0^{\mu_2}N_{n+1}^{-\mu_2}
\end{equation}
and \eqref{cont-Phifty}. We refer the reader to \cite[Prop. 6.5]{HR21} for the complete computations, up to slight modifications corresponding to the hybrid norm. By construction, see \eqref{Def-Phi} and \eqref{Op-Lm1}, we have in $\mathscr{O}_{n+1}^{\gamma}$ the following identity
\begin{align*}
\widehat{\Phi}_{n}^{-1}\mathscr{L}_{0}\widehat{\Phi}_{n}&=\omega\cdot\partial_{\varphi}\mathbf{I}_{\mathtt{m},\perp}+\mathscr{D}_{n+1}+\mathscr{R}_{n+1}\\
&=\mathscr{L}_{\infty}+\mathscr{D}_{n+1}-\mathscr{D}_{\infty}+\mathscr{R}_{n+1}.
\end{align*}
Assume for a while that the set $\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_{0})$ described in Proposition \ref{prop RR} satisfies the following inclusion property with respect to the intermediate Cantor sets given by \eqref{Cantor-SX},
\begin{equation}\label{incl Crr}
\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_{0})\subset\bigcap_{m=0}^{n+1}\mathscr{O}_{m}^{\gamma}=\mathscr{O}_{n+1}^{\gamma}.
\end{equation}
Hence, in restriction to $\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_0)\subset\mathscr{O}_{n+1}^{\gamma}$, we obtain
\begin{align*}
\Phi_{\infty}^{-1}\mathscr{L}_{0}\Phi_{\infty}&=\mathscr{L}_{\infty}+\big(\mathscr{D}_{n+1}-\mathscr{D}_{\infty}+\mathscr{R}_{n+1}\big)\Pi_{\overline{\mathbb{S}}_{0}}^{\perp}\\
&\quad+\Phi_{\infty}^{-1}\mathscr{L}_{0}\left(\Phi_{\infty}-\widehat{\Phi}_{n}\right)+\left(\Phi_{\infty}^{-1}-\widehat{\Phi}_{n}^{-1}\right)\mathscr{L}_{0}\widehat{\Phi}_{n}\\
&\triangleq \mathscr{L}_{\infty}+\mathscr{E}_{n}^1.
\end{align*}
The estimate \eqref{e-scrEn1} is obtained by using \eqref{hyb nor}, Lemma \ref{properties of Toeplitz in time operators}-(ii)-(iii), \eqref{Cv-od-scrDn}, \eqref{e-scrL0} and \eqref{Cv-hyp-Phin} combined with \eqref{def dltm dltmh}, \eqref{hyp-ind dltp}, \eqref{edlt0s}, \eqref{cont-Phifty} and \eqref{sml-RR}. Now it remains to prove \eqref{incl Crr}. This is done by a finite induction on $m$ with $n$ fixed. First, by definition we have
$\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_{0})\subset\mathcal{O}\triangleq \mathscr{O}_{0}^{\gamma }.$
Now suppose that $\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_{0})\subset\mathscr{O}_{m}^{\gamma}$ for $m\leqslant n$ and let us prove that
\begin{align}\label{Inc-Orr in Om+1}
\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_{0})\subset\mathscr{O}_{m+1}^{\gamma}.
\end{align}
Let $(b,\omega)\in\mathscr{O}_{\infty,n}^{\gamma,\tau_1,\tau_2}(i_{0}).$ For $(l,j,j_{0})\in\mathbb{Z}^{d }\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})$ such that $0\leqslant \langle l,j,j_0\rangle\leqslant N_{m}$, the triangle inequality, \eqref{Cv-od-scrDn}, \eqref{p-RR} and \eqref{e-dlt0sh init} imply
\begin{align*}
\left|\omega\cdot l+\mu_{j,1}^{(m)}(b,\omega)-\mu_{j_{0},2}^{(m)}(b,\omega)\right| & \geqslant\left|\omega\cdot l+\mu_{j,1}^{(\infty)}(b,\omega)-\mu_{j_{0},2}^{(\infty)}(b,\omega)\right|-2\max_{k\in\{1,2\}}\sup_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}}\left\|\mu_{j,k}^{(m)}-\mu_{j,k}^{(\infty)}\right\|^{q,\gamma}\\
& \geqslant\tfrac{2\gamma}{\langle l,j,j_0\rangle^{\tau_2}}-2C\gamma\delta_{0}(s_{h})N_{0}^{\mu_{2}} N_{m}^{-\mu_{2}}\\
& \geqslant\tfrac{\gamma}{\langle l,j,j_0\rangle^{\tau_2}}\left(2-2C\gamma\varepsilon_{0}\langle l,j,j_0\rangle^{\tau_2-\mu_2}\right).
\end{align*}
Thus, for $\varepsilon_0$ small enough and by \eqref{p-RR} (which implies that $\mu_2\geqslant \tau_2$), we get
$$\left|\omega\cdot l+\mu_{j,1}^{(m)}(b,\omega)-\mu_{j_{0},2}^{(m)}(b,\omega)\right| > \tfrac{\gamma}{\langle l,j,j_0\rangle^{\tau_2}}\cdot$$
Now for $k\in\{1,2\}$ and $(l,j,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2$ with $(l,j)\neq (0,j_0)$ and $|l|\leqslant N_m,$ we get
\begin{align*}
\left|\omega\cdot l+\mu_{j,k}^{(m)}(b,\omega)-\mu_{j_{0},k}^{(m)}(b,\omega)\right| & \geqslant\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega)-\mu_{j_{0},k}^{(\infty)}(b,\omega)\right|-2\sup_{j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}}\left\|\mu_{j,k}^{(m)}-\mu_{j,k}^{(\infty)}\right\|^{q,\gamma}\\
& \geqslant\tfrac{2\gamma\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}-2C\gamma\delta_{0}(s_{h})N_{0}^{\mu_{2}} N_{m}^{-\mu_{2}}\\
& \geqslant\tfrac{\gamma\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}\left(2-2C\gamma\varepsilon_{0}\langle l\rangle^{\tau_2-\mu_2}\right).
\end{align*}
Hence, taking $\varepsilon_0$ small enough, we obtain for $k\in\{1,2\},$
$$\left|\omega\cdot l+\mu_{j,k}^{(m)}(b,\omega)-\mu_{j_{0},k}^{(m)}(b,\omega)\right|>\tfrac{\gamma\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}\cdot$$
Hence, $(b,\omega)\in\mathscr{O}_{m+1}^{\gamma}$, which proves \eqref{Inc-Orr in Om+1}.\\
\textbf{(ii)} One can get the estimates \eqref{ed-rjkfty} and \eqref{ed-mujkfty} by a similar induction procedure as above starting with \eqref{diff Vpm} and \eqref{ed-hyb-scrR0} applied with $\mathtt{p}=4\tau_2q+4\tau_2.$ For more details, we refer the reader to \cite[Prop. 6.5]{HR21}.
\end{proof}
We end this section with the effective construction of the approximate right inverse of the linearized operator in the normal directions. Since we have constructed a diagonal operator $\mathscr{L}_{\infty}$ with Fourier multiplier entries, the situation reduces to two decoupled scalar studies. Therefore, we can reproduce the proof of \cite[Prop. 6.6]{HR21} with small adaptations and obtain the following result.
\begin{prop}\label{prop inv linfty}
Let $(\gamma,q,d,\tau_{1},s_{0},\mu_2,s_h,S,\mathtt{m})$ satisfying \eqref{setting tau1 and tau2}--\eqref{init Sob cond}, \eqref{ouvert-sym} and \eqref{p-RR}--\eqref{sml-RR}. There exists $\sigma_5\triangleq \sigma_5(\tau_1,\tau_2,q,d)\geqslant\sigma_{4}$ such that if
\begin{equation}\label{bnd frkIn-1}
\|\mathfrak{I}_0\|_{s_h+\sigma_5}^{q,\gamma,\mathtt{m}}\leqslant 1,
\end{equation}
then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item Consider the operator $\mathscr{L}_{\infty}$ defined in Proposition $\ref{prop RR}.$ Then there exists a family of linear operators $\big(\mathtt{T}_n\big)_{n\in\mathbb{N}}$ defined in $\mathcal{O}$ satisfying the estimate
$$
\forall s\in[s_0,S],\quad \sup_{n\in\mathbb{N}}\|\mathtt{T}_{n}\rho\|_{s}^{q,\gamma ,\mathtt{m}}\lesssim \gamma ^{-1}\|\rho\|_{s+\tau_{1}q+\tau_{1}}^{q,\gamma ,\mathtt{m}}$$
and such that for any $n\in\mathbb{N}$, in the Cantor set
$$\Lambda_{\infty,n}^{\gamma,\tau_{1}}(i_{0})\triangleq\bigcap_{\underset{(l,j)\in\mathbb{Z}^{d }\times\mathbb{S}_{0}^{c}\atop |l|\leqslant N_{n}}{k\in\{1,2\}}}\left\lbrace(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{0})\right|>\tfrac{\gamma \langle j\rangle }{\langle l\rangle^{\tau_{1}}}\right\rbrace,
$$
we have
$$
\mathscr{L}_{\infty}\mathtt{T}_n=\mathbf{I}_{\mathtt{m},\perp}+{\mathscr{E}}_{n}^2,
$$
with
$$
\forall s_{0}\leqslant s\leqslant\overline{s}\leqslant S, \quad \|{\mathscr{E}}_{n}^2\rho\|_{s}^{q,\gamma ,\mathtt{m}} \lesssim
N_n^{s-\overline{s}}\gamma^{-1}\|\rho\|_{\overline{s}+1+\tau_{1}q+\tau_{1}}^{q,\gamma ,\mathtt{m}}.
$$
\item
There exists a family of linear operators $\big(\widehat{\mathtt{T}}_{n}\big)_{n\in\mathbb{N}}$ satisfying
\begin{equation*}
\forall \, s\in\,[ s_0, S],\quad\sup_{n\in\mathbb{N}}\|\widehat{\mathtt{T}}_{n}\rho\|_{s}^{q,\gamma ,\mathtt{m}}\lesssim\gamma^{-1}\left(\|\rho\|_{s+\sigma_5}^{q,\gamma ,\mathtt{m}}+\| \mathfrak{I}_{0}\|_{s+\sigma_5}^{q,\gamma ,\mathtt{m}}\|\rho\|_{s_{0}+\sigma_5}^{q,\gamma,\mathtt{m}}\right)
\end{equation*}
and such that in the Cantor set
\begin{equation}\label{ttGn}
\mathtt{G}_n(\gamma,\tau_{1},\tau_{2},i_{0})\triangleq \mathcal{O}_{\infty,n}^{\gamma,\tau_{1}}(i_{0})\cap\mathscr{O}_{\infty,n}^{\gamma,\tau_{1},\tau_{2}}(i_{0})\cap\Lambda_{\infty,n}^{\gamma,\tau_{1}}(i_{0}),
\end{equation}
we have
$$
\widehat{\mathcal{L}}\,\widehat{\mathtt{T}}_{n}=\mathbf{I}_{\mathtt{m},\perp}+\mathtt{E}_n,
$$
where $\mathtt{E}_n$ satisfies the following estimate
\begin{align*}
\forall\, s\in [s_0,S],\quad &\|\mathtt{E}_n\rho\|_{s_0}^{q,\gamma ,\mathtt{m}}
\nonumber\lesssim N_n^{s_0-s}\gamma^{-1}\Big( \|\rho\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}+\varepsilon\gamma^{-2}\| \mathfrak{I}_{0}\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}+\sigma_5}^{q,\gamma,\mathtt{m}} \Big)\\
&\qquad\qquad\qquad\quad+ \varepsilon\gamma^{-3}N_{0}^{{\mu}_{2}}N_{n+1}^{-\mu_{2}} \|\rho\|_{s_0+\sigma_5}^{q,\gamma,\mathtt{m}}.
\end{align*}
Recall that $\widehat{\mathcal{L}},$ $ \mathcal{O}_{\infty,n}^{\gamma,\tau_{1}}(i_{0})$ and $\mathscr{O}_{\infty,n}^{\gamma,\tau_{1},\tau_{2}}(i_{0})$ are given in Propositions $\ref{lemma-normal-s}$, $\ref{prop strighten}$ and $\ref{prop RR}$, respectively.
\item In the Cantor set $\mathtt{G}_{n}(\gamma,\tau_{1},\tau_{2},i_{0})$, we have the following splitting
$$\widehat{\mathcal{L}}=\widehat{\mathtt{L}}_{n}+\widehat{\mathtt{R}}_{n},\qquad\textnormal{with}\qquad\widehat{\mathtt{L}}_{n}\widehat{\mathtt{T}}_{n}=\textnormal{Id}\qquad\textnormal{and}\qquad\widehat{\mathtt{R}}_{n}=\mathtt{E}_{n}\widehat{\mathtt{L}}_{n},$$
where the operators $\widehat{\mathtt{L}}_{n}$ and $\widehat{\mathtt{R}}_{n}$ are defined in $\mathcal{O}$ and satisfy the following estimates
\begin{align*}
\forall s\in[s_{0},S],\quad& \sup_{n\in\mathbb{N}}\|\widehat{\mathtt{L}}_{n}\rho\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\|\rho\|_{s+1}^{q,\gamma,\mathtt{m}}+\varepsilon\gamma^{-2}\|\mathfrak{I}_{0}\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}+1}^{q,\gamma,\mathtt{m}},\\
\forall s\in[s_{0},S],\quad &\|\widehat{\mathtt{R}}_{n}\rho\|_{s_{0}}^{q,\gamma,\mathtt{m}}\lesssim N_{n}^{s_{0}-s}\gamma^{-1}\left(\|\rho\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}+\varepsilon\gamma^{-2}\|\mathfrak{I}_{0}\|_{s+\sigma_5}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}+\sigma_5}^{q,\gamma,\mathtt{m}}\right)\\
&\qquad\qquad\qquad\quad+\varepsilon\gamma^{-3}N_{0}^{\mu_{2}}N_{n+1}^{-\mu_{2}}\|\rho\|_{s_{0}+\sigma_5}^{q,\gamma,\mathtt{m}}.
\end{align*}
\end{enumerate}
\end{prop}
\section{Construction of quasi-periodic solutions }
We provide, in this last section, a construction of a non-trivial solution to the equation \eqref{operatorF}. This is done in two steps. First, we implement a Nash-Moser iteration, where we find a solution provided that the parameters $(b,\omega)$ belong to a suitable Borel set. The latter is constructed as the intersection of the Cantor sets required to invert the linearized operator in the normal modes at every step of the procedure. Then we rigidify the frequencies in order to get a solution to the original problem, where $\alpha=-\mathtt{J}\omega_{\textnormal{Eq}}(b).$ This gives rise to a final set, described in terms of $b$, whose Lebesgue measure we have to estimate. Actually, we prove that it has asymptotically full measure as the parameter $\varepsilon$ vanishes.
\subsection{Nash-Moser iteration}
Here, we perform the Nash-Moser scheme which allows us to find a solution of
$$\mathcal{F}\big(i,\alpha,b,\omega\big)\triangleq\mathcal{F}\big(i,\alpha,b,\omega,\varepsilon\big)=0,$$
with $\mathcal{F}$ as in \eqref{operatorF}.
This method is classical and has been used in several papers, see for instance \cite{BBMH18,BBM14,BB15}.
The iterative construction of the approximate solutions is summarized in the following proposition. The proof is a slight modification of the one presented in \cite{BM18,HHM21,HR21}.
\begin{prop}\label{Nash-Moser}
\textbf{(Nash-Moser scheme)}\\
Let $(\tau_{1},\tau_{2},q,d,s_{0})$ satisfy \eqref{setting tau1 and tau2}--\eqref{init Sob cond} and $\mathtt{m}\geqslant \mathtt{m}^*,$ where $\mathtt{m}^*$ is defined in Corollary $\ref{coro-equilib-freq}.$ We consider the following parameters
\begin{equation}\label{param NM}
\left\lbrace\begin{array}{rcl}
\overline{a} & = & \tau_{2}+{3}\\
\mu_1 & = & 3q(\tau_{2}+{3})+6\overline{\sigma}+6\\
a_{1} & = & 6q(\tau_{2}+{3})+12\overline{\sigma}+15\\
a_{2} & = & 3q(\tau_{2}+{3})+6\overline{\sigma}+9\\
\mu_{2} & = & 2q(\tau_{2}+{3})+5\overline{\sigma}+7\\
s_{h} & = & s_{0}+4q(\tau_{2}+{3})+9\overline{\sigma}+11\\
\kappa_{1} & = & 2s_{h}-s_{0}
\end{array}\right.
\end{equation}
where the number $\overline{\sigma}=\overline{\sigma}(\tau_{1},\tau_{2},d)$ is the total loss of regularity given by Theorem $\ref{theo appr inv}.$
There exist $C_{\ast}>0$ and $\varepsilon_{0}>0$ such that for any $\varepsilon\in[0,\varepsilon_{0}]$, under the following constraint relating $\gamma$ and $N_{0}$ to $\varepsilon$,
\begin{equation}\label{rigidity gam-N0}
0<a<\tfrac{1}{\mu_{2}+q+2},\qquad \gamma\triangleq\varepsilon^{a}, \qquad N_{0}\triangleq\gamma^{-1}.
\end{equation}
Let $n\in\mathbb{N}.$ We introduce the finite dimensional subspace $E_{n,\mathtt{m}}$ defined by
$$E_{n,\mathtt{m}}\triangleq\Big\{\mathfrak{I}=(\Theta,I,z)\in\mathbb{T}^d\times\mathbb{R}^d\times\mathbf{H}_{\perp,\mathtt{m}}^{\infty}\quad
\textnormal{s.t.}\quad\Theta=\Pi_{N_n}\Theta,\quad I=\Pi_{N_n}I\quad\textnormal{and}\quad z=\Pi_{N_n}z\Big\},$$
where $\Pi_{N_n}$ is the projector defined through \eqref{def projectors PiN}. Then, the following properties hold true.
\begin{itemize}
\item [$(\mathcal{P}1)_{n}$] There exists a $q$-times differentiable map
$$\mathtt{W}_{n}:\begin{array}[t]{rcl}
\mathcal{O} & \rightarrow & E_{n-1,\mathtt{m}}\times\mathbb{R}^{d}\times\mathbb{R}^{d+1}\\
(b,\omega) & \mapsto & \big(\mathfrak{I}_{n},\alpha_{n}-\mathtt{J}\omega,0\big)
\end{array}$$
satisfying $\mathtt{W}_{0}=0$ and for $n\in\mathbb{N}^*,$
\begin{equation}\label{e-ttWn}
\|\mathtt{W}_{n}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}.
\end{equation}
We set
\begin{equation}\label{ttU0}
\mathtt{U}_0\triangleq\Big((\varphi,0,0),\mathtt{J}\omega,(b,\omega)\Big)
\end{equation}
and for $n\in\mathbb{N}^*,$
\begin{equation}\label{ttUn}
\mathtt{U}_{n}\triangleq\mathtt{U}_{0}+\mathtt{W}_{n}\qquad \textnormal{and}\qquad \mathtt{H}_{n} \triangleq\mathtt{U}_{n}-\mathtt{U}_{n-1}.
\end{equation}
Then
\begin{align}
\forall s\in[s_{0},S],\quad\|\mathtt{H}_{1}\|_{s}^{q,\gamma,\mathtt{m}}&\leqslant \tfrac{1}{2}C_{\ast}\varepsilon\gamma^{-1}N_{0}^{q\overline{a}},\label{ttH1s}\\
\forall\, 2\leqslant m\leqslant n,\quad\|\mathtt{H}_{m}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{m-1}^{-a_{2}},\label{ttHks0sig}\\
\forall n\geqslant 2,\quad\|\mathtt{H}_{n}\|_{\overline{s}_h+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{n-1}^{-a_{2}}.\label{ttHn shbsig4}
\end{align}
\item [$(\mathcal{P}2)_{n}$] Set
\begin{equation}\label{in gamn}
i_{n}\triangleq(\varphi,0,0)+\mathfrak{I}_{n},\qquad \gamma_{n}\triangleq\gamma(1+2^{-n})\in[\gamma,2\gamma].
\end{equation}
The torus $i_n$ is reversible and $\mathtt{m}$-fold, that is
\begin{equation}\label{reversibility in}
\mathfrak{S}i_n(\varphi)=i_n(-\varphi){\qquad\textnormal{and}\qquad \mathfrak{T}_{\mathtt{m}}i_n(\varphi)=i_n(\varphi),}
\end{equation}
with $\mathfrak{S}$ and $\mathfrak{T}_{\mathtt{m}}$ as in \eqref{rev th I z}-\eqref{mfold th I z}. Define also
$$\mathtt{A}_{0}^{\gamma}\triangleq\mathcal{O}\qquad\mbox{and}\qquad \mathtt{A}_{n+1}^{\gamma}\triangleq\mathtt{A}_{n}^{\gamma}\cap\mathtt{G}_{n}(\gamma_{n+1},\tau_{1},\tau_{2},i_{n})$$
where $\mathtt{G}_{n}(\gamma_{n+1},\tau_{1},\tau_{2},i_{n})$ is given through \eqref{ttGn}. Consider the open sets
$$
\forall \mathtt{v}>0,\quad \mathrm{O}_{n}^\mathtt{v}\triangleq\Big\{(b,\omega)\in\mathcal{O}\quad\textnormal{s.t.}\quad {\mathtt{dist}}\big((b,\omega),\mathtt{A}_{n}^{2\gamma}\big)< \mathtt{v} N_{n}^{-\overline{a}}\Big\},\qquad\mathtt{dist}(x,\mathtt{A})\triangleq\inf_{y\in\mathtt{A}}\|x-y\|.$$
Then we have the following estimate
\begin{equation}\label{decay FttUn}
\|\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\triangleq\sum_{\underset{|\alpha|\leqslant q}{\alpha\in\mathbb{N}^{d+1}}}\gamma^{|\alpha|}\sup_{(b,\omega)\in{\mathrm{O}_{n}^{2\gamma}}}\|\partial_{(b,\omega)}^{\alpha}\mathcal{F}(\mathtt{U}_{n})(b,\omega,\cdot)\|_{H^{s_0-|\alpha|}_{\mathtt{m}}}\leqslant C_{\ast}\varepsilon N_{n-1}^{-a_{1}}.
\end{equation}
\item[$(\mathcal{P}3)_{n}$] We have the following growth in high regularity norm
\begin{equation}\label{growth ttWn}
\|\mathtt{W}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{n-1}^{\mu_1}.
\end{equation}
\end{itemize}
\end{prop}
\begin{proof}
We follow closely \cite[Prop. 7.1]{HR21}.
First notice that the initial guess $\mathtt{U}_0$ is associated with a reversible flat torus and satisfies, by virtue of \eqref{operatorF} and Lemma \ref{tame X per}, the following estimate for some large enough constant $C_{\ast}$
\begin{equation}\label{e-F(ttU0)}
\forall s\geqslant 0,\quad \|\mathcal{F}(\mathtt{U}_{0})\|_{s}^{q,\gamma,\mathtt{m}}\leqslant C_{\ast}\varepsilon.
\end{equation}
The properties $(\mathcal{P}1)_{0},$ $(\mathcal{P}2)_{0}$ and $(\mathcal{P}3)_{0}$ follow immediately by setting $\mathtt{W}_0=0,$ since $N_{-1}=1$ and $\mathrm{O}_{0}^{\gamma}=\mathcal{O}.$ Now, let us turn to the induction step. Fix $n\in\mathbb{N}^*$ and suppose that $(\mathcal{P}1)_{\ell},$ $(\mathcal{P}2)_{\ell}$ and $(\mathcal{P}3)_{\ell}$ hold for any $\ell\in\llbracket 0,n\rrbracket.$ The purpose is to verify that these properties also hold at the order $n+1$. We denote by
$$L_{n}\triangleq L_{n}(b,\omega)\triangleq d_{i,\alpha}\mathcal{F}(i_{n}\big(b,\omega),\alpha_{n}(b,\omega),(b,\omega)\big)$$
the linearized operator of $\mathcal{F}$ at the state $(i_n,\alpha_n)$. As we shall see later, the next approximation $\mathtt{U}_{n+1}$ can be obtained through the construction of a reversible and $\mathtt{m}$-fold preserving approximate right inverse for $L_n,$ which is the subject of Theorem \ref{theo appr inv}. To apply this result and get some bounds on $\mathtt{U}_{n+1},$ we first need to establish some intermediate results connected to the smallness condition and to some Cantor set inclusions.
\noindent $\blacktriangleright$ \textbf{Smallness/boundedness properties.} First observe that \eqref{param NM} implies \eqref{p-RR}. Thus, to apply Theorem \ref{theo appr inv}, we need to check the smallness \eqref{sml-RR} and boundedness \eqref{bnd frkIn-final} properties. According to \eqref{rigidity gam-N0}, a small enough choice of $\varepsilon$ leads, for some a priori fixed $\varepsilon_0>0,$ to
\begin{equation}\label{sml NM}
\varepsilon\gamma^{-2-q}N_0^{\mu_{2}}=\varepsilon^{1-a(\mu_2+q+2)}\leqslant\varepsilon_0,
\end{equation}
which is exactly \eqref{sml-RR}. Now, since \eqref{param NM} gives $\kappa_{1}=2s_h-s_0,$ the interpolation inequality in Lemma \ref{lem funct prop}-(v) yields, for $n\geqslant2,$
\begin{equation}\label{interp NM}
\|\mathtt{H}_{n}\|_{s_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\lesssim\left(\|\mathtt{H}_{n}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\right)^{\frac{1}{2}}\left(\|\mathtt{H}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\right)^{\frac{1}{2}}. \end{equation}
The property \eqref{growth ttWn} applied with the indices $n$ and $n-1$ gives
\begin{align*}
\|\mathtt{H}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&=\|\mathtt{U}_{n}-\mathtt{U}_{n-1}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\\
&=\|\mathtt{W}_{n}-\mathtt{W}_{n-1}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\\
&\leqslant\|\mathtt{W}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}+\|\mathtt{W}_{n-1}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\\
&\leqslant 2C_{\ast}\varepsilon\gamma^{-1}N_{n-1}^{\mu_{1}}.
\end{align*}
Inserting the last estimate together with \eqref{ttHks0sig} into \eqref{interp NM} leads to
\begin{equation}\label{est ttHn sh+sigma}
\forall n\geqslant 2,\quad\|\mathtt{H}_{n}\|_{s_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\leqslant CC_{\ast}\varepsilon\gamma^{-1}N_{n-1}^{\frac{1}{2}(\mu_{1}-a_{2})}.
\end{equation}
Observe that \eqref{param NM} implies in particular $a_{2}=\mu_{1}+3\geqslant \mu_{1}+2.$ Hence, by \eqref{rigidity gam-N0}, \eqref{ttH1s} and \eqref{est ttHn sh+sigma}, we infer
\begin{align*}
\|\mathtt{W}_{n}\|_{s_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant\|\mathtt{H}_{1}\|_{s_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}+\sum_{k=2}^{n}\|\mathtt{H}_{k}\|_{s_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\\
&\leqslant \tfrac{1}{2}C_{\ast}\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}+ CN_{0}^{-1}C_{\ast}\varepsilon\gamma^{-1}\\
&\leqslant C_{\ast}\varepsilon^{1-a(1+q\overline{a})}.
\end{align*}
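Let us justify the last line above: by \eqref{rigidity gam-N0}, we have $\gamma^{-1}=N_{0}=\varepsilon^{-a},$ so that
$$\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}=\varepsilon\cdot\varepsilon^{-a}\cdot\varepsilon^{-aq\overline{a}}=\varepsilon^{1-a(1+q\overline{a})}\qquad\textnormal{and}\qquad N_{0}^{-1}\varepsilon\gamma^{-1}=\varepsilon\leqslant\varepsilon^{1-a(1+q\overline{a})},$$
and both terms are absorbed into $C_{\ast}\varepsilon^{1-a(1+q\overline{a})}$ for $\varepsilon$ small enough.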
Remark that \eqref{param NM} and \eqref{rigidity gam-N0} provide $a\leqslant\tfrac{1}{2(1+q\overline{a})}.$ Thus, taking $\varepsilon$ small enough and $\overline{\sigma}\geqslant\sigma_5$ with $\sigma_5$ as in Proposition \ref{prop inv linfty}, we get
\begin{align}\label{bnd frkIn}
\|\mathfrak{I}_{n}\|_{s_{h}+\sigma_5}^{q,\gamma,\mathtt{m}}&\leqslant\|\mathtt{W}_{n}\|_{s_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\nonumber\\
&\leqslant C_{\ast}\varepsilon^{\frac{1}{2}}\nonumber\\
&\leqslant 1,
\end{align}
which corresponds to \eqref{bnd frkIn-1}. Up to increasing the value of $\overline{\sigma},$ we can always assume that $s_0+\overline{\sigma}\geqslant\overline{s}_h+\sigma_{4},$ where $\overline{s}_h$ and $\sigma_{4}$ are given by \eqref{param} and Proposition \ref{prop RR}, respectively. Consequently, \eqref{ttHks0sig} gives \eqref{ttHn shbsig4}.\\
\noindent $\blacktriangleright$ \textbf{Set inclusions.} The properties \eqref{sml NM} and \eqref{bnd frkIn} allow us to apply Theorem \ref{theo appr inv}. Hence, we can reduce the linearized operator $L_{n}$ at the current step. Therefore, the sets $\mathtt{A}_{\ell}^{\gamma}$ for $\ell\leqslant n+1$ and $\gamma\in(0,1)$ are well-defined. Our next purpose is to check some suitable inclusions required later for defining the extensions of our quantities outside the constructed Cantor sets. More precisely, we shall verify the following
\begin{equation*}
\mathtt{A}_{n+1}^{2\gamma}\subset\mathrm{O}_{n+1}^{4\gamma}\subset\left(\mathtt{A}_{n+1}^{\gamma}\cap\mathrm{O}_{n}^{2\gamma}\right).
\end{equation*}
The first inclusion holds trivially by construction, so we are left with proving the second one. Observe that, by construction, $\mathtt{A}_{\ell+1}^{2\gamma}\subset\mathtt{A}_{\ell}^{2\gamma}.$ Then for $(b,\omega)\in \mathrm{O}_{\ell+1}^{4\gamma},$ we have
\begin{align*}
\mathtt{dist}\big((b,\omega),\mathtt{A}_{\ell}^{2\gamma}\big)&\leqslant \mathtt{dist}\big((b,\omega),\mathtt{A}_{\ell+1}^{2\gamma}\big)\\
&<4\gamma N_{\ell+1}^{-\overline{a}}= 4\gamma N_{\ell}^{-\overline{a}}N_{0}^{-\frac{1}{2}\overline{a}}\\
&<2\gamma N_{\ell}^{-\overline{a}}.
\end{align*}
The last estimate is true provided that $2N_0^{-\frac12\overline a}<1,$ which is achieved by taking $\varepsilon$ small enough according to \eqref{rigidity gam-N0}. Thus, we have proved
\begin{equation}\label{O2gm in Ogm}
\forall \ell\in \llbracket 0,n\rrbracket,\quad \mathrm{O}_{\ell+1}^{4\gamma}\subset \mathrm{O}_{\ell}^{2\gamma}.
\end{equation}
Now we prove by induction in $\ell$ the following inclusion
\begin{equation}\label{hyprec O in A}
\forall \ell\in \llbracket 0,n+1\rrbracket,\quad \mathrm{O}_{\ell}^{4\gamma}\subset\mathtt{A}_{\ell}^{\gamma}.
\end{equation}
The case $\ell=0$ is obvious because $\mathrm{O}_{0}^{4\gamma}=\mathcal{O}=\mathtt{A}_{0}^{\gamma}.$ Now suppose that \eqref{hyprec O in A} is true for some $\ell\in \llbracket 0,n\rrbracket$ and let us check the inclusion property \eqref{hyprec O in A} at the next order $\ell+1.$ Putting together \eqref{O2gm in Ogm} and \eqref{hyprec O in A}, we get
$$\mathrm{O}_{\ell+1}^{4\gamma}\subset\mathrm{O}_{\ell}^{2\gamma}\subset\mathrm{O}_{\ell}^{4\gamma}\subset\mathtt{A}_{\ell}^{\gamma}.$$
Hence, it remains to verify that
$$\mathrm{O}_{\ell+1}^{4\gamma}\subset\mathtt{G}_{\ell}\Big(\gamma_{\ell+1},\tau_{1},\tau_{2},i_{\ell}\Big).$$
Let $(b,\omega)\in\mathrm{O}_{\ell+1}^{4\gamma},$ then by construction, one can find $(b',\omega')\in\mathtt{A}_{\ell+1}^{2\gamma}$ with
\begin{equation}\label{dist b b'}
\mathtt{dist}\left((b,\omega),(b',\omega')\right)<4\gamma N_{\ell+1}^{-\overline{a}}.
\end{equation}
Let us start by proving that $(b,\omega)\in\mathcal{O}_{\infty,\ell}^{\gamma_{\ell+1},\tau_{1}}(i_{\ell}).$ For all $k\in\{1,2\}$ and $(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\setminus\{(0,0)\}$ with $|l|\leqslant N_{\ell},$ we have, by the triangle and Cauchy-Schwarz inequalities together with \eqref{dist b b'} and the fact that $(b',\omega')\in\mathcal{O}_{\infty,\ell}^{2\gamma_{\ell+1},\tau_{1}}(i_{\ell}),$
\begin{align*}
\left|\omega\cdot l+j{c_{k}(b,\omega,i_{\ell})}\right|&\geqslant\left|\omega'\cdot l+j{c_{k}(b',\omega',i_{\ell})}\right|-|\omega-\omega'||l|-|j|\left|c_{k}(b,\omega,i_{\ell})-c_{k}(b',\omega',i_{\ell})\right|\\
&>\tfrac{4\gamma_{\ell+1}^{\upsilon}2^{\upsilon}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}-4\gamma N_{\ell+1}^{1-\overline{a}}-\langle j\rangle\left|c_{k}(b,\omega,i_{\ell})-c_{k}(b',\omega',i_{\ell})\right|.
\end{align*}
Now the Mean Value Theorem and the definition of $\mathrm{O}_{\ell+1}^{4\gamma}$ imply
$$\left|c_{k}(b,\omega,i_{\ell})-c_{k}(b',\omega',i_{\ell})\right|\leqslant CN_{\ell+1}^{-\overline{a}}\|c_{k}(i_{\ell})\|^{q,\gamma}.$$
From \eqref{sml-r0}, we deduce
\begin{align*}
\|c_{k}(i_{\ell})\|^{q,\gamma}&\leqslant\|c_{k}(i_{\ell})-\mathtt{v}_{k}\|^{q,\gamma}+\|\mathtt{v}_{k}\|^{q,\gamma}\\
&\leqslant C.
\end{align*}
Combining the last two estimates gives
$$\left|c_{k}(b,\omega,i_{\ell})-c_{k}(b',\omega',i_{\ell})\right|\leqslant C\gamma\gamma^{-1}N_{\ell+1}^{-\overline{a}}\leqslant C\gamma N_{\ell+1}^{1-\overline{a}}.$$
Consequently, using the facts that $\gamma_{\ell+1}\geqslant\gamma$ and $\upsilon\in(0,1),$ we get
\begin{align*}
\left|\omega\cdot l+jc_{k}(b,\omega,i_{\ell})\right|&>\tfrac{4\gamma_{\ell+1}^{\upsilon}2^{\upsilon}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}-C\gamma\langle j\rangle N_{\ell+1}^{1-\overline{a}}\\
&\geqslant\tfrac{4\gamma_{\ell+1}^{\upsilon}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}\left(2^{\upsilon}-CN_{\ell+1}^{\tau_{1}+1-\overline{a}}\right).
\end{align*}
Our choice of parameters \eqref{param NM} and \eqref{setting tau1 and tau2} implies in particular
\begin{equation}\label{cond-abarre-1}
\overline{a}=\tau_{2}+{3}\geqslant \tau_{1}+2.
\end{equation}
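Indeed, since \eqref{setting tau1 and tau2} ensures in particular that $\tau_{2}\geqslant\tau_{1},$ we have
$$\overline{a}-(\tau_{1}+2)=\tau_{2}+1-\tau_{1}\geqslant 1>0.$$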
Therefore, taking $N_{0}$ sufficiently large, we obtain
$$2^{\upsilon}-CN_{\ell+1}^{\tau_{1}+1-\overline{a}}\geqslant 2^{\upsilon}-CN_{0}^{-1}>1,$$
which implies in turn
$$\left|\omega\cdot l+jc_{k}(b,\omega,i_{\ell})\right|>\tfrac{4\gamma_{\ell+1}^{\upsilon}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}\cdot
$$
This proves that $(b,\omega)\in\mathcal{O}_{\infty,\ell}^{\gamma_{\ell+1},\tau_{1}}(i_{\ell}).$ Now, let us check that $(b,\omega)\in\mathscr{O}_{\infty,\ell}^{\gamma_{\ell+1},\tau_{1},\tau_{2}}(i_{\ell}).$ For all $k\in\{1,2\}$ and $(l,j,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2$ with $|l|\leqslant N_{\ell},$ using the triangle and Cauchy-Schwarz inequalities together with \eqref{dist b b'} and the fact that $(b',\omega')\in\mathscr{O}_{\infty,\ell}^{2\gamma_{\ell+1},\tau_{1},\tau_{2}}(i_{\ell}),$ we get
\begin{align*}
\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{\ell})\right|&\geqslant\left|\omega'\cdot l+\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})-\mu_{j_{0},k}^{(\infty)}(b',\omega',i_{\ell})\right|-|\omega-\omega'||l|\\
&-\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{\ell})+\mu_{j_{0},k}^{(\infty)}(b',\omega',i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|\\
&>\tfrac{4\gamma_{\ell+1}\langle j-j_{0}\rangle}{\langle l\rangle^{\tau_{2}}}-4\gamma N_{\ell+1}^{1-\overline{a}}\\
&-\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{\ell})+\mu_{j_{0},k}^{(\infty)}(b',\omega',i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|.
\end{align*}
We remind from \eqref{def mu lim} that the perturbed eigenvalues admit the following structure
\begin{equation}\label{dec mjkfty}
\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})=\mu_{j,k}^{(0)}(b,\omega,i_{\ell})+r_{j,k}^{(\infty)}(b,\omega,i_{\ell}).
\end{equation}
Hence,
\begin{align*}
&\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{\ell})+\mu_{j_{0},k}^{(\infty)}(b',\omega',i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|\\
&\leqslant\left|\mu_{j,k}^{(0)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(0)}(b,\omega,i_{\ell})+\mu_{j_{0},k}^{(0)}(b',\omega',i_{\ell})-\mu_{j,k}^{(0)}(b',\omega',i_{\ell})\right|\\
&\quad+\left|r_{j,k}^{(\infty)}(b,\omega,i_{\ell})-r_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|+\left|r_{j_{0},k}^{(\infty)}(b,\omega,i_{\ell})-r_{j_{0},k}^{(\infty)}(b',\omega',i_{\ell})\right|.
\end{align*}
The Mean Value Theorem, \eqref{dist b b'} and the definition of $\mathrm{O}_{\ell+1}^{4\gamma}$ allow us to write
\begin{equation}\label{im00}
\left|\mu_{j,k}^{(0)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(0)}(b,\omega,i_{\ell})+\mu_{j_{0},k}^{(0)}(b',\omega',i_{\ell})-\mu_{j,k}^{(0)}(b',\omega',i_{\ell})\right|\leqslant\gamma CN_{\ell+1}^{1-\overline{a}}\langle j-j_{0}\rangle.
\end{equation}
Similarly, using in particular \eqref{e-rjfty}, \eqref{sml NM} and the definition of $\mathrm{O}_{\ell+1}^{4\gamma}$, we get
\begin{equation}\label{ir00}
\left|r_{j,k}^{(\infty)}(b,\omega,i_{\ell})-r_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|\leqslant C\gamma N_{\ell+1}^{-\overline{a}}\varepsilon \gamma^{-2}\leqslant \gamma CN_{\ell+1}^{1-\overline{a}}\langle j-j_{0}\rangle.
\end{equation}
Gathering the previous inequalities and using the facts that $|l|\leqslant N_{\ell}$ and $\gamma_{\ell+1}\geqslant\gamma,$ we deduce
\begin{align*}
\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{\ell})\right|&\geqslant\tfrac{\gamma_{\ell+1}\langle j-j_{0}\rangle}{\langle l\rangle^{\tau_{2}}}\left(4-CN_{\ell+1}^{\tau_{2}+1-\overline{a}}\right).
\end{align*}
By the choice \eqref{cond-abarre-1}, if $N_{0}$ is large enough, then we obtain
$$CN_{\ell+1}^{\tau_{2}+1-\overline{a}}\leqslant CN_0^{-1}<1,$$
which implies in turn
$$\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},k}^{(\infty)}(b,\omega,i_{\ell})\right|>\tfrac{2\gamma_{\ell+1}\langle j-j_{0}\rangle}{\langle l\rangle^{\tau_{2}}}\cdot$$
For all $(l,j,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})$ with $\langle l,j,j_0\rangle\leqslant N_{\ell},$ using the triangle and Cauchy-Schwarz inequalities together with \eqref{dist b b'} and the fact that $(b',\omega')\in\mathscr{O}_{\infty,\ell}^{2\gamma_{\ell+1},\tau_{1},\tau_{2}}(i_{\ell}),$ we get
\begin{align*}
\left|\omega\cdot l+\mu_{j,1}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},2}^{(\infty)}(b,\omega,i_{\ell})\right|&\geqslant\left|\omega'\cdot l+\mu_{j,1}^{(\infty)}(b',\omega',i_{\ell})-\mu_{j_{0},2}^{(\infty)}(b',\omega',i_{\ell})\right|-|\omega-\omega'||l|\\
&-\left|\mu_{j,1}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},2}^{(\infty)}(b,\omega,i_{\ell})+\mu_{j_{0},2}^{(\infty)}(b',\omega',i_{\ell})-\mu_{j,1}^{(\infty)}(b',\omega',i_{\ell})\right|\\
&>\tfrac{4\gamma_{\ell+1}}{\langle l,j,j_{0}\rangle^{\tau_{2}}}-4\gamma N_{\ell+1}^{1-\overline{a}}\\
&-\left|\mu_{j,1}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},2}^{(\infty)}(b,\omega,i_{\ell})+\mu_{j_{0},2}^{(\infty)}(b',\omega',i_{\ell})-\mu_{j,1}^{(\infty)}(b',\omega',i_{\ell})\right|.
\end{align*}
Similarly to \eqref{im00} and \eqref{ir00}, we get
\begin{align*}
\left|\mu_{j,1}^{(0)}(b,\omega,i_{\ell})-\mu_{j_{0},2}^{(0)}(b,\omega,i_{\ell})+\mu_{j_{0},2}^{(0)}(b',\omega',i_{\ell})-\mu_{j,1}^{(0)}(b',\omega',i_{\ell})\right|\leqslant\gamma CN_{\ell+1}^{1-\overline{a}}\langle j,j_{0}\rangle\leqslant\gamma CN_{\ell+1}^{2-\overline{a}},\\
\left|r_{j,1}^{(\infty)}(b,\omega,i_{\ell})-r_{j,1}^{(\infty)}(b',\omega',i_{\ell})\right|+\left|r_{j_0,2}^{(\infty)}(b,\omega,i_{\ell})-r_{j_0,2}^{(\infty)}(b',\omega',i_{\ell})\right|\leqslant \gamma CN_{\ell+1}^{1-\overline{a}}\langle j,j_{0}\rangle\leqslant\gamma CN_{\ell+1}^{2-\overline{a}}.
\end{align*}
Therefore,
$$\left|\omega\cdot l+\mu_{j,1}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},2}^{(\infty)}(b,\omega,i_{\ell})\right|\geqslant\tfrac{\gamma_{\ell+1}}{\langle l,j,j_0\rangle^{\tau_{2}}}\left(4-CN_{\ell+1}^{\tau_{2}+2-\overline{a}}\right).$$
By the choice \eqref{cond-abarre-1}, if $N_{0}$ is large enough, then we obtain
$${CN_{\ell+1}^{\tau_{2}+2-\overline{a}}\leqslant CN_0^{-1}<1},$$
and then
$$\left|\omega\cdot l+\mu_{j,1}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j_{0},2}^{(\infty)}(b,\omega,i_{\ell})\right|>\tfrac{2\gamma_{\ell+1}}{\langle l,j,j_0\rangle^{\tau_{2}}}\cdot$$
This proves that $(b,\omega)\in\mathscr{O}_{\infty,\ell}^{\gamma_{\ell+1},\tau_{1},\tau_{2}}(i_{\ell}).$ It remains to check that $(b,\omega)\in\Lambda_{\infty,\ell}^{\gamma_{\ell+1},\tau_1}(i_{\ell}).$ For all $k\in\{1,2\}$ and $(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})$ with $|l|\leqslant N_{\ell},$ we have, by the triangle and Cauchy-Schwarz inequalities together with \eqref{dist b b'} and the fact that $(b',\omega')\in\Lambda_{\infty,\ell}^{2\gamma_{\ell+1},\tau_{1}}(i_{\ell}),$
\begin{align*}
\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})\right|&\geqslant\left|\omega'\cdot l+\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|-|\omega-\omega'||l|-\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|\\
&>\tfrac{2\gamma_{\ell+1}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}-4\gamma N_{\ell}N_{\ell+1}^{-\overline{a}}-\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|\\
&>\tfrac{2\gamma_{\ell+1}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}-4\gamma N_{\ell+1}^{1-\overline{a}}-\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|.
\end{align*}
The Mean Value Theorem and the definition of $\mathrm{O}_{\ell+1}^{4\gamma}$ give
\begin{align*}
\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|&\leqslant|(b,\omega)-(b',\omega')|\gamma^{-1}\|\mu_{j,k}^{(\infty)}(i_{\ell})\|^{q,\gamma}\\
&\leqslant 4N_{\ell+1}^{-\overline{a}}\|\mu_{j,k}^{(\infty)}(i_{\ell})\|^{q,\gamma}.
\end{align*}
Now, by the triangle inequality,
$$\forall j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k},\quad \|\mu_{j,k}^{(\infty)}(i_{\ell})\|^{q,\gamma}\leqslant\|\mu_{j,k}^{(\infty)}(i_{\ell})-\Omega_{j,k}\|^{q,\gamma}+\|\Omega_{j,k}\|^{q,\gamma}.$$
From \eqref{lim omega jk} one has for all $|j|\geqslant\mathtt{m}^{*}$,
$$\|\Omega_{j,k}\|^{q,\gamma}\leqslant C|j|.$$
Besides, \eqref{def mu lim}, \eqref{mu0 r0}, \eqref{e-ed-r0} and \eqref{e-rjfty} imply
$$\forall j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k},\quad \|\mu_{j,k}^{(\infty)}(i_{\ell})-\Omega_{j,k}\|^{q,\gamma}\leqslant C|j|.$$
Putting together the preceding three estimates gives
$$\forall j\in\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k},\quad \|\mu_{j,k}^{(\infty)}(i_{\ell})\|^{q,\gamma}\leqslant C|j|.$$
As a consequence, we have
$$\left|\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})-\mu_{j,k}^{(\infty)}(b',\omega',i_{\ell})\right|\leqslant C\langle j\rangle N_{\ell+1}^{-\overline{a}}\leqslant C\gamma\langle j\rangle N_{\ell+1}^{1-\overline{a}},$$
where the last inequality uses $\gamma N_{\ell+1}\geqslant\gamma N_{0}=1.$
Using that $|l|\leqslant N_{\ell}\leqslant N_{\ell+1}$ and $\gamma_{\ell+1}\geqslant \gamma$, we get
\begin{align*}
\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})\right|&\geqslant\tfrac{2\gamma_{\ell+1}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}-C\gamma\langle j\rangle N_{\ell+1}^{1-\overline{a}}\\
&\geqslant\tfrac{\gamma_{\ell+1}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}\left(2-CN_{\ell+1}^{\tau_{1}+1-\overline{a}}\right).
\end{align*}
Now, we choose $N_{0}$ sufficiently large so that
$$CN_{\ell+1}^{\tau_{1}+1-\overline{a}}\leqslant CN_{0}^{-1}<1,$$
and then
$$\left|\omega\cdot l+\mu_{j,k}^{(\infty)}(b,\omega,i_{\ell})\right|>\tfrac{\gamma_{\ell+1}\langle j\rangle}{\langle l\rangle^{\tau_{1}}}\cdot
$$
This shows that $(b,\omega)\in\Lambda_{\infty,\ell}^{\gamma_{\ell+1},\tau_{1}}(i_{\ell})$ and finally that $(b,\omega)\in\mathtt{G}_{\ell}\big(\gamma_{\ell+1},\tau_{1},\tau_{2},i_{\ell}\big).$ Therefore $(b,\omega)\in\mathtt{A}_{\ell+1}^{\gamma}.$ This achieves the induction proof of \eqref{hyprec O in A}.\\
\noindent $\blacktriangleright$ \textbf{Construction of the next approximation.} Our next task is to construct the next approximate solution, denoted $\mathtt{U}_{n+1}.$ Observe that according to Theorem \ref{theo appr inv}, the properties \eqref{sml NM} and \eqref{bnd frkIn} allow us to construct a reversible approximate right inverse $\mathrm{T}_n\triangleq\mathrm{T}_{n}(b,\omega)$ of the linearized operator $L_{n}.$ Recall that the operator $\mathrm{T}_n$ is well-defined on the whole set of parameters $\mathcal{O}$ and satisfies, by virtue of \eqref{tame T0}, the following tame estimate
\begin{equation}\label{eari-NM}
\forall s\in[s_{0},S],\quad\|\mathrm{T}_{n}\rho\|_{s}^{q,\gamma,\mathtt{m}}\lesssim\gamma^{-1}\left(\|\rho\|_{s+\overline{\sigma}}^{q,\gamma,\mathtt{m}}+\|\mathfrak{I}_{n}\|_{s+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\rho\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\right).
\end{equation}
In addition, it is an approximate right inverse of $L_n$ when restricted to $\mathtt{G}_{n}(\gamma_{n+1},\tau_{1},\tau_{2},i_{n}).$ More precisely, according to \eqref{splitting of approximate inverse}, we have in $\mathtt{G}_{n}(\gamma_{n+1},\tau_{1},\tau_{2},i_{n})$
\begin{equation}\label{approx Ln}
L_n{\rm T}_n-\textnormal{Id} = \mathcal{E}^{(n)}_1+\mathcal{E}^{(n)}_2+\mathcal{E}^{(n)}_3,
\end{equation}
where the error terms in the right-hand side satisfy the estimates \eqref{calE1}, \eqref{calE2} and \eqref{calE3}. The next approximation is defined as follows,
$$\widetilde{\mathtt{U}}_{n+1}\triangleq\mathtt{U}_{n}+\widetilde{\mathtt{H}}_{n+1},\qquad \widetilde{\mathtt{H}}_{n+1}\triangleq(\widehat{\mathfrak{I}}_{n+1},\widehat{\alpha}_{n+1},0)\triangleq-\mathbf{\Pi}_{N_n}\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\in E_{n,\mathtt{m}}\times\mathbb{R}^{d}\times\mathbb{R}^{d+1},
$$
where the projector $\mathbf{\Pi}_{N_n}$ and its orthogonal are defined by
\begin{equation}\label{proj-NM}
\mathbf{\Pi}_{N_n}(\mathfrak{I},\alpha,0)=(\Pi_{N_n}\mathfrak{I},\alpha,0)\qquad \textnormal{and }\qquad\mathbf{\Pi}_{N_n}^{\perp}(\mathfrak{I},\alpha,0)=(\Pi_{N_n}^{\perp}\mathfrak{I},0,0).
\end{equation}
Then, applying Taylor formula yields
\begin{align}\label{Fnext-NM}
\nonumber \mathcal{F}(\widetilde{\mathtt{U}}_{n+1})& = \mathcal{F}(\mathtt{U}_{n})-L_{n}\mathbf{\Pi}_{N_n}\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})+Q_{n}\\
\nonumber& = \mathcal{F}(\mathtt{U}_{n})-L_{n}\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})+L_{n}\mathbf{\Pi}_{N_n}^{\perp}\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})+Q_{n}\\
\nonumber& = \mathcal{F}(\mathtt{U}_{n})-\Pi_{N_n}L_{n}\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})+(L_{n}\mathbf{\Pi}_{N_n}^{\perp}-\Pi_{N_n}^{\perp}L_{n})\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})+Q_{n}\\
& = \Pi_{N_n}^{\perp}\mathcal{F}(\mathtt{U}_{n})-\Pi_{N_n}(L_{n}\mathrm{T}_{n}-\textnormal{Id})\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})+(L_{n}\mathbf{\Pi}_{N_n}^{\perp}-\Pi_{N_n}^{\perp}L_{n})\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})+Q_{n},
\end{align}
where $Q_n$ denotes the quadratic part given by
\begin{align}\label{quad-NM}
Q_{n}=\mathcal{F}(\mathtt{U}_{n}+\widetilde{\mathtt{H}}_{n+1})-\mathcal{F}(\mathtt{U}_{n})-L_{n}\widetilde{\mathtt{H}}_{n+1}.
\end{align}
Now, we shall prove \eqref{decay FttUn} at the order $n+1$ for a suitable extension $\mathtt{U}_{n+1}$ of $\widetilde{\mathtt{U}}_{n+1}|_{\mathrm{O}_{n+1}^{2\gamma}}.$ This is done in two steps. The first one is to prove that
\begin{align}\label{decay ttUn inter}
\|\mathcal{F}(\widetilde{\mathtt{U}}_{n+1})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\leqslant C_{\ast}\varepsilon N_{n}^{-a_{1}}.
\end{align}
The second step is to construct the classical extension of $\mathtt{U}_{n+1}$ which fulfills the desired estimate \eqref{decay FttUn}.\\
\ding{226} \textit{Proof of \eqref{decay ttUn inter}.} We estimate each one of the four terms in the right-hand side of \eqref{Fnext-NM}. Let us begin with the first one. Applying Lemma \ref{lem funct prop}-(i) and using the inclusion \eqref{O2gm in Ogm}, we obtain
\begin{equation}\label{e-first term NM}
\|\Pi_{N_n}^{\perp}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\leqslant N_{n}^{s_{0}-\kappa_{1}}\|\mathcal{F}(\mathtt{U}_{n})\|_{\kappa_{1}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}.
\end{equation}
Now, Taylor formula together with \eqref{operatorF}, Lemma \ref{tame X per}, \eqref{e-F(ttU0)} and \eqref{ttUn} imply
\begin{align}\label{e-fttUn-Wn}
\nonumber \forall s\geqslant s_{0},\quad\|\mathcal{F}(\mathtt{U}_{n})\|_{s}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}&\leqslant\|\mathcal{F}(\mathtt{U}_{0})\|_{s}^{q,\gamma,\mathtt{m}}+\|\mathcal{F}(\mathtt{U}_{n})-\mathcal{F}(\mathtt{U}_{0})\|_{s}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\\
&\lesssim\varepsilon+\| \mathtt{W}_{n}\|_{s+\overline{\sigma}}^{q,\gamma,\mathtt{m}}.
\end{align}
Besides, \eqref{growth ttWn}, \eqref{def geo Nn} and \eqref{rigidity gam-N0} together give
\begin{align}\label{eps+ttWn}
\nonumber\varepsilon+\|\mathtt{W}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant\varepsilon\left(1+C_{\ast}\gamma^{-1}N_{n-1}^{\mu_{1}}\right)\\
&\leqslant 2C_{\ast}\varepsilon N_{n}^{\frac{2}{3}\mu_{1}+1}.
\end{align}
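The last line is a sketch of the underlying exponent bookkeeping, assuming, as in \eqref{def geo Nn}, that $N_{n}=N_{n-1}^{\frac{3}{2}}$ and, as encoded in \eqref{rigidity gam-N0}, that $\gamma^{-1}\leqslant N_{0}\leqslant N_{n}$:
$$C_{\ast}\varepsilon\gamma^{-1}N_{n-1}^{\mu_{1}}\leqslant C_{\ast}\varepsilon N_{n}N_{n}^{\frac{2}{3}\mu_{1}}=C_{\ast}\varepsilon N_{n}^{\frac{2}{3}\mu_{1}+1},$$
while the remaining term $\varepsilon$ is absorbed in the same bound since $C_{\ast}\geqslant1.$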
Inserting \eqref{e-fttUn-Wn} and \eqref{eps+ttWn} into \eqref{e-first term NM} yields
\begin{equation}\label{e-1-NM}
\|\Pi_{N_n}^{\perp}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\lesssim C_{\ast}\varepsilon N_{n}^{s_{0}+\frac{2}{3}\mu_{1}+1-\kappa_{1}}.
\end{equation}
Let us move on to the second term. According to \eqref{hyprec O in A}, we have the following inclusions
$$\mathrm{O}_{n+1}^{4\gamma}\subset\mathtt{A}_{n+1}^{\gamma}\subset\mathtt{G}_{n}\Big(\gamma_{n+1},\tau_{1},\tau_{2},i_{n}\Big).$$
Hence, the decomposition \eqref{approx Ln} holds on $\mathrm{O}_{n+1}^{4\gamma}$ and we can write
$$\Pi_{N_n}(L_{n}\mathrm{T}_{n}-\textnormal{Id})\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})=\mathfrak{E}_{1,n}+\mathfrak{E}_{2,n}+\mathfrak{E}_{3,n},$$
with for any $k\in\{1,2,3\},$
$$\mathfrak{E}_{k,n}\triangleq\Pi_{N_n}\mathcal{E}_{k}^{(n)}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n}).$$
Thus, we need to estimate each one of the error terms $\mathfrak{E}_{k,n}$.
We begin with $\mathfrak{E}_{1,n}$ for which we need the following interpolation-type inequality
\begin{align}\label{interp NM2}
\|\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}&\leqslant\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_n)\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\|\Pi_{N_n}^{\perp}\mathcal{F}(\mathtt{U}_n)\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\nonumber\\
&\leqslant N_{n}^{\overline{\sigma}}\|\mathcal{F}(\mathtt{U}_n)\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+N_{n}^{s_0-\kappa_{1}}\|\mathcal{F}(\mathtt{U}_n)\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}.
\end{align}
Now, putting together \eqref{e-fttUn-Wn} and \eqref{eps+ttWn}, we infer
\begin{equation}\label{FttUn high}
\|\mathcal{F}(\mathtt{U}_{n})\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}} \leqslant C_{\ast}\varepsilon N_{n}^{\overline{\sigma}+\frac{2}{3}\mu_{1}+1}.
\end{equation}
Combining \eqref{calE1}, \eqref{interp NM2}, \eqref{e-ttWn}, \eqref{sml NM} and \eqref{FttUn high}, we obtain
\begin{align}\label{Efrk1n}
\|&\mathfrak{E}_{1,n}\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\lesssim\gamma^{-1}\|\mathcal{F}(\mathtt{U}_n)\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_n)\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\left(1+\|\mathfrak{I}_n\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\right)\nonumber\\
&\lesssim \gamma^{-1}N_n^{\overline{\sigma}}\left(N_n^{\overline{\sigma}}\|\mathcal{F}(\mathtt{U}_n)\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+N_{n}^{s_0-\kappa_{1}}\|\mathcal{F}(\mathtt{U}_n)\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\|\mathcal{F}(\mathtt{U}_n)\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\left(1+\|\mathtt{W}_{n}\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\right)\nonumber\\
&\lesssim C_{\ast}\varepsilon\left(N_{n}^{2\overline{\sigma}-\frac{4}{3}a_1}+N_n^{s_0+2\overline{\sigma}+\frac{2}{3}\mu_{1}+1-\frac{2}{3}a_1-\kappa_{1}}\right).
\end{align}
As for $\mathfrak{E}_{2,n}$, we apply \eqref{calE2} with $\mathtt{b}=\kappa_{1}-s_0$ and use \eqref{sml NM}, \eqref{decay FttUn} and \eqref{growth ttWn} in order to find
\begin{align}\label{Efrk2n}
\|\mathfrak{E}_{2,n}\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}&\lesssim\gamma^{-1}N_n^{s_0-\kappa_{1}}\left(\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_n)\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\varepsilon\|\mathfrak{I}_n\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_n)\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\nonumber\\
&\lesssim \gamma^{-1}N_n^{s_0-\kappa_{1}}\left(\|\mathcal{F}(\mathtt{U}_n)\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\varepsilon N_n^{\overline{\sigma}}\|\mathtt{W}_n\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\mathcal{F}(\mathtt{U}_n)\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\nonumber\\
&\lesssim C_{\ast}\varepsilon N_n^{s_0+\overline{\sigma}+\frac{2}{3}\mu_{1}+2-\kappa_{1}}+C_{\ast}\varepsilon N_n^{s_0+\overline{\sigma}+\frac{2}{3}\mu_{1}+2-\frac{2}{3}a_{1}-\kappa_{1}}\nonumber\\
&\lesssim C_{\ast}\varepsilon N_n^{s_0+\overline{\sigma}+\frac{2}{3}\mu_{1}+2-\kappa_{1}}.
\end{align}
Similarly, putting together \eqref{calE3}, \eqref{def geo Nn}, \eqref{rigidity gam-N0} and \eqref{sml NM}, we infer
\begin{align}\label{Efrk3n}
\|\mathfrak{E}_{3,n}\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}&\lesssim N_n^{s_0-\kappa_{1}}\gamma^{-2}\left(\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_n)\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\varepsilon\gamma^{-2}\|\mathfrak{I}_n\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_n)\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\nonumber\\
&\quad+\varepsilon\gamma^{-4}N_{0}^{\mu_{2}}N_n^{-\mu_{2}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_n)\|_{s_0+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\nonumber\\
&\lesssim C_{\ast}\varepsilon\left(N_n^{s_0+\overline{\sigma}+\frac{2}{3}\mu_{1}+2-\kappa_{1}}+N_n^{\overline{\sigma}+1-\mu_{2}-\frac{2}{3}a_1}\right).
\end{align}
Gathering \eqref{Efrk1n}, \eqref{Efrk2n} and \eqref{Efrk3n}, we deduce
\begin{equation}\label{second term NMn}
\|\Pi_{N_n}(L_{n}\mathrm{T}_{n}-\textnormal{Id})\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_0}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\leqslant CC_{\ast}\varepsilon\left(N_{n}^{2\overline{\sigma}-\frac{4}{3}a_1}+N_n^{s_0+2\overline{\sigma}+\frac{2}{3}\mu_{1}+1-\kappa_{1}}+N_n^{\overline{\sigma}+1-\mu_{2}-\frac{2}{3}a_1}\right).
\end{equation}
For $n=0$, we deduce from \eqref{e-F(ttU0)}, \eqref{sml NM} and slight modifications of the preceding computations that
\begin{align}\label{second term NM0}
\|\Pi_{N_0}(L_{0}\mathrm{T}_{0}-\textnormal{Id})\Pi_{N_0}\mathcal{F}(\mathtt{U}_{0})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{1}^{4\gamma}}&\leqslant\|\mathfrak{E}_{1,0}\|_{s_0}^{q,\gamma,\mathtt{m}}+\|\mathfrak{E}_{2,0}\|_{s_0}^{q,\gamma,\mathtt{m}}+\|\mathfrak{E}_{3,0}\|_{s_0}^{q,\gamma,\mathtt{m}}\nonumber\\
& \lesssim \varepsilon^{2}\gamma^{-1}+\varepsilon\gamma^{-1}+\big(\varepsilon \gamma^{-2}N_{0}^{s_{0}-\kappa_{1}}+\varepsilon^{2}\gamma^{-4}\big)\nonumber\\
&\lesssim\varepsilon\gamma^{-2}.
\end{align}
Now, we turn to the estimate corresponding to the third term in \eqref{Fnext-NM}. In view of \eqref{Linearized-op-F-DC}, we have for $\mathtt{H}=(\widehat{\mathfrak{I}},\widehat{\alpha})$ with $\widehat{\mathfrak{I}}=(\widehat{\Theta},\widehat{I},\widehat{z}),$
\begin{equation}\label{linH00}
L_{n}\mathtt{H}=\omega\cdot\partial_{\varphi}\widehat{\mathfrak{I}}-(0,0,\mathcal{J}\mathbf{L}_0(b)\widehat{z})-\varepsilon d_{i}X_{\mathcal{P}_{\varepsilon}}(i_{n})\widehat{\mathfrak{I}}-(\mathtt{J}\widehat{\alpha},0,0).
\end{equation}
Now, \eqref{proj-NM} and the fact that $\omega\cdot\partial_{\varphi}$ and $\mathcal{J}\mathbf{L}_0(b)$ are diagonal yield
$$\left(L_{n}\mathbf{\Pi}_{N_n}^{\perp}-\Pi_{N_n}^{\perp}L_{n}\right)\mathtt{H}=-\varepsilon[d_{i}X_{\mathcal{P}_{\varepsilon}}(i_{n}),\Pi_{N_n}^{\perp}]\widehat{\mathfrak{I}}.$$
Applying Lemma \ref{tame X per}-{(ii)} together with Lemma \ref{lem funct prop}-(i) and \eqref{O2gm in Ogm}, we infer
$$
\left\|\left(L_{n}\mathbf{\Pi}_{N_n}^{\perp}-\Pi_{N_n}^{\perp}L_{n}\right)\mathtt{H}\right\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\lesssim\varepsilon N_{n}^{s_{0}-\kappa_{1}}\left(\|\widehat{\mathfrak{I}}\|_{\kappa_{1}+1}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\|\mathfrak{I}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\widehat{\mathfrak{I}}\|_{s_{0}+1}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right).
$$
Hence,
\begin{align*}
\left\|\left(L_{n}\mathbf{\Pi}_{N_n}^{\perp}-\Pi_{N_n}^{\perp}L_{n}\right)\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\right\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}&\lesssim\varepsilon N_{n}^{s_{0}-\kappa_{1}}\|\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{\kappa_{1}+1}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\\
&\quad +\varepsilon N_{n}^{s_{0}-\kappa_{1}}\|\mathfrak{I}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+1}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}.
\end{align*}
Now, using \eqref{eari-NM}, \eqref{O2gm in Ogm}, Lemma \ref{lem funct prop}-(i), Sobolev embeddings, \eqref{sml NM} and \eqref{rigidity gam-N0}, we get
\begin{align*}
&\left\|\left(L_{n}\mathbf{\Pi}_{N_n}^{\perp}-\Pi_{N_n}^{\perp}L_{n}\right)\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\right\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\\
&\lesssim \varepsilon\gamma^{-1}N_{n}^{s_{0}-\kappa_{1}}\left(\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{\kappa_{1}+\overline{\sigma}+1}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\|\mathfrak{I}_{n}\|_{\kappa_{1}+\overline{\sigma}+1}^{q,\gamma,\mathtt{m}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\\
&\quad+\varepsilon\gamma^{-1}N_{n}^{s_{0}-\kappa_{1}}\|\mathfrak{I}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\left(\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+\overline{\sigma}+1}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\|\mathfrak{I}_{n}\|_{s_{0}+\overline{\sigma}+1}^{q,\gamma,\mathtt{m}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\\
&\lesssim \varepsilon N_{n}^{s_{0}+2-\kappa_{1}}\left(\|\mathcal{F}(\mathtt{U}_{n})\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\| \mathtt{W}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right).
\end{align*}
Besides, from Lemma \ref{lem funct prop}-(ii), \eqref{decay FttUn} and \eqref{def geo Nn}, we obtain
\begin{align*}
\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}&\leqslant N_{n}^{\overline{\sigma}}\|\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\\
&\leqslant C_{\ast}\varepsilon N_{n}^{\overline{\sigma}}N_{n-1}^{-a_{1}}\\
&\leqslant C_{\ast}\varepsilon N_{n}^{\overline{\sigma}-\frac{2}{3}a_{1}}.
\end{align*}
Combining this with \eqref{param NM}, \eqref{FttUn high} and \eqref{growth ttWn}, we deduce that
\begin{equation}\label{final estimate commutator}
\|(L_{n}\mathbf{\Pi}_{N_n}^{\perp}-\Pi_{N_n}^{\perp}L_{n})\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\leqslant CC_{\ast}\varepsilon N_{n}^{s_{0}+\overline{\sigma}+\frac{2}{3}\mu_{1}+3-\kappa_{1}}.
\end{equation}
We are left with the quadratic term in \eqref{Fnext-NM}. Another application of the Taylor formula in \eqref{quad-NM} leads to
$$Q_{n}=\int_{0}^{1}(1-t)d_{i,\alpha}^{2}\mathcal{F}(\mathtt{U}_{n}+t\widetilde{\mathtt{H}}_{n+1})[\widetilde{\mathtt{H}}_{n+1},\widetilde{\mathtt{H}}_{n+1}]dt.$$
Now, \eqref{linH00} and Lemma \ref{tame X per}-{(iii)} give
\begin{align}\label{quad-e-11}
\| Q_{n}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\lesssim\varepsilon\left(1+\|\mathtt{W}_{n}\|_{s_{0}+2}^{q,\gamma,\mathtt{m}}+\| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}+2}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\right)\left(\| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}+2}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\right)^{2}.
\end{align}
Observe that \eqref{e-fttUn-Wn}, \eqref{rigidity gam-N0} and \eqref{e-ttWn} imply
\begin{equation}\label{sml FttU0-00}
\gamma^{-1}\|\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\leqslant 1.
\end{equation}
Gathering \eqref{O2gm in Ogm}, \eqref{hyprec O in A}, \eqref{eari-NM}, \eqref{e-fttUn-Wn} and \eqref{sml FttU0-00}, we obtain for all $s\in[s_{0},S]$
\begin{align}\label{tttHm+1 and ttWn}
\| \widetilde{\mathtt{H}}_{n+1}\|_{s}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}&= \|\mathbf{\Pi}_{N_n}\mathrm{T}_{n}\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\nonumber\\
& \lesssim \gamma^{-1}\left(\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+\|\mathfrak{I}_{n}\|_{s+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\|\Pi_{N_n}\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\nonumber\\
& \lesssim \gamma^{-1}\left(N_{n}^{\overline{\sigma}}\|\mathcal{F}(\mathtt{U}_{n})\|_{s}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}+N_{n}^{2\overline{\sigma}}\|\mathfrak{I}_{n}\|_{s}^{q,\gamma,\mathtt{m}}\|\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\right)\nonumber\\
& \lesssim \gamma^{-1}N_{n}^{2\overline{\sigma}}\left(\varepsilon+\| \mathtt{W}_{n}\|_{s}^{q,\gamma,\mathtt{m}}\right).
\end{align}
Similarly, using \eqref{eari-NM}, \eqref{O2gm in Ogm}, \eqref{e-ttWn}, \eqref{sml NM} and \eqref{decay FttUn}, we infer
\begin{align}\label{decay tttHm+1 s0}
\nonumber \| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}&\lesssim\gamma^{-1}N_{n}^{\overline{\sigma}}\|\mathcal{F}(\mathtt{U}_{n})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n}^{2\gamma}}\\
&\lesssim C_{\ast}\varepsilon\gamma^{-1} N_{n}^{\overline{\sigma}}N_{n-1}^{-a_{1}}.
\end{align}
For $\varepsilon$ sufficiently small, \eqref{e-ttWn} and \eqref{decay tttHm+1 s0} imply
\begin{align*}
\|\mathtt{W}_{n}\|_{s_{0}+2}^{q,\gamma,\mathtt{m}}+\| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}+2}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}} & \leqslant C_{\ast}\varepsilon\gamma^{-1}N_0^{q\overline{a}}+N_{n}^{2}\| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\\
& \leqslant 1+C\varepsilon\gamma^{-1}N_{n}^{\overline{\sigma}+2}N_{n-1}^{-a_{1}}\\
& \leqslant 1+C\varepsilon\gamma^{-1}N_{n-1}^{3+\frac{3}{2}\overline{\sigma}-a_{1}}.
\end{align*}
But \eqref{param NM} gives in particular $a_{1}\geqslant 3+\tfrac{3}{2}\overline{\sigma}.$ Thus,
\begin{equation}\label{bnd ttWn and tttHn+1}
\|\mathtt{W}_{n}\|_{s_{0}+2}^{q,\gamma,\mathtt{m}}+\| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}+2}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\leqslant 2.
\end{equation}
Therefore, inserting \eqref{bnd ttWn and tttHn+1} and \eqref{decay tttHm+1 s0} into \eqref{quad-e-11} and using \eqref{rigidity gam-N0} and \eqref{sml NM}, we get
\begin{align*}
\| Q_{n}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}} & \lesssim \varepsilon\left(\| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}+2}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\right)^{2}\\
& \leqslant \varepsilon N_{n}^{4}\left(\| \widetilde{\mathtt{H}}_{n+1}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\right)^{2}\\
& \lesssim \varepsilon C_{\ast}N_{n}^{2\overline{\sigma}+4}N_{n-1}^{-2a_1}.
\end{align*}
Using \eqref{def geo Nn}, we finally obtain for $n\in\mathbb{N}^*$,
\begin{equation}\label{fourth term NM}
\|Q_{n}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\leqslant CC_{\ast}\varepsilon N_{n}^{2\overline{\sigma}+4-\frac{4}{3}a_{1}}.
\end{equation}
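The last step is nothing but the exponent conversion coming from \eqref{def geo Nn}: assuming $N_{n-1}=N_{n}^{\frac{2}{3}}$, we may write
$$N_{n}^{2\overline{\sigma}+4}N_{n-1}^{-2a_{1}}=N_{n}^{2\overline{\sigma}+4}N_{n}^{-\frac{4}{3}a_{1}}=N_{n}^{2\overline{\sigma}+4-\frac{4}{3}a_{1}}.$$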
As for $n=0$, we come back to \eqref{tttHm+1 and ttWn} and \eqref{e-F(ttU0)} to get for all $s\in[s_{0},S]$
\begin{align}\label{ttH1}
\|\widetilde{\mathtt{H}}_{1}\|_{s}^{q,\gamma,\mathtt{m},\mathrm{O}_{1}^{4\gamma}}&\lesssim\gamma^{-1}\|\Pi_{0}\mathcal{F}(\mathtt{U}_{0})\|_{s+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\nonumber\\
&\lesssim C_{\ast}\varepsilon\gamma^{-1}.
\end{align}
Finally, the inequality \eqref{fourth term NM} becomes for $n=0$,
\begin{equation}\label{e-quad0-NM}
\|Q_{0}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{1}^{4\gamma}}\lesssim C_{\ast}\varepsilon^{3}\gamma^{-2}.
\end{equation}
Plugging \eqref{e-1-NM}, \eqref{second term NMn}, \eqref{final estimate commutator} and \eqref{fourth term NM} into \eqref{Fnext-NM} gives for $n\in\mathbb{N}^{*}$,
\begin{align*}
\|\mathcal{F}(\widetilde{\mathtt{U}}_{n+1})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}&\leqslant CC_{\ast}\varepsilon\left(N_{n}^{s_{0}+2\overline{\sigma}+\frac{2}{3}\mu_{1}+1-\kappa_{1}}+N_{n}^{\overline{\sigma}+1-\mu_{2}-\frac{2}{3}a_{1}}+N_{n}^{2\overline{\sigma}+4-\frac{4}{3}a_{1}}\right).
\end{align*}
Now, our choice of parameters in \eqref{param NM} implies
$$s_{0}+2\overline{\sigma}+\tfrac{2}{3}\mu_{1}+2+a_{1}\leqslant\kappa_{1},\qquad\overline{\sigma}+\tfrac{1}{3}a_{1}+2\leqslant\mu_{2},\qquad2\overline{\sigma}+5\leqslant\tfrac{1}{3}a_{1}.$$
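For instance, the first of these constraints may be rewritten as $s_{0}+2\overline{\sigma}+\tfrac{2}{3}\mu_{1}+1-\kappa_{1}\leqslant-a_{1}-1$, so that the elementary mechanism at work here reads
$$CN_{n}^{s_{0}+2\overline{\sigma}+\frac{2}{3}\mu_{1}+1-\kappa_{1}}\leqslant CN_{n}^{-1}N_{n}^{-a_{1}}\leqslant\tfrac{C}{N_{0}}N_{n}^{-a_{1}}\leqslant\tfrac{1}{3}N_{n}^{-a_{1}}\qquad\textnormal{provided that}\qquad N_{0}\geqslant3C,$$
and the two remaining terms are handled in the same way.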
Consequently, for $N_{0}$ large enough, that is, $\varepsilon$ small enough, we can obtain for any $n\in\mathbb{N},$
\begin{equation}\label{suitable constraints NM}
\max\Big(CN_{n}^{s_{0}+2\overline{\sigma}+\frac{2}{3}\mu_{1}+1-\kappa_{1}}\,,\, CN_{n}^{\overline{\sigma}+1-\mu_{2}-\frac{2}{3}a_{1}}\, , \, CN_{n}^{2\overline{\sigma}+4-\frac{4}{3}a_{1}}\Big) \leqslant \tfrac{1}{3}N_{n}^{-a_{1}},
\end{equation}
which implies in turn \eqref{decay ttUn inter} for $n\in\mathbb{N}^*.$ As for the case $n=0$, we insert \eqref{e-1-NM}, \eqref{second term NM0}, \eqref{final estimate commutator} and \eqref{e-quad0-NM} into \eqref{Fnext-NM} to get
$$\|\mathcal{F}(\widetilde{\mathtt{U}}_{1})\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{1}^{4\gamma}}\leqslant CC_{\ast}\varepsilon\left(N_{0}^{s_{0}+2\overline{\sigma}+\frac{2}{3}\mu_{1}+1-\kappa_{1}}+\varepsilon\gamma^{-2}+\varepsilon^{2}\gamma^{-2}\right).$$
Hence, using \eqref{suitable constraints NM} and the fact that \eqref{rigidity gam-N0} and \eqref{param NM} imply $0<a<\tfrac{1}{2+a_{1}}$, and taking $\varepsilon$ small enough, we infer
$$C\left(\varepsilon\gamma^{-2}+\varepsilon^{2}\gamma^{-2}\right)\leqslant \tfrac{2}{3}N_{0}^{-a_{1}}.$$
As a consequence, \eqref{decay ttUn inter} also holds for $n=0.$\\
\ding{226} \textit{Construction of the extension.}
\noindent The next goal is to construct an extension of $\widetilde{\mathtt{H}}_{n+1}$ to the whole set of parameters $\mathcal{O}$ still satisfying good decay properties. For this aim, we introduce the following cut-off function $\chi_{n+1}\in C^{\infty}\big(\mathcal{O},[0,1]\big)$ given by
$$\chi_{n+1}=\begin{cases}
1 & \textnormal{in }\mathrm{O}_{n+1}^{2\gamma}\\
0 & \textnormal{in }\mathcal{O}\setminus \mathrm{O}_{n+1}^{4\gamma}
\end{cases}$$
and such that
\begin{equation}\label{groth deriv chin+1}
\forall \alpha\in\mathbb{N}^{d},\quad |\alpha|\in\llbracket 0,q\rrbracket,\quad \|\partial_{b,\omega}^{\alpha}\chi_{n+1}\|_{L^{\infty}(\mathcal{O})}\lesssim\left(\gamma^{-1}N_{n}^{\overline{a}}\right)^{|\alpha|}.
\end{equation}
Therefore, we can define the extension $\mathtt{H}_{n+1}$ of $\widetilde{\mathtt{H}}_{n+1}$ as follows
\begin{equation}\label{def extension ttHn+1}
\mathtt{H}_{n+1}\triangleq\begin{cases}
\chi_{n+1}\,\widetilde{\mathtt{H}}_{n+1} & \textnormal{in } \mathrm{O}_{n+1}^{4\gamma},\\
0 &\textnormal{in }\mathcal{O}\setminus\mathrm{O}_{n+1}^{4\gamma}.
\end{cases}
\end{equation}
We also define
\begin{equation}\label{def ttUn+1}
\mathtt{W}_{n+1}\triangleq\mathtt{W}_{n}+\mathtt{H}_{n+1},\qquad\mathtt{U}_{n+1}\triangleq\mathtt{U}_0+\mathtt{W}_{n+1}=\mathtt{U}_{n}+\mathtt{H}_{n+1}.
\end{equation}
We can observe that in restriction to $\mathrm{O}_{n+1}^{2\gamma}$, we have
$$\mathtt{H}_{n+1}=\widetilde{\mathtt{H}}_{n+1},\qquad\mathtt{U}_{n+1}=\widetilde{\mathtt{U}}_{n+1}\qquad\textnormal{and}\qquad\mathcal{F}(\mathtt{U}_{n+1})=\mathcal{F}(\widetilde{\mathtt{U}}_{n+1}).$$
The last identity together with \eqref{decay ttUn inter} and the fact that $\mathrm{O}_{n+1}^{2\gamma}\subset\mathrm{O}_{n+1}^{4\gamma}$ imply \eqref{decay FttUn}.
Now, the product laws in Lemma \ref{lem funct prop} together with \eqref{def extension ttHn+1} and \eqref{groth deriv chin+1} provide the following estimate
\begin{equation}\label{control ext ttHn+1}
\forall s\geqslant s_{0},\quad\|\mathtt{H}_{n+1}\|_{s}^{q,\gamma,\mathtt{m}}\lesssim N_{n}^{q\overline{a}}\|\widetilde{\mathtt{H}}_{n+1}\|_{s}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}.
\end{equation}
Then, gathering \eqref{control ext ttHn+1} and \eqref{decay tttHm+1 s0} implies for any $n\in\mathbb{N}^{*},$
\begin{align*}
\| \mathtt{H}_{n+1}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant C N_{n}^{q\overline{a}+\overline\sigma}\|\widetilde{\mathtt{H}}_{n+1}\|_{s_{0}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\\
&\leqslant C C_{\ast}\varepsilon\gamma^{-1} N_{n}^{q\overline{a}+2\overline{\sigma}-\frac{2}{3}a_{1}}.
\end{align*}
Since the constraint \eqref{param NM} implies in particular $a_{2}=\tfrac{2}{3}a_{1}-q\overline{a}-2\overline{\sigma}-1\geqslant 1,$ we get for $\varepsilon$ small enough
\begin{align}\label{estim Hm+1 tilde}
\|\mathtt{H}_{n+1}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant CN_0^{-1}C_{\ast}\varepsilon\gamma^{-1}N_{n}^{-a_{2}}\nonumber\\
&\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{n}^{-a_{2}}.
\end{align}
As for the case $n=0$, we combine \eqref{control ext ttHn+1} and \eqref{ttH1} to get, up to taking $C_{\ast}$ sufficiently large,
\begin{equation}\label{e-ttH1-01}
\|\mathtt{H}_{1}\|_{s}^{q,\gamma,\mathtt{m}}\leqslant \tfrac{1}{2}C_{\ast}\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}.
\end{equation}
Putting together \eqref{ttUn}, \eqref{e-ttH1-01} and \eqref{estim Hm+1 tilde}, we deduce
\begin{align*}
\|\mathtt{W}_{n+1}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant\|\mathtt{H}_{1}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}+\sum_{k=2}^{n+1}\|\mathtt{H}_{k}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\\
&\leqslant \tfrac{1}{2}C_{\ast}\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}+CN_{0}^{-1}C_{\ast}\varepsilon\gamma^{-1}\\
&\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}.
\end{align*}
This proves \eqref{e-ttWn} at the order $n+1.$ Now \eqref{tttHm+1 and ttWn}, \eqref{control ext ttHn+1} and \eqref{growth ttWn} all together yield
\begin{align*}
\|\mathtt{W}_{n+1}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}} & \leqslant \| \mathtt{W}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}+CN_{n}^{q\overline{a}}\|\widetilde{\mathtt{H}}_{n+1}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m},\mathrm{O}_{n+1}^{4\gamma}}\\
& \leqslant C_{\ast} \varepsilon\gamma^{-1}N_{n-1}^{\mu_{1}}+CC_{\ast}\gamma^{-1}N_{n}^{q\overline{a}+2\overline{\sigma}}\left(\varepsilon+\| \mathtt{W}_{n}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\right)\\
& \leqslant CC_{\ast}\varepsilon\gamma^{-1}N_{n}^{q\overline{a}+2\overline{\sigma}+1+\frac{2}{3}\mu_{1}}.
\end{align*}
By \eqref{param NM} we get $q\overline{a}+2\overline{\sigma}+2=\tfrac{\mu_{1}}{3},$ which implies that for $\varepsilon$ small enough we have
\begin{align*}
\|\mathtt{W}_{n+1}\|_{\kappa_{1}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant CN_{0}^{-1}C_{\ast}\varepsilon\gamma^{-1}N_{n}^{\mu_{1}}\\
&\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{n}^{\mu_{1}}.
\end{align*}
This proves \eqref{growth ttWn} at the order $n+1.$\\
\ding{226} \textit{Reversibility preserving property of the scheme.} From $(\mathcal{P}2)_n$, we know that the torus $i_n$ is reversible. Observe that the projectors $\Pi_{N_n}$ are reversibility preserving thanks to the symmetry with respect to the Fourier modes. Now, using the reversibility property of the operators $\mathrm{T}_n$ and $\Pi_{N_n}$, we find that the torus component $\widehat{\mathfrak{I}}_{n+1}$ of $\widetilde{\mathtt{H}}_{n+1}$ is reversible. Since the cut-off function $\chi_{n+1}$ only depends on the variables $(b,\omega)$, the reversibility property is also preserved for the torus component $\mathfrak{I}_{n+1}$ of $\mathtt{H}_{n+1}$. Looking at the first component of \eqref{def ttUn+1}, we have
$$i_{n+1}=i_n+\mathfrak{I}_{n+1}.$$
Hence, the reversibility property \eqref{reversibility in} also holds at the order $n+1$.
\end{proof}
The previous iteration procedure converges and allows us to find a non-trivial reversible quasi-periodic solution of our problem, provided some restrictions on the internal radius $b$. More precisely, we have the following result.
\begin{cor}\label{Corollary NM}
There exists $\varepsilon_0>0$ such that, for all $\varepsilon\in(0,\varepsilon_0),$ the following assertions hold true. There exists a $q$-times differentiable function
$$\mathtt{U}_{\infty}:\begin{array}[t]{rcl}
\mathcal{O} & \rightarrow & \left(\mathbb{T}^{d}\times\mathbb{R}^{d}\times \mathbf{H}^{s_0}_{\bot,\mathtt{m}}\right)\times\mathbb{R}^{d}\times\mathbb{R}^{d+1}\\
(b,\omega) & \mapsto & \big(i_{\infty}(b,\omega),\alpha_{\infty}(b,\omega),(b,\omega)\big)
\end{array}$$
such that in restriction to the Cantor set $\mathtt{G}_{\infty}^{\gamma}$ defined by
\begin{equation}\label{Canbomgfin}
\mathtt{G}_{\infty}^{\gamma}\triangleq\bigcap_{n\in\mathbb{N}}\mathtt{A}_{n}^{\gamma},
\end{equation}
we have
\begin{equation}\label{solution NM non rigidified}
\forall(b,\omega)\in\mathtt{G}_{\infty}^{\gamma},\quad\mathcal{F}\big(\mathtt{U}_{\infty}(b,\omega)\big)=0.
\end{equation}
The torus $i_{\infty}$ is $\mathtt{m}$-fold and reversible. The vector $\alpha_{\infty}\in W^{q,\infty,\gamma}(\mathcal{O},\mathbb{R}^d)$ satisfies
\begin{equation}\label{limt alf}
\alpha_{\infty}(b,\omega)=\mathtt{J}\omega+\mathrm{r}_{\varepsilon}(b,\omega),\qquad\|\mathrm{r}_{\varepsilon}\|^{q,\gamma}\lesssim\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}.
\end{equation}
In addition, there exists a $q$-times differentiable function $b\in(b_{*},b^{*})\mapsto\omega(b,\varepsilon)$ implicitly defined by
\begin{equation}\label{implicit omegaper}
\alpha_{\infty}\big(b,\omega(b,\varepsilon)\big)=-\mathtt{J}\omega_{\textnormal{Eq}}(b)
\end{equation}
with
\begin{equation}\label{pert freq}
\omega(b,\varepsilon)=-\omega_{\textnormal{Eq}}(b)+\bar{r}_{\varepsilon}(b),\qquad \|\bar{r}_{\varepsilon}\|^{q,\gamma}\lesssim\varepsilon\gamma^{-1}N_{0}^{q\overline{a}},
\end{equation}
such that
\begin{equation}\label{solution NM rigidified}
\forall b\in \mathcal{C}_{\infty}^{\varepsilon},\quad \mathcal{F}\Big(\mathtt{U}_{\infty}\big(b,\omega(b,\varepsilon)\big)\Big)=0,
\end{equation}
where
\begin{equation}\label{FCS}
\mathcal{C}_{\infty}^{\varepsilon}\triangleq\Big\{b\in(b_{*},b^{*})\quad\textnormal{s.t.}\quad\big(b,\omega(b,\varepsilon)\big)\in\mathtt{G}_{\infty}^{\gamma}\Big\}.
\end{equation}
\end{cor}
\begin{proof}
We deduce from \eqref{ttUn} and \eqref{estim Hm+1 tilde} that
$$\|\mathtt{W}_{n+1}-\mathtt{W}_{n}\|_{s_{0}}^{q,\gamma,\mathtt{m}}=\|\mathtt{H}_{n+1}\|_{s_{0}}^{q,\gamma,\mathtt{m}}\leqslant\|\mathtt{H}_{n+1}\|_{s_{0}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{n}^{-a_{2}}.$$
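Notice that, since $a_{2}\geqslant1$ and the sequence $(N_{n})_{n\in\mathbb{N}}$ grows geometrically by \eqref{def geo Nn}, the telescoping series converges:
$$\sum_{n=0}^{\infty}\|\mathtt{W}_{n+1}-\mathtt{W}_{n}\|_{s_{0}}^{q,\gamma,\mathtt{m}}\leqslant C_{\ast}\varepsilon\gamma^{-1}\sum_{n=0}^{\infty}N_{n}^{-a_{2}}<\infty,$$
so $\left(\mathtt{W}_{n}\right)_{n\in\mathbb{N}}$ is a Cauchy sequence for the norm $\|\cdot\|_{s_{0}}^{q,\gamma,\mathtt{m}}.$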
Consequently, the sequence $\left(\mathtt{W}_{n}\right)_{n\in\mathbb{N}}$ converges and we denote
$$\mathtt{W}_{\infty}\triangleq\lim_{n\rightarrow\infty}\mathtt{W}_{n}\triangleq(\mathfrak{I}_{\infty},\alpha_{\infty}-\mathtt{J}\omega,0,0),\qquad\mathtt{U}_{\infty}\triangleq\big(i_{\infty},\alpha_{\infty},(b,\omega)\big)=\mathtt{U}_0+\mathtt{W}_{\infty}.$$
The reversibility and $\mathtt{m}$-fold properties of $i_{\infty}$ are obtained as the pointwise limit in \eqref{reversibility in}. Now, for $\varepsilon$ small enough, we get \eqref{solution NM non rigidified} from \eqref{decay FttUn}. The identity \eqref{limt alf} follows from the previous construction and the corresponding estimate is obtained by taking the limit in \eqref{e-ttWn}. We recall that the open set $\mathcal{O}$ is defined in \eqref{ouvert-sym}-\eqref{def scrU} by
$$\mathcal{O}=(b_{*},b^{*})\times\mathscr{U},\qquad\mathscr{U}=B(0,R_{0}),\qquad\omega_{\textnormal{Eq}}\big([b_*,b^*]\big)\subset B\big(0,\tfrac{R_0}{2}\big).$$
According to \eqref{limt alf}, we have that for any $b\in(b_{*},b^{*}),$ the mapping $\omega\in\mathscr{U}\mapsto\alpha_{\infty}(b,\omega)\in\alpha_{\infty}(b,\mathscr{U})$ is invertible, implying that
$$\widehat{\omega}=\alpha_{\infty}(b,\omega)=\mathtt{J}\omega+\mathrm{r}_{\varepsilon}(b,\omega)\qquad\Leftrightarrow\qquad\omega=\alpha_{\infty}^{-1}(b,\widehat{\omega})=\mathtt{J}\widehat{\omega}+\widehat{\mathrm{r}}_{\varepsilon}(b,\widehat{\omega}).$$
In particular,
$$\widehat{\mathrm{r}}_{\varepsilon}(b,\widehat{\omega})=-\mathtt{J}\mathrm{r}_{\varepsilon}(b,\omega).$$
Then, differentiating the previous relation and using \eqref{limt alf}, we get
\begin{equation}\label{estimate mathrm repsilon}
\|\widehat{\mathrm{r}}_{\varepsilon}\|^{q,\gamma}\lesssim\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}.
\end{equation}
Finally, if we denote
$$\omega(b,\varepsilon)\triangleq\alpha_{\infty}^{-1}(b,-\mathtt{J}\omega_{\textnormal{Eq}}(b))=-\omega_{\textnormal{Eq}}(b)+\overline{\mathrm{r}}_{\varepsilon}(b),\qquad\overline{\mathrm{r}}_{\varepsilon}(b)\triangleq\widehat{\mathrm{r}}_{\varepsilon}\big(b,-\mathtt{J}\omega_{\textnormal{Eq}}(b)\big),$$
then we have in particular \eqref{implicit omegaper} and the estimate \eqref{pert freq} follows from \eqref{estimate mathrm repsilon}. In addition, combining \eqref{solution NM non rigidified}, \eqref{implicit omegaper} and \eqref{FCS}, the identity \eqref{solution NM rigidified} holds.
The proof of Corollary \ref{Corollary NM} is now complete.
\end{proof}
\subsection{Measure of the final Cantor set}
In this last section, we check that the final Cantor set $\mathcal{C}_{\infty}^{\varepsilon}$ in the variable $b$ given by \eqref{FCS} is a massive set, which proves the existence of non-trivial quasi-periodic solutions to our problem. Actually, we prove that the measure of $\mathcal{C}_{\infty}^{\varepsilon}$ is $\varepsilon$-close to $(b^*-b_*).$ One of the main technical ingredients is the following R\"{u}ssmann lemma \cite[Thm. 17.1]{R01}.
\begin{lem}\label{lemma Russmann book}
Let $q_{0}\in\mathbb{N}^{*}$ and $\alpha,\beta\in\mathbb{R}_{+}^*.$ Let $f\in C^{q_{0}}([a,b],\mathbb{R})$ such that
$$\inf_{x\in[a,b]}\max_{k\in\llbracket 0,q_{0}\rrbracket}|f^{(k)}(x)|\geqslant\beta.$$
Then, there exists $C=C(a,b,q_{0},\| f\|_{C^{q_{0}}([a,b],\mathbb{R})})>0$ such that
$$\Big|\big\lbrace x\in[a,b]\quad\textnormal{s.t.}\quad |f(x)|\leqslant\alpha\big\rbrace\Big|\leqslant C\tfrac{\alpha^{\frac{1}{q_{0}}}}{\beta^{1+\frac{1}{q_{0}}}},$$
where the notation $|A|$ corresponds to the Lebesgue measure of a given measurable set $A.$
\end{lem}
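To illustrate how this lemma produces measure bounds (the function below is chosen purely for illustration and plays no role in the sequel), take $q_{0}=1$, $[a,b]=[0,1]$ and $f(x)=x-c$ for some fixed $c\in[0,1].$ Then $|f'(x)|=1$, so the transversality assumption holds with $\beta=1$, and the lemma gives
$$\Big|\big\lbrace x\in[0,1]\quad\textnormal{s.t.}\quad |x-c|\leqslant\alpha\big\rbrace\Big|\leqslant C\alpha,$$
which is consistent with the elementary bound $2\alpha.$ In the proof below, the role of $f$ is played by the perturbed frequency curves, whose transversality is guaranteed by Lemma \ref{lem Ru-pert}.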
Our main result is stated in the next proposition.
\begin{prop}\label{lem-meas-es1}
Let $q_{0}$ be defined as in Lemma $\ref{lemma transversalityE}$ and assume that \eqref{param NM} and \eqref{rigidity gam-N0} hold with $q=q_0+1.$ Assume the additional conditions
\begin{equation}\label{selec tau12-upsilon}
\tau_1>dq_{0},\qquad\tau_2>\tau_1+dq_0,\qquad\upsilon\triangleq \frac{1}{q_0+3}\cdot
\end{equation}
Then there exists $C>0$ such that
$$(b^{*}-b_{*})-C \varepsilon^{\frac{a\upsilon}{q_{0}}}\leqslant\big|\mathcal{C}_{\infty}^{\varepsilon}\big|\leqslant b^*-b_* .$$
\end{prop}
\begin{proof}
The identities \eqref{FCS} and \eqref{Canbomgfin} provide the following decomposition of the final Cantor set
\begin{equation}\label{Cinftyepsilon}
\mathcal{C}_{\infty}^{\varepsilon}=\bigcap_{n\in\mathbb{N}}\mathcal{C}_{n}^{\varepsilon},\qquad\textnormal{where}\qquad \mathcal{C}_{n}^{\varepsilon}\triangleq \Big\{b\in(b_{*},b^{*})\quad\textnormal{s.t.}\quad \big(b,{\omega}(b,\varepsilon)\big)\in\mathtt{A}_n^{\gamma}\Big\},
\end{equation}
with $\mathtt{A}_n^{\gamma}$ and $\omega(b,\varepsilon)$ as in Proposition \ref{Nash-Moser} and \eqref{pert freq}. We can write
\begin{equation}\label{complement finCantset}
(b_{*},b^{*})\setminus\mathcal{C}_{\infty}^{\varepsilon}=\big((b_{*},b^{*})\setminus\mathcal{C}_{0}^{\varepsilon}\big)\sqcup\bigsqcup_{n=0}^{\infty}\big(\mathcal{C}_{n}^{\varepsilon}\setminus\mathcal{C}_{n+1}^{\varepsilon}\big).
\end{equation}
First, let us prove that
\begin{equation}\label{triv emb}
(b_{*},b^{*})\setminus\mathcal{C}_{0}^{\varepsilon}=\varnothing,\qquad\textnormal{i.e.}\qquad\mathcal{C}_{0}^{\varepsilon}=(b_*,b^*).
\end{equation}
For this purpose, notice that \eqref{pert freq} and \eqref{rigidity gam-N0} imply
$$\sup_{b\in(b_{*},b^{*})}\big|\omega(b,\varepsilon)+\omega_{\textnormal{Eq}}(b)\big|\leqslant\|\overline{\mathrm{r}}_{\varepsilon}\|^{q,\gamma}\leqslant C\varepsilon\gamma^{-1}N_{0}^{q\overline{a}}=C\varepsilon^{1-a(1+q\overline{a})}.$$
But \eqref{param NM} and \eqref{rigidity gam-N0} give in particular
$$0<a<\frac{1}{1+q\overline{a}}\cdot$$
Hence, in view of \eqref{def scrU}, for $\varepsilon$ sufficiently small we can ensure
$$\forall b\in(b_{*},b^{*}),\quad \omega(b,\varepsilon)\in \mathscr{U}=B(0,R_{0}).$$
By construction of $\mathtt{A}_{0}^{\gamma}$ and $\mathcal{O}$, we deduce \eqref{triv emb}. Coming back to \eqref{complement finCantset}, we find
\begin{align}\label{sum meas CnmCn+1}
\nonumber \Big|(b_{*},b^{*})\setminus\mathcal{C}_{\infty}^{\varepsilon}\Big|&\leqslant\sum_{n=0}^{\infty}\Big|\mathcal{C}_{n}^{\varepsilon}\setminus\mathcal{C}_{n+1}^{\varepsilon}\Big|\\
&\triangleq \sum_{n=0}^{\infty}\mathcal{S}_{n}.
\end{align}
Using the notations of Propositions \ref{prop RR} and \ref{prop proj nor dir}, we denote the perturbed frequencies associated with the reduced linearized operator at state $i_n$ in the following way
\begin{align}\label{def mujknfty}
\nonumber \mu_{j,k,n}^{(\infty)}(b,\varepsilon)&\triangleq \mu_{j,k,n}^{(\infty)}\big(b,{\omega}(b,\varepsilon),i_{n}\big)\\
&=\Omega_{j,k}(b)+jr_{k,n}^{(0)}(b,\varepsilon)+r_{j,k,n}^{(\infty)}(b,\varepsilon),
\end{align}
where
$$r_{k,n}^{(0)}(b,\varepsilon)\triangleq \mathtt{c}_{k,n}(b,\varepsilon)-\mathtt{v}_k(b),\qquad \mathtt{c}_{k,n}(b,\varepsilon)\triangleq \mathtt{c}_{k}\big(b,{\omega}(b,\varepsilon),i_{n}\big),\qquad r_{j,k,n}^{(\infty)}(b,\varepsilon)\triangleq r_{j,k}^{(\infty)}\big(b,{\omega}(b,\varepsilon),i_{n}\big).$$
Now, according to \eqref{Cinftyepsilon} and Propositions \ref{prop strighten}, \ref{prop RR} and \ref{prop inv linfty}, one has by construction that for any $n\in\mathbb{N}$,
\begin{align}\label{Set CnmCn+1}
\mathcal{C}_{n}^{\varepsilon}\setminus\mathcal{C}_{n+1}^{\varepsilon}&=\bigcup_{\underset{(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\setminus\{(0,0)\}\atop |l|\leqslant N_{n}}{k\in\{1,2\}}}\mathcal{R}_{l,j}^{(0,k)}(i_{n})\cup\bigcup_{\underset{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})\atop |l|\leqslant N_{n}}{k\in\{1,2\}}}\mathcal{R}_{l,j}^{(1,k)}(i_{n})\nonumber\\
&\quad\cup\bigcup_{\underset{\underset{\langle l\rangle\leqslant N_{n}}{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^{2}}\atop(l,j)\neq(0,j_{0})}{k\in\{1,2\}}}\mathcal{R}_{l,j,j_{0}}^{k}(i_{n})\cup\bigcup_{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})\atop\langle l,j,j_0\rangle\leqslant N_{n}}\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n}),
\end{align}
where we denote for $k\in\{1,2\},$
\begin{align*}
\mathcal{R}_{l,j}^{(0,k)}(i_{n})&\triangleq \left\lbrace b\in\mathcal{C}_{n}^{\varepsilon}\quad\textnormal{s.t.}\quad\Big|{\omega}(b,\varepsilon)\cdot l+j\mathtt{c}_{k,n}(b,\varepsilon)\Big|\leqslant\tfrac{4\gamma_{n+1}^{\upsilon}\langle j\rangle}{\langle l\rangle^{\tau_1}}\right\rbrace,\\
\mathcal{R}_{l,j,j_{0}}^{k}(i_{n})&\triangleq \left\lbrace b\in\mathcal{C}_{n}^{\varepsilon}\quad\textnormal{s.t.}\quad\Big|{\omega}(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\Big|\leqslant\tfrac{2\gamma_{n+1}\langle j-j_0\rangle}{\langle l\rangle^{\tau_2}}\right\rbrace,\\
\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n})&\triangleq \left\lbrace b\in\mathcal{C}_{n}^{\varepsilon}\quad\textnormal{s.t.}\quad\Big|{\omega}(b,\varepsilon)\cdot l+\mu_{j,1,n}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},2,n}^{(\infty)}(b,\varepsilon)\Big|\leqslant\tfrac{2\gamma_{n+1}}{\langle l,j,j_0\rangle^{\tau_2}}\right\rbrace,\\
\mathcal{R}_{l,j}^{(1,k)}(i_{n})&\triangleq \left\lbrace b\in\mathcal{C}_{n}^{\varepsilon}\quad\textnormal{s.t.}\quad\Big|{\omega}(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\Big|\leqslant\tfrac{\gamma_{n+1}\langle j\rangle}{\langle l\rangle^{\tau_1}}\right\rbrace.
\end{align*}
Since
$$W^{q,\infty,\gamma}(\mathcal{O},\mathbb{C})\hookrightarrow C^{q-1}(\mathcal{O},\mathbb{C})\qquad\textnormal{and}\qquad q=q_0+1,$$
one obtains, for any $n\in\mathbb{N}$ and any $(k,\ell)\in\{1,2\}^2$, the $C^{q_0}$ regularity of the curves
$$\begin{array}{ll}
\displaystyle b\mapsto\omega(b,\varepsilon)\cdot l+j\mathtt{c}_{k,n}(b,\varepsilon),&\quad (l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\backslash\{(0,0)\},\vspace{0.1cm}\\
\displaystyle b\mapsto\omega(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)-\mu_{j_0,\ell,n}^{(\infty)}(b,\varepsilon),&\quad (l,j,j_0)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})\times (\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,\ell}),\vspace{0.1cm}\\
\displaystyle b\mapsto\omega(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon),&\quad (l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}).
\end{array}$$
Therefore, applying Lemma \ref{lemma Russmann book} together with Lemma \ref{lem Ru-pert} yields that for all $n\in\mathbb{N}$,
\begin{align}\label{e-Russ- Rk12}
\forall |j|\leqslant C_{0}\langle l\rangle,\qquad \Big|\mathcal{R}_{l,j}^{(0,k)}(i_{n})\Big|&\lesssim\gamma^{\frac{\upsilon}{q_{0}}}\langle j\rangle^{\frac{1}{q_{0}}}\langle l\rangle^{-1-\frac{\tau_1+1}{q_{0}}},\nonumber\\
\forall |j|\leqslant C_{0}\langle l\rangle,\qquad \Big|\mathcal{R}_{l,j}^{(1,k)}(i_{n})\Big|&\lesssim\gamma^{\frac{1}{q_{0}}}\langle j\rangle^{\frac{1}{q_{0}}}\langle l\rangle^{-1-\frac{\tau_1+1}{q_{0}}},\\
\forall |j-j_0|\leqslant C_{0}\langle l\rangle,\qquad \Big|\mathcal{R}_{l,j,j_{0}}^{k}(i_{n})\Big|&\lesssim\gamma^{\frac{1}{q_{0}}}\langle j-j_0\rangle^{\frac{1}{q_{0}}}\langle l\rangle^{-1-\frac{\tau_2+1}{q_{0}}},\nonumber\\
\Big|\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n})\Big|&\lesssim\gamma^{\frac{1}{q_{0}}}\langle l,j,j_0\rangle^{-1-\frac{\tau_2+1}{q_{0}}}.\nonumber
\end{align}
We first estimate $\mathcal{S}_0$ and $\mathcal{S}_{1}$ defined in \eqref{sum meas CnmCn+1}. They cannot be estimated in the same way as the other terms because of the restricted range of validity of the estimate \eqref{ediff in in-1 norm sh+sigma4} obtained later in the proof of Lemma \ref{lemm-dix1}. From Lemma \ref{some cantor set are empty}, we have some trivial inclusions allowing us to write for $n\in\{0,1\}$,
\begin{align}\label{e-cal S01}
\mathcal{S}_{n}&\lesssim \sum_{k\in\{1,2\}\atop\underset{|j|\leqslant C_{0}\langle l\rangle, |l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\setminus\{(0,0)\}}}\Big|\mathcal{R}_{l,j}^{(0,k)}(i_{n})\Big|+\sum_{k\in\{1,2\}\atop\underset{|j|\leqslant C_{0}\langle l\rangle, |l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})}}\Big|\mathcal{R}_{l,j}^{(1,k)}(i_{n})\Big|\\
&\quad+\sum_{k\in\{1,2\}\atop\underset{\underset{(l,j)\neq(0,j_0),\min(|j|,|j_0|)\leqslant c_2\gamma_{1}^{-\upsilon}\langle l\rangle^{\tau_1}}{|j-j_0|\leqslant C_0\langle l\rangle,|l|\leqslant N_n}}{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2}}\Big|\mathcal{R}_{l,j,j_{0}}^{k}(i_{n})\Big|+\sum_{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})\atop\langle l,j,j_0\rangle\leqslant N_{n}}\Big|\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n})\Big|.\nonumber
\end{align}
Inserting \eqref{e-Russ- Rk12} into \eqref{e-cal S01} implies that for $n\in\{0,1\}$,
\begin{align*}
\mathcal{S}_{n}&\lesssim \gamma^{\frac{\upsilon}{q_0}}\sum_{(l,j)\in\mathbb{Z}^{d+1}\atop|j|\leqslant C_{0}\langle l\rangle}\langle j\rangle^{\frac{1}{q_{0}}}\langle l\rangle^{-1-\frac{\tau_1+1}{q_{0}}}+ \gamma^{\frac{1}{q_{0}}}\sum_{(l,j)\in\mathbb{Z}^{d+1}\atop|j|\leqslant C_{0}\langle l\rangle}\langle j\rangle^{\frac{1}{q_{0}}}\langle l\rangle^{-1-\frac{\tau_1+1}{q_{0}}}\\
&\quad+\gamma^{\frac{1}{q_{0}}}\sum_{(l,j,j_0)\in\mathbb{Z}^{d+2}\atop\underset{\min(|j|,|j_0|)\leqslant c_2\gamma_1^{-\upsilon}\langle l\rangle^{\tau_1}}{|j-j_0|\leqslant C_{0}\langle l\rangle}}\langle j-j_0\rangle^{\frac{1}{q_{0}}}\langle l\rangle^{-1-\frac{\tau_2+1}{q_{0}}}+\gamma^{\frac{1}{q_{0}}}\sum_{(l,j,j_0)\in\mathbb{Z}^{d+2}}\langle l,j,j_0\rangle^{-1-\frac{\tau_2+1}{q_{0}}}.
\end{align*}
Notice that the conditions $|j-j_0|\leqslant C_0\langle l\rangle$ and $\min(|j|,|j_0|)\leqslant c_2\gamma_1^{-\upsilon}\langle l\rangle^{\tau_1}$ imply
\begin{equation}\label{Majo-max j j0}
\max(|j|,|j_0|)\leqslant\min(|j|,|j_0|)+|j-j_0|\leqslant c_2\gamma_1^{-\upsilon}\langle l\rangle^{\tau_1}+C_0\langle l\rangle\lesssim\gamma^{-\upsilon}\langle l\rangle^{\tau_1}.
\end{equation}
Consequently, we have
\begin{align}\label{e-calSn00}
\max_{n\in\{0,1\}}\mathcal{S}_{n}&\lesssim \gamma^{\frac{1}{q_{0}}}\Bigg(\sum_{l\in\mathbb{Z}^d}\langle l\rangle^{-\frac{\tau_1}{q_{0}}}+\gamma^{-\upsilon}\sum_{l\in\mathbb{Z}^d}\langle l\rangle^{\tau_1-1-\frac{\tau_2}{q_0}}+\sum_{(l,j,j_0)\in\mathbb{Z}^{d+2}}\langle l,j,j_0\rangle^{-1-\frac{\tau_2+1}{q_{0}}}\Bigg)+\gamma^{\frac{\upsilon}{q_0}}\sum_{l\in\mathbb{Z}^d}\langle l\rangle^{-\frac{\tau_1}{q_0}}.
\end{align}
Observe that \eqref{selec tau12-upsilon} implies
$$\min\big(\tfrac{\upsilon}{q_0},\tfrac{1}{q_0}-\upsilon\big)=\tfrac{\upsilon}{q_0}.$$
Now the constraints on $\tau_1$ and $\tau_2$ listed in \eqref{selec tau12-upsilon} ensure the convergence of the series in \eqref{e-calSn00}, and we get
\begin{equation}\label{e-calSn 01}
\max_{n\in\{0,1\}}\mathcal{S}_n\lesssim\gamma^{\min\big(\frac{\upsilon}{q_0},\tfrac{1}{q_0}-\upsilon\big)}=\gamma^{\frac{\upsilon}{q_0}}.
\end{equation}
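Indeed, with the choice $\upsilon=\frac{1}{q_0+3}$ made in \eqref{selec tau12-upsilon}, the minimum appearing in the exponent can be checked by direct computation:
$$\tfrac{1}{q_0}-\upsilon=\tfrac{(q_0+3)-q_0}{q_0(q_0+3)}=\tfrac{3\upsilon}{q_0}\geqslant\tfrac{\upsilon}{q_0},\qquad\textnormal{so that}\qquad\min\big(\tfrac{\upsilon}{q_0},\tfrac{1}{q_0}-\upsilon\big)=\tfrac{\upsilon}{q_0}.$$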
Let us now move to the estimate of $\mathcal{S}_{n}$ for $n\geqslant 2$ defined by \eqref{sum meas CnmCn+1}. Using Lemma \ref{lemm-dix1} and Lemma \ref{some cantor set are empty}, we infer
\begin{align*}
\mathcal{S}_{n}&\lesssim \sum_{k\in\{1,2\}\atop\underset{|j|\leqslant C_{0}\langle l\rangle, N_{n-1}<|l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\setminus\{(0,0)\}}}\Big|\mathcal{R}_{l,j}^{(0,k)}(i_{n})\Big|+\sum_{k\in\{1,2\}\atop\underset{|j|\leqslant C_{0}\langle l\rangle, N_{n-1}<|l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})}}\Big|\mathcal{R}_{l,j}^{(1,k)}(i_{n})\Big|\\
&\quad+\sum_{k\in\{1,2\}\atop\underset{\underset{(l,j)\neq(0,j_0),\min(|j|,|j_0|)\leqslant c_2\gamma_{n+1}^{-\upsilon}\langle l\rangle^{\tau_1}}{|j-j_0|\leqslant C_0\langle l\rangle,N_{n-1}<|l|\leqslant N_n}}{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2}}\Big|\mathcal{R}_{l,j,j_{0}}^{k}(i_{n})\Big|+\sum_{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})\atop N_{n-1}<\langle l,j,j_0\rangle\leqslant N_{n}}\Big|\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n})\Big|.
\end{align*}
Similarly to \eqref{Majo-max j j0}, we have the implication
$$\Big(\min(|j|,|j_0|)\leqslant c_2\gamma_{n+1}^{-\upsilon}\langle l\rangle^{\tau_1}\quad\textnormal{and}\quad|j-j_0|\leqslant C_0\langle l\rangle\Big)\quad\Rightarrow\quad\max(|j|,|j_0|)\lesssim\gamma^{-\upsilon}\langle l\rangle^{\tau_1}.$$
Hence, we deduce from \eqref{e-Russ- Rk12} that for any $n\geqslant2$
\begin{align*}
\mathcal{S}_{n}&\lesssim \gamma^{\frac{1}{q_{0}}}\Bigg(\sum_{l\in\mathbb{Z}^d\atop|l|> N_{n-1}}\langle l\rangle^{-\frac{\tau_1}{q_{0}}}+\gamma^{-\upsilon}\sum_{l\in\mathbb{Z}^d\atop|l|>N_{n-1}}\langle l\rangle^{\tau_1-1-\frac{\tau_2}{q_0}}+\sum_{(l,j,j_0)\in\mathbb{Z}^{d+2}\atop\langle l,j,j_0\rangle>N_{n-1}}\langle l,j,j_0\rangle^{-1-\frac{\tau_2+1}{q_{0}}}\Bigg)+\gamma^{\frac{\upsilon}{q_0}}\sum_{l\in\mathbb{Z}^d\atop|l|>N_{n-1}}\langle l\rangle^{-\frac{\tau_1}{q_0}}.
\end{align*}
We deduce that the series with general term $\mathcal{S}_n$ converges and
\begin{align}\label{esti cal Snb2}
\sum_{n=2}^\infty \mathcal{S}_{n}&\lesssim \gamma^{\frac{\upsilon}{q_{0}}}=\varepsilon^{\frac{a\upsilon}{q_0}}.
\end{align}
Inserting \eqref{e-calSn 01} and \eqref{esti cal Snb2} into \eqref{sum meas CnmCn+1} yields
$$\Big|(b_{*},b^{*})\setminus\mathcal{C}_{\infty}^{\varepsilon}\Big|\lesssim \varepsilon^{\frac{a\upsilon}{q_0}}.$$
This proves Proposition \ref{lem-meas-es1}.
\end{proof}
We shall now prove Lemmas \ref{lemm-dix1} and \ref{some cantor set are empty}, which were used in the proof of Proposition \ref{lem-meas-es1}.
\begin{lem}\label{lemm-dix1}
Let $n\in\mathbb{N}\setminus\{0,1\}$ and $k\in\{1,2\}.$ Then the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item For $(l,j)\in\mathbb{Z}^d\times\mathbb{Z}_{\mathtt{m}}$ with $(l,j)\neq(0,0)$ and $|l|\leqslant N_{n-1}$, we get $\,\,\mathcal{R}_{l,j}^{(0,k)}(i_{n})=\varnothing.$
\item For $(l,j)\in\mathbb{Z}^d\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})$ with $|l|\leqslant N_{n-1}$, we get $\,\,\mathcal{R}_{l,j}^{(1,k)}(i_{n})=\varnothing.$
\item For $(l,j,j_0)\in\mathbb{Z}^d\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2$ with $|l|\leqslant N_{n-1}$ and $(l,j)\neq(0,j_0),$ we get $\,\,\mathcal{R}_{l,j,j_0}^{k}(i_n)=\varnothing.$
\item For $(l,j,j_0)\in\mathbb{Z}^d\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})$ with $\langle l,j,j_0\rangle\leqslant N_{n-1},$ we get $\,\,\mathcal{R}_{l,j,j_0}^{1,2}(i_n)=\varnothing.$
\item For any $n\in\mathbb{N}\setminus\{0,1\},$
\begin{align*}
\mathcal{C}_{n}^{\varepsilon}\setminus\mathcal{C}_{n+1}^{\varepsilon}&=\bigcup_{\underset{\underset{N_{n-1}<|l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\setminus\{(0,0)\}}}{k\in\{1,2\}}}\mathcal{R}_{l,j}^{(0,k)}(i_{n})\cup\bigcup_{\underset{\underset{N_{n-1}<|l|\leqslant N_{n}}{(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})}}{k\in\{1,2\}}}\mathcal{R}_{l,j}^{(1,k)}(i_{n})\\
&\quad\cup\bigcup_{\underset{\underset{N_{n-1}<|l|\leqslant N_{n}}{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^{2}}\atop(l,j)\neq(0,j_{0})}{k\in\{1,2\}}}\mathcal{R}_{l,j,j_{0}}^{k}(i_{n})\cup\bigcup_{(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})\atop N_{n-1}<\langle l,j,j_0\rangle\leqslant N_{n}}\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n}).
\end{align*}
\end{enumerate}
\end{lem}
\begin{proof}
Observe that the point (v) follows immediately from \eqref{Set CnmCn+1} and the points (i), (ii), (iii) and (iv). The points (i), (ii) and (iii) can be proved similarly to \cite[Lem. 7.1-(i)-(ii)-(iii)]{HR21}, based on the following estimate, obtained from \eqref{ttHn shbsig4}:
\begin{align}\label{ediff in in-1 norm sh+sigma4}
\forall n\geqslant 2,\qquad\|i_{n}-i_{n-1}\|_{\overline{s}_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}&\leqslant\|\mathtt{U}_{n}-\mathtt{U}_{n-1}\|_{\overline{s}_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\nonumber\\
&\leqslant\| \mathtt{H}_{n}\|_{\overline{s}_{h}+\overline{\sigma}}^{q,\gamma,\mathtt{m}}\nonumber\\
&\leqslant C_{\ast}\varepsilon\gamma^{-1}N_{n-1}^{-a_{2}}.
\end{align}
We mention that the required constraint on $\upsilon$ stated in \eqref{selec tau12-upsilon} appears in the skipped proofs. Now it remains to prove the point (iv).\\
\noindent \textbf{(iv)} Let $(l,j,j_{0})\in\mathbb{Z}^d\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})$ such that $\langle l,j,j_0\rangle\leqslant N_{n-1}.$ It is sufficient to prove that
$$\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n})\subset \mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n-1}).$$
Indeed, if this inclusion holds, then by construction
$$\mathcal{R}_{l,j,j_0}^{1,2}(i_n)\subset\big(\mathcal{C}_{n}^{\varepsilon}\setminus\mathcal{C}_{n+1}^{\varepsilon}\big)\cap\big(\mathcal{C}_{n-1}^{\varepsilon}\setminus\mathcal{C}_{n}^{\varepsilon}\big)=\varnothing.$$
Take $b\in\mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n})\subset\mathcal{C}_{n}^{\varepsilon}\subset \mathcal{C}_{n-1}^{\varepsilon}.$ Then coming back to \eqref{Set CnmCn+1}, we deduce from the triangle inequality that
\begin{equation}\label{triv121}
\left|{\omega}(b,\varepsilon)\cdot l+\mu_{j,1,n-1}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},2,n-1}^{(\infty)}(b,\varepsilon)\right|\leqslant\tfrac{2\gamma_{n+1}}{\langle l,j,j_0\rangle^{\tau_2}}+\varrho_{j,j_0,n}(b,\varepsilon),
\end{equation}
where
$$\varrho_{j,j_0,n}(b,\varepsilon)\triangleq \left|\mu_{j,1,n}^{(\infty)}(b,\varepsilon)-\mu_{j_0,2,n}^{(\infty)}(b,\varepsilon)-\mu_{j,1,n-1}^{(\infty)}(b,\varepsilon)+\mu_{j_{0},2,n-1}^{(\infty)}(b,\varepsilon)\right|.$$
Using the decomposition \eqref{def mujknfty}, we infer
\begin{align}\label{triv122}
\nonumber \varrho_{j,j_0,n}(b,\varepsilon)&\leqslant |j|\left|r_{1,n}^{(0)}(b,\varepsilon)-r_{1,n-1}^{(0)}(b,\varepsilon)\right|+|j_0|\left|r_{2,n}^{(0)}(b,\varepsilon)-r_{2,n-1}^{(0)}(b,\varepsilon)\right|\\
&\quad+\left|r_{j,1,n}^{(\infty)}(b,\varepsilon)-r_{j,1,n-1}^{(\infty)}(b,\varepsilon)\right|+\left|r_{j_0,2,n}^{(\infty)}(b,\varepsilon)-r_{j_0,2,n-1}^{(\infty)}(b,\varepsilon)\right|.
\end{align}
Applying \eqref{e-ed-r0} together with \eqref{ediff in in-1 norm sh+sigma4}, \eqref{rigidity gam-N0} and the fact that $\sigma_{4}\geqslant\sigma_{3},$ we obtain
\begin{align}\label{triv123}
\left|r_{1,n}^{(0)}(b,\varepsilon)-r_{1,n-1}^{(0)}(b,\varepsilon)\right|+\left|r_{2,n}^{(0)}(b,\varepsilon)-r_{2,n-1}^{(0)}(b,\varepsilon)\right|&\lesssim \varepsilon\| i_{n}-i_{n-1}\|_{\overline{s}_{h}+\sigma_{3}}^{q,\gamma,\mathtt{m}}\nonumber\\
&\lesssim \varepsilon^{2}\gamma^{-1}N_{n-1}^{-a_2}\nonumber\\
&\lesssim \varepsilon^{2-a}N_{n-1}^{-a_2}.
\end{align}
In the same way, we can apply \eqref{e-rjfty} together with \eqref{ediff in in-1 norm sh+sigma4} and \eqref{rigidity gam-N0} to deduce
\begin{align}\label{triv124}
\left|r_{j,1,n}^{(\infty)}(b,\varepsilon)-r_{j,1,n-1}^{(\infty)}(b,\varepsilon)\right|+\left|r_{j_0,2,n}^{(\infty)}(b,\varepsilon)-r_{j_0,2,n-1}^{(\infty)}(b,\varepsilon)\right|&\lesssim \varepsilon\gamma^{-1}\| i_{n}-i_{n-1}\|_{\overline{s}_{h}+\sigma_{4}}^{q,\gamma,\mathtt{m}}\nonumber\\
&\lesssim \varepsilon^{2}\gamma^{-2}N_{n-1}^{-a_2}\nonumber\\
&\lesssim \varepsilon^{2(1-a)}\langle l,j,j_0\rangle N_{n-1}^{-a_2}.
\end{align}
Inserting \eqref{triv123} and \eqref{triv124} into \eqref{triv122} gives
\begin{align}\label{sml-rhojj0n}
\varrho_{j,j_0,n}(b,\varepsilon)\lesssim \varepsilon^{2(1-a)}\langle l,j,j_0\rangle N_{n-1}^{-a_{2}}.
\end{align}
Now putting together \eqref{triv121}, \eqref{sml-rhojj0n} and the fact that $\gamma_{n+1}=\gamma_{n}-\varepsilon^a 2^{-n-1},$ we get
\begin{align*}
\left|{\omega}(b,\varepsilon)\cdot l+\mu_{j,1,n-1}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},2,n-1}^{(\infty)}(b,\varepsilon)\right|
&\leqslant \displaystyle\tfrac{2\gamma_{n}}{\langle l,j,j_0\rangle^{\tau_2}}-{\varepsilon^a}2^{-n}\langle l,j,j_0\rangle ^{-\tau_2}+C\varepsilon^{2(1-a)}\langle l,j,j_0\rangle N_{n-1}^{-a_{2}}.
\end{align*}
Combined with the constraint $\langle l,j,j_0\rangle\leqslant N_{n-1}$, we obtain
\begin{align*}
-{\varepsilon^a }2^{-n}\langle l,j,j_0\rangle ^{-\tau_2}+C\varepsilon^{2(1-a)}N_{n-1}^{-a_{2}}\leqslant {\varepsilon^a}2^{-n}\langle l,j,j_0\rangle ^{-\tau_2}\Big(-1+C\varepsilon^{2-3a}2^{n}N_{n-1}^{-a_{2}+\tau_2+1}\Big).
\end{align*}
Observe that our choice of parameters \eqref{param NM} and \eqref{rigidity gam-N0} gives in particular
\begin{align*}
a_2>\tau_2+1\qquad\textnormal{and}\qquad a<\tfrac{2}{3}\cdot
\end{align*}
Hence, up to taking $\varepsilon$ small enough, we infer
\begin{align*}
\forall\, n\in\mathbb{N},\quad -1+C\varepsilon^{2-3a}2^{n}N_{n-1}^{-a_{2}+\tau_2+1}\leqslant 0.
\end{align*}
Consequently,
$$\left|{\omega}(b,\varepsilon)\cdot l+\mu_{j,1,n-1}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},2,n-1}^{(\infty)}(b,\varepsilon)\right| \leqslant\tfrac{2\gamma_{n}}{\langle l,j,j_0\rangle^{\tau_2}}\cdot$$
This means that $b\in \mathcal{R}_{l,j,j_{0}}^{1,2}(i_{n-1}).$ This concludes the proof of Lemma \ref{lemm-dix1}.
\end{proof}
The following lemma provides necessary constraints between the time and space Fourier modes so that the sets in \eqref{Set CnmCn+1} are nonempty.
\begin{lem}\label{some cantor set are empty}
Let $k\in\{1,2\}.$ There exists $\varepsilon_0$ such that for any $\varepsilon\in[0,\varepsilon_0]$ and $n\in\mathbb{N}$ the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item Let $(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\setminus\{(0,0)\}.$ If $\,\mathcal{R}_{l,j}^{(0,k)}(i_{n})\neq\varnothing,$ then $|j|\leqslant C_{0}\langle l\rangle.$
\item Let $(l,j)\in\mathbb{Z}^{d}\times\left(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k}\right).$ If $\, \mathcal{R}_{l,j}^{(1,k)}(i_{n})\neq\varnothing,$ then $|j|\leqslant C_{0}\langle l\rangle.$
\item Let $(l,j,j_0)\in\mathbb{Z}^d\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2.$ If $\mathcal{R}_{l,j,j_0}^{k}(i_n)\neq\varnothing$, then $|j-j_0|\leqslant C_0\langle l\rangle.$
\item Let $(l,j,j_0)\in\mathbb{Z}^d\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^2.$ There exists $c_2>0$ such that if $\min(|j|,|j_0|)\geqslant c_2\gamma_{n+1}^{-\upsilon}\langle l\rangle^{\tau_1},$ then
$$\mathcal{R}_{l,j,j_0}^{k}(i_n)\subset\mathcal{R}_{l,j-j_0}^{(0,k)}(i_n).$$
\end{enumerate}
\end{lem}
\begin{proof}
\textbf{(i)} Observe that the case $j=0$ is trivial. Now, for $j\neq0$ we assume that $\mathcal{R}_{l,j}^{(0,k)}(i_{n})\neq\varnothing.$ Then, there exists $b\in(b_{*},b^{*})$ such that
$$|\omega(b,\varepsilon)\cdot l+j \mathtt{c}_{k,n}(b,\varepsilon)|\leqslant\tfrac{4\gamma_{n+1}^{\upsilon}|j|}{\langle l\rangle^{\tau_1}}\cdot$$
From the triangle and Cauchy-Schwarz inequalities, \eqref{in gamn}, \eqref{rigidity gam-N0} and the fact that $(b,\varepsilon)\mapsto \omega(b,\varepsilon)$ is bounded, we deduce
\begin{align}\label{maj cknj}
|\mathtt{c}_{k,n}(b,\varepsilon)||j|&\leqslant 4|j|\gamma_{n+1}^{\upsilon}\langle l\rangle^{-\tau_1}+|{\omega}(b,\varepsilon)\cdot l|\nonumber\\
&\leqslant 4|j|\gamma_{n+1}^{\upsilon}+C\langle l\rangle\nonumber\\
&\leqslant 8\varepsilon^{a\upsilon}|j|+C\langle l\rangle.
\end{align}
Now by construction \eqref{mu0 r0}, we can write
$$\mathtt{c}_{k,n}(b,\varepsilon)=\mathtt{v}_k(b)+r_{k,n}^{(0)}(b,\varepsilon).$$
Remark that by definition \eqref{def V10 V20}, we have
$$\inf_{k\in\{1,2\}}\inf_{b\in(b_{*},b^{*})}\big|\mathtt{v}_{k}(b)\big|\geqslant\Omega.$$
Together with \eqref{sml-r0} and Proposition \ref{Nash-Moser}-$(\mathcal{P}1)_{n}$, this implies
\begin{align}\label{e-uni r0}
\forall q'\in\llbracket 0,q\rrbracket,\quad\sup_{n\in\mathbb{N}}\sup_{b\in(b_{*},b^{*})}\big|\partial_{b}^{q'}r_{k,n}^{(0)}(b,\varepsilon)\big|&\leqslant\gamma^{-q'}\sup_{n\in\mathbb{N}}\| r_{k,n}^{(0)}\|^{q,\gamma}\nonumber\\
&\lesssim\varepsilon\gamma^{-q'}\nonumber\\
&\lesssim\varepsilon^{1-aq'}.
\end{align}
Hence, choosing $\varepsilon$ small enough implies
\begin{equation}\label{low bnd ckn}
\inf_{k\in\{1,2\}}\inf_{n\in\mathbb{N}}\inf_{b\in(b_{*},b^{*})}|\mathtt{c}_{k,n}(b,\varepsilon)|\geqslant \tfrac{\Omega}{2}\cdot
\end{equation}
Inserting \eqref{low bnd ckn} into \eqref{maj cknj} yields
$$\big(\tfrac{\Omega}{2}-8\varepsilon^{a\upsilon}\big)|j|\leqslant C\langle l\rangle.$$
Thus, selecting $\varepsilon$ small enough ensures that $|j|\leqslant C_{0}\langle l\rangle$ for some $C_{0}>0.$\\
\noindent \textbf{(ii)} The case $j=0$ is obvious so we may treat the case where $j\neq 0.$ Assume that $\mathcal{R}_{l,j}^{(1,k)}(i_{n})\neq\varnothing.$ Then, we can find $b\in(b_{*},b^{*})$ such that
$$\big|\omega(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\big|\leqslant\tfrac{\gamma_{n+1}|j|}{\langle l\rangle^{\tau_1}}\cdot$$
Applying the triangle and Cauchy-Schwarz inequalities together with \eqref{in gamn} and \eqref{rigidity gam-N0} yields
\begin{align}\label{maj mujknfty}
\big|\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\big|&\leqslant \gamma_{n+1}|j|\langle l\rangle^{-\tau_1}+|{\omega}(b,\varepsilon)\cdot l|\nonumber\\
&\leqslant 2\varepsilon^a|j|+C\langle l\rangle.
\end{align}
Now coming back to the structure of the eigenvalues in \eqref{def mujknfty}, then using the triangle inequality, we infer
\begin{align}\label{low mjknfty}
\big|\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\big|&\geqslant |\Omega_{j,k}(b)|-|j|\big|r_{k,n}^{(0)}(b,\varepsilon)\big|-\big|r_{j,k,n}^{(\infty)}(b,\varepsilon)\big|.
\end{align}
Notice that \eqref{e-rjfty} implies
\begin{align}\label{e-uni rjknfty}
\forall q'\in\llbracket 0,q\rrbracket,\quad\sup_{n\in\mathbb{N}}\sup_{b\in(b_{*},b^{*})}\sup_{j\in\overline{\mathbb{S}}_{0,k}}\big|\partial_{b}^{q'}r_{j,k,n}^{(\infty)}(b,\varepsilon)\big|&\leqslant\gamma^{-q'}\sup_{n\in\mathbb{N}}\sup_{j\in\overline{\mathbb{S}}_{0,k}}\| r_{j,k,n}^{(\infty)}\|^{q,\gamma}\nonumber\\
&\lesssim\varepsilon\gamma^{-q'-1}\nonumber\\
&\lesssim\varepsilon^{1-a(q'+1)}.
\end{align}
Gathering \eqref{low mjknfty}, Lemma \ref{lem-asym}-3, \eqref{e-uni r0} and \eqref{e-uni rjknfty}, we obtain
\begin{equation}\label{low mjknfty2}
\big|\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\big|\geqslant \Omega|j|-C\varepsilon^{1-a}|j|.
\end{equation}
Inserting \eqref{low mjknfty2} into \eqref{maj mujknfty} yields
\begin{align*}
\big( \Omega-C\varepsilon^{1-a}-2\varepsilon^a\big)|j|
&\leqslant C\langle l\rangle.
\end{align*}
Hence, taking $\varepsilon$ small enough we obtain $|j|\leqslant C_0\langle l\rangle,$ for some $C_{0}>0.$\\
\noindent\textbf{(iii)} Notice that for $j=j_0$ we have $\mathcal{R}_{l,j_0,j_{0}}^{k}(i_{n})=\mathcal{R}^{(0,k)}_{l,0}(i_{n})$. Hence this situation has already been treated in the first point. Therefore, we shall consider $j\neq j_0.$ Let us assume that $\mathcal{R}_{l,j,j_{0}}^{k}(i_{n})\neq\varnothing.$ We can find $b\in(b_{*},b^{*})$ such that
$$\big|\omega(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)-\mu_{j_0,k,n}^{(\infty)}(b,\varepsilon)\big|\leqslant\tfrac{2\gamma_{n+1}|j-j_0|}{\langle l\rangle^{\tau_2}}\cdot$$
Using once more the triangle and Cauchy-Schwarz inequalities together with \eqref{in gamn} and \eqref{rigidity gam-N0}, we infer
\begin{align*}
\big|\mu_{j,k,n}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\big|&\leqslant 2\gamma_{n+1}|j-j_{0}|\langle l\rangle^{-\tau_{2}}+|{\omega}(b,\varepsilon)\cdot l|\\
&\leqslant 2\gamma_{n+1}|j-j_{0}|+C\langle l\rangle\\
&\leqslant 4\varepsilon^a|j-j_{0}|+C\langle l\rangle.
\end{align*}
On the other hand, the triangle inequality, Lemma \ref{lem-asym}-5, \eqref{e-uni r0} and \eqref{e-uni rjknfty} give for $\varepsilon$ small enough
\begin{align*}
\big|\mu_{j,k,n}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},k,n}^{(\infty)}(b,\varepsilon)| & \geqslant |\Omega_{j,k}(b)-\Omega_{j_{0},k}(b)|-\big|r_{k,n}^{(0)}(b,\varepsilon)\big||j-j_{0}|-\big|r_{j,k,n}^{(\infty)}(b,\varepsilon)\big|-\big|r_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\big|\\
& \geqslant \big(c-C\varepsilon^{1-a}\big)|j-j_{0}|\\
& \geqslant \tfrac{c}{2}|j-j_{0}|.
\end{align*}
Putting together the foregoing inequalities yields
$$\big(\tfrac{c}{2}-4\varepsilon^{a}\big)|j-j_0|\leqslant C\langle l\rangle.$$
Thus, for $\varepsilon$ sufficiently small, we get $|j-j_{0}|\leqslant C_{0}\langle l\rangle,$ for some $C_{0}>0.$\\
\noindent \textbf{(iv)} Observe that the case $j=j_0$ is trivial, so we may restrict our discussion to the case $j\neq j_0.$ Now, according to the symmetry property $\mu_{-j,k,n}^{(\infty)}(b,\varepsilon)=-\mu_{j,k,n}^{(\infty)}(b,\varepsilon),$ we can assume, without loss of generality, that $0<j<j_0.$ Take $b\in\mathcal{R}_{l,j,j_{0}}^{k}(i_{n}).$ Then by definition, we have
\begin{equation}\label{complmt Russ2}
\big|{\omega}(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\pm\mu_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\big|\leqslant\tfrac{2\gamma_{n+1}\langle j\pm j_{0}\rangle}{\langle l\rangle^{\tau_{2}}}\cdot
\end{equation}
Recall from \eqref{def V10 V20} and \eqref{ASYFR1+} the decompositions for $j>0,$
\begin{align*}
\mathtt{v}_{k}(b)&=\Omega+(2-k)\frac{1-b^2}{2},\\
\Omega_{j,k}(b)&=j\mathtt{v}_{k}(b)+\frac{(-1)^{k}}{2}+(-1)^{k+1}\mathtt{r}_{j}(b).
\end{align*}
Therefore, by the triangle inequality, we get
\begin{align}
\big|\omega(b,\varepsilon)\cdot l+(j\pm j_{0})\mathtt{c}_{k,n}(b,\varepsilon)\big| &\leqslant \big|\omega(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\pm\mu_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\big|+\tfrac{1}{2}(1\pm 1)\nonumber\\
&\quad+\big|\mathtt{r}_{j}(b)\pm\mathtt{r}_{j_0}(b)\big|+\big|r_{j,k,n}^{(\infty)}(b,\varepsilon)\pm r_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\big|.\label{link R0K and Rkk}
\end{align}
First, it is obvious that
\begin{equation}\label{trive 1pm1}
1\pm 1\leqslant \tfrac{\langle j\pm j_0\rangle}{j}\cdot
\end{equation}
Second, the estimate \eqref{ASYFR1-} implies in particular
\begin{align}\label{e-ttrj pm ttrj0}
\big|\mathtt{r}_{j}(b)\pm\mathtt{r}_{j_0}(b)\big|&\leqslant C\big(|j|^{-1}+|j_0|^{-1}\big)\nonumber\\
&\leqslant C\tfrac{\langle j\pm j_0\rangle}{j}\cdot
\end{align}
Third, the estimate \eqref{e-rjfty} together with \eqref{rigidity gam-N0} gives
\begin{align}\label{e-rjfty pm rj0fty}
\big|r_{j,k,n}^{(\infty)}(b,\varepsilon)\pm r_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\big| \leqslant &C \varepsilon^{1-a}\big(|j|^{-1}+|j_0|^{-1}\big)\nonumber\\
\leqslant & C \tfrac{\langle j\pm j_{0}\rangle}{j}\cdot
\end{align}
Gathering \eqref{complmt Russ2}, \eqref{link R0K and Rkk}, \eqref{trive 1pm1}, \eqref{e-ttrj pm ttrj0} and \eqref{e-rjfty pm rj0fty} yields
\begin{align*}
\nonumber \big|{\omega}(b,\varepsilon)\cdot l+(j\pm j_{0})\mathtt{c}_{k,n}(b,\varepsilon)\big| \leqslant & \tfrac{2\gamma_{n+1}\langle j\pm j_{0}\rangle}{\langle l\rangle^{\tau_{2}}}+ C \tfrac{\langle j\pm j_{0}\rangle}{j}\cdot
\end{align*}
Hence, using the assumptions $\displaystyle j\geqslant \tfrac{C}{2} \gamma_{n+1}^{-\upsilon}\langle l\rangle^{\tau_{1}}$ and $\tau_{2}>\tau_{1}$, we infer
$$\big|{\omega}(b,\varepsilon)\cdot l+(j\pm j_{0})\mathtt{c}_{k,n}^{}(b,\varepsilon)\big| \leqslant \tfrac{4\gamma_{n+1}^{\upsilon}\langle j\pm j_{0}\rangle}{\langle l\rangle^{\tau_{1}}}\cdot$$
This gives the desired result and ends the proof of Lemma \ref{some cantor set are empty}, up to defining $c_{2}\triangleq \frac{C}{2}\cdot$
\end{proof}
The next and last lemma is concerned with the transversality properties of the perturbed frequency vector $\omega(b,\varepsilon)$ given by Corollary \ref{Corollary NM}.
\begin{lem}\label{lem Ru-pert}
Let $q_{0}$, $C_{0}$ and $\rho_{0}$ be as in Lemma $\ref{lemma transversalityE}$. There exists $\varepsilon_{0}>0$ small enough such that for any $\varepsilon\in[0,\varepsilon_{0}]$ the following assertions hold true.
\begin{enumerate}[label=(\roman*)]
\item For all $l\in\mathbb{Z}^{d}\setminus\{0\}, $ we have
$$\inf_{b\in[b_*,b^*]}\max_{q'\in\llbracket 0,q_{0}\rrbracket}\big|\partial_{b}^{q'}\left(\omega(b,\varepsilon)\cdot l\right)\big|\geqslant\tfrac{\rho_{0}\langle l\rangle}{2}\cdot$$
\item For all $(l,j)\in\mathbb{Z}^{d}\times\mathbb{Z}_{\mathtt{m}}\setminus\{(0,0)\}$ such that $|j|\leqslant C_{0}\langle l\rangle,$ we have
$$\forall n\in\mathbb{N},\quad\inf_{b\in[b_*,b^*]}\max_{q'\in\llbracket 0,q_{0}\rrbracket}\left|\partial_{b}^{q'}\big(\omega(b,\varepsilon)\cdot l+jc_{k,n}(b,\varepsilon)\big)\right|\geqslant\tfrac{\rho_{0}\langle l\rangle}{2}\cdot$$
\item For all $(l,j)\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})$ such that $|j|\leqslant C_{0}\langle l\rangle,$ we have
$$\forall n\in\mathbb{N},\quad\inf_{b\in[b_*,b^*]}\max_{q'\in\llbracket 0,q_{0}\rrbracket}\left|\partial_{b}^{q'}\left(\omega(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)\right)\right|\geqslant\tfrac{\rho_{0}\langle l\rangle}{2}\cdot$$
\item For all $(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,k})^{2}$ such that $|j-j_0|\leqslant C_0\langle l\rangle$, we have
$$\forall n\in\mathbb{N},\quad\inf_{b\in[b_*,b^*]}\max_{q'\in\llbracket 0,q_{0}\rrbracket}\left|\partial_{b}^{q'}\left(\omega(b,\varepsilon)\cdot l+\mu_{j,k,n}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},k,n}^{(\infty)}(b,\varepsilon)\right)\right|\geqslant\tfrac{\rho_{0}\langle l\rangle}{2}\cdot$$
\item For all $(l,j,j_{0})\in\mathbb{Z}^{d}\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,1})\times(\mathbb{Z}_{\mathtt{m}}\setminus\overline{\mathbb{S}}_{0,2})$, we have
$$\forall n\in\mathbb{N},\quad\inf_{b\in[b_*,b^*]}\max_{q'\in\llbracket 0,q_{0}\rrbracket}\left|\partial_{b}^{q'}\left(\omega(b,\varepsilon)\cdot l+\mu_{j,1,n}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},2,n}^{(\infty)}(b,\varepsilon)\right)\right|\geqslant\tfrac{\rho_{0}\langle l,j,j_0\rangle}{2}\cdot$$
\end{enumerate}
\end{lem}
\begin{proof} The points (i), (ii), (iii) and (iv) are obtained following closely \cite[Lem. 7.3]{HR21}. The estimates are obtained by a perturbative argument from the equilibrium transversality conditions proved in Lemma \ref{lemma transversalityE}-1-2-3-4. Therefore, it remains to prove the point (v).\\
\textbf{(v)} Using the decompositions \eqref{mu0 r0}-\eqref{def mu lim}-\eqref{pert freq} together with \eqref{e-uni r0}, \eqref{e-uni rjknfty} and Lemma \ref{lemma transversalityE}-5, we get for $\varepsilon$ sufficiently small
\begin{align*}
\max_{q'\in\llbracket 0,q_{0}\rrbracket}\left|\partial_{b}^{q'}\Big(\omega(b,\varepsilon)\cdot l\right.&\left.\left.+\mu_{j,1,n}^{(\infty)}(b,\varepsilon)-\mu_{j_{0},2,n}^{(\infty)}(b,\varepsilon)\right)\right|
\geqslant\max_{q'\in\llbracket 0,q_{0}\rrbracket}\left|\partial_{b}^{q'}\Big(\omega_{\textnormal{Eq}}(b)\cdot l+\Omega_{j,1}(b)-\Omega_{j_{0},2}(b)\Big)\right|\\
&-\max_{q'\in\llbracket 0,q\rrbracket}\left|\partial_{b}^{q'}\left(\overline{\mathrm{r}}_{\varepsilon}(b)\cdot l+jr_{1,n}^{(0)}(b,\varepsilon)-j_{0}r_{2,n}^{(0)}(b,\varepsilon)+r_{j,1,n}^{(\infty)}(b,\varepsilon)-r_{j_{0},2,n}^{(\infty)}(b,\varepsilon)\right)\right|\\
&\geqslant\rho_{0}\langle l,j,j_0\rangle-C\varepsilon^{1-a(1+q+q\overline{a})}\langle l,j,j_0\rangle-C\varepsilon^{1-a(1+q)}\langle l,j,j_0\rangle\\
&\geqslant\tfrac{\rho_{0}\langle l,j,j_0\rangle}{2}\cdot
\end{align*}
This ends the proof of Lemma \ref{lem Ru-pert}.
\end{proof}
\section{Introduction}
The cosmic ray (CR) spectrum is characterized by a sharply falling power-law behaviour, $\frac{dN}{dE} \sim E^{-(\gamma + 1)} $\cite{gai}. The spectrum becomes steeper around
$10^6$ GeV, with the spectral index $\gamma$ changing from $1.7$ to
$2.1$; this region is called the {\it knee}. Around $E\sim 5\times
10^9$ GeV, one observes a flattening of the spectrum, with the
spectral index $\gamma$ falling between $1.4$ and $1.7$.
This is the so called {\it ankle}.
These two breaks of the primary spectrum are still open questions of CR physics.
The region beyond the ankle is the regime of ultra-high-energy cosmic rays. There is not much data available in that region, and no clear consensus exists on the composition or the particle content there \cite{nag}. It is generally believed that the change in the slope around the knee is astrophysical in nature rather than due to any specific change in hadronic
properties and/or interactions~\cite{nag,kas3}.
An overview of hadronic interactions and cosmic rays can be found in~\cite{lowxcr}.
The atmospheric muon flux originating from decays of pions and kaons is commonly called the {\it conventional muon} flux. A rather sharp reduction of the
conventional muon flux is expected above a few TeV \cite{lip} due to the increasing decay lengths and decreasing interaction lengths of pions and kaons. Therefore, at very high
energies the bulk of muons is expected to arise from the semileptonic decay modes of heavy short-lived hadrons, predominantly the charmed ones. This component is called the {\it prompt muon} flux.
It is known that the prompt muon flux is only about $10\%$ smaller than the prompt
$\nu_{\mu}$ flux at the surface of the earth. Therefore, measurement of the atmospheric prompt muon (APM) flux at high energies will ensure a normalization for the atmospheric prompt neutrino (APN) flux, and a direct comparison of the two is both desirable and necessary.
This study is necessary because the atmospheric neutrino flux is an unavoidable background to very high-energy (VHE) neutrino experiments.
There are sizeable uncertainties in theoretical predictions
for the prompt lepton fluxes (see~\cite{bug89, costa01}
for a review). This is mainly due to the vastly
different choices for the charm production cross section -- perturbative QCD (pQCD) with
a $K$ factor \cite{tig}, next-to-leading order (NLO) pQCD \cite{prs, ggv2}, or a phenomenological
nonperturbative approach, such as the recombination quark-parton model (RQPM) or the quark-gluon string model (QGSM)~\cite{bug89}. The experimental situation is not
very precise either at this stage. Various experiments~\cite{lvd}
provide upper limits on the APM fluxes in the energy range of interest, which allow a large variation in the prompt fluxes. One can therefore expect that better measurements of
high-energy muon fluxes can play a definitive role in selecting among the charm production models, thereby also providing invaluable information about parton densities at such low-$x$ and high-energy values. Another related source of large theoretical uncertainty
is the strong dependence of the hadronic cross sections on the renormalization and factorization scales. This is partly related to the naive extrapolation of parton distribution functions to very different energy and $x$ values. For the conventional fluxes
originating from pions and kaons, these issues are under much better control and
therefore the predictions stand on a sound footing.
Earlier, the authors of Refs.~\cite{gp, mp} explored the possibility of utilizing the
high-energy prompt muon fluxes in order to investigate whether the
general expectations expressed above can in practice help in
selecting the charm production model/parameterization, and also the importance of
the heavy composition of cosmic rays above the knee. They chose some of the
models often used and compared the predictions, incorporating the saturation model
of Golec-Biernat and W\"usthoff~\cite{gbw}. However, while estimating the
event rates of muons in a 50 kT iron detector like the INO one~\cite{ino}, they did not consider
the angular dependence of the muon fluxes at rock depth.
This angular dependence, induced by the surrounding rock, is important
for a correct estimation of the muon event rate inside such a detector.
In this work we calculate the high-energy atmospheric muon (AM) flux, conventional as well as prompt, at the INO rock depth, taking into account the distortion of the surface muon zenith-angle distribution due to the specific topography of the INO site.
It is therefore quite clear from all these models that the lepton fluxes are strongly sensitive to the charm production cross section. Up to the knee, the cosmic ray flux and composition are rather well established, and therefore the main source of large error is the charm cross section. This gives us a unique possibility to gain information about the heavy quark production mechanism at high energies and low $x$.
\section{Surface atmospheric muon flux and the calculation technique}
\subsection{Topography of PUSHEP site}
The slant depth $X$ depends on the topography of the rock
surrounding the INO detector; PUSHEP is the site selected for this purpose.
One can assume a constant depth equal to the vertical depth just above the cavern;
the vertical depth of the PUSHEP site is 1.3 km of rock. Another assumption is that of a triangular topography.
In this case the slant depth for a given zenith angle $\theta$ is calculated as
\begin{equation}
X(\theta) = \frac{h_0}{\cos\theta + (h_0/l_0)\,\sin\theta} \ ,
\end{equation}
where $h_0=1.3$ km is the vertical depth, $l_0=2.1$ km is the half-length of the approach
tunnel and $\tan\omega=h_0/l_0$ is the slope of the mountain.
\begin{figure}[!hb]
\begin{center}\hskip -0mm
\includegraphics[width=7.0cm]{triangle.eps} \hskip 5mm
\includegraphics[width=8.0cm]{Xtheta_3.eps}
\end{center}
\hskip 0mm
\vskip -10mm
\caption{Geometry and slant depth of PUSHEP site.}
\label{fig1}
\end{figure}
The triangular geometry of the site and the slant depth $X(\theta)$ are shown in Fig.~\ref{fig1}.
For the rock density $\varrho$ we adopt here the value $2.72$ g/cm$^{3}$.
The column depth $h(\theta)$, related to the slant depth by $h(\theta)= \varrho X(\theta)$,
varies between $h_{\rm min}\simeq 3.0\cdot 10^5$ g$\cdot$cm$^{-2}$ ($3$ km w.e.), which
corresponds to $\cos\theta_{\rm m}\simeq 0.85$, and
$h_{\rm max}\simeq 5.71\cdot 10^5$ g$\cdot$cm$^{-2}$ ($5.71$ km w.e.) near the horizontal. Near the vertical direction the column depth is about $3.54$ km w.e.
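The column-depth numbers quoted above follow directly from the slant-depth formula; below is a minimal sketch (plain Python; the constants $h_0$, $l_0$ and $\varrho$ are those stated in the text, and the function names are ours):

```python
import math

RHO = 2.72    # rock density, g/cm^3
H0 = 1.3e5    # vertical depth h_0 = 1.3 km, in cm
L0 = 2.1e5    # half-length of the approach tunnel l_0 = 2.1 km, in cm

def slant_depth(theta):
    """Slant depth X(theta), in cm, for the triangular topography."""
    return H0 / (math.cos(theta) + (H0 / L0) * math.sin(theta))

def column_depth(theta):
    """Column depth h(theta) = rho * X(theta), in g/cm^2."""
    return RHO * slant_depth(theta)

# vertical direction: about 3.54e5 g/cm^2 (3.54 km w.e.)
print(column_depth(0.0))
# minimum where tan(theta) = h_0/l_0, i.e. cos(theta) ~ 0.85: about 3.0e5
theta_m = math.atan(H0 / L0)
print(math.cos(theta_m), column_depth(theta_m))
# near the horizontal: X -> l_0, so h -> rho * l_0 ~ 5.71e5 g/cm^2
print(column_depth(math.pi / 2))
```

The minimum of the column depth sits where the denominator $\cos\theta + (h_0/l_0)\sin\theta$ is maximal, i.e. at $\tan\theta = h_0/l_0$, which reproduces the quoted $\cos\theta_{\rm m}\simeq 0.85$.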
\subsection{Parameterization of the conventional muon spectrum at sea level}
The surface muon flux is rather well measured up to TeV energies and can be described by different analytical formulae taking into account the zenith-angle dependence. Here we list some of those used in the present calculations.
First of all, we use Gaisser's muon flux parameterization~\cite{gai,gaisser04} (in units of
$ \rm{cm}^{-2}\rm s^{-1}\rm{sr}^{-1}\rm{GeV}^{-1}$)
\begin{equation}\label{gais}
\phi^{\rm \pi, K}_{\mu}(E_{\mu}) =0.14E_{\mu}^{-2.7}\left[\frac{1}{1+1.1(E_{\mu}/115 {\rm GeV})\cos\theta}+ \frac{0.054}{1+1.1 (E_{\mu}/850 {\rm GeV})\cos\theta} \right].
\end{equation}
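Gaisser's parameterization is straightforward to evaluate numerically. The sketch below (function name ours) returns the flux in $\rm{cm}^{-2}\rm s^{-1}\rm{sr}^{-1}\rm{GeV}^{-1}$:

```python
def gaisser_flux(E, cos_theta):
    """Conventional sea-level muon flux, Gaisser's formula above.

    E is the muon energy in GeV. The formula neglects muon decay and
    Earth-curvature effects, so it applies at high energy and not too
    large zenith angles.
    """
    pion = 1.0 / (1.0 + 1.1 * E * cos_theta / 115.0)
    kaon = 0.054 / (1.0 + 1.1 * E * cos_theta / 850.0)
    return 0.14 * E ** -2.7 * (pion + kaon)

print(gaisser_flux(1.0e2, 1.0))  # vertical, 100 GeV
print(gaisser_flux(1.0e5, 1.0))  # vertical, 100 TeV
```

Note the characteristic secant-theta behaviour at high energy: for fixed $E$ well above the critical energies $115$ and $850$ GeV, the flux grows as $\cos\theta$ decreases.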
For our purpose we work with a modified muon flux formula obtained by Tang et al.~\cite{tang}.
The next parameterization of the conventional muon flux we use here is that by Bugaev et al.~\cite{bug98} for the vertical direction:
\begin{equation}\label{fit1}
\phi^{\rm \pi, K}_{\mu}(p_{\mu}, 0^\circ) =
Cp_{\mu}^{-(\gamma_0+\gamma_1 z+\gamma_2 z^2+\gamma_3 z^3)} \, , \ \rm{cm}^{-2}\rm s^{-1}\rm{sr}^{-1}\rm{(GeV/c)}^{-1},
\end{equation}
where $z=\log_{10}(p_{\mu}/1~\rm{GeV/c})$. Values of parameters
in~Eq.~(\ref{fit1}) are listed in Table~\ref{t2} for different
momentum ranges.
The muon energy spectrum is $\phi^{\rm \pi, K}_{\mu}(E,\theta) =
(E_{\mu}/p_{\mu})\,\phi^{\rm \pi, K}_{\mu}(p_{\mu}, \theta) $.
\begin{table}[!h]
\protect\caption{Parameters in Eq.~(\protect\ref{fit1}) for the vertical
energy spectrum of conventional muons at sea level.
\label{t2}}
\center{\begin{tabular}{lccccc}\hline\hline
Momentum range, GeV/c $\ $ &
$C$, (cm\,${}^{2}$s\, sr\, GeV/c)$^{-1}$
& $ \gamma_0 $ & $ \gamma_1$ & $\gamma_2$ & $\gamma_3$ \\\hline
$1.0 - 927.65$ & $2.950\times10^{-3}$
& 0.3061 & 1.2743 & -0.2630 & 0.0252 \\
$927.65 - 1587.8$ & $1.781\times10^{-2}$
& 1.7910 & 0.3040 & 0 & 0 \\
$1587.8 - 4.1625\times10^5$ & $14.35$
& 3.6720 & 0 & 0 & 0 \\
$ > 4.1625\times10^5$ & $10^{ 3}$
& 4.0 & 0 & 0 & 0 \\
\hline\hline
\end{tabular}}
\end{table}
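The piecewise vertical spectrum can be coded directly from the table above; a useful sanity check on the transcribed parameters is that the four segments join smoothly (a sketch, with names of our choosing):

```python
import math

# segments of the vertical Bugaev et al. parameterization, from the table:
# (p_min, p_max, C, gamma0, gamma1, gamma2, gamma3), p in GeV/c
SEGMENTS = [
    (1.0,      927.65,       2.950e-3, 0.3061, 1.2743, -0.2630, 0.0252),
    (927.65,   1587.8,       1.781e-2, 1.7910, 0.3040,  0.0,    0.0),
    (1587.8,   4.1625e5,     14.35,    3.6720, 0.0,     0.0,    0.0),
    (4.1625e5, float("inf"), 1.0e3,    4.0,    0.0,     0.0,    0.0),
]

def phi_vertical(p):
    """Vertical conventional muon flux in (cm^2 s sr GeV/c)^-1."""
    z = math.log10(p)
    for p_lo, p_hi, C, g0, g1, g2, g3 in SEGMENTS:
        if p_lo <= p < p_hi:
            return C * p ** -(g0 + g1 * z + g2 * z ** 2 + g3 * z ** 3)
    raise ValueError("momentum below 1 GeV/c")

# the pieces match at the segment boundaries to well below one percent
for p_b in (927.65, 1587.8, 4.1625e5):
    print(p_b, phi_vertical(0.999 * p_b), phi_vertical(1.001 * p_b))
```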
For inclined directions we use the zenith-angle dependence given in
Ref.~\cite{tanya} (see also~\cite{ts}). As the third
parameterization of the atmospheric muon flux we use the formula
given in Ref.~\cite{reyna}.
\subsection{Prompt muon contribution}
Atmospheric prompt muon flux predictions are reviewed in Refs.~\cite{bug89, costa01}.
Ratios of the differential energy spectra of muons at sea level originating
from charmed particle decays to those of ($\pi,\,K$)-decays (conventional muons), calculated
for a variety of charm production models, are shown in Fig.~\ref{cc-rat} (see also~\cite{misaki}). Here PRS stands for the model of~\cite{prs}, GGV for~\cite{ggv2}, RQPM and QGSM for~\cite{bug89, bug98}, and VZ for Volkova and Zatsepin~\cite{vz}.
Among them we dwell below on the quark-gluon string model (QGSM),
as a sample of the phenomenological nonperturbative approach, and also on some models based on perturbative QCD computations, GGV~\cite{ggv2} and GBW~\cite{gbw}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=9.0cm]{rat_vz.eps}
\end{center}
\vskip -15mm
\hskip 0mm
\caption{Ratio of the prompt muon flux to the conventional one at ground level.}
\label{cc-rat}
\end{figure}
Gelmini, Gondolo and Varieschi (GGV)~\cite{ggv2} have included NLO corrections for the charm production with $xg(x) \sim x^{-\lambda}$ ($\lambda$ varying in the range $0$--$0.5$). These
results obey the following parameterization for the sea-level muon fluxes (see also~\cite{misaki}):
\begin{equation}\label{ggv}
\phi^{\rm GGV}_{\mu}(E_{\mu}) =
A\left(\frac{E_{\mu}}{1~\rm{GeV}}\right)^{-(a+by+cy^2+dy^3)} , \
\rm{cm}^{-2}\rm s^{-1}\rm{sr}^{-1}\rm{GeV}^{-1}.
\end{equation}
where $y= \log_{10}(E_{\mu}/1~\rm{GeV})$. The parameters are given in
Table~\ref{t1}. We choose two representative sets corresponding to
$\lambda=0.1$ (GGV01) and $\lambda=0.5$ (GGV05).
\begin{table}[!hb]
\protect\caption{GGV parameters for the prompt muon fluxes.
\label{t1}}
\center{\begin{tabular}{l|ccccccr|} \hline\hline
Model & A, cm$^{-2}$s$^{-1}$sr$^{-1}$GeV$^{-1} \ $& a & b & c & d \\\hline
GGV01& $3.12\times10^{-6}$\,& 2.70 & -0.095 & $1.49\times 10^{-2}$& $-0.2148\times 10^{-3}$ \\
GGV05& $0.58\times10^{-6}$ & 1.84 & 0.257 & $-4.05\times 10^{-2}$ & $2.455\times 10^{-3}$ \\
\hline\hline
\end{tabular}}
\end{table}
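A minimal numerical sketch of the GGV parameterization, using the parameter sets from the table above (function names ours):

```python
import math

# (A, a, b, c, d) for the two representative GGV parameter sets
GGV_PARAMS = {
    "GGV01": (3.12e-6, 2.70, -0.095,  1.49e-2, -0.2148e-3),
    "GGV05": (0.58e-6, 1.84,  0.257, -4.05e-2,  2.455e-3),
}

def ggv_flux(E, model):
    """Sea-level GGV prompt muon flux in cm^-2 s^-1 sr^-1 GeV^-1.

    E is the muon energy in GeV; y = log10(E) enters the running exponent.
    """
    A, a, b, c, d = GGV_PARAMS[model]
    y = math.log10(E)
    return A * E ** -(a + b * y + c * y ** 2 + d * y ** 3)

for E in (1.0e4, 1.0e5, 1.0e6):
    print(E, ggv_flux(E, "GGV01"), ggv_flux(E, "GGV05"))
```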
QGSM flux parameterization (that is valid for $\theta \lesssim 80^\circ$) may be
written~\cite{bug89} as
\begin{equation}\label{qgsm}
\phi^{\rm QGSM}_{\mu}(E_{\mu}) =1.09\cdot 10^{-18}\left(\frac{E_{\mu}}{100\,\rm{TeV}}\right)^{-3.02}
\left[ 1+ \left(\frac{E_{\mu}}{100\,\rm{TeV}}\right)^{-2.02} \right]^{-0.165}, \
\rm{cm}^{-2}\rm s^{-1}\rm{sr}^{-1}\rm{GeV}^{-1}.
\end{equation}
As the last representative model, we consider the flux calculation
within the saturation model proposed by Golec-Biernat and W\"usthoff \cite{gbw}. For this model, we consider two cases~\cite{mp}: GBW1, where protons are taken to be the primaries, and GBW2, where we also include the effect of heavy elements.
The sea-level prompt muon fluxes due to GBW1 and GBW2 can be parameterized as Eq.~(\ref{GBW1_pm}) and Eq.~(\ref{GBW2_pm}), respectively:
\begin{equation}\label{GBW1_pm}
\phi^{\rm GBW1}_{\mu}(E_{\mu}) =2.35\cdot 10^{-8}
\left(\frac{E_{\mu}}{1~\rm{GeV}}\right)^{-2.17145-0.04984 y}, \
\rm{cm}^{-2}\rm s^{-1}\rm{sr}^{-1}\rm{GeV}^{-1},
\end{equation}
\begin{equation}\label{GBW2_pm}
\phi^{\rm GBW2}_{\mu}(E_{\mu}) =1.09\cdot 10^{-8}
\left(\frac{E_{\mu}}{1~\rm{GeV}}\right)^{-1.79371-0.10873 y} , \
\rm{cm}^{-2}\rm s^{-1}\rm{sr}^{-1}\rm{GeV}^{-1}.
\end{equation}
These two cases are different in nature, with the expectation that GBW2 should lead to a decreased muon flux at higher energies.
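The QGSM and GBW parameterizations can be evaluated side by side; the sketch below (our naming) also confirms numerically that the GBW2 flux drops below the GBW1 one at the highest energies:

```python
import math

def qgsm_flux(E):
    """QGSM prompt muon flux, cm^-2 s^-1 sr^-1 GeV^-1; E in GeV.

    The parameterization above is quoted as valid for theta <~ 80 deg.
    """
    x = E / 1.0e5  # energy in units of 100 TeV
    return 1.09e-18 * x ** -3.02 * (1.0 + x ** -2.02) ** -0.165

def gbw_flux(E, heavy=False):
    """GBW prompt muon flux: GBW1 (protons only) or GBW2 (with heavies)."""
    y = math.log10(E)
    if heavy:  # GBW2
        return 1.09e-8 * E ** -(1.79371 + 0.10873 * y)
    return 2.35e-8 * E ** -(2.17145 + 0.04984 * y)  # GBW1

for E in (1.0e5, 5.0e5, 1.0e6):
    print(E, qgsm_flux(E), gbw_flux(E), gbw_flux(E, heavy=True))
```

Because the GBW2 exponent grows faster with $y=\log_{10}E_\mu$, the two curves cross and GBW2 falls below GBW1 at several hundred TeV, as expected.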
\subsection{Method to calculate the muon flux under thick layer of the rock}
These computations are based on the semianalytical method for solving the muon transport equation stated in Ref.~\cite{nsb94} (see also~\cite{bug98, ts}).
The method allows one to consider a realistic atmospheric muon spectrum and the energy behavior of the discrete energy loss spectra due to radiative and photonuclear interactions of muons in matter. Only the ionization energy loss of muons is treated as continuous. The method provides an effective tool to compute the energy spectra of cosmic-ray muons at large depths in homogeneous media. The benefits of this approach are the ability to verify the primary CR spectrum and composition, charm production models, and models of the photonuclear interaction with high performance and good precision. This enables one to estimate the sea-level muon spectrum using the data of underground/underwater measurements, evading the difficult inverse scattering problem.
\section{Expected muon flux at the depth of PUSHEP site}
Zenith-angle distributions of the conventional muon flux, calculated for five values of the minimal muon energy in the range $10$--$10^5$ GeV at the $1.3$ km depth of the INO detector, are shown in Fig.~\ref{conv_ang}. Here solid lines represent computations for the surface muon spectrum~\cite{bug98} by Bugaev et al., using the angle dependence obtained in Ref.~\cite{tanya} (see also~\cite{ts}).
Dashed lines, almost superimposed on the solid ones except near horizontal
directions, show results for the spectrum by Tang et al.~\cite{tang}, whereas dotted ones show
those for the spectrum by Reyna~\cite{reyna}. The geometry of the INO site is
reflected in the flat shape of the under-rock distribution (see Fig.~\ref{fig1}).
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=8cm]{conv_ang3.eps}
\hskip 0mm
\end{center}
\vskip -5mm
\caption{Angle distributions of the conventional muons near the INO detector.}
\label{conv_ang}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=8cm]{3mod_100-200v3.eps}
\hskip 0mm
\end{center}
\vskip -5mm
\caption{Zenith-angle distributions of atmospheric muons at $100$ and $200$ TeV.}
\label{100TeV}
\end{figure}
The zenith-angle dependence of the conventional and prompt muon fluxes at the INO depth is shown in Figs.~\ref{100TeV} and \ref{500TeV}. Fig.~\ref{100TeV} shows the prompt muon fluxes for muon energies above $100$ and $200$ TeV, calculated with QGSM charm production cross sections (dash-dotted lines) and with the GGV models. For muon energies above $500$ TeV we also plot predictions obtained for the GBW model (dashed line in Fig.~\ref{500TeV}).
As one may clearly observe in Fig.~\ref{500TeV}, measurement
of the high-energy muon flux near the vertical at the INO depth
could allow one to discriminate between the GGV01 ($\lambda=0.1$) model and the GGV05 ($\lambda=0.5$) or QGSM one, while the GBW prompt muon flux is unlikely to be observed at $500$ TeV.
Differential muon spectra (left panel) at the INO depth and integral ones (right panel) are
presented in Fig.~\ref{spectra} for $\cos\theta=0.7$, where the solid line shows the conventional muon flux obtained using the Bugaev et al. boundary spectrum and circles denote that for Gaisser's spectrum. We can see in Fig.~\ref{spectra} that the crossover energy for the conventional muon flux and the GGV05 prompt one is about $300$ TeV; therefore
it seems more suitable for prompt muon identification to analyse the zenith-angle dependence of the high-energy muon flux (see Fig.~\ref{100TeV}).
\begin{figure}[ht!]
\begin{center}
\hskip 0mm
\includegraphics[width=8cm]{5mod_500TeV.eps}
\end{center}
\vskip -5mm
\caption{Very high-energy zenith-angle distributions of atmospheric muons.}
\label{500TeV}
\end{figure}
\begin{figure}[ht!]
\begin{center} \hskip -5mm
\includegraphics[width=7cm]{diff_3mod.eps}
\hskip 10mm
\includegraphics[width=7cm]{int_3mod.eps}
\end{center}
\vskip -5mm
\caption{Energy spectra of the atmospheric muons near the INO detector.}
\label{spectra}
\end{figure}
The number of muon events per steradian per year expected at the INO detector near the direction $\cos\theta =0.7$ is presented in Table~\ref{t3} (see details in Ref.~\cite{gp}).
\begin{table}[!h]
\protect\caption{Number of muon events per steradian per year expected at the INO detector. \\
\label{t3}}
\center{\begin{tabular}{c|cccc|ccc}\hline\hline
$E_\mu$, TeV & $\quad $ conv. $\quad $ & GGV01 $\quad $ & GGV05 $\quad $ & QGSM $\quad $ & $R_c^{\rm GGV01}$ & $R_c^{\rm GGV05}$ & $R_c^{\rm QGSM}$\\\hline
$10 $ & $60097 $ & $1235 $ & $ 1353 $ & $3037 $ & $0.02$ & $0.022$ & $0.05$ \\
$50 $ & $832 $ & $73 $ & $ 105 $ & $159 $ & $0.087$ & $0.126$ & $0.19$ \\
$100 $ & $132 $ & $20 $ & $ 34 $ & $41 $ &$0.15$ & $0.258$ & $0.31$ \\
$200 $ & $ 20$ & $5.0 $ & $ 10 $ & $10 $ &$0.25$ & $0.50$ & $0.50$ \\
$300 $ & $ 6.0$ & $2.0 $ & $ 5.0 $ & $4.0 $ & $0.33$ & $0.83$ & $0.66$ \\
$400 $ & $ 2.6$ & $1.0 $ & $ 2.6 $ & $2.2 $ & $0.38$ & $1.0$ & $0.85$ \\
$500 $ & $ 1.4$ & $0.6 $ & $ 1.6 $ & $1.2 $ & $0.43$ & $1.14$ & $0.86$ \\
\hline\hline
\end{tabular}}
\end{table}
The last three columns in Table~\ref{t3} represent the ratio of the prompt muon flux to
the conventional one for three charm production models, GGV01, GGV05 and QGSM, respectively.
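As a quick consistency check, the $R_c$ columns of Table~\ref{t3} can be reproduced, up to rounding, as ratios of the prompt to the conventional event rates; e.g. for three representative rows:

```python
# (conventional, GGV01, GGV05, QGSM) events per sr per year,
# copied from three rows of the table (E_mu in TeV as keys)
ROWS = {
    10.0:  (60097.0, 1235.0, 1353.0, 3037.0),
    100.0: (132.0,   20.0,   34.0,   41.0),
    500.0: (1.4,     0.6,    1.6,    1.2),
}

for E, (conv, g01, g05, qgsm) in ROWS.items():
    # prompt-to-conventional ratios, matching the R_c columns up to rounding
    print(E, round(g01 / conv, 3), round(g05 / conv, 3), round(qgsm / conv, 2))
```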
\section{Summary}
The shape of the zenith-angle distributions of conventional muons is nearly flat
(see Figs.~\ref{conv_ang}--\ref{500TeV}). Therefore muons arriving at the detector close to vertical directions are more favorable for measuring the prompt muon flux.
The prompt muon contribution to the atmospheric muon flux
increases with energy because of the lower value of the prompt energy spectrum index.
The ``crossover'' energy, $E_c$, at which the prompt muon flux
becomes equal to the conventional one, depends strongly on
the charm production model. The following numbers illustrate (see Fig.~\ref{cc-rat})
the $E_c$ at the INO depth for some charm hadroproduction models: $E_c^{\rm GGV05}\simeq 250$ TeV, $E_c^{\rm QGSM}\simeq 300$ TeV, $E_c^{\rm GGV01}\simeq 600$ TeV.
From Table~\ref{t3} we can see that the prompt muon flux contribution due to the GGV01 model, for example, may differ from that for the GGV05 model (or QGSM) by a factor of 2 at $E_\mu > 200$ TeV. In other words, the expected number of muon events inside the INO detector may increase by $50\%$ at energies above $200$ TeV if the GGV05 or QGSM predictions are realistic.
\section {Acknowledgements}
The work of S.P. was supported by the Ministerio de Educación y Ciencia
under Proyecto Nacional FPA2006-01105, and also by the Comunidad de Madrid
under Proyecto HEPHACOS, Ayuda de I+D S-0505/ESP-0346.
S.P. would like to thank Indumati for
providing valuable information regarding the INO experiment. We thank Pankaj
Jain for carefully reading the manuscript.
S.~Sinegovsky acknowledges the support of the
Federal Programme ``Leading Scientific Schools of
Russian Federation'', grant NSh-5362.2006.2.
\pagebreak
\section{Introduction}
Throughout this paper, we will let $\mathbb{N}$ denote the set of positive integers, and we will let $\mathbb{N}_0$ denote the set of nonnegative integers.
\par
The arithmetic functions $\sigma_k$ are defined, for every integer $k$, by \\ $\displaystyle{\sigma_k(n)=\sum_{\substack{c\vert n\\c>0}}c^k}$. For each integer $k\neq 0$, $\sigma_k$ is multiplicative and satisfies \\ $\displaystyle{\sigma_k (p^\alpha)=\frac{p^{k(\alpha+1)}-1}{p^k-1}}$ for all (integer) primes $p$ and positive integers $\alpha$. The abundancy index of a positive integer $n$ is defined by $\displaystyle{I(n)=\frac{\sigma_1(n)}{n}}$. A positive integer $n$ is said to be $t$-perfect if $I(n)=t$ for a positive integer $t\geq 2$, and $2$-perfect numbers are called perfect numbers.
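These definitions are easy to experiment with numerically; a brute-force sketch over the integer divisors (exact rational arithmetic keeps $I(n)$ as a fraction):

```python
from fractions import Fraction

def sigma(k, n):
    """sigma_k(n): the sum of c^k over the positive divisors c of n."""
    return sum(c ** k for c in range(1, n + 1) if n % c == 0)

def abundancy(n):
    """The abundancy index I(n) = sigma_1(n) / n, as an exact rational."""
    return Fraction(sigma(1, n), n)

print(abundancy(6), abundancy(28))  # 2 and 2: the first two perfect numbers
print(abundancy(120))               # 3: 120 is 3-perfect
```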
\par
For any square-free integer $d$, let $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ be the quadratic integer ring given by \[\mathcal O_{\mathbb{Q}(\sqrt{d})}=\begin{cases} \mathbb{Z}[\frac{1+\sqrt{d}}{2}], & \mbox{if } d\equiv 1\imod{4}; \\ \mathbb{Z}[\sqrt{d}], & \mbox{if } d\equiv 2, 3 \imod{4}. \end{cases}\]
\par
Throughout the remainder of this paper, we will work in the rings $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ for different specific or arbitrary values of $d$. We will use the symbol ``$\vert$" to mean ``divides" in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ in which we are working.
Whenever we are working in a ring other than $\mathbb{Z}$, we will make sure to emphasize when we wish to state that one integer divides another in $\mathbb{Z}$.
For example, if we are working in $\mathbb{Z}[i]$, the ring of Gaussian integers, we might say that $1+i\vert 1+3i$ and that $2\vert 6$ in $\mathbb{Z}$. We will also refer to primes in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ as ``primes," whereas we will refer to (positive) primes in $\mathbb{Z}$ as ``integer primes." For an integer prime $p$ and a nonzero integer $n$, we will let $\upsilon_p(n)$ denote the largest integer $k$ such that $p^k\vert n$ in $\mathbb{Z}$. For a prime $\pi$ and a nonzero number $x\!\in\!\mathcal O_{\mathbb{Q}(\sqrt{d})}$, we will let $\rho_\pi(x)$ denote the largest integer $k$ such that $\pi^k\vert x$.
Furthermore, we will henceforth focus exclusively on values of $d$ for which $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ is a unique factorization domain and $d<0$. In other words, $d\in K$, where we will define $K$ to be the set $\{-163,-67,-43,-19,-11,-7,-3,-2,-1\}$. The set $K$ is known to be the complete set of negative values of $d$ for which $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ is a unique factorization domain \cite{Stark67}.
\par
For an element $a+b\sqrt{d}\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $a,b\in \mathbb{Q}$, we define the conjugate by $\overline{a+b\sqrt{d}}=a-b\sqrt{d}$. The norm and absolute value of an element $z$ are defined, respectively, by $N(z)=z\overline{z}$ and $\vert z\vert=\sqrt{N(z)}$. We assume familiarity with the properties of these objects, which are treated in Keith Conrad's online notes \cite{Conrad}. For $x,y\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$, we say that $x$ and $y$ are associated, denoted $x\sim y$, if and only if $x=uy$ for some unit $u$ in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Furthermore, we will make repeated use of the following well-known facts.
\begin{fact} \label{Fact1.1}
Let $d\!\in\! K$. If $p$ is an integer prime, then exactly one of the following is true.
\begin{itemize}
\item $p$ is also a prime in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. In this case, we say that $p$ is inert in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$.
\item $p\sim \pi^2$ and $\pi\sim\overline{\pi}$ for some prime $\pi\in \mathcal O_{\mathbb{Q}(\sqrt{d})}$. In this case, we say $p$ ramifies (or $p$ is ramified) in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$.
\item $p=\pi\overline{\pi}$ and $\pi\not\sim\overline{\pi}$ for some prime $\pi\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$. In this case, we say $p$ splits (or $p$ is split) in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$.
\end{itemize}
\end{fact}
\begin{fact} \label{Fact1.2}
Let $d\!\in\! K$. If $\pi\!\in\!\mathcal O_{\mathbb{Q}(\sqrt{d})}$ is a prime, then exactly one of the following is true.
\begin{itemize}
\item $\pi\sim q$ and $N(\pi)=q^2$ for some inert integer prime $q$.
\item $\pi\sim\overline{\pi}$ and $N(\pi)=p$ for some ramified integer prime $p$.
\item $\pi\not\sim\overline{\pi}$ and $N(\pi)=N(\overline{\pi})=p$ for some split integer prime $p$.
\end{itemize}
\end{fact}
\begin{fact} \label{Fact1.3}
If $d\!\in K\!$, $q$ is an integer prime that is inert in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, and $x\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$, then $\upsilon_q(N(x))$ is even and $\rho_q(x)=\frac{1}{2}\upsilon_q(N(x))$.
\end{fact}
\begin{fact} \label{Fact1.4}
Let $p$ be an odd integer prime. Then $p$ ramifies in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ if and only if $p\vert d$ in $\mathbb{Z}$. If $p\nmid d$ in $\mathbb{Z}$, then $p$ splits in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ if and only if $d$ is a quadratic residue modulo $p$. Note that this implies that $p$ is inert in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ if and only if $p\nmid d$ in $\mathbb{Z}$ and $d$ is a quadratic nonresidue modulo $p$.
Also, the integer prime $2$ ramifies in $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$ and $\mathcal O_{\mathbb{Q}(\sqrt{-2})}$, splits in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, and is inert in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ for all $d\in K\backslash\{-1,-2,-7\}$.
\end{fact}
\begin{fact} \label{Fact1.5}
Let $\mathcal O_{\mathbb{Q}(\sqrt{d})}^*$ be the set of units in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Then $\mathcal O_{\mathbb{Q}(\sqrt{-1})}^*=\{\pm 1,\pm i\}$, $\displaystyle{\mathcal O_{\mathbb{Q}(\sqrt{-3})}^*=\left\{\pm 1,\pm \frac{1+\sqrt{-3}}{2},\pm \frac{1-\sqrt{-3}}{2}\right\}}$, and $\mathcal O_{\mathbb{Q}(\sqrt{d})}^*=\{\pm 1\}$ \\
whenever $d\in K\backslash\{-1,-3\}$.
\end{fact}
\par
For a nonzero complex number $z$, let $\arg (z)$ denote the argument, or angle, of $z$. We convene to write $\arg (z)\in [0,2\pi)$ for all $z\in\mathbb{C}$. For each $d\in K$, we define the set $A(d)$ by
\[A(d)=\begin{cases} \{z\in\mathcal O_{\mathbb{Q}(\sqrt{d})} \backslash\{0\}: 0\leq \arg (z)<\frac{\pi}
{2}\}, & \mbox{if } d=-1; \\ \{z\in\mathcal O_{\mathbb{Q}(\sqrt{d})} \backslash\{0\}: 0\leq \arg (z)<\frac{\pi}
{3}\}, & \mbox{if } d=-3; \\ \{z\in\mathcal O_{\mathbb{Q}(\sqrt{d})} \backslash\{0\}: 0\leq \arg (z)<\pi\}, & \mbox{otherwise}. \end{cases}\]
Thus, every nonzero element of $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ can be written uniquely as a unit times a product of primes in $A(d)$. Also, every $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ is associated to a unique element of $A(d)$. The author has defined analogues of the arithmetic functions $\sigma_k$ in quadratic rings $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\in K$ \cite{Defant14A}, and we will state the important definitions and properties for the sake of completeness.
\begin{definition} \label{Def1.1}
Let $d\in K$, and let $n\in \mathbb{Z}$.
Define the function
\newline $\delta_n\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\rightarrow [1,\infty)$ by
\[\delta_n (z)=\sum_{\substack{x\vert z\\x\in A(d)}}\vert x \vert^n.\]
\end{definition}
\begin{remark} \label{Rem1.1}
We note that, for each $x$ in the summation in the above definition, we may cavalierly replace $x$ with one of its associates. This is because associated numbers have the same absolute value. In other words, the only reason for the criterion $x\!\in\! A (d)$ in the summation that appears in Definition \ref{Def1.1} is to forbid us from counting associated divisors as distinct terms in the summation, but we may choose to use any of the associated divisors as long as we only choose one. This should not be confused with how we count conjugate divisors (we treat $2+i$ and $2-i$ as distinct divisors of $5$ in $\mathbb{Z}[i]$ because $2+i\not\sim 2-i$).
\end{remark}
\begin{remark} \label{Rem1.2}
We mention that the function $\delta_n$ is different in each ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Perhaps it would be more precise to write $\delta_n(z,d)$, but we will omit the latter component for convenience. We note that we will also use this convention with functions such as $I_n$ (which we will define soon).
\end{remark}
\par
We will say that a function $f\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\!\rightarrow\!\mathbb{R}$ is multiplicative if $f(xy)=f(x)f(y)$ whenever $x$ and $y$ are relatively prime (have no nonunit common divisors). The author has shown that,
for any integer $n$, $\delta_n$ is multiplicative \cite{Defant14A}.
\begin{definition} \label{Def1.2}
For each positive integer $n$, define the function \\
$I_n\colon\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}\rightarrow[1,\infty)$ by $\displaystyle{I_n(z)=\frac{\delta_n(z)}{\vert z\vert ^n}}$.
For a positive integer $t\geq 2$, we say that a number $z\!\in\!\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ is \textit{$n$-powerfully $t$-perfect in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$} if $I_n(z)=t$, and, if $t=2$, we simply say that $z$ is \textit{$n$-powerfully perfect in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$}.
\end{definition}
As an example, we will let $d=-1$ so that $\mathcal O_{\mathbb{Q}(\sqrt{d})}=\mathbb{Z}[i]$. Let us compute $I_2(9+3i)$. We have $9+3i=3(1+i)(2-i)$, so $\delta_2(9+3i)=N(1)+N(3)+N(1+i)+N(2-i)+N(3(1+i))+N(3(2-i))+N((1+i)(2-i))+N(3(1+i)(2-i))=1+9+2+5+18+45+10+90=180$. Then $\displaystyle{I_2(9+3i)=\frac{180}{N(3(1+i)(2-i))}=2}$, so $9+3i$ is $2$-powerfully perfect in $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$.
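As a sanity check of the computation above, the following Python sketch (the helper names are ours, not part of the text) enumerates the divisors of a Gaussian integer up to associates by brute force, using the criterion that $d$ divides $z$ in $\mathbb{Z}[i]$ exactly when $N(d)$ divides both components of $z\bar{d}$:

```python
from math import isqrt

def gi_divisors(z):
    """One representative per associate class of the divisors of z in Z[i]."""
    x, y = z
    n = x * x + y * y                       # N(z)
    lim = isqrt(n)                          # any divisor has |re|, |im| <= sqrt(N(z))
    seen, reps = set(), []
    for a in range(-lim, lim + 1):
        for b in range(-lim, lim + 1):
            nd = a * a + b * b
            if nd == 0 or n % nd:
                continue
            # d = a+bi divides z iff N(d) divides both components of z*conj(d)
            if (x * a + y * b) % nd or (y * a - x * b) % nd:
                continue
            # canonical representative among the associates d, -d, id, -id
            key = min((a, b), (-a, -b), (-b, a), (b, -a))
            if key not in seen:
                seen.add(key)
                reps.append(key)
    return reps

def delta2(z):
    # sum of the norms of the divisors, one per associate class
    return sum(a * a + b * b for (a, b) in gi_divisors(z))
```

Running it reproduces the eight divisor classes of $9+3i$ together with $\delta_2(9+3i)=180$ and $I_2(9+3i)=2$.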
\begin{theorem} \label{Thm1.1}
Let $n\!\in\!\mathbb{N}$, $d\!\in\! K$, and $z_1, z_2, \pi\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ with $\pi$ a prime. Then, if we are working in the ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, the following statements are true.
\begin{enumerate}[(a)]
\item The range of $I_n$ is a subset of the interval $[1,\infty)$, and $I_n(z_1)=1$ if and only if $z_1$ is a unit in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. If $n$ is even, then $I_n(z_1)\in\mathbb{Q}$.
\item $I_n$ is multiplicative.
\item $I_n(z_1)=\delta_{-n}(z_1)$.
\item If $z_1\vert z_2$, then $I_n(z_1)\leq I_n(z_2)$, with equality if and only if $z_1\sim z_2$.
\end{enumerate}
\end{theorem}
We refer the reader to \cite{Defant14A} for a proof of Theorem \ref{Thm1.1}. The author has already investigated $1$-powerfully $t$-perfect numbers in imaginary quadratic rings with unique factorization, and he has shown that, for any integers $n\geq 3$ and $t\geq 2$, no $n$-powerfully $t$-perfect numbers exist in these rings \cite{Defant14B}. Hence, the remainder of this paper will focus on the interesting topic of $2$-powerfully $t$-perfect numbers.
\section{Investigating $2$-powerfully $t$-perfect Numbers}
Trying to find $2$-powerfully $t$-perfect numbers is quite a pleasant activity. One reason for this is that $2$ is the only positive integer $n$ for which there exist $n$-powerfully $t$-perfect numbers that are not associated to integers \cite{Defant14B}. For example, in $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, $3+9i$ is $2$-powerfully perfect, and $30+30i$ is $2$-powerfully $3$-perfect. We will also utilize the helpful fact that, for any $d\in K$ and $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$, we have $N(z)$, $\delta_2(z)\in\mathbb{N}$. In this section, we will focus on the rings $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, $\mathcal O_{\mathbb{Q}(\sqrt{-2})}$, and $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, which are the only rings $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\in K$ in which $2$ is not inert.
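Both examples are easy to verify via the multiplicativity of $\delta_2$. The sketch below (helper names are ours) starts from the hand-computed factorizations $3+9i=3(1+i)(2+i)$ and $30+30i=-i\cdot 3(1+i)^3(2+i)(2-i)$, confirms them by multiplying back, and evaluates $\delta_2$ factor by factor:

```python
def gmul(z, w):
    # product of two Gaussian integers represented as (re, im) pairs
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def norm(z):
    a, b = z
    return a * a + b * b

def delta2_from_factors(factors):
    # delta_2 of a product of prime powers, using multiplicativity:
    # each prime power pi^e contributes sum_{j=0}^{e} N(pi)^j
    out = 1
    for p, e in factors:
        out *= sum(norm(p) ** j for j in range(e + 1))
    return out

def rebuild(unit, factors):
    # multiply the unit and the prime powers back together
    z = unit
    for p, e in factors:
        for _ in range(e):
            z = gmul(z, p)
    return z

f1 = [((3, 0), 1), ((1, 1), 1), ((2, 1), 1)]                 # 3+9i = 3(1+i)(2+i)
f2 = [((3, 0), 1), ((1, 1), 3), ((2, 1), 1), ((2, -1), 1)]   # 30+30i = -i * 3(1+i)^3 (2+i)(2-i)
```

One finds $\delta_2(3+9i)=10\cdot 3\cdot 6=180=2N(3+9i)$ and $\delta_2(30+30i)=15\cdot 10\cdot 6\cdot 6=5400=3N(30+30i)$.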
\begin{theorem} \label{Thm2.1}
Let us work in a ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\in\{-1,-2\}$. Then $2$ ramifies in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, so we may write $2\sim\xi^2$ for some prime $\xi$ satisfying $\xi\sim\overline{\xi}$ and $N(\xi)=2$. Suppose $z$ is $2$-powerfully perfect in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ and $\xi\vert z$. Then we may write $z=\xi^\gamma x$, where $\gamma\in\mathbb{N}$, $x\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$, $\xi\nmid x$, and $2^{\gamma+1}-1$ is a Mersenne prime that is inert in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Furthermore, there exists an odd positive integer $m$ such that $\delta_2(x)=2^{\gamma+1}m$ and $N(x)=(2^{\gamma+1}-1)m$.
\end{theorem}
\begin{proof}
We know the first part of the theorem, which is stated simply to introduce notation. All that we need to prove is the final sentence of the theorem, as well as the fact that $2^{\gamma+1}-1$ is
a Mersenne prime that is inert in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. As $z$ is $2$-powerfully perfect in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, we have
\[\delta_2(z)=2N(z)=2N(\xi^\gamma)N(x)=2^{\gamma+1}N(x).\]
However, we
also have
\[\delta_2(z)=\delta_2(\xi^\gamma)\delta_2(x)=\left(\sum_{j=0}^\gamma N(\xi^j)\right)\delta_2(x)\]
\[=\left(\sum_{j=0}^\gamma 2^j\right)\delta_2(x)=(2^{\gamma+1}-1)\delta_2(x).\]
Therefore, $2^{\gamma+1}N(x)=(2^{\gamma+1}-1)\delta_2(x)$. As $2^{\gamma+1}-1$ is odd, we find that $2^{\gamma+1}\vert\delta_2(x)$ in $\mathbb{Z}$. We may then write $\delta_2(x)=2^{\gamma+1}m$ for some positive integer $m$. Substituting this new expression for $\delta_2(x)$ into the equation $2^{\gamma+1}N(x)=(2^{\gamma+1}-1)\delta_2(x)$, we find $N(x)=(2^{\gamma+1}-1)m$. This tells us that $m$ is odd because $\xi\nmid x$ (implying that $2\nmid N(x)$ in $\mathbb{Z})$. Suppose that $2^{\gamma+1}-1$ is not a prime in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ so that we may write $2^{\gamma+1}-1=y_1y_2$, where $y_1, y_2\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$ satisfy $1<N(y_1)\leq N(y_2)<N(2^{\gamma+1}-1)=(2^{\gamma+1}-1)^2$. Then, because $N(y_1)N(y_2)=N(2^{\gamma+1}-1)=(2^{\gamma+1}-1)^2$, we see that $N(y_1)\leq 2^{\gamma+1}-1$. Now, let $\pi_0$ be a prime that divides $y_1$. Then $\pi_0\vert N(x)$, which implies that either $\pi_0\vert x$ or $\overline{\pi_0}\vert x$. If $\pi_0\vert x$, write $\pi=\pi_0$. Otherwise, write $\pi=\overline{\pi_0}$.
Then $N(\pi)\leq N(y_1)\leq 2^{\gamma+1}-1$, and $\displaystyle{\frac{x}{\pi}}$ is a nonunit proper divisor of $x$. This implies that
\[\delta_2(x)\geq 1+N\left(\frac{x}{\pi}\right)+N(x)=1+\frac{N(x)}{N(\pi)}+N(x)\]
\[=1+\frac{(2^{\gamma+1}-1)m}{N(\pi)}+(2^{\gamma+1}-1)m\geq 1+\frac{(2^{\gamma+1}-1)m}{2^{\gamma+1}-1}+(2^{\gamma+1}-1)m\]
\[=1+2^{\gamma+1}m.\]
However, this contradicts the fact that $\delta_2(x)=2^{\gamma+1}m$, so we conclude that $2^{\gamma+1}-1$ is a prime in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. Furthermore, because $2^{\gamma+1}-1$ is an integer, we conclude that $2^{\gamma+1}-1$ is an inert integer prime that is also a Mersenne prime.
\end{proof}
\begin{theorem} \label{Thm2.2}
Let $z$, $m$, $\gamma$, and $x$ be as in Theorem \ref{Thm2.1}. Write $q=2^{\gamma+1}-1$ and $m=q^kv$, where $k\in\mathbb{N}_0$, $v\in\mathbb{N}$, and $q\nmid v$ in $\mathbb{Z}$. Then $k$ is odd, $v\geq q+2$, and
\[m\geq q^{k+1}+(q+3)\sum_{j=0}^{\frac{k-1}{2}}q^{2j}\geq q^2+q+3.\]
\end{theorem}
\begin{proof}
First, note that $q$ is inert and $\upsilon_q(N(x))=k+1$. Therefore, Fact \ref{Fact1.3} implies that $k$ is odd and $\displaystyle{\rho_q(x)=\frac{k+1}{2}}$. Next, assume that $v=1$. Then $m=q^k$, so $x\sim q^{\frac{k+1}{2}}$. This implies that $\displaystyle{\delta_2(x)=\sum_{j=0}^{\frac{k+1}{2}}q^{2j}\equiv 1\imod{q}}$. However, this contradicts Theorem \ref{Thm2.1}, which tells us, under the assumption $m=q^k$, that $\delta_2(x)=2^{\gamma+1}m=(q+1)m=(q+1)q^k\equiv 0\imod{q}$. Therefore, $v>1$. Now, write $\displaystyle{y=\frac{x}{q^{(k+1)/2}}}$. Then, using Theorem \ref{Thm2.1},
\[N(y)=\frac{N(x)}{N(q^{\frac{k+1}{2}})}=\frac{qm}{q^{k+1}}=\frac{q^{k+1}v}{q^{k+1}}=v.\]
Because $\displaystyle{\rho_q(x)=\frac{k+1}{2}}$, we see that $y$ and $q^{k+1}$ are relatively prime. Therefore,
\[\delta_2(x)=\delta_2(y)\delta_2(q^{\frac{k+1}{2}})=\delta_2(y)\sum_{j=0}^{\frac{k+1}{2}}q^{2j}\geq (v+1)\sum_{j=0}^{\frac{k+1}{2}}q^{2j}.\]
Theorem \ref{Thm2.1} states that $\delta_2(x)=2^{\gamma+1}m=(q+1)m$, so we have
\[(q+1)m\geq (v+1)\sum_{j=0}^{\frac{k+1}{2}}q^{2j}=q^{k+1}v+q^{k+1}+(v+1)\sum_{j=0}^{\frac{k-1}{2}}q^{2j}\]
\[=qm+q^{k+1}+(v+1)\sum_{j=0}^{\frac{k-1}{2}}q^{2j}.\]
We can simplify this last inequality to get
\begin{equation} \label{Eq2.1}
m\geq q^{k+1}+(v+1)\sum_{j=0}^{\frac{k-1}{2}}q^{2j}.
\end{equation}
Therefore, $\displaystyle{v=\frac{m}{q^k}\geq q+(v+1)\sum_{j=0}^{\frac{k-1}{2}}q^{2j-k}>q}$. As $v$ and $q$ are both odd and $v>q$, we conclude that $v\geq q+2$. Substituting this into \eqref{Eq2.1}, we have
\[m\geq q^{k+1}+(q+3)\sum_{j=0}^{\frac{k-1}{2}}q^{2j}\geq q^2+q+3,\]
which completes the proof.
\end{proof}
It is interesting to note that, in the case $z=3+9i$ in $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, the inequalities in Theorem \ref{Thm2.2} are, in fact, equalities. That is, $q=3$, $v=q+2=5$, and $m=q^2+q+3=15$. It seems likely, in light of the inequalities in Theorem \ref{Thm2.2}, that the value of $k$ in Theorem \ref{Thm2.2} must be $1$.
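These equalities are easy to confirm numerically. The check below (our notation) uses the data of Theorem \ref{Thm2.1} for $z=3+9i$, namely $\xi=1+i$, $\gamma=1$, and $x=3(2+i)$, with $\delta_2(x)$ evaluated by multiplicativity:

```python
gamma = 1                           # z = 3+9i = xi^gamma * x with xi = 1+i
q = 2 ** (gamma + 1) - 1            # q = 3, a Mersenne prime, inert in Z[i]
norm_x = 9 * 5                      # N(x) = N(3) * N(2+i) for x = 3(2+i)
delta2_x = (1 + 9) * (1 + 5)        # delta_2(x) by multiplicativity
m = norm_x // q                     # N(x) = (2^(gamma+1) - 1) m  =>  m = 15

# split m = q^k * v with q not dividing v
k, v = 0, m
while v % q == 0:
    k, v = k + 1, v // q
```

The script reproduces $\delta_2(x)=2^{\gamma+1}m=60$, $N(x)=qm=45$, $k=1$, $v=q+2=5$, and $m=q^2+q+3=15$.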
\par
We now prove results similar to Theorems \ref{Thm2.1} and \ref{Thm2.2} in the ring $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$.
\begin{theorem} \label{Thm2.3}
Let us work in the ring $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$ so that $2$ splits as $2=\varepsilon\overline{\varepsilon}$, where $\varepsilon=\frac{1+\sqrt{-7}}{2}$. Suppose $z$ is $2$-powerfully perfect in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$ and $2\vert N(z)$ in $\mathbb{Z}$. Then either $z=\varepsilon^\gamma x$ or $z=\overline{\varepsilon}^\gamma x$, where $\gamma\in\mathbb{N}$, $x\in\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, $2\nmid N(x)$ in $\mathbb{Z}$, and $2^{\gamma+1}-1$ is a Mersenne prime that is inert in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$. Furthermore, there exists an odd positive integer $m$ such that $\delta_2(x)=2^{\gamma+1}m$ and $N(x)=(2^{\gamma+1}-1)m$.
\end{theorem}
\begin{proof}
We know that we may write $z=\varepsilon^{\gamma_1}\overline{\varepsilon}^{\gamma_2}x$, where $\gamma_1$, $\gamma_2\in\mathbb{N}_0$, $x\in\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, and $2\!\nmid\! N(x)$ in $\mathbb{Z}$. Furthermore, we know from the fact that $2\vert N(z)$ in $\mathbb{Z}$ that $\gamma_1$ and $\gamma_2$ are not both
zero. We must prove that either $\gamma_1=0$ or $\gamma_2=0$. Then, after setting $\gamma=\gamma_1+\gamma_2$, we need to prove the final sentence of the theorem and the fact that $2^{\gamma+1}-1$ is a Mersenne prime that is inert in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$.
\par
As $z$ is $2$-powerfully perfect in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, we have \[\delta_2(z)=2N(z)=2N(\varepsilon^{\gamma_1} )N(\overline{\varepsilon}^{\gamma_2})N(x)=2^{\gamma_1+\gamma_2+1}N(x).\]
However, we also have
\[\delta_2(z)=\delta_2(\varepsilon^{\gamma_1})\delta_2(\overline{\varepsilon}^{\gamma_2})\delta_2(x)=\left(\sum_{j=0}^{\gamma_1}
N(\varepsilon^j)\right)\left(\sum_{j=0}^{\gamma_2}N(\overline{\varepsilon}^j)\right)\delta_2(x)\]
\[=\left(\sum_{j=0}^{\gamma_1}2^j\right)\left(\sum_{j=0}^{\gamma_2}2^j\right)\delta_2(x)=(2^{\gamma_1+1}-1)(2^{\gamma_2+1}-1)\delta_2(x).\]
Therefore, $2^{\gamma_1+\gamma_2+1}
N(x)=(2^{\gamma_1+1}-1)(2^{\gamma_2+1}-1)\delta_2(x)$. As $(2^{\gamma_1+1}-1)(2^{\gamma_2+1}-1)$ is odd, we find that $2^{\gamma_1+\gamma_2+1}\vert\delta_2(x)$ in $\mathbb{Z}$. We may then write $\delta_2(x)=2^{\gamma_1+\gamma_2+1}m$ for
some positive integer $m$. Substituting this new expression for $\delta_2(x)$ into the equation $2^{\gamma_1+\gamma_2+1} N(x)=(2^{\gamma_1+1}-1)(2^{\gamma_2+1}-1)\delta_2(x)$, we find $N(x)=(2^{\gamma_1+1}-1)(2^{\gamma_2+1}-1)m$. This tells us that $m$ is odd because $2\nmid N(x)$ in $\mathbb{Z}$. Now, $2^{\gamma_1+\gamma_2+1}m=\delta_2(x)\geq 1+N(x)=1+(2^{\gamma_1+1}-1)(2^{\gamma_2+1}-1)m$, so $2^{\gamma_1+\gamma_2+1}>(2^{\gamma_1+1}-1)(2^{\gamma_2+1}-1)=2\cdot2^{\gamma_1+\gamma_2+1}-2^{\gamma_1+1}-2^{\gamma_2+1}+1$. Simplifying this inequality, we have $2^{\gamma_1+1}+2^{\gamma_2+1}>2^{\gamma_1+\gamma_2+1}+1$, which is impossible unless $\gamma_1=0$ or $\gamma_2=0$. Therefore, either $z=\varepsilon^{\gamma_1}x$ or $z=\overline{\varepsilon}^{\gamma_2}x$. Either way, if we write $\gamma=\gamma_1+\gamma_2$, then we have $\delta_2(x)=2^{\gamma+1}m$ and $N(x)=(2^{\gamma+1}-1)m$. Suppose that $2^{\gamma+1}-1$ is not a prime in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$ so that we may write
$2^{\gamma+1}-1=y_1y_2$, where $y_1,y_2\in\mathcal O_{\mathbb{Q}(\sqrt{-7})}$ satisfy $1<N(y_1)\leq N(y_2)<N(2^{\gamma+1}-1)=(2^{\gamma+1}-1)^2$. Then, because $N(y_1)N(y_2)=N(2^{\gamma+1}-1)=(2^{\gamma+1}-1)^2$, we see that $N(y_1)\leq 2^{\gamma+1}-1$. Now, let $\pi_0$ be a prime that divides $y_1$. Then $\pi_0\vert N(x)$, which implies that either $\pi_0\vert x$ or $\overline{\pi_0}\vert x$. If $\pi_0\vert x$, write $\pi=\pi_0$. Otherwise, write $\pi=\overline{\pi_0}$.
Then $N(\pi)\leq N(y_1)\leq 2^{\gamma+1}-1$, and $\displaystyle{\frac{x}{\pi}}$ is a nonunit proper divisor of $x$. This implies that
\[\delta_2(x)\geq 1+N\left(\frac{x}{\pi}\right)+N(x)=1+\frac{N(x)}{N(\pi)}+N(x)\]
\[=1+\frac{(2^{\gamma+1}-1)m}{N(\pi)}+(2^{\gamma+1}-1)m\geq 1+\frac{(2^{\gamma+1}-1)m}{2^{\gamma+1}-1}+(2^{\gamma+1}-1)m\]
\[=1+2^{\gamma+1}m.\]
However, this contradicts the fact that $\delta_2(x)=2^{\gamma+1}m$, so we conclude that $2^{\gamma+1}-1$ is a prime in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$. Furthermore, because $2^{\gamma+1}-1$ is an integer, we conclude that $2^{\gamma+1}-1$ is an inert integer prime that is also a Mersenne prime.
\end{proof}
\begin{theorem} \label{Thm2.4}
Let $z$, $m$, $\gamma$, and $x$ be as in Theorem \ref{Thm2.3}. Write $q=2^{\gamma+1}-1$ and $m=q^kv$, where $k\in\mathbb{N}_0$, $v\in\mathbb{N}$, and $q\nmid v$ in $\mathbb{Z}$. Then $k$ is odd, $v\geq q+2$, $\gamma\equiv 1\imod{3}$, $q\equiv 3\imod{7}$, and
\[m\geq q^{k+1}+(q+3)\sum_{j=0}^{\frac{k-1}{2}}q^{2j}\geq q^2+q+3.\]
\end{theorem}
\begin{proof}
Fact \ref{Fact1.4} tells us that an integer prime is inert in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$ if and only if that integer prime is congruent to $3$, $5$, or $6$ modulo $7$. Also, powers of $2$ are congruent to $1$, $2$, or $4$ modulo $7$, so $2^{\gamma+1}$ cannot be congruent to $6$ or $0$ modulo $7$, meaning that $q=2^{\gamma+1}-1$ cannot be congruent to $5$ or $6$ modulo $7$. Therefore, as $q$ is a Mersenne prime that is inert in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, we must have $q\equiv 3\imod{7}$. This implies that $2^{\gamma+1}\equiv 4\imod{7}$, so $\gamma\equiv 1\imod{3}$. The proof of the rest of the theorem is identical to the proof of Theorem \ref{Thm2.2}, except all references to Theorem \ref{Thm2.1} should be replaced with references to Theorem \ref{Thm2.3}.
\end{proof}
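The congruence step can also be confirmed by a quick brute-force loop (a check we add for illustration): powers of $2$ modulo $7$ only take the values $1$, $2$, $4$, and $q=2^{\gamma+1}-1\equiv 3\imod{7}$ precisely when $\gamma\equiv 1\imod{3}$.

```python
# powers of 2 modulo 7 cycle through 2, 4, 1
powers_mod7 = sorted({pow(2, n, 7) for n in range(1, 200)})

# q = 2^(gamma+1) - 1 is congruent to 3 mod 7 iff gamma is congruent to 1 mod 3
mersenne_check = all(
    ((2 ** (g + 1) - 1) % 7 == 3) == (g % 3 == 1) for g in range(1, 100)
)
```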
Within the rings $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, $\mathcal O_{\mathbb{Q}(\sqrt{-2})}$, and $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, Theorems \ref{Thm2.1} through \ref{Thm2.4} examine some properties of $2$-powerfully perfect numbers with even
norms. These numbers are somewhat analogous to perfect numbers in $\mathbb{Z}$. The analogues of odd perfect numbers are then $2$-powerfully perfect numbers with odd norms. We now briefly explore some of the properties that such numbers would need to exhibit.
\begin{theorem} \label{Thm2.5}
Let us work in a ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\in K$. Suppose $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ is such that $I_2(z)=2$ and $N(z)$ is odd (suppose such a $z$ exists). Then
we may write $z\sim\pi^kx^2$, where $\pi,x\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$, $\pi$ is prime, and $k\in\mathbb{N}$. Furthermore, $k\equiv N(\pi)\equiv 1\imod{4}$.
\end{theorem}
\begin{proof}
First, let $\pi_0$ be a prime whose norm is odd, and let $\alpha$ be a positive integer. As $\displaystyle{\delta_2(\pi_0^\alpha)=\sum_{j=0}^\alpha N(\pi_0^j)=\sum_{j=0}^\alpha N(\pi_0)^j}$ and $N(\pi_0)$ is odd, we see that $\alpha$ and $\delta_2(\pi_0^\alpha)$ have opposite parities.
\par
Now, from $I_2(z)=2$, we have $\delta_2(z)=2N(z)$.
Because $N(z)$ is odd, we find that $\delta_2(z)\equiv 2\imod{4}$. Write $\displaystyle{z=\prod_{j=1}^r\pi_j^{\alpha_j}}$, where, for all distinct $j,l\in\{1,2,\ldots,r\}$, $\pi_j$ is prime, $\alpha_j$ is a positive integer, and $\pi_j\not\sim \pi_l$. Then $\displaystyle{\delta_2(z)=\prod_{j=1}^r\delta_2(\pi_j^{\alpha_j})}$. Because $\delta_2(z)\equiv 2\imod{4}$, we find that
there must be exactly one value of $j\in\{1,2,\ldots,r\}$ such that $\delta_2(\pi_j^{\alpha_j})$ is even. This means that there is exactly one value of $j\in\{1,2,\ldots,r\}$ such that $\alpha_j$ is odd. Therefore, $z\sim \pi^kx^2$, where $\pi,x\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$, $\pi$ is prime, and $k$ is an odd positive
integer. Furthermore, $\delta_2(\pi^k)\equiv 2\imod{4}$.
\par
If $N(\pi)=q^2$, where $q$ is an inert integer prime, then \[\delta_2(\pi^k)=\sum_{l=0}^kN(\pi^l)=\sum_{l=0}^kq^{2l}\equiv\sum_{l=0}^k1\equiv k+1\imod{4}.\] Therefore, in this case, we have
$k\equiv 1\imod{4}$. Also, because $N(\pi)=q^2$ and $q$ is odd, we know that $N(\pi)\equiv 1\imod{4}$.
\par
On the other hand, if $N(\pi)=p$ is an integer prime, then \[\delta_2(\pi^k)=\sum_{l=0}^kN(\pi^l)=\sum_{l=0}^kp^l\equiv 2\imod{4},\] which implies that $p\equiv k\equiv 1\imod{4}$.
\end{proof}
\begin{theorem} \label{Thm2.6}
Let us work in a ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\in\{-1,-2\}$. Let $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}\backslash\{0\}$ be such that $I_2(z)=2$ and $N(z)$ is odd (suppose such a $z$ exists). Then $z$ has at least five nonassociated prime divisors.
\end{theorem}
\begin{proof}
Suppose $z$ has four or fewer nonassociated prime divisors. Then we may write $z\sim\pi_1^{\alpha_1}\pi_2^{\alpha_2}\pi_3^{\alpha_3}\pi_4^{\alpha_4}$, where, for all distinct $j,l\in\{1,2,3,4\}$, $\pi_j$ is prime, $\alpha_j$ is a nonnegative integer, and $\pi_j\not\sim\pi_l$.
\par
First, let us deal with the case $d=-1$. In the ring $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, the five primes (up to units) that have the smallest odd norms are $2+i$, $1+2i$, $3$, $3+2i$, and $2+3i$, which have norms $5$, $5$, $9$, $13$, and $13$, respectively. Therefore,
\[I_2(z)=I_2(\pi_1^{\alpha_1}\pi_2^{\alpha_2}\pi_3^{\alpha_3}\pi_4^{\alpha_4})\]
\[=\left(\sum_{j=0}^{\alpha_1}\frac{1}{N(\pi_1)^j}\right)\left(\sum_{j=0}^{\alpha_2}\frac{1}{N(\pi_2)^j}\right)\left(\sum_{j=0}^{\alpha_3}\frac{1}{N(\pi_3)^j}\right)\left(\sum_{j=0}^{\alpha_4}\frac{1}{N(\pi_4)^j}\right)\]
\[<\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_1)^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_2)^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_3)^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_4)^j}\right)\]
\[\leq\left(\sum_{j=0}^{\infty}\frac{1}{5^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{5^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{9^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{13^j}\right)=\frac{5}{4}\cdot\frac{5}{4}\cdot\frac{9}{8}\cdot\frac{13}{12}<2,\] which is a contradiction.
\par
Second, let us deal with the case $d=-2$. In the ring $\mathcal O_{\mathbb{Q}(\sqrt{-2})}$, the integer prime $3$ splits as $3=(1+\sqrt{-2})(1-\sqrt{-2})$. Suppose $1+\sqrt{-2}\vert z$ and $1-\sqrt{-2}\vert z$. Then, because $N(1+\sqrt{-2})=N(1-\sqrt{-2})=3\not\equiv 1\imod{4}$, Theorem \ref{Thm2.5} implies that $1+\sqrt{-2}$ and $1-\sqrt{-2}$ must both appear with even exponents in the prime factorization of $z$. In particular, $(1+\sqrt{-2})^2(1-\sqrt{-2})^2\vert z$. Therefore, by part $(d)$ of Theorem \ref{Thm1.1},
\[I_2(z)\geq I_2((1+\sqrt{-2})^2)I_2((1-\sqrt{-2})^2)=\left(1+\frac{1}{3}+\frac{1}{9}\right)^2>2,\]
which is a contradiction. This implies that $1+\sqrt{-2}$ and $1-\sqrt{-2}$ cannot both divide $z$. Now, the six primes (up to units) that have the smallest odd norms are $1+\sqrt{-2}$, $1-\sqrt{-2}$, $3+\sqrt{-2}$, $3-\sqrt{-2}$, $3+2\sqrt{-2}$, and $3-2\sqrt{-2}$, which have norms $3$, $3$, $11$, $11$, $17$, and $17$, respectively. Because $1+\sqrt{-2}$ and $1-\sqrt{-2}$ cannot both divide $z$, we have \[I_2(z)=I_2(\pi_1^{\alpha_1}\pi_2^{\alpha_2}\pi_3^{\alpha_3}\pi_4^{\alpha_4})\]
\[=\left(\sum_{j=0}^{\alpha_1}\frac{1}{N(\pi_1)^j}\right)\left(\sum_{j=0}^{\alpha_2}\frac{1}{N(\pi_2)^j}\right)\left(\sum_{j=0}^{\alpha_3}\frac{1}{N(\pi_3)^j}\right)\left(\sum_{j=0}^{\alpha_4}\frac{1}{N(\pi_4)^j}\right)\]
\[<\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_1)^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_2)^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_3)^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_4)^j}\right)\]
\[\leq\left(\sum_{j=0}^{\infty}\frac{1}{3^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{11^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{11^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{17^j}\right)=\frac{3}{2}\cdot\frac{11}{10}\cdot\frac{11}{10}\cdot\frac{17}{16}<2,\]
which is a contradiction.
\end{proof}
\begin{theorem} \label{Thm2.7}
Let us work in the ring $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$. Let $z\in\mathcal O_{\mathbb{Q}(\sqrt{-7})}\backslash\{0\}$ be such that $I_2(z)=2$ and $N(z)$ is odd (suppose such a $z$ exists). Then $z$ has at least eleven nonassociated prime divisors.
\end{theorem}
\begin{proof}
Suppose $z$ has ten or fewer nonassociated prime divisors. Then we may write $\displaystyle{z\sim\prod_{m=1}^{10}\pi_m^{\alpha_m}}$, where, for all distinct $m,l\in\{1,2,\dots,10\}$, $\pi_m$ is prime, $\alpha_m$ is a nonnegative integer, and $\pi_m\not\sim\pi_l$. In $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, the eleven primes (up to units) that have the smallest odd norms are $\sqrt{-7}$, $3$, $2+\sqrt{-7}$, $2-\sqrt{-7}$, $4+\sqrt{-7}$, $4-\sqrt{-7}$, $5$, $1+2\sqrt{-7}$, $1-2\sqrt{-7}$, $3+2\sqrt{-7}$, and $3-2\sqrt{-7}$, which have norms $7$, $9$, $11$, $11$, $23$, $23$, $25$, $29$, $29$, $37$, and $37$, respectively. Therefore,
\[I_2(z)=\prod_{m=1}^{10}I_2(\pi_m^{\alpha_m})=\prod_{m=1}^{10}\left(\sum_{j=0}^{\alpha_m}\frac{1}{N(\pi_m)^j}\right)<\prod_{m=1}^{10}\left(\sum_{j=0}^{\infty}\frac{1}{N(\pi_m)^j}\right)\]
\[\leq \left(\sum_{j=0}^{\infty}\frac{1}{7^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{9^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{11^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{11^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{23^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{23^j}\right)\]
\[\cdot\left(\sum_{j=0}^{\infty}\frac{1}{25^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{29^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{29^j}\right)\left(\sum_{j=0}^{\infty}\frac{1}{37^j}\right)\]
\[=\frac{7}{6}\cdot\frac{9}{8}\cdot\frac{11}{10}\cdot\frac{11}{10}\cdot\frac{23}{22}\cdot\frac{23}{22}\cdot\frac{25}{24}\cdot\frac{29}{28}\cdot\frac{29}{28}\cdot\frac{37}{36}<2,\]
which is a contradiction.
\end{proof}
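The three bounding products used in the proofs of Theorems \ref{Thm2.6} and \ref{Thm2.7} can be verified exactly with rational arithmetic, since each infinite geometric series $\sum_{j\geq 0}p^{-j}$ equals $\frac{p}{p-1}$. A short check (our own script):

```python
from fractions import Fraction

def geo(p):
    # closed form of the geometric series sum_{j=0}^infty p^(-j)
    return Fraction(p, p - 1)

bound_d1 = geo(5) * geo(5) * geo(9) * geo(13)                    # case d = -1
bound_d2 = geo(3) * geo(11) * geo(11) * geo(17)                  # case d = -2
bound_d7 = (geo(7) * geo(9) * geo(11) ** 2 * geo(23) ** 2        # case d = -7
            * geo(25) * geo(29) ** 2 * geo(37))
```

All three products fall strictly below $2$, as required by the contradictions in the proofs.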
We conclude this section with a remark about $2$-powerfully perfect numbers in $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, $\mathcal O_{\mathbb{Q}(\sqrt{-2})}$, and $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$ that have odd norms. In each of these three rings, there
is a prime, say $\xi$, with norm $2$. If $d\in\{-1,-2,-7\}$, $z\in\mathcal O_{\mathbb{Q}(\sqrt{d})}$, $I_2(z)=2$, and $N(z)$ is odd, then $\xi z$ is $2$-powerfully $3$-perfect in $\mathcal O_{\mathbb{Q}(\sqrt{d})}$. This is simply because, under these assumptions, we find that $\displaystyle{I_2(\xi z)=I_2(\xi)I_2(z)=\frac{1+2}{2}I_2(z)=\frac{3}{2}\cdot 2=3}$.
\section{Further Ideas and a Conjecture}
We admit that we directed almost all of our attention toward $2$-powerfully perfect numbers, rather than the more general $2$-powerfully $t$-perfect numbers. Hence, the subject of $2$-powerfully $t$-perfect numbers awaits exploration. We also concentrated so heavily on the rings $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$, $\mathcal O_{\mathbb{Q}(\sqrt{-2})}$, and $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$ when dealing with $2$-powerfully perfect numbers that we left open all questions about the rings $\mathcal O_{\mathbb{Q}(\sqrt{d})}$ with $d\!\in\! K$ in which $2$ is inert. We mentioned that $3+9i$ and $9+3i$ are $2$-powerfully perfect and that $30+30i$ is $2$-powerfully $3$-perfect in $\mathcal O_{\mathbb{Q}(\sqrt{-1})}$. Andrew Lelechenko has observed that $84+4788i$ and $1764+4452i$ are also $2$-powerfully $3$-perfect in this ring. Are there other $2$-powerfully $t$-perfect numbers in this ring? What about in other rings?
\par
Referring to the concluding paragraph of Section 2, we might ask if there are other relationships between different types of $n$-powerfully $t$-perfect numbers. More specifically, in a given ring $\mathcal O_{\mathbb{Q}(\sqrt{d})}$, are there certain criteria which would guarantee that some specific multiple of an $n_1$-powerfully $t_1$-perfect number is $n_2$-powerfully $t_2$-perfect (for some $n_1,n_2,t_1,t_2\in\mathbb{N}$ with $t_1,t_2\geq 2$)?
\begin{conjecture} \label{Conj3.1}
The value of $k$ in Theorem \ref{Thm2.2} must be $1$. Similarly, if there is a $2$-powerfully perfect number in $\mathcal O_{\mathbb{Q}(\sqrt{-7})}$, then the value of $k$ in Theorem \ref{Thm2.4} must be $1$.
\end{conjecture}
\section{Acknowledgments}
The author would like to thank Professor Pete Johnson for inviting him to the 2014 REU Program in Algebra and Discrete Mathematics at Auburn University.
\section{Introduction}
The memory effect of gravitational waves (GWs) was first discovered by Zeldovich, Braginsky, Thorne and their coworkers \cite{Zeldovich74,Pay83,Braginsky:1986ia,braginsky1987gravitational}. This memory effect is produced directly by the gravitational wave source. Later Christodoulou found that the gravitational wave itself can also produce memory \cite{PhysRevLett.67.1486,Fra92}.
The memory found before Christodoulou's work is usually called ordinary memory; it is produced by the change of the quadrupole moment of the source. The memory found by Christodoulou is called nonlinear memory. Thorne \cite{Tho92} assumed a relation between the gravitational wave flux and the nonlinear memory through an analogy of `quadrupole moment change of gravitational wave energy'
\begin{align}
\Delta h_{jk}^{\rm TT}=\frac{4}{r}\int\frac{dE}{d\Omega'}\left(\frac{\xi^{'j}\xi^{'k}}{1-\cos\theta'}\right)^{\rm TT}d\Omega'.\label{assumeq}
\end{align}
This relation corresponds to Eq.~(2) of \cite{Tho92}. The integral is over the solid angle $\Omega'$ surrounding the source, $E$ is the energy of the gravitational waves, $\xi^{'j}$ is a unit vector pointing from the source toward $d\Omega'$, and $\theta'$ is the angle between $\xi^{'j}$ and the direction to the detector. The assumed relation (\ref{assumeq}) can be shown to be valid when the conditions of the post-Newtonian approximation are satisfied \cite{Tho92,WisWil91,BlaDam92}.
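To illustrate the transverse-traceless (TT) projection implicit in the relation above, the following numerical sketch (our own helper, using the explicit projector formula $A^{\rm TT}=PAP-\frac{1}{2}P\,{\rm tr}(PA)$ with $P_{ij}=\delta_{ij}-N_iN_j$) projects the integrand $\xi^{'j}\xi^{'k}/(1-\cos\theta')$ transverse to the line of sight $N^i$ and checks that the result is symmetric, traceless, and transverse:

```python
import math

def tt_project(A, N):
    """TT part of a 3x3 matrix A with respect to the unit vector N."""
    P = [[(i == j) - N[i] * N[j] for j in range(3)] for i in range(3)]
    PA = [[sum(P[i][k] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    PAP = [[sum(PA[i][k] * P[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    tr = sum(PAP[i][i] for i in range(3))   # tr(PAP) = tr(PA) since P is a projector
    return [[PAP[i][j] - 0.5 * P[i][j] * tr for j in range(3)] for i in range(3)]

# line of sight N and one direction xi' at angle theta' from it
N = (0.0, 0.0, 1.0)
th = 1.2
xi = (math.sin(th), 0.0, math.cos(th))
lam = [[xi[i] * xi[j] / (1.0 - math.cos(th)) for j in range(3)] for i in range(3)]
ATT = tt_project(lam, N)
```

Summing such projected contributions against an energy distribution $dE/d\Omega'$ would give the memory of Eq.~(\ref{assumeq}); the sketch only verifies the algebraic properties of the projection.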
In recent years, many works including \cite{Fav09a,Fav09b,favata2009gravitational,favata2010gravitational,PhysRevD.84.124013,PhysRevD.95.084048,PhysRevD.98.064031,2020arXiv200906351K} have applied the assumed relation (\ref{assumeq}) to the full inspiral-merger-ringdown process of binary black holes to obtain the gravitational waveform of the memory. These GW memory waveforms were later used to determine when LIGO would be able to detect the memory effect \cite{PhysRevLett.117.061102,PhysRevD.101.083026} and to search for memory signals in LIGO data \cite{PhysRevD.101.023011,PhysRevD.101.104041}.
On the numerical relativity (NR) side, the calculation of non-memory waveforms has become more and more accurate. The waveform extraction technique used in NR guarantees that the calculated gravitational waveform is gauge invariant, so different numerical relativity groups, using different formulations of the Einstein equations, different forms of initial data, and different coordinate conditions during the evolution, obtain the same waveform (early references include \cite{Baker_2007}). The extracted waveform corresponds to the two polarization modes $h_+$ and $h_\times$. The gravitational wave events reported by LIGO and Virgo depend heavily on gravitational waveform models including EOBNR, IMRPhenom and surrogate models \cite{PhysRevLett.116.061102,PhysRevX.9.031040}. In contrast, numerical relativity results on memory are much less mature. Some preliminary NR results on memory have been obtained in \cite{PolRei11,PhysRevD.102.104007,2020arXiv201101309M}.
Theoretical models are very important for memory detection and signal interpretation \cite{Set09,VanLev10,PshBasPos10,CorJen12,MadCorCha14,Arzoumanian_2015,PhysRevLett.118.181103,Divakarla:2019zjj}. In this paper, we propose a new method to calculate the gravitational wave memory. This method is based on the Bondi-Metzner-Sachs (BMS) theory \cite{BonVanMet62,Sac62,PenRin88} instead of the assumption (\ref{assumeq}). Since the BMS theory does not require the slow-motion and weak-field conditions for the GW source, this new method is very accurate for GW memory calculation. We adopt geometric units with $c=G=1$ throughout this paper.
\section{New method to calculate the gravitational wave memory}
Based on the Bondi-Metzner-Sachs (BMS) theory, gravitational radiation can be described at null infinity with the Bondi-Sachs (BS) coordinates $(u,r,\theta,\phi)$. Here $u$ is called the Bondi time, which corresponds to the time of an observer very far away from the GW source, say the GW detector. Inside the spacetime of the gravitational wave source, which is regarded as an isolated spacetime, the slice of constant $u$ is null. At null infinity, the gravitational waveform depends only on $(u,\theta,\phi)$. When we consider a source located a luminosity distance $D$ away, the waveform depends on $(u,D,\theta,\phi)$, and the dependence on $D$ is proportional to $\frac{1}{D}$. In the GW data analysis community, people use $t=u$ to denote the time, so we choose the notation `$t$' for the Bondi time in the current paper to avoid two different notations for the same quantity. In order to borrow the well-known relations of the BMS theory, we use the Newman-Penrose formalism and the tetrad choice convention of \cite{PenRin88,Held1970}
\begin{align}
n^0=\frac{1}{2\alpha},&\,\, n^i=-\frac{\beta^i}{2\alpha}-\frac{1}{2}v^i,\\
l^0=\frac{1}{\alpha},&\,\, l^i=-\frac{\beta^i}{\alpha}+v^i,\\
m^0=0,&\,\, m^i=\frac{1}{\sqrt{2}}(w^i-iu^i),
\end{align}
where $v^i$ is the outward-pointing unit normal vector of the BS coordinate sphere in the 3-dimensional spacelike slice, while $u^i$ and $w^i$ are orthonormal basis vectors tangent to the sphere. $v^i$ also corresponds to the propagation direction of the gravitational wave. $\alpha$ and $\beta^i$ are the lapse function and shift vector of the 3+1 decomposition. Asymptotically $\alpha\rightarrow1$, $\beta^i\rightarrow0$, $v^i\rightarrow\frac{\partial}{\partial r}$, $u^i\rightarrow\frac{1}{r\sin\theta}\frac{\partial}{\partial\phi}$ and $w^i\rightarrow\frac{1}{r}\frac{\partial}{\partial\theta}$. Note that the above convention admits a factor of $\sqrt{2}$ difference in the null vectors $\textbf{l}$ and $\textbf{n}$ relative to the convention used by the numerical relativity community (for example, Eqs.~(32)-(34) of \cite{PhysRevD.77.024027}).
Based on the tetrad choice given above, we have the following relations at null infinity for an asymptotically flat spacetime
\begin{align}
&\dot{\Psi}_2^{\circ}=\eth\Psi_3^{\circ}+\sigma^{\circ}\Psi_4^{\circ},\,\, \Psi_3^{\circ}=-\eth\dot{\bar{\sigma}}^{\circ},\,\, \Psi_4^{\circ}=-\ddot{\bar{\sigma}}^{\circ}.\label{eq1}
\end{align}
Here $\sigma$ corresponds to the shear of the $(\theta,\phi)$ coordinate sphere in the BS coordinates \cite{he2015new,he2016asymptotical,sun2019binary}. $\Psi_2^{\circ}$ is the Weyl tensor component related to the Bondi mass. The symbol ``$\circ$" denotes the leading order with respect to the luminosity distance as one goes to null infinity. For a function $f$ with spin weight $s$ on the sphere, the operator $\eth$ is defined as
\begin{align}
\eth f\equiv\frac{1}{\sqrt{2}}(\sin\theta)^s(\frac{\partial}{\partial\theta}
+\frac{i}{\sin\theta}\frac{\partial}{\partial\phi})(\sin\theta)^{-s}f.
\end{align}
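A simple finite-difference sketch of this operator (the function names are ours) makes its action concrete. For instance, for spin weight $s=0$ one gets $\eth\cos\theta=-\sin\theta/\sqrt{2}$, while for $s=1$ the function $\sin\theta$ is annihilated because $(\sin\theta)^{-1}\sin\theta$ is constant:

```python
import math

def eth(f, s, theta, phi, h=1e-6):
    """Apply the eth operator, as defined in the text, to a spin-weight-s
    function f(theta, phi) at one point, using central differences."""
    g = lambda th, ph: math.sin(th) ** (-s) * f(th, ph)
    dg_dth = (g(theta + h, phi) - g(theta - h, phi)) / (2 * h)
    dg_dph = (g(theta, phi + h) - g(theta, phi - h)) / (2 * h)
    return math.sin(theta) ** s / math.sqrt(2) * (dg_dth + 1j * dg_dph / math.sin(theta))
```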
If the tetrad convention of the numerical relativity community is used, the gravitational wave strain $h\equiv h_+-ih_\times$ is related to the double integral of $\Psi_4$ with respect to time (Eq.~(14) of \cite{PhysRevD.75.124018}). $h_{+}$ and $h_{\times}$ correspond to the two polarization modes of the gravitational wave \cite{maggiore2008gravitational} with respect to the basis
\begin{align}
&e^+_{ij}=w_iw_j-u_iu_j\\
&e^{\times}_{ij}=w_iu_j+w_ju_i.
\end{align}
Due to the factor $\sqrt{2}$ difference of $\textbf{n}$, we now have
\begin{align}
h=-\frac{1}{2}\int\int\Psi_4dtdt=-\frac{1}{2}\int\int\frac{\Psi_4^{\circ}}{D}dtdt
\end{align}
where $D$ is the luminosity distance between the observer and the source. Again we note that the convention for the definition of $\Psi_4$ adopted here follows \cite{PenRin88,Held1970}, which admits a minus sign difference from the convention used in numerical relativity (for example, Eq.~(9) of \cite{PhysRevD.75.124018}).
Aided by the third relation in Eq.~(\ref{eq1}), we have
\begin{align}
\sigma^\circ=\frac{D}{2}\left(h_++ih_\times\right).\label{meq10}
\end{align}
Furthermore, the relations (\ref{eq1}) yield
\begin{align}
&\frac{\partial}{\partial t}(\Psi_2^{\circ}+\sigma^{\circ}\dot{\bar{\sigma}}^{\circ})=\dot{\Psi}_2^{\circ}+\dot{\sigma}^{\circ}\dot{\bar{\sigma}}^{\circ}+\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}\\
&=\eth\Psi_3^{\circ}+\sigma^{\circ}\Psi_4^{\circ}+\dot{\sigma}^{\circ}\dot{\bar{\sigma}}^{\circ}+\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}\\
&=-\eth^2\dot{\bar{\sigma}}^{\circ}-\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}+\dot{\sigma}^{\circ}\dot{\bar{\sigma}}^{\circ}+\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}\\
&=|\dot{\sigma}^{\circ}|^2-\eth^2\dot{\bar{\sigma}}^{\circ},
\end{align}
which corresponds to the `final formula' of \cite{Fra92}. The net changes $\left.h_{+,\times}\right|_{t_1=-\infty}^{t_2=\infty}$ and, correspondingly, $\left.\sigma^\circ\right|_{t_1=-\infty}^{t_2=\infty}$ constitute the gravitational wave memory.
Consequently we have
\begin{align}
\int_{t_1}^{t_2}(|\dot{\sigma}^{\circ}|^2-\eth^2\dot{\bar{\sigma}}^{\circ})dt=\left.(\Psi_2^{\circ}+\sigma^{\circ}\dot{\bar{\sigma}}^{\circ})\right|_{t_1}^{t_2},
\end{align}
which relates only the asymptotic quantities of a radiative spacetime. This relation indicates that $\left.\sigma^\circ\right|_{t_1=-\infty}^{t_2=\infty}$, i.e.\ the gravitational wave memory, generally does not vanish. However, this is only a qualitative statement; it does not determine the quantitative behavior of the memory.
In order to investigate the quantitative behavior of GW memory, we use the spin-weight $-2$ spherical harmonic functions ${}^{-2}Y_{lm}$ to decompose the gravitational wave strain $h$ as follows \cite{PhysRevD.75.124018,PhysRevD.77.024027,PhysRevD.78.124011}
\begin{align}
&h(t,\theta,\phi)\equiv\sum_{l=2}^{\infty}\sum_{m=-l}^lh_{lm}(t)[{}^{-2}Y_{lm}](\theta,\phi),\\&[{}^sY_{lm}]\equiv(-1)^s\sqrt{\frac{2l+1}{4\pi}}d^l_{m(-s)}(\theta)e^{im\phi},\\
&d^l_{ms}\equiv\sum_{i=C_1}^{C_2}\frac{(-1)^i\sqrt{(l+m)!(l-m)!(l+s)!(l-s)!}}{(l+m-i)!(l-s-i)!i!(i+s-m)!}\nonumber\\
&\times[\cos(\theta/2)]^{2l+m-s-2i}[\sin(\theta/2)]^{2i+s-m},\\
&C_1=\max(0,m-s),\,\,\, C_2=\min(l+m,l-s),
\end{align}
where the over-bar denotes the complex conjugate.
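The definitions above can be implemented directly. The following sketch (an illustration, not from this paper) codes $d^l_{ms}$ and $[{}^sY_{lm}]$ exactly as written, and spot-checks the value $d^1_{0,-1}(\theta)=-\sin\theta/\sqrt{2}$ implied by the sum formula:

```python
import numpy as np
from math import factorial

def wigner_d(l, m, s, theta):
    """Wigner small-d element d^l_{ms}(theta), coded from the sum formula above."""
    pref = np.sqrt(float(factorial(l + m) * factorial(l - m)
                         * factorial(l + s) * factorial(l - s)))
    out = 0.0
    for i in range(max(0, m - s), min(l + m, l - s) + 1):
        out += ((-1)**i * pref
                / (factorial(l + m - i) * factorial(l - s - i)
                   * factorial(i) * factorial(i + s - m))
                * np.cos(theta / 2)**(2*l + m - s - 2*i)
                * np.sin(theta / 2)**(2*i + s - m))
    return out

def sYlm(s, l, m, theta, phi):
    """Spin-weighted spherical harmonic [^s Y_{lm}] in the convention above."""
    return ((-1)**s * np.sqrt((2*l + 1) / (4*np.pi))
            * wigner_d(l, m, -s, theta) * np.exp(1j * m * phi))

# midpoint-rule check of orthonormality for ^{-2}Y_{22}
N, P = 2000, 64
th = (np.arange(N) + 0.5) * np.pi / N
ph = (np.arange(P) + 0.5) * 2.0 * np.pi / P
TH, PH = np.meshgrid(th, ph, indexing='ij')
norm = (np.pi / N) * (2.0 * np.pi / P) * np.sum(
    np.abs(sYlm(-2, 2, 2, TH, PH))**2 * np.sin(TH))
print(norm)  # ~ 1
print(wigner_d(1, 0, -1, 0.7), -np.sin(0.7) / np.sqrt(2))  # the two values agree
```

The orthonormality $\int|[{}^sY_{lm}]|^2\sin\theta\,d\theta d\phi=1$ provides a quick sanity check of the conventions.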
Noting further that
\begin{align}
\eth[{}^sY_{lm}]=-\frac{1}{\sqrt{2}}\sqrt{(l-s)(l+s+1)}[{}^{s+1}Y_{lm}],
\end{align}
we have
\begin{align}
&\eth^2\dot{\bar{\sigma}}^{\circ}=\frac{1}{4}\sum_{l=2}^{\infty}\sum_{m=-l}^l \dot{h}_{lm}
\sqrt{l(l-1)(l+1)(l+2)}[{}^0Y_{lm}]\\
&|\dot{\sigma}^{\circ}|^2=\dot{\bar{\sigma}}^{\circ}\dot{\sigma}^{\circ}\\
&=\frac{1}{4}\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}
\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}\nonumber\\
&\times[{}^{-2}Y_{l'm'}]\overline{[{}^{-2}Y_{l''m''}]}\\
&=\frac{1}{4}\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}
\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}\nonumber\\
&\times[{}^{-2}Y_{l'm'}](-1)^{m''}[{}^{2}Y_{l''-m''}].\label{eq2}
\end{align}
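As a concrete check of the raising relation used above (an illustrative symbolic verification; the closed forms $[{}^0Y_{10}]=\sqrt{3/(4\pi)}\cos\theta$ and $[{}^1Y_{10}]=\sqrt{3/(8\pi)}\sin\theta$ follow from the definitions given earlier), one can apply the definition of $\eth$ directly:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)

def eth(f, s):
    """The eth operator defined in the text, acting on a spin-weight-s function."""
    g = sp.sin(theta)**(-s) * f
    return (sp.sin(theta)**s / sp.sqrt(2)) * (
        sp.diff(g, theta) + sp.I / sp.sin(theta) * sp.diff(g, phi))

# closed forms implied by the definitions above
Y0_10 = sp.sqrt(3 / (4 * sp.pi)) * sp.cos(theta)   # ^0Y_{10}
Y1_10 = sp.sqrt(3 / (8 * sp.pi)) * sp.sin(theta)   # ^1Y_{10}

# raising relation with l=1, s=0: coefficient -(1/sqrt(2)) sqrt((l-s)(l+s+1)) = -1
lhs = eth(Y0_10, 0)
rhs = -sp.sqrt((1 - 0) * (1 + 0 + 1)) / sp.sqrt(2) * Y1_10
print(sp.simplify(lhs - rhs))  # 0
```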
Using the following relations \cite{Held1970,Fav09a}
\begin{align}
&\overline{[{}^sY_{lm}]}=(-1)^m[{}^{-s}Y_{l-m}],\\
&[{}^{-2}Y_{l'm'}]\overline{[{}^{-2}Y_{l''m''}]}=[{}^{-2}Y_{l'm'}](-1)^{m''}[{}^{2}Y_{l''-m''}]\nonumber\\
&=\sum_{l=0}^{\infty}
\sum_{m=-l}^l(-1)^{m+m''}\Gamma^{2-20}_{l'l''lm'-m''-m}[{}^0Y_{lm}],\\
&\Gamma^{s's''s}_{l'l''lm'm''m}\equiv\int[{}^{-s'}Y_{l'm'}][{}^{-s''}Y_{l''m''}][{}^{-s}Y_{lm}]\sin\theta d\theta d\phi,\label{eq6}\\
&\Gamma^{2-20}_{l'l''lm'-m''-m}\equiv(-1)^{m+m''}\int[{}^{-2}Y_{l'm'}]\overline{[{}^{-2}Y_{l''m''}]}\nonumber\\
&\times\overline{[{}^{0}Y_{lm}]}\sin\theta d\theta d\phi,
\end{align}
we can reduce Eq.~(\ref{eq2}) further:
\begin{align}
|\dot{\sigma}^{\circ}|^2&=\frac{1}{4}\sum_{l=0}^{\infty}\sum_{m=-l}^l\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}
\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}(-1)^{m+m''}\nonumber\\
&\times\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}\Gamma^{2-20}_{l'l''lm'-m''-m}[{}^0Y_{lm}].
\end{align}
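In practice the coefficients $\Gamma^{s's''s}_{l'l''lm'm''m}$ need not be computed by quadrature: the integral of a product of three spin-weighted harmonics is given by a standard Wigner $3j$ identity (stated here as an external standard result, not derived in this paper). The sketch below evaluates $\Gamma^{2-20}_{2,2,2;2,-2,0}$ this way and cross-checks it by direct symbolic integration:

```python
import sympy as sp
from sympy.physics.wigner import wigner_3j

def gamma_coeff(s1, s2, s3, l1, l2, l3, m1, m2, m3):
    """Gamma^{s1 s2 s3}_{l1 l2 l3 m1 m2 m3} (integrand spin weights -s1,-s2,-s3),
    via the standard 3j identity for a product of three spin-weighted harmonics."""
    pref = sp.sqrt((2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1)) / sp.sqrt(4 * sp.pi)
    return (pref * wigner_3j(l1, l2, l3, m1, m2, m3)
                 * wigner_3j(l1, l2, l3, s1, s2, s3))

# cross-check Gamma^{2-20}_{2,2,2; 2,-2,0} by direct integration, using closed
# forms that follow from the definitions above:
#   [^{-2}Y_{22}][^{2}Y_{2,-2}] = |^{-2}Y_{22}|^2 = 5/(64 pi) (1+cos th)^4
#   [^{0}Y_{20}] = sqrt(5/(16 pi)) (3 cos^2 th - 1)
th = sp.symbols('theta', positive=True)
integrand = (sp.Rational(5, 64) / sp.pi * (1 + sp.cos(th))**4
             * sp.sqrt(sp.Rational(5, 16) / sp.pi) * (3 * sp.cos(th)**2 - 1)
             * sp.sin(th))
direct = 2 * sp.pi * sp.integrate(integrand, (th, 0, sp.pi))
print(sp.simplify(direct - gamma_coeff(2, -2, 0, 2, 2, 2, 2, -2, 0)))  # 0
```

The first $3j$ symbol enforces the selection rule $m'+m''+m=0$ visible in the $\phi$ integral of Eq.~(\ref{eq6}).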
With these decompositions, the integrated form of Eq.~(\ref{eq1}) reduces to
\begin{align}
&\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\left(\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt-\right.\nonumber\\
&\left.\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)+\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right)(-1)^{m''+m}\times\nonumber\\
&\Gamma^{2-20}_{l'l''lm'-m''-m}-\sqrt{\frac{(l+2)!}{(l-2)!}}
h_{lm}\bigg{|}_{t_1}^{t_2}=4R_{lm}\bigg{|}_{t_1}^{t_2}\label{eq3}\\
&R_{lm}(t)\equiv\int \Psi_2^{\circ}(t,\theta,\phi)[{}^0Y_{lm}]\sin\theta d\theta d\phi
\end{align}
for any $l=0,1,\ldots$ and $m=-l,\ldots,l$. To unify the form of Eq.~(\ref{eq3}), we have introduced the notation $h_{00}=h_{10}=h_{1\pm1}=0$. For $l\geq2$, Eq.~(\ref{eq3}) can also be written as
\begin{align}
&h_{lm}\bigg{|}_{t_1}^{t_2}=-\sqrt{\frac{(l-2)!}{(l+2)!}}\left[\left.\frac{4}{D}\int\Psi_2^{\circ}[{}^0Y_{lm}]\sin\theta d\theta d\phi\right|_{t_1}^{t_2}\right.-\nonumber\\
&\,\,\,\,D\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\Gamma_{l'l''lm'-m''-m}\times\nonumber\\
&\,\,\,\,\,\left(\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt-\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)+\right.\nonumber\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left.\left.\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right)\right].\label{meq4}
\end{align}
This is a set of coupled equations for the unknowns $h_{l0}$ in terms of the $m\neq0$ modes $h_{lm}$. For non-precessing binary black holes, the gravitational wave memory is dominated by the modes $h_{l0}$; correspondingly, we call $h_{l0}$ the GW memory modes and $h_{lm},\,m\neq0$ the non-memory modes. For precessing binary black holes this is no longer true \cite{PhysRevD.98.064031}; consequently, we consider only spin-aligned binary black holes in the current paper. The unknowns $h_{l0}$ appear on both the left- and right-hand sides, so it is hard to solve for them directly.
Due to the quasi-direct-current (DC) behavior of the gravitational wave memory \cite{Fav09a}, we have $\dot{h}_{l0}\approx0$, and Eq.~(\ref{meq4}) becomes
\begin{align}
&h_{l0}\bigg{|}_{t_1}^{t_2}=-\sqrt{\frac{(l-2)!}{(l+2)!}}\left[\left.\frac{4}{D}\int \Psi_2^{\circ}[{}^0Y_{l0}]\sin\theta d\theta d\phi\right|_{t_1}^{t_2}\right.-\nonumber\\
&D\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{\mbox{\tiny$\begin{array}{c}
m'=-l',\\
m'\neq0\end{array}$}}^{l'}\sum_{\mbox{\tiny$\begin{array}{c}
m''=-l'',\\
m''\neq0\end{array}$}}^{l''}
\Gamma_{l'l''lm'-m''0}\times\nonumber\\
&\,\,\,\,\,\left(\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt-\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)+\right.\nonumber\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left.\left.\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right)\right].\label{meq2}
\end{align}
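As a minimal sketch of how Eq.~(\ref{meq2}) is used in practice, the following code (an illustration with a toy constant-amplitude $(2,\pm2)$ mode pair; real applications use full numerical relativity mode data) keeps only the dominant $l'=l''=2$, $m'=m''=\pm2$ energy-flux piece and accumulates the corresponding $h_{20}$ memory, evaluating the needed $\Gamma$ coefficients with the standard Wigner $3j$ identity (an assumed external result):

```python
import numpy as np
import sympy
from sympy.physics.wigner import wigner_3j

def gamma(mp, mpp):
    """Gamma^{2-20}_{2,2,2; m',-m'',0} via the standard Wigner-3j identity."""
    val = (sympy.sqrt(sympy.Integer(125) / (4 * sympy.pi))
           * wigner_3j(2, 2, 2, mp, -mpp, 0) * wigner_3j(2, 2, 2, 2, -2, 0))
    return float(val)

def cumtrapz(y, t):
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

# toy non-memory modes: constant-amplitude (2, +-2) pair, h_{2,-2} = conj(h_{22})
D, A, omega, T = 100.0, 1e-2, 0.5, 200.0
t = np.linspace(0.0, T, 20001)
modes = {(2, 2): A * np.exp(-2j * omega * t)}
modes[(2, -2)] = np.conj(modes[(2, 2)])
hdot = {k: np.gradient(v, t) for k, v in modes.items()}

# flux piece of Eq. (meq2): h_20 += sqrt(0!/4!) D * sum Gamma int hdot hdotbar dt
h20_mem = np.zeros(t.size)
for (_, mp), dhp in hdot.items():
    for (_, mpp), dhpp in hdot.items():
        g = gamma(mp, mpp)
        if g != 0.0:
            h20_mem += g * cumtrapz(np.real(dhp * np.conj(dhpp)), t)
h20_mem *= np.sqrt(1.0 / 24.0) * D
print(h20_mem[-1])  # ~ 0.147
```

For this toy signal the accumulated memory grows linearly in time, matching the analytic value $\sqrt{1/24}\,D\,\big[2\Gamma^{2-20}_{2,2,2;2,-2,0}\big]|\dot h_{22}|^2\,T$.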
At past infinity, if we take the center-of-mass frame of the whole system as the asymptotic inertial frame, we have $\Psi_2^\circ(-\infty,\theta,\phi)=-M_0$. Here $M_0$ is the Bondi mass at past infinity, which also equals the system's ADM mass \cite{Ashtekar:2019viz}. At future infinity, the Bondi mass $M$ is smaller than the initial value $M_0$ because the gravitational wave carries away the energy $E_{\rm GW}$, $M=M_0-E_{\rm GW}$. The spacetime settles down to a Kerr black hole with mass $\tilde{M}$ at future infinity. Importantly, the center-of-mass frame at future infinity differs from the center-of-mass frame at past infinity due to the kick velocity; these two asymptotic inertial frames are related by a boost transformation. Consequently $\tilde{M}=M/\gamma$, where $\gamma$ is the Lorentz factor. It is useful to note that, due to the kick velocity, no single asymptotic inertial frame coincides with the center-of-mass frame at all times. The gravitational waveform calculated by numerical relativity corresponds to the asymptotic inertial frame defined by the initial center-of-mass frame; consequently, the waveform obtained by numerical relativity already accounts for the kick-velocity effect \cite{Varma:2020nbm,PhysRevLett.121.191102,PhysRevLett.117.011101}. So if we take the center-of-mass frame at past infinity as the asymptotic inertial frame, we have \cite{Ashtekar:2019viz}
\begin{align}
&\Psi_2^\circ(\infty,\theta,\phi)=-\frac{\tilde{M}}{\gamma^3}\times\nonumber\\
&\left(1-v_x\sin\theta\cos\phi-v_y\sin\theta\sin\phi-v_z\cos\theta\right)^{-3},\label{meq3}\\
&\gamma=\frac{1}{\sqrt{1-v^2}},
\end{align}
where $\vec{v}$ is the kick velocity.
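A quick numerical consistency check of Eq.~(\ref{meq3}) (an illustrative sketch; the monopole normalization $-(1/4\pi)\oint\Psi_2^\circ\,d\Omega=M$ is assumed here as the standard Bondi mass-aspect convention) is that the boosted expression reproduces the Bondi mass $M=\gamma\tilde{M}$:

```python
import numpy as np

def psi2_inf(theta, phi, Mtilde, v):
    """Psi_2^o at future null infinity for the boosted remnant, Eq. (meq3)."""
    vx, vy, vz = v
    gam = 1.0 / np.sqrt(1.0 - np.dot(v, v))
    doppler = (1.0 - vx * np.sin(theta) * np.cos(phi)
                   - vy * np.sin(theta) * np.sin(phi)
                   - vz * np.cos(theta))
    return -Mtilde / gam**3 / doppler**3

# midpoint-rule quadrature over the sphere
Mtilde, v = 1.0, np.array([0.0, 0.0, 1.0e-3])  # kick ~ 300 km/s along z (toy value)
N, P = 400, 8
th = (np.arange(N) + 0.5) * np.pi / N
ph = (np.arange(P) + 0.5) * 2.0 * np.pi / P
TH, PH = np.meshgrid(th, ph, indexing='ij')
integral = (np.pi / N) * (2.0 * np.pi / P) * np.sum(
    psi2_inf(TH, PH, Mtilde, v) * np.sin(TH))
gam = 1.0 / np.sqrt(1.0 - np.dot(v, v))
print(-integral / (4.0 * np.pi), gam * Mtilde)  # both ~ the Bondi mass M
```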
Since both the gravitational wave energy $E_{\rm GW}$ and the kick velocity can be calculated from the non-memory modes, the right-hand side of Eq.~(\ref{meq2}) is completely determined by the non-memory modes $h_{lm}$, $m\neq0$. Once the non-memory modes are known, we can plug them into Eq.~(\ref{meq2}) and calculate the memory modes exactly.
If we neglect the kick velocity, i.e.\ set $\vec{v}=0$, and neglect the contribution $\left.\dot{h}_{l'm'}\bar{h}_{l''m''}\right|_{t_1}^{t_2}$, our Eq.~(\ref{meq2}) recovers the assumed relation (\ref{assumeq}) (equivalently, Eq.~(3.3) of \cite{Fav09a}). The $\Psi_2^\circ$ term is interpreted as the ordinary-memory part in \cite{2020arXiv200906351K}.
\section{Comparison to previous results}
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=0.5\textwidth]{fig1.eps}
\end{tabular}
\caption{Comparison of $h_{20}$ waveforms among the result of the current calculation (marked `new'), the post-Newtonian result (marked `PN') and the numerical relativity result (marked `PR') for binary black hole coalescence. The top row corresponds to a spinless, equal-mass binary black hole. The bottom row corresponds to a spin-aligned, equal-mass binary black hole with dimensionless spin parameters $\chi_{1}=\chi_{2}=0.8$. The right two plots are enlargements of the merger part of the left two plots.}\label{mfig1}
\end{figure}
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=0.5\textwidth]{fig2.eps}
\end{tabular}
\caption{Memory waveforms for eccentric binary black holes. The eccentricity $e_0$ denotes the initial eccentricity at the reference frequency $Mf_0=0.002$, obtained with the SEOBNRE model \cite{PhysRevD.101.044049}. The quasi-circular case marked $e_0=0$ corresponds to SXS:BBH:0070, the eccentric case marked $e_0=0.43$ to SXS:BBH:1357, and the eccentric case marked $e_0=0.59$ to SXS:BBH:1362. The right column shows the enlargement of the inspiral part of the waveform in the left column.
}\label{mfig2}
\end{figure}
In the following we plug the SXS catalog results \cite{SXSBBH} for the non-memory modes $h_{lm}$, $m\neq0$, into Eq.~(\ref{meq2}) to calculate the memory waveforms $h_{l0}$. More specifically, all modes with $l=2,\ldots,8$ and $m=-l,\ldots,-1,1,\ldots,l$ are used. As an example, Fig.~\ref{mfig1} compares the $h_{20}$ calculated with the above method to the numerical relativity result \cite{PolRei11} and the post-Newtonian result \cite{WisWil91,BlaDam92,Tho92,Fav09a,Fav09b,favata2009gravitational,favata2010gravitational,PhysRevD.84.124013,PhysRevD.95.084048,Cao16}. Here the post-Newtonian result is obtained from Eq.~(8) of \cite{Cao16} based on the SEOBNRE model \cite{PhysRevD.96.044028,PhysRevD.101.044049}. The post-Newtonian curve stops when the binary merger starts.
Fig.~\ref{mfig1} indicates that the result of the current new method agrees with the PN waveform quite well, although a deviation shows up near merger. The new result also agrees with the numerical relativity result quite well. However, at the time the numerical relativity simulation starts (where the line marked `PR', Pollney and Reisswig, begins), the deviation between the PN waveform and the new result is already clear. Therefore the results of \cite{PolRei11,Cao16}, obtained by attaching the PN approximation to numerical relativity simulations, may contain a systematic error. In general, the new result and the PR result are quantitatively consistent.
The authors of \cite{PolRei11} have computed more than ten binary black hole systems with equal masses and aligned spins. We confirm that all of those results are consistent with our calculations in the current work, similar to Fig.~\ref{mfig1}.
Favata found a strong effect of eccentricity on the memory waveform in \cite{PhysRevD.84.124013}, which results in an oscillation of $h_{20}$. We confirm this result in Fig.~\ref{mfig2}. However, if we consider the GW memory amplitude $h^{tot}_{20}\equiv \left.h_{20}\right|_{-\infty}^{\infty}$ for binary black hole coalescence, the eccentricity effect is negligible for nearly equal-mass binary black hole systems with eccentricity $e_0<0.6$ at the reference frequency $Mf_0=0.002$ \cite{PhysRevD.96.044028,PhysRevD.101.044049}.
The assumption (\ref{assumeq}) corresponds to the term
\begin{align}
&h^1_{l0}\bigg{|}_{t_1}^{t_2}=\sqrt{\frac{(l-2)!}{(l+2)!}}D\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{\mbox{\tiny$\begin{array}{c}
m'=-l',\\
m'\neq0\end{array}$}}^{l'}\sum_{\mbox{\tiny$\begin{array}{c}
m''=-l'',\\
m''\neq0\end{array}$}}^{l''}\nonumber\\
&\Gamma_{l'l''lm'-m''0}\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt\label{term1}
\end{align}
of (\ref{meq2}). We call the above term 1 and denote it $h^1_{l0}$. The authors of \cite{2020arXiv200906351K} considered the ``linear'' (or, alternately, ``ordinary'') memory contribution, which corresponds to the term
\begin{align}
&h^2_{l0}\bigg{|}_{t_1}^{t_2}=-\sqrt{\frac{(l-2)!}{(l+2)!}}\frac{4}{D}\left.\int \Psi_2^{\circ}[{}^0Y_{l0}]\sin\theta d\theta d\phi\right|_{t_1}^{t_2}\label{term2}
\end{align}
of (\ref{meq2}). We call the above term 2 and denote it $h^2_{l0}$. In addition, our Eq.~(\ref{meq2}) includes the instantaneous contribution of $\dot{h}_{lm}$, $m\neq0$,
\begin{align}
&h^3_{lm}\bigg{|}_{t_1}^{t_2}=\sqrt{\frac{(l-2)!}{(l+2)!}}\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\Gamma_{l'l''lm'-m''-m}\nonumber\\
&\times D\left(\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)-\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right).\label{term3}
\end{align}
We call the above term 3 and denote it $h^3_{l0}$. We investigate the fractional contributions of these three terms in Fig.~\ref{mfig5} for the four cases shown in Figs.~\ref{mfig1} and \ref{mfig2}. The ``linear'' memory (term 2) is always negligible. Term 3 contributes between 0.01\% and 1\% and, as expected, vanishes as $t\rightarrow\infty$; consequently, term 3 does not contribute to $h_{lm}^{\rm tot}$. This behavior of terms 2 and 3 is common to all binary black holes.
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=0.5\textwidth]{fig3a.eps}\\
\\
\includegraphics[width=0.5\textwidth]{fig3b.eps}
\end{tabular}
\caption{The fractional contributions of the three terms listed in Eqs.~(\ref{term1})-(\ref{term3}) for the four cases shown in Figs.~\ref{mfig1} and \ref{mfig2}.}\label{mfig5}
\end{figure}
\section{GW memory for binary black hole coalescence}
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{fig4a.eps}&
\includegraphics[width=0.5\textwidth]{fig4b.eps}
\end{tabular}
\caption{(a) Memory amplitude $h^{tot}_{20}$ of spin-aligned, equal-mass binary black holes as a function of the effective spin $\chi_{\rm eff}\equiv\frac{q\chi_{1z}+\chi_{2z}}{1+q}$. ``PR formula'' means Eq.~(\ref{meq5}) (also Eq.~(8) of \cite{PolRei11}). (b) Memory amplitude of spinless binary black hole mergers as a function of the mass ratio $q\equiv m_1/m_2$.}\label{mfig3}
\end{figure*}
For equal-mass, spin-aligned binary black hole systems, the authors of \cite{PolRei11} found the following relation between the GW memory amplitude $h^{tot}_{20}$ and the effective spin $\chi_{\rm eff}\equiv(m_1\chi_{1z}+m_2\chi_{2z})/M$ (Eq.~(8) of \cite{PolRei11})
\begin{align}
\frac{D}{M}h^{tot}_{20}=&0.0969+0.0562\chi_{\rm eff}+0.0340\chi_{\rm eff}^2+\nonumber\\
&0.0296\chi_{\rm eff}^3+0.0206\chi_{\rm eff}^4.\label{meq5}
\end{align}
We confirm this formula based on the SXS catalog in Fig.~\ref{mfig3}(a). As pointed out by the authors of \cite{PolRei11} and also explained in \cite{Cao16}, we confirm as well that the memory amplitude $h^{tot}_{20}$ for equal-mass, spin-aligned binary black holes is independent of the anti-symmetric part of the spin, $\chi_{\rm A}\equiv\frac{\chi_{1z}-\chi_{2z}}{2}$.
In Fig.~\ref{mfig3}(b), we investigate the GW memory amplitude of spinless binary black hole mergers as a function of the mass ratio $q\equiv m_1/m_2$. Based on the PN approximation, Favata \cite{Set09,Fav09b} showed that the memory of an equal-mass binary black hole is about $\frac{1}{24\pi^2}\sqrt{\frac{1543}{70}}\approx0.0198$, which is much less than the value 0.097 calculated here. Moreover, Favata \cite{Set09,Fav09b} estimated that the memory is proportional to the symmetric mass ratio $\eta=\frac{q}{(1+q)^2}$. Here we find that it decreases much faster than Favata's estimate; instead it roughly behaves as $\eta^{1.65}$.
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{fig5a.eps}&
\includegraphics[width=0.5\textwidth]{fig5b.eps}
\end{tabular}
\caption{Memory amplitude $h^{tot}_{20}$ for generic spin-aligned binary black holes as a function of the spin hang-up parameter $\chi_{\rm up}$ defined in (\ref{meq7}). The colors denote different mass ratios. Left plot: red, cyan and black represent $q=1$, $q=2$ and $q=3$, respectively. Right plot: the lines from top to bottom represent Eq.~(\ref{meq6}) with $q=4$, $q=5$, $q=6$, $q=7$, $q=8$ and $q=9.5$, respectively; the color of the points represents the mass ratio indicated by the color bar.}\label{mfig4}
\end{figure*}
For unequal-mass, spin-aligned binary black holes, Eq.~(\ref{meq5}) does not hold any more. In \cite{PhysRevD.101.044049} we found that the spin hang-up effect is the most important factor for the gravitational waveform. Interestingly, we find this statement is also correct for the memory. Following \cite{PhysRevD.101.044049}, we define the spin hang-up parameter as
\begin{align}
\chi_{\rm up}\equiv\chi_{\rm eff}+\frac{3}{8}\sqrt{1-4\eta}\chi_{\rm A}.\label{meq7}
\end{align}
This definition differs from Eq.~(7) of \cite{PhysRevD.101.044049}; the current definition lets $\chi_{\rm up}$ reduce to $\chi_{\rm eff}$ for equal-mass binary black holes. Based on this spin hang-up parameter and the relationship between the GW memory amplitude and the mass ratio, we find that the general behavior of generic spin-aligned binary black hole systems can be expressed as
\begin{align}
\frac{D}{M}h^{tot}_{20}=&[0.0969+0.0562\chi_{\rm up}+0.0340\chi_{\rm up}^2+\nonumber\\
&0.0296\chi_{\rm up}^3+0.0206\chi_{\rm up}^4](4\eta)^{1.65}.\label{meq6}
\end{align}
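For convenience, Eq.~(\ref{meq6}) together with the definitions of $\eta$, $\chi_{\rm eff}$, $\chi_{\rm A}$ and $\chi_{\rm up}$ can be packaged as a small function (a direct transcription of the fit; the function name and interface are ours):

```python
import numpy as np

def memory_amplitude(q, chi1z=0.0, chi2z=0.0):
    """(D/M) h^{tot}_{20} from the fit of Eq. (meq6); q = m1/m2."""
    eta = q / (1.0 + q)**2
    chi_eff = (q * chi1z + chi2z) / (1.0 + q)
    chi_A = 0.5 * (chi1z - chi2z)
    chi_up = chi_eff + 0.375 * np.sqrt(1.0 - 4.0 * eta) * chi_A
    poly = (0.0969 + 0.0562 * chi_up + 0.0340 * chi_up**2
            + 0.0296 * chi_up**3 + 0.0206 * chi_up**4)
    return poly * (4.0 * eta)**1.65

print(memory_amplitude(1.0))            # 0.0969 (equal mass, nonspinning)
print(memory_amplitude(1.0, 1.0, 1.0))  # ~ 0.237 (maximal aligned spins)
print(memory_amplitude(9.5))
```

For $q=1$ and no spins this returns 0.0969, and for maximal aligned spins it approaches $\approx0.24$, consistent with the values quoted in the Discussion.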
We validate the finding (\ref{meq6}) in Fig.~\ref{mfig4}, from which we can see that Eq.~(\ref{meq6}) does describe the main features of the behavior. For systems with mass ratio between 2 and 4, the effect of $\chi_{\rm A}$ is stronger, so the points do not fall perfectly on the line. We suspect this is because $\chi_{\rm A}$ contributes to the memory only through combinations with $\sqrt{1-4\eta}$ and $\eta$, as in Eqs.~(\ref{meq7}) and (\ref{meq6}). For the remaining cases Eq.~(\ref{meq6}) works very well. As the mass ratio increases, its effect decreases, as can be seen from Eq.~(\ref{meq6}); hence after $q=3$ we use gradually larger ranges to group the numerical data in Fig.~\ref{mfig4}.
\section{Discussion}
We have proposed a new method to accurately calculate the GW memory for spin-aligned binary black holes. Our calculation indicates that the strongest GW memory amplitude for a binary black hole merger corresponds to two equal-mass black holes with the fastest aligned spins, with amplitude $h^{tot}_{20}\approx0.24\frac{M}{D}$. If the two black holes do not spin, the amplitude is about $h^{tot}_{20}\approx0.1\frac{M}{D}$. Quantitatively, we find that the memory amplitude is described by the spin hang-up parameter $\chi_{\rm up}$ and the symmetric mass ratio $\eta$ quite well.
Based on our new method, it is straightforward to apply the technique of \cite{PhysRevResearch.1.033015} to construct a highly accurate numerical relativity surrogate model for GW memory. In the near future, a detection of GW memory can be compared with the prediction of our method \cite{2020arXiv200906351K}, providing a test of general relativity.
\section*{Acknowledgments}
We thank Zhi-Chao Zhao for helpful discussions. This work was supported by the NSFC (No.~11690023). X. He was supported by NSF of Hunan province (2018JJ2073). Z. Cao was supported by ``the Fundamental Research Funds for the Central Universities", ``the Interdiscipline Research Funds of Beijing Normal University" and the Strategic Priority Research Program of the Chinese Academy of Sciences, grant No. XDB23040100.
\section{Introduction}
The memory of gravitational wave (GW) was firstly found by Zeldovich, Braginsky, Thorne and their coworkers \cite{Zeldovich74,Pay83,Braginsky:1986ia,braginsky1987gravitational}. This memory effect is produced by the gravitational wave source directly. Later Christodoulou found that gravitational wave itself can also produce memory \cite{PhysRevLett.67.1486,Fra92}.
The memory found before Christodoulou is usually called ordinary memory. The ordinary memory is produced by the quadrupole moment change of the source. And the memory found by Christodoulou is called nonlinear memory. Thorne \cite{Tho92} assumed a relation between the gravitational wave flux and the nonlinear memory through analogy of `quadrupole moment change of gravitational wave energy'
\begin{align}
\Delta h_{jk}^{\rm TT}=\frac{4}{r}\int\frac{dE}{d\Omega'}\left(\frac{\xi^{'j}\xi^{'k}}{1-\cos\theta'}\right)^{\rm TT}d\Omega'.\label{assumeq}
\end{align}
This relation corresponds to the Eq.~(2) of \cite{Tho92}. The integral is over the solid angle $\Omega'$ surrounding the source, $E$ is the energy of gravitational wave, $\xi^{'j}$ is a unit vector pointing from the source toward $d\Omega'$, and $\theta'$ is the angle between $\xi^{'j}$ and the direction to the detector. The assumed relation (\ref{assumeq}) can be shown valid when the condition of post-Newtonian approximation is satisfied \cite{Tho92,WisWil91,BlaDam92}.
Recent years, many works including \cite{Fav09a,Fav09b,favata2009gravitational,favata2010gravitational,PhysRevD.84.124013,PhysRevD.95.084048,PhysRevD.98.064031,2020arXiv200906351K} applied the above assumed relation (\ref{assumeq}) to the full inspiral-merger-ringdown process of binary black hole to get the gravitational waveform of memory. And later these GW memory waveform was used to determine when LIGO would be able to detect the memory effect \cite{PhysRevLett.117.061102,PhysRevD.101.083026} and to search memory signal in LIGO data \cite{PhysRevD.101.023011,PhysRevD.101.104041}.
On the numerical relativity (NR) side, the calculation for non-memory waveform has become more and more accurate. The waveform extraction technique involved in NR guarantees the calculated gravitational wave is gauge invariant, which makes different numerical relativity groups using different Einstein equation formulation, different initial data form, and different coordinate condition during the evolution get the same waveform result (early references including \cite{Baker_2007}). The extracted waveform corresponds to the two polarization modes $h_+$ and $h_\times$. The reported gravitational wave events by LIGO and Virgo highly depend on gravitational waveform models including EOBNR, IMRPhenomena and surrogate models \cite{PhysRevLett.116.061102,PhysRevX.9.031040}. In contrast, numerical relativity results on memory are much less confident. Some preliminary NR results on memory have been got in \cite{PolRei11,PhysRevD.102.104007,2020arXiv201101309M}.
Theoretical model is very important to memory detection and signal interpretation \cite{Set09,VanLev10,PshBasPos10,CorJen12,MadCorCha14,Arzoumanian_2015,PhysRevLett.118.181103,Divakarla:2019zjj}. In this paper, we propose a new method to calculate the gravitational wave memory. This method is based on the Bondi-Metzner-Sachs (BMS) theory \cite{BonVanMet62,Sac62,PenRin88} in stead of the assumption (\ref{assumeq}). Since BMS theory does not need the conditions of slow motion and weak field for the GW source, this new method is very accurate for GW memory calculation. We adopt geometric units with $c=G=1$ through this paper.
\section{New method to calculate the gravitational wave memory}
Based on the Bondi-Metzner-Sachs (BMS) theory, gravitational radiation can be described at null infinity with Bondi-Sachs (BS) coordinate $(u,r,\theta,\phi)$. Here $u$ is called Bondi time which corresponds to the time of observer very far away from the GW source, say the GW detector. Inside the spacetime of the gravitational wave source which is looked as an isolated spacetime, the slice of constant $u$ is null. On the null infinity, the gravitational waveform only depends on $(u,\theta,\phi)$. When we consider a source located luminosity distance $D$ away, the waveform depends on $(u,D,\theta,\phi)$ and the dependence on $D$ is proportional to $\frac{1}{D}$. In GW data analysis community, people use $t=u$ to denote the time. So we choose to use notation `$t$' for the Bondi time in the current paper to avoid two different notations for the same quantity. In order to borrow the well known relations in BMS theory, we use the Newmann-Penrose formalism and the tetrad choice convention of \cite{PenRin88,Held1970}
\begin{align}
n^0=\frac{1}{2\alpha},&\,\, n^i=-\frac{\beta^i}{2\alpha}-\frac{1}{2}v^i,\\
l^0=\frac{1}{\alpha},&\,\, l^i=-\frac{\beta^i}{\alpha}+v^i,\\
m^0=0,&\,\, m^i=\frac{1}{\sqrt{2}}(w^i-iu^i),
\end{align}
where $v^i$ is the out-pointing normal vector of the BS coordinate sphere in the 3-dimensional space-like slice, $u^i$ and $w^i$ are orthnormal basis tangent to the sphere. $v^i$ also corresponds to the propagating direction of the gravitational wave. $\alpha$ and $\beta^i$ are the lapse function and shift vector describing the 3+1 decomposition. Asymptotically $\alpha\rightarrow1$, $\beta^i\rightarrow0$, $v^i\rightarrow\frac{\partial}{\partial r}$, $u^i\rightarrow\frac{1}{r\sin\theta}\frac{\partial}{\partial\phi}$ and $w^i\rightarrow\frac{1}{r}\frac{\partial}{\partial\theta}$. Note the above convention admits a factor $\sqrt{2}$ for null vectors $\textbf{l}$ and $\textbf{n}$ difference to the convention used by numerical relativity community (for an example, Eq.~(32)-(34) of \cite{PhysRevD.77.024027}).
Based on the tetrad choice given above, we have the following relations at null infinity for asymptotically flat spacetime
\begin{align}
&\dot{\Psi}_2^{\circ}=\eth\Psi_3^{\circ}+\sigma^{\circ}\Psi_4^{\circ},\,\, \Psi_3^{\circ}=-\eth\dot{\bar{\sigma}}^{\circ},\,\, \Psi_4^{\circ}=-\ddot{\bar{\sigma}}^{\circ}.\label{eq1}
\end{align}
Here $\sigma$ corresponds to the shear of the $(\theta,\phi)$ coordinate sphere in the BS coordinate \cite{he2015new,he2016asymptotical,sun2019binary}. $\Psi_2^{\circ}$ is the Weyl tensor component relating to Bondi mass. The sign ``$\circ$" means the leading order respect to the luminosity distance when one goes to null infinity. For a function $f$ with spin-weight $s$ on sphere, the operator $\eth$ is defined as
\begin{align}
\eth f\equiv\frac{1}{\sqrt{2}}(\sin\theta)^s(\frac{\partial}{\partial\theta}
+\frac{i}{\sin\theta}\frac{\partial}{\partial\phi})(\sin\theta)^{-s}f.
\end{align}
If the tetrad convention of numerical relativity community is used, the gravitational wave strain $h\equiv h_+-ih_\times$ is related to the double integral of $\Psi_4$ respect to time (Eq.~(14) of \cite{PhysRevD.75.124018}). $h_{+}$ and $h_{\times}$ correspond to the two polarization modes of the gravitational wave \cite{maggiore2008gravitational} respect to the basis
\begin{align}
&e^+_{ij}=w_iw_j-u_iu_j\\
&e^{\times}_{ij}=w_iu_j+w_ju_i.
\end{align}
Due to the factor $\sqrt{2}$ difference of $\textbf{n}$, we now have
\begin{align}
h=-\frac{1}{2}\int\int\Psi_4dtdt=-\frac{1}{2}\int\int\frac{\Psi_4^{\circ}}{D}dtdt
\end{align}
where $D$ is the luminosity distance between the observer and the source. Again we need to note that the convention of $\Psi_4$ definition we adopt here follows \cite{PenRin88,Held1970} which admits a minus sign difference to the convention used in numerical relativity (for example, Eq.~(9) of \cite{PhysRevD.75.124018}).
Aided with the third equation of Eq.~(\ref{eq1}) we have
\begin{align}
\sigma^\circ=\frac{D}{2}\left(h_++ih_\times\right).\label{meq10}
\end{align}
And more the relations (\ref{eq1}) result in
\begin{align}
&\frac{\partial}{\partial t}(\Psi_2^{\circ}+\sigma^{\circ}\dot{\bar{\sigma}}^{\circ})=\dot{\Psi}_2^{\circ}+\dot{\sigma}^{\circ}\dot{\bar{\sigma}}^{\circ}+\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}\\
&=\eth\Psi_3^{\circ}+\sigma^{\circ}\Psi_4^{\circ}+\dot{\sigma}^{\circ}\dot{\bar{\sigma}}^{\circ}+\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}\\
&=-\eth^2\dot{\bar{\sigma}}^{\circ}-\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}+\dot{\sigma}^{\circ}\dot{\bar{\sigma}}^{\circ}+\sigma^{\circ}\ddot{\bar{\sigma}}^{\circ}\\
&=|\dot{\sigma}^{\circ}|^2-\eth^2\dot{\bar{\sigma}}^{\circ},
\end{align}
which corresponds to the `final formula' of \cite{Fra92}. $\left.h_{+,\times}\right|_{t_1=-\infty}^{t_2=\infty}$ and correspondingly $\left.\sigma^\circ\right|_{t_1=-\infty}^{t_2=\infty}$ are the gravitational wave memory.
Consequently we have
\begin{align}
\int_{t_1}^{t_2}(|\dot{\sigma}^{\circ}|^2-\eth^2\dot{\bar{\sigma}}^{\circ})dt=\left.(\Psi_2^{\circ}+\sigma^{\circ}\dot{\bar{\sigma}}^{\circ})\right|_{t_1}^{t_2},
\end{align}
which only gives the relation among the asymptotic quantities of a radiative spacetime. This relation indicates that $\left.\sigma^\circ\right|_{t_1=-\infty}^{t_2=\infty}$, i.e. gravitational wave memory, generally does not vanish. But this is just a qualitative result. It does not show the quantitative behavior of memory.
In order to investigate the quantitative behavior of GW memory, we use spin-weighted $-2$ spherical harmonic functions ${}^{-2}Y_{lm}$ to decompose the gravitational wave strain $h$ as following \cite{PhysRevD.75.124018,PhysRevD.77.024027,PhysRevD.78.124011}
\begin{align}
&h(t,\theta,\phi)\equiv\sum_{l=2}^{\infty}\sum_{m=-l}^lh_{lm}(t)[{}^{-2}Y_{lm}](\theta,\phi),\\&[{}^sY_{lm}]\equiv(-1)^s\sqrt{\frac{2l+1}{4\pi}}d^l_{m(-s)}(\theta)e^{im\phi},\\
&d^l_{ms}\equiv\sum_{i=C_1}^{C_2}\frac{(-1)^i\sqrt{(l+m)!(l-m)!(l+s)!(l-s)!}}{(l+m-i)!(l-s-i)!i!(i+s-m)!}\nonumber\\
&\times[\cos(\theta/2)]^{2l+m-s-2i}[\sin(\theta/2)]^{2i+s-m},\\
&C_1=\max(0,m-s),\,\,\, C_2=\min(l+m,l-s),
\end{align}
where the over-bar means the complex conjugate.
Noting more
\begin{align}
\eth[{}^sY_{lm}]=-\frac{1}{\sqrt{2}}\sqrt{(l-s)(l+s+1)}[{}^{s+1}Y_{lm}],
\end{align}
we have
\begin{align}
&\eth^2\dot{\bar{\sigma}}^{\circ}=\frac{1}{4}\sum_{l=2}^{\infty}\sum_{m=-l}^l \dot{h}_{lm}
\sqrt{l(l-1)(l+1)(l+2)}[{}^0Y_{lm}]\\
&|\dot{\sigma}^{\circ}|^2=\dot{\bar{\sigma}}^{\circ}\dot{\sigma}^{\circ}\\
&=\frac{1}{4}\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}
\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}\nonumber\\
&\times[{}^{-2}Y_{l'm'}]\overline{[{}^{-2}Y_{l''m''}]}\\
&=\frac{1}{4}\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}
\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}\nonumber\\
&\times[{}^{-2}Y_{l'm'}](-1)^{m''}[{}^{2}Y_{l''-m''}].\label{eq2}
\end{align}
Using the following relations \cite{Held1970,Fav09a}
\begin{align}
&\overline{[{}^sY_{lm}]}=(-1)^m[{}^{-s}Y_{l-m}],\\
&[{}^{-2}Y_{l'm'}]\overline{[{}^{-2}Y_{l''m''}]}=[{}^{-2}Y_{l'm'}](-1)^{m''}[{}^{2}Y_{l''-m''}]\nonumber\\
&=\sum_{l=0}^{\infty}
\sum_{m=-l}^l(-1)^{m+m''}\Gamma^{2-20}_{l'l''lm'-m''-m}[{}^0Y_{lm}],\\
&\Gamma^{s's''s}_{l'l''lm'm''m}\equiv\int[{}^{-s'}Y_{l'm'}][{}^{-s''}Y_{l''m''}][{}^{-s}Y_{lm}]\sin\theta d\theta d\phi,\label{eq6}\\
&\Gamma^{2-20}_{l'l''lm'-m''-m}\equiv(-1)^{m+m''}\int[{}^{-2}Y_{l'm'}]\overline{[{}^{-2}Y_{l''m''}]}\nonumber\\
&\times\overline{[{}^{0}Y_{lm}]}\sin\theta d\theta d\phi,
\end{align}
we can further reduce Eq.~(\ref{eq2}) to
\begin{align}
|\dot{\sigma}^{\circ}|^2&=\frac{1}{4}\sum_{l=0}^{\infty}\sum_{m=-l}^l\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}
\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}(-1)^{m+m''}\nonumber\\
&\times\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}\Gamma^{2-20}_{l'l''lm'-m''-m}[{}^0Y_{lm}].
\end{align}
Eq.~(\ref{eq1}) reduces to
\begin{align}
&\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\left(\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt-\right.\nonumber\\
&\left.\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)+\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right)(-1)^{m''+m}\times\nonumber\\
&\Gamma^{2-20}_{l'l''lm'-m''-m}-\sqrt{\frac{(l+2)!}{(l-2)!}}
h_{lm}\bigg{|}_{t_1}^{t_2}=4R_{lm}\bigg{|}_{t_1}^{t_2}\label{eq3}\\
&R_{lm}(t)\equiv\int \Psi_2^{\circ}(t,\theta,\phi)[{}^0Y_{lm}]\sin\theta d\theta d\phi
\end{align}
for any $l=0,1,\ldots$ and $m=-l,\ldots,l$. In order to unify the form of Eq.~(\ref{eq3}) we have introduced the notation $h_{00}=h_{10}=h_{1\pm1}=0$. For $l\geq2$, Eq.~(\ref{eq3}) can also be written as
\begin{align}
&h_{lm}\bigg{|}_{t_1}^{t_2}=-\sqrt{\frac{(l-2)!}{(l+2)!}}\left[\left.\frac{4}{D}\int\Psi_2^{\circ}[{}^0Y_{lm}]\sin\theta d\theta d\phi\right|_{t_1}^{t_2}\right.-\nonumber\\
&\,\,\,\,D\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\Gamma_{l'l''lm'-m''-m}\times\nonumber\\
&\,\,\,\,\,\left(\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt-\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)+\right.\nonumber\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left.\left.\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right)\right].\label{meq4}
\end{align}
This is a set of coupled equations for the unknowns $h_{l0}$ in terms of the $m\neq0$ modes $h_{lm}$. For non-precessing binary black holes, the gravitational wave memory is dominated by the modes $h_{l0}$. Correspondingly, we call the $h_{l0}$ GW memory modes and the $h_{lm}$ with $m\neq0$ non-memory modes. For precessing binary black holes this is no longer true \cite{PhysRevD.98.064031}; consequently, we consider only spin-aligned binary black holes in the current paper. The unknowns $h_{l0}$ appear on both the left- and right-hand sides, so it is hard to solve for them directly.
Due to the quasi-direct-current (DC) behavior of the gravitational wave memory \cite{Fav09a} we have $\dot{h}_{l0}\approx0$, and we get
\begin{align}
&h_{l0}\bigg{|}_{t_1}^{t_2}=-\sqrt{\frac{(l-2)!}{(l+2)!}}\left[\left.\frac{4}{D}\int \Psi_2^{\circ}[{}^0Y_{l0}]\sin\theta d\theta d\phi\right|_{t_1}^{t_2}\right.-\nonumber\\
&D\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{\mbox{\tiny$\begin{array}{c}
m'=-l',\\
m'\neq0\end{array}$}}^{l'}\sum_{\mbox{\tiny$\begin{array}{c}
m''=-l'',\\
m''\neq0\end{array}$}}^{l''}
\Gamma_{l'l''lm'-m''0}\times\nonumber\\
&\,\,\,\,\,\left(\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt-\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)+\right.\nonumber\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left.\left.\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right)\right].\label{meq2}
\end{align}
At past infinity, if we take the center-of-mass frame of the whole system as the asymptotic inertial frame, we have $\Psi_2^\circ(-\infty,\theta,\phi)=M_0$. Here $M_0$ is the Bondi mass at past infinity, which also equals the system's ADM mass \cite{Ashtekar:2019viz}. At future infinity, the Bondi mass $M$ is smaller than the initial value $M_0$ because the gravitational waves carry away the energy $E_{\rm GW}$, so $M=M_0-E_{\rm GW}$. The spacetime settles down to a Kerr black hole with mass $\tilde{M}$ at future infinity. Importantly, however, the center-of-mass frame at future infinity differs from the one at past infinity due to the kick velocity; the two asymptotic inertial frames are related by a boost transformation. Consequently $\tilde{M}=M/\gamma$, where $\gamma$ is the Lorentz factor. Note that, because of the kick, no single asymptotic inertial frame coincides with the center-of-mass frame at all times. The gravitational waveform calculated by numerical relativity is expressed in the asymptotic inertial frame attached to the initial center-of-mass frame, so it already includes the kick-velocity effect \cite{Varma:2020nbm,PhysRevLett.121.191102,PhysRevLett.117.011101}. Thus, taking the center-of-mass frame at past infinity as the asymptotic inertial frame, we have \cite{Ashtekar:2019viz}
\begin{align}
&\Psi_2^\circ(\infty,\theta,\phi)=-\frac{\tilde{M}}{\gamma^3}\times\nonumber\\
&\left(1-v_x\sin\theta\cos\phi-v_y\sin\theta\sin\phi-v_z\cos\theta\right)^{-3},\label{meq3}\\
&\gamma=\frac{1}{\sqrt{1-v^2}},
\end{align}
where $\vec{v}$ is the kick velocity.
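As a consistency check of Eq.~(\ref{meq3}), note that with $\vec{v}$ along the $z$-axis the sphere average of the boost kernel $(1-v\cos\theta)^{-3}$ equals $\gamma^4$, so the monopole of $\Psi_2^\circ(\infty)$ is enhanced by a factor $\gamma^4/\gamma^3=\gamma$ relative to a static source of mass $\tilde{M}$, consistent with $M=\gamma\tilde{M}$. A small Python sketch (ours, not part of the derivation) verifies the average numerically:

```python
import math

def boost_kernel_average(v, n=20000):
    """Sphere average of (1 - v*cos(theta))**-3 for a kick along +z (midpoint rule).

    Analytically this equals gamma**4 with gamma = 1/sqrt(1 - v**2).
    """
    h = math.pi / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += math.sin(theta) / (1.0 - v * math.cos(theta)) ** 3 * h
    # Divide by the sphere-average weight, int_0^pi sin(theta) d(theta) = 2.
    return total / 2.0
```

Physical kick velocities are of order $10^{-3}c$, for which $\gamma^4-1\sim10^{-6}$; the check below uses an exaggerated $v=0.3$ so the boost factor is clearly visible.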
Since both the gravitational wave energy $E_{\rm GW}$ and the kick velocity can be calculated from the non-memory modes, the right-hand side of Eq.~(\ref{meq2}) is completely determined by the non-memory modes $h_{lm}$, $m\neq0$. Once the non-memory modes are known, we can plug them into Eq.~(\ref{meq2}) and calculate the memory modes exactly.
If we neglect the kick velocity by setting $\vec{v}=0$ and neglect the contribution $\left.\dot{h}_{l'm'}\bar{h}_{l''m''}\right|_{t_1}^{t_2}$, our Eq.~(\ref{meq2}) recovers the assumed relation (\ref{assumeq}) (equivalently, Eq.~(3.3) of \cite{Fav09a}). The $\Psi_2^\circ$ term is understood as the ordinary memory part in \cite{2020arXiv200906351K}.
\section{Comparison to previous results}
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=0.5\textwidth]{fig1.eps}
\end{tabular}
\caption{Comparison of $h_{20}$ waveforms among the result of the current calculation (marked `new'), the post-Newtonian result (marked `PN'), and the numerical relativity result of Pollney and Reisswig (marked `PR') for binary black hole coalescence. The top row corresponds to a spinless equal-mass binary black hole. The bottom row corresponds to a spin-aligned equal-mass binary black hole with dimensionless spin parameters $\chi_{1}=\chi_{2}=0.8$. The right two plots are enlargements of the merger parts of the left two plots.}\label{mfig1}
\end{figure}
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=0.5\textwidth]{fig2.eps}
\end{tabular}
\caption{Memory waveforms for eccentric binary black holes. The eccentricity $e_0$ is the initial eccentricity at reference frequency $Mf_0=0.002$, obtained with the SEOBNRE model \cite{PhysRevD.101.044049}. The quasi-circular case marked $e_0=0$ corresponds to SXS:BBH:0070, the eccentric case marked $e_0=0.43$ to SXS:BBH:1357, and the eccentric case marked $e_0=0.59$ to SXS:BBH:1362. The right column shows enlargements of the inspiral parts of the waveforms in the left column.
}\label{mfig2}
\end{figure}
In the following we plug the SXS catalog results \cite{SXSBBH} for the non-memory modes $h_{lm}$, $m\neq0$, into Eq.~(\ref{meq2}) to calculate the memory waveform $h_{l0}$. More specifically, all $l=2,\ldots,8$ modes with $m=-l,\ldots,-1,1,\ldots,l$ are used. As an example, we compare the $h_{20}$ calculated through the above method with the numerical relativity result \cite{PolRei11} and the post-Newtonian result \cite{WisWil91,BlaDam92,Tho92,Fav09a,Fav09b,favata2009gravitational,favata2010gravitational,PhysRevD.84.124013,PhysRevD.95.084048,Cao16} in Fig.~\ref{mfig1}. Here the post-Newtonian result is obtained through Eq.~(8) of \cite{Cao16} based on the SEOBNRE model \cite{PhysRevD.96.044028,PhysRevD.101.044049}. The post-Newtonian line stops when the binary merger starts.
Fig.~\ref{mfig1} indicates that the result based on the current new method agrees quite well with the PN waveform, although deviations show up near merger. The new result also agrees quite well with the numerical relativity result. However, at the time the numerical relativity simulation starts (where the line marked `PR', Pollney and Reisswig, begins), the deviation between the PN waveform and the new result is already clear. Therefore, results obtained by attaching the PN approximation to numerical relativity simulations \cite{PolRei11,Cao16} may admit a systematic error. In general, the new result and the PR result are quantitatively consistent.
The authors of \cite{PolRei11} have computed more than ten binary black hole systems with equal mass and aligned spin. We confirm that all of those results are consistent with our calculation in the current work, similar to Fig.~\ref{mfig1}.
Favata found a strong effect of eccentricity on the memory waveform in \cite{PhysRevD.84.124013}, which results in an oscillation of $h_{20}$. We confirm this result in Fig.~\ref{mfig2}. However, if we consider the total GW memory amplitude $h^{tot}_{20}\equiv \left.h_{20}\right|_{-\infty}^{\infty}$ for binary black hole coalescence, the eccentricity effect is negligible for nearly equal-mass binary black hole systems with eccentricity $e_0<0.6$ at reference frequency $Mf_0=0.002$ \cite{PhysRevD.96.044028,PhysRevD.101.044049}.
The assumption (\ref{assumeq}) corresponds to the term
\begin{align}
&h^1_{l0}\bigg{|}_{t_1}^{t_2}=\sqrt{\frac{(l-2)!}{(l+2)!}}D\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{\mbox{\tiny$\begin{array}{c}
m'=-l',\\
m'\neq0\end{array}$}}^{l'}\sum_{\mbox{\tiny$\begin{array}{c}
m''=-l'',\\
m''\neq0\end{array}$}}^{l''}\nonumber\\
&\Gamma_{l'l''lm'-m''0}\int_{t_1}^{t_2}\dot{h}_{l'm'}\dot{\bar{h}}_{l''m''}dt\label{term1}
\end{align}
of (\ref{meq2}). We call this term 1 and denote it $h^1_{l0}$. The authors of \cite{2020arXiv200906351K} considered the ``linear'' (or, alternatively, ``ordinary'') memory contribution, which corresponds to the term
\begin{align}
&h^2_{l0}\bigg{|}_{t_1}^{t_2}=-\sqrt{\frac{(l-2)!}{(l+2)!}}\frac{4}{D}\left.\int \Psi_2^{\circ}[{}^0Y_{l0}]\sin\theta d\theta d\phi\right|_{t_1}^{t_2}\label{term2}
\end{align}
of (\ref{meq2}). We call this term 2 and denote it $h^2_{l0}$. In addition, our Eq.~(\ref{meq2}) includes the instantaneous contribution of $\dot{h}_{lm}$, $m\neq0$:
\begin{align}
&h^3_{lm}\bigg{|}_{t_1}^{t_2}=\sqrt{\frac{(l-2)!}{(l+2)!}}\sum_{l'=2}^{\infty}\sum_{l''=2}^{\infty}\sum_{m'=-l'}^{l'}\sum_{m''=-l''}^{l''}
\Gamma_{l'l''lm'-m''-m}\nonumber\\
&\times D\left(\dot{h}_{l'm'}(t_2)\bar{h}_{l''m''}(t_2)-\dot{h}_{l'm'}(t_1)\bar{h}_{l''m''}(t_1)\right).\label{term3}
\end{align}
We call this term 3 and denote it $h^3_{l0}$. We investigate the fractional contributions of these three terms in Fig.~\ref{mfig5} for the four cases shown in Fig.~\ref{mfig1} and Fig.~\ref{mfig2}. The ``linear'' memory (term 2) is always negligible. Term 3 contributes between 0.01\% and 1\% and, as expected, vanishes as $t\rightarrow\infty$; consequently, term 3 does not contribute to $h_{lm}^{\rm tot}$. These behaviors of terms 2 and 3 are common to all binary black holes.
\begin{figure}
\begin{tabular}{c}
\includegraphics[width=0.5\textwidth]{fig3a.eps}\\
\\
\includegraphics[width=0.5\textwidth]{fig3b.eps}
\end{tabular}
\caption{The fractional contributions of the three terms listed in Eqs.~(\ref{term1})-(\ref{term3}) for the four cases shown in Figs.~\ref{mfig1} and \ref{mfig2}.}\label{mfig5}
\end{figure}
\section{GW memory for binary black hole coalescence}
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{fig4a.eps}&
\includegraphics[width=0.5\textwidth]{fig4b.eps}
\end{tabular}
\caption{(a) Memory amplitude $h^{tot}_{20}$ of spin-aligned equal-mass binary black holes with respect to the effective spin $\chi_{\rm eff}\equiv\frac{q\chi_{1z}+\chi_{2z}}{1+q}$. `PR formula' means Eq.~(\ref{meq5}) (also Eq.~(8) of \cite{PolRei11}). (b) Memory amplitude of spinless binary black hole mergers with respect to the mass ratio $q\equiv m_1/m_2$.}\label{mfig3}
\end{figure*}
For equal-mass spin-aligned binary black hole systems, the authors of \cite{PolRei11} found the following relation between the GW memory amplitude $h^{tot}_{20}$ and the effective spin $\chi_{\rm eff}\equiv(m_1\chi_{1z}+m_2\chi_{2z})/M$ (Eq.~(8) of \cite{PolRei11}):
\begin{align}
\frac{D}{M}h^{tot}_{20}=&0.0969+0.0562\chi_{\rm eff}+0.0340\chi_{\rm eff}^2+\nonumber\\
&0.0296\chi_{\rm eff}^3+0.0206\chi_{\rm eff}^4.\label{meq5}
\end{align}
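For quick reference, Eq.~(\ref{meq5}) is a simple quartic polynomial; a Python one-liner (function name is ours) evaluates the predicted amplitude:

```python
def memory_amplitude_eq8(chi_eff):
    """(D/M) * h20_tot for equal-mass spin-aligned binaries, Eq. (8) of Pollney & Reisswig."""
    coeffs = [0.0969, 0.0562, 0.0340, 0.0296, 0.0206]
    return sum(c * chi_eff ** k for k, c in enumerate(coeffs))
```

A nonspinning equal-mass binary gives $\approx0.0969\,M/D$, while $\chi_{\rm eff}=1$ gives $\approx0.237\,M/D$, close to the maximal amplitude $\approx0.24\,M/D$ quoted in the Discussion.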
We confirm this formula with the SXS catalog in Fig.~\ref{mfig3}(a). As pointed out by the authors of \cite{PolRei11} and also explained in \cite{Cao16}, we likewise confirm that the memory amplitude $h^{tot}_{20}$ for equal-mass spin-aligned binary black holes is independent of the anti-symmetric part of the spin, $\chi_{\rm A}\equiv\frac{\chi_{1z}-\chi_{2z}}{2}$.
In Fig.~\ref{mfig3}(b), we investigate the GW memory amplitude of spinless binary black hole mergers with respect to the mass ratio $q\equiv m_1/m_2$. Based on the PN approximation, Favata \cite{Set09,Fav09b} showed that the memory of an equal-mass binary black hole is about $\frac{1}{24\pi^2}\sqrt{\frac{1543}{70}}\approx0.0198$, which is much less than the value 0.097 calculated here. Moreover, Favata \cite{Set09,Fav09b} estimated that the memory is proportional to the symmetric mass ratio $\eta=\frac{q}{(1+q)^2}$. Here we find that it decreases much faster than Favata estimated; instead, it roughly behaves as $\eta^{1.65}$.
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{fig5a.eps}&
\includegraphics[width=0.5\textwidth]{fig5b.eps}
\end{tabular}
\caption{Memory amplitude $h^{tot}_{20}$ for generic spin-aligned binary black holes with respect to the spin hang-up parameter $\chi_{\rm up}$ defined in (\ref{meq7}). Different colors indicate different mass ratios. Left plot: red, cyan and black represent $q=1$, $q=2$ and $q=3$, respectively. Right plot: the lines from top to bottom represent Eq.~(\ref{meq6}) with $q=4$, $q=5$, $q=6$, $q=7$, $q=8$ and $q=9.5$, respectively; the color of the points represents the mass ratio indicated by the color bar.}\label{mfig4}
\end{figure*}
For unequal-mass spin-aligned binary black holes, Eq.~(\ref{meq5}) no longer holds. In \cite{PhysRevD.101.044049} we found that the spin hang-up effect is the most important spin factor for the gravitational waveform. Interestingly, we find that this statement also holds for the memory. Following \cite{PhysRevD.101.044049}, we define a spin hang-up parameter as
\begin{align}
\chi_{\rm up}\equiv\chi_{\rm eff}+\frac{3}{8}\sqrt{1-4\eta}\chi_{\rm A}.\label{meq7}
\end{align}
This definition differs from Eq.~(7) of \cite{PhysRevD.101.044049}; the current definition makes $\chi_{\rm up}$ reduce to $\chi_{\rm eff}$ for equal-mass binary black holes. Based on this spin hang-up parameter and the relationship between the GW memory amplitude and the mass ratio, we find that the general behavior for generic spin-aligned binary black hole systems can be expressed as
\begin{align}
\frac{D}{M}h^{tot}_{20}=&[0.0969+0.0562\chi_{\rm up}+0.0340\chi_{\rm up}^2+\nonumber\\
&0.0296\chi_{\rm up}^3+0.0206\chi_{\rm up}^4](4\eta)^{1.65}.\label{meq6}
\end{align}
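Eqs.~(\ref{meq7}) and (\ref{meq6}) combine into a closed-form estimate for generic spin-aligned binaries; the following Python sketch (our naming and conventions, with $q=m_1/m_2\geq1$) evaluates it:

```python
import math

def chi_up(q, chi1z, chi2z):
    """Spin hang-up parameter of Eq. (meq7); q = m1/m2 >= 1."""
    eta = q / (1.0 + q) ** 2
    chi_eff = (q * chi1z + chi2z) / (1.0 + q)
    chi_a = (chi1z - chi2z) / 2.0
    # max(..., 0.0) guards against tiny negative round-off for equal masses.
    return chi_eff + 3.0 / 8.0 * math.sqrt(max(1.0 - 4.0 * eta, 0.0)) * chi_a

def memory_amplitude(q, chi1z, chi2z):
    """(D/M) * h20_tot from Eq. (meq6)."""
    eta = q / (1.0 + q) ** 2
    x = chi_up(q, chi1z, chi2z)
    poly = (0.0969 + 0.0562 * x + 0.0340 * x ** 2 +
            0.0296 * x ** 3 + 0.0206 * x ** 4)
    return poly * (4.0 * eta) ** 1.65
```

For equal masses ($4\eta=1$, $\chi_{\rm up}=\chi_{\rm eff}$) the expression reduces to Eq.~(\ref{meq5}), as intended.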
We validate the relation (\ref{meq6}) in Fig.~\ref{mfig4}, which shows that Eq.~(\ref{meq6}) does describe the main features of the behavior. For systems with mass ratio between 2 and 4, the effect of $\chi_{\rm A}$ is stronger, so the points do not fall perfectly on the line. We suspect this is because $\chi_{\rm A}$ contributes to the memory only through a combination of $\chi_{\rm A}$, $\sqrt{1-4\eta}$ and $\eta$, as in Eqs.~(\ref{meq7}) and (\ref{meq6}). For the remaining cases, Eq.~(\ref{meq6}) works very well. As the mass ratio increases, its effect decreases, which can be seen in Eq.~(\ref{meq6}); therefore, beyond $q=3$ we use gradually larger ranges to group the numerical data in Fig.~\ref{mfig4}.
\section{Discussion}
We have proposed a new method to accurately calculate the GW memory for spin-aligned binary black holes. Our calculation indicates that the strongest GW memory amplitude for binary black hole mergers corresponds to two equal-mass black holes with maximal aligned spins, with amplitude about $h^{tot}_{20}\approx0.24\frac{M}{D}$. If the two black holes do not spin, the amplitude is about $h^{tot}_{20}\approx0.1\frac{M}{D}$. Quantitatively, we find that the memory amplitude is described quite well by the spin hang-up parameter $\chi_{\rm up}$ and the symmetric mass ratio $\eta$.
Based on our new method, it is straightforward to apply the technique of \cite{PhysRevResearch.1.033015} to construct a highly accurate numerical relativity surrogate model for the GW memory. In the near future, detections of GW memory can be compared with the predictions of our method \cite{2020arXiv200906351K}, providing a test of general relativity.
\section*{Acknowledgments}
We thank Zhi-Chao Zhao for helpful discussions. This work was supported by the NSFC (No.~11690023). X. He was supported by NSF of Hunan province (2018JJ2073). Z. Cao was supported by ``the Fundamental Research Funds for the Central Universities", ``the Interdiscipline Research Funds of Beijing Normal University" and the Strategic Priority Research Program of the Chinese Academy of Sciences, grant No. XDB23040100.
Two-level quantum systems, called {\em qubits} by Schumacher \cite{bs}, play a fundamental role in quantum information theory. In this context they are usually treated as mathematical objects living in a two-dimensional Hilbert space. In reality, qubits always exist as material objects and we should not forget that they are endowed with concrete physical properties. In this paper we shall deal with two-level systems that interact directly with the electromagnetic field, such as spin one-half particles endowed with magnetic moment or two-level atoms. Thus, our results do not apply to qubits encoded in the polarization states of photons. We shall restrict ourselves in this paper to isolated qubits interacting only with the quantized electromagnetic field. Therefore, the calculated decay rates will include only the spontaneous emission.
A two-level system is the simplest model of a quantum system and yet in the presence of a coupling to the quantized electromagnetic field an exact solution has not been obtained. Even in the simplest case, when the electromagnetic field is restricted to just one mode, the model has been exactly solved only in the rotating-wave approximation by Jaynes and Cummings \cite{jc}. Among the approximate solutions, perturbation theory is still the most universal and effective tool, especially in the world of electromagnetic phenomena.
\begin{figure}
\centering
\includegraphics[scale=0.8]{Fig1.eps}
\caption{Two Feynman diagrams representing the elementary processes and their interpretation in terms of the Dirac-sea picture. The pair creation (a) corresponds to the photon absorption causing a transition (b) of the two-level system from the ground state to the excited state. The electron is moved from the negative energy state (creating a hole) to the positive energy state. The pair annihilation (c) corresponds to the inverse process (d). The electron jumps back from the positive to the negative energy state emitting a photon.}\label{Fig1}
\end{figure}
In the present paper we develop a systematic and complete theory based on an observation that a two-level system can be treated as a relativistic trapped electron. The translational degrees of freedom of such an electron are practically frozen. The only ``degree of freedom'' that remains is the electron's ability to undergo transitions between two discrete energy states. In order to fully unfold the connection between the QED and the theory of two-level systems, we shall perform the second quantization of the standard theory of qubits. The description of two-level systems in terms of creation and annihilation operators has been introduced before (cf., for example, \cite{l}) but no one has exploited the full potential of this formulation. The crucial new element in our formulation is the systematic use of Feynman diagrams. To expose a close analogy with the relativistic theory, including the form of the propagators, we shall choose the energy scale in such a way that the energy levels of the two-level system have opposite signs. In this way, we arrive at a picture of a two-level system that coincides with the Dirac-sea view of quantum electrodynamics. The ground state of the two-level system corresponds to the occupation of the negative energy state, while the excited state corresponds to the occupation of the positive energy state accompanied by a hole in the negative energy sea. The transition between these two states due to the interaction with a photon can be represented by the two elementary Feynman diagrams shown in Fig.~\ref{Fig1}.
There are significant advantages in using the Feynman diagrams and the Feynman propagators associated with these diagrams as compared to the standard perturbation theory used in nonrelativistic quantum mechanics.
First, we never need the formula for the ground state expressed in terms of the noninteracting particles. This is due to the stability of the ground state under the adiabatic switching-on of the interactions. In the Feynman approach the difference between the physical ground state of interacting particles and the ground state of noninteracting particles amounts only to the phase factor corresponding to all disconnected vacuum diagrams \cite{gml,fw}.
Second, a single Feynman amplitude combines several terms of the standard perturbation theory since in the Feynman approach all processes that differ only in the time ordering of the vertices are described by one Feynman amplitude (Fig.~\ref{Fig2}). The number of diagrams of the standard perturbation theory that are combined into one Feynman diagram grows exponentially with the number of vertices.
Third, there are many sophisticated tools available to evaluate and analyze Feynman propagators that greatly simplify the calculations and also give a deeper insight into the physical processes described by these propagators. In particular, we shall use the quantum linear response theory to calculate the atomic polarizability and the spin susceptibility from the Feynman propagators. Our formalism is not restricted to two-state systems. It can easily be generalized to many-state systems (qudits) and we analyze as an example a four-state system --- the atomic dipole --- to show that the whole framework can easily be extended to cover this case. The main message of our investigation is that the Feynman description of quantum phenomena, known for its elegance, versatility, and effectiveness in relativistic quantum field theory, also leads to significant simplifications in the theory of qubits. Of course, we are not trying to imply that qubits are relativistic objects. We shall only exploit formal similarities and use many available tools of a relativistic theory. Feynman propagators and Feynman diagrams in our approach should be treated as purely mathematical constructs introduced as a means to streamline and organize perturbation theory. They greatly simplify the calculations but they do not represent any physical objects.
There is a huge number of papers and even a monograph \cite{ae} dealing with the theory of two-level systems and its applications. We believe that the point of view described in this paper will further our understanding of these systems. Our research has been prompted by a recent calculation of the atomic polarizability by Loudon and Barnett \cite{lb}. Our results differ from their results in the fourth order of perturbation theory because they have not taken into account all the necessary corrections. The crossing symmetry of the polarizability, that played an important role in the derivation of the final result by Loudon and Barnett, is automatically satisfied in our formulation. In quantum field theory the crossing relations follow from the analytic properties of the propagators as functions of the energy parameter and from the direct connection between the polarizability and the retarded photon propagator. This connection enabled us to easily calculate the polarizability of a two-level atom and the spin susceptibility in the fourth order of perturbation theory by evaluating the contributions from only a few Feynman diagrams.
Our results clarify certain issues, like the opposite sign versus equal sign prescription or the damping in the ground state, that are still being debated \cite{sted0,bf,sted1,bf1,sted2,mb,bbm}. We show that {\em both sign prescriptions are correct} but they apply to different physical situations. The equal sign prescription is appropriate for the scattering situation when we control the initial and the final photon states. The opposite sign prescription is appropriate in the linear response theory when we control the initial state and also the form of the perturbation but we perform a summation over all final states. Thus, only the opposite-sign convention is appropriate for the calculation of the atomic polarizability. We also show that even though, as stated in \cite{ae}, ``A two-level atom is conceptually the same kind of object as a spin-one-half particle in a magnetic field'', the dynamical properties of these systems are quite different. The differences become significant in higher orders of perturbation theory.
Of course, one should keep in mind that our calculations of atomic polarizabilities should not be taken too seriously because the two-level model gives only a very crude description of a real atom. However, for a single spin system, our results are close to reality. The only approximation being made in this case is that the position of the spin is frozen --- the translational degrees of freedom are suppressed.
It has been fully recognized that quantum field theory would, in principle, give unambiguous answers to all such questions but the prevailing opinion that ``there are considerable difficulties associated with the treatment of optical damping in a non-phenomenological manner'' \cite{sted0} discouraged efforts to apply field-theoretic methods. In this paper we show how to overcome these ``considerable difficulties''. We formulate a theory that is simple because it follows all the rules of a well established theory and it also has an unambiguous interpretation because it is systematically derived from first principles.
In what follows we shall mostly use a convenient system of units in which $\hslash=1$, $c=1$, and $\mu_0=1$. Of course, in this system of units also $\epsilon_0=1$. More precisely, we express every physical quantity in powers of the meter and $\hslash,c,\mu_0$ (or $\epsilon_0$) and then we drop $\hslash$, $c$, $\mu_0$, and $\epsilon_0$ from the formulas. For example, the Bohr magneton in these units is $\mu_B = 5.847\times10^{-14}\,$m, Tesla is 1\,T = $5.017\times10^{15}\,$m$^{-2}$, and the electronvolt is 1\,eV = $5.068\times10^{6}\,$m$^{-1}$.
\begin{figure}
\centering
\includegraphics{Fig2.eps}
\caption{Two time orderings in the standard perturbation theory that are combined into one Feynman amplitude represented by one Feynman diagram.}\label{Fig2}
\end{figure}
\section{The model Hamiltonian}
The physical system that we shall have in mind is primarily a spinning electron trapped in a spherically symmetric potential subjected to a constant magnetic field and interacting with the quantized electromagnetic field and possibly an external time-varying electromagnetic field. We find it convenient to call this system the electron to stress the analogy with quantum electrodynamics although it is a highly reduced model of an electron. We shall treat in detail the spin system coupled to the electromagnetic field through its magnetic dipole but we shall also extend our analysis to atoms coupled through their electric dipole moments. There are two cases here that must be distinguished: the {\em literal two-level atom} that requires a two-dimensional Hilbert space and an atom with a {\em true electric dipole} moment that requires a four-dimensional Hilbert space that can accommodate the three-dimensional dipole vector.
The Hamiltonian $H=H_0+H_I$ for the spin system in the second-quantized form is
\begin{subequations}\label{hammag}
\begin{align}
H_0&=\int\!d^3r\,{\bm\psi}^\dagger({\bm r})H_0^e{\bm\psi}({\bm r})\nonumber\\
&+\frac{1}{2}\int\!d^3r:\!\left({\bm E}^2({\bm r})+{\bm B}^2({\bm r})\right)\!:\,,\label{hammag0}\\
H_I&=-\mu\int\!d^3r\,{\bm\psi}^\dagger({\bm r}){\bm\sigma}{\bm\psi}({\bm r})\!\cdot\!{\bm B}({\bm r}),\label{hammag1}
\end{align}
\end{subequations}
where $H_0^e$ is the quantum-mechanical Hamiltonian of the electron in the absence of the magnetic coupling and the colons, as usual, denote the normal ordering. We shall assume that the magnetic moment of the electron is coupled to a constant external magnetic field and to the quantized magnetic field. Next, we assume that only the spin degree of freedom is active. Therefore, we can retain only one term in the expansion of the electron field operator
\begin{align}\label{fop}
{\bm\psi}({\bm r})=\chi(r){\bm\psi},
\end{align}
where $\chi(r)$ is a fixed orbital electron wave function assumed to be spherically symmetric. The two-component fermionic operators are ${\bm\psi}^\dagger=(\psi^\dagger_e,\psi^\dagger_g)$ and ${\bm\psi}=(\psi_e,\psi_g)$. Their components create and annihilate the electron in the upper (excited) or lower (ground) energy state. Within this approximation, the Hamiltonian can be rewritten in the form
\begin{subequations}\label{ham}
\begin{align}
H_0&=\mu B_0{\bm\psi}^\dagger\sigma_z{\bm\psi} + \frac{1}{2}\int\!d^3r:\!\left({\bm E}^2({\bm r})+{\bm B}^2({\bm r})\right)\!:\,,\label{ham0}\\
H_I&=-\mu{\bm\psi}^\dagger{\bm\sigma}{\bm\psi} \!\cdot\!\!\int\!d^3r\,\rho(r){\bm B}({\bm r}).\label{ham1}
\end{align}
\end{subequations}
The parameter $\mu$ is the magnetic moment, $B_0$ is the constant magnetic field (pointing in the $z$-direction), and ${\bm\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ are the three Pauli matrices. In the interaction Hamiltonian the magnetic field operator ${\bm B}$ is averaged with the electron distribution function $\rho(r)=\chi^*(r)\chi(r)$ over the region where the trapped electron is localized.
The Hamiltonian $H=H_0+H_I$ conserves the number of electrons. It acts independently in each subspace with a given number of electrons. Since there are just two creation operators in this model, the electronic Fock space is four dimensional. It comprises a one-dimensional zero-particle subspace, a one-dimensional two-particle subspace, and a two-dimensional one-particle subspace spanned by the state vectors $\psi_e^\dagger|0\rangle$ and $\psi_g^\dagger|0\rangle$. This two-dimensional subspace will be our {\em qubit space}. The standard fermionic anticommutation relations
\begin{eqnarray}\label{stand}
\{\psi_i,\psi_j^\dagger\}=\delta_{ij},\;\;\{\psi_i,\psi_j\}=0,\;\;\{\psi_i^\dagger,\psi_j^\dagger\}=0
\end{eqnarray}
imply that the operators ${\bm\psi}^\dagger{\bm\sigma}{\bm\psi}$ annihilate the zero-particle and two-particle sectors, whereas in the qubit space they act as the Pauli matrices. Therefore, in the qubit subspace the Hamiltonian (\ref{ham}) is equivalent to the following one obtained from (\ref{ham}) by replacing all bilinear combinations ${\bm\psi}^\dagger{\sigma}_i{\bm\psi}$ of the operators ${\bm\psi}^\dagger$ and ${\bm\psi}$ by the corresponding Pauli matrices:
\begin{subequations}\label{pauli}
\begin{align}
H_0&=-\mu B_0\sigma_z + \frac{1}{2}\int\!d^3r:\!\left({\bm E}^2({\bm r})+{\bm B}^2({\bm r})\right)\!:\,,\label{pauli1}\\
H_I&=-\mu{\bm\sigma}\!\cdot\!\!\int\!d^3r\,\rho(r){\bm B}({\bm r}).\label{pauli2}
\end{align}
\end{subequations}
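The algebraic statements above --- that the bilinears ${\bm\psi}^\dagger\sigma_i{\bm\psi}$ act as Pauli matrices on the one-particle (qubit) subspace and annihilate the zero- and two-particle sectors --- can be verified directly in a small numerical sketch (Python with NumPy, using a Jordan--Wigner matrix representation of the two fermionic modes; all names are ours):

```python
import numpy as np

# Jordan-Wigner matrices for the two fermionic modes (e = excited, g = ground).
I2 = np.eye(2)
sm = np.array([[0., 1.], [0., 0.]])  # annihilates |1>, basis ordered (|0>, |1>)
sz = np.diag([1., -1.])
psi_e = np.kron(sm, I2)
psi_g = np.kron(sz, sm)

# Canonical anticommutation relations of Eq. (stand).
assert np.allclose(psi_e @ psi_e.conj().T + psi_e.conj().T @ psi_e, np.eye(4))
assert np.allclose(psi_g @ psi_g.conj().T + psi_g.conj().T @ psi_g, np.eye(4))
assert np.allclose(psi_e @ psi_g + psi_g @ psi_e, np.zeros((4, 4)))

pauli = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]]),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def bilinear(sigma):
    """The operator psi^dagger . sigma . psi as a 4x4 matrix on the Fock space."""
    ops = [psi_e, psi_g]
    return sum(sigma[j, k] * ops[j].conj().T @ ops[k]
               for j in range(2) for k in range(2))

# Fock-basis index = 2*n_e + n_g: 0 -> vacuum, 1 -> |g>, 2 -> |e>, 3 -> two-particle.
qubit = [2, 1]  # one-particle (qubit) subspace, ordered (excited, ground)
for name, sigma in pauli.items():
    S = bilinear(sigma)
    assert np.allclose(S[np.ix_(qubit, qubit)], sigma)  # acts as a Pauli matrix
    assert np.allclose(S[[0, 3], :], 0)  # annihilates 0- and 2-particle sectors
    assert np.allclose(S[:, [0, 3]], 0)
```

The assertions pass for all three Pauli matrices, which is precisely the content of the reduction from (\ref{ham}) to the qubit subspace.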
To stress the analogy between QED and quantum electrodynamics of two-level systems, from now on we shall denote the energy $\mu B_0$ by the letter $m$.
\subsection{Spin system as a dimensional reduction of QED}
The formulation that employs the electronic creation and annihilation operators will enable us to define new objects --- the propagators --- that do not appear in the standard description of a spin system. The electron propagators, being auxiliary objects without direct physical interpretation, fully deserve the name ``dead wood'', as Dirac \cite{dir} called them. However, a complete formulation of QED (including renormalization) without the propagators would be extremely complicated, if possible at all. We shall show that they are also very useful in the description of two-level systems.
The Hamiltonian (\ref{ham}) acts independently in each sector with a given number of electrons, but the electron creation and annihilation operators cause transitions between these sectors. This leads here, as in full QED, to a greater flexibility of the mathematical formalism and will allow us to introduce objects that are not available in the standard theory of qubits based on the Hamiltonian (\ref{pauli}). Long ago, the same idea was successfully applied to the study of the Ising chain \cite{bbp}, and that work served as an inspiration for the present research. The representation of the spin operators as bilinear expressions of the creation and annihilation operators is the key ingredient of our approach. It enables us to introduce the fermionic Feynman propagators and to employ the Wick theorem in its most convenient, field-theoretic form that leads directly to standard Feynman diagrams. In contrast, the use of the spin operators as basic variables does not lead to the Feynman rules in their simplest form known from QED.
In order to better explain the relation between QED and our treatment of two-level systems, let us observe that the Hamiltonian (\ref{ham}) can be obtained by the dimensional reduction from three to zero spatial dimensions. To carry out this reduction, we drop entirely the coordinate dependence and we disregard the integration in the QED Hamiltonian $H_D$ of the Dirac field
\begin{eqnarray}\label{dirac}
H_D=\int\!d^3r\,\left(c\psi^\dagger({\bm r}){\bm\alpha}\!\cdot\!{\bm p}\,\psi({\bm r}) + mc^2\psi^\dagger({\bm r})\beta\psi({\bm r})\right).
\end{eqnarray}
We keep only the mass term and we replace the Dirac field operator $\left(\psi_1({\bm r}),\psi_2({\bm r}),\psi_3({\bm r}),\psi_4({\bm r})\right)$ by the space-independent operators $(\psi_e,\psi_g)$. The operator $\psi_e$ annihilates the particle in the positive energy state and $\psi_g$ annihilates the particle in the negative energy state. The rest energy $m_0c^2$ of the electron is to be identified with $\mu B_0$. Despite these drastic simplifications, we shall still retain the full analogy with quantum electrodynamics. This will enable us to use the highly developed formalism of QED and also to gain deeper insights that go with it.
\subsection{Magnetic dipole Hamiltonian}
Under the assumption that only the spin degree of freedom is active and the orbital part of the electron wave function $\chi(r)$ is fixed and spherically symmetric, only the {\em magnetic dipole} component of the radiation field is coupled to the electron. Therefore, it is most convenient to employ the multipole expansion, i.e.\ the decomposition of the electromagnetic field into the eigenstates of the angular momentum. Then, the integration of the magnetic field vector with the spherically symmetric distribution in the interaction Hamiltonian (\ref{ham1}) eliminates all multipoles except the magnetic dipole. We present the details of this calculation in Appendix \ref{a1}. We shall rewrite the Hamiltonian (\ref{finham}) derived there as follows:
\begin{align}\label{finham1}
H&=m{\bm\psi}^\dagger\sigma_z{\bm\psi}+\sum_i\int_0^\infty\!dk\,\omega\,c_i^\dagger(k)c_i(k)\nonumber\\
&+{\bm\psi}^\dagger{\bm\sigma}{\bm\psi} \!\cdot\!\!\int_0^\infty\!dk\,g(k){\bm\phi}(k),
\end{align}
where we introduced the dipole vector field ${\bm\phi}(k)$ built from the Cartesian components of the annihilation and creation operators
\begin{eqnarray}\label{phi}
\phi_i(k)=\frac{c_i(k)+c_i^\dagger(k)}{\sqrt{2 k}}.
\end{eqnarray}
The formfactor $g(k)$ is defined in Eq.~(\ref{formf0}) and according to the formula (\ref{formf1}) it is proportional to the Fourier transform ${\tilde\rho}(k)$ of the distribution function $\rho(r)$
\begin{eqnarray}\label{formf}
g(k)=\frac{\mu\,k^2}{\pi\sqrt{3}}{\tilde\rho}(k).
\end{eqnarray}
The normalization condition imposed on $\rho$ requires that ${\tilde\rho}(0)=1$. Therefore, for small values of $k$ the formfactor behaves as $g(k)\approx \mu k^2/\pi\sqrt{3}$. To illustrate this property, let us consider the qubit realized as the spin degree of freedom of a nonrelativistic electron in the ground state of the Coulomb potential. In this case the distribution function $\rho(r)$ and the corresponding formfactor $g(k)$ are
\begin{subequations}\label{gs}
\begin{align}
\rho(r)&=\frac{1}{\pi a_0^3} e^{-2r/a_0},\\
g(k)&=\frac{\mu k^2}{\pi\sqrt{3}}\frac{1}{(1+k^2a_0^2/4)^2},
\end{align}
\end{subequations}
where $a_0$ is the Bohr radius.
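As a consistency check that is not part of the original derivation, the pair (\ref{gs}) can be verified symbolically. The sketch below assumes the convention ${\tilde\rho}(k)=\int\!d^3r\,\rho(r)e^{-i{\bm k}\cdot{\bm r}}$, which for a spherically symmetric distribution reduces to a one-dimensional integral:

```python
# Verify that the 3D Fourier transform of rho(r) = exp(-2r/a0)/(pi a0^3)
# is rho_tilde(k) = 1/(1 + k^2 a0^2/4)^2, with rho_tilde(0) = 1.
import sympy as sp

r, k, a0 = sp.symbols('r k a0', positive=True)
rho = sp.exp(-2*r/a0)/(sp.pi*a0**3)

# Spherically symmetric case:
# rho_tilde(k) = (4 pi / k) * Integral_0^oo r sin(k r) rho(r) dr
rho_tilde = sp.integrate(4*sp.pi*r*sp.sin(k*r)*rho/k, (r, 0, sp.oo))

expected = 1/(1 + k**2*a0**2/4)**2
assert sp.simplify(rho_tilde - expected) == 0
assert sp.limit(rho_tilde, k, 0) == 1  # normalization condition rho_tilde(0)=1
```

Multiplying by $\mu k^2/\pi\sqrt{3}$ then reproduces the quoted formfactor and its small-$k$ behavior $g(k)\approx\mu k^2/\pi\sqrt{3}$.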
The applicability of the model interaction Hamiltonian (\ref{finham1}) extends beyond the simplest case considered here. Should the distribution function $\rho(r)$ be of a more general character or the internal degrees of freedom be more complicated, the elimination of higher multipoles could still be justified as an approximation based on the smallness of the ratio of the atomic size to the wavelength.
\subsection{Two-level atom Hamiltonian}
In the case of a literal two-level atom considered by most authors, only {\em one component} of the electromagnetic field is coupled to the atom. Namely, the component that causes transitions between the ground state and one selected excited state. Therefore, it is sufficient to replace the three-component vector ${\bm\phi}(k)$ by a single component $\phi(k)$. In this way, we obtain the standard Hamiltonian for a two-level atom interacting with the quantized electromagnetic field in the form \cite{l,lb}
\begin{align}\label{hamlb}
H&=m\sigma_z+\sum_i\int_0^\infty\!dk\,\omega\,c_i^\dagger(k)c_i(k)\nonumber\\
&+\sigma_x\int_0^\infty\!dk\,{\hat g}(k)\phi(k),
\end{align}
which after the second quantization becomes
\begin{align}\label{hamlb1}
H&=m{\bm\psi}^\dagger\sigma_z{\bm\psi}+\sum_i\int_0^\infty\!dk\,
\omega\,c_i^\dagger(k)c_i(k)\nonumber\\
&+{\bm\psi}^\dagger\sigma_x{\bm\psi}\int_0^\infty\!dk\,{\hat g}(k){\phi}(k),
\end{align}
where the formfactor ${\hat g}(k)$,
\begin{eqnarray}\label{formf2}
{\hat g}(k)=\frac{d\,k^2}{\pi\sqrt{3}}{\tilde\kappa}(k),
\end{eqnarray}
is obtained from the formula (\ref{formf}) by replacing the magnetic dipole $\mu$ and its distribution function $\rho$ by the electric dipole $d$ and its distribution function $\kappa$. This natural prescription will be confirmed in the next subsection when we derive the interaction Hamiltonian for a true atomic dipole vector. We place a hat on the symbols of all quantities that refer specifically to two-level atoms to distinguish them from the corresponding quantities for the spin system.
\subsection{Electric dipole Hamiltonian}
The truncation of the atomic Hilbert space to only two dimensions does not allow for the construction of an atomic dipole vector that could be coupled to the electric dipole field. Such a construction can be carried out if we enlarge the Hilbert space of the relevant atomic states to four dimensions. We shall still have only two energy levels but in addition to the ground state we introduce three states corresponding to the degenerate upper level. This is precisely the situation in real atoms if the transitions take place between the ground S state and the three excited P states. The inclusion of all three P states leads to full rotational invariance. Using this specific example we show how to extend our formalism to $N$-level systems. The Hamiltonian $H=H_0+H_I$ expressed in the formalism of second quantization can now be written in the form (cf. Appendix \ref{a1})
\begin{align}\label{1hamel}
H&={\bm\psi}^\dagger {\breve m}{\bm\psi}+\sum_i\int_0^\infty\!dk\,\omega d_i^\dagger(k)d_i(k)\nonumber\\
&+{\bm\psi}^\dagger{\bm\tau}{\bm\psi} \!\cdot\!\!\int_0^\infty\!dk\,{\breve g}(k){\bm\phi}(k),
\end{align}
where we kept the same symbol ${\bm\phi}(k)$ to denote the electromagnetic field because the change from the magnetic dipole field to the electric dipole field does not change any of the mathematical properties of the field ${\bm\phi}(k)$. We introduced four annihilation and four creation operators corresponding to four atomic states. The operators for the ground state and the operators for the excited states in the Cartesian basis are combined into four-dimensional objects ${\bm\psi}=\{\psi_x,\psi_y,\psi_z,\psi_g\}$ and ${\bm\psi}^\dagger=\{\psi_x^\dagger,\psi_y^\dagger,\psi_z^\dagger,\psi_g^\dagger\}$. They obey the fermionic anticommutation relations (\ref{stand}). The matrices ${\breve m}$ and ${\bm\tau}$ are defined in Eqs.~(\ref{mat}). The derivation in Appendix \ref{a1} of the formula for the formfactor function ${\breve g}(k)$ gives the precise meaning to the dipole moment $d$ of the atomic transition and to the dipole distribution function $\kappa(r)$ and its transform ${\breve\kappa}(k)$:
\begin{align}\label{wg}
{\breve g}(k)=\frac{d\,k^2}{\pi\sqrt{3}}{\breve\kappa}(k).
\end{align}
Since for small values of $k$ we have $j_1(kr)\approx kr/3$, the function ${\breve\kappa}(k)$ has the same normalization as ${\tilde\rho}(k)$ --- it approaches 1 when $k\to 0$. In particular, for the P-S transitions in the hydrogen atom we obtain
\begin{subequations}\label{khyd}
\begin{align}
\kappa(r)&=\frac{er^2}{4\pi a_0^4d\sqrt{6}}\exp\left(-\frac{3r}{2a_0}\right),\\
{\breve g}(k)&=\frac{d\,k^2}{\pi\sqrt{3}}\frac{1}{\left(1+4k^2a_0^2/9\right)^3},\\
d&=\frac{2^{15/2}ea_0}{3^5}.
\end{align}
\end{subequations}
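The value of $d$ quoted in (\ref{khyd}) can be cross-checked against textbook hydrogen wave functions. The sketch below assumes that $d$ is to be identified with the matrix element $\langle 2p_z|ez|1s\rangle$; the standard radial functions then reproduce $d=2^{15/2}ea_0/3^5$ exactly:

```python
# Cross-check d = 2^(15/2) e a0 / 3^5 as the 1s-2p_z dipole matrix element.
import sympy as sp

r, a0, e = sp.symbols('r a0 e', positive=True)

# Standard hydrogen radial functions
R10 = 2*a0**sp.Rational(-3, 2)*sp.exp(-r/a0)                           # 1s
R21 = a0**sp.Rational(-3, 2)/(2*sp.sqrt(6))*(r/a0)*sp.exp(-r/(2*a0))   # 2p

# <2p_z| z |1s> = (radial integral) * (angular factor 1/sqrt(3))
radial = sp.integrate(R21*R10*r**3, (r, 0, sp.oo))
d = e*radial/sp.sqrt(3)

assert sp.simplify(d - 2**sp.Rational(15, 2)*e*a0/3**5) == 0
```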
\subsection{Conservation of angular momentum}
The interaction Hamiltonian for the spin system is invariant under all rotations since it is a scalar product of two vectors. However, the full Hamiltonian is invariant only under rotations around the $z$ axis since the free fermion Hamiltonian (\ref{finham1}) contains the $z$ component of the vector ${\bm\sigma}$. The physical origin of the symmetry breaking is the external magnetic field $B_0$ fixed along the $z$-axis. It splits the energy levels of the magnetic dipole and breaks the full rotational invariance. In contrast, the Hamiltonian for the electric dipole is invariant under the full rotation group. This invariance is possible because the Coulomb potential of the hydrogenic atom is rotationally symmetric and we have included all three components of the excited P state. These components form a vector representation of the rotation group.
The invariance of the Hamiltonian implies the commutativity of the angular momentum operator $M_z$ with $H$, leading to the conservation of $M_z$ in both cases. The angular momentum operators for the spin system and for the electric dipole are
\begin{align}\label{mtot}
M_i&=\frac{1}{2}{\bm\psi}^\dagger\sigma_i{\bm\psi}
-i\int_0^\infty\!\!\!dk\,\epsilon_{ijk}c^\dagger_j(k)c_k(k),\\
{\breve M}_i&={\bm\psi}^\dagger s_i{\bm\psi}
-i\int_0^\infty\!\!\!dk\,\epsilon_{ijk}d^\dagger_j(k)d_k(k),
\end{align}
where the spin-one matrices $s_i$ with elements $(s_i)_{jk}=-i\epsilon_{ijk}$ act in the subspace of excited states. Conservation of angular momentum during interaction becomes obvious when the angular momentum operator and interaction Hamiltonian are written in the angular momentum basis. We shall use the spin system to illustrate these properties. Let us construct the components of the magnetic dipole field ${\phi}_\pm(k)$ and ${\phi}_0(k)$ from the annihilation and creation operators of photons with the definite angular momentum $M_z=\pm 1,0$ introduced in the Appendix
\begin{subequations}
\begin{align}\label{phif}
&{\phi}_+(k)=\frac{c_-(k)-c_+^\dagger(k)}{\sqrt{2 k}},\\
&{\phi}_-(k)=\frac{c_-^\dagger(k)-c_+(k)}{\sqrt{2 k}}={\phi}_+^\dagger(k),\\
&{\phi}_0(k)=\frac{c_0(k)+c_0^\dagger(k)}{\sqrt{2 k}}.
\end{align}
\end{subequations}
The operators $M_z$ and $H_I$ take now the form
\begin{equation}\label{finmz}
M_z=\frac{1}{2}\psi^\dagger\sigma_z\psi +\int_0^\infty\!\!\!\!dk\,\left[c^\dagger_{+}(k)c_{+}(k)-c^\dagger_{-}(k)c_{-}(k)\right],
\end{equation}
\begin{align}\label{finham2}
H_I&=\psi^\dagger{\sigma}_+\psi\int_0^\infty\!\!\!dk\,g(k){\phi}_-(k)
+\psi^\dagger{\sigma}_-\psi\int_0^\infty\!\!\!dk\,g(k){\phi}_+(k)\nonumber\\
&+\psi^\dagger{\sigma}_z\psi\int_0^\infty\!dk\,g(k){\phi}_0(k),
\end{align}
where
\begin{align}\label{angsig}
\sigma_+=\frac{\sigma_x+i\sigma_y}{\sqrt{2}},\;\;\;\sigma_-=\frac{\sigma_x-i\sigma_y}{\sqrt{2}}.
\end{align}
The field ${\phi}_+(k)$ coupled to $\sigma_-$ annihilates the photon with $M_z=-1$ or creates the photon with $M_z=1$. Thus, it increases the angular momentum by one unit. The field ${\phi}_-(k)$ coupled to $\sigma_+$ decreases the angular momentum by one unit. Each term in the Hamiltonian (\ref{finham2}) conserves angular momentum. For example, when $\sigma_+$ transfers the electron from the ground state to the excited state increasing its angular momentum by one (the first term), the angular momentum of the electromagnetic field decreases by one unit. Similar analysis can be carried out for the electric dipole. Of course, for the literal two-level atom there is no invariance under rotation because only one angular momentum state of the photon interacts with the atom. Hence, only one component of the electronic P state (and not all three) can be excited.
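The raising and lowering action of $\sigma_\pm$ described above amounts to the commutation relations $[\sigma_z/2,\sigma_\pm]=\pm\sigma_\pm$, which can be checked directly (a numerical sketch, with $\sigma_\pm$ normalized as in (\ref{angsig})):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s_plus = (sx + 1j*sy)/np.sqrt(2)
s_minus = (sx - 1j*sy)/np.sqrt(2)

# sigma_+ raises the electronic M_z by one unit, sigma_- lowers it:
# [sigma_z/2, sigma_+] = +sigma_+  and  [sigma_z/2, sigma_-] = -sigma_-
assert np.allclose(sz/2 @ s_plus - s_plus @ sz/2, s_plus)
assert np.allclose(sz/2 @ s_minus - s_minus @ sz/2, -s_minus)
```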
\subsection{Time-reversal invariance}
Both theories, describing the spin and the two-level atom, are invariant under the time reversal. This invariance can be proven directly but it also follows from the fact that our models are obtained by the dimensional reduction from QED which has this property. Time-reversal invariance is an important requirement to obtain a correct description of the optical damping, as stressed in Ref.~\cite{sted1}. In what follows we shall make use of this invariance. Under the time reversal the signs of the frequency and angular momentum are reversed. Therefore, there is no need to calculate the photon propagator for the negative values of $M_z$ for the spin system because they can be obtained from those for the positive values by reversing the sign of the frequency. When the results are the same for positive and negative values of $M_z$, as is the case for the atomic system, time-reversal invariance means that the photon propagator is an even function of the frequency. The conservation of angular momentum and time-reversal invariance simplify the calculations since they reduce the number of Feynman integrals that are to be evaluated.
\section{Propagators and the $S$ matrix}
All transition amplitudes can be expressed in terms of Feynman propagators --- the expectation values in the ground state of the time-ordered products of field operators. Since we shall be working within perturbation theory, the most useful representation of the propagators is the one that is based on the perturbative expansion of the $S$ matrix. The relevant formula for the $S$ matrix is the following standard expansion into the time-ordered products of the interaction Hamiltonians \cite{dys}:
\begin{align}\label{sop}
S&=T\exp\left(-i\int\!dt\,H_I(t)\right)\nonumber\\
&\equiv\sum_{n=0}^\infty\frac{(-i)^n}{n!}\int\!dt_1\cdots\int\!dt_nT\left[H_I(t_1)\cdots H_I(t_n)\right].
\end{align}
The interaction Hamiltonian in this formula is taken in the Dirac picture. We shall introduce all the necessary theoretical tools starting with the spin system but later extending them to atoms by making obvious modifications. We will find it expedient, even though it is not necessary since there are no infinities, to perform the mass renormalization. This amounts, exactly like in QED, to adding the mass-correction term $\delta m\psi^\dagger\sigma_z\psi$ to the free Hamiltonian and subtracting the same term from the interaction Hamiltonian. In our case, the freedom of choosing $\delta m$ can be viewed as a mechanism to improve the convergence of perturbation theory. After the mass renormalization, the free Hamiltonian and interaction Hamiltonian in the Dirac picture become
\begin{align}\label{freeham}
&H_0=(m_0+\delta m){\bm\psi}^\dagger\sigma_z{\bm\psi}\nonumber\\
&+ \frac{1}{2}\int_0^\infty\!dk:\!\left({\bm\pi}^2(k)+k^2{\bm\phi}^2(k)\right)\!:,
\end{align}
\begin{align}\label{intham}
&H_I(t)=e^{iH_0t}H_Ie^{-iH_0t}\nonumber\\
&={\bm\psi}^\dagger(t){\bm\sigma}{\bm\psi}(t)\!\cdot\!\!\int_0^\infty\!dk\,g(k){\bm\phi}(k,t) -\delta m{\bm\psi}^\dagger(t)\sigma_z{\bm\psi}(t),
\end{align}
where ${\bm\pi}(k)$ is the canonically conjugate momentum
\begin{align}\label{canmom}
{\bm\pi}(k)=-i\sqrt{\frac{k}{2}}\left({\bm c}(k)-{\bm c}^\dagger(k)\right).
\end{align}
The time dependence of the operators ${\bm\psi}(t),{\bm\psi}^\dagger(t)$, and ${\bm\phi}(k,t)$ is determined by the renormalized fermionic Hamiltonian (\ref{freeham}) and it has the following form:
\begin{eqnarray}\label{fintpic}
{\bm\psi}(t)&=&\left(\begin{array}{c}\psi_e e^{-imt}\\\psi_ge^{imt}\end{array}\right),\nonumber\\
{\bm\psi}^\dagger(t)&=&\left(\psi_e^\dagger e^{imt},\psi^\dagger_ge^{-imt}\right),
\end{eqnarray}
where $m=m_0+\delta m$. The time dependence of the field ${\bm\phi}(k,t)$ is
\begin{eqnarray}\label{bintpic}
{\bm\phi}(k,t)=\frac{{\bm c}(k)e^{-i\omega t}+{\bm c}^\dagger(k)e^{i\omega t}}{\sqrt{2 k}}.
\end{eqnarray}
Note that due to our normalization, the electromagnetic field operators ${\bm\phi}(k,t)$ and ${\bm\pi}(k',t)={\dot{\bm\phi}}(k',t)$ satisfy the equal-time canonical commutation relations
\begin{eqnarray}\label{ccr0}
[\phi_i(k,t),\pi_j(k',t)]=i\delta_{ij}\delta(k-k').
\end{eqnarray}
In order to describe the interacting system, we need the propagators defined in terms of the field operators ${\bm\Psi}(t),{\bm\Psi}^\dagger(t)$, and ${\bm\Phi}(k,t)$ evaluated in the Heisenberg picture. We shall use lower case and upper case letters to keep the distinction between the Dirac (interaction) picture and the Heisenberg picture operators. The Heisenberg picture operators obey the following equations of motion:
\begin{subequations}\label{heqm}
\begin{align}
(i\partial_t-m_0\sigma_z)\Psi(t)&=\int_0^\infty\!dk g(k){\bm\sigma}\!\cdot\!{\bm\Phi}(k,t)\Psi(t),\label{heqm1}\\
(\partial_t^2+k^2){\bm\Phi}(k,t)&=-g(k)\Psi^\dagger(t){\bm\sigma}\Psi(t)\label{heqm2}.
\end{align}
\end{subequations}
The canonical equal-time commutation relations of the Heisenberg operators are the same as their free counterparts
\begin{subequations}\label{ccr}
\begin{align}
\left\{\Psi_\alpha(t),\Psi_\beta^\dagger(t)\right\}&=\delta_{\alpha\beta},\label{ccr1}\\
\left[\Phi_i(k,t),{\dot\Phi}_j(k',t)\right]&=i\delta_{ij}\delta(k-k')\label{ccr2}.
\end{align}
\end{subequations}
All remaining commutators or anticommutators vanish.
The perturbation expansion of the propagators can be obtained from the following formula \cite{gml,bd} by expanding the time-ordered exponential function into a power series according to Eq.~(\ref{sop}):
\begin{widetext}
\begin{align}\label{sprop}
\langle G|T\big[\Psi(t_1)\cdots\Psi(t_i)\Psi^\dagger(t_1')&\cdots\Psi^\dagger(t_i') \Phi(k_1,t_1'')\cdots\Phi(k_l,t_l'')\big]|G\rangle\nonumber\\
&=\frac{\langle g|T\left[\psi(t_1)\cdots\psi(t_i)\psi^\dagger(t_1')\cdots\psi^\dagger(t_i') \phi(k_1,t_1'')\cdots \phi(k_l,t_l'')\exp\left(-i\int\!dt\,H_I(t)\right)\right]|g\rangle}{\langle g|T\exp\left(-i\int\!dt\,H_I(t)\right)|g\rangle}.
\end{align}
\end{widetext}
We have omitted here all indices leaving only the dependence on time and on the wave vector. The operators on the left hand side of this equation are in the Heisenberg picture while those on the right hand side are all in the Dirac picture. In this formula $|G\rangle$ denotes the true ground state of the interacting system and $|g\rangle$ denotes the ground state of the free Hamiltonian $H_0$. In the state $|g\rangle$ there are no photons and the negative energy state of the electron is occupied. The advantage of using this fundamental result, already mentioned in the Introduction, is that the detailed knowledge of the ground state $|G\rangle$ is not needed. The difference between the state vectors $|G\rangle$ and $|g\rangle$ is just a phase factor and the denominator in the formula (\ref{sprop}) representing the contributions from all disconnected vacuum diagrams takes care of that.
\section{Feynman diagrams and Feynman rules}
In order to derive the Feynman rules that connect the Feynman diagrams with the corresponding transition amplitudes we start, as in QED, from the free field operators. The time evolution of these operators is given by Eqs.~(\ref{fintpic}) and (\ref{bintpic}).
The basic ingredients of the Feynman formulation of QED are the free one-electron propagator $S_F$ and one-photon propagator $D_F$. In our model they are defined as follows:
\begin{align}
S_{F\alpha\beta}(t-t')&=-i\langle g|T\left(\psi_\alpha(t)\psi^\dagger_\beta(t')\right)|g\rangle,\label{elprop}\\
\!\!D_{Fij}(k,k',t-t')&=-i\langle g|T\left(\phi_i(k,t)\phi_j(k',t')\right)|g\rangle,\label{phprop}
\end{align}
where $|g\rangle$ is the ground state of the system without interaction. We have introduced the photon propagator only for those photons that are coupled to the electron.
\subsection{Free electron propagators}
The free electron propagator is easily evaluated with the use of Eqs.~(\ref{fintpic}) taking into account that the only nonvanishing matrix elements of the bilinear product of the creation and annihilation operators are $\langle g|\psi_e\psi_e^\dagger|g\rangle=1$ and $\langle g|\psi_g^\dagger\psi_g|g\rangle=1$. Therefore, we obtain
\begin{align}\label{eprop0}
&iS_{F\alpha\beta}(t-t')\nonumber\\
=&\theta(t-t')\langle
g|\psi_\alpha(t)\psi^\dagger_\beta(t')|g\rangle\!
-\theta(t'\!-t)\langle g|\psi^\dagger_\beta(t')\psi_\alpha(t)|g\rangle\nonumber\\
=&\theta(t-t'){\mathbb
P}_{e\alpha\beta}e^{-im_e(t-t')}-\theta(t'-t){\mathbb
P}_{g\alpha\beta}e^{-im_g(t-t')},
\end{align}
where ${\mathbb P}_e=(1+\sigma_z)/2$ and ${\mathbb
P}_g=(1-\sigma_z)/2$ are the projection matrices on the upper and lower energy states, respectively. For the spin system and the two-level atom we have $m_e=m$ and $m_g=-m$. However, for the dipole atom these two parameters will be independent. The final result can be expressed in matrix notation (omitting the indices $\alpha$ and $\beta$)
as the following Fourier integral:
\begin{eqnarray}\label{eprop}
{S}_{F}(t-t')=\int_{-\infty}^\infty\!\frac{d p_0}{2\pi}{S}_{F}(p_0)e^{-ip_0(t-t')},
\end{eqnarray}
where ${S}_{F}(p_0)$ has the form
\begin{subequations}\label{eprop1}
\begin{align}
{S}_{F}(p_0)&=\frac{{\mathbb P}_e}{p_0-m_e+i\epsilon}+\frac{{\mathbb
P}_g}{p_0-m_g-i\epsilon}\label{epropa}\\
&=\frac{\sigma_z}{p_0{\sigma}_z-m+i\epsilon}\label{epropb}\\
&=\frac{1}{p_0-(m-i\epsilon){\sigma}_z}\label{epropc}.
\end{align}
\end{subequations}
The formula (\ref{epropa}) holds also for the atomic dipole when the excited states form a subspace. In what follows we shall use the same symbols ${\mathbb P}_e$ and ${\mathbb P}_g$ to denote the projectors in all three cases. It will be clear from the context whether ${\mathbb P}_e$ projects on the one-dimensional subspace (spin and two-level atom) or on the three-dimensional subspace (atomic dipole). As compared with the Fourier transform of the electron propagator in the relativistic theory $1/(\gamma\cdot p -m+i\epsilon)$, the two-level propagator (\ref{eprop1}) lacks the spatial part of the momentum vector and has the Pauli $\sigma_z$ matrix instead of $\gamma_0$. The presence of $\sigma_z$ in the numerator in Eq.~(\ref{epropa}) reflects the fact that we work with ${\bm\psi}^\dagger$ instead of $\bar{\bm\psi}={\bm\psi}\gamma_0$. We shall use the same symbols to denote the propagators and their Fourier transforms. The arguments will always indicate which is the case.
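The equality of the three forms (\ref{epropa})--(\ref{epropc}) can be confirmed by elementary matrix algebra; the sketch below treats $\epsilon$ as a finite parameter and uses $m_e=m$, $m_g=-m$:

```python
# Check that the three forms of S_F(p0) in Eq. (eprop1) coincide.
import sympy as sp

p0, m, eps = sp.symbols('p_0 m epsilon', positive=True)
sz = sp.Matrix([[1, 0], [0, -1]])
I2 = sp.eye(2)
Pe, Pg = (I2 + sz)/2, (I2 - sz)/2  # projectors on upper/lower energy states

Sa = Pe/(p0 - m + sp.I*eps) + Pg/(p0 + m - sp.I*eps)   # (epropa)
Sb = (p0*sz - (m - sp.I*eps)*I2).inv()*sz              # (epropb)
Sc = (p0*I2 - (m - sp.I*eps)*sz).inv()                 # (epropc)

assert sp.simplify(Sa - Sb) == sp.zeros(2, 2)
assert sp.simplify(Sa - Sc) == sp.zeros(2, 2)
```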
\subsection{Free photon propagators}
The free photon propagator is
\begin{align}\label{pprop0}
D_{Fij}(k,k',t-t')&=-i\theta(t-t')\langle g|\phi_i(k,t)\phi_j(k',t')|g\rangle\nonumber\\
&-i\theta(t'-t)\langle g|\phi_i(k',t')\phi_j(k,t)|g\rangle\nonumber\\
&=-i\frac{\delta_{ij}\delta(k-k')}{2k}e^{-i\omega|t-t'|}.
\end{align}
We shall also need its Fourier representation
\begin{align}
D_{Fij}(k,k',t-t')=\int\!\frac{dk_0}{2\pi}D_{Fij}(k,k',k_0)e^{-ik_0(t-t')},
\end{align}
where
\begin{subequations}\label{pprop}
\begin{align}
&D_{Fij}(k,k',k_0)=\frac{\delta_{ij}\delta(k-k')}{k_0^2-k^2+i\epsilon}\label{ppropa}\\
&=\frac{\delta_{ij}\delta(k-k')}{2k}\left(\frac{1}{k_0-k+i\epsilon}-\frac{1}{k_0+k-i\epsilon}\right)\label{ppropb}.
\end{align}
\end{subequations}
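The partial-fraction form (\ref{ppropb}) follows from (\ref{ppropa}); the two expressions differ only by terms that vanish in the $\epsilon\to 0$ limit implicit in the propagator. A symbolic sketch:

```python
import sympy as sp

k0, k, eps = sp.symbols('k_0 k epsilon', positive=True)

lhs = 1/(k0**2 - k**2 + sp.I*eps)                               # (ppropa)
rhs = (1/(k0 - k + sp.I*eps) - 1/(k0 + k - sp.I*eps))/(2*k)     # (ppropb)

# The difference is O(eps): both prescriptions define the same Feynman contour
assert sp.limit(sp.simplify(lhs - rhs), eps, 0) == 0
```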
All Feynman amplitudes can be constructed from the electron propagator (\ref{eprop}), the photon propagator (\ref{pprop}), the vertex, and the mass insertion following the same general rules as in QED. The starting point is the definition (\ref{sprop}) of a general propagator. In the $n$-th order of perturbation theory the contribution to the propagator is expressed as an expectation value of the time-ordered product of operators ${\bm\psi}, {\bm\psi}^\dagger$, and ${\bm\phi}$ integrated over $n$ time variables. In our model, as in the standard QED, all these expectation values can be evaluated with the help of the Wick theorem (cf., for example, \cite{bd,iz}). The only difference in applying this theorem is, in contrast to QED, that we have not interchanged the creation and annihilation operators for the negative energy state. Calling the electron in the ground state an antiparticle would stretch the analogy with QED too far. Therefore, in our case the normal ordering means that all operators $\psi^\dagger_e$ and $\psi_g$ stand to the left of all operators $\psi^\dagger_g$ and $\psi_e$.
\subsection{Feynman rules}
The scattering amplitudes in QED are commonly evaluated in momentum representation. In our case, the transformation to momentum representation means the transformation from the time domain to the frequency domain. The Feynman rules in the frequency domain are obtained by substituting everywhere the free electron propagators and photon propagators in the form of the Fourier integrals (\ref{eprop}) and (\ref{pprop}). Next, in the $n$-th order of perturbation theory we perform $n$ time integrations. Finally, we take the inverse Fourier transforms with respect to all remaining time arguments of the propagator (\ref{sprop}). These operations lead to the following Feynman rules:
\begin{itemize}
\item Each electron line corresponds to the Fourier transform of the electron propagator and is represented by $iS_F(p_0)$.
\item Each photon line corresponds to the Fourier transform of the photon propagator and is represented by $iD_{Fij}(k,k',k_0)$.
\item Each vertex is depicted by two electron lines and the photon line meeting at one point. It is represented by $-iV_i(k)=-ig(k)\sigma_i$. The energy conservation at each vertex results in the appearance of $2\pi\delta(p_0-q_0-k_0)$.
\item Each mass insertion is depicted by a cross where two electron lines meet. It is represented by $i\delta m\sigma_z$. The energy conservation at each mass insertion results in the appearance of $2\pi\delta(p_0-q_0)$.
\item All $2\times 2$ matrices corresponding to electron propagators are multiplied in the order indicated by the arrows on the diagram.
\item Each closed electronic loop brings in a minus sign and a trace over the matrix indices.
\item There is a summation over all repeated vector indices and an integration over all repeated values of the length of the wave vector.
\item There is one integration over the energy variable for each closed loop, accompanied by the division by $2\pi$.
\end{itemize}
These rules are summarized in Fig.~\ref{Fig3}. Calculations of the lowest order radiative corrections to the electron and photon propagators based on these rules are presented in Sections \ref{sep} and \ref{spp}.
\begin{figure}
\centering \vspace{0.5cm}
\includegraphics[scale=1.2]{Fig3.eps}
\caption{Feynman rules. For clarity, we have written explicitly all indices. }\label{Fig3}
\end{figure}
In the case of the two-level atom the only change in the Feynman rules as compared to the case of the spin system is that the elementary vertex is represented just by $-ig(k)\sigma_x$ and the photon propagator has no indices. In the case of the atom with an electric dipole the free photon propagator retains its form (\ref{pprop}). The free electron propagator must be taken in the general form (\ref{epropa})
\begin{align}\label{elpropgen}
S_F(p_0)=\frac{{\mathbb P}_e}{p_0-m_e+i\epsilon}+\frac{{\mathbb
P}_g}{p_0-m_g-i\epsilon}
\end{align}
and at each vertex the matrices ${\bm\sigma}$ must be replaced by the matrices ${\bm\tau}$.
\section{Radiative corrections}
Owing to the absence of the space components of momentum vectors, the calculation of radiative corrections is much simpler here than in full-fledged QED. There is no need to combine denominators \`a la Feynman and Schwinger. All integrations with respect to the loop variables $p_0$, $k_0$, etc.\ can be evaluated analytically by the residue method {\em in any order of perturbation theory}. At the end we will be left only with the integrals over the wave vectors of photons weighted with $g^2(k)$. Of course, those integrals cannot be evaluated if the function $g(k)$ is not specified.
\begin{figure}
\centering \vspace{0.5cm}
\includegraphics[scale=0.8]{Fig4.eps}
\caption{Feynman diagrams representing the lowest-order radiative corrections to the electron propagator, photon propagator, and the vertex part.}\label{Fig4}
\end{figure}
\begin{figure}
\centering \vspace{0.5cm}
\includegraphics[scale=0.8]{Fig5.eps}
\caption{Graphical representation of the relationship between the propagators and the corresponding self-energy parts. The double-lines represent full propagators and the gray box and circle represent the self-energy parts.}\label{Fig5}
\end{figure}
In order to explain how the calculations are done, let us consider an integral represented by an arbitrary Feynman diagram. The integrand is a product of electron and photon propagators. To perform all the integrations with respect to the loop variables, one may choose the electron propagator in the form (\ref{epropa}) and use the photon propagator in the form (\ref{ppropb}). The numerator of the integrand corresponding to each Feynman diagram is a polynomial in the integration variables. The denominator is a product of first-order polynomials in the integration variables, each factor leading to a simple pole. All integrations can easily be done by the standard residue method. Note that after each successive integration the integrand retains its rational form. Therefore, it will continue to be amenable to the same treatment as during the first integration. Alternatively, we may choose the interaction Hamiltonian in the angular momentum basis (\ref{finham2}). The following algebraic properties of the matrices $\sigma_\pm$ are then very useful:
\begin{subequations}\label{algprop}
\begin{eqnarray}
\sigma_+^2=0=\sigma_-^2,\;\;\sigma_+\sigma_-=2{\mathbb P}_e,\;\;\sigma_-\sigma_+=2{\mathbb P}_g,\\
\sum_n\sigma_n M \sigma_n = \sigma_+ M\sigma_- + \sigma_- M\sigma_+ +\sigma_z M\sigma_z,\\
\sum_n\!\sigma_n {\mathbb P}_e \sigma_n = {\mathbb P}_e+2{\mathbb P}_g,\;
\sum_n\!\sigma_n {\mathbb P}_g \sigma_n = {\mathbb P}_g+2{\mathbb P}_e,
\end{eqnarray}
\end{subequations}
where $M$ is an arbitrary matrix. With their help, and using the anticommutativity of $\sigma_\pm$ with $\sigma_z$, we can reduce every Feynman integral to a very simple form.
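All the identities (\ref{algprop}), as well as the equality of the Cartesian and angular-momentum forms of the vertex sum, are easily checked numerically (a minimal sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s_plus = (sx + 1j*sy)/np.sqrt(2)
s_minus = (sx - 1j*sy)/np.sqrt(2)
Pe, Pg = (np.eye(2) + sz)/2, (np.eye(2) - sz)/2

assert np.allclose(s_plus @ s_plus, 0) and np.allclose(s_minus @ s_minus, 0)
assert np.allclose(s_plus @ s_minus, 2*Pe) and np.allclose(s_minus @ s_plus, 2*Pg)

# sum_n sigma_n M sigma_n for an arbitrary matrix M, in both bases
M = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
cart = sx @ M @ sx + sy @ M @ sy + sz @ M @ sz
ang = s_plus @ M @ s_minus + s_minus @ M @ s_plus + sz @ M @ sz
assert np.allclose(cart, ang)

assert np.allclose(sx @ Pe @ sx + sy @ Pe @ sy + sz @ Pe @ sz, Pe + 2*Pg)
assert np.allclose(sx @ Pg @ sx + sy @ Pg @ sy + sz @ Pg @ sz, Pg + 2*Pe)
```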
In the case of a two-level atom the calculations are simpler than in the case of the spin system. Due to the appearance of only the $\sigma_x$ matrix in all vertices, the matrix algebra is almost trivial. In each integrand we can bring up front all $\sigma_x$ matrices using the relations $\sigma_x\sigma_z=-\sigma_z\sigma_x$ and $\sigma_x^2=1$. Therefore, each time we interchange the order of $\sigma_x$ and $\sigma_z$ in the electron propagator the sign of $\sigma_z$ must be reversed. Since there will be an even number of vertices in all the diagrams under consideration, the matrices $\sigma_x$ will disappear completely and we will be left with a diagonal matrix that contains only the matrices $\sigma_z$. The trace of such an expression is the sum of the terms corresponding to the eigenvalues $\pm 1$ of $\sigma_z$.
In the case of the electric dipole, the following algebraic properties of the ${\bm\tau}$ matrices,
\begin{eqnarray}\label{algprop1}
\tau_i{\mathbb P}_g={\mathbb P}_e\tau_i,\;\;\;\tau_i{\mathbb P}_e={\mathbb P}_g\tau_i,\;\;
\sum_n\tau_n\tau_n={\mathbb P}_e+3{\mathbb P}_g,
\end{eqnarray}
used in conjunction with the general form (\ref{epropa}) of the free electron propagator greatly reduce the number of integrals that are to be evaluated.
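The identities (\ref{algprop1}) can also be checked in an explicit representation. The representation below is an assumption consistent with these identities: a single ground state $|g\rangle$ and a triply degenerate excited level $|e_x\rangle,|e_y\rangle,|e_z\rangle$, with $\tau_i=|e_i\rangle\langle g|+|g\rangle\langle e_i|$:

```python
# Check of the tau-matrix identities (algprop1) in an explicit representation
# (an assumption: one ground state |g>, a triply degenerate excited level
# |e_x>, |e_y>, |e_z>, and tau_i = |e_i><g| + |g><e_i|).
import numpy as np

basis = np.eye(4)
g = basis[:, [0]]                        # |g>
e = [basis[:, [i]] for i in (1, 2, 3)]   # |e_x>, |e_y>, |e_z>
tau = [ei @ g.T + g @ ei.T for ei in e]
Pg = g @ g.T                             # projector on the ground state
Pe = sum(ei @ ei.T for ei in e)          # projector on the excited level

for t in tau:
    assert np.allclose(t @ Pg, Pe @ t)   # tau_i P_g = P_e tau_i
    assert np.allclose(t @ Pe, Pg @ t)   # tau_i P_e = P_g tau_i
assert np.allclose(sum(t @ t for t in tau), Pe + 3 * Pg)
print("tau identities verified")
```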
We shall show how these rules work in practice by calculating radiative corrections to the electron and photon propagators. The procedure employed very often in QED relates the full electron and photon propagators to the self-energy parts. This procedure enables one to go beyond the simplest version of perturbation theory and sum up an infinite (geometric) series. The self-energy is the sum of contributions from strongly connected diagrams, i.e. the diagrams that cannot be disconnected by cutting only one line. The relations between the full propagators and the self-energy parts are shown schematically in Fig.~\ref{Fig5}.
\section{Electron propagator}\label{sep}
In the case of the electron propagator $G_F(p_0)$ the relation between the propagator and the self-energy part $\Sigma(p_0)$, illustrated in Fig.~\ref{Fig5}a, reads
\begin{eqnarray}\label{g}
G_F(p_0)=S_F(p_0)+S_F(p_0)\Sigma(p_0)G_F(p_0).
\end{eqnarray}
All three objects that appear in this equation are $2\times 2$ matrices. The iterative solution of Eq.~(\ref{g}), which shows explicitly the relation between the propagator and the self-energy part, is
\begin{align}\label{git}
G_F(p_0)&=S_F(p_0)+S_F(p_0)\Sigma(p_0)S_F(p_0)\nonumber\\
&+S_F(p_0)\Sigma(p_0)S_F(p_0)\Sigma(p_0)S_F(p_0)+\dots.
\end{align}
This formal geometric series can be summed up to the following compact form:
\begin{eqnarray}\label{g1}
G_F(p_0)=\frac{1}{S_F^{-1}(p_0)-\Sigma(p_0)},
\end{eqnarray}
where the inverse is to be understood as the inverse of a matrix. The series (\ref{git}) without resummation is meaningless because it is divergent when $p_0\approx m$.
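The equivalence of the series (\ref{git}) and the resummed form (\ref{g1}) is easy to illustrate numerically whenever the series actually converges; the $2\times 2$ matrices in the sketch below are arbitrary illustrative choices, not taken from the model:

```python
# Sketch: when ||S Sigma|| < 1 the series S + S Sigma S + S Sigma S Sigma S + ...
# converges to (S^{-1} - Sigma)^{-1}; the matrices are arbitrary illustrations.
import numpy as np

S = np.array([[1.0, 0.3], [0.2, 1.5]])
Sigma = np.array([[0.10, 0.05], [0.02, 0.08]])

G = np.linalg.inv(np.linalg.inv(S) - Sigma)   # resummed propagator, Eq. (g1)

series = np.zeros((2, 2))
term = S.copy()
for _ in range(80):                           # partial sums of Eq. (git)
    series += term
    term = term @ Sigma @ S                   # next term S (Sigma S)^n
assert np.allclose(series, G)
print("geometric series agrees with the resummed form")
```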
The radiative corrections to the electron propagator in the second order of perturbation theory are represented by the three Feynman diagrams (a)--(c) shown in Fig.~\ref{Fig4}. The self-energy parts in this order for the spin system ${\Sigma}^{(2)}(p_0)$, for the two-level atom ${\hat\Sigma}^{(2)}(p_0)$, and for the dipole atom ${\breve\Sigma}^{(2)}(p_0)$, constructed according to the rules stated in the previous section, have the form
\begin{widetext}
\begin{subequations}
\begin{align}\label{sigma}
\Sigma^{(2)}(p_0)&=\Sigma^{(2a)}(p_0)+\Sigma^{(2b)}(p_0)+\Sigma^{(2c)}(p_0)= i\int_{-\infty}^{\infty}\!\frac{dk_0}{2\pi}\sum_{i}\int_0^\infty\!\!dk\sum_{j}\int_0^\infty\!\!dk'
V_i(k)S_F(p_0+k_0)V_j(k')D_{Fij}(k,k',k_0)\nonumber\\
&-i\int_{-\infty}^{\infty}\!\frac{dp'_0}{2\pi}\sum_{i}\int_0^\infty\!\!dk\sum_{j}\int_0^\infty\!\!dk'
{\mathrm Tr}\{V_i(k)S_F(p'_0)\}V_j(k')D_{Fij}(k,k',0)-\delta m\,\sigma_z,\\
{\hat\Sigma}^{(2)}(p_0)&= {\hat\Sigma}^{(2a)}(p_0)+{\hat\Sigma}^{(2c)}(p_0)= i\int_{-\infty}^{\infty}\!\frac{dk_0}{2\pi}\int_0^\infty\!\!dk\int_0^\infty\!\!dk'
V(k)S_F(p_0+k_0)V(k')D_{F}(k,k',k_0)-\delta{\hat m}\,\sigma_z,\\
{\breve\Sigma}^{(2)}(p_0)&= {\breve\Sigma}^{(2a)}(p_0)+{\breve\Sigma}^{(2c)}(p_0)= i\int_{-\infty}^{\infty}\!\frac{dk_0}{2\pi}\sum_{i}\int_0^\infty\!\!dk\sum_{j}\int_0^\infty\!\!dk'
V_i(k)S_F(p_0+k_0)V_j(k')D_{Fij}(k,k',k_0)-\delta{\breve m}.
\end{align}
\end{subequations}
The tadpole diagram (Fig.~\ref{Fig4}b) does not contribute in the case of the two-level atom and the dipole atom because ${\mathrm Tr}\{\sigma_x S_F(p_0)\}=0$ and ${\mathrm Tr}\{\tau_i S_F(p_0)\}=0$. The analytic expressions for the self-energy parts obtained by the application of the Feynman rules are
\begin{subequations}
\begin{align}\label{sigma1}
&\Sigma^{(2)}(p_0)
=i\sum_{n}\int_0^\infty\!\!dk\,g^2(k)\int_{-\infty}^{\infty}\!\frac{dk_0}{2\pi}
\sigma_n\frac{1}{p_0+k_0-(m-i\epsilon)\sigma_z}\sigma_n\frac{1}{k_0^2-k^2+i\epsilon}\nonumber\\
-i\sum_{n}\int_0^\infty\!\!dk\,g^2(k)\int_{-\infty}^{\infty}\!\frac{dp'_0}{2\pi}
{\rm Tr}\left\{\sigma_n\frac{1}{p'_0-(m-i\epsilon)\sigma_z}\right\}\sigma_n\frac{1}{-k^2+i\epsilon}
-\delta m\,\sigma_z,\\
&{\hat\Sigma}^{(2)}(p_0)
=i\int_0^\infty\!\!dk\,{\hat g}^2(k)\int_{-\infty}^{\infty}\!\frac{dk_0}{2\pi}
\sigma_x\frac{1}{p_0+k_0-(m-i\epsilon)\sigma_z}\sigma_x\frac{1}{k_0^2-k^2+i\epsilon}
-\delta{\hat m}\,\sigma_z,\\
&{\breve\Sigma}^{(2)}(p_0)
=i\sum_{n}\int_0^\infty\!\!dk\,{\breve g}^2(k)\int_{-\infty}^{\infty}\!\frac{dk_0}{2\pi}
\tau_n\left(\frac{{\mathbb P}_e}{p_0+k_0-m_e+i\epsilon}+\frac{{\mathbb
P}_g}{p_0+k_0-m_g-i\epsilon}\right)\tau_n\frac{1}{k_0^2-k^2+i\epsilon}
-\delta{\breve m},
\end{align}
\end{subequations}
where $\delta{\breve m}$ is the mass renormalization matrix with the eigenvalues $\delta m_e$ and $\delta m_g$.
With the use of the relations (\ref{algprop}) and (\ref{algprop1}) we can replace all matrices by the projectors
\begin{subequations}
\begin{align}\label{sigma2}
&\sum_{n}\sigma_n\frac{1}{p_0+k_0-(m-i\epsilon)\sigma_z}\sigma_n=\frac{2{\mathbb P}_e+{\mathbb P}_g}{p_0+k_0+m-i\epsilon}+\frac{2{\mathbb P}_g+{\mathbb P}_e}{p_0+k_0-m+i\epsilon},\\
&{\rm Tr}\left\{\sigma_n\frac{1}{p_0-(m-i\epsilon)\sigma_z}\right\}=\left\{\begin{array}{cc}
0&(n=x,y)\\
2m(p_0^2-m^2+i\epsilon)^{-1}&(n=z)
\end{array}\right.,\\
&\sigma_x\frac{1}{p_0+k_0-(m-i\epsilon)\sigma_z}\sigma_x=\frac{{\mathbb P}_e}{p_0+k_0+m-i\epsilon}+\frac{{\mathbb P}_g}{p_0+k_0-m+i\epsilon},\\
&\sum_{n}\tau_n\left(\frac{{\mathbb P}_e}{p_0+k_0-m_e+i\epsilon}
+\frac{{\mathbb P}_g}{p_0+k_0-m_g-i\epsilon}\right)\tau_n
=\frac{{\mathbb P}_e}{p_0+k_0-m_g-i\epsilon}+\frac{3{\mathbb P}_g}{p_0+k_0-m_e+i\epsilon}
\end{align}
\end{subequations}
and then we can easily perform the integrations over $k_0$ ($m_\lambda$ stands for $\pm m$, $m_e$, or $m_g$)
\begin{subequations}
\begin{align}\label{sigma3}
i\int_{-\infty}^{\infty}\!\frac{dk_0}{2\pi}\frac{1}{p_0+k_0- m_\lambda\mp i\epsilon}\frac{1}{k_0^2-k^2+i\epsilon}=\frac{1}{2k(p_0\pm k- m_\lambda\mp i\epsilon)},\;\;
i\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi}\frac{2m}{p_0^2-m^2+i\epsilon}=1.
\end{align}
\end{subequations}
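The $k_0$ integrals in Eqs.~(\ref{sigma3}) can be reproduced symbolically by the residue method described above. The sketch below treats the lower-sign case; it drops $\epsilon$ and tracks the pole positions by hand, since then the only pole in the upper half-plane is the photon pole at $k_0=-k$:

```python
# Residue check of the first integral in (sigma3), lower-sign case:
# i * Int dk0/(2 pi) 1/((p0+k0-m+i eps)(k0^2-k^2+i eps)) = 1/(2k(p0-k-m+i eps)).
# eps is dropped; closing the contour in the upper half-plane encloses only
# the photon pole at k0 = -k.
import sympy as sp

k0, k, p0, m = sp.symbols('k_0 k p_0 m', positive=True)
f = 1 / ((p0 + k0 - m) * (k0**2 - k**2))
# i * (2 pi i / (2 pi)) * Res = -Res at the enclosed pole
integral = -sp.residue(f, k0, -k)
assert sp.simplify(integral - 1 / (2*k*(p0 - k - m))) == 0
print(integral)
```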
Finally, we obtain
\begin{align}\label{fese}
\Sigma^{(2)}(p_0)=(2{\mathbb P}_e+{\mathbb P}_g)\int_0^\infty\!\!\frac{dk}{2k}\frac{g^2(k)}{p_0+k+m-i\epsilon}
+(2{\mathbb P}_g+{\mathbb P}_e)\int_0^\infty\!\!\frac{dk}{2k}\frac{g^2(k)}{p_0-k-m+i\epsilon}
+({\mathbb P}_e-{\mathbb P}_g)(m_t-\delta m),
\end{align}
\end{widetext}
where
\begin{align}\label{tad}
m_t=\int_0^\infty\!\!\frac{dk}{k^2} g^2(k).
\end{align}
Note that the contribution proportional to $m_t$, corresponding to the tadpole diagram, has the same form as the contribution from the mass correction.
For the two-level atom, we obtain
\begin{align}\label{fese1}
&{\hat\Sigma}^{(2)}(p_0)={\mathbb P}_e\int_0^\infty\!\!\frac{dk}{2k}\frac{\hat{g}^2(k)}{p_0+k+m-i\epsilon}\\
&+{\mathbb P}_g\int_0^\infty\!\!\frac{dk}{2k}\frac{\hat{g}^2(k)}{p_0-k-m+i\epsilon}
-({\mathbb P}_e-{\mathbb P}_g)\delta\hat{m}.
\end{align}
The electron self-energy part for the dipole atom is slightly more complicated
\begin{align}\label{fese2}
&{\breve\Sigma}^{(2)}(p_0)
={\mathbb P}_e\int_0^\infty\!\!\frac{dk}{2k}\frac{{\breve g}^2(k)}{p_0+k-m_g-i\epsilon}\\
&+3{\mathbb P}_g\int_0^\infty\!\!\frac{dk}{2k}\frac{{\breve g}^2(k)}{p_0-k-m_e+i\epsilon}
-{\mathbb P}_e\delta m_e-{\mathbb P}_g\delta m_g.
\end{align}
The mass corrections $\delta m$, $\delta{\hat m}$, and $\delta{\breve m}$ will be chosen so that the propagator $G(p_0)$ with radiative corrections has a pole at the renormalized mass. These pole conditions imply that $\Sigma^{(2)}(m\,\sigma_z)=0$ and ${\hat\Sigma}^{(2)}(m\,\sigma_z)=0$ and they give
\begin{subequations}
\begin{align}
\delta m&=\int_0^\infty\frac{dk}{2k^2}\,g^2(k)\frac{3k+2m}{k+2m},\\
\delta{\hat m}&=\int_0^\infty\frac{dk}{2k}\,{\hat g}^2(k)\frac{1}{k+2m}.
\end{align}
\end{subequations}
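One can check by elementary algebra that with this $\delta m$ the pole condition is satisfied identically under the integral; a symbolic sketch for the ${\mathbb P}_e$ component of (\ref{fese}) at $p_0=m$, written per unit $g^2(k)$:

```python
# At p0 = m the P_e component of Sigma^(2) in Eq. (fese) must vanish under the
# integral; the four terms below are, per unit g^2(k): the (2Pe+Pg) integral,
# the (2Pg+Pe) integral, the tadpole m_t, and the mass counterterm delta m.
import sympy as sp

k, m = sp.symbols('k m', positive=True)
pe_integrand = (2 / (2*k*(2*m + k))             # (2Pe+Pg) term at p0 = m
                - 1 / (2*k**2)                  # (2Pg+Pe) term at p0 = m
                + 1 / k**2                      # tadpole contribution m_t
                - (3*k + 2*m) / (2*k**2*(k + 2*m)))   # counterterm delta m
assert sp.simplify(pe_integrand) == 0
print("pole condition holds under the integral")
```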
For the dipole atom the mass corrections are different for the ground state and for the excited state --- the energy of the excited state is raised and the energy of the ground state, as is always the case, is pushed down
\begin{subequations}\label{mcorr}
\begin{align}
\delta m_e&=\int_0^\infty\frac{dk}{2k}\,\frac{{\breve g}^2(k)}{k+\Delta m},\\
\delta m_g&=-3\int_0^\infty\frac{dk}{2k}\,\frac{{\breve g}^2(k)}{k+\Delta m},
\end{align}
\end{subequations}
where $\Delta m=m_e-m_g$. All these mass corrections give frequency-independent shifts in the level separation. The electron propagators do not have a direct physical interpretation but they serve as important ingredients in the calculation of the photon propagators. In particular, we will need the mass corrections to complete the calculation of the spin susceptibility and the atomic polarizability in the fourth order of perturbation theory.
\section{Photon propagator}\label{spp}
The photon propagator plays a distinguished role in our formulation, much more so than the electron propagator, since it enables one to calculate several important physical characteristics of two-level systems. The propagation of photons is, of course, modified by the presence of a two-level system. The scattering of photons off a two-level system is the counterpart of an important phenomenon in QED --- the vacuum polarization.
The relation of the full photon propagator to the self-energy part ${\Pi}_{ij}(k,k',k_0)$ is illustrated in Fig.~\ref{Fig5}b. It is slightly more complicated than in the case of the electron propagator because, in addition to a multiplication of matrices in the space of the vector components, we must perform an integration over the wave vector $k$. The counterpart of Eq.~(\ref{g}) is
\begin{align}\label{gp}
&{\cal G}_{Fij}(k,k',k_0)\nonumber\\
&=D_{Fij}(k,k',k_0)+\sum_{l}\int_0^\infty\!dk_1\sum_{n}\int_0^\infty\!dk_2\nonumber\\
&\times D_{Fil}(k,k_1,k_0){\Pi}_{ln}(k_1,k_2,k_0){\cal G}_{Fnj}(k_2,k',k_0).
\end{align}
Taking into account the fact that $D_{Fij}(k,k',k_0)$ is proportional to the Kronecker $\delta_{ij}$ and the Dirac $\delta(k-k')$, we can rewrite this equation in the form
\begin{align}\label{gp1}
&{\cal G}_{Fij}(k,k',k_0)
=\frac{\delta_{ij}\delta(k-k')}{k_0^2-k^2+i\epsilon}
+\frac{g(k)}{k_0^2-k^2+i\epsilon}\nonumber\\
&\times\sum_{l}\int_0^\infty\!dk''\,g(k'')
{\cal P}_{il}(k_0){\cal G}_{Flj}(k'',k',k_0),
\end{align}
where we took advantage of the factorization of ${\Pi}_{ij}(k,k',k_0)$
\begin{eqnarray}\label{pi2p}
{\Pi}_{ij}(k,k',k_0)=g(k){\cal P}_{ij}(k_0)g(k').
\end{eqnarray}
The iteration of Eq.~(\ref{gp1}) leads to the following expansion:
\begin{align}\label{gp2}
&{\cal G}_{Fij}(k,k',k_0)\nonumber\\
&=\frac{\delta_{ij}\delta(k-k')}{k_0^2-k^2+i\epsilon}
+\frac{g(k)}{k_0^2-k^2+i\epsilon}{\cal P}_{ij}(k_0)\frac{g(k')}{k_0^2-k'^{\,2}+i\epsilon}\nonumber\\
&+\frac{g(k)}{k_0^2-k^2+i\epsilon}{\cal P}_{il}(k_0)\int_0^\infty\!\!dk''\frac{g^2(k'')}{k_0^2-k''^{\,2}+i\epsilon}\nonumber\\
&\times{\cal P}_{lj}(k_0)\frac{g(k')}{k_0^2-k'^{\,2}+i\epsilon}+\dots.
\end{align}
This geometric series can be summed up and the final formula is
\begin{align}\label{gpf}
{\cal G}_{Fij}(k,k',k_0)&=D_{Fij}(k,k',k_0)\nonumber\\
&+\frac{g(k)}{k_0^2-k^2+i\epsilon}T_{ij}(k_0)\frac{g(k')}{k_0^2-k'^{\,2}+i\epsilon}.
\end{align}
The transition matrix $T(k_0)$ has the following representation in terms of the self-energy part:
\begin{align}\label{t}
T(k_0)=\frac{{\cal P}(k_0)}{1+{\cal P}(k_0)h(k_0)}
=\frac{1}{{\cal P}(k_0)^{-1}+h(k_0)},
\end{align}
where
\begin{align}\label{h}
h(k_0)=\int_0^\infty\!\frac{dk\,g^2(k)}{k^2-k_0^2-i\epsilon}.
\end{align}
Both $T(k_0)$ and ${\cal P}(k_0)$ in Eq.~(\ref{t}) are to be treated as $3\times 3$ matrices and the matrix to the power of $-1$ is meant as the inverse matrix.
The function $h(k_0)$ will play an important role in our calculations because in the lowest order of perturbation theory its real part determines the shift in the position of the resonance and the imaginary part determines the width of the resonance
\begin{align}\label{t2}
\mathrm{Re}\,h(k_0)&={\rm P}\int_0^\infty\!\frac{dk\,g^2(k)}{k^2-k_0^2},\\
\mathrm{Im}\,h(k_0)&=\frac{\pi g^2(|k_0|)}{2|k_0|}.
\end{align}
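This split of $h(k_0)$ into real and imaginary parts is just the Sokhotski--Plemelj formula, and it can be checked numerically with a small but finite $\epsilon$. The form factor $g^2(k)=k^3e^{-k^2}$ used below is an illustrative assumption, not the one used in the paper:

```python
# Numerical check of Im h(k0) = pi g^2(|k0|)/(2|k0|) for a toy coupling
# g^2(k) = k^3 exp(-k^2) (an illustrative assumption) and finite epsilon.
import numpy as np
from scipy.integrate import quad

g2 = lambda k: k**3 * np.exp(-k*k)
k0, eps = 0.7, 1e-4

# Im of 1/(k^2 - k0^2 - i eps) is eps/((k^2-k0^2)^2 + eps^2) -> pi delta(k^2-k0^2)
im_integrand = lambda k: g2(k) * eps / ((k*k - k0*k0)**2 + eps**2)
im_h = quad(im_integrand, 0, 6, points=[k0], limit=400)[0]
im_h += quad(im_integrand, 6, np.inf)[0]        # negligible tail

exact = np.pi * g2(k0) / (2*k0)
assert abs(im_h - exact) < 0.01 * exact
print(im_h, exact)
```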
It follows from the assumptions that determine the validity of our model that the real part of $h(k_0)$ is practically constant and can be replaced by its value at 0 and the imaginary part varies as $k_0^3$. For example, when $\rho(r)$ and $g(k)$ are given by Eqs.~(\ref{gs}), we obtain
\begin{align}\label{gs1}
h(k_0)&=\frac{\mu^2\left(1+9\xi^2-9\xi^4-\xi^6+16i\xi^3\right)}{12\pi a_0^3(1+\xi^2)^4},
\end{align}
where $\xi=k_0a_0/2$. The value of the dimensionless parameter $\xi$ is very small in the range of wave vectors that cause the transitions between the two energy levels of our qubit. Thus, we can take only the leading terms and neglect higher powers of $\xi$ as compared to 1, to obtain
\begin{align}\label{h1}
h(k_0)\approx\frac{\mu^2}{12\pi a_0^3}+i\frac{\mu^2k_0^3}{6\pi}.
\end{align}
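The limit (\ref{h1}) follows from a straightforward Taylor expansion of (\ref{gs1}) in $k_0$; a symbolic check:

```python
# Symbolic check that the small-xi expansion of Eq. (gs1) reproduces Eq. (h1):
# constant real part mu^2/(12 pi a0^3) and imaginary part i mu^2 k0^3/(6 pi).
import sympy as sp

mu, a0, k0 = sp.symbols('mu a_0 k_0', positive=True)
xi = k0 * a0 / 2
h = mu**2 * (1 + 9*xi**2 - 9*xi**4 - xi**6 + 16*sp.I*xi**3) \
    / (12 * sp.pi * a0**3 * (1 + xi**2)**4)

ser = sp.expand(sp.series(h, k0, 0, 4).removeO())
assert sp.simplify(ser.coeff(k0, 0) - mu**2 / (12*sp.pi*a0**3)) == 0
assert sp.simplify(ser.coeff(k0, 3) - sp.I * mu**2 / (6*sp.pi)) == 0
print("expansion reproduces (h1)")
```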
The formulas (\ref{gpf}) and (\ref{t}) are also valid for the two-level atom and the dipole atom. In both cases $h(k_0)$ is defined by Eq.~(\ref{h}) where $g(k)$ should be replaced either by ${\hat g}(k)$ or by ${\breve g}(k)$. Of course, in the first case there are no vector indices --- ${\hat T}(k_0)$ and ${\hat{\cal P}}(k_0)$ are not matrices but ordinary functions. In the second case, as is seen in Eq.~(\ref{pid1}) below, owing to the full rotational invariance, the matrix ${\breve{\cal P}}(k_0)$ is proportional to $\delta_{ij}$.
\subsection{Second order of perturbation theory}
In the second order, the radiative correction to the photon propagator is represented by the diagram (a) in Fig.~\ref{Fig6}. The photon self-energy part, constructed according to the rules given in Fig.~\ref{Fig3} has the form
\begin{align}\label{pi1}
{\cal P}_{ab}^{(2)}(k_0) = -i\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi}
\mathrm{Tr}\left\{\sigma_aS_F(p_0+k_0)\sigma_bS_F(p_0)\right\}.
\end{align}
The indices $(a,b)$ may take the values $x,y$ and $z$ in the Cartesian basis or the values $+,-$ and 0 in the angular momentum basis. The matrices ${\bm\sigma}$ are to be replaced by $\sigma_x$ for the two-level atom and by the matrices ${\bm\tau}$ in the case of the atomic dipole.
For the spin system it is convenient to choose the interaction Hamiltonian in the angular momentum basis (\ref{finham2}) because in this basis the photon self-energy part is diagonal. The components of the self-energy in the angular momentum basis are ${\cal P}_{\pm}(k_0)$ and ${\cal P}_{0}(k_0)$. They correspond to the following choices of the matrices $\sigma$ in Eq.~(\ref{pi1}):
\begin{subequations}\label{pi11}
\begin{align}
{\cal P}_{+}(k_0):\;\;\sigma_a=\sigma_-,\;\sigma_b=\sigma_+\\
{\cal P}_{-}(k_0):\;\;\sigma_a=\sigma_+,\;\sigma_b=\sigma_-\\
{\cal P}_{0}(k_0):\;\;\sigma_a=\sigma_z,\;\sigma_b=\sigma_z.
\end{align}
\end{subequations}
Making use of the properties (\ref{algprop}) of the $\sigma$ matrices, we end up with the following integrals:
\begin{subequations}\label{pi2}
\begin{align}
{\cal P}^{(2)}_{\pm}(k_0)
&=2\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi i}
\frac{1}{p_0+k_0\mp m\pm i\epsilon}\frac{1}{p_0\pm m\mp i\epsilon}\nonumber\\
&=-\frac{2}{2m\mp k_0},\\
{\cal P}^{(2)}_{0}(k_0)&=0.\label{pi2b}
\end{align}
\end{subequations}
The component ${\cal P}^{(2)}_{0}(k_0)$ vanishes because in the corresponding integrals both residues lie in the same half-plane. The relation ${\cal P}^{(2)}_{-}(k_0)={\cal P}^{(2)}_{+}(-k_0)$ is a direct confirmation of the time-reversal invariance. The angular momentum components of the transition matrix $T(k_0)$ obtained by substituting these self-energy parts into Eq.~(\ref{t}) are
\begin{subequations}\label{ts}
\begin{align}
T_{\pm}^{(2)}(k_0)&=-\frac{2}{2m\mp k_0-2h(k_0)}\\
T_{0}^{(2)}(k_0)&=0.
\end{align}
\end{subequations}
For the two-level atom we must take $a=x$ and $b=x$ in Eq.~(\ref{pi1}). After evaluating the trace, the integral reduces to the sum of two simple integrals
\begin{align}\label{pse2}
&{\hat{\cal P}}^{(2)}(k_0) = -i\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi}\nonumber\\
&\times\mathrm{Tr}\left\{\sigma_x\frac{1}{p_0+k_0-(m-i\epsilon)\sigma_z}\sigma_x
\frac{1}{p_0-(m-i\epsilon)\sigma_z}\right\}\nonumber\\
&=\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi i}
\frac{1}{p_0+k_0+m-i\epsilon}\,\frac{1}{p_0-m+i\epsilon}\nonumber\\
&+\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi i}\frac{1}{p_0+k_0-m+i\epsilon}\,\frac{1}{p_0+m-i\epsilon}.
\end{align}
The result of the integrations is
\begin{align}\label{pse3}
{\hat{\cal P}}^{(2)}(k_0)=-\frac{4m}{4m^2-k_0^2}
\end{align}
and it leads to the following formula for $\hat{T}(k_0)$ in the lowest order of perturbation theory:
\begin{align}\label{ta1}
\hat{T}^{(2)}(k_0)=-\frac{4m}{4m^2-k_0^2-4m{\hat h}(k_0)}.
\end{align}
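The two $p_0$ integrals in Eq.~(\ref{pse2}) leading to (\ref{pse3}) can again be done by residues. In the sketch below $\epsilon$ is dropped and the enclosed poles are identified by hand: closing the contour in the upper half-plane, the first integral encloses the pole at $p_0=-k_0-m$ and the second the pole at $p_0=-m$:

```python
# Residue evaluation of the two integrals in Eq. (pse2): each integral
# (1/(2 pi i)) Int dp0 ... equals the sum of residues enclosed in the upper
# half-plane; eps is dropped and the enclosed poles are identified by hand.
import sympy as sp

p0, k0, m = sp.symbols('p_0 k_0 m', positive=True)
f1 = 1 / ((p0 + k0 + m) * (p0 - m))   # enclosed pole: p0 = -k0 - m
f2 = 1 / ((p0 + k0 - m) * (p0 + m))   # enclosed pole: p0 = -m
P2 = sp.residue(f1, p0, -k0 - m) + sp.residue(f2, p0, -m)
assert sp.simplify(P2 + 4*m / (4*m**2 - k0**2)) == 0
print(sp.simplify(P2))
```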
For the dipole atom the contribution represented by the diagram (a) in Fig.~\ref{Fig6} leads to the following expression for the self-energy part:
\begin{align}\label{pid1}
{\breve{\cal P}}_{ij}^{(2)}(k_0)&=\delta_{ij}\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi i}
\frac{1}{p_0+k_0-m_e+i\epsilon}\frac{1}{p_0-m_g-i\epsilon}\nonumber\\
&+\delta_{ij}\int_{-\infty}^{\infty}\!\frac{dp_0}{2\pi i}
\frac{1}{p_0+k_0-m_g-i\epsilon}\frac{1}{p_0-m_e+i\epsilon}\nonumber\\
&=-\delta_{ij}\frac{2\Delta m}{\Delta m^2-k_0^2}.
\end{align}
This leads to the transition matrix of the form
\begin{align}\label{ta}
\breve{T}^{(2)}_{ij}(k_0)=-\delta_{ij}\frac{2\Delta m}{\Delta m^2-k_0^2-2\Delta m{\breve h}(k_0)}.
\end{align}
\begin{figure}
\centering \vspace{0.5cm}
\includegraphics[scale=0.8]{Fig6.eps}
\caption{Photon self-energy diagrams in the second and fourth order of perturbation theory.}\label{Fig6}
\end{figure}
\subsection{Fourth order of perturbation theory}
The calculation of the photon self-energy part up to the fourth order of perturbation theory requires the evaluation of all the contributions represented by the Feynman diagrams (b)--(h) shown in Fig.~\ref{Fig6}. These calculations are presented in Appendix \ref{a2}. Upon substituting the results of this calculation into Eq.~(\ref{t}), we obtain the formula for the transition matrix. However, there is an additional problem now that was not present in the calculation of the self-energy part in the lowest order. In the formulas for the self-energy parts (\ref{all1}) of the spin system we encounter double poles $1/(2m-k_0)^2$ and $1/(2m+k_0)^2$. Such terms indicate a breakdown of simple perturbation theory since for $k_0\approx 2m$ the fourth-order terms dominate over the second-order terms. The remedy comes from the realization that these double poles simply indicate an additional shift in the position of the resonance. Indeed, expanding $1/(2m-k_0-\delta)$ into powers of $\delta$ we encounter higher-order poles. We encountered the same problem in the expansion of the electron and photon propagators into the perturbation series, but there we were able to sum up the whole geometric series. Here, we can do it only order by order. In the present case, we can eliminate the double pole in the formulas (\ref{all}) by the following substitution:
\begin{align}\label{rear}
\frac{1}{2m-k_0}+\frac{\delta}{(2m-k_0)^2} \to \frac{1}{2m-k_0-\delta},
\end{align}
which correctly reproduces the lowest-order correction in $\delta$. Higher powers of $\delta$ contribute to higher orders of perturbation theory. Applying this procedure to the expressions for the self-energy parts (\ref{all}) we obtain the following formulas that do not suffer from the double-pole problem:
\begin{subequations}\label{all5}
\begin{align}
&{\cal P}_{\pm}^{(2+4)}(k_0)=-\frac{2(1-b)}{2m\mp k_0-\delta},\\
&{\cal P}_{0}^{(2+4)}(k_0)={\cal P}_{0}^{(4)}(k_0)\nonumber\\
&=-4\int_0^\infty\!\frac{dk}{k}\frac{g^2(k)}{k+2m}\,\frac{1}{(k+2m)^2-k_0^2-i\epsilon}.
\end{align}
\end{subequations}
Therefore, the transition matrix in this order is
\begin{subequations}\label{t5}
\begin{align}
&{T}_{\pm}^{(2+4)}(k_0)=-\frac{2(1-b)}{2m\mp k_0-\delta-2(1-b)h(k_0)},\label{t5a}\\
&{T}_{0}^{(2+4)}(k_0)={\cal P}_{0}^{(2+4)}(k_0).\label{t5b}
\end{align}
\end{subequations}
The last equation follows from the fact that, as seen from Eq.~(\ref{pi2b}), ${\cal P}_{0}$ does not contain terms of the second order.
The results for the two-level atom in the fourth order are even simpler since there is only one component of the self-energy part and there are no double poles. Substituting the self-energy part (\ref{pse2l}) into the formula (\ref{t}), we obtain
\begin{align}\label{f5}
\hat{T}^{(2+4)}(k_0)=-\frac{4m(1-\hat{b})}{4m^2-k_0^2-4m(1-\hat{b})\hat{h}(k_0)}.
\end{align}
The transition matrix for the dipole atom in the fourth order, obtained from the self-energy part (\ref{pseda}), has the same general form as that for the two-level atom
\begin{align}\label{fb5}
\breve{T}^{(2+4)}_{ij}(k_0)=-\delta_{ij}\frac{2\Delta m(1-\breve{b})}{\Delta m^2-k_0^2-2\Delta m(1-\breve{b})\breve{h}(k_0)}.
\end{align}
\section{Photon scattering amplitude}
The photon scattering amplitude $f_{ij}(\omega)$ can be obtained \cite{bd,iz} from the photon propagator (\ref{gpf}) by stripping off the free propagators at both ends and putting the whole expression on the energy shell $k_0=\omega,\,k= \omega,\,k'=\omega$. Therefore the scattering amplitude is related to the transition matrix by the formula
\begin{align}\label{scatta}
f_{ij}(\omega)=g^2(\omega)T_{ij}(\omega).
\end{align}
The argument $\omega$ of the scattering amplitude can only take positive values because the photon energy is positive.
In the second order of perturbation theory, the self-energy part for the spin system is given by Eq.~(\ref{pi2}). Therefore, according to Eq.~(\ref{scatta}), the photon scattering amplitude for a spin system is
\begin{subequations}\label{t4}
\begin{align}
f_\pm^{(2)}(\omega)&=-\frac{2g^2(\omega)}{2m-\Delta(\omega)\mp\omega-i\Gamma(\omega)},\\
f_0^{(2)}(\omega)&=0,
\end{align}
\end{subequations}
where the energy-dependent shift and width in this order according to Eqs.~(\ref{t2}) are
\begin{align}\label{ws}
{\Delta}(\omega)&={\rm P}\int_0^\infty\!\frac{dk\,2g^2(k)}{k^2-\omega^2},\\
{\Gamma}(\omega)&=\frac{\pi g^2(\omega)}{|\omega|}.
\end{align}
Owing to the angular momentum conservation, the photons with $M_z=0$ do not scatter in the lowest order because such photons cannot cause a direct transition from the lower to the upper state. They may cause such a transition provided it is accompanied by a simultaneous emission of a photon with $M_z=-1$, but this is a higher-order process. Indeed, in the fourth order the scattering amplitude for $M_z=0$, as seen from Eq.~(\ref{t5b}), does not vanish. In comparison with the standard Breit-Wigner resonance formula, our $\Gamma(\omega)$ is equal to the half-width. As a result of the angular momentum conservation, the amplitude with $M_z=1$ is resonant but the one with $M_z=-1$ is not. These two amplitudes correspond to two possible time orderings illustrated in Fig.~\ref{Fig2}.
The amplitude for the scattering of a photon off a two-level atom in the second order is obtained from Eq.~(\ref{pse3}) for the transition matrix
\begin{align}\label{t1}
{\hat f}^{(2)}(\omega)=-\frac{4m{\hat g}^2(\omega)}{4m^2-\omega^2-4m{\hat\Delta}(\omega)-4mi{\hat\Gamma}(\omega)},
\end{align}
where
\begin{align}\label{ws1}
{\hat\Delta}(\omega)&={\rm P}\int_0^\infty\!\frac{dk\,\hat{g}^2(k)}{k^2-\omega^2},\\
{\hat\Gamma}(\omega)&=\frac{\pi \hat{g}^2(\omega)}{2|\omega|}.
\end{align}
The standard resonance behavior can be seen after rewriting ${\hat f}(\omega)$ in a different form. Disregarding the square of ${\hat\Delta}(\omega)+i{\hat\Gamma}(\omega)$ and its product with $\hat{g}^2$, since they are both of the fourth order, we can approximately decompose the scattering amplitude (\ref{t1}) into the following sum of simple fractions:
\begin{align}\label{t3}
{\hat f}^{(2)}(\omega)\approx-\frac{\hat{g}^2(\omega)}{2m-{\hat\Delta}(\omega)-\omega-i{\hat\Gamma}(\omega)}\nonumber\\
-\frac{\hat{g}^2(\omega)}{2m-{\hat\Delta}(\omega)+\omega-i{\hat\Gamma}(\omega)}.
\end{align}
The first term is clearly resonant and the second is not. This expression agrees with the equal sign prescription --- the width ${\hat\Gamma}(\omega)$ enters with {\em the same sign} in both terms. Thus, the equal sign prescription, advocated in Ref.~\cite{sted2}, is appropriate for the photon scattering amplitude but not for the polarizability, as will be explained later. The scattering amplitude off a dipole atom has the same general form as for the two-level atom, so that our analysis applies also to this case.
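The decomposition (\ref{t3}) rests on two exact algebraic identities; with $z$ standing for ${\hat\Delta}+i{\hat\Gamma}$, dropping the pieces of order $z^2$ and of order $z$ times a nonresonant factor (both of fourth order in the coupling) gives (\ref{t3}). A symbolic check of both identities:

```python
# Two exact identities behind Eq. (t3), with z standing for Delta + i Gamma:
# (i)  4m^2 - w^2 - 4mz = (2m - z - w)(2m - z + w) - z^2,
# (ii) -4m/((2m-z-w)(2m-z+w)) = -(1/(2m-z-w) + 1/(2m-z+w)) * 4m/(4m - 2z).
# Dropping z^2 and the z-correction to the last factor (both fourth order in
# the coupling) yields the two simple fractions of (t3).
import sympy as sp

m, w, z = sp.symbols('m omega z')
A, B = 2*m - z - w, 2*m - z + w
assert sp.expand((4*m**2 - w**2 - 4*m*z) - (A*B - z**2)) == 0
assert sp.simplify(-4*m/(A*B) + (1/A + 1/B) * 4*m/(4*m - 2*z)) == 0
print("identities verified")
```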
\section{Linear response theory}
We shall use the quantum linear response theory (cf., for example, \cite{fw}) to relate the Feynman propagators to the important physical characteristics of two-level systems: the spin susceptibility and the atomic polarizability. Linear response theory describes the reaction of a quantum system to a weak external perturbation. In the linear response theory, changes in the expectation values of observables are expressed in terms of retarded propagators. In our opinion, most of the controversies in the treatment of damping resulted from the lack of a clear distinction between the scattering of photons (described by the S matrix and the Feynman propagators) and the time evolution of the expectation values (described by the solutions of the Heisenberg equations of motion and the retarded propagators). For the spin system, the observables are the spin (or the magnetic moment) components $\Psi^\dagger{\bm\sigma}\Psi$. For the two-level atom, the observable is the atomic induced dipole represented (up to a constant factor) by the operator $\Psi^\dagger{\sigma}_x\Psi$. The spin susceptibility determines the response of the magnetic moment to the applied magnetic field and the polarizability determines the response of the atom to the applied electromagnetic field. These external perturbations are represented by the terms $\Psi^\dagger{\bm\sigma}\Psi\!\cdot\!\delta{\bm\varphi}(t)$ or $\Psi^\dagger\sigma_x\Psi\delta{\varphi}(t)$ added to the Hamiltonian. The change in the average spin value produced by such a perturbation is \cite{fw}
\begin{align}\label{change}
&\delta\langle G|S_i(t)|G\rangle\nonumber\\
&=-i\int_{-\infty}^{\infty}\!dt'\theta(t-t')\langle G|[S_i(t),S_j(t')]|G\rangle \delta\varphi_j(t').
\end{align}
The spin operators in this formula are in the Heisenberg picture ${\bm S}(t)=\Psi^\dagger(t){\bm\sigma}\Psi(t)$. Since the ground state $|G\rangle$ is stationary with some energy $E_G$, we can write
\begin{align}\label{stat}
&\langle G|[S_i(t),S_j(t')]|G\rangle\nonumber\\
&=\langle G|S_i(0)e^{-i(H-E_G)(t-t')}S_j(0)|G\rangle\nonumber\\
&-\langle G|S_j(0)e^{i(H-E_G)(t-t')}S_i(0)|G\rangle.
\end{align}
Thus, the integral in Eq.~(\ref{change}) is a convolution and we can transform this equation to a simple algebraic relation between the Fourier transforms
\begin{align}\label{rft}
\delta\langle G|{\tilde S}_i(\omega)|G\rangle =\chi_{ij}(\omega)\delta{\tilde\varphi}_j(\omega).
\end{align}
The spin susceptibility $\chi_{ij}(\omega)$ describes the response of the spin to a monochromatic external magnetic field and is defined by the Kubo formula \cite{kubo}
\begin{align}\label{susc}
\chi_{ij}(\omega)=-i\int_{-\infty}^0\!dt\,e^{-i\omega t+\epsilon t}\langle G|[S_i(0),S_j(t)]|G\rangle,
\end{align}
where the damping factor $e^{\epsilon t}$ guarantees that the applied field has been switched on adiabatically.
The corresponding formula for the polarizability of a two-level atom reads
\begin{align}
&\delta\langle G|S_x(t)|G\rangle\nonumber\\
&=-i\int_{-\infty}^{\infty}\!dt'\theta(t-t')\langle G|[S_x(t),S_x(t')]|G\rangle \delta\varphi(t'),\label{change1}\\
&\alpha(\omega)=iA\int_{-\infty}^0\!dt e^{-i\omega t+\epsilon t}\langle G|[S_x(0),S_x(t)]|G\rangle.\label{change2}
\end{align}
For a single two-level atom, the constant factor $A$ is usually (cf., for example, \cite{mb,bbm}) given the value $A=d^2/3\hslash$, where $d$ measures the strength of the dipole transition. We have reversed the sign in the definition of $\alpha(\omega)$, as compared to the definition of the spin susceptibility (\ref{susc}), to be in agreement with the standard Kramers-Heisenberg-Dirac expression for the polarizability.
The expectation value of the retarded commutator of the spin operators appearing in (\ref{change}) is directly related to the retarded photon propagator ${\cal G}_{Rij}$
\begin{align}\label{retph}
{\cal G}_{Rij}(k,k',t-t')=-i\theta(t-t')\langle G|[\Phi_i(t),\Phi_j(t')]|G\rangle.
\end{align}
Indeed, with the use of the Heisenberg equations of motion (\ref{heqm2}) for the field ${\bm\Phi}$ and the canonical commutation relations (\ref{ccr2}), we obtain
\begin{align}\label{rel}
&(\partial_t^2+k^2)(\partial_t'^{\,2}+k'^{\,2}){\cal G}_{Rij}(k,k',t-t')\nonumber\\
&=-(\partial_t^2+k^2)\delta(t-t')\delta_{ij}\delta(k-k')\nonumber\\
&-ig(k)g(k')\theta(t-t')\langle G|[S_i(t),S_j(t')]|G\rangle.
\end{align}
After the Fourier transformation with respect to $t-t'$, this relation becomes
\begin{align}\label{frel}
&(\omega^2-k^2)(\omega^2-k'^{\,2}){\cal G}_{Rij}(k,k',\omega)\nonumber\\
&=(\omega^2-k^2)\delta_{ij}\delta(k-k')+g(k)g(k')\chi_{ij}(\omega).
\end{align}
Analogous relations hold for the two-level atom. Thus, the susceptibility and the polarizability are simply proportional to the retarded photon propagator.
\subsection{Spectral representation}
One of the advantages of using the methods of relativistic field theory is that the analytic properties of the propagators become explicit. The retarded photon propagator {\em cannot be calculated} by a direct application of the Feynman-Dyson perturbation theory. However, once we determine the photon Feynman propagator, the retarded propagator can be unambiguously reconstructed. In order to prove this assertion, we shall follow the same procedure that in a relativistic quantum field theory leads to the K{\"a}ll{\'e}n-Lehmann representation. Starting from the general definition of the photon Feynman propagator we arrive at the following formula (cf., for example, \cite{fw}):
\begin{align}\label{ppros}
{\cal G}_{Fij}&(k,k',t-t')=-i\theta(t-t')\langle G|\Phi_i(k,t)\Phi_j(k',t')|G\rangle\nonumber\\
&-i\theta(t'-t)\langle G|\Phi_j(k',t')\Phi_i(k,t)|G\rangle\nonumber\\
&=-i\theta(t-t')\int_0^\infty\!\!dM\,e^{-iM(t-t')}A_{ij}(M,k,k')\nonumber\\
&-i\theta(t'-t)\int_0^\infty\!\!dM\,e^{iM(t-t')}A_{ji}(M,k',k),
\end{align}
where the spectral matrix $A_{ij}(M,k,k')$ is defined as follows:
\begin{align}\label{spec}
&A_{ij}(M,k,k')=A^*_{ji}(M,k',k)\nonumber\\
&=\sum_n\delta(M-E_n+E_G)\langle G|\Phi_i(k,0)|n\rangle \langle n|\Phi_j(k',0)|G\rangle,
\end{align}
where $\{|n\rangle\}$ is any complete set of stationary states of the system and $E_G$ is the energy of the ground state. Therefore, the Fourier transform ${\cal G}_{Fij}(k,k',k_0)$ of the propagator can be written in the form of a spectral representation
\begin{align}\label{ppros1}
&{\cal G}_{Fij}(k,k',k_0)\nonumber\\
&=\int_0^\infty\!dM\,\left(\frac{A_{ij}(M,k,k')}{k_0-M+i\epsilon}
-\frac{A^*_{ij}(M,k,k')}{k_0+M-i\epsilon}\right).
\end{align}
Repeating the same procedure for the retarded propagator defined in Eq.~(\ref{retph}), we obtain its spectral representation
\begin{align}\label{retprs}
&{\cal G}_{Rij}(k,k',k_0)\nonumber\\
&=\int_0^\infty\!dM\,\left(\frac{A_{ij}(M,k,k')}{k_0-M+i\epsilon}
-\frac{A^*_{ij}(M,k,k')}{k_0+M+i\epsilon}\right).
\end{align}
The spectral matrices are the same and {\em the only difference} is the change of the sign of the $\epsilon$-term in the second part. Since $M\ge 0$, this is equivalent to the replacement of $i\epsilon$ in the denominators of the Feynman propagator by $i{\rm sgn}(k_0)\epsilon$. This ``epsilon rule'' in the simplest case of a one-component propagator reduces to the standard rule of quantum linear response theory \cite{fw}:
\begin{subequations}\label{rvsf}
\begin{align}
{\rm Re}\,{\cal G}_{R}(k,k',k_0)&={\rm Re}\,{\cal G}_{F}(k,k',k_0),\\
{\rm Im}\,{\cal G}_{R}(k,k',k_0)&={\rm sgn}(k_0){\rm Im}\,{\cal G}_{F}(k,k',k_0).
\end{align}
\end{subequations}
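These relations can be illustrated numerically. The following minimal sketch (not part of the derivation) assumes a single-pole spectral density $A(M')=A\,\delta(M'-M)$ with illustrative values of $A$, $M$, and the small width $\epsilon$, and checks the relations (\ref{rvsf}) for the one-component propagators (\ref{ppros1}) and (\ref{retprs}):

```python
import numpy as np

# Illustrative single-pole spectral density: the parameters A, M and the
# small width eps are arbitrary choices, not taken from the text
A, M, eps = 1.0, 1.0, 1e-6

def g_feynman(k0):
    # Feynman propagator: poles at M - i*eps and -M + i*eps
    return A / (k0 - M + 1j * eps) - A / (k0 + M - 1j * eps)

def g_retarded(k0):
    # Retarded propagator: both poles lie in the lower half-plane
    return A / (k0 - M + 1j * eps) - A / (k0 + M + 1j * eps)

for k0 in (1.0, -1.0, 0.5, -0.5):
    gf, gr = g_feynman(k0), g_retarded(k0)
    # The real parts coincide identically ...
    assert np.isclose(gr.real, gf.real)
    # ... while the imaginary parts differ by sgn(k0), up to O(eps) terms
    assert np.isclose(gr.imag, np.sign(k0) * gf.imag, atol=1e-4)
```

The real parts coincide exactly, while the sign rule for the imaginary parts becomes exact in the limit $\epsilon\to 0$.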
These simple relations do not hold in general because the imaginary unit may appear not only together with $\epsilon$ but also in other places. In particular, they do not hold for the spin system in the Cartesian basis. However, in the angular momentum basis the photon propagator is diagonal. Therefore, we can treat each component separately and use the relations (\ref{rvsf}) to obtain the components of susceptibility from the Feynman propagator (or more precisely from the scattering matrix)
\begin{subequations}\label{susc1}
\begin{align}
{\rm Re}\,\chi_{a}(\omega)&={\rm Re}\,T_{a}(\omega),\\
{\rm Im}\,\chi_{a}(\omega)&={\rm sgn}(\omega){\rm Im}\,T_{a}(\omega),
\end{align}
\end{subequations}
where the subscript $a$ takes on the values $\pm$ and $0$. Here, unlike in the formula (\ref{scatta}) for the scattering amplitude, in Eqs.~(\ref{susc1}) the frequency $\omega$ takes on positive and negative values because the real function $\varphi(t)$ describing an external perturbation must contain both positive and negative frequencies. This important difference must be kept in mind when discussing the relations between the scattering and the linear response.
The relations between the Feynman propagator and the retarded propagator in the simple form (\ref{rvsf}) hold for the two-level atom and for the dipole atom. In the first case there is just one scalar function from the very beginning. In the second case, due to the conservation of the three components of angular momentum, the propagator is proportional to $\delta_{ij}$ so that it effectively reduces to just one function.
We shall confirm the validity of the spectral representations of the photon propagator by direct calculations in perturbation theory.
Since our theory is invariant under time reversal, the retarded propagator will automatically satisfy the general crossing relation
\begin{align}\label{cross}
{\cal G}_{Rij}(k,k',-k_0)&={\cal G}^*_{Rji}(k,k',k_0).
\end{align}
This relation for a two-level atom implies that the atomic polarizability satisfies the condition $\alpha^*(\omega)=\alpha(-\omega)$. The significance of this condition has been emphasized in Ref.~\cite{lb}.
\subsection{Spin susceptibility}\label{ss}
The components of the photon transition matrix $T(k_0)$ evaluated in Sec.~\ref{spp} give the following formula for the spin susceptibility in the lowest order of perturbation theory:
\begin{subequations}\label{chi2}
\begin{align}
&\chi_\pm^{(2)}(\omega)=-\frac{2}{2m\mp\omega-2\left(\Delta(\omega)\pm i\,{\rm sgn}(\omega)\Gamma(\omega)\right)},\\
&\chi_0^{(2)}(\omega)=0.
\end{align}
\end{subequations}
From the transition matrix (\ref{t5}) we can obtain the spin susceptibility in the fourth order
\begin{subequations}\label{chi4}
\begin{align}
&\chi_\pm^{(2+4)}(\omega)\nonumber\\
&=-\frac{2(1-b)}{2m\mp\omega-\delta-2(1-b)\left(\Delta(\omega)\pm i\,{\rm sgn}(\omega)\Gamma(\omega)\right)},\\
&\chi_0^{(2+4)}(\omega)={\cal P}_0^{(2+4)}(\omega).
\end{align}
\end{subequations}
There is only one term for each transition but the opposite sign prescription is still visible. The sign of the imaginary part in the denominator depends on the sign of $\omega$.
\subsection{Atomic polarizability}\label{ap}
In this Section we shall use the Hamiltonian (\ref{hamlb}) for the two-level atom to calculate the photon propagator up to the fourth order of perturbation theory and use it to find the polarizability. All Feynman diagrams corresponding to the radiative corrections that will be taken into account in our calculation are shown in Fig.~\ref{Fig6}. The frequency-dependent atomic polarizability $\alpha(\omega)$ can be obtained from the functions ${\hat T}(k_0)$ and ${\breve T}(k_0)$ by changing their imaginary parts according to the prescription (\ref{susc1}).
The second-order self-energy part for the two-level atom is given in Eq.~(\ref{pse3}). Substituting this expression into the formula (\ref{t}) and changing the sign of the imaginary part, like in Eq.~(\ref{susc1}), we obtain the following expression for the atomic polarizability (\ref{change2}):
\begin{align}\label{pol1}
&\alpha^{(2)}(\omega)\nonumber\\
&=\frac{4mA}{4m^2-\omega^2-4m\left({\hat\Delta}(\omega)+i\,{\rm sgn}(\omega){\hat\Gamma}(\omega)\right)}.
\end{align}
In order to see that this expression obeys the opposite sign prescription we could write it in the form of a spectral representation. However, following the treatment of the photon scattering amplitude, we shall convert this expression into partial fractions, neglecting again higher-order terms
\begin{align}\label{pol2}
&\alpha^{(2)}(\omega)\approx\frac{A}{2m-{\hat\Delta}(\omega)-\omega-i\,{\rm sgn}(\omega){\hat\Gamma}(\omega)}\nonumber\\
&+\frac{A}{2m-{\hat\Delta}(\omega)+\omega-i\,{\rm sgn}(\omega){\hat\Gamma}(\omega)}.
\end{align}
Depending on the sign of $\omega$, either the first or the second term is resonant. Therefore, if we only care about the important resonant terms, we can write this formula as
\begin{align}\label{pol2a}
&\alpha^{(2)}(\omega)\approx\frac{A}{2m-{\hat\Delta}(\omega)-\omega-i{\hat\Gamma}(\omega)}\nonumber\\
&+\frac{A}{2m-{\hat\Delta}(\omega)+\omega+i{\hat\Gamma}(\omega)}.
\end{align}
In other words, this expression is a good approximation to the exact formula (\ref{pol1}) near both resonances, when $\omega\approx\pm 2m$. Thus, our expressions for the atomic polarizability derived from the quantum linear response theory agree with the {\em opposite sign prescription}, as advocated in Refs.~\cite{bf,mb,bbm}.
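This behavior can be checked numerically. The sketch below assumes, purely for illustration, constant values of the level shift and width (in the text ${\hat\Delta}(\omega)$ and ${\hat\Gamma}(\omega)$ are frequency dependent) and compares the pole approximation (\ref{pol2a}) to the full expression (\ref{pol1}) near both resonances, together with the crossing relation $\alpha^*(\omega)=\alpha(-\omega)$:

```python
import numpy as np

# Toy parameters; in the text Delta and Gamma are frequency-dependent
# radiative corrections, constants are used here only for illustration
m, A, Delta, Gamma = 1.0, 1.0, 0.05, 0.02

def alpha2(w):
    # Second-order polarizability in the form of Eq. (pol1)
    return 4*m*A / (4*m**2 - w**2 - 4*m*(Delta + 1j*np.sign(w)*Gamma))

def alpha2_res(w):
    # Resonant (pole) approximation in the form of Eq. (pol2a)
    return (A / (2*m - Delta - w - 1j*Gamma)
            + A / (2*m - Delta + w + 1j*Gamma))

for w in (2*m, -2*m):
    rel = abs(alpha2_res(w) - alpha2(w)) / abs(alpha2(w))
    assert rel < 0.05                 # good approximation near both resonances
# Crossing relation: alpha*(w) = alpha(-w)
assert np.isclose(alpha2(-2.0), np.conj(alpha2(2.0)))
```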
To extend this result to the fourth order of perturbation theory, we use the formula (\ref{f5}) for the photon propagator. The resulting expression differs from the formula (\ref{pol1}) for the polarizability in the second order {\em only} by the presence of the factors $(1-{\hat b})$
\begin{align}\label{pol3}
&\alpha^{(2+4)}(\omega)\nonumber\\
&=\frac{4m(1-{\hat b})A}{4m^2-\omega^2-4m(1-{\hat b})\left({\hat\Delta}(\omega)+i\,{\rm sgn}(\omega){\hat\Gamma}(\omega)\right)}.
\end{align}
Our result is quite different from the formula derived in Ref.~\cite{lb}. It seems to us that this difference is due to the difficulty of systematically accounting, in the standard treatment, for all higher-order corrections. In particular, Loudon and Barnett have not included all corrections to the ground state up to the fourth order and they have disregarded all level shifts. In our formulation, the method of Feynman diagrams guarantees an unambiguous derivation of all corrections in any order of perturbation theory.
Finally, we would like to emphasize that all those equal or opposite sign prescriptions, that are widely used in the semiphenomenological treatment, have some practical limitations. They can be directly applied only to the spectral representations (\ref{ppros1}) or (\ref{retprs}). In general, as seen for example in Eq.~(\ref{pol3}), the expressions obtained directly in perturbation theory cannot be easily decomposed into two parts because they are not given in the form of a spectral representation. Of course, we can always find this representation, but the formulas are quite complicated and they hide the resonant character of the process. Still, having closed expressions we can always identify the analytic properties of $\alpha(\omega)$ that correspond to these prescriptions. Namely, we can locate the positions of the poles of $\alpha(\omega)$ in the complex $\omega$ plane to discover that both the pole near $2m$ and the pole near $-2m$ lie in the {\em lower} half-plane, as required by causality. This property extends the opposite sign prescription to the general case.
\section{Conclusions}
We have shown that the methods of relativistic quantum field theory applied to a two-level (and also to a many-level) system interacting with the quantized electromagnetic field lead to significant simplifications in the evaluation of various physical properties of the system. Owing to these simplifications we can easily go beyond the lowest orders of perturbation theory. The difference in complexity of the calculations performed with the use of the traditional approach and the new methods is enormous. For example, the interaction Hamiltonian for the spin system has six terms, so there are $6^4=1296$ terms in the fourth order of the standard perturbation theory while in our approach we have only several Feynman diagrams to consider. It is true that for a particular process many terms will not contribute but still a lot of terms must be taken into account. In addition to the simplifications in the calculations, we also gain new physical insights that stem from the connections that exist in quantum field theory between different characteristics of the system. In particular, the connection between the photon scattering amplitude and the linear response functions of the two-level system to an applied electromagnetic field is very useful. This connection is crucial to the understanding of the hotly debated relation between the equal sign prescription and the opposite sign prescription in the description of damping.
\acknowledgments
We acknowledge the support by the Polish Ministry of Science and Higher Education under the Grant for Quantum Information and Quantum Engineering.
\section{Introduction}
A polaron is a fermionic quasiparticle that was introduced by Landau in a 1933 seminal paper to describe the trapping of an electron by the ionic distortion it induces in a crystal [\onlinecite{landau1933electron}]. The self-trapping of such an electron was subsequently studied in the case of weak electron-phonon coupling by Pekar and Fr\"ohlich [\onlinecite{pekar1946autolocalization},\,\onlinecite{frohlich1954electrons}]. They showed that, within a continuum dielectric medium, a single electron can drag a phonon cloud along a slow motion without being trapped, thus resulting in a large polaron that propagates freely with an effective mass. By contrast, the polaron size becomes small - of the order of the lattice constant - in the regime of a strong electron-phonon coupling compared to the electron bandwidth. This situation, described by Holstein, Lang and Firsov, corresponds to a quasi-trapped polaron that propagates with an exponentially heavier effective mass [\onlinecite{holstein1959studies},\,\onlinecite{lang1962title}]. Importantly, all these polaron features were finally unified within a path-integral-based variational approach that allowed Feynman to characterize the binding energy and effective mass of Fr\"ohlich's polaron for all coupling strengths [\onlinecite{feynman1948space},\,\onlinecite{feynman1955slow}].
From the experimental perspective, these quasiparticles were first identified in uranium dioxide as small polarons [\onlinecite{nagels1963electrical}]. Later, localized lattice distortions were pointed out to affect the Curie temperature of the ferromagnetic transition in perovskites, and to be involved in the colossal magnetoresistance of manganites [\onlinecite{millis1995double,zhao1996giant,alexandrov1999carrier,sharma2002oxygen,edwards2002ferromagnetism,hartinger2006polaronic}]. Whereas the phonons turn out to be crucial in the context of symmetry breaking phase transitions with for example structural Peierls dimerization and conventional BCS superconductivity [\onlinecite{bardeen1957theory},\,\onlinecite{bardeen1957microscopic}], their coupling to the charge carriers would also play a significant role in high-temperature superconductors [\onlinecite{alexandrov1996coherent,bianconi1996determination,lanzara2001evidence,lee2006interplay,gweon2006strong,takahashi2008superconductivity,chen2008superconductivity,kresin2009colloquium}], although the underlying microscopic pairing mechanism has not been clearly identified yet. Polaron physics was also seriously discussed in connection to organic molecular crystals with possible applications as field-effect transistors [\onlinecite{sundar2004elastomeric,takeya2007very,kawai2012characteristics}]. It was first thought that local electron-phonon interactions of Holstein type were sufficient to explain the physics of organic semiconductors. 
Nevertheless, experiments achieved in aromatic hydrocarbon crystals showed that nonlocal electron-phonon interactions are also involved in transport properties [\onlinecite{roberts1980temperature}], resulting in many studies that aimed to highlight the interplay between local and nonlocal electron-phonon interactions in these organic materials [\onlinecite{munn1985theory, munn1985theory3, zhao1994munn,PhysRevLett.89.275503,zoli2005nonlocal,PhysRevLett.96.086601,PhysRevLett.103.266601,PhysRevB.82.035208,Ciuchi:2011dn,Li:2011nr,Li:2013eu}].
On the other hand, recent years have witnessed a growing interest of the condensed matter community in out-of-equilibrium physics [\onlinecite{aoki2014nonequilibrium}]. With the development of ultrafast pump-probe spectroscopy, it became possible to study excitation and relaxation processes, as well as steady regimes in many-body systems [\onlinecite{joura2008steady,tsuji2008correlated,tsuji2009nonequilibrium,wall2011quantum}], leading to phenomena such as superconductivity induced on ultrafast time scales [\onlinecite{fausti2011light}] and symmetry-protected topological transitions [\onlinecite{oka2009photovoltaic,lindner2011floquet,carpentier2015topological,dutreix2016laser}]. It is quite natural, then, that the polaron problem was revisited in this nonequilibrium context. For example, the electron-phonon coupling offers a dominant relaxation channel to the photo-excited quasiparticles of Mott insulators [\onlinecite{PhysRevLett.112.117801}]. It was also reported that quenching the Holstein coupling reduces the Coulomb interaction and enhances the production of doublons in the Mott insulating phase [\onlinecite{PhysRevB.88.165108}]. In order to get some insight into the nonequilibrium dynamics of such many-body phases, the real-time dynamics of a single electron in the Holstein model has recently been studied [\onlinecite{PhysRevB.91.104302},\,\onlinecite{PhysRevB.91.104301}]. This highlights for instance what the electron transient dynamics is, from the time at which a DC electric field is turned on until the electron reaches a steady state with constant velocity thanks to energy dissipation through optical phonons [\onlinecite{vidmar2011nonequilibrium}], as predicted by Thornber and Feynman in 1970 [\onlinecite{thornber1970velocity}]. Interestingly, it has also been proposed that driving infrared active phonons by ultrafast laser irradiation could induce superconductivity at temperatures much higher than the equilibrium critical one [\onlinecite{knap2015dynamical}].
Here, we revisit the polaron problem out of equilibrium when the electrons are periodically driven and show through explicit expressions how the binding energy and effective mass of the polaron can be controlled from the driving strength. To this purpose, we address the problem of noninteracting electrons that are rapidly driven and linearly coupled to vibrational modes in a one-dimensional crystal. Contrary to most of the nonequilibrium papers that we have mentioned so far and that deal with the real-time dynamics of an electron-phonon system, we rather focus on its stroboscopic dynamics, which is apprehended up to the third-order in the high-frequency expansion. This analytical approach provides a time-independent description of the problem in terms of an effective Hamiltonian. In the absence of vibrational modes, it is well known that the Bloch band structure is simply renormalized by the time-periodic driving, which can result in the dynamical Wannier-Stark localization of electrons [\onlinecite{PhysRevB.34.3625}]. To our knowledge, this effect was first considered in Ref.\,[\onlinecite{Vonsovsky1939}]. In the presence of vibrational modes, we show that the driving actually modifies the electron-phonon interaction which becomes dynamically controllable when varying the driving strength. In order to be more specific, we focus on organic molecular crystals with electron-phonon interaction of Holstein type in equilibrium. Out of equilibrium, the driving additionally generates tunable nonlocal Peierls interactions and phonon-assisted hopping between distant neighbors. It turns out that both the phonon-assisted distant hopping and the renormalized nearest-neighbor tunneling can be dynamically suppressed when varying the driving strength. However, they cannot be suppressed simultaneously, meaning that the dynamical Wannier-Stark localization can no longer occur when the electrons are allowed to dissipate their energy on the vibrational modes of the crystal. 
Besides, we report the controllable nonequilibrium binding energy and effective mass of the polaron that the local and nonlocal electron-phonon interactions induce. This is achieved within both the weak- and strong-coupling regimes, since varying the driving strength enables the system to visit these two regimes dynamically.
While the high-frequency limit and simulations of lattice vibrations are already relevant in optical lattices of cold atomic gases [\onlinecite{lignier2007dynamical,eckardt2009exploring,struck2012tunable,greschner2014density,goldman2014periodically,PhysRevA.76.011605}], the explicit knowledge of the electron-phonon mechanisms we derive here in the third-order expansion allows the description of slower frequencies that become reasonable for solid state physics too, for example during multicycle laser irradiations in pump-probe experiments. The dynamical control allowed by the driving strength offers several opportunities among which the possibility to test weak- and strong-coupling polaron theories within a single material, or to understand a bit more the interplay between local and nonlocal electron-phonon interactions in organic crystals.
\section{Dynamical electron-phonon coupling}
\subsection{Time-periodic Hamiltonian}
When a homogeneous time-periodic electric field with magnitude $E_{0}$ and frequency $\Omega$ is driving noninteracting electrons in a one-dimensional crystal, it yields a vector potential that can be written as $A(t) = - E_{0} \sin (\Omega t) /\Omega$. The scalar potential is not relevant here for we consider the temporal gauge. Moreover, the Planck constant and the speed of light are set to unity, i.e. $\hbar=c=1$, and we choose the interatomic distance as the unit of length. If the charge carriers are additionally coupled to vibrational modes, the system can generically be described by a time-periodic Hamiltonian of the form $H(t) = H_{e}(t) + H_{p} + H_{ep}$, with
\begin{align}\label{Time-Dependent Hamiltonian}
&H_{e}(t) =\sum_{k} \epsilon_{k}(t) \, c^{\dagger}_{k}c_{k} ~, ~~~ H_{p} = \sum_{q} \omega_{q} \, b^{\dagger}_{q}b_{q} ~, \notag \\
&H_{ep} = \sum_{k,q} g_{q} \, c^{\dagger}_{k+q}c_{k}B_{q} ~.
\end{align}
According to Peierls substitution, the electronic dispersion relation is given by $\epsilon_{k}(t)=2\nu\cos( k+z\sin\Omega t)$, where $\nu$ refers to the nearest-neighbor hopping amplitude, $z=eE_{0}/\Omega$, and $e$ denotes the electron charge. In the model we are concerned with, the electrons are assumed to be linearly coupled to the atomic displacement operator $B_{q}=b^{\dagger}_{-q}+b_{q}$ through the coupling constant $g_{q}$, while $\omega_{q}$ defines the dispersion relation of phonons. No assumptions are made about these $q$-dependent functions for the moment.
\subsection{Third-order high-frequency description}
The dynamics of a quantum state $\phi (t)$ is then governed by the time-dependent Schr\"odinger equation
\begin{align}
i \, \partial_\tau \phi(\tau) =~\lambda \, H(\tau) \, \phi (\tau) ~,
\end{align}
where $\tau=\Omega t$ and $\lambda=\delta E / \Omega$. Here $\delta E$ denotes a certain energy scale involved in the Hamiltonians of Eq.\,(\ref{Time-Dependent Hamiltonian}). Consequently, $\tau$ and $H(\tau)$ are dimensionless, though we still refer to them as time and Hamiltonian, respectively.
The high-frequency limit corresponds to $\lambda \ll 1$ or equivalently to $\delta E \ll \Omega$. If $\delta E$ is chosen as the largest characteristic energy scale met in Eq. (\ref{Time-Dependent Hamiltonian}), then there are no resonances with the driving which is said to be off-resonant. This limit can be apprehended through several analytical approaches among which Floquet-Magnus expansion, van Vleck and Brillouin-Wigner perturbation theories [\onlinecite{0305-4470-34-16-305,1367-2630-17-9-093039,PhysRevB.93.144307}]. Here we use a method which has been reported in Refs.\,[\onlinecite{Itin2}] and [\onlinecite{PhysRevLett.115.075301}]. It relies on the gauge transformation $\tilde{\phi} (\tau) = \exp\{-i\Delta(\tau)\}\,\phi (\tau)$, where $\Delta(\tau) = \sum_{n=1}^{+\infty} \Delta_{n}(\tau)\lambda^{n}$. Starting from the lowest order in $\lambda$, we iteratively build up operator $\Delta(\tau)$ under the constraint that $\Delta_{n}(\tau)$ is $2\pi$-periodic and averages at zero. The latter boundary condition ensures, similarly to van Vleck and Brillouin-Wigner approaches, that the perturbation theory does not depend on the arbitrary phase of the periodic driving [\onlinecite{PhysRevB.93.144307}]. By construction, this transformation is also required to remove the time-dependence of $H(\tau)$ in all orders in $\lambda$. So we end up with the effective Hamiltonian
\begin{align}\label{Effective Hamiltonian}
\tilde{H}=\lambda e^{i\Delta(\tau)} H(\tau) e^{-i\Delta(\tau)} -i e^{i\Delta(\tau)} \partial_{\tau} e^{-i\Delta(\tau)}
\end{align}
that is time independent and also satisfies a Schr\"odinger-like equation:
\begin{align}
i\partial_\tau \tilde{\phi}(\tau) = \tilde H \tilde{\phi} (\tau) ~.
\end{align}
When assuming $\tilde{H}=\sum_{n=1}^{+\infty} \tilde{H}_{n}\lambda^{n}$ and restricting the high-frequency analysis to the third order in $\lambda$, Eq. (\ref{Effective Hamiltonian}) leads to
\begin{align}\label{Time-dependent Hamiltonians}
\tilde H_{1} &= H(\tau)-\partial_{\tau}\Delta_{1}(\tau) ~, \notag \\
\tilde H_{2} &= \frac{i}{2}[\Delta_{1}(\tau),H(\tau)]+\frac{i}{2}[\Delta_{1}(\tau),\tilde H_{1}]-\partial_{\tau}\Delta_{2}(\tau) ~, \notag \\
\tilde H_{3} &= \frac{i}{2}[\Delta_{2}(\tau),H(\tau)]+\frac{i}{2} [\Delta_{1}(\tau),\tilde H_{2}] + \frac{i}{2} [\Delta_{2}(\tau),\tilde H_{1}] \notag \\
&+ \frac{1}{12}[[\Delta_{1}(\tau),\partial_{\tau}\Delta_{1}(\tau)],\Delta_{1}(\tau)] - \partial_{\tau}\Delta_{3}(\tau)
~,
\end{align}
where the brackets refer to standard commutators. Since $\tilde H_{1}$, $\tilde H_{2}$ and $\tilde H_{3}$ have to be static by construction, they must be equal to their time average. Then taking the time average of the right-hand side terms in Eq.\,(\ref{Time-dependent Hamiltonians}) results in
\begin{align}\label{Time-independent Hamiltonians}
&\tilde H_{1} = H_{0} ~, ~~~~~~~~~~~~~~
\tilde H_{2} = -\frac{1}{2}\sum_{m\neq0}\frac{[H_{m},H_{-m}]}{m} ~, \\
&\tilde H_{3} = \frac{1}{2} \sum_{m\neq 0}\frac{[[H_{m},H_{0}],H_{-m}]}{m^{2}} + \frac{1}{3}\sum_{m\neq 0}\sum_{n\neq 0,m} \frac{[[H_{m},H_{n-m}],H_{-n}]}{mn} ~, \notag
\end{align}
where $H_{m}=\int_{-\pi}^{+\pi} \frac{d\tau}{2\pi}~ e^{im\tau} H(\tau)$. The first order simply refers to the time-averaged Hamiltonian because the electrons cannot follow the dynamics of the driving. Higher orders are commutation-based corrections that describe emissions and absorptions of virtual photons. As a result, the averaging method introduced above leads to time-independent effective Hamiltonians that describe the stroboscopic dynamics, whereas the evolution between two stroboscopic times is encoded into the operators $\Delta_{n}(\tau)$.
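The structure of the expansion (\ref{Time-independent Hamiltonians}) can be checked on a minimal two-level toy model (introduced here only for illustration and not taken from the text), $H(\tau)=\sigma_{z}+\sigma_{x}\cos\tau$. With the Fourier convention above one has $H_{0}=\sigma_{z}$ and $H_{\pm 1}=\sigma_{x}/2$, so that $\tilde{H}_{2}$ vanishes (the harmonics commute) while the commutator sums give $\tilde{H}_{3}=-\sigma_{z}$; the quasienergies extracted from the numerically integrated one-period propagator should then match $\pm(\lambda-\lambda^{3})$ up to $O(\lambda^{4})$:

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def quasienergies(lam, nsteps=2000):
    # One-period propagator of i d(phi)/d(tau) = lam * H(tau) * phi with
    # H(tau) = sigma_z + sigma_x * cos(tau), via midpoint exponential steps
    dtau = 2 * np.pi / nsteps
    U = np.eye(2, dtype=complex)
    for j in range(nsteps):
        tau = (j + 0.5) * dtau
        U = expm(-1j * lam * (sz + sx * np.cos(tau)) * dtau) @ U
    # The eigenphases of U define the quasienergies (modulo 1)
    return np.sort(-np.angle(np.linalg.eigvals(U)) / (2 * np.pi))

lam = 0.05
eps_plus = quasienergies(lam)[1]        # positive quasienergy branch
# Third-order prediction for this toy model: eps = lam - lam**3
assert abs(eps_plus - (lam - lam**3)) < 5e-5   # agreement up to O(lam^4)
assert abs(eps_plus - (lam - lam**3)) < abs(eps_plus - lam)  # lam^3 term matters
```

The same prediction follows from second-order quasi-degenerate perturbation theory in the photon-number picture, which gives a level shift $-\lambda^{3}$ for this model.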
Importantly, the first and second orders of the high-frequency expansion are already realistic in systems such as ultracold atomic gases, for example when shaking optical lattices with frequencies of a few $kHz$ [\onlinecite{lignier2007dynamical,eckardt2009exploring,struck2012tunable,greschner2014density}]. So the third-order description we address here may also be interesting to observe the effects of sub-$kHz$ frequencies in these systems. In solid state physics, however, rapidly driving electrons in the high-frequency limit faces several issues. On the one hand, the interesting effects predicted for noninteracting electrons such as dynamical localization and symmetry-protected topological phase transitions are based on the condition $J_{0}(z)=0$. For the first root of the 0-th order Bessel function this condition already requires a driving strength satisfying $eE_{0}\sim 2.4\,\Omega$. As we shall see later on, the high-frequency expansion usually relies on $2\nu \ll \Omega$ and is basically valid for laser frequencies of a few $eV$. Therefore, the condition $eE_{0}\sim 2.4\,\Omega$ involves even more energetic intensities which, in addition to being already technically challenging, are very likely to burn the crystal where the typical atomic binding energy is of the order of a few $eV$ per Angstrom too for covalent bonds. This issue is no longer a problem when dealing with interactions because the interesting physics due to corrections arises with $J_{m}(z)$, meaning with nonzero-th order Bessel functions. So they start playing a role as soon as the driving is turned on and there are already interesting effects for $eE_{0}<2.4\,\Omega$. Moreover, we provide a high-frequency description up to the third-order, which is also expected to describe effects of slower driving frequencies and is \textit{a priori} more reasonable for solid state physics. 
As far as we shall be concerned, the hopping amplitude is about $0.1eV$ in organic molecular crystals such as pentacene [\onlinecite{Li:2011nr},\,\onlinecite{Li:2013eu}], so the high-frequency effects we address further should be relevant for $eE_{0} \sim \Omega \sim 1\,eV$, namely infrared light of $241.8\,THz$.
On the other hand, even if one can describe how electronic states are changed out of equilibrium, the question of how to reach a steady regime and populate the states in order to probe observables in solid state physics experiments is still under investigation [\onlinecite{seetharam2015controlled,canovi2016stroboscopic,mori2016rigorous}]. Here, we do not regard this latter issue. Instead, we rather address what kinds of electron-phonon interactions are induced by the off-resonant driving and how these interactions modify the equilibrium polaronic states.
\subsection{Time-independent effective Hamiltonian}
Now we are ready to apply the high-frequency approach introduced above to Hamiltonian $H(t)$ defined in Eq. (\ref{Time-Dependent Hamiltonian}). Its time Fourier transform consists of
\begin{align}
H_{m} &= \sum_{k} \epsilon_{k,m} \, c^{\dagger}_{k}c_{k} + \left( H_{p} + H_{ep} \right) \delta_{m,0} ~,
\end{align}
where $\epsilon_{k,m}=\int_{-\pi}^{+\pi} \frac{d\tau}{2\pi}~ e^{im\tau} \epsilon_{k}(\tau)$. In the absence of phonons, $H_{m}$ is a quadratic scalar operator, and $[c^{\dagger}_{k}c_{k}, c^{\dagger}_{k'}c_{k'}]=0$ is responsible for the cancellation of all commutators in Eq.\,(\ref{Time-independent Hamiltonians}). In this case, the stroboscopic dynamics is only described by the time-averaged Hamiltonian
\begin{align}
\tilde H_{1} = \sum_{k} \epsilon_{k,0}(z) \, c^{\dagger}_{k}c_{k} ~,
\end{align}
where $\epsilon_{k,m}(z)= 2\nu J_{m}(z) \cos(k) / \delta E$ and $J_{m}$ is the $m$-th order Bessel function of the first kind. Thus, the off-resonant driving renormalizes the hopping amplitudes and is likely to localize the electrons for driving strengths that satisfy $J_{0}(z)=0$, which yields the so-called dynamical Wannier-Stark ladder in the density of states [\onlinecite{PhysRevB.34.3625}].
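The renormalization of the band by $J_{0}(z)$ and its collapse at the first zero of $J_{0}$ can be checked by direct numerical time averaging (with $\nu$ set to 1 for illustration):

```python
import numpy as np
from scipy.special import jv, jn_zeros

nu = 1.0  # hopping amplitude, arbitrary units

def eps_averaged(k, z, ntau=4096):
    # Numerical time average of eps_k(tau) = 2*nu*cos(k + z*sin(tau))
    tau = np.linspace(-np.pi, np.pi, ntau, endpoint=False)
    return np.mean(2 * nu * np.cos(k + z * np.sin(tau)))

# The average reproduces the Bessel-renormalized band 2*nu*J_0(z)*cos(k)
for k, z in [(0.3, 1.8), (1.2, 2.5)]:
    assert np.isclose(eps_averaged(k, z), 2 * nu * jv(0, z) * np.cos(k),
                      atol=1e-9)

# Dynamical Wannier-Stark localization: the band collapses at the first
# zero of J_0, reached for a driving strength z = e*E0/Omega ~ 2.4048
z0 = jn_zeros(0, 1)[0]
assert abs(z0 - 2.4048) < 1e-3
assert abs(eps_averaged(0.0, z0)) < 1e-9
```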
Such a renormalization of the electronic band structure suggests that, in the presence of interactions, the system may dynamically visit weak-, intermediate-, and strong-coupling regimes, as well as the one of strictly localized electrons. Moreover the interactions are time independent, so they only appear through Fourier component $H_{0}$. As the latter is not involved in the definition of $\tilde{H}_{2}$ in Eq.\,(\ref{Time-independent Hamiltonians}), there is no contribution at the second order of the high-frequency limit and $\tilde{H}_{2}=0$. The third order in $\lambda$, however, does depend on $H_{0}$ and leads to
\begin{align}\label{Effective Hamiltonian H3}
\tilde H_{3} &= \frac{1}{2}\sum_{m\neq0} \sum_{k,k'} \frac{\epsilon_{k,m} \epsilon_{k',-m}}{m^{2}} [ [ c^{\dagger}_{k}c_{k}, H_{ep} ], c^{\dagger}_{k'}c_{k'} ] ~.
\end{align}
Consequently, the electron-phonon interaction, though time independent, is responsible for additional corrections to the effective Hamiltonian, which can then be rewritten as
$\tilde{H} = \tilde{H}_{e} + \tilde{H}_{p} + \tilde{H}_{ep} + o(\lambda^{3})$, where
\begin{align}\label{Effective Holstein Hamiltonians}
&\tilde{H}_{e} = \sum_{k} 2\tilde{t}_{1}(z) \cos(k) \, c^{\dagger}_{k}c_{k} ~, ~~~ \tilde{H}_{p} = \sum_{q} \tilde{\omega}_{q} b^{\dagger}_{q}b_{q} ~, \notag \\
&\tilde{H}_{ep} = \sum_{k,q} \gamma_{k,q}(z) ~ c^{\dagger}_{k+q}c_{k}B_{q} ~,
\end{align}
while $\tilde{t}_{1}(z) = \tilde{\nu} J_{0}(z)$, $\tilde{\nu}=\nu/\Omega$ and $\tilde{\omega}_{q}=\omega_{q}/\Omega$. The effective electron-phonon coupling $\gamma_{k,q}$ is specified in the next section. The reader may also find a detailed discussion about the role played by generic kinds of interactions in the high-frequency description in Ref. [\onlinecite{1367-2630-17-9-093039}].
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 26mm 0mm 15mm 0mm, clip, width=4.1cm]{RenormalizationEta_180.pdf} &
\includegraphics[trim = 26mm 0mm 15mm 0mm, clip, width=4.1cm]{RenormalizationGammaSquare_180.pdf}
\end{array}$
\caption{\small (Color online) Third-order correction $\eta_{k,q}$ (left) and effective electron-phonon coupling $\gamma_{k,q}$ in units of $g_{q}$ (right) for $\Omega=5\nu$ and $z=1.8$.}
\label{Photon-Renormalized Coupling}
\end{figure}
\subsection{Dynamical control of the electron-phonon coupling}
Whereas the phononic dispersion relation remains unchanged, the off-resonant driving renormalizes the electron-phonon interaction which, \textit{a priori}, becomes $k$-dependent out of equilibrium. This is characterized by the effective electron-phonon coupling
\begin{align}\label{Effective Coupling Gamma}
\gamma_{k,q}(z) = \tilde{g}_{q} \left( 1- \eta_{k,q}(z) \, \lambda^{2} \right) ~,
\end{align}
where $\eta_{k,q}$ arises from Eq.\,(\ref{Effective Hamiltonian H3}) and appears as a second-order correction in $\lambda$ to the equilibrium electron-phonon coupling $\tilde{g}_{q}=g_{q}/\Omega$. It is given by
\begin{align}
\eta_{k,q}(z) = \sum_{m>0} \left( \frac{\bar{\epsilon}_{k+q,m}(z) - \bar{\epsilon}_{k,m}(z)}{m} \right)^{2} ~,
\end{align}
where $\bar{\epsilon}_{k,2n}=\epsilon_{k,2n}$ or $\bar{\epsilon}_{k,2n+1}=2\nu J_{2n+1}(z) \sin(k) / \delta E$ for any integer $n$. This correction turns out to be non-negative for all strengths of the driving. As a result, the minus sign in Eq.\,(\ref{Effective Coupling Gamma}) shows that it can only reduce the equilibrium electron-phonon coupling. The reader may find more details about the derivation of $\eta_{k,q}$ in the Appendix. It is also straightforward to show that minima of $\eta_{k,q}$ lie along the line $(k,0)$ in the $kq$-plane, where $\eta_{k,0}$ vanishes identically, whereas maxima are located at $\pm(\pm \frac{\pi}{2}, \pi)$, in agreement with the map in Fig.\,\ref{Photon-Renormalized Coupling}. Thus, the effective electron-phonon coupling $|\gamma_{k,q}|^{2}$ favors the interactions between electrons and long-wavelength phonons $q\simeq 0$, as well as interactions with phonons of wavevectors $q\simeq -2k \pm \pi$. In this sense, the off-resonant driving acts as an interaction selector and can be regarded as a way to control the electron-phonon coupling in a dynamical and reversible way.
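The structure of $\eta_{k,q}$ can be probed numerically. The sketch below (with the overall prefactor $2\nu/\delta E$ set to 1 and $z=1.8$ as in Fig.\,\ref{Photon-Renormalized Coupling}, the harmonic sum being truncated) checks that the correction is non-negative, vanishes identically at $q=0$, and is largest at $\pm(\pm\frac{\pi}{2},\pi)$:

```python
import numpy as np
from scipy.special import jv

z, M = 1.8, 20  # driving strength (as in Fig. 1) and harmonic cutoff

def eps_bar(k, m):
    # \bar\epsilon_{k,m} with 2*nu/deltaE set to 1: even harmonics carry
    # cos(k), odd harmonics carry sin(k)
    return jv(m, z) * (np.cos(k) if m % 2 == 0 else np.sin(k))

def eta(k, q):
    return sum(((eps_bar(k + q, m) - eps_bar(k, m)) / m) ** 2
               for m in range(1, M + 1))

ks = np.linspace(-np.pi, np.pi, 41)
qs = np.linspace(-np.pi, np.pi, 41)
vals = np.array([[eta(k, q) for q in qs] for k in ks])

assert np.all(vals >= 0)               # the correction is non-negative
assert eta(0.7, 0.0) == 0.0            # and vanishes identically at q = 0
assert np.isclose(vals.max(), eta(np.pi / 2, np.pi), rtol=1e-2)
```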
\begin{figure}[t]
\centering
$\begin{array}{c}
\includegraphics[trim = 5mm 0mm 10mm 0mm, clip, width=5cm]{RenormalizedCoupling.pdf}
\end{array}$
\caption{\small (Color online) Field-renormalized hopping and nonequilibrium corrections to the electron-phonon interaction as a function of the driving strength for $\Omega=5\nu$ and $g_{0}=\nu$.}
\label{Renormalized Coupling}
\end{figure}
It is also instructive to rephrase the effective electron-phonon Hamiltonian in terms of real-space coordinates. In order to clearly highlight the microscopic processes generated by the off-resonant driving, we now focus on a Hamiltonian that describes local electron-phonon interactions in equilibrium, meaning $g_{q}=g_{0}$. This kind of interaction is for example relevant in the context of polarons in organic molecular crystals, as reported by Holstein [\onlinecite{holstein1959studies}]. As detailed in the Appendix, the effective electron-phonon Hamiltonian can be written in real space as
\begin{align}\label{Real Space Hep}
\tilde{H}_{ep} &= \tilde{g}_{0} \sum_{n} c^{\dagger}_{n}c_{n} \, B_{n} \\
&+ \tilde{g}_{1} (z) \sum_{n} c^{\dagger}_{n}c_{n}\left(B_{n-1}-2B_{n}+B_{n+1}\right) \notag \\
&+ \tilde{g}_{2}(z) \sum_{n} c^{\dagger}_{n}c_{n+2}\left(B_{n}-2B_{n+1}+B_{n+2}\right) + h.c. \notag
\end{align}
where the different electron-phonon couplings are defined by
\begin{align}\label{Coupling Definitions}
&\tilde{g}_{0} = \frac{g_{0}}{\Omega} ~, ~~~~~~~~ \tilde{g}_{1}(z) = \frac{1}{2} \frac{g_{0}}{\Omega} \left( \frac{2\nu}{\Omega} \right)^{2} \sum_{m>0}\frac{J_{m}^{2}(z)}{m^{2}} ~, \\
&\tilde{g}_{2}(z) = \frac{1}{4} \frac{g_{0}}{\Omega} \left( \frac{2\nu}{\Omega} \right)^{2} \sum_{m>0} \left( \frac{J_{2m-1}^{2}(z)}{(2m-1)^{2}} - \frac{J_{2m}^{2}(z)}{(2m)^{2}} \right) ~. \notag
\end{align}
Coupling $\tilde{g}_{0}$ comes from the time-averaged Hamiltonian $\tilde{H}_{1}$ and refers to Holstein local interactions as defined in equilibrium. Coupling $\tilde{g}_{1}$ is a nonequilibrium correction that simulates Peierls antisymmetric nonlocal interactions [\onlinecite{munn1985theory}], as introduced in the so-called SSH model to explain the formation of topological solitons in polyacetylene [\onlinecite{su1979solitons}]. Coupling $\tilde{g}_{2}$ is a nonequilibrium correction too. It describes phonon-assisted next-nearest-neighbor hopping processes. Both $\tilde{g}_{1}$ and $\tilde{g}_{2}$ refer to antisymmetric nonlocal interactions, which could already be anticipated from the map $\gamma_{k,q}$ in Fig.\,\ref{Photon-Renormalized Coupling}, in accordance with the study of the symmetry effects of nonlocal electron-phonon interactions in Ref.\,[\onlinecite{Li:2011nr}]. Besides, $\tilde{g}_{1}$ and $\tilde{g}_{2}$ can both be controlled dynamically via the driving strength, as illustrated in Fig.\,\ref{Renormalized Coupling}. Importantly, the phonon-assisted hopping processes can be turned off for some specific driving strengths. However, they cannot vanish simultaneously with the field-renormalized hopping $\tilde{t}_{1}$, which means that the electrons can no longer experience the dynamical Wannier-Stark localization in the presence of lattice vibrations. It is worth mentioning that a similar conclusion holds when the electrons are driven by an electric field constant in time (instead of time-periodic). Indeed, the DC field leads to the Wannier-Stark localization (instead of dynamical Wannier-Stark localization) of the noninteracting electrons, but they get delocalized when they are coupled to lattice vibrations [\onlinecite{PhysRevB.88.035132}].
Besides, third-order corrections $\tilde{g}_{1}$ and $\tilde{g}_{2}$ scale with the factor $(2\nu / \Omega)^{2}$, regardless of the energy scale $\delta E$ we choose to define the small parameter $\lambda$ in the high-frequency expansion. This is so because these corrections are defined from the square of harmonics of the electronic dispersion relation, whose characteristic energy scale corresponds to half the equilibrium bandwidth, namely $2\nu$. Of course, these corrections always remain small compared to Holstein coupling $\tilde{g}_{0}$. Nevertheless, they may compete with the renormalized hopping processes when varying the driving strength $z$. Such a dynamical control, which should be suitable for multicycle laser pulse experiments and shaken optical lattices, may be useful for example to understand the role played by the nonlocal electron-phonon interactions in organic molecular semiconductors, where local Holstein interactions alone would not be sufficient to explain electronic transport [\onlinecite{zhao1994munn,Ciuchi:2011dn,Li:2011nr,Li:2013eu}].
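The Bessel-function sums in Eq.\,(\ref{Coupling Definitions}) are straightforward to evaluate numerically. The following sketch (placeholder values $g_{0}=\nu$ and $\Omega=5\nu$, in units where $\nu=1$) reproduces the behavior discussed above: both corrections vanish at $z=0$ and remain small compared to $\tilde{g}_{0}=g_{0}/\Omega$:

```python
import math

def bessel_j(m, z, n=400):
    # J_m(z) = (1/pi) * int_0^pi cos(m*t - z*sin(t)) dt, trapezoidal rule
    h = math.pi / n
    s = 0.5 * (1.0 + math.cos(m * math.pi))
    for i in range(1, n):
        t = i * h
        s += math.cos(m * t - z * math.sin(t))
    return s * h / math.pi

G0, NU, OMEGA = 1.0, 1.0, 5.0   # placeholder parameters: g0 = nu, Omega = 5*nu

def gtilde1(z, mmax=12):
    # gtilde_1(z) = (1/2)(g0/Omega)(2 nu/Omega)^2 sum_{m>0} J_m(z)^2 / m^2
    pref = 0.5 * (G0 / OMEGA) * (2.0 * NU / OMEGA) ** 2
    return pref * sum(bessel_j(m, z) ** 2 / m ** 2 for m in range(1, mmax + 1))

def gtilde2(z, mmax=12):
    # gtilde_2(z) = (1/4)(g0/Omega)(2 nu/Omega)^2
    #               * sum_{m>0} [J_{2m-1}^2/(2m-1)^2 - J_{2m}^2/(2m)^2]
    pref = 0.25 * (G0 / OMEGA) * (2.0 * NU / OMEGA) ** 2
    return pref * sum(bessel_j(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
                      - bessel_j(2 * m, z) ** 2 / (2 * m) ** 2
                      for m in range(1, mmax + 1))
```

Note that $\tilde{g}_{1}(z)>\tilde{g}_{2}(z)$ for any $z>0$, since their difference is a positive combination of the squared Bessel terms.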
\section{Effective Green functions}
\subsection{Perturbation theory along Schwinger-Keldysh contour}
Since the system is supposed to be in a nonequilibrium steady state, one can consider the time-dependent problem along the Schwinger-Keldysh contour $C$, as illustrated in Fig.\,\ref{Diagram}. In the interaction picture, the full Green function of the system can be written as a thermal average
\begin{align}\label{Full GF}
iG(k,t,t') = \big\langle {\cal{T}}_{C}e^{-i\int_{C}d\tau \sum_{k} V(k,\tau)}c_{k}^{~}(t)c_{k}^{\dagger}(t') \big\rangle_{0} ~,
\end{align}
where ${\cal{T}}_{C}$ denotes the time-ordering operator associated to the oriented contour $C$. The time evolution of the operator $c_{k}(t)$ is governed by the equation of motion based on the time-dependent Hamiltonian $H_{e}(t)$ introduced in Eq.\,(\ref{Time-Dependent Hamiltonian}). Importantly, the bracket index refers to the noninteracting density matrix of the system in equilibrium. This means that, first, we explicitly know the density matrix, which is then given by $\rho_{0} = \frac{e^{-\beta H_{0}(-\infty)}}{\Tr [e^{-\beta H_{0}(-\infty)}]}$ and, second, we can take advantage of Wick's theorem. The electron-phonon interaction is introduced as
\begin{align}
V(k,\tau) = \sum_{q} g_{q}~c_{k+q}^{\dagger}(\tau)c_{k}(\tau)B_{q}(\tau) ~.
\end{align}
In the framework of a perturbation theory, the first-order expansion in the electron-phonon coupling yields the thermal average of a single bosonic operator $B_{q}$ and therefore vanishes. The lowest-order contribution thus arises at second order, which leads to the following Green function
\begin{align}\label{Contour 2nd Oder GF}
G^{(2)}(k,t,t') &= \frac{i}{2}\int_{C} dt_{1}dt_{2}\sum_{k_{1},k_{2}}\big\langle {\cal{T}}_{C}V(k_{1},t_{1})V(k_{2},t_{2})c_{k}^{~}(t)c_{k}^{\dagger}(t') \big\rangle_{0} \notag \\
&= \int_{C} dt_{1}dt_{2}~ G^{(0)}(k,t,t_{1}) \Sigma^{(2)}(k,t_{1},t_{2}) G^{(0)}(k,t_{2},t') ~,
\end{align}
where the bare electron and phonon Green functions are respectively defined as $G^{(0)}(k,t,t') = \big\langle {\cal{T}}_{C}~ c_{k}(t)c_{k}^{\dagger}(t') \big\rangle_{0}$ and $D^{(0)}(q, t,t') = \big\langle {\cal{T}}_{C}~ B_{q}(t)B_{q}^{\dagger}(t') \big\rangle_{0}$. This contribution corresponds to the Fock-like diagram illustrated in Fig.\,\ref{Diagram} and is the only non-vanishing one at second order. It describes the emission of a phonon with momentum $q$ at $t_{2}$ and its subsequent absorption at $t_{1}$. The self-energy associated with this single-phonon process is
\begin{align}
\Sigma^{(2)}(k,t_{1},t_{2}) = i \int_{BZ} dq~ g_{q}^{2}~ G^{(0)}(k+q,t_{1},t_{2})~ D^{(0)}(q,t_{1},t_{2}) ~.
\end{align}
Considering that any time variable can be located either on the forward or on the backward branch of the contour $C$, it is possible to rephrase this equation in terms of 2$\times$2 matrices. In the Keldysh basis, the second-order Green function can be rewritten as
\begin{align}
G^{(2)}(t,t') &= \int \int dt_{1} dt_{2}~ G^{(0)}(t,t_{1}) \, \Sigma^{(2)}(t_{1},t_{2}) \, G^{(0)}(t_{2},t') ~,
\end{align}
where the momentum $k$ has been omitted for clarity, the integrals $\int$ run from $t=-\infty$ up to $t=+\infty$ and
\begin{align}
G^{(0)}
&=
\left( \begin{array}{cc}
G_{R}^{(0)} & G_{K}^{(0)} \\
0 & G_{A}^{(0)} \\
\end{array} \right) ~,~
D^{(0)}
=
\left( \begin{array}{cc}
D_{R}^{(0)} & D_{K}^{(0)} \\
0 & D_{A}^{(0)} \\
\end{array} \right) ~,\\
\Sigma^{(2)}
&=
\left( \begin{array}{cc}
\Sigma_{R}^{(2)} & \Sigma_{K}^{(2)} \\
0 & \Sigma_{A}^{(2)} \\
\end{array} \right) ~. \notag
\end{align}
The indices $R$, $K$ and $A$ respectively label the retarded, Keldysh and advanced Green functions.
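The usefulness of this triangular representation can be illustrated by a toy computation: composing upper-triangular matrices preserves the triangular structure, so the retarded component of the second-order Green function is simply $G^{(0)}_{R}\,\Sigma^{(2)}_{R}\,G^{(0)}_{R}$, and likewise for the advanced one. The numerical entries below are arbitrary placeholders:

```python
def matmul(a, b):
    # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# placeholder entries for [[G_R, G_K], [0, G_A]] and [[Sigma_R, Sigma_K], [0, Sigma_A]]
G0 = [[0.5 + 0.1j, 0.3j], [0.0, 0.5 - 0.1j]]
Sig = [[0.2 + 0.05j, 0.1j], [0.0, 0.2 - 0.05j]]

# second-order Green function in the Keldysh basis: G0 * Sigma * G0
G2 = matmul(matmul(G0, Sig), G0)
```

The lower-left entry of `G2` stays exactly zero, and the diagonal entries compose multiplicatively; only the Keldysh component mixes the three functions.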
\begin{figure}[t]
\centering
$\begin{array}{c}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=4cm]{ElectronPhononDiagram.pdf}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=4cm]{KeldyshContour.pdf}
\end{array}$
\caption{\small Diagrammatic representation of the electron-phonon interaction in a second-order perturbation theory (left), which is regarded here along the Schwinger-Keldysh contour $C$ (right).}
\label{Diagram}
\end{figure}
The retarded component of the self-energy in Keldysh formalism is
\begin{align}
\Sigma^{(2)}_{R}(k)= \frac{i}{2} \int_{BZ}dq \left[G_{R}^{0}(k+q) \, D_{K}^{0}(q) + G_{K}^{0}(k+q) \, D_{R}^{0}(q) \right] ~,
\end{align}
where the two time variables have been omitted for clarity. Because the system is out of equilibrium, the two time variables of the Green functions are independent. It is then convenient to rephrase them in terms of the relative time $t=t_{1}-t_{2}$ and the average time $T=(t_{1}+t_{2})/2$ [\onlinecite{wigner1932quantum}]. This can be compared to the equilibrium situation, where Green functions only depend on the relative time, whose conjugate variable is the frequency $\omega$. The Fourier transform of the retarded and Keldysh Green functions with respect to the relative time leads to the following expression for the self-energy
\begin{align}
\Sigma^{(2)}_{R}(k,\omega,T) = \int_{BZ}dq~ g^{2}_{q}~ \Big \{ [N_{q}+n_{k+q}]G_{R}^{0}(k+q,\omega+\omega_{q},T)& \notag \\
+[N_{q}+1-n_{k+q}]G_{R}^{0}(k+q,\omega-\omega_{q},T)& \Big \} ~.
\end{align}
The functions $N_{q}$ and $n_{k+q}$ respectively denote the Bose-Einstein and Fermi-Dirac distributions, that is, the equilibrium distributions of the system at time $t=-\infty$.
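The role of the Wigner coordinates can be illustrated with a toy two-time function that, like an equilibrium Green function, depends only on the relative time: its Fourier transform with respect to $t=t_{1}-t_{2}$ is then independent of the average time $T$ and peaks at the oscillation frequency. All numerical values below are arbitrary placeholders:

```python
import cmath
import math

w0, delta = 0.8, 0.05   # toy oscillation frequency and damping

def g2t(t1, t2):
    # two-time function depending only on the relative time t = t1 - t2
    t = t1 - t2
    return cmath.exp(-1j * w0 * t) * math.exp(-delta * abs(t))

def wigner_ft(T, w, tmax=120.0, n=6000):
    # Fourier transform over the relative time at fixed average time T
    h = 2.0 * tmax / n
    s = 0j
    for i in range(n):
        t = -tmax + (i + 0.5) * h
        s += g2t(T + t / 2.0, T - t / 2.0) * cmath.exp(1j * w * t) * h
    return s
```

The resulting spectrum is the Lorentzian $2\delta/[(\omega-\omega_{0})^{2}+\delta^{2}]$ for any $T$; a genuinely nonequilibrium Green function would instead retain an explicit $T$-dependence.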
\subsection{Perturbation theory for effective Green functions}
The nonequilibrium perturbation theory along the Schwinger-Keldysh contour refers to Green functions based on the time-periodic Hamiltonian (\ref{Time-Dependent Hamiltonian}). We now show that we can equivalently define effective Green functions based on the time-independent effective Hamiltonian (\ref{Effective Holstein Hamiltonians}) that describes the system in the high-frequency limit. We can start from the equation of motion
\begin{align}
\left[ i \partial_{\tau} - \lambda H(\tau) \right] G(\tau, \tau') &= \delta(\tau, \tau')
\end{align}
and straightforwardly show that the gauge transformation introduced earlier to define the effective Hamiltonian leads to
\begin{align}
\left[ i\partial_{\tau} + \tilde{H} \right] \tilde{G}(\tau'-\tau) &= \delta(\tau, \tau') ~,
\end{align}
where we refer to $\tilde{G}(\tau'-\tau) = e^{i\Delta(\tau)}\,G(\tau, \tau')\,e^{-i\Delta(\tau')}$ as the effective Green function. This is a one-time-argument function that describes a system invariant under time translation. Consequently, two stroboscopic times characterized by an integer $n$ such that $\tau'-\tau = n \, 2\pi$, along with the $2\pi$-periodicity of $\Delta(\tau)$, result in
\begin{align}
\Tr \tilde{G}(\tau'-\tau) = \Tr G(\tau, \tau') ~.
\end{align}
Observables such as the density of states are then equal in both descriptions. In our case, the single-orbital electronic Green functions are scalars, so that they equal each other at stroboscopic times.
Now that we have introduced the notion of effective Green function in the high-frequency limit, we are ready to revisit the perturbation theory. The multiplicative structure of the Dyson equation leads to
\begin{widetext}
\begin{align}
G(\tau, \tau') &= G^{0}(\tau, \tau')
+ \int\int d\tau_{1} d\tau_{2} G^{0}(\tau, \tau_{1})\,\Sigma(\tau_{1},\tau_{2}) \, G^{0}(\tau_{2}, \tau')
+ ... \notag \\
&= e^{-i\Delta(\tau)} \, \tilde{G}^{0}(\tau'-\tau) \, e^{i\Delta(\tau')}
+ e^{-i\Delta(\tau)} \, \int\int d\tau_{1} d\tau_{2} \tilde{G}^{0}(\tau_{1}-\tau)\,\tilde{\Sigma}(\tau_{2}-\tau_{1}) \,\tilde{G}^{0}(\tau'-\tau_{2}) \, e^{i\Delta(\tau')}
+ ...
\end{align}
\end{widetext}
where $\tilde{\Sigma}(\tau'-\tau) = e^{i\Delta(\tau)} \, \Sigma(\tau',\tau) \, e^{-i\Delta(\tau')}$ defines the effective self-energy. As a result, there is a one-to-one correspondence at all orders in the perturbation theory between the $n$-th order of the time-periodic problem and the $n$-th order of the time-independent effective problem. However, the interaction vertex $g$ on which the self-energy $\Sigma(\tau_{1},\tau_{2})$ relies is renormalized in the effective description, meaning that $\tilde{\Sigma}(\tau_{2}-\tau_{1})$ refers to an effective interaction vertex $\tilde{g}$. In other words, the local-in-time gauge transformation $e^{i\Delta(\tau)}$ enables us to regard the time evolution of the initial time-periodic system in terms of the evolution of an effective time-independent one with a renormalized band structure and renormalized interactions. This greatly simplifies the problem since we can simply use the standard rules for equilibrium Green functions.
For example, the second-order perturbation theory leads to the following retarded component for the effective self-energy:
\begin{widetext}
\begin{align}\label{Self-Energy}
\tilde{\Sigma}^{(2)}_{R}(k,\tilde{\omega}) &= \int_{BZ}dq \, \gamma_{k,q}\gamma_{k+q,-q} \left( \frac{N_{0} + n_{q+k} }{\tilde{\omega} + \tilde{\omega}_{0} - \epsilon_{k+q,0} + i\delta} +\frac{N_{0}+1-n_{q+k}}{\tilde{\omega} - \tilde{\omega}_{0} - \epsilon_{k+q,0} + i\delta} \right) ~,
\end{align}
\end{widetext}
where $N_{0}$ denotes the equilibrium distribution function of the dispersionless phonons and $\delta$ is the inverse of the quasiparticle lifetime which is introduced in the definition of the bare Green function. The first term, proportional to $N_{0}$, describes the absorption of a phonon, whereas the second term, which is proportional to $N_{0}+1$ and does not vanish even at zero temperature, corresponds to the emission of phonons by the electrons. Besides, the renormalized coupling preserves the Hermitian structure of the effective electron-phonon Hamiltonian and satisfies
\begin{align}
\gamma_{k,q}\gamma_{k+q,-q} &= |\gamma_{k,q}|^{2} \\
&= \tilde{g}_{0}^{2}(1-2\eta_{k,q}\lambda^{2}) + o (\lambda^{3}) ~. \notag
\end{align}
We recall that the map of $|\gamma_{k,q}|^{2}$ has already been introduced in Fig.\,\ref{Photon-Renormalized Coupling}.
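A direct numerical evaluation of Eq.\,(\ref{Self-Energy}) is also possible. The sketch below considers a single electron ($n_{k+q}=0$) at zero temperature ($N_{0}=0$) with a momentum-independent coupling; the effective band $\epsilon_{k+q,0}=-2\tilde{t}_{1}\cos(k+q)$ and the $1/2\pi$ normalization of the Brillouin-zone integral are assumptions of this sketch, and all parameter values are placeholders:

```python
import math

t1, w0, g, delta = 0.2, 0.02, 0.04, 0.01   # placeholder effective parameters

def sigma_R(k, w, nq=4000):
    # q-integral of Eq. (Self-Energy) with N0 = 0 and n_{k+q} = 0:
    # only the phonon-emission term survives at zero temperature
    h = 2.0 * math.pi / nq
    s = 0j
    for i in range(nq):
        q = -math.pi + (i + 0.5) * h
        eps = -2.0 * t1 * math.cos(k + q)   # assumed effective band
        s += g * g / (w - w0 - eps + 1j * delta) * h
    return s / (2.0 * math.pi)
```

As expected for a retarded self-energy, the imaginary part is negative, and it is strongly enhanced when $\tilde{\omega}-\tilde{\omega}_{0}$ reaches the band edge $2|\tilde{t}_{1}|$, where the emission peak sits.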
\section{Weak-coupling regime}
\subsection{Single electron properties}
Because the off-resonant driving renormalizes the electronic bandwidth, it enables the system to visit weak- and strong-coupling regimes in a dynamical way. Here, we begin with the description of the weak-coupling regime, which corresponds to driving strengths $z$ that satisfy $\tilde{g}_{0} \ll |\tilde{t}_{1}(z)|$. Moreover, we consider that Eq.\,(\ref{Self-Energy}) does not depend on the fermionic statistics since we consider a single electron in the band, as assumed in the Fr\"ohlich polaron problem [\onlinecite{frohlich1950xx,frohlich1952interaction,feynman1955slow}]. Within the Holstein description of organic molecular crystals [\onlinecite{holstein1959studies}], an electron that hops onto a molecule excites a vibrational mode which subsequently relaxes after the electron moves away. The molecular displacement the electron induces along its motion results in a surrounding phonon cloud, which changes the electron energy and effective mass. This electron dressed by the lattice polarization is referred to as a polaron. In the presence of the off-resonant driving, one naturally expects the third-order corrections $\tilde{g}_{1}$ and $\tilde{g}_{2}$ in Hamiltonian (\ref{Real Space Hep}) to modify the equilibrium polaronic properties. This is the purpose of what follows.
\subsubsection{Generic case}
First of all, it can be noticed that the retarded component of the effective self-energy in Eq.\,(\ref{Self-Energy}) is a complex function whose real and imaginary parts can be obtained analytically and exactly for arbitrary parameters. Its expression is derived in the Appendix but, because it is rather cumbersome, we do not present it in the main text. Instead, we present its real and imaginary parts in Fig.\,\ref{Effective SelfEnergy} for a single electron in the band that is linearly coupled to vibrational modes at room temperature, i.e. $k_{B}T = 25 \,meV$. In this case, the electron is allowed to emit and absorb phonons. This yields two emission and two absorption peaks, located at $|\tilde{\omega}-\tilde{\omega}_{0}|=2|\tilde{t}_{1}|$ and $|\tilde{\omega}+\tilde{\omega}_{0}|=2|\tilde{t}_{1}|$, respectively. Fig.\,\ref{Effective SelfEnergy} also compares our analytical evaluation of the effective self-energy to its numerical computation obtained from Eq.\,(\ref{Self-Energy}). They both exhibit the same behavior, the small discrepancy between the full and dashed lines being due to the finite quasiparticle lifetime $1/\delta$ that is required to perform the integral (\ref{Self-Energy}) numerically.
In order to get some more physical insight into this self-energy, we now focus on two peculiar situations, namely the adiabatic and non-adiabatic cases.
\begin{figure}[t]
\centering
$\begin{array}{c}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=6cm]{SelfEnergyAnalytics.pdf}
\end{array}$
\caption{\small (Color online) Real and imaginary parts of the retarded component of the effective self-energy for a single electron at room temperature. Analytics (full lines) is compared to numerics (dashed lines) for $\Omega = 5\nu$, $\omega_{0}=0.1\nu$, $g_{0}=0.2\nu$, $z=1.8$, $\delta=0.01$ and $k=0$.}
\label{Effective SelfEnergy}
\end{figure}
\subsubsection{Non-adiabatic limit $|\tilde{t}_{1}| \ll \tilde{\omega}_{0}$}
The non-adiabatic limit $|\tilde{t}_{1}| \ll \tilde{\omega}_{0}$ refers to a situation in which the electron tunneling is much slower than the vibrations of molecules. In the limit of small $k$, the retarded component of the effective self-energy introduced in Eq.\,(\ref{Self-Energy}) leads to the following polaronic dispersion relation
\begin{align}
\tilde{\xi}_{k} &= \epsilon_{k,0} + \Real \tilde{\Sigma}^{(2)}_{R}(k,\tilde{\xi}_{k}) \notag \\
&\simeq -\tilde{\Delta} + \frac{1}{1+ (2N_{0}+1) \frac{\tilde{\Delta}}{\tilde{\omega}_{0}}} \, \frac{k^{2}}{2\tilde{m}} ~.
\end{align}
This expression, which is derived in the Appendix, looks like the one obtained at zero temperature in equilibrium [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}]. However, the electron mass $\tilde{m}$ takes into account the flattening of the noninteracting electron band due to the time-periodic driving. It is thus a function of the driving strength, defined as
\begin{align}
\tilde{m}(z) = \frac{1}{\tilde{t}_{1}(z)} ~.
\end{align}
Moreover, the polaron binding energy is corrected by the electron-phonon couplings induced out of equilibrium. It is also a function of the driving strength and satisfies
\begin{align}\label{Binding Energy NonAdiabatic}
\tilde{\Delta}(z) = \frac{\tilde{g}_{0}^{2} - 4\tilde{g}_{0}\tilde{g}_{1}(z)+4\tilde{g}_{0}\tilde{g}_{2}(z)}{\tilde{\omega}_{0}} ~.
\end{align}
Finally the polaron mass $\tilde{m}^{*}$ depends on the phonon temperature and driving strength as
\begin{align}
\tilde{m}^{*}(z) = \left[ 1+ (2N_{0}+1)\frac{\tilde{\Delta}(z)}{\tilde{\omega}_{0}} \right] \tilde{m}(z) ~.
\end{align}
When the off-resonant driving is turned off, the binding energy reduces to $\tilde{\Delta}(0) = \tilde{g}_{0}^{2}/\tilde{\omega}_{0}$ and the expressions above are in agreement with the polaron behavior in equilibrium [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}].
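These expressions are easy to check numerically. The sketch below evaluates the binding energy of Eq.\,(\ref{Binding Energy NonAdiabatic}) as a function of the driving strength, using the placeholder values $g_{0}=\nu=1$, $\Omega=5\nu$ and $\omega_{0}=0.1\nu$, so that $\tilde{g}_{0}=g_{0}/\Omega$ and $\tilde{\omega}_{0}=\omega_{0}/\Omega$. Since $\tilde{g}_{1}(z)>\tilde{g}_{2}(z)$ for $z>0$, the driving always reduces the binding energy with respect to its equilibrium value $\tilde{g}_{0}^{2}/\tilde{\omega}_{0}$:

```python
import math

def bessel_j(m, z, n=400):
    # J_m(z) = (1/pi) * int_0^pi cos(m*t - z*sin(t)) dt, trapezoidal rule
    h = math.pi / n
    s = 0.5 * (1.0 + math.cos(m * math.pi))
    for i in range(1, n):
        t = i * h
        s += math.cos(m * t - z * math.sin(t))
    return s * h / math.pi

G0, NU, OMEGA, W0 = 1.0, 1.0, 5.0, 0.1     # placeholder parameters
g0t, w0t = G0 / OMEGA, W0 / OMEGA          # gtilde_0 and wtilde_0

def gtilde1(z, mmax=12):
    pref = 0.5 * (G0 / OMEGA) * (2.0 * NU / OMEGA) ** 2
    return pref * sum(bessel_j(m, z) ** 2 / m ** 2 for m in range(1, mmax + 1))

def gtilde2(z, mmax=12):
    pref = 0.25 * (G0 / OMEGA) * (2.0 * NU / OMEGA) ** 2
    return pref * sum(bessel_j(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
                      - bessel_j(2 * m, z) ** 2 / (2 * m) ** 2
                      for m in range(1, mmax + 1))

def binding_energy(z):
    # Eq. (Binding Energy NonAdiabatic)
    return (g0t ** 2 - 4.0 * g0t * gtilde1(z) + 4.0 * g0t * gtilde2(z)) / w0t
```

The reduction $\tilde{\Delta}(z)-\tilde{\Delta}(0)=4\tilde{g}_{0}[\tilde{g}_{2}(z)-\tilde{g}_{1}(z)]/\tilde{\omega}_{0}$ is negative for any nonzero driving.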
\subsubsection{Adiabatic limit $\tilde{\omega}_{0} \ll |\tilde{t}_{1}|$}
The adiabatic limit $\tilde{\omega}_{0}\ll |\tilde{t}_{1}|$ corresponds to the case of an electron hopping that is much faster than the vibrations of the lattice. This limit is for instance relevant when the electron-phonon coupling is weak ($\tilde{g}_{0} \ll |\tilde{t}_{1}|$) in organic molecular crystals like pentacene where $\tilde{g}_{0} \sim \tilde{\omega}_{0}$ [\onlinecite{Li:2011nr},\,\onlinecite{Li:2013eu}].
When $-2|\tilde{t}_{1}| + \tilde{\omega}_{0} < \tilde{\omega} < 2|\tilde{t}_{1}| - \tilde{\omega}_{0}$, we obtain from Eq.\,(\ref{Self-Energy}) a simple expression for the polaronic dispersion relation, namely
\begin{align}
\tilde{\xi}_{k} &= \tilde{\Delta} + \frac{\tilde{m}}{\tilde{m}^{*}} \, \epsilon_{k,0} ~.
\end{align}
Note that this expression holds for all values of $k$ within the Brillouin zone, so it characterizes a whole polaron band. The onsite energy felt by the polaron is
\begin{align}
\tilde{\Delta}(z) = 2\frac{\tilde{g}_{0}\tilde{g}_{2}(z)}{\tilde{t}_{1}^{2}(z)} \, \tilde{\omega}_{0}
\end{align}
and its effective mass is defined by
\begin{align}
\tilde{m}^{*}(z) = \left[ 1+ (2N_{0}+1) \frac{\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{t}_{1}^{2}}\right] \tilde{m}(z) ~.
\end{align}
Contrary to the non-adiabatic case, the onsite energy $\tilde{\Delta}$ can dynamically change sign as a function of the driving strength. Therefore, it does not necessarily refer to a binding energy since, when $\tilde{\Delta} >0$, the polaron feels a repulsive potential on each lattice site. The effective mass, however, is always heavier than it is in equilibrium because, first, the driving flattens the curvature of the electronic band and, second, the electron drags the phonon cloud along its motion. It is also worth mentioning that the onsite energy felt by the polaron and the correction to its effective mass both vanish in equilibrium, so that they constitute purely out-of-equilibrium polaronic effects.
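The sign change of the onsite energy can be traced back to $\tilde{g}_{2}(z)$, whose alternating Bessel sum in Eq.\,(\ref{Coupling Definitions}) changes sign as the driving strength grows. A quick numerical scan (same placeholder parameters $g_{0}=\nu=1$, $\Omega=5\nu$ as before) locates the first sign change:

```python
import math

def bessel_j(m, z, n=400):
    # J_m(z) = (1/pi) * int_0^pi cos(m*t - z*sin(t)) dt, trapezoidal rule
    h = math.pi / n
    s = 0.5 * (1.0 + math.cos(m * math.pi))
    for i in range(1, n):
        t = i * h
        s += math.cos(m * t - z * math.sin(t))
    return s * h / math.pi

G0, NU, OMEGA = 1.0, 1.0, 5.0   # placeholder parameters

def gtilde2(z, mmax=12):
    # gtilde_2(z) from Eq. (Coupling Definitions): alternating Bessel sum
    pref = 0.25 * (G0 / OMEGA) * (2.0 * NU / OMEGA) ** 2
    return pref * sum(bessel_j(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
                      - bessel_j(2 * m, z) ** 2 / (2 * m) ** 2
                      for m in range(1, mmax + 1))

# scan for the first sign change of gtilde2, i.e. of the onsite energy Delta(z)
zs = [0.1 * i for i in range(1, 51)]
z_change = next(z for z in zs if gtilde2(z) <= 0.0)
```

For these parameters the sign change occurs between the driving strengths used in the figures and the first zero of $J_{1}$, where the odd terms of the sum are suppressed.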
Moreover, the polaron energy $\tilde{\xi}_{k}$ is larger than the phonon frequency $\tilde{\omega}_{0}$. Thus, the polaron can also emit a phonon, even at zero temperature when $N_{0}=0$, which yields a nonzero imaginary part of the self-energy. The zeroth order in the adiabatic limit $\tilde{\omega}_{0} \ll |\tilde{t}_{1}|$ leads to a scattering time $\tilde{\tau}$ that satisfies
\begin{align}
\frac{1}{\tilde{\tau}(k,\tilde{\omega})} &= - \Imag \tilde{\Sigma}^{(2)}_{R}(k,\tilde{\omega})\notag \\
&=
\frac{2N_{0}+1}{\sqrt{4\tilde{t}_{1}^{2}-\tilde{\omega}^{2}}}
\Bigg[
\tilde{g}_{0}^{2} - \tilde{g}_{0}\tilde{g}_{1} \left( 4 - \frac{\epsilon_{k,0}}{\tilde{t}_{1}} \frac{\tilde{\omega}}{\tilde{t}_{1}} \right) \notag \\
&- \tilde{g}_{0}\tilde{g}_{2} \left( 4 - 2\frac{\epsilon_{2k,0}}{\tilde{t}_{1}} + 2\frac{\epsilon_{k,0}}{\tilde{t}_{1}} \frac{\tilde{\omega}}{\tilde{t}_{1}} - 2 \frac{\tilde{\omega}^{2}}{\tilde{t}_{1}^{2}} \right)
\Bigg] ~.
\end{align}
The polaron lifetime is already finite in equilibrium, but the nonequilibrium corrections make it $k$-dependent.
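This $k$-dependence is immediate to verify numerically. The sketch below implements the scattering rate above with placeholder couplings; the effective band $\epsilon_{k,0}=-2\tilde{t}_{1}\cos(k)$ is an assumption of the sketch:

```python
import math

t1 = 0.2                       # placeholder effective hopping

def eps(k):
    # assumed effective band: eps_{k,0} = -2 * t1 * cos(k)
    return -2.0 * t1 * math.cos(k)

def rate(k, w, g0t, g1t, g2t, N0=0.0):
    # inverse scattering time 1/tau(k, w) from the adiabatic-limit expression
    pref = (2.0 * N0 + 1.0) / math.sqrt(4.0 * t1 ** 2 - w ** 2)
    return pref * (g0t ** 2
                   - g0t * g1t * (4.0 - (eps(k) / t1) * (w / t1))
                   - g0t * g2t * (4.0 - 2.0 * eps(2 * k) / t1
                                  + 2.0 * (eps(k) / t1) * (w / t1)
                                  - 2.0 * (w / t1) ** 2))
```

With $\tilde{g}_{1}=\tilde{g}_{2}=0$ the $k$-dependent terms drop out and the equilibrium rate $(2N_{0}+1)\tilde{g}_{0}^{2}/\sqrt{4\tilde{t}_{1}^{2}-\tilde{\omega}^{2}}$ is recovered; any nonzero nonequilibrium coupling reinstates the $k$-dependence.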
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 17mm 0mm 25mm 0mm, clip, width=4.2cm]{SpectralFunction_FermiSea_40_00_180_500_500_01_02_04.pdf} &
\includegraphics[trim = 17mm 0mm 25mm 0mm, clip, width=4.2cm]{SpectralFunction0_200_00_180_100_1100_05_20.pdf} \\
\end{array}$
\caption{\small (Color online) Effective and Floquet spectral functions for $\Omega = 5\nu$ (left) and $\Omega = \nu$ (right), respectively. Both spectral functions have been computed for zero temperature with the following parameters: $\omega_{0}=0.1\nu$, $ g_{0}=0.2\nu$, $z=1.8$, and $\delta=0.01$.}
\label{Spectral Function}
\end{figure}
When $-2\tilde{t}_{1} - \tilde{\omega}_{0} < \tilde{\omega} < - 2\tilde{t}_{1} + \tilde{\omega}_{0}$, we can also determine the polaron properties for energies in the vicinity of $-2\tilde{t}_{1}$. The reader may refer to the Appendix for more details. Such energies are associated with the bottom of the equilibrium electron band, since we consider, without loss of generality, that $\tilde{t}_{1}(z)>0$. Then Eq.\,(\ref{Self-Energy}) leads to the following polaronic dispersion relation in the limit of small $k$
\begin{align}
\tilde{\xi}_{k} &\simeq -\tilde{\Delta} + \frac{1}{1+ \frac{\tilde{\Delta}}{2\tilde{\omega}_{0}}} \frac{k^{2}}{2\tilde{m}} ~.
\end{align}
The onsite energy felt by the polaron is now negative and again defines a binding energy with
\begin{align}
\tilde{\Delta}(z) = (N_{0}+1) \, \frac{\tilde{g}_{0}^{2} - 8\tilde{g}_{0}\tilde{g}_{1}(z)}{\sqrt{4\tilde{\omega}_{0}\tilde{t}_{1}(z)}} ~.
\end{align}
Note that this is a function of the phonon temperature too. Besides, the effective mass of the polaron is given by
\begin{align}
\tilde{m}^{*}(z) = \left[ 1+ \frac{\tilde{\Delta}(z)}{2\tilde{\omega}_{0}} \right] \tilde{m}(z) ~.
\end{align}
Again we can check that, when the off-resonant driving is turned off, the binding energy reduces to $\tilde{\Delta} = \tilde{g}_{0}^{2}/\sqrt{4\tilde{\nu}\tilde{\omega}_{0}}$, so that the expressions above yield the same results as the equilibrium ones [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}].
\subsection{Effective and Floquet spectral functions}
The retarded component of the effective self-energy introduced in Eq.\,(\ref{Self-Energy}) leads to the effective spectral function
\begin{align}
\tilde{A}(k,\tilde{\omega}) \simeq -\frac{1}{\pi} \Imag \left[ \left( \tilde{G}^{0}_{R}(k,\tilde{\omega}) \right)^{-1} - \tilde{\Sigma}^{(2)}_{R}(k,\tilde{\omega}) \right]^{-1} ~.
\end{align}
Importantly, the effective spectral function is a gauge-invariant quantity, since it has been introduced in the context of the stroboscopic dynamics and, therefore, it is not affected by the momentum shift required to make Green functions gauge invariant out of equilibrium [\onlinecite{boulware1966gauge,davies1988narrow,aoki2014nonequilibrium}]. Note moreover that the Keldysh approach relies on the equilibrium Fermi-Dirac distribution, since it assumes that the system was in equilibrium at time $\tau=-\infty$. This is the reason why the equilibrium distribution function appears in the expression of the effective self-energy in Eq.\,(\ref{Self-Energy}). Fig.\,\ref{Spectral Function} depicts an effective spectral function that takes into account the effect of a Fermi sea at half-filling in the adiabatic limit. It can be noticed that the bottom of the band reveals two parabolic bands in this limit, in agreement with the two bands reported earlier in the single-electron case.
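A convenient numerical check of any such spectral function is the sum rule $\int d\tilde{\omega}\, \tilde{A}(k,\tilde{\omega}) = 1$. The sketch below verifies it for a single momentum with a constant placeholder self-energy (all values are arbitrary):

```python
import math

eps_k = -0.4                 # placeholder band energy at momentum k
sigma = 0.03 - 0.05j         # placeholder retarded self-energy (Im < 0)
delta = 0.01                 # broadening of the bare Green function

def A(w):
    # A(k, w) = -(1/pi) * Im [ w - eps_k - Sigma_R + i*delta ]^(-1)
    return -(1.0 / math.pi) * (1.0 / (w - eps_k - sigma + 1j * delta)).imag

# midpoint-rule integration of the spectral weight over a wide frequency window
lo, hi, n = -40.0, 40.0, 160000
h = (hi - lo) / n
total = sum(A(lo + (i + 0.5) * h) for i in range(n)) * h
```

The result deviates from unity only by the Lorentzian tails outside the integration window, here well below one percent.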
Besides, the high-frequency results presented here are equivalent to the ones obtained in the framework of Floquet Green functions [\onlinecite{tsuji2008correlated}], whose definition relies on the time-dependent Hamiltonian in Eq.\,(\ref{Time-Dependent Hamiltonian}). Nevertheless, the Floquet Green functions are not based on the high-frequency assumption and enable us to numerically describe the effect of a slower driving frequency. The spectral function they lead to is illustrated in Fig.\,\ref{Spectral Function} for a frequency that satisfies $\Omega = \nu$. Out of equilibrium the energy is no longer a conserved quantity but, in the case of a time-periodic driving, Floquet theory ensures that it is conserved up to a multiple of the frequency. This is the reason why the Floquet spectral function in Fig.\,\ref{Spectral Function} is similar to the effective one, but also displays replicas centered on $m\Omega$ for all values of the relative integer $m$. Actually, these replicas do exist in the high-frequency description too, but they can be neglected when the driving is off-resonant.
The density of states, which is obviously a gauge-invariant quantity too, can finally be obtained from the momentum integral of the spectral function over the Brillouin zone. It is depicted in Fig.\,\ref{DOS} in the adiabatic limit at zero temperature, as obtained from both the high-frequency limit and the Floquet Green functions. Whereas it shows a single band with polaronic peaks in the high-frequency limit, additional replicas overlap each other when reducing the driving frequency, in agreement with the Floquet spectral function in Fig.\,\ref{Spectral Function}.
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 00mm 0mm 0mm 0mm, clip, width=4.2cm]{EffectiveDOS.pdf} &
\includegraphics[trim = 00mm 0mm 0mm 0mm, clip, width=4.2cm]{DensityOfStates_200_00_180_100_1100_02_20.pdf}
\end{array}$
\caption{\small (Color online) Effective and Floquet local spectral functions for $\Omega = 5\nu$ (left) and $\Omega = \nu$ (right), respectively. Both plots correspond to the case of zero temperature with $\omega_{0}=0.1\nu$, $z=1.8$, $\delta=0.01$ and $g_{0}=0.0\nu$ (dashed line) or $g_{0}=0.2\nu$ (full line).}
\label{DOS}
\end{figure}
\section{Strong-coupling regime}
\subsection{Lang-Firsov canonical transformation}
In equilibrium, the electron-phonon interaction may already be too large to be regarded as a perturbation with respect to the electron bandwidth. But regardless of the equilibrium interaction strength, we have also stressed that the system can always be dynamically driven toward such a strong-coupling regime, defined by $|\tilde{t}_{1}(z)| \ll \tilde{g}_{0}$. This problem can be solved within a perturbation theory whose zeroth order is given by $\tilde{t}_{1}(z)=0$ and usually describes localized electrons. This provides an exact analytical solution when the system lies in equilibrium, which is traditionally obtained from the Lang-Firsov canonical transformation [\onlinecite{lang1962title}]. In our case, this transformation, which is detailed in the Appendix, turns the effective Hamiltonian (\ref{Effective Holstein Hamiltonians}) into
\begin{align}\label{Lang Firsov Hamiltonian}
\tilde{H}' &= \tilde{\omega}_{0} \sum_{q} \, b_{q}^{\dagger} b_{q}
- \tilde{\Delta} \sum_{n} c_{n}^{\dagger} c_{n} \\
&+ \tilde{t}_{1} \sum_{n} \left( c_{n+1}^{\dagger} c_{n} X_{n+1}^{\dagger}X_{n} + h.c. \right) \notag \\
&+ \tilde{t}_{2} \sum_{n} \left( c_{n+2}^{\dagger} c_{n} X_{n+2}^{\dagger}X_{n} + h.c. \right) \notag \\
&+ \tilde{g}_{2} \sum_{n,q} (2\cos q - 1) \, e^{-iqn} \left( c_{n+2}^{\dagger} c_{n} X_{n+2}^{\dagger}X_{n} + h.c. \right) B_{q} ~, \notag
\end{align}
where the polaron-polaron interactions are neglected and
\begin{align}
X_{n'}^{\dagger}X_{n} = \exp \left( \sum_{q} u_{q} \, (e^{-iqn}-e^{-iqn'})(b_{q}-b_{-q}^{\dagger}) \right)
\end{align}
with $u_{q}=[\tilde{g}_{0}+(2\cos q - 1)\tilde{g}_{1}]/\tilde{\omega}_{0}$.
Whereas the phonon frequency is not changed by the canonical transformation, the polaron binding energy
\begin{align}\label{Binding Energy Strong Coupling}
\tilde{\Delta}(z) = \frac{\tilde{g}_{0}^{2}-4\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{\omega}_{0}}
\end{align}
is reduced by the Peierls coupling $\tilde{g}_{1}$ when the driving is turned on, as illustrated in Fig.\,\ref{Renormalized Binding Energy}. It defines a potential well that tends to localize the electron onto a molecular site, so that the characteristic size of the polaron becomes comparable to the lattice scale, hence the name of small polaron sometimes encountered in the literature. Note that $\tilde{\Delta}$ does not change sign because $\tilde{g}_{1}$ comes as a second-order correction to $\tilde{g}_{0}$ in the high-frequency limit, in accordance with Eq.\,(\ref{Coupling Definitions}).
Of course, one naturally recovers the equilibrium binding energy when the driving is turned off ($z=0$). In this case, the binding energies introduced in the strong-coupling regime and in the non-adiabatic limit of the weak-coupling regime equal each other [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}]. Interestingly, this is no longer the case out of equilibrium, as can be seen from Eq.\,(\ref{Binding Energy NonAdiabatic}) and Eq.\,(\ref{Binding Energy Strong Coupling}). The extra term $4\tilde{g}_{0}\tilde{g}_{2}(z)$ in Eq.\,(\ref{Binding Energy NonAdiabatic}) comes from the phonon-assisted next-nearest-neighbor hopping process, which leads to $4\tilde{g}_{0}\tilde{g}_{2}(z)\cos(2k)$ in momentum space (cf. the non-adiabatic limit in the Appendix) and whose expansion for small $k$ yields an energy offset.
Contrary to the equilibrium situation, the canonical transformation does not diagonalize the effective Hamiltonian when the off-resonant driving turns off the nearest-neighbor hopping, i.e. when $\tilde{t}_{1}(z) = 0$. This is due to the nonequilibrium coupling $\tilde{g}_{2}$, which is responsible for the last two terms on the right-hand side of Eq.\,(\ref{Lang Firsov Hamiltonian}). The first one, which scales with
\begin{align}
\tilde{t}_{2}(z) = 2\frac{\tilde{g}_{0}\tilde{g}_{2}(z)}{\tilde{\omega}_{0}}~,
\end{align}
describes the next-nearest-neighbor hopping of the polaron, namely the electron dressed by the phonon cloud, whose annihilation operator is $c_{n} X_{n}$. This hopping process tends to delocalize the polaron and competes with the nearest-neighbor hopping when $\tilde{t}_{1}\sim \tilde{t}_{2}$, which roughly occurs when
\begin{align}
\frac{\nu}{\omega_{0}} \sim \left( \frac{\Omega}{g_{0}} \right)^{2} ~.
\end{align}
Such a condition is for example accessible in the adiabatic situation where $\omega_{0}\ll \nu$.
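This crossover can be sketched numerically. The parameters below are illustrative only: $\Omega=5\nu$, $g_{0}=\nu$, energies in units of $\Omega$, and $\omega_{0}$ tuned so that $\nu/\omega_{0}=(\Omega/g_{0})^{2}=25$, i.e. exactly at the condition above; the hopping amplitudes then indeed become comparable at moderate driving strength.

```python
import math

def bessel_j(m, z, n=4000):
    # J_m(z) = (1/pi) * int_0^pi cos(m*t - z*sin t) dt (midpoint rule).
    h = math.pi / n
    return (h / math.pi) * sum(
        math.cos(m * (i + 0.5) * h - z * math.sin((i + 0.5) * h)) for i in range(n)
    )

# Energies in units of Omega; omega0 chosen so that nu/omega0 = (Omega/g0)**2 = 25.
nu_t, g0_t = 0.2, 0.2
w0_t = nu_t / 25.0

def g2_tilde(z):
    # Phonon-assisted next-nearest-neighbor coupling, Eq. (Coupling Definitions).
    s = sum(
        bessel_j(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
        - bessel_j(2 * m, z) ** 2 / (2 * m) ** 2
        for m in range(1, 6)
    )
    return 0.25 * g0_t * (2 * nu_t) ** 2 * s

def t1_tilde(z):
    # Field-renormalized nearest-neighbor hopping.
    return nu_t * bessel_j(0, z)

def t2_tilde(z):
    # Polaron next-nearest-neighbor hopping amplitude.
    return 2 * g0_t * g2_tilde(z) / w0_t

# Nearest-neighbor hopping dominates at weak driving, next-nearest at stronger driving:
assert t1_tilde(1.0) > t2_tilde(1.0)
assert t1_tilde(2.0) < t2_tilde(2.0)
```

With these values the two amplitudes cross between $z=1$ and $z=2$, consistent with the order-of-magnitude estimate above.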
The second term generated by nonequilibrium coupling $\tilde{g}_{2}$ in Eq.\,(\ref{Lang Firsov Hamiltonian}) describes phonon-assisted polaron hopping between next-nearest-neighbor sites.
\subsection{Peierls-Feynman-Bogoliubov variational principle}
In order to get rid of the phonon-assisted polaron hopping term in Hamiltonian (\ref{Lang Firsov Hamiltonian}), we aim to map it onto
\begin{align}\label{Quadratic Hamiltonian}
H^{*} &= \tilde{\omega}_{0} \sum_{q} b_{q}^{\dagger}b_{q}
- \tilde{\Delta} \sum_{n}c^{\dagger}_{n}c_{n} \notag \\
&+ t_{1}^{*} \sum_{n} \left(c^{\dagger}_{n+1}c_{n}+h.c.\right) + t_{2}^{*} \sum_{n} \left(c^{\dagger}_{n+2}c_{n}+h.c.\right) ~.
\end{align}
This Hamiltonian is quadratic in momentum space, so that we know its partition function $Z^{*}=\Tr e^{-\beta H^{*}}$. Parameters $t_{1}^{*}$ and $t_{2}^{*}$ are then determined under the constraint that $\rho^{*} = e^{-\beta H^{*}}/Z^{*}$ is the best approximation of the exact density operator defined from Hamiltonian $\tilde{H}'$. This leads to the Peierls-Feynman-Bogoliubov variational principle [\onlinecite{PhysRev.54.918,Bogolyubov,feynman1972lectures}], which consists in minimizing, with respect to $t_{1}^{*}$ and $t_{2}^{*}$, the functional
\begin{align}
F^{*}+\langle \tilde{H}' - H^{*} \rangle_{*} ~,
\end{align}
where $F^{*} = - (1/\beta) \ln Z^{*}$. This results in
\begin{align}
t_{1}^{*} = \tilde{t}_{1} \left\langle X_{m+1}^{\dagger}X_{m} \right\rangle_{*}
~~~~~~\text{and}~~~~~~
t_{2}^{*} = \tilde{t}_{2} \left\langle X_{m+2}^{\dagger}X_{m} \right\rangle_{*} ~.
\end{align}
The reader may find more details about the derivation of these expressions in the Appendix.
\subsection{Holstein polaron band}
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 8mm 0mm 10mm 5mm, clip, width=4.2cm]{OnsitePolaronicEnergy.pdf} &
\includegraphics[trim = 8mm 0mm 10mm 5mm, clip, width=4.2cm]{RenormalizedHopping.pdf}
\end{array}$
\caption{\small (Color online) Variations of the polaron binding energy (left) and of its nearest- and next-nearest-neighbor hopping amplitudes (right) for $\Omega = 5\nu$, $g_{0}=\nu$, $\omega_{0}=0.1\,\nu$, and zero temperature.}
\label{Renormalized Binding Energy}
\end{figure}
It is worth mentioning that the variational principle simply relies on the averages of bosonic operators, meaning that it describes hopping processes that conserve the number of phonons. If this elastic process is dominant, then the electron remains coherent and can still be described in terms of Bloch band theory. The average of bosonic operators can be evaluated using Feynman's disentangling method, which is detailed in the Appendix. The result is
\begin{align}
\left\langle X_{m+n}^{\dagger}X_{m} \right\rangle_{*}
&= \exp \left( - (2N_{0}+1)\frac{\tilde{g}_{0}^{2}-4\tilde{g}_{0}\tilde{g}_{1}-2\tilde{g}_{0}\tilde{g}_{1}\delta_{n,1}}{\tilde{\omega}_{0}^{2}} \right)
\end{align}
so that the nearest- and next-nearest-neighbor hopping amplitudes are functions of the phonon temperature and the driving strength. They are respectively given by
\begin{align}\label{Polaron Assisted Hopping 1}
t_{1}^{*}(z) &= \tilde{t}_{1}(z) \exp \left( - (2N_{0}+1)\frac{\tilde{g}_{0}^{2}-6\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{\omega}_{0}^{2}} \right) ~~
\end{align}
and
\begin{align}\label{Polaron Assisted Hopping 2}
t_{2}^{*}(z) &= \tilde{t}_{2}(z) \exp \left( - (2N_{0}+1)\frac{\tilde{g}_{0}^{2}-4\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{\omega}_{0}^{2}} \right) ~.
\end{align}
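A minimal zero-temperature evaluation of Eq.\,(\ref{Polaron Assisted Hopping 1}) (with the illustrative parameters of Fig.\,\ref{Renormalized Binding Energy}: $\Omega=5\nu$, $g_{0}=\nu$, $\omega_{0}=0.1\,\nu$, $N_{0}=0$) confirms that a weak driving first \textit{enhances} the polaron hopping, since $\tilde{g}_{1}(z)$ reduces the exponential suppression:

```python
import math

def bessel_j(m, z, n=4000):
    # J_m(z) = (1/pi) * int_0^pi cos(m*t - z*sin t) dt (midpoint rule).
    h = math.pi / n
    return (h / math.pi) * sum(
        math.cos(m * (i + 0.5) * h - z * math.sin((i + 0.5) * h)) for i in range(n)
    )

# nu, g0 and omega0 in units of Omega, for Omega = 5*nu, g0 = nu, omega0 = 0.1*nu.
nu_t, g0_t, w0_t = 0.2, 0.2, 0.02

def g1_tilde(z):
    # Driving-induced Peierls coupling, Eq. (Coupling Definitions).
    s = sum(bessel_j(m, z) ** 2 / m ** 2 for m in range(1, 8))
    return 0.5 * g0_t * (2 * nu_t) ** 2 * s

def t1_star(z, N0=0.0):
    # Polaron nearest-neighbor hopping, Eq. (Polaron Assisted Hopping 1).
    t1 = nu_t * bessel_j(0, z)
    return t1 * math.exp(
        -(2 * N0 + 1) * (g0_t ** 2 - 6 * g0_t * g1_tilde(z)) / w0_t ** 2
    )

# The exponential argument shrinks as g1(z) grows, so t1* first increases with z:
assert t1_star(1.0) > t1_star(0.0)
# A finite phonon occupation N0 > 0 flattens the band (heavier polaron):
assert t1_star(1.0, N0=0.5) < t1_star(1.0, N0=0.0)
```

The absolute values are exponentially small in this strong-coupling regime; what matters here is the relative growth with $z$ and the thermal suppression through $N_{0}$.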
These hopping processes both tend to delocalize the electron and thus compete with the potential well $\tilde{\Delta}$ to enhance the polaron size. The largest polaron is expected at low temperatures, where the phonon occupation number $N_{0}$ vanishes. When increasing the temperature, the electron band becomes flatter and flatter, so that the effective mass becomes heavier and heavier. Therefore, the inelastic processes, which do not conserve the number of phonons, become more and more important: the electron loses its coherence and acquires a diffusive motion. However, these effects are only due to the existence of polarons, in the sense that they already occur in equilibrium without time-periodic driving.
The nonequilibrium effects due to the off-resonant driving are actually twofold. On the one hand, the driving yields next-nearest-neighbor hopping processes which cannot be switched off dynamically together with the nearest-neighbor ones, i.e. the conditions $\tilde{t}_{1}(z)=0$ and $\tilde{t}_{2}(z)=0$ cannot be satisfied simultaneously. This is what Fig.\,\ref{Renormalized Binding Energy} illustrates. As a consequence, the dynamical localization of electrons predicted in Ref.\,[\onlinecite{PhysRevB.34.3625}] no longer arises in the presence of lattice vibrations. On the other hand, the nonequilibrium Peierls coupling $\tilde{g}_{1}$ enhances the exponential arguments in Eqs.\,(\ref{Polaron Assisted Hopping 1}) and (\ref{Polaron Assisted Hopping 2}). This is the reason why $t_{1}^{*}(z)$ first becomes larger when turning on the driving strength in Fig.\,\ref{Renormalized Binding Energy}. The resulting driving-renormalized polaron band finally satisfies
\begin{align}
\xi_{k}^{*}(z) = 2t_{1}^{*}(z)\cos(k) + 2 t_{2}^{*}(z)\cos(2k) - \tilde{\Delta}(z) ~.
\end{align}
Thus, contrary to the equilibrium case, the polaron is also allowed to dynamically enhance the electronic bandwidth and reduce the effective mass of the electron.
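For instance, one may extract the band minimum and the curvature (the inverse effective mass) directly from this dispersion. The parameters below are purely hypothetical and only serve to illustrate the procedure:

```python
import math

# Hypothetical renormalized parameters (illustration only, not fitted values).
t1_s, t2_s, delta_s = 0.05, 0.01, 0.3

def xi(k):
    # Driving-renormalized polaron band.
    return 2 * t1_s * math.cos(k) + 2 * t2_s * math.cos(2 * k) - delta_s

# Locate the band minimum on a fine grid over the Brillouin zone ...
ks = [math.pi * (2 * i / 4000 - 1) for i in range(4001)]
k_min = min(ks, key=xi)

# ... and estimate the inverse effective mass 1/m* = d^2 xi / dk^2 there.
h = 1e-4
inv_mass = (xi(k_min + h) - 2 * xi(k_min) + xi(k_min - h)) / h ** 2

assert abs(abs(k_min) - math.pi) < 1e-2  # minimum at the zone edge for t1*, t2* > 0
assert inv_mass > 0                      # positive curvature confirms a band minimum
```

Since $t_{1}^{*}$ and $t_{2}^{*}$ depend on the driving strength $z$, so do the position of the minimum and the effective mass, which is the dynamical control advertised above.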
\section{Conclusion}
Here we have addressed the problem of rapidly driven electrons that are linearly coupled to vibrational modes in a one-dimensional crystal. The stroboscopic dynamics has been apprehended up to the third-order expansion in the high-frequency limit. This approach provides an effective description of the problem in terms of a time-independent effective Hamiltonian. It has enabled us to show that any kind of electron-phonon interaction is responsible for corrections to the effective Hamiltonian which reduce the interaction strength between electrons and phonons of specific momenta. In this sense, the off-resonant driving can be regarded as a way to tune the electron-phonon coupling and to choose specific interaction channels in a dynamical and reversible fashion.
Finally, we have discussed the specific case of Holstein interaction in equilibrium. Such a local interaction is responsible for non-local interactions when the electrons are rapidly driven, such as antisymmetric interactions of Peierls type and phonon-assisted electron tunneling, which suppresses the dynamical Wannier-Stark localization. The polaronic effects these nonequilibrium corrections induce have been reported in the weak- and strong-coupling regimes, since these two regimes can both be visited dynamically when varying the driving strength. In particular, we have described how the binding energy, the mass and the size of the polaron may be controlled by the off-resonant driving. These high-frequency results have also been compared to the ones obtained in the formalism of Floquet Green functions, which allows the description of driving with arbitrary (low) frequencies.
Although the high-frequency limit is already relevant for systems such as shaken optical lattices, the explicit knowledge of the electron-phonon mechanisms we derive here in the third-order expansion allows the description of slower frequencies that become reasonable for solid state physics too, for example during multicycle laser irradiations in pump-probe experiments. The dynamical control allowed by the driving strength offers the possibility to test weak- and strong-coupling polaron theories within a single material and may also be helpful to understand the crucial interplay between local and nonlocal electron-phonon interactions in systems such as organic molecular crystals.
\begin{acknowledgments}
The authors are grateful to E. A. Stepanov and would like to point out his involvement in the derivation of the effective Green function formalism. This work was supported by NWO via the Spinoza Prize and by ERC Advanced Grant 338957 FEMTO/NANO.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
A polaron is a fermionic quasiparticle that was introduced by Landau in a seminal 1933 paper to describe the trapping of an electron by the ionic distortion it induces in a crystal [\onlinecite{landau1933electron}]. The self-trapping of such an electron was subsequently studied in the case of weak electron-phonon coupling by Pekar and Fr\"ohlich [\onlinecite{pekar1946autolocalization},\,\onlinecite{frohlich1954electrons}]. They showed that, within a continuum dielectric medium, a single electron can drag a phonon cloud along a slow motion without being trapped, thus resulting in a large polaron that propagates freely with an effective mass. By contrast, the polaron size becomes small (of the order of the lattice constant) in the regime of an electron-phonon coupling that is strong compared to the electron bandwidth. This situation, described by Holstein, Lang, and Firsov, refers to a quasi-trapped polaron that propagates with an exponentially heavier effective mass [\onlinecite{holstein1959studies},\,\onlinecite{lang1962title}]. Importantly, all these polaron features were finally unified within a path-integral-based variational approach that allowed Feynman to characterize the binding energy and effective mass of Fr\"ohlich's polaron for all coupling strengths [\onlinecite{feynman1948space},\,\onlinecite{feynman1955slow}].
From the experimental perspective, these quasiparticles were first identified in uranium dioxide as small polarons [\onlinecite{nagels1963electrical}]. Later, localized lattice distortions were pointed out to affect the Curie temperature of the ferromagnetic transition in perovskites, and to be involved in the colossal magnetoresistance of manganites [\onlinecite{millis1995double,zhao1996giant,alexandrov1999carrier,sharma2002oxygen,edwards2002ferromagnetism,hartinger2006polaronic}]. Whereas the phonons turn out to be crucial in the context of symmetry-breaking phase transitions, with for example structural Peierls dimerization and conventional BCS superconductivity [\onlinecite{bardeen1957theory},\,\onlinecite{bardeen1957microscopic}], their coupling to the charge carriers would also play a significant role in high-temperature superconductors [\onlinecite{alexandrov1996coherent,bianconi1996determination,lanzara2001evidence,lee2006interplay,gweon2006strong,takahashi2008superconductivity,chen2008superconductivity,kresin2009colloquium}], although the underlying microscopic pairing mechanism has not been clearly identified yet. Polaron physics was also seriously discussed in connection with organic molecular crystals, with possible applications as field-effect transistors [\onlinecite{sundar2004elastomeric,takeya2007very,kawai2012characteristics}]. It was first thought that local electron-phonon interactions of Holstein type were sufficient to understand the physics of organic semiconductors.
Nevertheless, experiments achieved in aromatic hydrocarbon crystals showed that nonlocal electron-phonon interactions are also involved in transport properties [\onlinecite{roberts1980temperature}], resulting in many studies that aimed to highlight the interplay between local and nonlocal electron-phonon interactions in these organic materials [\onlinecite{munn1985theory, munn1985theory3, zhao1994munn,PhysRevLett.89.275503,zoli2005nonlocal,PhysRevLett.96.086601,PhysRevLett.103.266601,PhysRevB.82.035208,Ciuchi:2011dn,Li:2011nr,Li:2013eu}].
On the other hand, the last years witnessed a growing interest inside the condensed matter community for out-of-equilibrium physics [\onlinecite{aoki2014nonequilibrium}]. With the development of ultrafast pump-probe spectroscopy, it became possible to study excitation and relaxation processes, as well as steady regimes in many-body systems [\onlinecite{joura2008steady,tsuji2008correlated,tsuji2009nonequilibrium,wall2011quantum}], leading to phenomena such as ultrafast time-scale induced superconductivity [\onlinecite{fausti2011light}] and symmetry-protected topological transitions [\onlinecite{oka2009photovoltaic,lindner2011floquet,carpentier2015topological,dutreix2016laser}]. It is quite natural, then, that the polaron problem was revisited in this nonequilibrium context. For example, the electron-phonon coupling offers a dominant relaxation channel to the photo-excited quasiparticles of Mott insulators [\onlinecite{PhysRevLett.112.117801}]. It was also reported that quenching the Holstein coupling reduces the Coulomb interaction and enhances the production of doublons in the Mott insulating phase [\onlinecite{PhysRevB.88.165108}]. In order to get some insight into the nonequilibrium dynamics of such many-body phases, the real-time dynamics of a single electron in the Holstein model has recently been studied [\onlinecite{PhysRevB.91.104302},\,\onlinecite{PhysRevB.91.104301}]. This highlights for instance the electron transient dynamics, from the time at which a DC electric field is turned on until the electron reaches a steady state with constant velocity thanks to energy dissipation through optical phonons [\onlinecite{vidmar2011nonequilibrium}], as predicted by Thornber and Feynman in 1970 [\onlinecite{thornber1970velocity}]. Interestingly, it has also been proposed that driving infrared-active phonons by ultrafast laser irradiation could induce superconductivity at temperatures much higher than the equilibrium critical one [\onlinecite{knap2015dynamical}].
Here, we revisit the polaron problem out of equilibrium when the electrons are periodically driven and show through explicit expressions how the binding energy and effective mass of the polaron can be controlled from the driving strength. To this purpose, we address the problem of noninteracting electrons that are rapidly driven and linearly coupled to vibrational modes in a one-dimensional crystal. Contrary to most of the nonequilibrium papers that we have mentioned so far and that deal with the real-time dynamics of an electron-phonon system, we rather focus on its stroboscopic dynamics, which is apprehended up to the third order in the high-frequency expansion. This analytical approach provides a time-independent description of the problem in terms of an effective Hamiltonian. In the absence of vibrational modes, it is well known that the Bloch band structure is simply renormalized by the time-periodic driving, which can result in the dynamical Wannier-Stark localization of electrons [\onlinecite{PhysRevB.34.3625}]. To our knowledge, this effect was first considered in Ref.\,[\onlinecite{Vonsovsky1939}]. In the presence of vibrational modes, we show that the driving actually modifies the electron-phonon interaction, which becomes dynamically controllable when varying the driving strength. In order to be more specific, we focus on organic molecular crystals with electron-phonon interaction of Holstein type in equilibrium. Out of equilibrium, the driving additionally generates tunable nonlocal Peierls interactions and phonon-assisted hopping between distant neighbors. It turns out that both the phonon-assisted distant hopping and the renormalized nearest-neighbor tunneling can be dynamically suppressed when varying the driving strength. However, they cannot be suppressed simultaneously, meaning that the dynamical Wannier-Stark localization can no longer occur when the electrons are allowed to dissipate their energy on the vibrational modes of the crystal.
Besides, we report the controllable nonequilibrium binding energy and effective mass of the polaron that the local and nonlocal electron-phonon interactions induce. This is achieved within both the weak- and strong-coupling regimes, since varying the driving strength enables the system to visit these two regimes dynamically.
While the high-frequency limit and simulations of lattice vibrations are already relevant in optical lattices of cold atomic gases [\onlinecite{lignier2007dynamical,eckardt2009exploring,struck2012tunable,greschner2014density,goldman2014periodically,PhysRevA.76.011605}], the explicit knowledge of the electron-phonon mechanisms we derive here in the third-order expansion allows the description of slower frequencies that become reasonable for solid state physics too, for example during multicycle laser irradiations in pump-probe experiments. The dynamical control allowed by the driving strength offers several opportunities, among which the possibility to test weak- and strong-coupling polaron theories within a single material, or to better understand the interplay between local and nonlocal electron-phonon interactions in organic crystals.
\section{Dynamical electron-phonon coupling}
\subsection{Time-periodic Hamiltonian}
When a homogeneous time-periodic electric field with magnitude $E_{0}$ and frequency $\Omega$ is driving noninteracting electrons in a one-dimensional crystal, it yields a vector potential that can be written as $A(t) = - E_{0} \sin (\Omega t) /\Omega$. The scalar potential is not relevant here for we consider the temporal gauge. Moreover, the Planck constant and the speed of light are set to unity, i.e. $\hbar=c=1$, and we choose the interatomic distance as the unit of length. If the charge carriers are additionally coupled to vibrational modes, the system can generically be described by a time-periodic Hamiltonian of the form $H(t) = H_{e}(t) + H_{p} + H_{ep}$, with
\begin{align}\label{Time-Dependent Hamiltonian}
&H_{e}(t) =\sum_{k} \epsilon_{k}(t) \, c^{\dagger}_{k}c_{k} ~, ~~~ H_{p} = \sum_{q} \omega_{q} \, b^{\dagger}_{q}b_{q} ~, \notag \\
&H_{ep} = \sum_{k,q} g_{q} \, c^{\dagger}_{k+q}c_{k}B_{q} ~.
\end{align}
According to the Peierls substitution, the electronic dispersion relation is given by $\epsilon_{k}(t)=2\nu\cos( k+z\sin\Omega t)$, where $\nu$ refers to the nearest-neighbor hopping amplitude, $z=eE_{0}/\Omega$, and $e$ denotes the electron charge. In the model we are concerned with, the electrons are assumed to be linearly coupled to the atomic displacement operator $B_{q}=b^{\dagger}_{-q}+b_{q}$ through the coupling constant $g_{q}$, while $\omega_{q}$ defines the dispersion relation of the phonons. No assumptions are made about these $q$-dependent functions for the moment.
\subsection{Third-order high-frequency description}
The dynamics of a quantum state $\phi (t)$ is then ruled by the time-dependent Schr\"odinger equation
\begin{align}
i \, \partial_\tau \phi(\tau) =~\lambda \, H(\tau) \, \phi (\tau) ~,
\end{align}
where $\tau=\Omega t$ and $\lambda=\delta E / \Omega$. Here $\delta E$ denotes a certain energy scale involved in the Hamiltonians of Eq.\,(\ref{Time-Dependent Hamiltonian}). Consequently, $\tau$ and $H(\tau)$ are dimensionless, though we still refer to them as time and Hamiltonian, respectively.
The high-frequency limit corresponds to $\lambda \ll 1$ or equivalently to $\delta E \ll \Omega$. If $\delta E$ is chosen as the largest characteristic energy scale met in Eq.\,(\ref{Time-Dependent Hamiltonian}), then there are no resonances with the driving, which is said to be off-resonant. This limit can be apprehended through several analytical approaches, among which the Floquet-Magnus expansion and the van Vleck and Brillouin-Wigner perturbation theories [\onlinecite{0305-4470-34-16-305,1367-2630-17-9-093039,PhysRevB.93.144307}]. Here we use a method which has been reported in Refs.\,[\onlinecite{Itin2}] and [\onlinecite{PhysRevLett.115.075301}]. It relies on the gauge transformation $\tilde{\phi} (\tau) = \exp\{-i\Delta(\tau)\}\,\phi (\tau)$, where $\Delta(\tau) = \sum_{n=1}^{+\infty} \Delta_{n}(\tau)\lambda^{n}$. Starting from the lowest order in $\lambda$, we iteratively build up operator $\Delta(\tau)$ under the constraint that $\Delta_{n}(\tau)$ is $2\pi$-periodic and averages to zero. The latter boundary condition ensures, similarly to the van Vleck and Brillouin-Wigner approaches, that the perturbation theory does not depend on the arbitrary phase of the periodic driving [\onlinecite{PhysRevB.93.144307}]. By construction, this transformation is also required to remove the time dependence of $H(\tau)$ in all orders in $\lambda$. So we end up with the effective Hamiltonian
\begin{align}\label{Effective Hamiltonian}
\tilde{H}=\lambda e^{i\Delta(\tau)} H(\tau) e^{-i\Delta(\tau)} -i e^{i\Delta(\tau)} \partial_{\tau} e^{-i\Delta(\tau)}
\end{align}
that is time independent and also satisfies a Schr\"odinger-like equation:
\begin{align}
i\partial_\tau \tilde{\phi}(\tau) = \tilde H \tilde{\phi} (\tau) ~.
\end{align}
When assuming $\tilde{H}=\sum_{n=1}^{+\infty} \tilde{H}_{n}\lambda^{n}$ and restricting the high-frequency analysis to the third order in $\lambda$, Eq. (\ref{Effective Hamiltonian}) leads to
\begin{align}\label{Time-dependent Hamiltonians}
\tilde H_{1} &= H(\tau)-\partial_{\tau}\Delta_{1}(\tau) ~, \notag \\
\tilde H_{2} &= \frac{i}{2}[\Delta_{1}(\tau),H(\tau)]+\frac{i}{2}[\Delta_{1}(\tau),\tilde H_{1}]-\partial_{\tau}\Delta_{2}(\tau) ~, \notag \\
\tilde H_{3} &= \frac{i}{2}[\Delta_{2}(\tau),H(\tau)]+\frac{i}{2} [\Delta_{1}(\tau),\tilde H_{2}] + \frac{i}{2} [\Delta_{2}(\tau),\tilde H_{1}] \notag \\
&+ \frac{1}{12}[[\Delta_{1}(\tau),\partial_{\tau}\Delta_{1}(\tau)],\Delta_{1}(\tau)] - \partial_{\tau}\Delta_{3}(\tau)
~,
\end{align}
where the brackets refer to standard commutators. Since $\tilde H_{1}$, $\tilde H_{2}$ and $\tilde H_{3}$ have to be static by construction, they must be equal to their time average. Then taking the time average of the right-hand side terms in Eq.\,(\ref{Time-dependent Hamiltonians}) results in
\begin{align}\label{Time-independent Hamiltonians}
&\tilde H_{1} = H_{0} ~, ~~~~~~~~~~~~~~
\tilde H_{2} = -\frac{1}{2}\sum_{m\neq0}\frac{[H_{m},H_{-m}]}{m} ~, \\
&\tilde H_{3} = \frac{1}{2} \sum_{m\neq 0}\frac{[[H_{m},H_{0}],H_{-m}]}{m^{2}} + \frac{1}{3}\sum_{m\neq 0}\sum_{n\neq 0,m} \frac{[[H_{m},H_{n-m}],H_{-n}]}{mn} ~, \notag
\end{align}
where $H_{m}=\int_{-\pi}^{+\pi} \frac{d\tau}{2\pi}~ e^{im\tau} H(\tau)$. The first order simply refers to the time-averaged Hamiltonian because the electrons cannot follow the dynamics of the driving. Higher orders are commutation-based corrections that describe emissions and absorptions of virtual photons. As a result, the averaging method introduced above leads to time-independent effective Hamiltonians that describe the stroboscopic dynamics, whereas the evolution between two stroboscopic times is encoded into the operators $\Delta_{n}(\tau)$.
Importantly, the first and second orders of the high-frequency expansion are already realistic in systems such as ultracold atomic gases, for example when shaking optical lattices with frequencies of a few kHz [\onlinecite{lignier2007dynamical,eckardt2009exploring,struck2012tunable,greschner2014density}]. So the third-order description we address here may also be interesting to observe the effects of sub-kHz frequencies in these systems. In solid state physics, however, rapidly driving electrons in the high-frequency limit faces several issues. On the one hand, the interesting effects predicted for noninteracting electrons, such as dynamical localization and symmetry-protected topological phase transitions, are based on the condition $J_{0}(z)=0$. For the first root of the zeroth-order Bessel function, this condition already requires a driving strength satisfying $eE_{0}\sim 2.4\,\Omega$. As we shall see later on, the high-frequency expansion usually relies on $2\nu \ll \Omega$ and is basically valid for laser frequencies of a few $eV$. Therefore, the condition $eE_{0}\sim 2.4\,\Omega$ involves even more energetic intensities that, in addition to being technically challenging, are very likely to damage the crystal, since the typical atomic binding energy for covalent bonds is also of the order of a few $eV$ per Angstrom. This issue is no longer a problem when dealing with interactions, because the interesting physics due to the corrections arises with $J_{m}(z)$, that is, with nonzeroth-order Bessel functions. So they start playing a role as soon as the driving is turned on, and there are already interesting effects for $eE_{0}<2.4\,\Omega$. Moreover, we provide a high-frequency description up to the third order, which is also expected to describe the effects of slower driving frequencies and is \textit{a priori} more reasonable for solid state physics.
As far as we shall be concerned, the hopping amplitude is about $0.1\,eV$ in organic molecular crystals like pentacene [\onlinecite{Li:2011nr},\,\onlinecite{Li:2013eu}], so the high-frequency effects we address further should be relevant for $eE_{0} \sim \Omega \sim 1\,eV$, namely infrared light of $241.8\,THz$.
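The quoted frequency follows from a one-line unit conversion, $f = E/h$, using the CODATA value of the Planck constant in $eV\cdot s$:

```python
# Photon energy of 1 eV expressed as a frequency: f = E/h.
h_eV_s = 4.135667696e-15  # Planck constant in eV*s (CODATA 2018)
f_THz = 1.0 / h_eV_s / 1e12
assert abs(f_THz - 241.8) < 0.1  # ~241.8 THz, i.e. infrared light
```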
On the other hand, even if one can describe how electronic states are changed out of equilibrium, the question of how to reach a steady regime and populate the states in order to probe observables in solid state physics experiments is still under investigation [\onlinecite{seetharam2015controlled,canovi2016stroboscopic,mori2016rigorous}]. Here, we do not address this latter issue. Instead, we rather examine what kinds of electron-phonon interactions are induced by the off-resonant driving and how these interactions modify the equilibrium polaronic states.
\subsection{Time-independent effective Hamiltonian}
Now we are ready to apply the high-frequency approach introduced above to Hamiltonian $H(t)$ defined in Eq. (\ref{Time-Dependent Hamiltonian}). Its time Fourier transform consists of
\begin{align}
H_{m} &= \sum_{k} \epsilon_{k,m} \, c^{\dagger}_{k}c_{k} + \left( H_{p} + H_{ep} \right) \delta_{m,0} ~,
\end{align}
where $\epsilon_{k,m}=\int_{-\pi}^{+\pi} \frac{d\tau}{2\pi}~ e^{im\tau} \epsilon_{k}(\tau)$. In the absence of phonons, $H_{m}$ is a quadratic scalar operator, and $[c^{\dagger}_{k}c_{k}, c^{\dagger}_{k'}c_{k'}]=0$ is responsible for the cancellation of all commutators in Eq.\,(\ref{Time-independent Hamiltonians}). In this case, the stroboscopic dynamics is only described by the time-averaged Hamiltonian
\begin{align}
\tilde H_{1} = \sum_{k} \epsilon_{k,0}(z) \, c^{\dagger}_{k}c_{k} ~,
\end{align}
where $\epsilon_{k,m}(z)= 2\nu J_{m}(z) \cos(k) / \delta E$ and $J_{m}$ is the $m$-th order Bessel function of the first kind. Thus, the off-resonant driving renormalizes the hopping amplitudes and is likely to localize the electrons for driving strengths that satisfy $J_{0}(z)=0$, which yields the so-called dynamical Wannier-Stark ladder in the density of states [\onlinecite{PhysRevB.34.3625}].
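Both statements are easy to verify numerically: the period average of $\epsilon_{k}(\tau)$ reproduces the Bessel renormalization, and the localization condition $J_{0}(z)=0$ is first met near $z\simeq 2.405$ (the arbitrary values of $\nu$, $z$, and $k$ below are illustrative only, with $\delta E = 1$):

```python
import math

def bessel_j(m, z, n=4000):
    # J_m(z) = (1/pi) * int_0^pi cos(m*t - z*sin t) dt (midpoint rule).
    h = math.pi / n
    return (h / math.pi) * sum(
        math.cos(m * (i + 0.5) * h - z * math.sin((i + 0.5) * h)) for i in range(n)
    )

nu, z, k = 1.0, 1.5, 0.7  # arbitrary illustrative values

def eps(tau):
    # Instantaneous dispersion after the Peierls substitution.
    return 2 * nu * math.cos(k + z * math.sin(tau))

# Average over one driving period equals 2*nu*J0(z)*cos(k):
n = 4096
avg = sum(eps(2 * math.pi * (i + 0.5) / n) for i in range(n)) / n
assert abs(avg - 2 * nu * bessel_j(0, z) * math.cos(k)) < 1e-6

# First root of J0, where the time-averaged hopping vanishes:
assert abs(bessel_j(0, 2.404826)) < 1e-4
```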
Such a renormalization of the electronic band structure suggests that, in the presence of interactions, the system may dynamically visit weak-, intermediate-, and strong-coupling regimes, as well as the one of strictly localized electrons. Moreover, the interactions are time independent, so they only appear through the Fourier component $H_{0}$. As the latter is not involved in the definition of $\tilde{H}_{2}$ in Eq.\,(\ref{Time-independent Hamiltonians}), there is no contribution at the second order of the high-frequency limit and $\tilde{H}_{2}=0$. The third order in $\lambda$, however, does depend on $H_{0}$ and leads to
\begin{align}\label{Effective Hamiltonian H3}
\tilde H_{3} &= \frac{1}{2}\sum_{m\neq0} \sum_{k,k'} \frac{\epsilon_{k,m} \epsilon_{k',-m}}{m^{2}} [ [ c^{\dagger}_{k}c_{k}, H_{ep} ], c^{\dagger}_{k'}c_{k'} ] ~.
\end{align}
Consequently, the electron-phonon interaction, though time independent, is responsible for additional corrections to the effective Hamiltonian. The latter can be rewritten as
$\tilde{H} = \tilde{H}_{e} + \tilde{H}_{p} + \tilde{H}_{ep} + o(\lambda^{3})$, where
\begin{align}\label{Effective Holstein Hamiltonians}
&\tilde{H}_{e} = \sum_{k} 2\tilde{t}_{1}(z) \cos(k) \, c^{\dagger}_{k}c_{k} ~, ~~~ \tilde{H}_{p} = \sum_{q} \tilde{\omega}_{q} b^{\dagger}_{q}b_{q} ~, \notag \\
&\tilde{H}_{ep} = \sum_{k,q} \gamma_{k,q}(z) ~ c^{\dagger}_{k+q}c_{k}B_{q} ~,
\end{align}
while $\tilde{t}_{1}(z) = \tilde{\nu} J_{0}(z)$, $\tilde{\nu}=\nu/\Omega$ and $\tilde{\omega}_{q}=\omega_{q}/\Omega$. The effective electron-phonon coupling $\gamma_{k,q}$ is specified in the next section. The reader may also find a detailed discussion about the role played by generic kinds of interactions in the high-frequency description in Ref. [\onlinecite{1367-2630-17-9-093039}].
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 26mm 0mm 15mm 0mm, clip, width=4.1cm]{RenormalizationEta_180.pdf} &
\includegraphics[trim = 26mm 0mm 15mm 0mm, clip, width=4.1cm]{RenormalizationGammaSquare_180.pdf}
\end{array}$
\caption{\small (Color online) Third-order correction $\eta_{k,q}$ (left) and effective electron-phonon coupling $\gamma_{k,q}$ in units of $g_{q}$ (right) for $\Omega=5\nu$ and $z=1.8$.}
\label{Photon-Renormalized Coupling}
\end{figure}
\subsection{Dynamical control of the electron-phonon coupling}
Whereas the phononic dispersion relation remains unchanged, the off-resonant driving renormalizes the electron-phonon interaction which, \textit{a priori}, becomes $k$-dependent out of equilibrium. This is characterized by the effective electron-phonon coupling
\begin{align}\label{Effective Coupling Gamma}
\gamma_{k,q}(z) = \tilde{g}_{q} \left( 1- \eta_{k,q}(z) \, \lambda^{2} \right) ~,
\end{align}
where $\eta_{k,q}$ arises from Eq.\,(\ref{Effective Hamiltonian H3}) and appears as a second-order correction in $\lambda$ to the equilibrium electron-phonon coupling $\tilde{g}_{q}=g_{q}/\Omega$. It is given by
\begin{align}
\eta_{k,q}(z) = \sum_{m>0} \left( \frac{\bar{\epsilon}_{k+q,m}(z) - \bar{\epsilon}_{k,m}(z)}{m} \right)^{2} ~,
\end{align}
where $\bar{\epsilon}_{k,2n}=\epsilon_{k,2n}$ and $\bar{\epsilon}_{k,2n+1}=2\nu J_{2n+1}(z) \sin(k) / \delta E$ for any integer $n$. This correction turns out to be positive for all strengths of the driving. As a result, the minus sign in Eq.\,(\ref{Effective Coupling Gamma}) suggests that it can only reduce the equilibrium electron-phonon coupling. The reader may find more details about the derivation of $\eta_{k,q}$ in the Appendix. It is also straightforward to show that maxima of $\gamma_{k,q}$ lie along the line $(k,0)$ in the $kq$-plane, whereas minima are located at $\pm(\pm \frac{\pi}{2}, \pi)$, in agreement with the map in Fig.\,\ref{Photon-Renormalized Coupling}. Thus, the effective electron-phonon coupling $|\gamma_{k,q}|^{2}$ favors the interactions between electrons and long-wavelength phonons $q\simeq 0$, as well as interactions with phonons of wavevectors $q\simeq -2k \pm \pi$. In this sense, the off-resonant driving acts as an interaction selector and can be regarded as a way to control the electron-phonon coupling in a dynamical and reversible way.
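A brute-force scan of $\eta_{k,q}$ (with the figure parameters $\Omega=5\nu$ and $z=1.8$, energies in units of $\Omega$, and the sum truncated at $m=6$) confirms that the correction vanishes along $q=0$, so that $\gamma_{k,q}$ is left untouched there, and that it is largest at $(k,q)=\pm(\pm\frac{\pi}{2},\pi)$, where $\gamma_{k,q}$ is most strongly reduced:

```python
import math

def bessel_j(m, z, n=4000):
    # J_m(z) = (1/pi) * int_0^pi cos(m*t - z*sin t) dt (midpoint rule).
    h = math.pi / n
    return (h / math.pi) * sum(
        math.cos(m * (i + 0.5) * h - z * math.sin((i + 0.5) * h)) for i in range(n)
    )

nu_t, z = 0.2, 1.8                         # nu/Omega for Omega = 5*nu, and z = 1.8
J = [bessel_j(m, z) for m in range(1, 7)]  # precompute J_1 ... J_6

def eta(k, q):
    # eta_{k,q} = sum_m ((eps_bar(k+q,m) - eps_bar(k,m)) / m)^2, where the
    # "bar" dispersion uses cos(k) for even m and sin(k) for odd m.
    total = 0.0
    for idx, jm in enumerate(J):
        m = idx + 1
        trig = math.sin if m % 2 else math.cos
        total += ((2 * nu_t * jm * (trig(k + q) - trig(k))) / m) ** 2
    return total

# eta vanishes identically along the line q = 0 ...
assert eta(0.7, 0.0) < 1e-12
# ... and its maximum over a Brillouin-zone grid sits at (k, q) = +/-(pi/2, pi):
grid = [(math.pi * i / 20, math.pi * j / 20)
        for i in range(-20, 21) for j in range(-20, 21)]
k_max, q_max = max(grid, key=lambda kq: eta(*kq))
assert abs(abs(k_max) - math.pi / 2) < 1e-9
assert abs(abs(q_max) - math.pi) < 1e-9
```

The dominance of the $(\pm\frac{\pi}{2},\pi)$ points reflects the fact that the odd-$m$ (sine) terms, weighted by $J_{1}^{2}$, dominate the sum at this driving strength.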
\begin{figure}[t]
\centering
$\begin{array}{c}
\includegraphics[trim = 5mm 0mm 10mm 0mm, clip, width=5cm]{RenormalizedCoupling.pdf}
\end{array}$
\caption{\small (Color online) Field-renormalized hopping and nonequilibrium corrections to the electron-phonon interaction as a function of the driving strength for $\Omega=5\nu$ and $g_{0}=\nu$.}
\label{Renormalized Coupling}
\end{figure}
It is also instructive to rephrase the effective electron-phonon Hamiltonian in terms of real-space coordinates. In order to clearly highlight the microscopic processes generated by the off-resonant driving, we now focus on a Hamiltonian that describes local electron-phonon interactions in equilibrium, meaning $g_{q}=g_{0}$. This kind of interaction is for example relevant in the context of polarons in organic molecular crystals, as reported by Holstein [\onlinecite{holstein1959studies}]. As detailed in the Appendix, the effective electron-phonon Hamiltonian can be written in real space as
\begin{align}\label{Real Space Hep}
\tilde{H}_{ep} &= \tilde{g}_{0} \sum_{n} c^{\dagger}_{n}c_{n} \, B_{n} \\
&+ \tilde{g}_{1} (z) \sum_{n} c^{\dagger}_{n}c_{n}\left(B_{n-1}-2B_{n}+B_{n+1}\right) \notag \\
&+ \tilde{g}_{2}(z) \sum_{n} c^{\dagger}_{n}c_{n+2}\left(B_{n}-2B_{n+1}+B_{n+2}\right) + h.c. \notag
\end{align}
where the different electron-phonon couplings are defined by
\begin{align}\label{Coupling Definitions}
&\tilde{g}_{0} = \frac{g_{0}}{\Omega} ~, ~~~~~~~~ \tilde{g}_{1}(z) = \frac{1}{2} \frac{g_{0}}{\Omega} \left( \frac{2\nu}{\Omega} \right)^{2} \sum_{m>0}\frac{J_{m}^{2}(z)}{m^{2}} ~, \\
&\tilde{g}_{2}(z) = \frac{1}{4} \frac{g_{0}}{\Omega} \left( \frac{2\nu}{\Omega} \right)^{2} \sum_{m>0} \left( \frac{J_{2m-1}^{2}(z)}{(2m-1)^{2}} - \frac{J_{2m}^{2}(z)}{(2m)^{2}} \right) ~. \notag
\end{align}
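Since the couplings above are given entirely in terms of Bessel functions, they are straightforward to evaluate numerically. A minimal sketch follows; the truncation of the sums at a finite $m_{\max}$ and the parameter values are assumptions of the illustration, not quantities taken from the paper.

```python
# Hedged numerical sketch of Eq. (Coupling Definitions): the Bessel sums
# are truncated at m_max terms, which converges quickly since J_m(z)
# decays factorially for m > z.  Parameter values are illustrative.
import numpy as np
from scipy.special import jv

def couplings(z, g0=1.0, nu=1.0, Omega=5.0, m_max=40):
    """Return (g0_tilde, g1_tilde(z), g2_tilde(z)) for driving strength z."""
    m = np.arange(1, m_max + 1)
    pref = 0.5 * (g0 / Omega) * (2.0 * nu / Omega) ** 2
    g1t = pref * np.sum(jv(m, z) ** 2 / m ** 2)
    g2t = 0.5 * pref * np.sum(jv(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
                              - jv(2 * m, z) ** 2 / (2 * m) ** 2)
    return g0 / Omega, g1t, g2t
```

At $z=0$ every $J_{m>0}$ vanishes, so both nonequilibrium corrections are switched off, while $\tilde{g}_{2}(z)$ changes sign as $z$ grows past the first zero of $J_{1}$.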
Coupling $\tilde{g}_{0}$ comes from the time-averaged Hamiltonian $\tilde{H}_{1}$ and refers to Holstein local interactions as defined in equilibrium. Coupling $\tilde{g}_{1}$ is a nonequilibrium correction that simulates Peierls antisymmetric nonlocal interactions [\onlinecite{munn1985theory}], as introduced in the so-called SSH model to explain the formation of topological solitons in polyacetylene [\onlinecite{su1979solitons}]. Coupling $\tilde{g}_{2}$ is a nonequilibrium correction too. It describes phonon-assisted next-nearest-neighbor hopping processes. Both $\tilde{g}_{1}$ and $\tilde{g}_{2}$ refer to antisymmetric nonlocal interactions, as could already be anticipated from the map $\gamma_{k,q}$ in Fig.\,\ref{Photon-Renormalized Coupling}, in agreement with the study of the symmetry effects of nonlocal electron-phonon interactions in Ref.\,[\onlinecite{Li:2011nr}]. Besides, $\tilde{g}_{1}$ and $\tilde{g}_{2}$ can both be controlled dynamically via the driving strength, as illustrated in Fig.\,\ref{Renormalized Coupling}. Importantly, the phonon-assisted hopping processes can be turned off for some specific driving strengths. However, they cannot vanish simultaneously with the field-renormalized hopping $\tilde{t}_{1}$, which means that, in the presence of lattice vibrations, the electrons can no longer experience the dynamical Wannier-Stark localization. It is worth mentioning that a similar conclusion holds when the electrons are driven by an electric field constant in time (instead of time-periodic). Indeed, the DC field leads to the Wannier-Stark localization (instead of dynamical Wannier-Stark localization) of the noninteracting electrons, but they get delocalized when they are coupled to lattice vibrations [\onlinecite{PhysRevB.88.035132}].
Besides, the third-order corrections $\tilde{g}_{1}$ and $\tilde{g}_{2}$ scale with the factor $(2\nu / \Omega)^{2}$, regardless of the energy scale $\delta E$ we chose to define the small parameter $\lambda$ in the high-frequency expansion. As shown by Eq.\,(\ref{Effective Hamiltonian H3}), this is because these corrections are defined from the square of harmonics of the electronic dispersion relation, whose characteristic energy scale corresponds to half the equilibrium bandwidth, namely $2\nu$. Of course, these corrections always remain small compared to the Holstein coupling $\tilde{g}_{0}$. Nevertheless, they may compete with the renormalized hopping processes when varying the driving strength $z$. Such a dynamical control, which should be suitable for multicycle laser pulse experiments and shaken optical lattices, may be useful for example to understand the role played by the nonlocal electron-phonon interactions in organic molecular semiconductors, where local Holstein interactions alone would not be sufficient to explain electronic transport [\onlinecite{zhao1994munn,Ciuchi:2011dn,Li:2011nr,Li:2013eu}].
\section{Effective Green functions}
\subsection{Perturbation theory along Schwinger-Keldysh contour}
Since the system is supposed to be in a nonequilibrium steady state, one can consider the time-dependent problem along the Schwinger-Keldysh contour $C$, as illustrated in Fig.\,\ref{Diagram}. In the interaction picture, the full Green function of the system can be written as a thermal average
\begin{align}\label{Full GF}
iG(k,t,t') = \big\langle {\cal{T}}_{C}e^{-i\int_{C}d\tau \sum_{k} V(k,\tau)}c_{k}^{~}(t)c_{k}^{\dagger}(t') \big\rangle_{0} ~,
\end{align}
where ${\cal{T}}_{C}$ denotes the time-ordering operator associated with the oriented contour $C$. The time evolution of operator $c_{k}(t)$ is governed by the equation of motion based on the time-dependent Hamiltonian $H_{e}(t)$ introduced in Eq.\,(\ref{Time-Dependent Hamiltonian}). Importantly, the bracket index refers to the noninteracting density matrix of the system in equilibrium. This means that, first, we explicitly know the density matrix, which is given by $\rho_{0} = \frac{e^{-\beta H_{0}(-\infty)}}{\Tr [e^{-\beta H_{0}(-\infty)}]}$ and, second, we can take advantage of Wick's theorem. The electron-phonon interaction is introduced as
\begin{align}
V(k,\tau) = \sum_{q} g_{q}~c_{k+q}^{\dagger}(\tau)c_{k}(\tau)B_{q}(\tau) ~.
\end{align}
In the framework of a perturbation theory, the first-order expansion in the electron-phonon coupling yields the thermal average of a single bosonic operator $B_{q}$ and therefore vanishes. The lowest-order contribution thus arises at second order, which leads to the following Green function
\begin{align}\label{Contour 2nd Oder GF}
G^{(2)}(k,t,t') &= \frac{i}{2}\int_{C} dt_{1}dt_{2}\sum_{k_{1},k_{2}}\big\langle {\cal{T}}_{C}V(k_{1},t_{1})V(k_{2},t_{2})c_{k}^{~}(t)c_{k}^{\dagger}(t') \big\rangle_{0} \notag \\
&= \int_{C} dt_{1}dt_{2}~ G^{(0)}(k,t,t_{1}) \Sigma^{(2)}(k,t_{1},t_{2}) G^{(0)}(k,t_{2},t') ~.
\end{align}
The bare electron and phonon Green functions are respectively defined as $G^{(0)}(k,t,t') = \big\langle {\cal{T}}_{C}~ c_{k}(t)c_{k}^{\dagger}(t') \big\rangle_{0}$ and $D^{(0)}(q, t,t') = \big\langle {\cal{T}}_{C}~ B_{q}(t)B_{q}^{\dagger}(t') \big\rangle_{0}$. This second-order contribution corresponds to the Fock-like diagram illustrated in Fig.\,\ref{Diagram} and is the only non-vanishing one at this order. It describes the emission of a phonon with momentum $q$ at $t_{2}$ and its subsequent absorption at $t_{1}$. The self-energy associated with this single-phonon process is
\begin{align}
\Sigma^{(2)}(k,t_{1},t_{2}) = i \int_{BZ} dq~ g_{q}^{2}~ G^{(0)}(k+q,t_{1},t_{2})~ D^{(0)}(q,t_{1},t_{2}) ~.
\end{align}
Considering that any time variable can be located either along the forward branch or along the backward one of contour $C$, it is possible to rephrase this equation in terms of 2$\times$2 matrices. In the Keldysh basis, the second-order Green function can be rewritten as
\begin{align}
G^{(2)}(t,t') &= \int \int dt_{1} dt_{2}~ G^{(0)}(t,t_{1}) \, \Sigma^{(2)}(t_{1},t_{2}) \, G^{(0)}(t_{2},t') ~,
\end{align}
where the momentum $k$ has been omitted for clarity, the integrals run from $t=-\infty$ to $t=+\infty$, and
\begin{align}
G^{(0)}
&=
\left( \begin{array}{cc}
G_{R}^{(0)} & G_{K}^{(0)} \\
0 & G_{A}^{(0)} \\
\end{array} \right) ~,~
D^{(0)}
=
\left( \begin{array}{cc}
D_{R}^{(0)} & D_{K}^{(0)} \\
0 & D_{A}^{(0)} \\
\end{array} \right) ~,\\
\Sigma^{(2)}
&=
\left( \begin{array}{cc}
\Sigma_{R}^{(2)} & \Sigma_{K}^{(2)} \\
0 & \Sigma_{A}^{(2)} \\
\end{array} \right) ~. \notag
\end{align}
The indices $R$, $K$ and $A$ respectively label the retarded, Keldysh and advanced Green functions.
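The upper-triangular structure displayed above is preserved by the matrix products of the Dyson series, which is what makes this basis convenient. A minimal sketch, with scalar entries standing in for the full time-convolution kernels (an illustration, not the paper's code):

```python
# The (R, K, A) upper-triangular structure is closed under matrix
# multiplication: in particular (G Sigma G)_R = G_R Sigma_R G_R and
# (G Sigma G)_A = G_A Sigma_A G_A.  Scalars stand in for the kernels.
import numpy as np

def keldysh(R, K, A):
    """2x2 matrix in the Keldysh basis."""
    return np.array([[R, K], [0.0, A]])

rng = np.random.default_rng(0)
G = keldysh(*rng.standard_normal(3))
S = keldysh(*rng.standard_normal(3))
GSG = G @ S @ G  # one term of the Dyson series
```

The lower-left entry of the product vanishes identically, so the retarded and advanced components never mix with the Keldysh one from below.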
\begin{figure}[t]
\centering
$\begin{array}{c}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=4cm]{ElectronPhononDiagram.pdf}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=4cm]{KeldyshContour.pdf}
\end{array}$
\caption{\small Diagrammatic representation of the electron-phonon interaction in a second-order perturbation theory (left) which is regarded here along Schwinger-Keldysh contour $C$ (right).}
\label{Diagram}
\end{figure}
The retarded component of the self-energy in Keldysh formalism is
\begin{align}
\Sigma^{(2)}_{R}(k)= \frac{i}{2} \int_{BZ}dq~ g_{q}^{2} \left[G_{R}^{0}(k+q) \, D_{K}^{0}(q) + G_{K}^{0}(k+q) \, D_{R}^{0}(q) \right] ~,
\end{align}
where the two time variables have been omitted for clarity. Because the system is out of equilibrium, the two time variables of the Green functions are independent. It is then convenient to rephrase them in terms of the relative time $t=t_{1}-t_{2}$ and the averaged time $T=(t_{1}+t_{2})/2$ [\onlinecite{wigner1932quantum}]. This can be compared to the equilibrium situation, where Green functions only depend on the relative time, whose conjugate variable is the frequency $\omega$. The Fourier transform of the retarded and Keldysh Green functions, with respect to the relative time, leads to the following expression for the self-energy
\begin{align}
\Sigma^{(2)}_{R}(k,\omega,T) = \int_{BZ}dq~ g^{2}_{q}~ \Big \{ [N_{q}+n_{k+q}]G_{R}^{0}(k+q,\omega+\omega_{q},T)& \notag \\
+[N_{q}+1-n_{k+q}]G_{R}^{0}(k+q,\omega-\omega_{q},T)& \Big \} ~.
\end{align}
The functions $N_{q}$ and $n_{k+q}$ respectively denote the Bose-Einstein and Fermi-Dirac distributions, that is, the equilibrium distributions of the system at time $t=-\infty$.
\subsection{Perturbation theory for effective Green functions}
The nonequilibrium perturbation theory along the Schwinger-Keldysh contour refers to Green functions based on the time-periodic Hamiltonian (\ref{Time-Dependent Hamiltonian}). We now show that we can equivalently define effective Green functions based on the time-independent effective Hamiltonian (\ref{Effective Holstein Hamiltonians}) that describes the system in the high-frequency limit. We can start from the equation of motion
\begin{align}
\left[ i \partial_{\tau} - \lambda H(\tau) \right] G(\tau, \tau') &= \delta(\tau, \tau')
\end{align}
and straightforwardly show that the gauge transformation introduced earlier to define the effective Hamiltonian leads to
\begin{align}
\left[ i\partial_{\tau} + \tilde{H} \right] \tilde{G}(\tau'-\tau) &= \delta(\tau, \tau') ~,
\end{align}
where we refer to $\tilde{G}(\tau'-\tau) = e^{i\Delta(\tau)}\,G(\tau, \tau')\,e^{-i\Delta(\tau')}$ as the effective Green function. This is a one-time-argument function that describes a system invariant under time translation. Consequently, for stroboscopic times such that $\tau'-\tau = 2\pi n$ with integer $n$, the $2\pi$-periodicity of $\Delta(\tau)$ results in
\begin{align}
\Tr \tilde{G}(\tau'-\tau) = \Tr G(\tau, \tau') ~.
\end{align}
Observables such as the density of states are then equal in both descriptions. In our single-orbital case, the electronic Green functions are scalars and thus coincide at stroboscopic times.
Now that we have introduced the notion of effective Green function in the high-frequency limit, we are ready to revisit the perturbation theory. The multiplicative structure of the Dyson equation is responsible for
\begin{widetext}
\begin{align}
G(\tau, \tau') &= G^{0}(\tau, \tau')
+ \int\int d\tau_{1} d\tau_{2} G^{0}(\tau, \tau_{1})\,\Sigma(\tau_{1},\tau_{2}) \, G^{0}(\tau_{2}, \tau')
+ ... \notag \\
&= e^{-i\Delta(\tau)} \, \tilde{G}^{0}(\tau'-\tau) \, e^{i\Delta(\tau')}
+ e^{-i\Delta(\tau)} \, \int\int d\tau_{1} d\tau_{2} \tilde{G}^{0}(\tau_{1}-\tau)\,\tilde{\Sigma}(\tau_{2}-\tau_{1}) \,\tilde{G}^{0}(\tau'-\tau_{2}) \, e^{i\Delta(\tau')}
+ ...
\end{align}
\end{widetext}
where $\tilde{\Sigma}(\tau'-\tau) = e^{i\Delta(\tau)} \, \Sigma(\tau',\tau) \, e^{-i\Delta(\tau')}$ defines the effective self-energy. As a result, there is a one-to-one correspondence at all orders of the perturbation theory between the $n$-th order of the time-periodic problem and the $n$-th order of the time-independent effective problem. However, the interaction vertex $g$ on which the self-energy $\Sigma(\tau_{1},\tau_{2})$ relies is renormalized in the effective description, meaning that $\tilde{\Sigma}(\tau_{2}-\tau_{1})$ refers to an effective interaction vertex $\tilde{g}$. In other words, the local-in-time gauge transformation $e^{i\Delta(\tau)}$ enables us to regard the time evolution of the initial time-periodic system in terms of the evolution of an effective time-independent one with a renormalized band structure and renormalized interactions. This greatly simplifies the problem since we can simply use the standard rules for equilibrium Green functions.
For example, the second-order perturbation theory leads to the following retarded component for the effective self-energy:
\begin{widetext}
\begin{align}\label{Self-Energy}
\tilde{\Sigma}^{(2)}_{R}(k,\tilde{\omega}) &= \int_{BZ}dq \, \gamma_{k,q}\gamma_{k+q,-q} \left( \frac{N_{0} + n_{q+k} }{\tilde{\omega} + \tilde{\omega}_{0} - \epsilon_{k+q,0} + i\delta} +\frac{N_{0}+1-n_{q+k}}{\tilde{\omega} - \tilde{\omega}_{0} - \epsilon_{k+q,0} + i\delta} \right) ~,
\end{align}
\end{widetext}
where $N_{0}$ denotes the equilibrium distribution function of the dispersionless phonons and $\delta$ is the inverse of the quasiparticle lifetime, which is introduced in the definition of the bare Green function. The first term, proportional to $N_{0}$, describes the absorption of a phonon, whereas the second term, which is proportional to $N_{0}+1$ and does not vanish even at zero temperature, corresponds to the emission of phonons by the electrons. Besides, the renormalized coupling preserves the Hermitian structure of the effective electron-phonon Hamiltonian and satisfies
\begin{align}
\gamma_{k,q}\gamma_{k+q,-q} &= |\gamma_{k,q}|^{2} \\
&= \tilde{g}_{0}^{2}(1-2\eta_{k,q}\lambda^{2}) + o(\lambda^{3}) ~. \notag
\end{align}
We remind the reader of the map $|\gamma_{k,q}|^{2}$ that has already been introduced in Fig.\,\ref{Photon-Renormalized Coupling}.
\section{Weak-coupling regime}
\subsection{Single electron properties}
Because the off-resonant driving renormalizes the electronic bandwidth, it enables the system to visit weak- and strong-coupling regimes in a dynamical way. Here, we begin with the description of the weak-coupling regime, which corresponds to driving strengths $z$ that satisfy $\tilde{g}_{0} \ll |\tilde{t}_{1}(z)|$. Moreover, we consider that Eq.\,(\ref{Self-Energy}) does not depend on the fermionic statistics, since we consider a single electron in the band, as assumed in the Fr\"ohlich polaron problem [\onlinecite{frohlich1950xx,frohlich1952interaction,feynman1955slow}]. Within Holstein's description of organic molecular crystals [\onlinecite{holstein1959studies}], an electron that hops onto a molecule excites a vibrational mode, which subsequently relaxes after the electron moves away. The molecular displacement the electron induces along its motion results in a surrounding phonon cloud, which changes the electron energy and effective mass. This electron dressed by the lattice polarization is referred to as a polaron. In the presence of off-resonant driving, one naturally expects the third-order corrections $\tilde{g}_{1}$ and $\tilde{g}_{2}$ in Hamiltonian (\ref{Real Space Hep}) to modify the equilibrium polaronic properties. This is the purpose of the following subsections.
\subsubsection{Generic case}
First of all, it can be noticed that the retarded component of the effective self-energy in Eq.\,(\ref{Self-Energy}) is a complex function whose real and imaginary parts are known analytically and exactly for arbitrary parameters. Its expression is derived in the Appendix but, because it is rather cumbersome, we do not present it in the main text. Instead, we present its real and imaginary parts in Fig.\,\ref{Effective SelfEnergy}, when there is a single electron in the band that is linearly coupled to vibrational modes at room temperature, i.e. $k_{B}T = 25\,$meV. In this case, the electron is allowed to emit and absorb phonons. This yields two emission and two absorption peaks, located at $|\tilde{\omega}-\tilde{\omega}_{0}|=2|\tilde{t}_{1}|$ and $|\tilde{\omega}+\tilde{\omega}_{0}|=2|\tilde{t}_{1}|$, respectively. Fig.\,\ref{Effective SelfEnergy} also compares our analytical evaluation of the effective self-energy to its numerical computation obtained from Eq.\,(\ref{Self-Energy}). They both exhibit the same behavior, the small discrepancy between the full and dashed lines being due to the finite quasiparticle lifetime $1/\delta$ that is required to perform the integral in Eq.\,(\ref{Self-Energy}) numerically.
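The numerical evaluation of Eq.\,(\ref{Self-Energy}) can be sketched in a few lines. The tight-binding form $\epsilon_{k,0}=-2\tilde{t}_{1}\cos k$, the zeroth-order vertex $|\gamma_{k,q}|^{2}\simeq\tilde{g}_{0}^{2}$, and the parameter values are assumptions of this illustration, with a single electron in the band ($n_{k+q}=0$):

```python
# Brillouin-zone quadrature of the retarded effective self-energy,
# Eq. (Self-Energy), for a single electron (n_{k+q} = 0); the dispersion
# eps_{k,0} = -2*t1*cos(k) and a constant vertex g**2 are assumptions.
import numpy as np

def sigma_R(w, k=0.0, t1=0.4, w0=0.1, g=0.04, N0=0.5, delta=0.01, nq=2001):
    q = np.linspace(-np.pi, np.pi, nq, endpoint=False)
    eps = -2.0 * t1 * np.cos(k + q)
    val = (N0 / (w + w0 - eps + 1j * delta)             # phonon absorption
           + (N0 + 1.0) / (w - w0 - eps + 1j * delta))  # phonon emission
    return g ** 2 * np.mean(val)  # mean over q = normalized BZ integral

w = np.linspace(-1.5, 1.5, 601)
S = np.array([sigma_R(x) for x in w])
```

The imaginary part stays negative everywhere, as required for a retarded quantity, and is sharply peaked near the shifted band edges $|\tilde{\omega}\mp\tilde{\omega}_{0}|=2|\tilde{t}_{1}|$ while vanishing outside the band.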
In order to get some more physical insight into this self-energy, we now focus on two peculiar situations, namely the adiabatic and non-adiabatic cases.
\begin{figure}[t]
\centering
$\begin{array}{c}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=6cm]{SelfEnergyAnalytics.pdf}
\end{array}$
\caption{\small (Color online) Real and imaginary parts of the retarded component of the effective self-energy for a single electron at room temperature. Analytics (full lines) is compared to numerics (dashed lines) for $\Omega = 5\nu$, $\omega_{0}=0.1\nu$, $g_{0}=0.2\nu$, $z=1.8$, $\delta=0.01$ and $k=0$.}
\label{Effective SelfEnergy}
\end{figure}
\subsubsection{Non-adiabatic limit $|\tilde{t}_{1}| \ll \tilde{\omega}_{0}$}
The non-adiabatic limit $|\tilde{t}_{1}| \ll \tilde{\omega}_{0}$ refers to a situation in which the electron tunneling is much slower than the vibrations of molecules. In the limit of small $k$, the retarded component of the effective self-energy introduced in Eq.\,(\ref{Self-Energy}) leads to the following polaronic dispersion relation
\begin{align}
\tilde{\xi}_{k} &= \epsilon_{k,0} + \Real \tilde{\Sigma}^{(2)}_{R}(k,\tilde{\xi}_{k}) \notag \\
&\simeq -\tilde{\Delta} + \frac{1}{1+ (2N_{0}+1) \frac{\tilde{\Delta}}{\tilde{\omega}_{0}}} \, \frac{k^{2}}{2\tilde{m}} ~.
\end{align}
This expression, which is derived in the Appendix, looks like the one obtained at zero temperature in equilibrium [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}]. However, the electron mass $\tilde{m}$ takes into account the flattening of the noninteracting electron band due to the time-periodic driving. It is thus a function of the driving strength, defined as
\begin{align}
\tilde{m}(z) = \frac{1}{\tilde{t}_{1}(z)} ~.
\end{align}
Moreover, the polaron binding energy is corrected by the electron-phonon couplings induced out of equilibrium. It is also a function of the driving strength and satisfies
\begin{align}\label{Binding Energy NonAdiabatic}
\tilde{\Delta}(z) = \frac{\tilde{g}_{0}^{2} - 4\tilde{g}_{0}\tilde{g}_{1}(z)+4\tilde{g}_{0}\tilde{g}_{2}(z)}{\tilde{\omega}_{0}} ~.
\end{align}
Finally the polaron mass $\tilde{m}^{*}$ depends on the phonon temperature and driving strength as
\begin{align}
\tilde{m}^{*}(z) = \left[ 1+ (2N_{0}+1)\frac{\tilde{\Delta}(z)}{\tilde{\omega}_{0}} \right] \tilde{m}(z) ~.
\end{align}
When the off-resonant driving is turned off, the binding energy reduces to $\tilde{\Delta}(0) = \tilde{g}_{0}^{2}/\tilde{\omega}_{0}$ and the expressions above are in agreement with the polaron behavior in equilibrium [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}].
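These non-adiabatic expressions can be combined with Eq.\,(\ref{Coupling Definitions}) into a short numerical sketch. The renormalized hopping $\tilde{t}_{1}(z)=\nu J_{0}(z)/\Omega$ and the scaling $\tilde{\omega}_{0}=\omega_{0}/\Omega$ are assumptions of the illustration, not formulas quoted in this section:

```python
# Driving dependence of the non-adiabatic polaron binding energy and
# effective mass.  t1t(z) = nu*J0(z)/Omega and w0t = w0/Omega are
# assumed; the rest follows Eqs. (Coupling Definitions) and
# (Binding Energy NonAdiabatic).
import numpy as np
from scipy.special import jv

def polaron_nonadiabatic(z, g0=0.2, nu=1.0, Omega=5.0, w0=0.1, N0=0.0):
    m = np.arange(1, 41)
    pref = 0.5 * (g0 / Omega) * (2.0 * nu / Omega) ** 2
    g0t = g0 / Omega
    g1t = pref * np.sum(jv(m, z) ** 2 / m ** 2)
    g2t = 0.5 * pref * np.sum(jv(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
                              - jv(2 * m, z) ** 2 / (2 * m) ** 2)
    w0t = w0 / Omega
    delta = (g0t ** 2 - 4.0 * g0t * g1t + 4.0 * g0t * g2t) / w0t
    mt = Omega / (nu * jv(0, z))  # m~ = 1 / t1~, assumed band narrowing
    mstar = (1.0 + (2.0 * N0 + 1.0) * delta / w0t) * mt
    return delta, mstar
```

Since $\tilde{g}_{1}(z)>\tilde{g}_{2}(z)$ for $z>0$, the driving always reduces the binding energy below its equilibrium value $\tilde{g}_{0}^{2}/\tilde{\omega}_{0}$, consistently with the reduction of the effective coupling noted earlier.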
\subsubsection{Adiabatic limit $\tilde{\omega}_{0} \ll |\tilde{t}_{1}|$}
The adiabatic limit $\tilde{\omega}_{0}\ll |\tilde{t}_{1}|$ corresponds to the case of an electron hopping that is much faster than the vibrations of the lattice. This limit is for instance relevant when the electron-phonon coupling is weak ($\tilde{g}_{0} \ll |\tilde{t}_{1}|$) in organic molecular crystals like pentacene where $\tilde{g}_{0} \sim \tilde{\omega}_{0}$ [\onlinecite{Li:2011nr},\,\onlinecite{Li:2013eu}].
When $-2|\tilde{t}_{1}| + \tilde{\omega}_{0} < \tilde{\omega} < 2|\tilde{t}_{1}| - \tilde{\omega}_{0}$, we obtain from Eq.\,(\ref{Self-Energy}) a simple expression for the polaronic dispersion relation, namely
\begin{align}
\tilde{\xi}_{k} &= \tilde{\Delta} + \frac{\tilde{m}}{\tilde{m}^{*}} \, \epsilon_{k,0} ~.
\end{align}
Note that this expression holds for all values of $k$ within the Brillouin zone, so it characterizes a whole polaron band. The onsite energy felt by the polaron is
\begin{align}
\tilde{\Delta}(z) = 2\frac{\tilde{g}_{0}\tilde{g}_{2}(z)}{\tilde{t}_{1}^{2}(z)} \, \tilde{\omega}_{0}
\end{align}
and its effective mass is defined by
\begin{align}
\tilde{m}^{*}(z) = \left[ 1+ (2N_{0}+1) \frac{\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{t}_{1}^{2}}\right] \tilde{m}(z) ~.
\end{align}
Contrary to the non-adiabatic case, the onsite energy $\tilde{\Delta}$ can dynamically change sign as a function of the driving strength. Therefore, it does not necessarily refer to a binding energy since, when $\tilde{\Delta} >0$, the polaron feels a repulsive potential on each lattice site. The effective mass, however, is always heavier than it is in equilibrium because, first, the driving flattens the curvature of the electronic band and, second, the electron drags the phonon cloud along its motion. It is also worth mentioning that the onsite energy felt by the polaron and the correction to its effective mass both vanish in equilibrium: they are purely out-of-equilibrium polaronic effects.
Moreover, the polaron energy $\tilde{\xi}_{k}$ is larger than the phonon frequency $\tilde{\omega}_{0}$. Thus, the polaron can also emit a phonon, even at zero temperature when $N_{0}=0$, which yields a nonzero imaginary part to the self-energy. The zeroth order in the adiabatic limit $\tilde{\omega}_{0} \ll |\tilde{t}_{1}|$ leads to a scattering time $\tilde{\tau}$ that satisfies
\begin{align}
\frac{1}{\tilde{\tau}(k,\tilde{\omega})} &= - \Imag \tilde{\Sigma}^{(2)}_{R}(k,\tilde{\omega})\notag \\
&=
\frac{2N_{0}+1}{\sqrt{4\tilde{t}_{1}^{2}-\tilde{\omega}^{2}}}
\Bigg[
\tilde{g}_{0}^{2} - \tilde{g}_{0}\tilde{g}_{1} \left( 4 - \frac{\epsilon_{k,0}}{\tilde{t}_{1}} \frac{\tilde{\omega}}{\tilde{t}_{1}} \right) \notag \\
&- \tilde{g}_{0}\tilde{g}_{2} \left( 4 - 2\frac{\epsilon_{2k,0}}{\tilde{t}_{1}} + 2\frac{\epsilon_{k,0}}{\tilde{t}_{1}} \frac{\tilde{\omega}}{\tilde{t}_{1}} - 2 \frac{\tilde{\omega}^{2}}{\tilde{t}_{1}^{2}} \right)
\Bigg] ~.
\end{align}
The polaron lifetime is already finite in equilibrium but the nonequilibrium corrections make it $k$-dependent.
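A direct transcription of this scattering rate is given below; the dispersion $\epsilon_{k,0}=-2\tilde{t}_{1}\cos k$ (hence $\epsilon_{2k,0}=-2\tilde{t}_{1}\cos 2k$) and the coupling values are assumptions of the illustration:

```python
# Sketch of the inverse polaron lifetime 1/tau(k, w) in the adiabatic
# limit; eps_{k,0} = -2*t1*cos(k) is assumed, and g0t, g1t, g2t denote
# the effective couplings (illustrative values).
import numpy as np

def inv_tau(k, w, t1=0.5, g0t=0.04, g1t=0.004, g2t=0.002, N0=0.0):
    eps_k = -2.0 * t1 * np.cos(k)
    eps_2k = -2.0 * t1 * np.cos(2.0 * k)
    bracket = (g0t ** 2
               - g0t * g1t * (4.0 - (eps_k / t1) * (w / t1))
               - g0t * g2t * (4.0 - 2.0 * eps_2k / t1
                              + 2.0 * (eps_k / t1) * (w / t1)
                              - 2.0 * w ** 2 / t1 ** 2))
    return (2.0 * N0 + 1.0) / np.sqrt(4.0 * t1 ** 2 - w ** 2) * bracket
```

Setting $\tilde{g}_{1}=\tilde{g}_{2}=0$ recovers the $k$-independent equilibrium rate $(2N_{0}+1)\tilde{g}_{0}^{2}/\sqrt{4\tilde{t}_{1}^{2}-\tilde{\omega}^{2}}$, while the nonequilibrium corrections introduce the $k$ dependence mentioned above.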
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 17mm 0mm 25mm 0mm, clip, width=4.2cm]{SpectralFunction_FermiSea_40_00_180_500_500_01_02_04.pdf} &
\includegraphics[trim = 17mm 0mm 25mm 0mm, clip, width=4.2cm]{SpectralFunction0_200_00_180_100_1100_05_20.pdf} \\
\end{array}$
\caption{\small (Color online) Effective and Floquet spectral functions for $\Omega = 5\nu$ (left) and $\Omega = \nu$ (right), respectively. Both spectral functions have been computed for zero temperature with the following parameters: $\omega_{0}=0.1\nu$, $ g_{0}=0.2\nu$, $z=1.8$, and $\delta=0.01$.}
\label{Spectral Function}
\end{figure}
When $-2\tilde{t}_{1} - \tilde{\omega}_{0} < \tilde{\omega} < - 2\tilde{t}_{1} + \tilde{\omega}_{0}$, we can also determine the polaron properties for energies in the vicinity of $-2\tilde{t}_{1}$. The reader may refer to the Appendix for more details. Such energies are associated with the bottom of the equilibrium electron band since we consider, without loss of generality, that $\tilde{t}_{1}(z)>0$. Then Eq.\,(\ref{Self-Energy}) leads to the following polaronic dispersion relation in the limit of small $k$
\begin{align}
\tilde{\xi}_{k} &\simeq -\tilde{\Delta} + \frac{1}{1+ \frac{\tilde{\Delta}}{2\tilde{\omega}_{0}}} \frac{k^{2}}{2\tilde{m}} ~.
\end{align}
The onsite energy felt by the polaron is now negative and again defines a binding energy with
\begin{align}
\tilde{\Delta}(z) = (N_{0}+1) \, \frac{\tilde{g}_{0}^{2} - 8\tilde{g}_{0}\tilde{g}_{1}(z)}{\sqrt{4\tilde{\omega}_{0}\tilde{t}_{1}(z)}} ~.
\end{align}
Note that this is a function of the phonon temperature too. Besides the effective mass of the polaron is given by
\begin{align}
\tilde{m}^{*}(z) = \left[ 1+ \frac{\tilde{\Delta}(z)}{2\tilde{\omega}_{0}} \right] \tilde{m}(z) ~.
\end{align}
Again we can check that, when the off-resonant driving is turned off, the binding energy reduces to $\tilde{\Delta} = \tilde{g}_{0}^{2}/\sqrt{4\tilde{\nu}\tilde{\omega}_{0}}$, so that the expressions above yield the same results as the equilibrium ones [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}].
\subsection{Effective and Floquet spectral functions}
The retarded component of the effective self-energy introduced in Eq.\,(\ref{Self-Energy}) leads to the effective spectral function
\begin{align}
\tilde{A}(k,\tilde{\omega}) \simeq -\frac{1}{\pi} \Imag \left[ \tilde{G}^{0}_{R}(k,\tilde{\omega})^{-1} - \tilde{\Sigma}^{(2)}_{R}(k,\tilde{\omega}) \right]^{-1} ~.
\end{align}
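A minimal numerical sketch of this spectral function follows. A toy one-phonon self-energy $\tilde{\Sigma}_{R}=g^{2}/(\tilde{\omega}-\tilde{\omega}_{0}-\epsilon+i\delta)$ stands in for Eq.\,(\ref{Self-Energy}) at fixed $k$, so both its form and the parameters are assumptions of the illustration:

```python
# Spectral function A = -(1/pi) Im [ (G0_R)^{-1} - Sigma_R ]^{-1} with a
# toy one-phonon self-energy; checks positivity and the spectral-weight
# sum rule.  Parameters are illustrative, not the paper's.
import numpy as np

def spectral(w, eps=-0.8, w0=0.1, g=0.1, delta=1e-3):
    G0inv = w - eps + 1j * delta                  # (G0_R)^{-1}
    sigma = g ** 2 / (w - w0 - eps + 1j * delta)  # toy Sigma_R
    return -np.imag(1.0 / (G0inv - sigma)) / np.pi

w = np.linspace(-4.0, 4.0, 400001)
A = spectral(w)
norm = np.sum(A) * (w[1] - w[0])  # Riemann sum over the frequency grid
```

The quasiparticle pole splits into two peaks separated by the phonon line, and the total spectral weight stays normalized to one.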
Importantly, the effective spectral function is a gauge-invariant quantity: it has been introduced in the context of the stroboscopic dynamics and, therefore, is not affected by the momentum shift required to make Green functions gauge invariant out of equilibrium [\onlinecite{boulware1966gauge,davies1988narrow,aoki2014nonequilibrium}]. Note moreover that the Keldysh approach relies on the equilibrium Fermi-Dirac distribution, since it assumes that the system was in equilibrium at time $\tau=-\infty$. This is the reason why the equilibrium distribution function appears in the expression of the effective self-energy in Eq.\,(\ref{Self-Energy}). Fig.\,\ref{Spectral Function} depicts an effective spectral function that takes into account the effect of a Fermi sea at half-filling in the adiabatic limit. It can be noticed that the bottom of the band reveals two parabolic bands in this limit, in agreement with the two bands reported earlier in the single-electron case.
Besides, the high-frequency results presented here are equivalent to the ones obtained in the framework of Floquet Green functions [\onlinecite{tsuji2008correlated}], whose definition relies on the time-dependent Hamiltonian in Eq.\,(\ref{Time-Dependent Hamiltonian}). Nevertheless, the Floquet Green functions are not based on the high-frequency assumption and enable us to numerically describe the effect of a slower driving frequency. The spectral function they lead to is illustrated in Fig.\,\ref{Spectral Function} for a frequency that satisfies $\Omega = \nu$. Out of equilibrium, the energy is no longer a conserved quantity but, in the case of a time-periodic driving, Floquet theory ensures that it is conserved up to a multiple of the frequency. This is the reason why the Floquet spectral function in Fig.\,\ref{Spectral Function} is similar to the effective one, apart from replicas centered on $m\Omega$ for all values of the relative integer $m$. Actually, these replicas do exist in the high-frequency description too, but they can be neglected when the driving is off-resonant.
The density of states, which is obviously a gauge-invariant quantity too, can finally be obtained by integrating the spectral function over the Brillouin zone. It is depicted in Fig.\,\ref{DOS} in the adiabatic limit at zero temperature, from both the high-frequency limit and the Floquet Green functions. Whereas it shows a single band with polaronic peaks in the high-frequency limit, additional replicas overlap each other when reducing the driving frequency, in agreement with the Floquet spectral function in Fig.\,\ref{Spectral Function}.
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 00mm 0mm 0mm 0mm, clip, width=4.2cm]{EffectiveDOS.pdf} &
\includegraphics[trim = 00mm 0mm 0mm 0mm, clip, width=4.2cm]{DensityOfStates_200_00_180_100_1100_02_20.pdf}
\end{array}$
\caption{\small (Color online) Effective and Floquet local spectral functions for $\Omega = 5\nu$ (left) and $\Omega = \nu$ (right), respectively. Both plots correspond to zero temperature with $\omega_{0}=0.1\nu$, $z=1.8$, $\delta=0.01$ and $g_{0}=0.0\nu$ (dashed line) or $g_{0}=0.2\nu$ (full line).}
\label{DOS}
\end{figure}
\section{Strong-coupling regime}
\subsection{Lang-Firsov canonical transformation}
In equilibrium, the electron-phonon interaction may already be too large to be regarded as a perturbation with respect to the electron bandwidth. But regardless of the equilibrium interaction strength, we have also stressed that the system can always be dynamically driven toward such a strong-coupling regime, defined by $|\tilde{t}_{1}(z)| \ll \tilde{g}_{0}$. This problem can be solved within a perturbation theory, whose zeroth order is given by $\tilde{t}_{1}(z)=0$ and usually describes localized electrons. This provides an exact analytical solution when the system is in equilibrium, which is traditionally obtained from the Lang-Firsov canonical transformation [\onlinecite{lang1962title}]. In our case, this transformation, which is detailed in the Appendix, turns the effective Hamiltonian (\ref{Effective Holstein Hamiltonians}) into
\begin{align}\label{Lang Firsov Hamiltonian}
\tilde{H}' &= \tilde{\omega}_{0} \sum_{q} \, b_{q}^{\dagger} b_{q}
- \tilde{\Delta} \sum_{n} c_{n}^{\dagger} c_{n} \\
&+ \tilde{t}_{1} \sum_{n} \left( c_{n+1}^{\dagger} c_{n} X_{n+1}^{\dagger}X_{n} + h.c. \right) \notag \\
&+ \tilde{t}_{2} \sum_{n} \left( c_{n+2}^{\dagger} c_{n} X_{n+2}^{\dagger}X_{n} + h.c. \right) \notag \\
&+ \tilde{g}_{2} \sum_{n,q} (2\cos q - 1) \, e^{-iqn} \left( c_{n+2}^{\dagger} c_{n} X_{n+2}^{\dagger}X_{n} + h.c. \right) B_{q} ~, \notag
\end{align}
where the polaron-polaron interactions are neglected and
\begin{align}
X_{n'}^{\dagger}X_{n} = \exp \left( \sum_{q} u_{q} \, (e^{-iqn}-e^{-iqn'})(b_{q}-b_{-q}^{\dagger}) \right)
\end{align}
with $u_{q}=[\tilde{g}_{0}+(2\cos q - 1)\tilde{g}_{1}]/\tilde{\omega}_{0}$.
Whereas the phonon frequency is not changed by the canonical transformation, the polaron binding energy
\begin{align}\label{Binding Energy Strong Coupling}
\tilde{\Delta}(z) = \frac{\tilde{g}_{0}^{2}-4\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{\omega}_{0}}
\end{align}
is reduced by the Peierls coupling $\tilde{g}_{1}$ when the driving is turned on, as illustrated in Fig.\,\ref{Renormalized Binding Energy}. It defines a potential well that tends to localize the electron on a molecular site, so that the characteristic size of the polaron becomes comparable to the lattice spacing, hence the name of small polaron that may be encountered in the literature. Note that $\tilde{\Delta}$ does not change sign because $\tilde{g}_{1}$ comes as a second-order correction to $\tilde{g}_{0}$ in the high-frequency limit, according to Eq.\,(\ref{Coupling Definitions}).
Of course, one naturally recovers the equilibrium binding energy when the driving is turned off ($z=0$). In this case, the binding energies introduced in the strong-coupling regime and in the non-adiabatic limit of the weak-coupling regime equal each other [\onlinecite{klamt1988tight},\,\onlinecite{barivsic2008phase}]. Interestingly, this is no longer the case out of equilibrium, as can be seen from Eq.\,(\ref{Binding Energy NonAdiabatic}) and Eq.\,(\ref{Binding Energy Strong Coupling}). The extra term $4\tilde{g}_{0}\tilde{g}_{2}(z)$ in Eq.\,(\ref{Binding Energy NonAdiabatic}) comes from the phonon-assisted next-nearest-neighbor hopping process, which leads to $4\tilde{g}_{0}\tilde{g}_{2}(z)\cos(2k)$ in momentum space (cf. the non-adiabatic limit in the Appendix) and whose expansion for small $k$ yields an energy offset.
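The mismatch between the two binding energies can be checked numerically. The truncated Bessel sums, the value of $\tilde{\omega}_{0}$, and the other parameters are assumptions of the illustration:

```python
# Eq. (Binding Energy NonAdiabatic) minus Eq. (Binding Energy Strong
# Coupling) equals 4*g0t*g2t(z)/w0t, and the two coincide at z = 0.
import numpy as np
from scipy.special import jv

def binding_energies(z, g0=0.2, nu=1.0, Omega=5.0, w0t=0.02):
    m = np.arange(1, 41)
    pref = 0.5 * (g0 / Omega) * (2.0 * nu / Omega) ** 2
    g0t = g0 / Omega
    g1t = pref * np.sum(jv(m, z) ** 2 / m ** 2)
    g2t = 0.5 * pref * np.sum(jv(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
                              - jv(2 * m, z) ** 2 / (2 * m) ** 2)
    d_weak = (g0t ** 2 - 4.0 * g0t * g1t + 4.0 * g0t * g2t) / w0t
    d_strong = (g0t ** 2 - 4.0 * g0t * g1t) / w0t
    return d_weak, d_strong, 4.0 * g0t * g2t / w0t
```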
Contrary to the equilibrium situation, the canonical transformation does not diagonalize the effective Hamiltonian when the off-resonant driving turns off the nearest-neighbor hopping, i.e. when $\tilde{t}_{1}(z) = 0$. This is due to the nonequilibrium coupling $\tilde{g}_{2}$, which is responsible for the last two terms on the right-hand side of Eq.\,(\ref{Lang Firsov Hamiltonian}). The first one, which scales with
\begin{align}
\tilde{t}_{2}(z) = 2\frac{\tilde{g}_{0}\tilde{g}_{2}(z)}{\tilde{\omega}_{0}}~,
\end{align}
describes the next-nearest-neighbor hopping of the polaron, namely the electron dressed by the phonon cloud, whose annihilation operator is $c_{n} X_{n}$. This hopping process tends to delocalize the polaron and competes with the nearest-neighbor hopping when $\tilde{t}_{1}\sim \tilde{t}_{2}$, which roughly occurs when
\begin{align}
\frac{\nu}{\omega_{0}} \sim \left( \frac{\Omega}{g_{0}} \right)^{2} ~.
\end{align}
Such a condition is for example accessible in the adiabatic situation where $\omega_{0}\ll \nu$.
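This crossover can be located numerically. The Floquet band narrowing $\tilde{t}_{1}(z)=\nu J_{0}(z)/\Omega$ is an assumption of the illustration (it is not restated in this section), while $\tilde{t}_{2}$ follows the expression above:

```python
# Where does the next-nearest-neighbor polaron hopping t2 = 2*g0t*g2t/w0t
# overtake the nearest-neighbor one?  t1t(z) = nu*J0(z)/Omega is assumed.
import numpy as np
from scipy.special import jv

def hoppings(z, g0=1.0, nu=1.0, Omega=5.0, w0=0.1):
    m = np.arange(1, 41)
    g0t = g0 / Omega
    g2t = 0.25 * (g0 / Omega) * (2.0 * nu / Omega) ** 2 * np.sum(
        jv(2 * m - 1, z) ** 2 / (2 * m - 1) ** 2
        - jv(2 * m, z) ** 2 / (2 * m) ** 2)
    t1t = nu * jv(0, z) / Omega          # assumed renormalized hopping
    t2t = 2.0 * g0t * g2t / (w0 / Omega)
    return t1t, t2t
```

For weak driving the nearest-neighbor hopping dominates, whereas close to the first zero of $J_{0}$ (dynamical localization, $z\simeq 2.405$) the next-nearest-neighbor process takes over.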
The second term generated by nonequilibrium coupling $\tilde{g}_{2}$ in Eq.\,(\ref{Lang Firsov Hamiltonian}) describes phonon-assisted polaron hopping between next-nearest-neighbor sites.
\subsection{Peierls-Feynman-Bogoliubov variational principle}
In order to get rid of the phonon-assisted polaron hopping term in Hamiltonian (\ref{Lang Firsov Hamiltonian}), we aim to map it onto
\begin{align}\label{Quadratic Hamiltonian}
H^{*} &= \tilde{\omega}_{0} \sum_{q} b_{q}^{\dagger}b_{q}
- \tilde{\Delta} \sum_{n}c^{\dagger}_{n}c_{n} \notag \\
&+ t_{1}^{*} \sum_{n} \left(c^{\dagger}_{n+1}c_{n}+h.c.\right) + t_{2}^{*} \sum_{n} \left(c^{\dagger}_{n+2}c_{n}+h.c.\right) ~.
\end{align}
This Hamiltonian is quadratic in momentum space, so that its partition function $Z^{*}=\Tr e^{-\beta H^{*}}$ is known. Parameters $t_{1}^{*}$ and $t_{2}^{*}$ are then determined under the constraint that $\rho^{*} = e^{-\beta H^{*}}/Z^{*}$ is the best approximation of the exact density operator defined from Hamiltonian $\tilde{H}'$. This leads to the Peierls-Feynman-Bogoliubov variational principle [\onlinecite{PhysRev.54.918,Bogolyubov,feynman1972lectures}], which consists in minimizing, with respect to $t_{1}^{*}$ and $t_{2}^{*}$, the following functional
\begin{align}
F^{*}+\langle \tilde{H}' - H^{*} \rangle_{*} ~,
\end{align}
where $F^{*} = - (1/\beta) \ln Z^{*}$. This results in
\begin{align}
t_{1}^{*} = \tilde{t}_{1} \left\langle X_{m+1}^{\dagger}X_{m} \right\rangle_{*}
~~~~~~\text{and}~~~~~~
t_{2}^{*} = \tilde{t}_{2} \left\langle X_{m+2}^{\dagger}X_{m} \right\rangle_{*} ~.
\end{align}
The reader may find more details about the derivation of these expressions in the Appendix.
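To make the logic of this bound concrete, the inequality $F \le F^{*}+\langle \tilde{H}' - H^{*} \rangle_{*}$ can be checked numerically on a toy two-level Hamiltonian. The model, trial form, and parameter values below are illustrative assumptions only, not the polaron problem itself:

```python
import numpy as np

beta = 2.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -0.5 * sx - 1.0 * sz                      # "exact" toy Hamiltonian

def free_energy(Hm):
    e = np.linalg.eigvalsh(Hm)
    return -np.log(np.exp(-beta * e).sum()) / beta

def bogoliubov_bound(h_star):
    Hs = -h_star * sz                         # diagonal, hence "solvable" trial
    e, v = np.linalg.eigh(Hs)
    w = np.exp(-beta * e)
    w /= w.sum()                              # Boltzmann weights of H*
    rho = (v * w) @ v.T                       # trial density operator rho*
    return free_energy(Hs) + np.trace(rho @ (H - Hs))

grid = np.linspace(0.2, 3.0, 2801)
bounds = np.array([bogoliubov_bound(h) for h in grid])
h_opt = grid[bounds.argmin()]                 # optimal trial parameter
assert bounds.min() >= free_energy(H)         # the variational bound holds
```

Minimizing the bound selects $h^{*}$ equal to the diagonal part of $H$, in the same way as the minimization above fixes $t_{1}^{*}$ and $t_{2}^{*}$.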
\subsection{Holstein polaron band}
\begin{figure}[t]
\centering
$\begin{array}{cc}
\includegraphics[trim = 8mm 0mm 10mm 5mm, clip, width=4.2cm]{OnsitePolaronicEnergy.pdf} &
\includegraphics[trim = 8mm 0mm 10mm 5mm, clip, width=4.2cm]{RenormalizedHopping.pdf}
\end{array}$
\caption{\small (Color online) Variations of the polaron binding energy (left) and of its nearest- and next-nearest-neighbor hopping amplitudes (right) for $\Omega = 5\nu$, $g_{0}=\nu$, $\omega_{0}=0.1\,\nu$, and zero temperature.}
\label{Renormalized Binding Energy}
\end{figure}
It is worth mentioning that the variational principle relies solely on the averages of bosonic operators, meaning that it describes hopping processes that conserve the number of phonons. If this elastic process is dominant, then the electron remains coherent and can still be described in terms of Bloch band theory. The average of bosonic operators can be evaluated from the Feynman disentangling method, which is detailed in the Appendix. The result is
\begin{align}
\left\langle X_{m+n}^{\dagger}X_{m} \right\rangle_{*}
&= \exp \left( - (2N_{0}+1)\frac{\tilde{g}_{0}^{2}-4\tilde{g}_{0}\tilde{g}_{1}-2\tilde{g}_{0}\tilde{g}_{1}\delta_{n,1}}{\tilde{\omega}_{0}^{2}} \right) ~,
\end{align}
so that the nearest- and next-nearest-neighbor hopping amplitudes are functions of the phonon temperature and the driving strength. They are respectively given by
\begin{align}\label{Polaron Assisted Hopping 1}
t_{1}^{*}(z) &= \tilde{t}_{1}(z) \exp \left( - (2N_{0}+1)\frac{\tilde{g}_{0}^{2}-6\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{\omega}_{0}^{2}} \right) ~~
\end{align}
and
\begin{align}\label{Polaron Assisted Hopping 2}
t_{2}^{*}(z) &= \tilde{t}_{2}(z) \exp \left( - (2N_{0}+1)\frac{\tilde{g}_{0}^{2}-4\tilde{g}_{0}\tilde{g}_{1}(z)}{\tilde{\omega}_{0}^{2}} \right) ~.
\end{align}
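Once the couplings are known, Eqs.\,(\ref{Polaron Assisted Hopping 1}) and (\ref{Polaron Assisted Hopping 2}) are straightforward to evaluate numerically. In the sketch below all coupling values are placeholders, and $\tilde{t}_{1}(z)$, $\tilde{t}_{2}(z)$, and $\tilde{g}_{1}(z)$ would in practice follow from the coupling definitions of the effective Hamiltonian:

```python
import numpy as np

def polaron_hoppings(t1_tilde, t2_tilde, g0, g1, w0, beta):
    """Dressed hopping amplitudes t1*, t2*; the exponents follow the
    Franck-Condon-type factors of the text (both with denominator w0**2)."""
    N0 = 1.0 / np.expm1(beta * w0)     # Bose occupation of the phonon mode
    f = 2.0 * N0 + 1.0
    t1_star = t1_tilde * np.exp(-f * (g0**2 - 6.0 * g0 * g1) / w0**2)
    t2_star = t2_tilde * np.exp(-f * (g0**2 - 4.0 * g0 * g1) / w0**2)
    return t1_star, t2_star

# placeholder values, in units of the bare nearest-neighbor hopping
t1s, t2s = polaron_hoppings(t1_tilde=1.0, t2_tilde=0.05,
                            g0=0.3, g1=0.02, w0=0.5, beta=10.0)
```

Lowering $\beta$ (raising the temperature) increases $N_{0}$ and suppresses both amplitudes exponentially, which anticipates the band flattening discussed below.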
These hopping processes both tend to delocalize the electron and thus compete with the potential well $\tilde{\Delta}$ to enhance the polaron size. The largest polaron is therefore expected at low temperatures, where the phonon occupation number $N_{0}$ vanishes. When increasing the temperature, the electron band becomes flatter, so that the effective mass becomes heavier. The inelastic processes, which do not conserve the number of phonons, then become more and more important: the electron loses its coherence and its motion becomes diffusive. However, these effects are only due to the existence of polarons, in the sense that they already occur in equilibrium, without time-periodic driving.
The nonequilibrium effects due to the off-resonant driving are actually twofold. On the one hand, it yields next-nearest-neighbor hopping processes that cannot be switched off dynamically together with the nearest-neighbor ones, i.e., the conditions $\tilde{t}_{1}(z)=0$ and $\tilde{t}_{2}(z)=0$ cannot be satisfied simultaneously. This is what Fig.\,\ref{Renormalized Binding Energy} illustrates. As a consequence, the dynamical localization of electrons predicted in Ref.\,[\onlinecite{PhysRevB.34.3625}] no longer arises in the presence of lattice vibrations. On the other hand, the nonequilibrium Peierls coupling $\tilde{g}_{1}$ enhances the exponential arguments in Eqs.\,(\ref{Polaron Assisted Hopping 1}) and (\ref{Polaron Assisted Hopping 2}). This is the reason why $t_{1}^{*}(z)$ first increases when turning on the driving strength in Fig.\,\ref{Renormalized Binding Energy}. The resulting driving-renormalized polaron band finally reads
\begin{align}
\xi_{k}^{*}(z) = 2t_{1}^{*}(z)\cos(k) + 2 t_{2}^{*}(z)\cos(2k) - \tilde{\Delta}(z) ~.
\end{align}
Thus, contrary to the equilibrium case, the polaron is also allowed to dynamically enhance the electronic bandwidth and reduce the effective mass of the electron.
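As a quick numerical illustration of how the two hopping channels shape this dispersion, $\xi_{k}^{*}$ can be evaluated on a momentum grid; the amplitudes used below are placeholders:

```python
import numpy as np

def polaron_band(k, t1, t2, delta):
    # xi*(k) = 2 t1* cos(k) + 2 t2* cos(2k) - Delta
    return 2.0 * t1 * np.cos(k) + 2.0 * t2 * np.cos(2.0 * k) - delta

t1, t2, delta = 0.8, 0.04, 0.5                # placeholder amplitudes
k = np.linspace(-np.pi, np.pi, 2001)
xi = polaron_band(k, t1, t2, delta)
bandwidth = xi.max() - xi.min()               # equals 4 t1 here
curvature = -2.0 * t1 - 8.0 * t2              # d^2 xi / dk^2 at k = 0
```

The $\cos(2k)$ term drops out of the bandwidth when the extrema sit at $k=0$ and $k=\pi$, but it enters the curvature at $k=0$ with four times the weight of $t_{1}^{*}$ per unit amplitude, so the next-nearest-neighbor channel is particularly effective at reducing the effective mass.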
\section{Conclusion}
Here we have addressed the problem of rapidly driven electrons that are linearly coupled to vibrational modes in a one-dimensional crystal. The stroboscopic dynamics has been described up to third order of the high-frequency expansion. This approach provides an effective description of the problem in terms of a time-independent effective Hamiltonian. It has enabled us to show that any kind of electron-phonon interaction is responsible for corrections to the effective Hamiltonian which reduce the interaction strength between electrons and phonons of specific momenta. In this sense, the off-resonant driving can be regarded as a way to tune the electron-phonon coupling and to choose specific interaction channels in a dynamical and reversible fashion.
Finally, we have discussed the specific case of the Holstein interaction in equilibrium. Such a local interaction is responsible for non-local interactions when the electrons are rapidly driven, such as antisymmetric interactions of Peierls type and phonon-assisted electron tunneling, which suppresses the dynamical Wannier-Stark localization. The polaronic effects induced by these nonequilibrium corrections have been reported in the weak- and strong-coupling regimes, since these two regimes can both be visited dynamically when varying the driving strength. In particular, we have described how the binding energy, the mass, and the size of the polaron may be controlled by the off-resonant driving. These high-frequency results have also been compared to the ones obtained in the formalism of Floquet Green functions, which allows the description of driving with arbitrary (low) frequencies.
Although the high-frequency limit is already relevant for systems such as shaken optical lattices, the explicit knowledge of the electron-phonon mechanisms we derive here in the third-order expansion allows the description of lower frequencies, which become reasonable for solid-state physics too, for example during multicycle laser irradiations in pump-probe experiments. The dynamical control allowed by the driving strength offers the possibility to test weak- and strong-coupling polaron theories within a single material and may also be helpful to understand the crucial interplay between local and nonlocal electron-phonon interactions in systems such as organic molecular crystals.
\begin{acknowledgments}
The authors are grateful to E. A. Stepanov and would like to point out his involvement in the derivation of the effective Green function formalism. This work was supported by NWO via the Spinoza Prize and by ERC Advanced Grant 338957 FEMTO/NANO.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
\label{sec_intro}
Magneto-optical imaging (MOI) has deservedly gained a privileged place among magnetic mapping techniques for investigating ferromagnetic\cite{McCord2015} and superconducting\cite{Jooss2002,Johansen2004} materials. The assets that make this technique stand out are its limited invasiveness\cite{Goa2003a,Brisbois2014,Brisbois2017}, short acquisition time\cite{Koblischka1995,Bending1999}, and a fair spatial and magnetic field resolution while being able to map the magnetic field profile over large surfaces. These advantages have made it possible, for instance, to investigate the fast magnetic flux dynamics in superconductors\cite{Wertheimer1967,Bolz2003,Bujok1993} and large-scale phenomena such as flux avalanches\cite{Johansen2002,Brisbois2016}.
Over the years, a continuous effort to improve the MOI technique has pushed its spatial resolution high enough to resolve single flux quanta\cite{Goa2001,Goa2003,Golubchik2009,Veshchunov2016}. Several protocols have been developed to enhance the magnetic field resolution of the MOI technique down to a few $\mu$T, which has enabled the detection of subtle phenomena such as the melting of the Abrikosov vortex lattice\cite{Soibel2000,Banerjee2003,Wijngaarden2001,Mandal2012}. A major challenge in MOI remains the quantification of the acquired raw data\cite{Jooss2002}. Conversion of the light intensity distribution obtained by MOI into a magnetic field texture is complicated by several factors, such as non-uniform and time-dependent illumination, sample tilt and topography, depolarization by optical elements, defects and magnetic domains in the MOI indicator film, sample drift, and intrinsic camera noise. Hence, significant efforts have been made, first identifying and understanding the sources of limitations, and then mastering them to achieve a quantitative interpretation of MOI data\cite{Johansen1996,Laviano2003,Paturi2003,Roussel2007,Patterson2015,Albrecht2016,Grechishkin2016,Wells2016}.
Although MOI has been successful in mapping the magnetic field generated by magnets and superconductors, its performance in hybrid systems combining both types of materials is less certain. The reason is the difficulty of discerning the contribution of each sub-system to the total magnetic field. In particular, when using hard ferromagnets and conventional superconductors, the former can produce an overwhelming signal, making it rather challenging to unveil the weak signal from the superconductor. One way to overcome this difficulty is to quantify the field distribution in order to precisely remove the strong background signal from the ferromagnet. In this report, we present comprehensive protocols developed for the conversion of MOI data into a magnetic field distribution, with the goal of revealing the comparatively weaker magnetic response of a superconductor buried in a larger background field associated with a magnetic layer in its vicinity. Our work-flow, partially inspired by Ref. \onlinecite{Roussel2007}, involves calibration of the intensities (gray levels) of images and their conversion into local field values. For a superconducting sample, a set of calibration images is obtained above its critical temperature $T_\mathrm{c}$, providing magnetic field distribution information of the system in the absence of a superconducting signal. Intensity-versus-field calibration curves are then determined for each pixel in the image. The calibration curves thus obtained are subsequently applied to images acquired below $T_\mathrm{c}$ to obtain the local magnetic field distribution. The pixel-by-pixel calibration method enables an a posteriori correction of artifacts such as inhomogeneous illumination. Furthermore, the contrast originating from the in-plane magnetic domains in the indicator film has been removed by implementing an image post-processing algorithm.
The intensity-to-magnetic field conversion protocol and procedure for correction for magnetic domains are discussed in detail in Section \ref{sec_calib}, after introducing the employed experimental setup in Section \ref{sec_expt}. In the following Section \ref{sec_results} we discuss the application of the proposed protocols on systems of increasing complexity. In the first stage, we tested the protocols on MOI data obtained on magnetic structures (magnetic bars, disks, and a micro-scale electromagnet) with promising results (Sections \ref{sec_bars}, \ref{sec_dots}, and \ref{sec_coil}). Subsequently, we applied the technique to reveal new aspects of innovative superconductor/ferromagnet (S/F) hybrids (Sections \ref{sec_TMP} and \ref{sec_NbPy}). Quantification of MOI data is particularly useful for the hybrid structures as it enables clearer insight into the physics of these systems. However, as we discuss in Section \ref{sec_TMP}, we also confront the limitations of the proposed approach when applied on systems with large magnetic field amplitudes.
\section{Experimental details}
\label{sec_expt}
Let us start by introducing the experimental setup used for acquiring magneto-optical images. MOI is a magnetic field mapping microscopy technique based on the Faraday effect, where the direction of polarization of a light beam is rotated proportionally to the local magnetic field\cite{Jooss2002,Koblischka1995,McCord2015,Patterson2015,Lange2017}. This effect is strongest in purposely designed indicator films, placed on top of the sample under study. The Faraday active indicator we use throughout this work is a 3~$\mu$m thick Bi-doped yttrium iron garnet epitaxially grown on a 450~$\mu$m thick Gd$_3$Ga$_5$O$_{12}$ transparent substrate. A 100~nm thick Al mirror was deposited on the optically active layer side in order to assure sufficient reflection of the incident light beam produced by linearly polarizing the 550 nm emission of a Hg lamp. The magnetization of the Faraday active layer is in-plane in the absence of external magnetic field, but it is tilted out-of-plane by the presence of local magnetic fields perpendicular to the plane of the indicator. The rotation of the polarization is proportional to the component of the magnetic moment along the direction of light propagation. After crossing the analyzer, blocking the initial polarization direction, the reflected beam is captured by a high resolution RETIGA-4000 CCD camera recording 2048 px $\times$ 2048 px gray-scale images, thus obtaining a light intensity map representative of the magnetic field texture at the indicator's plane. In order to increase the signal-to-noise ratio, and unless stated otherwise, all the magneto-optical (MO) images presented throughout this work result from averaging between 3 and 10 images acquired by the camera with an exposure time of the order of 0.5 to 1 s. The polarization microscope is a commercial Olympus modular system, and the external magnetic field is provided by a cylindrical copper coil fed by a Keithley-2440 current source. 
Calibration of the coil was done with a USB Hall probe and consists in measuring the magnetic field as a function of the current at the center of the coil, at the location of the sample. Our configuration guarantees spatial variations of the field at most 1\% of the maximum field at the sample location (in a $5 \times 5$ mm$^2$ area). The sample is cooled down to temperatures as low as 4 K in a closed-cycle He cryostat (Montana Cryostation). The whole setup is installed on an actively damped non-magnetic Newport optical table. More details about the setup can be found in the supplementary material and in Ref. \onlinecite{Brisbois2016a}.
In some cases, MOI data has been complemented and compared with scanning Hall probe microscopy (SHPM) at room temperature. This technique allows acquisition of the real magnetic field $B_{z}(x,y)$ at various heights and was used to obtain direct quantitative information about the $z$-component of the stray magnetic field in the studied samples, over areas of hundreds of $\mu$m$^2$. A Hall probe with 5 $\times$ 5 $\mu$m$^2$ active area was used for the measurements. The scan resolution is 2.5 $\mu$m. Further details on the SHPM setup can be found in Ref. \onlinecite{Shaw2016}.
\section{Quantitative magneto-optical imaging}
\label{sec_calib}
\subsection{Intensity-to-magnetic field conversion protocol}
MOI does not provide direct access to the out-of-plane magnetic field $B_z$, but rather to light intensity values $I$ related to it. Therefore, caution must be exercised when interpreting the raw images, since $I$ at a given pixel depends strongly on several parameters other than the local $B_z$, such as incident light intensity, exposure time, depolarization effects due to the optics, indicator tilt, or in-plane magnetic field components, to name a few. Furthermore, depending on the choice of the analyzer angle, pixels with the same light intensity may sometimes correspond to different values of the magnetic field. When it is desirable to recover information on the local magnetic field, protocols have been developed to convert the light intensity pixel values into absolute magnetic field values\cite{Jooss2002, Habermeier1979,Johansen1996,Laviano2003,Roussel2007}. The procedure we present here is inspired by those, but with two significant improvements: (i) we use an exact pixel-by-pixel method\cite{Rave1987,McCord1999,Patterson2015}, instead of generalizing a calibration performed on a reference zone to the whole image, and (ii) we separate the contribution of the superconductor from other sources of constant magnetic field, such as ferromagnetic structures. The complete procedure is summarized in Fig. \ref{Fig-calib} using as an illustration a rectangular superconducting Nb film with two ferromagnetic disks as sources of inhomogeneous magnetic field.
According to Malus' law, the intensity $I(B_z,x,y)$ recorded by the camera for a pixel $(x,y)$ in the image can be approximated as follows\cite{Jooss2002}:
\begin{equation}\label{eq:malus}
I(B_z,x,y) = I_0(x,y) \sin^2 \left( \alpha(B_z,x,y) + \beta (x,y) \right),
\end{equation}
where $I_0 (x,y)$ is the incident light intensity, diminished by depolarizing effects and absorption, $\alpha(B_z,x,y)$ is the angle of rotation of the light polarization, coming from the Faraday effect for a local out-of-plane magnetic field $B_z(x,y)$, and $\beta (x,y)$ is the deviation from the extinction configuration. The spatial distribution of $\beta (x,y)$ is non-uniform, mainly due to the small deviation of the incident light from normal incidence. This means that the extinction angle is not uniquely defined for a given image. In other words, the minimum of intensity does not occur for the same analyzer-polarizer angle at every pixel $(x,y)$. The value of $\alpha(B_z,x,y)$ is related to $B_z(x,y)$ through the out-of-plane magnetization $M_z(x,y)$ of the MO active layer of the indicator: $\alpha(B_z,x,y) = C(x,y) M_z(B_z,x,y)$, where $C(x,y)$ depends on the sensitivity of the MO active layer. Indeed, $B_z$ affects the originally in-plane magnetization of the MO active layer of the indicator by tilting it out of the plane. The out-of-plane component of the magnetization is given by $M_z(B_z,x,y) = M_{\mathrm{s}} \sin \theta(B_z,x,y)$, with $M_{\mathrm{s}}$ the saturation magnetization of the MO active layer and $\theta = \arctan \left( B_z(x,y)/B_{\mathrm{s}} \right)$ the angle between $M_\mathrm{s}$ and the in-plane direction\cite{theta}. $B_{\mathrm{s}}$ is related to the saturation field of the MO active layer and is of the order of 80 mT for indicators of the type employed in this work\cite{Johansen1996}. Substitution of $\alpha(B_z,x,y)$ in Eq. (\ref{eq:malus}) yields the following expression:
\begin{widetext}
\begin{equation}\label{eq:calib_exact}
I(B_z,x,y) = I_0(x,y) \sin^2 \left( C(x,y) M_{\mathrm{s}} \sin \left( \arctan \left( \frac{B_z(x,y)}{B_{\mathrm{s}}}\right) \right) + \beta(x,y) \right).
\end{equation}
\end{widetext}
This equation, linking $B_z(x,y)$ with the intensity $I(x,y)$ picked up by MOI, is at the core of quantitative MOI. Provided the parameters are determined through a preliminary calibration done on a set of data where $B_z(x,y)$ is known, Eq. (\ref{eq:calib_exact}) can be used to convert $I(x,y)$ images to $B_z(x,y)$ field maps in any subsequent measurements.
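As a sketch, Eq.\,(\ref{eq:calib_exact}) can be coded directly as a forward model mapping a local field to a recorded intensity; the parameter values passed below are arbitrary placeholders:

```python
import numpy as np

def moi_intensity(Bz, I0, C_Ms, beta_off, Bs=80e-3):
    """Forward model of the exact calibration law: C_Ms plays the role of
    C(x,y)*Ms, beta_off of beta(x,y), and Bs ~ 80 mT for our indicators."""
    alpha = C_Ms * np.sin(np.arctan(Bz / Bs))  # Faraday rotation angle
    return I0 * np.sin(alpha + beta_off) ** 2
```

Fitting this model (or its quadratic approximation) to the calibration stack yields the per-pixel parameters used in the conversion.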
When local magnetic fields $B_z(x,y) \ll B_{\mathrm{s}}$, which is the case in most of our experiments (with the notable exception of the patterns presented in section \ref{sec_TMP}), one can make a Taylor series expansion of Eq. (\ref{eq:calib_exact}), keeping only terms up to order 2. In this case, $I(B_z,x,y)$ takes a much simpler parabolic dependence on $B_z(x,y)$:
\begin{equation}
I(B_z,x,y) \simeq a(x,y) + b(x,y) B_z + c(x,y) B_z^2,
\end{equation}
where
\begin{align}
a(x,y) &=I_0 \sin^2 (C M_{\mathrm{s}} \sin \beta),\\
b(x,y) &=\frac{I_0 C M_{\mathrm{s}}}{B_{\mathrm{s}}} \cos \beta \sin (2 C M_{\mathrm{s}} \sin \beta),\\
c(x,y) &=\frac{I_0 C M_{\mathrm{s}}}{B^2_{\mathrm{s}}} \lbrace C M_{\mathrm{s}} \cos^2 \beta \cos (2 C M_{\mathrm{s}} \sin \beta) \notag \\
&- \frac{\sin \beta}{2} \sin (2 C M_{\mathrm{s}} \sin \beta) \rbrace.
\end{align}
This equation can also be rewritten in another form, so as the parameters acquire a more intuitive physical interpretation:
\begin{equation}\label{eq:calib}
I(B_z,x,y) \simeq I_\mathrm{min}(x,y) + A(x,y) \left[ B_z(x,y) - B_\mathrm{min}(x,y) \right]^2
\end{equation}
where $A(x,y)=c(x,y)$, $B_\mathrm{min}(x,y)=-b(x,y)/2c(x,y)$, and $I_\mathrm{min}(x,y)=a(x,y)-b^2(x,y)/4c(x,y)$, are experimental parameters to be determined for each $(x,y)$ pixel of the image. Indeed, in Eq. (\ref{eq:calib}), $B_\mathrm{min}(x,y)$ is the value of the local magnetic field that gives the minimum intensity $I_\mathrm{min}(x,y)$ for a given pixel $(x,y)$, while $A(x,y)$ is related to the sensitivity of the MO active layer.
\begin{figure*}[ht]
\centering
\includegraphics[width=16.5cm]{Figure_calib.pdf}
\caption{ {\bf Quantitative magneto-optical imaging.} (a) Data cube formed by the reference intensity images $I(x,y)$ recorded above $T_\mathrm{c}$ when sweeping the applied magnetic field $H$. (b) The intensity profiles $I(H)$ for each $(x,y)$ pixel are fitted by a parabolic curve, given by Eq. (\ref{eq:calib}), and have a minimum $I_\mathrm{min,cal}$ for an applied field $H_\mathrm{min,cal}$. These two parameters, as well as the concavity $A$, are mapped in panel (c) for a small part of the sample, including a $20 \, \mu$m diameter in-plane polarized magnetic disk. (d) Sequence of images to be converted to local magnetic field $B_{z}$, taken below $T_\mathrm{c}$ when sweeping $H$. (e) The $I(H)$ curve for each $(x,y)$ pixel is compared to the calibration curve to determine $B_{z}(I)$. To discriminate between the two possible values of $B_{z}$ for a given $I$, the field $H_\mathrm{min,meas}$ at which the minimum of intensity $I_\mathrm{min,meas}$ occurs is compared with the applied field $H$: if $H>H_\mathrm{min,meas}$ ($H<H_\mathrm{min,meas}$), the upper (lower) branch of the reference parabola is selected. A comparison of the original $I$ image with the final $B_{z}$ image is shown in panels (f-g) for a superconducting film with magnetic disks, at $T=3.75$ K and $\mu_0 H = 5.5$ mT. The enlargements on the right show the effectiveness of the conversion procedure to remove the inhomogeneous field source of the magnetic disks.}
\label{Fig-calib}
\end{figure*}
The aim of the calibration procedure is to extract values of these parameters for each system configuration, i.e., for a given set of sample, indicator mounting, camera parameter and analyzer angle. This is done by recording a series of images, varying the external applied magnetic field $H=B_{z}/\mu_0$, as illustrated in Fig. \ref{Fig-calib}(a). In the case of a superconducting sample, the calibration needs to be done at a temperature $T>T_\mathrm{c}$, in order to exclude any superconducting signal. Under this condition, we typically record the average of 3 images for every value of $\mu_0 H$, varying between +12.5 mT and -12.5 mT by steps of 0.1 mT. Since we perform pixel-by-pixel calibration, it is important that all the recorded images overlap perfectly, i.e. unwanted drifts occurring during the measurements have to be taken into account. To that end, we use the StackReg plugin\cite{StackReg} of the ImageJ software\cite{ImageJ} which corrects the small translation of the sample of the order of $0.15 \, \mu$m/min (about 0.1 px/min), due to temperature gradients in the cryostat. The correction provided by the software has an error of $\pm 2$ px (i.e., about $3 \, \mu$m).
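Drift registration is performed here with the StackReg plugin; purely to illustrate the underlying idea, an integer-pixel shift between two frames can also be estimated by phase correlation, a different algorithm sketched below with plain FFTs:

```python
import numpy as np

def drift_shift(ref, img):
    """Estimate the integer-pixel shift (dy, dx) such that
    img ~ np.roll(ref, (dy, dx), axis=(0, 1)), via phase correlation."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    R /= np.abs(R) + 1e-12                    # keep only the phase information
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = ref.shape
    if dy > ny // 2:
        dy -= ny                              # map to signed shifts
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)
```

Frames can then be re-aligned by rolling them back by the estimated shift before the pixel-by-pixel fit.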
The sequence of calibration images thus obtained can be regarded as a data cube made of $N$ images with dimensions of $N_x \times N_y$ px$^2$, where each pixel contains the intensity value $I(H,x,y)$. Note that since the unwanted fluctuations of the light intensity produced by the Hg-lamp are homogeneous, they can be accounted for and eliminated by fitting with Eq. (\ref{eq:calib}) the intensity $I_\mathrm{mean}(H)$, corresponding to the average intensity in a $30 \times 30$ px$^2$ region free of defects and located far from the sample (i.e., where the magnetic field is homogeneous and has practically the same intensity as the applied field). Subsequently, the fitting of $I_\mathrm{mean}(H)$ returns the parabolic function $I_\mathrm{ref}(H)$, and for each image $I(H,x,y)$ is then corrected by multiplying every pixel by $I_\mathrm{ref}(H)/I_\mathrm{mean}(H)$.
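In code, this correction for lamp fluctuations reduces to a scalar rescaling of each frame; the region coordinates and array shapes below are illustrative:

```python
import numpy as np

def normalize_illumination(H, stack, y0, x0, size=30):
    """Divide out homogeneous lamp fluctuations using a defect-free
    reference region far from the sample (one scalar factor per image)."""
    I_mean = stack[:, y0:y0 + size, x0:x0 + size].mean(axis=(1, 2))
    coeffs = np.polyfit(H, I_mean, 2)       # parabolic reference I_ref(H)
    I_ref = np.polyval(coeffs, H)
    return stack * (I_ref / I_mean)[:, None, None]
```

The same reference region must be reused when correcting the measurement images acquired below $T_\mathrm{c}$.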
After correcting for image drift and fluctuations of intensity, we plot the intensity $I(H,x,y)$ as a function of the applied magnetic field $H$ for each pixel $(x,y)$ of the data cube, and fit it with Eq. (\ref{eq:calib}), as illustrated in Fig. \ref{Fig-calib}(b). The calibration process yields the functions $H_\mathrm{min,cal}(x,y)$, $I_\mathrm{min,cal}(x,y)$ and $A(x,y)$, represented in Fig. \ref{Fig-calib}(c). This procedure allows us to accurately relate the detected intensity $I(H,x,y)$ with the local magnetic field $B_{z}(x,y)=\mu_0 H(x,y)$, using for each pixel its specific reference curve $I(H,x,y)$. This means that the calibration takes into account and allows to eliminate any spatial inhomogeneity in the illumination, defects and artifacts in the sample and indicator, as well as constant magnetic field sources other than $H$. Of course, we cannot recover magnetic field information from unresponsive pixels (for instance those resulting from damage in the indicator due to scratches) or when the light intensity saturates (due to local high magnetic fields), so the parameter values at these points do not have a physical interpretation. Interpolation procedures could be used as a first approach to estimate this missing information.
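The per-pixel fit of Eq.\,(\ref{eq:calib}) vectorizes naturally, since \texttt{np.polyfit} accepts one column per pixel; below is a self-contained sketch on synthetic data, with assumed array shapes:

```python
import numpy as np

def calibrate_stack(H, stack):
    """Fit I = Imin + A (H - Hmin)^2 for every pixel of a drift-corrected
    calibration stack of shape (N, Ny, Nx); H holds the N applied fields."""
    N, Ny, Nx = stack.shape
    flat = stack.reshape(N, -1)
    c, b, a = np.polyfit(H, flat, 2)        # I = c H^2 + b H + a, per pixel
    A = c
    Hmin = -b / (2.0 * c)
    Imin = a - b**2 / (4.0 * c)
    return A.reshape(Ny, Nx), Hmin.reshape(Ny, Nx), Imin.reshape(Ny, Nx)
```

On a noiseless synthetic stack the fit recovers $A$, $H_\mathrm{min}$, and $I_\mathrm{min}$ exactly; on real data, unresponsive or saturated pixels must be masked before interpreting the parameters, as discussed above.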
$H_\mathrm{min,cal}(x,y)$ represents the magnetic field at which the minimum of intensity $I_\mathrm{min,cal}(x,y)$ occurs. In the absence of fields other than the external applied field $H$, $H_\mathrm{min,cal}$ basically corresponds to the field rotating the light polarization by an angle $-\beta$, compensating the deviation of the analyzer and polarizer from the crossed configuration. However, in the presence of a magnetic field source other than $H$, the additional contribution to the rotation of light polarization also has to be compensated, thus impacting the value of $H_\mathrm{min,cal}$. Therefore, $H_\mathrm{min,cal}$ already provides us with a cartography of the magnetic field source, as can be seen in the example of the magnetic disk presented in Fig. \ref{Fig-calib}(c). Provided we subtract the background coming from $\beta(x,y)$, we have access to the average field generated by the source over the thickness of the MO active layer. Moreover, although the $I_\mathrm{min,cal}(x,y)$ distribution shows primarily the inhomogeneities in the background intensity, it may also reflect the shape of an inhomogeneous magnetic field source. This effect results from the fact that a uniform external magnetic field $H$ can not exactly compensate the local magnetic field $B$ generated by the inhomogeneous magnetic field source, since $B$ will be non-uniform through the thickness of the MO active layer, due to field decay with distance. The partial compensation thus leads to a change in $I_\mathrm{min,cal}$. Finally, the parameter $A(x,y)$ is related to the sensitivity of the indicator, and may also vary due to composition inhomogeneities in the indicator. A careful look at Fig. \ref{Fig-calib}(c) allows us to identify some traces of the magnetic disk on the $A(x,y)$ parameter, which fall within the error bar of the procedure.
Let us now illustrate this calibration procedure in order to convert the collected light intensity distribution into magnetic field maps in a set of images corresponding to the rectangular Nb superconducting sample with two magnetic (Co) disks on top. More details of this particular sample layout are presented in section \ref{sec_dots}. At the end of the conversion procedure, the inhomogeneous field produced by the magnetic disks will be removed from the field map, leaving only the signal of the superconductor. For that purpose, we usually take a set of 10 images/field at $T<T_\mathrm{c}$ applying a zero-field-cooling (ZFC) procedure and increasing $\mu_0 H$ from 0 to 12.5 mT, as illustrated on Fig. \ref{Fig-calib}(d). As explained above, we correct the sample drift and the fluctuations of the incident light in the images, taking care of choosing exactly the same reference $30 \times 30$ px$^2$ region as for the calibration. Ideally, the $I(H)$ parabola in the reference square should be, after correction, the same as for the calibration set.
Subsequently, $I(H,x,y)$ curves for every pixel of the data cube are acquired (red curve in Fig. \ref{Fig-calib}(e)) and the value of the applied field $H_\mathrm{min,meas}$ at which the minimum intensity $I_\mathrm{min,meas}$ appears is estimated. This is an important step in order to convert the intensity into local magnetic field, since for each value of $I(H,x,y)$, there are two possible corresponding values of $B_{z}(x,y)$, one on each branch of the calibration parabola (black curve in Fig. \ref{Fig-calib}(e)). The criterion for determining the proper $B_{z}$ for a given $I(H,x,y)$ is to compare $H$ with $H_\mathrm{min,meas}(x,y)$: if $H<H_\mathrm{min,meas}(x,y)$, the lower branch of the $B_{z}(I)$ curve needs to be used; if $H>H_\mathrm{min,meas}(x,y)$, the upper branch must be considered. For this reason, and to avoid imprecision on intensities close to the parabola minimum, it is convenient to choose the analyzer angle appropriately in advance $(\sim 4^{\circ})$, if possible, so that the minimum of intensity does not occur in the range of applied fields $H$.
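The inversion with branch selection then takes a few lines; in this sketch the arrays $A$, $B_\mathrm{min}$, and $I_\mathrm{min}$ come from the calibration step, and the boolean mask encodes the per-pixel comparison of $H$ with $H_\mathrm{min,meas}$:

```python
import numpy as np

def intensity_to_field(I, A, Bmin, Imin, upper):
    """Invert I = Imin + A (Bz - Bmin)^2 pixel-wise; 'upper' is True where
    H > H_min,meas (upper branch), False where H < H_min,meas (lower)."""
    root = np.sqrt(np.clip(I - Imin, 0.0, None) / A)
    return np.where(upper, Bmin + root, Bmin - root)
```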
The result of the conversion procedure is shown in Fig. \ref{Fig-calib}(f-g), for the rectangular superconductor with magnetic disks highlighted by the yellow dotted rectangular selections. Figure \ref{Fig-calib}(f) shows the original light intensity image compared with the final quantitative magnetic field image in Fig. \ref{Fig-calib}(g). Note that throughout this report, the color scales in images are adjusted such that \textquoteleft up\textquoteright\space (\textquoteleft down\textquoteright) magnetization or \textquoteleft positive\textquoteright\space (\textquoteleft negative\textquoteright) magnetic field is represented by blue-white (red). Zero field is represented by black. After conversion, the background of the image is more homogeneous, and some inhomogeneities in the indicator response, such as the circular spot in the upper right corner of the sample, are corrected within the error associated with the method. Moreover, using this procedure, we can significantly attenuate the signal coming from ferromagnetic materials (with the assumption that the signal does not change in the range of fields we apply) and separate it from the contribution due to the superconductor. In Fig. \ref{Fig-calib}, the magnetic disks are notably visible in panel (f), but become hardly visible in panel (g). This is confirmed by the direct comparison of the zoom-in of the disks in panel (f) (images 1 and 2) with the same regions in panel (g) (images 3 and 4). Unfortunately, magnetic domains and artifacts arising from the MO active layer change as the magnetic field is ramped, and thus cannot be fully accounted for using this procedure. However, as we explain in the remainder of the section, it is still possible to remove them from the images to some extent.
Note that our quantitative procedure does not take into account possible in-plane components of magnetic field that might affect the Faraday rotation in the indicator film\cite{Johansen1996,Laviano2003}. In-plane components are primarily induced by non-homogeneous magnetic field sources such as superconductors or ferromagnets. For the systems we have studied in the present work, we expect these components to lie below a few mT and therefore to be substantially smaller than the saturation magnetization of the indicator films ($\sim$ 80 mT). This justifies neglecting this effect and the associated corrections. Furthermore, consideration and estimation of in-plane components involves elaborate protocols of inversion of magnetic field maps to get current density distribution maps, which are then used to correct the magnetic field information in an iterative manner. The first iteration gives the out-of-plane component ($B_{z}$) only, and even after correction due to in-plane components, most of the features in $B_{z}$ are not altered significantly. As a consequence, the features of interest in this report would not change noticeably upon inclusion of these corrections.
\subsection{Correction for magnetic domains in the MO indicator}
\label{subsec_correction}
Special care has to be taken in order to avoid the proliferation of magnetic domain walls in the MO active layer of the indicator, since in their presence the local magnetic field is substantially modified and the technique can therefore no longer be considered as non-invasive\cite{Vestgaarden2007}. These magnetic domains appear as triangular-shaped regions where the intensity undergoes a jump compared to the neighboring regions, thus degrading the clarity of the images and sometimes hiding actual sample features. In this section, we present an original method to reduce the impact of the magnetic domains on the images by correcting the affected areas. As a proof of concept, we apply our procedure in a situation where domain proliferation is favorable, which is the case when large gradients of magnetic field are present. This is clearly visible in the image shown in Fig. \ref{Fig-corr}(a), representing the magnetic field distribution in a rectangular superconducting Nb sample at $T=3.6$ K for a perpendicular applied field of $\mu_0 H=2.1$ mT.
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{Figure_Corr.pdf}
\caption{ {\bf Illustration of the correction of the indicator magnetic domains in a MO image.} (a) Original raw image of a $800 \times 400 \, \mu$m$^2$ Nb superconducting film at $T=3.6$ K and $\mu_0 H=2.1$ mT\cite{Brisbois2016a}, where the triangular artifacts pollute the intensity map. (b) Image after treatment with the correction algorithm, showing an enhanced contrast of the intensity map.}
\label{Fig-corr}
\end{figure}
The correction algorithm takes as input the 12-bit gray-scale image obtained from the experiment. First, the boundaries of the magnetic domains are drawn by applying a discrete differential operator (i.e., Sobel filter), based on the value of the intensity gradient at each pixel, as an edge detector. This operation creates a black and white boundary map of the image. In this map, we then select manually the regions corresponding to artifacts due to the indicator magnetization domains. We assume that the effect of a domain on the underlying image is to shift the intensity by a constant value, possibly different for every artifact\cite{shift}. In order to determine these constants, the user has to provide two reference points for each artifact, one inside and one outside the domain. The difference between the intensity at these two points gives the correction to apply.
Finally, using the boundary map drawn with the edge detector, we apply the appropriate correction to each domain using a flood fill algorithm. Since the corrections are based on a map of the boundaries and applied only to non-boundary pixels, this method leaves at least a single uncorrected pixel line between each domain. To smooth the image, an optional step is to replace each pixel of the boundary by the mean value of the nearest pixels not belonging to any boundary. This is however not always helpful as this step requires a precise determination of the boundaries, which cannot always be achieved, for instance in noisy images or for magnetically textured samples, where it could result in a degraded image quality.
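The two steps above (a constant intensity shift determined from the two reference points, applied inside the domain by a flood fill stopped at the boundary map) can be sketched as follows; the boundary map, seed points and 4-connectivity are assumptions for illustration, not the actual implementation:

```python
import numpy as np
from collections import deque

def correct_domain(img, boundary, seed_in, seed_out):
    """Shift the intensity inside one indicator-domain artifact by a constant.

    `boundary` is a boolean edge map (e.g. a thresholded Sobel filter of the
    image); `seed_in` and `seed_out` are the two user-supplied reference
    pixels, inside and outside the domain. A 4-connected flood fill from
    `seed_in`, stopped at boundary pixels, selects the region to correct.
    """
    shift = img[seed_out] - img[seed_in]          # constant offset to apply
    corrected = img.astype(float).copy()
    visited = np.zeros(img.shape, dtype=bool)
    visited[seed_in] = True
    queue = deque([seed_in])
    while queue:
        r, c = queue.popleft()
        corrected[r, c] += shift
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not visited[rr, cc] and not boundary[rr, cc]):
                visited[rr, cc] = True
                queue.append((rr, cc))
    return corrected
```

Boundary pixels themselves are left uncorrected, which is why the optional smoothing step replaces them by the mean of nearby non-boundary pixels.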
The result of the correction procedure described above is illustrated in Fig. \ref{Fig-corr}(b). As a final remark, since this method manipulates the values of the intensity, caution must be exercised when trying to extract quantitative information from the corrected image.
\section{Results and discussion}
\label{sec_results}
In the remainder of the paper, we apply the conversion protocol detailed above to study various ferromagnet/superconductor hybrid structures with increasing complexity. We start by analyzing a simple non-superconducting system made of magnetic Co bars and use it to further characterize and calibrate the MOI system. In a second step we investigate a heterostructure, where Co magnetic disks are positioned on top of a Nb superconducting film. Next, we consider the case of a micro-coil as a tunable inhomogeneous magnetic field source allowing us to precisely establish the field resolution of the MO indicator. Afterwards, we address a sample configuration where the whole superconducting Nb film is covered with a hard ferromagnetic NdFeB film with a predesigned chessboard pattern. Finally, we consider the case of a superconductor partially covered with a soft ferromagnetic permalloy (Py) layer, where the magnetic landscape can be customized by pre-imprinting a magnetic pattern into the Py layer.
\subsection{Arrays of magnetic bars}
\label{sec_bars}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{Figure_bars.pdf}
\caption{{\bf Quantitative magneto-optical imaging of magnetic bars.} (a) Schematic of a 30 nm thick Co magnetic bar of length $L$ and width $W$. (b) Optical image showing the periodicity of the Co bar arrays, characterized by a spacing $L$ between neighboring bars. (c) Magnetic field image, obtained via MOI, of a full $800 \times 800 \, \mu$m$^2$ array of magnetic bars with $L=20 \, \mu$m and $W=5 \, \mu$m. (d) Zoom on a half width of the array shown in panel (c). This field distribution $B_{z}(z_\mathrm{MOI})$ is compared with the exact analytic solution for $B_{z}(z)$, based on Eq. (\ref{eq:B_bar}), taking into account the MO active layer thickness $d\simeq 3 \, \mu$m. In panels (d-f), the leftmost red spots delineate the border of the array. (e) Same for a sample with $L=10 \, \mu$m and $W=3 \, \mu$m, at the limit where the stray field of a single bar can be resolved. (f) Same for a sample with $L=5 \, \mu$m and $W=1 \, \mu$m, where single bars cannot be seen any more and the response is dominated by the stray field at the border of the array. From the comparison of the theoretical and experimental magnetic field distributions, we estimate $z_\mathrm{MOI}$ to be between 5 $\mu$m and 7 $\mu$m.}
\label{Fig-bars}
\end{figure*}
One of the crucial parameters determining the spatial resolution and the magnetic field reaching the magneto-optical indicator is the gap between the sample and the indicator placed on top of it. This distance is difficult to control, since it depends strongly on the roughness and the cleanliness of the sample and mirror surfaces, and it is therefore also challenging to reproduce, making it problematic to compare images from different experiments. Moreover, quantifying the spatial and field resolution of the technique is not straightforward, since it requires a precise knowledge of the field at the optically active layer position and the geometry of the field source. For these reasons, it is of interest to design localized magnetic field sources with a well-known field distribution. This can be achieved by using parallelepipedal magnetic bars, for which the magnetic field distribution follows an analytical expression\cite{Engel2005}.
Panels (a) and (b) of Fig. \ref{Fig-bars} show the structure of the sample, consisting of several $800 \times 800~\mu$m$^2$ arrays of nearly parallelepipedal 30 nm thick Co bars of length $L$ and width $W$. The samples are fabricated by a combination of e-beam lithography and MBE evaporation (cf. supplementary material for details). $L$ is the nominal length of the bars and does not take into account the round edges, which effectively reduce the size of the domain with in-plane magnetization. All the arrays are fabricated on the same substrate, ensuring their observation under similar experimental conditions. We consider three different arrays: $L=20 \, \mu$m and $W=5 \, \mu$m, $L=10 \, \mu$m and $W=3 \, \mu$m, and $L=5 \, \mu$m and $W=1 \, \mu$m.
We applied quantitative magneto-optical imaging to map the magnetic field $B_{z}(z_\mathrm{MOI})$ generated by the magnetic bars in the MO indicator, located at a distance $z_\mathrm{MOI}$ from the sample surface. As explained in the calibration procedure in section \ref{sec_calib}, we sweep the perpendicular applied magnetic field $\mu_0 H$ from +12.5 mT to -12.5 mT by steps of 0.1 mT and record the average of 3 images for every value of $H$. Before we start MOI, the Co bars are magnetized along their length with a permanent magnet. Note that the maximum applied field $\mu_0 H = 12.5$ mT does not irreversibly change the magnetization $M$ of the magnetic bars, a fact supported by the comparison of images before and after the calibration procedure. Any reversible changes in $M$ are accounted for by the calibration. Figure \ref{Fig-bars}(c) shows the local magnetic field $B_{z}$ map of the magnetic bar array with $L=20 \, \mu$m and $W=5 \, \mu$m. In this image, each individual bar is identified by a pair of red and blue-white dots, representing the stray field of opposite polarity associated with the bar extremities. Moreover, the ratio $L/W=4$ is of the order of magnitude of the expected value for the appearance of single domains in Co bars\cite{Seynaeve2001}, a fact that is evidenced by the pairs of dots in the MO image. Our observation of well-defined poles at the ends of the bars agrees with either a mono-domain structure or at least with highly aligned domains. Since in our case the domain structure is driven by shape anisotropy, similar structures are also expected for the other two arrays.
Knowing the dimensions of the array of bars and approximating their geometry by a parallelepiped, we can calculate the local magnetic field map at a distance $z$ from the sample surface. Indeed, as was shown in Ref. \onlinecite{Engel2005}, the magnetic field $B(x,y,z)$ generated at the coordinates $(x,y,z)$ by a parallelepipedal single domain with magnetization $M$ and dimensions $L \times W \times t$ can be calculated analytically. The out-of-plane component of the magnetic field, $B_z(x,y,z)$, that MOI is sensitive to, can be expressed as follows:
\begin{widetext}
\begin{equation}\label{eq:B_bar}
\begin{split}
B_z (x,y,z) =& \frac{\mu_0 M}{4\pi} \sum_{k,l,m=1}^{2} (-1)^{k+l+m} \ln \biggl[ \left( x-x_0 \right) +(-1)^k \frac{W}{2} \\
&+ \sqrt{\left( \left( x-x_0 \right) +(-1)^k \frac{W}{2} \right)^2
+ \left( \left( y-y_0 \right) +(-1)^l \frac{L}{2} \right)^2
+ \left( z +(-1)^m \frac{t}{2} \right)^2 } \biggr].
\end{split}
\end{equation}
\end{widetext}
The bar has its center at the point of coordinates $(x_0,y_0,0)$ and has its main axis oriented along the $y$-axis. We add up the contributions of all the magnetic bars, obtained by changing $x_0$ and $y_0$ in the previous equation. Since the MO active layer of the indicator has a finite thickness $d=3 \, \mu$m, it is imperative to account for the fact that the magnetic field is not constant throughout the indicator. The MO images will therefore represent the average of the magnetic field over the distance $d$. In the calculations, the magnetic field distribution $B_{z}(x,y,z_0)$ obtained when the gap between the indicator and the sample surface is $z_0$ is obtained by averaging 31 magnetic field distributions $B_{z}(x,y,z)$, calculated by sweeping $z$ from $z_0$ to $z_0+d$ by steps of $0.1 \, \mu$m.
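The calculation described above can be sketched directly in Python (SI units): a transcription of Eq. (\ref{eq:B_bar}) together with the averaging over the active layer thickness, where the slice count $n=31$ reproduces the $0.1 \, \mu$m steps quoted in the text. Function names are illustrative only:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m/A]

def bz_bar(x, y, z, L, W, t, M, x0=0.0, y0=0.0):
    """Out-of-plane field of a parallelepipedal single domain magnetized
    along y, direct transcription of Eq. (B_bar) [Engel2005]."""
    s = 0.0
    for k in (1, 2):
        for l in (1, 2):
            for m in (1, 2):
                X = (x - x0) + (-1) ** k * W / 2
                Y = (y - y0) + (-1) ** l * L / 2
                Z = z + (-1) ** m * t / 2
                s += (-1) ** (k + l + m) * np.log(
                    X + np.sqrt(X ** 2 + Y ** 2 + Z ** 2))
    return MU0 * M / (4 * np.pi) * s

def bz_indicator(x, y, z0, d=3e-6, n=31, **bar):
    """Average B_z over the MO active layer thickness d, i.e. over n
    slices from z0 to z0 + d, as done in the text."""
    return np.mean([bz_bar(x, y, z, **bar) for z in np.linspace(z0, z0 + d, n)])
```

Summing `bz_bar` over the grid of bar centers $(x_0, y_0)$ then gives the field map of the whole array.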
Figure \ref{Fig-bars}(d) shows a comparison of the experimental out-of-plane magnetic field $B_{z}(z_\mathrm{MOI})$ with the calculations based on Eq. (\ref{eq:B_bar}), for three distances $z_0$ between sample and indicator: 3 $\mu$m, 5 $\mu$m and 7 $\mu$m. The left side of the images corresponds to the left edge of the Co bar array shown in Fig. \ref{Fig-bars}(c). We chose linear color scales to represent the magnetic field $B_{z}$, meaning that we can visually compare the theoretical magnetic field distributions with the experimental one. This allows us to estimate the distance $z_\mathrm{MOI}$ to lie between 5 $\mu$m and 7 $\mu$m. Moreover, the experimental $B_{z}$ image gives a maximum field of $0.32\,\pm\,0.03$ mT. This can be compared with the maximum field $B_\mathrm{max}$ obtained with the analytical expression, where $M=1.4 \times 10^6$ A/m is taken as the saturation magnetization of Co\cite{Cullity2011}. This gives $B_\mathrm{max} = 0.99$ mT, $0.49$ mT and $0.29$ mT for distances $z_0$ of $3 \, \mu$m, $5 \, \mu$m and $7 \, \mu$m, respectively. These values allow for an estimation of $z_\mathrm{MOI} \sim 6.3 \, \mu$m, falling in the range of the distances anticipated by visual inspection.
From the MO data, we observe that the distance between the two poles of a magnetic bar, i.e. the distance between the center of the red and blue-white dots, is not 20 $\mu$m as in the theoretical field maps, but is actually $\sim 18 \, \mu$m. This is due to the fact that the magnetic bars are not perfectly parallelepipedal but have round corners, as shown in the optical image in Fig. \ref{Fig-bars}(b). Therefore, the influence of the bars' borders, where the magnetization tends to align with the edges to minimize the energy of the system, reduces the effective size of the domain of magnetization $M$.
Panels (e) and (f) of Fig. \ref{Fig-bars} follow the same principle for arrays of magnetic bars with $L= 10 \, \mu$m and $W=3 \, \mu$m, and $L= 5 \, \mu$m and $W=1 \, \mu$m, respectively. In the MO-based field map of Fig. \ref{Fig-bars}(e), the single poles of the magnetic bars can barely be resolved, meaning the spatial resolution of our system is of the order of $5~\mu$m for this particular $z_\mathrm{MOI}$. The neighboring bars look nearly connected and form long stripes extending in the direction perpendicular to the bars' main axis. Comparison with the theoretical field maps gives $z_\mathrm{MOI} \sim 6 \, \mu$m and confirms the value we found based on Fig. \ref{Fig-bars}(d). Note that here, in comparison to the bigger bars, the magnitude of the maximum magnetic field decayed to $0.22\,\pm\,0.03$ mT in the MO image, while it is $0.65$ mT, $0.30$ mT and $0.19$ mT in the calculations, for $z = 3 \, \mu$m, $5 \, \mu$m and $7 \, \mu$m, respectively.
In Fig. \ref{Fig-bars}(f), the perpendicular stripes observed in panel (e) have nearly disappeared from the MO-based field map, leaving as the main feature in the image a bright line at the border of the bar array, where the field has a maximum magnitude of around $0.07\,\pm\,0.03$ mT. This means that the magnetic resolution of our system is better than 0.1 mT. The theoretical field maps give $B_\mathrm{max} = 0.26$ mT, $0.14$ mT and $0.09$ mT for $z = 3 \, \mu$m, $5 \, \mu$m and $7 \, \mu$m, respectively, which is in fair agreement with the $z_\mathrm{MOI}$ value estimated previously.
\subsection{Magnetic Co disks on top of a superconducting Nb film}
\label{sec_dots}
\begin{figure*}[ht]
\includegraphics[width=12.0cm]{Figure_dots_1.pdf}
\caption{{\bf Sample layout and mounting for the Nb film with Co disks.} (a) Optical image of the $800 \times 400 \, \mu$m$^2$ Nb film (100 nm thick), where the two 30 nm thick Co disks with diameters of $20 \, \mu$m and $10 \, \mu$m are located at $20 \, \mu$m and $10 \, \mu$m, respectively, from the sample edge. (b) Classical configuration for the MOI measurements, with the sample mounted on top of the sample holder, and the indicator placed on top of it. (c) MO image of the $20 \, \mu$m diameter Co disk after polarization with an in-plane field of 3 mT, in the configuration of panel (b). (d) Alternative configuration where the indicator is pressed closer to the sample surface with a purposely designed metallic clip. (e) MO image of the same disk as in panel (c) showing the enhanced MO signal using the alternative configuration presented in (d).}
\label{Fig-dots-1}
\end{figure*}
\begin{figure*}[ht]
\includegraphics[width=12.0cm]{Figure_dots_2.pdf}
\caption{{\bf Influence of a Co disk on flux penetration in the superconducting Nb film.} (a) Magnetic field distribution of a $20 \, \mu$m diameter Co magnetic disk at $T=10$ K. The disk is outlined by the yellow dotted circle, while the border of the superconductor is marked by the dashed straight line. The sample is subsequently cooled down to $T=3.7$ K and a sequence of MO images is recorded for (b) $\mu_0 H=1$ mT, (c) $\mu_0 H=1.5$ mT, (d) $\mu_0 H=1.8$ mT and (e) $\mu_0 H=2.5$ mT. These images show the magnetic field distribution after the field of the disk, represented in (a), has been removed following the procedure described in section \ref{sec_calib}. Panel (f) shows a sketch of the interaction between a magnetized Co disk, assuming a perfect dipolar configuration, and the superconductor film below. The indicator positioned on top of it is not shown for the sake of clarity. Field lines representing negative (positive) $B_z$ components are in red (blue). Near the magnetized disk, the positive field as seen by the indicator is opposed to the field in the superconductor below. The vertical blue lines at the right side of (f) represent the field lines from the applied field $H$.}
\label{Fig-dots-2}
\end{figure*}
In this section, we will increase the complexity of the MOI-detected signal by adding a superconducting film to the localized source of magnetic field produced by magnetic disks. To that end, the very same system that was used to illustrate the protocol for conversion from $I$ to $B_{z}$ in Fig. \ref{Fig-calib} is now analyzed in more detail. The sample under study consists of a rectangular $800 \times 400 \, \mu$m$^2$ Nb film (100 nm thick) with two Co disks (30 nm thick) on top of its surface, as shown in Fig. \ref{Fig-dots-1}(a). The critical temperature of the Nb film is $T_\mathrm{c} = 9.0$ K, as determined by AC magnetic susceptibility measurements. DC measurements in a SQUID magnetometer confirmed a strong in-plane magnetic anisotropy of the Co disks.
Figure \ref{Fig-dots-1}(b) shows the classical MOI configuration used up to this point in the paper. The MO indicator is placed on top of the sample, with a gap $z_\mathrm{MOI}$ of 5 to 7 $\mu$m between them, as determined in Section \ref{sec_bars}. We magnetized the disks with an in-plane field of the order of 3 mT using a commercial neodymium magnet. A MO image of the $20 \, \mu$m diameter Co disk at room temperature is shown in Fig. \ref{Fig-dots-1}(c). The stray field of the Co disk, indicating the direction of the in-plane magnetization, is visible in the image as a red (negative $B_z$) and blue-white (positive $B_z$) spot. Since the signal is quite weak, leading to poor contrast, it is important to minimize the gap between indicator and sample. The distance $z_\mathrm{MOI}$ can be considerably reduced if, instead of just placing the MO indicator on the sample, a clip is used to press the indicator firmly onto the sample surface, as illustrated in the sketch of Fig. \ref{Fig-dots-1}(d). This leads to a significant improvement in the MO contrast, as shown in the MO image in Fig. \ref{Fig-dots-1}(e), indicating a reduced effective distance $z_\mathrm{MOI}$ between indicator and sample surface.
A negative side-effect of using the clip is that the mechanical stress induced by the clip favors the proliferation of in-plane magnetic domains in the indicator film. Since this effect severely affects the image quality, the use of a pressing clip is not systematic but rather limited to particular cases. In addition, sample mounting has been performed in an environment with a large number of particles of at least 2.5 $\mu$m diameter in the atmosphere\cite{Air}. This restricts the reduction of the gap that can be achieved by pressing with a clip due to the unavoidable presence of such particles between the indicator and the sample.
The conversion procedure allowed us to determine the values of the magnetic field on top of and around the disks. At zero applied field, the extreme values around the poles are $B_z = 2.4 \pm 0.3$ mT for the big disk (20 $\mu$m) and $B_z = 1.5 \pm 0.3$ mT for the $10 \, \mu$m disk. For the range of values of the applied field $H$, the field produced by the disk typically exceeds the field of the flux penetrating the superconducting sample, and therefore masks the contribution of the latter to the total field. For this reason, the image conversion method presented in section \ref{sec_calib} proves very useful, since it allows us to isolate the contribution of the superconductor from the stronger signal of the Co disks.
In Figure \ref{Fig-dots-2}, we show the influence of the magnetic disks on flux penetration in the superconducting Nb film. Beforehand, the disks were magnetized at an angle with respect to the closest sample border, represented by the yellow dashed line in Fig. \ref{Fig-dots-2}(a). Panel (a) represents the magnetic field at $T=10$ K for the $20 \, \mu$m diameter disk, outlined with the dotted yellow circle, clearly visible as red and blue spots corresponding respectively to negative and positive $B_z$. The sample is subsequently cooled down to $T=3.7$ K and the applied field $H$ is increased. Images obtained following the procedure described in section \ref{sec_calib}, and where the magnetic field landscape in panel (a) is thus removed, show the magnetic field distribution for (b) $\mu_0 H=1$ mT, (c) $\mu_0 H =1.5$ mT, (d) $\mu_0 H=1.8$ mT, and (e) $\mu_0 H=2.5$ mT. In Fig. \ref{Fig-dots-2}(b), no magnetic field has penetrated into the superconducting layer yet and the magnetic disk signal has been reduced down to $0.1$ mT by carrying out the intensity-to-magnetic field conversion procedure. However, a weak magnetic signal is visible around the magnetic disk which might be attributed to the influence of the superconducting screening currents, repelling the magnetic field distribution of the disk and thus modifying slightly the field distribution shown in panel (a)\cite{Pokrovsky2006,Fraerman2005}.
When $H$ is increased, magnetic flux starts to penetrate into the sample in small flux jumps (Fig. \ref{Fig-dots-2}(c)) and interestingly, it can be clearly seen that it propagates preferentially through the side of the disk having the same polarity (positive $B_z$) as the applied field (Fig. \ref{Fig-dots-2}(d-e)). In order to understand this behavior, one must keep in mind that the polarity of the field generated by the disks in the vicinity of the poles is reversed in the indicator with respect to the superconductor\cite{Brisbois2016}, as shown in the sketch of Fig. \ref{Fig-dots-2}(f). The natural attraction (repulsion) between the vortices from the border and the antivortices (vortices) created by one of the poles is responsible for the asymmetry in flux penetration. This is visible in Fig. \ref{Fig-dots-2}(d-e) by the enhanced magnitude of the field at the preferred side of the disk, while the other side shows a perceptible shielding of flux. Similar results are obtained for the $10 \, \mu$m diameter magnetic disk, except that the influence on the flux penetration is weaker. Changing the orientation of the disk magnetization gives essentially the same results, the entering vortices being attracted (repelled) by the side of the disk with the same (opposed) polarity.
\subsection{Micro-scale electromagnet}
\label{sec_coil}
In the system described in Section \ref{sec_dots}, the only parameter we can control is the direction of the magnetic moment of the disks. An attempt to overcome this limitation is explored in this section, where the magnetic disk has been replaced by a single-turn micro-coil excited externally with a continuous current. This approach allows us to have a tunable yet localized source of inhomogeneous magnetic field, essential for determining the magnetic field resolution of the technique.
\begin{figure*}[ht]
\includegraphics[width=14.0cm]{Figure-coil.pdf}
\caption{{\bf Quantitative magneto-optical imaging on a planar coil.} (a) Scanning electron microscope image of the 50 nm-thick coil made of Al. The coil has inner and outer radii of $10 \, \mu$m and $12 \, \mu$m, respectively. (b) Magnetic field of the coil fed by a current $I=50$ mA, obtained by MOI with the MO indicator pressed on the sample surface. (c) Magnetic field $B_z$ at the center of the coil as a function of $I$. The error on the magnetic field value of a single pixel is around $0.1$ mT, but for extended sources such as the coil, the correlation between pixels gives a lower detection threshold of $0.01$ mT. (d) Mesh for the numerical simulations. (e) Magnetic field of the coil fed by a current $I=50$ mA, obtained by numerical simulations, at a distance $z=6 \, \mu$m from the sample surface. (f) Magnetic field profile through the center of the coil for $I=50$ mA taken in the MO image of panel (b) and in the numerical simulation for $z=6 \, \mu$m.}
\label{Fig-coil}
\end{figure*}
Figure \ref{Fig-coil}(a) shows a scanning electron microscope image of the sample layout. The Al coil is 50 nm thick and has inner and outer diameters of $20\, \mu$m and $24 \, \mu$m, respectively. All the measurements have been performed at room temperature. Figure \ref{Fig-coil}(b) shows a MO image of the out-of-plane magnetic field distribution of the coil for a current $I=50$ mA, obtained after applying the conversion procedure. The indicator was pressed on the sample using the clip mentioned in Section \ref{sec_dots}. To enhance the signal-to-noise ratio, the image shows the difference between the magnetic field obtained for positive and negative currents.
Figure \ref{Fig-coil}(c) shows the out-of-plane magnetic field $B_z$ at the center of the coil as a function of the applied current amplitude $I$. For a coil of radius $R$ made of a unidimensional wire, $B_z$ on the coil axis at a distance $z$ from the coil center is given by:
\begin{equation}\label{eq:coil}
B_z = \frac{\mu_0 I R^2}{2 \left( R^2 + z^2 \right)^{3/2}}.
\end{equation}
Although the experimental coil has a finite thickness and its wire has a $2 \, \mu$m width, Eq. (\ref{eq:coil}) describes the magnetic field of the real single-turn coil within an accuracy of 1\% and can be used to fit the curve of Fig. \ref{Fig-coil}(c). The fitted slope gives a value of $z_\mathrm{MOI} = 5.7 \pm 1 \, \mu$m. The error on the magnetic field for a single pixel is of the order of 0.1 mT, but the correlation between all the pixels forming the coil gives a significantly lower detection threshold of around 0.01 mT, at which the coil cannot be seen anymore. This value is in agreement with the resolution limit reported previously with similar MO indicators\cite{Koblischka1995} and could possibly be pushed further down by averaging a large number of images.
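As a sketch of this analysis (Python, SI units; the mean coil radius $R \approx 11 \, \mu$m, between the quoted inner and outer radii, is an assumption):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m/A]

def bz_coil_center(I, R, z):
    """On-axis field of a single-turn coil of radius R at height z,
    Eq. (coil)."""
    return MU0 * I * R ** 2 / (2 * (R ** 2 + z ** 2) ** 1.5)

def z_from_slope(slope, R):
    """Invert the fitted slope dB_z/dI of the B_z(I) line to the
    indicator distance z (slope = MU0 R^2 / (2 (R^2 + z^2)^{3/2}))."""
    return np.sqrt((MU0 * R ** 2 / (2 * slope)) ** (2 / 3) - R ** 2)
```

Fitting the measured $B_z(I)$ line and passing its slope to `z_from_slope` then yields the estimate of $z_\mathrm{MOI}$.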
A more realistic description of the magnetic field texture generated by the micro-coil can be obtained by numerical simulations, with the same geometry as the real coil. The mesh of finite elements used in the simulations is shown in Fig. \ref{Fig-coil}(d) and the magnetic field distribution obtained with the simulations for a current $I=50$ mA at a distance $z=6 \, \mu$m from the coil surface is represented in Fig. \ref{Fig-coil}(e). The good agreement between the experimental and the numerical results is further confirmed by plotting the magnetic field profile across the coil center for both the simulated images and the MO images, as shown in Fig. \ref{Fig-coil}(f). Note that the vertical distance $z=6 \, \mu$m obtained here corresponds approximately to the distance between the middle of the MO active layer and the coil, which gives a gap of about 4.5 $\mu$m between the coil and the bottom of the indicator.
\subsection{Nb/NdFeB heterostructures}
\label{sec_TMP}
Thus far we have tested the proposed protocol with prototypical inhomogeneous magnetic field sources of relatively weak intensity, permitting us to determine the ultimate magnetic field resolution limit of the Bi:YIG MO indicator as 10 $\mu$T and a spatial resolution similar to the size of the gap separating the MO indicator and the source of inhomogeneous field. In what follows, we will exploit the quantitative MOI procedure in order to investigate more complex and innovative systems with substantially higher magnetic fields across the sample area, thus allowing us to test the proposed method in the opposite extreme of high magnetic fields and field gradients. It is worth emphasizing that, to the best of our knowledge, the application of extremely hard permanent magnetic materials, such as the NdFeB described hereunder, as possible sources of vortex pinning remains relatively unexplored. These permanent magnetic materials are characterized by much higher coercive field $(\mu_\mathrm{0}H_\mathrm{c})$ and remanent magnetization $(\mu_\mathrm{0}M_\mathrm{r})$ than magnetic elements or binary alloys previously used in S/F hybrids. Indeed, for NdFeB\cite{Dempsey2007,Dumas-Bouchiat2010,Kustov2010} the reported value is $\mu_\mathrm{0}H_\mathrm{c}$ $\geq$ 1.5 T, much larger than those for Co/Pd multilayers ($\leq$ 0.6 T)\cite{Gillijns2005}, Co/Pt multilayers ($\leq$ 0.4 T)\cite{Gillijns2007}, Co dots ($\leq$ 0.03 T)\cite{Bael1999}, Co/Pd dots ($\leq$ 0.15 T)\cite{Lange2003}, Co/Pt dots ($\leq$ 0.23 T)\cite{VanBael2003,Neal2007,Gheorghe2008}, and Fe dots ($\leq$ 0.15 T)\cite{Villegas2007}, to name a few. Corresponding remanent magnetization values are also significantly larger for NdFeB in comparison to the other materials mentioned above.
Hence, the use of magnetically patterned NdFeB as templates of spatially modulated magnetic field to control the pinning landscape in a superconductor may offer a number of advantages over softer magnetic materials, such as greater stability and larger magnetic field amplitudes of potential interest in technological applications.
\subsubsection{Characteristics of TMP NdFeB}
\label{subsec_TMP1}
\begin{figure*}[ht]
\centering
\includegraphics[width=12.0cm]{Figure_TMP_1.pdf}
\caption{{\bf The Nb/NdFeB sample and characteristics of the TMP layer.} (a) Schematic of the Nb/NdFeB sample. (b) Part of a MO image of the Nb/NdFeB sample with chessboard pattern, obtained at $T$ = 10 K and $\mu_0 H$ = 1 mT. (c) $H_\mathrm{min}$ image showing the $B_{z}$ distribution in the chessboard pattern obtained by the calibration protocol (cf. text for details). (d) SHPM image of the Nb/NdFeB sample obtained at a scan height of $\sim$ 4 $\mu$m at room temperature showing a part of the chessboard pattern in NdFeB. (e) $B_{z}(x)$ profiles, from the calibration image in (c) (orange circles), and at different scan heights obtained from SHPM (squares), along the horizontal lines shown in (c) and (d). The profile from the calibration image is adjusted for the offset due to the non-zero analyzer angle. The error bar in one of the data points in this profile indicates the maximum extent of noise in the data (fluctuation in $B_{z}$ inside a square) $\sim \pm$ 2 mT. Also shown are $B_{z}(x)$ profiles at different heights from the sample surface obtained from calculations (cf. text for details).}
\label{Fig-TMP-1}
\end{figure*}
Recently, a promising technique, thermomagnetic patterning (TMP), has been developed to produce magnetic fields spatially modulated in the range from tens to hundreds of microns\cite{Dumas-Bouchiat2010}. In this technique, the magnetization of a hard ferromagnetic film is initially saturated in one direction. The film is then irradiated by a pulsed laser through a mask, while an external field weaker than the film\textquoteright s room-temperature coercive field is applied in the direction opposite to that of the magnetization of the film. Regions exposed to the irradiation are heated up, resulting in a local reduction of coercivity and hence inducing reversal of magnetization in these regions. The final structure consists of an array of oppositely magnetized micromagnets. Further details about the technique can be found in Ref. \onlinecite{Dumas-Bouchiat2010}. Patterning of a few microns thick high performance NdFeB hard magnetic films has been achieved using this technique. TMP can be used to produce a variety of configurations of micromagnets, e.g., chessboard, stripes, and periodic arrays of circular/square domains. We have prepared S/F hybrid structures by depositing a Nb film on top of such TMP NdFeB films. Figure \ref{Fig-TMP-1}(a) shows a schematic of the Nb/NdFeB sample with the different layers. A 3 $\mu$m-thick NdFeB film is deposited on a Si wafer covered by a 100 nm-thick buffer layer of Ta. A capping layer of 100 nm-thick Ta is deposited on top of the NdFeB to avoid oxidation. A particular magnetic texture is imprinted in the NdFeB film by TMP and finally a 100 nm layer of Nb is deposited on top. The sample we primarily investigated is square-shaped (4 mm $\times$ 4 mm) and consists of a chessboard pattern of 100 $\times$ 100 $\mu$m$^2$ alternating squares with opposite magnetization in NdFeB (cf. arrows indicating the magnetization direction in adjoining squares in Fig. \ref{Fig-TMP-1}(a)).
The relatively large size of the patterned magnetic domains in NdFeB can be revealed by MOI carried out above the $T_{c}$ of Nb. Figure \ref{Fig-TMP-1}(b) shows a MO image of the sample obtained at $T$ = 10 K and $\mu_0 H$ = 1 mT whereas Fig. \ref{Fig-TMP-1}(c) shows a $H_\mathrm{min}$ image generated using the calibration protocol described in Section \ref{sec_calib}. The $H_\mathrm{min}$ image essentially shows the $B_{z}$ distribution in the chessboard TMP pattern, with an offset due to the non-zero analyzer angle used in our measurements. The calibration works well to reveal the relatively large magnetic fields in the chessboard pattern, except at the borders of the oppositely magnetized squares, where many of the pixels are saturated due to large magnetic fields in these locations. The contrast of the image in Fig. \ref{Fig-TMP-1}(c) is adjusted to mask the saturating effect of such pixels.
Magnetic properties of the NdFeB layer have also been studied using SHPM performed at room temperature\cite{Shaw2016}. SHPM images were obtained at various scan heights from the sample surface. Figure \ref{Fig-TMP-1}(d) shows a SHPM image of the chessboard pattern in the NdFeB, obtained at the closest scan height of $\sim$ 4 $\mu$m. From both the MO and SHPM images it is clear that the magnetic landscapes inside the two sets of squares in the pattern are distinct from each other. While the \textquoteleft red\textquoteright\space squares in the SHPM image appear quite smooth, the \textquoteleft blue\textquoteright\space squares appear rather coarse in comparison. This is a result of the TMP process. Since magnetization reversal induced by the laser occurs at an elevated temperature and the external field applied during TMP is relatively weak ($\sim$ 0.5 T), the magnetic profile in the reversed (irradiated) regions is coarser in comparison to that in the non-reversed (non-irradiated) regions.
A set of $B_{z}(x)$ profiles, one from the MOI calibration image in Fig. \ref{Fig-TMP-1}(c) (orange circles) and the rest from SHPM images (squares) at different scan heights along the dashed lines shown in panels (c) and (d), is shown in panel (e). Note that a few data points, corresponding to saturated pixels at the square boundaries, have been removed from the MOI profile. We have also computed the $B_{z}$ distribution in the chessboard pattern (a 40 $\times$ 40 array of 100 $\times$ 100 $\mu$m$^2$ squares) using the analytical approach discussed for an array of magnetic bars in Section \ref{sec_bars}. The resulting $B_{z}(x)$ profiles (colored lines) at various heights from the sample surface are shown together with the experimental profiles in Fig. \ref{Fig-TMP-1}(e). For simplicity, a fixed value of remnant magnetization\cite{Kustov2010} $M$ = $2.8 \times 10^{5}$ A/m was assumed for the whole sample such that the calculated profiles at heights 4-40 $\mu$m agree well with the experimental profiles obtained using SHPM. The calculated profile thus obtained at 2 $\mu$m agrees reasonably well with the profile obtained from MOI. This indicates that the effective height of the MO imaging plane from the sample surface ($z_\mathrm{MOI}$) is in this case $\sim$ 2 $\mu$m, which is expected as the indicator was pressed with the clip for this set of measurements. The estimations above do not take into account the fact that Nd\textsubscript{2}Fe\textsubscript{14}B undergoes a spin reorientation transition from easy-axis to easy-cone configuration, with a cone angle of $30^{\circ}$, at 135 K. This would lead to a reduction of the magnetization by about 13\%, in addition to the effects produced by a non-perfectly $c$-axis oriented magnetic texturing\cite{Givord1985,Garcia2000}.
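The analytical estimate quoted above can be sketched numerically with the standard magnetic surface-charge model: each uniformly out-of-plane magnetized square of thickness $t$ is represented by two oppositely ``charged'' rectangular faces, whose $B_{z}$ follows from the closed-form solid angle of a rectangle. The Python sketch below is our illustration only, not the computation of Section \ref{sec_bars}; the parameters $M = 2.8\times10^{5}$ A/m, square side 100 $\mu$m, and thickness 3 $\mu$m follow the text, while the $20\times20$ array size, function names and sampling points are arbitrary choices of ours.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m / A]

def solid_angle_rect(x, y, z, x1, x2, y1, y2):
    """Solid angle subtended at the point (x, y, z), z > 0, by the
    rectangle [x1, x2] x [y1, y2] lying in the plane z = 0."""
    omega = 0.0
    for i, xe in enumerate((x1, x2)):
        for j, ye in enumerate((y1, y2)):
            dx, dy = x - xe, y - ye
            r = np.sqrt(dx * dx + dy * dy + z * z)
            omega += (-1) ** (i + j) * np.arctan2(dx * dy, z * r)
    return omega

def bz_square_slab(x, y, z, x1, x2, y1, y2, M, t):
    """B_z of one uniformly out-of-plane magnetized slab (top face at
    z = 0): a +M 'charge' sheet on the top face, -M at depth t."""
    return MU0 * M / (4 * np.pi) * (
        solid_angle_rect(x, y, z, x1, x2, y1, y2)
        - solid_angle_rect(x, y, z + t, x1, x2, y1, y2))

def bz_chessboard(x, y, z, a=100e-6, n=10, M=2.8e5, t=3e-6):
    """B_z at height z above a 2n x 2n chessboard of alternately
    magnetized a x a squares (board spans [-n*a, n*a] in x and y)."""
    b = 0.0
    for ix in range(-n, n):
        for iy in range(-n, n):
            sign = 1.0 if (ix + iy) % 2 == 0 else -1.0
            b += sign * bz_square_slab(x, y, z, ix * a, (ix + 1) * a,
                                       iy * a, (iy + 1) * a, M, t)
    return b

# B_z(x) profile across two squares at a 4 micron scan height:
xs = np.linspace(0.0, 200e-6, 81)
profile = [bz_chessboard(x, 50e-6, 4e-6) for x in xs]
```

As in Fig. \ref{Fig-TMP-1}(e), such a profile is largest near the square boundaries, decays towards the square centers, changes sign between adjacent squares, and smooths out with increasing scan height.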
It is worth noting that the $B_{z}$ profiles in the TMP pattern resemble that expected for a ferromagnetic film with domain widths much larger than the film thickness\cite{Aladyshkin2003,Aladyshkin2009}, with the magnetic field decaying from a domain wall towards its center.
\subsubsection{Isothermal magnetic flux penetration in Nb/NdFeB hybrid system}
\label{subsec_TMP2}
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{Figure_TMP_2.pdf}
\caption{{\bf Smooth flux penetration in Nb/NdFeB at 6 K.} (a) MO Image obtained at +0.5 mT while increasing $\mu_0 H$ from 0 mT after zero-field cooling the sample to 6 K. A region without Nb at lower-left of the image is indicated by the dashed lines. The image in (b) is obtained by subtracting the MO image at 6 K and 0 mT (i.e., before applying any magnetic field after cooling down to 6 K) from the MO image shown in (a). (c) Image of $B_{z}$ distribution converted from the MO image shown in (a) using the calibration procedure. (d) Image of $B_{z}$ distribution converted from MO image obtained at -0.5 mT while decreasing $\mu_0 H$ from 0 mT after zero-field cooling. (e)-(f) Images cropped and zoomed from (c) and (d), respectively, showing the flux distribution in a few squares for the two scenarios. The $B_{z}$ distribution in (c)-(f) is indicated by the adjoining scale bar.}
\label{Fig-TMP-2}
\end{figure}
In order to visualize magnetic flux penetration in Nb, the sample was cooled down to 6 K $< T_\mathrm{c}$ in zero field (ZFC). Then a series of MO images were obtained by cycling $\mu_0 H: 0 \rightarrow +12.5 \rightarrow -12.5 \rightarrow 0$ mT. Figure \ref{Fig-TMP-2}(a) shows a raw MO image obtained in the first run at $\mu_0 H$ = +0.5 mT (increasing from $\mu_0 H$ = 0 mT). A region without Nb, a consequence of the shadow of a holding clip during Nb deposition, is demarcated with dashed lines near the lower left of the image. In this figure, no hint of a superconducting response can be seen due to the overwhelming signal from the underlying NdFeB concealing the much weaker signal from Nb. A possible way to recover the superconducting signal from Nb is to take the difference of this image with the image obtained at 6 K and zero magnetic field, as shown in Figure \ref{Fig-TMP-2}(b). In this difference image, quite a few features are revealed. The region without Nb is readily identified. In the region with Nb, the smooth magnetic flux penetration into the Nb film progressively unveils the chessboard pattern underneath. The region of dark contrast towards the upper right corner of the image is where magnetic flux is absent.
Let us now compare this approach with the result of applying the protocol for converting the intensity images into $B_{z}$ maps. Figure \ref{Fig-TMP-2}(c) shows the resulting $B_{z}$ map converted from the MO image in panel (a). From this figure it can be seen that our protocol is able to recover the smooth magnetic flux penetration into the Nb film. However, the fact that the chessboard pattern remains faintly visible in the Meissner region indicates that it fails to properly correct the magnetic field of the underlying magnetic pattern. This is understandable since the MO indicator saturates and therefore becomes insensitive to fields above 80 mT whereas the magnetic field close to the boundary between two squares can largely exceed that value.
A closer inspection of the box in Fig. \ref{Fig-TMP-2}(c), and the zoomed-in view of this region shown in Fig. \ref{Fig-TMP-2}(e), reveals that the red squares are devoid of flux, as indicated by the uniform dark contrast, whereas a strong accumulation of flux is seen in the peripheral regions of the blue squares. When the polarity of $H$ is reversed, the opposite effect is observed, as shown in Fig. \ref{Fig-TMP-2}(d). In this case, flux accumulation is observed inside the red squares (cf. strong red contrast in the periphery of the red squares in Figs. \ref{Fig-TMP-2}(d) and \ref{Fig-TMP-2}(f)). In other words, the flux propagates following staircase-like paths along one set of squares, while largely avoiding the other set.
Vortices generated in a superconductor by an external magnetic field experience magnetic pinning by a domain structure due to interactions of the stray fields of the magnetic structure with screening currents in the superconductor\cite{Bulaevskii2000,Erdin2001,Bespyatykh2001,Milo2002,Laiho2003,Aladyshkin2009}. Moreover, the magnetic pattern is expected to spontaneously induce vortices in the superconductor even in zero applied field, with vortices of opposite polarity occupying the alternate sets of squares\cite{Erdin2001,Laiho2003,Aladyshkin2006}. In this scenario, a vortex generated in Nb by a positive applied $H$ approaching a red square will be attracted and annihilated by antivortices present at the edges of the square. While approaching the edge of a blue square, it will encounter a wall of repulsive vortices; since this wall is permeable, flux entry into the blue square is not impeded. Once inside a blue square, the vortex will be strongly pinned by virtue of magnetic pinning within the domain. This would lead to an accumulation of flux inside the blue squares. In agreement with the variation of $B_{z}$ within a square, more vortices would tend to accumulate near the edges of the square rather than near its center, due to the larger $B_{z}$ near the edge, as is observed in Figs. \ref{Fig-TMP-2}(c) and \ref{Fig-TMP-2}(e). For a negative applied $H$, the scenario will be reversed, with vortices accumulating in the red squares (cf. Figs. \ref{Fig-TMP-2}(d) and \ref{Fig-TMP-2}(f)). This is consistent with earlier reports demonstrating spatial variation of flux density in a superconductor guided by the magnetic landscape in its vicinity (e.g., \onlinecite{Laviano2005,Gillijns2007,Iavarone2014}), and will progressively result in a staircase-like path of flux propagation with increasing $H$.
\subsubsection{Magnetic flux avalanches in Nb/NdFeB hybrid system}
\label{subsec_TMP3}
It is a well-documented fact that the increase of critical current density, along with the decrease of thermal conductivity as temperature decreases, leads to the development of thermomagnetic instabilities\cite{Mints1981}. These instabilities manifest themselves as bursts of magnetic flux penetrating into the superconductor and acquire a particularly dramatic aspect in thin films, with a branching structure similar to Lichtenberg figures\cite{Takahashi1979}. Literature on the influence of a magnetic template on the propagation of magnetic flux avalanches is rather scarce\cite{Gheorghe2008,Brisbois2016}, which motivates us to explore this regime of flux penetration in the Nb/NdFeB hybrid system. To that end, the sample is cooled down to the base temperature of the cryostat ($\sim$ 4 K) in zero field (ZFC). Figure \ref{Fig-TMP-3} summarizes the avalanche-like flux jumps observed in this regime. Fig. \ref{Fig-TMP-3}(a) shows the magnetic field map for $\mu_0 H$ = 7 mT, where the Nb sample has been fully penetrated and no evidence of avalanches is seen. However, by taking the differential image ${\delta}B_{z} = B_{z}(\mu_0 H+0.1$~mT)$-B_{z}(\mu_0 H)$, as shown in Fig. \ref{Fig-TMP-3}(b), a sudden large jump is observed along a diagonal, in an area encompassing a set of squares of similar magnetization. This is somewhat analogous to the observation of secondary branches of flux avalanches in Nb and MoGe films with an antidot lattice\cite{Motta2014}. Upon further increasing $H$, smooth flux penetration proceeds, interspersed with similar jumps at higher fields.
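The differential-imaging step used to expose these jumps has a direct array-level analogue. The sketch below is purely illustrative (the function names, the synthetic data and the threshold are ours, not the paper's): it forms ${\delta}B_{z}$ maps between consecutive applied-field values and flags field steps whose peak local change is too large to be explained by smooth penetration.

```python
import numpy as np

def differential_maps(bz_stack):
    """delta B_z[i] = B_z(H_{i+1}) - B_z(H_i) for a stack of calibrated
    field maps acquired while ramping the applied field."""
    bz = np.asarray(bz_stack, dtype=float)
    return bz[1:] - bz[:-1]

def flag_flux_jumps(bz_stack, threshold):
    """Return the indices of field steps whose peak |delta B_z| exceeds
    `threshold` -- candidates for avalanche-like jumps superimposed on
    smooth flux penetration."""
    diffs = differential_maps(bz_stack)
    peak = np.max(np.abs(diffs), axis=(1, 2))
    return np.flatnonzero(peak > threshold)
```

Applied to a ramp of calibrated $B_{z}$ maps, this singles out the rare abrupt steps (as in Fig. \ref{Fig-TMP-3}(b)) that a single fully penetrated map (Fig. \ref{Fig-TMP-3}(a)) conceals.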
\begin{figure}[ht]
\centering
\includegraphics[width=8.5cm]{Figure_TMP_3.pdf}
\caption{{\bf Flux avalanches in Nb/NdFeB at 4 K.} (a) Image of $B_{z}$ distribution converted from MO image obtained at 7 mT, while increasing $\mu_0 H$ from 0 mT after zero-field cooling the sample to 4 K. (b) Differential image showing $\delta B_{z}$ map in response to a 0.1 mT change in $\mu_0 H$ at 7 mT.}
\label{Fig-TMP-3}
\end{figure}
It is worth noting that the flux jumps observed at 4 K occur over and above smooth flux penetration. This observation leads us to suggest that the observed avalanches might not be the result of thermomagnetic instabilities but rather the effect of flux channeling. Further experimental investigations will be needed to clarify the origin of the flux jumps reported here.
\subsection{Py/Nb heterostructures}
\label{sec_NbPy}
\begin{figure*}[ht]
\centering
\includegraphics[width=15.0cm]{Figure_NbPy_1.pdf}
\caption{{\bf Imprinting the micromagnetic pattern in permalloy.} (a) Schematic showing the imprinting process. (b) Py/Nb sample magnetized in-plane with the magnetization direction indicated by the arrow. (c)-(d) Py/Nb sample with micromagnetic pattern imprinted in the Py layer. (c) Chessboard pattern aligned roughly along the long side of Py. (d) Chessboard pattern at $\sim 45^{\circ}$ with respect to the long side. (e) SHPM image at room temperature showing a part of the chessboard pattern in the Py layer of panel (c), obtained at a scan height $z \sim$ 4 $\mu$m. (f)-(g) MO Images showing smooth flux penetration at 6 K in the Py/Nb sample with the imprinted patterns shown in (c) and (d), respectively. These images are obtained by subtracting the image at $\mu_0 H$ = 0 mT in each set to remove the signal from the magnetic pattern. Images in (f) and (g) are at $\mu_0 H$ = 4.9 mT and 5.9 mT, respectively. Both images were obtained while increasing $\mu_0 H$ from 0 mT after zero-field cooling the sample to 6 K.}
\label{Fig-NbPy-1}
\end{figure*}
In this section we will address the possibility of reproducing the magnetic landscape of TMP templates in a softer ferromagnet, so as to obtain a weaker, erasable, and tailor-made pinning potential.
Permalloy (Py) is an interesting material to use as the source of a flexible magnetic landscape to induce vortex pinning in a superconductor\cite{Brisbois2016}. This can be achieved by transferring the TMP templates in NdFeB onto a 460 nm-thick Py layer partially covering a Nb film. The imprinting process starts from a state where the Py layer is magnetized in-plane along the direction indicated in Fig. \ref{Fig-NbPy-1}(b). Then, the Py/Nb sample is fixed at the base of a micro-manipulator probe station with the Py layer on top and the base is clamped (cf. schematic in Fig. \ref{Fig-NbPy-1}(a)). The NdFeB sample is attached to a probe, with the patterned NdFeB surface facing down, towards the Py/Nb sample. The probe carrying the NdFeB sample is then lowered towards the Py/Nb sample until the two surfaces are in contact. Afterwards, the base is unclamped and pulled down, ensuring minimal slipping between the two surfaces, which results in a stable and clear imprint of the TMP pattern in the Py. The TMP chessboard pattern was used to generate two different configurations in Py, as shown in Figs. \ref{Fig-NbPy-1}(c) and \ref{Fig-NbPy-1}(d), with the chessboard pattern along and at $45^\circ$ with respect to the long edge of Py, respectively.
In order to expose the small scale details of the imprinted magnetic landscape, the Py layer of Fig.~\ref{Fig-NbPy-1}(d) has been characterized using SHPM performed at room temperature. The SHPM image in Fig. \ref{Fig-NbPy-1}(e) shows the chessboard pattern in the Py obtained at the closest scan height $z \sim$ 4 $\mu$m. $B_{z}$ in the pattern varies by 5 mT, which is $\sim$ 6 times less than that observed in NdFeB (cf. Fig. \ref{Fig-TMP-1}(d)), consistent with the fact that Py has a much lower remanent field than NdFeB.
As with the Nb/NdFeB sample, to visualize flux profiles in Nb, the sample was cooled down to 6 K in zero field (ZFC). Then a series of MO images were obtained by cycling $\mu_0 H: 0 \rightarrow +12.5 \rightarrow -12.5 \rightarrow 0$ mT. These measurements were performed for both of the imprinted configurations shown in Fig. \ref{Fig-NbPy-1}. Panels (f) and (g) show one MO image for each configuration obtained in these sets. These images are obtained by subtracting the image at $\mu_0 H$ = 0 mT in each set to remove the signal from the magnetic pattern. The images in Figs. \ref{Fig-NbPy-1}(f) and \ref{Fig-NbPy-1}(g) are at $\mu_0 H$ = 4.9 mT and 5.9 mT, respectively. From these images, the modulation of the flux path in Nb by the magnetic pattern is clearly established. The staircase-like paths of flux flow, which were also observed in the Nb/NdFeB sample, are much clearer in this sample. The principal reason behind this is that the magnetic signal from Py is much weaker than that from NdFeB, which allows revealing the response of Nb more easily. Furthermore, similar to our observations in Ref. \onlinecite{Brisbois2016}, flux motion is observed to be asymmetric with respect to the different edges of Py. Flux in Nb guided by the chessboard pattern penetrates much more quickly along one side of the sample with respect to the others (cf. top of the sample in Fig. \ref{Fig-NbPy-1}(f) and left side of the sample in Fig. \ref{Fig-NbPy-1}(g)). This indicates that the underlying in-plane magnetization might still play a significant role even after imprinting the out-of-plane magnetic pattern in Py. A preliminary investigation of the flux jumps in this system shows that large dendritic avalanches develop at low temperatures, most probably associated with thermomagnetic instabilities.
\section{Conclusion}
\label{sec_conclusion}
In summary, we have developed a comprehensive protocol for calibration and conversion of MOI data into magnetic field distribution. A side benefit of this method is the removal of unwanted experimental artifacts from the raw data. The protocols have been applied on systems with increasing complexity. Several low magnetic field sources have been used to determine the magnetic field resolution of the technique as 10 $\mu$T whereas the spatial resolution has been shown to be similar to the gap separating the magnetic source and the MO indicator. For typical mounting conditions this spatial resolution lies between 2 and 10 $\mu$m. The introduced protocol helps to extract the comparatively weaker magnetic response of the superconductor from the background of larger fields associated with the magnetic layer in its vicinity. This has been notably useful to reveal magnetic flux penetration in Nb/NdFeB hybrids with a chessboard magnetic pattern. For this system, smooth flux penetration in Nb is observed to be strongly influenced by the underlying micromagnetic pattern, with incoming vortices preferentially occupying one set of squares of the pattern. Smooth flux penetration at lower $T$ is interspersed with unconventional avalanche-like flux jumps. In addition, thermomagnetically patterned micromagnet structures have been imprinted in permalloy (Py) to obtain flexible magnetic landscapes for flux guidance in a Nb layer underneath. Further refinements of the technique could be envisaged by incorporating deconvolution tools and taking into account the in-plane field component in the MO indicator.
\section{Supplementary Material}
\label{sec_supplementary}
See supplementary material below for more details on our magneto-optical imaging setup and on the preparation of the Co bars discussed in Section \ref{sec_bars}.
\begin{acknowledgments} This work was partially supported by the Fonds de la Recherche Scientifique - FNRS, the ARC grant 13/18-08 for Concerted Research Actions, financed by the French Community of Belgium (Wallonia-Brussels Federation), the COST action NanoCoHybri (CA16218), the Brazilian National Council for Scientific and Technological Development (CNPq) and the Sao Paulo Research Foundation (FAPESP). J.B. acknowledges support from F.R.S.-FNRS (Research Fellowship). The work of G.S. is supported by the University of Li\`{e}ge and the EU in the context of the FP7-PEOPLE-COFUND-BeIPD project. The work of S.B.A., A.V.S., and S.M. is partially supported by PDR T.0106.16 of the F.R.S.-FNRS. The authors thank the ULg Microscopy facility CAREM for part of the SEM investigations. L.B.L.G.P. was supported by a fellowship from CNPq-CsF Program. The LANEF framework (ANR-10-LABX-51-01) and the Nanoscience Foundation are acknowledged for their support with mutualized infrastructure.
\end{acknowledgments}
\subsection*{Acknowledgements:} I would like to thank my thesis supervisor Professor D.M. Steinberg for his patient and insightful guidance, and to thank Ms. Ilana Gelertner for the invaluable practical experience I obtained while working with her in the Statistical Laboratory at Tel-Aviv University, which helped me develop the ideas presented in this paper.
\begin{abstract}
Most classification methods provide either a prediction of class membership or an assessment of class membership probability. In the case of two-group classification the predicted probability can be described as the ``risk'' of belonging to a ``special'' class. When the required output is a set of ordinal risk groups, a discretization of the continuous risk prediction is achieved by two common methods: by constructing a set of models that describe the conditional risk function at specific points (quantile regression) or by dividing the output of an ``optimal'' classification model into adjacent intervals that correspond to the desired risk groups. By defining a new error measure for the distribution of risk onto intervals we are able to identify lower bounds on the accuracy of these methods, showing sub-optimality both in their distribution of risk and in the efficiency of their resulting partition into intervals. By adding a new form of constraint to the existing maximum likelihood optimization framework and by introducing a penalty function to avoid degenerate solutions, we show how existing methods can be augmented to solve the ordinal risk-group classification problem. We implement our method for generalized linear models (GLM) and show a numeric example using Gaussian logistic regression as a reference.
\end{abstract}
\section{Introduction \label{section_intro}}
\par The classical problem of discriminating between two classes of observations based on a given dataset has been widely discussed in the statistical literature. When only two classes are involved, the question of discrimination is reduced to whether or not a given observation is a member of a ``special'' class (where the other class is the default state, for example sick vs. healthy). Some classification methods, such as Fisher's linear discriminant analysis (LDA), make a decisive prediction of class membership while minimizing error in some sense, typically the misclassification rate. Other methods, such as logistic regression, provide an estimate of the exact conditional probability of belonging to the ``special'' class given a set of predictor variables. Throughout this paper we shall refer to this conditional probability as ``\emph{conditional risk}'' or simply ``\emph{risk}'', although sometimes belonging to the special class might actually have a very positive context (e.g. success).
\par There are two ways to estimate the conditional risk function: parametric and non-parametric. Parametric methods primarily include logit/probit models (Martin 1977 \cite{martin_1977}, Ohlsen 1980 \cite{ohlson_1980}) and linear models (Amemiya 1981 \cite{amemiya_1981}, Maddala 1986 \cite{maddala_1986} and Amemiya 1985 \cite{amemiya_1985}). Powell (1994 \cite{powell_1994}) has a review of non-parametric estimators. For a comparison of these approaches and complete review see Elliott and Lieli (2006) \cite{elliott_2006} and more recently Green and Hensher (2010) \cite{greene_2010}.
\par The estimation of the exact structure of the conditional risk function comes in handy when we wish to make distinctions between observations that are finer than simply class membership. However, in realistic scenarios acting upon such estimations alone may prove to be difficult. Assessments on a scale of 1:100 (as percentages) or finer assessments have little practical use, primarily since the possible actions resulting from such information are usually few. For such cases an ordinal output is required. It is important to note that this problem is not equivalent to multi-group classification in two ways: first, our groups are ordinal by nature and relate to the underlying risk; second, the assignment into groups is not given a-priori and greatly depends on the selection of model, model parameters and the borders of the intervals assigned to each risk group.
\par There are two common approaches to creating an ordered set of risk groups to match a finite set of escalating actions. The first approach is to create multiple models describing the behaviour of the conditional risk function at specific points (also known as "quantile regression"); the second approach is to divide post-hoc the continuous risk estimation of a known model into intervals.
\par The first approach attempts to construct separate models that describe the behaviour of the conditional risk function at specific levels of risk. In linear models this approach is known as \emph{quantile regression} (Koenker \& Bassett 1978 \cite{Quantile_regression_1978}). Manski (\cite{Manski_1975}, \cite{Manski_1985}, \cite{Manski_1986}) implemented this notion in binary response models (the equivalent of two-group classification), naming it ``Maximum Score Estimation''. In a series of papers he shows the existence, uniqueness and optimal properties of the estimators and follows by showing their stable asymptotic properties. The primary justification for using this approach is methodological: it demands that we specify in advance the levels of risk that are of interest to us (a vector $q$ of quantiles), and then constructs a series of models that describe conditional risk at these quantiles. However, as we shall demonstrate in section \ref{section_cond_precentiles}, using risk-quantiles (or conditional probability over left-unbounded and overlapping intervals) is not relevant to our definition of the problem, and even the term ``conditional quantiles'' is in itself misleading.
\par In the second, more ``practical'' approach, the continuous output of an existing optimal risk model (logit, linear or non-parametric) is divided into intervals, thus translating the prediction of risk (usually continuous in nature) into large ``bins of risk'', i.e., ``low''/``medium''/``high'' or ``mild''/``moderate''/``severe'' (depending on context). The final result of this discretization process is a set of ordinal risk groups based on the continuous prediction of conditional risk. The primary drawback of this approach is that the selection of the classification model and its parameters is not performed in light of the final set of desired risk groups. Instead, an ``optimal model'' (in some sense) is constructed first, and the partition into discrete groups is performed post-hoc.
\par The primary objective of this paper is to combine the idea of pre-set levels of risk over adjacent intervals (rather than risk quantiles) into a standard classification framework. Instead of constructing multiple models, we offer a process that optimizes a single risk estimation model (or ``score'') paired with a matching set of breakpoints that partition the model's output into ordinal risk groups. To that end we define a new measure of accuracy, \emph{Interval Risk Deviation} (IRD), which describes a model's ability to distribute risk correctly into intervals given a pre-set vector $r$ of risk levels. We show how this new measure of error can be integrated into existing classification frameworks (specifically the maximum likelihood framework) by adding a constraint to the existing optimization problem. In addition, we address the more practical problem of effectively selecting breakpoints by introducing a penalty function to the modified optimization scheme.
\par The remainder of this paper is organized as follows. Section \ref{section_definitions} defines risk groups and a measure of error (IRD) that will be necessary for optimality. Section \ref{section_existing_methods} demonstrates the problems of using existing approaches.
Section \ref{section_ORGC} formulates a new optimization problem that will provide accurate, optimal and non-degenerate solutions, and section \ref{section_case_study} provides a case study where the new framework is applied to logistic regression and presents an example.
\section{Definitions \label{section_definitions}}
\par Let $r \in [0,1]^T$ be an ordered vector of \emph{risk levels} ($0 \leq r_{1} < r_{2} < \ldots < r_{T} \leq 1$), let $X = (X_1, \ldots, X_P)$ be a continuous $P$-dimensional random vector and let $Y \in \{0,1\}$ be a Bernoulli random variable representing class membership. An \emph{Ordinal Risk-Group Score} (ORGS) for a pre-set risk vector $r$ is a couplet $(\Psi, \tau)$ where $\Psi: \mathbb{R}^P \rightarrow \mathbb{R}$ is a continuous (possibly not normalized) risk predictor, which summarizes the attributes of $X$ into a single number (a score), and $\tau \in \mathbb{R}^{T-1}$ is a complete partition of $\mathbb{R}$ into $T$ distinct and adjacent intervals ($- \infty = \tau_0 < \tau_1 < \tau_2 < \ldots < \tau_{T-1} < \tau_T = \infty$). The couplet $(\Psi, \tau)$ classifies observations into risk groups by the following equivalence: An observed vector $X$ belongs to the $i$'th risk group if and only if $\Psi(X) \in (\tau_{i-1}, \tau_{i}]$ (the intervals are right-side open to avoid ambiguities). The actual conditional risk level of the $i$'th risk group defined by a couplet $(\Psi, \tau)$ is:
\begin{equation} \label{eq_R_def}
R_{i} (\Psi, \tau) = P ( Y=1 \mid \Psi(X) \in (\tau_{i-1}, \tau_{i}])
\end{equation}
\par It is worth noting that score-based classification methods for two classes can be described as a special of $T=2$ (two risk groups). Such methods look for a single breakpoint $\tau \in \mathbb{R}$, and the two resulting intervals $(-\infty, \tau], (\tau, \infty)$ become an absolute prediction of class membership: $\Psi(X)> \tau \Rightarrow$ $X$ belongs to class $1$. Other methods, designed to deal with more than one risk group, typically assign a single breakpoint to each risk group (see section \ref{section_cond_precentiles}), reflecting the idea that the assignment to risk group is based on \emph{thresholds}: an observed $X$ is assigned to the $i$'th group if and only if $\Psi(X)$ crosses the ($i-1$)'th threshold ($\Psi(X) > \tau_{i-1}$) but does not cross the $i$'th threshold ($\Psi(X) \leq \tau_{i}$).
\par Even from the latter definition, it becomes evident that the assignment to groups is in fact based on \emph{adjacent intervals} $\lbrace (\tau_{i-1}, \tau_{i}] \rbrace_{i=1}^{T}$ (rather than on right-side open-ended intervals defined by thresholds) and that any breakpoint we set affects the definition of two intervals (and hence two risk groups). Although further on in this paper we shall discuss separate breakpoints in relation to risk groups in order to demonstrate the key problem that arises from the use of adjacent intervals (section \ref{section_lower_bounds_IRD}), the notion of assigning intervals \emph{simultaneously} rather than separate breakpoints should remain clear throughout this paper.
\par We can now describe the accuracy of an ordinal risk score $(\Psi, \tau)$ in relation to a pre-set vector $r$ as the overall difference between the pre-defined risk levels of $r$ and the actual conditional risk levels $R(\Psi,\tau)$. We define an error measure for risk-group classification models which is a parallel of \emph{misclassification rate} in standard classification methods. We name this measure \emph{Interval Risk Deviation} (IRD):
\begin{equation} \label{eq_IRD_def}
\text{IRD}_{r}(\Psi, \tau) = \Vert R(\Psi, \tau) - r \Vert
\end{equation}
\par On its own, the very definition of IRD marks a new approach to the evaluation of ordinal risk scores. Having a predefined set of risk levels means that any risk score $(\Psi, \tau)$ we consider as a candidate must uphold $\text{IRD}_{r}(\Psi, \tau) = 0$ (or at the very least $\text{IRD}_{r}(\Psi, \tau) < \varepsilon$ for a predefined small $\varepsilon >0$). This makes $\text{IRD} = 0$ a \emph{necessary condition} for optimality. In the next two sections we demonstrate how the two existing approaches for creating ordinal risk scores do not necessarily fulfil this condition, either because of unsuitable definitions of optimality, as is the case with risk-quantile based methods, or by ignoring it altogether, as is the case with the 2-step approach.
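For concreteness, the quantities in Eqs. (\ref{eq_R_def}) and (\ref{eq_IRD_def}) have straightforward empirical estimators. The sketch below is our illustration only (the choice of the Euclidean norm for $\Vert \cdot \Vert$ and all variable names are ours): given observed scores $\Psi(X)$, labels $Y$ and breakpoints $\tau$, it estimates each $R_{i}$ as the within-interval fraction of $Y=1$ and evaluates the resulting IRD.

```python
import numpy as np

def interval_risks(scores, labels, tau):
    """Empirical R_i: fraction of Y = 1 among observations whose score
    falls in (tau_{i-1}, tau_i], with tau_0 = -inf and tau_T = +inf.
    side='left' in searchsorted realizes the right-closed convention."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    tau = np.asarray(tau, dtype=float)
    groups = np.searchsorted(tau, scores, side="left")
    risks = np.full(len(tau) + 1, np.nan)  # NaN marks an empty interval
    for i in range(len(risks)):
        mask = groups == i
        if mask.any():
            risks[i] = labels[mask].mean()
    return risks

def ird(scores, labels, tau, r):
    """Empirical Interval Risk Deviation ||R(Psi, tau) - r|| (2-norm)."""
    return np.linalg.norm(interval_risks(scores, labels, tau)
                          - np.asarray(r, dtype=float))
```

With $T = 2$ this reduces to evaluating a single cut-point, mirroring the two-group special case discussed above.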
\section{Problems with Existing Scoring Methods \label{section_existing_methods}}
\subsection{Risk-Quantiles (and why we can't use them) \label{section_cond_precentiles}}
\par When first presented with the problem of selecting an optimal model paired with a set of optimal breakpoints, our initial idea was to use quantile-oriented models. Such models have been extensively studied in econometrics, where they are commonly referred to as ``ordered choice models" (\cite{train_2003}, \cite{greene_2010}). The most relevant model in that group is Manski's \emph{maximum score estimation} which defines the optimization problem using a set of probabilities over \emph{left-unbounded overlapping} intervals (or \emph{rays}) in contrast to the definition of the problem over \emph{adjacent, non-overlapping} intervals.
\par In order to better illustrate the differences between our definitions and Manski's quantile-oriented approach we must first describe quantile oriented models in our terms. First we replace the vector $r$ with a vector $q$ of ``conditional quantiles", which are in fact the desired conditional probabilities over left-unbounded and overlapping intervals. Using Manski's adaptation of quantile regression \cite{Manski_1975} we can build a different set of model parameters for each quantile $q_{i}$ optimizing:
\begin{equation}
\vert P(Y=1 \mid \Psi_{i}(X) \leq 0) - q_{i} \vert \longrightarrow \min_{\Psi_{i}}
\end{equation}
\par It is easy to see how this approach can be slightly modified to match the original objective of finding a single model: by coercing the models $\Psi_{i}$ to be parallel we can create a ``master model" $\Psi(X)$ and derive appropriate thresholds $\lbrace \tau_{i} \rbrace_{i=1}^{T-1}$ such that:
\begin{equation*}
\Psi_{i}(X) \leq 0 \quad \Leftrightarrow \quad \Psi(X) \leq \tau_{i}
\end{equation*}
\begin{equation} \label{optim_cond_quant}
\vert P(Y=1 \mid \Psi(X) \leq \tau_{i}) - q_{i} \vert \longrightarrow \min_{\Psi} \quad i \in \lbrace 1, \dots T \rbrace
\end{equation}
\par Using (\ref{optim_cond_quant}) we can easily define $Q_{i} (\Psi,\tau) = P(Y = 1 \mid \Psi(X) \leq \tau_{i})$ and the equivalent \emph{Quantile Risk Deviation} $QRD_{q}(\Psi, \tau) = \Vert Q(\Psi,\tau) -q \Vert$, and look for a model with $QRD = 0$. However, while it is tempting to describe the vector $q$ as a vector of ``conditional quantiles", the term is in itself misleading and should be avoided. Figure \ref{figure_hetero_cond_graphs} demonstrates how even under relatively simple assumptions (a one-dimensional Gaussian distribution with unequal conditional variances) the function $x \mapsto P(Y = 1 \mid X \leq x)$ is not even monotone.
\begin{figure}[h]
\center
\includegraphics[scale=0.9, angle = 270]{hetero_graphs.eps}
\caption[Conditional probability over left-unbounded intervals]{\label{figure_hetero_cond_graphs} Different behaviour of conditional probability over left-unbounded intervals as a function of the threshold $x$ in the case of one-dimensional Gaussian distribution with $\mu_{0}= -1$, $\mu_{1} = 1$ and $P(Y=1) = 0.2$. In the left panel $\sigma_1 =4, \sigma_0 = 1$, in the middle panel $\sigma_1 = \sigma_0 = 2$ (homoscedastic case) and in the right panel $\sigma_1 = 1, \sigma_0 = 4$.}
\end{figure}
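The non-monotone behaviour in the figure is easy to reproduce numerically. The sketch below evaluates $P(Y=1 \mid X \leq x)$ directly via Bayes theorem, with default parameters taken from the right panel ($\sigma_1 = 1$, $\sigma_0 = 4$); it is an illustration only:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def cond_prob_left_ray(x, p=0.2, mu1=1.0, mu0=-1.0, s1=1.0, s0=4.0):
    """P(Y=1 | X <= x) for one-dimensional Gaussian class conditionals.
    Defaults match the right panel of the figure (sigma_1=1, sigma_0=4)."""
    num = p * Phi((x - mu1) / s1)
    den = num + (1.0 - p) * Phi((x - mu0) / s0)
    return num / den
```

Evaluating the function at a few points shows it first rising above the marginal $P(Y=1)=0.2$ and then falling back towards it, i.e. it is not monotone in $x$.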
\par Even if we assume strict monotonicity of $P(Y=1 \mid \Psi(X) \leq x)$, for example by assuming the strict monotone likelihood ratio property (SMLRP, for details see Appendix \ref{appendix_MLRP_cdf_ratio_mono}) and thus giving the term ``conditional quantiles" a meaningful sense, it would still be impossible to apply this approach to optimizing the distribution of risk over adjacent intervals. In order to use ``risk-quantiles" to solve our problem we must first find an a priori mechanism that will translate any given vector of desired conditional probabilities over adjacent intervals $r$ to the equivalent vector of desired conditional probabilities over left-unbounded and overlapping intervals $q$.
\par However, it is easy to show that such an a priori translation is impossible. Using the \emph{law of total probability} in its conditional form we can calculate for any given $R$ the equivalent $Q^{(R)}$ (actual probabilities over left-unbounded intervals):
\begin{equation} \begin{split} \label{eq_R_Q}
Q^{(R)}_{i} & (\Psi,\tau) = P(Y = 1 \mid \Psi(X) \leq \tau_{i})\\
= & \sum_{j \leq i} P(Y=1 \mid \Psi(X) \in (\tau_{j-1}, \tau_{j}], \Psi(X) \leq \tau_{i}) P(\Psi(X) \in (\tau_{j-1}, \tau_{j}] \mid \Psi(X) \leq \tau_{i})\\
= & \sum_{j \leq i} P(Y=1 \mid \Psi(X) \in (\tau_{j-1}, \tau_{j}]) \frac{P(\Psi(X) \in (\tau_{j-1}, \tau_{j}] ,\Psi(X) \leq \tau_{i})}{P( \Psi(X) \leq \tau_{i})} \\
= & \frac{1}{P(\Psi(X) \leq \tau_{i})} \sum_{j \leq i} R_{j}(\Psi,\tau) P(\Psi(X) \in (\tau_{j-1}, \tau_{j}])
\end{split} \end{equation}
Or equivalently:
\begin{equation}
R_{i} (\Psi,\tau) = \frac{P(\Psi(X) \leq \tau_{i})}{P(\Psi(X) \in (\tau_{i-1}, \tau_{i}])} Q^{(R)}_{i} (\Psi,\tau) - \frac{P(\Psi(X) \leq \tau_{i-1})}{P(\Psi(X) \in (\tau_{i-1}, \tau_{i}])} Q^{(R)}_{i-1}(\Psi,\tau)
\end{equation}
The same process can be applied to the corresponding vector of risk quantiles $q^{(r)}$:
\begin{equation} \begin{split} \label{eq_r_q}
q^{(r)}_{i} = & \frac{\sum_{j \leq i} \; r_{j} \; P(\Psi(X) \in (\tau_{j-1}, \tau_{j}])}{P(\Psi(X) \leq \tau_{i})} \\
r_{i} = & \frac{P(\Psi(X) \leq \tau_{i})}{P(\Psi(X) \in (\tau_{i-1}, \tau_{i}])} q^{(r)}_{i} - \frac{P(\Psi(X) \leq \tau_{i-1})}{P(\Psi(X) \in (\tau_{i-1}, \tau_{i}])} q^{(r)}_{i-1}
\end{split} \end{equation}
As a result for a fixed $(\Psi, \tau)$ we have:
\begin{equation*}
R_{i} (\Psi,\tau) = r_i \Leftrightarrow Q^{(R)}_{i}(\Psi,\tau) = q^{(r)}_{i}
\end{equation*}
\begin{equation}
\text{IRD}_{r}(\Psi,\tau) = 0 \Leftrightarrow QRD_{q^{(r)}}(\Psi,\tau) = 0
\end{equation}
\par The primary problem with a quantile-based formulation stems from the relation between $r$ and the resulting $q^{(r)}$. By our own definitions the central aspect of the problem is the probability over adjacent intervals, not over overlapping left-unbounded intervals, and therefore the optimization must be performed against a fixed, pre-defined vector $r$. If we wish to construct an analogous quantile-based optimization problem, we must first find the equivalent vector $q^{(r)}$ that defines it. However, equation (\ref{eq_r_q}) shows that the relation between $r$ and $q^{(r)}$ depends on the specific form of the optimal model $\Psi$: in order to construct $q^{(r)}$ we must first find the optimal model $\Psi$, which is precisely what we are looking for in the first place. In other words, the translation $r \leftrightarrow q$ is possible only once we already have the optimal solution, so an analogous optimization problem over left-unbounded overlapping intervals can only be constructed \emph{after} the fact. Consequently we cannot use quantile-based models to construct an optimal model for the adjacent-interval ordinal risk-group problem.
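A small numerical sketch makes this dependence explicit: holding $r$ fixed while varying only the interval masses $P(\Psi(X) \in (\tau_{j-1}, \tau_{j}])$ (quantities determined by the model $\Psi$, not by $r$) changes the implied left-ray probabilities, following the total-probability decomposition in (\ref{eq_R_Q}). The code is purely illustrative:

```python
def q_from_r(r, interval_probs):
    """q^{(r)}_i = (sum_{j<=i} r_j * pi_j) / (sum_{j<=i} pi_j), where
    pi_j = P(Psi(X) in (tau_{j-1}, tau_j]) -- masses that depend on the
    specific model Psi and its breakpoints, not on r alone."""
    q, num, den = [], 0.0, 0.0
    for rj, pj in zip(r, interval_probs):
        num += rj * pj
        den += pj
        q.append(num / den)
    return q
```

The same target vector $r$ paired with two different sets of interval masses yields two different $q^{(r)}$ vectors, so no model-free translation exists.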
\subsection{Lower bounds on Interval Risk Deviation \label{section_lower_bounds_IRD}}
\par Another common practice when building scores for risk groups is to build a model $\Psi$ that is optimal in some sense (e.g. maximizing likelihood or minimizing overall misclassification rate) and then partition the range of $\Psi(X)$ into adjacent intervals that define risk groups. In this section we demonstrate how, under relatively simple assumptions, using this approach with existing classification models is not optimal for more than two risk groups.
\par Using Bayes theorem we can represent $R$ as:
\begin{equation*}
R_{i} (\Psi ,\tau) = P(Y=1 \mid \Psi(X) \in (\tau_{i-1}, \tau_{i}]) = P(Y=1)\frac{P(\Psi(X) \in (\tau_{i-1}, \tau_{i}] \mid Y=1)}{P(\Psi(X) \in (\tau_{i-1}, \tau_{i}])}
\end{equation*}
\par We assume that $(X,Y,\Psi)$ satisfies the Strict Monotone Likelihood Ratio Property (SMLRP, see appendix \ref{appendix_MLRP_cdf_ratio_mono} for exact definition and details) and that the marginal densities $f_{X \mid Y=k}$ ($k = 0,1$) are continuous, strictly positive and finite. By continuity and finiteness we can describe the behaviour of $R_{i} (\Psi,\tau)$ for infinitely short intervals ($\tau_{i} \rightarrow \tau_{i-1}$):
\begin{equation} \label{eq_lim_of_R} \begin{split}
& \lim_{\tau_{i} \rightarrow \tau_{i-1}} R_{i} (\Psi,\tau) = \lim_{\tau_{i} \rightarrow \tau_{i-1}} \frac{P(Y=1) \; P(\Psi(X) \in (\tau_{i-1}, \tau_{i}] \mid Y = 1)}{P(\Psi(X) \in (\tau_{i-1}, \tau_{i}])} = \\
& = P(Y=1) \frac{\lim_{\tau_{i} \rightarrow \tau_{i-1}} \frac{P(\Psi(X) \in (\tau_{i-1},\tau_{i}] \mid Y = 1)}{\tau_{i} - \tau_{i-1}}}{\lim_{\tau_{i} \rightarrow \tau_{i-1}}\frac{P(\Psi(X) \in (\tau_{i-1},\tau_{i}])}{\tau_{i} - \tau_{i-1}}} = P(Y=1) \frac{f_{\Psi(X) | Y=1} (\tau_{i-1})}{f_{\Psi(X)} (\tau_{i-1})}
\end{split} \end{equation}
where $f$ is the appropriate density function and the limit is from the right-hand side. Similarly for any $z \in (\tau_{i-1}, \tau_{i}]$,
\begin{equation} \label{eq_lim_of_R_general}
\lim_{\tau_{i} \rightarrow z} \lim_{\tau_{i-1} \rightarrow z} R_{i} (\Psi,\tau) = \lim_{\tau_{i-1} \rightarrow z} \lim_{\tau_{i} \rightarrow z} R_{i} (\Psi,\tau) = P(Y=1) \frac{f_{\Psi(X) | Y=1} (z)}{f_{\Psi(X)} (z)}
\end{equation}
\par Although we have stressed the importance of simultaneity when assigning intervals to risk groups, in order to understand the implications of (\ref{eq_lim_of_R_general}) on optimal model selection we must look at the problem from a different perspective. First we fix $\Psi$ and assume that a given partition $\tau$ supports a perfect distribution of conditional risk up to the $(i-1)$'th group, meaning that $R_{j}(\Psi, \tau) = r_{j}$ for all $j < i$. Under these conditions, combined with our previous assumptions of continuous, strictly positive conditional densities and SMLRP, we can explicitly show that not all values of $r_{i}$ are exactly achievable without introducing some IRD: by theorem \ref{th_SMLRP_mono_R_equiv} $R_{i}(\Psi, \tau)$ is strictly increasing in $\tau_{i}$ and therefore we can explicitly define a feasibility criterion:
\begin{equation} \label{eq_lower_bound_on_r}
P(Y=1) \frac{f_{\Psi(X) | Y=1} (\tau_{i-1})}{f_{\Psi(X)} (\tau_{i-1})} < r_{i}
\end{equation}
\par Using continuity (which enables us to divide by $P(Y=1)f_{\Psi(X) | Y=1} (\tau_{i-1})$) we can transform (\ref{eq_lower_bound_on_r}) into a condition on the likelihood ratio $\Lambda$:
\begin{equation} \label{eq_lower_bound_on_LR}
\Lambda_{\Psi}(\tau_{i-1}) = \frac{f_{\Psi(X) \mid Y=1} (\tau_{i-1})}{f_{\Psi(X) \mid Y=0} (\tau_{i-1})} < \frac{1 - P(Y=1)}{P(Y=1)} \; \frac{r_{i}}{1-r_{i}}
\end{equation}
If $\tau$ does not meet the feasibility criterion (\ref{eq_lower_bound_on_r}), then by (\ref{eq_lim_of_R}) and the strict monotonicity of $R$ any selection of $\tau_{i} > \tau_{i-1}$ will have $R_{i} (\Psi,\tau) > r_{i}$, even if we set the interval $(\tau_{i-1}, \tau_{i}]$ to be arbitrarily small. The inevitable result is that, for our fixed model $\Psi$, \emph{any} choice of $\tau$ will have $\text{IRD}_{r}(\Psi, \tau) > 0$.
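For a concrete check of the likelihood-ratio criterion (\ref{eq_lower_bound_on_LR}), consider homoscedastic Gaussian scores, a hypothetical setup chosen only because the likelihood ratio then has a closed form and SMLRP holds:

```python
from math import exp

def gaussian_lr(x, mu1=1.0, mu0=-1.0, sigma=1.0):
    """Likelihood ratio f_{Psi(X)|Y=1}(x) / f_{Psi(X)|Y=0}(x) for
    homoscedastic Gaussian scores; increasing in x, so SMLRP holds."""
    return exp((x * (mu1 - mu0) - (mu1 ** 2 - mu0 ** 2) / 2.0) / sigma ** 2)

def feasible_next_group(tau_prev, r_next, p=0.5):
    """Necessary condition (likelihood-ratio form) for the group starting
    at tau_prev to be able to reach the target risk level r_next."""
    return gaussian_lr(tau_prev) < (1.0 - p) / p * r_next / (1.0 - r_next)
```

With $\mu_1 = 1$, $\mu_0 = -1$, $\sigma = 1$ the ratio is $e^{2x}$, so a group starting at $\tau_{i-1}=0$ can reach a target of $0.6$ but not of $0.4$, and a group starting at $\tau_{i-1}=1$ cannot reach $0.6$ either.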
\par It is important to note that the set of $T-1$ inequalities defined by (\ref{eq_lower_bound_on_r}), (\ref{eq_lower_bound_on_LR}) are necessary yet not sufficient conditions for IRD=0. Assume that we have a solution $(\Psi, \tau)$ which satisfies $\text{IRD}_{r}(\Psi, \tau) = 0$. Under SMLRP we have $x_{2} > x_{1} \: \Rightarrow \: \Lambda_{\Psi}(x_{2}) > \Lambda_{\Psi}(x_{1})$. Consider the counter-example $(\Psi, \tilde{\tau})$ satisfying $\tilde{\tau_{1}} < \tau_{1}$ and $\tilde{\tau_{i}} = \tau_{i}$ for all $i > 1$. By SMLRP we have:
\begin{equation*} \begin{split}
& P(Y=1) \frac{f_{\Psi(X) | Y=1} (\tilde{\tau_{1}})}{f_{\Psi(X)} (\tilde{\tau_{1}})} = \left(1 + \frac{1-p}{p} \frac{1}{\Lambda_{\Psi}(\tilde{\tau_{1}})} \right)^{-1} < \\
< & \left(1 + \frac{1-p}{p} \frac{1}{\Lambda_{\Psi}(\tau_{1})} \right)^{-1} = P(Y=1) \frac{f_{\Psi(X) | Y=1} (\tau_{1})}{f_{\Psi(X)} (\tau_{1})} < r_{2}
\end{split} \end{equation*}
Therefore (\ref{eq_lower_bound_on_r}) is maintained (the other inequalities are not affected). On the other hand by theorem \ref{th_SMLRP_mono_R_equiv} we have strict monotonicity of $R$, meaning:
\begin{equation*}
R_{1}(\Psi,\tilde{\tau}) = P(Y=1 \mid \Psi(X) \leq \tilde{\tau_{1}}) < P(Y=1 \mid \Psi(X) \leq \tau_{1}) = R_{1} (\Psi,\tau) = r_{1}
\end{equation*}
and therefore $\text{IRD}_{r}(\Psi, \tilde{\tau}) > 0$. The conclusion is that even under SMLRP we can use (\ref{eq_lower_bound_on_r}),(\ref{eq_lower_bound_on_LR}) only as necessary conditions for the feasibility of a given solution and that the test of feasibility must be performed using (\ref{eq_R_def}) and (\ref{eq_IRD_def}) directly.
\par In order to satisfy the necessary conditions for IRD=0 in the absence of SMLRP we can generally require $r_{i} > \underset{\lbrace \tau_{i}: \tau_{i} > \tau_{i-1} \rbrace}{\inf} R_{i} (\Psi,\tau)$ (we require strict inequalities to avoid degenerate zero-length intervals); for such cases, however, the existence of a closed-form expression would depend on the exact distribution of $X \mid Y=k$ ($k = 0,1$). We leave the exact formulation of non-SMLRP lower bounds outside the scope of this paper.
\par The final conclusion is that given two vectors of risk levels $r^{(1)}, r^{(2)}$ and a couplet $(\Psi, \tau^{(1)})$ which satisfies $\text{IRD}_{r^{(1)}}(\Psi,\tau^{(1)}) = 0$, we may not be able to find a set of breakpoints $\tau^{(2)}$ which satisfies $\text{IRD}_{r^{(2)}}(\Psi,\tau^{(2)}) = 0$ (using the same model $\Psi$). Specifically, we can now claim that optimal models of existing classification methods (typically optimized for $r=(0,1)$) are not necessarily feasible for an arbitrary choice of $r$.
\par The existence of lower bounds on the IRD is perhaps the most counter-intuitive result of this paper. The reason these limitations have not been addressed before is that most classification methods use a single breakpoint to distinguish between the two groups ($\tau \in \mathbb{R}$), so the issue of degenerate solutions or non-feasibility of $\Psi$ is avoided altogether. Although the fulfilment of (\ref{eq_lower_bound_on_r}),(\ref{eq_lower_bound_on_LR}) does not ensure the feasibility of a given solution, these inequalities are instrumental in demonstrating why the solutions from existing methods may not be feasible for a different choice of $r$, and provide an elegant method to disqualify such solutions. Once we define our objective as the distribution of pre-set risk levels over multiple adjacent intervals we must recognize the existence of possible limitations on IRD for existing methods and, as a result, define new conditions for optimality.
\section{Ordinal Risk-Group Classification \label{section_ORGC}}
\par Although the definition of IRD naturally suggests itself as a new criterion for optimality (look for a couplet $(\Psi, \tau)$ such that $\text{IRD}_{r}(\Psi, \tau(\Psi)) = 0$), there are two problems with using IRD as a single optimality criterion. First, since our problem is a classification problem we must consider some sense of the quality of separation between the two classes in order to avoid degenerate solutions. This principle is not straightforwardly reflected by the definition of IRD (\ref{eq_IRD_def}). Second, our definition of IRD and the resulting necessary inequalities (\ref{eq_lower_bound_on_r}) do not ensure existence or uniqueness of an optimal solution.
\par Our practical solution to these problems is to define IRD as a feasibility criterion and use it as a constraint in an existing optimization problem. Since we are still in the domain of classification problems it would be reasonable to preserve some basic concepts, particularly the definition of optimality: We seek a model that on the one hand maximizes our ability to discriminate between the two classes, but on the other hand distributes risk correctly, meaning that it belongs to the set of feasible solutions:
\begin{equation}
C_{r}(0) = \lbrace (\Psi,\tau) : \: \text{IRD}_{r}(\Psi, \tau) = 0 \rbrace
\end{equation}
In the event that $C_{r}(0)$ is an empty set we would have to reconsider our pre-set $r$ or change our method of constructing $\Psi$.
\par Any classification method we might consider for IRD ``augmentation" must satisfy several criteria. First, it must provide a continuous output $\Psi(X)$, ensuring that we have an appropriate output that can be partitioned into intervals (using $\tau$). This requirement automatically excludes classification methods that do not combine the vector of explanatory variables $X$ into a single real-valued score $\Psi(X)$ before making a prediction of risk or class membership (classification trees are an example of such excluded methods). Furthermore, we would like to maintain the notion that observations with higher scores have a higher conditional risk, and therefore require that the output $\Psi(X)$ is strongly correlated with the conditional risk function $P(Y=1 \mid \Psi(X)=x)$. Methods such as Fisher's LDA \cite{fisher_LDA} or SVM for two classes do provide a continuous scale and a single breakpoint to predict class membership, however these scales are not necessarily correlated with the conditional risk and only ensure that a majority of the observations from the special class are on one side of the breakpoint. We therefore decided to focus our discussion on risk estimation methods that provide a direct estimation of the risk function:
\begin{equation}
\Psi: \mathbb{R}^{P} \longrightarrow [0,1] \:, \quad
\Psi(X) = P(Y=1\mid X)
\end{equation}
\par Finally, in order to simplify our construction we assume SMLRP (see appendix \ref{appendix_MLRP_cdf_ratio_mono}). As we have seen before, this assumption ensures that we have a simple way to calculate the lower bounds on IRD, and also ensures that for a given model $\Psi$, if there exists $\tau(\Psi)$ such that $\text{IRD}_{r}(\Psi, \tau(\Psi)) = 0$ then it is unique (see lemma \ref{lemma_tau_unique}). These properties enable us to simplify our parameter space by optimizing over $\Psi$ alone, and provide a simple way to test the necessary conditions for $\text{IRD}_{r}(\Psi) = \text{IRD}_{r}(\Psi, \tau(\Psi))=0$ and to optimize under the constraint $C_{r}(\Psi) = \lbrace \Psi : \: \text{IRD}_{r}(\Psi) = 0 \rbrace $.
\subsection{Penalized Optimization \label{section_penalty}}
\par While the idea of fitting an optimal risk predictor that maximizes class discrimination is a well defined concept, the requirement of $\text{IRD}_{r}(\Psi)=0$ may lead to degenerate solutions of $\tau(\Psi)$ for certain values of $r$. We demonstrate this problem for a simple case of homoscedastic one-dimensional Gaussian logistic regression: Let $X \mid Y = 1 \sim N(\mu, \sigma)$, $X \mid Y = 0 \sim N(-\mu, \sigma)$, $P(Y=1) = \frac{1}{2}$ and the model $\Psi(\beta,x)$ is the one-dimensional logistic function with the parameter $\beta$, meaning $\Psi: \mathbb{R} \times \mathbb{R} \longrightarrow [0,1]$, $\Psi(\beta, x) = \frac{\exp(\beta x)}{1+\exp(\beta x)}$. We set $r = (0,0.5,1)$.
\par Denoting $\tau(\Psi, \beta, t) = (\Psi(\beta,-t),\Psi(\beta,t))$, we use symmetry of the conditional distributions around $x=0$ and the strict monotonicity of $\Psi(\beta, x)$ in $x$ and $\beta$ to show that for any choice of $\beta,t \in \mathbb{R}$ we can minimize error for $i=2$:
\begin{equation*} \begin{split}
R_{2} & (\Psi(\beta,X), \tau(\Psi, \beta, t)) = P(Y = 1 \mid \Psi(\beta, X) \in (\Psi(\beta,-t),\Psi(\beta,t)]) \\
& = P(Y = 1 \mid \beta X \in (- \beta t, \beta t]) = P(Y = 1 \mid X \in (-t, t]) = 0.5 = r_{2}
\end{split} \end{equation*}
Similar considerations ensure that the risk prediction errors are equal on both sides:
\begin{equation*} \begin{split}
R_{1} & (\Psi(\beta,X), \tau(\Psi, \beta, t)) = P(Y=1 \mid X<-t) \\
& = 1 - P(Y=1 \mid X>-t) = 1 - R_{3} (\Psi(\beta,X), \tau(\Psi, \beta, t))
\end{split} \end{equation*}
Using Bayes theorem we have:
\begin{equation*} \begin{split}
P(Y=1 \mid X<-t) = & \frac{P(Y=1)P(X < -t \mid Y=1)}{P(Y=1)P(X < -t \mid Y=1) + P(Y=0)P(X < -t \mid Y=0)} \\
= & \frac{P(Y=1) \Phi(-t - \mu)}{P(Y=1) \Phi(-t - \mu) + P(Y=0) \Phi(-t + \mu)} \\
= & \left( 1 + \frac{P(Y=0)}{P(Y=1)} \; \frac{\Phi(-t + \mu)}{\Phi(-t - \mu)} \right)^{-1}
\end{split} \end{equation*}
where $\Phi$ is the CDF of the standard normal distribution. Using the known inequality:
\begin{equation} \label{eq_Phi_approx}
\frac{\phi(x)}{x + 1/x} < \Phi(-x) < \frac{\phi(x)}{x} \quad
\forall x>0,
\end{equation}
where $\phi$ is the PDF of the standard normal distribution, we bound the
ratio from below:
\begin{equation*}
\frac{\Phi(-t + \mu)}{\Phi(-t - \mu)} > \frac{t+\mu}{(t-\mu) + \frac{1}{t-\mu}} \frac{\phi(t-\mu)}{\phi(t+\mu)} = \frac{t+\mu}{(t-\mu) + \frac{1}{t-\mu}} \: e ^{2 \mu t} \quad \forall t>\mu
\end{equation*}
Therefore $\lim_{t \rightarrow \infty} P(Y=1 \mid X<-t) = 0$ and similarly $\lim_{t \rightarrow \infty} P(Y=1 \mid X>t) = 1$. For any arbitrarily small $\varepsilon >0$ we can find a sufficiently large $t$ such that
\begin{equation*}
R_{1} (\Psi(\beta,X), \tau(\Psi, \beta, t)) = P(Y=1 \mid X<-t) \leq \varepsilon / 2
\end{equation*}
making the total IRD:
\begin{equation*}
\text{IRD}_{r} (\Psi(\beta,X), \tau(\Psi, \beta, t)) = \sqrt{\sum_{i=1}^3 \left( R_{i} (\Psi(\beta,X), (\Psi(\beta,-t),\Psi(\beta,t))) - r_{i} \right)^2} \leq \varepsilon
\end{equation*}
As a result, for any given $\beta$ the only solution that satisfies IRD=0 is degenerate:
\begin{equation*}
\lim_{t \rightarrow \infty} \text{IRD}_{r} (\Psi(\beta,X), \tau(\Psi, \beta, t)) = 0
\end{equation*}
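The degeneracy is easy to observe numerically. By the symmetry argument above, the IRD of the breakpoints $\pm t$ reduces to $\sqrt{2}\,P(Y=1 \mid X < -t)$, which vanishes as $t$ grows; the sketch below evaluates this for the worked example ($\mu = 1$, $\sigma = 1$, $P(Y=1) = 1/2$) and is illustrative only:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ird_symmetric(t, mu=1.0):
    """IRD_r for r = (0, 0.5, 1) with breakpoints at +/- t on the X scale,
    X|Y=1 ~ N(mu, 1), X|Y=0 ~ N(-mu, 1), P(Y=1) = 1/2.
    R_2 = 0.5 exactly and R_3 = 1 - R_1, so only R_1 contributes (twice)."""
    r1 = 1.0 / (1.0 + Phi(-t + mu) / Phi(-t - mu))  # P(Y=1 | X < -t)
    return sqrt(2.0) * r1
```

The deviation decreases monotonically in $t$ and can be driven below any tolerance, exactly as the limit statement asserts.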
\par There are several alternatives for dealing with this problem. First, we may decide that methodologically we do not allow setting $r_{1} = 0$ or $r_{T} = 1$. This will ensure that the values of $\tau$ are finite but might still lead to very large or very small intervals, depending on the parameters of the model. Alternatively, if our risk estimation method uses optimization to fit the optimal model (for example maximizing the likelihood function in the case of parametric methods) then we can introduce a penalty function $\text{Pen}: \mathbb{R}^{T-1} \rightarrow \mathbb{R}$, which will enable us to balance the properties of $\tau$ (minimal or maximal distance between breakpoints) with the discrimination properties of $\Psi$. This means that instead of maximizing or minimizing a target function $f(\Psi \mid X,Y)$ we maximize/minimize $f(\Psi \mid X,Y) + \gamma \text{Pen}(\tau)$ under an IRD constraint, where $\gamma$ is a tuning parameter that represents the degree of aversion to degenerate solutions.
\par In cases where the degenerate solutions are encountered we would opt for the use of a penalty function. This reflects our understanding that the requirement of ``evenly spread" breakpoints is relatively subjective and should allow for some discretion as to the balance between the ability of the model to separate classes and the resulting interval lengths. By choosing an appropriate penalty function and an aversion parameter $\gamma$ we enable better fitting of the model according to the circumstances at hand, while introducing a relatively small number of additional parameters. On the other hand, since IRD represents an absolute measure of the model's quality, we believe it must be tightly controlled as the constraint $\text{IRD}_{r} (\Psi, \tau) = 0 $ on any model we might consider. We address the details of constructing this constraint for parametric models in the following section.
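One possible shape for such a penalized objective is sketched below; the specific penalty (squared log of consecutive gaps) is our assumption, chosen only to illustrate the mechanism of discouraging degenerate breakpoints:

```python
from math import log, inf

def penalized_objective(loglik, tau, gamma=1.0):
    """f(Psi | X, y) - gamma * Pen(tau).  The penalty grows when any gap
    between consecutive breakpoints becomes very short or very long,
    steering the optimizer away from degenerate solutions."""
    gaps = [b - a for a, b in zip(tau[:-1], tau[1:])]
    if any(g <= 0 for g in gaps):
        return -inf                      # breakpoints must be increasing
    return loglik - gamma * sum(log(g) ** 2 for g in gaps)
```

Evenly spread breakpoints incur no penalty, while a near-zero gap is penalized heavily; $\gamma$ plays the role of the aversion parameter in the text.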
\subsection{Estimation of Interval Risk Deviation \label{sec_IRD_est}}
\par So far, we have defined interval risk deviation (IRD) as a property of a score model $\Psi$ and the joint distribution of $(X,Y)$. In order to implement the concept of IRD in a real-life scenario we must describe a way to estimate $\text{IRD}_{r}(\Psi)$ based on a sample of $N$ i.i.d. observations from a known $P$-dimensional multivariate distribution $\mathcal{F}(\theta)$ in the form of a $N \times P$ matrix $\mathbf{X}$ and a vector $y \in \lbrace 0,1 \rbrace^{N}$ representing known class memberships (depending on the design of the experiment $y$ may or may not be a random sample). Focusing on parametric methods, we assume that $ P(Y=1 \mid X) = \Psi(\beta,X)$ where $\Psi: \mathbb{R}^{M} \times \mathbb{R}^{P} \rightarrow [0,1]$ is a known function and $\beta \in \mathbb{R}^{M}$ is the set of parameters controlling the shape of the function (e.g. generalized linear models \cite{GLM_1972} where $M = P$, $g$ is a known, strictly monotone and bijective link function and $\Psi(\beta,x) = g(\beta^{T} x)$). Having previously assumed SMLRP and a closed-form $\Psi$, we can simplify our notation by denoting $\tau(\beta) = \tau(\Psi(\beta,X))$, the IRD for a given $\beta$ as $\text{IRD}_{r} (\beta) = \text{IRD}_{r} (\Psi(\beta,X), \tau(\Psi(\beta,X)))$ and the constraint set $C_{r}(\beta) = \lbrace \beta \in \mathbb{R}^{M} : \: \text{IRD}_{r}(\beta) = 0 \rbrace$.
\par Many parametric classification methods solve the problem of estimating $\beta$ from a given sample $(\mathbf{X},y)$ by using the maximum likelihood (ML) method. We denote $\mathbf{X}_{j, \cdot}$ the $j$'th row of the matrix $\mathbf{X}$, making our model-predicted probability for the $j$'th observation $\Psi(\beta,\mathbf{X}_{j, \cdot}) = P(Y=1 \mid \mathbf{X}_{j, \cdot})$. Assuming random sampling, there are two equivalent formulations of the ``complete" likelihood function:
\begin{equation} \label{eq_likelihood_cond}
L(\theta_{0}, \theta_{1}, p \mid \mathbf{X},y) = \prod_{j=1}^{N} f_{X,Y} (\mathbf{X}_{j, \cdot}, y_{j}) = \prod_{j=1}^{N} f_{\theta_{y_{j}}} (\mathbf{X}_{j, \cdot}) P(Y_{j} = y_{j})
\end{equation} \begin{equation} \label{eq_likelihood_beta}
L(\beta, \theta \mid \mathbf{X},y) = \prod_{j=1}^{N} P(Y = y_{j} \mid \mathbf{X}_{j, \cdot}) f_{\theta} (\mathbf{X}_{j, \cdot})
\end{equation}
where $f_{\theta}$ is the density function of the distribution $\mathcal{F}(\theta)$ of $X$ and $f_{\theta_{y_{j}}}$ is the density function of the conditional distribution $\mathcal{F}_{y_{j}}(\theta_{y_{j}})$ of $X \mid Y = y_j$. When using (\ref{eq_likelihood_cond}) we must make additional assumptions about the conditional distribution of $X \mid Y=k$ and a random sampling process, and as a result our estimator $\hat{\beta}$ becomes a function of the estimators $\hat{\theta}_{0}, \hat{\theta}_{1}, \hat{p}$. On the other hand, using (\ref{eq_likelihood_beta}) can significantly simplify the optimization process. Since our parameter of interest $\beta$ is isolated in the term $P(Y = 1 \mid \mathbf{X}_{j, \cdot}) = \Psi(\beta,\mathbf{X}_{j, \cdot})$ and does not affect the term $f_{\theta} (\mathbf{X}_{j, \cdot})$, we can directly maximize the partial likelihood function $L_{\Psi}$ over the values of $\beta$:
\begin{equation} \label{qe_likelihood_Psi}
L_{\Psi} (\beta \mid \mathbf{X},y) = \prod_{j=1}^{N} P(Y = y_{j} \mid \mathbf{X}_{j, \cdot}) = \prod_{j=1}^{N} \Psi(\beta, \mathbf{X}_{j, \cdot})^{y_{j}} (1 - \Psi(\beta, \mathbf{X}_{j, \cdot}))^{1-y_{j}}
\end{equation}
making the corresponding maximum likelihood optimization problem:
\begin{equation} \label{eq_ML_Psi}
\hat{\beta}_{ML} = \underset{ \beta \in \mathbb{R}^{M} }{ \text{argmax} } \; L_{\Psi} (\beta \mid \mathbf{X},y)
\end{equation}
One of the primary advantages of using the second approach for maximum likelihood estimation is that it circumvents the need to estimate the parameters of the conditional distributions, thus making $\beta$ the only estimated parameter. This construction also enables a relatively simple extension of the maximum likelihood framework to semi-parametric models or non-random sampling (see \cite{PRENTICE01121979} for such an extension of logistic regression).
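A minimal sketch of (\ref{eq_ML_Psi}) for a one-dimensional logistic model follows; plain gradient ascent is used only for self-containedness (any maximum likelihood routine would do), and all names are illustrative:

```python
from math import exp

def logistic(z):
    return 1.0 / (1.0 + exp(-z))

def fit_beta_partial_ml(x, y, lr=0.5, steps=500):
    """Maximize the partial likelihood L_Psi for the one-dimensional model
    Psi(beta, x) = logistic(beta * x) by gradient ascent on the mean
    log-likelihood.  No estimate of theta or p is required."""
    beta, n = 0.0, len(x)
    for _ in range(steps):
        grad = sum((yi - logistic(beta * xi)) * xi
                   for xi, yi in zip(x, y)) / n
        beta += lr * grad
    return beta
```

Only $\beta$ is estimated here, illustrating the advantage discussed above: the conditional distribution parameters never enter the target function.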
\par Incorporating the IRD constraint into the maximum likelihood framework means that for any given $\beta$ we must be able to estimate the conditional probability over intervals $R_{i}(\beta, \tau) = P(Y = 1 \mid \Psi(\beta, X) \in (\tau_{i-1}, \tau_{i}])$ for an arbitrary $\tau$ from the sample $(\mathbf{X},y)$, which we can then use to calculate the estimate for the unique optimal $\tau(\beta)$ (which is a function of both $\beta$ and the distribution of $X$). Alas, it is usually difficult to derive $R_{i}(\beta,\tau)$ directly from the point-wise conditional probability $P(Y = 1 \mid X)$. In order to facilitate the estimation of IRD for such cases we use an approach similar to (\ref{eq_likelihood_cond}) but in a different context. As we shall see this will enable us to provide parametric estimates of IRD while retaining the convenient structure of our target function $L_{\Psi}$.
\par We assume that $\Psi(\beta, X) \mid Y=k$ has a known distribution which we denote $F(\eta_{k}({\beta}))$, and that $F$ is continuous. We can now use Bayes's theorem to represent $R(\beta, \tau)$ as:
\begin{equation} \label{eq_nu_R}
R(\beta, \tau) = R(\Psi(\beta,X), \tau) = \left( 1 + \frac{1-p}{p} \: \frac{1}{\nu(\beta, \tau)} \right)^{-1}
\end{equation}
where:
\begin{equation} \label{eq_nu_def}
\nu_{i}(\beta, \tau) = \frac{P(\Psi(\beta, X) \in (\tau_{i-1},\tau_{i}] \mid Y=1)}{P(\Psi(\beta, X) \in (\tau_{i-1},\tau_{i}] \mid Y=0)} = \frac{F_{\eta_{1}(\beta)}(\tau_{i}) - F_{\eta_{1}(\beta)}(\tau_{i-1})}{F_{\eta_{0}(\beta)}(\tau_{i}) - F_{\eta_{0}(\beta)}(\tau_{i-1})}
\end{equation}
Therefore by estimating the parameters $p$, $\eta_{0}(\beta)$, $\eta_{1}(\beta)$ we can calculate the estimators $\hat{\nu}(\beta, \tau)$ and $\hat{R}(\beta, \tau)$ and use them to find the unique $\hat{\tau}(\beta)$ which solves:
\begin{equation}
\hat{\text{IRD}}_{r}(\beta) = \text{IRD}_{r}(\beta, \hat{\tau}(\beta)) = 0
\end{equation}
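Under SMLRP the breakpoints can be recovered one at a time by a scalar search, since each $R_{i}$ is strictly increasing in its upper endpoint $\tau_{i}$. The sketch below assumes, purely for illustration, Gaussian class-conditional scores with known parameters, and applies (\ref{eq_nu_R}) with bisection:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def risk_on_interval(lo, hi, p=0.5, mu1=1.0, mu0=-1.0):
    """R_i on (lo, hi] via the interval ratio nu_i, assuming unit-variance
    Gaussian scores within each class (an assumption for illustration)."""
    f1 = Phi(hi - mu1) - Phi(lo - mu1)   # P(score in (lo, hi] | Y=1)
    f0 = Phi(hi - mu0) - Phi(lo - mu0)   # P(score in (lo, hi] | Y=0)
    if f1 == 0.0:
        return 0.0                        # far-left tail underflow guard
    if f0 == 0.0:
        return 1.0                        # far-right tail underflow guard
    return 1.0 / (1.0 + (1.0 - p) / p * f0 / f1)

def solve_tau(r, lo0=-30.0, hi0=30.0):
    """Sequentially bisect for tau_i with R_i = r_i; under SMLRP each R_i
    is strictly increasing in tau_i, so the root (if any) is unique."""
    taus, lo = [], lo0
    for target in r[:-1]:                # the last interval ends at +inf
        a, b = lo + 1e-9, hi0
        for _ in range(200):
            mid = (a + b) / 2.0
            if risk_on_interval(lo, mid) < target:
                a = mid
            else:
                b = mid
        taus.append((a + b) / 2.0)
        lo = taus[-1]
    return taus
```

For the symmetric target $r = (0.1, 0.5, 0.9)$ the recovered breakpoints are mirror images of each other, and each interval attains its target risk level.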
The complete likelihood function under these assumptions is:
\begin{equation}
L(\eta_{0}(\beta), \eta_{1}(\beta), p \mid \mathbf{X}, y, \beta) = \prod_{j=1}^{N} f_{\eta_{y_{j}}(\beta)} ( \Psi(\beta, \mathbf{X}_{j, \cdot}) ) P(Y_{j} = y_{j})
\end{equation}
where $f_{\eta_{y_{j}}(\beta)}$ is the density function of $\Psi(X) \mid Y = y_{j} \sim F(\eta_{y_{j}}({\beta}))$. Since the estimation of $p$, $\eta_{0}(\beta)$, $\eta_{1}(\beta)$ is performed for each $\beta$ independently and only in order to verify compliance with the constraint $\hat{\text{IRD}}_{r}(\beta)=0$, the estimation of these parameters does not affect the value of the target function $L_{\Psi}(\beta \mid \mathbf{X}, y)$. We use this property together with the separability of the likelihood function to estimate $p$ and $\eta_{0}(\beta), \eta_{1}(\beta)$ separately for each $\beta$. For the case of random sampling, the parameter $p$ is relatively easy to estimate independently of $\beta$ as:
\begin{equation}
\hat{p} = \hat{P}(Y=1)=\frac{\sum_{j=1}^{N} y_{j}}{N}
\end{equation}
although for other cases, such as case-control studies, we might need additional information to correct for biased sampling. For the estimation of $\eta_{0}(\beta), \eta_{1}(\beta)$ we use our assumptions about $F$ to build an ancillary optimization problem for each $\beta$ individually, and use the estimates provided by the ancillary problem (conditional on the value of $\beta$) to estimate $\hat{\text{IRD}}_{r}(\beta, \tau)$ (again, the target function $L_{\Psi}$ remains unchanged). The likelihood function for the ancillary problem can be rewritten as:
\begin{equation}
L_{F}(\eta_{0}(\beta), \eta_{1}(\beta) \mid \mathbf{X}, y, \beta) = \prod_{\lbrace j: \: y_{j} = 0 \rbrace} f_{\eta_{0}(\beta)} ( \Psi(\beta, \mathbf{X}_{j, \cdot}) ) \prod_{\lbrace j: \: y_{j} = 1 \rbrace} f_{\eta_{1}(\beta)} ( \Psi(\beta, \mathbf{X}_{j, \cdot}) )
\end{equation}
where $f_{\eta_{k}(\beta)}$ is the density function of the distribution $\Psi(\beta, X) \mid Y=k \sim F(\eta_{k}(\beta))$, and the ancillary maximum likelihood estimation problem is:
\begin{equation}
(\hat{\eta}_{0}(\beta), \hat{\eta}_{1}(\beta)) = \underset{\eta_{0}(\beta), \eta_{1}(\beta)}{\text{argmax}} \: L_{F} (\eta_{0}(\beta), \eta_{1}(\beta) \mid \mathbf{X},y, \beta)
\end{equation}
\par The construction of multiple ancillary ML estimation problems can be computationally demanding, but fortunately, for many known distributions the formula for the ML estimator $\hat{\eta}_{k}(\beta)$ is known in closed form; in fact, it is relatively simple to calculate it directly from the known ML estimators of the conditional distribution $X \mid Y=k \sim \mathcal{F}(\theta_{k})$ as $\hat{\eta}_{k}(\beta) = \eta(\beta, \hat{\theta}_{k})$. This fact significantly reduces the complexity of estimating the IRD constraint.
For a concrete example of such a case using Gaussian logistic regression (which can be easily extended to other GLM instances) see section \ref{section_case_study}.
\par Alternatively, it would be possible to use non-parametric estimators or approximations. While it is possible to attempt to directly approximate $Q_{i}(\beta) = P(Y = 1 \mid \Psi(\beta, X) \leq \tau_{i}(\beta))$, $P( \Psi(\beta, X) \leq \tau_{i-1}(\beta))$, $P( \Psi(\beta, X) \leq \tau_{i}(\beta))$ and utilize (\ref{eq_R_Q}) for a direct estimation of $R(\beta)$ (in this case the equality is useful since $\beta$ is fixed), it would often be more convenient to approximate $p$ and $F_{\eta_{k}(\beta)}$ at $\lbrace \tau_{i}(\beta) \rbrace_{i=1}^{T-1}$ (a total of $2T-1$ approximations per $\beta$) and use (\ref{eq_nu_R}) to calculate $\hat{\nu}(\beta)$ and the resulting IRD estimate.
The primary advantage of this approach is that it requires no additional assumptions about the distribution of $(X,Y)$ and can therefore be easily extended to other non-ML estimation methods of $\beta$. On the other hand, by using non-parametric methods we pay a price both in the quality of our estimates (we ignore information about the distribution of $X$) and in the computational complexity of our estimation scheme.
\par Finally, regardless of our approach to estimation, we recognize that under realistic scenarios we will have to use numerical methods for the calculation of $\hat{\tau}(\beta)$ and for the estimation of the required parameters. We therefore set a low threshold $\varepsilon$ and accept $\beta$ as feasible if our estimated IRD satisfies:
\begin{equation} \label{eq_est_IRD_constr}
\hat{\text{IRD}}_{r}(\beta) = \text{IRD}_{r}(\beta, \hat{\tau}(\beta)) = \Vert \hat{R}(\beta, \hat{\tau}(\beta)) - r \Vert < \varepsilon
\end{equation}
making our feasibility set $\hat{C}_{r}(\varepsilon) = \lbrace \beta \in \mathbb{R}^{M} : \: \hat{\text{IRD}}_{r}(\beta) < \varepsilon \rbrace$ and the penalized Ordinal Risk-Group (ORG) optimization problem:
\begin{equation} \label{optim_constr_pen}
\hat{\beta}_{ORG} = \underset{\beta \in \hat{C}_{r}(\varepsilon)}{\text{argmax}} \: L_{\Psi}(\beta \mid \mathbf{X},y) + Pen(\hat{\tau}(\beta))
\end{equation}
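To make the acceptance rule concrete, here is a minimal Python sketch (the paper's implementation is in R; the helper names are ours) using the sup-norm form $\max_{i} \vert \hat{R}_{i} - r_{i} \vert$ of the estimated IRD:

```python
def ird_hat(R_hat, r):
    """Estimated IRD as the largest absolute deviation of the estimated
    interval risks R_hat from the target risk levels r (sup-norm form)."""
    return max(abs(Ri - ri) for Ri, ri in zip(R_hat, r))

def is_feasible(R_hat, r, eps=1e-7):
    """Acceptance rule: beta is kept only when the estimated IRD < eps."""
    return ird_hat(R_hat, r) < eps

print(is_feasible([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))      # True
print(is_feasible([0.12, 0.51, 0.88], [0.1, 0.5, 0.9]))   # False
```

The threshold $\varepsilon = 10^{-7}$ used in the case study below corresponds to the default `eps` here.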
\par If for our choice of $r$ we have $\hat{\beta}_{LR} \in \hat{C}_{r}(\varepsilon)$ and the distances between the set of breakpoints $\hat{\tau}(\hat{\beta}_{LR})$ are non-degenerate, then the global (unconstrained) optimality of $\hat{\beta}_{LR}$ ensures that it is also the optimal solution of the constrained ordinal problem. It may also serve as the optimal solution of the constrained and penalized ordinal problem, but that will depend on the selection of the aversion parameter. On the other hand, as we have seen in section \ref{section_lower_bounds_IRD}, once we have more than a single breakpoint we introduce limits of feasibility into the maximization problem and may discover that the solution to (\ref{optim_LR}) is no longer feasible ($\hat{\text{IRD}}_{r} (\hat{\beta}_{LR}) \geq \varepsilon$). In such cases we must define a new constrained optimization problem and look for a new optimal solution. We discuss an example of such a case in the following section.
\par There are two issues we leave outside the scope of this paper. First, although the consistency of constrained ML estimation has been explored in various contexts (for example for mixture models \cite{CML_hathaway_1985}), the consistency of the ML estimators under the specific constraint of $\hat{\text{IRD}}=0$ requires verification. Similarly, although in our description of the problem $\tau$ is a function of $\beta$ and the parameters of $\mathcal{F}$ (a result of the uniqueness demonstrated in appendix \ref{appendix_unique_tau}), it remains to be verified whether the consistency of the estimator $\hat{\beta}$ ensures the consistency of $\hat{\tau}(\hat{\beta})$. The fact that in many cases $\hat{\tau}(\beta)$ has no analytical solution is expected to further complicate this problem.
\par Second, in order to measure our estimation errors we require a method for building right-sided confidence intervals for IRD based on the distribution of $\hat{\text{IRD}}$ for a given $\beta$. Since $\nu(\beta, \tau(\beta))$ is a ratio of CDF differences, the process of deriving the distribution of $\hat{\nu}(\beta, \hat{\tau}(\beta))$ from the distribution of $\hat{\eta}_{k}(\beta)$ would require several steps of approximation, primarily because $\hat{\tau}(\beta)$ is the result of numerical estimation (even if the distribution of $\hat{\eta}_{k}(\beta)$ is known). Similar problems apply to non-parametric estimators, although we can see two possible approaches to a solution. The first approach would be to use the equivalent definition of IRD as $\text{IRD}_{r}(\Psi, \tau) = \max_{i} \vert R_{i}(\Psi, \tau) - r_{i} \vert$ and try to prove a Glivenko-Cantelli \cite{vdV00} type theorem for conditional distributions, which would describe the necessary conditions ensuring that:
\begin{equation} \label{eq_sup_cond_diff}
\sup_{x_{2} > x_{1}} \; \vert \hat{P}_{N}(Y = 1 \mid X \in (x_{1},x_{2}]) - P(Y = 1 \mid X \in (x_{1},x_{2}]) \vert \underset{N \rightarrow \infty}{\longrightarrow} 0
\end{equation}
where $\hat{P}_{N}$ is the empirical conditional probability estimator:
\begin{equation}
\hat{P}_{N}(Y = 1 \mid X \in (x_{1},x_{2}]) = \frac{\vert \lbrace j \; : \; \Psi(\mathbf{X}_{j}) \in (x_{1},x_{2}] \; \wedge \; y_{j} = 1 \rbrace \vert}{\vert \lbrace j \; : \; \Psi(\mathbf{X}_{j}) \in (x_{1},x_{2}]\rbrace \vert}
\end{equation}
Building on this result, we could attempt to derive the asymptotic distribution of (\ref{eq_sup_cond_diff}) and construct a test that would be the conditional equivalent of the Kolmogorov-Smirnov test \cite{vdV00}.
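The estimator $\hat{P}_{N}$ above is straightforward to compute directly from a sample; a small self-contained Python sketch (the helper name is ours, not from the paper):

```python
def empirical_cond_prob(psi, y, x1, x2):
    """Empirical P_N(Y = 1 | Psi(X) in (x1, x2]): the share of positives
    among observations whose score falls in the half-open interval."""
    idx = [j for j, s in enumerate(psi) if x1 < s <= x2]
    if not idx:
        return float("nan")  # interval contains no observations
    return sum(y[j] for j in idx) / len(idx)

scores = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
labels = [0,   0,   1,   1,   1,   1]
# scores in (0.2, 0.7] are 0.3, 0.4, 0.6 with labels 0, 1, 1 -> 2/3
print(empirical_cond_prob(scores, labels, 0.2, 0.7))
```

The supremum in (\ref{eq_sup_cond_diff}) would then be taken over all such intervals, which is the source of the theoretical difficulty.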
The second approach, which is less elegant but more plausible, would be to combine (\ref{eq_nu_def}) with the well known asymptotic behaviour of the empirical conditional distribution function:
\begin{equation}
\hat{F}_{\eta_{k}(\beta)}^{(N_{k})}(t) = \frac{\vert \lbrace j \mid \Psi(\beta, \mathbf{X}_{j, \cdot}) \leq t, y_j = k \rbrace \vert}{N_{k}} \quad k=0,1
\end{equation}
where $N_{k} = \vert \lbrace j : y_{j} = k\rbrace \vert$. By the central limit theorem \cite{vdV00}, this estimator weakly converges to $F_{\eta_{k}(\beta)}$ pointwise:
\begin{equation}
\sqrt{N_{k}} \: \left( \hat{F}_{\eta_{k}}^{(N_{k})}(t) - F_{\eta_{k}}(t) \right) \underset{N_{k} \rightarrow \infty}{\longrightarrow} N \left( 0,F_{\eta_{k}}(t) \left( 1-F_{\eta_{k}}(t) \right) \right)
\end{equation}
However, the fact that $\hat{\tau}(\beta)$ changes as a function of the sample (and is not a fixed quantile $t$) means that the points where $\hat{F}_{\eta_{k}}^{(N_{k})}$ is estimated change with the sample ($N_{k}$); therefore the nature of the convergence and the resulting asymptotic distribution depend on the convergence of $\hat{\tau}(\beta) \rightarrow \tau(\beta)$. We would therefore need to find sufficient conditions for:
\begin{equation} \begin{split}
& \tau(\hat{\beta})^{(N_{k})} \rightarrow \tau(\beta) \: \Rightarrow \: \\
& \sqrt{N_{k}} \: \left( \hat{F}_{\eta_{k}}^{(N_{k})}(\tau(\hat{\beta})^{(N_{k})}) - F_{\eta_{k}}(\tau(\hat{\beta})^{(N_{k})}) \right) \underset{N_{k} \rightarrow \infty}{\longrightarrow} N \left( 0,F_{\eta_{k}}(\tau(\beta)) \left( 1-F_{\eta_{k}}(\tau(\beta)) \right) \right)
\end{split} \end{equation}
The next step would be to use the strong uniform convergence $\hat{F}_{\eta_{k}}^{(N_{k})} \rightarrow F_{\eta_{k}}$ (as provided by the Glivenko-Cantelli theorem) to approximate $\hat{F}_{\eta_{k}}^{(N_{k})}(\hat{\tau}_{i}(\beta))$ as normally distributed with mean $\mu = \hat{F}_{\eta_{k}}^{(N_{k})}(\hat{\tau}_{i}(\beta))$ and variance $\sigma^{2} = \hat{F}_{\eta_{k}}^{(N_{k})}(\hat{\tau}_{i}(\beta)) \left( 1 - \hat{F}_{\eta_{k}}^{(N_{k})}(\hat{\tau}_{i}(\beta)) \right)$. Combined with Donsker's theorem \cite{dudley1999}, we should be able to approximate the asymptotic distribution of the difference of two points of $\hat{F}_{\eta_{k}}^{(N_{k})}$ (which makes up both the numerator and the denominator of $\hat{\nu}_{i}$) as normally distributed with mean $\mu = \hat{F}_{\eta_{k}}(\hat{\tau}_{i} (\beta)) - \hat{F}_{\eta_{k}}(\hat{\tau}_{i-1} (\beta))$ and variance $\sigma^{2} = \left( \hat{F}_{\eta_{k}}(\hat{\tau}_{i} (\beta)) - \hat{F}_{\eta_{k}}(\hat{\tau}_{i-1} (\beta)) \right) \left( 1 - \hat{F}_{\eta_{k}}(\hat{\tau}_{i} (\beta)) + \hat{F}_{\eta_{k}}(\hat{\tau}_{i-1} (\beta)) \right)$. The final step would be to use the work of Hinkley \cite{HINKLEY01121969}, which describes the distribution of a ratio of two uncorrelated normal random variables, to approximate the distribution of $\hat{\nu}_{i}(\beta,\hat{\tau}(\beta))$, which we can then use to estimate $P(\hat{\text{IRD}}_{r}(\beta) > \varepsilon)$. We note that the construction of such a test would also mean that we could change the definition of our feasibility set to $\hat{C}_{r}(\varepsilon, \alpha) = \lbrace \beta \in \mathbb{R}^{M} : \: P(\hat{\text{IRD}}_{r}(\beta) < \varepsilon) > 1-\alpha \rbrace$. We leave the details and proofs of these ideas for future papers.
\section{Case study: Gaussian Logistic Regression \label{section_case_study}}
\par Logistic regression is one of the most studied classification methods in the scientific literature and has been widely applied in statistics, scientific research and industry. The name ``logistic'' for the function $f(x) = \frac{e^{x}}{1 + e^{x}}$ was originally coined by Verhulst as early as the 19th century, but it was Cox \cite{cox_1969} who first used it in the context of binary data analysis. The concept of multinomial logistic regression was first suggested by Cox (1966) \cite{Cox_1966} and developed independently by Theil (1969) \cite{Theil_1969}. The link to ordered choice models (ordered logistic regression) was made by McFadden in his paper from 1973 \cite{McFadden_1973}. Cramer (2002) \cite{RePEc:dgr:uvatin:20020119} provides a complete historical review.
\par Logistic regression belongs to the group of classification methods that estimate class membership probability rather than predict class membership. It is a special instance of a larger group of parametric models called generalized linear models (GLM \cite{GLM_1972}), which extend linear models by allowing a predefined link function $g$ that connects the linear model $\beta^{T} X$ ($\beta \in \mathbb{R}^{P}$) to the response variable $Y$ (meaning that $g^{-1}(E[Y \mid X]) = \beta^{T} X$), and assume that the conditional distribution of $Y$ belongs to an exponential family. In the case of logistic regression the link function is assumed to be the logistic function $g(\beta^{T} x) = \frac{e^{\beta^{T} x}}{1+e^{\beta^{T} x}}$, making the inverse function $g^{-1}(p) = \text{logit}(p) = \log \left( \frac{p}{1-p} \right)$. The probability of belonging to the ``special class'' (in the case of 2 classes) conditioned on the r.v.\ $X$ is assumed to be:
\begin{equation} \label{eq_LR_model}
P_{\beta}(Y = 1 \mid X ) = \frac{e^{\beta^{T} X}}{1+e^{\beta^{T} X}}
\end{equation}
where the assumptions on $(X,Y)$ can be modified to match a wide variety of cases, for example to a non-random sampling scheme like case-control studies or semi-parametric models \cite{PRENTICE01121979}.
\par For the purpose of demonstrating our method we assume that $\mathbf{X}$ is a $N \times P$ matrix representing $N$ i.i.d random samples from a $P$-dimensional multivariate normal distribution.
Under this assumption the vector $y \in \lbrace 0,1 \rbrace^{N}$ of class memberships represents the result of $N$ independent Bernoulli random variables $\lbrace Y_{j} \rbrace_{j=1}^{N}$. Even under these assumptions, the ML problem does not have a closed-form solution, and is usually solved using numerical maximum likelihood algorithms. The log-likelihood function is:
\begin{equation}
l_{LR} (\beta \mid \mathbf{X},y) = \log(L_{LR}(\beta \mid \mathbf{X},y)) = \sum_{j=1}^{N} y_{j} \beta^{T} \mathbf{X}_{j, \cdot} - \sum_{j=1}^{N} \log \left( 1 + e^{ \beta^{T} \mathbf{X}_{j, \cdot} } \right)
\end{equation}
and the logistic regression (LR) maximum likelihood optimization problem is:
\begin{equation} \label{optim_LR}
\hat{\beta}_{LR} = \underset{\beta \in \mathbb{R}^{P}}{\text{argmax}} \; l_{LR}(\beta \mid \mathbf{X},y)
\end{equation}
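For illustration, the log-likelihood and the unconstrained ML problem can be sketched in a few lines of Python (the paper's computations used R with the alabama library; this toy gradient-ascent solver is ours and not the algorithm used in the paper):

```python
import math

def loglik_lr(beta, X, y):
    """l_LR(beta | X, y) = sum_j [ y_j * beta^T x_j - log(1 + exp(beta^T x_j)) ]."""
    total = 0.0
    for xj, yj in zip(X, y):
        eta = sum(b * v for b, v in zip(beta, xj))
        total += yj * eta - math.log1p(math.exp(eta))
    return total

def fit_lr(X, y, step=0.5, iters=2000):
    """Toy gradient ascent on the log-likelihood (gradient: X^T (y - p)).
    Illustration only; production code would use Newton/IRLS or a library."""
    P = len(X[0])
    beta = [0.0] * P
    n = len(y)
    for _ in range(iters):
        grad = [0.0] * P
        for xj, yj in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, xj))
            p = 1.0 / (1.0 + math.exp(-eta))
            for m in range(P):
                grad[m] += (yj - p) * xj[m]
        beta = [b + step * g / n for b, g in zip(beta, grad)]
    return beta

X = [[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]]  # intercept + 1 feature
y = [0, 0, 1, 1]
print(loglik_lr([0.0, 0.0], X, y))  # at beta = 0 every eta is 0: -4*log(2) ≈ -2.7726
print(fit_lr(X, y)[1] > 0)          # the fitted slope recovers the positive association: True
```

At $\beta = 0$ each term contributes $-\log 2$, which provides a simple sanity check on the implementation.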
\par As we noted in section \ref{sec_IRD_est}, the parametric estimation of IRD requires several additional assumptions. We assume that the conditional distributions $X \mid Y=k$ ($k=0,1$) are also multivariate normal, and in order to achieve SMLRP we assume equal conditional covariance. Using the terms defined in the construction of (\ref{optim_constr_pen}), our assumptions on Gaussian logistic regression translate into the following:
\begin{equation}
\Psi(\beta, x) = \frac{\exp(\beta^{T}x)}{1 + \exp(\beta^{T}x)}, \quad X \mid Y=k \sim MVN(\underline{\mu}_{k}, \Sigma)
\end{equation}
As a result:
\begin{equation} \label{eq_beta_X_norm_dist}
\text{logit}(\Psi(\beta, X)) \mid Y=k \: \sim \: N(\mu_{k}(\beta) = \beta^{T} \underline{\mu}_{k}, \sigma^{2}(\beta) = \beta^{T} \Sigma \beta)
\end{equation}
Conveniently, the ML estimators follow a similar pattern. For the construction of the known ML estimators of the distribution of $X$ we denote by $\mathbf{X}^{(k)}$ the matrix composed of all the rows of $\mathbf{X}$ for which $y_{j} = k$, $N_{1} = \sum_{j=1}^{N} y_{j}$, $N_{0} = N - N_{1}$, and by $\overline{\mathbf{X}}_{m}^{(k)} = \frac{\sum_{j=1}^{N_{k}} \mathbf{X}^{(k)}_{j,m}}{N_{k}}$ the average of the $m$'th column of $\mathbf{X}^{(k)}$ ($m = 1, \dots, P$). The ML estimators are:
\begin{equation}
\hat{\underline{\mu}}_{k} = \overline{\mathbf{X}}^{(k)} = (\overline{\mathbf{X}}_{1}^{(k)}, \ldots, \overline{\mathbf{X}}_{P}^{(k)}) \quad (k = 0,1)
\end{equation}
and having assumed equal covariance we use the pooled covariance matrix estimator:
\begin{equation} \begin{split}
\hat{\Sigma}_{k} = & \frac{1}{N_{k}} \sum_{j=1}^{N_{k}} (\mathbf{X}^{(k)}_{j,\cdot} - \overline{\mathbf{X}}^{(k)})^{T} (\mathbf{X}^{(k)}_{j,\cdot} - \overline{\mathbf{X}}^{(k)}) \\
\hat{\Sigma} = & \frac{1}{N} ( N_{0} \hat{\Sigma}_{0} + N_{1} \hat{\Sigma}_{1}) \\
\end{split} \end{equation}
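These estimators are easy to verify numerically; a pure-Python sketch (the paper's code is in R, and the function name is ours) computing the class means and the pooled $1/N$ covariance:

```python
def class_means_pooled_cov(X, y):
    """ML estimators: per-class mean vectors and the pooled (1/N) covariance."""
    P = len(X[0])
    stats = {}
    for k in (0, 1):
        rows = [x for x, lab in zip(X, y) if lab == k]
        Nk = len(rows)
        mu = [sum(r[m] for r in rows) / Nk for m in range(P)]
        # per-class ML covariance (divisor N_k, not N_k - 1)
        S = [[sum((r[a] - mu[a]) * (r[b] - mu[b]) for r in rows) / Nk
              for b in range(P)] for a in range(P)]
        stats[k] = (Nk, mu, S)
    N = stats[0][0] + stats[1][0]
    Sigma = [[(stats[0][0] * stats[0][2][a][b] + stats[1][0] * stats[1][2][a][b]) / N
              for b in range(P)] for a in range(P)]
    return stats[0][1], stats[1][1], Sigma

# 1-D toy check: class means 1 and 5, within-class variance 1 in both classes
print(class_means_pooled_cov([[0.0], [2.0], [4.0], [6.0]], [0, 0, 1, 1]))
# ([1.0], [5.0], [[1.0]])
```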
\par The assumption of multivariate-normal conditional distributions enables us to avoid the construction of an ancillary ML problem for each $\beta$ by using the known relationship
between the ML estimators $(\hat{\underline{\mu}}_{0}, \hat{\underline{\mu}}_{1}, \hat{\Sigma})$ and the $\beta$-transformed ML estimators $(\hat{\mu_{k}}(\beta), \hat{\sigma}(\beta))$:
\begin{equation}
\hat{\mu}_{k}(\beta) = \beta^{T} \hat{\underline{\mu}}_{k}, \quad \hat{\sigma}(\beta) = \sqrt{\beta^{T} \hat{\Sigma} \beta}
\end{equation}
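The $\beta$-transformation is just a pair of inner products plus a quadratic form; a short sketch (hypothetical helper, consistent with the class-mean and pooled-covariance estimators above):

```python
import math

def transformed_params(beta, mu0, mu1, Sigma):
    """beta-transformed ML estimators: mu_k(beta) = beta^T mu_k and
    sigma(beta) = sqrt(beta^T Sigma beta)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    var = dot(beta, [dot(row, beta) for row in Sigma])
    return dot(beta, mu0), dot(beta, mu1), math.sqrt(var)

# 1-D check: beta = 2, mu0 = 1, mu1 = 5, Sigma = 1
print(transformed_params([2.0], [1.0], [5.0], [[1.0]]))  # (2.0, 10.0, 2.0)
```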
\par The strict monotonicity of $\text{logit}(p) = \log \left( \frac{p}{1-p} \right) = \Psi^{-1}_{\beta}(p)$ in $p$ means that we can use the equality:
\begin{equation} \begin{split}
P & (\Psi(\beta, X) \in (\tau_{i-1}(\beta),\tau_{i}(\beta)] \mid Y=k) \\
& = P(\beta^{T} X \in (\text{logit}(\tau_{i-1}(\beta)),\text{logit}(\tau_{i}(\beta))] \mid Y=k) \\
& = \Phi \left( \frac{\text{logit}(\tau_{i}(\beta)) - \mu_{k}(\beta)}{\sigma(\beta)} \right) - \Phi \left( \frac{\text{logit}(\tau_{i-1}(\beta)) - \mu_{k}(\beta)}{\sigma(\beta)} \right)
\end{split} \end{equation}
to estimate $\nu_{i}(\beta)$ as:
\begin{equation}
\hat{\nu}_{i}(\beta) =
\frac{
\Phi \left( \frac{\text{logit}(\hat{\tau}_{i}(\beta)) - \hat{\mu}_{1}(\beta)}{\hat{\sigma}(\beta)} \right) -
\Phi \left( \frac{\text{logit}(\hat{\tau}_{i-1}(\beta)) - \hat{\mu}_{1}(\beta)}{\hat{\sigma}(\beta)} \right)}
{
\Phi \left( \frac{\text{logit}(\hat{\tau}_{i}(\beta)) - \hat{\mu}_{0}(\beta)}{\hat{\sigma}(\beta)} \right) -
\Phi \left( \frac{\text{logit}(\hat{\tau}_{i-1}(\beta)) - \hat{\mu}_{0}(\beta)}{\hat{\sigma}(\beta)} \right)}
\end{equation}
Finally, we use the assumption of random sampling to estimate $\hat{p} = \frac{1}{N} \sum_{j=1}^{N} y_{j}$ and utilize (\ref{eq_nu_R}) to construct our parametric estimation of IRD for logistic regression.
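Putting the pieces together, a Python sketch of the parametric $\hat{\nu}_{i}$ estimate (with $\Phi$ built from the error function) and of $\hat{p}$; the function names are ours, and the paper's actual implementation is in R:

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logit(p):
    return math.log(p / (1.0 - p))

def p_hat(y):
    """Random-sampling estimate of p = P(Y = 1): the mean of the 0/1 labels."""
    return sum(y) / len(y)

def nu_hat(tau_lo, tau_hi, mu0, mu1, sigma):
    """Parametric estimate of nu_i: the conditional-normal probability of the
    score interval (tau_lo, tau_hi] for class 1 over the same for class 0."""
    num = Phi((logit(tau_hi) - mu1) / sigma) - Phi((logit(tau_lo) - mu1) / sigma)
    den = Phi((logit(tau_hi) - mu0) / sigma) - Phi((logit(tau_lo) - mu0) / sigma)
    return num / den

# Sanity check: with mu0 = -1, mu1 = +1 and an interval symmetric on the logit
# scale, the two probabilities coincide, so nu = 1.
lo, hi = 1.0 / (1.0 + math.e), math.e / (1.0 + math.e)  # logits -1 and +1
print(round(nu_hat(lo, hi, -1.0, 1.0, 1.0), 6))  # 1.0
print(p_hat([1, 0, 0, 1, 0]))                    # 0.4
```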
\subsection{Example: The Wisconsin Diagnostic Breast Cancer \\ (WDBC) Dataset\label{section_example}}
\par In this section we present an example of the sub-optimality of using one of the most commonly used classification methods - logistic regression (LR) - to solve a relatively simple ordinal problem. We then provide the Ordinal Risk-Group version of logistic regression (ORG-LR) solution to the same problem and compare the results.
\par The dataset we used for this example is the extensively used Wisconsin Diagnostic Breast Cancer \cite{10.1117/12.148698} dataset from the UCI Machine Learning Repository \cite{UCI},
which contains the analysis of cell nuclei from 556 patients using digitized images of fine needle aspirates (FNA) of extracted breast masses. Since we assumed a continuous $\mathcal{F}$ and equal variance, we selected the following features for the construction of our models: texture, log area, smoothness, log compactness, log concave points, log symmetry and an intercept variable. The final diagnosis (``Malignant'': $N = 212$; ``Non-Malignant'': $N = 344$) was used as the dependent variable for the logistic regression analysis and the ordinal risk-group analysis.
The code for this example was written in the R programming language \cite{cran}, using internal optimization algorithms and the Augmented Lagrangian Adaptive Barrier Minimization Algorithm (the alabama library) with the constraint $\hat{\text{IRD}}_{r}(\beta) < \varepsilon = 10^{-7}$.
\par The first set of risk levels we tested was $r_{1} = (10\%, 50\%, 90\%)$. The estimated IRD for the logistic regression solution $\hat{\beta}_{LR}$ and the matching (non-degenerate) set of breakpoints $\hat{\tau}(\hat{\beta}_{LR}) = (-2.6918, 9.1698)$ was above our set feasibility threshold ($\hat{\text{IRD}}_{r_{1}} (\hat{\beta}_{LR}) = 7.8776 \times 10^{-5}$).
The second set of risk levels we tested was $r_{2} = (20\%, 50\%, 80\%)$. For this set the solution $\hat{\beta}_{LR}$ provided by the logistic regression was clearly infeasible: the interval associated with the 50\% risk level was clearly degenerate ($ \hat{\tau}_{2}(\hat{\beta}_{LR}) - \hat{\tau}_{1}(\hat{\beta}_{LR}) < 1.1 \times 10^{-6}$) and the estimated IRD was well above our set threshold ($\hat{\text{IRD}}_{r_{2}}(\hat{\beta}_{LR}) = 0.0014 > \varepsilon$). We proceeded to construct a constrained maximum likelihood problem for both $r_{1}, r_{2}$ as described in section \ref{section_case_study}. For $r_{1}$ an unconstrained problem was sufficient, with IRD $< 10^{-7}$ and a non-degenerate $\hat{\tau}(\hat{\beta})$. For $r_{2}$, the unpenalized ordinal risk-group problem produced degenerate solutions, so we added the penalty function $Pen(\beta, \tau) = \left( \frac{\max_{i} \vert \tau_{i} - \tau_{i-1} \vert}{\beta^{T}(\hat{\mu}_{1} - \hat{\mu}_{0})} - 1 \right)^{2}$, designed to balance the distance between the breakpoints $\tau_{1}, \tau_{2}$ against the distance between the estimated class means, and selected a penalty coefficient $\gamma = 10$. Since in many cases the optimization algorithm converged to a local minimum, we randomly sampled 25,000 starting points for each set of risk categories we tested. The estimates for logistic regression (LR) and ordinal risk-group logistic regression (ORG-LR) for both $r_{1}, r_{2}$ are summarized in table \ref{table_LR_ORG}.
\begin{table}
\begin{center} \begin{tabular}{l|ll|ll}
& \multicolumn{2}{c|}{$r_{1}$} & \multicolumn{2}{c}{$r_{2}$} \\
\hline
\hline
& \multicolumn{1}{c}{LR} & ORG-LR & \multicolumn{1}{c}{LR} & ORG-LR \\
\hline
Intercept & -87.6641 & 12.3928 & -87.6641 & -3.1977 \\
Texture & 0.2864 & 0.1894 & 0.2864 & 0.1844\\
log(Area) & 11.9706 & 0.8999 & 11.9706 & -0.3573 \\
Smoothness & 67.7342 & -25.2575 & 67.7342 & 4.0826 \\
log(Compactness) & -1.8107 & -3.9205 & -1.8107 & 0.3070 \\
log(Concave Points) & 3.6698 & 11.8320 & 3.6698 & 0.1162 \\
log(Symmetry) & 3.2556 & 0.7888 & 3.2556 & -0.3529 \\
\hline
log-Likelihood & -45.9909 & -94.8944 & -45.9909 & -311.019 \\
\hline
$\hat{\tau}_{1}$ & -2.6918 & -4.4029 & 3.117868 & -0.7088 \\
$\hat{\tau}_{2}$ & 9.1698 & 6.8524 & 3.117869 & 1.3376 \\
\hline
$P(Y = 1 \mid \Psi(\hat{\beta},X) \leq \tau_{1})$ & 9.2925\% & 9.9751\% & 16.7680\% & 19.9857\% \\
$P(Y = 1 \mid \Psi(\hat{\beta},X) \in (\tau_{1}, \tau_{2}])$ & 50.3289\% & 50.0130\% & 51.5483\% & 50.0116\% \\
$P(Y = 1 \mid \Psi(\hat{\beta},X) > \tau_{2})$ & 89.5768\% & 89.9870\% & 78.7930\% & 79.9948\% \\
\hline
$\hat{\text{IRD}}$ & 7.88e-05 & 9.69e-08 & 0.0014 & 3.66e-08 \\
\end{tabular} \end{center}
\caption[LR and ORG-LR maximum likelihood estimators for $r_{1},r_{2}$]{Maximum likelihood (ML) estimators of coefficients, log-likelihood, optimal breakpoints $\tau$, model predicted probabilities (assuming multivariate normal distribution) and IRD estimates for unconstrained logistic regression and ordinal risk-group logistic regression (ORG-LR) for $r_{1} = (10\%, 50\%, 90\%)$, $r_{2} = (20\%, 50\%, 80\%)$ \label{table_LR_ORG}}
\end{table}
\par The differences between the two methods can be further illustrated by looking at the distributions of the logit-transformed predicted probabilities $\lbrace \text{logit}(\hat{Y}_{i}) \rbrace_{i=1}^{N}$ of both methods. Figure \ref{figure_LR_vs_ORG-LR_10-50-90} illustrates the logistic regression solution for $r_{1}$ (top graph) and the ordinal risk-group logistic regression solution for $r_{1}$ (bottom graph), and figure \ref{figure_LR_vs_ORG-LR_20-50-80} illustrates the same results for $r_{2}$. A comparison of the two graphs in each figure shows that for both sets of risk levels the ORG-LR solution compromises the quality of separation between the two classes in order to achieve feasibility, and in the more extreme case of $r_{2}$ reduces separation dramatically in order to avoid degenerate solutions.
\begin{figure}[h]
\center
\includegraphics[scale=0.9]{wdbc_LR_vs_ORG-LR_10-50-90.eps}
\caption[The WDBC dataset: logit-transformed LR and ORG-LR predictions for $r_{1}$]{\label{figure_LR_vs_ORG-LR_10-50-90} Logit-transformed predictions and separation between the malignant and non-malignant classes of logistic regression (LR) (top graph) and the Ordinal Risk-Group Logistic Regression (ORG-LR) (bottom graph) for $r_{1}$. The black dotted lines mark the matching sets of breakpoints $\hat{\tau}(\hat{\beta}_{LR})$ and $\hat{\tau}(\hat{\beta}_{ORG-LR})$.}
\end{figure}
\begin{figure}[h]
\center
\includegraphics[scale=0.9]{wdbc_LR_vs_ORG-LR_20-50-80.eps}
\caption[The WDBC dataset: logit-transformed LR and ORG-LR predictions for $r_{2}$]{\label{figure_LR_vs_ORG-LR_20-50-80} Logit-transformed predictions and separation between the malignant and non-malignant classes of logistic regression (LR) (top graph) and the Ordinal Risk-Group Logistic Regression (ORG-LR) (bottom graph) for $r_{2}$. The black dotted lines mark the matching sets of breakpoints $\hat{\tau}(\hat{\beta}_{LR})$ and $\hat{\tau}(\hat{\beta}_{ORG-LR})$.}
\end{figure}
\par In order to validate the results of our new method and compare them to the performance of the logistic regression solution, we performed a cross-validation study. We randomly divided the dataset into two groups: 90\% of the patients were randomly sampled as a training set, from which a logistic regression model and an ordinal risk-group model were constructed, and the remaining 10\% were used as a test set for the models. We repeated the process with 25,000 random samples and calculated the percentage of ``Malignant'' cases found in each predicted risk group for each of the models. The results of applying the training models to the test sets for $r_{1} = (10\%, 50\%, 90\%)$ were $(0.7173\%, 74.6978\%, 100\%)$ for logistic regression (IRD = 0.07962) and $(3.8804\%, 57.4549\%, 99.9329\%)$ for ordinal risk-group logistic regression (IRD = 0.01917). The cross-validation results for $r_{2} = (20\%,50\%,80\%)$ using ORG-LR were $(15.7349\%, 52.9507\%, 84.9715\%)$ (IRD = 0.0052). Since the logistic regression solution was degenerate for $r_{2}$, we were unable to test it with cross-validation.
\par The comparison of the cross-validation results of the two methods for $r_{1}$ shows that although neither model performed very well, the ORG-LR solution outperformed the logistic regression solution (its IRD is approximately 4 times smaller), even though the differences in IRD between the two models do not seem significant. We estimate that one of the reasons for the high absolute deviance in IRD across all models and risk levels we tested is that the data is not exactly normally distributed (as evident in figures \ref{figure_LR_vs_ORG-LR_10-50-90} and \ref{figure_LR_vs_ORG-LR_20-50-80}). In addition, specifically for ORG-LR, we suspect that the generic algorithms we used for the ordinal risk-group constrained maximum likelihood optimization failed to converge to the true global minimum in some of the cross-validation iterations. Verifying this hypothesis would require either the use of more specialized optimization algorithms (for example, by analytically calculating the derivatives of the IRD constraint) or a very large scale simulation study that would require billions of iterations. Both approaches are outside the scope of this paper and we leave them for future studies.
\section{Conclusions}
\par The exact estimation of the conditional risk function is an important part of practical and theoretical research. However, the practical application of this information very often takes the form of a finite and small set of resulting actions. Although conditional risk quantiles provide valuable information, we ultimately want to know the risk associated with adjacent non-overlapping intervals in order to create distinct ordinal risk groups. As we have demonstrated in section \ref{section_cond_precentiles}, quantile regression is not useful for that purpose. Furthermore, section \ref{section_lower_bounds_IRD} demonstrates that the practice of post-hoc division of the continuous estimate of conditional risk into intervals ignores the limitations introduced by the lower bounds on IRD and may produce sub-optimal or degenerate solutions.
\par Our formulation of the optimization problem, as presented in section \ref{section_ORGC}, reflects our understanding that while the model's ability to separate the classes remains the key issue, we must introduce both a new constraint and a penalty function in order to achieve two additional objectives: an accurate risk distribution and a usable partition scheme. While IRD represents an absolute measure of the model's quality and must be a constraint on the optimal solution, the ``softer'' requirement on minimal interval length should allow for flexibility in application. We believe that a penalty function enables better control and adaptation through the choice of the function and the aversion parameter.
\par Finally, we wish to emphasize the implications of the most counter-intuitive result of this paper - the existence of limitations on certain risk structures (the vector $r$) in the form of lower bounds on the error rate IRD (equation \ref{eq_IRD_def}). Although most of the examples we described are linear or logistic models with Gaussian conditional distributions, the existence of lower bounds holds for any continuous risk estimator. A re-evaluation of the optimal properties of such estimators in the context of risk discretization is therefore required. We leave the specifics of applying these ideas to other classification methods, as well as proofs of consistency and the construction of confidence intervals, to future studies.
\addcontentsline{toc}{section}{References}
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Convection and convective overshooting can play a fundamental role in the evolution of a star, by being efficient mixing processes.
From the coupling of convection with nuclear reactions, significant changes on the evolution may be expected when convective overshoot is present, regardless of using different prescriptions for convection and/or overshooting.
Several authors have addressed the effect of including overshoot in modelling the evolution of main and post-main sequence stars \cite[e.g.][and references therein]{stothers90,mowlavi94}.
But the problem of how much overshooting has to be included is still far from solved \cite[see][]{renzini87}.
Several authors modelled core overshooting in stars \cite[e.g.][]{roxburgh78,bressan81,langer86,xiong85,xiong86}, arriving at different results concerning the extent of the convective penetration into the radiative zone, from negligible to quite important.
Another open issue is how overshoot affects the local temperature stratification \cite[e.g.][]{canuto97}.
But as far as evolution is concerned the extent of overshooting is the key aspect \citep{deupree00,ribas00}.
There are several indications, both theoretical and observational, that support the existence of overshooting in the convective core of intermediate and high mass stars.
The comparison of masses and radii of eclipsing binaries with theoretical models suggests not only the existence of a certain amount of overshooting, but also its dependence on stellar mass \citep{ribas00}.
On the other hand, theoretical predictions of the apsidal motion rate for very eccentric binary orbits are compatible with observed values only if overshooting is included in the models \citep{claret93}.
Moreover, 2D hydrodynamic simulations performed by \citet{deupree00} for stars with masses from 1.2 to 20~$M_\odot$ predict convective core overshooting for all models.
The effect of overshoot on stellar evolution has already been extensively discussed in the literature, but its effect on models of pre-main sequence stars has never been examined taking the full network of nuclear reactions into account.
The overall effect to be expected is a slight delay in the evolution, redefining the age at which the star arrives at the zero age main sequence (ZAMS).
As the zero point for age of young stars is in itself an open problem, such a global effect of overshoot on the PMS has been ignored.
Mixing throughout most of the overshoot layer at the border of a convective core in main sequence stars is expected to be efficient \cite[e.g.][]{browning04}.
This should also be the case for the end of the PMS evolution, as the time scales for mixing due to convection are still much smaller than the time scales for nuclear reactions.
The uncertainties on the thermodynamic stratification within an overshoot layer are not expected to be of major relevance for determining the effects on the early evolution, but the extent of the overshoot layer and its effect on mixing the stellar material at that location are bound to be decisive.
In a previous work \citep{marques04} we studied the effects of
the time step and convective overshooting in the calibration of the young binary EK~Cep.
It has been shown that the time step adopted for the PMS evolution is a key numerical aspect to adequately reproduce the expected behaviour in this rapid phase of evolution.
When the time step is properly defined for calculating the evolution of intermediate mass stars with overshoot the evolutionary tracks near the end of the PMS display an extra loop.
This loop seems to be present in the output of previous works \cite[e.g.][]{siess00}, although it has never been discussed in the literature, as in most cases an inadequate time step in the integration procedure would make the loop appear to be a numerical artifact.
Consequently, its nature and physical validity have never been analysed.
Given that an adequate definition of the time step for the PMS evolution together with the most recent set of physics produces the loop near the ZAMS (when there is a significant amount of overshooting), it has lead us to perform a detailed analysis on the underlying reasons why such a loop is predicted by an evolutionary code.
In particular, the goal of this work is to establish whether such an effect should be expected in real stars.
Instead of dismissing it as a numerical artifact, as implicitly done by other authors, we have investigated whether it could be consistent with the physical picture that the new state-of-the-art physics predicts (in particular the network of nuclear reactions).
Here we provide such an analysis showing that there may be a physical reason for such a behaviour.
In spite of the difficulties of modelling convection and convective overshoot, our analysis seems to indicate that the loop in the HR diagram at the end of the PMS evolution could happen in real stars.
We start by discussing how overshoot is introduced in a one-dimensional evolutionary code and move on to show evolutionary tracks without and with overshoot for the particular case of a 2~$M_\odot$ star.
In order to illustrate the physics behind the existence of the loop at the end of the PMS, we then discuss the limiting case of a specific value of the overshoot extent and the evolution of the chemical abundances that produce the loop.
The paper ends by addressing how different input physics and stellar parameters change the limiting value of the overshoot producing the loop, and what observational evidence could be expected in order to identify a star undergoing such a phase in its PMS evolution.
\section{Modelling convective overshooting in PMS evolution}
\label{sec:overs}
In this work we propose to study in detail the effects of convective overshooting in the PMS evolution of intermediate mass stars, as they approach the ZAMS, and so it is relevant to start by addressing how overshooting is implemented in an evolutionary code.
We will restrict our discussion to the case of intermediate mass stars (around 2~$M_\odot$), where there is a small central convective core.
The boundary of a convective core $r_{\mbox{\scriptsize co}}$, with a mass $M_{\mbox{\scriptsize co}}$ (see Fig.~\ref{fig:nablas_ov}), is defined by the instability criteria, namely the Schwarzschild criterion or, if the chemical composition is not homogeneous, the Ledoux criterion.
For the Schwarzschild criterion, the border between the convective zone (core) and the radiative zone (envelope) is located where the adiabatic gradient $\nabla_{\mbox{\scriptsize ad}}$ equals the radiative gradient $\nabla_{\mbox{\scriptsize rad}}$.
Convective elements that reach the boundary with non-zero velocities overshoot it, reaching a region where $\nabla_{\mbox{\scriptsize rad}}{<}\nabla_{\mbox{\scriptsize ad}}$.
To transport all the energy generated in the convective core upwards, the luminosity transported by radiation in the zone immediately above the boundary must exceed the total luminosity; that is, in this zone one must have $\nabla_{\mbox{\scriptsize rad}}{<}\nabla_{\mbox{\scriptsize ov}}{<}\nabla_{\mbox{\scriptsize ad}}$.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=\hsize]{\figdir/fig01}
\end{tabular}
\caption{Schematic representation of the behaviour of temperature gradient (thick continuous line) in the convective core and overshoot region of an intermediate mass star.
The notation is defined in the text.
CZ represents the proper convective zone, while MZ is the mixed zone.}
\label{fig:nablas_ov}
\end{figure}
All models of overshooting have inadequacies \citep{renzini87,canuto97}, as it is not possible to fully describe in one dimension the effect of overshooting plumes.
The standard option is to parametrize the effect overshoot may have on the local structure by adopting a prescription.
There are two major aspects of overshoot that we need to consider in such a prescription if we want to discuss the effect on stellar evolution.
These are the temperature stratification within the overshoot layer and the mixing taking place in such a region at the top of the ``proper'' convective core.
\subsection{Thermal stratification at the overshoot layer}
\label{subsec:ovt}
The temperature stratification below $r_{\mbox{\scriptsize co}}$ is given by the MLT formulation (which corresponds to the convective zone; CZ).
Given the very small superadiabaticity expected in the proper convection zone, the actual structure is not sensitive to the details of the particular formulation we use, or to the mixing length parameter $\alpha$.
Here we will adopt a value of $\alpha{=}1.35$.
As shown by \citet{marques04}, for stars with masses around 2~$M_\odot$ any other reasonable value (such as the one obtained from a solar calibration) will produce the same evolutionary tracks near the end of the PMS evolution.
As discussed in several works, the temperature stratification in the overshoot layer is expected to be subadiabatic.
We assume, as in \citet{y205}, that the stratification in the overshoot layer is unaffected by the overshoot, a result also obtained by \citet{browning04}.
Consequently, within the overshoot layer of thickness $d_{\mbox{\scriptsize ov}}$
(corresponding to $r_{\mbox{\scriptsize co}}{\le}r{\le}r_{\mbox{\scriptsize co}}{+} d_{\mbox{\scriptsize ov}}$) the stratification is assumed to be radiative ($\nabla_{\mbox{\scriptsize ov}}{=}\nabla_{\mbox{\scriptsize rad}}$).
Although it is a crude choice, its implementation in an evolutionary code is simple and does not affect the key aspects of overshoot that can be expected to interfere with the nuclear reactions that take place as the star approaches the main sequence.
To confirm that this is so, we also consider other options for the thermal stratification of the overshoot layer.
Ideally, overshooting would be represented using more sophisticated non-local formulations or the results of 3D simulations
\cite[e.g.][]{grossman96,canuto97,browning04}.
Unfortunately, most of these formulations cannot be fully implemented in detailed evolutionary codes, as the non-local character of overshooting compromises the efficiency of the integration in time for stellar evolution.
\subsection{Mixing in the overshoot layer}
\label{subsec:ovc}
The key assumption that we adopt here is that the mixing of stellar matter within the overshoot layer is highly efficient, taking place on the same time scales as in the proper convection zone \cite[see][]{browning04}.
This implies that, in particular at the end of the PMS, it is fully justifiable to consider that convection and overshooting induce instantaneous mixing as far as evolution is concerned.
Also, if the stellar mass is not too high, as is the case for the regime of intermediate mass stars we are addressing here, such mixing takes place on time scales much shorter than the mean lifetimes of all reactions in the CNO bi-cycle that may take place before the star reaches the ZAMS.
As long as the assumption of instantaneous mixing within the overshoot layer stands, the implications for the evolution of more complex formulations for the thermal stratification of overshoot are equivalent to the results provided by the simpler formulation that is being adopted.
For the phase of evolution we discuss here the classic assumption of using the limit of instantaneous mixing is adequate as it provides a representation of a lower limit for the extent of overshoot that may produce the loop.
This extent is well below the expected extent of overshoot for this mass range \citep{ribas00}.
In order to establish to what extent our results depend on the mixing model in the overshoot region, we also consider models calculated using the prescription of \citet{ventura98}.
Accordingly, the evolution of element $i$ follows the diffusion equation:
\begin{equation}
\frac{d X_i}{dt} =
\left(\frac{\partial X_i}{\partial t}\right)_{\mbox{\scriptsize nuc}}
+ \frac{\partial}{\partial m_r}\left[\left(4 \pi r^2 \rho\right)^2 D \;
\frac{\partial X_i}{\partial m_r}\right] \;.
\end{equation}
The diffusion coefficient $D$ is approximated in a convective zone by $D{=}u\,l_{\mbox{\scriptsize d}}/3$, where $u$ is the average turbulent velocity, computed according to Eqs.~(88), (89) and (90) of \citet{cgm96}, and $l_{\mbox{\scriptsize d}}$ is the convective scale length.
The convective scale length $l_{\mbox{\scriptsize d}}$ is given by
\begin{equation}
l_{\mbox{\scriptsize d}} = \frac{z_{\mbox{\scriptsize up}} z_{\mbox{\scriptsize low}}}
{z_{\mbox{\scriptsize up}} + z_{\mbox{\scriptsize low}}} \;,
\end{equation}
where $z_{\mbox{\scriptsize up}}$ is the distance from the top of the convective zone increased by $\beta H_P^{\mbox{\scriptsize top}}$ and analogously for $z_{\mbox{\scriptsize low}}$.
Here, $\beta$ is a fine tuning parameter, necessary for an exact fit of actual stars, and $H_P^{\mbox{\scriptsize top}}$ the pressure scale height at the top of the convective region.
The parameter $\beta$ is constrained by $\beta{<}0.25$.
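As a minimal numerical illustration of this prescription, the following sketch evaluates the convective scale length and the resulting diffusion coefficient $D{=}u\,l_{\mbox{\scriptsize d}}/3$. All numerical values (distances, pressure scale heights, turbulent velocity) are hypothetical placeholders; in the actual code $u$ comes from the \citet{cgm96} turbulence model.

```python
# Sketch (not the CESAM implementation) of the diffusive-mixing coefficient
# D = u * l_d / 3 inside a convective zone, with the convective scale length
# defined in the text.  All numbers are hypothetical placeholders.

def scale_length(z_up, z_low):
    # l_d = z_up * z_low / (z_up + z_low): small near either boundary,
    # largest in the middle of the convective zone
    return z_up * z_low / (z_up + z_low)

beta = 0.1                       # fine-tuning parameter (beta < 0.25)
H_P_top, H_P_bot = 3.0e9, 2.0e9  # pressure scale heights at the borders [cm]
z_up = 1.0e10 + beta * H_P_top   # distance to the top border, extended by beta*H_P
z_low = 5.0e9 + beta * H_P_bot   # idem for the lower border
u = 1.0e4                        # assumed average turbulent velocity [cm/s]

D = u * scale_length(z_up, z_low) / 3.0
print(D)  # diffusion coefficient [cm^2/s]
```

Note that $l_{\mbox{\scriptsize d}}$ vanishes as either boundary is approached, so mixing is naturally throttled near the edges of the convective region.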
For diffusive overshooting, we write:
\begin{equation}
u = u_{\mbox{\scriptsize b}} \;
\exp \pm\left[\frac{1}{\zeta f_{\mbox{\scriptsize thick}}} \;
\ln \left(\frac{P}{P_{\mbox{\scriptsize b}}}\right) \right],
\end{equation}
where $u_{\mbox{\scriptsize b}}$ and $P_{\mbox{\scriptsize b}}$ are the turbulent velocity and the pressure at the border of the convective zone, $P$ is the local pressure, $\zeta$ is a free parameter and $f_{\mbox{\scriptsize thick}}$ is the thickness of the convective region in fractions of the local $H_P$.
In this way, the turbulent velocity decreases exponentially outside convective regions.
The parameter $\zeta$ controls the e-folding distance over which the velocity of the convective eddies decays outside convective regions; a higher $\zeta$ means that the velocity of the convective eddies decays more slowly and therefore a larger region is affected by partial convective mixing.
The diffusive scale is approximated by $l_{\mbox{\scriptsize d}}{=}\beta H_P$ in overshoot regions.
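A short sketch of this diffusive-overshoot prescription follows, with the sign of the exponent chosen so that the velocity decays as the pressure departs from its value at the convective border; the values of $u_{\mbox{\scriptsize b}}$, $P_{\mbox{\scriptsize b}}$, $\zeta$ and $f_{\mbox{\scriptsize thick}}$ are hypothetical.

```python
import math

# Sketch of the exponential decay of the turbulent velocity outside a
# convective border in the diffusive-overshoot prescription quoted in the
# text; u_b, P_b, zeta and f_thick are hypothetical values.

def u_overshoot(P, P_b, u_b, zeta, f_thick):
    # the sign of the exponent is chosen so that u decreases as the
    # pressure departs from its value P_b at the convective border
    return u_b * math.exp(-abs(math.log(P / P_b)) / (zeta * f_thick))

u_b, P_b = 1.0e4, 1.0e17   # velocity [cm/s] and pressure [dyn/cm^2] at the border
zeta, f_thick = 0.02, 1.5  # free parameter and zone thickness in units of H_P

for P in (0.99 * P_b, 0.9 * P_b, 0.5 * P_b):  # moving outwards, P drops
    print(P / P_b, u_overshoot(P, P_b, u_b, zeta, f_thick))
```

With a small $\zeta$, as in this example, the velocity drops by orders of magnitude within a small fraction of a pressure scale height, so only a thin layer beyond the border is partially mixed.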
\subsection{Extent of the overshoot layer}
\label{subsec:d_ov}
Following the assumptions discussed above the only aspect remaining to be defined is the extent $d_{\mbox{\scriptsize ov}}$ of the overshoot layer.
We adopt the common prescription (see \citealt{maeder75}; \citealt{mowlavi94}; and \citealt{y205}) that parametrises the extent of the overshooting layer in units of the local pressure scale height $H_P{\equiv}|{\rm d} r/{\rm d}\ln P|$.
So, the extension of the overshooting is given by
$ d_{\mbox{\scriptsize ov}} {=} \alpha_{\mbox{\scriptsize ov}} {\cdot} {\mbox{Min}}\left(H_{P},r_{\mbox{\scriptsize co}}\right)$,
where the border $r_{\mbox{\scriptsize co}}$ of the ``proper'' convective core
corresponds to the radius defined by the Schwarzschild criterium
and $\alpha_{\mbox{\scriptsize ov}}$ is a free parameter to be defined.
We note, however, that data from eclipsing binaries indicate that this parameter is dependent on stellar mass \citep{ribas00}, reaching a value of about 0.2 for $M{\simeq}2~M_\odot$.
As discussed above, inside $r_{\mbox{\scriptsize mz}}{\equiv}r_{\mbox{\scriptsize co}}{+}d_{\mbox{\scriptsize ov}}$ there is complete mixing (we shall call this zone the mixed zone, MZ).
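The prescription for the overshoot extent reduces to a one-line function; the values of $H_P$ and $r_{\mbox{\scriptsize co}}$ below are hypothetical, chosen only to show the effect of the $\mbox{Min}$ cap for a small convective core.

```python
# The parametric overshoot extent of the text, d_ov = alpha_ov * min(H_P, r_co);
# the min() caps the layer for small convective cores, where the pressure
# scale height can exceed the core radius itself.  Numbers are hypothetical.

def overshoot_extent(alpha_ov, H_P, r_co):
    return alpha_ov * min(H_P, r_co)

H_P, r_co = 6.0e9, 4.0e9  # [cm]: a small core, so r_co sets the scale
print(overshoot_extent(0.1295, H_P, r_co))  # just below the critical value found later
print(overshoot_extent(0.1300, H_P, r_co))  # just above it
```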
\section{Overshooting and evolution of the internal structure}
\label{sec:evol}
\begin{table}
\caption{Mean lifetimes (in years) for the nuclei involved in the
CNO bi-cycle, for $X{=}0.7$ and $\rho{=}64$~g~cm$^{{-}3}$
(from \citealt{caughlan62}).
Mean lifetimes for secondary nuclei are not included since their $\beta$-decay is very fast ($10^2$ -- $10^3$~s) compared to the mean lifetimes of the primary nuclei ($T_6{\equiv}T/10^6$~K).
} \label{tab:tau}
\begin{center}
\begin{tabular}{lcc}
\hline
\noalign{\smallskip}
\qquad\quad Reaction & $T_6=20$ & $T_6=15$ \cr
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$\element[12]{C}~
[\element[12]{C}(\element[1]{H},\gamma)\element[13]{N}]$ &
$1.47\times 10^4$ & $2.96\times 10^{6\phantom{1}}$ \\
$\element[13]{C}~
[\element[13]{C}(\element[1]{H},\gamma)\element[14]{N}]$ &
$3.62\times 10^3$ & $7.43\times 10^{5\phantom{1}}$ \\
$\element[14]{N}~
[\element[14]{N}(\element[1]{H},\gamma)\element[15]{O}]$ &
$1.73\times 10^6$ & $1.82\times 10^{8\phantom{1}}$ \\
$\element[15]{N}~
[\element[15]{N}(\element[1]{H},\element[4]{He})\element[12]{C}]$ &
$7.06\times10^1$ & $2.64 \times 10^{4\phantom{1}}$ \\
$\element[15]{N}~
[\element[15]{N}(\element[1]{H},\gamma)\element[16]{O}]$ &
$1.69 \times 10^5$ & $6.19 \times 10^{7\phantom{1}}$ \\
$\element[16]{O}~
[\element[16]{O}(\element[1]{H},\gamma)\element[17]{F}]$ &
$1.09\times 10^8$ & $6.48\times 10^{10}$ \\
$\element[17]{O}~
[\element[17]{O}(\element[1]{H},\element[4]{He})\element[14]{N}]$ &
$3.38\times 10^6$ & $6.94\times 10^{10}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=14cm,angle=0]{\figdir/fig02}
\caption{Left panels: evolution of the convective zones for models with different values of $\alpha_{\mbox{\scriptsize ov}}$: $\alpha_{\mbox{\scriptsize ov}}{=}0$ (top panels), $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ (middle panels) and $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$ (bottom panels).
Cross-hatched areas represent the extent of overshooting.
Right panels: evolutionary tracks in the HR diagram for the same cases as in the left panels.
Symbols on the track correspond to evolutionary ages of special interest:
they are also shown in the left panels (see Table~\ref{tab:times}).
} \label{fig:hrcz}
\end{figure*}
Models of a 2~$M_{\odot}$ star with values of $\alpha_{\mbox{\scriptsize ov}}$ varying from 0 to 0.13 have been produced (see Table~\ref{tab:times}).
The evolution was calculated using an initial helium abundance $Y_{0}{=}0.28$, initial metallicity $Z_{0}{=}0.02$ and mixing length parameter $\alpha{=}1.35$.
Models were calculated using the CESAM stellar evolutionary code \citep{morel97}.
The up-to-date physical ingredients and adopted numerical procedures relevant for the early evolution near the ZAMS, used to build the models discussed here, are described in detail by \citet{marques04}.
The key aspect in the calculation of models near the ZAMS is the energy production.
For the mass range discussed here this is the CNO bi-cycle, which consists of two cycles.
In the first, the CN cycle, only carbon and nitrogen are involved:
\begin{eqnarray}
\element[12]{C}+\element[1]{H} &\rightarrow& \element[13]{N}+\gamma
\nonumber\\
\element[13]{N} &\rightarrow& \element[13]{C} + e^{+}+\nu_e \nonumber\\
\element[13]{C}+\element[1]{H} &\rightarrow& \element[14]{N}+\gamma
\nonumber\\
\element[14]{N}+\element[1]{H} &\rightarrow& \element[15]{O}+\gamma
\nonumber\\
\element[15]{O} &\rightarrow& \element[15]{N} + e^{+}+\nu_e \nonumber\\
\element[15]{N}+\element[1]{H} &\rightarrow&
\element[12]{C}+\element[4]{He}.\label{rq:cno}
\end{eqnarray}
The second, the ON cycle, is entered by the termination of the last reaction of (\ref{rq:cno}) through the $\gamma$ channel (instead of the $\alpha$ channel):
\begin{eqnarray}
\element[15]{N}+\element[1]{H} &\rightarrow& \element[16]{O}+\gamma
\nonumber \\
\element[16]{O}+\element[1]{H} &\rightarrow& \element[17]{F}+\gamma
\nonumber \\
\element[17]{F} &\rightarrow& \element[17]{O} + e^{+}+\nu_e
\nonumber \\
\element[17]{O}+\element[1]{H} &\rightarrow&
\element[14]{N}+\element[4]{He}.\label{rq:cno1}
\end{eqnarray}
Table~\ref{tab:tau} shows, however, that the termination of the fusion of $\element[15]{N}$ through the $\alpha$ channel is much more likely (by a factor of a few thousand) than through the $\gamma$ channel.
The equilibrium timescale of the CN cycle is therefore much shorter than that of the full CNO cycle; for the PMS phase, only the processes in (\ref{rq:cno}), the CN cycle, will be considered in our discussion (although the models use the full network).
There is still another ON cycle, which is entered by the termination of the last reaction of (\ref{rq:cno1}) (the fusion of $\element[17]{O}$) through the $\gamma$ channel; its importance, however, is very small.
Table~\ref{tab:tau} also shows that the slowest process in (\ref{rq:cno}) is, by far, the combustion of $\element[14]{N}$.
Therefore, before the abundances of the CN elements reach equilibrium, almost all original $\element[12]{C}$ and $\element[13]{C}$ must be burned into $\element[14]{N}$.
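This bottleneck can be illustrated with a toy integration of the CN cycle, treating the fast $\beta$-decays and the $\element[15]{N}(p,\alpha)\element[12]{C}$ termination as instantaneous and using the $T_6{=}20$ mean lifetimes of Table~\ref{tab:tau}; the initial abundances are arbitrary and the integrator is a simple explicit Euler scheme, so this is only a sketch of the relaxation, not a replacement for the full network.

```python
# Toy CN-cycle relaxation, treating the fast beta-decays and the
# 15N(p,a)12C termination as instantaneous, with the T6 = 20 mean
# lifetimes of Table 1.  Initial abundances are arbitrary; explicit
# Euler with dt much smaller than the shortest lifetime.
tau12, tau13, tau14 = 1.47e4, 3.62e3, 1.73e6  # mean lifetimes [yr]

y12, y13, y14 = 1.0, 0.01, 0.1  # arbitrary initial number abundances
total0 = y12 + y13 + y14        # conserved: the cycle only shuffles CN nuclei

dt, t, t_end = 50.0, 0.0, 2.0e5  # [yr]
while t < t_end:
    r12, r13, r14 = y12 / tau12, y13 / tau13, y14 / tau14
    y12 += dt * (r14 - r12)  # 14N(p,g)...15N(p,a)12C closes the cycle
    y13 += dt * (r12 - r13)
    y14 += dt * (r13 - r14)
    t += dt

# The slow 14N + p reaction acts as a bottleneck: well before tau14 has
# elapsed, nearly all CN nuclei are locked in 14N.
print(y12, y13, y14)
```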
\begin{table}
\caption{Age (in Myrs) of the models at different phases of evolution for different values of $\alpha_{\mbox{\scriptsize ov}}$.
These phases correspond to: a -- the central MZ appears; b -- the central MZ reaches the first maximum; c -- minimum extent of the central MZ; d -- ZAMS.
Also indicated is the type of symbol used in Fig.~\ref{fig:hrcz} to locate these phases.
} \label{tab:times}
\begin{center}
\begin{tabular}{ccccc}
\hline
Phase & \multicolumn{3}{c}{$\alpha_{\mbox{\scriptsize ov}}$} & Symbol \\[4pt]
& 0.0000 & 0.1295 & 0.1300 & (Fig.~\ref{fig:hrcz}) \\
\hline
a & \phantom{1}6.6 & \phantom{1}6.6 & \phantom{1}6.6 & {\it black triangle} \\
b & \phantom{1}7.2 & \phantom{1}7.2 & \phantom{1}7.2 & {\it black square} \\
c & \phantom{1}8.7 & \phantom{1}9.2 & \phantom{1}9.2 & {\it black circle} \\
& - & \phantom{1}9.7 & \phantom{1}9.7 & {\it open square} \\
& - & - & \phantom{1}9.8 & {\it black diamond} \\
& - & - & 10.5 & {\it black pentagon} \\
d & 10.1 & 10.6 & 12.1 & {\it star} \\
\hline
\end{tabular}
\end{center}
\end{table}
All models discussed here use the full network of reactions (PP chains and CNO cycle) to calculate the energy production.
The PMS birthline \citep{palla91} is not used in these calculations, as the chemical profile at the core of the models for a 2~$M_\odot$ star near the ZAMS is not significantly changed by what happens at the very early stages of stellar evolution.
Figure~\ref{fig:hrcz} shows evolutionary tracks for several values of $\alpha_{\mbox{\scriptsize ov}}$.
The most remarkable feature is that there are two kinds of tracks, with or without a ``loop'' just before the ZAMS.
The transition between these two types of tracks happens at a very definite value of $\alpha_{\mbox{\scriptsize ov}}$: for $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ there is no ``loop'' in the evolutionary tracks on the HR diagram, while there is a ``loop'' for $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$.
For values of $\alpha_{\mbox{\scriptsize ov}}$ lower than 0.1295, the tracks do not differ significantly; the same happens for values of $\alpha_{\mbox{\scriptsize ov}}$ higher than 0.1300.
This loop can also be seen in the evolutionary models available online, by \citet{siess00}, for a 2~$M_{\odot}$ star with $Z{=}0.02$ and $\alpha_{\mbox{\scriptsize ov}}{=}0.2$.
This feature requires an explanation.
In the following sections we will describe in detail the evolution of the internal structure of a 2~$M_{\odot}$ star without and with overshooting in order to understand the origin of these features in the models of PMS evolution.
\subsection{Evolution without overshooting}
\label{subsec:evol_nov}
The evolutionary track of a 2~$M_{\odot}$ star on the HR diagram without overshooting is shown in Fig.~\ref{fig:hrcz} (top right panel), as well as the evolution of the convective zones (top left panel), during the last phase of the PMS.
At an age $t_{\star}{=}5$~Myrs, the star is completely radiative and all its layers are contracting.
The central temperature rises, until it becomes high enough to burn $\element[3]{He}$ and $\element[12]{C}$.
The combustion of both $\element[3]{He}$ and $\element[12]{C}$ depends strongly on the temperature ($\varepsilon{\sim}T^{16}$ to $T^{18}$); therefore, the energy generated is strongly concentrated in the centre of the star.
The energy flux from the central regions becomes so high that it cannot be transported by radiation alone.
A central convective zone appears at this point.
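The strong central concentration of the energy generation follows directly from the steep temperature dependence: for $\varepsilon{\sim}T^{18}$, even a modest fractional temperature drop away from the centre suppresses the local energy generation by a large factor. In the one-line estimate below, the exponent $n{=}18$ is taken from the range quoted above, and the temperature drops are purely illustrative.

```python
# With eps ~ T**n and n ~ 16-18 for 3He- and 12C-burning, the energy
# generation is strongly concentrated towards the centre: a modest
# fractional temperature drop away from the centre suppresses eps by a
# large factor.  The temperature drops below are illustrative.
n = 18
for dT in (0.05, 0.10, 0.20):
    print(dT, (1.0 - dT) ** n)
```

A 10\% temperature drop already suppresses the rate by almost an order of magnitude, which is why the flux becomes too centrally concentrated to be carried by radiation.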
Figure~\ref{fig:abund_no-ov} shows the evolution of the production of energy and the central abundances of CN elements.
Comparing Figs.~\ref{fig:hrcz} (top left panel) and \ref{fig:abund_no-ov}, it is clear that the convective core appears at the time when $\element[12]{C}$-burning starts (at an age $t_{\star}{=}6.6$~Myrs); this instant is indicated in Fig.~\ref{fig:hrcz} by the black triangle (see Table~\ref{tab:times}).
\begin{figure}
\centering
\includegraphics[width=\hsize]{\figdir/fig03}
\caption{Evolution of the production of energy (top panel) and
the central abundances (lower panel) without overshooting.
} \label{fig:abund_no-ov}
\end{figure}
The structure of the core implies that $\rho_{\mbox{\scriptsize c}}/\overline{\rho}$ is about 6 for a convective model, while being higher than 50 for a radiative one.
Because the convective model has to be less centrally concentrated, the central zone must expand when convection appears; this expansion absorbs practically all the energy produced in the centre by $\element[12]{C}$-burning, which thus does not reach the surface.
We have the apparently paradoxical result that when nuclear energy sources become important the total luminosity of the star decreases (see top right panel of Fig.~\ref{fig:hrcz}), although there is now a new energy source.
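The quoted values of $\rho_{\mbox{\scriptsize c}}/\overline{\rho}$ correspond to polytropic models: $n{=}1.5$ for an adiabatically stratified (convective) configuration and $n{=}3$ as a common proxy for a radiative one. A short Lane--Emden integration (simple explicit Euler, a sketch rather than a production integrator) reproduces them:

```python
# Lane-Emden sketch of the central-to-mean density ratio rho_c / rho_mean:
# n = 1.5 for an adiabatically stratified (convective) configuration and
# n = 3 as a common proxy for a radiative one.

def central_concentration(n, h=1e-4):
    # integrate theta'' = -theta**n - (2/xi) theta' outwards from a series
    # expansion near xi = 0 until the first zero of theta (the surface)
    xi = h
    theta = 1.0 - h * h / 6.0
    dtheta = -h / 3.0
    while theta > 0.0:
        d2 = -max(theta, 0.0) ** n - 2.0 * dtheta / xi
        theta += h * dtheta
        dtheta += h * d2
        xi += h
    # rho_c / rho_mean = xi_1 / (3 |theta'(xi_1)|)
    return xi / (3.0 * abs(dtheta))

print(central_concentration(1.5))  # close to the analytic value 5.99
print(central_concentration(3.0))  # close to the analytic value 54.2
```

The jump from ${\sim}6$ to ${\gg}50$ in central concentration is what forces the strong expansion or contraction of the central regions each time the core switches between convective and radiative structure.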
As the burning of $\element[12]{C}$ becomes more efficient with rising central temperature and density, the central convective zone grows; when the abundance of $\element[12]{C}$ in this zone decreases, the energy generated in the centre of the star decreases as well.
In Fig.~\ref{fig:hrcz}, the black square indicates the instant the central convective zone reaches its maximum extent ($t_{\star}{=}7.2$~Myrs).
As the energy generated in the centre of the star decreases, the extent of the central convective zone decreases.
As discussed previously, when this happens the region occupied by the convective zone must strongly contract in order to adapt to the structure of a radiative core.
The central density rises again, as well as the central temperature.
For a 1~$M_{\odot}$ star, the PP chains would now generate practically all the energy, since $\element[14]{N}$-burning would always be too slow for the CN
cycle~(\ref{rq:cno}) to generate a large fraction of the energy (see
Table~\ref{tab:tau}, for $T_6{=}15$).
For a 2~$M_{\odot}$ star, however, the PP chains cannot stop the contraction of the stellar core.
As almost all $\element[12]{C}$ originally in the core has been burned and the convective core retracts as a consequence, the central zone contracts again.
The luminosity generated by this contraction causes the luminosity of the star to increase, as can be seen in the top right panel of Fig.~\ref{fig:hrcz} and in Fig.~\ref{fig:abund_no-ov}.
As the central temperature rises, the combustion of $\element[14]{N}$ becomes more efficient (Table~\ref{tab:tau}); again, the energy generation becomes concentrated in the centre of the star; the convective zone grows and, as a consequence, the luminosity of the star decreases again.
The instant when the convective zone reaches a minimum is indicated in Fig.~\ref{fig:hrcz} by the black circle ($t_{\star}{=}8.7$~Myrs).
The contribution of the CN cycle~(\ref{rq:cno}) to the energy generation grows, until the star is finally in hydrostatic equilibrium.
The ZAMS is indicated in Fig.~\ref{fig:hrcz}, top right panel, by the star symbol ($t_{\star}{=}10.1$~Myrs).
Figure~\ref{fig:abund_no-ov} shows that the abundances of $\element[12]{C}$ and $\element[13]{C}$ do not grow significantly while the $\element[14]{N}$-burning reaction is becoming more efficient; all $\element[12]{C}$ produced by the burning of $\element[14]{N}$ very rapidly burns into $\element[13]{C}$, which even more rapidly burns into $\element[14]{N}$ again (Table~\ref{tab:tau}).
The burning of $\element[14]{N}$, because it is so slow compared with the other reactions of the CN cycle~(\ref{rq:cno}), acts as a bottleneck.
\subsection{Evolution with overshooting}
\label{subsec:evol_ov}
As seen above, for $\alpha_{\mbox{\scriptsize ov}}{\ge}0.1300$ there is a loop in the evolutionary track in the HR diagram during the final stages of the PMS.
For $\alpha_{\mbox{\scriptsize ov}}{\le}0.1295$ there is no such loop; the transition is quite abrupt.
Figure~\ref{fig:hrcz} shows the evolution of the convective zones with $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ (middle panels); it is qualitatively similar to the evolution with $\alpha_{\mbox{\scriptsize ov}}{=}0$.
The two main differences are that with $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ the extent of the MZ is larger and the evolutionary time is longer (see Table~\ref{tab:times}).
The extent of the MZ at the minimum after the main $\element[12]{C}$-burning phase is also significantly larger.
These differences are easily explained by the extra extent of the MZ caused by the overshooting; the burning of $\element[12]{C}$ takes more time because the MZ extends further and therefore there is more $\element[12]{C}$ to burn, while the rate of burning is similar with $\alpha_{\mbox{\scriptsize ov}}{=}0.0$ and $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ (since the central densities and temperatures are similar).
Table~\ref{tab:times} shows the ages of the models at some stages of the evolution.
Before the appearance of the central MZ, the evolutionary times are the same.
Figure~\ref{fig:hrcz} shows the evolution in the HR diagram (bottom right panel) and the evolution of the convective zones (bottom left panel) of a model with $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$.
In the bottom right panel of Fig.~\ref{fig:hrcz}, the black triangle shows the instant the central MZ appears for the first time ($t_{\star}{=}6.6$~Myrs); this zone reaches its maximum extent around $t_{\star}{=}7.2$~Myrs (black square).
As the $\element[12]{C}$ present in the central MZ burns out, the MZ retreats, reaching a minimum at $t_{\star}{=}9.2$~Myrs (black circle).
It grows again as the burning of $\element[14]{N}$ becomes more efficient, and, while with $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ the reactions of the CN cycle (\ref{rq:cno}) reach equilibrium and the central MZ stabilises, with $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$ the MZ grows suddenly.
Until this point, the evolution for $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ and $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$ is the same; here, the two cases diverge completely.
This event (at $t_{\star}{=}9.7$~Myrs) is indicated in Fig.~\ref{fig:hrcz} with the open square.
The central MZ grows rapidly, reaching a maximum (${\simeq}0.7~M_{\odot}{=}0.35~M_{\star}$!) at $t_{\star}{=}9.8$~Myrs (black diamond).
This sudden growth of the MZ is caused by an increase in the central abundance of $\element[12]{C}$ and $\element[13]{C}$, which are burned very efficiently because the central temperature is now higher than during the main $\element[12]{C}$-burning phase; so, the energy produced in the centre of the star increases rapidly.
\begin{figure}
\centering
\includegraphics[width=\hsize]{\figdir/fig04}
\caption{Evolution of the production of energy (top panel) and central abundances (lower panel) with $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$.
}
\label{fig:abund_ov-1300}
\end{figure}
Figure~\ref{fig:abund_ov-1300} shows the evolution of the production of energy and the central abundances with $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$; the main difference from Fig.~\ref{fig:abund_no-ov} ($\alpha_{\mbox{\scriptsize ov}}{=}0$) is the sudden rise in the abundances of $\element[12]{C}$ and $\element[13]{C}$ (as well as a drop in the abundance of $\element[14]{N}$).
This causes the rise in the production of energy through the CNO cycle seen in the top panel of Fig.~\ref{fig:abund_ov-1300}, causing in turn the growth of the central MZ seen in Fig.~\ref{fig:hrcz}.
From this point, the MZ retracts again, reaching a new minimum at $t_{\star}{=}10.5$~Myrs (black pentagon).
The model of the star then returns to the ``normal'' track, reaching the ZAMS at $t_{\star}{=}12.1$~Myrs.
The MZ retracts so fast because it grew so fast before.
This sudden growth causes an expansion of the central regions, which leads to a large drop in the central temperature and, especially, in the central density.
The reaction rates of $\element[12]{C}$ and $\element[13]{C}$-burning, depending so strongly on the temperature and density, drop rapidly.
Since this sequence of events (after the sudden growth of the central abundances of $\element[12]{C}$ and $\element[13]{C}$) does not depend on the extent of the central MZ when $\alpha_{\mbox{\scriptsize ov}}{\ge}0.1300$, depending only on the central temperatures and densities, the shape of the loop on the HR diagram does not depend on the actual extent of overshooting above $\alpha_{\mbox{\scriptsize ov}}{=}0.1300$.
\subsection{The central abundances}
\label{subsec:expl}
In this section we study the evolution of the central abundances for $\alpha_{\mbox{\scriptsize ov}}{=}0$ and $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$.
The value $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ is particularly interesting because it is almost enough to produce a ``loop'' in the evolutionary tracks on the HR diagram.
The comparison between the cases $\alpha_{\mbox{\scriptsize ov}}{=}0$ and $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ shows the reasons why a ``loop'' is produced for $\alpha_{\mbox{\scriptsize ov}}{\ge}0.1300$ and why this transition is so abrupt.
\begin{figure*}
\centering
\includegraphics[width=\hsize]{\figdir/fig05}
\caption{Upper panels: evolution of the temperature at the border of the mixed zone (BMZ; full line) for several values of $\alpha_{\mbox{\scriptsize ov}}$ (where $T_6{\equiv}T/10^6$).
The temperature ($T_{\rm ign}$) at which the mean lifetime of $\element[12]{C}$ is $0.75$~Myrs (for the density at the BMZ) is also shown as a dashed line.
Lower panels: the abundance of $\element[12]{C}$ that the BMZ encounters during the final expansion of the MZ (dashed line); evolution of the central abundance of $\element[12]{C}$ during the same period (full line); and the abundance of $\element[12]{C}$ the BMZ would find if there were no depletion of $\element[12]{C}$ above the MZ due to nuclear reactions (dotted line).
The shaded area in the upper panels indicates the time range of the corresponding lower panel.
}
\label{fig:tc_c12}
\end{figure*}
The abundance of $\element[12]{C}$ in the central convective zone reaches a minimum at $t_{\star}{=}8.7$~Myrs for $\alpha_{\mbox{\scriptsize ov}}{=}0$ and at $t_{\star}{=}9.1$~Myrs for $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$.
The $\element[12]{C}$ is not fully burned above the MZ at this stage.
If the temperature above the MZ remained too low to burn significant amounts of $\element[12]{C}$ there, the MZ would incorporate this $\element[12]{C}$ during its final expansion.
Figure~\ref{fig:tc_c12} (upper panels) shows the evolution of the temperature at the border of the mixed zone (BMZ, labelled $T_{\mbox{\scriptsize BMZ}}$) for $\alpha_{\mbox{\scriptsize ov}}{=}0, 0.1295, 0.13$.
The temperature ($T_{\mbox{\scriptsize ign}}$) at which the mean lifetime of $\element[12]{C}$ is $0.75$~Myrs (for the density at the BMZ) is also shown.
It is clear that the temperature at the BMZ decreases as the amount of overshooting increases; overshooting extends the MZ, so the BMZ is farther from the centre of the star, where the temperature is lower.
Without overshooting, the temperature of the BMZ is higher (during the final expansion of the MZ) than the temperature needed to significantly deplete $\element[12]{C}$ before the MZ adds it.
So we expect that the MZ adds almost no $\element[12]{C}$ during its final expansion.
As we increase the amount of overshooting, the temperature at the BMZ decreases (see upper panels of Fig.~\ref{fig:tc_c12}), making $\element[12]{C}$-burning above the MZ incomplete.
As a result, some $\element[12]{C}$ is added to the MZ during its expansion.
Finally, when $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ there is almost no depletion of $\element[12]{C}$ above the MZ during its last expansion.
The amount of $\element[12]{C}$ added to the MZ increases very steeply with $\alpha_{\mbox{\scriptsize ov}}$ as it approaches the critical value of 0.1295.
Even a small increase from $\alpha_{\mbox{\scriptsize ov}}{=}0.1290$ to $\alpha_{\mbox{\scriptsize ov}}{=}0.1295$ is enough to increase noticeably the amount of $\element[12]{C}$ added to the central MZ.
If there is no $\element[12]{C}$ added to the core (as it is the case without overshooting), the CNO reactions reach equilibrium shortly after $\element[14]{N}$-burning becomes efficient enough (that is, shortly after the central MZ starts to grow for the last time).
When, due to the extra extent of the central MZ caused by overshooting, there is more $\element[12]{C}$ added to the MZ than that produced by $\element[15]{N}$-burning, there must be an excess of $\element[12]{C}$ (and $\element[13]{C}$) burning to keep the relative abundances of CN elements at equilibrium.
\begin{figure}
\centering
\includegraphics[width=\hsize]{\figdir/fig06}
\caption{Evolution of the luminosity produced within the central MZ by the burning of $\element[12]{C}$ (full line), $\element[13]{C}$ (dashed line) and $\element[14]{N}$ (dotted line).
The panels correspond to different values of $\alpha_{\mbox{\scriptsize ov}}$.
} \label{fig:ls}
\end{figure}
This excess burning of $\element[12]{C}$ (and of $\element[13]{C}$, since the extra burning of $\element[12]{C}$ generates an excess in the abundance of $\element[13]{C}$ relative to the equilibrium abundances) causes an increase in the luminosity produced within the central MZ; to transport this extra luminosity out, the central MZ must grow.
Figure~\ref{fig:ls} shows that only when $\alpha_{\mbox{\scriptsize ov}}$ approaches the critical value does this increase in the luminosity become noticeable.
The growth of the central MZ must bring in turn an even greater amount of $\element[12]{C}$ to the MZ, causing an even greater increase in the luminosity produced within the central MZ.
This is only stopped because such a fast increase in the energy produced within the core causes it to expand and cool, making CNO reactions much less efficient.
In short, the abruptness of the transition from an evolution with a loop to an evolution without a loop comes from the sensitivity of the CNO reactions to the temperature, as the parameter $\alpha_{\mbox{\scriptsize ov}}$ determines the temperature of the BMZ, which in turn determines how much $\element[12]{C}$ is burned before the MZ grows.
\section{Sensitivity of the results to the input physics}
The physics of the early evolution for a 2~$M_\odot$ star is expected to be simpler than for the more advanced stages of evolution.
Aspects such as diffusion and settling, as well as the existence of steep chemical gradients resulting from hydrogen burning on the main sequence, are not important in the cases discussed here.
In this section we address some of the key aspects that may be relevant and the possible effect these may have on the existence of the loop in real stars.
Although we presented a very definite value of $\alpha_{\mbox{\scriptsize ov}}$ beyond which a loop is produced in the evolutionary tracks on the HR diagram, this ``critical'' value cannot be taken at face value, since it depends on several known and unknown factors.
Such a critical value depends mainly on the nuclear reaction rates, but other aspects of the models may modify the actual number.
We stress that the main goal of this work is not the determination of a critical value for $\alpha_{\mbox{\scriptsize ov}}$ but to establish which physical and numerical ingredients are responsible for producing the loop in the evolution calculation.
\subsection{Physics and numerics of the overshoot layer}
\label{subsec:phy-ov}
Numerical simulations and observational tests (see Section~\ref{sec:intro}) consistently indicate that stars are expected to have mixing zones that go beyond the border predicted by an instability criterion for convection.
The unresolved questions are the extent and the physics of these extra mixing zones.
Here we have included such an extra mixing zone (overshoot layer) at the top of a small convective core by adding an extra mixing layer of size $\alpha_{\mbox{\scriptsize ov}} H_p$ to the proper convection zone modelled according to the mixing length theory.
There are different prescriptions for overshooting as discussed in Section~\ref{sec:overs} which will provide slightly different results for the same value of $\alpha_{\mbox{\scriptsize ov}}$.
In order to confirm that our results are not affected by the thermal stratification adopted within the overshoot layer we have also calculated the evolution when the limit of $\nabla_{\mbox{\scriptsize ov}}{=}\nabla_{\mbox{\scriptsize ad}}$ is used within the overshoot region.
As long as instantaneous mixing is assumed the critical value of $\alpha_{\mbox{\scriptsize ov}}$ does not change by more than about 0.0005.
The implications for the evolutionary track in the HR diagram are not changed and the existence of the loop is still confirmed in this limit.
The actual stratification in the overshoot layer of stars is expected to lie between these two values ($\nabla_{\mbox{\scriptsize rad}}{<}\nabla_{\mbox{\scriptsize ov}}{<}\nabla_{\mbox{\scriptsize ad}}$).
Consequently, any temperature profile will produce the same behaviour we discuss in this work.
Models calculated with different precision (number of shells for each model and time step of the evolution; see \citealt{marques04}) or using different numerical methods yield different values for the critical $\alpha_{\mbox{\scriptsize ov}}$.
These differences (below 0.001) are a direct consequence of the need for a careful numerical scheme to follow in time and space the border of convective and mixing zones in stellar interiors.
These numerical tests indicate that the existence of the loop is to be expected when adequate numeric procedures are used for time and space integration.
As far as this work is concerned these uncertainties are small corrections which will only become relevant if the measured extent of overshoot in stars is close to the critical value.
As discussed above (Section~\ref{sec:intro}), observational constraints on the value of the extent of an overshoot layer are not clear yet.
Some works \citep{ribas00} indicate that the amount of overshooting in the cores of 2~$M_{\odot}$ stars can be significantly larger than $\alpha_{\mbox{\scriptsize ov}}{\sim}0.13$.
\subsection{Diffusive versus instantaneous mixing}
\label{subsec:phy-mix}
Here we have adopted instantaneous mixing for the overshoot layer.
This is a crucial assumption that needs to be confirmed.
The time scale of the relevant nuclear reactions is significantly larger than the expected convective turnover time for the core.
However, the mixing at the overshoot layer, in particular at the very top, may have longer mixing timescales when compared with the proper convective zone.
Models incorporating overshoot as a diffusion process \citep{ventura98} indicate that for stars with convective cores the effect on the mixing is equivalent to an instantaneous overshooting around 0.2~$H_p$.
This result has been reinforced by \cite{ventura05} where nuclear burning and mixing were self-consistently coupled for post-main sequence stars and compared with instantaneous mixing.
The confirmation that such a behaviour is also valid for the pre-main sequence evolution is required, but given the physics at play in this early phase no major differences are expected.
In order to verify this assumption we have also considered the case when mixing is not instantaneous outside convective zones.
However, the exponential decay of the velocities outside the convective zone means that over time the mixed zone is larger.
During the first retreat of the central CZ, extra $\element[12]{C}$ is brought to the core, so the energy generation there is higher than it would be if no overshooting were present.
That keeps the convective zone bigger and the temperature at its border lower, so when the CZ grows again it finds a significant abundance of $\element[12]{C}$ above it, as happens in the case of instantaneous mixing in the overshoot zone.
Consequently, a loop is also produced on the evolutionary tracks calculated using the prescription of \cite{ventura98} with values of $\zeta {\ge} 0.01$ and $\beta {\ge} 0.05$.
These values should be compared to $\zeta{=}0.03$ used by \cite{ventura05} to fit the cluster NGC 1866.
The loop is found to be very similar to the one obtained with instantaneous mixing, both in the trajectory in the HR diagram and in the time it takes.
\subsection{Stellar rotation}
\label{subsec:phy-rot}
Rotation can also play a role on how overshoot behaves.
Considering the aim of this work, the important questions to be addressed are the following: can rotation change considerably the time scale of mixing when compared to the burning time scale?
Is the efficiency of mixing affected by rotation?
\cite{browning04} presented three-dimensional simulations of core convection for a 2~$M_\odot$ star.
They found that the core has a differential rotation and an overturning period of the global scale convection of about one month.
On the other hand, they find that overshooting and penetrative convection are both effective in mixing the chemical composition.
The existence of an efficient overshooting layer for rotating stellar cores was also found by \cite{deupree98} using 2D simulations for a 8.75~$M_\odot$ star.
Consequently, there is no evidence (rather the contrary) that rotation will affect the results reported here, and in particular our assumption of instantaneous mixing.
It is worth noticing that \cite{meynet00} found for high mass stars ($9{\le}M/M_\odot{\le}120$) that rotation modifies the tracks in the HR diagram as moderate overshoot would do.
A similar conclusion is reached by \cite{noels04} for A stars, where the authors argue that rotation reduces the efficiency of convection and consequently the extent of overshooting.
However, the observational evidence \citep{herbst05} indicates that most PMS stars lose their angular momentum well before arriving at the main sequence, when they are already slow rotators.
In such cases suppression of overshooting due to rotation is weaker and so a loop may be expected in intermediate mass PMS slow rotators close to the main sequence.
\subsection{Dependence on stellar parameters}
\label{subsec:phy-par}
The critical value of the parameter $\alpha_{\mbox{\scriptsize ov}}$ depends strongly on the stellar mass (or the size of the convective core), decreasing for higher stellar masses.
For a 3~$M_{\odot}$ star, there is a loop in the evolutionary track on the HR diagram for $\alpha_{\mbox{\scriptsize ov}}{\ge}0.0593$, while for a 4~$M_{\odot}$ star the same thing happens for $\alpha_{\mbox{\scriptsize ov}}{\ge}0.0391$.
So, for higher mass stars a relatively small amount of overshooting is enough to produce a loop before the ZAMS.
This is due to the faster evolution of higher mass stars, leaving less time for the depletion of $\element[12]{C}$ in the regions above the MZ before the MZ grows again.
So $T_{\mbox{\scriptsize ign}}$, which we defined as the temperature at which the mean lifetime of $\element[12]{C}$ is $\tau_{\mbox{\scriptsize C12}}{=}0.75$~Myrs, should be replaced by the temperature at which $\tau_{\mbox{\scriptsize C12}}{=}0.27$~Myrs for a 3~$M_{\odot}$ star and $\tau_{\mbox{\scriptsize C12}}{=}0.13$~Myrs for a 4~$M_{\odot}$ star.
The region where the temperature is equal to $T_{\mbox{\scriptsize ign}}$ lies therefore nearer the centre of the star, and a relatively small amount of overshooting is enough to bring the BMZ to those regions.
The evolution is faster for higher masses; the loop lasts about $4{\times}10^5$~yrs for a 3~$M_{\odot}$ star and about $2{\times}10^5$~yrs for a 4~$M_{\odot}$ star, making a possible observation much more difficult.
An additional difficulty for the case of a 4~$M_{\odot}$ star is that the birthline (see \citealt{palla91,palla92}) is closer to the ZAMS; the loop should happen at an age of about $5{\times}10^5$~yrs after the birth of the star.
The effects of the protostar phase could be of some importance at such an early age.
Also, the assumption of instantaneous mixing becomes weaker as we move towards higher mass models.
In such cases the requirement to consistently follow the mixing and the nuclear reactions becomes more important, being necessary to calculate models with time dependent mixing coupled with nuclear burning (as done in \citealt{ventura05}, for post-main sequence stars) at the central core of these young stars.
\section{Expected observational evidences}
The observational confirmation that a particular star is in the loop can be straightforward, but the probability of finding such a star is very small as it depends on the relative time that a star spends in the loop.
Very young clusters with stars in the range of 1.5 to 3.0~$M_{\odot}$ arriving at the ZAMS are preferential targets for finding loop stars.
The location at the HR diagram (see Fig.~\ref{fig:loop_hr}) has to be complemented with the determination of the mass of the star.
Consequently the study of possible candidates in binary systems, with a smaller mass component to determine the age precisely, is the natural observational test for the existence and determination of the characteristics of loop-stars.
\begin{figure}
\centering
\includegraphics[height=\hsize,angle=-90]{\figdir/fig07}
\caption{Evolutionary tracks in the HR diagram for three stars.
One with $M{=}2~M_{\odot}$ (full line) having a loop due to the presence of overshoot, and two stars of smaller mass (dashed lines) without overshoot which cross the track of the more massive star.
The point ``0'' marks the beginning of the loop, the point ``4'' (in the same location on the HR diagram) marks the end.
} \label{fig:loop_hr}
\end{figure}
An alternative to studying binaries is to obtain seismic data on several individual stars in order to discriminate between stars with the same radius but different masses.
Such a test can be applied as long as a few oscillations can be identified in order to provide the observational value for the large frequency separation $\Delta_\ell$ for modes of degree $\ell$ and calculated at a reference frequency $\nu_{\mbox{\scriptsize r}}$ (e.g. \citealt{monteiro02,fernandes03}).
This quantity (for $\ell{=}0$) can differ by as much as 6~$\mu$Hz between a loop-star and a star without overshoot at the same location in the HR diagram.
If very precise seismic data are available the small frequency separation, $\delta_{\ell,\ell{+}2}$, can also be determined using modes of degree $\ell$ and $\ell{+}2$ (being calculated at the same reference frequency).
The small frequency separation, being sensitive to the central regions of the star (e.g. \citealt{monteiro02b}), can differ by as much as 30\% at $\nu_{\mbox{\scriptsize r}}{=}1200~\mu$Hz for location 2 in Fig.~\ref{fig:loop_hr}, providing a powerful additional test to the large frequency separation.
Another seismic indicator, mainly sensitive to the central regions of the stars \citep{roxburgh03,roxburgh05}, is the ratio $r_{02}{\equiv}\delta_{02}/\Delta_1$ of the small separation (using modes of degree 0 and 2) and the large separation (for modes of degree 1).
This quantity can provide an even more definite discriminant for stars with strong structural differences in the interior but with the same radius.
This is the case between loop-stars and the corresponding star without overshoot at the same location in the HR diagram.
The values of $r_{02}$ for models occupying the same location in the HR diagram show differences that can be as high as 0.025 ($\sim$26\%).
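The three seismic diagnostics used here (the large separation, the small separation, and their ratio $r_{02}$) follow directly from a table of mode frequencies $\nu_{n,\ell}$. The sketch below illustrates the definitions; the frequency values are hypothetical, not model output:

```python
# Hypothetical mode frequencies nu[l][n] in microHz (NOT real data),
# indexed by spherical degree l and radial order n.
nu = {
    0: {10: 1130.0, 11: 1195.0, 12: 1260.0},
    1: {10: 1162.0, 11: 1227.0, 12: 1292.0},
    2: {9: 1124.0, 10: 1189.0, 11: 1254.0},
}

def large_separation(nu, l, n):
    """Delta_l(n) = nu_{n,l} - nu_{n-1,l}"""
    return nu[l][n] - nu[l][n - 1]

def small_separation(nu, l, n):
    """delta_{l,l+2}(n) = nu_{n,l} - nu_{n-1,l+2}"""
    return nu[l][n] - nu[l + 2][n - 1]

def r02(nu, n):
    """r_02 = delta_{0,2} / Delta_1, mainly sensitive to the stellar core."""
    return small_separation(nu, 0, n) / large_separation(nu, 1, n)

print(large_separation(nu, 0, 11))  # 65.0 microHz
print(small_separation(nu, 0, 11))  # 6.0 microHz
print(r02(nu, 11))                  # ~0.092
```

In practice the separations would be evaluated, or fitted, at the reference frequency $\nu_{\mbox{\scriptsize r}}$ before comparing a loop-star with its non-overshoot counterpart.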
\section{Conclusions}
We have shown that a moderate amount of overshooting in evolution models of young intermediate mass stars can cause a loop in the evolutionary tracks on the HR diagram just before the ZAMS.
This is a consequence of using a more precise numerical scheme for calculating the time-step together with the best up-to-date physics.
The loop corresponds, for a 2~$M_{\odot}$ star, to a drop of about 35\% in luminosity and about 1000~K in effective temperature; the loop lasts about 1.5~Myrs.
A detailed description of why models using the full nuclear network predict such a loop has been presented.
This analysis indicates that such a behaviour may be expected if the standard formulations for core overshoot in stellar models are representative of the average role convective overshooting has in the early evolution of intermediate mass stars.
The major effect of the existence of the loop is on the age of the star at the ZAMS, which is higher by as much as 20\% relative to the evolution without the loop.
The observational identification of loop-stars can provide definite tests on some of the key aspects of the physics that determine the evolution in early and main sequence intermediate mass stars.
In particular, these include the existence and extent of overshoot at the core of these stars and the reaction rates for the branches of the CNO cycle responsible for producing the loop.
Such a combination of precise observational constraints makes the existence of loop-stars an interesting observational opportunity, worth pursuing in order to test the modelling of stellar structure and evolution at these early stages.
The observational challenge is the short duration of the loop; less than about 2~Myrs.
The loop is even shorter for higher stellar masses, for which the amount of overshooting needed to produce a loop becomes smaller, making the loop more likely.
Further analysis on how overshooting and nuclear reactions are coupled in the higher mass models is required in order to identify if the loop can be expected in this regime.
With the forthcoming asteroseismic space missions \cite[e.g.][]{baglin03} and ongoing ground based campaigns for the seismic study of stars across the HR diagram, there will be a large set of young stars whose seismic properties will be determined with sufficiently high precision to make it possible to use the seismic analysis we have discussed here.
There are in particular campaigns planned for the seismic observation of young clusters whose ages are estimated to be around the expected values for the existence of loop stars (from about 5 to 12~Myrs).
Consequently, it may be possible to confirm the existence of loop stars in the not so distant future.
The implications of finding a few such stars in a young cluster will be of great relevance for the modelling of the early evolution of intermediate mass stars.
\section*{Acknowledgements}
The authors are grateful to an anonymous referee for valuable comments and suggestions that helped to improve the manuscript.
This work was supported in part by the Portuguese {\it Funda\c{c}\~ao para a Ci\^encia e a Tecnologia} through grant (JPM) {\scriptsize SFRH/BD/9228/2002}, and projects {\scriptsize POCI/CFE-AST/55691/2004} (MJM) and {\scriptsize POCI/CTE-AST/57610/2004} (JPM) from POCI, with funds from the European programme FEDER.
This work has been performed using the CESAM stellar evolution code available at {\small\tt http://www.obs-nice.fr/cesam/}.
\section{Introduction}
\label{Section:intro}
Solar activity plays a significant role in influencing the interplanetary medium and space weather around Earth and all the other planets of the solar system \citep{Schwenn2006}. Remote-sensing instruments on-board heliophysics missions can provide a wealth of information on the Sun’s activity, primarily via capturing the emission of light from the multi-layered solar atmosphere, thereby leading to the inference of various physical quantities such as magnetic fields, plasma velocities, temperature and emission measure, to name a few.
\begin{figure*}
\includegraphics[width=\textwidth]{Channels_Horizontal.pdf}
\caption{Set of images to exemplify how degradation affects the AIA channels. The two sets are composed of seven images from different EUV channels. From left to right: AIA $94$~\AA, AIA $131$~\AA, AIA $304$~\AA, AIA $335$~\AA, AIA $171$~\AA, AIA $193$~\AA, and AIA $211$~\AA. The top row corresponds to images from May $13^{th}$, $2010$ and the bottom row shows images from August $31^{st}$, $2019$, with no degradation correction. The $304$~\AA~channel images are in log-scale due the severe degradation.}
\label{fig:autocalibrate_model_problem}
\end{figure*}
NASA currently manages the Heliophysics System Observatory (HSO), which consists of a group of satellites that constantly monitor the Sun, its extended atmosphere, space environments around Earth and other planets of the solar system \citep{HSO}. One of the flagship missions of HSO is the Solar Dynamics Observatory \citep[SDO, ][]{SDO_primary}. Launched in $2010$, SDO has been instrumental in monitoring the Sun's activity and providing a high volume of valuable scientific data every day with a high temporal and spatial resolution. It has three instruments onboard: the Atmospheric Imaging Assembly \citep[AIA,][]{AIA}, which records high spatial and temporal resolution images of the Sun in the ultraviolet (UV) and extreme UV (EUV); the Helioseismic \& Magnetic Imager \citep[HMI,][]{HMI}, that provides maps of the photospheric magnetic field, solar surface velocity, and continuum filtergrams; and the EUV Variability Experiment \citep[EVE,][]{EVE}, which measures the solar EUV spectral irradiance.
Over the past decade, SDO has played a central role in advancing our understanding of the fundamental plasma processes governing the Sun and space weather. This success can mainly be attributed to its open-data policy and a consistent high data-rate of approximately two terabytes of scientific data per day. The large volume of data accumulated over the past decade (over 12 petabytes) provides a fertile ground to develop and apply novel machine learning (ML) based data processing methods. Recent studies, such as, predicting solar flares from HMI vector magnetic fields \citep{Bobra_2015}, creating high-fidelity virtual observations of the solar corona \citep[\citealt{salvatelli2019} \&][]{Cheung2019}, forecasting far side magnetograms from the Solar Terrestrial Relations Observatory \citep[STEREO, ][]{Kaiser_2008} EUV images \citep{Kim_NatAs_2019}, super-resolution of magnetograms \citep{jungbluth-2019-super}, and mapping EUV images from AIA to spectral irradiance measurements \citep{Szenicereaaw6548}, have demonstrated the immense potential of ML applications in solar and heliophysics. In this paper, we leverage the availability of such high-quality continuous observations from SDO and apply ML techniques to address the instrument calibration problem.
One of the crucial issues that limit the diagnostic capabilities of the SDO-AIA mission is the degradation of sensitivity over time. Sample images from the seven AIA EUV channels in Fig.~\ref{fig:autocalibrate_model_problem} show an example of such a deterioration. The top row shows images observed during the early days of the mission, on 13 May 2010, and the bottom row shows the corresponding images observed more recently, on 31 August 2019, scaled within the same intensity range. It is clear that the images in the bottom row appear significantly dimmer than their top-row counterparts. In some channels, especially $304$~\AA\ and $335$~\AA, the effect is pronounced.
The dimming effect observed among the channels is due to the temporal degradation that EUV instruments in space are known to suffer, which diminishes the overall instrument sensitivity with time~\citep[e.g.,][]{BenMoussa_etal_2013}. The possible causes include either the out-gassing of organic materials in the telescope structure, which may deposit on the optical elements \citep{Jiao_2019}, or the decrease in detector sensitivity due to exposure to EUV radiation from the Sun.
In general, first-principle models predicting the sensitivity degradation as a function of time and wavelength are not sufficiently well-constrained for maintaining the scientific calibration of such instruments. To circumvent this problem, instrument scientists have traditionally relied on empirical techniques, such as considering sources with known fluxes, the so-called "standard candles." However, no standard candles exist in the solar atmosphere at these wavelengths, since the solar corona is continuously driven and structured by evolving magnetic fields, which cause localized and intermittent heating. This causes even the quiet-Sun brightness in the EUV channels to vary significantly depending on the configuration of the small-scale magnetic fields \citep[][and the references therein]{2015A&A...581A..51S}. On the one hand, the Sun may not be bright enough to appear in the hotter EUV channels such as AIA 94~\AA. On the other hand, active regions (ARs) have EUV fluxes that can vary by several orders of magnitude depending on whether they are in an emerging, flaring, or decaying state. Moreover, the brightness depends on the complexity of the AR's magnetic field \citep{2015LRSP...12....1V}. Finally, ARs in the solar corona can evolve on time scales ranging from a few minutes to several hours, leading to obvious difficulties in obtaining a standard flux for the purpose of calibration.
Current state-of-the-art methods to compensate for this degradation rely on cross-calibration between AIA and EVE instruments. The calibrated measurement of the full-disk solar spectral irradiance from EVE is passed through the AIA wavelength (filter) response function to predict the integrated AIA signal over the full field of view. Later, the predicted band irradiance is compared with the actual AIA observations \citep{Boerner2013}. The absolute calibration of SDO-EVE is maintained through periodic sounding rocket experiments \citep{EVE_rocket} that use a near-replica of the instrument on-board SDO to gather a calibrated observation spanning the short interval of the suborbital flight (lasting a few minutes). A comparison of the sounding rocket observation with the satellite instrument observation provides an updated calibration, revealing long-term trends in the sensitivities of EVE and, thus, of AIA.
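The cross-calibration step just described, folding a calibrated spectral irradiance through a channel wavelength response to predict the band-integrated signal, can be sketched as follows. The grids, spectrum, and response function below are illustrative toys, not real EVE or AIA data:

```python
import numpy as np

def predicted_band_signal(wavelength, spectrum, response):
    """Fold a calibrated spectral irradiance through a channel wavelength
    response to predict the band-integrated signal (trapezoidal rule)."""
    y = spectrum * response
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wavelength)))

# Illustrative toy grids (NOT real EVE spectra or AIA response tables):
wl = np.linspace(90.0, 100.0, 101)          # wavelength grid, Angstrom
spec = np.exp(-((wl - 94.0) / 1.5) ** 2)    # toy spectral feature
resp = np.exp(-((wl - 94.0) / 2.0) ** 2)    # toy 94 A channel response
pred = predicted_band_signal(wl, spec, resp)

# A degradation estimate follows from the ratio of observed to predicted
# band signal; here the "observation" is dimmed by hand for illustration.
observed = 0.7 * pred
print(observed / pred)  # 0.7
```

The ratio of the actual AIA band signal to this prediction is what tracks the channel degradation over time.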
Sounding rockets are undoubtedly crucial; however, the sparse temporal coverage (there are flights roughly every two years) and the complexities of inter-calibration are also potential sources of uncertainty in the inter-instrument calibration. Moreover, the inter-calibration analysis has long latencies, of months and sometimes years, between the flights and the moment the calibration can be updated, due to the analysis of the data obtained during the flight. In addition, this kind of calibration is limited to observations from Earth, and thus cannot easily be used to calibrate missions in deep space (e.g., STEREO).
In this paper, we focus on automating the correction of the sensitivity degradation of different AIA wavebands by using AIA information exclusively and adopting a deep neural network \citep[DNN, ][]{goodfellow2016deep} approach, which exploits the spatial patterns and cross-spectral correlations among the observed solar features in multi-wavelength observations of AIA. We compare our approach with a non-ML method motivated by solar physics heuristics, which we call the baseline model. We evaluate the predicted degradation curves with the ones obtained through the sounding rocket cross-calibration described above. To the best of our knowledge, this is the first attempt to develop a calibration method of this kind.\footnote{We presented an early-stage result of this work as an extended abstract at the NeurIPS workshop on ML and Physical Sciences 2019 (which has no formal proceedings) \citep[NeurIPS 2019, ][]{neuberg2019} where we described some preliminary results in this direction. In this paper, we extend the abstract with full analyses and discussion of several important issues, such as the performance on the real degradation curve and the limitations of the presented models, that are both crucial to evaluate the applicability of this ML-based technique.} We believe that the approach developed in this work could potentially remove a major impediment to developing future HSO missions that can deliver solar observations from different vantage points beyond Earth's orbit.
The paper is structured as follows: in Section \ref{Section:data}, we present and describe our dataset. In Section \ref{section:methodology} we illustrate the technique and how it has been developed. Namely, in \S~\ref{section:formulation} we state the hypothesis and propose a formulation of the problem, in \S~\ref{section:convolutional} we present the CNN models, in \S~\ref{Section:Analysis} we describe the training process and the evaluation, in \S~\ref{section:inter_channel} we probe the multi-channel relationship and in \S~\ref{section:model-benchmark-understanding} we reconstruct the temporal degradation curve. Furthermore, in Section \ref{section:baseline} we present the baseline, followed by Section \ref{Section:Results} where we present and discuss the results. The concluding remarks are in Section \ref{section:summary}.
\section{Data description and pre-processing}
\label{Section:data}
We use for this study the pre-processed SDO-AIA dataset from \citet[][hereafter referred to as SDOML]{SDOML}. This dataset is ML-ready to be used for any kind of application related to the AIA and HMI data, and it consists of a subset of the original SDO data ranging from $2010$ to $2018$. It comprises the $7$~EUV channels, $2$~UV channels from AIA, and vector magnetograms from HMI. The data from the two SDO instruments are temporally aligned, with cadences of $6$ minutes for AIA (instead of the original $12$ seconds) and EVE and $12$ minutes for HMI. The full-disk images are downsampled from $4096 \times 4096$ to $512 \times 512$ pixels and have an identical spatial sampling of $\thicksim$ $4\farcs8$ per pixel.
In SDOML, the AIA images have been compensated for the exposure time and corrected for instrumental degradation over time using piecewise-linear fits to the V8 corrections released by the AIA team in November 2017.\footnote{Available at \url{https://aiapy.readthedocs.io/en/stable/generated/gallery/instrument\_degradation.html\#sphx-glr-generated-gallery-instrument-degradation-py}} These corrections are based on cross-calibration with SDO-EVE, where the EVE calibration is maintained by periodic sounding rocket underflights (including, in the case of the V8 corrections, a flight on 1 June 2016). Consequently, the resulting dataset offers images where changes in pixel brightness are directly related to the state of the Sun rather than instrument performance.
In this paper, we applied a few additional pre-processing steps. First, we downsampled the SDOML dataset to $256\times256$ pixels from $512\times512$ pixels. We established that $256\times256$ is a sufficient resolution for the predictive task of interest (inference of a single coefficient), and the reduced size enabled quicker processing and more efficient use of the computational resources. Secondly, we masked the off-limb signal ($r>R_\odot$) to avoid possible contamination due to the telescope vignetting. Finally, we re-scaled the brightness intensity of each AIA channel by dividing the image intensity by a channel-wise constant factor. These factors represent the approximate average AIA data counts in each channel and across the period from 2011 to 2018 \citep[derived from][]{SDOML}, and this re-scaling is implemented to set the mean pixel values close to unity in order to improve the numerical stability and the training convergence of the CNN. Data normalization such as this is standard practice in NNs \citep{goodfellow2016deep}. The specific values for each channel are reported in Appendix~\ref{section:appendix_average}.
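A minimal sketch of these additional pre-processing steps is given below. The solar-radius value, the scaling constant, and the input image are placeholders, not the actual values used for SDOML:

```python
import numpy as np

def preprocess(img_512, mean_count, r_sun_pix=100.0):
    """Sketch of the extra pre-processing applied to one SDOML channel image.

    img_512    : (512, 512) exposure-corrected AIA image
    mean_count : channel-wise average data count used for rescaling (placeholder)
    r_sun_pix  : solar radius in pixels of the 256x256 image (placeholder)
    """
    # 1) Downsample 512x512 -> 256x256 by 2x2 block averaging.
    img = img_512.reshape(256, 2, 256, 2).mean(axis=(1, 3))

    # 2) Mask the off-limb signal (r > R_sun) to avoid vignetting contamination.
    yy, xx = np.mgrid[:256, :256]
    r = np.hypot(xx - 127.5, yy - 127.5)
    img = np.where(r <= r_sun_pix, img, 0.0)

    # 3) Rescale so that the mean pixel value is close to unity.
    return img / mean_count

img = preprocess(np.full((512, 512), 200.0), mean_count=100.0)
print(img.shape)  # (256, 256)
```

The channel-wise rescaling in step 3 is the normalization that stabilizes the CNN training described in the next section.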
\section{Methodology}
\label{section:methodology}
\subsection{Formulation of the problem}
\label{section:formulation}
It is known that some bright structures on the Sun are observed across different wavelengths. Figure~\ref{fig:morphology} shows a good example, from $07$ April $2015$, of a bright structure at the center of all seven EUV channels from AIA. Based on this cross-channel structure, we establish a hypothesis divided into two parts. The first is that there is a relationship between the morphological features and the brightness of solar structures in a single channel (e.g., typically, dense and hot loops over ARs). The second is that such a relationship between the morphological features and the brightness of solar structures can be found across multiple channels of AIA. We hypothesize that both these relationships can be used to estimate the dimming factors, and that a deep learning model can automatically learn these inter- and cross-channel patterns and exploit them to accurately predict the dimming factor of each channel.
\begin{figure*}
\includegraphics[width=\textwidth]{Morphology_AIA.pdf}
\caption{A co-located set of images of the seven EUV channels of AIA to exemplify structures that are observed across different wavelengths. From left to right: AIA $94$~\AA, AIA $131$~\AA, AIA $304$~\AA, AIA $335$~\AA, AIA $171$~\AA, AIA $193$~\AA, and AIA $211$~\AA.}
\label{fig:morphology}
\end{figure*}
To test our hypothesis, we consider a vector $\vec{C} = \{C_i, i\in [1,...,n]\}$ of multi-channel, synchronous SDO/AIA images, where $C_i$ denotes the $i$-th channel image in the vector, and a vector $\vec{\alpha} = \{\alpha_i, i \in [1,...,n]\}$, where each $\alpha_i$ is a dimming factor independently sampled from the continuous uniform distribution on the interval $[0.01, 1.0]$. We choose an upper bound of $\alpha_i = 1$, since we only consider dimming of the images and not enhancements. We then create the corresponding vector of dimmed images $\vec{D} = \{\alpha_i C_i, i\in [1,...,n]\}$. Note that the dimming factors $\alpha_i$ are applied uniformly per channel and are not spatially dependent; the spatial dependence of the degradation is assumed to be accounted for by regularly updated flat-field corrections applied to AIA images. Our goal in this paper is to find a deep learning model $M: \vec{D} \rightarrow \vec{\alpha}$ that retrieves the vector of multi-channel dimming factors $\vec{\alpha}$ from the observed SDO-AIA vector $\vec{D}$.
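A minimal sketch of this dimming procedure (illustrative only; the array shapes and random generator are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def dim_images(C, low=0.01, high=1.0, rng=rng):
    """Given a stack C of n synchronous channel images (n, H, W), draw
    one dimming factor per channel from U[low, high) and apply it
    uniformly over the image (no spatial dependence).
    Returns the dimmed stack D and the ground-truth alpha vector."""
    n = C.shape[0]
    alpha = rng.uniform(low, high, size=n)
    D = alpha[:, None, None] * C
    return D, alpha
```
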
\subsection{Convolutional Neural Network Model}
\label{section:convolutional}
Deep learning is a very active sub-field of machine learning that focuses on specific models called deep neural networks (DNNs). A DNN is a composition of multiple layers of linear transformations and non-linear element-wise functions \citep{goodfellow2016deep}. One of the main advantages of deep learning is that it can learn from the data the best feature representation for a given task without the need to manually engineer such features. DNNs have produced state-of-the-art results in many complex tasks including object detection in images \citep{he2016deep}, speech recognition \citep{amodei2016deep} and synthesis \citep{oord2016wavenet}, and translation between languages \citep{wu2016google}. A DNN expresses a differentiable function $F_{\vec\theta}: \mathcal{X} \to \mathcal{Y}$ that can be trained to perform complex non-linear transformations by tuning parameters $\vec{\theta}$ using gradient-based optimization of a loss (also known as objective or error) function $L(\vec{\theta}) = \sum_i l(F_{\vec\theta}(\vec{x}_i), \vec{y}_i)$ for a given set of inputs and desired outputs $\{\vec{x}_i, \vec{y}_i\}$.
For the degradation problem summarized in Section~\ref{section:formulation}, we consider two CNN architectures \citep{lecun1995convolutional}. The first architecture does not exploit the spatial dependence across multi-channel AIA images, therefore ignoring any possible relationship that different AIA channels might have, and it is designed to explore only the relationship across different structures in a single channel. This architecture is a test of the first hypothesis in Section~\ref{section:formulation}. The second architecture is instead designed to exploit possible cross-channel relationships during training, and it tests our second hypothesis: that solar surface features appearing across the different channels will make a multi-channel CNN architecture more effective than a single-channel CNN that only exploits inter-channel structure correlations. The first model considers a single channel as input in the form of a tensor with shape $1\times256\times256$ and has a single degradation factor $\alpha$ as output. The second model takes multiple AIA channel images simultaneously as an input with shape $n\times256\times256$ and outputs $n$ degradation factors $\vec{\alpha} = \{\alpha_i, i \in [1,...,n]\}$, where $n$ is the number of channels, as indicated in Fig.~\ref{fig:autocalibrate_CNN_arch}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{my_arch_one.png}
\includegraphics[width=0.5\textwidth]{my_arch_multi.png}
\caption{The CNN architectures used in this paper. Top: the single-channel architecture, which takes a single-wavelength input. Bottom: the multi-channel architecture, which takes a multi-wavelength input. Both are composed of two blocks of a convolutional layer, ReLU activation function, and max pooling layer, followed by a fully connected (FC) layer and a final sigmoid activation function. Figures constructed with \cite{haris_iqbal_2018_2526396}.}
\label{fig:autocalibrate_CNN_arch}
\end{figure}
The single- and multi-channel architectures are shown in Fig.~\ref{fig:autocalibrate_CNN_arch}. They both consist of two blocks of a convolutional layer followed by a ReLU (rectified linear unit) activation function \citep{Nair:2010:RLU:3104322.3104425} and a max pooling layer. These are followed by a fully connected (FC) layer and a final sigmoid activation function that is used to output the dimming factors. The first convolution block has $64$ filters, while the second has $128$. In both convolutional layers, the kernel size is $3$, meaning the filters applied to the image are $3\times3$ pixels, and the stride is $1$, meaning that the kernel slides through the image one pixel per step. No padding is applied, i.e., no additional pixels are added at the border of the image, so each convolution slightly reduces the spatial size. The resulting total number of learnable parameters (LP) is $167,809$ for the single-channel model and $731,143$ for the multi-channel model. The final configurations of the models' architectures were obtained through a grid search over different hyperparameters and layer configurations. More details of the architectures can be found in Appendix \ref{section:appendix_archtectures}.
We use the open-source software library PyTorch \citep{paszke_2017} to implement the training and inference code for the CNN. The source code to produce this paper is publicly available at \cite{autocal_code} and \url{https://github.com/vale-salvatelli/sdo-autocal_pub}.
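As a minimal PyTorch sketch, the multi-channel architecture described above could be organized as follows. The layer sequence (two conv-ReLU-pool blocks with $64$ and $128$ filters, kernel $3$, stride $1$, no padding, then FC and sigmoid) follows the text, but the pooling and FC sizing here are illustrative assumptions; the exact configuration is given in the appendix.

```python
import torch
import torch.nn as nn

class MultiChannelCNN(nn.Module):
    """Sketch of the multi-channel model: two (conv 3x3, stride 1,
    no padding -> ReLU -> max pool) blocks, an FC layer, and a final
    sigmoid that outputs one dimming factor per channel."""
    def __init__(self, n_channels=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 64, kernel_size=3, stride=1, padding=0),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=0),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Global average pooling keeps the FC layer small and the model
        # input-size agnostic; this is an assumption, not necessarily
        # the paper's exact layout.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128, n_channels)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        # Sigmoid constrains the predicted dimming factors to (0, 1).
        return torch.sigmoid(self.fc(x))
```
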
\subsection{Training Process}
\label{Section:Analysis}
The actual degradation factors $\alpha_i(t)$ (where $t$ is the time since the beginning of the SDO mission, and $i$ is the channel) trace a single trajectory in an $n$-dimensional space starting with $\alpha_i(t=0)=1$ $\forall$ $i\in[1,...,n]$ at the beginning of the mission. During training, we intentionally exclude this time-dependence from the model. This is done by ($1$) using the SDOML dataset, which has already been corrected for degradation effects, ($2$) not assuming any relation between $t$ and $\vec{\alpha}$ and not using $t$ as an input feature, and ($3$) temporally shuffling the data used for training. As presented in Section~\ref{section:formulation}, we degrade each set of multi-channel images~$\vec{C}$ by a unique $\vec{\alpha} = \{\alpha_i, i \in [1,...,n]\}$. We then devised a strategy such that from one training epoch to the next, the same set of multi-channel images can be dimmed by a completely independent set of $\vec{\alpha}$ dimming factors. This is a data augmentation and regularization procedure that allows the model to generalize and perform well in recovering dimming factors over a wide range of solar conditions.
The training set comprises multi-channel images~$\vec{C}$ obtained every six hours during the months of January to July from $2010$ to $2013$, amounting to a total of $18,970$ images over $2,710$ timestamps. The model was trained for $1,000$ epochs using $64$ samples per minibatch. With minibatches, we do not use the full dataset to compute the gradient before updating the network's weights; instead, the weights are corrected while the model is still going through the data. This lowers the computational cost while keeping the variance of the gradient estimates acceptable. As a consequence of our data augmentation strategy, after $1,000$ epochs the model has been trained on $2,710,000$ unique (input, output) pairs, since we used a different set of $\vec{\alpha}$ each epoch. We used the Adam optimizer \citep{Optimizer} with an initial learning rate of $0.001$, and the mean squared error (MSE) between the predicted degradation factor ($\alpha_P$) and the ground truth value ($\alpha_{GT}$) was used as the training objective (loss).
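The training recipe, with a fresh set of dimming factors drawn every epoch, can be sketched as follows. A tiny stand-in model replaces the CNN, and the data sizes and 32-sample minibatch are illustrative, not the values used in this work; only the Adam/MSE setup and the per-epoch re-sampling follow the text.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in for the CNN: the point here is the training recipe.
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

C = torch.rand(64, 1, 4, 4) + 0.5   # undegraded "images" (toy data)

for epoch in range(5):
    # Data augmentation: a fresh alpha per sample every epoch, so the
    # same images yield new (input, target) pairs on each pass.
    alpha = torch.empty(64, 1).uniform_(0.01, 1.0)
    D = alpha.view(-1, 1, 1, 1) * C
    for i in range(0, 64, 32):       # minibatches (illustrative size)
        opt.zero_grad()
        pred = model(D[i:i + 32])
        loss = loss_fn(pred, alpha[i:i + 32])
        loss.backward()
        opt.step()
```
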
The test dataset, i.e., the sample of data used to provide an unbiased evaluation of a model fit on the training dataset, holds images obtained during the months of August to October between $2010$ and $2013$, again every six hours, totaling $9,422$ images over $1,346$ timestamps. The split by month between the training and test data has a two-fold objective: ($1$) it prevents the bias due to the variation in the solar cycle, thereby allowing the model to be deployed in future deep space missions forecasting $\alpha$ for future time steps, and ($2$) it ensures that the same image is never present in both datasets (any two images adjacent in time are approximately the same), leading to a more precise and comprehensive evaluation metric.
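The month-based split can be expressed as a simple rule (the month boundaries follow the text; the function name is ours):

```python
from datetime import datetime

def split_role(t: datetime) -> str:
    """Month-based split: Jan-Jul -> train, Aug-Oct -> test;
    the remaining months are unused in this setup."""
    if 1 <= t.month <= 7:
        return "train"
    if 8 <= t.month <= 10:
        return "test"
    return "unused"
```
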
\subsection{Toy Model Formulation to Probe the Multi-Channel Relationship}
\label{section:inter_channel}
Using the described CNN model, we first tested our hypothesis on a toy dataset, which is simpler than the SDOML dataset. We tested whether the physical relationship between the morphology and brightness of solar structures (e.g., ARs, coronal holes) across multiple AIA channels would help the model prediction. For this purpose, we created artificial solar images in which a $2$D Gaussian profile (Equation \ref{E-relationship}) is used to mimic the Sun as an idealized bright disk with some center-to-limb variation:
\begin{equation}
\label{E-relationship}
C_i(x,y) = A_i \exp\left(-\frac{x^2+y^2}{\sigma^{2}}\right),
\end{equation}
\noindent where $A_i$ is the amplitude of channel $i$, the profile is centered at ($0,0$) with characteristic width $\sigma$, and $x$ and $y$ are the image coordinates. $\sigma$ is sampled from a uniform distribution between $0$ and $1$. These images are not meant to be a realistic representation of the Sun. However, as formulated in Eq.~\ref{E-relationship}, they include two qualities we posit to be essential for allowing our auto-calibration approach to be effective. The first is the correlation of intensities across wavelength channels (i.e., ARs tend to be bright in multiple channels). The second is the existence of a relationship between the spatial morphology of EUV structures and their brightness. This toy dataset is designed so that we can independently test how performance is influenced by the presence of (a) a relation between brightness $A_i$ and size $\sigma$, (b) a relation between the $A_i$ of the various channels, or both (a) and (b). To evaluate this test, we use the MSE loss and expect the presence of both (a) and (b) to minimize this loss.
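A minimal generator for this toy dataset might look as follows (the image size, amplitude ranges, and random generator are illustrative assumptions; the two switches toggle relations (a) and (b) above):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_sun(n_channels=7, size=64, coupled_size=True,
            coupled_channels=True, rng=rng):
    """One multi-channel toy 'Sun': a 2D Gaussian disk per channel.
    coupled_size:     A_0 proportional to sigma (brightness-size relation).
    coupled_channels: A_i = A_0**i (cross-channel relation); otherwise
                      each channel amplitude is drawn independently."""
    x = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(x, x)
    sigma = rng.uniform(0.1, 1.0)
    A0 = sigma if coupled_size else rng.uniform(0.1, 1.0)
    imgs = []
    for i in range(n_channels):
        Ai = A0 ** (i + 1) if coupled_channels else rng.uniform(0.1, 1.0)
        imgs.append(Ai * np.exp(-(xx**2 + yy**2) / sigma**2))
    return np.stack(imgs)
```
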
The test results of the multi-channel model with artificial solar images are shown in Table~\ref{table:toy_problem_metrics}. We can see that when $A_0 \propto \sigma$ (linear relation between size and brightness) and $A_i = A_0^i$ (i.e., dependence across channels; here the $i$ superscript denotes $A_0$ raised to the $i$-th power), the CNN solution delivered the minimal MSE loss (top-left cell). When we eliminated either the cross-channel relationship (i.e., each $A_i$ was randomly chosen) or the relation between brightness $A_i$ and size $\sigma$, the performance suffered and the MSE loss increased. Ultimately, when both $A_i$ and $\sigma_i$ were randomly sampled for all channels, the model performed equivalently to random guessing/regressing (bottom-right cell), yielding the greatest loss of all tests. These experiments confirm our hypothesis and indicate that a multi-channel input solution will outperform a single-channel input model in the presence of relationships between the morphology of solar structures and their brightness across the channels.
\begin{table}
\centering
\caption{The mean squared error (MSE) for all combinations proposed in Section \ref{section:inter_channel}. The top-left cell is for the scenario in which there exist both a cross-channel correlation and a relation between brightness and size of the artificial Sun. The top-right cell shows the loss with a cross-channel correlation but without the relation between brightness and size. The bottom-left cell shows the loss when there is no cross-channel correlation but there is a relation between brightness and size. The bottom-right cell presents the loss when both parameters are chosen freely.}
\label{table:toy_problem_metrics}
\begin{tabular}{|cc|c|l|}
\hline
&
&
\multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Brightness and size\\ correlation\end{tabular}} \\ \cline{3-4}
& & Yes & No \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Cross-channel\\ correlation\end{tabular}}} &
Yes &
0.017 &
0.023 \\ \cline{2-4}
\multicolumn{1}{|c|}{} & No & 0.027 & 0.065 \\ \hline
\end{tabular}
\end{table}
\subsection{Reconstruction of the Degradation Curve using the CNN Models}
\label{section:model-benchmark-understanding}
In order to evaluate the model on a dataset different from the one used in the training process, we use both the single-channel and multi-channel CNN architectures to recover the instrumental degradation over the entire period of SDO (from $2010$ to $2020$). To produce the degradation curve for both CNN models, we use a dataset equivalent to the SDOML dataset but without compensating the images for degradation\footnote{The SDOML dataset not corrected for degradation over time is available at \url{https://zenodo.org/record/4430801\#.X_xuPOlKhmE} } \citep{SDOML_degraded}, covering data from 2010 to 2020. All other pre-processing steps, including masking the solar limb, re-scaling the intensity, etc., remain unchanged. The CNN's estimates of degradation are then compared to the degradation estimates obtained from cross-calibration with irradiance measurements, computed by the AIA team using the technique described in \citet{Boerner2013}.
The cross-calibration degradation curve relies on the daily ratio of the AIA observed signal to the AIA signal predicted by SDO-EVE measurements up through the end of EVE MEGS-A operations in May $2014$. From May $2014$ onwards, the ratio is computed using the FISM model \citep{Chamberlin2020} in place of the EVE spectra. FISM is tuned to SDO-EVE, so the degradation derived from FISM agrees with the degradation derived from EVE through $2014$. However, the uncertainty in the correction derived from FISM is greater than that derived from EVE observations, primarily due to the reduced spectral resolution and fidelity of FISM compared to SDO-EVE. While the EVE-to-AIA cross-calibration introduced errors of approximately $4\%$ (on top of the calibration uncertainty intrinsic to EVE itself), the FISM-to-AIA cross-calibration has errors as large as $25\%$.
We examined both $V8$ and $V9$ of the cross-calibration degradation curve. The major change from the $V8$ calibration (released in November $2017$, with linear extrapolations extending the observed trend after this date) to $V9$ (July $2020$) is based on the analysis of the EVE calibration sounding rocket flown on $18$ June $2018$. The analysis of this rocket flight resulted in an adjustment of the trend of all channels during the interval covered by the FISM model (from May $2014$ onwards), as well as a $20\%$ shift in the $171$~\AA\ channel normalization early in the mission. These changes become clearer when looking at Fig.~\ref{fig:degradation_curve} in Sect.~\ref{Section:Results}. The uncertainty of the degradation correction during the period prior to May $2014$, and on the date of the most recent EVE rocket flight, is dominated by the $\sim10\%$ uncertainty of the EVE measurements themselves. For periods outside of this (particularly periods after the most recent rocket flight), the uncertainty is a combination of the rocket uncertainty and the errors of FISM in the AIA bands (approximately $25\%$).
Moreover, we obtain and briefly analyze the feature maps from the second max pooling layer of the multi-channel model. A feature map is simply the output of one filter applied to the input. Inspecting the feature maps expands our understanding of how the model operates: it sheds light on the image processing and provides insight into the internal representations that combine and transform information from seven different EUV channels into the seven dimming factors.
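One standard way to extract such feature maps in PyTorch is a forward hook on the layer of interest. The sketch below uses a small stand-in network with the same layer layout as the text describes, so the shapes are illustrative:

```python
import torch
import torch.nn as nn

# Stand-in network with two conv -> ReLU -> max-pool blocks, mirroring
# the described layout; the hook pattern is what matters here.
net = nn.Sequential(
    nn.Conv2d(7, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3), nn.ReLU(), nn.MaxPool2d(2),
)

feature_maps = {}

def grab(module, inputs, output):
    # Detach so the stored activations do not keep the graph alive.
    feature_maps["pool2"] = output.detach()

# Index 5 is the second MaxPool2d in this Sequential.
handle = net[5].register_forward_hook(grab)
_ = net(torch.zeros(1, 7, 64, 64))
handle.remove()
```

Each of the $128$ slices of `feature_maps["pool2"]` is one feature map that can then be visualized.
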
\section{Baseline Model}
\label{section:baseline}
We compare our DNN approach to a baseline motivated by the assumption that the EUV intensity outside magnetic ARs, i.e., the quiet Sun, is invariant in time \citep[a similar approach is also considered for the in-flight calibration of some UV instruments, e.g.][]{Schule1998}. A similar assumption in measuring the instrument sensitivity of the Solar \& Heliospheric Observatory \citep[SOHO, ][]{soho} CDS was also adopted by \citet{2010A&A...518A..49D}, where they assumed that the irradiance variation in the EUV wavelengths is mainly due to the presence of ARs on the solar surface and that the mean irradiance of the quiet Sun is essentially constant over the solar cycle. Though there is evidence of small-scale variations in the intensity of the quiet Sun when observed in the transition region \citep{2015A&A...581A..51S}, their contribution is insignificant in comparison to their AR counterparts. We use this idea for our baseline model as described in this section.
\begin{figure}[h]
\centering
\includegraphics[height=3.3in]{hist_demo.pdf}
\caption{Histograms of the pixel values for the $304$~\AA\ channel. In blue, the histogram for the reference image; in red, the histogram for the dimmed image. The y-axis is the number of pixels, and the x-axis is the pixel intensity [$DN/px/s$]. The modes are marked with blue and red lines for the reference and dimmed images, respectively.}
\label{fig:baseline_histogram}
\end{figure}
It is important to remark that we use exactly the same data pre-processing and splitting approach as the one used for the neural network model described in Sect.~\ref{Section:Analysis}. From the processed dataset, a set of reference images per channel, ${\vec{C}_{\rm ref}}$, is selected at time $t=t_{\rm ref}$. Since the level of solar activity continuously evolves in time, we only select the regions of the Sun that correspond to low activity, as discussed in the preceding paragraph. The activity level is determined from co-aligned (with AIA) magnetic field maps from HMI. To define these regions, we first make a square selection of the solar images, with a diagonal of length $2R_\odot$ centered on disk center, so as to avoid LOS projection effects towards the limb. We then apply an absolute global threshold value of 5 Mx~cm$^{-2}$ on the co-aligned HMI LOS magnetic field maps corresponding to $t=t_{\rm ref}$, such that only those pixels with $B_{\mathrm{LOS}}$ less than the threshold are extracted, resulting in a binary mask with 1 corresponding to the pixels of interest and 0 elsewhere. This minimum chosen value of the magnetic flux density is close to the noise level of the HMI\_720s magnetograms \citep{2012SoPh..279..295L,2018ApJ...862...35B}. Finally, we use this mask to extract the co-spatial quiet Sun (less active) pixels from each AIA channel and compute the respective 1D histograms of the intensity values, as shown in Fig.~\ref{fig:baseline_histogram}. Based on the assumption that the intensity of the quiet Sun area does not change significantly over time (as discussed in the preceding section), we artificially dim these regions by multiplying them with a constant random factor between 0 and 1. Naturally, values close to 0 will make the images progressively dimmer. The histograms for the dimmed and the original (undimmed) quiet Sun intensities for the AIA~304~\AA\ channel are shown in Fig.~\ref{fig:baseline_histogram}.
The idea is to develop a non-machine-learning approach that can be used to retrieve this dimming factor.
From Fig.~\ref{fig:baseline_histogram} we find that both the dimmed and undimmed 1D histograms have a skewed shape, with a dominant peak at lower intensities and an extended tail at higher intensities. Such skewed distributions of the quiet Sun intensities have been reported by various studies in the past \citep[see][]{2015A&A...581A..51S}, where they have been modeled as either a sum of two Gaussians \citep{1976RSPTA.281..319R} or a single log-normal distribution \citep{1999ApJ...512..992G,2007A&A...468..695F}. Despite the increased number of free parameters in double-Gaussian fitting, \citet{2000A&A...362..737P} showed that the observed quiet Sun intensity distribution could be fitted significantly better with a single log-normal distribution. The skewed shape, such as the one shown for the 304~\AA\ channel, was also observed for all the other EUV channels, indicating that the criterion for masking the quiet Sun pixels described here is justified.
We then compute the mode (most probable value) of both the dimmed and undimmed log-normal distributions, denoted by $I_{i,{\rm ref}}^{\rm mp}$ (where $i$ denotes the AIA channel under consideration and ${\rm mp}$ stands for the modal value of the undimmed images) and by $I^{\rm mp}_{i}$, the modal intensity value for the corresponding images dimmed with a dimming factor $\alpha_i$. These are indicated by the blue and red vertical lines in Fig.~\ref{fig:baseline_histogram}. Subsequently, the dimming factor is obtained by computing the ratio between the two most probable intensity values according to the following equation:
\begin{equation}
\alpha_i := \frac{I^{\rm mp}_{i}}{I_{i, {\rm ref}}^{\rm mp}}.
\end{equation}
Since both distributions are essentially similar except for the dimming factor, we suggest that this ratio is sufficient to retrieve $\alpha_i$ reliably, forming a baseline against which the neural network models are compared. The efficiency of the baseline in recovering the dimming factor is then evaluated according to the success rate metric, and the results for all channels are tabulated in Table~\ref{tab:autocalibrate_final_results}.
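The mode-ratio estimator can be illustrated on synthetic log-normal ``quiet-Sun'' intensities (the distribution parameters, sample size, and bin count below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def modal_intensity(pixels, bins=128):
    """Most probable value (mode) of the intensity histogram,
    estimated as the center of the most populated bin."""
    counts, edges = np.histogram(pixels, bins=bins)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])

# Quiet-Sun intensities are roughly log-normal; dim them by a known
# alpha and recover it from the ratio of histogram modes.
quiet = rng.lognormal(mean=3.0, sigma=0.4, size=2_000_000)
alpha_true = 0.37
dimmed = alpha_true * quiet

alpha_est = modal_intensity(dimmed) / modal_intensity(quiet)
```

The accuracy of this estimator is limited by the histogram binning, which is one reason the success rates of the baseline in Table~\ref{tab:autocalibrate_final_results} degrade at tight tolerances.
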
\section{Results and Discussions}
\label{Section:Results}
\subsection{Comparing the performances of the baseline model with different CNN architectures}
The results of the learning algorithm are binarized using five different thresholds: the absolute value of $0.05$ and relative values of $5\%$, $10\%$, $15\%$, and $20\%$. If the absolute difference between the predicted degradation factor ($\alpha_P$) and the ground truth degradation factor ($\alpha_{GT}$) is smaller than the threshold, the prediction is counted as a success; otherwise, it is not. We then evaluate the binarized results using the success rate, i.e., the fraction of predictions counted as successes. We chose different success rate thresholds to gauge the model, all of which are smaller than the uncertainty of the AIA calibration \citep[estimated as $28\%$ by][]{AIA_calib_paper}.
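A sketch of this success-rate metric (the function and argument names are ours):

```python
import numpy as np

def success_rate(alpha_pred, alpha_true, tol, relative=False):
    """Fraction of predictions whose absolute error is below the
    threshold. With relative=True the threshold is tol * alpha_true
    (e.g. tol=0.10 for the 10% column); otherwise it is an absolute
    value (e.g. 0.05)."""
    err = np.abs(np.asarray(alpha_pred) - np.asarray(alpha_true))
    thresh = tol * np.abs(alpha_true) if relative else tol
    return np.mean(err < thresh)
```
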
The baseline, single-channel, and multi-channel model results are summarized in Table~\ref{tab:autocalibrate_final_results}. The different colors denote different success rates: green for success rates greater than $90\%$, yellow for success rates between $80\%$ and $90\%$, and red for success rates lower than $80\%$.
\begin{table*}
\centering
\caption{Results of the baseline and CNN models applied to all the EUV AIA channels. The Table is divided into three sections: Baseline, Single-Channel, and Multi-Channel model. From the left, the channel number, the success rates for the baseline, the success rates for the single-channel CNN model, and the success rates for the multi-channel CNN model. Each model performance is considered at different tolerance levels. At the bottom, the mean of the success rate across all the channels. The color green is for success rates greater than $90\%$, yellow for success rate between $80\%$ and $90\%$, and red is for success rate lower than $80\%$.}
\label{tab:autocalibrate_final_results}
\centering
\begin{tabular}{cccccccccccccccc}
\toprule
\multirow{2}{*}{Channel} & \multicolumn{5}{c}{\parbox{5cm}{\centering Baseline}} & \multicolumn{5}{c}{\parbox{5cm}{\centering Single-Channel Model}} & \multicolumn{5}{c}{\parbox{5cm}{\centering Multi-Channel Model}} \\
\cmidrule(lr){2-6}\cmidrule(lr){7-11}\cmidrule(lr){12-16}
& 0.05 & 5\% & 10\% & 15\% & 20\% & 0.05 & 5\% & 10\% & 15\% & 20\% & 0.05 & 5\% & 10\% & 15\% & 20\%\\
\midrule
94~\AA & \zz {32}\% & \zz{08}\% & \zz{18}\% & \zz{28}\% & \zz {40}\% & \zz {70}\% & \zz {37}\% & \zz {61}\% & \zz {78}\% & \zz {87}\% & \zz {82}\% & \zz {48}\% & \zz {73}\% & \zz {85}\% & \zz {92}\% \\
131~\AA & \zz {76}\% & \zz{50}\% & \zz{73}\% & \zz{86}\% & \zz {96}\% & \zz {94}\% & \zz {72}\% & \zz {92}\% & \zz {98}\% & \zz {99}\% & \zz {99}\% & \zz {76}\% & \zz {94}\% & \zz {97}\% & \zz {99}\% \\
171~\AA & \zz {58}\% & \zz{27}\% & \zz{48}\% & \zz{66}\% & \zz {85}\% & \zz {93}\% & \zz {70}\% & \zz {93}\% & \zz {97}\% & \zz {99}\% & \zz {84}\% & \zz {48}\% & \zz {72}\% & \zz {86}\% & \zz {93}\% \\
193~\AA & \zz {38}\% & \zz{13}\% & \zz{27}\% & \zz{44}\% & \zz {53}\% & \zz {73}\% & \zz {41}\% & \zz {69}\% & \zz {85}\% & \zz {93}\% & \zz {90}\% & \zz {59}\% & \zz {85}\% & \zz {94}\% & \zz {98}\% \\
211~\AA & \zz {31}\% & \zz{11}\% & \zz{21}\% & \zz{29}\% & \zz {39}\% & \zz {63}\% & \zz {30}\% & \zz {53}\% & \zz {71}\% & \zz {84}\% & \zz {76}\% & \zz {41}\% & \zz {68}\% & \zz {82}\% & \zz {92}\% \\
304~\AA & \zz {86}\% & \zz{66}\% & \zz{89}\% & \zz{95}\% & \zz{100}\% & \zz {90}\% & \zz {65}\% & \zz {89}\% & \zz {97}\% & \zz {99}\% & \zz {94}\% & \zz {62}\% & \zz {86}\% & \zz {93}\% & \zz {96}\% \\
335~\AA & \zz {38}\% & \zz{13}\% & \zz{29}\% & \zz{42}\% & \zz {51}\% & \zz {62}\% & \zz {31}\% & \zz {54}\% & \zz {69}\% & \zz {80}\% & \zz {73}\% & \zz {39}\% & \zz {65}\% & \zz {82}\% & \zz {91}\% \\
\textbf{Mean} & \textbf{\zz {51}\%} & \textbf{\zz {27}\%} & \textbf{\zz {43}\%} & \textbf{\zz {56}\%} & \textbf{\zz {66}\%} & \textbf{\zz {78}\%} & \textbf{\zz {50}\%} & \textbf{\zz {73}\%} & \textbf{\zz {85}\%} & \textbf{\zz {92}\%} & \textbf{\zz {85}\%} & \textbf{\zz {53}\%} & \textbf{\zz {77}\%} & \textbf{\zz {89}\%} & \textbf{\zz {94}\%} \\
\bottomrule
\end{tabular}
\end{table*}
A detailed look at Table~\ref{tab:autocalibrate_final_results} reveals that, for an absolute tolerance of $0.05$, the best results for the baseline are $86\%$ ($304$~\AA) and $76\%$ ($131$~\AA), with a mean success rate of $\sim51\%$ across all channels. As we increase the relative tolerance levels, the mean success rate increases from $27\%$ (for $5\%$ relative tolerance) to $66\%$ (for $20\%$ relative tolerance), with a $39\%$ success rate in the worst-performing channel ($211$~\AA).
Investigating the performance of the CNN architecture with a single input channel and an absolute tolerance of $0.05$, we find that this model performed significantly better than our baseline, with much higher values of the metric for all channels. The most significant improvement was shown by the $94$~\AA\ channel, with an increase from $32\%$ in the baseline model to about $70\%$ in the single-input CNN model. The average success rate increased from $51\%$ in the baseline to $78\%$ in the single-channel model. The worst metric for the single-channel CNN architecture was recorded by the $211$~\AA\ channel, with a success rate of just $63\%$, which is still significantly better than its baseline counterpart ($31\%$). Furthermore, with a relative tolerance of $15\%$, we find that the mean success rate is $85\%$ for the single-channel model, which increases to more than $90\%$ for a $20\%$ tolerance level. This is a promising result considering that the error associated with the current state-of-the-art calibration techniques (sounding rockets) is $\sim25\%$.
Finally, we report the results from the multi-channel CNN architecture in the last section of Table~\ref{tab:autocalibrate_final_results}. As expected, the performance in this case is the best of all models, with significant improvements for almost all the EUV channels. Clearly, the success rates belonging to the red category are far fewer than for the former models, implying that the mean success rate is the highest across all tolerance levels. The multi-channel architecture recovers the degradation (dimming) factor for all channels with a success rate of at least $91\%$ for a relative tolerance level of $20\%$ and a mean success rate of $\sim94\%$. It is also evident that this model outperforms the baseline and the single-channel model at all levels of relative tolerance. For any given level of tolerance, the mean across all channels increased significantly; for example, with an absolute tolerance of $0.05$, the mean increased from $78\%$ to $85\%$, even changing its color classification. In addition, the success rate is consistently the worst for the $335$~\AA~and $211$~\AA~channels across all tolerances, whereas the performance of the $131$~\AA~channel is the best.
Looking at specific channels, we can see that $304$~\AA~performs consistently well across all the models with little variation, which was not expected. The $171$~\AA~channel performs well in the baseline and in the multi-channel model, but surprisingly reaches its maximum performance in the single-channel model across all tolerances, with a remarkable $93\%$ success rate at an absolute tolerance of $0.05$. In contrast to $171$~\AA, the $211$~\AA~and $335$~\AA~channels perform poorly in the baseline and single-channel models and improve significantly in the multi-channel model, as expected and hypothesized in this paper.
Fig.~\ref{fig:training_curve} shows the training and test MSE loss curves evolving by epoch. Based on the results in Table~\ref{tab:autocalibrate_final_results} and comparing the training and test loss curves in Fig.~\ref{fig:training_curve}, we can see that the model does not heavily overfit in the range of epochs used, and it presents stable generalization performance on the test set. We stopped the training before epoch $1,000$, as only marginal improvements were achieved on the test set over many epochs.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{training_plot.pdf}
\caption{Evolution of the training and test MSE loss over the epochs.}
\label{fig:training_curve}
\end{figure}
Overall, the results show higher success rates for the CNN models, particularly for the multi-channel model, as predicted by the toy problem, and for higher tolerances.
\subsection{Modelling Channel Degradation over Time}
\label{sec:degradation}
In this section, we discuss the results obtained when comparing the AIA degradation curves $V8$ and $V9$ with both the single-channel and multi-channel CNN models. This process was performed using a dataset equivalent to SDOML but with no correction for degradation, covering the period from $2010$ to $2020$. This tests both models against the real degradation suffered by AIA over that decade.
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{degradation.pdf}
\caption{Channel degradation over time. From top to bottom: channels $94$~\AA~(blue) and $131$~\AA~(yellow), $171$~\AA~(green) and $193$~\AA~(red), $211$~\AA~(purple) and $304$~\AA~(brown), and $335$~\AA~(magenta). The solid black (gray) curve is the degradation profile of AIA calibration release $V9$ ($V8$). The gray shaded area corresponds to the $25\%$ error of the degradation curve $V9$. The colorful shaded areas are the standard deviations of the CNN models. The vertical black dashed line marks the last available observation from EVE MEGS-A data, and the vertical gray dashed line marks the last training date.}
\label{fig:degradation_curve}
\end{figure*}
Figure~\ref{fig:degradation_curve} presents the results of our analysis for all seven AIA EUV channels. In each panel, we show four quantities: the degradation curve $V9$ (solid black line), the degradation curve $V8$ (solid gray line), the predicted degradation from the single-channel model (dashed colorful line), and from the multi-channel model (solid colorful line). The shaded gray band depicts the region covering the $25\%$ variation (error) associated with the $V9$ degradation curve, and the colorful shaded areas are the standard deviations of the single- and multi-channel models. The dashed vertical line coincides with 25 May 2014, the last day of EVE MEGS-A instrument data. It is important to note that MEGS-A was earlier used for sounding-rocket calibration purposes, and its loss caused both the $V8$ and $V9$ degradation curves to become noisier thereafter. \citet{Szenicereaaw6548} used deep learning to facilitate a virtual replacement for MEGS-A.
Observing the different panels of Fig.~\ref{fig:degradation_curve}, we can see that even though we trained both the single- and multi-channel models with the SDOML dataset that was produced and corrected using the $V8$ degradation curve, both CNN models predict the degradation curves for each channel quite accurately over time, except for the $94$~\AA\ and $211$~\AA\ channels. However, the deviations of the predicted values for these two channels fall well within the $25\%$ variation of the $V9$ calibration curve. In fact, the CNN predictions agree even better with $V9$ than with the $V8$ calibration for most of the channels. This hints at the conclusion that the CNN is picking up on some actual information that is perhaps even more responsive to degradation than FISM. The latest degradation curve ($V9$) was released in July $2020$, and the change from $V8$ to $V9$ might easily have had an impact while training the models. Moreover, the more significant deviation of the $94$~\AA\ channel in the early stages of the mission is due to the fact that we limited our degradation factor to be less than one.
From the predicted calibration curves computed from the single- and multi-channel models, we see that they have a significant overlap throughout the entire period of observation. The single-channel model predictions, however, have a more significant variation for channels $211$~\AA, $193$~\AA~ and $171$~\AA. For a systematic evaluation and a comparison among the results of the two models across channels, we calculated some goodness of fit metrics, and the results are shown in Table~\ref{tab:quantities_degradation}.
\begin{table}[h]
\centering
\caption{Goodness of fit metrics for single-channel and multi-channel models with reference to the $V9$ degradation curve. The first metric is the Two-Sample Kolmogorov-Smirnov Test (KS), and the second metric is the Fast Dynamic Time Warping.}
\label{tab:quantities_degradation}
\centering
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{Channel} & \multicolumn{2}{c}{{\centering Single-Channel}} & \multicolumn{2}{c}{{\centering Multi-Channel}} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}
& KS & DTW & KS & DTW\\
\midrule
94~\AA & 0.485 & 7.120 & 0.568 & 9.624\\
131~\AA & 0.346 & 2.711 & 0.275 & 1.624\\
171~\AA & 0.298 & 3.074 & 0.329 & 3.549\\
193~\AA & 0.211 & 1.829 & 0.244 & 2.080\\
211~\AA & 0.305 & 2.850 & 0.242 & 2.807\\
304~\AA & 0.282 & 1.412 & 0.100 & 1.311\\
335~\AA & 0.212 & 2.539 & 0.141 & 2.839\\
\bottomrule
\end{tabular}
\end{table}
Table \ref{tab:quantities_degradation} contains two different metrics for evaluating the goodness of fit of each CNN model with respect to the $V9$ degradation curve. The first is the Two-Sample Kolmogorov--Smirnov (KS) test, which determines whether two samples come from the same distribution \citep{two_ks}; the null hypothesis assumes that the two distributions are identical. The KS test has the advantage that the distribution of its statistic does not depend on the cumulative distribution function being tested. The second metric is the Fast Dynamic Time Warping \citep[DTW, ][]{fastDTW}, which measures the similarity between two temporal sequences that may not be of the same length. This last metric is important since purely statistical methods can be overly sensitive when comparing the two time series. DTW outputs a distance between the series; as a reference, the DTW distances for the different EUV channels between the $V8$ and $V9$ degradation curves are: $94$~\AA: $72.17$, $131$~\AA: $13.03$, $171$~\AA: $9.82$, $193$~\AA: $30.05$, $211$~\AA: $16.86$, $304$~\AA: $7.02$ and $335$~\AA: $5.69$.
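Both metrics are straightforward to sketch. The minimal Python illustration below implements the KS statistic directly and the exact $O(nm)$ DTW recursion that fastDTW approximates; the numbers above were produced with the published \texttt{fastdtw} package, so this sketch is for illustration only:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    grid = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

def dtw_distance(x, y):
    """Exact dynamic-time-warping distance between two 1-D sequences
    (the O(n*m) dynamic program that fastDTW approximates)."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]
```

Applied to a predicted degradation curve and the $V9$ reference sampled on their own time grids, low values of both quantities indicate close agreement, exactly as read off Table~\ref{tab:quantities_degradation}.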
Similar to Fig.~\ref{fig:degradation_curve} we find in Table~\ref{tab:quantities_degradation}, that the predictions from both the single-channel and multi-channel models overlap significantly both in terms of the metric and the time evolution. Except for the $94$~\AA~channel, all others have very close metric values, well within a given level of tolerance. A low value of the KS test metric suggests that the predictions have a similar distribution as the observed $V9$ calibration curve, which also indicates the robustness of our CNN architecture. KS test agrees well with DTW, where the values obtained are smaller than the reference values (as indicated earlier) between the $V8$ and the $V9$ calibration curves. Overall, the metric analysis for the goodness of fit between the predictions and the actual calibration curve ($V9$) shows that the CNN models perform remarkably well in predicting the degradation curves despite being trained only on the first three years of the observations.
\subsection{Feature Maps}
\label{sec:feature-maps}
\begin{figure} [h!]
\centering
\includegraphics[width=0.8\linewidth]{reference.pdf}
\includegraphics[width=\linewidth]{latent.pdf}
\caption{Feature maps obtained from the last layer of the CNN of our model. The top row shows a sample input in the AIA 193~\AA~channel, and the bottom row shows four representative feature maps out of $128$ different feature maps from the final convolutional layer of the multi-channel CNN model.}
\label{fig:autocalibrate_activation_viz}
\end{figure}
As mentioned in Sect.~\ref{section:model-benchmark-understanding}, the feature maps are the result of applying the filters to an input image; that is, at each layer, the feature map is the output of that layer. In Fig.~\ref{fig:autocalibrate_activation_viz} we present such maps obtained from the output of the last convolutional layer of our CNN. The top row shows a reference input image observed at $193$~\AA\ used in this analysis, with its intensity scaled between $0$ and $1$ pixel units, and the bottom row shows $4$ representative feature maps (out of a total of $128$) with their corresponding weights. These maps are obtained after the final convolutional layer of the multi-channel model, and they represent the result of combining all seven EUV channels as input. The predicted $\alpha$ dimming factors from the model are given by the sigmoid activation function applied to a linear combination of these features. Such mapping allows us to see that the network actually learned to identify different features of full-disk solar images, such as the limb, the quiet-Sun features, and the ARs. The reason for visualizing feature maps for specific AIA images is to gain an understanding of which features the model detects are ultimately useful in recovering the degradation or dimming factors.
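For readers unfamiliar with the term, the operation that produces a feature map can be sketched in a few lines. The kernel below is a hand-picked vertical-edge filter used purely for illustration; the $128$ filters of the trained CNN are learned, not hand-designed:

```python
import numpy as np

def feature_map(image, kernel):
    """'Valid' 2-D cross-correlation of a single-channel image with one
    filter -- the basic operation whose output is a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-like vertical-edge kernel: it responds strongly at sharp
# intensity transitions such as the solar limb (illustrative only).
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])
```

A constant image yields a zero feature map, while an image with a sharp vertical transition lights up along the edge, which is the qualitative behavior seen for the limb-sensitive maps in Fig.~\ref{fig:autocalibrate_activation_viz}.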
\section{Concluding remarks}
\label{section:summary}
This paper reports a novel ML-based approach to auto-calibration and advances our comprehension of the cross-channel relationship among different EUV channels by introducing a robust novel method to correct for the EUV instrument time degradation. We began with formulating the problem and setting up a toy model to test our hypothesis. We then established two CNN architectures that consider multiple wavelengths as input to auto-correct for on-orbit degradation of the AIA instrument onboard SDO. We trained the models using the SDOML dataset and further augmented the training set by randomly degrading images at each epoch. This approach made sure that the CNN model generalizes well to data not seen during the training, and we also developed a non-ML baseline to test and to compare its performance with the CNN models. With the best trained CNN models, we reconstructed the AIA multi-channel degradation curves of 2010-2020 and compared them with the sounding-rocket based degradation curves $V8$ and $V9$.
Our results indicate that the CNN models significantly outperform the non-ML baseline model ($85\%$ vs. $51\%$ in terms of the success-rate metric) for a tolerance level of $0.05$. In addition, the multi-channel CNN ($85\%$) also outperforms the single-channel CNN ($78\%$) at an absolute $0.05$ threshold. This result is consistent with the expectation that correlations between structures in different channels, as well as the size (morphology) and brightness of structures, can be used to compensate for the degradation. To further understand the correlation between different channels, we used feature maps to shed light on this aspect and to see how the filters of the CNNs were being activated. We did see that the CNNs learned representations that make use of the different features within solar images, but further work needs to be done to establish a more detailed interpretation.
We also found that the CNN models reproduce the most recent sounding-rocket based degradation curves ($V8$ and $V9$) very closely and within their uncertainty levels. This is particularly promising, given that no time information has been used in training the models. For some specific channels, like $335$~\AA, the model reproduces the $V8$ curve instead of $V9$, since the SDOML dataset was corrected using the former. The single-channel model can perform almost as well as the multi-channel model, even though the multi-channel model presented a more robust performance when evaluated on the basis of success rates.
Lastly, this paper presents a unique possibility of auto-calibrating deep space instruments such as the ones onboard the STEREO spacecraft, and the recently launched remote sensing instrument called \textit{Extreme Ultraviolet Imager} \citep{2020A&A...642A...8R}, aboard the Solar Orbiter satellite \citep{2020A&A...642A...1M}, that are too far away from the Earth to be calibrated using a traditional method such as sounding-rockets. The auto-calibration model could be trained using the first months of data from the mission, assuming the instrument is calibrated at the beginning of the mission. The data volume could be an issue, and different types of data augmentation could be used to overcome this problem, such as synthetic degradation and image rotation. We further envision that the technique presented here may also be adapted to imaging instruments or spectrographs operating at other wavelengths (e.g., hyperspectral Earth-oriented imagers) observed from different space-based instruments like \textit{IRIS} \citep[][]{2014SoPh..289.2733D}.
\begin{acknowledgements}
{This project was partially conducted during the 2019 Frontier Development Lab (FDL) program, a co-operative agreement between NASA and the SETI Institute. We wish to thank IBM for providing computing power through access to the Accelerated Computing Cloud, as well as NASA, Google Cloud and Lockheed Martin for supporting this project. L.F.G.S was supported by the National Science Foundation under Grant No. AGS-1433086. M.C.M.C. and M.J. acknowledge support from NASA’s SDO/AIA (NNG04EA00C) contract to the LMSAL. S.B. acknowledges the support from the Research Council of Norway, project number 250810, and through its Centers of Excellence scheme, project number 262622. This project was also partially performed with funding from Google Cloud Platform research credits program. We thank the NASA’s Living With a Star Program, which SDO is part of, with AIA, and HMI instruments on-board. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), University of Cambridge (UK) and NASA Goddard Space Flight Center (USA). A.G.B. is supported by EPSRC/MURI grant EP/N019474/1 and by Lawrence Berkeley National Lab. The authors thank the anonymous referee for the comments.\\
Software: For CUDA processing we acknowledge cuDNN \citep{cudnn}; for data analysis and processing we used SunPy \citep[][]{Sunpy2020}, NumPy \citep{numpy}, pandas \citep{pandas}, SciPy \citep{scipy}, scikit-image \citep{scikit-image} and scikit-learn \citep{scikit-learn}. Finally, all plots were made using Matplotlib \citep{matplotlib} and Astropy \citep{astropy:2018}}.
\end{acknowledgements}
\bibliographystyle{aa}
\label{Introduction}
\subsection{Motivation : Gravitational waves localization }
\label{Intro-Motivation}
The recent discovery of gravitational waves (GWs) is opening up new horizons for high-energy astrophysics. Since GWs interact very weakly with matter, unlike electromagnetic radiation, they can carry information from the early Universe and its origin, as well as from other unseen high-energy phenomena. Moreover, GWs can provide new tests of general relativity, especially in the dynamically strong-field regime, as is the case with the three recent detections of binary black-hole merger events by the Laser Interferometer Gravitational-Wave Observatory (LIGO) experiment. However, the location of such events remains undetermined, as experimental triangulation of GWs is currently impossible (see fig.~\ref{fig:GW}). One promising lead is to look for electromagnetic (EM) counterparts: finding an EM counterpart of a GW event would allow the localization of the source.
To date, the three events detected by LIGO are binary black-hole mergers \citep{LIGO-FirstDetection,Abbott2017}. Nevertheless, GWs emitted during binary neutron star mergers will be above the sensitivity threshold of Advanced LIGO in the next campaigns.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.4]{localization}
\caption{Localization error of GW151226 (Credit: \citealt{LIGO-FirstDetection}).}
\label{fig:GW}
\end{figure}
Binary neutron star mergers have also been recognized as possible progenitors of short gamma-ray bursts (GRBs) \citep{Eichler1989,Nakar2007}. If so, short GRBs (sGRBs) and their afterglows could give rise to fascinating electromagnetic counterparts to GWs.
However, it is not clear to what extent either the GRB or its jet afterglow is observable, given relativistic beaming. Hence, we have to look for other EM counterparts.
So far, macronovae, which behave like r-process supernovae, and radio flares have been widely studied in the literature \citep{Metzger2017}. Here we discuss an additional EM counterpart that arises from the cocoon formed during the jet's propagation within the matter surrounding the merger.
\subsection{Cocoon}
\label{Intro-cocoon}
When two neutron stars merge, matter is ejected prior to the jet's onset by winds driven from the newly formed hyper-massive neutron star and from the debris disk that forms around it \citep{Hotokezaka2015}.
The cocoon is generated during the interaction of the GRB jet with this surrounding matter \citep{Nagakura2014,MB2014}.
The different components of the ejecta for a sGRB are presented in fig.~\ref{fig:cocoon}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{CocoonS}
\caption{Heuristic description of the jet, cocoon, wind and dynamical ejecta following a NS merger, presumably producing a sGRB.}
\label{fig:cocoon}
\end{figure}
The energy of the cocoon is comparable to the energy of the GRB, which includes the prompt emission and afterglow kinetic energy \citep{Nakar2017}. In addition, the cocoon afterglow will behave like the GRB afterglow \citep{Piran2004}.
As shown in this figure, the cocoon opening angle is wider than the jet opening angle. Hence, when observing off the jet axis, sGRB cocoon signatures could be of prime importance.
Nevertheless, the initial Lorentz factor of the cocoon is $\gamma_{0} \approx 10$. Therefore, when expanding, the cocoon will rapidly reach the mildly relativistic regime, for which neither the Sedov--Taylor \citep[ST;][]{Sedov1958,Taylor} solution nor its fully relativistic counterpart by Blandford and McKee \citep[BM;][]{BM} is well applicable. We propose here a model that allows a smooth transition from the BM phase to the ST phase.
\newpage
\section{Cocoon}
\label{Cocoon}
Hereafter we discuss the cocoon formation, as demonstrated in both analytical and simulation works, and present the cocoon properties.
\subsection{Cocoon formation : analytical and simulation results }
\label{Cocoon-formation}
It has been shown, for active galactic nuclei, that the propagation of the jets they accelerate through surrounding media generates a double bow-shock structure at the head of the jet \citep{Blandford1974,Scheuer1974}. Energy and matter that enter this structure are pushed aside due to a high pressure gradient and create a hot cocoon around the jet. The cocoon, in turn, applies pressure on the jet and compresses it.
The cocoon formation was studied analytically by \citet{Bromberg2011}, and it is described heuristically in fig.~\ref{fig:CF}.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.5]{cocoonformation1}
\caption{Cocoon formation. Credit: \citet{Bromberg2011}.}
\label{fig:CF}
\end{figure}
Meanwhile, simulations of GRB jet propagation inside a star have been conducted as well (e.g. \citealt{Zhang2003}, \citealt{Morsony2007}, \citealt{Mizuta2009}). All these analyses demonstrated the formation of a jet head and a hot cocoon; see for instance the simulations by \cite{Mizuta2009}, depicted in fig.~\ref{fig:CS}.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.4]{cocoonsimulation.jpg}
\caption{Cocoon simulation. Credit: \citet{Mizuta2009}.}
\label{fig:CS}
\end{figure}
Therefore, we can conclude that for both short and long GRBs, the interaction of the jet with the surrounding matter generates a cocoon. Long GRBs arise at the end of a massive star's life, after the collapse of its core (the collapsar model). In this case, the cocoon is generated by the interaction of the jet with the progenitor star before breakout \citep{Nakar2017}.
Note that the following study focuses on the cocoon in the case of short GRBs, but it could also be applied to long GRBs by appropriately considering different energies and initial Lorentz factors.
\subsection{Properties of the cocoon}
\label{Cocoon-Properties}
\subsubsection{Energy}
\label{Cocoon-Properties-Energy}
The total cocoon energy, $E_{0}$, which is the total energy deposited by the jet in the cocoon until the breakout time $t_{b}$, is expected to be comparable to the total GRB energy, given by the sum of the prompt emission and afterglow kinetic energy. The reason is that the typical breakout time is comparable to the typical burst duration (Bromberg et al. 2012) and the jet deposits almost all its energy into the cocoon during its propagation in the ejecta. The total cocoon energy is $E_{0}= \int_{t_{inj}}^{t_{b}}L_{j}(1-\beta _{h})dt$, where $L_{j}$ is the total two-sided luminosity and $\beta _{h}c$ is the velocity of the jet's head.
Note that while the jet is relativistic, the jet's head velocity is typically of order $0.1$--$0.3c$ \citep{Matzner2003,Bromberg2011}. The GRB's energy is the jet's energy after the breakout, and there is no reason to expect that the jet will not have the same luminosity before and after breakout. It has been shown by \cite{Moharana2017} that the distribution of sGRB durations suggests that the jet is launched for at least a few hundred milliseconds in order for it to break out of the ejecta. Therefore, we can approximate $E_{0}\sim L_{j}(t_{b} - t_{inj} - R_{bo}/c)$, where $R_{bo}$ is the radius at the time of breakout.
In such a case the cocoon carries an energy that is comparable to that of the sGRB itself, and the cocoon breakout radius is around $10^{9}$~cm.
\subsubsection{Composition and Lorentz factor}
\label{Cocoon-Properties-Composition}
As shown in section \ref{Cocoon-formation}, the cocoon is composed of an inner and an outer part. The inner part is made of jet material, while the outer part is made of matter ejected during the binary neutron star merger. Nevertheless, these two parts can be partially to fully mixed, and the mixing ratio changes the Lorentz factor of the cocoon \citep{Nakar2017}.
Here we focus on the inner part of the cocoon, which is made of jet material and hence does not contain much mass. This component will be relativistic; we consider here $\gamma_{0} \approx 10$. Therefore, this inner part is more likely to create an afterglow that behaves like a GRB afterglow \citep{Piran2004}.
\section{Hydrodynamics of the shock}
\label{Hydrodynamics}
Similarly to GRB afterglows, a cocoon afterglow arises from the interaction of the cocoon with the matter surrounding it. This interaction is mainly hydrodynamical.
\subsection{Evolution of Lorentz factor of the cocoon}
\label{Hydrodynamics-Evolution}
The evolution of the cocoon can be separated into three distinct phases, shown in fig.~\ref{fig:Lorentz}, which represent three steps of the blast-wave evolution, as described below.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.7]{lorentz}
\caption{Evolution of the Lorentz factor of the cocoon.}
\label{fig:Lorentz}
\end{figure}
The first phase can be described by the fireball model, proposed by \cite{Paczynski1986} and \cite{Goodman1986}. They showed that the sudden release of a large quantity of gamma-ray photons into a compact region can lead to an opaque photon–lepton ``fireball'' through the production of electron–positron pairs. The term ``fireball'' refers here to an opaque radiation–plasma whose initial energy is significantly greater than its rest mass. Goodman considered the sudden release of a large amount of energy, $E_{0}$, in a small volume characterized by a radius $R_{0}$, which could occur in an explosion.
They showed that if the ejecta stays optically thick long enough, then all the internal energy can be converted into kinetic energy, allowing the matter to reach a final Lorentz factor $\gamma_{0}$. Here we consider, for the cocoon, $\gamma_{0} \approx 10$.
Subsequently, during the second phase, the fireball expands and collects exterior mass $M_{ext}$ until $M_{ext}\approx M_{0}/\gamma_{0}$. This occurs at a radius \citep{Daigne}:
\begin{equation}
R_{dec}\approx \left (\frac{3E_{0}}{4\pi \gamma _{0}^{2}n_{0}m_{p}c^{2}} \right )^{1/3}.
\end{equation}
Considering an initial energy $E_{0} = 10^{49}$~erg, $\gamma_{0} \approx 10$ and $n_{0} = 1$~cm$^{-3}$, we obtain $R_{dec}\approx 2.5\times 10^{16}$~cm.
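This estimate is easy to reproduce numerically; the minimal sketch below (CGS units, with the fiducial values quoted in the text) evaluates the deceleration radius:

```python
import math

# Physical constants (CGS)
M_P = 1.6726e-24   # proton mass [g]
C = 2.9979e10      # speed of light [cm/s]

def deceleration_radius(e0, gamma0, n0):
    """R_dec = (3 E0 / (4 pi gamma0^2 n0 m_p c^2))^(1/3), in cm."""
    return (3.0 * e0
            / (4.0 * math.pi * gamma0**2 * n0 * M_P * C**2)) ** (1.0 / 3.0)

# Fiducial values from the text: E0 = 1e49 erg, gamma0 = 10, n0 = 1 cm^-3
r_dec = deceleration_radius(1e49, 10.0, 1.0)   # ~2.5e16 cm
```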
Finally, during the third phase, after reaching $R_{dec}$, the blast wave decelerates, following Blandford and McKee and later on Sedov and Taylor. This part is discussed in detail in section~\ref{Hydrodynamics-Deceleration}.
\subsection{Shock properties}
\label{Hydrodynamics-Shock}
\subsubsection{Shock : general properties}
\label{Hydrodynamics-Shock-general}
Consider the situation when a cold relativistic shell (whose internal energy is negligible compared to its rest mass) moves into the cold interstellar medium (ISM). Conservation of mass, energy and momentum determine the Hugoniot shock jump conditions across the relativistic shocks for the case when the upstream matter is cold (see e.g. \citealt{BM}).
\begin{figure} [h!]
\centering
\includegraphics[scale=0.7]{Shock}
\caption{Shock characteristics}
\label{fig:shock}
\end{figure}
Note that the above figure presents the shock for a given $\gamma$, and that, as described below, $\gamma$ varies with time, generating different shocks with different characteristics.
For the shocked ISM the particle density is $n$, while $n_{0}$ is the exterior density.
With $\gamma$ the Lorentz factor of the shocked fluid, the particle density and energy of the shocked ISM are given following \citet{BM} as:
\begin{eqnarray}
n &=& 4 n_{0} \gamma, \\
E &=& N m_{p} c^{2} (\gamma -1),
\end{eqnarray}
with $N = \frac{4\pi }{3}n_{0}R^{3}$ the number of particles swept up by the shock.
\subsubsection{Shock : acceleration of the electrons}
\label{Hydrodynamics-Shock-acceleration}
We assume that electrons are accelerated in the shock to a power-law distribution of Lorentz factor $\gamma_e$, with a minimum Lorentz factor $\gamma _{m}$, such that $N(\gamma _{e})\, d\gamma _{e} \propto \gamma _{e}^{-p}\, d\gamma _{e}$ for $\gamma _{e} \geq \gamma _{m}$,
with $p>2$ in order to keep the total energy of the electrons finite.
We consider $p=2.5$ in the relativistic regime and $p=3$ in the non-relativistic regime \citep{Sari1996}. In our model, during the transition phase, $p$ evolves linearly between $p = 2.5$ and $p = 3$.
Let $\varepsilon _{e}$ be the fraction of the shock energy going into the electron energy density:
\begin{equation}
E_{e} = \varepsilon _{e} E,
\end{equation}
with $E_{e}$ the energy density of the electrons.
Considering that a constant fraction $\varepsilon _{e}$ of the shock energy goes into the electrons, we get, for the relativistic regime \citep{Sari1998}:
\begin{equation}
\gamma_{m}=\varepsilon _{e}\frac{p-2}{p-1} \frac{m_{p}}{m_{e}}\gamma,
\end{equation}
where $m_{p}$ is the proton mass and $m_{e}$ the electron mass.
However, during the non-relativistic regime, assuming that the same constant fraction $\varepsilon _{e}$ of the shock energy goes into the electrons, we obtain:
\begin{equation}
\gamma_{m}=\varepsilon _{e}\frac{p-2}{p-1} \frac{m_{p}}{m_{e}}\beta^{2}.
\end{equation}
\subsubsection{Magnetic field of the shocked ISM}
\label{Hydrodynamics-Shock-magneticfield}
Similarly, let $\varepsilon _{B}$ be the fraction of the shock energy going into magnetic energy density:
\begin{equation}
\frac{B^{2}}{8\pi} = \varepsilon _{B} E.
\end{equation}
We assume that a constant fraction $\varepsilon _{B}$ of the shock energy goes into the magnetic energy density. The magnetic field strength is therefore given by \citep{Sari1998}:
\begin{equation}
B = (32\pi m_{p}\varepsilon _{B}n_{0})^{1/2}\gamma c.
\end{equation}
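As a quick numerical sanity check, the two microphysical quantities above can be evaluated directly. The sketch below (CGS units) adopts the fiducial values $\varepsilon_{e}=\varepsilon_{B}=0.1$, $p=2.5$, $n_{0}=1$~cm$^{-3}$ and $\gamma=10$ used in this paper:

```python
import math

# Physical constants (CGS)
M_P = 1.6726e-24   # proton mass [g]
M_E = 9.1094e-28   # electron mass [g]
C = 2.9979e10      # speed of light [cm/s]

def gamma_min(eps_e, p, gamma):
    """Minimum electron Lorentz factor, relativistic regime:
    gamma_m = eps_e (p-2)/(p-1) (m_p/m_e) gamma."""
    return eps_e * (p - 2.0) / (p - 1.0) * (M_P / M_E) * gamma

def magnetic_field(eps_b, n0, gamma):
    """Post-shock magnetic field [G]: B = (32 pi m_p eps_B n0)^(1/2) gamma c."""
    return math.sqrt(32.0 * math.pi * M_P * eps_b * n0) * gamma * C

g_m = gamma_min(0.1, 2.5, 10.0)      # ~6.1e2
b = magnetic_field(0.1, 1.0, 10.0)   # ~1.2 G
```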
\subsection{Deceleration of the blast wave}
\label{Hydrodynamics-Deceleration}
Here, we assume $\varepsilon _{B}=\varepsilon _{e}=0.1$ and consider an adiabatic evolution.
\subsubsection{Blandford and McKee, Sedov and Taylor}
\label{Hydrodynamics-Deceleration-Blandford}
We consider a spherical blast wave of radius $R(t)$ propagating into a constant surrounding density $n_{0}$. After $R_{dec}$, the deceleration of the blast wave begins (see fig.~\ref{fig:Lorentz}), and the evolution of the radius $R$ and the Lorentz factor $\gamma$, while still in the ultra-relativistic regime, is given by \citep{BM}:
\begin{eqnarray}
\gamma(t) &=& \frac{1}{4}\left ( \frac{17E}{\pi n_{0} m_{p}c^{5}t^{3}} \right )^{1/8} \\
R (t) &=& \left ( \frac{17Et}{4 \pi n_{0} m_{p}c } \right )^{1/4}
\end{eqnarray}
When the non-relativistic regime is reached, the evolution of the radius $R(t)$ and the velocity $V(t)$ is given by \citet{Sedov1958} and \citet{Taylor}:
\begin{equation}
V (t) = \frac{2}{5}\left ( \frac{25E}{4 \pi n_{0} m_{p} } \right )^{1/5} t^{-3/5}
\end{equation}
\begin{equation}
R (t) = \left ( \frac{25E}{4 \pi n_{0} m_{p} } \right )^{1/5} t^{2/5}
\end{equation}
\subsubsection{Transition region}
\label{Hydrodynamics-Deceleration-Transition}
As shown in section \ref{Cocoon-Properties-Composition}, we consider an initial Lorentz factor $\gamma_{0} \approx 10$ for the cocoon; therefore, the mildly relativistic regime will be of prime importance. For GRB afterglows, the initial Lorentz factor is $\gamma_{0} \approx 100$, which means that most of the emission takes place during the Blandford and McKee evolution of the blast wave.
However, in our case, we have to determine the evolution of the Lorentz factor during the mildly relativistic to non-relativistic transition, in which most of the emission of the cocoon takes place. The emission process is discussed in detail in section \ref{Synchrotron} below.
For our model, we consider that we have a Blandford and McKee evolution until $\gamma (t) \approx 3$, where we enter the transition region.
In the transition region, the evolution is given by the following extrapolation:
\begin{equation}
S_{trans} = \left ( S_{BM}^{2} + S_{ST} ^{2}\right )^{1/2},
\end{equation}
where $S_{BM}$ is the BM solution, $S_{ST}$ the ST solution, and $S_{trans}$ our solution for the transition regime. Our model allows us to cover the full hydrodynamic evolution of the blast wave by computing a smooth transition between the BM and ST regimes. At the end of the transition region, the non-relativistic regime is reached, and the blast wave therefore follows the Sedov--Taylor evolution. Our model will be compared to simulations.
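The interpolation itself is a single expression; the sketch below only demonstrates its limiting behavior (the normalizations and the quantity being interpolated, e.g.\ $\gamma\beta$, are left generic):

```python
import math

def smooth_transition(s_bm, s_st):
    """Quadrature interpolation between the BM and ST branches:
    S_trans = sqrt(S_BM^2 + S_ST^2)."""
    return math.sqrt(s_bm**2 + s_st**2)

# Far from the transition, the formula reduces to the dominant branch:
# deep in the relativistic regime S_BM >> S_ST and S_trans -> S_BM,
# while at late times S_trans -> S_ST.
```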
It has been shown \citep{Daigne} that the radius $R_{New}$ at which the blast wave begins to follow the Sedov and Taylor evolution is given by:
\begin{equation}
R_{New}\approx \left (\frac{3E_{0}}{4\pi n_{0}m_{p}c^{2}} \right )^{1/3}.
\end{equation}
For the given energy and particle density, we get $R_{New}\approx 1.2\times 10^{17}$~cm.
These theoretical values of both $R_{dec}$ and $R_{New}$ are very close to the values obtained with our model; see fig.~\ref{fig:gammaR}.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.6]{lorentzR}
\caption{Evolution of the Lorentz factor with respect to the radius $R$.}
\label{fig:gammaR}
\end{figure}
\subsection{Beaming}
\label{Hydrodynamics-Beaming}
\subsubsection{Beaming effect on the emission}
\label{Hydrodynamics-Beaming-emission}
The above description considers a spherical expansion of the blast wave. Nevertheless, the radiation from a relativistic source is beamed, with a typical beaming angle $1/\gamma$.
Let $\theta _{0}$ be the half-opening angle of the cocoon; see fig.~\ref{fig:beaming}.
During the relativistic regime, when $\theta _{0}$ is larger than $1/\gamma$, an observer will see only part of the emission.
Moreover, if the observer is off the jet axis, the observer angle $\theta _{obs}$, i.e.\ the angle between the jet axis and the line of sight to the observer, will affect the radiation received by the observer; see fig.~\ref{fig:beaming}.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.7]{beaming}
\caption{Beaming and observer line of sight effects}
\label{fig:beaming}
\end{figure}
The following equations give the actual observed flux $F_{obs}$ in terms of the isotropic flux $F_{\nu }$.
If $\frac{1}{\gamma }< \theta _{0}$, we have:
\begin{equation}
\left\lbrace
\begin{array}{ll}
\theta _{obs}<\theta _{0} & F_{obs } = F_{\nu }\\
\theta _{obs}>\theta _{0} & F_{obs } = 0
\end{array}
\right.
\end{equation}
As the Lorentz factor decreases with time, we observe a progressively larger fraction of the emitting region, until ${1}/{\gamma }\approx \theta _{0}$. Beyond this point, the following equations take into account both the beaming effect and the observer position.
If ${1}/{\gamma }>\theta _{0}$, we have:
\begin{equation}
\left\lbrace
\begin{array}{ll}
\theta_{obs} + \theta _{0} <\frac{1}{\gamma } & F_{obs } = F_{\nu } \\
\theta _{obs} + \theta _{0} >\frac{1}{\gamma }>\theta _{obs} ~~~~~ & F_{obs } = F_{\nu } \frac{S\gamma^{2} }{\pi} \\
\frac{1}{\gamma }<\theta _{obs} & F_{obs } = 0
\end{array}
\right.
\end{equation}
where $S$ is the surface shown in fig.~\ref{fig:surface}. A detailed calculation of $S$ is presented in the appendix.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.7]{surface}
\caption{Useful surface of the emission for the observer}
\label{fig:surface}
\end{figure}
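The piecewise beaming rules above can be encoded compactly. The sketch below is illustrative only: the function name is hypothetical, and the overlap surface $S$ of the two cones must be supplied separately (its calculation is given in the appendix).

```python
import math

def observed_flux(F_nu, gamma, theta0, theta_obs, S=None):
    """Observed flux for a beamed source of isotropic flux F_nu.

    theta0    : half opening angle of the cocoon [rad]
    theta_obs : angle between jet axis and line of sight [rad]
    S         : overlap surface between the 1/gamma cone and the cocoon,
                needed only in the partially visible case.
    """
    beam = 1.0 / gamma
    if beam < theta0:                       # relativistic, narrow beaming cone
        return F_nu if theta_obs < theta0 else 0.0
    # wide beaming cone: 1/gamma > theta0
    if theta_obs + theta0 < beam:
        return F_nu
    if theta_obs < beam:                    # partial overlap of the cones
        if S is None:
            raise ValueError("overlap surface S required in this case")
        return F_nu * S * gamma**2 / math.pi
    return 0.0
```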
As discussed in the above section, the late-time emission of the cocoon afterglow becomes non-relativistic and spherical. Therefore, no relativistic beaming has to be considered, and the emission seen by the observer only depends on $\theta _{obs}$.
\newpage
\subsubsection{Beaming effect on the energy}
\label{Hydrodynamics-Beaming-energy}
Beaming also has an effect on the luminosity and the energy. Indeed, the luminosity $L_{j}$ and energy $E_{0}$, presented in section \ref{Cocoon-Properties-Energy}, assume that the bursts are isotropic. However, when taking beaming into account, the energy $E_{iso}$ has to be considered:
\begin{equation}
E_{iso} = \frac{1}{\theta _{0}^{2} }E_{0}
\end{equation}
Nevertheless, when reaching the non relativistic regime, the expansion is spherical. Hence, the energy that has to be considered is $E_{0}$, as defined in section \ref{Cocoon-Properties-Energy}.
Consequently, in our model we consider $E_{iso}$ for the relativistic regime, and reach $E_{0}$ for the non relativistic one.
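For a sense of scale, the isotropic-equivalent correction can be evaluated for the cocoon half opening angle adopted later in the text (20 deg, an assumption here); the helper name is illustrative.

```python
import math

def e_iso(E0, theta0):
    """Isotropic-equivalent energy for emission beamed into a cone of
    half opening angle theta0 (radians): E_iso = E0 / theta0**2."""
    return E0 / theta0**2

# For a 20 deg half opening angle, beaming boosts E by roughly a factor 8
print(e_iso(1.0, math.radians(20.0)))  # ~8.2
```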
Note that the conical expansion of the half opening angle of the cocoon, $\theta _{0}$, will be taken into account in the complete version of our model.\footnote{Curves with footnotes have been modified to take sideways expansion into account}
\newpage
\section{Synchrotron Emission}
\label{Synchrotron}
\subsection{Synchrotron or Inverse Compton}
\label{Synchrotron-Inverse}
Given the fraction of the shock energy going into magnetic energy density, $\varepsilon _{B}$, and into electron energy density, $\varepsilon_{e}$, as defined in section \ref{Hydrodynamics-Shock-acceleration}, we neglect Compton scattering and consider only synchrotron emission. Indeed, in this model we take $\varepsilon_{B}=\varepsilon_{e}=0.1$ (Compton scattering can be important if $\varepsilon_{e}> \varepsilon_{B}$).
\subsection{Discussion about velocity of electrons}
\label{Synchrotron-Discussion}
Electrons will clearly be accelerated to highly relativistic Lorentz factors during the Blandford and McKee evolution of the blast wave. Nevertheless, the question of the Lorentz factor at which electrons are accelerated arises during the Sedov and Taylor phase.
During this phase, the minimum Lorentz factor $\gamma_{m}$ at which electrons are accelerated is given by:
\begin{equation}
\gamma_{m}=\epsilon_{e}\frac{p-2}{p-1} \frac{m_{p}}{m_{e}}\beta^{2}
\end{equation}
\noindent where $p =3$.
Electrons are no longer relativistic when $\gamma_{m} \rightarrow 1$, which occurs for $\beta \approx 10^{-1}$. However, we find with our model that while $ t< 10^{3}$ days, we have $\beta > 10^{-1}$. As a consequence, we can consider that, within a reasonable observing time, the electrons are accelerated to relativistic velocities.
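The threshold can be checked numerically. With $\varepsilon_{e}=0.1$ and $p=3$ as in the text, $\gamma_{m}$ reaches unity near $\beta\approx 0.1$; the function name below is illustrative.

```python
MP_OVER_ME = 1836.15   # proton-to-electron mass ratio

def gamma_m(beta, eps_e=0.1, p=3.0):
    """Minimum electron Lorentz factor in the Sedov-Taylor phase:
    gamma_m = eps_e * (p-2)/(p-1) * (m_p/m_e) * beta**2."""
    return eps_e * (p - 2.0) / (p - 1.0) * MP_OVER_ME * beta**2

# gamma_m reaches unity for beta ~ 0.1 with these parameters
print(gamma_m(0.1))   # ~0.92
```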
\subsection{Synchrotron frequency and power for a relativistic shock}
\label{Synchrotron-Frequency}
Consider a relativistic electron with Lorentz factor $\gamma _{e} \gg 1$ in a magnetic field $B$: it emits synchrotron radiation. The radiated power and the characteristic frequency for a relativistic shock are given by \citep{Sari1998}:
\begin{equation}
P(\gamma _{e}) = \frac{4}{3} \sigma _{T}c\gamma ^{2}\gamma _{e}^{2}\frac{B^{2}}{8\pi }\beta^{2}
\end{equation}
\begin{equation}
\nu (\gamma _{e}) = \gamma \gamma _{e}^{2}\frac{q_{e}B}{2\pi m_{e}c}
\end{equation}
\noindent where $\sigma _{T}$ is the Thomson cross-section and $q_{e}$ the electron charge.
In the above equations, the factors of $\gamma ^{2}$ and $\gamma $ are used to transform the results from the frame of the shocked fluid to the frame of the observer.
The spectral power, $P_{\nu }$, power per unit frequency, varies as $\nu^{1/3}$ while $ \nu < \nu (\gamma _{e}) $, and cuts off exponentially for $ \nu > \nu (\gamma _{e}) $ \citep{RB}. The peak power occurs at $\nu (\gamma _{e}) $, where it has the approximate value :
\begin{equation}
P_{\nu ,max }\approx\frac{P(\gamma _{e})}{\nu (\gamma _{e})} = \frac{m_{e}c^{2}\sigma _{T}}{3q_{e}}\gamma B
\end{equation}
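For reference, the two expressions above can be evaluated in CGS units; this is an illustrative sketch (hypothetical helper names), which recovers the familiar $\approx 2.8$ MHz per Gauss cyclotron frequency when $\gamma=\gamma_{e}=1$.

```python
import math

# CGS constants
M_E = 9.1094e-28       # electron mass [g]
C = 2.9979e10          # speed of light [cm/s]
Q_E = 4.8032e-10       # electron charge [esu]
SIGMA_T = 6.6524e-25   # Thomson cross-section [cm^2]

def nu_syn(gamma, gamma_e, B):
    """Observer-frame characteristic synchrotron frequency [Hz]."""
    return gamma * gamma_e**2 * Q_E * B / (2.0 * math.pi * M_E * C)

def p_nu_max(gamma, B):
    """Peak spectral power P(gamma_e)/nu(gamma_e) [erg s^-1 Hz^-1]."""
    return M_E * C**2 * SIGMA_T * gamma * B / (3.0 * Q_E)
```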
\subsection{Synchrotron cooling}
\label{Synchrotron-Cooling}
The above description of $P_{\nu }$ does not take into account the loss of energy due to radiation. However, the electrons emitting synchrotron radiation are cooling down. The time scale for this to occur is given by the energy of the electrons divided by the rate at which they are radiating away their energy.
Consider $\gamma _{c}$, the critical value above which cooling by synchrotron radiation is significant. The critical electron Lorentz factor $\gamma _{c}$ is given by the condition \citep{Sari1998}: $$\gamma\gamma _{c} m_{e}c^{2} = P (\gamma _{c})t$$ where $t$ refers to time in the frame of the observer.
Therefore the critical electron Lorentz factor $\gamma _{c}$, in the relativistic regime, is:
\begin{equation}
\label{eq:sync}
\gamma _{c} = \frac{3 m_{e}}{16 \varepsilon _{B} \sigma _{T}m_{p}c t \gamma ^{3}n_{0}}
\end{equation}
In the non relativistic regime, we obtain a critical electron Lorentz factor $\gamma _{c}$:
\begin{equation}
\gamma _{c} =\frac{3 m_{e} t^{-1} \gamma^{3}}{ 16 \sigma _{T} m_{p} \varepsilon _{B} n_{0}}
\end{equation}
where $t$ refers to time in the frame of the observer.
In our model, we also compute the value of $\gamma _{c}$ in the transition region.
\subsection{Synchrotron self-absorption}
\label{Synchrotron-Absorption}
Our above calculation assumes that all of the synchrotron radiation emitted by each electron reaches the observer. However this is not necessarily the case: as a photon propagates through the plasma on its way out of the source, there is a chance that it will scatter off one of the synchrotron electrons. This is known as synchrotron self-absorption.
If such scattering occurs many times before the photon can get out of the source, the result is that an outside observer only ``sees'' emission from a thin layer near the surface of the source. Beneath this, the synchrotron radiation from the electrons is self-absorbed (i.e.\ the medium is optically thick). For GRBs, self-absorption may appear at late times, typically in the radio \citep{Katz1994,Waxman1997}. It leads to a steep cutoff of the low-energy spectrum, either as the commonly known $\nu^{5/2}$ or as $\nu^{2}$. To estimate the self-absorption frequency one needs the optical depth along the line of sight. A simple approximation is $\alpha ^{'}_{\nu ^{'}} R/\gamma$, where $\alpha ^{'}_{\nu ^{'}}$ is the absorption coefficient defined by \cite{Piran2004}:
\begin{equation}
\alpha ^{'}_{\nu ^{'}} = \frac{p+2}{8\pi m_{e}\nu^{'2} }\int_{\gamma _{min}}^{\infty }d\gamma _{e}P^{'}_{\nu ^{'},e}(\gamma _{e})\frac{n(\gamma _{e})}{\gamma _{e}}
\end{equation}
The self absorption frequency $\nu _{a}$ satisfies: $\alpha ^{'}_{\nu ^{'}} R/\gamma=1$ \citep{Piran2004}.
In the relativistic regime, $\nu _{a}$ is given by \cite{Granot1999}:
\begin{equation}
\nu _{a} = 0.247 \times 4.24 \times 10^{9}\left ( \frac{p+2}{3p+2}\right )^{3/5}\times \frac{(p-1)^{8/5}}{p-2} \varepsilon _{e}^{-1}\varepsilon _{B}^{1/5}E_{52}^{1/5}n_{0}^{3/5}\: Hz
\end{equation}
\noindent where $E_{52} = E_{0}/10^{52}$.
In the opposite, non-relativistic regime, where the radio emission will peak, we consider $\nu _{a}$ \citep{Nakar2011}:
\begin{equation}
\nu _{a}\approx 10^{9} R_{17}^{\frac{2}{p+4}}n_{0}^{\frac{6+p}{2(p+4)}}\varepsilon _{e,-1}^{\frac{2 +p}{2(p+4)}}\varepsilon _{B,-1}^{\frac{2(p-1)}{p+4}}\beta ^{\frac{5p-2}{p+4}}\: Hz
\end{equation}
In our model, we also compute the value of $\nu _{a}$ in the transition region.
\subsection{Influence of cooling and self-absorption }
\label{Synchrotron-Influence}
As described in section \ref{Hydrodynamics-Shock-acceleration}, electrons are accelerated in the shock to a power law distribution of Lorentz factor $\gamma_e$, with a minimum Lorentz factor $\gamma _{m}$.
In order to observe the impact of both cooling and self-absorption, we need to compare $\nu_{m} = \nu_{syn}(\gamma _{m})$ with the cooling frequency $\nu _{c} = \nu_{syn}(\gamma_{c})$ and the self-absorption $\nu_{a}$.
We obtain the following evolution of $\nu _{m}$, $\nu _{c}$ and $\nu _{a}$, see fig.~\ref{figfrequency}.
\begin{figure} [h!]
\centering
\includegraphics[scale=0.9]{frequency_}
\caption{Important frequencies for synchrotron emission}
\label{figfrequency}
\end{figure}
Two important transition frequencies are shown in this figure: $\nu_{0}$, the frequency at which $\nu_{m}=\nu_{c}$, and $\nu_{eq}$, at which $\nu_{m}=\nu _{a}$.
We can see that $\nu_{0}$ will be important for X-ray emission while $\nu_{eq}$ will play a key role for radio emission.
\section{Light curves}
\label{Light}
\subsection{Spectrum for X-ray and Optical emission}
\label{Light-Xray}
\subsubsection{Fast cooling }
\label{Light-Xray-Fast}
As described in section \ref{Hydrodynamics-Shock-acceleration}, electrons are accelerated in the shock to a power law distribution of Lorentz factor $\gamma_e$, with a minimum Lorentz factor $\gamma _{m} $.
To calculate the net spectrum due to all the electrons we need to integrate over $\gamma _{e}$. Let the total number of electrons accelerated be $N_{e}$.
If $\gamma _{m} > \gamma _{c} $, all the electrons cool down roughly to $\gamma _{c} $ and the flux at $\nu _{c} $ is approximately $N_{e} P_{\nu,max }$ . We call this the case of fast cooling. The isotropic flux at the observer, $F_{\nu }$, is given by \cite{Sari1998}:
\begin{equation}
F_{\nu } =\left\{
\begin{array}{ll}
\left ( \frac{\nu }{\nu _{c}} \right )^{1/3}F_{\nu, max } & \nu _{c}>\nu\\
\left ( \frac{\nu }{\nu _{c}} \right )^{-1/2}F_{\nu, max } & \nu _{m}>\nu>\nu _{c}\\
\left ( \frac{\nu_{m} }{\nu _{c}} \right )^{-1/2}\left ( \frac{\nu }{\nu _{m}} \right )^{-p/2}F_{\nu, max } & \nu>\nu _{m}
\end{array}
\right.
\end{equation}
\noindent where $F_{\nu, max }= N_{e} P_{\nu,max }/(4 \pi D^{2})$ is the observed peak flux at distance $D$ from the source, assuming isotropic emission.
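For concreteness, the fast-cooling broken power law above can be sketched as a small function; the function name and default $p$ are illustrative.

```python
def fast_cooling_flux(nu, nu_c, nu_m, F_max, p=3.0):
    """Fast-cooling synchrotron spectrum (nu_m > nu_c), piecewise
    broken power law of Sari et al. (1998)."""
    if nu < nu_c:
        return (nu / nu_c) ** (1.0 / 3.0) * F_max
    if nu < nu_m:
        return (nu / nu_c) ** (-0.5) * F_max
    return (nu_m / nu_c) ** (-0.5) * (nu / nu_m) ** (-p / 2.0) * F_max
```

The prefactors in each branch guarantee that the spectrum is continuous across the break frequencies $\nu_{c}$ and $\nu_{m}$.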
\subsubsection{Slow cooling }
\label{Light-Xray-Slow}
When $\gamma _{c} > \gamma _{m} $, only the electrons with $\gamma _{e} > \gamma _{c} $ cool efficiently. We call this slow cooling, because the electrons with $\gamma _{e} \approx \gamma _{m} $, which represent a large fraction of the electron population, do not cool within a time $t$, see eq.\ \ref{eq:sync}. Integrating over the electron distribution, the isotropic flux at the observer, $F_{\nu }$, is given by \cite{Sari1998}:
\begin{equation}
F_{\nu } =\left\{
\begin{array}{ll}
\left ( \frac{\nu }{\nu _{m}} \right )^{1/3}F_{\nu, max } & \nu _{m}>\nu\\
\left ( \frac{\nu }{\nu _{m}} \right )^{-(p-1)/2}F_{\nu, max } & \nu _{c}>\nu>\nu _{m}\\
\left ( \frac{\nu_{c} }{\nu _{m}} \right )^{-(p-1)/2}\left ( \frac{\nu }{\nu _{c}} \right )^{-p/2}F_{\nu, max } & \nu>\nu _{c}
\end{array}
\right.
\end{equation}
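The slow-cooling counterpart can be sketched in the same way (illustrative helper, default $p=3$ as in the text).

```python
def slow_cooling_flux(nu, nu_m, nu_c, F_max, p=3.0):
    """Slow-cooling synchrotron spectrum (nu_c > nu_m), piecewise
    broken power law of Sari et al. (1998)."""
    if nu < nu_m:
        return (nu / nu_m) ** (1.0 / 3.0) * F_max
    if nu < nu_c:
        return (nu / nu_m) ** (-(p - 1.0) / 2.0) * F_max
    return (nu_c / nu_m) ** (-(p - 1.0) / 2.0) * (nu / nu_c) ** (-p / 2.0) * F_max
```

Here the spectrum peaks at $\nu_{m}$ and the prefactor in the last branch again enforces continuity at $\nu_{c}$.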
\subsection{Light curves for X-ray and Optical emission}
\label{Light curves for X-ray and Optical emission}
The instantaneous spectra described in the previous section do not depend on the hydrodynamical evolution of the shock. Nevertheless, the light curves at a given frequency do depend on the hydrodynamic evolution. As shown in the previous sections, $ N_{e} $ and $\gamma$ vary with time, and so do $\nu _{m}$ and $\nu_{c}$, see section \ref{Synchrotron-Influence}.
Our model, which takes into account the full hydrodynamic evolution in the BM, transition and ST regimes, allows us to calculate the light curves.
Moreover, as shown in section \ref{Hydrodynamics-Beaming}, our model takes the beaming effects into account to compute the flux seen by the observer.
The beaming also affects the number of accelerated electrons $N_{e}$: during the relativistic, beamed phase $N_{e} = n_{0}\pi \theta_{0}^{2}R^{3}$, while during the non-relativistic, unbeamed phase $N_{e} = n_{0}\frac{4\pi }{3}R^{3}$.
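The two swept-up electron counts can be written as a single helper (illustrative sketch; the names are hypothetical), making explicit that the spherical phase collects $\frac{4}{3}\theta_{0}^{-2}$ times more electrons at a given radius.

```python
import math

def n_electrons(R, n0, theta0, relativistic):
    """Number of swept-up electrons: a cone of half opening theta0 during
    the beamed relativistic phase, a full sphere afterwards."""
    if relativistic:
        return n0 * math.pi * theta0**2 * R**3
    return n0 * (4.0 * math.pi / 3.0) * R**3
```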
Finally, for an observer at a distance $D = 10 ^{28}$~cm, with a cocoon half opening angle $\theta_0 = 20$~deg, a density $n_{0}=1$~cm$^{-3}$, an energy $ E = 10 ^{50}$~erg, and at the frequencies $\nu = 7 \times 10^{14}$~Hz and $\nu = 6 \times 10^{16}$~Hz, we obtain the following light curves for both on-axis and off-axis observers:
\begin{figure}[p]
\centering
\includegraphics[scale=0.65]{optical-onaxis.png}
\caption[bla]{Optical light curve on-axis observer \footnotemark}
\end{figure}
\footnotetext{\label{foot_curve} curve modified after Master's submission to integrate sideways expansion}
\begin{figure}[p]
\centering
\includegraphics[scale=0.65]{optical-offaxis.png}
\caption[bla]{Optical light curve off-axis observer \footnoteref{foot_curve} }
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[scale=0.65]{x-onaxis.png}
\caption[bla]{X-ray light curve on-axis observer \footnoteref{foot_curve}}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[scale=0.65]{x-offaxis.png}
\caption[bla]{X-ray light curve off-axis observer \footnoteref{foot_curve}}
\end{figure}
For both X-ray and optical emission, an off-axis observer begins to ``see'' the flux later because of her position, and the flux observed during the relativistic, beamed part is lower because of the beaming effects discussed in section \ref{Hydrodynamics-Beaming}.
\subsection{Spectrum for radio }
\label{Light-Radio-Spectrum}
As discussed in section \ref{Synchrotron-Influence}, for the radio spectrum the two important frequencies are $\nu _{m}$ and $\nu _{a}$. Similarly to the X-ray spectrum, there are two distinct cases: $\nu _{m}>\nu _{a}$ and $\nu _{m}<\nu _{a}$. Integrating over the electron distribution, we have the following isotropic flux at the observer, $F_{\nu }$ \citep{Nakar2011}:
If $\nu _{m}>\nu _{a}$,
\begin{equation}
F_{\nu } =\left\{
\begin{array}{ll}
\left ( \frac{\nu }{\nu _{a}} \right )^{2}\left ( \frac{\nu_{a} }{\nu _{m}} \right )^{1/3}F_{m } & \nu _{a}>\nu\\
\left ( \frac{\nu }{\nu _{m}} \right )^{1/3}F_{m } & \nu _{m}>\nu>\nu _{a}\\
\left ( \frac{\nu }{\nu _{m}} \right )^{-(p-1)/2}F_{m } & \nu>\nu _{m}
\end{array}
\right.
\end{equation}
\noindent with $F_{ m}\approx 0.5~\mathrm{mJy}\;R_{17}^{3 }\, n_{0}^{3/2 }\, \varepsilon _{B}^{1/2 }\,\beta\, D_{27}^{-2}$, where $R_{17} = R/10^{17}$~cm and $D_{27} = D/10^{27}$~cm.
If $\nu _{a}>\nu _{m}$, the flux $F_{m}$ is not reached, i.e.\ the spectrum does not peak at $F_{m }$ but at $F_{a}\approx \left ( \frac{\nu _{a}}{\nu _{m}} \right )^{-(p-1)/2}F_{m }$.
Integrating over the electron distribution, we have the following isotropic flux at the observer, $F_{\nu }$ \citep{Nakar2011}:
\begin{equation}
F_{\nu } =\left\{
\begin{array}{ll}
\left ( \frac{\nu }{\nu _{m}} \right )^{-(p-1)/2}F_{m } & \nu >\nu_{a}\\
\left ( \frac{\nu }{\nu _{a}} \right )^{5/2}F_{a } & \nu _{a}>\nu>\nu _{m}\\
\left ( \frac{\nu_{m} }{\nu _{a}} \right )^{5/2}\left ( \frac{\nu }{\nu _{m}} \right )^{2}F_{a } & \nu<\nu _{m}
\end{array}
\right.
\end{equation}
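The self-absorbed case can likewise be sketched as a helper (illustrative names, default $p=3$); the peak value $F_{a}$ is computed internally from $F_{m}$.

```python
def radio_flux_absorbed(nu, nu_m, nu_a, F_m, p=3.0):
    """Radio synchrotron spectrum for nu_a > nu_m (self-absorbed case).
    The spectrum peaks at F_a = (nu_a/nu_m)**(-(p-1)/2) * F_m."""
    F_a = (nu_a / nu_m) ** (-(p - 1.0) / 2.0) * F_m
    if nu > nu_a:
        return (nu / nu_m) ** (-(p - 1.0) / 2.0) * F_m
    if nu > nu_m:
        return (nu / nu_a) ** 2.5 * F_a
    return (nu_m / nu_a) ** 2.5 * (nu / nu_m) ** 2.0 * F_a
```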
\subsection{Light curves for radio emission}
\label{Light-Radio-Light}
Similarly to section \ref{Light curves for X-ray and Optical emission}, for an observer at a distance $D = 10 ^{28}$~cm, a density $n_{0}=1$~cm$^{-3}$, an energy $ E = 10 ^{50}$~erg, and a frequency $\nu = 10^{10}$~Hz, we obtain the following light curves for both on-axis and off-axis observers. The half opening angle of the cocoon is $\theta_0 = 20$~deg and the observer angle is $\theta_{obs} = 40$~deg.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.7]{radio-onaxis.png}
\caption[bla]{Radio light curve on-axis observer \footnoteref{foot_curve}}
\label{fig::LCradio}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.7]{radio-offaxis.png}
\caption[bla]{Radio light curve off-axis observer \footnoteref{foot_curve}}
\label{fig::LCradiobeaming}
\end{figure}
Similarly to section \ref{Light curves for X-ray and Optical emission}, an off-axis observer begins to ``see'' the emission later, and the flux observed during the relativistic part is lower.
It can be seen on both curves that the peak occurs at 20 days. At this time, $\nu_m$ and $\nu_a$ are equal, so the spectrum calculation changes as discussed in section \ref{Light-Radio-Spectrum}. The later change in the slope occurs when $\nu_m$ becomes equal to $\nu$, which also induces a change in the spectrum, as discussed in section \ref{Light-Radio-Spectrum}.
\newpage
\section{Is the cocoon afterglow a promising EM counterpart of GW ?}
\label{Is the cocoon afterglow a promising EM counterpart of GW ?}
Hereafter we first present the detectability of binary neutron star mergers, then discuss the other EM counterpart candidates. Finally, we compare the cocoon afterglow to the GRB afterglow.
\subsection{Neutron star binary merger rate}
\label{Can LIGO observe neutron star binary merger?}
To date, the three events detected by LIGO are binary black hole mergers \citep{Abbott2017,GW1,GW2}. Except perhaps in rare circumstances, mergers of stellar mass black holes are not expected to produce luminous EM emission, due to the absence of baryonic matter in these systems. Nevertheless, as shown in fig.~\ref{fig::NSNS}, GW emitted during binary neutron star mergers will be above the sensitivity threshold of Advanced LIGO in the next campaigns. Therefore, looking for EM counterparts of binary neutron star mergers will hopefully allow localization of future GW emitted during such events.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.7]{NSNS}
\caption{Binary neutron star merger rate threshold for current and future LIGO campaigns}
\label{fig::NSNS}
\end{figure}
\newpage
\subsection{Discussion about other possible EM counterparts }
\label{IsTheCocoon-Discussion}
Macronovae, r-process supernova-like events, and the radio flares discussed in section \ref{Intro-Motivation} are other EM candidates. They also arise from binary neutron star mergers.
Matter ejected during a binary neutron star merger is enriched with heavy unstable nuclei whose radioactive decay powers a macronova \citep{Li1998,Kulkarni2005,Metzger2010}. However, the macronova will peak in the infrared, which makes it less likely to be observed considering infrared telescope sensitivities.
The interaction of the expanding ejecta produced in the merger with the surrounding medium produces, at a later stage, a radio flare lasting months to years, but peaking around a year after the prompt emission \citep{Nakar2011}.
\subsection{Comparison with GRB afterglow}
\label{IsTheCocoon-Comparison}
The cocoon afterglow is produced by the same physical mechanisms as the GRB afterglow. Consequently, our model can be used to calculate light curves for the GRB afterglow. For the GRB afterglow, we consider the same energy as for the cocoon afterglow, see section \ref{Cocoon-Properties-Energy}, an initial Lorentz factor $\gamma_{0} \approx 200 $ and a jet half opening angle $\theta _{0} = 10$~deg.
\subsubsection{X-ray emission}
For a frequency of $\nu = 6 \times 10^{16}$~Hz, we obtain the following light curves for the cocoon afterglow and the GRB afterglow, see fig.~\ref{fig::6-xraycocoon} and fig.~\ref{fig::6-xrayjet}.
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{x-comp-cocoon}
\caption[bla]{X-ray emission of the cocoon afterglow \footnoteref{foot_curve}}
\label{fig::6-xraycocoon}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{x-comp-jet}
\caption[bla]{X-ray emission of the GRB afterglow \footnoteref{foot_curve}}
\label{fig::6-xrayjet}
\end{figure}
It can be observed that the cocoon afterglow is both brighter and appears sooner than the orphan jet afterglow when seen off-axis. For both afterglows, a larger observer angle gives a later and fainter emission.
\subsubsection{Optical emission}
\label{IsTheCocoon-Comparison-Optical}
For a frequency of $\nu = 7 \times 10^{14}$~Hz, we obtain the following light curves for the cocoon afterglow and the GRB afterglow, see fig.~\ref{fig::optical_cocoon_6} and fig.~\ref{fig::6opticaljet}.
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{optical-comp-cocoon}
\caption[bla]{Optical emission of the cocoon afterglow \footnoteref{foot_curve}}
\label{fig::optical_cocoon_6}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{optical-comp-jet}
\caption[bla]{Optical emission of the GRB afterglow \footnoteref{foot_curve}}
\label{fig::6opticaljet}
\end{figure}
Similarly to the X-ray emission, the cocoon afterglow is both brighter and appears sooner than the orphan jet afterglow when seen off-axis. For both afterglows, a larger observer angle gives a later and fainter emission.
\subsubsection{Radio emission}
\label{IsTheCocoon-Comparison-Radio}
For a frequency of $\nu = 1 $~GHz, we obtain the following light curves for the cocoon afterglow and the GRB afterglow, see fig.~\ref{fig::6-10GHz-cocoon} and fig.~\ref{fig::6-10GHzjet}.
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{radio-comp-cocoon.png}
\caption[bla]{Radio emission of the cocoon afterglow \footnoteref{foot_curve}}
\label{fig::6-10GHz-cocoon}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{radio-comp-jet.png}
\caption[bla]{Radio emission of the GRB afterglow \footnoteref{foot_curve}}
\label{fig::6-10GHzjet}
\end{figure}
In contrast to the previous cases, for an observer angle of 30~deg both afterglow emissions are comparable. However, the cocoon afterglow occurs one day before the jet afterglow.
For an observer angle of 45~deg or 60~deg, the orphan jet afterglow is fainter than the cocoon one and appears later.
\newpage
\section{Conclusion}
\label{Conclusion}
The detection of a GW event with a coincident EM counterpart will allow the localization of the event progenitor. This should provide us with important information about one of the most intriguing and energetic phenomena in our Universe, that of neutron star mergers. It will also allow us to ascertain the effective sensitivity of GW detectors.
The EM candidates discussed in section \ref{IsTheCocoon-Discussion}, macronovae and radio flares, exhibit several uncertain characteristics that cast doubt on their possible observation. Therefore, we considered in this work another, less known and little studied EM counterpart: the cocoon afterglow. For that purpose, we propose a model that provides the full hydrodynamic evolution of a blast wave, including the mildly relativistic regime where neither the Sedov-Taylor solution nor its fully relativistic counterpart, the Blandford-McKee solution, is valid.
As shown in section \ref{IsTheCocoon-Comparison}, under favorable conditions the cocoon afterglow emission is comparable to the GRB afterglow. However, unlike the latter, it will be observable, depending on the observer angle, a few days to more than a dozen days before the orphan GRB afterglow itself. \textbf {Therefore, we expect the signal arising from the cocoon afterglow to be of prime importance}.
In a subsequent study, we will consider more sophisticated hypotheses for the sideways expansion.
We conclude that the cocoon afterglow can be a promising EM counterpart, and it will be the subject of our future research.
\newpage
\section{Appendix: Beaming --- calculation of the surface $S$}
\begin{equation}
\begin{split}
\theta_{obs} &= \frac{1}{\Gamma} + \theta_j - d \\
\Longrightarrow d &= \frac{1}{\Gamma} + \theta_j - \theta_{obs}
\end{split}
\end{equation}
By the Pythagorean theorem:
\begin{equation}
\begin{split}
\frac{1}{\Gamma^2} &= \left( \frac{1}{\Gamma} - \frac{d}{2} \right)^2 + d_2^2 \\
\Longrightarrow d_2 &=\left( \frac{d}{\Gamma} - \frac{d^2}{4} \right)^{\frac{1}{2}}
\end{split}
\end{equation}
(here $d_2$ denotes the full diagonal)
\begin{equation}
A = d \times \frac{d_2}{2}
\end{equation}
\begin{equation}
A = \frac{d}{2} \times \left( \frac{d}{\Gamma} - \frac{d^2}{4} \right)^{\frac{1}{2}}
\end{equation}
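The appendix formulas above can be collected into a single helper. This is an illustrative sketch of the rough rhombus approximation to the lens-shaped overlap region; the function name is hypothetical and the variable names mirror the appendix.

```python
import math

def overlap_area(gamma, theta_j, theta_obs):
    """Approximate angular overlap area between the 1/gamma beaming cone
    and the cocoon of half opening theta_j, following the appendix:
    d  = 1/gamma + theta_j - theta_obs
    d2 = sqrt(d/gamma - d**2/4)
    A  = d * d2 / 2
    Valid only in the partial-overlap case, where d is small and positive."""
    d = 1.0 / gamma + theta_j - theta_obs
    d2 = math.sqrt(d / gamma - d**2 / 4.0)
    return 0.5 * d * d2
```

For instance, `overlap_area(2.0, 0.3, 0.6)` evaluates to $0.03$.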
\def\ref@jnl#1{{#1}}
\def\aj{\ref@jnl{AJ}}
\def\actaa{\ref@jnl{Acta Astron.}}
\def\araa{\ref@jnl{ARA\&A}}
\def\apj{\ref@jnl{ApJ}}
\def\apjl{\ref@jnl{ApJ}}
\def\apjs{\ref@jnl{ApJS}}
\def\ao{\ref@jnl{Appl.~Opt.}}
\section{\bf Introduction}\ \\
In this paper we study the global existence of weak solution to the modified incompressible Navier-Stokes equations in $\R^3$
\begin{equation}\label{$S$}
\left\{ \begin{matrix}
\partial_t u
-\nu\Delta u+ u.\nabla u +\alpha|u|^{\beta-1}u = -\nabla p &\hbox{ in } \mathbb R^+\times \mathbb R^3\\
{\rm div}\, u = 0 \hfill&\hbox{ in } \mathbb R^+\times \mathbb R^3\\
u(0,x) =u^0(x) \hfill&\hbox{ in }\mathbb R^3,\\
\alpha>0,\;\beta>1\hfill&.
\end{matrix}\right. \tag{$NSD$}
\end{equation}
where $u=u(t,x)=(u_1,u_2,u_3)$, $p=p(t,x)$ denote respectively the unknown velocity and the pressure of the fluid at the point $(t,x)\in \mathbb R^+\times \mathbb R^3$, $\nu $ is the viscosity of fluid and $u^0=(u_1^0(x),u_2^0(x),u_3^0(x))$ the initial given velocity.
The damping is from the resistance to the motion of the flow. It describes various physical situations such as porous media flow, drag or friction effects, and some dissipative mechanisms (see \cite{BD,BDC,H,HP} and references therein).
The fact that ${\rm div}\,u = 0$, allows to write the term $(u.\nabla u):=u_1\partial_1 u+u_2\partial_2 u+u_3\partial_3u$ in the following form
$ {\rm div}\,(u\otimes u):=({\rm div}\,(u_1u),{\rm div}\,(u_2u),{\rm div}\,(u_3u)).$
If the initial velocity $u^0$ is quite regular, the divergence free condition determines the pressure $p$.\\
In order to simplify the calculations and the proofs of our results, we consider the viscosity unitary (i.e. $\nu=1$).
The global existence of weak solutions to the initial value problem for the classical incompressible Navier-Stokes equations was proved long ago by Leray and Hopf (see \cite{Hopf}-\cite{Leray}). The uniqueness remains an open problem in dimensions $d\geq3$.\\
The polynomial damping $\alpha|u|^{\beta-1}u$ is studied in \cite{CJ} by Cai and Jiu, where they proved the global existence of weak solution in
$$L^\infty(\R^+,L^2(\R^3))\cap L^2(\R^+,\dot H^1(\R^3))\cap L^{\beta+1}(\R^+,L^{\beta+1}(\R^3)).$$
The purpose of this paper is to study the uniqueness, continuity and large time decay of the global solution of the incompressible Navier-Stokes equations with damping $(NSD)$. We recall that in \cite{CJ} the authors employ the Galerkin approximation to construct the global solution of $(NSD)$ with $\beta\geq1$. In our case, we use the Friedrichs method to prove the continuity and the uniqueness of such a solution for $\beta>3$. The large time decay is established for $\beta\geq \frac{10}3$. Precisely, our main result is the following:
\begin{theorem}\label{th2}\pn
Let $\beta>3$ and $u^0\in L^2(\mathbb R^3)$ be a divergence free vector fields, then there is a unique global solution of $(NSD)$:
$u\in C_b(\R^+,L^2(\mathbb R^3))\cap L^2(\R^+,\dot H^1(\mathbb R^3))\cap L^{\beta+1}(\R^+,L^{\beta+1}(\R^3))$. Moreover, for all $t\geq0$
\begin{equation}\label{eqth2-1}\|u(t)\|_{L^2}^2+2\int_0^t\|\nabla u(s)\|_{L^2}^2ds +2\alpha\int_0^t\|u(s)\|_{L^{\beta+1}}^{\beta+1}ds
\leq \|u^0\|_{L^2}^2.
\end{equation}
Moreover, if $\beta\geq\frac{10}3$ we have
\begin{equation}\label{eqth2-2}
\limsup_{t\to \infty}\|u(t)\|_{L^2}=0.
\end{equation}
\end{theorem}
\begin{rem}\pn
In Theorem \ref{th2}, inequality (\ref{eqth2-1}) is proved in
\cite{CJ}. The new parts of this theorem are the uniqueness, the continuity of the
global solution in $L^2(\R^3)$ and the asymptotic result (\ref{eqth2-2}).
\end{rem}
\section{\bf Notations and Preliminary Results}
For a function $f\colon\R^3\to\bar\R$ and $R>0$, the
Friedrichs operator $J_R$ is defined by:
$\ds J_R(D)f=\F^{-1}(\chi_{B_R} \widehat{f}),$
where $B_R$ is the ball of center $0$ and radius $R$. If $L^2_\sigma(\R^3)$ is the space of divergence-free vector fields in $L^2 (\R^3)$, the
Leray projector $\mathbb P\colon (L^2(\R^3))^3\to (L^2_\sigma(\R^3))^3$ is defined by:
$$\mathcal F(\mathbb P f)=\widehat{f}(\xi)-(\widehat{f}(\xi).\frac{\xi}{|\xi|})\frac{\xi}{|\xi|}=M(\xi)\widehat{f}(\xi),$$
where $M(\xi)$ is the matrix $(\delta_{k,\ell}-\frac{\xi_k\xi_\ell}{|\xi|^2})_{1\leq k,\ell\leq 3} $.\\
Particularly, if $ u \in \mathcal S(\R^3)^3$, we obtain
$$\ds \mathbb P( u)_k(x) = \frac{1}{(2\pi)^{\frac 3 2}} \int_{\R^3} \left( \delta_{kj}-\frac{\xi_k \xi_j}{ \vert \xi \vert^2}\right)
\widehat{ u}_j(\xi) \, e^{i \xi \cdot x}\, d\xi,$$
where $\mathcal S(\R^n)$ is the Schwartz space.
Define also the operator $A_R(D)$ by:
$$\ds A_R(D)u=\mathbb P J_R(D)u=\mathcal F^{-1}(M(\xi)\chi_{B_R}(\xi)\widehat{u}).$$
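On the Fourier side, $\mathbb P$ acts frequency by frequency as the orthogonal projection $M(\xi)$ onto the plane $\xi^\perp$. The short Python snippet below (ours, purely illustrative; the helper name \texttt{leray\_symbol} is not from the text) checks the two defining properties of this symbol at a random frequency:

```python
import random

def leray_symbol(xi, fhat):
    """Apply M(xi) = I - xi xi^T/|xi|^2 to the vector fhat, i.e. the
    Fourier symbol of the Leray projector at the frequency xi."""
    n2 = sum(c * c for c in xi)                   # |xi|^2
    d = sum(c * f for c, f in zip(xi, fhat))      # xi . fhat
    return [f - d * c / n2 for c, f in zip(xi, fhat)]

random.seed(0)
xi = [random.uniform(-1, 1) for _ in range(3)]
fhat = [random.uniform(-1, 1) for _ in range(3)]
pf = leray_symbol(xi, fhat)

# the projected field is divergence-free on the Fourier side: xi . (M(xi) fhat) = 0
assert abs(sum(c * p for c, p in zip(xi, pf))) < 1e-12
# M(xi) is a projection: applying it twice changes nothing
assert all(abs(a - b) < 1e-12 for a, b in zip(pf, leray_symbol(xi, pf)))
```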
In what follows, we recall some preliminary results:
\begin{prop}(\cite{HBAF})\label{prop1}\pn
Let $H$ be a Hilbert space.
\begin{enumerate}
\item The unit ball is weakly compact, that is: if $(x_n)$ is a bounded sequence in $H$, then there is a subsequence $(x_{\varphi(n)})$ such that
$$(x_{\varphi(n)}|y)\to (x|y),\;\forall y\in H.$$
\item If $x\in H$ and $(x_n)$ a bounded sequence in $H$ such that
$\ds\lim_{n\to+\infty}(x_n|y)= (x|y)$, for all $y\in H,$
then $\|x\|\leq\ds \liminf_{n\to \infty}\|x_n\|.$
\item If $x\in H$ and $(x_n)$ is a bounded sequence in $H$ such that\\
$\ds\lim_{n\to+\infty}(x_n|y)= (x|y)$, for all $y\in H$
and
$\limsup_{n\to \infty}\|x_n\|\leq \|x\|,$
then $\ds \lim_{n\to \infty}\|x_n-x\|=0.$
\end{enumerate}
\end{prop}
We recall the following product law in the homogeneous Sobolev spaces:
\begin{lem}(\cite{JYC})\label{lem-prod}\pn
Let $s_1,\ s_2$ be two real numbers and $d\in\N$.
\begin{enumerate}
\item If $s_1<\frac d 2$\; and\; $s_1+s_2>0$, there exists a constant $C_1=C_1(d,s_1,s_2)$, such that: if $f,g\in \dot{H}^{s_1}(\mathbb{R}^d)\cap \dot{H}^{s_2}(\mathbb{R}^d)$, then $f.g \in \dot{H}^{s_1+s_2-\frac{d}{2}}(\mathbb{R}^d)$ and
$$\|fg\|_{\dot{H}^{s_1+s_2-\frac{d}{2}}}\leq C_1 (\|f\|_{\dot{H}^{s_1}}\|g\|_{\dot{H}^{s_2}}+\|f\|_{\dot{H}^{s_2}}\|g\|_{\dot{H}^{s_1}}).$$
\item If $s_1,s_2<\frac d 2$\; and\; $s_1+s_2>0$ there exists a constant $C_2=C_2(d,s_1,s_2)$ such that: if $f \in \dot{H}^{s_1}(\mathbb{R}^d)$\; and\; $g\in\dot{H}^{s_2}(\mathbb{R}^d)$, then $f.g \in \dot{H}^{s_1+s_2-\frac{d}{2}}(\mathbb{R}^d)$ and
$$\|fg\|_{\dot{H}^{s_1+s_2-\frac{d}{2}}}\leq C_2 \|f\|_{\dot{H}^{s_1}}\|g\|_{\dot{H}^{s_2}}.$$
\end{enumerate}
\end{lem}
\begin{lem}\label{lem2}\pn
Let $ \beta>0$ and $d\in\N$. Then, for all $x,y\in\R^d$, we have
\begin{equation}\label{eqn-lem2-1}
\langle |x|^{\beta}x-|y|^{\beta}y ,x-y\rangle\geq \frac{1}{2}(|x|^{\beta}+|y|^{\beta})|x-y|^{2}.
\end{equation}
\end{lem}
{\bf Proof.} \pn
Without loss of generality, suppose that $|x|\geq|y|$. For $u\geq v\geq0$, a direct computation gives
\begin{equation}\label{eqn-lem2-3}
2\langle ux-vy, x-y\rangle-(u+v)|x-y|^{2}=(u-v)(|x|^2-|y|^2)\geq0.
\end{equation}
Taking $u=|x|^\beta$ and $v=|y|^\beta$, we obtain inequality \eqref{eqn-lem2-1}.
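As a quick numerical sanity check of Lemma \ref{lem2} (of course not a substitute for the proof above), one can test inequality \eqref{eqn-lem2-1} on random vectors; the helper \texttt{bracket} below is ours:

```python
import random

def bracket(x, y, beta):
    """Return (lhs, rhs) of the inequality
    <|x|^beta x - |y|^beta y, x - y>  >=  (|x|^beta + |y|^beta)|x - y|^2 / 2."""
    nx = sum(c * c for c in x) ** 0.5
    ny = sum(c * c for c in y) ** 0.5
    lhs = sum(((nx ** beta) * a - (ny ** beta) * b) * (a - b)
              for a, b in zip(x, y))
    rhs = 0.5 * (nx ** beta + ny ** beta) * sum((a - b) ** 2 for a, b in zip(x, y))
    return lhs, rhs

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-2, 2) for _ in range(3)]
    y = [random.uniform(-2, 2) for _ in range(3)]
    lhs, rhs = bracket(x, y, beta=3.5)
    assert lhs >= rhs - 1e-9
```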
The following result is a generalization of Proposition 3.1 in \cite{J1}.
\begin{prop}\label{prop2} \pn
Let $\nu_1,\nu_2,\nu_3\in[0,\infty)$, $r_1,r_2,r_3\in(0,\infty)$ and $f^0\in L^2_\sigma(\R^3)$. \\
For $n\in\N$, let $F_n:\R^+\times\R^3\to \R^3$ be a measurable function in $C^1(\R^+,L^2(\R^3))$ such that $$A_n(D)F_n=F_n,\;F_n(0,x) =A_n(D)f^0(x)$$ and
\begin{enumerate}
\item [(E1)]
$\ds \partial_t F_n+\sum_{k=1}^3\nu_k|D_k|^{2r_k} F_n+ A_n(D){\rm div}\,(F_n\otimes F_n)+ A_n(D)h(|F_n|)F_n =0.$
\item [(E2)]
\beqq
&&\ds \|F_n(t)\|_{L^2}^2+2\sum_{k=1}^3\nu_k\int_0^t\||D_k|^{r_k} F_n(s)\|_{L^2}^2ds\\
&&\hskip 2cm +2 a\int_0^t\|h(|F_n(s)|)|F_n(s)|^2\|_{L^1}ds \leq \|f^0\|_{L^2}^2.
\eeqq
\end{enumerate}
where $\ds h(z)=\alpha z^{\beta-1},$ with $\alpha>0$ and $\beta>3$.
Then: for every $\varepsilon>0$ there is $\delta=\delta(\varepsilon,\alpha,\beta,\nu_1,\nu_2,\nu_3,r_1,r_2,r_3,\|f^0\|_{L^2})>0$
such that: for all $t_1,t_2\in\R^+$, we have
\begin{equation}\label{eqn-1}
\Big(|t_2-t_1|<\delta\Longrightarrow \|F_n(t_2)-F_n(t_1)\|_{H^{-s_0}}<\varepsilon\Big),\;\forall n\in\N,
\end{equation}
with $\ds s_0\ge \max(3,2r_1,2r_2,2r_3).$
\end{prop}
{\bf Proof.} The proof is similar to that of Proposition 2.4 in \cite{MJ1}.
\section{\bf Proof of Theorem \ref{th2}}
\subsection{Existence of Solution}\ \\
Consider the approximate system:
$$(NSD_{n})
\begin{cases}
\partial_t u
-\Delta J_nu+ J_n(J_nu.\nabla J_nu) + \alpha J_n[ |J_nu|^{\beta-1}J_nu] =\;\;-\nabla p_n\hbox{ in } \mathbb R^+\times \mathbb R^3\\
p_n=(-\Delta)^{-1}\Big({\rm div}\,J_n(J_nu.\nabla J_nu) + \alpha {\rm div}\,J_n[|J_nu|^{\beta-1}J_nu]\Big)\\
{\rm div}\, u = 0 \hbox{ in } \mathbb R^+\times \mathbb R^3\\
u(0,x) =J_nu^0(x) \;\;\hbox{ in }\mathbb R^3,
\end{cases}
$$
where $J_n$ is the Friedrichs operator defined by
$\ds J_n(D)f=\F^{-1}(\chi_{B_n} \widehat{f}) $ and
$B_n$ is the ball of center $0$ and radius $n\in\N$.
\begin{enumerate}
\item[$\bullet$] By Cauchy-Lipschitz theorem, there exists a unique solution $u_n\!\in\! C^1(\R^+,L^2_\sigma(\R^3))$ of the system $(NSD_{n})$ such that $J_nu_n\!=\!u_n$ and
\begin{equation}\label{eq-energyn}\|u_n(t)\|_{L^2}^2+2\int_0^t\|\nabla u_n(s)\|_{L^2}^2ds +2\alpha\int_0^t\|u_n(s)\|_{L^{\beta+1}}^{\beta+1}ds\leq \|u^0\|_{L^2}^2.\end{equation}
\item[$\bullet$]
By \eqref{eq-energyn}, the sequence $(u_n)_n$ is bounded in $L^\infty(\R^+,L^2(\R^3))$ and in $L^2(\R^+,\dot H^{1}(\R^3))$.\\
Using proposition \ref{prop2} and the interpolation method, we deduce that the sequence $(u_n)_n$ is equicontinuous on $H^{-1}(\R^3)$.
\item[$\bullet$]
For $(T_q)_q $ a strictly increasing sequence such that $\ds\lim_{q\to+\infty} T_q=\infty$, consider a sequence
of functions $(\theta_q)_{q }$ in $C_0^\infty(\R^3)$ such that
$$\left\{\begin{array}{l}
\theta_q(x)=1,\ {\rm for}\ |x|\le q+\frac{5}{4}\\
\theta_q(x)=0,\ {\rm for}\ |x|\ge q+2 \\
0\leq \theta_q\leq 1.
\end{array}\right.$$
Using the energy estimate \eqref{eq-energyn}, the equicontinuity of the sequence $(u_n)_n$ on $H^{-1}(\R^3)$
and classical argument by combining Ascoli's theorem and the Cantor diagonal process, there exists a subsequence $(u_{\varphi(n)})_n$ and
$u\in L^\infty(\R^+,L^2(\R^3))\cap C(\R^+,H^{-3}(\R^3))$ such that: for all $q\in\N$,
\begin{equation}\label{eq-cv}\lim_{n\to \infty}\|\theta_q(u_{\varphi(n)}(t)-u(t))\|_{L^\infty([0,T_q],H^{-4})}=0.\end{equation}
In particular, the sequence $(u_{\varphi(n)}(t))_n$ converges weakly in $L^2(\R^3)$ to $u(t)$ for all
$t\geq0$.
\item[$\bullet$] Using the same method as in \cite{J1}, we obtain:
\begin{equation}\label{eqn-23}
\|u(t)\|_{L^2}^2\!+\!2\int_0^t\!\|\nabla u(s)\|_{L^2}^2ds\!+\!2\alpha\int_0^t\!\|u(s)\|_{L^{\beta+1}}^{\beta+1}ds\!\leq\! \|u^0\|_{L^2}^2.
\end{equation}
for all $t\geq0$, and $u$ is a solution of the system $(NSD)$.
\end{enumerate}
\subsection{Continuity of the solution in $L^2$}\pn
By the inequality (\ref{eqn-23}), we have
$\ds \limsup_{t\to 0}\|u(t)\|_{L^2}\leq\|u^0\|_{L^2} $ and using
proposition \ref{prop1}-(3), we get
$\ds \limsup_{t\to 0}\|u(t)-u^0\|_{L^2}=0.$
This ensures the continuity of the solution $u$ at $0$. To prove the continuity on $\R$, consider the functions
$\ds v_{n,\varepsilon}(t)=u_{\varphi(n)}(t+\varepsilon),\;p_{n,\varepsilon}(t)=p_{\varphi(n)}(t+\varepsilon),$
for $n\in\N$ and $\varepsilon>0$. We have:
\beqq
\partial_tu_{\varphi(n)}-\Delta u_{\varphi(n)}+J_{\varphi(n)}(u_{\varphi(n)}.\nabla u_{\varphi(n)})
+\alpha J_{\varphi(n)}(|u_{\varphi(n)}|^{\beta-1} u_{\varphi(n)})&=&-\nabla p_{\varphi(n)} \\
\partial_tv_{n,\varepsilon}-\Delta v_{n,\varepsilon}+J_{\varphi(n)}(v_{n,\varepsilon}.\nabla
v_{n,\varepsilon})+\alpha J_{\varphi(n)}(|v_{n,\varepsilon}|^{\beta-1} v_{n,\varepsilon})&=&-\nabla p_{n,\varepsilon}.
\eeqq
The function $w_{n,\varepsilon}=u_{\varphi(n)}-v_{n,\varepsilon}$ fulfills the following:
\beqq
&&\partial_tw_{n,\varepsilon}-\Delta w_{n,\varepsilon} +\alpha J_{\varphi(n)}\Big(|u_{\varphi(n)}|^{\beta-1}
u_{\varphi(n)}-|v_{n,\varepsilon}|^{\beta-1} v_{n,\varepsilon}\Big)\\
&&\hskip 3cm = -\nabla (p_{\varphi(n)}-p_{n,\varepsilon})+J_{\varphi(n)}(w_{n,\varepsilon}.\nabla w_{n,\varepsilon})\\
&&\hskip 3cm-J_{\varphi(n)}(w_{n,\varepsilon}.\nabla u_{\varphi(n)})
- J_{\varphi(n)}(u_{\varphi(n)}.\nabla w_{n,\varepsilon}).
\eeqq
Taking the scalar product with $w_{n,\varepsilon}$ in $L^2(\R^3)$ and using the fact that
$\langle w_{n,\varepsilon}.\nabla w_{n,\varepsilon},w_{n,\varepsilon}\rangle=0$ and ${\rm div}\ w_{n,\varepsilon}=0$, we get
\begin{eqnarray}\label{eqn-24}
\frac{1}{2}\frac{d}{dt}\|w_{n,\varepsilon}(t)\|_{L^2}^2+\|\nabla
w_{n,\varepsilon}(t)\|_{L^2}^2&
+&\alpha \langle J_{\varphi(n)}\Big( |u_{\varphi(n)}|^{\beta-1}u_{\varphi(n)}- |v_{n,\varepsilon}|^{\beta-1}v_{n,\varepsilon}\Big);w_{n,\varepsilon}\rangle_{L^2}
\nonumber\\
&=& -\langle J_{\varphi(n)}(w_{n,\varepsilon}.\nabla u_{\varphi(n)});w_{n,\varepsilon}\rangle _{L^2} .
\end{eqnarray}
By inequality \eqref{eqn-lem2-1}, we have
\begin{eqnarray}
&&\langle J_{\varphi(n)}\Big( |u_{\varphi(n)}|^{\beta-1}u_{\varphi(n)}- |v_{n,\varepsilon}|^{\beta-1}v_{n,\varepsilon}\Big);
w_{n,\varepsilon}\rangle _{L^2}\nonumber\\
&&\hskip 5cm=\langle |u_{\varphi(n)}|^{\beta-1} u_{\varphi(n)}-
|v_{n,\varepsilon}|^{\beta-1} v_{n,\varepsilon};J_{\varphi(n)}w_{n,\varepsilon}\rangle _{L^2}\nonumber\\
&&\hskip 5cm= \langle |u_{\varphi(n)}|^{\beta-1}u_{\varphi(n)}- |v_{n,\varepsilon}|^{\beta-1}v_{n,\varepsilon};w_{n,
\varepsilon}\rangle_{L^2}\nonumber\\
&& \hskip 5cm\geq \frac{1}{2}\int_{\R^3}\Big( |u_{\varphi(n)}|^{\beta-1}+ |v_{n,\varepsilon}|^{\beta-1}\Big)|w_{n,\varepsilon}|^2\nonumber\\
&&\hskip 5cm \geq \frac{1}{2}\int_{\R^3}
|u_{\varphi(n)}|^{\beta-1}|w_{n,\varepsilon}|^2,\nonumber
\end{eqnarray}
which implies
{\footnotesize
\begin{equation}\label{eqn41}\alpha\langle J_{\varphi(n)}\Big( |u_{\varphi(n)}|^{\beta-1}u_{\varphi(n)}- |v_{n,\varepsilon}|^{\beta-1}v_{n,\varepsilon}\Big);
w_{n,\varepsilon}\rangle _{L^2}\geq \frac{\alpha}{2}\int_{\R^3}
|u_{\varphi(n)}|^{\beta-1}|w_{n,\varepsilon}|^2.\end{equation}}
Also, we have
\begin{eqnarray}
|\langle J_{\varphi(n)}(w_{n,\varepsilon}.\nabla u_{\varphi(n)});w_{n,\varepsilon}\rangle _{L^2}|
&\leq&\ds \int_{\R^3}|w_{n,\varepsilon}|.|u_{\varphi(n)}|.|\nabla w_{n,\varepsilon}| \nonumber\\
&\leq&\ds \frac{1}{2}\int_{\R^3}|w_{n,\varepsilon}|^2|u_{\varphi(n)}|^2 +\frac{1}{2}\|\nabla w_{n,\varepsilon}\|_{L^2}^2.\nonumber
\end{eqnarray}
By using the convex inequality
$$ab\leq \frac{a^p}{p}+\frac{b^q}{q}\leq a^p+b^q$$
with $p = \ds\frac{\beta-1}{2},\ \ q=\ds\frac{\beta-1}{\beta-3},\ \
a=\ds|u_{\varphi(n)}|^2(\frac \alpha 2)^{\frac{2}{\beta-1}},\ \
b=\ds(\frac 2\alpha)^{\frac{2}{\beta-1}}$
(so that $ab=|u_{\varphi(n)}|^2$, $a^p=\frac{\alpha}{2}|u_{\varphi(n)}|^{\beta-1}$ and $b^q=(\frac 2\alpha)^{\frac{2}{\beta-3}}$),
we get
\begin{eqnarray}\label{eqn42}
|\langle J_{\varphi(n)}(w_{n,\varepsilon}.\nabla u_{\varphi(n)});w_{n,\varepsilon}\rangle _{L^2}|
&\leq&\ds \frac{\alpha}{4}\int_{\R^3}|u_{\varphi(n)}|^{\beta-1}|w_{n,\varepsilon}|^2+C_{\alpha,\beta}\|w_{n,\varepsilon}\|_{L^2}^2+\frac{1}{2}\|\nabla w_{n,\varepsilon}\|_{L^2}^2,
\end{eqnarray}
with $C_{\alpha,\beta}=\frac{1}{2}(\frac 2\alpha)^{\frac{2}{\beta-3}}$. Combining \eqref{eqn-24}, \eqref{eqn41} and \eqref{eqn42}, we get
$$ \frac{d}{dt}\|w_{n,\varepsilon}\|_{L^2}^2+ \|\nabla w_{n,\varepsilon}\|_{L^2}^2\leq 2C_{\alpha,\beta} \|w_{n,\varepsilon}\|_{L^2}^2.$$
By Gronwall Lemma, we deduce the following:
$$\|w_{n,\varepsilon}(t)\|_{L^2} \leq \|w_{n,\varepsilon}(0)\|_{L^2} e^{C_{\alpha,\beta}t},$$
and
$$\|u_{\varphi(n)}(t+\varepsilon)-u_{\varphi(n)}(t)\|_{L^2} \leq
\|u_{\varphi(n)}(\varepsilon)-u_{\varphi(n)}(0)\|_{L^2} e^{C_{\alpha,\beta}t}.$$
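For completeness, the Gronwall step used above can be written out explicitly (this derivation is ours, added only for the reader's convenience; the gradient term is simply dropped from the energy inequality):

```latex
% From (d/dt)\|w_{n,\varepsilon}\|_{L^2}^2 \le 2C_{\alpha,\beta}\|w_{n,\varepsilon}\|_{L^2}^2:
\frac{d}{dt}\Big(e^{-2C_{\alpha,\beta}t}\,\|w_{n,\varepsilon}(t)\|_{L^2}^2\Big)
   =e^{-2C_{\alpha,\beta}t}\Big(\frac{d}{dt}\|w_{n,\varepsilon}(t)\|_{L^2}^2
   -2C_{\alpha,\beta}\,\|w_{n,\varepsilon}(t)\|_{L^2}^2\Big)\le 0,
```

so $t\mapsto e^{-2C_{\alpha,\beta}t}\|w_{n,\varepsilon}(t)\|_{L^2}^2$ is nonincreasing, which is exactly the bound $\|w_{n,\varepsilon}(t)\|_{L^2}\leq\|w_{n,\varepsilon}(0)\|_{L^2}e^{C_{\alpha,\beta}t}$ displayed above.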
For $t_0>0$ and $\varepsilon\in(0,t_0)$, we have
$$\|u_{\varphi(n)}(t_0+\varepsilon)-u_{\varphi(n)}(t_0)\|_{L^{2}}\leq\|u_{\varphi(n)}(\varepsilon)-u_{\varphi(n)}(0)\|_{L^{2}} e^{C_{\alpha,\beta}t_0}.$$
$$\|u_{\varphi(n)}(t_0-\varepsilon)-u_{\varphi(n)}(t_0)\|_{L^{2}}\leq\|u_{\varphi(n)}(\varepsilon)-u_{\varphi(n)}(0)\|_{L^{2}} e^{C_{\alpha,\beta}t_0}.$$
So
\beqq
\| u_{\varphi(n)}(\varepsilon)-u_{\varphi(n)}(0) \|_{L^2}^2 &=&
\| J_{\varphi(n)} u_{\varphi(n)}(\varepsilon)-J_{\varphi(n)}u_{\varphi(n)}(0) \|_{L^2}^2 \\
&=&\|J_{\varphi(n)}\left(u_{\varphi(n)}(\varepsilon) -u^0\right)\|_{L^2}^2\\
&\le&\| u_{\varphi(n)}(\varepsilon)-u^0 \|_{L^2}^2\\
&\le& \| u_{\varphi(n)}(\varepsilon)\|_{L^2}^2+\|u^0 \|_{L^2}^2-2Re\langle u_{\varphi(n)}(\varepsilon),u^0\rangle\\
&\le& 2\|u^0 \|_{L^2}^2-2Re\langle u_{\varphi(n)}(\varepsilon),u^0\rangle.
\eeqq
But $\ds \lim_{n\to+\infty}\langle u_{\varphi(n)}(\varepsilon),u^0\rangle=\langle u (\varepsilon),u^0\rangle$, hence
$$\liminf_{n\to \infty}\|u_{\varphi(n)}(\varepsilon)-u_{\varphi(n)}(0)\|^{2}_{L^{2}}
\leq 2\|u^0\|^{2}_{L^{2}}-2Re\langle
u(\varepsilon);u^0\rangle_{L^2}.$$ Moreover, for all $q,N\in\N$
\beqq
\|J_N\Big(\theta_q.(u_{\varphi(n)}(t_0\pm\varepsilon)-u_{\varphi(n)}(t_0))\Big)\|^{2}_{L^2}
&\leq & \|\theta_q.(u_{\varphi(n)}(t_0\pm\varepsilon)-u_{\varphi(n)}(t_0))\|^{2}_{L^2}\\
&\leq&
\|u_{\varphi(n)}(t_0\pm\varepsilon)-u_{\varphi(n)}(t_0)\|^{2}_{L^2}.
\eeqq
Using (\ref{eq-cv}) we get, for $q$ big enough,
$$\|J_N\Big(\theta_q.(u(t_0\pm\varepsilon)-u(t_0))\Big)\|_{L^2}
\leq \liminf_{n\to \infty}\|u_{\varphi(n)}(t_0\pm\varepsilon)-u_{\varphi(n)}(t_0)\|_{L^2}.$$
Then
$$\|J_N\Big(\theta_q.(u(t_0\pm\varepsilon)-u(t_0))\Big)\|^{2}_{L^2}
\leq 2\Big(\|u^0\|^{2}_{L^{2}}-Re\langle u(\varepsilon);u^0\rangle_{L^2}\Big)e^{2C_{\alpha,\beta}t_0}.$$
By applying the Monotone Convergence Theorem in the order $N $ and $q $, we get
$$\|u(t_0\pm\varepsilon)-u(t_0))\|^{2}_{L^2}
\leq 2\Big(\|u^0\|^{2}_{L^{2}}-Re\langle u(\varepsilon);u^0\rangle_{L^2}\Big)e^{ 2C_{\alpha,\beta}t_0}.$$
Using the continuity at $0$ and letting $\varepsilon\to 0$, we get the continuity at $t_0$.
\subsection{Uniqueness}\ \\
Let $u,v$ be two solutions of $(NSD)$ in the space
$$C_b(\R^+,L^2(\R^3))\cap L^2(\R^+,\dot H^1(\R^3))\cap L^{\beta+1}(\R^+,L^{\beta+1}(\R^3)).$$
The function $w=u-v$ satisfies the following:
$$\partial_tw-\Delta w+\alpha \Big( |u|^{\beta-1}u- |v|^{\beta-1} v\Big)= -\nabla (p-\tilde p)+w.\nabla w-w.\nabla u-
u.\nabla w.$$
Taking the scalar product in $L^2$ with $w$, we get
$$\frac{1}{2}\frac{d}{dt}\|w\|_{L^2}^2+\|\nabla w\|_{L^2}^2+\alpha
\langle \Big( |u|^{\beta-1} u- |v|^{\beta-1}v\Big);w\rangle _{L^2}=-\langle w.\nabla u;w\rangle _{L^2}.$$
By adapting the method used for the proof of the continuity of the solution in $L^2(\R^3)$, with
$u, v, w$ in place of $u_{\varphi(n)}, v_{n,\varepsilon}, w_{n,\varepsilon}$ respectively,
we find
$$
\alpha\langle \Big( |u|^{\beta-1}u- |v|^{\beta-1} v\Big);w\rangle _{L^2}\geq
\frac{\alpha}{2}\int_{\R^3}|u|^{\beta-1}|w|^2.
$$
and
$$
|\langle w.\nabla u;w\rangle _{L^2}|\leq \frac{\alpha}{4}\int_{\R^3}|u|^{\beta-1}|w|^2+C_{\alpha,\beta}\|w\|_{L^2}^2+\frac{1}{2}\|\nabla w\|_{L^2}^2.$$
Combining the above inequalities, we find the following energy estimate:
$$ \frac{d}{dt}\|w(t)\|_{L^2}^2+ \|\nabla w(t)\|_{L^2}^2 \leq 2C_{\alpha,\beta}\|w(t)\|_{L^2}^2.$$
By Gronwall Lemma, we obtain
$$\|w(t)\|_{L^2}^2+\int_0^t\|\nabla w(s)\|_{L^2}^2ds \leq \|w(0)\|_{L^2}^2e^{2C_{\alpha,\beta}t}.$$
As $w(0)=0$, we get $w=0$ and $u=v$, which implies the uniqueness.
\subsection{Asymptotic Study of the Global Solution}\pn
To prove the asymptotic behavior \eqref{eqth2-2}, we need some preliminaries lemmas:
\begin{lem}\label{lem1}\pn
If $u$ is a global solution of {$(NSD)$} with $\beta\!\geq\!\frac{10}3$, then $u \in L^{\beta}(\R^+\times\R^3)$.
\end{lem}
\noindent{\bf Proof.}\pn
Let $E_1=\{(t,x):\ |u(t,x)|\leq 1\}$ and $E_2=\{(t,x):\ |u(t,x)|> 1\},$
$\ds L_1=\ds\int_{E_1}|u(s,x)|^{\beta}dxds $ and $\ds L_2=\ds\int_{E_2}|u(s,x)|^{\beta}dxds.$
We have
\beqq
L_1&=&\ds\int_{E_1}|u(s,x)|^{\beta}dxds =\ds\int_{E_1}|u(s,x)|^{\beta-\frac{10}3}|u(s,x)|^{\frac{10}3}dxds \\
& \leq& \ds \int_{0}^\infty\|u(s)\|_{L^{\frac{10}3}}^{\frac{10}3}ds .
\eeqq
By using the Sobolev injection
$\dot H^{\frac{3}5}(\R^3)\hookrightarrow L^{\frac{10}3}(\R^3)$, we get
\begin{equation}\label{eqasym1}
L_1\leq C \int_{0}^\infty\|u(s)\|_{\dot H^{\frac{3}5}}^{\frac{10}3}ds.
\end{equation}
By interpolation inequality
$\ds \|u(s)\|_{\dot H^{\frac{3}5}}\leq \|u(s)\|_{\dot H^0}^{\frac{2}5}\|u(s)\|_{\dot H^1}^{\frac{3}5}$,
we obtain
\begin{equation}\label{eqasym2}
L_1\leq C \int_{0}^\infty\|u(s)\|_{L^2}^{\frac{4}3}\|\nabla u(s)\|_{L^2}^2ds\leq C
\|u^0\|_{L^2}^{\frac{4}3}\int_{0}^\infty\|\nabla u(s)\|_{L^2}^2ds.\end{equation}
For the term $L_2$, since $|u|>1$ on $E_2$, we have
$$L_2=\ds\int_{E_2}|u(s,x)|^\beta dxds\le \ds\int_{0}^\infty\int_{\R^3}|u(s,x)|^{\beta+1}dxds.
$$
Hence
$$\|u\|_{L^{\beta}(\R^+\times\R^3)}^{\beta}=L_1+L_2\leq C
\|u^0\|_{L^2}^{\frac{4}3}\int_{0}^\infty\|\nabla
u(s)\|_{L^2}^2ds +\int_{0}^\infty\int_{\R^3}|u(s,x)|^{\beta+1}dxds.$$
Therefore $u\in L^{\beta}(\R^+\times\R^3)$.
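The interpolation step used for $L_1$ is, on the Fourier side, just H\"older's inequality with exponents $\frac52$ and $\frac53$ applied to $|\xi|^{6/5}|\widehat u(\xi)|^2=(|\widehat u|^2)^{2/5}(|\xi|^2|\widehat u|^2)^{3/5}$. The following Python snippet (ours, a purely discrete sanity check, not part of the proof) verifies this inequality on a mock spectrum:

```python
import random

random.seed(2)
# mock frequencies |xi_k| and Fourier amplitudes |u_hat(xi_k)|
xi = [random.uniform(0.01, 5.0) for _ in range(50)]
a = [random.uniform(0.0, 1.0) for _ in range(50)]

h35 = sum(x ** 1.2 * c * c for x, c in zip(xi, a))  # discrete ||u||_{H^{3/5}}^2
l2 = sum(c * c for c in a)                          # discrete ||u||_{L^2}^2
h1 = sum(x * x * c * c for x, c in zip(xi, a))      # discrete ||u||_{H^1}^2

# Hoelder with exponents 5/2 and 5/3 yields the interpolation inequality
assert h35 <= l2 ** 0.4 * h1 ** 0.6 + 1e-12
```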
\begin{lem}\pn
If $u$ is a global solution of {$(NSD)$}, with $\beta\!\geq\! \frac{10}3$ , then $\ds \lim_{t\!\to\!\infty}\|u(t)\|_{H^{\!-\!2}}\!=\!0$.
\end{lem}
\noindent{\bf Proof.}\pn
For $\varepsilon>0$, using the energy inequality
(\ref{eqth2-1}) and Lemma \ref{lem1}, there exists $t_0\geq0$ such that
\begin{equation}\label{asym.eq1}
\int_{t_0}^\infty\|\nabla u(s)\|_{L^2}^2ds<\frac{\varepsilon^2}{9(C\|u^0\|_{L^2}^2+1)},
\end{equation}
and
\begin{equation}\label{asym.eq2}
\|u\|_{L^\beta([t_0,\infty)\times\R^3)}^{2\beta}<\frac{\varepsilon^2}{9(C+1)},
\end{equation}
where $C$ denotes the largest of the constants appearing in the estimates below.
Now, consider the following system
\begin{equation}\label{$4.6$}
\left\{ \begin{matrix}
\partial_t v
-\nu\Delta v+ v.\nabla v +\alpha|v|^{\beta-1}v =\;\;-\nabla q \hfill&\hbox{ in } \mathbb R^+\times \mathbb R^3\\
{\rm div}\, v= 0 \hfill&\hbox{ in } \mathbb R^+\times \mathbb R^3\\
v(0,x) =u(t_0,x) \;\;\hfill&\hbox{ in }\mathbb R^3\hfill&.
\end{matrix}\right. \tag{$NSD'$}
\end{equation}
By the existence and uniqueness part, the system ($NSD'$) has a
unique global solution $v\in C_b(\R^+,L^2(\R^3))\cap L^2(\R^+,\dot H^1(\R^3))\cap L^{\beta+1}(\R^+,L^{\beta+1}(\R^3))$;
by uniqueness, $v(t)=u(t_0+t)$ and $q(t)=p(t_0+t).$
The energy estimate for this system is as follows:
$$\|v(t)\|_{L^2}^2+2\int_0^t\|\nabla
v(s)\|_{L^2}^2ds +2\alpha\int_0^t\|v(s)\|_{L^{\beta+1}}^{\beta+1}ds\leq\|u(t_0)\|_{L^2}^2\leq
\|u^0\|_{L^2}^2.$$
By the Duhamel formula, we obtain
$$\ds v(t,x)=e^{t\Delta}v^0(x)+f(t,x)+g(t,x),\qquad v^0:=v(0)=u(t_0),$$ where
$$f(t,x)=-\int_0^te^{(t-s)\Delta}\mathbb P{\rm div\,}(v\otimes v)(s,x)ds$$
and
$$g(t,x)=-\alpha\int_0^te^{(t-s)\Delta}\mathbb P\big(|v(s,x)|^{\beta-1}v(s,x)\big)ds.$$
By Dominated Convergence Theorem,
$\ds \lim_{t\rightarrow\infty}\|e^{t\Delta}v^0\|_{L^2}=0$ and hence $\ds \lim_{t\rightarrow\infty}\|e^{t\Delta}v^0\|_{H^{-2}}=0.$\\
Moreover,
\beqq
\|f(t)\|_{H^{-2}}^2&\leq&\ds\|f(t)\|_{H^{-\frac{1}2}}^2 \leq\ds\|f(t)\|_{\dot H^{-\frac{1}2}}^2\\
&\leq&\ds\int_{\R^3}|\xi|^{-1}\left(\int_0^te^{-(t-s)|\xi|^2}|\mathcal F{\rm div}(v\otimes v)(s,\xi)|ds\right)^2d\xi\\
&\leq&\ds\int_{\R^3}|\xi|\left(\int_0^te^{-(t-s)|\xi|^2}|\mathcal F(v\otimes v)(s,\xi)|ds\right)^2d\xi.
\eeqq
Since
\beqq
\ds\left(\int_0^te^{-(t-s)|\xi|^2}|\mathcal F(v\otimes v)(s,\xi)|ds\right)^2
&\leq&\ds\left(\int_0^te^{-2(t-s)|\xi|^2}ds\right) \int_0^t|\mathcal F(v\otimes v)(s,\xi)|^2ds \\
&\leq&\ds|\xi|^{-2} \int_0^t|\mathcal F(v\otimes v)(s,\xi)|^2ds,
\eeqq
then
\beqq
\|f(t)\|_{H^{-2}}^2&\leq&\ds\int_{\R^3}|\xi|^{-1}\int_0^t|\mathcal F(v\otimes v)(s,\xi)|^2dsd\xi \\
&\leq&\ds\int_0^t\Big(\int_{\R^3}|\xi|^{-1}|\mathcal F(v\otimes v)(s,\xi)|^2d\xi\Big)ds=\ds\int_0^t\|(v\otimes v)(s)\|_{\dot H^{-\frac{1}2}}^2ds.
\eeqq
\noindent
Using the product law in homogeneous Sobolev spaces, with
$s_1=0$, $s_2=1$, we get
\beqq
\|f(t)\|_{H^{-2}}^2&\leq&\ds C\int_0^t\|v(s)\|_{L^2}^2\|\nabla
v(s)\|_{L^2}^2ds.
\eeqq
Using the fact that $v(s)=u(t_0+s)$ and inequality \eqref{asym.eq1}, we get
\beqq
\|f(t)\|_{H^{-2}}^2&\leq&\ds C\|u^0\|_{L^2}^2\int_0^t\|\nabla
u(t_0+s)\|_{L^2}^2ds\\
&\leq&\ds C\|u^0\|_{L^2}^2\int_{t_0}^\infty\|\nabla u(s)\|_{L^2}^2ds\\
&\leq&\ds C\|u^0\|_{L^2}^2\frac{\varepsilon^2}{9(C\|u^0\|_{L^2}^2+1)},
\eeqq
which implies that
$$\|f(t)\|_{H^{-2}}<\frac \varepsilon3,\;\forall t\geq 0.$$
To estimate $\|g(t)\|_{H^{-2}}$, we use the embedding
$$L^1(\R^3)\hookrightarrow H^{-s}(\R^3),\;\forall s>3/2,$$
with $s=2$, and we get
\beqq
\|g(t)\|_{H^{-2}}^2&\leq&\ds
\int_{\R^3}(1+|\xi|^2)^{-2}\left(\int_0^te^{-(t-s)|\xi|^2}|\mathcal
F(|v |^{\beta-1}v)(s,\xi)|ds\right)^2d\xi\\
&\leq&\ds
C\left(\int_0^t\|(|v |^{\beta-1}v)(s,.)\|_{L^1(\R^3)}ds\right)^2\\
&\leq&\ds
C\left(\int_0^t\||v(s,.)|^{\beta}\|_{L^1(\R^3)}ds\right)^2
\leq C\|v\|_{L^{\beta}(\R^+\times\R^3)}^{2\beta},
\eeqq
where $\ds C=\int_{\R^3}(1+|\xi|^2)^{-2}d\xi.$\\
Since $v(s)=u(t_0+s)$, inequality \eqref{asym.eq2} gives
\beqq
\|g(t)\|_{H^{-2}}^2\leq C\|u\|_{L^\beta([t_0,\infty)\times\R^3)}^{2\beta}
< C\,\frac{\varepsilon^2}{9(C+1)}<\frac{\varepsilon^2}{9},
\eeqq
which implies that
$\ds \|g(t)\|_{H^{-2}}<\frac \varepsilon 3,\;\forall t\geq 0.$
Combining the above inequalities, we obtain
$$\lim_{t\rightarrow\infty}\|u(t)\|_{H^{-2}}=0.$$
\begin{lem}\pn
If $u$ is a global solution of $(NSD)$ and $\beta\geq\frac{10}3$, then $\ds \lim_{t\rightarrow\infty}\|u(t)\|_{L^2}=0.$
\end{lem}
\noindent{\bf Proof.}\pn
Let
$$w_1 = {\bf 1}_{|D|<1}u=\mathcal F^{-1}\big({\bf 1}_{|\xi|<1}\widehat{u}\big)\quad{\rm and}\quad
w_2 = {\bf 1}_{|D|\geq1}u=\mathcal F^{-1}\big({\bf
1}_{|\xi|\geq1}\widehat{u}\big).$$
Using the previous lemma and the fact that $(1+|\xi|^2)^2\leq 4$ on the support of $\widehat{w_1}$, we get
$$\|w_1(t)\|_{L^2}=\|w_1(t)\|_{H^0}\leq 2\|w_1(t)\|_{H^{-2}}\leq 2\|u(t)\|_{H^{-2}},$$
which implies
$$\lim_{t\rightarrow\infty}\|w_1(t)\|_{L^2}=0.$$
For $\varepsilon>0$, there is a $t_1>0$ such that
$$\|w_1(t)\|_{L^2}<\frac \varepsilon2,\;\forall t\geq t_1.$$
We have
$$\int_{t_1}^\infty\|w_2(t)\|_{L^2}^2dt\leq \int_{t_1}^\infty\|\nabla w_2(t)\|_{L^2}^2dt\leq \int_{t_1}^\infty\|\nabla u(t)\|_{L^2}^2dt<\infty.$$
Since the map $t\longmapsto \|w_2(t)\|_{L^2}$ is continuous, there exists $t_2\geq t_1$ such that
$\ds \|w_2(t_2)\|_{L^2}<\frac \varepsilon2.$ Hence
$$\|u(t_2)\|_{L^2}^2=\|w_1(t_2)\|_{L^2}^2+\|w_2(t_2)\|_{L^2}^2<\frac {\varepsilon^2}2.$$
Using the following energy estimate
$$
\|u(t)\|_{L^2}^2+2\int_{t_2}^t\|\nabla
u(s)\|_{L^2}^2ds+2\alpha\int_{t_2}^t\|u(s)\|_{L^{\beta+1}}^{\beta+1}ds\leq\|u(t_2)\|_{L^2}^2,\,\forall t\geq t_2,$$ we get
$$\|u(t)\|_{L^2}<\varepsilon,\;\forall t\geq t_2,$$
and the proof is completed.
\section{Conclusion}
We presented a Las Vegas polynomial time algorithm for sampling from the Bingham distribution $p(x)\propto \exp(x^\top A x)$ on the unit sphere $\mathcal S^{d-1}$. The techniques are based on a novel polynomial approximation of the pdf which we believe is of independent interest, and should find other applications.
There are several natural open problems to pursue---perhaps the most natural one is how to generalize our techniques to the rank-$k$ case. Can these polynomial expansion techniques be used to sample other probability distributions of interest in Bayesian machine learning, e.g., posterior distributions in latent-variable models such as Gaussian mixture models? More generally, for what other non-log-concave distributions of practical interest can we design provably efficient algorithms?
\section{Introduction}
Sampling from a probability distribution $p$ given up to a constant of proportionality is a fundamental problem in Bayesian statistics and machine learning. A common instance of this in statistics and machine learning is posterior inference (sampling the parameters of a model $\theta$, given data $x$), where the unknown constant of proportionality comes from an application of Bayes rule: $p(\theta|x) \propto p(x|\theta) p(\theta)$.
However, for standard approaches to sampling such as the Langevin Monte Carlo algorithm, provable results on efficient (polynomial-time) sampling often require that $p$ be log-concave or close to log-concave.
In this work, we consider the problem of sampling from a specific non-log-concave probability distribution on the sphere $\cal S^{d-1}$ in $d$ dimensions: the \emph{Bingham distribution.} In addition to having applications in statistics, the Bingham distribution is of particular interest as it models the local behavior of any smooth distribution around a stationary point.
We give a polynomial-time algorithm based on approximating the probability density function by a polynomial and explicitly evaluating its integral over the sphere. Our algorithm is of Las Vegas type: It has the advantage of giving \emph{exact} samples, assuming exact computation of an inverse function of a polynomial. Our approach contrasts with the usual Markov Chain Monte Carlo algorithms, which are not known to enjoy rapid mixing on this problem, and only give approximate samples.
The Bingham distribution \citep{bingham1974antipodally} defined by a matrix $A\in \R^{d\times d}$
is the distribution on the sphere $\cal S^{d-1}\subeq \R^d$ whose density function with respect to the uniform (surface) measure is given by
\begin{align*}
p(x):=\fc{dP}{d\mu_{\cal S^{d-1}}}(x) &\propto \exp(x^\top Ax).
\end{align*}
Note that due to the symmetric form, without loss of generality, we can assume $A$ is symmetric.
This distribution finds frequent use in \emph{directional statistics}, which studies distributions over the unit sphere. In particular, the Bingham distribution is widely used in paleomagnetic data analysis \citep{onstott1980application} and has applications to computer vision \citep{antone2000automatic,haines2008belief,glover2013bingham} and even differential privacy \citep{chaudhuri2013near, wang2015differentially}. As shown in Section~\ref{s:rank1}, it also naturally appears in the posterior distribution for a rank-1 matrix inference problem, a special case of matrix factorization.
Our main theorem is given below. In the following, we will identify a probability distribution over $\mathcal S^{d-1}$ with its density function with respect to the uniform measure on $\mathcal S^{d-1}$.
\begin{restatable}{thm}{tmain}
\label{t:main}
\label{t:main-poly}
Let $A$ be a symmetric matrix with maximum and minimum eigenvalue $\la_{\max}$ and $\la_{\min}$, respectively.
Let $p(x)
\propto \exp(x^\top Ax)$ be a probability distribution over $\mathcal{S}^{d-1}$.
Then, given an oracle for solving a univariate polynomial equation, Algorithm~\ref{a:main} produces a sample from $p(x)$ and runs in expected time $\operatorname{poly}(\la_{\max}-\la_{\min}, d)$.
\end{restatable}
We can consider the Bingham distribution as a ``model" non-log-concave distribution, because any smooth probability distribution looks like a Bingham distribution
in a sphere of small radius around a stationary point.\footnote{The more general \emph{Fisher-Bingham distribution} includes a linear term, and so can locally model any smooth probability distribution.} More precisely, suppose $f:\R^d\to \R$ is 3-times differentiable, $p(x)= e^{-f(x)}$ on $\R^d$, and $\nb f(x_0)=0$. Then we have that as $x\to x_0$,
\begin{align*}
p(x) &= \exp\bc{-[f(x_0) + (x-x_0)^\top (\nb^2 f(x_0)) (x-x_0) + O(\ve{x-x_0}^3)]}.
\end{align*}
Note that if we can sample from small spheres around a point, we can also sample from a small ball around the point by first estimating and sampling from the marginal distribution of the radius.
Moreover, the Bingham distribution already illustrates the challenges associated with sampling non-log-concave distributions.
First, it can be arbitrarily non-log-concave, as the minimum eigenvalue of the Hessian can be arbitrarily negative. Second, when $A$ has distinct eigenvalues, the function $f(x) = x^\top Ax$ on $\mathcal S^{d-1}$ has $2(d-1)$ saddle points and 2 minima which are antipodal.
Hence, understanding how to sample from the Bingham distribution may give insight into sampling from more general non-log-concave distributions.
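To make the critical-point structure concrete: since the Riemannian gradient of $f(x)=x^\top Ax$ on the sphere is $2(Ax-(x^\top Ax)x)$, it vanishes exactly at the unit eigenvectors of $A$ (and their antipodes). A minimal Python check (ours, illustrative only; not part of the paper's algorithm):

```python
def sphere_grad(A, x):
    """Riemannian gradient of f(x) = x^T A x on the unit sphere:
    the ambient gradient 2 A x projected onto the tangent space at x,
    which equals 2 (A x - (x^T A x) x) for |x| = 1."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    val = sum(x[i] * Ax[i] for i in range(n))   # x^T A x
    return [2.0 * (Ax[i] - val * x[i]) for i in range(n)]

# for a diagonal A the standard basis vectors are unit eigenvectors,
# and each of them is a stationary point of f on the sphere
A = [[3.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, -2.0]]
for i in range(3):
    e = [1.0 if j == i else 0.0 for j in range(3)]
    assert all(abs(c) < 1e-12 for c in sphere_grad(A, e))
```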
\subsection{Related work}
We first discuss general work on sampling, and then sampling algorithms specific to the Bingham distribution.
Langevin Monte Carlo \citep{rossky1978brownian,roberts1996exponential} is a generic algorithm for sampling from a probability distribution $p(x) \propto e^{-f(x)}$ on $\R^d$ given gradient access to its negative log-pdf $f$. It is based on discretizing Langevin diffusion, a continuous Markov process. In the case where $p$ is log-concave, Langevin diffusion is known to mix rapidly \citep{bakry1985diffusions}, and Langevin Monte Carlo is an efficient algorithm \citep{dalalyan2016theoretical,durmus2016high}. More generally, for Langevin diffusion over a compact manifold (such as $\cal S^{d-1}$), positive Ricci curvature can offset non-log-concavity of $p$, and rapid mixing continues to hold if the sum of the Hessian of $f$ and Ricci curvature at any point is lower bounded by a positive constant \citep{bakry1985diffusions,hsu2002stochastic}. In our setting, this is only the case when the maximum and minimum eigenvalues of $A$ differ by less than $\fc{d-1}2$: $\la_{\max}(A) - \la_{\min}(A) < \fc{d-1}2$. We note there are related algorithms such as Hamiltonian Monte Carlo~\citep{duane1987hybrid} that are more efficient in the log-concave case, but still suffer from torpid mixing in general.
Next, we consider algorithms tailored for the Bingham distribution.
An important observation is that the normalizing constant of the Bingham distribution is given by the hypergeometric function of a matrix argument \cite[p.182]{mardia2009directional},
\begin{align*}
\int_{\cal S^{d-1}}\exp(x^\top A x) \,d\cal S^{d-1}(x) &\propto {}_1F_1\pa{\rc2;\fc d2;D}
\end{align*}
where $D$ is the diagonal matrix of eigenvalues of $A$.
Methods to approximate the hypergeometric function are given in~\cite{koev2006efficient}; however, these have super-polynomial dependence on the degree of the term at which the series is truncated, and hence on the accuracy required.
The previous work \citep{kent2013new} gives a rejection sampling algorithm in which the proposal distribution is an angular central Gaussian envelope, that is, the distribution of a normalized Gaussian random variable. This distribution has density function $p(x) \propto (x^\top \Om x)^{-d/2}$
for $\Om$ chosen appropriately depending on $A$.
The efficiency of rejection sampling is determined by the maximum ratio between the desired distribution and the proposal distribution. Their bound for this ratio depends on the normalizing constant for the Bingham distribution~\cite[(3.5)]{kent2013new}, and they only give a polynomial-in-dimension bound when the temperature approaches zero (that is, for the distribution $\exp(\be x^\top Ax)$ as $\be\to \iy$).
Our algorithm is also based on rejection sampling; however, we use a more elaborate proposal distribution, for which we are able to show that the ratio is bounded at all temperatures.
\subsection{Application to rank-1 matrix inference}
\label{s:rank1}
The algorithm we give has an important application to a particularly natural statistical inference problem: that of
recovering a rank-1 matrix perturbed by Gaussian noise.
More precisely, suppose that an observation $Y$ is produced as follows: we sample $x \sim \mathcal{D}$ for a prior distribution $\mathcal{D}$ and $N \sim \mathcal{N}(0,\gamma^2 I)$, then output $Y = xx^\top + N$. By Bayes Rule, the posterior distribution over $x$ has the form
\begin{align}\label{e:rank1-post}
p(x | Y) &\propto \exp\left(-\frac{1}{2\gamma^2}\|Y - xx^\top \|^2_F\right)p(x).
\end{align}
In the particularly simple case where $\mathcal{D}$ is uniform over the unit sphere, this posterior has the form we study in our paper:
\begin{align*}
p(x | Y) &\propto \exp\left(\frac{1}{2\gamma^2} x^\top Y x\right)
\end{align*}
for $x\in \cal S^{d-1}$. Thus, we are able to do posterior sampling. More generally, for radially symmetric $p(x)$, we can approximately sample from the radial distribution of the marginal, after which the problem reduces to a problem on $\cal S^{d-1}$. Note that our algorithm does not require the model to be well-specified, i.e., it does not require $Y$ to be generated from the hypothesized distribution.
In existing literature, the statistics community has focused more on questions of \emph{recovery} (can we achieve a non-trivial ``correlation'' with the planted vector $x$ under suitable definitions of correlation) and \emph{detection} (can we decide with probability $1-o(1)$ as $d \to \infty$ whether the matrix presented is from the above distribution with a ``planted" vector $x$, or is sampled from a Gaussian) under varying choices for the prior $\mathcal{D}$. In particular, they study the threshold for $\gamma$ at which each of the respective tasks is possible. The two most commonly studied priors $\mathcal{D}$ are uniform over the unit sphere (\emph{spiked Wishart model}), and the coordinates of $x$ being $\pm\frac{1}{\sqrt{d}}$ uniformly at random (\emph{spiked Wigner}). For a recent treatment of these topics, see e.g., \cite{peche2006largest, perry2018optimality}.
However, the statistical tests involve calculating integrals over the posterior distribution~\eqref{e:rank1-post} (for instance, the MMSE $\wh x \wh x^\top = \fc{\int xx^\top \exp(-\rc{2\ga^2}\ve{Y-xx^\top}_F^2) p(x)\,dx}{\int \exp(-\rc{2\ga^2}\ve{Y-xx^\top}_F^2) p(x)\,dx}$), and the question of algorithmic efficiency of this calculation is not considered. Our work makes these statistical tests algorithmic (for spherically symmetric priors), because integrals over the posterior distribution can be approximated through sampling.
On the algorithmic side, the closest relative to our work is the paper by \cite{moitra2020fast}, which considers the low-rank analogue of the problem we are interested in: namely, sampling from the distribution\begin{align*}
p(X) &\propto \exp\left(-\frac{1}{2\gamma^2}\|XX^\top - Y\|_F^2\right)
\end{align*}
supported over matrices $X \in \mathbb{R}^{d \times k}$, s.t. $Y = X_0 X_0^\top + \gamma N$, for some matrix $X_0 \in \mathbb{R}^{d \times k}$ and $N \sim \mathcal{N}(0, I)$. It proves that a slight modification of Langevin Monte Carlo can be used to sample from this distribution efficiently in the \emph{low-temperature} limit, namely when $\gamma = \Omega(d)$.
For comparison, in this paper, we can handle an \emph{arbitrary} temperature, but only the rank-1 case (i.e. $k=1$). Moreover, the algorithm here is substantially different, based on a polynomial approximation of the pdf, rather than MCMC. Extending either approach to the full regime (arbitrary $k$ and arbitrary temperature) is an important and challenging problem.
\section{Lower bound for rejection sampling with normalized Gaussian proposal}
In this section, we show that if we wish to do sampling with a normalized Gaussian, there are simple instances where the worst-case ratio with the desired distribution can be arbitrarily bad, even in dimension $d=2$.
Define an \emph{angular Gaussian distribution} as any distribution over $\mathcal S^{d-1}$ in the form
$p(x) =\frac{(x^T \Omega x)^{-d/2}}{\det(\Omega)^{1/2}}$. This is a normalized Gaussian distribution with covariance matrix $\Sigma$, where $\Omega = \Sigma^{-1}$.
We prove the following lemma, which shows that for the circle $\cal S^1$, the ratio between a Bingham distribution and \emph{any} angular Gaussian distribution can become arbitrarily large.
\begin{lem} Let $q:\mathbb{R}^2 \to \mathbb{R}$ be defined as $q(x) = \frac{e^{x_1^2 \sigma^2}}{Z}$. Furthermore, for any positive-definite $\Omega \in \mathbb{R}^{2 \times 2}$, let $p_{\Omega}(x) =\frac{(x^T \Omega x)^{-1}}{\det(\Omega)^{1/2}}$.
Then, as
$\sigma \to \infty$,
$\min_{\Omega} \max_{x \in \mathcal{S}^1} \max \left\{\frac{p_{\Omega}(x)}{q(x)},{\frac{q(x)}{p_{\Omega}(x)}} \right\} \to \infty $.
\end{lem}
\begin{proof}
Suppose otherwise for the sake of contradiction. First, we have
\begin{align*}
\frac{q((1,0))}{q((0,1))} = e^{\sigma^2}
\end{align*}
so correspondingly, we need to have $\frac{p_{\Omega}((1,0))}{p_{\Omega}((0,1))} = \frac{\Omega^{-1}_{11}}{\Omega^{-1}_{22}} \to \infty$, i.e.\ $\frac{\Omega_{11}}{\Omega_{22}} \to 0$.
We also have
\begin{align*}
\frac{q((1,0))}{q((1/2,1/2))} = e^{3/4 \sigma^2}
\end{align*}
and
\begin{align*}
\frac{p_{\Omega}((1,0))}{p_{\Omega}((1/2,1/2))} = \frac{\Omega^{-1}_{11}}{4 (\Omega_{11}+\Omega_{22} + 2\Omega_{12})^{-1}}.
\end{align*}
Since $\Omega$ is a positive definite matrix, we have $|\Omega_{12}| \leq \sqrt{\Omega_{11} \Omega_{22}}$, so in particular, \begin{align} \Omega_{11}+\Omega_{22} + 2\Omega_{12} &\geq \Omega_{11} + \Omega_{22} - 2\sqrt{\Omega_{11} \Omega_{22}}.
\end{align}
Note that $\frac{\Omega_{11}}{\Omega_{22}} \to 0$ and $\frac{\sqrt{\Omega_{11}\Omega_{22}}}{\Omega_{22}} \to 0$, so that $\Omega_{11} + \Omega_{22} - 2\sqrt{\Omega_{11} \Omega_{22}} \to \Omega_{22}$.
But then, we have that
$\frac{q((0,1))}{q((1/2,1/2))} = e^{-1/4 \sigma^2} \to 0$, while $\frac{p_{\Omega}((0,1))}{p_{\Omega}((1/2,1/2))} \to 1/4$, which is a contradiction.
\end{proof}
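To see the lemma concretely, the following sketch (our illustration, not part of the original analysis) grids the circle and computes the best achievable worst-case ratio between the Bingham density $q(\theta)\propto e^{\sigma^2\cos^2\theta}$ and an angular Gaussian proposal. By the symmetry of $q$ about the coordinate axes we restrict, which we assume is without loss of generality, to diagonal $\Omega = \mathrm{diag}(a,1)$; the ratio grows with $\sigma$, matching the lemma.

```python
import numpy as np

def best_worst_ratio(sigma, n_grid=2000):
    # Bingham density on the circle S^1 (unnormalized): q(theta) ∝ exp(sigma^2 cos^2 theta)
    theta = np.linspace(0.0, np.pi, n_grid)
    c2 = np.cos(theta) ** 2
    s2 = 1.0 - c2
    q = np.exp(sigma**2 * c2)
    q /= q.mean()                      # normalize both densities up to a common constant
    best = np.inf
    # angular Gaussian proposal p_a(theta) ∝ 1 / (a cos^2 theta + sin^2 theta)
    for a in np.logspace(-8.0, 2.0, 300):
        p = 1.0 / (a * c2 + s2)
        p /= p.mean()
        ratio = max((p / q).max(), (q / p).max())
        best = min(best, ratio)        # optimize the proposal over a
    return best

r1 = best_worst_ratio(1.0)
r3 = best_worst_ratio(3.0)
```

Running this shows the worst-case ratio already deteriorates noticeably as $\sigma$ increases from $1$ to $3$.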
\section{Moment calculations}
For completeness, we present the moment calculations that are used in the proof of our main theorem.
\begin{lem}\llabel{l:gf}
Let $A$ be symmetric PSD, and let $\la_1,\ldots, \la_d$ be its eigenvalues. Then the moment generating function of $z^\top A z$, $z\sim N(0,I_d)$, is
\begin{align*}
f(x) &= \pa{\prodo id \rc{1-2\la_i x}}^{\rc2}
\end{align*}
\end{lem}
\begin{proof}
Without loss of generality $A$ is diagonal. Then $z^TAz = \sumo id \la_i z_i^2$. The mgf of $r\sim \chi^2_1$ is $\prc{1-2x}^{\rc2}$. Now use the following two facts:
\begin{enumerate}
\item If the mgf of $X$ is $M_X$, then the mgf of $aX$ is $M_X(at)$: $M_{aX}(t)=M_X(at)$.
\item The mgf of a sum of random variables is the product of the mgfs: $M_{X+Y}(t)=M_X(t)M_Y(t)$.
\end{enumerate}
\end{proof}
\begin{cor}[\cite{kan2008moments}]\label{c:xTAx-recursion}
Let $A$ be symmetric PSD.
Let $S(n) = \frac{1}{n! 2^n} \mathbb{E}_{x \sim N(0,I_d)} (x^T Ax)^n$. Then $S(0)=1$ and for $n\ge 1$,
\begin{equation}
S(n) = \frac{1}{2n}\sum_{i=1}^n \Tr(A^i) S(n-i).
\end{equation}
This can be calculated in polynomial time by dynamic programming.
\end{cor}
\begin{proof}
Note $\Tr(A^k) = \sumo id \la_i^k$.
The moment generating function in Lemma~\ref{l:gf} satisfies the differential equation
\begin{align*}
f'(x) & = \sumo id \fc{\la_i}{1-2\la_i x} f(x).
\end{align*}
Matching the coefficient of $x^{n-1}$ gives the equation.
\end{proof}
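As a sanity check (ours, added for illustration), the recursion is a few lines of dynamic programming. Taking $A = I_d$, where $\Tr(A^i) = d$ for all $i$, the resulting moment $\E\|x\|^{2n}$ must match the closed form $\prod_{i=0}^{n-1}(d+2i)$ of Corollary~\ref{c:x2n} below:

```python
from math import factorial

def quad_form_moment(tr_powers, n):
    """E_{x ~ N(0, I_d)} (x^T A x)^n via the recursion S(n) = (1/2n) sum_{i=1}^n Tr(A^i) S(n-i).
    tr_powers[i-1] must hold Tr(A^i) for i = 1..n."""
    S = [1.0]
    for m in range(1, n + 1):
        S.append(sum(tr_powers[i - 1] * S[m - i] for i in range(1, m + 1)) / (2 * m))
    return factorial(n) * 2**n * S[n]   # undo the 1/(n! 2^n) scaling of S(n)

d, n = 4, 5
moment = quad_form_moment([d] * n, n)   # A = I_d, so Tr(A^i) = d for every i
expected = 1
for i in range(n):
    expected *= d + 2 * i               # closed form prod_{i=0}^{n-1} (d + 2i)
```

Here $d=4$, $n=5$ gives $4\cdot6\cdot8\cdot10\cdot12 = 23040$ from both routes.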
\begin{cor}\label{c:x2n}
For $n\ge 0$,
\begin{align*}
\E_{x\sim N(0,I_d)}[\ve{x}^{2n}]
&= \prodz i{n-1} (d+2i).
\end{align*}
\end{cor}
For $d=1$, this agrees with the formula $\E_{x\sim N(0,1)}[x^{2n}] = (2n-1)!!$.
\begin{proof}
By Lemma~\ref{l:gf}, the moment generating function of $\ve{x}^2$ is $\pa{1-2x}^{-\fc d2}$.
Use the binomial series expansion.
\end{proof}
\section{Algorithm based on polynomial approximation}
We present our rejection sampling algorithm as Algorithm~\ref{a:main}. Our main theorem is the following.
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}[h!]
\caption{Sampling algorithm for Bingham distribution}
\begin{algorithmic}[1]
\Require Symmetric matrix $A$
\Ensure A random sample $x \sim p(x) \propto \exp(x^{\top} Ax)$ on $\mathcal{S}^{d-1}$
\medskip
\State Compute an eigendecomposition $[V, \Lambda]$ of $A$ such that $A = V \Lambda V^{\top}$; let $\lambda_{\min}$ and $\lambda_{\max}$ denote the smallest and largest eigenvalues respectively;
\State Set $D = \Lambda - \lambda_{\min}I_d$;
\State Set $n = \lceil(\lambda_{\max} - \lambda_{\min})^2\rceil$;
\Repeat \Comment{Rejection sampling for $z \sim \widetilde{p}(z) \propto \exp(z^{\top} D z)$ on $\mathcal{S}^{d-1}$}
\For{$i = 1 \to d$} \Comment{\parbox[t]{.58\linewidth}{Sample proposal $z \sim q(z) \propto \bigl( z^{\top} (I+D/n) z \bigr)^n$ on $\mathcal{S}^{d-1}$ one coordinate at a time}}
\If{$i = 1$}
\State Let $D_1 = D$;
\State Determine the marginal distribution $q(z_1)$ whose pdf is given as follows, where $(D_1)_{-1}$ represents the submatrix of $D_1$ obtained from deleting the first row and column (see Theorem~\ref{t:xDxn}, \eqref{eq:integral} for details)
$$
\frac{1}{Z}
\int_{y \in \mathcal{S}^{d-2}} \left((1+ (D_1)_{11}/n) z_1^2 + (1-z_1^2) y^\top (I_{d-1}+(D_1)_{-1}/n) y\right)^n \,d\mathcal{S}^{d-2}(y)
;$$
\State Sample $z_1 \sim q(z_1)$ via inverse transform sampling (Lemma~\ref{l:inverse});
\State Let $y_1=z_1$;
\Else
\State Let $D_i = y_{i-1}^2 (D_{i-1})_{11} I_{d-i+1} + (1-y_{i-1}^2) (D_{i-1})_{-1}\in \R^{(d-i+1)\times (d-i+1)}$;
\Comment{We will sample from the distribution $\propto (y^\top (I+D_i/n) y)^n$.}
\State Determine the conditional marginal distribution $q(y_i \vert z_1, \ldots, z_{i-1})$ where $z_i = y_i\sqrt{1-\sumo j{i-1}z_j^2}$, whose pdf is given by
%
(see Theorem~\ref{t:xDxn}, \eqref{eq:integral} for details)
$$
\frac{1}{Z}
\int_{(y_{i+1},\ldots,y_d) \in \mathcal{S}^{d-i-1}} \left((1+ (D_i)_{11}/n) y_i^2 + (1-y_i^2) y^\top (I_{d-i}+(D_i)_{-1}/n) y\right)^n \,d\mathcal{S}^{d-i-1}(y)
;
$$
\State Sample $y_i \sim q(y_i \vert z_1, \ldots, z_{i-1})$ via inverse transform sampling (Lemma~\ref{l:inverse});
\State Let $z_i = y_i\sqrt{1-\sumo j{i-1}z_j^2} $;
\EndIf
\EndFor
\State Accept $z$ with probability $e^{-1} \frac{\exp(z^\top D z)}{(z^\top (I+D/n)z)^n}$; \Comment{\parbox[t]{.45\linewidth}{Rejection sampling (see proof of Theorem~\ref{t:main-poly} for explanation of the $e^{-1}$ factor)}}
\Until the sample $z$ is accepted;
\State \Return $x = V z$;
\end{algorithmic}
\label{a:main}
\end{algorithm}
\tmain*
Before proceeding to the proof of Theorem~\ref{t:main-poly}, we make a few remarks about the statement. Firstly, we work in the real model of computation. Solving a polynomial equation can be done to machine precision using binary search, so the only errors present when actually running the algorithm are roundoff errors.
The algorithm is based on rejection sampling: we calculate a proposal sample in time $\operatorname{poly}(\la_{\max}-\la_{\min},d)$, accept it with some probability, and otherwise repeat the process. In the parlance of algorithms, this means that it is a Las Vegas algorithm: it produces an exact sample, but has a randomized runtime.
For the analysis, we lower bound the acceptance probability by an absolute constant. The number of proposals until acceptance follows a geometric distribution with success probability equal to the acceptance probability. Hence, the total time is polynomial with high probability.
The analysis of our algorithm proceeds in the following steps:
\begin{enumerate}
\item By diagonalization and change-of-coordinates, we show that it suffices to provide an algorithm for sampling from distributions over the unit sphere $p: \mathcal{S}^{d-1} \to \mathbb{R}^+$ in the form
$$ p(x) \propto \exp\left(x^\top D x\right),$$
where $D \in \mathbb{R}^{d \times d}$ is diagonal and PSD.
\item We show that if we use $q(x) \propto (x^\top (I + D/n)x)^n$ as a proposal distribution, when $n \geq D_{\max}^2$ the ratio $\max\{\frac{p(x)}{q(x)}, \frac{q(x)}{p(x)}\}$ is bounded by an absolute constant.
\item We then show that the CDFs of the marginal distributions of $q(x)$ can be computed explicitly in polynomial time (in $n, d$); therefore, using inverse transform sampling, one can sample from $q$ in polynomial time.
\end{enumerate}
\paragraph{Change-of-coordinates}
We first argue that it suffices to provide an algorithm for sampling from distributions over the unit sphere $p: \mathcal{S}^{d-1} \to \mathbb{R}^+$ in the form
$$ p(x) \propto \exp\left(x^\top D x\right)$$
where $D \in \mathbb{R}^{d \times d}$.
To see this, note that if $A=V D V^\top$ with $D$ diagonal and $V$ orthogonal, then given a sample $x$ from the distribution $\propto \exp(x^\top Dx)$, $V x$ is a sample from the distribution $\propto \exp(x^\top V DV^\top x)$.
Moreover, we can assume that $D$ is a PSD diagonal matrix, with smallest eigenvalue $D_{\min} = 0$ and largest eigenvalue $D_{\max}$. This is because replacing $D$ by $D-cI_d$ simply multiplies $\exp(x^\top Dx)$ by a constant on $\cal S^{d-1}$, and we can take $c=D_{\min}$.
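As a quick numerical sanity check of this reduction (an illustration we add, not part of the original text), one can verify that rotating into the eigenbasis and shifting the spectrum leaves the quadratic form, and hence the density up to a constant factor, unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
B = rng.normal(size=(d, d))
A = (B + B.T) / 2.0                 # an arbitrary symmetric matrix
lam, V = np.linalg.eigh(A)          # A = V diag(lam) V^T, eigenvalues ascending
D = lam - lam[0]                    # shift so the smallest eigenvalue is 0

x = rng.normal(size=d)
x /= np.linalg.norm(x)              # a point on the sphere S^{d-1}
z = V.T @ x                         # rotated coordinates; still a unit vector

norm_z = float(np.linalg.norm(z))
quad_A = float(x @ A @ x)           # x^T A x
quad_lam = float((z**2) @ lam)      # z^T diag(lam) z
quad_D = float((z**2) @ D + lam[0]) # shifted form differs only by the constant lam_min
```

The last equality uses $\sum_i z_i^2 = 1$, which is exactly why the shift by $cI_d$ changes the density only by a constant on the sphere.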
\paragraph{Proposal distribution} Next we give a proposal distribution for rejection sampling based on polynomial approximation of $p$:
\begin{lem}\label{l:approx}
Let $D\in \R^{d\times d}$ be diagonal with smallest eigenvalue $D_{\min}\ge 0$ and largest eigenvalue $D_{\max}$.
Let the distribution $q: \mathcal{S}^{d-1} \to \mathbb{R}^+$ be defined as $q(x) \propto (x^\top (I + D/n)x)^n$ for $n\ge 1$.
Then,
$$ \max\left\{\fc{p(x)}{q(x)}, \fc{q(x)}{p(x)}\right\} \leq
\exp\pf{D_{\max}^2}{2n}.
$$
Moreover, if $D_{\min}=0$, letting $v$ be a unit eigenvector with eigenvalue $0$, $1 \le \fc{q(v)}{p(v)}\le \exp(\fc{D_{\max}^2}{2n})$.
\end{lem}
Note that only an upper bound on $\frac{p(x)}{q(x)}$ is necessary for rejection sampling; however, the lower bound
comes for free with our approach.
\begin{proof}
First, we show that
\begin{equation}
-\fc{D_{\max}^2}{2n} \le n\log( x^\top (I+D/n)x) - x^\top D x \le 0.
\label{eq: unnormalizedbd} \end{equation}
By Taylor's theorem with remainder, we have for $x\in \mathcal{S}^{d-1}$ that
\begin{align*}
\log(x^\top (I+D/n)x)
&= \log (1+x^\top Dx/n)\\
&=\fc{x^\top Dx}n - \rc 2 \rc{(1+\xi)^2} \pf{x^\top Dx}n^2 & \text{for some }\xi\in [0,x^\top Dx/n].
\end{align*}
Because $\ve{x}=1$, we have $x^\top Dx/n \le D_{\max}/n$, so
\begin{align*}
\log(x^\top (I+D/n)x) &\in \ba{\fc{x^\top Dx}{n} - \fc{D_{\max}^2}{2n^2}, \fc{x^\top Dx}{n}}
\end{align*}
Multiplying by $n$, \eqref{eq: unnormalizedbd} follows.
Now \eqref{eq: unnormalizedbd} implies by exponentiation that
\begin{align*}
\exp\pa{-\fc{D_{\max}^2}{2n}}\le \frac{(x^\top (I + D/n)x)^n}{\exp(x^\top D x)}
\le 1
\end{align*}
and hence
\begin{multline*}
\exp\pa{-\fc{D_{\max}^2}{2n}}\le \left.
\fc{(x^\top (I + D/n)x)^n}{\int_{\mathcal{S}^{d-1}}(x^\top (I + D/n)x)^n\,d\mathcal{S}^{d-1}(x)}
\right/
\fc{\exp(x^\top D x)}{\int_{\mathcal{S}^{d-1}} \exp(x^\top D x)\,d\mathcal{S}^{d-1}(x)} \\
\leq \exp\pa{\fc{D^2_{\max}}{2n}} \end{multline*}
from which the lemma immediately follows.
For the last statement, note that for $x=v$, the numerators $(x^\top (I + D/n)x)^n$ and $\exp(x^\top D x)$ in the above expression both equal 1.
\end{proof}
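The pointwise bound \eqref{eq: unnormalizedbd} is easy to check numerically; the following sketch (our addition) draws random unit vectors and verifies $-D_{\max}^2/2n \le n\log(x^\top(I+D/n)x) - x^\top Dx \le 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
D = np.sort(rng.uniform(0.0, 5.0, d))
D -= D[0]                                   # shift so D_min = 0, as in the reduction
Dmax = D[-1]
n = int(np.ceil(Dmax**2))

gaps = []
for _ in range(2000):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                  # uniform direction on the sphere
    quad = float(D @ (x**2))                # x^T D x for diagonal D, ||x|| = 1
    # n log(x^T (I + D/n) x) - x^T D x, using x^T (I + D/n) x = 1 + quad/n
    gaps.append(n * np.log1p(quad / n) - quad)

gap_min, gap_max = min(gaps), max(gaps)
```

Every sampled gap lands in $[-D_{\max}^2/2n,\,0]$, matching the Taylor estimate in the proof.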
\paragraph{Sampling from proposal $q$} Finally, we show that it is possible to sample from $q(x)$ efficiently, in time polynomial in $n, d$. First we show that high-order moments of quadratic forms can be computed efficiently.
\begin{lem}[Calculating integrals of quadratic forms] The integral
$$ \int_{\mathcal{S}^{d-1}} (x^\top D x)^n\,d\mathcal S^{d-1}(x) $$
can be calculated in time $\mbox{poly}(n,d)$.
\label{eq:integralquadratic} \label{l:integral}
\end{lem}
\begin{proof}
The result follows essentially from known formulas about moments of quadratic functions under a Gaussian distribution.
First, we show the task reduces to calculating
$$\mathbb{E}_{x \sim N(0,I_d)} [(x^\top Dx)^n].$$
A Gaussian can be sampled by sampling the norm of $x$ and the direction of $x$ independently.
Hence,
\begin{align}
\label{e:norm-dir}
\mathbb{E}_{x \sim N(0,I_d)}[ (x^\top Dx)^n] &= \mathbb{E}_{x \sim N(0,I_d)}
[\|x\|^{2n} ]
\cdot
\mathbb{E}_{x \sim N(0,I_d)}
\ba{\left(\pf{x}{\|x\|}^\top D\pf{x}{\|x\|}\right)^n }.
\end{align}
The second factor is (up to a constant) the integral of interest as $\frac{x}{\|x\|}$ is uniformly distributed over the sphere:
\begin{align*}
\E_{x\sim \mathcal{S}^{d-1}} [(x^\top D x)^n]
&=
\fc{\int_{\mathcal{S}^{d-1}} (x^\top Dx)^n\,d\mathcal{S}^{d-1}(x)}{\Vol(\mathcal{S}^{d-1})}
=\fc{\int_{\mathcal{S}^{d-1}} (x^\top Dx)^n\,d\mathcal{S}^{d-1}(x)}{2\pi^{d/2}/\Ga(d/2)}.
\end{align*}
The first factor in~\eqref{e:norm-dir} has a simple closed-form expression given by Corollary~\ref{c:x2n}.
Thus it remains to calculate the LHS of~\eqref{e:norm-dir}, the expectation under the Gaussian. We use the recurrence from \cite{kan2008moments}, reprinted here as Corollary~\ref{c:xTAx-recursion}: denoting $S(n) = \frac{1}{n! 2^n} \mathbb{E}_{x \sim N(0,I_d)} [(x^\top Dx)^n]$, we have $S(0)=1$ and for $n\ge 1$,
\begin{equation}
S(n) = \frac{1}{2n}\sum_{i=1}^n \mbox{Tr}(D^i) S(n-i)
\end{equation}
which can be calculated in time $\mbox{poly}(n,d)$ by dynamic programming.
\end{proof}
Using this integral, we can compute the unnormalized cdf for the marginals of distribution $q$. This can then be combined with the technique of \vocab{inverse transform sampling}.
\begin{lem}[Inverse transform sampling]\label{l:inverse}
Suppose that a probability distribution on $[a,b]$ has pdf $p(x)\propto f(x)$, and that we can calculate the (unnormalized) cdf $F(x)=\int_a^x f(t)\,dt$. Then given an oracle for computing the inverse of $G(x) = F(x)/F(b)$, one can sample from the distribution.
\end{lem}
\begin{proof}
The algorithm simply generates a uniformly random number $r\in [0,1]$ and computes $G^{-1}(r)$. Since $G(x)$ is the cdf of the probability distribution, $G^{-1}(r)$ is distributed exactly according to $p(x)$.
\end{proof}
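As a concrete instance of the lemma (our toy example; the density $f(x) = (1+x^2)^2$ on $[-1,1]$ is chosen only because its cdf is a polynomial, mirroring the situation in Theorem~\ref{t:xDxn}), inverse transform sampling with the inverse computed by bisection looks as follows:

```python
def inverse_transform_sample(F, a, b, r, tol=1e-13):
    """Solve G(x) = (F(x) - F(a)) / (F(b) - F(a)) = r by bisection on [a, b],
    where F is an (unnormalized) cdf, as in the inverse transform sampling lemma."""
    total = F(b) - F(a)
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (F(mid) - F(a)) / total < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy example: density f(x) = (1 + x^2)^2 on [-1, 1], with polynomial cdf
F = lambda x: x + 2.0 * x**3 / 3.0 + x**5 / 5.0

median = inverse_transform_sample(F, -1.0, 1.0, 0.5)
q25 = inverse_transform_sample(F, -1.0, 1.0, 0.25)
q75 = inverse_transform_sample(F, -1.0, 1.0, 0.75)
```

Since the density is even, the median is $0$ and the quartiles are symmetric about it; each call costs $O(\log(1/\mathrm{tol}))$ evaluations of the polynomial, matching the $\operatorname{poly}\log(1/\ep)$ claim below.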
Note that when the cdf $F(x)$ is a polynomial, it is possible to compute $G^{-1}$ with accuracy $\ep$ in $\operatorname{poly}\log(1/\ep)$ time by binary search.
Combining Lemma~\ref{l:integral} and \ref{l:inverse} we are ready to show that one can sample from $q(x)$ efficiently.
\begin{thm} \label{t:xDxn}
Let $D$ be a diagonal PSD matrix and let $q(x) \propto (x^\top (I+ D/n) x)^n$.
Given an oracle for solving a univariate polynomial equation, we can sample from $q(x)$ in time $\operatorname{poly}(n,d)$.
\end{thm}
As suggested above, we can solve the polynomial equation using binary search, obtaining an $\ep$-accurate solution using $\operatorname{poly}\log\prc{\ep}$ evaluations of the polynomial.
\begin{proof}
Note the theorem is trivial for $d=1$, as $q(x)$ is the uniform distribution on $\cal S^0=\{-1,1\}$. Hence we assume $d>1$.
We will sample the coordinates one at a time (see Algorithm~\ref{a:main}).
For notational convenience, let us denote by $x_{-i}$ the set of coordinates of a vector $x$ excluding the $i$-th.
Namely, we will show that: \begin{enumerate}
\item We can efficiently sample
from the marginal distribution of $x_1$, denoted by\footnote{This is a slight abuse of notation, and it denotes the marginal probability of the first coordinate. We do this to reduce clutter in the notation by subscripting the appropriate coordinate.} $q(x_1)$,
via inverse transform sampling. To do this, we exhibit a $\mbox{poly}(n,d)$ algorithm for calculating the CDF of $q(x_1)$.
\item For any $x_1$, the conditional distribution $q(x_{-1}| x_1)$ also has the form $q(x_{-1} | x_1) \propto (x_{-1}^\top (I+\wt{D}/n) x_{-1})^{n}$, for some diagonal PSD matrix $\wt{D} \in \mathbb{R}^{(d-1) \times (d-1)}$.
\end{enumerate}
Applying this recursively gives our theorem.
Towards proving part 1, the marginal can be written
using the co-area formula as
\begin{align*} q(x_1) &= \frac{(1-x_1^2)^{-(d-1)/2} \int_{\mathcal{S}^{d-2}} q(x_1, x_{-1}) \,d\mathcal{S}^{d-2}(x_{-1})}{Z} \\
&=\frac{(1-x_1^2)^{-(d-1)/2} \int_{\mathcal{S}^{d-2}} \left((1+ D_{11}/n) x^2_1 + \sum_{i=2}^d (1+D_{ii}/n) x^2_i\right)^n d\mathcal{S}^{d-2}(x_{-1})}{Z},
\end{align*}
where
$Z = \int_{\mathcal{S}^{d-1}} (x^\top (I + D/n)x)^n \,d\mathcal{S}^{d-1}(x)$.
Introducing the change of variables $x_{-1} = y \sqrt{1-x_1^2}$ where $y=(y_2,\ldots ,y_d)\in \mathcal S^{d-2}$, we can rewrite the numerator as
\begin{align*}
\int_{\mathcal{S}^{d-2}} \left((1+ D_{11}/n) x^2_1 + (1-x_1^2)\sum_{i=2}^d (1+D_{ii}/n) y^2_i\right)^n d\mathcal{S}^{d-2}(y).
\end{align*}
Hence, the CDF for $q(x_1)$ has the form
\begin{equation}
\frac{1}{Z} \int_{x=-1}^{x_1}
\int_{y \in \mathcal{S}^{d-2}} \left((1+ D_{11}/n) x^2 + (1-x^2)\sum_{i=2}^d (1+D_{ii}/n) y^2_i\right)^n \,d\mathcal{S}^{d-2}(y) \,dx
\label{eq:integral}
\end{equation}
If we can evaluate this integral in time $\mbox{poly}(n,d)$, we can sample from $q(x_1)$ by using inverse transform sampling.
Expanding the term inside the inner integral, \eqref{eq:integral} can be rewritten as
\begin{align*}
\frac{1}{Z} \sum_{k=0}^n \binom{n}{k} \int_{x=-1}^{x_1} \left((1+ D_{11}/n) x^2\right)^{n-k} (1-x^2)^{k} \int_{y \in \mathcal{S}^{d-2}} (y^\top (I_{d-1}+D_{-1}/n) y)^k \,d\mathcal{S}^{d-2}(y) \,dx
\end{align*}
where $D_{-1}$ is obtained from $D$ by deleting the first row and column. By Lemma \ref{eq:integralquadratic}, we can calculate each of the integrals
$\int_{y \in \mathcal{S}^{d-2}} (y^\top (I_{d-1}+D_{-1}/n) y)^k $
in time $\mbox{poly}(n,d)$.
Also by Lemma \ref{eq:integralquadratic}, $Z$ can be calculated in $\mbox{poly}(n,d)$.
Hence, it remains to show we can approximate in time $\mbox{poly}(n,d)$ an integral of the type
\begin{equation}
\int_{x=-1}^{x_1} x^{2(n-k)} (1-x^2)^{k} \,dx. \label{eq:integral2}
\end{equation}
We can do this in polynomial time by expanding this as a polynomial and explicitly computing the integral.
Towards showing part 2, we compute the conditional distribution by using Bayes' theorem and making the change of variables $x_{-1} = y\sqrt{1-x_1^2}$, $y=(y_2,\ldots, y_d)\in \mathcal S^{d-2}$,
\begin{align}
\nonumber
q(x_{-1} |x_1) &\propto q(x_1,x_{-1}) \\
\nonumber
&= \left((1+ D_{11}/n) x^2_1 + \sum_{i=2}^d (1+D_{ii}/n) x^2_i\right)^n \\
\nonumber
&= \left((1+ D_{11}/n) x_1^2 + \sum_{i=2}^d (1-x_1^2) (1+D_{ii}/n) y^2_i\right)^n \\
&= \left(y^\top \left(\left(x_1^2 (1+D_{11}/n) + (1-x_1^2)\right) I_{d-1} + (1-x_1^2) D_{-1}/n \right) y\right)^n.
\nonumber
\end{align}
The last expression has the form
$\left(y^\top (I_{d-1}+\wt{D}/n) y\right)^n$, for
\begin{align} \label{e:cond-D}
\wt D = x_1^2 D_{11} I_{d-1} + (1-x_1^2) D_{-1},
\end{align}
which is diagonal. Thus, we can apply the same sampling procedure recursively to $\wt D$.
\end{proof}
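The elementary integral \eqref{eq:integral2} used in part 1 above can be computed exactly by binomial expansion; the following sketch (our addition) cross-checks the expansion against a crude midpoint Riemann sum:

```python
from math import comb

def poly_integral(n, k, x1):
    """∫_{-1}^{x1} x^{2(n-k)} (1 - x^2)^k dx, via the binomial theorem."""
    total = 0.0
    for j in range(k + 1):
        p = 2 * (n - k) + 2 * j + 1          # exponent after expanding (1 - x^2)^k
        total += comb(k, j) * (-1) ** j * (x1**p - (-1.0) ** p) / p
    return total

n, k, x1 = 3, 2, 0.5
value = poly_integral(n, k, x1)

# cross-check against a midpoint Riemann sum of f(x) = x^{2(n-k)} (1 - x^2)^k
m = 200000
h = (x1 + 1.0) / m
riemann = sum(
    ((-1.0 + (i + 0.5) * h) ** (2 * (n - k)))
    * (1.0 - (-1.0 + (i + 0.5) * h) ** 2) ** k
    for i in range(m)
) * h
```

The expansion has only $k+1$ terms, so each such integral costs $O(k)$ arithmetic operations, consistent with the $\operatorname{poly}(n,d)$ bound.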
\begin{proof}[Proof of Theorem~\ref{t:main-poly}]
As noted, we have reduced to the case of diagonal $D$ with minimum eigenvalue $D_{\min}=0$.
Let $n=\lceil D_{\max}^2\rceil$. From Theorem~\ref{t:xDxn} we can sample from the distribution $q(x)\propto (x^\top (I+D/n)x)^n$ in time $\operatorname{poly}(D_{\max},d)$. By Lemma~\ref{l:approx}, we have $$\exp(-1/2)\le \fc{p(x)}{q(x)}\le \exp(1/2).$$ We would like to do rejection sampling: accept the sample with probability $Cp(x)/q(x)$, where $C$ is a constant $C\le e^{-1/2}$ to ensure this is always $\le 1$; otherwise generate another sample.
Averaged over $x$ drawn from $q(x)$, the probability of acceptance is then $C$.
However, we don't have access to the normalized distribution $q(x)$. Instead, we have the unnormalized distributions $q^*(x)=(x^\top (I+D/n)x)^n$ and $p^*(x)=\exp(x^\top Dx)$.
We use the ratio at a particular point $v$ to normalize them. Let $v$ be a unit eigenvector with eigenvalue $0$. We accept a proposal with probability
\begin{align*}
e^{-1} \fc{p^*(x)}{q^*(x)}
&=e^{-1}\fc{q^*(v)}{p^*(v)}\cdot \fc{p^*(x)}{q^*(x)}
=
e^{-1}\fc{q(v)}{p(v)}\cdot \fc{p(x)}{q(x)}
\end{align*}
Using the inequality for $v$ in Lemma~\ref{l:approx}, this fits the above framework with $C=e^{-1}\fc{q(v)}{p(v)}\in [e^{-1},e^{-1/2}]$.
This ensures the probability of acceptance is at least $e^{-1}$.
\end{proof}
\section{Introduction}
A group $G$ is polycyclic if there exists a finite series of subnormal subgroups
$G = G_1 \unrhd G_2 \unrhd \ldots \unrhd G_m \unrhd G_{m+1} = \{1\}$ so that
each section $G_i / G_{i+1}$ is cyclic. Polycyclic groups play an important role
in group theory as, for instance, each finite group with odd order is
polycyclic. Moreover, polycyclic groups form a special class of finitely
presented groups for which various algorithmic problems are solvable.
For instance, it is well-known that the word problem in a polycyclic
group is solvable. More precisely, a polycyclic group $G$ can be described by
a polycyclic presentation. This is a finite presentation with generators
$\{a_1,\ldots,a_m\}$ and relations of the form
$$
\begin{array}{rcll}
a_i^{r_i} &=& a_{i+1}^{\alpha_{i,i+1}} \cdots a_m^{\alpha_{i,m}},& i\in {\mathcal I}\\
a_i^{-1} a_j a_i &=& a_{i+1}^{\beta_{i,j,i+1}} \cdots a_m^{\beta_{i,j,m}},&1\leq i<j\leq m\\
a_i^{-1} a_j^{-1} a_i &=& a_{i+1}^{\gamma_{i,j,i+1}} \cdots a_m^{\gamma_{i,j,m}},&1\leq i<j\leq m,\: j\not\in {\mathcal I}\\
a_i a_j a_i^{-1} &=& a_{i+1}^{\delta_{i,j,i+1}} \cdots a_m^{\delta_{i,j,m}},&1\leq i<j\leq m,\: i\not\in {\mathcal I}\\
a_i a_j^{-1} a_i^{-1} &=& a_{i+1}^{\varepsilon_{i,j,i+1}} \cdots a_m^{\varepsilon_{i,j,m}},&1\leq i<j\leq m,\: i,j\not\in {\mathcal I}
\end{array} $$
for a subset ${\mathcal I} \subseteq \{1,\ldots,m\}$ and integers $\alpha_{i,\ell},
\beta_{i,j,\ell}, \gamma_{i,j,\ell}, \delta_{i,j,\ell},\varepsilon_{i,j,\ell}\in {\mathbb
Z}$ that satisfy $0 \leq \alpha_{i,\ell}, \beta_{i,j,\ell},
\gamma_{i,j,\ell}, \delta_{i,j,\ell},\varepsilon_{i,j,\ell} < r_i$ whenever
$\ell\in {\mathcal I}$ holds. For further details on polycyclic presentations we
refer to Section 9.4 of [14]. \\ \\
Given any finite presentation of a polycyclic group, the polycyclic
quotient algorithm [11,12] allows one to compute a
polycyclic presentation defining the same group. If, additionally,
the polycyclic group is nilpotent, then any finite presentation can be
transformed into a polycyclic presentation with the nilpotent quotient
algorithm [13]. We further note that even certain infinite
presentations (so-called finite $L$-presentations; see [2])
of a nilpotent and polycyclic group can be transformed into a polycyclic
presentation [3]. We may therefore always assume that a polycyclic
group is given by a polycyclic presentation. \\ \\
In the group $G$, every element is represented by a word $a_1^{e_1}
a_2^{e_2} \cdots a_m^{e_m}$ with $0\leq e_i < r_i$ whenever $i \in
{\mathcal I}$ holds. If this representation is unique, then the polycyclic
presentation is consistent and it yields a normal form for elements in
the group. This is a basis for symbolic computations within polycyclic
groups. Various strategies for computing normal forms in a polycyclic
group have been studied so far [10,16,6,1]. The current
state-of-the-art algorithm is \emph{collection from the left}. But it is
known that even `collection from the left' is exponential in the number
of generators [10]; see also [1]. \\ \\
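To make the collection idea concrete (a toy example of our own, not taken from the references), consider the infinite dihedral group $\langle a,b \mid a^2 = 1,\ a^{-1}ba = b^{-1}\rangle$: every element has the unique normal form $a^{e_1}b^{e_2}$ with $e_1 \in \{0,1\}$ and $e_2 \in \mathbb{Z}$, and the product of two normal forms is re-collected by a single rewrite rule:

```python
def multiply(g, h):
    """Collected product in the infinite dihedral group
    <a, b | a^2 = 1, a^-1 b a = b^-1>, with elements stored as (e1, e2) ~ a^e1 b^e2.
    Moving a^f1 left past b^e2 uses the conjugacy relation b^e2 a^f1 = a^f1 b^(±e2)."""
    (e1, e2), (f1, f2) = g, h
    return ((e1 + f1) % 2, (-1) ** f1 * e2 + f2)

a, b, e = (1, 0), (0, 1), (0, 0)   # generators and the identity in normal form
```

Collection in a general polycyclic presentation iterates rewrites of this kind through the generating sequence, which is why the process can take exponentially many steps in the number of generators.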
In this paper, we concentrate on refined solvable presentations as a special
class of polycyclic presentations that we describe in
Section 2. We
choose a finite series of normal subgroups so that
the sections are abelian. A refined solvable presentation will be a certain
polycyclic presentation that refines this series. Each weighted
nilpotent presentation, as used extensively in the nilpotent quotient
algorithms [13,3] and in [15], is of this type. A
solvable presentation can be described effectively by presentation
maps which we define in Section 2. Presentation maps
can be considered as the basic data structure to define a polycyclic
group in computer-algebra-systems like {\scshape Gap} or {\scshape
Magma}. We obtain consistency criteria for refined solvable presentations in
Section 3. This consistency check has been implemented
in the {\scshape Nql}-package [8]. Our implementation shows
that the consistency checks for solvable presentations are often
faster than the general methods for polycyclic groups. As an example,
we consider nilpotent quotients of the Basilica group [7]
and the BSV group [4]. \\ \\
Fast algorithms for polycyclic groups are of special interest as, for
instance, the algorithm in [9] attempts to find periodicities
in the Dwyer quotients of the Schur multiplier of a group. In order
to observe these periodicities, the algorithm needs to compute with
polycyclic presentations with some hundreds of generators and therefore
fast algorithms for polycyclic groups are needed.
\section{Refined solvable presentations}
Let $G$ be a polycyclic group with a strictly ascending chain of
normal subgroups
$$\{1\}=G_{0}<G_{1}<\cdots <G_{r}=G$$
where $G_{i}/G_{i-1}$ is abelian for $i=1,\ldots ,r$. Since each subgroup of a
polycyclic group is finitely generated, we can choose a finite generating
set $X$ for $G$ which partitions as $X=X_{1}\cup X_{2}\cup \cdots \cup X_{r}$
such that
$$G_{i}/G_{i-1}=\bigoplus_{x\in X_{i}}\langle
xG_{i-1}\rangle$$
for $i=1,\ldots ,r$ and where all the direct summands are non-trivial. We can
furthermore make
our choice so that for each $x\in X_{i}$, either the order,
$o(xG_{i-1})$, of $xG_{i-1}$ is infinite or a power of a prime. Let
${\mathcal P}$ denote the set of all primes. For each $p\in {\mathcal P}$,
let
$$X_{i}(p)=\{x\in X_{i}:\,o(xG_{i-1})\mbox{\ is a power of
}p\}$$
and let
$$X_{i}(\infty)=\{x\in X_{i}:\,o(xG_{i-1})=\infty\}.$$
Notice that the Sylow $p$-subgroup of $G_{i}/G_{i-1}$ is
$$(G_{i}/G_{i-1})_{p}=\bigoplus_{x\in X_{i}(p)}\langle
xG_{i-1}\rangle.$$
We order the generators in $X$ such that the generators in $X_{i}$
precede those in $X_{j}$ whenever $i<j$. Suppose that
$X=\{x_{1},\ldots ,x_{m}\}$ with $x_{1}<x_{2}<\ldots <x_{m}$.
For each $x\in X_{i}$ let $n(x)=o(xG_{i-1})$. If $n(x)=\infty$,
let $\mbox{\blb Z}_{x}=\mbox{\blb Z}$ and otherwise let $\mbox{\blb
Z}_{x}=\{0,\ldots ,n(x)-1\}$. Each element $g\in G$ has a unique
normal form expression
$$g=x_{m}^{r_{m}}x_{m-1}^{r_{m-1}}\cdots x_{1}^{r_{1}}$$
where $r_{i}\in \mbox{\blb Z}_{x_{i}}$. \\ \\
We next describe some relations that hold in the generators $x_{1},\ldots ,
x_{m}$. If $x\in X_{s}(p)$ then we get a {\it power relation} of the form
\begin{equation}
x^{n(x)}=x_{m}^{\alpha_{x}(m)}\cdots x_{1}^{\alpha_{x}(1)}
\end{equation}
with $\alpha_{x}(i)\in \mbox{\blb Z}_{x_{i}}$ and where
$\alpha_{x}(i)=0$ if $x_{i}\not \in X_{1}\cup\cdots \cup X_{s-1}$. \\ \\
For each pair of generators $x,y\in X$ with $x<y$ we also get a
{\it conjugacy relation}
\begin{equation}
x^{y}=x_{m}^{\beta_{(x,y)}(m)}\cdots
x_{1}^{\beta_{(x,y)}(1)}
\end{equation}
where $\beta_{(x,y)}(i)\in \mbox{\blb Z}_{x_{i}}$. \\ \\
{\bf Remark}. There are three types of relations of the form (2). \\ \\
\underline{Type 1}. If $x,y\in
X_{s}$ then $x$ and $y$ commute modulo $G_{s-1}$ and thus we get
that $\beta_{(x,y)}(i)=0$ if $x_{i}\not\in X_{1}\cup \cdots \cup
X_{s-1}\cup \{x\}$ and that $\beta_{(x,y)}(i)=1$ if $x_{i}=x$. \\ \\
Now
suppose that $s<t$. \\ \\
\underline{Type 2}. If $x\in X_{s}(p)$ and $y\in X_{t}$ then
$x^{y}G_{s-1}\in (G_{s}/G_{s-1})_{p}$ and thus we get a relation of
the form (2) where $\beta_{(x,y)}(i)=0$ if $x_{i}\not \in X_{1}\cup
\cdots \cup X_{s-1}\cup X_{s}(p)$. \\ \\
\underline{Type 3}. Finally if $x\in X_{s}(\infty)$
and $y\in X_{t}$ then $x^{y}\in G_{s}$ and we get a relation of the
type (2) where $\beta_{(x,y)}(i)=0$ if $x_{i}\not\in
X_{1}\cup \cdots \cup X_{s}$. \\ \\
{\bf Remark}. By an easy induction on $m$, one can see that (1) and (2) also
give us, for every pair of generators $x,y\in X$ such that $x<y$,
a relation $x^{y^{-1}}=\mu(x,y)$, where $\mu(x,y)$ is a normal form
expression. Thus, using only the relations (1) and the three types of relations
(2), we have full information about $G$: we can calculate inverses and
products of elements given in normal form and turn the result into a normal form
expression using, for example, collection from the left. \\ \\
Indeed, the claim holds trivially for $m=1$. Now suppose that $m\geq 2$
and that the claim holds for all smaller values of $m$. Consider the subgroup
$H=\langle x_{1},\ldots ,x_{m-1}\rangle$. By the inductive hypothesis, every
element in $H$ can be turned into a normal form expression using only
relations (1) and (2). Now (2) gives us normal form expressions for
$x_{1}^{x_{m}},\ldots ,x_{m-1}^{x_{m}}$ and this determines an automorphism
$\phi\in \mbox{Aut\,}(H)$ induced by the conjugation of $x_{m}$. This
then gives us $\phi^{-1}$ that gives us in turn normal form expressions
for $x_{1}^{x_{m}^{-1}},\ldots ,x_{m-1}^{x_{m}^{-1}}$. This finishes the proof
of the inductive step. \\ \\
The point is that the relations $x^{y^{-1}}=\mu(x,y)$ are not defining
relations but consequences of (1) and (2), so to define a polycyclic group $G$
we only need (1) and (2). For practical reasons, however, we first need
to determine the relations $x^{y^{-1}}=\mu(x,y)$ in order to be able to perform
calculations in $G$. At the end of section 3, we describe an efficient method
for doing this for the polycyclic presentations that we are about to introduce
next, refined solvable presentations. \\ \\ \\
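As a toy illustration of the remark above (added here for illustration and not part of the original text; it is far simpler than the implementation discussed in Section 4), the following sketch multiplies and inverts normal forms in the infinite dihedral group, given by the refined solvable presentation with $X_{1}(\infty)=\{x_{1}\}$, $X_{2}(2)=\{x_{2}\}$, power relation $x_{2}^{2}=1$ and conjugacy relation $x_{1}^{x_{2}}=x_{1}^{-1}$. Here all collection from the left reduces to the single rewriting step $x_{1}x_{2}=x_{2}x_{1}^{-1}$.

```python
# Normal forms in the infinite dihedral group, presented as a refined
# solvable presentation with X_1(infty) = {x1}, X_2(2) = {x2},
# power relation x2^2 = 1 and conjugacy relation x1^x2 = x1^{-1}.
# An element x2^r2 * x1^r1 is stored as the pair (r2, r1) with
# r2 in {0, 1} and r1 in Z.

def multiply(g, h):
    """Collect (x2^a2 x1^a1)(x2^b2 x1^b1) into normal form.

    The only rewriting step needed is pushing x1 past x2, using
    x1 * x2 = x2 * x1^{-1} (from x1^x2 = x1^{-1}), so that
    x1^a1 * x2^b2 = x2^b2 * x1^{(-1)^b2 * a1}.
    """
    (a2, a1), (b2, b1) = g, h
    return ((a2 + b2) % 2, (-1) ** b2 * a1 + b1)

def inverse(g):
    """Inverse in normal form: (x2^a2 x1^a1)^{-1} = x2^a2 x1^{-(-1)^a2 a1},
    using that x2 has order 2, so x2^{-1} = x2."""
    (a2, a1) = g
    return (a2, -((-1) ** a2) * a1)

identity = (0, 0)

# Sanity checks: the defining relations and a few inverses.
x1, x2 = (0, 1), (1, 0)
assert multiply(x2, x2) == identity                       # x2^2 = pi(x2) = 1
conj = multiply(inverse(x2), multiply(x1, x2))
assert conj == inverse(x1)                                # x1^x2 = x1^{-1}
for g in [(0, 3), (1, -2), (1, 5)]:
    assert multiply(g, inverse(g)) == identity
```

The same bookkeeping, done uniformly for all generators and relations, is what a general collection routine performs.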
Suppose now conversely that we have a finite alphabet
$X=\{x_{1},x_{2},\ldots ,x_{m}\}$ with an ordering $x_{1}< x_{2}<
\ldots < x_{m}$. Let $F$ be the free group on $X$. Partition $X$ into some
disjoint non-empty subsets
$X_{1},\ldots, X_{r}$ such that the elements of $X_{i}$ precede
those in $X_{j}$ whenever $i<j$. Then partition further each $X_{i}$
as a union of disjoint
subsets (most of them empty, of course) \\ \\
$$X_{i}=(\bigcup_{p\in {\mathcal P}}X_{i}(p))\cup
X_{i}(\infty).$$
Let $Z=\{(x,y)\in X\times X:\,x<y\}$ and let
$Y=X\setminus (X_{1}(\infty)\cup\cdots\cup X_{r}(\infty))$.
We introduce three maps that we will refer to as {\it presentation
maps}. The first one is
$$n:X\rightarrow \mbox{\blb N}\cup \{\infty\}$$
such that $n(x)=\infty$ if $x\in X_{i}(\infty)$ and $n(x)$ is a non-trivial
power of $p$ if $x\in X_{i}(p)$. The second presentation map is
$$\pi:Y\rightarrow F$$
where, if $x\in X_{s}(p)$, $\pi(x)=x_{m}^{\alpha_{x}(m)}\cdots x_{1}^{\alpha_{x}(1)}$ with
$\alpha_{x}(i)\in \mbox{\blb Z}_{x_{i}}$ and $\alpha_{x}(i)=0$
whenever $x_{i}\not\in X_{1}\cup\cdots \cup X_{s-1}$. Notice that these
are the conditions for the right hand side of the power relation (1). The
final presentation map is
$$\delta:Z\rightarrow F$$
where $\delta(x,y)=x_{m}^{\beta_{(x,y)}(m)}\cdots x_{1}^{\beta_{(x,y)}(1)}$ and
the conditions for the right hand side of (2) above hold as indicated in the
remark that follows it. So we have data consisting of an alphabet $X$
with a partition and three presentation maps. To this data we associate a
presentation with generators $x_{1},\ldots ,x_{m}$, power relations
$$x^{n(x)}=\pi(x)$$
for any $x\in X$ such that $n(x)\not =\infty$, and conjugacy relations
$$x^{y}=\delta(x,y)$$
for each pair $(x,y)\in X\times X$ such that $x<y$. We call such a presentation
a {\it refined solvable presentation}. We have seen above that every polycyclic
group has a refined solvable presentation that is consistent. Conversely,
we are interested in criteria for a given refined solvable presentation
to be a consistent presentation for a polycyclic group $G$. In other
words we want the group $G$ to be polycyclic and we want every element $g\in G$
to have a unique normal form expression
$$g=x_{m}^{r_{m}}\cdots x_{1}^{r_{1}}$$
with $r_{i}\in \mbox{\blb Z}_{x_{i}}$. In the next section we describe such
consistency criteria. \\ \\
{\bf Remark}. Notice that there are groups with a refined solvable
presentation that are not polycyclic. Take for example two variables
$x_{1}<x_{2}$ and let $X_{1}=X_{1}(\infty)=\{x_{1}\}$,
$X_{2}=X_{2}(\infty)=\{x_{2}\}$. Here $Y=\emptyset$ and $Z=\{(x_{1},x_{2})\}$.
For the presentation maps $n:X\rightarrow \mbox{\blb N}\cup \{\infty\}$ and
$\pi:Y\rightarrow F$, we must have $n(x_{1})=n(x_{2})=\infty$ and
$\pi$ must be empty. Suppose we choose $\delta:Z\rightarrow F$ such that
$\delta(x_{1},x_{2})=x_{1}^{2}$. Then we get a presentation with two
generators $x_{1},x_{2}$ and one relation
$$x_{1}^{x_{2}}=x_{1}^{2}.$$
The resulting group is not polycyclic. The criteria that we will describe
in section 3 are thus not only consistency criteria but also criteria
for the resulting group to be polycyclic.
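One standard way to verify this last claim — a sketch added for illustration, not part of the original text — uses a faithful matrix representation of this group (the Baumslag-Solitar group $BS(1,2)$): conjugating $x_{1}$ repeatedly by $x_{2}^{-1}$ produces translations by $2^{-k}$, so the normal closure of $\langle x_{1}\rangle$ is isomorphic to the dyadic rationals and is not finitely generated. Since every subgroup of a polycyclic group is finitely generated, as noted at the start of Section 2, the group cannot be polycyclic.

```python
from fractions import Fraction as F

def mat_mul(A, B):
    """2x2 matrix product over the rationals."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj(A, T, T_inv):
    """A^T = T^{-1} A T."""
    return mat_mul(mat_mul(T_inv, A), T)

# x1 -> unipotent translation, x2 -> scaling; then x2^{-1} x1 x2 = x1^2,
# i.e. the defining relation x1^{x2} = x1^{2} holds.
x1     = [[F(1), F(1)], [F(0), F(1)]]
x2     = [[F(1, 2), F(0)], [F(0), F(1)]]
x2_inv = [[F(2), F(0)], [F(0), F(1)]]

assert conj(x1, x2, x2_inv) == mat_mul(x1, x1)   # x1^{x2} = x1^2

# Conjugating the other way halves the translation each time:
# x1^{x2^{-k}} = [[1, 2^{-k}], [0, 1]], so the normal closure of <x1>
# contains translations by every dyadic rational and is not finitely
# generated; hence the group is not polycyclic.
g = x1
for k in range(1, 6):
    g = conj(g, x2_inv, x2)          # conjugate by x2^{-1}
    assert g == [[F(1), F(1, 2 ** k)], [F(0), F(1)]]
```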
\section{The consistency criteria}
Before establishing our consistency criteria, we first describe constructions
that are central to what follows.
Suppose we have a polycyclic group $G=\langle X\rangle$ that has a consistent
refined solvable presentation as described above with a generating set
$X=\{x_{1},\ldots ,x_{m}\}$ that is partitioned as described in section 2 and
with presentation maps $n,\pi$ and $\delta$. Let $\phi\in \mbox{Aut\,}(G)$.
We will consider two situations where we can use this data to get a consistent
refined solvable presentation for a larger polycyclic group $\tilde{G}$.
Add a new variable $x_{m+1}$ and extend our order on $\tilde{X}=X\cup
\{x_{m+1}\}$ such that $x_{m+1}$ is larger than the elements in $X$. Let
$\tilde{F}$ be the free group on $\tilde{X}$. Let $H$ be the semidirect
product of $G$ with an infinite cyclic group
$C_{\infty}=\langle x\rangle$ where the action from $C_{\infty}$ on $G$
is given by $g^{x}=g^{\phi}$. \\ \\
For the first situation let $\tilde{G}=H$. We extend the presentation maps
$n,\pi,\delta$ to $\tilde{n},\tilde{\pi},\tilde{\delta}$ so they involve
$\tilde{X}$. We do this by letting $\tilde{n}(x_{m+1})=\infty$ and
$$\tilde{\delta}(x_{i},x_{m+1})=x_{i}^{\phi}\mbox{\ \ (in a normal form
expression in $x_{1},\ldots ,x_{m}$)}$$
for $i=1,\ldots ,m$. Notice that, since $\tilde{n}(x_{m+1})=\infty$, $\tilde{\pi}=
\pi$. The refined solvable presentation that we get using the extended
presentation maps has all the relations for $G$ together with $m$
extra relations
$$x_{i}^{x_{m+1}}=\delta(x_{i},x_{m+1})=x_{i}^{\phi}$$
for $i=1,\ldots ,m$. A moment's glance should convince the reader that this
is a refined solvable presentation for the polycyclic group $\tilde{G}=H$. \\ \\
{\bf Remark}. We haven't said anything above about the partition of
$\tilde{X}=\{x_{1},\ldots ,x_{m+1}\}$. The partition would be
into $\tilde{X}_{1}=X_{1},\ldots
,\tilde{X}_{r}=X_{r},\tilde{X}_{r+1}=\{x_{m+1}\}$. If furthermore
$x^{-1}x^{\phi}\in G_{r-1}$ for all $x\in X_{r}$ we could instead
choose a partition with $\tilde{X}_{1}=X_{1},\ldots
,\tilde{X}_{r-1}=X_{r-1},\tilde{X}_{r}=X_{r}\cup \{x_{m+1}\}$. \\ \\
The second situation is a variant of the first.
Now suppose furthermore that for some integer $e\geq 2$, that
is a power of a prime $p$, and $g\in G$
we have that
\begin{eqnarray}
a^{g} & = & a^{\phi^{e}}\mbox{\ \ (for all $a\in G$)} \\
g^{\phi} & = & g
\end{eqnarray}
In this case $N=\langle g^{-1}x^{e}\rangle$ is a subgroup of the
centre of $H$. Let $\tilde{G}=H/N$. $G$ embeds naturally into
$\tilde{G}$ and we identify it with its image. We now extend the presentation
maps $n,\pi,\delta$ to $\tilde{n},\tilde{\pi},\tilde{\delta}$ as follows.
First we let $\tilde{n}(x_{m+1})=e$ and $\tilde{\pi}(x_{m+1})$ be the
normal form expression for $g$ in $x_{1},\ldots ,x_{m}$. Finally as before
let $\tilde{\delta}(x_{i},x_{m+1})$ be the normal form expression of
$x_{i}^{\phi}$ in $x_{1},\ldots ,x_{m}$. The refined solvable presentation
with respect to the presentation maps $\tilde{n},\tilde{\pi}$ and
$\tilde{\delta}$ is then a presentation with all the relations for
$G$ and the extra relations
$$x_{m+1}^{\tilde{n}(x_{m+1})}=\tilde{\pi}(x_{m+1})=g$$
together with
$$x_{i}^{x_{m+1}}=\tilde{\delta}(x_{i},x_{m+1})=x_{i}^{\phi}\mbox{\ \ (in a normal form
expression in $x_{1},\ldots ,x_{m}$)}$$
for $1\leq i\leq m$.
Again it is clear that this is a refined solvable presentation for
the polycyclic group $\tilde{G}=
H/N$. The remark above applies again for the partition in this case. \\ \\
\\
We now turn back to our task of finding consistency criteria for
power-conjugate presentations of polycyclic groups. Suppose
$G=\langle x_{1},\ldots ,x_{m}\rangle$ is a polycyclic group with a
refined solvable presentation as described above. So we have some partition
of $X=\{x_{1},\ldots ,x_{m}\}$ and presentation maps $n,\pi,\delta$ giving
us relations
$$x^{n(x)}=\underbrace{x_{m}^{\alpha_{x}(m)}\cdots
x_{1}^{\alpha_{x}(1)}}_{\pi(x)}$$
for $x_{1}\leq x\leq x_{m}$ with $n(x)<\infty$ and
$$x^{y} = \underbrace{x_{m}^{\beta_{(x,y)}(m)}\cdots
x_{1}^{\beta_{(x,y)}(1)}}_{\delta(x,y)}$$
for $x_{1}\leq x<y\leq x_{m}$. For $k=0,1,\ldots ,m$, let $H_{k}$ be
the group satisfying the sub-presentation with generators
$x_{1},\ldots ,x_{k}$ and those of the relations that involve only
$x_{1},\ldots ,x_{k}$. The idea is to establish inductively
criteria for the refined solvable presentation for $H_{k}$ to be a consistent
presentation of a polycyclic group. The induction
basis $k=0$ doesn't need any work. Now suppose that we have already
obtained criteria for the refined solvable presentation for
$H_{k}$, where
$0\leq k\leq m-1$, to be a consistent presentation of
a polycyclic group. Using the presentation map $\delta$ we define a function
$\delta(x_{k+1}):H_{k}\rightarrow H_{k}$ by first defining the
values of the generators as $x_{i}^{\delta(x_{k+1})}=\delta(x_{i},x_{k+1})$
for $i=1,\ldots ,k$. We then extend this to the whole of $H_{k}$ by
letting $\delta(x_{k+1})$ act on normal form expressions as follows
$$(x_{k}^{r_{k}}\cdots x_{1}^{r_{1}})^{\delta(x_{k+1})}
=(x_{k}^{\delta(x_{k+1})})^{r_{k}}\cdots
(x_{1}^{\delta(x_{k+1})})^{r_{1}}.$$
Suppose the resulting map $\delta(x_{k+1})$ is an automorphism. If
$n(x_{k+1})=\infty$, we have that the presentation for $H_{k+1}$ is a
consistent presentation for the semidirect product of $H_{k}$ with the
infinite cyclic group $C_{\infty}=\langle x\rangle$ where
$g^{x}=g^{\delta(x_{k+1})}$. Now suppose that $n(x_{k+1})\not
=\infty$. Using the second construction above and taking into
account conditions (3) and (4), we get a presentation for
$H_{k+1}$ that is a consistent presentation of a polycyclic group, provided that
\begin{eqnarray*}
\pi(x_{k+1})^{\delta(x_{k+1})} & = & \pi(x_{k+1}) \\
x_{i}^{\delta(x_{k+1})^{n(x_{k+1})}} & = & x_{i}^{\pi(x_{k+1})}
\end{eqnarray*}
for $i=1,\ldots ,k$. It remains to find criteria for
$\delta(x_{k+1})$ to be an automorphism. This problem we turn to
next. \\ \\ \\
Let $G=\langle X\rangle$ be a polycyclic group with a
consistent refined solvable presentation as described above. For $s=1,\ldots
,r$ let $G_{s}=\langle X_{1}\cup\cdots\cup X_{s}\rangle$,
$G_{s}(p)=\langle X_{1}\cup\cdots\cup X_{s-1}\cup X_{s}(p)\rangle$
and let $\tau(G_{s})=\langle X_{1}\cup\cdots\cup
X_{s-1}\cup(\bigcup_{p\in{\mathcal P}}X_{s}(p))\rangle$. For each
$x\in X$ choose an element $x^{\phi}$ subject to the following
conditions:
\begin{eqnarray}
x^{\phi}\in G_{i} & \mbox{if} & x\in X_{i} \\
x^{\phi}\in G_{i}(p) & \mbox{if} & x\in X_{i}(p). \nonumber
\end{eqnarray}
We extend this to a map $\phi:G\rightarrow G$ by letting $\phi$ act
on normal form expressions as:
$$(x_{m}^{r_{m}}\cdots x_{1}^{r_{1}})^{\phi}=
(x_{m}^{\phi})^{r_{m}}\cdots (x_{1}^{\phi})^{r_{1}}.$$
Notice that the condition (5) implies that $\phi$ induces maps
$\phi_{s}:G_{s}\rightarrow G_{s}$, $s=1,\ldots ,r$, where
$\phi_{s}=\phi|_{G_{s}}$. It also induces maps
$\phi_{(s,p)}:G_{s}(p)/G_{s-1}\rightarrow G_{s}(p)/G_{s-1}$ and maps
$\phi_{(s,\infty)}:G_{s}/\tau(G_{s})\rightarrow G_{s}/\tau(G_{s})$.
\\
\begin{lemm}
The map $\phi:G\rightarrow G$ is a homomorphism if and only if
\setcounter{equation}{0}
\begin{equation}
\pi(x)^{\phi}= (x^{\phi})^{n(x)}\mbox{\ \ \ $(x_{1}\leq x\leq
x_{m})$}
\end{equation}
and
\begin{equation}
\mbox{} x^{y \phi}=x^{\phi y^{\phi}}
\mbox{\ \ \ \ \ $(x_{1}\leq x<y\leq x_{m})$}.
\end{equation}
$\phi$ is furthermore an automorphism if for $s=1,\ldots ,r$ we have
\begin{eqnarray}
\mbox{det\,}(\phi_{(s,p)}) & \not = & 0\mbox{\ $($mod $p)$} \\
\mbox{det\,}(\phi_{(s,\infty)}) & = & \pm 1. \nonumber
\end{eqnarray}
\end{lemm}
{\bf Proof}.\ \ Consider the homomorphism $\psi:F\rightarrow F$ on
the free group $F=\langle x_{1},\ldots ,x_{m}\rangle$ induced by the
values $x^{\psi}=x^{\phi}$ for $x_{1}\leq x\leq x_{m}$. Let $R$ be
the normal subgroup generated by the defining polycyclic relators for $G$.
This means that $G=F/R$. Then conditions (1) and (2) imply that
$R^{\psi}\leq R$ and thus $\psi$ induces a homomorphism on $G=F/R$.
This homomorphism is clearly the map $\phi$. \\ \\
The homomorphism $\phi$ is bijective if and only if the induced
linear maps $\phi_{(s,p)}$ and $\phi_{(s,\infty)}$ are bijective and
this happens if and only if condition (3) holds. $\Box$ \\ \\ \\
{\bf Remark}. The condition (1) in the lemma above is of course only relevant
when $n(x)<\infty$. To avoid making the statement more complicated we can
decide that $\pi(x)=1$ and $u^{n(x)}=1$ for all $u\in G$ in the case
when $n(x)=\infty$. \\ \\
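Checking the determinant conditions (3) is mechanical once integer matrices for the induced maps are in hand. The sketch below (with made-up matrices, purely for illustration) uses an exact cofactor-expansion determinant, which is adequate for the small blocks arising from the factors.

```python
def det(M):
    """Exact determinant of a square integer matrix by cofactor
    expansion along the first row (fine for small blocks)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_unit_mod_p(M, p):
    """Condition for phi_{(s,p)}: det(M) must be nonzero mod p."""
    return det(M) % p != 0

def is_unimodular(M):
    """Condition for phi_{(s,infty)}: det(M) must be +1 or -1."""
    return det(M) in (1, -1)

# Hypothetical induced maps on a rank-2 p-factor (p = 3) and on a
# rank-2 torsion-free factor:
phi_p   = [[2, 1], [1, 1]]        # det = 1, a unit mod 3 -> bijective
phi_inf = [[2, 1], [1, 1]]        # det = 1 -> bijective on Z^2
assert is_unit_mod_p(phi_p, 3)
assert is_unimodular(phi_inf)
assert not is_unimodular([[2, 0], [0, 1]])   # det = 2: not onto Z^2
```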
We now turn back again to the problem of establishing
criteria for refined solvable presentations to be a consistent presentation
of a polycyclic group. Let $G=\langle
x_{1},\ldots ,x_{m}\rangle$ be a group satisfying a
refined solvable presentation as described above with relations
\begin{eqnarray*}
x^{n(x)} & = & \underbrace{x_{m}^{\alpha_{x}(m)}\cdots
x_{1}^{\alpha_{x}(1)}}_{\pi(x)} \mbox{\ \ \ $(x_{1}\leq x\leq x_{m})$}\\
x^{y} & = & \underbrace{x_{m}^{\beta_{(x,y)}(m)}\cdots
x_{1}^{\beta_{(x,y)}(1)}}_{\delta(x,y)}
\mbox{\ \ \ $(x_{1}\leq x<y\leq x_{m})$}.
\end{eqnarray*}
We let $H_{k}$ be the group satisfying the sub-presentation with
generators $x_{1},\ldots ,x_{k}$ and those of the relations that involve
only $x_{1},\ldots ,x_{k}$. We establish inductively
criteria for the presentation for $H_{k}$ to be a consistent presentation of
a polycyclic group. Suppose this has been
achieved for some $k$. We want to add criteria so that the
presentation for $H_{k+1}$ is a consistent presentation for a polycyclic group.
We let
$\delta(x_{k+1}):H_{k}\rightarrow H_{k}$ be the map induced by the
values $x^{\delta(x_{k+1})}$ as described above. As we
pointed out, the presentation for $H_{k+1}$ is a consistent presentation
of a polycyclic group if and only
if the map $\delta(x_{k+1})$ is an automorphism and that we have the
extra criteria that
\begin{eqnarray*}
\pi(x_{k+1})^{\delta(x_{k+1})} & = & \pi(x_{k+1}) \\
x_{i}^{\delta(x_{k+1})^{n(x_{k+1})}} & = &
x_{i}^{\pi(x_{k+1})}.
\end{eqnarray*}
From Lemma 1 we have criteria for $\delta(x_{k+1})$ to be an
automorphism. Suppose that $x_{k+1}\in X_{s}$. Then
$\delta(x_{k+1})$ acts trivially on $G_{s}/G_{s-1}$ and so to
establish that $\delta(x_{k+1})$ is bijective we only need to show
that $\delta(x_{k+1})_{(t,p)}$ and
$\delta(x_{k+1})_{(t,\infty)}$ are bijective for $1\leq t<s$. \\ \\
For $z\in X$ let $r(z)$ be the integer such that $z\in X_{r(z)}$.
Putting this together for $k=0,\ldots ,m-1$, we obtain the following consistency
criteria.
\begin{theo}
The refined solvable presentation for $G$ is a consistent presentation for
a polycyclic group if and only
if the following criteria hold. Firstly we must have for
all $x_{2}\leq z\leq x_{m}$ that
\setcounter{equation}{0}
\begin{eqnarray}
\pi(z)^{\delta(z)} & = & \pi(z) \\
\pi(x)^{\delta(z)} & = & (x^{\delta(z)})^{n(x)} \mbox{\ \ \
\ \ \ \
$(x_{1}\leq x<z)$} \\
x^{\delta(z)^{n(z)}} & = & x^{\pi(z)} \mbox{\ \ \ \ \ \ \ \ \ \ \ \ \ $(x_{1}\leq
x<z)$} \\
\mbox{} x^{y \delta(z)} & = & x^{\delta(z)y^{\delta(z)}}
\mbox{\ \ \ \ \ $(x_{1}\leq x<y<z)$}.
\end{eqnarray}
We also need for $1\leq s<r(z)$ that
\begin{eqnarray}
\mbox{det\,}(\delta(z)_{(s,p)}) & \not = & 0\mbox{\ $($mod $p)$} \\
\mbox{det\,}(\delta(z)_{(s,\infty)}) & = & \pm 1. \nonumber
\end{eqnarray}
\end{theo}
\mbox{}\\
{\bf Remarks}. (1) Recall that we established the consistency of the
polycyclic group
$H_{k}$ recursively for $k=0,1,\ldots ,m$. So according to the proof
we should check (1)-(5) for $z=x_{2},\ldots ,x_{m}$ in ascending
order. If $z=x_{k+1}$ then the consistency of $H_{k+1}$ follows from
the consistency of $H_{k}$ together with relations (1)-(5) of Theorem
2 where $z=x_{k+1}$. So when doing the check for $z=x_{k+1}$ we
can assume that the presentation for $H_{k}$ is consistent. Using the
definition of $\delta(z)$ we first transform all the expressions in
(1)-(4) into expressions in $H_{k}$. Then we turn each side of the
equations into normal form in $H_{k}$ and compare. It is interesting
to note that (provided the check has been positive so far) $H_{k}$
has a consistent presentation and so the normal form in each case is
independent of how we calculate. We can, however, do the check in any order
we like (while still sticking to the assumption that $H_{k}$ has a consistent
presentation).
The reason for this is that we will at some point reach the smallest
$z$ where the check fails (provided that we haven't got a negative
result in the meantime). Hence if the presentation is not a consistent
presentation of a polycyclic group,
this will be recognised. \\ \\
(2) How does this approach compare to the existing ones?
Our approach is to consider functions $\delta(z)$ defined
on a group $G_{z}$ with a subpresentation (involving only the generators less
than $z$). Modulo consistency of $G_{z}$ the conditions (1)-(5) in Theorem 2
are conditions for the map $\delta(z)$ to be an automorphism ((2), (4) and (5)) and
for the resulting cyclic extension to have a consistent presentation ((1) and
(3)). The emphasis is thus on the function $\delta(z)$ rather than the group
operation (as in [14]). It is our belief that this viewpoint makes things
look a bit clearer. \\ \\
(3) It should be noted however that our conditions (1)-(4) have equivalent
criteria in the standard approach. See the list (*) in [14], page 424. The 'overlaps' (1),(2),(3) and (5) in that list correspond to (4),(2),(3) and (1) in
Theorem 2. The condition (5) is however new and is a by-product of working
with an ascending normal solvable series. In the standard approach one works
with an ascending subnormal series with cyclic factors.
It should also be noted that the idea of obtaining consistency recursively for
$H_{k}$, $k=0,\ldots, m$, through working with $\delta(z)$, is also implicit in [14] but is kept in the background within the proof. Our conditions (1)-(5)
bring this to the surface.
\\ \\ \\
{\bf A method for obtaining inverse conjugation relations}.
For practical checks using these consistency criteria one first needs to
determine normal form expressions for $x^{z^{-1}}$ for $x<z<x_{m}$
(in order to be able to transform any expression in $H_{k}$ into a
normal form expression). Note however that this is of course only
needed when $z$ is of infinite order. Another advantage of our approach is that it becomes
quite simple and effective to determine these after having produced all
the linear maps $\delta(z)_{(s,p)}$ and
$\delta(z)_{(s,\infty)}$, $2\leq s\leq r$.
Suppose that $z\in X_{s}$ for some $2\leq s\leq r$. We now describe how to obtain
normal form expressions for
$x^{z^{-1}}$ recursively for $x<z$. \\ \\
We can suppose that we already know that the sub-presentation for the group $G^{*}$
generated by the generators $\{x\in X:\,x<z\}$ (using only the relations involving
these generators) is consistent.
The presentation for $G^{*}$ is built around an ascending normal $z$-invariant series with
each factor either a finite abelian $p$-group or a finitely
generated torsion-free abelian group. \\ \\
Now suppose that we are looking at one such factor $K/H$ and that
the extra generators needed to generate $K$ are $y_{1},\ldots ,y_{e}$. We can suppose inductively
that
we have obtained normal form expressions for all $x^{z^{-1}}$ when
$x$ is a generator of $H$. We want to extend this
to $y_{i}^{z^{-1}}$ for $i=1,\ldots ,e$. \\ \\
Let $v_{1}=y_{1}H,\ldots ,v_{e}=y_{e}H$ be the generators of $K/H$.
Let $\phi$ be the automorphism on $K/H$ induced by the conjugation
action by $z$ and let $\psi$ be the inverse of $\phi$. Suppose
$\psi$ is represented by the matrix $B=(b_{ij})$. Since
$\phi(\psi(v_{i}))=v_{i}$, we have
$$b_{ei}\phi(v_{e})+\cdots +b_{2i}\phi(v_{2})+b_{1i}\phi(v_{1})=v_{i}.$$
It follows that (using the presentation and calculating in $K$) we
get
$$(y_{e}^{z})^{b_{ei}}\cdots (y_{2}^{z})^{b_{2i}}(y_{1}^{z})^{b_{1i}}=y_{i}u,$$
where $u$ is a normal form expression in the generators of $H$ (and
we already know how $z^{-1}$ acts on $u$). It follows that
$$y_{i}^{z^{-1}}=y_{e}^{b_{ei}}\cdots
y_{2}^{b_{2i}}y_{1}^{b_{1i}}u^{-z^{-1}}.$$
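On a torsion-free factor the matrix $B$ of $\psi=\phi^{-1}$ can be computed exactly over the integers, since $\mbox{det\,}(\phi)=\pm 1$ makes the adjugate formula integral. The following sketch, with a hypothetical $2\times 2$ action not taken from the paper, illustrates this step.

```python
def inverse_unimodular_2x2(A):
    """Integer inverse of a 2x2 matrix with determinant +-1, via the
    adjugate formula B = adj(A) / det(A).  This plays the role of the
    matrix B of psi = phi^{-1} on a rank-2 torsion-free factor K/H."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    assert det in (1, -1), "phi_(s,infty) must be unimodular"
    return [[d // det, -b // det], [-c // det, a // det]]

def mat_mul(A, B):
    """2x2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical action of conjugation by z on K/H = Z^2:
A = [[2, 1], [1, 1]]
B = inverse_unimodular_2x2(A)
assert mat_mul(A, B) == [[1, 0], [0, 1]]
# Column i of B supplies the exponents b_{1i}, ..., b_{ei} used in the
# expression y_i^{z^{-1}} = y_e^{b_{ei}} ... y_1^{b_{1i}} u^{-z^{-1}},
# up to the correction u from the smaller subgroup H.
```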
\section{Implementation and some applications of our consistency checks}
We have implemented our consistency check in the {\scshape Nql} package [8]
of the computer algebra system {\scshape Gap}; see [5]. In
this section, we demonstrate how this method yields a significant speed-up
in checking consistency of large polycyclic presentations (with some hundreds of generators). For this
purpose, we consider nilpotent quotients of the Basilica group $\Delta$ from [7] and
the Brunner-Sidki-Vieira-Group $\mbox{BSV}$ from [4]. Both groups
are two-generated but infinitely presented. The Basilica group admits
the following infinite presentation
\[
\Delta \cong \langle \{a,b\} \mid [a,a^b]^{\sigma^{i}}, i\in\mbox{\blb N}_0\rangle
\]
where $\sigma$ is the endomorphism of the free group over $a$ and $b$
induced by the mapping $a\mapsto b^2$ and $b\mapsto a$; see [7].
The $\mbox{BSV}$ group admits the infinite presentation
\[
\mbox{BSV} \cong \langle \{a,b\} \mid [b,b^a]^{\varepsilon^{i}}, [b,b^{a^3}]^{\varepsilon^{i}}, i\in\mbox{\blb N}_0 \rangle,
\]
where $\varepsilon$ is the endomorphism of the free group over $a$ and
$b$ induced by the mapping $a\mapsto a^2$ and $b\mapsto a^2b^{-1}a^2$.
The nilpotent quotient algorithm in [3] computes a
weighted nilpotent presentation for the lower central series quotient
$G/\gamma_c(G)$ for a group $G$ given by an infinite presentation as
above (a so-called finite $L$-presentation; see [2]). A weighted nilpotent presentation is a polycyclic presentation
which refines the lower central series of the group. We note that the
weighted nilpotent presentations for the quotients $\Delta/\gamma_c\Delta$
and $\mbox{BSV}/\gamma_c\mbox{BSV}$ are refined solvable presentations. \\ \\
In order to verify consistency of a given polycyclic presentation,
the algorithm in [14, p. 424] rewrites the overlaps of the rewriting rules
and compares the results; that is, the algorithm checks the underlying rewriting system for local confluence. As even the state-of-the-art algorithm 'collection from
the left' is exponential [10], the number of overlaps is a central bottleneck
here. There are improvements known which
make use of the structure of a polycyclic presentation in order to
reduce the number of overlaps. For instance, for weighted nilpotent
presentations, a weight function allows one to reduce the number of
overlaps significantly; see [14, p. 431]. \\ \\
Our method replaces some overlaps by the computation of determinants of
integral matrices and it can easily be combined with the method for weighted
nilpotent presentations. This promising approach yields a significant speed-up
as the following table shows. The timings were obtained on an Intel
Pentium 4 processor with a clock-speed of $2.4$~GHz.
\begin{center}
\begin{tabular}{cccccc}
\toprule
Quotient & \verb|#gens| &\verb|Usual| & \verb|Solv| & \verb|Weight| & \verb|Solv+Weight| \\
\midrule
$\mbox{BSV}$, class $25$ & 106 & 0:00:05 & 0:00:04 & 0:00:01 & 0:00:01\\
$\mbox{BSV}$, class $35$ & 179 & 0:01:35 & 0:01:06 & 0:01:06 & 0:00:48\\
$\mbox{BSV}$, class $40$ & 219 & 0:04:26 & 0:03:00 & 0:03:22 & 0:02:25\\
$\mbox{BSV}$, class $45$ & 259 & 0:10:27 & 0:06:54 & 0:08:28 & 0:06:05\\
$\mbox{BSV}$, class $50$ & 301 & 6:31:17 & 3:52:36 & 6:30:13 & 4:43:52\\
\midrule
$\Delta$, class ${35}$ & 185 & 0:00:31 & 0:00:31 & 0:00:02 & 0:00:02\\
$\Delta$, class ${80}$ & 609 & 1:19:22 & 1:15:03 & 0:29:48 & 0:27:36\\
$\Delta$, class $100$ & 821 & 8:25:37 & 7:39:54 & 5:45:40 & 5:18:08\\
\bottomrule
\end{tabular}
\end{center}
The method \verb|Usual| denotes the algorithm in [14, p. 424]
for polycyclic presentations, the method
\verb|Solv| denotes our new method, the method \verb|Weight|
denotes the method for weighted nilpotent presentation as
in [14, p. 431], and the method \verb|Solv+Weight| denotes
the combination of both of the latter methods. The number \verb|#gens|
denotes the number of generators of the considered polycyclic presentation.
In summary, our method here consistently yields a significant
speed-up compared with the standard method for polycyclic groups. \\ \\
{\it Acknowledgement}. We thank Michael Vaughan-Lee for many useful comments
and for having provided us with a simpler proof for Lemma 1.
\section{Introduction}
Three-dimensional (3D) topological band insulators (TI) have attracted considerable attention from the condensed matter and device physics communities for the novel electronic surface states they support and the host of unusual responses to external fields\cite{zahid}. A basic understanding of their electronic structure, using Density Functional Theory (DFT), is a continuing quest in the literature. These studies have contributed to the growth of the field and helped with interpreting experiments. In recent years, the binary 3D TI materials Bi$_2$Se$_3$ and Bi$_2$Te$_3$ have emerged as model systems for numerous experiments, which have focused on exfoliation and molecular beam epitaxy growth of thin films, on probing the electronic structure by optical techniques\cite{zahid2}, and on device fabrication\cite{mit}. The theoretical works, based on density functional theory or tight-binding calculations, focused on elucidating structure-property relations, predicting new TIs, and mapping the band structures\cite{louie}. One concern for the use of binary TI materials in devices is the intrinsic {\it n}-type (or {\it p}-type) vacancies in the bulk crystals of Bi$_2$Se$_3$ (Bi$_2$Te$_3$), which make conduction through bulk states dominate transport experiments\cite{ong}; another is the hexagonal {\it warping} effect that makes the Dirac cone anisotropic, especially in the conduction band region. This anisotropic nature of the Dirac cone leads to interesting spin textures (or spin-momentum locking) of the surface states, with possible applications in spintronics and quantum information processing\cite{bansil}. Recently, two promising families of 3D TI materials, TlBiX$_2$ (X = Se, Te) and Bi$_2$X$_2$Y (X, Y = Se, Te), have been predicted to have near-perfect Dirac cones, in terms of less entanglement of bulk and surface states, and experiments have supported these predictions\cite{ando,lin,souma,xu,chen}. 
Moreover, Bi-based ternary compounds offer high bulk resistivity, due to the structurally perfect nature of the crystals, so surface transport is enhanced.
\begin{figure}[ht!]
\scalebox{0.40}{\includegraphics{Fig1.ps}}
\caption{ (Color online) Schematic of the hexagonal bulk structures of the representative ternary compounds (a) TlBiSe$_{2}$ and (b) Bi$_{2}$Se$_{2}$Te, derived from their corresponding bulk trigonal structures, whose three primitive vectors are denoted by t$_1$, t$_2$ and t$_3$. The trigonal structure is part of the larger hexagonal structure shown by dashed lines. In both compounds, the hexagonal cell contains three times as many atoms as the trigonal structure. In (b), the five atomic layers forming a quintuple layer are shown in the shaded square region. (c) The first Brillouin zone of the bulk trigonal structure, with the four time-reversal invariant points $\Gamma$, Z, F and L marked.}
\label{fig:Fig1}
\end{figure}
In view of these promising advances, a comprehensive theoretical study of the thin-film structures of these ternary 3D TIs is necessary to help better understand these materials and to help design future experiments to verify these effects. These materials can have intrinsic size limits which can protect the metallic nature of the surface bands, the surface states can spread inside the bulk region, and atomic rearrangements in thin layers can have a profound effect on the Dirac cone itself. We address these issues in this paper using a DFT-based electronic structure method and compare our results with available experimental results as well as with results for binary Bi-based TIs. Our studies will have implications for the understanding of topological surface states in ternary TIs and their intended applications.
Our paper is organized as follows. In Section II, we describe the bulk crystal structures of TlBiX$_2$ (X = Se, Te) and Bi$_2$X$_2$Y (X, Y = Se, Te), their conduction band (CB) and valence band (VB) structures, and the computational method used for this study. In Section III we present the thin-film electronic structure of these materials, discuss the role of atomic relaxations or rearrangements, resulting from thin-film formation from the bulk crystal, on the shape and size of the Dirac cone in the bulk, and predict the critical film thicknesses required to maintain the Dirac cone and the degree of surface-state extension into the bulk region, which have yet to be determined experimentally for this class of materials. We compare these results with those available for binary TI compounds. Finally, in Section IV, we present our summary and conclusions.
\section{Computational method and bulk band structures}
This section details the computational method used, the choice of computational parameters, the bulk crystal structures and the resulting electronic band structures. The bulk crystal structures of both the Tl- and Bi-based ternary compounds are similar to those of the binary compounds Bi$_2$X$_3$ (X = Se, Te). Both compounds have a trigonal structure with covalently bonded alternating cation and anion layers stacked along the crystallographic {\it z}-direction. However, there is one important difference compared to the binary compounds: the atomic layers of the Tl-based structures are not arranged in a quintuple-like (QL) order, suggesting covalent bonding between the unit cells, and therefore that exfoliation techniques, used to peel binary TI flakes from the corresponding bulk crystals, cannot be used for extracting Tl-based TI thin films. The layers are arranged in the order Tl-Se(Te)-Bi-Se(Te). The trigonal unit cell contains four atoms (as opposed to five in binary Bi-based TIs) with the lattice parameter {\it a} = 0.7887 nm (0.8263 nm) and an angle between the lattice vectors of $\alpha\sim$31.4$^o$ ($\sim$31.8$^o$) for TlBiSe$_2$ (TlBiTe$_2$)\cite{mahanti}. The hexagonal cell, formed from the trigonal one, consists of twelve atomic layers with the number of atoms tripled (Fig. 1(a)). The lattice parameters of the hexagonal cell are {\it a} = 0.4264 nm (0.4534 nm) and {\it c} = 2.2478 nm (2.3512 nm) for TlBiSe$_2$ (TlBiTe$_2$), and the layer sequence is the same as in the trigonal cell. Compared to the corresponding binary compound, the Bi-based ternary compound Bi$_2$Se$_2$Te (Bi$_2$Te$_2$Se) has one of the Se (Te) layers replaced by a Te (Se) layer, and has the layers arranged in the order Te(Se)-Se(Te)-Bi-Bi-Se(Te). The QL-like layer arrangement in these compounds facilitates exfoliation-like methods which can be used to peel thin films from the bulk crystals. 
The lattice parameters of the bulk trigonal cells are almost the same for Bi$_2$Se$_2$Te and Bi$_2$Te$_2$Se: {\it a} = 1.0046 nm (1.0255 nm) and $\alpha$ $\sim$ 24.2$^o$ ($\sim$ 24.1$^o$)\cite{nakajima}. The hexagonal cell, built from the trigonal structure, contains fifteen atomic layers, with the lattice parameters {\it a} = 0.422 nm (0.428 nm) and {\it c} = 2.92 nm (2.99 nm) for Bi$_2$Se$_2$Te (Bi$_2$Te$_2$Se) (Fig. 1(b)).
We used a DFT-based electronic structure method with a projector-augmented wave basis\cite{vasp1} and a generalized gradient approximation to the exchange-correlation potential\cite{perdew} for computing the electronic properties of the bulk and thin films of both compounds. Spin-orbit coupling (SOC) was invoked in the calculation as implemented in the numerical method\cite{vasp2}. For the bulk structural optimizations of both compounds, a kinetic energy cut-off of 400 eV and a non-orthogonal {\bf k}-point mesh of 9 $\times$ 9 $\times$ 9 along the reciprocal lattice vectors in the first Brillouin zone (BZ) were chosen. Since previous structural studies of the bulk trigonal structures reported agreement with experimental lattice parameters\cite{expt}, we chose to relax only the internal parameters, namely the atomic positions, keeping the lattice constants fixed to the experimental values. The total energy is assumed to be converged when all components of the Hellmann-Feynman forces on each ion are smaller than the threshold 0.001 eV/\AA. Convergence of the computed properties was carefully checked with respect to the energy cut-off, the {\bf k}-point mesh, and the force threshold. For both compounds, the computed interlayer distances are quite close to the experimental values\cite{expt}. These computed distances suggest van der Waals type bonding between adjacent QLs in the ternary Bi-based TIs but not in the Tl-based TIs. With this information, we built the bulk hexagonal cell and the corresponding thin films of both compounds, as detailed in the next section.
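To make the stopping criterion above concrete, the following minimal Python sketch tests whether every Cartesian component of the Hellmann-Feynman force on every ion lies below the 0.001 eV/\AA{} threshold. The function and variable names are our own, for illustration; this is not code from any DFT package.

```python
# Minimal sketch of the ionic-relaxation stopping criterion described above:
# relaxation is considered converged once every Cartesian component of the
# Hellmann-Feynman force on every ion is below the chosen threshold.
FORCE_THRESHOLD = 0.001  # eV/Angstrom, the bulk threshold used in this work

def forces_converged(forces, threshold=FORCE_THRESHOLD):
    """forces: one (Fx, Fy, Fz) tuple per ion, in eV/Angstrom."""
    return all(abs(component) < threshold
               for ion in forces
               for component in ion)

# One ion still feels a 0.002 eV/Angstrom force component -> not converged.
print(forces_converged([(0.0005, -0.0002, 0.0), (0.002, 0.0, 0.0)]))   # False
# All components below 0.001 eV/Angstrom -> converged.
print(forces_converged([(0.0005, -0.0002, 0.0), (0.0008, 0.0, 0.0)]))  # True
```

The same check, with the looser 0.015 eV/\AA{} (thin films) or 0.05 eV/\AA{} (Bi-based films) thresholds quoted later, applies to the slab relaxations.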
\begin{figure}[ht!]
\scalebox{0.26}{\includegraphics[angle=-90]{Fig2a.ps}}\\
\scalebox{0.26}{\includegraphics[angle=-90]{Fig2b.ps}}
\scalebox{0.26}{\includegraphics[angle=-90]{Fig2c.ps}}\\
\caption{ (Color online) Band structure of the bulk trigonal structure of (a) TlBiTe$_{2}$, (b) Bi$_2$Se$_2$Te and (c) Bi$_2$Te$_2$Se with spin-orbit coupling, shown along high-symmetry directions in the bulk BZ. We obtained a negative indirect gap in TlBiTe$_2$ and indirect gaps in the Bi-based ternary TIs. Energy states arising mainly from the $p_z$ orbitals of Bi, Se and Te are marked on top of the bulk bands; the procedure used to obtain these orbital projections from the crystal wave-functions is discussed in the text.}
\label{fig:Fig2}
\end{figure}
\begin{figure}
\scalebox{0.26}{\includegraphics[angle=-90]{Fig3a.ps}}
\scalebox{0.26}{\includegraphics[angle=-90]{Fig3b.ps}}
\caption{ (Color online) Band structure of bulk Bi$_{2}$Te$_{2}$Se along high symmetry directions (a) without spin-orbit coupling, and (b) with spin-orbit coupling. Energy states formed mainly from $p_z$ orbitals of Bi and Te are marked on the top of bulk bands and band inversion processes are clearly shown.}
\label{fig:Fig3}
\end{figure}
The bulk band structures of TlBiX$_2$ (X=Se,Te), without and with SOC, agree well with other theoretical calculations\cite{mahanti, mahanti2}. However, with SOC, other calculations predict TlBiTe$_2$ to be an {\it indirect}-gap semiconductor with the CB minimum (CBM) at the $\Gamma$ point and the VB maximum (VBM) on the line joining the {\bf L} and {\bf Z} points. The high-symmetry points are labeled in Fig. 1(c). A recent experimental study argues TlBiTe$_2$ to be a {\it semimetal} with a negative energy gap of 20 meV\cite{chen}. Our results agree well with this experimental work (Fig. 2(a)). TlBiSe$_2$ has a direct gap at $\Gamma$ of 124 meV (235 meV) without (with) SOC. Figures 2(b) and (c) show the bulk band structures of Bi$_2$X$_2$Y (X,Y = (Te,Se) or (Se,Te)) with SOC, which agree quite well with a recent DFT calculation\cite{johnson}. The CBM is at the $\Gamma$ point and the VBM lies between the $\bf Z$ and $\bf F$ points for both Bi-based TIs. The computed {\it indirect} gaps are 157 meV and 272 meV for Bi$_2$Se$_2$Te and Bi$_2$Te$_2$Se, respectively. We note that without SOC the gap is {\it direct} at the $\Gamma$ point, with values of 633 meV and 76 meV for Bi$_2$Se$_2$Te and Bi$_2$Te$_2$Se, respectively.
In the literature, the emergence of these topological surface states within the bulk band gap of binary 3D TIs has been associated with bulk {\it band inversion} when SOC is switched on\cite{zahid}. We tested this conjecture by plotting the specific orbital contributions from a given atom, as a function of {\bf k} and band index, on the band structure diagrams of Figs. 2 and 3. These contributions are normalized with respect to the contributions from all orbitals in both the bulk Tl- and Bi-based ternary compounds. Since the bulk CBM and VBM at $\Gamma$ consist mainly of the $p_z$ orbitals\cite{mahanti2} of the Bi and Se(Te) atoms, respectively, for the Tl-based compounds, these states were chosen for the orbital projection studies. In the Tl-based TIs, a cut-off contribution percentage was chosen that gives a reasonable picture of the {\it band inversion} in these figures: if the calculated $p_z$ orbital contribution of a specific atom at a certain state is greater than the cut-off percentage, we consider the state to originate mainly from that atom. This cut-off was set at 50$\%$ and 25$\%$ for the Se(Te) and Bi atoms, respectively. A slight change in the cut-off percentage, below or above these choices, results in an insignificant change of the CBM and VBM bands at the $\Gamma$ point, which is the region of interest for studying the {\it band inversion} process. For example, the $p_z$ orbital contributions on the band structure of TlBiTe$_2$ in Fig. 2(a) remain similar near the CBM and VBM around the $\Gamma$ point with cut-off percentages of 45$\%$ and 22.5$\%$ for the Te and Bi atoms, respectively (figures not shown). A similar {\it band inversion} effect is also seen in the Bi-based TIs (Bi$_2$Te$_2$Se and Bi$_2$Se$_2$Te); it is shown only for one of the compounds, Bi$_2$Te$_2$Se (Figs. 3(a) and (b)). The other compounds show similar effects (figures not shown).
We gave an equal weight of 30$\%$ to both the Bi and Te(Se) orbital contributions, since the two outermost layers of Se(Te) and Bi, out of the total of five atomic layers within each QL, mainly participate in forming the VBM and CBM.
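The cut-off procedure described above can be summarized in a short sketch. The function below is our own illustrative rendering, not code from any DFT package, using the 50$\%$/25$\%$ cut-offs quoted for the Tl-based compounds:

```python
# Illustrative sketch of the orbital-projection labelling described above:
# a band state is attributed to an atom if that atom's normalised p_z
# weight exceeds an atom-specific cut-off (50% for Se/Te, 25% for Bi).
CUTOFFS = {"Se": 0.50, "Te": 0.50, "Bi": 0.25}

def label_state(pz_weights, cutoffs=CUTOFFS):
    """pz_weights: dict mapping atom -> normalised p_z contribution
    of one state at one k-point. Returns the atoms that pass the cut-off."""
    return [atom for atom, weight in pz_weights.items()
            if weight > cutoffs.get(atom, 1.0)]

# A VBM-like state dominated by Te p_z orbitals:
print(label_state({"Te": 0.62, "Bi": 0.10}))  # ['Te']
# A CBM-like state dominated by Bi p_z orbitals:
print(label_state({"Te": 0.20, "Bi": 0.30}))  # ['Bi']
```

With SOC, the band inversion appears in this picture as the Bi-labelled and Se(Te)-labelled states exchanging their roles at the CBM and VBM near $\Gamma$.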
\begin{figure}
\scalebox{0.550}{\includegraphics{Fig4.ps}}
\caption{ (Color online) Schematic diagram of thin film structures of (a) TlBiSe$_{2}$ obtained by stacking 39 layers along the {\it z}-direction. The atomic layers close to surfaces are in the order of Se-Bi-Se-Tl, on both sides of the film. (b) Two-dimensional Brillouin zone of the (111) surface of the bulk TlBiSe$_{2}$ with three time-reversal invariant points $\bar{\Gamma}$, \={M}, and \={K}
}
\label{fig:Fig4}
\end{figure}
\section{Thin film band structures and surface states}
In this section we discuss the surface states of the ternary TIs, focusing on the interesting nature of the surface states in the Tl- and Bi-based compounds and comparing our results with binary Bi-based TIs wherever possible. We chose a kinetic energy cut-off of 400 eV and a {\bf k}-point mesh of 9 $\times$ 9 $\times$ 1 on the surface BZ of the hexagonal supercell for computing the surface band structures of both the Tl- and Bi-based ternary TIs.
\subsection{Surface states in Thallium-based ternary compounds}
The thin films of the Tl-based TIs are built from the bulk hexagonal structure, with the atomic layers stacked along the crystallographic {\it z} direction and a net vacuum of 3 nm above the top layer and below the bottom layer within a periodic simulation region. Recent theoretical studies\cite{lin} suggest contributions to the surface band structure from the dangling bonds arising from the surface terminations, and that these dangling-bond states appear along with the topological surface states. Among the four surface terminations, with the atomic layer orderings Tl-Se(Te)-Bi-Se(Te), Se(Te)-Bi-Se(Te)-Tl, Bi-Se(Te)-Tl-Se(Te) and Se(Te)-Tl-Se(Te)-Bi, the one with Se(Te) beneath the Tl layer is argued to have the minimal surface dangling-bond density\cite{lin}. We therefore chose this termination in all further thin-film studies. We note that experiments do not observe any signature of dangling-bond states in the optical spectra\cite{ando}, and, at least for this termination, we do not find any signature of a dangling-bond state either, at least within the Dirac cone region. Thin-film thicknesses of TlBiSe$_2$ ranging from 23 to 39 layers, corresponding to 4 to 7 nm, and of TlBiTe$_2$ ranging from 23 to 31 layers, with a maximum thickness of 5.8 nm, were considered. Figure 4(a) shows a representative supercell structure of 39 layers of TlBiSe$_2$. We consider the relaxation of the atomic positions in the {\it z}-direction, since it is argued that the positions of the Se(Te) atoms are critical in determining the topological nature of the surface states\cite{lin}. The positions are assumed to be optimized when the {\it z}-components of the forces are smaller than the threshold value 0.015 eV/\AA.
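The film thicknesses quoted above follow from the bulk interlayer spacing: an $N$-layer slab spans $N-1$ spacings, and the average spacing is the bulk hexagonal $c$ parameter divided by its twelve atomic layers. A small sketch of this bookkeeping, with helper names of our own:

```python
# Sketch relating layer counts to the film thicknesses quoted above: an
# N-layer slab spans (N - 1) average interlayer spacings, where the spacing
# is the bulk hexagonal c parameter divided by its 12 atomic layers.
C_HEX_NM = {"TlBiSe2": 2.2478, "TlBiTe2": 2.3512}  # bulk c values from Sec. II
LAYERS_PER_CELL = 12

def film_thickness_nm(compound, n_layers):
    spacing = C_HEX_NM[compound] / LAYERS_PER_CELL
    return (n_layers - 1) * spacing

print(round(film_thickness_nm("TlBiSe2", 39), 1))  # 7.1, quoted as ~7 nm
print(round(film_thickness_nm("TlBiTe2", 31), 1))  # 5.9, quoted as ~5.8 nm
```

The same arithmetic reproduces the $\sim$5.6 nm quoted for the 31-layer TlBiSe$_2$ film and the $\sim$5 nm quoted for the 27-layer TlBiTe$_2$ film.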
As for the Bi-based binary compounds, the Tl-based ternary compounds also show a thickness-dependent electronic structure. The Dirac cone is preserved in TlBiSe$_2$ with 39 atomic layers, corresponding to a thickness of 7 nm (Fig. 5(a)), and in TlBiTe$_2$ with 31 layers, corresponding to a thickness of 5.8 nm (Fig. 5(b)), where the red circles denote surface state contributions and the solid black lines represent the bulk bands. We reproduce the interesting experimental features of TlBiSe$_2$\cite{ando} and TlBiTe$_2$\cite{chen}. The bulk VB (BVB) of TlBiSe$_2$ has two maxima between the $\bar{\Gamma}$ and \={M} points in the hexagonal BZ (Fig. 4(b)), and the bulk CB (BCB) has a minimum at \={M}. The Dirac cone is found to exist inside the bulk band gap, at the $\bar{\Gamma}$ point of the BZ. These features are clearly seen in our computed band structure (Fig. 5(a)). Our computed band structure of TlBiTe$_2$ shows that the Dirac point lies below the BVB maximum (BVBM) (Fig. 5(b)). The BCB minimum (BCBM) is located at $\bar{\Gamma}$ and the BVBM along the line joining $\bar{\Gamma}$ and \={M}, producing an indirect gap of 83 meV. Another BVB maximum emerges along the line joining $\bar{\Gamma}$ to \={K}. These features, including the indirect gap size and the positions of the BVB, BCB and Dirac point, are consistent with the recent experimental study\cite{chen}.
For thin-film thicknesses corresponding to fewer than 39 (31) atomic layers in TlBiSe$_2$ (TlBiTe$_2$), our study suggests that a band gap opens in the otherwise metallic surface states, much like in the binary Bi-based TIs\cite{zhang2}. With decreasing thickness, the size of the gap increases, indicating increased interaction between the surface states that exist on the two opposite surfaces of the TI (figures not shown). The gap values are summarized in Table I. There are no experimental reports, so far, on the critical film thickness necessary to preserve the metallic bands of Tl-based ternary TIs, which our theoretical study predicts. It is interesting to note that the predicted induced gaps in the Tl-based TIs, for the same film thickness, are larger than in the binary Bi-based TIs\cite{jiwon}.
\begin{figure}
\scalebox{0.315}{\includegraphics[angle=-90]{Fig5a.ps}}
\scalebox{0.315}{\includegraphics[angle=-90]{Fig5b.ps}}
\caption{ (Color online) Band structures of (a) a TlBiSe$_{2}$ film with a thickness of $\sim$7.0 nm (39 layers) and (b) a TlBiTe$_{2}$ film with a thickness of $\sim$5.8 nm (31 layers). Thinner films (23, 27, 31, 35 layers for TlBiSe$_{2}$ and 23, 27 layers for TlBiTe$_{2}$) show nonzero band gaps at $\bar{\Gamma}$ (figures not shown). Orbital contributions from the first few layers at each thickness are marked with red circles.}
\label{fig:Fig5}
\end{figure}
\begin{figure}
\scalebox{0.315}{\includegraphics[angle=-90]{Fig6.ps}}
\caption{ (Color online) Band structure of TlBiSe$_{2}$ film with a thickness of $\sim$5.6 nm (31 layers). A nonzero band gap exists at $\bar{\Gamma}$ due to the coupling between the top surface CB and the bottom surface VB and between the top surface VB and the bottom surface CB. No splitting is observed away from the surface state band edges.
}
\label{fig:Fig6}
\end{figure}
\begin{figure}
\scalebox{0.345}{\includegraphics[angle=0]{Fig7a.eps}}
\scalebox{0.345}{\includegraphics[angle=0]{Fig7b.eps}}
\caption{ (Color online) Layer-projected charge density, contributed by surface states in the neighborhood of the $\bar{\Gamma}$-point for (a) TlBiSe$_{2}$ with a thickness of $\sim$ 7 nm, corresponding to 39 atomic layers and (b) TlBiTe$_2$ with a thickness of $\sim$ 5.8 nm, corresponding to 31 layers. The color labeling scheme is consistent with that in Figure 1 (a).}
\label{fig:Fig7}
\end{figure}
\begin{table}
\caption{The induced band gap $\Delta$E$_{\bar{\Gamma}}$ (in eV) at the time-reversal invariant point $\bar{\Gamma}$ for thin films of various thicknesses.}
\begin{tabular}{ c | c | c | c | c | c }
\hline \hline
Number of layers & 23 & 27 & 31 & 35 & 39 \\
\hline
$\Delta$E$_{\bar{\Gamma}}$, TlBiSe$_{2}$ & 0.059 & 0.042 & 0.023 & 0.015 & 0.000\\
\hline
$\Delta$E$_{\bar{\Gamma}}$, TlBiTe$_{2}$ & 0.018 & 0.009 & 0.000 & NA & NA\\
\hline \hline
\end{tabular}
\end{table}
\begin{figure}
\scalebox{0.355}{\includegraphics[angle=0]{Fig8a.eps}}
\scalebox{0.355}{\includegraphics[angle=0]{Fig8b.eps}}
\caption{ (Color online) Layer-projected relative charge density for thinner samples of (a) TlBiSe$_{2}$ with a thickness of $\sim$ 5.6 nm, corresponding to 31 layers, and (b) TlBiTe$_2$ with a thickness of $\sim$ 5 nm, corresponding to 27 layers. The color labeling scheme is consistent with that in Figure 1(a).}
\label{fig:Fig8}
\end{figure}
\begin{figure}
\scalebox{0.455}{\includegraphics[angle=0]{Fig9.ps}}
\caption{ (Color online) Schematic diagram of (a) Bi$_{2}$Se$_{2}$Te and (b) Bi$_{2}$Te$_{2}$Se thin film structures obtained by stacking 4QLs along {\it z}-direction.}
\label{fig:Fig9}
\end{figure}
To understand the origin of the gap at the Dirac point with decreasing film thickness, we first map out the surface state contributions in the band structure from the atoms within the first few layers of the film, and from all orbitals, relative to the total contributions of all orbitals from all layers of the thin film. From the top or bottom surface of the film, 8 (6) atomic layers were chosen for TlBiSe$_2$ (TlBiTe$_2$). More atomic layers are used for TlBiSe$_2$ than for TlBiTe$_2$ because the surface states are less localized to the surface region in TlBiSe$_2$, so that more layers are required to preserve the metallic band structure. These choices also allowed us to use the same cut-off percentage for both Tl-based compounds in estimating the surface state contributions. For film thicknesses corresponding to 23, 27, 31, 35 and 39 atomic layers in TlBiSe$_2$, these cut-offs were varied from 70$\%$ to 50$\%$, in steps of 5$\%$. For TlBiTe$_2$, 23, 27 and 31 atomic layers were considered with cut-offs of 70$\%$, 65$\%$ and 60$\%$, respectively. Example results of these orbital contributions are marked on top of the total band structure in Figs. 5(a) and (b).
After estimating the distribution of the surface states in the thin-film structure, we computed the valence charge density resulting from these surface state wave-functions in both compounds. We focused on surface states around the Dirac point, in an energy window chosen such that the same number of wave-functions is used for building the surface charge density for most of the relevant thicknesses considered in this study. The wave-functions at three closely spaced {\bf k}-points, including the $\Gamma$ point, were chosen, and the charge density was computed layer by layer. We define the {\it relative} charge density as the ratio of the peak charge density in a particular layer to the peak charge density over all layers. This quantity is plotted in Figs. 7(a) and (b). Our studies suggest an exponential decay of the charge density into the bulk region, with a slower decay in TlBiSe$_2$ than in TlBiTe$_2$. Within about 12-14 atomic layers from the top or bottom surface of the film (corresponding to 2.3 nm), the surface state contributions decay to almost zero in TlBiTe$_2$, whereas the surface states extend up to 16-18 atomic layers ($\sim$3 nm) in TlBiSe$_2$. With decreasing film thickness, the charge density accumulates in the middle of the bulk region in both compounds (Figs. 8(a) and (b)), suggesting increasing interaction of the surface states through the overlap of their wave-functions, which induces a gap at the Dirac point. However, while a band gap is opened in the surface state band structure, the band structure away from the surface state band edges is largely unaffected, as seen in Fig. 6; that is, it does not split into two bands, as one would expect from resonant coupling between opposite surface states. This suggests that the splitting is due to interband coupling between the top surface CB and the bottom surface VB and between the top surface VB and the bottom surface CB.
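The {\it relative} charge density defined above is a simple normalization; the sketch below, with toy numbers and a function name of our own, makes the construction explicit:

```python
# Sketch of the "relative charge density" defined above: the peak of the
# surface-state charge density within each layer, divided by the global
# peak over all layers, so the most charged layer has the value 1.
def relative_charge_density(layer_peaks):
    """layer_peaks: per-layer peak charge densities (arbitrary units)."""
    global_peak = max(layer_peaks)
    return [peak / global_peak for peak in layer_peaks]

# A toy surface-localised profile decaying into the bulk:
print(relative_charge_density([8.0, 4.0, 2.0, 1.0, 0.5]))
# [1.0, 0.5, 0.25, 0.125, 0.0625]
```

In Figs. 7 and 8 this quantity is what is plotted layer by layer; the exponential fall-off of the toy profile mimics the decay of the surface states into the bulk.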
\begin{figure}
\scalebox{0.315}{\includegraphics[angle=-90]{Fig10a.ps}}
\scalebox{0.315}{\includegraphics[angle=-90]{Fig10b.ps}}
\caption{ (Color online) Band structures of Bi$_{2}$Se$_{2}$Te films with thickness of 3.6 nm corresponding to 4 QLs (a) without atomic relaxation and (b) with atomic relaxation. The Dirac cone is affected by relaxing the atoms.}
\label{fig:Fig10}
\end{figure}
\begin{figure}
\scalebox{0.315}{\includegraphics[angle=-90]{Fig11a.ps}}
\scalebox{0.315}{\includegraphics[angle=-90]{Fig11b.ps}}
\caption{ (Color online) Band structures of Bi$_{2}$Te$_{2}$Se films with thickness of $\sim$ 3.7 nm corresponding to 4 QLs (a) without atomic relaxation and (b) with atomic relaxation. The Dirac cone is not affected by relaxation.}
\label{fig:Fig11}
\end{figure}
\begin{figure}
\scalebox{0.345}{\includegraphics{Fig12a.eps}}
\scalebox{0.345}{\includegraphics{Fig12b.eps}}
\scalebox{0.345}{\includegraphics{Fig12c.eps}}\\
\scalebox{0.345}{\includegraphics{Fig12d.eps}}\\
\caption{ (Color online) (a) Layer-projected charge density for Bi$_{2}$Se$_{2}$Te without the atomic relaxation, contributed by surface states in the neighborhood of the $\bar{\Gamma}$-point for thickness of $\sim$3.6 nm corresponding to 4 QLs. Charge density resulting from (b) top and (c) bottom surface wave functions and (d) Bi$_{2}$Te$_{2}$Se with the atomic relaxation for the thickness of $\sim$3.7 nm corresponding to 4 QLs. Although a charge pile up in the middle is seen in (a), the separate wavefunction contributions in (b) and (c) clearly show that surface charge density strictly vanishes after 2 QLs from both the top and bottom surfaces of the film.}
\label{fig:Fig12}
\end{figure}
\subsection{Surface states of Bismuth-based ternary compounds}
We discuss the surface state properties of Bi$_2$Se$_2$Te and Bi$_2$Te$_2$Se in this section. The building blocks of these compounds are QLs, arranged in the order Se(Te)-Bi-Te(Se)-Bi-Se(Te) and stacked along the crystallographic {\it z}-direction (Figs. 9(a) and (b)). Different film thicknesses, corresponding to different numbers of QLs up to a maximum of 4 QLs, were considered. The choice of thickness is guided by the necessity to maintain the metallic nature of the surface bands. The computational parameters used in this study are the same as those for the Tl-based TIs. We first construct the thin film from the experimental parameters\cite{nakajima} and then study the thin-film properties without and with relaxation of the atomic positions, using a force convergence criterion of 0.05 eV/\AA. The procedure for extracting the surface states from the crystal wave-functions and for constructing the layer-wise surface charge density is the same as that for the Tl-based TIs, discussed in the previous section.
The thin-film structure of Bi$_2$Se$_2$Te without internal position relaxation shows a more symmetric Dirac cone with metallic bands than the Tl-based TIs for a thickness corresponding to 4 QLs (Fig. 10(a)). The bulk band gap (of 313 meV) supports these novel states, and the Dirac point lies well within this gap. Recently, a minimum thickness of 3 QLs was reported as necessary to preserve the Dirac cone\cite{johnson}. We find 4 QLs to be necessary, which we also illustrate with the help of surface charge density calculations. However, in the absence of any experimental work, it is likely that the minimum thickness, carefully predicted by different theoretical methods, falls in the range of 3-4 QLs. For film thicknesses below 4 QLs, a finite gap is induced at the Dirac point, whose size increases with decreasing film thickness (figures not shown). The gap values are listed in Table II. With atomic relaxation, the Dirac cone is affected and its symmetry is lost (Fig. 10(b)). The BVB lies along the line joining $\bar{\Gamma}$ and \={M} in the hexagonal BZ, and its maximum is positioned almost in line with the Dirac point. This hints at the crucial role played by the rearrangement of atomic positions, either {\it in situ} during the thin-film preparation procedure or in the presence of external environments. In Bi$_2$Te$_2$Se, a significant part of the surface state band structure falls below the BVBM, and the Dirac point occurs in the energy neighborhood of the $\Gamma$ point. Atomic relaxation has an almost negligible effect on the overall band structure in this case (Figs. 11(a) and (b)). Decreasing film thickness again induces a finite gap (Table II). These studies suggest that the Dirac cone remains protected in thinner layers of the ternary Bi-based TIs than of the binary Bi-based TIs or the Tl-based TIs.
The origin of the critical thickness needed to maintain the metallic nature of the surface states can be understood by studying the layer-dependent surface charge densities. The procedure is the same as that used for the Tl-based TIs, except that the charge density in the interstitial regions is also considered, in order to locate the spread of the surface states into the bulk region as precisely as possible for thinner films. For 4-QL-thick Bi$_2$Se$_2$Te films, the combined charge density from the surface states on both sides of the film appears to be significant in the middle of the bulk region (Fig. 12(a)). However, we separately computed the charge densities resulting from the top and bottom surface states. As seen from the plots (Figs. 12(b) and (c)), the density associated with the surface states drops rapidly beyond 2 QLs, indicating negligible interaction between the opposite surface states in the 4-QL film, consistent with the observed protected Dirac cone. Relaxation of the atomic positions is found to have no significant effect. In the 4-QL films of Bi$_2$Te$_2$Se, even the combined contribution from both surfaces is small in our studies, with and without atomic relaxation (Fig. 12(d)).
\begin{table}
\caption{The induced band gap $\Delta$E$_{\bar{\Gamma}}$ (in eV) at the time-reversal invariant point $\bar{\Gamma}$ for thin films of various thicknesses.}
\begin{tabular}{ c | c | c | c }
\hline \hline
Number of QLs & 2 & 3 & 4 \\
\hline
$\Delta$E$_{\bar{\Gamma}}$, Bi$_{2}$Se$_{2}$Te without relaxation & 0.128 & 0.038 & 0.000 \\
\hline
$\Delta$E$_{\bar{\Gamma}}$, Bi$_{2}$Se$_{2}$Te with relaxation & 0.086 & 0.017 & 0.000 \\
\hline
$\Delta$E$_{\bar{\Gamma}}$, Bi$_{2}$Te$_{2}$Se without relaxation & 0.039 & 0.037 & 0.000 \\
\hline
$\Delta$E$_{\bar{\Gamma}}$, Bi$_{2}$Te$_{2}$Se with relaxation & 0.027 & 0.037 & 0.000 \\
\hline \hline
\end{tabular}
\end{table}
\section {Summary and Conclusions}
We used a density functional based electronic structure method to study the thin-film surface state properties of Tl- and Bi-based ternary topological insulators. These studies predict that the Dirac cone remains protected in thinner layers of the ternary Bi-based TIs than of the binary Bi-based TIs or the Tl-based TIs, which may be advantageous for certain applications. However, we also predict that in the atomically relaxed structures the Dirac cones of all but Bi$_2$Se$_3$ lie in or near the BVB. The Dirac cone of Bi$_2$Se$_2$Te, on the other hand, remains outside of the BVB in the absence of relaxation, as might occur with dielectrics replacing the air gap; of course, the bonding to the dielectric could directly affect the surface states as well. Our computed results agree very well with experimental results where available, while our other predictions, such as the critical film thicknesses required to maintain the Dirac cone, the size of the induced gaps, and the extent of the spread of the surface states into the bulk region, point to further needed experimental work.
\acknowledgments
The authors acknowledge financial support from the Nanoelectronics Research Initiative-supported Southwest Academy of Nanoelectronics (NRI-SWAN) center. We thank the Texas Advanced Computing Center (TACC) for computational support (TG-DMR080016N).
\section{Introduction}
Almost any successful business needs to predict the future in order to make better decisions and allocate resources more effectively. Time series forecasting gives the power of future prediction based on past observations. It is ubiquitous, being used extensively in finance, supply chain management and inventory planning. Examples of time series forecasting use cases are: stock market forecasting \cite{ma2018multi},
cotton yield forecasting~\cite{nguyen2019spatial}, gas price forecasting~\cite{jin2015forecasting}, weather forecasting~\cite{wang2019deep},
energy demand forecasting for households~\cite{chou2018forecasting}, and many more.
Time series data is collected at successive, equally-spaced time intervals. Because of this temporal dimension, time series data contains information from the past to the present and possibly conveys information about the future. Accurate time series forecasting algorithms can capture the observed series, help interpret its underlying causes, and forecast future values based on the history of that series.
The undoubted importance of time series forecasting has driven significant interest in this area. \cite{box1970distribution} proposed a statistical method called the autoregressive integrated moving average (ARIMA) model to forecast univariate time series data. \cite{gers1999learning} proposed the Long Short-Term Memory (LSTM) network to learn both long- and short-term dependencies in the data while addressing the gradient vanishing and exploding problems of recurrent neural networks (RNNs) \cite{hochreiter1998vanishing}. Convolutional neural networks from image processing and sequence-to-sequence models from language modeling have also been utilized for time series forecasting \cite{borovykh2017conditional}, \cite{zaytar2016sequence}. Researchers have also endeavored to build more complex models to enhance forecasting performance \cite{lai2018modeling}. These works have laid the foundations for time series forecasting, either by utilizing state-of-the-art deep learning models or by relying heavily on domain-related knowledge to improve the performance of the forecasting models.
However, optimizing a complex deep learning network is not an easy task. Besides, domain-related knowledge is not always accessible and requires a lot of demanding work. Therefore, forecasting time series without domain knowledge and with an ``auto-optimized'' mechanism is still a challenge for researchers.
On the other hand, multi-task learning \cite{caruana1997multitask} and multi-view learning \cite{sun2013survey} have been introduced with the capability to boost deep learning model performance. Multi-task learning improves generalization and achieves better efficiency and prediction accuracy by using the signals of related tasks as an inductive bias. Multi-view learning is able to exploit a set of complementary, distinct features; the views can therefore be employed to comprehensively and accurately describe the data, and thus help improve the learning performance \cite{xu2013survey}.
In this paper, we propose a self-boosted model which combines the learning capabilities of multi-task and multi-view learning with a co-training objective function to enhance forecasting performance.
More importantly, the model does not require external knowledge.
The key idea is that the original time series is decomposed into multiple components: the intrinsic mode functions and the residue.
The decomposition is done via the Ensemble Empirical Mode Decomposition (EEMD) algorithm, introduced in signal processing \cite{huang1998empirical}, where the generated signals can have variable amplitude and frequency along the time axis.
Then, these self-generated time series are clustered into groups that are closely related and loosely related to the original time series. The closely related decomposed time series are fed to build a multi-task learning model, while the loosely related group is utilized for multi-view learning. This combination helps improve the generalization of the model and can enhance performance significantly. Specifically, our contributions in this paper are:
\begin{itemize}
\item We propose a novel self-boosted mechanism for time series forecasting. It first decomposes the time series into intrinsic mode functions, then utilizes the multi-task and multi-view learning paradigms to build the forecasting model from those generated time series.
\item To the best of our knowledge, this is one of the first attempts at incorporating the EEMD method from signal processing into the multi-task and multi-view learning paradigms to address the time series forecasting problem.
\item We demonstrate the superiority of our proposed self-boosted model via extensive experiments on different datasets and comparisons with state-of-the-art forecasting techniques. This self-boosted forecasting method is widely applicable to multivariate time series data.
\end{itemize}
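To make the decomposition-and-grouping step described above concrete, the sketch below splits generated components by their absolute Pearson correlation with the original series. The correlation criterion and the 0.5 threshold are our own illustrative assumptions, not necessarily the clustering rule used in the model:

```python
# Illustrative sketch of grouping EEMD components into "closely related"
# and "loosely related" sets. The grouping rule here (absolute Pearson
# correlation with the original series, threshold 0.5) is an assumption
# made for illustration only.
def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

def split_components(series, components, threshold=0.5):
    close, loose = [], []
    for index, component in enumerate(components):
        group = close if abs(pearson(series, component)) >= threshold else loose
        group.append(index)
    return close, loose

y = [1.0, 2.0, 3.0, 4.0, 5.0]
imfs = [[1.1, 2.0, 2.9, 4.2, 5.0],    # tracks the original trend closely
        [0.1, -0.1, 0.1, -0.1, 0.1]]  # high-frequency, weakly related
print(split_components(y, imfs))  # ([0], [1])
```

In the proposed model, the indices in the first group would feed the multi-task branch and those in the second group the multi-view branch.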
\section{Related Work}
\label{sec:related-work}
We review two approaches to the time series forecasting problem: general statistical and machine learning based approaches, and signal processing based approaches.
\subsubsection{Statistical and Machine Learning Based Methods}
The Autoregressive Integrated Moving Average (ARIMA) model introduced by \cite{box1970distribution} is perhaps the basis for many time series forecasting solutions. This model is a bundle of two variants: the autoregressive (AR) and the moving average (MA) models. One limitation is that it only looks back at the dependent variable and fails to capture unusual changes in a pattern.
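The AR and MA pieces combine in a simple recursion for a one-step forecast. The sketch below uses hand-picked, illustrative coefficients rather than fitted ones:

```python
# One-step ARMA(p, q) forecast sketch, combining the autoregressive (AR)
# and moving-average (MA) parts described above. The coefficients are
# hand-picked for illustration, not fitted to data.
def arma_one_step(history, residuals, phi, theta, c=0.0):
    """history: past observations (newest last); residuals: past forecast
    errors (newest last); phi: AR coefficients; theta: MA coefficients."""
    ar_part = sum(p * y for p, y in zip(phi, reversed(history[-len(phi):])))
    ma_part = sum(t * e for t, e in zip(theta, reversed(residuals[-len(theta):])))
    return c + ar_part + ma_part

# ARMA(1, 1) with phi_1 = 0.5 and theta_1 = 0.2 on a toy series:
forecast = arma_one_step([1.0, 2.0, 4.0], [0.1, -0.2], phi=[0.5], theta=[0.2])
# -> 0.5 * 4.0 + 0.2 * (-0.2) = 1.96
```

The "integrated" (I) part of ARIMA simply applies this recursion to a differenced series, which is the step that handles non-stationary trends.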
In another approach, Dasgupta et al. used non-linear dynamic Boltzmann machines for time series prediction \cite{dasgupta2017nonlinear}. The technique learns a generative model of temporal pattern sequences, using an exact learning rule that maximizes the log-likelihood of a given time series. Yu et al. presented a temporal graph regularization method for high-dimensional time series forecasting \cite{yu2016temporal}, forming connections to graph regularization methods in the context of learning dependencies in an autoregressive framework. Support vector machines were used in \cite{kim2003financial},\cite{chidlovskii2017multi} for financial time series forecasting and for learning mutually dependent time series in a multi-task setting. Other works utilized both ARIMA and the Multilayer Perceptron (MLP), as in \cite{zhang2003time}, or \cite{jain2007hybrid} for hydrologic time series forecasting. \cite{emamgholizadeh2014prediction} used an artificial neural network (ANN) with an adaptive neuro-fuzzy inference system for groundwater level prediction. \cite{liang2018multi} used a multi-variable stacked Long Short-Term Memory (LSTM) network to learn different time scales and enhance wind speed prediction. However, most of these works focus on high-dimensional time series where domain-specific features play an important role in the forecasting models. Instead, we propose a solution whose novelty lies in not requiring external information in the forecasting model.
\subsubsection{Signal Processing Based Methods}
The success of deep learning in signal processing has drawn researchers' attention to applying it to time series forecasting. Khandelwal et al. utilized a hybrid model of ARIMA and ANN with the Discrete Wavelet Transform (DWT) \cite{khandelwal2015time}. In their work, DWT decomposes a time series into linear and non-linear components; ARIMA and ANN are then used to improve prediction on the linear and non-linear components, respectively. \cite{awajan2018improving} uses an EMD-HW (Empirical Mode Decomposition - Holt-Winters) bagging technique for financial forecasting. \cite{wu2019improved} combined ensemble empirical mode decomposition (EEMD) with an LSTM model to forecast the crude oil price. These models demonstrate the success of signal processing techniques in time series forecasting. However, there is still limited work on integrating the latest signal processing technologies with the multi-task multi-view learning paradigm. Our paper proposes the use of EEMD from signal processing in a multi-task multi-view deep neural network setting to bridge this gap.
\section{Problem Formulation}
\label{sec:problem_formulation}
Here, we describe our self-boosted time series forecasting problem as below:
\textbf{Intrinsic mode functions (IMF)}:
\textit{``IMF is any time-varying function with the same number of extrema and zero crossings, whose envelopes are symmetric with respect to zero"} \cite{huang1998empirical}.
\textbf{The input}: The input is a univariate time series denoted by $Y=\{y_1, y_2,...,y_t\}$ where $t$ is the current time.
This time series is decomposed into intrinsic mode functions denoted $IMF=\{IMF_1, IMF_2,..., IMF_n\}$ where $n$ is the number of intrinsic mode functions. Each function $IMF_i$ is itself a time series $IMF_i=\{imf_1, imf_2, ..., imf_t\}$. We call these functions supporting time series.
\textbf{Problem definition:} Given the time series and its intrinsic mode functions, our goal is to learn a function $f$ that takes the history of those functions and the time series up to the current time $t$ and returns the predicted values at future time steps $Y_{t+1..t+H} = \{y_{t+1}, y_{t+2}, ..., y_{t+H}\}$, where $H$ is called the forecasting horizon.
\begin{equation}
y_{t+1}, y_{t+2}, ..., y_{t+H} = f(IMF_{1..n}, Y_{1..t})
\end{equation}
\section{Method}
\label{sec:method}
Here we describe the details of our proposed method. The overall framework involves three steps:
\begin{itemize}
\item \textbf{Time Series Decomposition:} In this step, the original time series is decomposed into multiple intrinsic mode functions $\{IMF_1, IMF_2,...,IMF_n\}$, which are orthogonal and whose sum equals the original time series.
\item \textbf{Feature Selection:} We treat each intrinsic mode function as an additional feature for training our model. We group the intrinsic mode functions that are most similar to the original time series and build the multi-task model on these features. The similarity is measured by the correlation coefficient between two time series.
\item \textbf{Forecasting Model:} Finally, we build a multi-task model where each task is the forecasting of one of the selected intrinsic mode functions. The remaining intrinsic mode functions serve as additional views for the target task-specific branch. Combining the co-training algorithm of multi-task learning with multi-view learning helps boost the performance of the forecasting model.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/2020_self_bossting_ts_arch.pdf}
\caption{An overview of the multi-task and multi-view learning architecture in self-boosted time series forecasting framework.}
\label{fig:architecture}
\end{figure}
\subsection{From Time series to Intrinsic Mode Functions}
We employ the Ensemble Empirical Mode Decomposition (EEMD) method from the signal processing domain to decompose the input sequence $y(t)$ into serial components (the so-called intrinsic mode functions (IMFs) and a residual component), as shown in Equation (\ref{equation:ts-decomposition}).
\begin{equation}
\label{equation:ts-decomposition}
y(t) = \sum_{i=1}^{N-1}{imf_i} + r_n
\end{equation}
where $imf_i$ is an IMF and $r_n$ is the residual component.
All the IMFs are orthogonal and their sum is equal to the original time series. Each IMF represents a unique range of energy and frequency.
\begin{algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\underline{function eemd\_decomposition} $(y)$\\
\Input{time series $y$}
\Output{serial components $imfs$}
- 0. add uniformly distributed white noise to the signal. \\
- 1. identify all extrema of $Y$\\
- 2. interpolate between the minima (resp. maxima) to form the lower (resp. upper) "envelopes" \\
- 3. compute the mean envelope $m_k(t)$\\
- 4. extract the detail $imf_j=y(t)-m_k(t)$ \\
- 5. repeat (1) to (4) until IMFs meet the definition and converge.\\
- 6. repeat (1) to (5) to generate the residual $r_n(t)=y(t)-imf_n(t)$
\Return $imf_{1..n}$
\caption{EEMD decomposition algorithm. Steps $1$ to $6$ form the Empirical Mode Decomposition (EMD) algorithm. Step $0$ is added to resolve the mode mixing problem.}
\label{algorithm:eemd-decomposition}
\end{algorithm}
\paragraph{EEMD algorithm.}
Let $N$ be the number of ensembles. The final IMFs and residual in Algorithm \ref{algorithm:eemd-decomposition} are defined by the average values after $N$ ensembles as the following equations:
\begin{equation}
\begin{aligned}
& imf_j(t) = \frac{1}{N} \sum_{i=1}^{N}{imf_{ji}} \quad j=1..N-1\\
& r(t) = \frac{1}{N} \sum_{i=1}^{N}{r_{ni}}
\end{aligned}
\end{equation}
The stopping condition for Algorithm \ref{algorithm:eemd-decomposition} is based on the standard deviation (SD) computed between two consecutive sifting results \cite{huang1998empirical}. The $SD$ is computed as:
\begin{equation}
SD = \sum_{t=0}^{T}{ \frac{|imf_{j-1}(t) - imf_j(t)|^2}{imf^2_{j-1}(t)} }
\end{equation}
According to \cite{huang1998empirical}, a typical value of $SD$ is between $0.2$ and $0.3$. The sifting process will be terminated once the $SD$ value falls into the specified range.
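To make the sifting procedure of Algorithm \ref{algorithm:eemd-decomposition} concrete, the sketch below implements a minimal EMD in plain numpy. It is a deliberate simplification, not the implementation used in our experiments: linear interpolation stands in for the usual cubic-spline envelopes, the noise-ensemble step $0$ of EEMD (averaging EMD outputs over noisy copies of the input) is omitted, and the function name \texttt{emd} is illustrative.

```python
import numpy as np

def _extrema(h):
    """Indices of strict local maxima and minima of a 1-D signal."""
    inner = np.arange(1, len(h) - 1)
    maxima = inner[(h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])]
    minima = inner[(h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])]
    return maxima, minima

def emd(y, max_imfs=6, sd_thresh=0.25, max_sift=20):
    """Minimal EMD sketch: sift each IMF until the SD criterion is met."""
    residual = np.asarray(y, dtype=float).copy()
    imfs = []
    t = np.arange(len(residual))
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(max_sift):
            maxima, minima = _extrema(h)
            if len(maxima) < 2 or len(minima) < 2:
                break
            upper = np.interp(t, maxima, h[maxima])   # upper "envelope"
            lower = np.interp(t, minima, h[minima])   # lower "envelope"
            m = (upper + lower) / 2.0                 # step 3: mean envelope
            h_new = h - m                             # step 4: extract the detail
            sd = np.sum((h - h_new) ** 2 / (h ** 2 + 1e-12))
            h = h_new
            if sd < sd_thresh:                        # SD stopping criterion
                break
        maxima, minima = _extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break                                     # remaining part is the trend
        imfs.append(h)
        residual = residual - h                       # keeps sum(imfs) + residual == y
    return imfs, residual
```

By construction, the extracted components and the residual sum back to the original series, which is the reconstruction property stated in Equation (\ref{equation:ts-decomposition}).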
\subsection{Selection of Intrinsic Mode Functions}
To integrate intrinsic mode functions into our multi-task learning model, we cluster these functions into two categories. The first category comprises functions that are not highly dependent on the original time series; the second category includes functions that are highly dependent on it. We employ the k-means algorithm to cluster the time series with $k=M$ ($M$ is the number of clusters), where the distance is measured by the inverse of the similarity (correlation coefficient) between a series and the original one. The correlation coefficient between time series $X$ and $Y$ is:
\begin{equation}
corr(X, Y) = \frac{ \sum_{i=1}^{T}{(x_i-\overline{x})(y_i-\overline{y})}}{\sqrt{\sum_{i=1}^{T}{(x_i-\overline{x})^2}}\sqrt{\sum_{i=1}^{T}{(y_i - \overline{y})^2}}}
\end{equation}
where $\overline{x}$ and $\overline{y}$ are the mean values of the time series $X$ and $Y$, respectively. Each cluster represents how close its members are to the original time series. The least similar group(s) can be dropped to reduce the data dimension and the computational cost of the model.
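The selection step above can be sketched as follows. This is a minimal illustration with a hand-rolled one-dimensional k-means over the $1/\text{similarity}$ distances; the helper name \texttt{select\_imfs} is ours, not part of any library.

```python
import numpy as np

def select_imfs(y, imfs, k=2, iters=50):
    """Cluster IMFs by their (inverse-correlation) distance to y with a tiny 1-D k-means."""
    corrs = np.array([abs(np.corrcoef(y, imf)[0, 1]) for imf in imfs])
    d = 1.0 - corrs                      # small distance = highly dependent IMF
    centers = np.linspace(d.min(), d.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(d[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):      # recompute non-empty cluster centers
                centers[j] = d[labels == j].mean()
    related = labels == np.argmin(centers)  # cluster closest to the original series
    return related, corrs
```

The boolean mask \texttt{related} marks the highly dependent group used for the auxiliary forecasting tasks; the rest can be dropped or routed to the multi-view branch.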
\subsection{Utilizing Intrinsic Mode Functions to Learn with Multi-task and Multi-view based Model}
Figure \ref{fig:architecture} presents an overview of our proposed architecture. The main task in our model is to forecast the original time series at a horizon $H$. The auxiliary tasks forecast the related time series (intrinsic mode functions) produced in the decomposition step. The main-task sub-network takes the less related time series group as different views to enhance its forecasting performance. These tasks are co-trained so that the auxiliary tasks boost the performance of the main task.
In particular, the related time series group is fed from the input into three one-dimensional convolutional layers to extract short-term patterns of the series. Let $k$ index the filters of width $w$ swept over the input matrix $X$; the output $h_k$ of each layer after the $k$-th filter is computed as:
\begin{equation}
h_k = max(0, W_k*X + b_k)
\end{equation}
where $b_k$ is the bias of the $k$-th filter operation and $max(0, x)$ is the popular ReLU activation function. Stacking the convolutional layers allows a hierarchical decomposition of the inputs.
A max pooling layer is stacked on top of the convolutional layers to reduce the latent representation dimension and the computation in the network. Subsequently, the architecture continues with two gated recurrent units (GRU) \cite{chung2014empirical} and a dense layer before branching into task-specific heads. GRUs are chosen for faster training while still maintaining the capability of learning long- and short-term patterns as in the traditional Long Short-Term Memory (LSTM) unit. Stacking the GRUs helps efficiently discover more high-level features at different time scales and results in improved forecasting performance.
The hidden state of recurrent units is computed at time $t$ as below \cite{chung2014empirical}:
\begin{equation}
\begin{aligned}
& h_t = (1-z_t)h_{t-1} + z_t \hat{h_t} \\
& z_t = \sigma(W_z x_t + U_z h_{t-1}) \\
& \hat{h_t} = tanh(W x_t + U(r_t \odot h_{t-1})) \\
& r_t = \sigma(W_r x_t + U_r h_{t-1})
\end{aligned}
\end{equation}
where $r$, $z$, $h$ and $\hat{h}$ are the values of the reset gate, update gate, activation gate and candidate activation, respectively. $\odot$ is the element-wise product, $\sigma$ is the sigmoid function and $x_t$ is the input at time $t$. $W$ and $U$ are weight matrices.
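As a sanity check of these update equations, a single GRU step can be written directly in numpy. Biases are omitted to match the equations above; this is an illustrative sketch, not the trained network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W, U):
    """One GRU update following the equations above (biases omitted)."""
    z = sigmoid(W_z @ x_t + U_z @ h_prev)          # update gate z_t
    r = sigmoid(W_r @ x_t + U_r @ h_prev)          # reset gate r_t
    h_hat = np.tanh(W @ x_t + U @ (r * h_prev))    # candidate activation
    return (1.0 - z) * h_prev + z * h_hat          # new hidden state h_t
```

With all-zero weights, both gates evaluate to $\sigma(0)=0.5$ and the candidate to $\tanh(0)=0$, so the state is simply halved, which makes the gating behavior easy to verify.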
On the main-task-specific branch, the less related time series group is treated as different views, which are concatenated before being fed to the subsequent layers. The concatenated view is defined by:
\begin{equation}
h^c = h^m \oplus v_1 \oplus \dots \oplus v_q
\end{equation}
where $\oplus$ is the concatenation operator, $h^c$ is the final concatenated view, $h^m$ is the latent view of the main task in multi-task learning, and $v_1, \dots, v_q$ are additional views fed by the less related time series group.
All the branches end with a fully connected dense layer that produces the final forecasting result $o_t$, computed as:
\begin{equation}
o_t = W h_{t} + b
\end{equation}
where $W$ and $b$ are learnable parameters, $h_t$ is the hidden state at time $t$.
\subsection{Optimization Algorithm}
We use a traditional strategy for building features and solving the time series model. From the given time series $Y_t = \{y_1, y_2, \dots, y_t\}$ and a lag time $q$, we form the input features as the values $X = \{y_{t- q+1}, y_{t-q+2}, \dots, y_t\}$. If $H$ is the forecasting horizon, the regression problem over feature-value pairs $\{X_t, Y_{t+H}\}$ can be solved using the Adam optimizer \cite{kingma2014adam}.
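The feature-building strategy above can be sketched as follows; the helper name \texttt{make\_windows} is ours, introduced only for illustration.

```python
import numpy as np

def make_windows(y, q, H):
    """Build (X, target) pairs: q lagged values predict the value H steps ahead."""
    X, targets = [], []
    for i in range(q, len(y) - H + 1):
        X.append(y[i - q:i])          # lags y_{t-q+1}, ..., y_t
        targets.append(y[i + H - 1])  # target y_{t+H}
    return np.array(X), np.array(targets)
```

Each row of $X$ and its target form one training sample for the regression model.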
The objective is to minimize the joint loss of all tasks. With this joint loss function, all the tasks are co-trained to improve generalization. In particular, the joint loss function $L$ is defined as the average weighted loss over all task-specific losses.
\begin{equation}
L = \frac{1}{N} \sum_{i=1}^{N} \alpha_i * MSE(Y_i, \hat{Y_i})
\end{equation}
where $N$ is the number of tasks, $\alpha_i$ is the loss weight of task $i$; $Y_i$ and $\hat{Y_i}$ are the ground-truth and forecast values of all samples in the training set for task $i$. In our experiments, we penalize errors on the main task more heavily, setting its weight to twice that of the auxiliary tasks.
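A minimal sketch of this weighted joint loss, assuming per-task arrays of predictions and targets:

```python
import numpy as np

def joint_loss(preds, targets, alphas):
    """Average weighted MSE over all tasks, as in the joint loss L above."""
    losses = [a * np.mean((y - y_hat) ** 2)
              for a, y, y_hat in zip(alphas, targets, preds)]
    return sum(losses) / len(losses)
```

Setting the main task's entry in \texttt{alphas} to twice the auxiliary value reproduces the weighting used in our experiments.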
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/exchange_rate_decomposition.pdf}
\caption{Exchange rate decomposed into intrinsic mode functions.}
\label{fig:imfs}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/comparison-mtl-mtv-all.pdf}
\caption{Performance comparison of multi-task learning (MTL), multi-view Learning (MTV) and the proposed method across all datasets (left: electricity consumption; middle: air temperature; right: exchange rate). X-axis is the lag time and Y-axis is the normalized RMSE value.}
\label{fig:comparison-mtl-mtv-all}
\end{figure*}
\section{Evaluation}
\label{sec:evaluation}
We conduct experiments on three public datasets and compare with four state-of-the-art methods. The competing approaches and evaluation metrics are described below:
\subsection{Competing Approaches}
\begin{itemize}
\item \textit{ARIMA}: The autoregressive integrated moving average (ARIMA) is a popular time series analysis method applied in many domains, including primary energy demand \cite{ediger2007arima}. This forecasting technique projects the future values of a series based entirely on its own inertia.
\item \textit{RNN-GRU}: This is a Recurrent Neural Network (RNN) using the Gated Recurrent Unit (GRU) \cite{chung2014empirical} as the cell. The GRU has fewer parameters than the LSTM while maintaining competitive performance.
\item \textit{Dilated CNN}: This is a Convolution Neural Network (CNN) based on WaveNet architecture \cite{oord2016wavenet}. It comprises three dilated convolutional layers. The dilation values for the layers are $1$, $2$, and $4$ respectively \cite{borovykh2017conditional}.
\item \textit{Seq2seq}: This is the sequence-to-sequence method widely used in neural machine translation \cite{sutskever2014sequence}. We utilize this approach for time series forecasting with two Long Short-Term Memory (LSTM) networks: one plays the role of the encoder while the other is the decoder.
\end{itemize}
\subsection{Evaluation Metrics}
We use conventional evaluation metrics such as Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and R-squared ($R^2$). These metrics are computed as:
\begin{equation}
\begin{aligned}
& RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (Y_i - \hat{Y_i})^2} \\
& MAE = \frac{1}{N} \sum_{i=1}^{N} |Y_i - \hat{Y_i}| \\
& MAPE = \frac{100\%}{N} \sum_{i=1}^{N} \frac{|Y_i - \hat{Y_i}|}{Y_i}
\end{aligned}
\end{equation}
where $i = 1,\dots,N$; $Y$ and $\hat{Y}$ are the ground-truth and forecast series, and $N$ is the number of elements in the test set.
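The first three metrics can be computed directly as below (assuming non-zero ground-truth values so that MAPE is well defined):

```python
import numpy as np

def metrics(y, y_hat):
    """RMSE, MAE, and MAPE (in percent) as defined above."""
    err = y - y_hat
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err) / y)
    return rmse, mae, mape
```

These are computed over all elements of the test set for each method and dataset.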
\section{Experiment Results}
\label{sec:results}
\begin{table*}[t]
\centering
\caption{Summary of all models' performance. Each row is the metric values of a specific method for all datasets. Each column is the comparison between methods on a particular dataset using metrics RMSE, MAE, MAPE and R-squared (R2). Report for lag time $1$ is not included due to limited space.} \begin{tabular}{rlrrrrrrrrr}
\toprule
\multicolumn{1}{l}{Dataset} & \multicolumn{1}{r|}{} & \multicolumn{3}{c|}{Electricity Consumption} & \multicolumn{3}{c|}{Air Temperature} & \multicolumn{3}{c}{Exchange Rates} \\
\midrule
& \multicolumn{1}{r|}{} & \multicolumn{3}{c|}{Lag Time} & \multicolumn{3}{c|}{Lag Time} & \multicolumn{3}{c}{Lag Time} \\
\cmidrule{3-11} \multicolumn{1}{l}{Method} & \multicolumn{1}{l|}{Metrics} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{6} & \multicolumn{1}{c|}{12} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{6} & \multicolumn{1}{c|}{12} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{12} \\
\midrule
\multicolumn{1}{r}{\multirow{4}[2]{*}{ARIMA}} & RMSE & 268.2087 & 263.7740 & 256.1630 & 7.0773 & 7.0090 & 6.8422 & 0.0130 & 0.0130 & 0.0130 \\
& MAE & 196.6700 & 193.1916 & 186.7810 & 4.7797 & 4.7365 & 4.6165 & 0.0094 & 0.0094 & 0.0094 \\
& MAPE & 0.3820 & 0.3763 & 0.3646 & 0.0922 & 0.0914 & 0.0892 & 0.0104 & 0.0104 & 0.0104 \\
& R2 & -0.4037 & -0.3577 & -0.2804 & 0.4079 & 0.4193 & 0.4466 & 0.9871 & 0.9871 & 0.9871 \\
\midrule
\multicolumn{1}{r}{\multirow{4}[2]{*}{RNN-GRU}} & RMSE & 164.1796 & 272.0544 & 188.6766 & 1.8021 & 1.7469 & \textbf{1.8587} & 0.0072 & 0.0106 & 0.0114 \\
& MAE & 115.5435 & 189.5071 & 159.8048 & 1.2333 & 1.2486 & \textbf{1.3186} & 0.0041 & 0.0071 & 0.0068 \\
& MAPE & 0.0657 & 0.0980 & 0.0875 & 0.0248 & 0.0254 & \textbf{0.0265} & 0.0048 & 0.0079 & 0.0073 \\
& R2 & 0.9684 & 0.9131 & 0.9582 & 0.9713 & 0.9731 & \textbf{0.9696} & 0.9957 & 0.9906 & 0.9890 \\
\midrule
\multicolumn{1}{r}{\multirow{4}[2]{*}{Dilated CNN}} & RMSE & 152.0230 & 123.9416 & 111.9258 & 3.2532 & 6.5757 & 4.3498 & \textbf{0.0064} & 0.0134 & 0.0121 \\
& MAE & 87.2483 & 30.0819 & 79.2819 & 2.5077 & 4.3453 & 3.1238 & \textbf{0.0038} & 0.0075 & 0.0086 \\
& MAPE & 0.0505 & 0.0629 & 0.0490 & 0.0583 & 0.1068 & 0.0734 & \textbf{0.0044} & 0.0081 & 0.0097 \\
& R2 & 0.9820 & 0.9660 & 0.9853 & 0.9066 & 0.6189 & 0.8336 & \textbf{0.9966} & 0.9850 & 0.9878 \\
\midrule
\multicolumn{1}{r}{\multirow{4}[2]{*}{Seq2seq}} & RMSE & 151.6445 & 142.9175 & 129.0600 & 1.7772 & 2.0277 & 2.1882 & 0.0144 & 0.0168 & 0.0146 \\
& MAE & 104.5271 & 99.3870 & 97.1598 & 1.2675 & 1.6001 & 1.7653 & 0.0100 & 0.0133 & 0.0094 \\
& MAPE & 0.0571 & 0.0551 & 0.0522 & 0.0258 & 0.0336 & 0.0373 & 0.0108 & 0.0148 & 0.0101 \\
& R2 & 0.9730 & 0.9760 & 0.9804 & 0.9721 & 0.9638 & 0.9579 & 0.9827 & 0.9763 & 0.9820 \\
\midrule
\multicolumn{1}{r}{\multirow{4}[2]{*}{\textbf{Self-boosted}}} & RMSE & \textbf{98.4757} & \textbf{85.9606} & \textbf{88.9537} & \textbf{1.4645} & \textbf{1.3909} & 1.9176 & 0.0070 & \textbf{0.0096} & \textbf{0.0082} \\
& MAE & \textbf{73.4041} & \textbf{62.3544} & \textbf{65.4086} & \textbf{1.1418} & \textbf{1.0817} & 1.5684 & 0.0046 & \textbf{0.0071} & \textbf{0.0058} \\
& MAPE & \textbf{0.0405} & \textbf{0.0363} & \textbf{0.0398} & \textbf{0.0234} & \textbf{0.0221} & 0.0324 & 0.0054 & \textbf{0.0088} & \textbf{0.0067} \\
& R2 & \textbf{0.9886} & \textbf{0.9913} & \textbf{0.9907} & \textbf{0.9811} & \textbf{0.9830} & 0.9677 & 0.9960 & \textbf{0.9924} & \textbf{0.9944} \\
\bottomrule
\end{tabular}%
\label{tab:performance-comparison}%
\end{table*}%
\subsection{Dataset Description}
We use publicly available datasets, summarized below:
\begin{itemize}
\item \textit{Electricity\footnote{https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014}}: Electricity consumption in kW recorded every $15$ minutes from $2011$ to $2014$ for $321$ clients. We resampled it to hourly consumption and averaged it to obtain the per-client consumption. This data is used to train a model that forecasts the average electricity consumption per client.
\item \textit{Exchange rate}: The dataset contains daily exchange rates from $1990$ to $2016$ for eight countries: Australia, the United Kingdom, Canada, Switzerland, China, Japan, New Zealand, and Singapore.
\item \textit{Air Temperature\footnote{https://archive.ics.uci.edu/ml/datasets/Air+Quality}}: This is the hourly temperature recorded from March 2004 to February 2005 in the field within an Italian city. Missing values are replaced with linear interpolation.
\end{itemize}
All the datasets are split in chronological order into $60\%$, $20\%$, and $20\%$ for training, validation, and testing, respectively. All the models forecast one time step forward.
\subsection{Ensemble Empirical Mode Decomposition Result}
Figure \ref{fig:imfs} presents the IMFs obtained by decomposing the original exchange rate time series with the EEMD method. There are $10$ IMF components and one residual. $IMF1$ has the highest frequency, the shortest wavelength, and the maximum amplitude. Subsequent components have decreasing frequency and amplitude and increasing wavelength. The residual component varies slowly and represents the long-term trend of the annual exchange rate pattern. The EEMD decomposition transforms a non-linear, non-stationary time series into stationary components and can benefit forecasting performance.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/rmse_comparison.pdf}
\caption{Average RMSE comparison on each dataset.
}
\label{fig:avg-rmse-comparison}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/imf_importance.pdf}
\caption{Impact of intrinsic mode functions on model performance across three datasets (left: electricity consumption, middle: air temperature, right: exchange rates). X-axis indicates the number of intrinsic mode functions. Y-axis is the RMSE coefficient. The red point indicates starting of adding intrinsic mode functions to the multi-view learning component.
}
\label{fig:performance-imfs}
\end{figure*}
\subsection{Overall Forecasting Performance}
Table \ref{tab:performance-comparison} summarizes the forecasting performance of our proposed method compared with all the baselines across the three datasets and all the metrics. We also vary the lag time over the values $\{1, 3, 6, 12\}$ to better evaluate the learning capabilities of the models. Due to limited space, the table does not show the results for lag time $1$. The best results are highlighted in bold face.
From Table \ref{tab:performance-comparison}, we see that the deep learning models outperform the traditional ARIMA model in all respects. This demonstrates the ability of deep learning models to learn both linear and non-linear time series data. For the electricity consumption dataset, our self-boosted method has rounded RMSE values of $98$, $86$, and $89$ for lag times $3$, $6$, and $12$, respectively. The proposed method thus outperforms the strongest baselines: Seq2seq, whose rounded RMSE for lag time $3$ is $152$ (a $36\%$ improvement), and Dilated CNN, whose rounded RMSE values for lag times $6$ and $12$ are $124$ and $112$ ($31\%$ and $21\%$ improvements, respectively). For the air temperature dataset, our self-boosted method also beats the other deep learning models at lag times $3$ and $6$. The exchange rates dataset shows the same pattern, where our method outperforms the others at lag times $6$ and $12$. It is slightly behind the Dilated CNN model at lag time $3$, while both models still have R-squared values above $99\%$. In addition, we average the RMSE across all lag times to generalize the performance comparison of all methods. Figure \ref{fig:avg-rmse-comparison} shows that our self-boosted model outperforms all the baselines.
\begin{table}[htbp]
\centering
\caption{Comparison of the average RMSE performance of the proposed model and the average performance of the best models in each lag time. ABBM means Average Best Baseline Methods.}
\begin{tabular}{l|c|c|c}
\toprule
& Electricity & Temperature & Exchange Rates \\
\midrule
ABBM & 129.1706 & 1.7943 & 0.0095 \\
\midrule
\textbf{Self-boosted} & \textbf{91.1300} & \textbf{1.5910} & \textbf{0.0082} \\
\bottomrule
\end{tabular}%
\label{tab:performanc-average}%
\end{table}%
Furthermore, we select the best baseline method for each lag time and average their performance for each dataset (called ABBM in the table). The result is then compared with our proposed model's average performance across all lag times. Table \ref{tab:performanc-average} displays the averaged results. The RMSEs of our self-boosted model are $91.1300$, $1.5910$, and $0.0082$ while the RMSEs of the average best baselines are $129.1706$, $1.7943$, and $0.0095$ on the electricity consumption, air temperature, and exchange rates datasets, respectively. This result confirms that our self-boosted model consistently outperforms these state-of-the-art models.
\subsection{The Role of Multi-task and Multi-view Learning}
To evaluate the combination of multi-task and multi-view learning, we created a multi-task learning model (named MTL) with an architecture similar to the self-boosted model but with the multi-view component removed. We also created a multi-view learning model (named MTV) by dropping the auxiliary tasks from the proposed model. We then measure the performance of these variants and present the results in Figure \ref{fig:comparison-mtl-mtv-all}.
Figure \ref{fig:comparison-mtl-mtv-all} shows the normalized RMSE values of the MTL model, the MTV model, and the self-boosted model across all datasets for the selected lag times $\{1, 3, 6, 12\}$. The self-boosted model outperforms the two variants on the electricity consumption dataset. The MTV model has a comparable RMSE at lag time $1$, but the results at the other lag times still show the superiority of the proposed model. In addition, on the air temperature and exchange rate datasets, our model performs three to four times better than the MTL variant. Overall, our self-boosted model outperforms its variants and has the lowest normalized RMSE value for each lag time across the three datasets.
\subsection{Understanding the Importance of Intrinsic Mode Functions}
To better understand the importance of the intrinsic mode functions in the self-boosted mechanism, we evaluate the forecasting performance after sorting the IMFs by their similarity to the original time series in descending order. These IMFs are then fed into the self-boosted model one at a time, from the most similar to the least similar. To reflect the influence of each IMF, we calculate RMSE coefficients from the measured RMSEs. The RMSE coefficient $coeff_i$ for the $i$-th IMF is computed as:
\begin{equation}
coeff_i = \frac{RMSE_i}{\sum_{j=1}^{N} RMSE_j}
\end{equation}
where $N$ is the number of IMFs and $RMSE_i$ is the RMSE after including the $i$-th IMF into the model. The lower the RMSE coefficient, the more important the intrinsic mode function is to the model's performance.
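The coefficient is just a normalization of the per-IMF RMSEs, so that the values across all IMFs sum to one; a sketch:

```python
import numpy as np

def rmse_coefficients(rmses):
    """Normalize per-IMF RMSEs; a lower coefficient means a more important IMF."""
    rmses = np.asarray(rmses, dtype=float)
    return rmses / rmses.sum()
```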
Figure \ref{fig:performance-imfs} presents the RMSE coefficient after adding each IMF one at a time on the three datasets. For the electricity consumption and air temperature datasets, adding more IMFs up to the red point (where an IMF is first added for multi-view learning) reduces the RMSE coefficient, indicating that multi-task learning improves the model's performance. Meanwhile, the exchange rates dataset shows a slight drop in performance after adding the third IMF. From the red point onward, the RMSE coefficient decreases on the electricity consumption dataset, increases on the air temperature dataset, and fluctuates on the exchange rates dataset. These behaviors indicate that some IMFs affect the model's performance negatively while others affect it positively. This suggests that a better feature selection strategy could select more suitable intrinsic mode functions for both multi-task and multi-view learning: we could drop the IMFs that increase the RMSE coefficient and keep those that decrease it. In other words, the overall performance of the model improves even with simple feature selection via the k-means clustering algorithm, but a better selection strategy could enhance it further.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we presented a novel self-boosted deep learning model for time series forecasting. The proposed model co-trains multi-task learning and multi-view learning to enhance the forecasting performance. The learning features come from intrinsic mode functions which are generated by the ensemble empirical mode decomposition (EEMD) method in the signal processing domain. The multi-task learning part learns from related intrinsic mode functions while the multi-view learning component learns from less related ones. Three public datasets: electricity consumption, air temperature and exchange rates are used to evaluate the forecasting results. The experimental results demonstrate that our proposed self-boosted model outperforms several state-of-the-art baseline methods on all the datasets. Future work will continue to explore the intrinsic mode functions selection strategy, so that the negative impact on the model performance will be removed, and the network computation can be more efficient.
\bibliographystyle{aaai}
\section{Introduction}
\label{sec:intro}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figure/teaser1.pdf}
\caption{Performance comparison between K-Net and Mask-RCNN. K-Net can outperform Mask-RCNN on large-scale datasets (COCO-full). However, on small datasets (the right three), it cannot perform as well as Mask-RCNN since it is hard to learn localization and shape priors. Our proposed unsupervised pre-training method based on saliency prompt not only boosts the vanilla K-Net significantly, but also achieves performance comparable to Mask-RCNN.}
\label{fig:tease}
\end{figure}
Modern CNN models address the instance segmentation task in an indirect way, by defining the localization problem on a large set of proposals~\cite{maskrcnn}, window centers~\cite{tensormask, instanceFCN}, or location-based masks~\cite{fcos,solov1,solov2}.
A typical example is Mask-RCNN\cite{maskrcnn}, which generates candidate bounding boxes using a well-designed region proposal network.
Although this paradigm makes localization learning easily optimized,
it still relies on the manually-designed non-maximum suppression (NMS) as post-processing to remove duplicated predictions.
Based on a state-of-the-art object detection model, DETR~\cite{detr}, a few Query-based End-to-end Instance Segmentation (QEIS) models~\cite{queryinst,mask2former,maskformer,maxdeeplab,istr,knet} have been proposed to perform instance segmentation in a new way.
Unlike CNN-based methods which usually require a large set of proposals, QEIS models use dynamic queries/kernels to automatically encode object localization knowledge with different locations and object shapes.
This design effectively eliminates hand-crafted anchors and post-processing like NMS.
However, due to the intrinsic dynamic attribute, the kernels are forced to learn general object spatial distribution and shape priors in a data-driven manner so that they can fit any input image.
This makes QEIS models require a much larger amount of training data and a much longer training time to achieve competitive performance with CNN-based methods.
In low-data regimes~\cite{detreg}, QEIS models suffer much more significant performance drops than CNN-based methods, as shown in Figure~\ref{fig:tease}. Here we take K-Net~\cite{knet} as the typical example of QEIS models and compare it with Mask-RCNN.
That being said, the potential of QEIS models is still enormous: once good localization and shape priors are learned, they can perform on par with or even outperform CNN-based methods with a much more concise pipeline. This makes us ask how we can help QEIS models learn localization and shape priors quickly, especially in low-data regimes.
A promising solution is to adopt unsupervised pre-training, which requires no extra data and no modification to existing model architectures. However, most existing unsupervised pre-training methods~\cite{detreg,updetr,swav,densecl,mocov2} are only used for the backbone and cannot benefit the instance segmentation prediction heads, where localization and shape priors are exactly encoded.
In the object detection field, some works~\cite{updetr,detreg} do pre-train the full detection architecture. However, they use pseudo bounding boxes for training, many of which do not contain any object inside and hence cannot generate pseudo instance masks for instance segmentation.
FreeSOLO~\cite{wang2022freesolo} is the first pre-training method specifically designed for instance segmentation. Yet it mainly focuses on generating pseudo masks and directly uses them to supervise model training. Such an approach still learns the object localization and shape priors in a data-driven manner, and hence requires tedious steps to generate high-quality pseudo masks.
To address these problems, we present a novel unsupervised pre-training method for QEIS models. Inspired by recent advances in prompting in NLP and vision tasks~\cite{bert,clip,coop,cooop,vpt}, we propose to directly inject localization and shape priors into the kernels using our proposed \textbf{Saliency Prompt (SP)}.
The prompts are generated by saliency masks which indicate potential objects, and then are used to decorate the kernels for injecting location and shape knowledge.
In detail, our saliency prompt involves two essential parts: \textit{saliency} and \textit{prompt}:
First, a \textbf{Saliency Mask Proposal} generation method is responsible for generating saliency-level pseudo masks from unlabeled images.
Instead of directly learning from noisy pseudo masks, we use them to generate corresponding region features and then derive prompts from them.
Next, a \textbf{Prompt-Kernel Matching} module matches the saliency prompts to the kernels and then injects the prior knowledge encoded in the prompts into the best-matched kernels.
Furthermore, we also propose a \textbf{Kernel Supervision} scheme to supervise the model learning at the kernel level to gain kernel robustness.
See Figure~\ref{fig:overview} for an overview.
In our experiments, our method surpasses all the existing unsupervised pre-training algorithms on low-data regimes on four datasets.
It can be used as a plug-and-play pre-training step for most QEIS methods and
enables faster convergence speed and better performance without any increase in parameters or memory.
Most importantly, our method achieves two \textit{desiderata} on downstream tasks:
(a) it leads to the same convergence speed as CNN-based methods;
(b) it gains comparable or even better performance than CNN-based methods on most downstream datasets.
In ablations, we find that our method is highly tolerant of pseudo-mask quality. As such, we can achieve performance improvements without a sophisticated and time-consuming pseudo-mask generation method such as the one in FreeSOLO\cite{wang2022freesolo}.
\section{Related Work}
\subsection{Query-Based End-to-End Instance Segmentation}
With the development of Transformer, a brand new object detector based on object-queries is proposed by DETR\cite{detr}.
It considers the detection task as a set prediction problem, which makes DETR become the first end-to-end model without any human-crafted anchors or NMS.
Subsequently, many works\cite{queryinst,mask2former,maskformer,maxdeeplab,istr,knet} follow this paradigm to tackle the instance segmentation task; we call them Query-Based End-to-End Instance Segmentation (QEIS) methods.
In the prediction head, instead of proposing dense object proposals, QEIS models use queries/tokens/kernels to capture individual instance features on the global scale, hence are more flexible and enable end-to-end learning. Meanwhile, various improvements have been made by different QEIS models.
Inspired by the idea of SOLO-v2 \cite{solov2}, K-Net\cite{knet} generates convolution kernels to predict masks directly.
This kernel-mask paradigm enables K-Net to segment both semantic and instance categories consistently by a group of learnable kernels.
QueryInst\cite{queryinst} builds upon Sparse-RCNN \cite{sparse} and adopts parallel supervision on dynamic mask heads.
Mask2Former\cite{mask2former} improves the efficiency and accuracy of the prediction head by using masked-cross-attention and multi-scale feature fusion.
In this work, we take K-Net as a typical example of QEIS models and develop our unsupervised pre-training method upon it. However, our method can also be freely deployed on other QEIS-style models and improve their performance in low-data regimes, as demonstrated in our experiments.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.95\linewidth]{figure/overview1.pdf}
\caption{Overview of our proposed pre-training framework.
Modules in {\color[HTML]{F56B00} orange} denote our pre-training method with the corresponding supervision. As can be seen, our method is parameter-free.
{\color[HTML]{A9C4EB} Blue} and {\color[HTML]{CCCCCC} gray} modules denote a vanilla QEIS model; here we use K-Net as an example. }
\label{fig:overview}
\vspace{-0.2in}
\end{figure*}
\subsection{Unsupervised Pre-training}
Unsupervised pre-training aims to pre-train deep models with carefully designed pretext tasks for boosting the model performance in downstream tasks.
Most state-of-the-art unsupervised pre-training methods, such as DenseCL\cite{densecl}, SwAV\cite{swav}, and MoCo-v2\cite{mocov2}, are only used to pre-train backbones and ignore the prediction head, and thus cannot solve the data-hungry problem of QEIS models.
UP-DETR\cite{updetr} and DETReg\cite{detreg} creatively build end-to-end unsupervised learning frameworks based on DETR's query-object mechanism.
Specifically, UP-DETR\cite{updetr} randomly crops image patches from images as pseudo labels. DETReg\cite{detreg} uses region proposals generated by ResNet and SwAV as pseudo labels.
However, none of these methods works for segmentation tasks, as their pseudo labels may contain only background and thus provide no object or shape information. Such pseudo labels can largely mislead the training of segmentation models.
The recent work FreeSOLO\cite{wang2022freesolo} concentrates on generating good-quality pseudo masks inspired by SOLOv2 \cite{solov2}, making it the first time that pseudo labels can be used for training instance segmentation models.
However, FreeSOLO requires multiple steps, such as pre-training and self-training, to generate pseudo masks. It only considers using the pseudo masks as labels to supervise the model training, without exploring how to use them in a more efficient way.
Our experiments indicate that explicitly injecting object localization and shape priors contained in the pseudo labels can bring further improvements compared with solely using pseudo labels as the supervision.
\subsection{Prompting}
The prompting technique originates from NLP\cite{bert} and is soon transferred to the multi-modal domain\cite{clip,coop,cooop}.
It formulates downstream tasks as a ``fill-in-the-blank'' problem, such as ``A photo of a \{object\}'' in CLIP\cite{clip}.
Here ``A photo of a'' is the prompt template, which guides the language model to elicit useful information from the pretrained model and predict the ``\{object\}''.
CoOp\cite{coop} replaces man-defined language prompts with a set of learnable vectors.
Based on CoOp, CoCoOp\cite{cooop} further generates input-conditional prompts for each image and combines them with the existing learnable prompts.
To conclude, prompting has proven effective in the language and vision-language domains.
Most recently,
VPT \cite{vpt} first integrates prompting into pure vision tasks.
It prepends several learnable prompt tokens to the patch tokens of the frozen ViT\cite{vit} model to fit it for a variety of downstream tasks and datasets without the need of finetuning the whole ViT model.
We find that most previous works use prompts to improve the performance of pre-trained models on downstream tasks. That is to say, they do not use prompts in the pre-training stage and only utilize them in the finetuning stage. Contrary to them, we ingeniously tailor the prompting mechanism for our pre-training task.
Our saliency prompts are only used in the pre-training stage to help the QEIS model learn a better prediction head and then are removed in the finetuning stage.
\section{Methodology}
\label{sec:method}
In this section, we take K-Net \cite{knet} as a typical example of QEIS models and present our proposed unsupervised pre-training method upon it. We first briefly review the K-Net model, and then show how to use our proposed saliency prompt for pre-training K-Net.
\subsection{K-Net Review}
\label{subsec:knet}
In most instance segmentation scenarios, the number of instances to segment is usually assumed to be unknown (average 7.7 in COCO).
K-Net dynamically encodes the instance-level information into $N$ kernels\footnote{$N$ is defined to be larger than the maximum instance number in images.}, each of which is responsible for finding the pixels belonging to its corresponding instance.
In particular, given the feature maps $\mathbf{F} \in \mathbb{R}^{C \times H \times W}$ of the input image and the generated kernels $\mathbf{K} \in \mathbb{R}^{N \times C}$, the instance segmentation masks $\mathbf{M} \in \mathbb{R}^{N \times H \times W}$ can be obtained by performing convolution on $\mathbf{F}$ with $\mathbf{K}$, denoted as
\begin{equation}
\mathbf{M}=\sigma(\mathbf{K} * \mathbf{F}),
\end{equation}
where $\sigma$ means the sigmoid activation function and $*$ denotes the convolution operation.
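As a minimal NumPy sketch (K-Net implements this step as a $1\times1$ convolution in PyTorch; the function and variable names here are illustrative, not taken from the K-Net codebase), the kernel-to-mask step reads:

```python
import numpy as np

def predict_masks(kernels, features):
    """Apply N instance kernels to feature maps as a 1x1 convolution.

    kernels:  (N, C) array -- one kernel per potential instance.
    features: (C, H, W) array -- image feature maps F.
    Returns soft masks M of shape (N, H, W) with values in (0, 1).
    """
    # 1x1 convolution of each kernel over the feature maps:
    # logits[n, h, w] = sum_c K[n, c] * F[c, h, w]
    logits = np.einsum("nc,chw->nhw", kernels, features)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid activation
```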
For kernel generation, K-Net designs a dynamic kernel update mechanism that uses the segmented masks and the features $\mathbf{F}$ to enhance the kernels $\mathbf{K}$ in an iterative way. At each iteration step $i$, the kernel update is formulated as
\begin{equation}
\mathbf{K}^i, \mathbf{M}^i=\mathcal{F}^i(\mathbf{M}^{i-1}, \mathbf{K}^{i-1}, \mathbf{F}).
\end{equation}
The initial kernels $\mathbf{K}^0$ are randomly initialized and then learned during training, encoding general image-agnostic localization and shape priors. The kernels are then updated by $\mathcal{F}^i$ to receive image-specific information.
Therefore, the learning of $\mathbf{K}^0$ can only be driven by the final instance segmentation loss and the localization loss, hence needs lots of training data and time to learn general priors.
\subsection{Saliency Mask Proposal Generation}
Most previous deep unsupervised segmentation methods generate pseudo masks with unsupervised algorithms for supervising the model training. Here we also follow the same pipeline and additionally use the pseudo masks for generating prompts.
There exist various unsupervised algorithms for generating pseudo masks, such as selective search \cite{selective}, random proposals\cite{updetr}, and FreeSOLO\cite{wang2022freesolo}. Here we adopt the saliency mechanism \cite{liu2020picanet,liu2021visual,zhuge2022salient,fang2021densely} for its simplicity.
Our main idea is to generate dense saliency maps through foreground-background separation modeling.
We first use a self-supervised pre-trained model, such as ResNet-50 \cite{resnet} trained with the DenseCL\cite{densecl} algorithm, as the backbone to extract image features.
Then, dense feature similarity is calculated upon the output of the backbone network to generate dense saliency maps.
Specifically, given the feature maps $\mathbf{X} \in \mathbb{R}^{H \times W \times D}$,
we first uniformly sample $H' \times W'$ foreground seeds and generate the seed features $\mathbf{S} \in \mathbb{R}^{H' \times W' \times D}$ using average pooling on $\mathbf{X}$.
Next, dense saliency is computed by using the feature of each seed
$\mathbf{S}_{i,j} \in \mathbb{R}^D$ as the convolution weights to convolve the feature $\mathbf{X}$.
The operation for generating a saliency map $\mathbf{Y}_{i, j}$ for the seed $(i,j)$ is formulated as
\begin{equation}
\mathbf{Y}_{i, j}=\operatorname{Conv}\left(\mathbf{S}_{i,j}, \mathbf{X}\right)\in \mathbb{R}^{H \times W}.
\end{equation}
The convolution operation calculates the similarity between the weights and the feature at each location. Locations with large convolution activations are similar to the foreground seed $(i,j)$ and hence belong to the salient foreground, while those with small activations belong to the seed's background. Then,
we linearly normalize the saliency map $\mathbf{Y}_{i, j}$ to the range [0,1] and separate the foreground and background by using a threshold to binarize it, thus obtaining the saliency mask for the seed $(i,j)$. Finally, each foreground seed in $\mathbf{S}$ has a saliency mask, which usually highlights the coarse region of the foreground object this seed belongs to.\footnote{If one seed does not belong to any object, usually we get a null saliency mask.} However, different saliency masks may indicate the same object since their seeds may all belong to this object.
Hence, we further use the mask NMS to filter out overlapping masks. The whole process can be formulated as
\begin{equation}
\mathbf{Z}=\operatorname{NMS}\left(\operatorname{Thres} \left(\operatorname{Norm}\left(\mathbf{Y}\right)\right)\right),
\end{equation}
where \textbf{Z} denotes the final saliency mask proposals and $\operatorname{Norm}$ means linear normalization. The process of generating saliency mask proposals is also shown in Figure \ref{fig:overview}.
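The whole proposal pipeline can be sketched in NumPy as follows. This is a simplified illustration: `grid`, `thresh`, and `iou_max` are assumed hyperparameters, and the grid-cell seed sampling stands in for the paper's uniform $H' \times W'$ seed sampling with average pooling.

```python
import numpy as np

def saliency_mask_proposals(X, grid=(2, 2), thresh=0.5, iou_max=0.5):
    """Sketch of the saliency mask proposal pipeline.

    X: (H, W, D) backbone feature map.
    Returns a list of binary (H, W) saliency masks after mask NMS.
    """
    H, W, _ = X.shape
    gh, gw = grid
    masks = []
    for i in range(gh):
        for j in range(gw):
            # Foreground seed: average-pool cell (i, j) of a gh x gw grid.
            cell = X[i * H // gh:(i + 1) * H // gh,
                     j * W // gw:(j + 1) * W // gw]
            seed = cell.mean(axis=(0, 1))                   # S_{i,j}: (D,)
            Y = X @ seed                                    # dense similarity, (H, W)
            Y = (Y - Y.min()) / (Y.max() - Y.min() + 1e-8)  # normalize to [0, 1]
            masks.append(Y > thresh)                        # binarize
    # Mask NMS: drop masks that overlap an already-kept mask too much.
    kept = []
    for m in sorted(masks, key=lambda m: m.sum(), reverse=True):
        if all((m & k).sum() / max((m | k).sum(), 1) < iou_max for k in kept):
            kept.append(m)
    return kept
```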
\subsection{Prompt-Kernel Matching}
Figure \ref{fig:overview} shows the details of our proposed Prompt-Kernel Matching, which has two key steps: Prompt Generation and Cosine Similarity Based Matching.
\paragraph{Prompt generation.}
Given the saliency mask proposals $\mathbf{Z}$, we use their tightest bounding boxes to crop the image feature maps output by an FPN \cite{fpn}. We denote the cropped features as $\mathbf{f} = \{\mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_L\}$, where $L$ is the number of masks in $\mathbf{Z}$ ($L$ can vary across images).
Then, we use average pooling to convert the features $\mathbf{f}$ into prompts:
\begin{equation}
\mathbf{P} = \text{Avg}(\mathbf{f})\in \mathbb{R}^{L \times C},
\end{equation}
where $C$ is also the channel number of the FPN feature and Avg means average pooling along the spatial dimension.
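A minimal NumPy sketch of this step (names are illustrative), turning binary masks into prompt vectors via tightest-box cropping and spatial average pooling:

```python
import numpy as np

def masks_to_prompts(masks, fpn_feat):
    """Convert saliency masks into prompt vectors.

    masks:    list of (H, W) boolean saliency masks Z.
    fpn_feat: (C, H, W) FPN feature map at the same resolution.
    Returns (L, C) prompts P: average-pooled features inside each
    mask's tightest bounding box.
    """
    prompts = []
    for m in masks:
        ys, xs = np.nonzero(m)
        if ys.size == 0:          # null mask: no object, skip
            continue
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        crop = fpn_feat[:, y0:y1, x0:x1]          # (C, h, w) box crop
        prompts.append(crop.mean(axis=(1, 2)))    # spatial average pooling
    if not prompts:
        return np.empty((0, fpn_feat.shape[0]))
    return np.stack(prompts)
```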
\paragraph{Cosine similarity based matching.}
Each prompt encodes the localization and shape priors for an individual object and can be injected into one of the initial kernels $\mathbf{K}^0$ of K-Net.
This raises an interesting question: which prompt should be injected into which kernel?
A straightforward way is to randomly or sequentially assign the $L$ prompts to the $N$ kernels. However, as found in DETR \cite{detr} and K-Net \cite{knet}, under the dynamic training scheme different kernels/queries encode the localization and shape priors of different image regions and objects with different shapes, with each kernel mainly learning a specific pattern of similar object shapes and locations. As a result, random or sequential assignment may inject totally different object localization and shape information into the same kernel for different training samples, making the learning of the initial kernels very unstable.
To this end, we propose a novel prompt-kernel matching scheme based on cosine similarity to assign the best-matched prompt to each kernel.
Specifically, given the $L$ prompts $\mathbf{P} = \{\mathbf{P}_l\in \mathbb{R}^{C}\}_{l=1}^L$ and the $N$ initial kernels $\mathbf{K}^0 = \{\mathbf{K}^0_n\in \mathbb{R}^{C}\}_{n=1}^N$, we compute the cosine similarity between them to build the similarity matrix $\mathbf{E} \in \mathbb{R}^{N\times L}$:
\begin{equation}
\mathbf{E}_{n,l} = \frac{\mathbf{K}^0_n}{\|\mathbf{K}^0_n\|_2} \cdot \frac{\mathbf{P}_l}{\left\|\mathbf{P}_l\right\|_2}.
\end{equation}
Then, for each kernel $n$, we select the best-matched prompt index $\delta(n)$ with the largest similarity score:
\begin{equation}
\delta(n) = \mathop{\arg\max}\limits_{l\in [1,...,L]}\mathbf{E}_{n,l}.
\end{equation}
Next, the best-matched prompt $\mathbf{P}_{\delta(n)}$ is injected into the kernel $n$ via summation:
\begin{equation}
\mathbf{K}^{0'}_n = \mathbf{K}^0_n + \mathbf{P}_{\delta(n)}.
\end{equation}
Finally, the decorated initial kernels $\mathbf{K}^{0'}$ are fed into the prediction head of K-Net.
As such, each kernel can get the best-matched localization and shape awareness to ease its learning.
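The matching and injection steps above can be sketched in a few lines of NumPy (function name illustrative):

```python
import numpy as np

def inject_prompts(kernels, prompts):
    """Cosine-similarity matching of prompts to kernels, then injection.

    kernels: (N, C) initial kernels K^0.
    prompts: (L, C) saliency prompts P.
    Returns the decorated kernels K^{0'} = K^0 + P_{delta(n)}.
    """
    Kn = kernels / np.linalg.norm(kernels, axis=1, keepdims=True)
    Pn = prompts / np.linalg.norm(prompts, axis=1, keepdims=True)
    E = Kn @ Pn.T                   # (N, L) cosine similarity matrix
    best = E.argmax(axis=1)         # delta(n): best-matched prompt per kernel
    return kernels + prompts[best]  # inject by summation
```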
\subsection{Loss Function and Kernel Supervision}
\begin{table}[t]
\scriptsize
\caption{
Instance segmentation fine-tune results on COCO with 5\% and 10\% annotated images based on K-Net. \vspace{-8pt}
}
\centering
\begin{tabular}{cccccccc}
\hline
& Pre-train & mAP & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\ \hline
& Img. Sup. & 14.8 & 29.1 & 13.7 & 4.3 & 15.5 & 24.4 \\
& DenseCL & 16.7 & 31.2 & 15.9 & 5.1 & 17.5 & 27.7 \\
& SwAV & 15.7 & 30.3 & 14.7 & 4.6 & 25.9 & 16.6 \\
& MoCo-v2 & 17.0 & 32.0 & 16.2 & 5.3 & 18.3 & 27.1 \\
\multirow{-5}{*}{\rotatebox{90}{5\% images}} & \cellcolor[HTML]{EFEFEF}\textbf{SP(ours)} & \cellcolor[HTML]{EFEFEF}\textbf{19.9} & \cellcolor[HTML]{EFEFEF}\textbf{35.7} & \cellcolor[HTML]{EFEFEF}\textbf{19.9} & \cellcolor[HTML]{EFEFEF}\textbf{6.0} & \cellcolor[HTML]{EFEFEF}\textbf{21.0} & \cellcolor[HTML]{EFEFEF}\textbf{32.6} \\ \hline
& Img. Sup. & 19.1 & 35.7 & 18.2 & 6.7 & 20 & 31.6 \\
& DenseCL & 20.3 & 36.4 & 20.3 & 6.6 & 21.8 & 33.6 \\
& SwAV & 18.9 & 34.8 & 18.3 & 6.8 & 20.8 & 30.6 \\
& MoCo-v2 & 20.7 & 37.7 & 20.4 & 6.4 & 22.1 & 34.2 \\
\multirow{-5}{*}{\rotatebox{90}{10\% images}} & \cellcolor[HTML]{EFEFEF}\textbf{SP(ours)} & \cellcolor[HTML]{EFEFEF}\textbf{23.5} & \cellcolor[HTML]{EFEFEF}\textbf{41.4} & \cellcolor[HTML]{EFEFEF}\textbf{23.7} & \cellcolor[HTML]{EFEFEF}\textbf{7.9} & \cellcolor[HTML]{EFEFEF}\textbf{24.8} & \cellcolor[HTML]{EFEFEF}\textbf{38.6} \\ \hline
\end{tabular}
\vspace{-0.05in}
\label{tab:knet-coco}
\end{table}
We use the saliency mask proposals $\mathbf{Z}$ as pseudo labels to perform bipartite matching with the $N$ predictions of the kernels and then use the set prediction loss to supervise the pre-training, the same as in the original K-Net \cite{knet}.
The overall loss function $\mathcal{L}_K$ of K-Net is composed of three components: the focal loss \cite{focal} $\mathcal{L}_{cls}$ for classification, and the Dice loss $\mathcal{L}_{dice}$ and cross-entropy loss $\mathcal{L}_{ce}$ for segmentation, all of which supervise only the predictions. Since the predictions are mainly generated by the kernels, we argue that we can directly supervise the kernels as a supplementary loss.
Specifically, for each mask proposal $\mathbf{Z}_l$, where $l\in [1,...,L]$, we can find its corresponding saliency seed feature $\mathbf{S}_{l} \in \mathbb{R}^D$, which encodes the representative object information of this mask. Then, we transform its channel number to $C$ for supervising the embedding of the kernel whose prediction is matched with the proposal $\mathbf{Z}_l$ after bipartite matching. We denote the index of the kernel matched with $\mathbf{Z}_l$ as $n_l$. The kernel supervision loss can then be formulated as
\begin{equation}
\mathcal{L}_{ker} = \sum_{l=1}^{L} \sum_i (1 - \operatorname{Cos}(\operatorname{Linear}(\mathbf{S}_{l}), \mathbf{K}^i_{n_l})),
\end{equation}
where the supervision is adopted for every K-Net kernel update iteration step $i$ and summed over all mask proposals. $\operatorname{Linear}$ means a linear transformation to reduce the channel number to $C$ and $\operatorname{Cos}$ denotes the cosine similarity.
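For one update iteration, this loss can be sketched in NumPy as follows (the projection matrix `W` stands in for the learned $\operatorname{Linear}$ layer; names are illustrative):

```python
import numpy as np

def kernel_supervision_loss(seed_feats, matched_kernels, W):
    """L_ker for a single kernel update iteration.

    seed_feats:      (L, D) saliency seed features S_l.
    matched_kernels: (L, C) kernels matched to each proposal after
                     bipartite matching (K^i_{n_l}).
    W:               (D, C) projection reducing D channels to C.
    """
    target = seed_feats @ W                                   # Linear(S_l): (L, C)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    k = matched_kernels / np.linalg.norm(matched_kernels, axis=1, keepdims=True)
    cos = (t * k).sum(axis=1)          # cosine similarity per matched pair
    return (1.0 - cos).sum()           # summed over all mask proposals
```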
Our final loss can be defined as
\begin{equation}
\mathcal{L}_K = \lambda_{cls}\mathcal{L}_{cls} + \lambda_{dice}\mathcal{L}_{dice} + \lambda_{ce}\mathcal{L}_{ce} +\lambda_{ker} \mathcal{L}_{ker},
\end{equation}
where $\lambda_{(\cdot)}$ are corresponding loss weights. Since our pseudo labels are class-agnostic, we use binary classification (foreground \textit{v.s.} background) for $\mathcal{L}_{cls}$.
\section{Experiments}
\subsection{Implementation Details}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figure/knet_10image.pdf}
\vspace{-8pt}
\caption{AP learning curves of K-Net with different pre-training methods on COCO with 10\% annotated images.}
\label{fig:10image}
\vspace{-8pt}
\end{figure}
\label{sec:exp}
\noindent \textbf{Pre-training setting.}
ResNet-50 \cite{resnet} is applied for all models as the backbone and pre-trained with the DenseCL algorithm.
We adopt the AdamW optimizer with $0.05$ weight decay and 1,000 steps of linear step warmup.
As for data augmentation, we simply apply random flipping.
The model is trained with a batch size of 96 for 12 epochs on 8 A100 GPUs.
The initial learning rate is set to $1\times 10^{-4}$ and decreased by 0.1 after 8 and 11 epochs.
As for the hyperparameters of our model, we set the number of kernels/queries $N$ to 100, and the loss weights $\lambda_{cls}=2$, $\lambda_{dice}=4$, $\lambda_{ce}=1$, and $\lambda_{ker}=1$.
\noindent \textbf{Fine-tuning setting.}
All models are trained with a batch size of 96 on 8 A100 GPUs.
Random flipping and rotation are used as data augmentation.
Following the open-source MMDetection\cite{mmdetection}, we use the same hyperparameters for QEIS and CNN-based models.
For QEIS models, we apply the same training strategy as in the pre-training stage except for the number of training epochs, which we report in the experiment tables.
For CNN-based models, we apply SGD as the optimizer with weight decay and momentum. The learning rate is 0.02, momentum is set to 0.9 and weight decay is 0.0001.
\begin{table*}[t]
\scriptsize
\centering
\caption{Instance segmentation fine-tune results on Cityscapes and CTW1500.}\vspace{-8pt}
\label{tab:knet-other}
\begin{tabular}{cc|ccccccc|ccccccc}
\hline
& & \multicolumn{7}{c|}{Cityscapes} & \multicolumn{7}{c}{CTW1500} \\
\multirow{-2}{*}{Model} & \multirow{-2}{*}{Pre-train} & Epoch & AP & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ & Epoch & AP & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\ \hline
Mask RCNN\cite{maskrcnn} & Img. Sup. & 24 & 30 & 57.4 & - & 8.3 & 27.9 & 49 & 96 & 34.5 & 69.8 & 32.1 & 25.9 & 40.4 & 36.8 \\
SOLO-v2\cite{solov2} & Img. Sup. & 24 & 24.9 & 44.4 & - & 1.8 & 20.1 & 50.4 & 96 & 27.9 & 59.6 & 23.3 & 8.7 & 30.2 & 41 \\ \hline
& Img. Sup. & 24 & 24.8 & 47.4 & - & 4.8 & 19.9 & 43.1 & 96 & 9.7 & 26.5 & 6.1 & 3.0 & 9.2 & 19.2 \\
& DenseCL & 24 & 28 & 52.2 & - & 6.6 & 25.2 & 55.2 & 96 & 18.9 & 42.6 & 15.1 & 7.0 & 18.8 & 32.5 \\
& SwAV & 24 & 27.4 & 52.1 & - & 5.3 & 22.7 & 49.2 & 96 & 9.1 & 25.8 & 4.8 & 2.7 & 9 & 19.8 \\
& MoCo-v2 & 24 & 28.2 & 51.2 & - & 5.9 & 26.4 & 52.7 & 96 & 13.3 & 32.2 & 10 & 4.3 & 13.3 & 24.2 \\
\multirow{-5}{*}{K-Net} & \cellcolor[HTML]{EFEFEF}\textbf{SP(ours)} & \cellcolor[HTML]{EFEFEF}\textbf{24} & \cellcolor[HTML]{EFEFEF}\textbf{30.6} & \cellcolor[HTML]{EFEFEF}\textbf{55.4} & \cellcolor[HTML]{EFEFEF}\textbf{-} & \cellcolor[HTML]{EFEFEF}5.8 & \cellcolor[HTML]{EFEFEF}\textbf{27} & \cellcolor[HTML]{EFEFEF}\textbf{54} & \cellcolor[HTML]{EFEFEF}\textbf{96} & \cellcolor[HTML]{EFEFEF}\textbf{34.6} & \cellcolor[HTML]{EFEFEF}\textbf{71.1} & \cellcolor[HTML]{EFEFEF}\textbf{31} & \cellcolor[HTML]{EFEFEF}18.0 & \cellcolor[HTML]{EFEFEF}\textbf{36.1} & \cellcolor[HTML]{EFEFEF}\textbf{45.7} \\ \hline
\end{tabular}
\vspace{-0.05in}
\end{table*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figure/knet_cityscapes.pdf}
\caption{AP learning curves of Cityscapes}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figure/knet_ctw1500.pdf}
\caption{AP learning curves of CTW1500}
\end{subfigure}
\vspace{-4pt}
\caption{AP learning curves of Mask-RCNN, vanilla K-Net and our method on Cityscapes and CTW1500.}
\vspace{-10pt}
\label{fig:curve-knet-other}
\end{figure*}
\noindent \textbf{Datasets.} We pretrain our QEIS models on MS COCO \cite{lin2014coco} unlabeled 2017 split and fine-tune on multiple datasets, including MS COCO, Cityscapes \cite{cityscapes} and CTW1500 \cite{ctw1500}.
{MS COCO} is a popular instance segmentation dataset that contains 164k labeled images, where objects from 80 object categories are annotated with dense pixel segmentation.
{Cityscapes} is a popular instance segmentation dataset that focuses on semantic understanding of urban street scenes. It contains 5,000 finely annotated large-scale images.
{CTW1500} is a wild-scene dataset that focuses on text detection and segmentation. It contains 1,500 images with dense annotation, which is also a typical low-data regime.
\subsection{Fine-tune Results on MS COCO}
To evaluate our performance in low-data regimes, we split the MS COCO train2017 dataset into two different training subsets:
\begin{itemize}
\item COCO with 10\% fully annotated images, which contains 12k+ images, 80k+ annotated masks.
\item COCO with 5\% fully annotated images, which contains 5k+ images, 43k+ annotated masks.
\end{itemize}
Table \ref{tab:knet-coco} shows the comparison results on MS COCO with different pre-training methods.
Img. Sup. denotes the ImageNet supervised pre-training.
As can be seen, vanilla K-Net performs poorly in low-data regimes.
However, our pre-training method significantly boosts its performance compared with ImageNet supervised pre-training: up to \textbf{+5.1} AP on 5\% COCO and \textbf{+4.4} AP on 10\% COCO.
Moreover, from the AP learning curves on 10\% COCO images shown in Figure \ref{fig:10image}, we observe that our method converges much faster than other methods and gains much higher AP at the beginning of the fine-tuning stage.
These results suggest that our method has already learned shape and localization priors in the pre-training stage.
\subsection{Fine-tune Results on Other Datasets}
In this part, we test our method in wild scenes with unseen targets (CTW1500\cite{ctw1500}) and small objects (Cityscapes\cite{cityscapes}). The comparison results are shown in
Table \ref{tab:knet-other}.
As can be seen,
our method outperforms the ImageNet-supervised method by \textbf{+5.8} AP and
achieves better performance (\textbf{+0.6} AP) than Mask-RCNN, a representative CNN-based model, on Cityscapes.
On CTW1500, our method surpasses the supervised and unsupervised methods by \textbf{+24.9} AP and \textbf{+15.7} AP, respectively, and achieves comparable performance (\textbf{+0.1} AP) with Mask-RCNN.
These experiments further demonstrate that our method enables K-Net to learn localization and shape priors rather than simply memorizing objects during the pre-training stage. When compared with Mask-RCNN in small-object scenarios in terms of $AP_{S}$, our method does not show superior results. We conjecture two reasons: 1) the intrinsic deficiency of the QEIS paradigm---both vanilla SOLOv2 and K-Net perform much worse than Mask-RCNN on small objects; 2) Saliency Mask Proposals mainly provide large-scale pseudo labels, which makes our kernels pay more attention to big objects rather than small ones.
Figure \ref{fig:curve-knet-other} shows the AP learning curves on Cityscapes and CTW1500.
Although QEIS models like K-Net lag far behind traditional CNN models like Mask-RCNN in both convergence speed and final accuracy,
when equipped with our method, K-Net converges much faster than with ImageNet supervised pre-training and achieves learning curves comparable to Mask-RCNN's.
\subsection{Deployed on QueryInst and Mask2Former}
\begin{table*}[t]
\caption{
Instance segmentation fine-tune results on QueryInst and Mask2Former.
}\vspace{-8pt}
\scriptsize
\centering
\begin{tabular}{cc|ccccccc|ccccccc}
\hline
& & \multicolumn{7}{c}{CTW1500} & \multicolumn{7}{c}{Cityscapes} \\
\multirow{-2}{*}{Model} & \multirow{-2}{*}{Pre-train} & Epoch & AP & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ & Epoch & AP & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\ \hline
& Img. Sup. & 80 & 38.8 & 67.6 & 41.6 & 15.6 & 41.2 & 57.4 & 24 & 29.1 & 52.4 & - & 5.6 & 23.9 & 55 \\
& DenseCL & 80 & 43.2 & 71.6 & 48.5 & 18.4 & 47.6 & 59.9 & 24 & 27.5 & 48.9 & - & 5.2 & 23.8 & 53.7 \\
& SwAV & 80 & 41.2 & 69.1 & 46.1 & 17.6 & 45.2 & 58.1 & 24 & 30.3 & 53.3 & - & 5.4 & 23.3 & 59 \\
& MoCo-v2 & 80 & 43.3 & 71.2 & 49.2 & 18.9 & 47.6 & 59.4 & 24 & 30.7 & 54.3 & - & 5.4 & 25.5 & 56.4 \\
\multirow{-5}{*}{Mask2Former\cite{mask2former}} & \cellcolor[HTML]{EFEFEF}\textbf{SP(ours)} & \cellcolor[HTML]{EFEFEF}\textbf{20} & \cellcolor[HTML]{EFEFEF}\textbf{52.9} & \cellcolor[HTML]{EFEFEF}\textbf{83.4} & \cellcolor[HTML]{EFEFEF}\textbf{62.1} & \cellcolor[HTML]{EFEFEF}\textbf{29.4} & \cellcolor[HTML]{EFEFEF}\textbf{56.4} & \cellcolor[HTML]{EFEFEF}\textbf{67.6} & \cellcolor[HTML]{EFEFEF}\textbf{24} & \cellcolor[HTML]{EFEFEF}\textbf{31.8} & \cellcolor[HTML]{EFEFEF}\textbf{55.8} & \cellcolor[HTML]{EFEFEF}\textbf{-} & \cellcolor[HTML]{EFEFEF}5.1 & \cellcolor[HTML]{EFEFEF}\textbf{26.5} & \cellcolor[HTML]{EFEFEF}\textbf{59.0} \\ \hline
& Img. Sup. & 80 & 28.3 & 53.7 & 28.6 & 9.8 & 29 & 41.8 & 24 & 29.1 & 53.2 & - & 6.7 & 27.4 & 50.7 \\
& DenseCL & 80 & 31.6 & 56.7 & 33.4 & 10.4 & 32.5 & 46.6 & 24 & 30.8 & 54.7 & - & 8.6 & 28.9 & 54.5 \\
& SwAV & 80 & 24.6 & 50 & 23.1 & 8.1 & 25 & 36.3 & 24 & 30.7 & 54.4 & - & 7.9 & 28.5 & 53.9 \\
& MoCo-v2 & 80 & 31.6 & 56.8 & 32.8 & 12.6 & 32 & 45.8 & 24 & 31.4 & 54.4 & - & 8.1 & 28.4 & 56.1 \\
\multirow{-5}{*}{QueryInst\cite{queryinst}} & \cellcolor[HTML]{EFEFEF}\textbf{SP(ours)} & \cellcolor[HTML]{EFEFEF}\textbf{20} & \cellcolor[HTML]{EFEFEF}\textbf{39.2} & \cellcolor[HTML]{EFEFEF}\textbf{66.8} & \cellcolor[HTML]{EFEFEF}\textbf{43.1} & \cellcolor[HTML]{EFEFEF}\textbf{16.7} & \cellcolor[HTML]{EFEFEF}\textbf{42.2} & \cellcolor[HTML]{EFEFEF}\textbf{51.9} & \cellcolor[HTML]{EFEFEF}\textbf{24} & \cellcolor[HTML]{EFEFEF}\textbf{32.8} & \cellcolor[HTML]{EFEFEF}\textbf{57.3} & \cellcolor[HTML]{EFEFEF}\textbf{-} & \cellcolor[HTML]{EFEFEF}\textbf{8.8} & \cellcolor[HTML]{EFEFEF}\textbf{29.2} & \cellcolor[HTML]{EFEFEF}\textbf{57.0} \\ \hline
\end{tabular}
\vspace{-0.2in}
\label{tab:query-mask2former}
\end{table*}
\begin{table}[t]
\scriptsize
\caption{Ablation of {kernel supervised learning}.} \vspace{-8pt}
\begin{tabular}{cccccccc}
\hline
Model & $\mathcal{L}_{ker}$? & mAP & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\ \hline
\rowcolor[HTML]{EFEFEF}
\textbf{K-Net} & {\color[HTML]{009901}\textbf{\Checkmark}} & \textbf{23.5} & \textbf{41.4} & \textbf{23.7} & \textbf{7.9} & \textbf{24.8} & \textbf{38.6} \\
K-Net & {\color[HTML]{CB0000}\XSolidBrush} & 23.1 & 41.4 & 23.0 & 7.5 & 24.6 & 37.8 \\ \hline
\end{tabular}
\label{tab:loss_sem-ablation}
\vspace{-0.3in}
\end{table}
Besides K-Net, we further apply our pre-training method to two other QEIS methods: QueryInst\cite{queryinst} and Mask2Former\cite{mask2former}.
As shown in Table \ref{tab:query-mask2former}, with our pre-training method, QueryInst outperforms the state-of-the-art unsupervised pre-training method by \textbf{+7.6} AP on CTW1500 and by \textbf{+1.4} AP on Cityscapes.
For Mask2Former, our method achieves significant gains of \textbf{+9.6} AP over the state-of-the-art unsupervised pre-training method on CTW1500.
These results indicate that our pre-training method effectively helps the kernels/queries of QEIS models learn localization and shape priors, yielding competitive performance improvements.
\subsection{Ablation Study}
We perform an ablation analysis to understand the impact of each component of our pre-training method.
In general, we pre-train K-Net with our method on COCO unlabeled2017 for 12 epochs and then fine-tune on COCO train2017 with 10\% of the images for 48 epochs.
\noindent \textbf{Loss function.} First, we investigate the contribution of the proposed kernel supervision loss. As shown in Table \ref{tab:loss_sem-ablation}, this loss function yields a clear improvement, indicating that kernel supervision plays a complementary role to prediction supervision---using the noisy prediction supervision alone may lead to over-fitting.
\noindent \textbf{Cosine Similarity Based Matching.} Table \ref{tab:prompt-ablation} shows the evaluation results of several prompt approaches, including Random Assignment, Sequential Assignment, and Cosine Similarity based matching.
Random Assignment is called ``shuffle'' in UP-DETR\cite{updetr}; it leads to a performance drop of 0.4 AP compared to the method without any prompt.
Sequential Assignment simply expands the number of saliency prompts and attaches them to the initial kernels of K-Net, achieving a 1.7 AP improvement over Random Assignment.
Cosine Similarity based matching further surpasses Sequential Assignment by {0.5} AP.
\begin{table}[t]
\scriptsize
\caption{Ablation of different prompt approaches. `\XSolidBrush' means the model is pre-trained with pseudo labels only, without prompts.}\vspace{-8pt}
\begin{tabular}{ccccccc}
\hline
Prompt Approach & mAP & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ \\
\hline
\XSolidBrush & 21.7 & 38.7 & 21.7 & 7.2 & 23.5 & 36.2 \\
Random Assign & 21.3 & 38.3 & 21.2 & 7.6 & 22.5 & 35.5 \\
Seq. Assign & 23.0 & 40.6 & 23.0 & 8.0 & 24.3 & 37.6 \\
\rowcolor[HTML]{EFEFEF}
\textbf{Cosine Similarity} & \textbf{23.5} & \textbf{41.4} & \textbf{23.7} & \textbf{7.9} & \textbf{24.8} & \textbf{38.6} \\\hline
\end{tabular}
\label{tab:prompt-ablation}
\vspace{-0.1in}
\end{table}
\begin{figure}[b]
\vspace{-0.3in}
\centering
\begin{subfigure}{0.39\linewidth}
\includegraphics[width=1\linewidth]{figure/ourmethod.png}
\caption{Saliency Masks Proposal.}
\label{fig:pseudo-our}
\end{subfigure}
\hfill
\begin{subfigure}{0.6\linewidth}
\includegraphics[width=0.95\linewidth]{figure/finetune_vis.png}
\caption{Fine-tune on COCO 10\% images.}
\label{fig:pseudo-freesolo}
\end{subfigure}
\caption{Examples of our Saliency Mask Proposals and fine-tuned results.}
\label{fig:psuedo-label}
\vspace{-0.1in}
\end{figure}
\noindent \textbf{Class-agnostic Object Detection.}
We convert our proposed masks into bounding boxes as in FreeSOLO\cite{wang2022freesolo}, and compare with UP-DETR\cite{updetr} and DETReg\cite{detreg}.
Table \ref{tab:class_agnostic} shows the results of class-agnostic object detection on the COCO val2017 benchmark. As can be seen, our method achieves better performance than the other pre-training methods without self-training.
Although FreeSOLO performs better than the other methods, its self-training process requires a much longer training time (an extra 14 hours) and a larger memory cost.
\begin{table}[t]
\caption{Unsupervised class-agnostic object detection results.}\vspace{-8pt}
\centering
\footnotesize
\begin{tabular}{ccccc}
\hline
Method & Self-train? & AP & $AP_{50}$ & $AP_{75}$ \\ \hline
FreeSOLO\cite{wang2022freesolo} & {\color[HTML]{009901} \Checkmark} & 5.5 & 12.2 & 4.2 \\
UP-DETR\cite{updetr} & {\color[HTML]{CB0000} \XSolidBrush} & 0 & 0 & 0 \\
DETReg\cite{detreg} & {\color[HTML]{CB0001} \XSolidBrush} & 1.0 & 3.1 & 0.6 \\
\rowcolor[HTML]{EFEFEF}
K-Net \textit{w} SP & {\color[HTML]{CB0002} \XSolidBrush} & 3.2 & 8.5 & 2.0 \\ \hline
\end{tabular}
\label{tab:class_agnostic}
\vspace{-0.1in}
\end{table}
\noindent \textbf{Pseudo Mask Analysis.}
Here we evaluate our Prompting method on three kinds of pseudo labels of different quality.
\begin{table}[t]
\centering
\caption{Ablation of Pseudo Labels on COCO with 10\% images. P means our Prompting method.} \vspace{-8pt}
\begin{tabular}{ccccc}
\hline
\rowcolor[HTML]{FFFFFF}
\multicolumn{2}{l}{\cellcolor[HTML]{FFFFFF}Approach} & \multicolumn{2}{l}{\cellcolor[HTML]{FFFFFF}Pseudo Label (quality)} & mAP \\ \hline
\multicolumn{2}{l}{K-Net $w/o$ P} & \multicolumn{2}{l}{Rand. Prop. (bad)} & 0.8 \\
\multicolumn{2}{l}{K-Net $w$ P} & \multicolumn{2}{l}{Rand. Prop. (bad)} & 10.0 \\
\multicolumn{2}{l}{K-Net $w/o$ P} & \multicolumn{2}{l}{Saliency (normal)} & 21.7 \\
\rowcolor[HTML]{EFEFEF}
\multicolumn{2}{l}{\cellcolor[HTML]{EFEFEF}K-Net {\color[HTML]{009901}$w$ P}} & \multicolumn{2}{l}{\cellcolor[HTML]{EFEFEF}{\color[HTML]{009901}Saliency (normal)}} & 23.5 \\
\rowcolor[HTML]{EFEFEF}
\multicolumn{2}{l}{\cellcolor[HTML]{EFEFEF}K-Net {\color[HTML]{CB0000}$w/o$ P}} & \multicolumn{2}{l}{\cellcolor[HTML]{EFEFEF}{\color[HTML]{CB0000}FreeSOLO (good)}} & 23.3 \\
\multicolumn{2}{l}{K-Net $w$ P} & \multicolumn{2}{l}{FreeSOLO (good)} & 24.0 \\ \hline
\end{tabular}
\label{tab:pseudo-label}
\vspace{-0.25in}
\end{table}
As illustrated in Table \ref{tab:pseudo-label}, pseudo labels generated by `Random Proposal' have a negative impact on the results of the fine-tuned model,
yet our pre-training method achieves a significant improvement by utilizing Saliency Prompt.
Moreover, our Prompting method achieves competitive performance with both normal- and good-quality pseudo masks, improving AP by 1.8 and 0.7, respectively. Our method also approaches the performance of FreeSOLO, which requires a much longer training time and many processing steps. Notably, with the proposed Prompting approach, pre-training on normal pseudo masks attains comparable or even better performance than pre-training on good yet time-consuming pseudo masks like those from FreeSOLO.
We further visualize our pseudo masks in Figure \ref{fig:psuedo-label}.
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figure/kernel_activation_map_scratch.png}
\caption{Train from scratch.}
\label{fig:short-a}
\end{subfigure}
\hfill
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figure/kernel_activation_map_moco.png}
\caption{Pre-trained by MoCo-v2.}
\label{fig:short-b}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figure/kernel_activation_map_upknet.png}
\caption{Pre-trained by our method.}
\label{fig:short-c}
\end{subfigure}
\hfill
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figure/kernel_activation_map_best.png}
\caption{Trained on COCO train2017-full.}
\label{fig:short-d}
\end{subfigure} \vspace{-8pt}
\caption{Average activation of the 100 kernels over 5000 images on COCO val2017. All masks are resized to $200\times 200$ for analysis.}\vspace{-8pt}
\label{fig:visualize}
\end{figure*}
\subsection{Kernel Spatial Distribution Analysis}
To further justify whether the proposed saliency prompt can assist the kernels of the prediction head in learning localization and shape priors, we visualize the average mask activations of the 100 instance kernels over 5000 images of the val2017 split after 2 training epochs.
The best result (Figure~\ref{fig:short-d}) is from the fully trained K-Net on COCO train2017.
Those kernels have learned different shape and location priors.
However, the priors from the supervised (Figure~\ref{fig:short-a}) and the compared unsupervised (Figure~\ref{fig:short-b}) methods mainly focus on the central area.
Surprisingly, the kernels learned by our pre-training method (Figure~\ref{fig:short-c}) show markedly more diverse shape and location priors, close to those of the fully trained kernels.
These results demonstrate that the kernels pre-trained with Saliency Prompt have
learned effective spatial distribution and shape discrimination ability.
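The averaging underlying Figure~\ref{fig:visualize} can be sketched as follows; this is our own minimal \texttt{numpy} version with a nearest-neighbor resize, while the actual analysis may use a different interpolation:

```python
import numpy as np

def nn_resize(mask, size):
    """Nearest-neighbor resize of a 2-D array to (size, size)."""
    h, w = mask.shape
    rows = np.linspace(0, h - 1, size).round().astype(int)
    cols = np.linspace(0, w - 1, size).round().astype(int)
    return mask[np.ix_(rows, cols)]

def average_kernel_activations(batches, size=200):
    """Average per-kernel mask activations over many images.

    batches: iterable of arrays of shape (num_kernels, H, W), one per image.
    Returns an array of shape (num_kernels, size, size).
    """
    total, count = None, 0
    for masks in batches:
        resized = np.stack([nn_resize(m, size) for m in masks])
        total = resized if total is None else total + resized
        count += 1
    return total / count
```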
\section{Conclusion}
This paper first points out that QEIS models lack spatial distribution and shape awareness and perform poorly in low-data regimes.
Hence we present \textbf{Saliency Prompt}, a novel unsupervised pre-training method using visual prompts, which significantly boosts the performance of QEIS models on low-data instance segmentation and achieves comparable or even better performance than CNN-based models.
From a technical perspective, this is the first paper that explores the application of prompting in the instance segmentation field. We hope its novel design elements provide insights for future work on visual prompting mechanisms. In the future, we will follow more recent studies on visual saliency \cite{zhang2019synthesizing,liu2021scg,liu2021visual} to further promote the prompt learning mechanism and apply it to advance the weakly supervised learning community \cite{zhang2020weakly,zhang2021weakly,zhao2021weakly}.
\noindent \textbf{Limitations.}
Most of our saliency masks cover large regions with simple textures, which makes our pre-trained kernels/queries focus mostly on large objects rather than small ones.
Compared with the accuracy improvements on large objects, our pre-training method achieves limited improvement on small ones.
We believe there is plenty of room to further optimize our proposed method.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\noindent
For convenience, we use color to indicate reviewers \textcolor{red}{pn3v}, \textcolor{blue}{z9vE} and \textcolor{green}{HXGs}. For writing and presentation issues, we'll carefully address them in the revised version.
\noindent
\textcolor{red}{\textbf{Q1}}: Comparison to semi-supervised object detection.
\textcolor{red}{\textbf{A1}}: (1) We kindly argue that our setting is quite different from the semi-supervised setting: our model is a self-supervised method that works in a ``pre-training$+$downstream-task fine-tuning'' fashion, where the domains of the downstream tasks are not constrained and, in most cases, differ from the pre-training domain. The semi-supervised setting, in contrast, constrains all training data to a single domain; otherwise, the semi-supervised model cannot converge based on our experiments. (2) Semi-supervised methods usually use an auxiliary loss (like pseudo-label supervision), which we don't use. (3) Our model concentrates on the downstream task of instance segmentation rather than the reviewer-mentioned object detection task. Based on the above reasons, the semi-supervised object detection works are incomparable to ours.
\noindent
\textcolor{red}{\textbf{Q2}}: Why don't simply use Mask-RCNN in this case?
\textcolor{red}{\textbf{A2}}:
(1) Compared to Mask-RCNN, DETR-based methods are more convenient in model design by eliminating hand-crafted anchors and cumbersome post-processing, which enables end-to-end learning. This makes DETR-based modeling a more advanced technology in this field. (2) A known drawback of DETR-based methods is their limited capacity for dealing with low-data downstream tasks [Chen L. Points as queries.CVPR21.Fig.7]. Once this drawback is addressed, DETR-based modeling would advance the community further. Based on the above reasons, we make great efforts here to build a saliency prompt learning scheme for boosting DETR-based low-data instance segmentation, which we believe is meaningful to the community.
\noindent
\textcolor{red}{\textbf{Q3}}: Saliency mask proposal is not novel.
\textcolor{red}{\textbf{A3}}:
(1) Our saliency mask proposal is different from FreeSOLO: we directly generate saliency masks without using a feature pyramid or complicated self-training. (2) The saliency mask proposal is a component for implementing our framework but not the core novelty of this work. Our main novelty lies in how we use pseudo-masks to generate prompts that guide queries/kernels to learn localization and shape priors, and this has been proved superior to K-Net with FreeSOLO pre-training, as reported in Table 7 (23.5 \textit{v.s.} 23.3), although FreeSOLO is more complicated.
\noindent
\textcolor{red}{\textbf{Q4}}: Prompt-kernel matching \textit{V.S.} DINO.
\textcolor{red}{\textbf{A4}}: (1) We select the prompt for each kernel/query, while DINO directly selects queries. (2) We select prompts based on their similarity with the kernels, while DINO selects queries with reference to the IoU of auxiliary masks.
\noindent
\textcolor{red}{\textbf{Q5}}: Confusion on “CNN-based”.
\textcolor{red}{\textbf{A5}}: Our ``CNN-based'' refers to the decoder of the segmenter. We'll clarify this.
\noindent
\textcolor{red}{\textbf{Q6}}: Prompt-kernel matching (PKM) \textit{V.S.} set prediction loss (SPL).
\textcolor{red}{\textbf{A6}}: 1) PKM uses similarity to match the proposed prompts and vanilla kernels, whereas SPL uses segmentation and classification losses to match the proposed masks and GT masks. 2) PKM allows one-to-many matching, while the matching in SPL is only one-to-one.
\noindent
\textcolor{red}{\textbf{Q7}}: Saliency-Prompt(SP) used in fine-tuning? Will it cause mismatch?
\textcolor{red}{\textbf{A7}}: No; \textbf{SP} is only a plug-and-play module for QEIS models that helps their queries learn localization and shape priors. Once good query initialization has been learned, the mission of SP is completed and it should be removed.
The learned queries provide a good initialization for downstream tasks, and the queries are then re-fitted to the new data distribution via supervised learning, without the need of using SP again. Hence, the absence of SP will not cause a mismatch.
\noindent
\textcolor{red}{\textbf{Q8}}: Purpose of kernel supervision?
\textcolor{red}{\textbf{A8}}: Since the predictions are mainly generated by the kernels, we add supervision at the kernel level to learn semantics and features as a supplement to the mask loss. It does not supervise the learning of the backbone.
\noindent
\textcolor{red}{\textbf{Q9}}: Comparison to FreeSOLO.
\textcolor{red}{\textbf{A9}}:
1) A direct comparison between Mask-RCNN+FreeSOLO and our K-Net+SP is somewhat unfair (please also refer to \textcolor{red}{\textbf{Q2}}). 2) In a fairer comparison setting (both using K-Net), our performance is better than FreeSOLO's (23.5 \textit{v.s.} 23.3 in Table 7).
\noindent
\textcolor{red}{\textbf{Q10}}: Comparison to longer schedule DenseCL. \textcolor{red}{\textbf{A10}}: Table \ref{tab:densecl}.
\begin{table}[t]
\tiny
\caption{Comparison with a longer schedule DenseCL.} \vspace{-9.5pt}
\begin{tabular}{ccccc}
\hline
DenseCL pt epoch & SP & mAP & $AP_{50}$ & $AP_{75}$ \\ \hline
\rowcolor[HTML]{EFEFEF}
200 & {\color[HTML]{009901}\textbf{\Checkmark}} & \textbf{23.5} & \textbf{41.4} & \textbf{23.7} \\
300 & {\color[HTML]{CB0000}\XSolidBrush} & 20.6 & 36.6 & 20.6 \\ \hline
\end{tabular}
\centering
\label{tab:densecl}
\vspace{-10pt}
\end{table}
\noindent
\textcolor{blue}{\textbf{Q1}}: Novelty.
\textcolor{blue}{\textbf{A1}}: Our core novelty is the saliency prompt scheme for boosting DETR-based low-data instance segmentation. Our idea of using prompts to help the queries of DETR-based models learn localization and
shape priors is totally new and has never been explored by previous works. The novelty of our techniques is also appreciated by \textcolor{green}{HXGs}.
\noindent
\textcolor{blue}{\textbf{Q2}}: Comparison with other methods/down-stream tasks.
\textcolor{blue}{\textbf{A2}}: 1) We politely argue that we've compared with other pre-training models (i.e., MoCo-v2, SwAV and DenseCL) on multiple downstream datasets in Tables 1\&2\&3 and Figures 3\&4.
2) Our method is specifically designed for the challenging down-stream task of instance segmentation.
\noindent
\textcolor{green}{\textbf{Q1}}: Issues in statement, method description.
\textcolor{green}{\textbf{A1}}: Thanks for the reviewer's careful reading of our paper and the constructive comments. We will address them in revision.
\noindent
\textcolor{green}{\textbf{Q2}}: Performance improvement and noise level.
\textcolor{green}{\textbf{A2}}: 1) According to Table \ref{tab:ablation}, the improvement from the kernel loss is clear, especially on CTW1500. 2) The proposed prompt method actually improves performance from 21.7 to 23.5; ``Seq. Assign'' in Table 5 also uses our proposed prompts, only without the matching method. 3) We conducted the ablation study 5 times and the noise level is $\pm 0.1$ mAP.
\begin{table}[t]
\tiny
\vspace{1pt}
\caption{Ablation of kernel supervised learning.}
\vspace{-7pt}
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}{*}{$L_{ker}$} & \multicolumn{3}{c|}{Cityscapes} & \multicolumn{3}{c}{CTW1500} \\
& $mAP$ & $AP_{50}$ & $AP_{75}$ & $mAP$ & $AP_{50}$ & $AP_{75}$ \\ \hline
\rowcolor[HTML]{EFEFEF}
{\color[HTML]{009901}\Checkmark} & 30.6 & 55.4 & - & 34.6 & 71.1 & 31.0 \\
{\color[HTML]{CB0000}\XSolidBrush} & 29.3 & 54.3 & - & 32.8 & 69.5 & 29.3 \\ \hline
\end{tabular}
\centering
\label{tab:ablation}
\vspace{-20pt}
\end{table}
\end{document}
\label{sect:intro}
Ultrafast electron microscopy, the introduction of temporal resolution to conventional 3-dimensional electron microscopy, has opened up the space-time exploration of systems as diverse as nanomaterials and biostructures \cite{38}. Novel electron optical concepts promise a tremendous increase in temporal and spatial resolution, allowing researchers to address new scientific challenges \cite{1,2,3,4,32}. The use of relativistic MeV electron energies is an attractive avenue to deal with the issue of Coulomb repulsion in ultrashort electron pulses, bringing single-shot electron microscopy on nm length and ps time scales within reach \cite{39,39b}. Electrons with MeV energy travel close to the speed of light. This dramatically reduces the velocity mismatch between the electromagnetic pump and electron probe pulses in time-resolved diffraction experiments \cite{ued_ref}. This has proven indispensable in probing ultrafast processes in fields ranging from gas-phase photochemistry \cite{40} over lattice transformations in low-dimensional systems \cite{41} to the excitation of transient phonons \cite{42}.
While the use of electron pulses as structural dynamics probes is well established, their use as initiators of dynamical processes has remained almost completely unexplored. In an early study, Tudosa \textit{et al.} used the electromagnetic fields surrounding intense relativistic electron pulses to permanently switch the magnetization direction of ferromagnetic films used in magnetic data storage applications \cite{27}. Here we show that nanofocused relativistic electron pulses provide a unique tool to drive and control magnetization dynamics in nanostructures relevant for spintronics applications \cite{27,43}. The paper is organized as follows. After this introduction, we describe how our model calculations reproduce the experimental results of Tudosa \textit{et al.} \cite{27}. We then show how nanofocused electron beams can induce switching and magnon excitations in nanowires and can alter the chirality of skyrmions in nanodots. The paper finishes with conclusions and a brief outlook on possible experimental realizations.
Very recently we published a related work, which deals not only with nanostructured materials but also with extended thin films possessing Dzyaloshinskii-Moriya interaction (DMI) \cite{APL}. There, the creation of skyrmions in the absence of geometrical confinement is discussed based on the same methods as in this manuscript, and it is recommended for further reading.
\section{Results and Discussion}
\subsection{Electron beam induced magnetic switching}
\subsubsection{Methods}
For the purpose of investigating the dynamics of thin magnetic films, micromagnetic simulations solving the Landau-Lifshitz-Gilbert equation (LLG)\cite{34,35} are performed.
The different magnetic systems are discretized into cuboid lattices, so that each simulation cell is associated with a magnetization $\vec{m}_i=\vec{M}_i/M_{\mathrm{S}}$ normalized to the saturation magnetization $M_{\mathrm{S}}$.
The magnetization dynamics is governed by the LLG
\begin{equation}
\dot{\vec{m}}_i = - \frac{\gamma}{1+\alpha^2}\left\{\vec{m}_i\times \vec{H}_i^{\mathrm{eff}}(t) + \alpha \left[\vec{m}_i\times \left(\vec{m}_i\times \vec{H}_i^{\mathrm{eff}}(t)\right)\right]\right\}\ .
\end{equation}
Here $\gamma=1.76 \cdot 10^{11}$~1/(Ts) denotes the gyromagnetic ratio and $\alpha$ is the dimensionless Gilbert damping parameter.
The local effective magnetic field
$\vec{H}_i^{\mathrm{eff}}(t)$ can be calculated following the equation $\mu_0\vec{H}_i^{\mathrm{eff}}(t)=-1/M\ind{S}\delta F/(\delta \vec{m}_i)$
and is therefore a functional of the system's total free energy $F=F\ind{EXCH} + F\ind{MCA} + F\ind{DMF} + F\ind{ZMN}+F\ind{DMI}$.
This quantity is influenced by the exchange interaction of adjacent magnetic moments $F\ind{EXCH}= - A/c^2\sum_{<ij>}\vec{m}_i \cdot \vec{m}_j$, the magnetocrystalline anisotropy $F\ind{MCA}$, the demagnetizing fields $F\ind{DMF}$, the Zeeman energy $F\ind{ZMN}$ and the DMI term $F\ind{DMI}$. Further details on the individual contributions can be found, for example, in refs.\cite{36,37}.
In order to simulate the magnetization dynamics, an adaptive Heun solver is used. As the excitations of the systems take place on a very short time scale, the time step is fixed to $1\,$fs during the time evolution. Using the simulation package \texttt{mumax3}\cite{37}, fully GPU-based micromagnetic calculations are employed to account for the effect of demagnetizing fields efficiently.
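For a single macrospin in a fixed field, the Heun scheme reduces to the following sketch (the field strength, damping, and step count are illustrative; in the full simulations the constant field is replaced by the local effective field $\mu_0\vec{H}_i^{\mathrm{eff}}$ derived from the free energy):

```python
import numpy as np

GAMMA = 1.76e11  # gyromagnetic ratio in rad/(T s)

def llg_rhs(m, B, alpha):
    """Right-hand side of the LLG for a unit moment m in a field B (tesla)."""
    mxB = np.cross(m, B)
    return -GAMMA / (1.0 + alpha**2) * (mxB + alpha * np.cross(m, mxB))

def heun_step(m, B, alpha, dt):
    """One Heun (predictor-corrector) step, renormalizing to |m| = 1."""
    k1 = llg_rhs(m, B, alpha)
    k2 = llg_rhs(m + dt * k1, B, alpha)
    m_new = m + 0.5 * dt * (k1 + k2)
    return m_new / np.linalg.norm(m_new)

# Damped precession: a moment started in-plane spirals toward the field.
m = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])        # 1 T along z (illustrative)
for _ in range(20000):               # 20,000 x 5 fs = 100 ps
    m = heun_step(m, B, alpha=0.1, dt=5e-15)
```

With $\alpha=0.1$ and $B=1\,$T the moment reaches $m_z\approx 0.94$ after $100\,$ps, consistent with the analytic relaxation $m_z=\tanh\!\left[\alpha\gamma B t/(1+\alpha^2)\right]$.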
As the external magnetic fields generated by the short electron pulses are the main driving mechanism, coupling via the Zeeman term, these Oersted-like fields must be calculated. The near field of the electron bunch becomes important for the nanostructures treated in sections \ref{sec:2.2} and \ref{sec:2.3}.
We assume a pulse of electrons with a Gaussian envelope both in space and in time, or equivalently along the propagation direction, $j_z(r,\varphi,z)= \frac{N_e e v}{(2\pi)^{3/2}\sigma_{xy}^2\sigma_{z}}\exp\left[-\frac{1}{2}(r/\sigma_{xy})^2-\frac{1}{2}((z-t v)/\sigma_z)^2\right]\ .$
The parameter $N_e$ corresponds to the number of electrons, $e$ is the electron charge, $v$ the average velocity, and $\sigma_{xy}$ and $\sigma_z$ are the standard deviations in the respective directions. The field profile resulting from Biot-Savart's law, $\vec{B}(\vec{r})=\frac{\mu_0}{4\pi}\int_V \vec{j}(\vec{r}')\times\frac{\vec{r}-\vec{r}'}{|\vec{r}-\vec{r}'|^3}\mathrm{d} V'$\ ,
is shown in fig. \ref{fig_bfield} for two different sets of parameters. Because the electron packet propagates in the $z$-direction, the magnetic field consists of the $B_\varphi$-component only, as inferred from Biot-Savart's law in cylindrical coordinates. The curves' shapes are almost identical, since in both cases the radial extension of the beam is much smaller than the standard deviation in the propagation direction; if this changes, the profiles change as well. The peak field strength of the $30\,\mu$m beam is ten times smaller than that of the other pulse. This follows from the number of electrons being a hundred times larger while the beam width is a thousand times larger, leading to a net factor of 10.
For subsequent calculations, the numerical results for the magnetic field are fitted with a model function that depends on both the beam's standard deviation and the number of electrons, which are experimentally well accessible parameters.
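One closed-form candidate for such a model function follows from the long-bunch limit $\sigma_z = v\sigma_t \gg \sigma_{xy}$, in which the bunch acts as a quasi-static line current with a Gaussian radial profile, $B_\varphi(r) = \frac{\mu_0 I_{\mathrm{peak}}}{2\pi r}\left[1-\mathrm{e}^{-r^2/(2\sigma_{xy}^2)}\right]$ with peak current $I_{\mathrm{peak}} = N_e e/(\sqrt{2\pi}\,\sigma_t)$. The sketch below is our own approximation (it neglects relativistic field compression) and is not necessarily the fit function used in the simulations:

```python
import numpy as np

MU0 = 4e-7 * np.pi      # vacuum permeability in T m / A
E_CHG = 1.602e-19       # elementary charge in C

def b_phi_peak(r, n_e, sigma_xy, sigma_t):
    """Peak azimuthal field (tesla) of a long Gaussian electron bunch.

    Valid when sigma_z = v * sigma_t >> sigma_xy, so the bunch acts as a
    quasi-infinite line current with a Gaussian radial profile.
    """
    i_peak = n_e * E_CHG / (np.sqrt(2.0 * np.pi) * sigma_t)  # peak current
    return (MU0 * i_peak / (2.0 * np.pi * r)
            * (1.0 - np.exp(-r**2 / (2.0 * sigma_xy**2))))

# The two parameter sets of fig. bfield: nanofocused and micron-scale beams.
r_nano = np.linspace(1e-9, 300e-9, 2000)
r_micro = np.linspace(1e-6, 300e-6, 2000)
b_nano = b_phi_peak(r_nano, 1e8, 30e-9, 2.3e-12)
b_micro = b_phi_peak(r_micro, 1e10, 30e-6, 2.3e-12)
```

The maximum of each profile sits near $r\approx 1.59\,\sigma_{xy}$, and the ratio of the two peak fields reproduces the factor of 10 discussed above, since $B_{\mathrm{max}}\propto N_e/\sigma_{xy}$.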
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{bfield.eps}
\caption{Radial dependence of the peak magnetic field for a pulse duration of $\sigma_t=2.3\,$ps. The red curve corresponds to $\sigma_{xy}=30\,$nm and $N_e=10^8$, whereas the black line relates to $\sigma_{xy}=30\,\mu$m and $N_e=10^{10}$.}
\label{fig_bfield}
\end{center}
\end{figure}
\subsubsection{Verification of the model}
In the past years, multiple experiments exploring the field of magnetic switching triggered by fast electrons have been established. One pioneering example can be found in ref.\cite{27}. In this experiment, short electron bunches are shot through thin films (thickness $\approx$14\,nm) of granular CoCrPt-type media in order to explore the ultimate speed limit of precessional magnetic dynamics. The electron pulses have a duration of $\sigma_t=2.3\,$ps, a spatial extent of $\sigma_{xy}=30\,\mu$m, and contain $N_e=10^{10}$ electrons each. After the irradiation, the magnetic domain structure is analyzed, revealing several domain-wall rings with out-of-plane orientation (\cref{fig_comp}(a)). Starting from a homogeneously magnetized state in the $\pm z$ direction, the magnetic moments precess around the $\hat{e}_\varphi$ unit vector during the pulse and relax either up or down afterward.
Most materials used as magnetic data storage media possess uniaxial magnetocrystalline anisotropy, and CoCrPt alloys are no exception, with an easy axis in the out-of-plane direction, which coincides with the cylindrical $z$-axis. The material-specific parameters are chosen equal to the measured ones \cite{27}: a saturation magnetization $M\ind{sat}=517.25\,$kA/m, a uniaxial anisotropy $K\ind{u}=156.98\,\text{kJ}/\text{m}^3$ and a Gilbert damping parameter $\alpha=0.3$.
Similar to the experimental work, we also consider a thin film of CoCrPt and a size of $150 \,\mu$m$\times 150\,\mu$m$\times 14\,$nm. As the included Co-atoms lead to a granular structure, with decoupled magnetic grains with a size of $20.6\pm 4\,$nm, we can model the system with discrete cells with a dimension of $(41.2\times 41.2\times 14)\,$nm$^3$. To cover the desired area we need $3640^2$ cells.
Subsequent to the time propagation over $300\sigma_t$, the magnetic configuration is relaxed, which means that the precessional term in the LLG is disregarded in order to achieve a fast approach towards the final stable magnetic configuration.
The pulse-induced ring pattern of the magnetic domains pointing either up or down (with respect to the easy axis of the magnetic film) is well captured by our micromagnetic simulations.
As pointed out in \cite{27}, whether the critical precessional angle $\phi\geq\pi/2$ is reached is determined by the local strength of the magnetic field, which sets the angular velocity $\omega$.
The pulse duration $\sigma_t$ plays a crucial role \cite{28a,28b,28c}. As discussed in refs.\cite{28a,28b,28c}, an appropriate sequence of ps pulses allows for an optimal control scheme achieving ballistic magnetic switching, even in the presence of strong thermal fluctuations, while longer pulses might drive the system back to the initial state. Thus, the critical precessional angle and $\sigma_t$ are the two key parameters \cite{27} for the established final precessional angle $\phi=\omega\sigma_t$. Note that the demagnetizing fields are also relevant, as inferred from Fig. \ref{fig_comp}, but they do not change the main picture.
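The ring formation can be made quantitative. For a Gaussian time envelope, the accumulated precession angle is $\phi(r) = \gamma\int B_\varphi(r,t)\,\mathrm{d}t = \gamma\sqrt{2\pi}\,\sigma_t B_{\mathrm{peak}}(r)$, and moments can switch wherever $\phi\geq\pi/2$. The sketch below uses a hypothetical peak field amplitude of $5\,$T purely for illustration; it shows that this criterion selects an annulus rather than a disc, since the field vanishes on the beam axis and decays as $1/r$ far away:

```python
import numpy as np

GAMMA = 1.76e11                       # gyromagnetic ratio in rad/(T s)

def precession_angle(r, b_peak_of_r, sigma_t):
    """phi(r) = GAMMA * integral B dt; for a Gaussian time envelope the
    integral equals B_peak(r) * sqrt(2 pi) * sigma_t."""
    return GAMMA * np.sqrt(2.0 * np.pi) * sigma_t * b_peak_of_r(r)

# Radial profile of a Gaussian beam, normalized so its maximum equals b0.
# b0 = 5 T is a hypothetical amplitude chosen so that switching occurs;
# 0.4513 is the maximum of (sigma/r) * (1 - exp(-r^2 / 2 sigma^2)),
# attained near r ~ 1.59 sigma.
b0, sigma_xy = 5.0, 30e-6
profile = lambda r: (b0 / 0.4513 * (sigma_xy / r)
                     * (1.0 - np.exp(-r**2 / (2.0 * sigma_xy**2))))

r = np.linspace(1e-6, 500e-6, 5000)
phi = precession_angle(r, profile, 2.3e-12)
ring = r[phi >= np.pi / 2.0]          # annulus where phi exceeds pi/2
```

For these illustrative numbers the switching region is a ring extending from a few $\mu$m to roughly $200\,\mu$m, while the beam axis itself does not switch.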
\begin{figure}[!th]
\begin{center}
\includegraphics[width=.5\linewidth]{fig1.eps}
\caption{Comparison between experimental (a)\cite{27}, and numerical results (b), (c).
Both numerical simulations and the experimental data cover an area of $150\times 150\,\mu$m$^2$. In contrast to the panel (b), in (c) the demagnetizing fields are included in simulations.
The gray shading signals the magnetization's $z$-component with white color meaning $m_z=+\hat{e}_z$ and black $m_z=-\hat{e}_z$. The $N_e = 10^{10}$ electrons in the beam impinging normal to the sample have an energy of 28\,GeV. The pulse's time-envelope is taken as a Gaussian with a pulse duration of $\sigma_t = 2.3\,$ps, whereas $\sigma_{xy}=30\,\mu$m. The generated Oersted field has an equivalent time-dependence. }
\label{fig_comp}
\end{center}
\end{figure}
\subsection{Nanoscale magnetization dynamics}
\label{sec:2.2}
Having substantiated our methods against experiment, we turn to the main focus of our study, namely the generation of magnetic excitations on the nanoscale.
One possible application concerns nanowire geometries. Our aim is to excite such magnetic systems in two different ways. Obviously, the creation of domain walls analogously to the CoCrPt system discussed before should be possible. On top of this, the confined geometry leads to a unidirectional transport of spin waves, also called magnons, which are stimulated by the abrupt excitation of the magnetic moments close to the beam.
This setup becomes interesting for strongly focused beams that excite magnon modes beyond the linear-response regime, as the LLG couples the magnetization to the demagnetizing fields, which in turn act back on the magnetization dynamics.
We choose a nanowire with a size of $4000\times 50\times 2\,$nm$^3$, which corresponds to a system of $2000\times 25\times 1$ simulation cells. The material parameters are the same as those adapted from the experiment before, except for the Gilbert damping parameter, which is reduced to $\alpha=0.001$ in order to resolve magnonic excitations. In practice, this can be achieved by preventing the system from forming granular structures. Even if this reduction is not attainable for this particular material, the principle can be transferred easily to other thin magnetic films with out-of-plane anisotropy. The electron beam's duration is $\sigma_t=2.3\,$ps as before, whereas the number of electrons is reduced; the beam is focused on the nanoscale and strikes at the nanowire's center.
Two different examples are shown in \cref{fig_wire1} and \cref{fig_wire2}. The graphics show the time evolution of the magnetization components, averaged over the wire's breadth.
In \cref{fig_wire1} a number of $N_e=5\times 10^6$ electrons is bundled into a normal distribution with $\sigma_{xy}=100\,$nm. A few features of the magnetization dynamics are worth mentioning. Most distinct is the creation of two regions with inverted magnetization near the beam's center ($50\,$nm$\leq |x|\leq 300\,$nm). The outer border regions show a N\'eel-type domain wall, whereas the region in between the two domains features a rotating behavior of the in-plane components, which will relax towards a single- or two-domain state on a longer time scale.
In addition, two branches of magnons propagating along the wire are present: a rather fast spin wave of minor amplitude, and a more pronounced magnon with a smaller group velocity.
By increasing the beam's intensity ($\sigma_{xy}=100\,$nm, $N_e=5\times 10^7$), the resulting magnetization dynamics become more complex, and multiple branches of magnons interfering with each other, as well as domain walls propagating along the wire, can be observed (see \cref{fig_wire2}). In particular, rapidly moving magnons, which are reflected by the simulation box's boundaries, are present. Because of the magnetization's non-linear feedback via the demagnetizing fields, a complex pattern occurs.
This can be used to analyze magnon excitation in nanostructures beyond linear response.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{plMag3_1c.eps}
\caption{Time propagation of the magnetization's components for a nanowire of $4000\times 50\times 2\,$nm$^3$ size and electron beam parameters $\sigma_t=2.3\,$ps, $\sigma_{xy}=100\,$nm and $N_e=5\times 10^6$. }
\label{fig_wire1}
\includegraphics[width=\linewidth]{plMag3_2c_5e7_100nm_Full.eps}
\caption{Time propagation of the magnetization's components for a nanowire of $4000\times 50\times 2\,$nm$^3$ size and electron beam parameters $\sigma_t=2.3\,$ps, $\sigma_{xy}=100\,$nm and $N_e=5\times 10^7$.}
\label{fig_wire2}
\end{center}
\end{figure}
\subsection{Imprinting of topological magnetic structures}
\label{sec:2.3}
The generation of topologically protected magnetic textures such as skyrmions via short electron pulses is another possible application of the established method.
A recent work \cite{29} evidences that ultrathin nanodiscs of materials such as Co$_{70.5}$Fe$_{4.5}$Si$_{15}$B$_{10}$\cite{30} sandwiched between Pt and Ru/Ta are well suited for our purpose, since they exhibit Dzyaloshinskii-Moriya (DM) spin-orbit coupling.
Hence the magnetization may nucleate spontaneously into skyrmionic configurations.
We adapted the experimentally verified parameters for this sample and present results for the magnetic dynamics triggered by short electron beam pulses.
Taking a nanodisc of variable size, the ground state with topological number $|N|=1$ is realized after propagating an initially homogeneous magnetization in the $\pm z$ direction according to the Landau-Lifshitz-Gilbert equation (LLG) including DM interactions \cite{34,35,36,37}.
The two possible ground states, depending on the initial magnetization's direction are shown in \cref{fig_groundstate} along with the material's parameters.
Our main focus is on how to efficiently and swiftly switch between these skyrmion states via a nano-focused relativistic electron pulse, an issue of relevance when it comes to fundamental research or practical applications like data storage.
While currently such pulses can be generated with micron-size beam dimensions \cite{ued_ref}, future sources are expected to reach focus sizes down to the few-nm range \cite{32}. In principle, beam damage may occur in the beam's focus, as in the experiment of ref.\cite{27}. However, ongoing experiments with relativistic electron beams \cite{ued_ref} indicate that the use of ultrathin freestanding films may alleviate damage concerns.
Topologically protected magnetic configurations, like magnetic skyrmions, are well-defined quasiparticles. They can be characterized mathematically by the topological number $N=\frac{1}{4\pi}\int \vec{m}\cdot\left(\frac{\partial \vec{m}}{\partial x}\times\frac{\partial \vec{m}}{\partial y}\right)\mathrm{d} x\mathrm{d} y$\cite{33}, also called the winding number, which counts how often the unit vector of the magnetization wraps the unit sphere when integrated over the two-dimensional sample.
Therefore, skyrmions are typically quasiparticles of thin (mono)layers. The topological number adopts integer values indicating the magnetic configuration to be skyrmionic ($N=\pm 1$) or a skyrmion multiplex ($|N| >1$). If the topological number is not an integer, the topological protection is lifted and the magnetic texture is unstable upon small perturbations.
The topological stability of skyrmionic states stems from the necessity of flipping at least one magnetic moment by $180^\circ$ to overcome the barrier and transfer the state into a ``trivial'' one, such as a single-domain or vortex texture.
In the following, we will attempt to overcome this topological energy barrier with a magnetic "kick" so that the magnetization will be converted into a state of different topological invariant.
The spatial structure of the magnetic field, curling around the beam's center, is advantageous here: it provides a natural point of action for manipulating topologically protected configurations.\\
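The discretized evaluation of the winding number can be sketched numerically. The following example is an illustration only: the N\'eel ansatz $\theta(r)=2\arctan(R/r)$ with vorticity $+1$ and the grid parameters are assumptions, not the simulated sample of \cref{fig_groundstate}.

```python
import numpy as np

def winding_number(m):
    """Discretized N = (1/4 pi) * integral of m . (dm/dx x dm/dy) dx dy.

    m has shape (nx, ny, 3); per-index gradients absorb the cell
    area dx*dy, so the plain sum already approximates the integral."""
    dmx = np.gradient(m, axis=0)
    dmy = np.gradient(m, axis=1)
    density = np.einsum('ijk,ijk->ij', m, np.cross(dmx, dmy))
    return density.sum() / (4.0 * np.pi)

# illustrative Neel skyrmion: theta = pi at the core, -> 0 far outside
L, n, R = 300.0, 512, 20.0
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing='ij')
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2.0 * np.arctan2(R, r)
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print(winding_number(m))   # close to -1 for this profile
```

For this profile the exact charge is $N=\frac{1}{2}[\cos\theta(0)-\cos\theta(\infty)]=-1$; the finite grid and the square integration domain account for the small residual deviation.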
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{fig2.eps}
\caption{Magnetic ground states for a nanodisc with a diameter of 300\,nm and a thickness of 1.5\,nm. The material parameters are $M\ind{sat}=450\times 10^3\,$A/m, $A\ind{ex}=10\,$pJ/m, $\alpha=0.01$, $K_u=1.2\times 10^5\,$J/m$^3$ (out-of-plane anisotropy), and the interfacial DMI constant $D\ind{ind}=0.31\times 10^{-3}\,$mJ/m$^2$. (a) corresponds to $N=1$, whereas (b) possesses $N=-1$; both skyrmions are of the N\'eel type. The bottom panel illustrates pictorially the influence of the magnetic field associated with the electron bunch. The cones correspond to the initial magnetic configurations as in (a) and (b), whereas the golden arrows show the induced magnetic field. The resulting torque points perpendicular to the magnetization, affecting the magnetic configuration accordingly. }
\label{fig_groundstate}
\end{center}
\end{figure}
For magnetic systems a minimum exposure time is necessary, whereas the spatial focus of the beam is limited. To balance these constraints, the pulse duration is fixed at $2.3\,$ps as before, unless stated otherwise. Starting from such an electron beam, two main parameters can be adjusted to achieve the desired reaction of the nanodiscs: the pulse width and the number of electrons, which will be treated independently. In \cref{fig_dur}, the final topological charges after a single Gaussian electron pulse irradiating a nanodisc are plotted as a function of the number of electrons and the width of the Gaussian-distributed electrons.
The results do not show the transient time evolution of the sample but only the final steady-state values of the winding number. They are obtained by applying an electron pulse, propagating the magnetization during the pulse, and relaxing the magnetic configuration afterward as to approach a local minimum of the free energy's hypersurface.
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.6\linewidth]{fig3.eps}
\caption{Varying the number of electrons per pulse or the spatial extent of the pulse, the imprinted topological charge can be tuned. The pulse duration is set to $2.3\,$ps. Black and green curves correspond to starting from a magnetic ordering with $+1$ or $-1$ topological charge, respectively, as shown in \cref{fig_groundstate}, for different pulse widths. Both the blue and red curves start from $N_i=+1$.
The sample is a magnetic disc (diameter $d=300\,$nm) irradiated with a Gaussian beam pulse with $\sigma_{xy}=30\,$nm (and $90\,$nm) in the case of the bottom graph; the beam of the upper graphs has a fixed number of $n_e=10^8$ electrons. }
\label{fig_dur}
\end{center}
\end{figure}
We note the strong correlation between the change of the topological charge and the number of electrons or, correspondingly, the beam width. Relatively large intervals of both parameters lead to the same final values of $N$. We note that not only the variation of these control parameters but also that of the pulse duration is experimentally accessible, particularly in a nano-apex ultrafast transmission electron microscope \cite{5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22}.
Notably, the graphs for opposite initial configurations (see \cref{fig_dur}(a)) are mirror-symmetric with respect to the $x$ axis. This can be explained by the coinciding symmetry centers of the pulse and the skyrmionic structure.
This symmetric and robust behavior can be exploited to switch between the accessible topological charges, whose values are quite close to the ideal integers that would be realized for an infinitely fine discretization.
Interestingly, the switching between the two stable states occurs repeatedly as the number of electrons is increased, whereas the spatial manipulation of the beam leads to only one regime in which the fields are sufficient to switch the topological number. The first observation can be explained with the schematics shown in \cref{fig_groundstate}(c). Depending on the strength of the pulse, the magnetic moments are forced to rotate multiple times around the $\hat{e}_\varphi$ vector in a collective manner, as each moment at equal distance from the center experiences the same torque. The final position of the surrounding moments couples back to the center and determines the new topological charge. The electron number translates linearly into the peak magnetic field, whereas the beam width has a more complicated influence.
When the width is increased, the spatial profile in the $xy$-plane changes, as the maximum magnetic field is shifted towards the disc's rim and beyond. How the system reacts to these changes depends crucially on the exact profile of the beam, especially on the point of maximum magnetic field strength, as can be seen in \cref{fig_dur}(a).
This raises the question of the optimal parameter regime for manipulating the system reliably, which cannot be answered conclusively as it strongly depends on the experimentally available capabilities. Hence this work focuses on an exemplary study of the effect.
The same switching phenomenon as discussed before can also be observed for different setups. Weaker pulses can be used as well, as long as they are able to overcome the internal fields and excite the system; however, the field amplitude translates into the strength of the resulting torque, so pulses of lower intensity need a longer irradiation time to switch the system.
In the case of different materials or geometries, the accessible topological states have to be investigated before they can be utilized. Otherwise undesired interstitial states might be reached by accident and the switching is no longer deterministic.
\section{Summary and Outlook}
We have shown using micromagnetic calculations that relativistic electron pulses of a few ps duration can indeed induce magnetization dynamics in ferromagnetic nanostructures and extended films, as observed experimentally \cite{27}. Contrary to the micron-sized electron beams employed experimentally so far \cite{27}, nanofocusing allows us to reduce the number of electrons dramatically in order to achieve magnetic switching. We demonstrated this for the case of ferromagnetic nanowires, where the nanofocused electron pulses enabled magnetic switching on one end of the wire with magnon excitations propagating along the wire towards the other end. We also predict a novel way of switching the winding sense of magnetic skyrmions using the tangential fields of electron pulses focused down to the skyrmion size. We predict that such magnon excitations can be driven with electron numbers as low as $10^6$ electrons/pulse. This is within reach at present electron diffraction experiments \cite{ued_ref}. Attempts at achieving the required nanofocus at such sources are currently underway. We note that the predicted magnetization dynamics could be imaged using VUV \cite{44} and soft x-ray photons \cite{45}. Since both ultrafast electron diffraction and imaging experiments start with laser-generated photoelectrons, the synchronization of fs probe laser systems with the electron pulses is straightforward \cite{ued_ref}. The necessary probing of magnetization dynamics has recently been demonstrated using circularly polarized photons for high-harmonic laser sources \cite{46}.
\section*{Acknowledgments}
A. F. S. and J. B. are supported by the German Research Foundation (No. SFB 762) and the Priority Programme 1840. H.A.D. acknowledges support by the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-76SF00515.
\section{Introduction}
In this paper we present some results on the structure of homogeneous Wick ideals of quadratic algebras allowing Wick ordering, called Wick algebras for short, introduced in \cite{jsw}. Namely, let $\{T_{ij}^{kl},\ i,j,k,l=1,\ldots,d\}\subset\mathbb{C}$ satisfy the condition $T_{ji}^{lk}=\overline{T}_{ij}^{kl}$; then the Wick algebra $W(T)$ is generated by $a_i$, $a_i^*$, $i=1,\ldots,d$,
satisfying commutation relations of the form
\begin{equation}\label{wick}
a_i^*a_j=\delta_{ij}1+\sum_{k,l=1}^d T_{ij}^{kl}a_la_k^*,\ i,j=1,\ldots,d.
\end{equation}
Following \cite{jsw}, consider the finite-dimensional Hilbert space
$\mathcal{H}=\mathbb{C}\langle e_1,\cdots,e_d\rangle$ and its formal dual
$\mathcal{H}^*=\mathbb{C}\langle e_1^*,\cdots,e_d^*\rangle$, where $\{e_i,\ i=1,\ldots,d\}$ form an orthonormal basis of $\mathcal{H}$. Put
$\mathcal{T}(\mathcal{H},\mathcal{H}^*)$ to be the full tensor algebra over $\mathcal{H}$ and $\mathcal{H^*}$, then
\begin{equation}\label{canonic}
W(T)\simeq \mathcal{T}(\mathcal{H},\mathcal{H}^*)/\langle e_i^*\otimes e_j-\sum_{k,l=1}^d T_{ij}^{kl} e_l\otimes e_k^*\rangle.
\end{equation}
Note that in this realisation the free algebra generated by $a_i$, $i=1,\ldots,d$, coincides with $\mathcal{T}(\mathcal{H})=
\mathbb{C}\Omega\oplus\bigoplus_{n\in\mathbb{N}}\mathcal{H}^{\otimes n}$.
The Fock representation of $W(T)$ is defined on $\mathcal{T}(\mathcal{H})$ by the rules
\[
a_i^*\Omega=0,\quad a_i e_{i_1}\otimes\cdots \otimes e_{i_k}=e_i\otimes e_{i_1}\otimes\cdots \otimes e_{i_k},\ i=1,\ldots,d,
\]
the action of $a_i^*$, $i=1,\ldots, d$, on vectors other than $\Omega$ is determined inductively using the commutation relations in $W(T)$.
It was proved in \cite{jsw} that there exists a unique sesquilinear form $\langle\cdot,\cdot\rangle_F$,
called the {\it Fock scalar product},
on $\mathcal{T}(\mathcal{H})$, such that the Fock representation becomes a $*$-representation with respect to this form. It is defined in such a way that the subspaces $\mathcal{H}^{\otimes n}$ and $\mathcal{H}^{\otimes m}$ are orthogonal if $m\ne n$ and
\[
\langle X,Y\rangle_F=\langle X, P_n Y\rangle,\quad X,Y\in\mathcal{H}^{\otimes n},
\]
where by $\langle\cdot,\cdot\rangle$ we denote the standard scalar product on $\mathcal{H}^{\otimes n}$ and
$P_n\colon\mathcal{H}^{\otimes n}\rightarrow
\mathcal{H}^{\otimes n}$ is an operator defined in the following way (see \cite{jsw}):
First we introduce an operator
$T\colon\mathcal{H}^{\otimes 2}\rightarrow\mathcal{H}^{\otimes 2}$ given by
\begin{equation}\label{opert}
T e_{k}\otimes e_{l} =
\sum_{i,j=1}^d T_{ik}^{lj}e_{i}\otimes e_{j}.
\end{equation}
Note that $T$ is self-adjoint with respect to the standard scalar product on $\mathcal{H}^{\otimes 2}$. Further, for any $n>2$ consider the following extensions of $T$ to $\mathcal{H}^{\otimes n}$:
\[
T_i=\bigotimes_{k=1}^{i-1}\mathbf{1}_{\mathcal{H}}\otimes T\otimes
\bigotimes_{k=i+2}^n\mathbf{1}_{\mathcal{H}},\quad i=1,\ldots,n-1.
\]
Then we set $P_0=1$, $P_1=\mathbf{1}_{\mathcal{H}}$, $P_2=\mathbf{1}_{\mathcal{H}^{\otimes 2}}+T$ and
\begin{equation}\label{pn}
P_{n}=(\mathbf{1}_{\mathcal{H}}\otimes P_{n-1})R_n,\ n\ge 3,
\end{equation}
where
\[
R_n\colon\mathcal{H}^{\otimes n}\rightarrow\mathcal{H}^{\otimes n},\quad R_n=\mathbf{1}_{\mathcal{H}^{\otimes n}}+T_1+T_1T_2+\cdots+T_1T_2\cdots T_{n-1}.
\]
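To illustrate the construction of the operators $T_i$, $R_n$ and $P_n$, the following numerical sketch assumes the simplest quonic choice $T_{ij}^{kl}=q\,\delta_{ik}\delta_{jl}$, so that $T$ is $q$ times the flip, a braided contraction for $0<q<1$; the parameter values $d=2$, $q=1/2$ are illustrative only. It checks the braid relation and the positivity of $P_n$.

```python
import numpy as np

d, q, nmax = 2, 0.5, 4   # illustrative dimension and deformation parameter

# T = q * flip: T (e_k (x) e_l) = q e_l (x) e_k, corresponding to the
# quonic coefficients T_ij^kl = q * delta_ik * delta_jl
T = np.zeros((d * d, d * d))
for k in range(d):
    for l in range(d):
        T[l * d + k, k * d + l] = q

def T_i(i, n):
    """1^{(i-1)} (x) T (x) 1^{(n-i-1)} acting on H^{(x) n}, i = 1..n-1."""
    return np.kron(np.kron(np.eye(d**(i - 1)), T), np.eye(d**(n - i - 1)))

def R(n):
    """R_n = 1 + T_1 + T_1 T_2 + ... + T_1 ... T_{n-1}."""
    M = np.eye(d**n)
    prod = np.eye(d**n)
    for i in range(1, n):
        prod = prod @ T_i(i, n)
        M = M + prod
    return M

def P(n):
    """P_1 = 1, P_2 = 1 + T, P_n = (1 (x) P_{n-1}) R_n."""
    if n <= 1:
        return np.eye(d**n)
    if n == 2:
        return np.eye(d * d) + T
    return np.kron(np.eye(d), P(n - 1)) @ R(n)

# braid relation T1 T2 T1 = T2 T1 T2 on H^{(x) 3}
assert np.allclose(T_i(1, 3) @ T_i(2, 3) @ T_i(1, 3),
                   T_i(2, 3) @ T_i(1, 3) @ T_i(2, 3))
# positivity of the Fock scalar product: P_n >= 0 for n = 2..nmax
for n in range(2, nmax + 1):
    assert np.linalg.eigvalsh(P(n)).min() > 0
print("braid relation and positivity of P_n verified")
```

Since $\ker(\mathbf{1}+T)=\{0\}$ for this $T$ (the operator $\mathbf{1}+T$ has eigenvalues $1\pm q>0$), the operators $P_n$ are in fact strictly positive here, in line with the positivity results of \cite{bs} quoted below.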
\begin{remark}
The operators $R_n$, $n\ge 2$, are used to obtain explicit formulas
for commutation relations between the generators $a_i^*$, $i=1,\ldots,d$,
and homogeneous polynomials in the noncommutative variables $a_1,\ldots,
a_d$. Namely, by \cite{proams}, for $X\in\mathcal{H}^{\otimes n}$ one has the
following equality in $W(T)$ (here we use the canonical
realisation)
\[
e_i^*\otimes X=\mu_0(e_i^*)(R_n X+\sum_{k=1}^d T_1T_2\cdots T_n (X\otimes e_k)\otimes e_k^*),
\]
where $\mu_0(e_i^*):\mathcal T(\mathcal H)\to \mathcal T(\mathcal H)$ is given by
\[
\mu_0(e_i^*)e_{i_1}\otimes e_{i_2}\otimes\cdots e_{i_s}=\delta_{ii_1}e_{i_2}\otimes\cdots e_{i_s},\ s\ge 1,\quad \mu_0(e_i^*)\Omega =0.
\]
This allows one to determine explicitly the action of $a_i^*$ in the Fock representation as follows
\[
a_i^* X=\mu_0(e_i^*)R_n X,\quad X\in\mathcal{H}^{\otimes n}.
\]
\end{remark}
Positivity of the Fock scalar product means that $P_n\geq 0$ for all $n\ge 2$. In this case the Fock representation can be extended to a $*$-representation
of $W(T)$ on a Hilbert space, which is a completion of $\mathcal{T}(\mathcal{H})/\bigoplus_{n\ge 2}\ker P_n$ with respect to the norm defined by the Fock scalar product. Sufficient conditions for positivity of family $\{P_n,\ n\ge 2\}$ can be found in \cite{bs,jps, jsw}.
For instance if $T$ is {\it braided}, i.e. $T_1T_2T_1=T_2T_1T_2$ on $\mathcal{H}^{\otimes 3}$, and $||T||\le 1$, then by \cite{bs} $P_n\ge 0$, $n\ge 2$. Moreover in this case for any $n\ge 2$
\[
\ker P_n=\sum_{i=1}^{n-1}\ker (\mathbf{1}_{\mathcal{H}^{\otimes n}}+T_i)
\]
and the kernel of the Fock representation is generated as a two-sided $*$-ideal by $\ker (\mathbf{1}_{\mathcal{H}^{\otimes 2}}+T)$, see \cite{jps}.
Furthermore, if
$T$ is braided and $\ker (\mathbf{1}_{\mathcal{H}^{\otimes 2}}+T)\ne \{0\}$, the two-sided ideal $\mathcal{I}_2\subset\mathcal{T}(\mathcal{H})$ generated by $\ker (\mathbf{1}_{\mathcal{H}^{\otimes 2}}+T)$ is invariant with respect to multiplication by any $a_i^*$, $i=1,\ldots,d$, i.e.
\begin{equation}\label{ideal}
e_i^*\otimes\mathcal{I}_2\subset\mathcal{I}_2+\mathcal{I}_2\otimes\mathcal{H}^{*}
\end{equation}
Ideals $I\subset\mathcal{T}(\mathcal{H})$ satisfying (\ref{ideal}) are called {\it Wick ideals}, see \cite{jsw}. It was shown that homogeneous Wick ideals, i.e. those generated by subspaces of $\mathcal{H}^{\otimes n}$, are annihilated by the Fock representation, see \cite{jsw}. In \cite{jps} the authors prove that if the operator $T$ is braided, then the existence of homogeneous Wick ideals is necessary for the existence of Wick ideals in general. If $T$ is a braided contraction, then any homogeneous Wick ideal of higher degree is contained in the largest quadratic one, see \cite{jps}. Note that for some Wick algebras (e.g. Wick algebras associated with the twisted canonical commutation relations of W. Pusz and S.L. Woronowicz, see \cite{jsw,pw}; quonic commutation relations, see \cite{mar}; and others) the quadratic Wick ideals are contained in the $*$-radicals, i.e. such ideals are annihilated by any bounded $*$-representation of the corresponding algebra.
In this paper we investigate the structure of homogeneous Wick ideals of higher degrees. We present a method to construct a homogeneous Wick ideal
$\mathcal{I}_{n+1}$ of degree $n+1$ out of a homogeneous Wick ideal $\mathcal{I}_n$ of degree $n$ so that $\mathcal{I}_{n+1}\subset\mathcal{I}_n$. We show that in some particular cases this procedure yields a description of the largest homogeneous Wick ideals of higher degrees in terms of the generators of the largest quadratic Wick ideal only.
Finally, we study classes of $*$-representations of the Wick version of the CCR annihilating certain homogeneous Wick ideals of degree higher than $2$.
\section{Wick ideals: basic definitions and properties.}
The notion of a Wick ideal in a quadratic Wick algebra was introduced in \cite{jsw}. It was proposed as a natural way to impose additional relations between the generators $a_i$, $i=1,\ldots,d$, which are consistent with the basic relations of the algebra.
Following \cite{jsw}, we will work with the canonical realisation of $W(T)$ as a quotient of the tensor algebra $\mathcal T(\mathcal H,\mathcal H^*)$
given by (\ref{canonic}). In this realisation the subalgebra generated by $a_i$, $i=1,\ldots,d$, is identified with $\mathcal{T}(\mathcal{H})$.
\begin{definition}\label{defwickid}
A two-sided ideal $\mathcal{I}\subset\mathcal{T}(\mathcal{H})$
is called a Wick ideal if
\[
\mathcal{T}(\mathcal{H}^*)\otimes\mathcal{I}\subset\mathcal{I}\otimes
\mathcal{T}(\mathcal{H}^*).
\]
If the Wick ideal $\mathcal{I}$ is generated by a subspace $\mathcal{I}_0\subset\mathcal{H}^{\otimes n}$, then $\mathcal{I}$ is called a homogeneous Wick ideal of degree $n$.
\end{definition}
It is easy to verify the following criterion for a two-sided ideal $\mathcal{I}$ to be a Wick one, see \cite{jsw}.
\begin{proposition}
A two-sided ideal $\mathcal{I}\subset\mathcal{T}(\mathcal{H)}$ is Wick iff
\[
\mathcal{H}^*\otimes\mathcal{I}\subset\mathcal{I}+\mathcal{I}\otimes\mathcal{H}^*.
\]
\end{proposition}
\begin{remark}
If an ideal $\mathcal{I}\subset\mathcal{T}(\mathcal{H})$ is generated by a subspace $\mathcal{I}_0\subset\mathcal{H}^{\otimes n}$, then it is Wick iff
\[
\mathcal{H}^*\otimes\mathcal{I}_0\subset\mathcal{I}_0+
\mathcal{I}_0\otimes\mathcal{H}^*
\]
\end{remark}
From the representation-theoretic point of view it is important to obtain a
precise description of the generators of homogeneous Wick ideals of degree higher than $2$. The first step in this direction was made in \cite{proams}, where the following statement was proved.
\begin{proposition}\label{cubide}
Let $T$ be a braided contraction and let $\mathcal{I}_2\subset\mathcal{H}^{\otimes 2}$ generate the largest quadratic Wick ideal. Then
\[
\mathcal{I}_3=(\mathbf{1}_{\mathcal{H}^{\otimes 3}}-T_1T_2)(\mathcal{I}_2\otimes\mathcal{H})
\]
generates the largest Wick ideal of degree 3.
\end{proposition}
Below we will often say ``homogeneous Wick ideal of degree $n$'' meaning the
linear subspace in $\mathcal{H}^{\otimes n}$ generating this ideal.
\section{Homogeneous Wick ideals}
We start with a simple observation, showing that the product of homogeneous Wick ideals is again a homogeneous Wick ideal.
\begin{proposition}
Let $\mathcal{J}_n$ and $\mathcal{J}_k$ be homogeneous Wick ideals of degree $n$ and $k$ respectively, then their tensor product $\mathcal{J}_n\otimes\mathcal{J}_k$ is a homogeneous Wick ideal of degree $n+k$.
\end{proposition}
\begin{proof}
Indeed, since for a Wick ideal one has
\[
\mathcal{H}^*\otimes\mathcal{I}\subset\mathcal{I}+\mathcal{I}\otimes\mathcal{H}^*
\]
we get
\begin{align*}
\mathcal{H}^* &\otimes(\mathcal{J}_n\otimes\mathcal{J}_k)
\subset(\mathcal{J}_n+\mathcal{J}_n\otimes\mathcal{H}^*)
\otimes\mathcal{J}_k=\mathcal{J}_n\otimes\mathcal{J}_k+
\mathcal{J}_n\otimes\mathcal{H}^*\otimes\mathcal{J}_k\subset\\
&\subset\mathcal{J}_n\otimes\mathcal{J}_k+
\mathcal{J}_n\otimes\mathcal{J}_k
+\mathcal{J}_n\otimes\mathcal{J}_k\otimes\mathcal{H}^*
=\mathcal{J}_n\otimes\mathcal{J}_k
+\mathcal{J}_n\otimes\mathcal{J}_k\otimes\mathcal{H}^*.
\end{align*}
Thus, $\mathcal{J}_n\otimes\mathcal{J}_k\subset\mathcal{H}^{\otimes (n+k) }$ is a Wick ideal.
\end{proof}
The following proposition was proved in \cite{jsw} for quadratic Wick ideals and in \cite{proams} in general case.
\begin{proposition}\label{wickide}
Let $P\colon\mathcal{H}^{\otimes n}\rightarrow\mathcal{H}^{\otimes
n}$ be a projection. The subspace
$\mathcal{I}=P(\mathcal{H}^{\otimes n})$ generates a Wick ideal iff
\begin{enumerate}
\item $R_nP=0$ (equality in $\mathcal{H}^{\otimes n}$),
\item $[\mathbf{1}_{\mathcal{H}}\otimes (\mathbf{1}_{\mathcal{H}^{\otimes n}}-P)]T_1T_2\cdots T_n [P\otimes\mathbf{1}_{\mathcal{H}}]=0$ (equality in $\mathcal{H}^{\otimes n+1}$).
\end{enumerate}
Moreover, if $T$ is braided and $P$ is the projection onto $\ker R_n$, the second condition holds automatically and hence $\ker R_n$ generates the largest homogeneous Wick ideal of degree $n$.
\end{proposition}
\begin{remark}\label{remincl}
Note that the second condition of Proposition \ref{wickide} means
\[
T_1T_2\cdots T_n \bigl(\mathcal{I}\otimes\mathcal{H}\bigr)\subset\mathcal{H}\otimes\mathcal{I}.
\]
\end{remark}
\begin{lemma}\label{ntonplus1}
Let $\mathcal{I}\subset\mathcal{H}^{\otimes n}$ generate a homogeneous Wick ideal, then
\[
\bigl(\mathbf{1}_{\mathcal{H}^{\otimes( n+1)}}-T_1T_2\cdots T_n\bigr)(\mathcal{I}\otimes\mathcal{H})\subset\ker R_{n+1}.
\]
\end{lemma}
\begin{proof}
Let $X\in\mathcal{I}$. Then $X\in\ker R_n$. Note that
\[
R_{n+1}=R_n\otimes\mathbf{1}_{\mathcal{H}}+T_1T_2\cdots T_n=\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}+T_1(\mathbf{1}_{\mathcal{H}}\otimes R_n)
\]
Then
for any $i=1,\ldots,d$ one has
\begin{align*}
R_{n+1}&(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_1T_2\cdots T_n)(X\otimes e_i)=\\
&=R_{n+1}(X\otimes e_i)-
R_{n+1}T_1T_2\cdots T_n (X\otimes e_i)=\\
&=(R_n\otimes\mathbf{1}_{\mathcal{H}}+T_1T_2\cdots T_n)(X\otimes e_i)\\
&-
(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}+T_1(\mathbf{1}_{\mathcal{H}}\otimes R_n))T_1T_2\cdots T_n (X\otimes e_i)=\\
&=T_1T_2\cdots T_n(X\otimes e_i)-T_1T_2\cdots T_n(X\otimes e_i)\\
& -T_1(\mathbf{1}_{\mathcal{H}}\otimes R_n)T_1T_2\cdots T_n(X\otimes e_i)=0,
\end{align*}
where we used
\[
T_1T_2\cdots T_n(\mathcal{I}\otimes\mathcal{H})\subset\mathcal{H}\otimes{\mathcal{I}}
\subset\mathcal{H}\otimes\ker R_n=\ker(\mathbf{1}_{\mathcal{H}}\otimes R_n).
\]
\end{proof}
\noindent The following corollary is immediate.
\begin{corollary}
If the operator $T$ is braided, then
\[
(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_1T_2\cdots T_n)(\ker R_n\otimes\mathcal{H})\subset\ker R_{n+1}.
\]
\end{corollary}
Below we will use the following simple observation.
\begin{lemma}
Let $T$ be braided. Then for any $n\ge 2$ and $k\le n-1$
\[
(T_1T_2\cdots T_n)(T_1T_2\cdots T_k)=(T_2T_3\cdots T_{k+1})(T_1T_2\cdots T_n).
\]
\end{lemma}
\begin{proof}
Evidently it is enough to check that
\[
T_1T_2\cdots T_n T_j=T_{j+1}T_1T_2\cdots T_n,\quad 1\le j\le n-1.
\]
Indeed, since $T_iT_j=T_jT_i$ when $|i-j|\ge 2$ and $T_jT_{j+1}T_j=T_{j+1}T_jT_{j+1}$ we get
\begin{align*}
T_1T_2\cdots T_n T_j & =
T_1T_2\cdots T_{j-1}T_{j}T_{j+1}T_jT_{j+2}\cdots T_n=\\
&=T_1T_2\cdots T_{j-1}T_{j+1}T_jT_{j+1}T_{j+2}\cdots T_n=\\
&=T_{j+1}T_1T_2\cdots T_n.
\end{align*}
\end{proof}
The following proposition gives a procedure to compute
generators of certain homogeneous Wick ideals of degree $n+1$ out of generators of Wick ideals of degree $n$ when $T$ is braided.
\begin{proposition}
Let $T$ be braided and $\mathcal{I}_n\subset\mathcal{H}^{\otimes n}$ generate a homogeneous Wick ideal of degree $n$. Then
\[
\mathcal{I}_{n+1}=(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_1T_2\cdots T_n)(\mathcal{I}_n\otimes\mathcal{H})
\]
generates a homogeneous Wick ideal of degree $n+1$.
\end{proposition}
\begin{proof}
According to Lemma \ref{ntonplus1}
\[
(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_1T_2\cdots T_n)(\mathcal{I}_n\otimes\mathcal{H})\subset\ker R_{n+1}
\]
so, it remains to prove that
\begin{equation}\label{**}
T_1T_2\cdots T_{n+1}(\mathcal{I}_{n+1}\otimes\mathcal{H})\subset
\mathcal{H}\otimes\mathcal{I}_{n+1}.
\end{equation}
Indeed
\begin{align*}
T_1T_2\cdots T_{n+1}&(\mathcal{I}_{n+1}\otimes\mathcal{H})=T_1T_2\cdots T_{n+1}
(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_1T_2\cdots T_n)(\mathcal{I}_n\otimes\mathcal{H}\otimes\mathcal{H})=\\
&=(T_1T_2\cdots T_{n+1}-T_1T_2\cdots T_{n+1}T_1T_2\cdots T_n)
(\mathcal{I}_n\otimes\mathcal{H}\otimes\mathcal{H})=\\
&=(T_1T_2\cdots T_{n+1}-T_2T_3\cdots T_{n+1}T_1T_2\cdots T_{n+1})(\mathcal{I}_n\otimes\mathcal{H}\otimes\mathcal{H})=\\
&=(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_2T_3\cdots T_{n+1})T_1T_2\cdots T_{n+1}(\mathcal{I}_n\otimes\mathcal{H}\otimes\mathcal{H})=\\
&=(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_2T_3\cdots T_{n+1})T_1T_2\cdots T_n(\mathcal{I}_n\otimes T(\mathcal{H}\otimes\mathcal{H}))\subset\\
&\subset (\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_2T_3\cdots T_{n+1})T_1T_2\cdots T_n(\mathcal{I}_n\otimes\mathcal{H}\otimes\mathcal{H})\subset\\
&\subset
(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_2T_3\cdots T_{n+1})(\mathcal{H}\otimes\mathcal{I}_n\otimes\mathcal{H})=\\
&=\mathcal{H}\otimes (\mathbf{1}_{\mathcal{H}^{\otimes n}}-T_1T_2\cdots T_n)(\mathcal{I}_n\otimes\mathcal{H})=\\
&=\mathcal{H}\otimes\mathcal{I}_{n+1}.
\end{align*}
\end{proof}
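As a concrete check of this proposition, one can take the two-dimensional quonic operator $T$ treated in Section 4 (with illustrative values $q=1/2$, $\lambda=i$), build $\mathcal{K}_3=(\mathbf{1}-T_1T_2)(\mathcal{K}_2\otimes\mathcal{H})$ from $\mathcal{K}_2=\ker(\mathbf{1}+T)$, and verify numerically both $\mathcal{K}_3\subset\ker R_3$ and the inclusion (\ref{**}):

```python
import numpy as np

q, lam, d = 0.5, 1j, 2   # illustrative quonic parameters, |lam| = 1
e = np.eye(d)            # e[0] = e_1, e[1] = e_2

# matrix of the quonic operator T in the basis e_i (x) e_j
T = np.zeros((4, 4), dtype=complex)
T[0, 0] = T[3, 3] = q
T[2, 1], T[1, 2] = np.conj(lam), lam

def T_i(i, n):
    """1^{(i-1)} (x) T (x) 1^{(n-i-1)} acting on H^{(x) n}."""
    return np.kron(np.kron(np.eye(d**(i - 1)), T), np.eye(d**(n - i - 1)))

def R(n):
    """R_n = 1 + T_1 + T_1 T_2 + ... + T_1 ... T_{n-1}."""
    M = np.eye(d**n, dtype=complex)
    prod = np.eye(d**n, dtype=complex)
    for i in range(1, n):
        prod = prod @ T_i(i, n)
        M = M + prod
    return M

A = np.kron(e[1], e[0]) - lam * np.kron(e[0], e[1])   # spans K_2 = ker(1 + T)

# K_3 = (1 - T1 T2)(K_2 (x) H): two spanning vectors
K3 = [(np.eye(8) - T_i(1, 3) @ T_i(2, 3)) @ np.kron(A, e[i]) for i in range(d)]
assert all(np.allclose(R(3) @ v, 0) for v in K3)      # K_3 lies in ker R_3

# inclusion T1 T2 T3 (K_3 (x) H) in H (x) K_3
span = np.column_stack([np.kron(e[i], v) for i in range(d) for v in K3])
B = T_i(1, 4) @ T_i(2, 4) @ T_i(3, 4)
for v in K3:
    for j in range(d):
        w = (B @ np.kron(v, e[j])).reshape(-1, 1)
        assert np.linalg.matrix_rank(np.hstack([span, w])) == np.linalg.matrix_rank(span)
print("verified for the d = 2 quonic example")
```

The membership test is performed by checking that appending each image vector to a spanning set of $\mathcal{H}\otimes\mathcal{K}_3$ does not increase its rank.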
Our next aim is to describe the largest Wick ideals.
\begin{lemma}
Let $T$ satisfy the braid relation. Then
\begin{equation}\label{rntcomm}
R_{n+1}T_1T_2\cdots T_n=T_1T_2\cdots T_n+T_1^2T_2\cdots T_n
(R_n\otimes\mathbf{1}_{\mathcal{H}})
\end{equation}
\end{lemma}
\begin{proof}
Indeed
\begin{align*}
& R_{n+1}T_1T_2\cdots T_n=\\
&=T_1T_2\cdots T_n+
T_1(\mathbf{1}_{\mathcal{H}^{\otimes n+1}}+T_2+T_2T_3+\cdots +
T_2T_3\cdots T_n)T_1T_2\cdots T_n=\\
&=T_1T_2\cdots T_n+T_1^2T_2\cdots T_n
(\mathbf{1}_{\mathcal{H}^{\otimes n+1}}+T_1+T_1T_2+\cdots+T_1T_2\cdots T_{n-1})
=\\
&=T_1T_2\cdots T_n+T_1^2T_2\cdots T_n (R_n\otimes\mathbf{1}_{\mathcal{H}}).
\end{align*}
\end{proof}
\begin{lemma}\label{rntn}
Let $T$ be braided. Then
\[
R_{n+1}(\mathbf{1}_{\mathcal{H}^{\otimes( n+1)}}-T_1T_2\cdots T_n)=(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_1^2T_2\cdots T_n)(R_n\otimes\mathbf{1}_{\mathcal{H}}).
\]
\end{lemma}
\begin{proof}
By the previous Lemma
\begin{align*}
R_{n+1}&-R_{n+1}T_1T_2\cdots T_n=\\
&=R_n\otimes\mathbf{1}_{\mathcal{H}}+T_1T_2\cdots T_n-T_1T_2\cdots T_n-T_1^2T_2\cdots T_n (R_n\otimes\mathbf{1}_{\mathcal{H}})=\\
&=(\mathbf{1}_{\mathcal{H}^{\otimes (n+1)}}-T_1^2T_2\cdots T_n)(R_n\otimes\mathbf{1}_{\mathcal{H}}).
\end{align*}
\end{proof}
Let $\mathcal{K}_2=\ker R_2$ and
\[
\mathcal{K}_{m+1}=(\mathbf{1}_{\mathcal{H}^{\otimes (m+1)}}-T_1T_2\cdots T_{m})(\mathcal{K}_m\otimes\mathcal{H}),\quad m\ge 2.
\]
Since by (\ref{**})
\[
\mathcal{K}_{m+1}\subset\mathcal{H}\otimes\mathcal{K}_m+
\mathcal{K}_m\otimes\mathcal{H},
\]
the Wick ideals generated by $\mathcal{K}_m$, $m\ge 2$, form a nested sequence
\[
\langle\mathcal{K}_2\rangle\supset\langle\mathcal{K}_3\rangle\supset\cdots\supset\langle \mathcal{K}_m\rangle\supset\cdots
\]
\begin{proposition}\label{keqr}
Suppose that $T$ is braided and for any $m\ge 2$
\[
\ker (\mathbf{1}_{\mathcal{H}^{\otimes (m+1)}}-T_1T_2\cdots T_m)=\{0\}\quad \mbox{and}\quad\ker(\mathbf{1}_{\mathcal{H}^{\otimes (m+1)}}-T_1^2T_2\cdots T_m)=\{0\}.
\]
Then
\[
\mathcal{K}_m=\ker R_m,\ m\ge 2,
\]
and hence $\mathcal{K}_m$
generates the largest homogeneous Wick ideals of degree $m$ for any $m\ge 2$.
\end{proposition}
\begin{proof}
Suppose that $\dim\mathcal{H}=d$. Since the operators
$\mathbf{1}_{\mathcal{H}^{\otimes (m+1)}}-T_1T_2\cdots T_m$, $m\ge 2$, are invertible by assumption, the definition of $\mathcal{K}_m$ gives
\[
\dim\mathcal{K}_m=d\cdot\dim\mathcal{K}_{m-1}=d^{m-2}\cdot\dim\ker R_2.
\]
As ${\mathcal K}_m\subset \ker R_m$ (by Lemma~\ref{ntonplus1})
it remains to see that for any $m\ge 2$ one has
\[
\dim\ker R_{m}=d\cdot\dim\ker R_{m-1}=\ldots=d^{m-2}\dim\ker R_2
\]
But this immediately follows from the equality
\[
R_{m+1}(\mathbf{1}_{\mathcal{H}^{\otimes m+1}}-T_1T_2\cdots T_m)=(\mathbf{1}_{\mathcal{H}^{\otimes m+1}}-T_1^2T_2\cdots T_m)(R_m\otimes\mathbf{1}_{\mathcal{H}})
\]
and invertibility of the operators
$\mathbf{1}_{\mathcal{H}^{\otimes m+1}}-T_1T_2\cdots T_m$ and
$\mathbf{1}_{\mathcal{H}^{\otimes m+1}}-T_1^2T_2\cdots T_m$.
Hence, $\dim\ker R_m=\dim\mathcal{K}_m$ and
\[
\mathcal{K}_m=\ker R_m,\quad m\ge 2.
\]
\end{proof}
\begin{lemma}\label{1notinsp}
Let $T$ be braided and $||T_1T_2T_1||=q<1$, $||T||=1$. Then $\ker R_m=\mathcal{K}_m$ for any $m\ge 2$.
\end{lemma}
\begin{proof}
By Proposition~\ref{keqr} it is enough to see that
\[
1\not\in\sigma(T_1T_2\cdots T_n)\quad\mbox{and}\quad 1\not\in\sigma(T_1^2T_2\cdots T_n).
\]
Indeed, since
$T_iT_j=T_jT_i$, $|i-j|\ge 2$, and $||T_i||=1$, $i=1,\ldots,n$, we get
\[
(T_1T_2T_3\cdots T_n)^2=(T_1T_2T_1)(T_3T_4\cdots T_nT_2T_3\cdots T_n)
\]
implying
\[
||(T_1T_2\cdots T_n)^2||\le q<1
\]
and hence $1\not\in\sigma(T_1T_2\cdots T_n)$.
Analogously,
\[
(T_1^2T_2\cdots T_n)^2=T_1(T_1T_2T_1)(T_3T_4\cdots T_nT_1T_2\cdots T_n)
\]
and $||(T_1^2T_2\cdots T_n)^2||\le q<1$ giving $1\notin \sigma(T_1^2T_2\ldots T_n)$.
\end{proof}
In what follows we shall often say ``the ideal ${\mathcal K}_m$'' meaning
the ideal generated by ${\mathcal K}_m$.
In general, see Section 3 and Section 4, the largest homogeneous Wick ideals do not coincide with the ideals $\mathcal{K}_m$. However, direct calculations in {\sc Mathematica} show that for some Wick algebras, including the Wick versions of the CCR, twisted CCR, twisted CAR and quonic commutation relations, see \cite{jsw}, the following conjecture is true.
\begin{conjecture}\label{strwide}
If $T$ is braided then
\[
\ker R_{n+1}=(\mathbf{1}_{\mathcal{H}^{\otimes( n+1)}}-T_1T_2\cdots T_n)(\ker R_n\otimes\mathcal{H})+\ker R_{n-2}\otimes\ker R_2.
\]
\end{conjecture}
\section{Homogeneous ideals of Wick version of quon commutation relations}
Here we apply the results of the previous section to obtain a
description of homogeneous ideals of the Wick algebra,
$\mathcal{A}_2^q$, associated with quon commutation relations with
two degrees of freedom, see \cite{mar}. Recall that
$\mathcal{A}_2^q$ is a $*$-algebra generated by elements $a_i$,
$a_i^*$, $i=1,2$, satisfying commutation relations of the form
\begin{align*}
a_i^*a_i&=1+qa_ia_i^*,\ i=1,2,\\
a_1^*a_2&=\lambda a_2 a_1^*,
\end{align*}
where $q$, $\lambda$ are parameters such that $0<q<1$,
$|\lambda|=1$. In this case $\dim\mathcal{H}=2$ and the operator $T$
is given by
\begin{align}\label{tquon}
T e_i\otimes e_i & =q e_i\otimes e_i,\ i=1,2,\nonumber\\
T e_1\otimes e_2 & =\overline{\lambda} e_2\otimes e_1,\quad T e_2\otimes e_1=\lambda e_1\otimes e_2
\end{align}
It is easy to verify that $T$ is braided, $||T||=1$ for any $q\in (0,1)$, $\lambda\in\mathbb{C}$, $|\lambda|=1$ and
\[ \ker
(\mathbf{1}_{\mathcal{H}^{\otimes 2}}+T)=\mathbb C\left<
A=e_2\otimes e_1-\lambda e_1\otimes e_2\right>.
\]
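These properties are easily confirmed numerically. The following sketch (with illustrative values $q=1/2$, $\lambda=i$) builds the matrix of $T$ from (\ref{tquon}) in the standard basis and checks the braid relation, $||T||=1$, and that $A$ spans $\ker(\mathbf{1}_{\mathcal{H}^{\otimes 2}}+T)$:

```python
import numpy as np

q, lam = 0.5, 1j            # illustrative parameter values, |lam| = 1
e = np.eye(2)               # e[0] = e_1, e[1] = e_2

T = np.zeros((4, 4), dtype=complex)
T[0, 0] = T[3, 3] = q                      # T e_i (x) e_i = q e_i (x) e_i
T[2, 1] = np.conj(lam)                     # T e_1 (x) e_2 = conj(lam) e_2 (x) e_1
T[1, 2] = lam                              # T e_2 (x) e_1 = lam e_1 (x) e_2

T1, T2 = np.kron(T, np.eye(2)), np.kron(np.eye(2), T)
assert np.allclose(T1 @ T2 @ T1, T2 @ T1 @ T2)          # T is braided
assert np.isclose(np.linalg.norm(T, 2), 1.0)            # ||T|| = 1

A = np.kron(e[1], e[0]) - lam * np.kron(e[0], e[1])     # e2 (x) e1 - lam e1 (x) e2
assert np.allclose((np.eye(4) + T) @ A, 0)              # A lies in ker(1 + T)
assert np.linalg.matrix_rank(np.eye(4) + T) == 3        # the kernel is 1-dimensional
print("ker(1 + T) = span{A} verified")
```

Indeed, $T A = \lambda\, e_1\otimes e_2 - \lambda\overline{\lambda}\, e_2\otimes e_1 = -A$, so $A$ is an eigenvector of $T$ with eigenvalue $-1$, while the remaining eigenvalues of $\mathbf{1}+T$ are $1+q$, $1+q$ and $2$.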
\begin{proposition}\label{quonwide}
Let $T\colon\mathcal{H}^{\otimes 2}\rightarrow\mathcal{H}^{\otimes
2}$ be defined by (\ref{tquon}) and $\dim\mathcal{H}=2$. Then for
any $m\ge 2$, $\ker R_m=\mathcal{K}_m$ is the largest homogeneous
Wick ideal of degree
$m$.
\end{proposition}
\begin{proof}
By Lemma~\ref{1notinsp} it is enough to show that
$||T_1T_2T_1||<1$. Indeed, it is easy to see that for the standard
orthonormal basis of $\mathcal{H}^{\otimes 3}$ one has
\begin{align*}
T_1T_2T_1 e_i\otimes e_i\otimes e_i&=q^3 e_i\otimes e_i\otimes e_i,\quad i=1,2\\
T_1T_2T_1 e_1\otimes e_1\otimes e_2&=q\overline{\lambda}^2 e_2\otimes e_1\otimes e_1,\quad
T_1T_2T_1 e_2\otimes e_1\otimes e_1=q\lambda^2 e_1\otimes e_1\otimes e_2\\
T_1T_2T_1 e_2\otimes e_2\otimes e_1&=q\lambda^2 e_1\otimes e_2\otimes e_2,\quad
T_1T_2T_1 e_1\otimes e_2\otimes e_2=q\overline{\lambda}^2 e_2\otimes e_2\otimes e_1\\
T_1T_2T_1 e_1\otimes e_2\otimes e_1&=q\ e_1\otimes e_2\otimes e_1,\quad\ \
T_1T_2T_1 e_2\otimes e_1\otimes e_2=q\ e_2\otimes e_1\otimes e_2.
\end{align*}
Hence $||T_1T_2T_1||=q<1$.
\end{proof}
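The norm computation and the resulting kernel dimensions can also be verified numerically (a sketch with illustrative values $q=1/2$, $\lambda=i$): since $\mathcal{K}_m=\ker R_m$ and $\dim\mathcal{K}_m=d^{m-2}\dim\ker R_2$, one expects $\dim\ker R_m=2^{m-2}$.

```python
import numpy as np

q, lam, d = 0.5, 1j, 2      # illustrative quonic parameters, |lam| = 1

T = np.zeros((4, 4), dtype=complex)
T[0, 0] = T[3, 3] = q
T[2, 1], T[1, 2] = np.conj(lam), lam

def T_i(i, n):
    """1^{(i-1)} (x) T (x) 1^{(n-i-1)} acting on H^{(x) n}."""
    return np.kron(np.kron(np.eye(d**(i - 1)), T), np.eye(d**(n - i - 1)))

def R(n):
    """R_n = 1 + T_1 + T_1 T_2 + ... + T_1 ... T_{n-1}."""
    M = np.eye(d**n, dtype=complex)
    prod = np.eye(d**n, dtype=complex)
    for i in range(1, n):
        prod = prod @ T_i(i, n)
        M = M + prod
    return M

# ||T1 T2 T1|| = q < 1, as in the proof above
assert np.isclose(np.linalg.norm(T_i(1, 3) @ T_i(2, 3) @ T_i(1, 3), 2), q)

# dim ker R_m = d^{m-2} * dim ker R_2 = 2^{m-2}
for m in range(2, 6):
    dim_ker = d**m - np.linalg.matrix_rank(R(m))
    assert dim_ker == 2**(m - 2)
print("dim ker R_m = 2^(m-2) for m = 2..5")
```

The kernel dimension is obtained as the rank deficiency of $R_m$; the singular values of these weighted-permutation-type operators are well separated from zero, so the default rank tolerance suffices.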
\begin{remark}
\begin{enumerate}
\item For Wick quonic relations with
three generators Lemma~\ref{1notinsp} cannot be applied, since in this case $||T_1T_2T_1||=1$. However, since $T$ is a braided contraction, we have by Proposition \ref{cubide}
$$\ker R_3=(\mathbf{1}_{\mathcal{H}^{\otimes 3}}-T_1T_2)(\ker
R_2\otimes\mathcal{H})
$$
and one can apply
Proposition \ref{keqr} to show that in this case
$\mathcal{K}_m=\ker R_m$, $m\ge 2$ as well.
\item Computations in {\sc Mathematica} show that for Wick quonic relations with four or more generators
the ideals $\mathcal{K}_m$ do not coincide with $\ker R_m$ for
$m>3$.
\end{enumerate}
\end{remark}
\subsection{$*$-Representations of $\mathcal{A}_2^q$, annihilating homogeneous ideals}
In this section we show that any $*$-representation of the Wick
quonic relations annihilating $\mathcal{K}_m$ for some fixed $m\ge
2$ annihilates the ideal $\mathcal{K}_2$.
First we recall that for any bounded $*$-representation $\pi$ of
$\mathcal{A}_2^q$ one has $\pi(\mathcal{K}_2)=0$, see \cite{osam}.
Indeed, it is easy to verify that if $A=a_2a_1-\lambda a_1a_2$, then
\[
a_1^*A=\lambda qAa_1^*,\quad a_2^* A=\overline{\lambda}q Aa_2^*
\]
implying that $A^*A=q^2AA^*$. Evidently, the only bounded operator
$A$ satisfying such a relation is the zero one.
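Indeed, for a bounded operator the $C^*$-identity gives
\[
\|A\|^2=\|A^*A\|=q^2\|AA^*\|=q^2\|A^*\|^2=q^2\|A\|^2,
\]
which forces $\|A\|=0$, since $0<q<1$.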
\begin{proposition}
Let $\pi$ be an irreducible $*$-representation (possibly unbounded)
of $\mathcal{A}_2^q$ such that $\pi(\mathcal{K}_m)=\{0\}$ for some
$m\ge 3$. Then $\pi(A)=0$ and hence $\pi(\mathcal K_2)=0$.
\end{proposition}
\begin{proof}
By Proposition \ref{quonwide}, for any $m\ge 3$ the
ideal $\mathcal{K}_m$ coincides with the largest homogeneous ideal
of degree $m$.
Let $m=2k$, for some $k>1$. Then, since the product of homogeneous
Wick ideals is a homogeneous Wick ideal, we get
\[
(\ker R_2)^{\otimes k}\subset\ker R_{2k}=\mathcal{K}_m.
\]
So if $\pi(\mathcal{K}_m)=\{0\}$, then $\pi(A^k)=0$ and hence
$\ker\pi(A)\ne\{0\}$. Further, $A^*A=q^2 AA^*$ implies that
$\ker\pi(A)=\ker\pi(A^*)$ and from
\[
Aa_1^*=\overline{\lambda}q^{-1}a_1^*A,\ Aa_2^*=\lambda q^{-1}a_2^*A,\ A^*a_1=\overline{\lambda}qa_1A^*,\ A^*a_2=\lambda q a_2A^*
\]
we obtain that $\ker\pi(A)=\ker\pi(A^*)$ is invariant with respect to $\pi(a_i)$ and $\pi(a_i^*)$, $i=1,2$.
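For example, if $\xi\in\ker\pi(A)$, then the first of these relations gives
\[
\pi(A)\pi(a_1^*)\xi=\overline{\lambda}q^{-1}\pi(a_1^*)\pi(A)\xi=0,
\]
so $\pi(a_1^*)\xi\in\ker\pi(A)$; the remaining cases are verified in the same way, using $\ker\pi(A)=\ker\pi(A^*)$.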
Thus if $\pi$ is irreducible, $\pi(A)=0$.
Suppose now that $\pi(\mathcal{K}_m)=\{0\}$ and $m=2k+1$, for fixed
$k\ge 1$. Then as above $A^{k+1}\in\mathcal{K}_{m+1}$. Since
$\langle\mathcal{K}_{m+1}\rangle\subset\langle \mathcal{K}_m\rangle$ we get $\pi(A^{k+1})=0$ and
repeating the arguments from the previous paragraph we obtain
$\pi(A)=0$.
\end{proof}
We refer the reader to \cite{schmudgen} for definitions and facts
about unbounded $*$-representations of $*$-algebras. Note that such
representations can be rather complicated and one usually restricts
oneself to a subclass of ``well-behaved'' representations. For Lie
algebras the natural well-behaved representations are the
integrable ones, i.e., those which can be integrated to a
unitary representation of the corresponding Lie group (see for
example \cite[Section 10]{schmudgen}).
\section{$*$-Representations of the Wick version of CCR annihilating homogeneous ideals}
In this section we consider a Wick version of CCR, denoted below by
$\mathcal{A}_d^0$, and given by
\[
\mathcal{A}_d^0=\mathbb{C}\left<a_i,\ a_i^* \mid
a_i^*a_j=\delta_{ij}\mathbf{1}+a_ja_i^*,\ i,j=1,\ldots, d\right>.
\]
In this case $T$ is the flip operator
\[
T e_i\otimes e_j=e_j\otimes e_i,\ i,j=1,\ldots,d
\]
and the largest quadratic ideal $\mathcal{K}_2=\ker R_2$ is
generated by the elements
\[
A_{ij}=e_j\otimes e_i-e_i\otimes e_j,\quad i\ne j,\ i,j=1,\ldots,d.
\]
The action of the operator $T_1T_2\cdots T_k$ on a product of the form
$B\otimes e_i$, $B\in\mathcal{H}^{\otimes k}$, $i=1,\ldots,d$, is the
following
\[
(T_1T_2\cdots T_k)(B\otimes e_i)=e_i\otimes B,\quad i=1,\ldots,d.
\]
Thus if the homogeneous Wick ideal $\mathcal{K}_m$ is generated by a
family $\{B_j,\ j\in\mathcal{J}\}$, then
\[
\mathcal{K}_{m+1}=\bigl<e_i\otimes B_j-B_j\otimes e_i,\ i=1,\ldots,d,\ j\in\mathcal{J}\bigr>.
\]
Recall that
\[
e_i^*\otimes B_j=\mu_0(e_i^*)\bigl(R_m B_j+\sum_{k=1}^d T_1T_2\cdots
T_m (B_j\otimes e_k)\otimes e_k^*\bigr),\ i=1,\ldots,d,\
j\in\mathcal{J}.
\]
Since
\[
T_1T_2\cdots T_m(B_j\otimes e_k)=e_k\otimes B_j,\quad R_mB_j=0,
\]
and $\mu_0(e_i^*)e_k\otimes X=\delta_{ik}X$ for any $X\in\mathcal{T}(\mathcal{H})$, we get
\[
e_i^*\otimes B_j=B_j\otimes e_i^*,\quad i=1,\ldots,d,\ j\in\mathcal{J}.
\]
In other words if we consider the quotient of $\mathcal{A}_d^0$ by
the homogeneous Wick ideal $\mathcal{K}_{m+1}$ we obtain the
following commutation relations between generators of the algebra
and generators of the ideal $\mathcal{K}_{m}$
\[
a_i^*B_j=B_ja_i^*,\quad a_iB_j=B_ja_i,\ i=1,\ldots,d,\ j\in\mathcal{J}.
\]
We intend to study representations of $\mathcal{A}_2^0$ annihilating
the ideals $\mathcal{K}_m$, $m=2,3,4$.
\subsection{Representations of $\mathcal{A}_2^0$ annihilating quadratic and cubic ideals}
Below we assume $d=2$. The quadratic ideal $\mathcal{K}_2$ is
generated by $a_2a_1 - a_1a_2$ and the quotient
$\mathcal{A}_2^0/\mathcal{K}_2$ is the Weyl algebra with two degrees
of freedom. Note that it is a quotient of the universal enveloping
of the Heisenberg algebra. The unique irreducible well-behaved
representation of the Weyl algebra (by well-behaved we mean a
representation which can be integrated to a unitary representation
of the Heisenberg Lie group)
is the Fock representation: the space of the
representation is
$\mathcal{H}=l_2(\mathbb{Z}_{+})\otimes l_2(\mathbb{Z}_{+})$ and
\[
a_1=a\otimes\mathbf{1},\quad a_2=\mathbf{1}\otimes a,
\]
where $a e_n=\sqrt{n+1}e_{n+1}$, $n\in\mathbb{Z}_{+}$, and $\{e_n,\
n\in\mathbb{Z}_+\}$ is the standard orthonormal basis in $l_2(\mathbb{Z}_{+})$.
Now we study irreducible representations of
$\mathcal{A}_2^0$ which annihilate the ideal $\mathcal{K}_3$.
The ideal $\mathcal{K}_3$ is generated by the elements
\[
Aa_1-a_1A,\quad Aa_2-a_2A
\]
with $A=a_2a_1-a_1a_2$.
Since $a_i^* A=A a_i^*$, $i=1$, $2$, we conclude that $A$ belongs to the center of the quotient $\mathcal{A}_2^0/\mathcal{K}_3$.
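The relation $a_1^*A=Aa_1^*$ follows directly from the defining relations of $\mathcal{A}_2^0$:
\begin{align*}
a_1^*A&=a_1^*a_2a_1-a_1^*a_1a_2=a_2a_1^*a_1-(\mathbf{1}+a_1a_1^*)a_2\\
&=a_2(\mathbf{1}+a_1a_1^*)-a_2-a_1a_1^*a_2=a_2a_1a_1^*-a_1a_2a_1^*=Aa_1^*,
\end{align*}
and the case of $a_2^*$ is analogous.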
For a well-behaved irreducible representation $\pi$, we assume that
$A$ commutes with $a_i$, $a_i^*$ strongly (i.e. $A$ is closable on
the domain of the representation and if $A=U|A|$ is the polar
decomposition of $A$, then $U$ and all spectral projections of $|A|$
belong to the strong commutant of the family $\{a_1,a_1^*, a_2,
a_2^*\}$, \cite{schmudgen})
and by the Schur lemma we
have $A=x\mathbf1$, $x\in \mathbb C$ (we denote the operators of the
representation by the same letters as the corresponding elements of
the algebra). Thus, the problem of classification of such
irreducible representations is reduced to the classification of
irreducible representations of the following family of commutation
relations
\begin{align}\label{ccrk3}
&a_i^*a_i-a_ia_i^*=\mathbf1,\quad i=1,2,\nonumber\\
&a_1^*a_2=a_2a_1^*,\quad
a_2a_1-a_1a_2=x\mathbf1.
\end{align}
Denote by $A_{2,x}$ the $*$-algebra generated by relations (\ref{ccrk3}) and by $A_{2,0}$ the $*$-algebra generated by CCR with two degrees of freedom.
\begin{proposition}\label{axa2}
The $*$-algebras $A_{2,x}$ and $A_{2,0}$ are isomorphic for any
$x\in\mathbb{C}$.
\end{proposition}
\begin{proof}
For any fixed $x\in\mathbb{C}$ let
\[
d_1=a_1\quad\mbox{and}\quad d_2=\Bigl(1+|x|^2\Bigr)^{-\frac{1}{2}}\bigl(a_2-xa_1^*\bigr).
\]
Then it is easy to verify that $d_1$, $d_2$ generate $A_{2,x}$ and
\begin{equation}\label{2ccr}
d_i^*d_i-d_id_i^*=1,\ i=1,2,\quad d_1^*d_2=d_2d_1^*,\quad d_2d_1=d_1d_2.
\end{equation}
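For instance, writing $d_2=(1+|x|^2)^{-1/2}(a_2-xa_1^*)$, the relation $d_2d_1=d_1d_2$ follows from (\ref{ccrk3}):
\[
(1+|x|^2)^{\frac{1}{2}}(d_2d_1-d_1d_2)=a_2a_1-a_1a_2-x(a_1^*a_1-a_1a_1^*)=x\mathbf{1}-x\mathbf{1}=0.
\]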
Conversely, let $c_1$, $c_2$ be generators of $A_{2,0}$ satisfying
(\ref{2ccr}). Put
\[
b_1=c_1,\quad b_2=\Bigl(1+|x|^2\Bigr)^{\frac{1}{2}}c_2+xc_1^*.
\]
Then $b_1$, $b_2$ satisfy (\ref{ccrk3}) and generate $A_{2,0}$. Hence $A_{2,x}\simeq A_{2,0}$.
\end{proof}
It follows from the uniqueness of irreducible well-behaved
representation of CCR with two degrees of freedom that there exists
a unique, up to a unitary equivalence, irreducible representation
of (\ref{2ccr}) defined on $l_2(\mathbb{Z}_{+})^{\otimes 2}$ by the
formulas
\begin{equation*}
d_1=a\otimes\mathbf{1},\quad d_2=\mathbf{1}\otimes a.
\end{equation*}
Below by {\it well-behaved} representation of $A_{2,x}$ we mean a
well-behaved representation of $A_{2,0}\simeq A_{2,x}$. Applying
Proposition \ref{axa2} we get the following result.
\begin{theorem}\label{repcubid}
For any $x\in\mathbb{C}$ there exists a unique, up to unitary
equivalence, irreducible well-behaved representation of $A_{2,x}$
given by
\begin{align*}
a_1&=a\otimes\mathbf{1},\\
a_2&=\sqrt{1+|x|^2}\mathbf{1}\otimes a+x a^*\otimes\mathbf{1}.
\end{align*}
\end{theorem}
Evidently in the case $x=0$ we get the Fock representation, annihilating $\mathcal{K}_2$.
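As an illustration, the relations (\ref{ccrk3}) satisfied by the operators of Theorem~\ref{repcubid} can be sanity-checked numerically on a truncated Fock space. The following sketch is not part of the argument; the truncation size $N$ and the value of $x$ are arbitrary choices, and NumPy is assumed:

```python
import numpy as np

N = 8
x = 0.7 + 0.3j

# Truncated creation operator on C^N: a e_n = sqrt(n+1) e_{n+1}
a = np.zeros((N, N), dtype=complex)
for n in range(N - 1):
    a[n + 1, n] = np.sqrt(n + 1)
I = np.eye(N, dtype=complex)

# Operators of Theorem (repcubid): a1 = a (x) 1, a2 = sqrt(1+|x|^2) 1 (x) a + x a* (x) 1
a1 = np.kron(a, I)
a2 = np.sqrt(1 + abs(x) ** 2) * np.kron(I, a) + x * np.kron(a.conj().T, I)

# Right-projector onto Fock states unaffected by the truncation
p = np.diag([1.0 if n < N - 1 else 0.0 for n in range(N)]).astype(complex)
P = np.kron(p, p)
Id = np.eye(N * N, dtype=complex)

def err(X):
    return np.linalg.norm(X @ P)

e1 = err(a1.conj().T @ a1 - a1 @ a1.conj().T - Id)  # a1* a1 - a1 a1* = 1
e2 = err(a2.conj().T @ a2 - a2 @ a2.conj().T - Id)  # a2* a2 - a2 a2* = 1
e3 = err(a1.conj().T @ a2 - a2 @ a1.conj().T)       # a1* a2 = a2 a1*
e4 = err(a2 @ a1 - a1 @ a2 - x * Id)                # a2 a1 - a1 a2 = x 1
print(e1, e2, e3, e4)
```

All four printed norms should be numerically zero; the projector $P$ discards the Fock states affected by the truncation.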
\subsection{Representations annihilating $\mathcal{K}_4$}
Let us describe representations of $\mathcal{A}_2^0$ which
annihilate the ideal $\mathcal{K}_4$. Recall that
\[
\mathcal{K}_4=\langle B_i a_j-a_j B_i,\quad i,j=1,2\rangle,
\]
where $B_i=A a_i-a_iA$, $i=1,2$, are generators of $\mathcal{K}_3$.
Since
\[
a_j^*B_i=B_ia_j^*,\ i=1,2,
\]
the elements $B_1$, $B_2$ belong to the center of the quotient $\mathcal{A}_2^0/\mathcal{K}_4$. Identifying again the elements with their images in a representation
$\pi$ annihilating $\mathcal K_4$ we require that for a well-behaved irreducible representation
\[
B_1=Aa_1-a_1A=x_1\mathbf{1},\quad B_2=Aa_2-a_2A=x_2\mathbf{1}
\]
for some $x_1$, $x_2\in\mathbb{C}$.
Note also that in $\mathcal{A}_2^0$ we have $a_i^*A=Aa_i^*$, $i=1,2$.
\subsubsection{Representations with $x_1\ne 0$.}
Fix $(x_1,x_2)\in\mathbb{C}^2$ with $x_1\ne 0$ and consider the $*$-algebra $A_{x_1,x_2}$, generated by elements $a_1$, $a_2$, $A$ satisfying the following commutation relations
\begin{gather}
a_i^*a_i-a_ia_i^*=1,\nonumber\\
a_1^*a_2=a_2a_1^*,\quad A=a_2a_1-a_1a_2,
\label{abasic}
\\
Aa_i-a_iA=x_i\mathbf{1},\quad a_i^*A=Aa_i^*,\quad i=1,2 \nonumber.
\end{gather}
Let
\begin{align}\label{c1c2c3}
d_1&=a_1\nonumber\\
d_2&=|x_1|^{-1}(A-x_1a_1^*)\\
d_3&=\Bigl(1+\frac{|x_2|^2}{|x_1|^2}\Bigr)^{-\frac{1}{2}}\Bigl(a_2+
\frac{x_2}{|x_1|}d_2^*
-\frac{\overline{x}_1}{2} d_2^2-|x_1|d_1^*d_2-
\frac{x_1}{2} (d_1^*)^2\Bigr)\nonumber
\end{align}
Below we show that the elements $d_i$, $i=1,2,3$, generate
$A_{x_1,x_2}$ and satisfy CCR with three degrees of freedom.
First we establish some commutation relations between $a_i$ and
$d_j$, $i,j=1,2$.
\begin{lemma}
The elements $a_1$, $a_2$, $d_1$, $d_2$ satisfy the following relations
\begin{align}\label{adcomrel}
d_1^*a_2&=a_2d_1^*,\nonumber\\
a_2d_1-d_1a_2&=|x_1|d_2+x_1d_1^*,\nonumber\\
a_2^*d_2&=d_2a_2^*+x_1d_2^*+|x_1|d_1,\\
a_2d_2&=d_2a_2-\frac{x_2}{|x_1|}.\nonumber
\end{align}
\end{lemma}
\begin{proof}
The first two relations follow directly from the definition of
$d_1$, $d_2$ and (\ref{abasic}). Further
\begin{align*}
|x_1|a_2d_2&=a_2A-x_1a_2a_1^*=Aa_2-x_2-x_1a_1^*a_2=\\
&=(A-x_1a_1^*)a_2-x_2=|x_1|d_2a_2-x_2,
\end{align*}
and
\begin{align*}
|x_1|a_2^*d_2&=a_2^*A-x_1a_2^*a_1^*=Aa_2^*-x_1(a_1^*a_2^*-A^*)=\\
&=(A-x_1a_1^*)a_2^*+x_1A^*=|x_1|d_2a_2^*+x_1(|x_1|d_2^*+\overline{x}_1d_1)=\\
&=|x_1|(d_2a_2^*+x_1d_2^*+|x_1|d_1).
\end{align*}
\end{proof}
\begin{lemma}\label{ax1x2toa3}
The elements $d_i$, $d_i^*$, $i=1,2,3$, generate $A_{x_1,x_2}$ and
satisfy CCR with three degrees of freedom, i.e. for any $i=1,2,3$
and $i\ne j$
\begin{equation}\label{cbasic}
d_i^*d_i-d_id_i^*=1,\quad d_i^*d_j=d_jd_i^*,\quad d_id_j=d_jd_i.
\end{equation}
\end{lemma}
\begin{proof}
It easily follows from (\ref{c1c2c3}) that
\begin{align}\label{a1a2a3}
a_1&=d_1, \nonumber\\
A&=|x_1|d_2+x_1d_1^*,\\
a_2&=\Bigl(1+\frac{|x_2|^2}{|x_1|^2}\Bigr)^{\frac{1}{2}}d_3-
\frac{x_2}{|x_1|}d_2^*+
\frac{\overline{x}_1}{2}d_2^2+|x_1| d_1^*d_2+\frac{x_1}{2}(d_1^*)^2\nonumber
\end{align}
proving that $A_{x_1,x_2}$ is generated by $d_1$, $d_2$, $d_3$.
Further
\begin{align*}
|x_1|d_2d_1&=(A-x_1a_1^*)a_1=Aa_1-x_1(1+a_1a_1^*)=\\
&=a_1A+x_1-x_1-x_1a_1a_1^*=a_1(A-x_1a_1^*)=|x_1|d_1d_2
\end{align*}
\[
|x_1|d_1^*d_2=a_1^*(A-x_1a_1^*)=Aa_1^*-x_1(a_1^*)^2=(A-x_1a_1^*)a_1^*=
|x_1|d_2d_1^*
\]
Now let us check that $d_2^*d_2-d_2d_2^*=1$
\begin{align*}
|x_1|^2d_2^*d_2&=(A^*-\overline{x}_1a_1)(A-x_1a_1^*)=\\
&=A^*A-x_1A^*a_1^*-\overline{x}_1a_1A+|x_1|^2a_1a_1^*=\\
&=AA^*-x_1(a_1^*A^*-\overline{x}_1)-\overline{x}_1(Aa_1-x_1)
+|x_1|^2(a_1^*a_1-1)=\\
&=AA^*-x_1a_1^*A^*-\overline{x}_1Aa_1+|x_1|^2a_1^*a_1+|x_1|^2=\\
&=(A-x_1a_1^*)(A^*-\overline{x}_1a_1)+|x_1|^2=|x_1|^2(1+d_2d_2^*).
\end{align*}
Here we used the evident fact that $AA^*=A^*A$.
The relation $d_1^*d_3=d_3d_1^*$ follows immediately from the definition of $d_3$ and the commutation relations between $d_1^*$ and $d_2$, $d_2^*$. Using these commutation relations again, as well as relations (\ref{adcomrel}), we get
\begin{align*}
\sqrt{1+\frac{|x_2|^2}{|x_1|^2}}(d_1d_3-d_3d_1)=&d_1a_2-a_2d_1+
|x_1|(d_1^*d_1-d_1d_1^*)d_2+\\
&+\frac{x_1}{2}((d_1^*)^2d_1-d_1(d_1^*)^2)=\\
=&-|x_1|d_2-x_1d_1^*+|x_1|d_2+\frac{x_1}{2}2d_1^*=0,
\end{align*}
\begin{align*}
\sqrt{1+\frac{|x_2|^2}{|x_1|^2}}(d_2^*d_3-d_3d_2^*)=&d_2^*a_2-a_2d_2^*-\\
&-\frac{\overline{x}_1}{2}(d_2^*d_2^2-d_2^2d_2^*)
-|x_1|(d_2^*d_2-d_2d_2^*)d_1^*=\\
&=\overline{x}_1d_2+|x_1|d_1^*-\frac{\overline{x}_1}{2}2d_2-|x_1|d_1^*=0
\end{align*}
and
\begin{align*}
\sqrt{1+\frac{|x_2|^2}{|x_1|^2}}(d_2d_3-d_3d_2)&=d_2a_2-a_2d_2+
\frac{x_2}{|x_1|}(d_2d_2^*-d_2^*d_2)=\\
&=\frac{x_2}{|x_1|}-\frac{x_2}{|x_1|}=0.
\end{align*}
Finally, since $d_3d_i=d_id_3$, $d_i^*d_3=d_3d_i^*$, $i=1,2$ one has
\begin{align*}
1=a_2^*a_2-a_2a_2^*&=\bigl(1+\frac{|x_2|^2}{|x_1|^2}\bigr)(d_3^*d_3-d_3d_3^*)-
\frac{|x_2|^2}{|x_1|^2}(d_2^*d_2-d_2d_2^*)+\\
&+\frac{|x_1|^2}{4}((d_2^*)^2d_2^2-d_2^2(d_2^*)^2)+
\frac{|x_1|^2}{4}(d_1^2(d_1^*)^2-(d_1^*)^2d_1^2)+\\
&+|x_1|^2(d_2^*d_2d_1d_1^*-d_1^*d_1d_2d_2^*)+\\
&+\frac{\overline{x}_1|x_1|}{2}d_1(d_2^*d_2^2-d_2^2d_2^*)+
\frac{\overline{x}_1|x_1|}{2}d_2(d_1^2d_1^*-d_1^*d_1^2)+\\
&+\frac{x_1|x_1|}{2}d_2^*(d_1(d_1^*)^2-(d_1^*)^2d_1)+
\frac{x_1|x_1|}{2}d_1^*((d_2^*)^2d_2-d_2(d_2^*)^2)=\\
&=\bigl(1+\frac{|x_2|^2}{|x_1|^2}\bigr)(d_3^*d_3-d_3d_3^*)-
\frac{|x_2|^2}{|x_1|^2}+\\
&+\frac{|x_1|^2}{4}(2+4d_2d_2^*)-
\frac{|x_1|^2}{4}(2+4d_1d_1^*)+|x_1|^2(d_1d_1^*-d_2d_2^*)+\\
&+\frac{\overline{x_1}|x_1|}{2}2d_1d_2-\frac{\overline{x_1}|x_1|}{2}2d_2d_1
-\frac{x_1|x_1|}{2}2d_2^*d_1^*+\frac{x_1|x_1|}{2}2d_1^*d_2^*=\\
&=\bigl(1+\frac{|x_2|^2}{|x_1|^2}\bigr)(d_3^*d_3-d_3d_3^*)-
\frac{|x_2|^2}{|x_1|^2}
\end{align*}
showing that $d_3^*d_3-d_3d_3^*=1$.
\end{proof}
Denote by $A_3$ the $*$-algebra generated by CCR with $3$ degrees of
freedom and denote by $c_1$, $c_2$, $c_3$ the canonical generators
of $A_3$. Construct elements $b_1$, $b_2$, $B$ of $A_3$ using
formulas (\ref{a1a2a3}).
\begin{lemma}\label{a3toax1x2}
The elements $b_1$, $b_2$, $B$ satisfy (\ref{abasic}) and generate
$A_3$.
\end{lemma}
\begin{proof}
It is evident that one can express $c_i$, $i=1,2,3$ via $b_1$,
$b_2$, $B$ using (\ref{c1c2c3}) with $b_1$, $b_2$, $B$ instead of
$a_1$, $a_2$, $A$. So $A_3$ is generated by $b_1$, $b_2$, $B$.
Let us show that $b_1$, $b_2$, $B$ satisfy (\ref{abasic}).
Indeed, a moment's reflection shows that $b_1^*b_2=b_2b_1^*$
and $b_1^*b_1-b_1b_1^*=1$. Further
\begin{align*}
b_2b_1-b_1b_2&=|x_1|c_1^*c_2c_1+\frac{x_1}{2}(c_1^*)^2c_1-|x_1|c_1c_1^*c_2-
\frac{x_1}{2}c_1(c_1^*)^2=\\
&=|x_1|(c_1^*c_1-c_1c_1^*)c_2+\frac{x_1}{2}((c_1^*)^2c_1-c_1(c_1^*)^2)=\\
&=|x_1|c_2+\frac{x_1}{2}2c_1^*=|x_1|c_2+x_1c_1^*=B,
\end{align*}
\begin{align*}
Bb_1&=|x_1|c_2c_1+x_1c_1^*c_1=|x_1|c_2c_1+x_1(1+c_1c_1^*)=\\
&=|x_1|c_1c_2+x_1c_1c_1^*+x_1=c_1(|x_1|c_2+x_1c_1^*)+x_1=b_1B+x_1,
\end{align*}
and
\[
Bb_2-b_2B=-\frac{x_2}{|x_1|}|x_1|c_2c_2^*+\frac{x_2}{|x_1|}|x_1|c_2^*c_2=
x_2(c_2^*c_2-c_2c_2^*)=x_2.
\]
Thus it remains to check that $b_2^*b_2-b_2b_2^*=1$. But in fact this
was done in Lemma \ref{ax1x2toa3}, when we checked that the
relation $d_3^*d_3-d_3d_3^*=1$ is satisfied.
\end{proof}
Using Lemma \ref{ax1x2toa3} and Lemma \ref{a3toax1x2} it is easy to
see that the $*$-algebras $A_{x_1,x_2}$ and $A_3$ are isomorphic.
\begin{proposition}
The $*$-algebra $A_{x_1,x_2}$ is isomorphic to the $*$-algebra $A_3$.
\end{proposition}
\begin{proof}
Let $\phi\colon A_{x_1,x_2}\rightarrow A_3$ be a homomorphism
defined by
\[
\phi(a_i)=b_i,\ i=1,2,\quad \phi(A)=B,
\]
where $b_1$, $b_2$ and $B$ are the generators constructed in Lemma
\ref{a3toax1x2}. Similarly define $\psi\colon A_3\rightarrow
A_{x_1,x_2}$ by
\[
\psi(c_i)=d_i,\quad i=1,2,3,
\]
where $d_i$ are taken from Lemma \ref{ax1x2toa3}.
Then $\psi\circ\phi=\mathrm{id}_{A_{x_1,x_2}}$ and
$\phi\circ\psi=\mathrm{id}_{A_3}$.
\end{proof}
Therefore in order to study irreducible representations of $A_{x_1,x_2}$
we can work with the generators $d_1,d_2,d_3$. As for the case of
representations annihilating $\mathcal{K}_3$, we say that
a representation of $A_{x_1,x_2}$ with $x_1\ne 0$ is {\it
well-behaved} if the corresponding representation of $A_3\simeq
A_{x_1,x_2}$ is well-behaved. Then from the uniqueness of irreducible
well-behaved $*$-representation of CCR with finite degrees of freedom
we get that the space of representation is
$\mathcal{H}=l_2(\mathbb{Z}_{+})^{\otimes 3}$ and
\[
d_1=a\otimes \mathbf1\otimes \mathbf1,\ d_2=\mathbf1\otimes a\otimes \mathbf1,\ d_3=\mathbf1\otimes \mathbf1\otimes a.
\]
Returning to the generators $a_1$, $a_2$, $A$ using (\ref{a1a2a3}) we get the
following result.
\begin{theorem}
For any $(x_1,x_2)\in\mathbb{C}^2$ with $x_1\ne 0$ there exists a
unique, up to a unitary equivalence, well-behaved irreducible
representation of $A_{x_1,x_2}$ defined on the generators by the
following formulas
\begin{align*}
a_1=&a\otimes \mathbf1\otimes \mathbf1,\\
a_2=&\sqrt{1+\frac{|x_2|^2}{|x_1|^2}}\ \mathbf1\otimes \mathbf1\otimes a-
\frac{x_2}{|x_1|}\mathbf1\otimes a^*\otimes \mathbf1+\frac{\overline{x}_1}{2}\mathbf1\otimes a^2\otimes \mathbf1+\\
&\hskip 3,9cm+|x_1|a^*\otimes a\otimes \mathbf1+\frac{x_1}{2}(a^*)^2\otimes \mathbf1\otimes \mathbf1,\\
A=&|x_1|\mathbf1\otimes a\otimes \mathbf1+x_1a^*\otimes \mathbf1\otimes \mathbf1.
\end{align*}
\end{theorem}
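As with Theorem~\ref{repcubid}, the relations (\ref{abasic}) for the operators above can be sanity-checked numerically on a truncated Fock space. Again, this is only an illustrative sketch (arbitrary $N$, $x_1$, $x_2$; NumPy assumed), not part of the proof:

```python
import numpy as np

N = 9
x1, x2 = 0.6 - 0.2j, 0.4 + 0.5j

# Truncated creation operator on C^N: a e_n = sqrt(n+1) e_{n+1}
a = np.zeros((N, N), dtype=complex)
for n in range(N - 1):
    a[n + 1, n] = np.sqrt(n + 1)
ad = a.conj().T
I = np.eye(N, dtype=complex)

def kron3(X, Y, Z):
    return np.kron(np.kron(X, Y), Z)

# Operators of the theorem on l_2(Z_+)^{(x) 3}
a1 = kron3(a, I, I)
A = abs(x1) * kron3(I, a, I) + x1 * kron3(ad, I, I)
a2 = (np.sqrt(1 + abs(x2) ** 2 / abs(x1) ** 2) * kron3(I, I, a)
      - (x2 / abs(x1)) * kron3(I, ad, I)
      + (np.conj(x1) / 2) * kron3(I, a @ a, I)
      + abs(x1) * kron3(ad, a, I)
      + (x1 / 2) * kron3(ad @ ad, I, I))

# Right-projector onto Fock states far enough from the truncation edge
p = np.diag([1.0 if n < N - 4 else 0.0 for n in range(N)]).astype(complex)
P = kron3(p, p, p)
Id = np.eye(N ** 3, dtype=complex)

def err(X):
    return np.linalg.norm(X @ P)

e1 = err(a2 @ a1 - a1 @ a2 - A)                     # a2 a1 - a1 a2 = A
e2 = err(A @ a1 - a1 @ A - x1 * Id)                 # A a1 - a1 A = x1 1
e3 = err(A @ a2 - a2 @ A - x2 * Id)                 # A a2 - a2 A = x2 1
e4 = err(a1.conj().T @ a1 - a1 @ a1.conj().T - Id)  # a1* a1 - a1 a1* = 1
e5 = err(a2.conj().T @ a2 - a2 @ a2.conj().T - Id)  # a2* a2 - a2 a2* = 1
print(e1, e2, e3, e4, e5)
```

All five printed norms should be numerically zero on the projected subspace.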
\subsubsection{Representations with $x_1=0$}
Let $x_1=0$ and $x_2\ne 0$. As in the previous case we have $A_{0,x_2}\simeq A_3$. To see this we express the generators $a_1$, $a_2$, $A$ via the generators $d_1$, $d_2$ and $d_3$ of CCR using formulas (\ref{a1a2a3}) with $a_2$, $-a_1$ instead of $a_1$, $a_2$ respectively, exchanging $x_1$ with $x_2$, and then letting $x_1=0$. For this we observe that $(-a_1)a_2-a_2(-a_1)=A$, and
$Aa_2-a_2A=x_2$.
Hence we get the following result.
\begin{theorem}
For any $x_2\in\mathbb{C}$, $x_2\ne 0$, there exists a unique, up to
a unitary equivalence, irreducible well-behaved $*$-representation
of $A_{0,x_2}$, defined by the following formulas
\begin{align*}
a_2&=a\otimes \mathbf1\otimes \mathbf1,\\
a_1&=-\bigl(\mathbf1\otimes \mathbf1\otimes a
+\frac{\overline{x}_2}{2}\mathbf1\otimes a^2\otimes \mathbf1+\\
&\hskip 3,9cm+|x_2|a^*\otimes a\otimes \mathbf1+\frac{x_2}{2}(a^*)^2\otimes \mathbf1\otimes \mathbf1\bigr),\\
A&=|x_2|\mathbf1\otimes a\otimes \mathbf1+x_2a^*\otimes \mathbf1\otimes \mathbf1.
\end{align*}
\end{theorem}
If both $x_1=0$ and $x_2=0$, then $Aa_i=a_iA$, $i=1,2$, and hence the cubic ideal $\mathcal{K}_3$ is annihilated. In this case irreducible well-behaved representations are described in Theorem \ref{repcubid}.
\subsection{Concluding remarks}
Note that our result shows in particular that in the case of $\mathcal{A}_2^0$ the ideal $\mathcal{K}_4$ does
not coincide with $\mathcal{I}_4$, the largest homogeneous ideal of degree $4$. Indeed, as noted above
$\mathcal{K}_2\otimes\mathcal{K}_2\subset\mathcal{I}_4$. So, if a representation $\pi$ annihilates
$\mathcal{I}_4$, then $\pi(A^2)=0$. Since $A$ is a normal element we immediately have $\pi(A)=0$.
However the representation that we constructed above has the property that $\pi(\mathcal{K}_4)=\{0\}$
but $\pi(A)\ne 0$. Thus $\mathcal{K}_4\ne\mathcal{I}_4$.
\section*{Acknowledgements} The work on this paper was supported by DFG grant SCHM1009/4-1. The paper was initiated during the visit of V. Ostrovskyi, D. Proskurin and L.~Turowska to Leipzig University; the warm hospitality and stimulating atmosphere are gratefully acknowledged. We are also deeply indebted to D.~Neiter for performing computations in {\sc Mathematica}.
\section{Introduction}
Reproducibility and replicability are often cited as the cornerstones of reliable science. Studies whose results cannot be reproduced or replicated by the scientific community should be treated with caution. The importance and benefits of reproducible and replicable research are well known \citep{donoho,peng2011}. For authors, the possible benefits include increasing the impact of their research: when authors provide computer code, other researchers can easily use and compare the method developed, which may lead to more citations. In addition, other possible benefits include improving work efficiency, work habits, communication, and teamwork, and minimizing errors in the research and computer code. For example, the training of students of all levels becomes dramatically easier, as students can immediately pick up where the previous student left off because the code was written with a reproducible mindset. Further, access to the computer code and data enables downstream scientific contributions, such as meta-analyses. For readers, the benefits include increased trustworthiness, a higher perceived quality of research, and easy adaptation and extension of the computer code and analysis. For the public, the benefits include reducing or preventing fraud and scandals related to the research, and increasing public access to the research (public goods). The quality of the education system can also be improved by encouraging students to interact with research papers by repeating a part of (or the entirety of) the analysis, rather than remaining passive readers. This practice enriches the experience, creates awareness, and becomes the norm after they graduate.
Recently, the importance of reproducibility and replicability in research has been frequently stressed in a wide range of scientific journals and magazines, and the terms ``reproducibility crisis'' and ``replication crisis'' have become increasingly prominent. A poll conducted by the journal \textit{Nature} in 2016 reported that more than half (52\%) of scientists surveyed believed science was facing a ``replication crisis'' \citep{baker}. The crisis involves the absence of replication studies in the published literature across many fields \citep{makel2012} (for example, the Open Science Collaboration found that fewer than half of the studies were successfully replicated in psychology), the ``file drawer effect'' or the inflated rate of false positives in the literature \citep{agnoli}, and the lack of a systematic approach for the description of methods, computer code, and data analysis in publications across all fields \citep{nuijten}. An earlier essay in \textit{PLOS Medicine} carried the provocative title, ``Why Most Published Research Findings Are False'' \citep{Ioannidis}. In addition, in 2013, a cover story in the popular magazine \textit{The Economist} invited readers to learn ``How Science Goes Wrong'', and Richard Harris’s popular 2017 book \textit{Rigor Mortis} provided many examples of purported failures in science.
While we might be inclined to use the terms ``reproduce'' and ``replicate'' interchangeably, these two terms are in fact distinct. Reproducibility is the ability to reproduce the results of another researcher beginning with their computer code and data, while replicability is the ability of independent researchers to collect new study data and verify the results (different groups of researchers use different terminologies, sometimes in utter contradiction with each other; for more details on terminology see \citealp{barba}). The objective of replicability is to quickly repudiate spurious results and enforce a rule-based approach to scientific discovery. Both terms facilitate the ongoing self-correcting nature of science. Indeed, a new scientific discovery requires both confirmation and extensive retesting in order to study the limits of the original result.
While replicability is generally regarded as the gold standard for verifying the result of a scientific study, it is often difficult to perform for many reasons, including experimental costs, experimental length, recreating experimental conditions, inherent variability in the system, the inability to control complex variables, and substandard research practices. Hence, reproducibility is the compromise. In fact, reproducibility can be used interchangeably with computational reproducibility \citep{stodden2018empirical}, which is embedded in numerous disciplines due to the ever-growing capabilities of modern computation. As the development of computational algorithms increases, the research community assumes the computational component of the work can be easily reproduced not only by the original authors but also by other researchers. However, the reality is very different. There is a multitude of problems in reproducible research. One common problem, for example, is the lack of detailed instructions for how the analyses (the computer code with the accompanying data) should be performed. Many of the workflows that are used to derive the results are highly customized which, in combination with the often-limited information provided in the corresponding paper, makes analyses hard to reproduce. Also, many computational algorithms remain opaque due to their increasing complexity. This makes the documentation, and hence the reproduction, both cumbersome and difficult. The level of detail required to reproduce the computational analysis is often not reported in the published paper, or the analysis is immensely time consuming. Additionally, the final computer code, scripts, or data for producing the final analysis may be lost or unrecoverable from the authors. Furthermore, large data sets themselves, due to their size, are infeasible to process without access to specialized computer resources \citep{BOULUND201881}.
The most frequently occurring issues associated with reproducibility can be summarized into four main points: (1) access to the real data; (2) availability of the final version of executable computation codes; (3) full details of the analysis workflow, and (4) complete description of the computer environment (and information on software versions) that was used to calculate the results. Consequently, many researchers have concluded that a credibility crisis is occurring in the field of computational analysis \citep{donoho}.
The field of statistics prides itself on its openness when it comes to sharing both computer code and data. In addition, anecdotally, in our experience, statisticians are very responsive to sharing computer code and data when requested by email. However, there is no quality control in the sharing of code and data. While there are currently a great many research papers on reproducibility in other computational fields, there has been no study, to the best of our knowledge, on the reproducibility of results in statistics. To this end, in this paper, we examine the current status of reproducibility in statistics by attempting to reproduce the results in seven prominent journals: the \textit{Annals of Applied Statistics}, \textit{Biometrics}, \textit{Biostatistics}, the \textit{Journal of Computational and Graphical Statistics}, the \textit{Journal of the American Statistical Association}, the \textit{Journal of the Royal Statistical Society: Series C}, and \textit{Statistics in Medicine} during the period 2010-2021. Many of these journals are currently in the process of revising author guidelines to include computer code and data availability. In the language of \cite{stodden2015reproducing}, we are focusing on computational reproducibility, which refers to ``changes in scientific practice and reporting standards to accommodate the use of computational technology occurring primarily over the past two decades, in particular whether the same results can be obtained from the data and code used in the original study''. Each journal publishes clear ``Requirement for codes'' and ``Requirement for data'' instructions for authors on its website and ``encourages'' or ``strongly encourages'' authors to provide the paper-related materials, including computation sources and data sets. Some of the journals also provide data archiving services for the convenience of data upload and management.
Badges in recognition of outstanding contributions to open research have been established as well. However, such attempts have not yet yielded commensurate returns.
We focus on all published papers utilizing functional magnetic resonance imaging (fMRI) data from the journals during the period (93 papers in total). fMRI is a valuable tool for studying neural activity in the central nervous system due to its wide spatial coverage and non-invasive nature. Essentially, each fMRI data set has four dimensions, including spatial and temporal information of the Blood Oxygenation Level Dependency (BOLD) signal, which measures neural activity by reflecting changes in blood flow. At first glance, the results from statistical methods applied to fMRI studies are expected to be reproducible as long as the computer code and preprocessed fMRI data are provided. However, we find that statistical papers on fMRI data from the past 11 years are often not reproducible. In fact, among all 93 examined papers, we could only reproduce the results in 14 (15.1\%) papers; that is, these papers provided both executable computer code (or software) and the real fMRI data, and our results matched those in the paper. The failure to reproduce results is often due to i) incomplete, outdated, or missing instructions for running the computer code or software; ii) missing, outdated, inexecutable or unannotated source code files; and/or iii) missing fMRI data or raw fMRI data that had not been preprocessed (or the failure to provide the preprocessing script). Without well-annotated computer source code, it is very difficult for researchers to reproduce the result from scratch. Reproduction therefore relies fully on the descriptions provided in the publications, which are often incomplete and prone to errors. Furthermore, many authors prefer to provide access to raw, publicly available fMRI data sets rather than directly present the preprocessed fMRI data or offer the raw data with their preprocessing code or preprocessing pipeline.
Since no agreement has been reached on the best preprocessing pipeline for fMRI data, dealing ambiguously with the raw fMRI data also jeopardizes the reproducibility of a paper.
The remainder of this paper is organized as follows. We summarize the reproducibility results of the 93 published papers based on fMRI data in 7 prominent statistics journals and relate these results to the computer code and data requirements from the corresponding journal in Section 2. We discuss the availability of computer code and data for each specific journal in Section 3. Finally, in Section 4, we detail some author-specific and journal-specific suggestions to improve research reproducibility in statistics and in computational methods in general and discuss the strengths and limitations of our own study. We also discuss some of the many initiatives the journals have proposed (and in many cases implemented) to improve reproducibility in statistics.
\section{Related work}
Reproducibility and replicability have been studied in other fields including in political science \citep{king}, econometric research \citep{koenker}, operations research \citep{nestler}, archaeology \citep{marwick}, chemical engineering \citep{han}, economics \citep{vilhuber}, transportation \citep{zheng}, evolutionary computation \citep{lopez}, physics \citep{clementi}, and computational biology~\citep{cadwallader}.
While there is currently a great deal of research on reproducibility in other computational fields, there has been, to the best of our knowledge, no study of the reproducibility of results in the field of statistics (\citealt{stodden2018empirical} studied computational reproducibility for papers published in the journal \textit{Science}). There are, however, some related papers. For example, \cite{gentleman} described a software framework for both authoring and distributing integrated, dynamic documents that contain text, code, data, and any auxiliary content needed to recreate the computations. In an editorial for the journal \textit{Biostatistics}, \cite{peng2009reproducible} described the difficulties in, and the efforts to promote, reproducibility in biostatistical research. \cite{deangelis} discussed the importance of independent statistical analysis of industry-sponsored studies, a requirement of the \textit{Journal of the American Medical Association}. \cite{schulte} introduced a multi-language computing environment for literate programming and reproducible research. The phyloseq project \citep{mcmurdie} is an open-source R software tool for statistical analysis of phylogenetic sequencing data, which enables reproducible preprocessing, analysis, and publication-quality graphics production. \cite{xie} created the R package \textbf{knitr}, which combines computer code and software documentation in the same document, allowing for easier reproducibility. \cite{stodden2015reproducing} provided an overview of issues of reproducibility and how statistical research has been, and could be, addressing these concerns. In \cite{fuentes}, the editor of the \textit{Journal of the American Statistical Association}, Applications and Case Studies, introduced a reproducibility initiative as a response to the reproducibility/replication crisis in science. 
The author noted that most statistical papers did not submit adequate supporting computer code or data to enable reproduction of their results. \cite{leek} described the range of definitions of false discoveries in the scientific literature and summarized the philosophical, statistical, and experimental evidence for each type of false discovery. To address the challenge of reproducibility amid increasing computational complexity, \cite{marwick2018} reviewed the concept of the research compendium as a solution for providing a standard and easily recognizable way of organizing the digital materials of a research project, enabling other researchers to inspect, reproduce, and extend the research. \cite{becker} presented the \textbf{trackr} and \textbf{histry} R packages; together, these packages define a framework for tracking, automatically annotating, discovering, and reproducing the intermediate and final results of computational work done within R. In \cite{benjamini}, the authors argued that addressing selective inference is a missing statistical cornerstone of enhancing replicability. Related to this work, \cite{hung} applied multiple testing and post-selection inference techniques to develop new
statistical methods for replicability assessment. To increase the implementation of reproducible research in data science projects in R and to provide standards on reproducibility in published research, \cite{bertin} presented the R package \textbf{fertile}, which proactively prevents reproducibility mistakes from happening in the first place and retroactively analyzes code for potential problems.
\section{Reproducibility in statistics journals}
In this paper, we explore the reproducibility of the results from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in seven prominent statistical journals during the 2010-2021 time period. In total, we identified 93 eligible papers (this count is naturally subject to change, and we intend to add new papers as they are published). We first identified the journals, and then inspected each issue from 2010 to 2021 for papers using fMRI data. Multiple human readers confirmed the availability of code and data. All the journals provide detailed descriptions of their computer code and data requirements, which we summarize in Table \ref{requirements}. We also include a website link for each journal, where the requirements are provided.
Most journals use the words `encourage', `expect', and `should' in their requirements with respect to computer code and data, but in most cases neither code nor data is strictly required.
\footnotesize
\begin{table}[htbp]
\footnotesize{
\begin{tabular}{p{4.5em}|p{18em}|l|p{4.5em}}
\toprule
Journal & \multicolumn{1}{l|}{Requirement for data} & Requirement for codes & \multicolumn{1}{l}{URL} \\
\midrule
AOAS & AOAS \textbf{strongly encourages} authors to make the data used in papers published in AOAS available for others to analyze. & \multicolumn{1}{p{21.665em}|}{Authors are \textbf{encouraged} to utilize web-based supplementary files to include software, or code for carrying out the analyses presented in a paper. } &
Click \href{ https://imstat.org/journals-and-publications/annals-of-applied-statistics/annals-of-applied-statistics-manuscript-submission/}{\textcolor{blue}{here}} \\
\midrule
Biometrics & Biometrics \textbf{encourages} authors to share the data supporting the results in their study by archiving them in an appropriate public repository. Biometrics also \textbf{encourages} authors to submit data used in their illustrative examples if at all possible (along with code used for the analysis). & \multicolumn{1}{p{21.665em}|}{Biometrics \textbf{strongly encourages} authors to include software implementing proposed methodology with their papers at the time of submission, such as code implementing simulations or data analyses presented in the paper or, preferably, more generic software (e.g., a R package or SAS macro).} &
Click \href{https://biometrics.biometricsociety.org/home/author-guidelines#h.p_GNcAvEniYxGa}{\textcolor{blue}{here}} \\
\midrule
Biostatistics & There is the opportunity to present extensive analyses of data on the journal's website as supplementary material. & \multicolumn{1}{p{21.665em}|}{Authors are \textbf{strongly encouraged} to submit code supporting their publications. Authors should submit a link to a Github repository and to a specific example of the code on a code archiving service such as Figshare or Zenodo.} &
Click \href{https://academic.oup.com/biostatistics/pages/About}{\textcolor{blue}{here}} \\
\midrule
JCGS & \multicolumn{2}{p{39.665em}|}{Authors are \textbf{expected} to submit code and datasets as online supplements to the manuscript. Exceptions for reasons of security or confidentiality may be granted by the Editor. } &
Click \href{https://amstat.tandfonline.com/action/authorSubmission?show=instructions&journalCode=ucgs20}{\textcolor{blue}{here}} \\
\midrule
\multirow{2}[4]{*}{JASA} & \multicolumn{2}{p{39.665em}|}{\textcolor{red}{Before September 1, 2021}: The ASA \textbf{strongly encourages} all authors to submit datasets, code, other programs, and/or extended appendices that are directly relevant to their submitted articles to Theory \& Methods. Since \textcolor{red}{September 1, 2016} authors publishing in the Applications and Case Studies section of JASA will be \textbf{asked} to provide materials that demonstrate reproducibility.} & Click \href{https://www.tandfonline.com/action/authorSubmission?show=instructions&journalCode=uasa20#style}{\textcolor{blue}{here}} \\
\cmidrule{2-4} & \multicolumn{2}{p{39.665em}|}{\textcolor{red}{After September 1, 2021}: \textbf{All} invited revisions to JASA (both Applications \& Case Studies and Theory \& Methods) for manuscripts whose initial submission was on or after September 1, 2021, \textbf{must} include code, data, and the workflow to reproduce the work presented. Published papers will include a link to reviewed reproducibility materials, including the Author Contributions Checklist; the materials will be posted to the JASA GitHub repository.} & Click \href{https://jasa-acs.github.io/repro-guide/}{\textcolor{blue}{here}} \\
\midrule
JRSS,C & \multicolumn{2}{p{39.665em}|}{It is the policy of the Journal of the Royal Statistical Society that published papers \textbf{should}, where possible, be accompanied by the data and computer code used in the analysis. Both data and code must be clearly and precisely documented, in enough detail that it is possible to replicate all results in the final version of the paper.} &
Click \href{https://rss.onlinelibrary.wiley.com/hub/journal/14679876/author-guidelines}{\textcolor{blue}{here}}
\\
\midrule
Statistics in Medicine & Statistics in Medicine \textbf{expects} that data supporting the results reported in the paper will be archived in an appropriate public repository. & \multicolumn{1}{p{21.665em}|}{The journal \textbf{requires} authors to supply any supporting computer code or simulations that allow readers to institute any new methodology proposed in the published article.} &
Click \href{https://onlinelibrary.wiley.com/page/journal/10970258/homepage/forauthors.html}{\textcolor{blue}{here}} \\
\bottomrule
\end{tabular}
}
\caption{Computer code and data requirements as stated on the websites of the seven statistical journals: the \textit{Annals of Applied Statistics},
\textit{Biometrics}, \textit{Biostatistics}, the \textit{Journal of Computational and Graphical Statistics}, the \textit{Journal of the American Statistical Association}, the \textit{Journal of the Royal Statistical Society: Series C} and \textit{Statistics in Medicine}. A url link for each journal's requirements is also provided. }
\label{requirements}
\end{table}
\normalsize
Table \ref{result} summarizes the reproducibility results for the papers from the computer code and the data perspective. We took a lenient definition of reproducibility, owing to the possibility of randomness in the statistical algorithms. For computer code, we checked whether the paper provided scripts, a package (with or without a paper-specific script), or no computer code, and whether the provided code worked, failed due to errors in the code, or was not executable (e.g., files missing from the script, or a software package that did not include a script). For data, we checked whether the paper provided preprocessed fMRI data, simulated data, raw data (with or without a script for the preprocessing), or no data.
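The cross-tabulation behind Table \ref{result} can be sketched as follows; this is an illustrative Python sketch with hypothetical placeholder records, not our actual paper-level classifications.

```python
from collections import Counter

# Hypothetical placeholder records; each paper is labeled by its data status
# (real / simulated / raw / none) and code status (working / failed /
# not_executable / none), following the taxonomy described in the text.
papers = [
    {"data": "real", "code": "working"},
    {"data": "simulated", "code": "failed"},
    {"data": "raw", "code": "none"},
    {"data": "none", "code": "none"},
]

# Cross-tabulate data status against code status, as in Table 2.
cells = Counter((p["data"], p["code"]) for p in papers)

# Under our definition, a paper counts as reproducible only when it provides
# real data together with working code.
reproducible = cells[("real", "working")]
```

Summing the resulting cells by row and column recovers the row and column totals reported in the table.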
\begin{table}[htbp]
\centering
\begin{tabular}{|p{4.045em}c|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data\newline{} provided}} & \multicolumn{1}{c|}{real data } & \multicolumn{1}{c|}{14(1)} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{4(4)} & 3 & \multirow{3}[6]{*}{\textbf{51}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{12(2)} & \multicolumn{1}{c|}{3} & 1 & 0 & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{4(2)} & 8 & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{7(1)} & 35 & \textbf{42} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{47}} & \textbf{46} & \textbf{93} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from a computer code and a data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in seven prominent statistical journals: the \textit{Annals of Applied Statistics}, \textit{Biometrics}, \textit{Biostatistics}, the \textit{Journal of Computational and Graphical Statistics}, the \textit{Journal of the American Statistical Association}, the \textit{Journal of the Royal Statistical Society: Series C} and \textit{Statistics in Medicine} from 2010 to 2021. The numbers in the parenthesis represent the number of R software packages in that cell.}
\label{result}%
\end{table}%
Overall, we find that 46 out of the 93 (49\%) published papers provide no computer code (or software). Out of the 47 published papers with computer code available, we find that 26 (28\%) provide computer code that runs smoothly, 5 (5\%) provide code that failed, and 16 (17\%) include code that was not executable (e.g., only R functions were provided but key files were missing). In addition, 10 papers provide an R software package \citep{R}, of which only 3 contain user-friendly scripts to reproduce the results in the paper. R packages are very convenient as they allow for easy adaptation of the code to other data sets and for easy comparison with other methods; ideally, however, a paper-specific script should also be included for reproducibility. For all cases where the computer code failed, we endeavored to fix all the errors (both major and minor); as long as the code could be executed after these fixes, we classified it as `code working'. However, most computer code that generated an error did so due to missing data or missing functions, which we were not able to resolve without the help of the original authors. From the data perspective, we find that 42 out of the 93 (45\%) published papers provide no real, simulated, or raw data. Out of the 51 (55\%) published papers with data available, we find that 23 (25\%) provide the real fMRI data analyzed in the paper, 16 (17\%) provide simulated data, and 12 (13\%) provide the raw fMRI data. If papers included both simulated and real data, we classified them under the real data category so as not to double count them.
From both the computer code and data viewpoint, among the 10 papers that developed an R software package, only 1 includes the preprocessed fMRI data set in the package with clear executable instructions. Out of the 11 papers that share data (real, simulated, or raw) but do not include any relevant computer code, 8 provide a link to a public website containing the raw fMRI data set. While raw fMRI data is preferable to no data, it puts the onus on the researcher attempting to reproduce the results to preprocess the fMRI data. As we discuss later, for fMRI data it is extremely difficult to obtain precisely the same preprocessed data from the raw version, as there is no established framework for carrying out the preprocessing steps. Even worse, several researchers do not provide the full preprocessing steps in their papers. Of course, it would be acceptable if the authors provided both the raw data and a clear step-by-step script for preprocessing, but even then the extra step and the possibility of errors make reproducing the work more difficult. Only 2 papers provide a raw fMRI data set together with a preprocessing script for the data; however, we were unable to preprocess the raw data, and these papers are therefore listed under `code not executable'. Hence, from both the computer code and the data perspective, among all 93 examined papers, only 14 provide executable computer code (or software) and real fMRI data with which we were able to reproduce the results. This equates to just around 15\% of the papers examined. Table \ref{resultofcode} (in the Appendix) provides more specific details on the reproducibility of the 47 papers that provide computer code (or software). From a computer software viewpoint, R is the most widely used software (33 of the 47 papers employ it alone or in combination with other software), with Matlab second (14 of the 47).
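The headline percentages above follow directly from the cell counts in Table \ref{result}; as a quick consistency check, a minimal Python sketch with the counts transcribed from the table:

```python
# Cell counts transcribed from Table 2 (rows: data status; columns: code status).
counts = {
    "real":      {"working": 14, "failed": 2, "not_executable": 4, "none": 3},
    "simulated": {"working": 12, "failed": 3, "not_executable": 1, "none": 0},
    "raw":       {"working": 0,  "failed": 0, "not_executable": 4, "none": 8},
    "no_data":   {"working": 0,  "failed": 0, "not_executable": 7, "none": 35},
}

total = sum(sum(row.values()) for row in counts.values())   # 93 papers in total
no_code = sum(row["none"] for row in counts.values())       # 46 papers (49%)
working = sum(row["working"] for row in counts.values())    # 26 papers (28%)

# Fully reproducible = real data provided AND code working.
reproducible = counts["real"]["working"]                    # 14 papers
print(f"{100 * reproducible / total:.1f}% fully reproducible")
# prints "15.1% fully reproducible"
```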
\begin{figure}[p]
\centering
\includegraphics[scale = 0.9]{figure1_revise.eps}
\caption{(a) The number of fMRI papers published in the seven journals; (b) the number of papers providing preprocessed fMRI data (``PP''), no data (``N''), simulated data (``Sim''), and raw fMRI data (``Raw''); (c) the number of papers providing computer code (``Y''), no computer code (``N''), and an R software package (``R package''); (d) the number of papers providing computer code and data that can be executed, without failure (``Y''), with code errors (``N''), with code that is not executable (``NE''), or with errors due to missing data reasons (``NA''), for each year during the time period 2010-2021.}
\label{year}
\end{figure}
In Figure \ref{year}, we illustrate the timeline of the number of fMRI papers published in each journal, and their efforts towards reproducibility, for each year during the period 2010-2021. From Figure \ref{year}(a), we conclude that the \textit{Annals of Applied Statistics} has published the most papers on fMRI data overall, while \textit{Biometrics}, \textit{Biostatistics}, and \textit{JASA} appear to be publishing the most papers related to fMRI data in the most recent five years. In contrast, \textit{AOAS} has focused less on fMRI topics recently, given its decreasing number of such publications. In Figure \ref{year}(b), which displays the number of papers that provide the real preprocessed fMRI data, no data, simulated data, and raw fMRI data, we notice that more recently published papers tend to submit the ideal case, a preprocessed fMRI data set. In 2019, 86\% (12 out of 14) of the fMRI papers published in the seven journals offered at least one type of fMRI data (simulated, raw, or preprocessed). This positive trend does not continue, however; there is a slight downward trend in 2020 and 2021. The increasing emphasis on the availability of computer code or software in recent years is also evident in Figure \ref{year}(c). Figure \ref{year}(d) provides more information on reproducibility: it considers computer code and data availability jointly with reproducibility. Among the papers considered, there are many examples where the code can be executed on the data without errors, but there are more papers where the data are not available. Although about half of the fMRI papers in the journals have archived executable computer code in combination with real or simulated data since 2018, some of the computer code files are not executable due to unexpected errors or technical problems. In fact, among the 44 papers published after 2018 that provided computer code, we were only able to smoothly generate outputs in 15 of them (34\%). 
Nevertheless, the trajectory in this plot is increasing.
In the next section, we discuss the availability of computer code and data for each specific journal.
\section{Specific journals}
We are happy to share the table of all papers considered in the study, with links to computer code and data, but, as mentioned earlier, our objective is not to single out individual researchers.
\subsection{Annals of Applied Statistics (AOAS)}
In AOAS, we identified 20 published papers related to fMRI data during the time period 2010--2021. Out of the 20 papers, only 6 (30\%) provided computer code (Table~\ref{result_aoas}). Out of these 6, 3 created an R software package and included the preprocessed fMRI data as an illustrative example therein. We could not reproduce their results, as the authors did not provide a specific script in their packages for the real data analysis. Nevertheless, using the instruction file in the reference manual of the package, other researchers may be able to reproduce the analysis on the real fMRI data set, but it would require extensive work and interpretation.
\begin{table}[htbp]
\centering
\begin{tabular}{|cc|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data provided}} & \multicolumn{1}{c|}{real data } & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{3(3)} & 2 & \multirow{3}[6]{*}{\textbf{11}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & 1 & & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & 3 & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & 9 & \textbf{9} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{6}} & \textbf{14} & \textbf{20} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from the computer code and the data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in the \textit{Annals of Applied Statistics} from 2010 to 2021. The numbers in the parenthesis represent the number of R software packages in that cell.}
\label{result_aoas}
\end{table}%
From the data perspective, out of the 20 published papers, 7 (35\%) attached the preprocessed data (or at least some of it in the supplementary materials), 1 provided simulated data, 3 mentioned the website where the raw data can be downloaded (but did not provide a preprocessing script), and the remaining 9 did not mention data availability. The high missing proportion (9/20) may be attributed to the early publication date of these papers: 5 were published before 2014, when reproducibility was not yet an issue generating a significant amount of coverage and discussion. Among the more recently published papers, only 2 provide both computer code and the preprocessed fMRI data sets, but we were unable to reproduce precisely the same results as detailed in the published paper in either of them. One paper was missing a key brain file used in the computer code, and in the other we detected a different number of significant region of interest--single nucleotide polymorphism (ROI-SNP) connections (however, we still classified this as reproducible under our lenient interpretation).
Hence, overall, from both the computer code and the data perspective, among all 20 examined papers in AOAS, only 2 (10\%) papers provide executable computer code (or software) and real fMRI data with which we were able to reproduce the results.
\subsection{Biometrics}
In \textit{Biometrics}, we identified 18 published papers related to fMRI data during the time period 2010--2021. Out of the 18 papers, 14 (78\%) shared computer code (Table~\ref{result_biometrics}). Out of those 14, 5 papers provided software packages implementing the proposed methodology. \textit{Biometrics} is the only journal among the seven studied that states their preference for generic software (e.g., R packages or SAS macro) over executable computer code implementing simulations or data analyses. However, the journal did not require authors to include the real data set used in the paper in the software, which resulted in 1 package having no illustrative data example, 2 packages with only simulated data sets, and the remaining 2 packages with preprocessed fMRI data. For the 9 published papers that provide computer code (as scripts), 1 provided R functions without executable lines, 1 failed to run due to a missing file, and the remaining 7 produced reasonable results.
\begin{table}[htbp]
\centering
\begin{tabular}{|cc|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data provided}} & \multicolumn{1}{c|}{real data } & \multicolumn{1}{c|}{5(1)} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{1(1)} & & \multirow{3}[6]{*}{\textbf{13}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{5(2)} & \multicolumn{1}{c|}{1} & & & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & 1 & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{2(1)} & 3 & \textbf{5} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{14}} & \textbf{4} & \textbf{18} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from the computer code and the data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in \textit{Biometrics} from 2010 to 2021. The numbers in the parenthesis represent the number of R software packages in that cell.}
\label{result_biometrics}
\end{table}%
From the data perspective, 13 (72\%) of the 18 papers provided some data related to the paper. In particular, 6 provided preprocessed data, 6 chose to share simulated data, and 1 offered links to raw data (with no preprocessing script). The remaining 5 papers provided no information on the data source. Overall, published papers in \textit{Biometrics} offer very good capacity for reproducibility: from both the computer code and the data perspective, among all 18 examined papers, 5 (28\%) provide executable computer code (or software) and real preprocessed fMRI data with which we were able to reproduce the results. The 3 papers missing both the computer code and data resources were published before 2014. One recently published paper claims that the code is available on the \textit{Biometrics} website; however, it only provides a link to the raw fMRI data, without the computer code for preprocessing.
\subsection{Biostatistics}
In \textit{Biostatistics}, we identified 13 published papers related to fMRI data during the time period 2010--2021. The majority (10/13) of these papers were published after 2015. From the computer code perspective, 8 (62\%) papers attached computer code, among which 1 created an R package (Table~\ref{result_bios}). However, only 4 of the 8 sets of computer code were in working order. The paper with an R package did include some sample data sets in the software; however, no specific paper-related script that would allow reproduction is provided.
\begin{table}[htbp]
\centering
\begin{tabular}{|cc|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data provided}} & \multicolumn{1}{c|}{real data } & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{1} & & & \multirow{3}[6]{*}{\textbf{10}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{1} & & & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{1(1)} & 3 & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & 1 & 2 & \textbf{3} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{8}} & \textbf{5} & \textbf{13} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from the computer code and the data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in \textit{Biostatistics} from 2010 to 2021. The numbers in the parenthesis represent the number of R software packages in that cell.}
\label{result_bios}
\end{table}%
As for data, 10 (77\%) of the published papers are accompanied by some form of the fMRI data used in the analysis, 4 of which only provide a link to a website with open-source raw fMRI data (one paper pointed to its R package for its preprocessing script, but we could not execute it). As mentioned before, a unique trait of fMRI data is that it has to be preprocessed, but no established sequence for carrying out the preprocessing steps exists. Hence, while raw fMRI data is preferable to no data, it puts the onus on the researcher attempting to reproduce the results to preprocess the fMRI data, which is extremely difficult. Even if researchers describe their preprocessing pipeline in great detail, matching the final data in the published paper cannot be guaranteed. Without providing a reason, 3 published papers preferred to provide simulated data rather than the complete fMRI data analyzed in the paper; this may be due to the proprietary nature of fMRI data. Providing a simulated version of the data and the corresponding computer code may also reflect the requirements of \textit{Biostatistics} (see Table \ref{requirements}), which encourage submitting computer code for a specific example rather than the exact real data set. Nevertheless, for these papers we could at least generate readable results, in contrast to another published paper that attached computer code rigidly designed for the real data analysis but failed to provide the real data. Furthermore, 2 published papers refer to an fMRI study of thermal pain, but neither of them provided the data source.
Overall, from both the computer code and the data perspective, among all 13 examined papers in \textit{Biostatistics}, only 2 (15\%) provided properly organized computer code and real fMRI data with which we were able to reproduce the results. In particular, one paper partitions the fMRI time series ($T=197$) into two segments and only shares a truncated portion ($t \in [0,50]$) of it. \textit{Biostatistics} has been concerned with reproducibility since 2009, when its Editors announced a computational reproducibility policy \citep{peng2009reproducible} to promote reproducibility in biostatistical research. Under this policy, after an article has been accepted for publication, the assigned Associate Editor for reproducibility (AER) considers three different criteria (Data, Code, Reproducible) when evaluating the reproducibility of an article. Published papers that meet any or all of the three criteria are marked D, C, and/or R on their title page in the journal. However, this process is optional rather than mandatory, and does not extend beyond the R software.
\subsection{Journal of Computational and Graphical Statistics (JCGS)}
In JCGS, we identified only 5 published papers related to fMRI data during the time period 2010--2021. Most of the papers provide detailed documents explaining the implementation of the computer code in the supplementary materials, except one published recently in 2021. In particular, 3 of the 5 (60\%) papers provide working computer code, 1 provides code that is not executable, while the final paper provides no code (Table~\ref{result_jcgs}).
\begin{table}[htbp]
\centering
\begin{tabular}{|cc|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data provided}} & \multicolumn{1}{c|}{real data } & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{} & & & \multirow{3}[6]{*}{\textbf{3}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{} & & & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & 1 & 1 & \textbf{2} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{4}} & \textbf{1} & \textbf{5} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from the computer code and the data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in the \textit{Journal of Computational and Graphical Statistics} from 2010 to 2021. The numbers in parentheses represent the number of R software packages in that cell.}
\label{result_jcgs}
\end{table}%
From the data perspective, 2 of the published papers attach the preprocessed fMRI data and one offers simulated data sets (60\% in total); all can be combined with the code and executed smoothly. Hence, overall, from both the computer code and the data perspective, among all the 5 examined papers in JCGS, 2 (40\%) papers provide properly organized computer code and real preprocessed fMRI data, with which we were able to reproduce the results. Compared to the other journals in this study, JCGS performs the best in terms of making the computer code and data available for reproduction. However, the number of papers is small.
\subsection{Journal of the American Statistical Association (JASA)} \label{subsec:jasa}
In JASA, we identified 18 published papers related to fMRI data during the time period 2010--2021. Out of the 18 papers, only 9 (50\%) made computer code or an R package (1 paper) relevant to the paper available. Out of the 9 papers, 3 had working code, 1 had code that failed, and 5 had code that was not executable (including the R package) (Table~\ref{result_jasa}).
\begin{table}[htbp]
\centering
\begin{tabular}{|cc|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data provided}} & \multicolumn{1}{c|}{real data } & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & & \multirow{3}[6]{*}{\textbf{8}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{1} & & & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{3(1)} & 1 & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & 2 & 8 & \textbf{10} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{9}} & \textbf{9} & \textbf{18} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from the computer code and the data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in the \textit{Journal of the American Statistical Association} from 2010 to 2021. The numbers in parentheses represent the number of R software packages in that cell.}
\label{result_jasa}
\end{table}%
In terms of data, 8 out of the 18 papers (44\%) provided some form of data (real, simulated, or raw). In particular, 4 papers chose to share simulated data to illustrate their algorithms, three of which can be reproduced without fatal errors. Three papers provide rich materials that demonstrate in theory the reproducibility of their results and detail the source of the data used to perform the analysis, as required by the journal in Table~\ref{requirements}. However, following one paper's instruction steps, we had difficulty finding a key data file that was no longer available on the original website or in the supplementary materials. Surprisingly, none of the 18 papers directly attach the full preprocessed data, which makes reproducing the real data analysis more difficult. Although the journal stipulates a strong demand for documentation of the source of the data as well as for attaching the computer code or software, 8 of the 18 published papers provide neither in their final versions. These results were unexpected given the prominent stature of JASA among statisticians. In fact, for the 3 papers published after January 2020, we encountered different kinds of computer code and/or data issues with all of them. Specifically, 1 paper did not mention either the computer code or the data used at all, while another only presented R functions without any other instructions or data. Finally, while the third paper detailed the reproduction steps and provided a list of the file names used to generate the figures, the zip file available for download from the website was not structured in the manner described in the supplementary materials, and the key Matlab toolbox implementing the method was missing, which made reproduction impossible. Hence, overall, from both the computer code and the data perspective, among all 18 examined papers in JASA, no paper provides properly organized computer code and real fMRI data.
As detailed in Table~\ref{requirements}, as of September 2021, JASA (Theory and Methods; it made changes to Applications and Case Studies in 2016) has made considerable changes to its `Requirement for data' and `Requirement for codes' (see \href{https://jasa-acs.github.io/repro-guide/}{\textcolor{blue}{here}} for more details). Specifically, the journal requires that all invited revisions ``must include code, data, and the workflow to reproduce the work presented.'' The journal also provides guidelines to authors and general resources for reproducibility. Most significantly, the journal has stipulated that either one of the reviewers or the JASA associate editors for reproducibility (AER) will carry out a reproducibility review of the work. Their objective is to ultimately make the reproducibility review process more efficient and rigorous (see Section~\ref{sec:conc} for more details).
\subsection{Journal of the Royal Statistical Society: Series C (JRSS,C)}
In JRSS,C, we identified 7 published papers related to fMRI data during the time period 2010--2021. Out of the 7 papers, only 4 (57\%) provided computer code (Table~\ref{result_jrssc}). Out of the 4 providing computer code, 1 failed.
\begin{table}[htbp]
\centering
\begin{tabular}{|cc|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data provided}} & \multicolumn{1}{c|}{real data } & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{1} & & & \multirow{3}[6]{*}{\textbf{4}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{} & & & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & 3 & \textbf{3} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{4}} & \textbf{3} & \textbf{7} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from the computer code and the data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in the \textit{Journal of the Royal Statistical Society: Series C} from 2010 to 2021. The numbers in parentheses represent the number of R software packages in that cell.}
\label{result_jrssc}
\end{table}%
From the data perspective, 4 (57\%) out of the 7 papers provide some form of data (real, simulated, or raw).
For the 3 papers that fail to provide any data, two were published recently, in 2020. Furthermore, 2 out of the 7 papers provide fMRI data for analysis and executable computer code, and one of them provides a sample of the fMRI data (one of the 45 subjects in the original paper). Hence, in summary, from both the computer code and the data perspective, among all the 7 examined papers in JRSS,C (Applied Statistics), 2 (29\%) papers provide properly organized computer code and real preprocessed fMRI data.
Though the effort has not completely paid off yet, JRSS,C has emphasized reproducibility to a large extent. Unlike other journals, which allow authors to provide a link to a code-archiving service such as GitHub or to attach the computer code file in supplementary materials, JRSS,C established an open-access website
(click \href{https://rss.onlinelibrary.wiley.com/hub/journal/14679876/series-c-datasets}{\textcolor{blue}{here}}), which includes resources from its published papers dating back to 1998. Both the computer code and the preprocessed data sets are `clearly and precisely documented in enough detail', as required by the journal. Nevertheless, 3 published papers in JRSS,C do not provide these materials, for reasons unknown.
\subsection{Statistics in Medicine}
In \textit{Statistics in Medicine}, we identified 12 published papers related to fMRI data during the time period 2010--2021. Among all the 12 published papers, only 2 (17\%) papers provide computer code (Table~\ref{result_statmed}). In particular, only 1 paper provides executable Matlab code. The other paper attaches an R file listing all the defined functions but fails to attach a file illustrating the usage of any function. We could not identify any computer code resource in the remaining 10 papers.
\begin{table}[htbp]
\centering
\begin{tabular}{|cc|ccc|c|c|}
\toprule
\multicolumn{2}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{3}{c|}{code provided} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{code not provided}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{total}} \\
\cmidrule{3-5} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{code working} & \multicolumn{1}{c|}{code failed} & \multicolumn{1}{c|}{code not executable} & & \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{data provided}} & \multicolumn{1}{c|}{real data} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{} & & 1 & \multirow{3}[6]{*}{\textbf{2}} \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{sim data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & & \\
\cmidrule{2-6} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{raw data} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & & & \\
\midrule
\multicolumn{2}{|c|}{data not provided} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & 1 & 9 & \textbf{10} \\
\midrule
\multicolumn{2}{|c|}{total} & \multicolumn{3}{c|}{\textbf{2}} & \textbf{10} & \textbf{12} \\
\bottomrule
\end{tabular}%
\vspace{0.5cm}
\caption{The reproducibility results from the computer code and the data perspective from applied and methodological statistical papers based on functional magnetic resonance imaging (fMRI) data published in \textit{Statistics in Medicine} from 2010 to 2021. The numbers in parentheses represent the number of R software packages in that cell.}
\label{result_statmed}
\end{table}%
From the data perspective, only 2 (17\%) papers provide preprocessed fMRI data sets, while 10 do not provide any data. Interestingly, out of the 12 published papers, 9 claim to provide the computer code or the real data, but only 2 actually do so. In one instance, a paper states in the abstract that ``software for fitting graphical object-oriented data analysis is provided'', but the source of the software is not mentioned in the rest of the paper. Additionally, another paper clearly states that the tool implementation is in the coding language C and adds a hyperlink, but this only links to a list of publications by the author. For a more recently published paper, the paper states at the end that `upon publication, software in the form of R code will be available from an online repository together with the sample simulated data'. This paper was first published in October 2020, but after a search online in August 2021, the data and code were still not available.
Overall, papers on the topic of fMRI published in \textit{Statistics in Medicine} do not reproduce well, with only 1 paper out of 12 (8\%) providing both computer code and fMRI data with which we could reproduce the results.
\section{Conclusion}\label{sec:conc}
In this paper, we have explored the reproducibility of applied and methodological papers in the field of statistics by examining all the papers ($n=93$) based on functional magnetic resonance imaging (fMRI) data published in seven prominent statistical journals during the time period 2010--2021. Although statisticians pride themselves on open computer code (through the sharing of scripts or the creation of packages), we found a common lack of transparency and openness in both the computer code and the data sets illustrating the statistical methods and applications, which raises the urgent need for attention and action. Below, referring to the narrative in \cite{Stodden1240}, we list our recommendations for authors, editors/journals, reviewers, and funding organizations to facilitate reproducibility in statistics in general (or fMRI applications specifically) but also across other quantitative research domains.
\subsection{Author recommendations}
\noindent \textbf{Computer code}:
Instead of attaching the created functions in the supplementary materials or simply listing all the required files in one folder, we recommend that authors follow the requirements for the Applications and Case Studies (ACS) section of the \textit{Journal of the American Statistical Association}. It requests that authors provide detailed computer materials, including step-by-step workflows, to demonstrate reproducibility. For example, \cite{Mejia2017} present the computer code used to perform the analysis in their paper in execution order, with a thorough explanation of the function in each step. If possible, the visualization tools and the computation time should also be included, although these are not essential. We also noticed that several papers share their preprocessed fMRI data (which we recommend, although raw data and a precise preprocessing script are also acceptable), but their computer code fails to reproduce the results because key files are missing. Hence, we recommend enclosing all dependent data and files (e.g., templates, brain masks, and parameter settings, for fMRI data in particular) so that there is no need to contact the authors. In terms of the code repository, out of our 93 papers, 17 of the published papers chose GitHub, 20 submitted their code as online supplements on the journal's webpages, while other papers archived the files on their personal websites. It was evident that some links from the related publication on the personal websites were not accessible. We therefore suggest that authors share their computer code in an appropriate public repository using persistent links (in case they move to another institution or position). With respect to creating software, 10 produced R software packages instead of executable R scripts, with only 3 of them attaching paper-related preprocessed fMRI data sets to the package.
This makes reproduction possible only if researchers are willing to study the manual and learn how to use each R function in detail; however, this puts the onus on the (reproducing) researchers and is less convenient. We therefore believe it is critical to provide manuals and clear paper scripts with lucid, straightforward instructions on the steps necessary to regenerate the results.
\vspace{0.5cm}
\noindent \textbf{Data}:
For all data sets, especially for fMRI data, we strongly recommend that authors provide the preprocessed data in the supplementary materials/files rather than a link to a public data website which only provides the raw version. Specific to fMRI, as the field of neuroscience/neurostatistics has not reached an agreement on a standard preprocessing pipeline, the results can vary greatly owing to different sequences of the preprocessing steps. Although some of the papers do mention the general preprocessing steps, for example,
\begin{quote}
`We apply a series of standard image preprocessing steps: distortion-correction using FSL’s FUGUE, time-series preprocessing, rigid registration, brain extraction, temporal filtering, and 6mm FWHM Gaussian spatial smoothing. Subject-level models are fit using a linear model in FSL’s FILM software including ...'
\end{quote}
and
\begin{quote}
`the preprocessing included slice time correction; 3-D motion correction; temporal despiking; spatial smoothing (FWHM=6mm); mean-based intensity normalization; temporal bandpass filtering (0.009–0.1Hz); linear and quadratic detrending ...'
\end{quote}
\noindent it is still very difficult for researchers to regenerate precisely the same data set using the identical preprocessing steps, since the parameter settings and detailed computational steps are missing. One possible solution is to mimic the Athena strategy in the ADHD-200 Sample project
(click \href{https://www.nitrc.org/plugins/mwiki/index.php/neurobureau:AthenaPipeline#Description_of_included_files}{\textcolor{blue}{here}}). Not only is the preprocessed data set provided, but the preprocessing script and the log file that document the processing of each subject are also included. Inspired by this strategy, we encourage authors who use public fMRI data sets in their papers to, at the very minimum, provide the link to the raw data and their preprocessing script for easy access and use by other researchers.
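To make this recommendation concrete, one lightweight way to produce such a log file is to have the preprocessing script record each step and its parameters as it runs. The sketch below is purely illustrative (the step names, parameters, and file name are hypothetical placeholders, not a real pipeline; real steps would call FSL, AFNI, or SPM):

```python
import json
import time

def run_pipeline(subject_id, steps, log_path):
    """Apply each (name, params, fn) step in order, recording what was run."""
    log = []
    data = subject_id  # stand-in for the actual image data
    for name, params, fn in steps:
        data = fn(data, **params)
        log.append({"subject": subject_id, "step": name, "params": params,
                    "time": time.strftime("%Y-%m-%dT%H:%M:%S")})
    # The log file documents the exact sequence and parameter settings,
    # which is what a reproducing researcher needs.
    with open(log_path, "w") as f:
        json.dump(log, f, indent=2)
    return data, log

# Illustrative step names and parameters only -- not a real pipeline.
steps = [
    ("slice_timing", {"tr": 2.0}, lambda d, tr: d),
    ("motion_correction", {"ref_volume": 0}, lambda d, ref_volume: d),
    ("spatial_smoothing", {"fwhm_mm": 6}, lambda d, fwhm_mm: d),
]
_, log = run_pipeline("sub-01", steps, "preproc_log.json")
print([e["step"] for e in log])
# ['slice_timing', 'motion_correction', 'spatial_smoothing']
```

Publishing such a log alongside the preprocessed data, as the Athena pipeline does, removes the ambiguity about the order and settings of the preprocessing steps.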
A major issue with imaging data sets is their size, which makes the permanent storage of the data on the internet financially challenging for both the authors and journals. However, websites such as \url{openneuro.org} offer a free and open platform for validating and sharing Brain Imaging Data Structure (BIDS)-compliant magnetic resonance imaging (MRI), positron emission tomography (PET), magnetoencephalography (MEG), electroencephalography (EEG), and intracranial electroencephalography (iEEG) data. BIDS is an emerging standard for the organization of neuroimaging data, which allows for reproducibility across research labs. For the papers we considered in our study, when the data sets were too large for local drives, the authors either parsed the data or provided a toy example. While the ideal would be the sharing of all data and scripts, sharing a portion of the data is also acceptable.
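As a small illustration of why a standard such as BIDS aids reproducibility, the deliberately simplified sketch below parses the key-value entities that BIDS encodes directly in file names (e.g., \texttt{sub-01\_task-rest\_bold.nii.gz}), using only the Python standard library; real BIDS tooling such as pybids handles many more cases.

```python
import re

# BIDS encodes metadata as key-value entities in the file name itself,
# e.g. sub-01_task-rest_bold.nii.gz.  Parsing these entities lets analysis
# scripts locate their inputs without dataset-specific hard-coding.
ENTITY_RE = re.compile(r"(?P<key>[a-zA-Z]+)-(?P<value>[a-zA-Z0-9]+)")

def parse_bids_name(filename: str) -> dict:
    """Return the entities and suffix encoded in a BIDS-style file name."""
    stem = filename
    for ext in (".nii.gz", ".nii", ".json", ".tsv"):
        if stem.endswith(ext):
            stem = stem[: -len(ext)]
            break
    *entity_parts, suffix = stem.split("_")
    entities = {}
    for part in entity_parts:
        m = ENTITY_RE.fullmatch(part)
        if m:
            entities[m.group("key")] = m.group("value")
    entities["suffix"] = suffix
    return entities

# Hypothetical file name following the BIDS naming convention:
info = parse_bids_name("sub-01_task-rest_run-02_bold.nii.gz")
print(info)  # {'sub': '01', 'task': 'rest', 'run': '02', 'suffix': 'bold'}
```

Because every BIDS-compliant data set follows the same layout, a script written this way runs unchanged across labs, which is precisely the kind of interoperability that supports reproduction.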
\subsection{Suggestion for journals}
\noindent \textbf{Clarify the access to materials}:
As suggested by \cite{Stodden1240}, a digital object identifier (DOI) that uniquely identifies the related computer code and data sets should be assigned by the journal. They also recommend sharing the DOIs in trusted open repositories with sufficiently detailed information, such as the title, authors, version, and software description (e.g., inputs, outputs, dependencies, etc.) if possible. In fact, the journal \textit{Statistics in Medicine} has already followed the publisher \textit{Wiley}'s data-sharing policies by including a DOI, but only for data sets (click \href{https://onlinelibrary.wiley.com/page/journal/10970258/homepage/forauthors.html}{\textcolor{blue}{here}} for more information):
\begin{quote}
`Upon acceptance for publication, data files will be deposited to \href{https://figshare.com/}{figshare}, by \textit{Wiley}, on behalf of the authors. The data will be assigned a single DOI and will be automatically and permanently associated with the HTML version of the published manuscript.'
\end{quote}
The publisher, \textit{Wiley}, shows its commitment to a more open research landscape, facilitating faster and more effective research discovery by enabling reproducibility and verification of data, methodology, and reporting standards. They encourage authors of articles published in their journals to share their research data, including, but not limited to: raw data, processed data, software, algorithms, protocols, methods, and materials. According to the introduction on the \textit{Wiley} website (see \href{https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/data-sharing-citation/data-sharing-policy.html}{\textcolor{blue}{here}}), they provide a table to help authors understand the various standardized data-sharing policy categories, which we reproduce in Table~\ref{wiley}.
\begin{table}[htbp]
\centering
\begin{tabular}{|p{11em}|p{9em}|p{7em}|p{7em}|}
\toprule
\multicolumn{1}{|c|}{} & Data availability statement is published & Data has been shared & Data has been peer reviewed \\
\midrule
Encourages Data Sharing & Optional & Optional & Optional \\
\midrule
Expects Data Sharing & Required & Optional & Optional \\
\midrule
Mandates Data Sharing & Required & Required & Optional \\
\midrule
Mandates Data Sharing \& Peer Reviews Data & Required & Required & Required \\
\bottomrule
\end{tabular}
\caption{A table from the publisher \textit{Wiley}'s website to help understand the various standardized data-sharing policy categories.}
\label{wiley}
\end{table}
From inspecting the papers published in \textit{Statistics in Medicine}, we believe that the journal only adheres to the first standard (first column): a data availability statement confirming the presence or absence of shared data is not necessarily provided. We do not believe that it adheres to the second standard (second column): if data have been shared in a data repository, the link is not checked to ensure its validity. It also does not adhere to the third standard (third column): the replicability of linked data is not peer-reviewed. These practices are inconsistent with the claim on the journal's website that it `\textbf{expects} that data supporting the results reported in the paper will be archived in an appropriate public repository'. Even for published papers in this journal that did provide executable computer code files and fMRI data, the data was not made available in figshare, and certainly no DOI was issued. In one case, all the MATLAB computer code and the related data sets were posted on the first author's `Faculty \& Staff' page on the university website for downloading, requiring additional steps from researchers to access the materials. On the other hand, for the recently published papers containing a `Data Availability Statement' section (2 papers), data sharing is still not applicable, as no data was created, nor did they provide their preprocessed version of the data (or the raw data and a preprocessing script). This case study of \textit{Statistics in Medicine} illustrates the current difficulties in assigning DOIs to data sets. To the best of our knowledge, no fMRI-related paper in \textit{Statistics in Medicine} mentions a data set DOI at this time, which renders the efforts of the journal and publisher unfulfilled.
As a valuable alternative, we recommend \textit{Journal of the Royal Statistical Society: Series C}'s practice for how it arranges the materials of its recently published papers. JRSS,C sorts the majority of the computer code and the related data of its papers on its website in chronological order, although some earlier published papers have missing resources. This convenient search method helps researchers easily find all the related materials of papers simply by looking up the corresponding volume number.


\noindent \textbf{Material availability statement}:
Inspired by \textit{Biometrics}'s policy (and also mentioned in some of \textit{Wiley}'s standardized data sharing policies):
\begin{quote}
`Authors are required to provide a `Data Availability Statement' to describe the availability or the absence of shared data. Please ensure the main manuscript contains this statement which should be a new, unnumbered section placed immediately before the list of references.'
\end{quote}
\noindent We strongly recommend that authors create a new section in their articles called `Material Availability Statement', in which they clearly state the availability or the absence of their computer code AND data files. The new section will remedy the inconsistent placement of this information: in the 93 published papers in our study, information on the code repository links or the data resources was placed in the abstract, introduction, data description, data analysis, conclusion, and supplementary files, covering almost every section of the paper. We believe a unified placement of this information on the availability of computer code and data will be more convenient and helpful to readers and for paper review. In fact, \textit{Biostatistics} (link \textcolor{blue}{\href{https://academic.oup.com/biostatistics/pages/General_Instructions}{here}}) set up a similar reproducible research policy that states
\begin{quote}
`... papers in the journal should be kite-marked D if the data on which they are based are freely available, C if the authors' code is freely available, and R if both data and code are available ...'
\end{quote}
This is highly commendable; the policy was introduced in 2010, but \cite{peng2011} found that by July 2011, only 21 of 125 published papers in \textit{Biostatistics} had a kite-mark, including five articles with an ``R'' kite-mark. Even though \textit{Biostatistics} introduced the kite-mark system in 2010, only papers published after 2018 from our data set were actually kite-marked. Of these 13 published papers, 3 articles are C-marked (code available), 1 is D-marked (data available), and 2 are R-marked (both are available). However, articles with the D/R mark may only provide simulated data without sharing the real fMRI data.

\noindent \textbf{Reproducibility check}:
Although it would be very difficult to achieve in a short amount of time, we hope all statistics journals will establish reproducibility standards for each paper and (wherever possible) check the reproducibility of their already published papers (this would indicate a real commitment to reproducibility). Going forward, we recommend that instead of using opaque and optional words like `encourage', `expect', and `should', journals use stronger words such as `require' and `must', which will raise more awareness of reproducibility. Along with the requirements, submitted papers (or at least papers that are invited for revisions) should be checked in detail for computer code and data repositories, openly licensed artifacts, reproducibility, and the capacity for independent use by other scholars. This should be considered a necessary task for the reviewers during the review process (or the journal should have specific reviewers focused on reproducibility, similar to JASA). In addition, the editors of the journal should reserve the right to refuse publication of any paper for which the justification for failing to provide data (or details of how to access data), computer code, or any supporting files for replication is deemed inadequate.
As noted in Section~\ref{subsec:jasa}, as of September 2021, the \textit{Journal of the American Statistical Association} has made considerable changes to its requirements for computer code and data. Most significantly, it has stipulated that either one of the reviewers or the JASA associate editors for reproducibility (AER) will carry out a reproducibility review of the work. We are hopeful and look forward to seeing the impacts of these changes.
Finally, given the workload currently on editors and journals, another possibility is the creation of non-profit reanalysis centres attached to respected statistics and biostatistics university departments similar to outreach consultancy groups run by PhD students in statistics and biostatistics departments.

\noindent \textbf{Supplementary materials}:
While Table \ref{requirements} provides detailed descriptions of the computer code and data requirements for each journal, the journals also provide statements for the submitted supplementary materials/files (here, we only focus on the attached computer code and data in the supplementary materials). For example,
\noindent \textit{Biometrics} states that:
\begin{quote}
`Code and data are not subject to a formal review and will be posted ``as-is''.'
\end{quote}
\noindent \textit{Journal of Computational and Graphical Statistics} states that:
\begin{quote}
`The supplements are subject to editorial review and approval.'
\end{quote}
\noindent \textit{Journal of the American Statistical Association} states that:
\begin{quote}
`Supplementary files should be supplied for review along with the manuscript at the initial submission.'
\end{quote}
\noindent and \textit{Statistics in Medicine} states that:
\begin{quote}
`The publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.'
\end{quote}
\noindent The other three journals we consider in this paper do not specifically detail how they deal with the supplementary materials. While supplementary material policies are not the same as reproducibility policies, a clear statement on the material-review process in turn greatly influences the quality of the attached materials. Therefore, to improve reproducibility, we deem a mandatory check on these indispensable materials necessary.

\noindent \textbf{Rewards for authors and reviewers}:
Following the proposal from \cite{Stodden1240}, journals can improve their reproducibility by rewarding reviewers who take extra effort to verify computational findings and authors who facilitate such a review. Annual accolades such as prizes, badges, or even cash could be awarded to both authors and reviewers. In \textit{Biometrics}, Open Research Badges have been applied in recent years, in partnership with the non-profit Center for Open Science (COS), to recognize authors' contributions to reproducibility, such as sharing research instruments and materials in a publicly accessible format and providing sufficient information for researchers to reproduce procedures and analyses. We strongly recommend that all reviewers working towards reproducible publications also be considered as candidates for such badges. Although applying and qualifying for badges is not a requirement for publication, these badges are a further incentive for authors and reviewers to participate in the Open Research Movement and thus to increase the visibility and transparency of the research.
In general, as researchers, institutions, funders of research, and governments gradually see the benefit of open content, the necessity and urgency of ensuring reproducibility in research will be mentioned more frequently. The field will not change promptly, but developing a culture of reproducibility is paramount and will require the time, patience, and effort of the community. We hope the work and strength of different groups come together to facilitate reproducibility and make the practices described above commonplace in the future.
\subsection{Strengths and limitations}
In this paper, we examine reproducibility in the field of statistics by attempting to reproduce the results in 93 published papers in prominent journals utilizing functional magnetic resonance imaging (fMRI) data during the 2010--2021 period. While there are currently a great many research papers on reproducibility in other computational fields, this is the first study on the reproducibility of results in statistics. Overall, we conclude that while several journals have good policies in terms of reproducibility, the current reproducibility statistics are poor and the level needs to improve. We detail the reasons for the low reproducibility numbers and provide author-specific and journal-specific recommendations to improve research reproducibility in statistics.
In this work, we focus on the reproducibility of research papers in the field of statistics based on fMRI. The reason for the spotlight on fMRI is that we understand fMRI and its idiosyncrasies very well. However, we understand that our results might not generalize to every area of application. This may be due to some particular properties of fMRI data. First, fMRI data can be shared in various formats such as raw data, data that has been somewhat preprocessed, region of interest (ROI) time series data or the complete preprocessed data set.
While raw fMRI data is preferable to providing no data, it puts the onus on the researcher attempting to replicate the results to preprocess the fMRI data. As we discuss earlier, it is extremely difficult to obtain precisely the same preprocessed data from the raw data, as there is no established sequence for carrying out the preprocessing steps. Second, fMRI data is expensive to obtain, which means that many neuroscientists are unwilling to make it openly available until the data has been exhausted in terms of producing research papers. However, there are many open-source fMRI data sets available to statisticians, such as \url{openfmri.org} and \url{http://www.humanconnectomeproject.org/}. Hence, fMRI lies between data that is very open (e.g., stock prices and returns on the main indices) and data that is truly proprietary.
Another limitation of the work is that we did not contact the authors of the published papers for their computer code and data. We took this step as there is no quality control when the authors share the computer code and data. Hence, we believe the reproducibility standards should be set at the journal level to maintain standards and shift expectations for transparency. However, anecdotally, in our experience, statisticians are very good at sharing computer code and data when requested by email.
Finally, while the statistical analysis of data is important in any study, there are other vital steps in research, and there is much more to statistical analysis of data than checking the calculations. It is essential to recognize that reproducibility also requires the understanding of the substantive question. Going forward, statistical research (and the reproduction of this research) should be mindful of this and consider the substantive context adequately.
\newpage
\noindent \LARGE \textbf{Appendix}
\noindent \large \textbf{Computer code details}
\normalsize
\footnotesize
\begin{longtable}[htbp]{|p{1cm}p{2.3cm}p{1.7cm}p{3cm}p{1.3cm}p{5cm}|}
\hline \hline
Journal & Software & Data type & Data Note & Working code & Code errors \\
\hline
AOAS & R package & PP & & NA & Only R package is provided. \\
\hline
AOAS & R package & (Partly)PP & 30/1250 voxels & NA & Only R package is provided. \\
\hline
AOAS & Matlab & PP & & N & A key brain file 'n33\_buckner17\_k286.mat' is not provided. \\
\hline
AOAS & Matlab & PP & & Y & Results differ from the paper (significant ROI-SNP connection: 24/26) \\
\hline
AOAS & R package & PP & & NA & Only R package is provided. \\
\hline
AOAS & Matlab & Sim & & Y & \\
\hline
Biom & Matlab & PP & Meta-data & Y & \\
\hline
Biom & R & PP & & Y & \\
\hline
Biom & Matlab & Sim & & Y & \\
\hline
Biom & Matlab & Sim & & N & A key brain file brain\_sample.map was not provided. \\
\hline
Biom & R & Sim & & Y & \\
\hline
Biom & R & Sim & & Y & \\
\hline
Biom & R package & Raw & & NA & Only R package is provided. \\
\hline
Biom & R package & N & & NA & Only R package is provided. \\
\hline
Biom & R package & Sim & & Y & \\
\hline
Biom & Matlab & PP & & Y & \\
\hline
Biom & R package & Sim & & Y & \\
\hline
Biom & R & PP & & Y & \\
\hline
Biom & R package & PP & & Y & \\
\hline
Biom & R & N & & NA & Only R functions are provided. \\
\hline
Bios & Matlab & PP & & Y & \\
\hline
Bios & R & (Partly) PP & & N & Error in Wstat[which(Wtrue == 0, arr.ind = TRUE)] : \newline{} subscript out of bounds\newline{} \\
\hline
Bios & R & (Partly) PP & 50/197 time points. & Y & \\
\hline
Bios & R & Sim & & Y & \\
\hline
Bios & R & Sim & & N &file14.Rmd: Error in dlda(x = train.noise, y = train.labels): could not find function "dlda" \\
\hline
Bios & R package & Sim & & NA & Only R package provided. \\
\hline
Bios & R & N & NA & N & No real data provided. \\
\hline
Bios & R & Sim & & Y & \\
\hline
JCGS & Python & PP & Preprocessed in the code & Y & \\
\hline
JCGS & R & PP & & Y & \\
\hline
JCGS & Matlab & Sim & & Y & \\
\hline
JCGS & R & N & Upon request & N & One key dataset "suicide.rda" is missing. \\
\hline
JASA & R & Sim & & Y & \\
\hline
JASA & R & Sim & & Y & \\
\hline
JASA & Python+R & Sim & & N & Can't install smoothfdr package fatal error C1083: Cannot open include file: 'bayes\_gfl.h': No such file or directory \\
\hline
JASA & R & Sim & & Y & \\
\hline
JASA & R package & Raw & & NA & Only R package is provided. \\
\hline
JASA & Matlab & Raw & Difficulty downloading the data & N & Unable to run without data. \\
\hline
JASA & R+Matlab & Raw & Difficulty downloading the data & N & Unable to run without data. \\
\hline
JASA & R & N & & NA & Only R functions are provided. \\
\hline
JASA & Matlab & N & Very detailed documentation, but not as organized in the real zipped file. & NA & A key file is missing. \\
\hline
JRSS,C & R & (Partly) PP & 1/45 subjects & Y & \\
\hline
JRSS,C & Linux & PP & meta-data & N & The log file gave errors, for example, make: nvcc: Command not found; Makefile:11: recipe for target 'functions.o' failed\newline{}make: *** [functions.o] Error 127 \\
\hline
JRSS,C & R+Matlab+Linux & PP & & Y & \\
\hline
JRSS,C & R & Sim & & Y & \\
\hline
Stat Med & Matlab & PP & & Y & \\
\hline
Stat Med & R function & N & & NA & Only R functions are provided. \\
\hline
\caption{More details on the 47 out of the 93 published papers that provide computer code. All the papers contained functional magnetic resonance imaging (fMRI) data and were published in seven prominent statistical journals: the \textit{Annals of Applied Statistics} (AOAS),
\textit{Biometrics} (Biom), \textit{Biostatistics} (Bios), the \textit{Journal of Computational and Graphical Statistics} (JCGS), the \textit{Journal of the American Statistical Association} (JASA), the \textit{Journal of the Royal Statistical Society: Series C} (JRSS, C) and \textit{Statistics in Medicine} (Stat Med) from 2010 to 2021. PP and (Partly) denote preprocessed data and data preprocessed to some extent, respectively. }
\label{resultofcode}%
\end{longtable}%
\newpage
\section{INTRODUCTION}
The doped transition metal oxides have very rich phase diagrams which
are fingerprints of the spectacular physics present in these strongly correlated
electron systems \cite{Ima98}. In particular: (i) La$_{2-x}$Sr$_x$CuO$_4$ has
an antiferromagnetic (AF) and Mott insulating ground state only for very low doping $x \in (0,0.02]$
although it is a high-temperature superconductor with optimal doping $x\sim 0.15$ \cite{Ima98},
(ii) La$_{1-x}$Sr$_x$MnO$_3$ has a plane with a ferromagnetic (FM) $e_g$ alternating orbital
(AO) and Mott insulating ground state in the lightly doped regime $x \in (0, 0.18]$ \cite{End99},
and (iii) La$_{1-x}$Sr$_{x}$VO$_3$ has a plane with an AF and a $t_{2g}$ AO Mott insulating ground state in
the lightly doped regime $x \in (0, 0.178]$ \cite{Fuj05}. In the present paper we
try to shed some light on the distinct features of the phase diagrams of
these three lightly doped transition metal oxides.
From the theoretical point of view the description of the lightly doped transition
metal oxides is relatively easy in the extreme case of only one hole doped into the
half-filled state \cite{Mar91}, and in what follows we restrict our study to this
limit. Then the motion of such a single hole added to the half-filled
Mott insulating ordered ground state is strongly renormalized as it couples to
the excitations of the ordered state, magnons in the AF state and orbitons
in the AO state \cite{Mar91, Bri00}. This means that a {\it polaron} is formed:
(i) in La$_2$CuO$_4$ with one hole in the AF ground state -- a {\it spin} polaron \cite{Mar91},
(ii) in LaMnO$_3$ with one hole in the AO ground state -- an {\it orbital} polaron \cite{Bri00}, and
(iii) in LaVO$_3$ with one hole in the AO and AF ground state -- a {\it spin-orbital} polaron \cite{Woh09}.
In the next three chapters, using the self-consistent Born approximation (SCBA) \cite{Mar91} applied
to the polaron formulation of the respective $t$-$J$ model, we compare features of these three different
polarons \cite{Ber09}.
\section{SPIN POLARON}
\begin{figure}[t]
\includegraphics[width=0.46\textwidth]{fig1.eps}
\caption{\small{
Spectral density $A({\bf k}, \omega)$ of the model Eq. (\ref{eq:1}) with $J=0.4t$
along the particular directions of the 2D Brillouin zone.
} }
\label{fig:1}
\end{figure}
It is believed that the basic effective model which describes the important physics present
both in the undoped La$_2$CuO$_4$ and in the lightly doped La$_{2-x}$Sr$_x$CuO$_4$
is the two-dimensional (2D) $t$-$J$ model \cite{Cha78},
\begin{equation}
\label{eq:1}
H_S=-t \sum_{\langle i,j \rangle, \sigma} \left(\tilde{c}^\dag_{i\sigma} \tilde{c}_{j\sigma}
+ {\rm H.c.}\right)+ J \sum_{\langle i, j \rangle} {\bf S}_i \cdot {\bf S}_j,
\end{equation}
where ${\bf S}_i$ are spin $S=1/2$ operators and
the constrained operators $\tilde{c}^\dag_{i\sigma}=c^\dag_{i\sigma}(1-n_{i\bar{\sigma}})$
allow for the hopping only in the restricted Hilbert space with no double occupancies.
The superexchange energy scale is $J=4t^2/U$ where $U$ is the {\it effective} repulsion between
two electrons with opposite spins on the same Cu site and $t$ is the effective hopping between the Cu ions.
In the undoped case the ground state of the model is the 2D AF ordered state -- this can
be easily seen by noting that the kinetic term in Eq. (\ref{eq:1}) does not contribute
in the half-filled case and the $t$-$J$ model reduces then to the Heisenberg model.
On the other hand, in the case of one hole doped into the AF ground state the model
Eq. (\ref{eq:1}) can be reduced to the polaron-type model with the quadratic terms
representing magnon spectrum and the polaron-type interaction between the holes
and the magnons \cite{Mar91}. Such a model can be easily solved using the SCBA method
\cite{Mar91} and the hole spectral functions can be calculated from the Green's functions.
We solved the SCBA equations numerically on a mesh of $16 \times 16$ points
-- the results for the realistic case of $J=0.4t$ are shown in Fig. \ref{fig:1}.
We see a well-developed dispersive quasiparticle peak on the right hand side of the spectrum which
suggests that the polaron is formed. Since microscopically the polaron is formed
due to the coupling between the hole and magnons we call it a {\it spin} polaron.
Besides, we note that the excited states are almost entirely different from those
found in the classical Ising case (the so-called ladder spectrum \cite{Mar91}).
This is particularly pronounced for some values of momentum ${\bf k}$ such as e.g.
${\bf k} = (0,0)$ or ${\bf k} = (\pi,\pi)$.
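The structure of such an SCBA calculation can be sketched numerically. The following minimal illustration is not the code used for Fig. \ref{fig:1}: it uses the linear spin-wave magnon dispersion $\omega_{\bf q}=2J\sqrt{1-\gamma_{\bf q}^2}$ with $\gamma_{\bf q}=(\cos q_x+\cos q_y)/2$ and the standard hole--magnon vertex $M_{{\bf k},{\bf q}}=(4t/\sqrt{N})(\gamma_{{\bf k}-{\bf q}}u_{\bf q}+\gamma_{\bf k}v_{\bf q})$, but the small $4\times4$ mesh, the broadening $\eta$, and the iteration count are illustrative choices only.

```python
import numpy as np

def scba_spectral(L=4, t=1.0, J=0.4, eta=0.2, n_w=161, n_iter=30):
    """Self-consistent Born approximation for a single hole in a 2D
    antiferromagnet (t-J model with linear spin-wave magnons)."""
    ks = 2.0 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    gam = 0.5 * (np.cos(kx) + np.cos(ky))                 # gamma_q
    nu = np.sqrt(np.maximum(1.0 - gam**2, 1e-12))
    w_q = 2.0 * J * nu                                    # magnon dispersion
    u = np.sqrt((1.0 + nu) / (2.0 * nu))                  # Bogoliubov factors
    v = -np.sign(gam) * np.sqrt(np.maximum((1.0 - nu) / (2.0 * nu), 0.0))
    w = np.linspace(-6.0, 6.0, n_w)
    N = L * L
    G = np.broadcast_to(1.0 / (w + 1j * eta), (L, L, n_w)).copy()
    for _ in range(n_iter):
        Sig = np.zeros((L, L, n_w), dtype=complex)
        for ix in range(L):
            for iy in range(L):
                for qx in range(L):
                    for qy in range(L):
                        if abs(gam[qx, qy]) > 1.0 - 1e-9:
                            continue                      # skip Goldstone points
                        mx, my = (ix - qx) % L, (iy - qy) % L
                        M = (4.0 * t / np.sqrt(N)) * (
                            gam[mx, my] * u[qx, qy] + gam[ix, iy] * v[qx, qy])
                        shifted = w - w_q[qx, qy]         # G(k-q, w - w_q)
                        Gr = np.interp(shifted, w, G[mx, my].real, left=0.0, right=0.0)
                        Gi = np.interp(shifted, w, G[mx, my].imag, left=0.0, right=0.0)
                        Sig[ix, iy] += M**2 * (Gr + 1j * Gi)
        G = 1.0 / (w + 1j * eta - Sig)                    # Dyson equation
    return w, -G.imag / np.pi                             # spectral function A(k,w)
```

On such a coarse mesh the lowest peak of $A({\bf k},\omega)$ at ${\bf k}=(\pi/2,\pi/2)$ already sits below the bare level $\omega=0$, the hallmark of polaron binding.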
\section{ORBITAL POLARON}
\begin{figure}[t]
\includegraphics[width=0.46\textwidth]{fig2.eps}
\caption{\small{
Spectral density $A({\bf k}, \omega)$ of the model Eq. (\ref{eq:2}) with $J=0.4t$
along the particular directions of the 2D Brillouin zone.
} }
\label{fig:2}
\end{figure}
A different situation occurs in the undoped LaMnO$_3$ and its doped counterpart La$_{1-x}$Sr$_x$MnO$_3$:
here the partially filled $e_g$ orbitals are degenerate and the orbital degrees of freedom should be taken
into account. However, in the lightly doped case the 2D ferromagnetic state is stable
and consequently the spin degrees of freedom can be integrated out. Thus, one arrives
at the following ${\it orbital}$ $t$-$J$ model for the $(a,b)$ plane \cite{Fei05, Bri00},
\begin{align}
\label{eq:2}
H_O=&-\frac{1}{4}t \sum_{\langle i,j \rangle} \left[3\tilde{x}^\dag_{i} \tilde{x}_{j}
+\tilde{z}^\dag_{i} \tilde{z}_{j} \mp \sqrt{3}\left(\tilde{x}^\dag_{i} \tilde{z}_{j}
+\tilde{z}^\dag_{i} \tilde{x}_{j}\right) + \mbox{H.c.}\right]
\nonumber \\
&+\frac{1}{8}J \sum_{\langle i, j \rangle} \left[3T^x_i T^x_j+T^z_i T^z_j \mp \sqrt{3}\left(T^x_i T^z_j
+T^z_i T^x_j\right)\right],
\end{align}
where $x^\dag_{i}|0\rangle = \frac{1}{\sqrt{2}} |x^2-y^2\rangle_i$,
$z^\dag_{i}|0\rangle = \frac{1}{\sqrt{6}} |3z^2-r^2\rangle_i$,
the $-$($+$) signs denote the bonds along the $a$ ($b$) direction,
and tilde
denotes the hopping in the Hilbert space with no double occupancies.
Besides, ${\bf T}_i$ are pseudospin
$T=1/2$ operators with $T^z_i=(\tilde{n}_{ix}-
\tilde{n}_{iz})/2$, the superexchange energy scale is $J=4t^2/U$ where $U$ is the effective repulsion
between electrons in the ${}^6A_1 $ state, and $t$ is the effective hopping between
the Mn ions.
This time, in the undoped case the ground state of the model is the 2D AO ordered state
formed by the $(|x\rangle+|z\rangle)/\sqrt{2}$ and $(|x\rangle-|z\rangle)/\sqrt{2}$ orbitals.
When one hole is doped into such an AO ground state the model Eq. (\ref{eq:2}) can be again
reduced to the polaron-type model with the quadratic terms
representing {\it orbiton} spectrum and the polaron-type interaction between the holes
and the orbitons \cite{Bri00}.
We solved the respective SCBA equations numerically on a mesh of $16 \times 16$ points
-- the spectral function for the realistic case of $J=0.4t$ \cite{Bri00} is shown in Fig. \ref{fig:2}.
As in the spin case there is a well-developed quasiparticle peak on the right hand side
of the spectrum which suggests that the polaron is formed. However, it is an {\it orbital}
polaron since it describes a hole dressed by orbiton excitations. Moreover,
the quasiparticle peak has almost no dispersion and the excited states
resemble the ladder spectrum \cite{Mar91} which suggests that
the orbital polarons are much more ``classical'' than the spin ones.
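The ladder spectrum invoked above can be illustrated with a toy model: in an Ising-like (classical) background the hole experiences a linearly rising string potential, and in the retraceable-path picture it hops with amplitude $t\sqrt{z-1}$ between string states. The sketch below is only a caricature of the physics: the string tension $\kappa$ and the plain linear potential are simplifying assumptions, not the couplings of Eq. (\ref{eq:2}). It exhibits the characteristic Airy-like $\kappa^{2/3}$ scaling of the ladder levels.

```python
import numpy as np

def string_ladder(kappa, t=1.0, z=4, n_sites=400, n_levels=3):
    """Lowest eigenvalues of a hole confined by a linear string potential
    V_n = kappa * n, hopping t*sqrt(z-1) between string states |n>."""
    tau = t * np.sqrt(z - 1.0)
    H = np.diag(kappa * np.arange(n_sites, dtype=float))
    off = -tau * np.ones(n_sites - 1)
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]
```

Measured from the free band bottom $-2t\sqrt{z-1}$, the binding energies grow like $\kappa^{2/3}$ and the level spacings shrink with the level index, exactly the ladder pattern seen in the classical polaron spectra.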
\section{SPIN-ORBITAL POLARON}
In the undoped LaVO$_3$ and in the lightly doped La$_{1-x}$Sr$_x$VO$_3$ the situation
is much more complex than in the cuprates or manganites: here both the spin and orbital
degrees of freedom should be taken into account as the spin degrees of freedom
form the AF order in the undoped plane and cannot be integrated out as in
the manganites \cite{Kha01}. Thus, one needs to consider the full spin-orbital $t$-$J$ model
with $t_{2g}$ orbital degrees of freedom \cite{Kha01, Woh08}. Furthermore,
in the case of the $t$-$J$ models with $t_{2g}$ orbital degrees of freedom
we have to supplement such models with the frequently neglected three-site terms \cite{Dag08}.
Thus we arrive at the following strong-coupling model for the $(a,b)$ planes of
the cubic vanadates \cite{Woh09},
\begin{equation}
\label{eq:3}
H_{SO}=H_t+H^{(1)}_J+H^{(2)}_J+H^{(3)}_J+H_{3s}.
\end{equation}
The first term in the above equation is the kinetic term \cite{Woh09},
\begin{align}\label{eq:ht}
H_t &= -t \sum_{i,\sigma} P \left( \tilde{b}^\dag_{i\sigma} \tilde{b}_{i+\hat{a}\sigma}
+\tilde{a}^\dag_{i\sigma}\tilde{a}_{i+\hat{b}\sigma} + {\rm H.c.}\right) P,
\end{align}
where: (i) electrons in $d_{yz}\equiv a$ ($d_{zx}\equiv b$) orbitals
can hop only along the $b$ ($a$) direction in the $(a,b)$ plane, (ii) the tilde above the operators denotes
the fact that the hopping is allowed only in the
constrained Hilbert space, and (iii) due to the large Hund's coupling $J_H \gg t$ in the cubic vanadates \cite{Kha01}
we project the final states resulting from the electron hopping onto the high spin states
using the $P$ operators in Eq. (\ref{eq:ht}).
The middle terms in Eq. (\ref{eq:3}) are the superexchange terms and are somewhat lengthy \cite{Ole05},
\begin{align}\label{eq:hj123}
H^{(1)}_J&=-\frac{1}{6}Jr_1 \sum_{\langle i, j \rangle} \left({\bf S}_i \cdot {\bf S}_j + 2\right) \left(\frac{1}{4}-T^z_i T^z_j\right), \\
H^{(2)}_J&=\frac{1}{8}J \sum_{\langle i, j \rangle} \left({\bf S}_i \cdot {\bf S}_j - 1\right) \left(\frac{19}{12}
\mp \frac{1}{2}T^z_i \mp \frac{1}{2} T^z_j -\frac{1}{3} T^z_i T^z_j\right), \\
H^{(3)}_J&=\frac{1}{8}J r_3 \sum_{\langle i, j \rangle} \left({\bf S}_i \cdot {\bf S}_j-1\right) \left(\frac{5}{4}
\mp \frac{1}{2}T^z_i \mp \frac{1}{2} T^z_j + T^z_i T^z_j\right),
\end{align}
where: ${\bf S}_i$ is a spin $S=1$ operator, $T^z_i=(\tilde{n}_{ib}-\tilde{n}_{ia})/2$ is a pseudospin $T=1/2$ operator,
and the superexchange constant $J=4t^2/U$ with $U$ being the repulsion between electrons
on the same site and in the same orbital and with $t\ll U$ being the effective hopping between
the V ions. The factors $r_1=1/(1-3\eta)$ and $r_3=1/(1+2\eta)$ (where $\eta=J_H/U$)
account for the Hund's coupling $J_H$ and originate from the energy splitting of various
$d^3$ excited states due to the various possible spin and orbital configurations \cite{Kha01}.
The last term is the three-site term which would contribute to the free hole motion \cite{Woh09},
\begin{align}\label{eq:h3s}
H_{3s}\!\! =\!\!& - \frac{1}{12} J \left(r_1+2\right)
\sum_{i,\sigma} P \left( \tilde{b}^\dag_{i-\hat{a}\sigma} \tilde{n}_{ia\bar{\sigma}}
\tilde{b}_{i+\hat{a}\sigma}+ {\rm H. c.} \right) P \nonumber \\
&- \frac{1}{12} J \left(r_1+2\right)
\sum_{i,\sigma} P \left( \tilde{a}^\dag_{i-\hat{b}\sigma} \tilde{n}_{ib\bar{\sigma}}
\tilde{a}_{i+\hat{b}\sigma} + {\rm H. c.} \right) P.
\end{align}
\begin{figure}[t]
\includegraphics[width=0.46\textwidth]{fig3.eps}
\caption{\small{
Spectral density $A({\bf k}, \omega)$ as obtained for the $a$ orbitals
of the model Eq. (\ref{eq:3}) with $J=0.2 t$ and $\eta=0.15t$ along
the particular directions of the 2D Brillouin zone.}}
\label{fig:3}
\end{figure}
In the undoped case the ground state of the model is the 2D AF and AO ordered state \cite{Kha01}.
When one hole is doped to the system the model Eq. (\ref{eq:3}) can again be expressed
in the polaron language: however, this time the hole couples {\it both} to orbiton
and magnon excitations simultaneously \cite{Woh09}. Thus, the SCBA equations are more complicated
and require an additional sum over the 2D Brillouin zone, similarly as in the case of the
coupling between a hole and two magnons \cite{Bal95}. Nevertheless, it is possible
to solve them numerically also on a mesh of $16 \times 16$ points
-- the results for the realistic case of $J=0.2t$ and $\eta=0.15$
are shown in Fig. \ref{fig:3}.
As in the purely spin or orbital case, described in the preceding chapters,
a well-developed quasiparticle peak on the right hand side
of the spectrum suggests formation of the polaron also in the present case. This time
it is a {\it spin-orbital} polaron since the hole couples both to the orbitons and magnons.
Surprisingly, the quasiparticle peak has only a very small dispersion and the
excited states reproduce almost exactly the ladder spectrum
of the purely classical spin case \cite{Mar91}.
Since in the model Eq. (\ref{eq:3})
only the orbital (pseudo)spins are Ising-type this means that these are
the orbital degrees of freedom which are responsible for the observed classical
behaviour.
\section{Conclusions and final discussion}
In conclusion, we studied a problem of a single hole
doped into the half-filled ground state of the three different
cases of the $t$-$J$ model: (i) the spin model relevant for
the cuprates, (ii) the $e_g$ orbital model relevant for the
manganites, and (iii) the spin-orbital model relevant for the
vanadates. In all these three cases the hole moves by dressing
up with the collective excitations of the ground state and forms a polaron.
However, there are striking differences between the polarons discussed
here. On one hand, in the spin case the quasiparticle peak has
a large dispersion and the excited spectrum does not resemble the classical
ladder spectrum \cite{Mar91} at all. On the other hand, both in the orbital and in the spin-orbital
case the quasiparticle has a very tiny dispersion and the rest of the spectrum
resembles almost exactly the ladder spectrum of the classical Ising model \cite{note}.
Possibly this is one of the reasons why the
ordered state disappears very quickly with hole doping in the cuprates
whereas it is relatively stable in the manganites or vanadates:
in the two latter cases the polarons are more classical and quantum
fluctuations would not destroy the ordered state so easily.
\begin{theacknowledgments}
I would like to thank the organising committee of the course for their financial support.
I thank Andrzej M. Ole\'s, Maria Daghofer and Peter Horsch for the extremely
fruitful discussion during the common work on this subject. I am also particularly
grateful to Andrzej M. Ole\'s for his invaluable help and ideas.
This work was supported in part by the Foundation for Polish Science (FNP)
and by the Polish Ministry of Science under grant No. N202 068 32/1481.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}
In 1978 S\'{a}rk\"{o}zy proved that if $A\subset\mathbb{Z}$ is a
dense set of integers then it contains a $k^{th}$-power difference
for any $k\geq2$ \cite{Sarkozy1978SarkozysTheorem}. For a set $A\subset\{1,\dots,N\}$
that avoids $k^{th}$-power differences the best quantitative upper
bound, due to Pintz, Steiger, and Szemer\'{e}di \cite{PintzSteigerSzemeredi1988BestBoundsSarkozysTheorem},
takes the form
\[
|A|\leq N^{1-o(1)},
\]
and the best lower bound, due to Ruzsa \cite{Ruzsa1984DifferenceSetsWithoutSquares}
(improved in some cases by Lewko \cite{Lewko2015ImprovedSarkozyLowerBounds}),
is of the form $N^{1-c}$ where $c\leq\frac{1-\delta}{k}$ for some
small fixed $\delta>0$. Following recent advances in the polynomial
method \cite{CrootLevPachZ4,EllenbergGijswijtCapsets,TaosBlogCapsets},
Green \cite{Green2017SarkozyTheoremInFunctionFields} gave a strong
quantitative result for the function field analogue of S\'{a}rk\"{o}zy's
theorem. Let $P_{q,n}$ denote the space of polynomials in the variable
$T$ over $\mathbb{F}_{q}$ of degree $<n$. For $k\geq2$, Green
proved that any subset $A\subset P_{q,n}$ that does not contain two
distinct polynomials $u(T),v(T)$ such that $u(T)-v(T)=b(T)^{k}$
for some $b\in\mathbb{F}_{q}[T]$ has size at most
\[
|A|\leq q^{n(1-c_{k})}
\]
where
\[
c_{k}=\frac{1}{2k^{2}D_{q}(k)^{2}\log q}
\]
and $D_{q}(k)$ denotes the sum of digits of $k$ in base $q$. In
this paper, by adapting Ruzsa's construction in $\mathbb{Z}$, we
prove the following lower bound for the function field setting:
\begin{thm}
\label{thm:sarkozy_kth_power_lower}Let $k\geq2$, and suppose that
$\gcd(k,q-1)>1$. For $n\equiv0\pmod{2k}$ there exists a set $A\subset P_{q,n}$
of size
\[
|A|=q^{n\left(1-\frac{1}{2k}\right)}
\]
that does not contain a $k^{th}$ power difference.
\end{thm}
The construction results in a bound for all $n$, since $P_{q,n}\subset P_{q,n+1}$.
In section \ref{sec:Generalized-Paley-Graphs}, we prove a handful
of results concerning the independence number of generalized Paley
graphs, including a generalization of a claim of Ruzsa. We prove a
basic lower bound for the independence number of a product, which
is ingredient in the proof of Theorem \ref{thm:sarkozy_kth_power_lower},
and we also prove upper bounds to help understand the limits of the
method. For $k\geq3$, Theorem \ref{thm:sarkozy_kth_power_lower},
could potentially be improved by finding larger independent sets in
products of generalized Paley graphs. In subsection \ref{subsec:Limits-of-the-method}
we give a case with an improved bound, and discuss the limit of the
method. When $k=2$, we conjecture that the lower bound achieved,
$q^{\frac{3}{4}n}$, is optimal.
\begin{rem}
For $\gcd(k,q-1)=1$, every element of $\mathbb{F}_{q}$ is a $k^{th}$
power, and it is not always possible to obtain a lower bound as strong
as Theorem \ref{thm:sarkozy_kth_power_lower}. For $k=q^{r}$, a pigeonhole
argument achieves the upper bound
\[
|A|\leq q^{n-\lfloor\frac{n-1}{k}\rfloor}.
\]
For any $k$, there are at most $q^{1+\lfloor\frac{n-1}{k}\rfloor}$ distinct
$k^{th}$ powers with degree $\leq n-1$, and so the greedy construction
yields a set $A$ with no $k^{th}$ power differences of size
\begin{equation}
|A|=q^{n-1-\lfloor\frac{n-1}{k}\rfloor},\label{eq:greedy_lower_bound}
\end{equation}
and hence in the case of $k=q^{r}$, this upper bound is tight within
a factor of $q$.
\end{rem}
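The greedy construction behind (\ref{eq:greedy_lower_bound}) is easy to carry out for small parameters. The sketch below is illustrative only (it uses $q=3$, $k=2$, $n=4$): polynomials in $P_{q,n}$ are represented as coefficient tuples, and elements are picked greedily so that no pairwise difference, in either order, is a nonzero $k^{th}$ power.

```python
from itertools import product

def poly_mul(a, b, q):
    """Multiply coefficient tuples (constant term first) over F_q."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return tuple(out)

def greedy_power_free(q, k, n):
    """Greedy subset of P_{q,n} avoiding k-th power differences."""
    # all nonzero k-th powers b(T)^k of degree < n, padded to length n
    powers = set()
    for deg in range((n - 1) // k + 1):
        for b in product(range(q), repeat=deg + 1):
            p = b
            for _ in range(k - 1):
                p = poly_mul(p, b, q)
            p = p + (0,) * (n - len(p))
            if any(p):
                powers.add(p)
    A = []
    for u in product(range(q), repeat=n):
        ok = True
        for v in A:
            d = tuple((u[i] - v[i]) % q for i in range(n))
            if d in powers or tuple((-x) % q for x in d) in powers:
                ok = False
                break
        if ok:
            A.append(u)
    return A
```

Since each chosen element forbids only a bounded number of later candidates, the greedy set always meets the bound (\ref{eq:greedy_lower_bound}); for $q=3$, $k=2$, $n=4$ this guarantees at least $3^{2}=9$ elements out of $81$.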
\subsection{Graph Theoretic Notation}
We will use the following graph theoretic notation throughout the
paper: For a directed graph $G=(V,E)$, let $\alpha(G)$ denote the
size of the largest independent set, that is the largest set such
that no two vertices are connected by a directed edge
\[
\alpha(G)=\max_{W\subset V}\left\{ |W|:\ \forall\ x,y\in W,\ (x,y)\notin E\right\} .
\]
We let $\omega(G)$ denote the clique number, which equals $\alpha(\overline{G})$
where $\overline{G}$ is the complement graph. Here the complement
graph is $\overline{G}=(V,\overline{E})$ where $\overline{E}=\left\{ (x,y)|\ (x,y)\in V^{2},\ x\neq y,\ (x,y)\notin E\right\} $.
\subsubsection*{The Strong Graph Product:}
Given two graphs $G=(V,E)$ and $G'=(V',E')$, the strong graph product
of $G$ and $G'$ is the graph $G\boxtimes G'$ with vertex set $V\times V'$,
and edge set defined by connecting $(v,v'),\ (u,u')\in V\times V'$
if $(v,u)\in E$ and $(v',u')\in E'$, or $(v,u)\in E$ and $v'=u'$,
or $v=u$ and $(v',u')\in E'$. The strong graph product as defined
applies to both directed and undirected graphs.
\subsubsection*{Shannon Capacity:}
For a graph $G$ and $n\geq2$, we let $G^{\boxtimes n}$ denote the
$n$-fold strong graph product of $G$ with itself. The Shannon Capacity
of an undirected graph $G$ is defined to be
\[
\Theta(G)=\limsup_{n\rightarrow\infty}\left(\alpha(G^{\boxtimes n})\right)^{\frac{1}{n}}.
\]
\subsubsection*{Lov\'{a}sz Theta Function:}
For an undirected graph $G$, the Lov\'{a}sz Theta Function, $\vartheta(G)$,
is a minimization over orthonormal representations of $G$, see \cite{Lovasz1979ShannonCapacityOfAGraph}
for a precise definition. In particular, for undirected $G,H$, $\vartheta$
satisfies
\begin{align}
\Theta(G) & \leq\vartheta(G),\label{eq:Lovasz_Shannon_bound}\\
\vartheta(G\boxtimes H) & =\vartheta(G)\vartheta(H),
\end{align}
and if $G$ is vertex transitive, then
\begin{equation}
\vartheta(G)\vartheta(\overline{G})=|G|.\label{eq:lovasz_vertex_transitive_product_property}
\end{equation}
\section{Generalized Paley Graphs\label{sec:Generalized-Paley-Graphs}}
Generalized Paley Graphs were introduced by Cohen \cite{Cohen1988CliqueNumberOfPaleyGraphs}
and reintroduced by Lim and Praeger \cite{LimPraeger2009GeneralizedPaleyGraphs}.
We expand their definition slightly to allow for the vertex set to
be a ring rather than just a field, as this will be relevant later
(see Theorem \ref{thm:Ruzsa_generalization_odd}).
\begin{defn}
\label{def:Generalized_Paley_Graph_Definition}For a finite commutative
ring $R$, let $\Paley_{k}\left(R\right)$ be the (possibly directed)
graph with vertex set $V=R$, and edge set
\[
E=\left\{ (x,y):\ x-y=z^{k}\text{ for some }z\in R\right\} .
\]
\end{defn}
With this notation, for $q\equiv1\pmod{4}$, $\Paley_{2}(\mathbb{F}_{q})$
is the usual Paley graph. The graph $\Paley_{k}(\mathbb{F}_{q})$,
where two elements are connected if they differ by a $k^{th}$ power,
is undirected if and only if $\frac{q-1}{\gcd(q-1,k)}$ is even, since
in this case $-1$ is a $k^{th}$ power. If $\gcd(k,q-1)=1$, then
every element of $\mathbb{F}_{q}$ is a $k^{th}$ power, and so $\Paley_{k}(\mathbb{F}_{q})$
is the complete graph on $q$ vertices, and if $\gcd(k,q-1)=d<k$, then
replacing $k$ with $d$ does not change the graph. The assumption
$q\equiv1\pmod{2k}$ is often used as it assures both that the graph
is undirected and that $k^{th}$ powers are indeed relevant.
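These statements are easy to verify computationally for small prime fields. For instance, with $q=7$ and $k=3$ the nonzero cubes are $\{1,6\}$, so $\Paley_{3}(\mathbb{F}_{7})$ is the undirected $7$-cycle with independence number $3$. The sketch below (prime moduli only, brute-force independence number) is purely illustrative.

```python
from itertools import combinations

def paley_edges(p, k):
    """Directed edge set of Paley_k(Z/pZ) for prime p (loops excluded)."""
    powers = {pow(z, k, p) for z in range(1, p)}   # nonzero k-th powers
    return {(x, y) for x in range(p) for y in range(p)
            if x != y and (x - y) % p in powers}

def independence_number(p, edges):
    """Brute-force independence number on the vertex set {0,...,p-1}."""
    for r in range(p, 0, -1):
        for S in combinations(range(p), r):
            if all((x, y) not in edges for x in S for y in S if x != y):
                return r
    return 0
```

The same routines confirm the directedness criterion: the graph is undirected exactly when $-1$ is a $k^{th}$ power, which holds here since $6=3^{3}\bmod 7$.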
Ruzsa \cite{Ruzsa1984DifferenceSetsWithoutSquares} gave a lower bound
for S\'{a}rk\"{o}zy's theorem in $\mathbb{Z}$ based on a maximization
involving the quantity
\[
\alpha\left(\Paley_{k}\left(\mathbb{Z}/m\mathbb{Z}\right)\right)
\]
for squarefree $m$. In Section \ref{sec:The-Lower-Bound-Construction}
we give a lower bound for the function field case in terms of the
independence number of a product of this graph. Our results are most
naturally stated in terms of the independence number of products of
generalized Paley graphs, and so we define
\begin{equation}
r_{k,n}(R)=\begin{cases}
\alpha\left(\Paley_{k}(R)\right) & \text{ when }n=1\\
\alpha\left(\Paley_{k}(R)^{\boxtimes n}\right) & \text{ when }n\geq2
\end{cases}.\label{eq:r_k_definition}
\end{equation}
For products of rings, these graphs can be factored. For composite
$m$, we have the following lemma:
\begin{lem}
\label{lem:paley_direct_product}Let $n,m>1$ be relatively prime.
Then
\[
\Paley_{k}(\mathbb{Z}/mn\mathbb{Z})=\Paley_{k}(\mathbb{Z}/m\mathbb{Z})\boxtimes\Paley_{k}(\mathbb{Z}/n\mathbb{Z}).
\]
\end{lem}
\begin{proof}
This follows from the Chinese Remainder Theorem and the fact that
an element is a $k^{th}$-power in $\mathbb{Z}/mn\mathbb{Z}$ if and
only if it maps to a $k^{th}$ power in both $\mathbb{Z}/m\mathbb{Z}$
and $\mathbb{Z}/n\mathbb{Z}$.
\end{proof}
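Lemma \ref{lem:paley_direct_product} can be checked directly for small moduli. Since $0=0^{k}$ is always a $k^{th}$ power, adjacency in the strong product ("equal or adjacent in each coordinate") reduces to the single condition that the difference is a $k^{th}$ power in each coordinate, which is what the illustrative sketch below verifies.

```python
def kth_powers(m, k):
    """All k-th powers in Z/mZ, including 0."""
    return {pow(z, k, m) for z in range(m)}

def check_crt_factorization(m, n, k):
    """Check: d is a k-th power mod m*n  <=>  d is one mod m and mod n.
    By the CRT this holds whenever gcd(m, n) = 1, and it is exactly the
    content of the strong-product factorization of Paley_k(Z/mnZ)."""
    Pm, Pn, Pmn = kth_powers(m, k), kth_powers(n, k), kth_powers(m * n, k)
    for d in range(m * n):
        if (d in Pmn) != (d % m in Pm and d % n in Pn):
            return False
    return True
```

A non-coprime pair such as $m=n=2$ fails the check, showing that the coprimality hypothesis is essential.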
\subsection{The Independence Number\label{subsec:The-Independence-Number-general-payley}}
For $R=\mathbb{F}_{q}$, the clique number of the generalized
Paley graph is a well-studied quantity. See Yip's Masters Thesis
\cite{Yip2021ThesisCliquesInGeneralizedPaleyGraphs,Yip2022CliqueNumberPrimePower}
for a discussion of lower and upper bounds. In particular, when the
graph is undirected, the best lower bound is due to Cohen \cite{Cohen1988CliqueNumberOfPaleyGraphs}
\begin{equation}
\frac{p}{(p-1)\log d}\left(\frac{1}{2}\log q-2\log\log q\right)-1\leq\omega\left(\Paley_{k}(\mathbb{F}_{q})\right)=\alpha\left(\overline{\Paley_{k}(\mathbb{F}_{q})}\right).\label{eq:cohen_lower_bound}
\end{equation}
When $k=2$, the graph $\Paley_{k}(\mathbb{F}_{q})$ is self-complementary,
and so $\omega(\Paley_{2}(\mathbb{F}_{q}))=\alpha(\Paley_{2}(\mathbb{F}_{q}))$
but for $k>2$, $\Paley_{k}(\mathbb{F}_{q})$ is isomorphic to a subgraph
of $\overline{\Paley_{k}(\mathbb{F}_{q})}$, and hence
\begin{equation}
\omega\left(\Paley_{k}(\mathbb{F}_{q})\right)=\alpha\left(\overline{\Paley_{k}(\mathbb{F}_{q})}\right)\leq\alpha\left(\Paley_{k}(\mathbb{F}_{q})\right),\label{eq:independence_number_complement_bound}
\end{equation}
that is the lower bounds for the clique number imply lower bounds
for the independence number. In some unique cases, such as when $q=p^{s}$
and $k|\frac{q-1}{p^{r}-1}$ for some $r|s$, there are significantly
stronger lower bounds for the size of the clique in $\Paley_{k}(\mathbb{F}_{q})$,
see \cite{BroereDomanRidley1988largeCliquesInSomePayleyGraphs,Yip2021ThesisCliquesInGeneralizedPaleyGraphs}
for more details. The upper bound for the clique number is $\sqrt{q}$,
and for $k=2$ this was recently improved in \cite{HansonPetridis2021CliqueNumberOfPaleyGraphs}.
The case for the independence number is different, however, as in some
cases it grows above $\sqrt{q}$ for larger $k$. In the first non-trivial
case, when $k=3$ and $q=7$, the Paley Graph is precisely the $7$-cycle,
and we have an independent set of size $3=7^{0.5645...}$. Indeed,
if $q=1+2k$, then the only $k^{th}$ powers in $\mathbb{F}_{q}$
are $-1,0,1$, and hence $\Paley_{k}(\mathbb{F}_{q})$ is isomorphic
to $C_{2k+1}$, the $(2k+1)$-cycle. This contains an independent
set of size $k$, and so there are infinitely many graphs satisfying
\begin{equation}
q^{1-\frac{\log(2)}{\log(2k+1)}}\leq\alpha\left(\Paley_{k}\left(\mathbb{F}_{q}\right)\right).\label{eq:sparse_lower_bound}
\end{equation}
The following theorem gives a basic upper bound for the independence
number of these graphs.
\begin{thm}
\label{thm:payley_upper_bound}Suppose that $-1$ is a $k^{th}$ power
in $\mathbb{F}_{q}$. Then we have that
\begin{equation}
\vartheta\left(\Paley_{k}(\mathbb{F}_{q})\right)\leq q^{1-\frac{1}{k}}\label{eq:lovasz_theta_bound}
\end{equation}
where $\vartheta$ is the Lov\'{a}sz Theta Function.
\end{thm}
In particular, by (\ref{eq:Lovasz_Shannon_bound}) and the definition
of $\Theta$ as a limsup, it follows that
\begin{equation}
r_{k,2}(\mathbb{F}_{q})\leq q^{2\left(1-\frac{1}{k}\right)}.\label{eq:simple_r_k_2_upper_bound}
\end{equation}
To prove this theorem, we need to prove the following fact about the
complement graph:
\begin{lem}
\label{lem:shannon_capacity_lower_bound}We have that
\[
\alpha\left(\overline{\Paley_{k}(\mathbb{F}_{q})}^{\boxtimes k}\right)\geq q,
\]
and hence when the graph is undirected
\begin{equation}
\vartheta\left(\overline{\Paley_{k}(\mathbb{F}_{q})}\right)\geq q^{\frac{1}{k}}.\label{eq:lower_bound_lovasz_theta}
\end{equation}
\end{lem}
\begin{proof}
If $\gcd(k,q-1)=1$, then $\overline{\Paley_{k}(\mathbb{F}_{q})}$
is the totally isolated graph, the complement of the complete graph,
and so the result holds trivially. Assume that $\gcd(k,q-1)>1$, and
let $\beta\in\mathbb{F}_{q}$ be a cyclic generator of $\mathbb{F}_{q}^{*}$.
Consider the set
\[
A=\left\{ (x,\beta x,\beta^{2}x,\dots,\beta^{k-1}x)|\ x\in\mathbb{F}_{q}\right\} .
\]
Let $x,y\in\mathbb{F}_{q}$ be two elements with $x-y\neq0$, and
write $(x-y)=\beta^{a}$ for some $a$. Then $(x-y)\beta^{j}$ will
be a $k^{th}$ power for the value of $j\in\{0,1,\dots,k-1\}$ satisfying
$j\equiv-a\pmod k$. This proves that $A$ is an independent set.
Equation (\ref{eq:lower_bound_lovasz_theta}) then follows since the
Lov\'{a}sz Theta Function upper bounds the size of the largest independent
set and is multiplicative under the strong product \cite[Lemma 2]{Lovasz1979ShannonCapacityOfAGraph}.
\end{proof}
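The construction in the proof can be spot-checked by brute force for a small case; the sketch below takes $q=7$, $k=3$, and $\beta=3$ (one particular generator of $\mathbb{F}_{7}^{*}$ — an illustrative choice) and verifies that $A$ is independent in the $k$-fold strong product of the complement graph:

```python
q, k, beta = 7, 3, 3            # beta = 3 generates F_7^*
powers = {pow(x, k, q) for x in range(1, q)}   # nonzero cubes: {1, 6}
A = [tuple(x * pow(beta, j, q) % q for j in range(k)) for x in range(q)]

def comp_adj(a, b):             # adjacency in the complement of Paley_3(F_7)
    return a != b and (a - b) % q not in powers

def strong_adj(u, v):           # adjacency in the k-fold strong product
    return u != v and all(a == b or comp_adj(a, b) for a, b in zip(u, v))

# A is an independent set of size q in the product graph
assert len(set(A)) == q
assert all(not strong_adj(u, v) for u in A for v in A if u != v)
```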
\begin{proof}
(of Theorem \ref{thm:payley_upper_bound}). Since the generalized
Paley graph is vertex transitive, and undirected since $-1$ is a
$k^{th}$ power, by (\ref{eq:lovasz_vertex_transitive_product_property})
\[
\vartheta\left(\Paley_{k}(\mathbb{F}_{q})\right)\vartheta\left(\overline{\Paley_{k}(\mathbb{F}_{q})}\right)=q.
\]
Equation (\ref{eq:lovasz_theta_bound}) then follows from this and
the lower bound in Lemma \ref{lem:shannon_capacity_lower_bound}.
\end{proof}
The following proposition gives a lower bound for the independence
number of a product, which will be used in the next section in the
proof of Theorem \ref{thm:sarkozy_kth_power_lower}.
\begin{prop}
\label{prop:rk_2_lower_bound}For $\gcd(k,q-1)>1$ we have that $r_{k,2}(\mathbb{F}_{q})\geq q.$
In particular, when $k=2$ and $q$ is odd, $r_{2,2}(\mathbb{F}_{q})=q$.
\end{prop}
\begin{proof}
Since $\gcd(k,q-1)>1$, there exists $\beta\in\mathbb{F}_{q}$ that
is not a $k^{th}$ power. Then $A=\left\{ (x,\beta x)\ |\ x\in\mathbb{F}_{q}\right\} $
is an independent set in $\Paley_{k}(\mathbb{F}_{q})\boxtimes\Paley_{k}(\mathbb{F}_{q})$,
since for any distinct $x,y\in\mathbb{F}_{q}$, at most one of $x-y$
and $\beta(x-y)$ can be a $k^{th}$ power. The final statement follows
from the upper bound (\ref{eq:simple_r_k_2_upper_bound}).
\end{proof}
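For a concrete illustration, the sketch below verifies the construction for $k=3$ and $q=7$, with $\beta=2$ chosen as a specific non-cube (an illustrative choice):

```python
q, k, beta = 7, 3, 2            # 2 is not a cube mod 7 (cubes: {1, 6})
powers = {pow(x, k, q) for x in range(1, q)}
A = [(x, beta * x % q) for x in range(q)]

def paley_adj(a, b):            # adjacency in Paley_3(F_7)
    return a != b and (a - b) % q in powers

def strong_adj(u, v):           # adjacency in the strong product of two copies
    return u != v and all(a == b or paley_adj(a, b) for a, b in zip(u, v))

# A is an independent set of size q, so r_{3,2}(F_7) >= 7
assert all(not strong_adj(u, v) for u in A for v in A if u != v)
```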
Lemma \ref{lem:shannon_capacity_lower_bound}, Proposition \ref{prop:rk_2_lower_bound},
and Theorem \ref{thm:payley_upper_bound} together imply that
\begin{equation}
q^{\frac{1}{2}}\leq\Theta\left(\Paley_{k}(\mathbb{F}_{q})\right)\leq q^{1-\frac{1}{k}}\label{eq:paley_shannon_capacity}
\end{equation}
and
\begin{equation}
q^{\frac{1}{k}}\leq\Theta\left(\overline{\Paley_{k}(\mathbb{F}_{q})}\right)\leq q^{\frac{1}{2}}.\label{eq:paley_complement_shannon_capacity}
\end{equation}
For $k\geq3$, for prime fields, we believe that neither of these
inequalities are sharp.
\begin{conjecture}
For $k\geq3$, there exist $a_{k},b_{k}>0$ such that for any prime
$p\equiv1\pmod{k}$,
\[
p^{\frac{1}{2}+a_{k}}\leq\Theta\left(\Paley_{k}(\mathbb{F}_{p})\right)\leq p^{1-\frac{1}{k}-b_{k}}.
\]
\end{conjecture}
\subsection{Composite $m$}
We conclude this section with a generalization of a result of Ruzsa
from \cite{Ruzsa1984DifferenceSetsWithoutSquares}. For $k=2$, the
following Theorem was proven by Ruzsa, but the proof was not published.
\begin{thm}
\label{thm:Ruzsa_generalization_odd}Let $m>1$ be squarefree, let
$k=d2^{s}$ where $d$ is odd, and suppose that each prime dividing
$m$ is of the form $p\equiv1\pmod{2^{s+1}}$. Then if $A\subset\mathbb{Z}/m\mathbb{Z}$
does not contain two elements whose difference is a $k^{th}$-power
we have
\[
|A|<m^{1-\frac{1}{k}}.
\]
\end{thm}
\begin{proof}
We will prove that
\[
\vartheta\left(\Paley_{k}\left(\mathbb{Z}/m\mathbb{Z}\right)\right)<m^{1-\frac{1}{k}},
\]
which implies the result since $\alpha(G)\leq\vartheta(G)$ for any
$G$. Let $m=p_{1}\cdots p_{r}$. Then by Lemma \ref{lem:paley_direct_product}
\[
\Paley_{k}\left(\mathbb{Z}/m\mathbb{Z}\right)=\Paley_{k}\left(\mathbb{Z}/p_{1}\mathbb{Z}\right)\boxtimes\cdots\boxtimes\Paley_{k}\left(\mathbb{Z}/p_{r}\mathbb{Z}\right).
\]
The condition $p_{i}\equiv1\pmod{2^{s+1}}$ for each $p_{i}|m$ guarantees
that $\frac{p_{i}-1}{\gcd(p_{i}-1,k)}$ will be even, and hence that
$-1$ is a $k^{th}$-power in $\mathbb{Z}/p_{i}\mathbb{Z}$, and so
these graphs are undirected. The multiplicative property of the Lov\'{a}sz
Theta Function \cite[Lemma 2]{Lovasz1979ShannonCapacityOfAGraph}
implies that
\[
\vartheta\left(\Paley_{k}\left(\mathbb{Z}/m\mathbb{Z}\right)\right)\leq\prod_{i=1}^{r}\vartheta\left(\Paley_{k}\left(\mathbb{Z}/p_{i}\mathbb{Z}\right)\right),
\]
and hence by equation (\ref{eq:lovasz_theta_bound})
\[
\vartheta\left(\Paley_{k}\left(\mathbb{Z}/m\mathbb{Z}\right)\right)\leq\prod_{i=1}^{r}p_{i}^{1-\frac{1}{k}}=m^{1-\frac{1}{k}}.
\]
The inequality for $|A|$ can be made strict since $|A|$ is an integer,
while $m^{1-\frac{1}{k}}$ is irrational for squarefree $m>1$.
\end{proof}
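The divisibility claim used in the proof — that $p\equiv1\pmod{2^{s+1}}$ forces $-1$ to be a $k^{th}$ power mod $p$ — can be spot-checked numerically; the sketch below takes $k=4$ (so $s=2$) and a few primes congruent to $1$ modulo $8$:

```python
def is_kth_power(a, k, p):
    """Brute-force test of whether a is a k-th power in F_p^*."""
    return any(pow(x, k, p) == a % p for x in range(1, p))

k, s = 4, 2
for p in [17, 41, 73, 89, 97]:           # primes congruent to 1 mod 8
    assert (p - 1) % 2 ** (s + 1) == 0
    assert is_kth_power(p - 1, k, p)      # p - 1 represents -1 mod p
```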
When $k$ is odd, then the bound holds for any odd squarefree integer
$m$. For $k=3$, Matolcsi and Ruzsa recently gave the superior upper
bound $O_{\epsilon}(m^{\frac{1}{2}+\epsilon})$ for squarefree $m$
\cite{MatolcsiRuzsa2021DifferenceSetsOfCubesModuloM} using methods
from \cite{MatolcsiRuzsa2014DifferenceSetsExponentialSumsI}. In the
case where $m$ is a product of primes that are not necessarily of
the form $p\equiv1\pmod{2^{s+1}}$, it seems likely that the same
upper bound holds; however, for $k=2$ proving this seems more
difficult, see \cite{FordGabdullin2021SetsAvoidingSquareDifferencesModuloM}
for more details. Should such a result hold for all $m$, then Ruzsa's
lower bound construction in $[1,N]$ for sets without a $k^{th}$
power difference cannot yield a set of size greater than $N^{1-1/k^{2}}$.
\section{The Lower Bound Construction\label{sec:The-Lower-Bound-Construction}}
To prove Theorem \ref{thm:sarkozy_kth_power_lower}, we prove a general
lower bound in terms of $r_{k,2}(\mathbb{F}_{q})$, and then apply
Proposition \ref{prop:rk_2_lower_bound}. We conclude this section
with a discussion of the limits of this method.
\subsection{Main Result}
\begin{thm}
\label{thm:sarkozy_general_paley_lower}Suppose that $n\equiv0\ (2k)$
and let $F\in\mathbb{F}_{q}[T]$ be a polynomial of degree $k$. Then
there exists a set $A\subset P_{q,n}$ of size
\begin{equation}
|A|\geq\left(r_{k,1}(\mathbb{F}_{q})\right)^{\frac{n}{k}}q^{n\left(1-\frac{1}{k}\right)}\label{eq:sarkozy_general_poly}
\end{equation}
that does not contain $p,p'$ such that $p'-p=F(u)$ for some $u\in\mathbb{F}_{q}[T]$.
If $F(T)=b_{k}T^{k}$ for $b_{k}\neq0$, then we have the improved
bound
\begin{equation}
|A|\geq\left(r_{k,2}(\mathbb{F}_{q})\right)^{\frac{n}{2k}}q^{n\left(1-\frac{1}{k}\right)}.\label{eq:sarkozy_general_kth_power}
\end{equation}
\end{thm}
\begin{proof}
Suppose that $n\equiv0\ (2k)$, and let $S\subset\mathbb{F}_{q}$
be a maximal independent set in $\Paley_{k}(\mathbb{F}_{q})$ that
contains $0$, and let $b_{k}u^{k}$ be the first coefficient of $F(u)$.
Let $A\subset P_{q,n}$ be the set of polynomials of the form
\[
c_{0}+c_{1}T+c_{2}T^{2}+\cdots+c_{n-2}T^{n-2}+c_{n-1}T^{n-1}
\]
where
\[
\begin{cases}
c_{i}\in\mathbb{F}_{q} & \text{ when }i\not\equiv0\ (k)\\
c_{i}\in b_{k}S & \text{ when }i\equiv0\ (k)
\end{cases}.
\]
The leading term of the difference of two elements of
$A$ will either have degree not divisible by $k$, or will
equal $b_{k}(s-s')T^{jk}$ for some $j$, where $s,s'\in S$. For $u=c_{j}T^{j}+\cdots+c_{0}$,
the leading term of $F(u)$ will equal $b_{k}(c_{j}T^{j})^{k}$,
but since $s-s'$ is never a $k^{th}$ power by definition of $S$,
this can never be of the form $b_{k}(s-s')T^{jk}$ for any $u\in P_{q,n}$.
This proves (\ref{eq:sarkozy_general_poly}) since $|S|=r_{k,1}(\mathbb{F}_{q})$.
In the case where $F(u)=u^{k}$, we can improve the bound by making
use of both the first and last coefficient. Let $U\subset\mathbb{F}_{q}\times\mathbb{F}_{q}$
be an independent set in $\Paley_{k}(\mathbb{F}_{q})\boxtimes\Paley_{k}(\mathbb{F}_{q})$,
and consider the set $A$ of polynomials of the form
\[
c_{0}+c_{1}T+c_{2}T^{2}+\cdots+c_{n-2}T^{n-2}+c_{n-1}T^{n-1}
\]
where
\[
\begin{cases}
c_{i}\in\mathbb{F}_{q} & \text{ when }i\not\equiv0\ (k)\\
(c_{i},c_{n-k-i})\in U & \text{ when }i\equiv0\ (k),\ i<\frac{n}{2}
\end{cases}.
\]
Note that since $n\equiv0\pmod{2k}$, $T^{n-k}$ is the $k^{th}$-power
with the largest degree in $P_{q,n}$. Suppose that $u$ is a difference
of two elements of $A$, and write $u=\sum_{i=0}^{n-1}a_{i}T^{i}$
for coefficients $a_{i}$, some of which may equal $0$. Let $j$
be the index of the non-zero coefficient $a_{j}$ whose degree is
farthest from the middle, that is, let $j$ be such that $a_{j}\neq0$,
and $|j-\frac{n-k}{2}|$ is maximal. In the event of a tie between
$2$ non-zero coefficients, take $j>\frac{n-k}{2}$. If $j\not\equiv0\ \pmod{k}$,
then $u$ cannot be a $k^{th}$-power. If $k|j$, consider the pair
of coefficients $(a_{j},a_{n-k-j})$, and assume without loss of generality
that $j>\frac{n-k}{2}$. By definition of $j$, we have that
\[
a_{i}=\begin{cases}
0 & \text{if }i>j\\
0 & \text{if }i<n-k-j
\end{cases},
\]
and so $a_{j}$ and $a_{n-k-j}$ must both be $k^{th}$-powers for
$u$ to be a $k^{th}$-power. Note that $0$ is a $k^{th}$-power,
and $a_{n-k-j}$ could possibly equal $0$. Since $a_{j}\neq0$, and
since $(a_{j},a_{n-k-j})\in U-U$, by definition of $U$ at least
one of $a_{j},a_{n-k-j}$ is not a $k^{th}$-power. This implies that
$u$ is not $k^{th}$-power, and hence $A$ contains no $k^{th}$-power
differences. Since $|U|=r_{k,2}(\mathbb{F}_{q})$, we have that
\[
|A|\geq\left(r_{k,2}(\mathbb{F}_{q})\right)^{\frac{n}{2k}}q^{n\left(1-\frac{1}{k}\right)}.
\]
The result follows for $F(T)=b_{k}T^{k}$ for $b_{k}\neq0$ by multiplying
the elements of $U$ by $b_{k}$ in the construction.
\end{proof}
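The second construction in the proof can be verified exhaustively in a toy case; the sketch below takes $q=5$, $k=2$, $n=4$, and $U=\{(x,2x)\}$ (with $2$ a non-square mod $5$), and checks that no difference of two elements of $A$ is a square polynomial:

```python
q, k, n = 5, 2, 4
U = [(x, 2 * x % q) for x in range(q)]       # 2 is a non-square mod 5
A = [(c0, c1, c2, c3)                        # polynomial c0 + c1*T + c2*T^2 + c3*T^3
     for c1 in range(q) for c3 in range(q)
     for (c0, c2) in U]
assert len(A) == 125                          # = r_{2,2}(F_5)^{n/2k} * q^{n(1 - 1/k)}

# all nonzero squares u^2 with deg(u) <= 1, so deg(u^2) <= 2 < n
squares = {(b * b % q, 2 * a * b % q, a * a % q, 0)
           for a in range(q) for b in range(q)} - {(0, 0, 0, 0)}
diffs = {tuple((x - y) % q for x, y in zip(p1, p2)) for p1 in A for p2 in A}
assert not (diffs & squares)                  # A avoids square differences
```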
Equation (\ref{eq:cohen_lower_bound}), Cohen's clique number lower
bound, yields a nontrivial bound in (\ref{eq:sarkozy_general_poly})
for sets avoiding a general polynomial $F$. Theorem \ref{thm:sarkozy_kth_power_lower}
follows immediately from equation (\ref{eq:sarkozy_general_kth_power})
and Proposition \ref{prop:rk_2_lower_bound}.
\begin{rem}
Theorem \ref{thm:sarkozy_general_paley_lower} is similar to Ruzsa's
lower bound in the integers. Let $n=\log_{m}N$. Ruzsa proved that
there exists $A\subset[1,x]$ that contains no two elements whose
difference is a $k^{th}$ power satisfying
\[
|A|\geq\frac{1}{m}\left(r_{k,1}(\mathbb{Z}/m\mathbb{Z})\right)^{\frac{n}{k}}N^{\left(1-\frac{1}{k}\right)}.
\]
(The quantitative result was improved by Lewko \cite{Lewko2015ImprovedSarkozyLowerBounds}
based on the fact that $\log r_{k,1}(\mathbb{Z}/m\mathbb{Z})/\log m$
is larger for $m=205$ than for $m=65$. See also \cite{Younis2019PolynomialSzemerediLowerBounds}
for an improved bound for a related problem). The bound for $k^{th}$
powers in Theorem \ref{thm:sarkozy_kth_power_lower} is similar, but
utilizes the fact that there is no ``overflow'' when taking powers
of a polynomial in $\mathbb{F}_{q}$ so both the first and last coefficients
play a role instead of only the last coefficient. This results in
a better bound for function fields since we always have
\[
\left(r_{k,2}(\mathbb{F}_{q})\right)^{\frac{1}{2}}\geq r_{k,1}(\mathbb{F}_{q}).
\]
\end{rem}
\subsection{\label{subsec:Limits-of-the-method}Limits of the Method}
Theorem \ref{thm:payley_upper_bound} implies that one must use a
different method to improve the lower bound in Theorem \ref{thm:sarkozy_kth_power_lower}
beyond
\begin{equation}
|A|=q^{n\left(1-\frac{1}{k^{2}}\right)}.\label{eq:best_lower_bound_possible}
\end{equation}
One can ask whether the lower bound in Proposition \ref{prop:rk_2_lower_bound}
is optimal. This turns out not to be the case in general for $k\geq3$,
as described in the comments preceding equation (\ref{eq:sparse_lower_bound}).
When $p=2k+1$ is a prime, $\Paley_{k}(\mathbb{F}_{p})$ is isomorphic
to $C_{2k+1}$, the $(2k+1)$-cycle, and the largest independent set
in $C_{2k+1}\boxtimes C_{2k+1}$ has size $k^{2}+\lfloor\frac{k}{2}\rfloor$
\cite[Theorem 7.1]{Hales1973CycleProductOfCyclesDimension2IndependenceNumber}.
Hence, in this case
\[
r_{k,2}(\mathbb{F}_{p})=k^{2}+\left\lfloor\frac{k}{2}\right\rfloor.
\]
One can easily verify that in this case the construction results
in a lower bound stronger than Theorem \ref{thm:sarkozy_kth_power_lower}
but still weaker than the best possible one from Equation (\ref{eq:best_lower_bound_possible}).
The following example helps illustrate the size of the gap between
the upper and lower bounds with an explicit case.
\begin{example}
\label{exa:specific_c7_example}Consider the specific case of $k=3$
and $q=7$. The most precise upper bound obtained by Green's method,
where we calculate the value of the minimum instead of using a Chernoff
bound, is
\[
2\cdot\left(\min_{0<t<1}\frac{1-t^{7}}{(1-t)t^{6\cdot\frac{4}{9}}}\right)^{n}=2\cdot\left(6.903\dots\right)^{n}.
\]
Theorem \ref{thm:sarkozy_kth_power_lower} gives the lower bound of
$7^{\frac{5}{6}n}=\left(5.061\dots\right)^{n}$. Since $\Paley_{3}(\mathbb{F}_{7})$
is precisely $C_{7}$, the $7$-cycle, using the fact that
\[
\alpha\left(C_{7}\boxtimes C_{7}\right)=10,
\]
Theorem \ref{thm:sarkozy_general_paley_lower} yields the improved
lower bound
\[
\left(10^{\frac{1}{6}}7^{\frac{2}{3}}\right)^{n}=\left(5.371\dots\right)^{n},
\]
which is the limit of the method in this case. There is still a considerable
gap between the upper bound and the best possible lower bound this
method can produce.
\end{example}
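The numerical constants appearing in the example can be reproduced directly (the grid search below is a crude but sufficient way to approximate the minimum):

```python
# value of the minimum in Green's upper bound for k = 3, q = 7
f = lambda t: (1 - t ** 7) / ((1 - t) * t ** (6 * 4 / 9))
m = min(f(i / 10 ** 5) for i in range(1, 10 ** 5))
assert abs(m - 6.903) < 1e-2

# per-coefficient growth rates of the two lower bounds
assert abs(7 ** (5 / 6) - 5.061) < 1e-2
assert abs(10 ** (1 / 6) * 7 ** (2 / 3) - 5.371) < 1e-2
```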
Given the gap between the upper and lower bounds, we may ask which is
closer to the truth. We believe that in the function field setting
the lower bound is close to the truth, and conjecture that it is exact
when $k=2$ (See \cite[Section 1.4]{Rice2019MaximalExtensionFurstenburgSarkozyDiscussion}
for speculation on the integer setting).
\begin{conjecture}
\label{conj:Lower-bound-conjecture}Let $k\geq2$, and suppose that
$\gcd(k,q-1)>1$. For $n\equiv0\ (2k)$, any set $A\subset P_{q,n}$
that does not contain a $k^{th}$ power difference has size at most
\[
|A|\leq q^{n\left(1-\frac{1}{k^{2}}\right)}.
\]
In particular, for $k=2$ and $q$ odd, we conjecture that Theorem
\ref{thm:sarkozy_kth_power_lower} is tight.
\end{conjecture}
\specialsection*{Acknowledgements}
I would like to thank Will Sawin for comments that simplified the
proof of Proposition \ref{prop:rk_2_lower_bound}. I would also like
to thank the anonymous referee for their helpful comments and corrections.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:introduction}
\vspace{-0.1cm}
The enhancement of noisy speech in real acoustic scenarios is a challenging task, especially for low signal-to-noise ratios (SNRs) or non-stationary noises~{\cite{loizou2013speech}}. Recently, the advent of deep neural networks (DNNs) has significantly promoted the development of speech enhancement (SE)~{\cite{wang2018supervised}} and led to giant performance leaps over traditional methods.
For conventional DNN-based methods, sophisticated network structures are often devised in an end-to-end manner to learn the nonlinear mapping relations between input and output pairs~{\cite{tan2019learning, luo2019conv}}. Despite being feasible and efficient, they lack interpretability as the whole network is designed in a black-box manner. As a solution, some more recent works attempt to decompose the original task into multiple sub-steps and provide intermediate supervision as the prior information to boost the subsequent optimization progressively~{\cite{li2020speech, li2021simultaneous, hao2020masking}}. A growing body of results shows that, compared with the single-stage paradigm with blind priors, the introduction of a pre-estimated prior can lead to more accurate target estimation. Nonetheless, it remains unclear how to decompose the mapping task optimally, and current multi-stage strategies seem empirical and intuitive.
Traditional statistical signal processing based SE methods usually derive optimal complex-spectral or spectral magnitude estimators~{\cite{gerkmann2014bayesian, gerkmann2012mmse}} following specific optimization criteria, \emph{e.g.}, maximum likelihood (ML), Bayesian maximum a posteriori (MAP), and minimum mean-square error (MMSE). In detail, when specific prior terms and conditional distribution assumptions are provided, the optimal parameter estimator can be obtained using Bayes' theorem. It is evident that the performance of these model-based methods largely hinges on the model accuracy and the rationality of the prior distribution, which is often manually designed, and can degrade heavily in adverse environments. In contrast, existing learning-based (NN) methods are highly encapsulated: they skip the prior estimation and directly map to the target in a data-driven manner. Therefore, it is attractive and meaningful to investigate the integration of both categories and leverage their respective merits.
In this paper, we propose a model-driven network, termed \textbf{MDNet}, for monaural speech enhancement. Different from previous blind NN-based works, we devise our method following the MAP criterion, which endows each module with better interpretability. Concretely, under the MAP guidance, the enhancement task is converted into a joint posterior estimation problem \emph{w.r.t.} speech and noise. Instead of manually designing the prior distributions as in traditional SE algorithms, we propose to learn the speech/noise priors from training data, which can effectively fit the real parameter distribution. Besides, an unfolding strategy is proposed, where we explicitly predict the prior gradient and update the target in each iterative step. To the best of our knowledge, this is the first work to propose a deep prior gradient method in the speech front-end field, and we expect it to promote the combination of model-based and learning-based methods. We conduct experiments on the WSJ0-SI84 and DNS-Challenge datasets, and quantitative results show that the proposed approach yields competitive performance over current top-performing SE systems.
The rest of the paper is organized as follows. In Section~{\ref{sec:problem-formulation-and-map}}, we formulate the problem and introduce the MAP criterion. In Section~{\ref{sec:proposed-approach}}, the proposed approach is presented. Experimental setup is given in Section~{\ref{sec:experiments-setup}}, and we present the results and analysis in Section~{\ref{sec:results-and-analysis}}. Some conclusions are drawn in Section~{\ref{sec:conclusion}}.
\vspace{-0.60cm}
\section{Problem formulation and MAP}
\label{sec:problem-formulation-and-map}
\vspace{-0.1cm}
In the short-time Fourier transform (STFT) domain, the observed mixture speech can be modeled as
\begin{gather}
\label{eq2}
X_{k, l} = S_{k, l} + N_{k, l},
\end{gather}
where $\left\{X_{k, l}, S_{k, l}, N_{k, l}\right\}$ denote the mixture, clean speech, and noise at frequency index $k\in\left\{1,...,K\right\}$ and time index $l\in\left\{1,...,L\right\}$.
In conventional DNN-based methods, the network serves as the mapping function to estimate the target from input mixture, given as
\begin{gather}
\label{eq3}
\widetilde{\mathbf{S}} = \mathcal{F}\left(\mathbf{X}; \mathbf{\Theta}\right),
\end{gather}
where $\mathcal{F}\left(\cdot;\mathbf{\Theta}\right)$ denotes the network function with parameter set $\mathbf{\Theta}$, and the tilde symbol denotes an estimated variable. However, as the network directly estimates the posterior probability from the mixture, it lacks the prior term, and the performance may suffer from heavy degradation under low SNRs. To resolve this problem, we reformulate the problem under the MAP framework:
\begin{gather}
\label{eq4}
\resizebox{0.99\linewidth}{!}{$
\mathop{\arg\max}_{\mathbf{S}, \mathbf{N}} P\left(\mathbf{S}, \mathbf{N} | \mathbf{X}\right) \propto \mathop{\arg\max}_{\mathbf{S}, \mathbf{N}} P\left(\mathbf{X}| \mathbf{S}, \mathbf{N}\right) P\left(\mathbf{S}\right)P\left(\mathbf{N}\right),
$}
\end{gather}
where $P\left(\mathbf{S}, \mathbf{N}|\mathbf{X}\right)$ denotes the joint posterior probability of $\left\{\mathbf{S}, \mathbf{N}\right\}$, $P\left(\mathbf{X}| \mathbf{S}, \mathbf{N}\right)$ is the conditional probability of $\mathbf{X}$, and $P\left(\mathbf{S}\right)$, $P\left(\mathbf{N}\right)$ denote the prior probability of speech and noise. Eqn.({\ref{eq4}}) holds when speech and noise are assumed to be statistically independent.
In traditional SE methods, speech and noise are often assumed to follow certain probability distributions, among which the complex Gaussian distribution is most widely used~{\cite{ephraim1984speech}}. In contrast, in this study we focus on the reconstruction error $\mathbf{E} = \mathbf{X}-\mathbf{S}-\mathbf{N}$ and assume that it follows a zero-mean multivariate complex Gaussian probability density function (PDF), \emph{i.e.}, $\mathcal{N}_{\mathbb{C}}\left(\mathbf{0}, \mathbf{\Lambda}\right)$. For modeling convenience, we assume $\mathbf{\Lambda}$ is time-invariant; taking the negative logarithm of both sides of Eqn.({\ref{eq4}}), it can be rewritten as
\begin{gather}
\label{eq5}
\mathop{\arg\min}_{\mathbf{S}, \mathbf{N}}\left\|\mathbf{X} - \mathbf{S} - \mathbf{N}\right\|_{F}^{2} + \alpha_{S}\Psi_{S}\left(\mathbf{S}\right) + \alpha_{N}\Psi_{N}\left(\mathbf{N}\right),
\end{gather}
where $\left\{\Psi_{S}\left(\mathbf{S}\right), \Psi_{N}\left(\mathbf{N}\right) \right\}$ are prior terms of speech and noise with distribution parameters $\left\{\alpha_{S}, \alpha_{N}\right\}$. In~{\cite{wang2021compensation}}, the authors revealed the optimization compensation effect between magnitude and phase during the complex spectrum recovery process. To dampen this effect, a collaborative complex spectrum reconstruction method was developed, where the complex spectrum recovery can be decoupled into magnitude filtering with range of $\left(0, 1\right)$ and complex residual mapping~{\cite{li2022glance}}. As such, we rewrite the speech and noise as
\begin{gather}
\label{eq6}
\mathbf{S} = \mathbf{G}_{S}\mathbf{X} + \mathbf{R}_{S},\\
\mathbf{N} = \mathbf{G}_{N}\mathbf{X} + \mathbf{R}_{N},
\end{gather}
where $\mathbf{G}$ and $\mathbf{R}$ denote the real-valued gains and the complex residual components, respectively. If we regard the prior terms of $\left\{\mathbf{S}, \mathbf{N}\right\}$ as the joint priors of $\left\{\mathbf{G},\mathbf{R}\right\}$ and further assume that they are statistically independent, Eqn.({\ref{eq5}}) can be rewritten as
\begin{equation}
\label{eq7}
\begin{split}
\mathop{\arg\min}_{\mathbf{G}_{S}, \mathbf{G}_{N}, \mathbf{R}_{S}, \mathbf{R}_{N}}&\left\|(\mathbf{1}-\mathbf{G}_{S}-\mathbf{G}_{N})\mathbf{X} - \mathbf{R}_{S} - \mathbf{R}_{N}\right\|_{F}^{2} +\\
&\alpha_{G_{S}}\Psi_{G_{S}}\left(\mathbf{G}_{S}\right) + \alpha_{G_{N}}\Psi_{G_{N}}\left(\mathbf{G}_{N}\right) \\
&+\alpha_{R_{S}}\Psi_{R_{S}}\left(\mathbf{R}_{S}\right)+\alpha_{R_{N}}\Psi_{R_{N}}\left(\mathbf{R}_{N}\right).
\end{split}
\end{equation}
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width=1.75\columnwidth]{architecture.pdf}}
\caption{(a) Overall diagram of the proposed framework. (b) Internal structure of the gradient estimator. (c) Internal structure of the residual gradient calculator. (d) Internal structure of the gain gradient calculator. (e) Internal structure of the recalibration encoding layer. (f) Internal operation of the consistency layer. Different modules are indicated with different colors for better visualization.}
\label{fig:architecture}
\vspace{-0.6cm}
\end{figure*}
\vspace{-0.35cm}
\section{Proposed approach}
\label{sec:proposed-approach}
\vspace{-0.1cm}
\subsection{Iterative gradient descent optimization}
\label{sec:iteractive-gradient-descent-optimization}
\vspace{-0.1cm}
To solve the multi-target optimization problem in Eqn.(\ref{eq7}), assume that the prior terms are differentiable; the problem can then be addressed via the gradient descent method (GDM). Specifically, in the $(q+1)$th iteration, we update the above four targets as follows:
\begin{gather}
\label{eq8}
\resizebox{0.99\linewidth}{!}{$
\widetilde{\mathbf{G}}_{S}^{(q+1)} = \widetilde{\mathbf{G}}_{S}^{(q)} - \eta_{G_{S}}\left(\nabla_{G_{S}}\mathbf{T}^{(q)} + \alpha_{G_{S}}\nabla_{G_{S}}\Psi_{G_{S}}\left(\widetilde{\mathbf{G}}_{S}^{(q)}\right)\right),
$}
\end{gather}
\begin{equation}
\label{eq9}
\resizebox{0.99\linewidth}{!}{$
\widetilde{\mathbf{G}}_{N}^{(q+1)} = \widetilde{\mathbf{G}}_{N}^{(q)} - \eta_{G_{N}}\left(\nabla_{G_{N}}\mathbf{T}^{(q)} + \alpha_{G_{N}}\nabla_{G_{N}}\Psi_{G_{N}}\left(\widetilde{\mathbf{G}}_{N}^{(q)}\right)\right),
$}
\end{equation}
\begin{equation}
\label{eq10}
\resizebox{0.99\linewidth}{!}{$
\widetilde{\mathbf{R}}_{S}^{(q+1)} = \widetilde{\mathbf{R}}_{S}^{(q)} - \eta_{R_{S}}\left(\nabla_{R_{S}}\mathbf{T}^{(q)} + \alpha_{R_{S}}\nabla_{R_{S}}\Psi_{R_{S}}\left(\widetilde{\mathbf{R}}_{S}^{(q)}\right)\right),\\
$}
\end{equation}
\begin{equation}
\label{eq11}
\resizebox{0.99\linewidth}{!}{$
\widetilde{\mathbf{R}}_{N}^{(q+1)} = \widetilde{\mathbf{R}}_{N}^{(q)} - \eta_{R_{N}}\left(\nabla_{R_{N}}\mathbf{T}^{(q)} + \alpha_{R_{N}}\nabla_{R_{N}}\Psi_{R_{N}}\left(\widetilde{\mathbf{R}}_{N}^{(q)}\right)\right),
$}
\end{equation}
where $\mathbf{\eta} = \left\{\eta_{G_{S}}, \eta_{G_{N}}, \eta_{R_{S}}, \eta_{R_{N}}\right\}$ denotes the step sizes of the four parameters, and $\mathbf{T}^{(q)}$ denotes the quadratic term in Eq.~(\ref{eq7}). The total iteration number is notated as $Q$. While the gradient of the quadratic term can be easily calculated, it remains unclear how to obtain the gradient representation of the above prior terms. In this regard, we propose to predict the prior gradients with a network, so that they can be directly learned from training data.
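As a purely numerical illustration of these unfolded updates — with the network-predicted prior gradients replaced by a placeholder function, and all shapes, step sizes, and the iteration count chosen arbitrarily for the sketch — one iteration loop could look like:

```python
import numpy as np

K, L, Q, eta = 4, 6, 3, 0.1
rng = np.random.default_rng(0)
X = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))

G_s = np.full((K, L), 0.5); G_n = np.full((K, L), 0.5)   # real-valued gains
R_s = np.zeros((K, L), complex); R_n = np.zeros((K, L), complex)

def prior_grad(param):      # placeholder for the network-predicted prior gradient
    return 0.1 * param

for _ in range(Q):          # Q unfolded gradient-descent iterations
    E = (1 - G_s - G_n) * X - R_s - R_n        # reconstruction error
    gq = -2 * np.real(np.conj(X) * E)           # grad of ||E||_F^2 w.r.t. a real gain
    G_s = G_s - eta * (gq + prior_grad(G_s))
    G_n = G_n - eta * (gq + prior_grad(G_n))
    R_s = R_s - eta * (-2 * E + prior_grad(R_s))
    R_n = R_n - eta * (-2 * E + prior_grad(R_n))

S_hat = G_s * X + R_s       # reconstructed speech component
N_hat = G_n * X + R_n       # reconstructed noise component
```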
After gradient descent, according to Eqns.(\ref{eq6}) and (6), we can reconstruct the speech and noise components as
\begin{gather}
\label{eq12}
\widetilde{\mathbf{S}}^{(q+1)} = \widetilde{\mathbf{G}}_{S}^{(q+1)}\mathbf{X} + \widetilde{\mathbf{R}}_{S}^{(q+1)}, \\
\widetilde{\mathbf{N}}^{(q+1)} = \widetilde{\mathbf{G}}_{N}^{(q+1)}\mathbf{X} + \widetilde{\mathbf{R}}_{N}^{(q+1)}.
\end{gather}
\vspace{-0.7cm}
\subsection{Proposed model-driven framework}
\label{sec:proposed-model-driven-framework}
\vspace{-0.1cm}
\subsubsection{Forward stream}
\label{sec:forward-stream}\vspace{-0.1cm}
To enable the gradient update iteratively, we devise an unfolding-style framework, whose overall diagram is shown in Fig.~{\ref{fig:architecture}}(a). It has four major parts, namely the feature extractor, the gradient estimator, the target update, and the consistency layer. Given the input noisy complex spectrum $\mathbf{X}\in\mathbb{R}^{2\times K\times L}$, it first passes through multiple 2D convolution (2D-Conv) layers with consecutive frequency downsampling operations to extract the spectral features, say, $\mathbf{F}$. In the $q$th GDM iteration, given the input $\left\{\mathbf{F}, \widehat{\mathbf{S}}^{(q-1)}, \widehat{\mathbf{N}}^{(q-1)} \right\}$, the gradient estimator predicts the prior gradients \emph{w.r.t.} the four parameters in Eqn.({\ref{eq7}}) and implements the gradient descent update; the updated parameters are utilized to reconstruct the speech and noise targets simultaneously. The updated targets are further modified as the inputs of the next iteration via the consistency layer~{\cite{wisdom2019differentiable}}, whose calculation process is detailed in Fig.~{\ref{fig:architecture}}(f). For practical implementation, we unfold the process $Q$ times, and the whole forward stream is formulated as
\begin{gather}
\label{eq13}
\mathbf{F} = \text{Encoder}\left(\mathbf{X}\right),\\
\mathbf{\Omega}^{(q)} = \text{GradientUpdate}\left(\mathbf{F}, \mathbf{\Omega}^{(q-1)}\right),\\
\left\{\widetilde{\mathbf{S}}^{(q)}, \widetilde{\mathbf{N}}^{(q)}\right\} = \text{TargetUpdate}\left(\mathbf{\Omega}^{(q)}\right),\\
\left\{\widehat{\mathbf{S}}^{(q)}, \widehat{\mathbf{N}}^{(q)}\right\} = \text{ConsistencyLayer}\left\{\widetilde{\mathbf{S}}^{(q)}, \widetilde{\mathbf{N}}^{(q)}\right\},
\end{gather}
where $\mathbf{\Omega}^{(q)}$ is the set of the above four parameters and $q\in\left\{1,...,Q\right\}$. Note that as a decent parameter initialization is indispensable for the later gradient updates, we adopt a network with the same structure as the gradient estimator to generate the initial parameter estimates, \emph{i.e.}, $\widetilde{\mathbf{G}}_{S}^{(0)},\widetilde{\mathbf{G}}_{N}^{(0)}, \widetilde{\mathbf{R}}_{S}^{(0)}, \widetilde{\mathbf{R}}_{N}^{(0)}$.
\vspace{-0.2cm}
\subsubsection{Feature extractor}
\label{sec:feature-extractor}\vspace{-0.1cm}
As in~{\cite{li2022glance}}, the feature extractor utilizes recalibration encoding layers (RELs) to gradually downsample the feature map and abstract the features. The internal structure of each REL is shown in Fig.~{\ref{fig:architecture}}(e). After the 2D-GLU~{\cite{dauphin2017language}}, a UNet-block with a residual connection follows~{\cite{qin2020u2}}; akin to a UNet, it takes the current feature map as the input and further encodes the feature. Compared with a vanilla 2D-Conv, the UNet-block can explore features at multiple scales. Besides, the introduction of the residual connection can effectively recalibrate the feature distribution and preserve the spectral patterns.
\vspace{-0.3cm}
\subsubsection{Gradient estimator}
\label{sec:gradient-estimattor}
\vspace{-0.1cm}
The internal structure of the proposed gradient estimator (GE) is presented in Fig.~{\ref{fig:architecture}}(b). As stated above, the input includes the extracted feature $\mathbf{F}$, the modified speech $\widehat{\mathbf{S}}^{(q)}$, and noise $\widehat{\mathbf{N}}^{(q)}$ after the consistency layer. As Fig.~{\ref{fig:architecture}}(b) shows, three gradient calculators are adopted, where the gain gradient calculator (GGC) derives the gain gradients of both speech and noise, and the other two complex residual gradient calculators (RGCs) predict the gradients of the speech and noise residuals, respectively. Note that we share the GGC between speech and noise, as the gain function actually serves as the speech presence probability (SPP) in traditional SE algorithms, and the gains of speech and noise are complementary from a statistical perspective~{\cite{yoshioka2015ntt}}.
The internal structure of RGC is presented in Fig.~{\ref{fig:architecture}}(c). Taking the speech branch as an example, we first flatten the complex spectrum as $\widehat{\mathbf{S}}^{(q)}\in\mathbb{R}^{2K\times L}$ and then concatenate it with $\mathbf{F}$ as the network input. The network first compresses the feature with a 1D-GLU and then models the gradient distribution with stacked temporal convolution networks (TCNs)~{\cite{bai2018empirical}}. To alleviate the parameter burden, we adopt the simplified version, termed S-TCNs~{\cite{zhang2020deepmmse}}, which can dramatically decrease the parameters. The prior gradients of the complex residual are obtained with two linear layers, namely for the real and imaginary (RI) parts. The GGC has a similar structure except that the input is the concatenation of $\mathbf{F}$ and the magnitudes of speech and noise, say, $\text{Concat}\left( \mathbf{F}, \widehat{\mathbf{S}}^{(q)}, \widehat{\mathbf{N}}^{(q)}\right)$. The gain gradients \emph{w.r.t.} speech and noise are obtained via two linear layers.
\vspace{-0.3cm}
\subsubsection{Target fusion}
\label{sec:target-fusion}
\vspace{-0.1cm}
After repeated target updates, we obtain the estimates of speech and noise, \emph{i.e.}, $\widetilde{\mathbf{S}}^{(Q)}$ and $\widetilde{\mathbf{N}}^{(Q)}$. Then another problem arises: \emph{how can we fuse the estimated speech and noise components to obtain the final speech estimate?} A common strategy is to apply a network to estimate time-frequency (T-F) bin-level weights and fuse both components dynamically~{\cite{zheng2021interactive}}, \emph{i.e.}, $\mathbf{M}\widetilde{\mathbf{S}}^{(Q)} + \left(\mathbf{1} - \mathbf{M}\right)\left(\mathbf{X}- \widetilde{\mathbf{N}}^{(Q)}\right)$. However, this is a linear combination within each T-F bin. Motivated by~{\cite{li2020two}}, we propose a residual recalibration module to better fuse the speech and noise parts. Concretely, given the original noisy spectrum and the estimated speech and noise as the input, a network is employed to estimate the residual structure, which is then added to the estimated speech:
\begin{equation}
\label{eq14}
\widetilde{\mathbf{S}}^{(Q)'} \leftarrow \widetilde{\mathbf{S}}^{(Q)} + \text{FuseNet}\left(\mathbf{X}, \widetilde{\mathbf{S}}^{(Q)}, \widetilde{\mathbf{N}}^{(Q)}\right).
\end{equation}
Different from the linear combination, it works via residual learning and yields a nonlinear output, which can better leverage the complementarity between speech and noise.
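To make the contrast concrete, the sketch below compares the bin-wise linear combination with the residual fusion above on a single toy T-F bin; `toy_net` is only an arbitrary nonlinear stand-in for the learned FuseNet, not the actual network.

```python
# Contrast between bin-wise linear fusion and the residual fusion of
# Eq. (14), on a single toy T-F bin. toy_net is an arbitrary nonlinear
# stand-in for the learned FuseNet (an assumption for illustration).
import math

def linear_fusion(m, s, x, n):
    # M * S + (1 - M) * (X - N): linear within each T-F bin
    return m * s + (1.0 - m) * (x - n)

def residual_fusion(s, x, n, fuse_net):
    # S' = S + FuseNet(X, S, N): learned nonlinear residual correction
    return s + fuse_net(x, s, n)

x, s, n = 1.0, 0.7, 0.25                  # toy noisy, speech, noise values
toy_net = lambda x, s, n: math.tanh(x - s - n)
lin = linear_fusion(0.5, s, x, n)          # 0.725
res = residual_fusion(s, x, n, toy_net)    # 0.7 + tanh(0.05)
```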
\vspace{-0.35cm}
\subsubsection{Loss function}
\label{sec:losss-function}
\vspace{-0.1cm}
As the unfolding structure is utilized in the forward stream, we adopt the weighted loss for network training, given as
\begin{gather}
\label{eq15}
\mathcal{L} = \sum_{q=0}^{Q}\gamma_{q}\mathcal{L}_{q} + \zeta\mathcal{L}_{Q}^{(f)},
\end{gather}
where $\gamma_{q}$ and $\zeta$ are the weighting coefficients, and $\mathcal{L}_{Q}^{(f)}$ denotes the loss in the target fusion stage. Here $\gamma_{q}$ and $\zeta$ are empirically set to 0.1 and 1, respectively. For $\mathcal{L}_{q}$ and $\mathcal{L}_{Q}^{(f)}$ we have
\begin{equation}
\label{eq16}
\mathcal{L}_{q} =\frac{1}{2}\left( \mathcal{L}\left(\widetilde{\mathbf{S}}^{(q)\beta}, \mathbf{S}^{\beta}\right)+\mathcal{L}\left(\widetilde{\mathbf{N}}^{(q)\beta},\mathbf{N}^{\beta}\right)\right),
\end{equation}
\begin{equation}
\label{eq17}
\mathcal{L}_{Q}^{(f)} = \mathcal{L}\left(\widetilde{\mathbf{S}}^{(Q)'\beta}, \mathbf{S}^{\beta}\right),
\end{equation}
where the MSE criterion is adopted for network training and $\beta$ is the power-compression coefficient, empirically set to 0.5~{\cite{li2021importance}}. Besides, the RI loss with a magnitude constraint is adopted, which can mitigate the compensation effect in complex spectrum recovery~{\cite{li2020two}}.
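A minimal numerical sketch of the weighted loss above: scalar magnitudes stand in for full T-F spectra, so the MSE reduces to a squared difference, with $\beta=0.5$, $\gamma_q=0.1$ and $\zeta=1$ as stated in the text.

```python
# Sketch of the weighted training loss of Eqs. (15)-(17). Scalar
# magnitudes stand in for full T-F spectra (an assumption); beta,
# gamma, zeta follow the values given in the text.

BETA, GAMMA, ZETA = 0.5, 0.1, 1.0

def mse(a, b):
    return (a - b) ** 2

def compress(v):
    return v ** BETA  # power compression of magnitudes

def stage_loss(s_est, s_ref, n_est, n_ref):
    # L_q: average of compressed-speech and compressed-noise MSE
    return 0.5 * (mse(compress(s_est), compress(s_ref))
                  + mse(compress(n_est), compress(n_ref)))

def total_loss(stages, fused_est, s_ref):
    # L = sum_q gamma_q * L_q + zeta * L_Q^(f)
    fuse = mse(compress(fused_est), compress(s_ref))
    return GAMMA * sum(stage_loss(*st) for st in stages) + ZETA * fuse

loss = total_loss([(4.0, 1.0, 1.0, 4.0)], 1.0, 1.0)  # = 0.1
```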
\vspace{-0.2cm}
\renewcommand\arraystretch{0.85}
\begin{table}[t]
\caption{Ablation study on the proposed MDNet. The values are specified in PESQ/ESTOI(\%)/SISNR(dB) format. \textbf{BOLD} indicates the best score in each case. All the values are averaged over different SNRs and noises in the test set.}
\Large
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|ccccccc}
\specialrule{0.1em}{0.25pt}{0.25pt}
\multirow{2}*{Entry} &Param. &MACs &\multirow{2}*{$Q$} &Fusion &\multirow{2}*{PESQ$\uparrow$} &\multirow{2}*{ESTOI(\%)$\uparrow$} &\multirow{2}*{SISNR(dB)$\uparrow$}\\
&(M) &(G/s) & &type & & &\\
\specialrule{0.1em}{0.25pt}{0.25pt}
1a &\textbf{3.00} &\textbf{2.16} &0 &R &2.71 &74.05 &10.00\\
1b &4.79 &2.34 &1 &R &2.75 &74.95 &10.50\\
1c &6.57 &2.52 &2 &R &2.76 &75.44 &10.66\\
1d &8.36 &2.70 &3 &R &2.79 &76.03 &10.81\\
1e &10.15 &2.88 &4 &R &2.79 &76.06 &10.78\\
1f &11.93 &3.06 &5 &R &\textbf{2.82} &\textbf{76.55} &\textbf{10.93}\\
1g &13.72 &3.24 &6 &R &2.79 &76.21 &10.84\\
\specialrule{0.1em}{0.25pt}{0.25pt}
2a &7.85 &2.38 &3 &A &2.73 &74.41 &10.38\\
2b &8.33 &2.68 &3 &G &2.77 &75.30 &10.59\\
\specialrule{0.1em}{0.25pt}{0.25pt}
\end{tabular}}
\label{tbl:ablation-studies}
\vspace{-0.6cm}
\end{table}
\renewcommand\arraystretch{0.75}
\begin{table*}[t]
\setcounter{table}{2}
\caption{Quantitative comparisons with other state-of-the-art systems on the DNS Challenge dataset. ``-'' denotes no published result.}
\normalsize
\centering
\scalebox{0.85}{
\begin{tabular}{cccccccccccc}
\specialrule{0.1em}{0.25pt}{0.25pt}
\multirow{2}*{Methods} &\multirow{2}*{Year} &\multirow{2}*{Do.} &\multicolumn{4}{c}{w/ Reverberation} & \multicolumn{4}{c}{w/o Reverberation}\\
\cmidrule(lr){4-7}\cmidrule(lr){8-11}
& & &WB-PESQ$\uparrow$ &PESQ$\uparrow$ &STOI(\%)$\uparrow$ &SISNR (dB)$\uparrow$ &WB-PESQ$\uparrow$ &PESQ$\uparrow$ &STOI(\%)$\uparrow$ &SISNR(dB)$\uparrow$\\
\specialrule{0.1em}{0.25pt}{0.25pt}
Noisy &- &- &1.82 &2.75 &86.62 &9.03 &1.58 &2.45 &91.52 &9.07\\
NSNet~{\cite{reddy2020interspeech}} &2020 &T-F &2.37 &3.08 &90.43 &14.72 &2.15 &2.87 &94.47 &15.61\\
DTLN~{\cite{westhausen2020dual}} &2020 &T-F &- &2.70 &84.68 &10.53 &- &3.04 &94.76 &16.34\\
DCCRN~{\cite{hu2020dccrn}} &2020 &T-F &- &3.32 &- &- &- &3.27 &- &- \\
FullSubNet~{\cite{hao2021fullsubnet}} &2021 &T-F &2.97 &3.47 &92.62 &15.75 &2.78 &3.31 &96.11 &17.29\\
TRU-Net~{\cite{choi2021real}} &2021 &T-F &2.74 &3.35 &91.29 &14.87 &2.86 &3.36 &96.32 &17.55\\
CTS-Net~{\cite{li2020two}} &2021 &T-F &3.02 &3.47 &92.70 &15.58 &2.94 &3.42 &96.66 &17.99\\
GaGNet~{\cite{li2022glance}} &2022 &T-F &3.18 &3.57 &93.22 &16.57 &3.17 &\textbf{3.56} &97.13 &18.91 \\
\textbf{MDNet(Ours)} &2022 &T-F &\textbf{3.24} &\textbf{3.59} &\textbf{93.61} &\textbf{16.94} &\textbf{3.18} &\textbf{3.56} &\textbf{97.20} &\textbf{19.17}\\
\specialrule{0.1em}{0.25pt}{0.25pt}
\end{tabular}}
\label{tbl:dns1}
\vspace{-0.5cm}
\end{table*}
\vspace{-0.2cm}
\section{Experimental setup}
\label{sec:experiments-setup}
\vspace{-0.2cm}
Two datasets are adopted to carry out the experiments. WSJ0-SI84~{\cite{paul1992design}} consists of 7138 utterances by 83 speakers (42 males and 41 females). 5428 and 957 utterances are selected for training and validation, and 150 utterances spoken by unseen speakers are used for testing. Around 20,000 noise types are randomly selected from the DNS-Challenge noise set as the training noise set and mixed with clean utterances under SNRs in $[-5\,\rm{dB}, 0\,\rm{dB}]$ with a $1\,\rm{dB}$ interval. For testing, three challenging unseen noises are chosen, namely babble and factory1 from NOISEX92~{\cite{varga1993assessment}} and cafeteria from CHiME-3~{\cite{barker2015third}}, under three SNRs, \emph{i.e.}, $\left\{-3, 0, 3\right\}\,\rm{dB}$. In total, we create 150,000 and 10,000 noisy-clean pairs for training and validation, respectively. For testing, 150 pairs are created for each SNR. The Interspeech 2020 DNS-Challenge\footnote{https://github.com/microsoft/DNS-Challenge} provides 562.72 hours of clips from 2150 speakers and 181 hours of 60,000 noise clips from 150 classes. For model evaluation, it provides a non-blind validation set with two categories, namely with and without reverberation, each including 150 noisy-clean pairs. Following the script given by the organizer, we create around 3000 hours of pairs for training, with input SNRs ranging from $-5\,\rm{dB}$ to $15\,\rm{dB}$.
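One standard way to realize such mixing is to rescale the noise so that the clean-to-noise power ratio matches the target SNR; the recipe below is a common convention, not necessarily the exact script used by the organizers.

```python
# Common recipe for mixing clean speech and noise at a target SNR:
# rescale the noise so that 10*log10(P_clean / P_noise) = snr_db.
# This is a standard convention, not the organizers' exact script.
import math

def scale_noise(clean, noise, snr_db):
    p_clean = sum(x * x for x in clean) / len(clean)
    p_noise = sum(x * x for x in noise) / len(noise)
    gain = math.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return [gain * x for x in noise]

clean = [0.5, -0.5, 0.5, -0.5]
noise = [1.0, -1.0, 1.0, -1.0]
scaled = scale_noise(clean, noise, 0.0)          # 0 dB: equal powers
mixture = [c + v for c, v in zip(clean, scaled)]  # noisy utterance
```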
In the feature extractor, the kernel size and stride of the 2D-Convs are $\left(1, 3\right)$ and $\left(1, 2\right)$ along the time and frequency axes, respectively. For each UNet-block, the kernel size is $\left(2, 3\right)$. The channel number of the 2D-Convs remains 64 by default. Denoting the number of encoder (decoder) layers in the $i$th UNet-block by $U_{i}$, we have $U = \left\{4,3,2,1,0\right\}$, where $0$ means no UNet-block is used. Two groups of S-TCNs are employed, each of which includes four temporal convolution modules (TCMs), with the kernel size of the dilated Convs and the dilation rates being 3 and $\left\{1,2,5,9\right\}$, respectively. Note that we use causal convolution operations by zero-padding along the past frames. The optimization step $\mathbf{\eta}$ is set as trainable and initialized at 0.01.
All the utterances are sampled at 16 kHz. A 20 ms Hanning window is utilized with 50\% overlap between adjacent frames. A 320-point FFT is utilized, leading to 161-D features along the frequency axis. The model is trained on the Pytorch platform with an NVIDIA V100 GPU. The Adam optimizer is adopted for network training ($\beta_{1}=0.9$, $\beta_{2}=0.999$) and the total number of epochs is 60 with a batch size of 8. The learning rate is initialized at 5e-4 and we halve it if the validation loss does not decrease for two consecutive epochs. For fusing the speech and noise components, we adopt the same ``Encoder-TCN-Decoder'' structure as~{\cite{li2020two}}, except that we adopt a lightweight version by halving the channel number in the 2D-Convs.
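The framing numbers above are mutually consistent, as a quick arithmetic check shows:

```python
# Quick check of the framing arithmetic: at 16 kHz, a 20 ms window is
# 320 samples; 50% overlap gives a 160-sample hop; a 320-point FFT of
# a real frame yields 320 // 2 + 1 = 161 frequency bins.
SR = 16000
win = int(0.020 * SR)        # 320 samples per frame
hop = win // 2               # 160 samples (50% overlap)
n_fft = 320
n_bins = n_fft // 2 + 1      # 161-D feature along frequency
assert (win, hop, n_bins) == (320, 160, 161)
```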
\renewcommand\arraystretch{0.85}
\begin{table}[t]
\setcounter{table}{1}
\caption{Quantitative comparisons with other SOTA systems on the WSJ0-SI84 dataset. Scores are averaged over different testing cases. ``Do.'' denotes the transform domain of the method.}
\Large
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccccccc}
\specialrule{0.1em}{0.25pt}{0.25pt}
\multirow{2}*{Methods} &\multirow{2}*{Year} &\multirow{2}*{Do.} &Param. &MACs &\multirow{2}*{PESQ$\uparrow$} &ESTOI$\uparrow$ &SISNR$\uparrow$\\
& & &(M) &(G/s) & &(\%) &(dB) \\
\specialrule{0.1em}{0.25pt}{0.25pt}
Noisy &- &- &- &- &1.82 &41.97 &0.00\\
DDAEC~{\cite{pandey2020densely}} &2020 &T &4.82 &36.56 &2.76 &74.84 &10.85 \\
DEMUCAS~{\cite{defossez2020real}} &2020 &T &18.87 &4.35 &2.67 &76.23 &11.08\\
GCRN~{\cite{tan2019learning}} &2020 &T-F &9.77 &2.42 &2.48 &70.68 &9.21 \\
DCCRN~{\cite{hu2020dccrn}} &2020 &T-F&\textbf{3.67} &11.13 &2.54 &70.58 &9.47 \\
PHASEN~{\cite{yin2020phasen}} &2020 &T-F&8.76 &6.12 &2.73 &71.77 &9.38\\
FullSubNet~{\cite{hao2021fullsubnet}} &2021 &T-F &5.64 &31.35 &2.55 &65.89 &9.16\\
CTSNet~{\cite{li2020two}} &2021 &T-F &4.35 &5.57 &2.86 &76.15 &10.92 \\
GaGNet~{\cite{li2022glance}} &2022 &T-F &5.94 &\textbf{1.63} &2.86 &76.87 &10.93\\
\textbf{MDNet(Ours)} &2022 &T-F &8.36 &2.70 &\textbf{2.88} &\textbf{77.37} &\textbf{11.12} \\
\specialrule{0.1em}{0.25pt}{0.25pt}
\end{tabular}}
\label{tbl:wsj0-si84-result}
\vspace{-0.6cm}
\end{table}
\renewcommand\arraystretch{0.85}
\vspace{-0.4cm}
\section{Results and analysis}
\label{sec:results-and-analysis}
\vspace{-0.2cm}
\subsection{Ablation study}
\label{sec:ablation-study}
\vspace{-0.2cm}
Around 100 hours of training data from the WSJ0-SI84 corpus are used to conduct the ablation study in terms of the step number $Q$ and the fusion type. Three evaluation metrics are utilized, namely PESQ~{\cite{rix2001perceptual}}, ESTOI~{\cite{jensen2016algorithm}}, and SISNR~{\cite{le2019sdr}}. Higher values indicate better performance. Quantitative results are shown in Table~{\ref{tbl:ablation-studies}} and several observations can be made. First, with the increase of steps, one can observe notable performance improvements over the non-update case (entry 1a), which shows that by gradient updating, we can obtain a more accurate parameter estimation, leading to better reconstruction results. However, with more steps, the performance tends to saturate and even degrade, \emph{e.g.}, from 1f to 1g. This can be explained as follows: the estimation with the GDM tends to converge, and the GDM cannot guarantee consistent improvement with an increasing number of steps for this non-convex optimization problem. Second, we compare three fusion strategies, namely ``R'' (the proposed fusion method), ``G'' (dynamic weighting), and ``A'' (average), and one can see that our method yields the best performance among these fusion schemes. It reveals the superiority of the proposed nonlinear residual fusion over dynamic weighting. Note that the average operation is a special case of dynamic weighting, where the weighting coefficients remain 0.5 for all T-F bins. However, as the local SNR varies a lot across frequency bands, it is not reasonable to combine both parts with fixed weights, as evidenced by the notable performance degradation from entry 2b to 2a.
\vspace{-0.4cm}
\subsection{Comparison with state-of-the-art methods}
\label{sec:comparison-with-sota-methods}
\vspace{-0.2cm}
Based on the analysis in the ablation study, we choose entry 1d as the default network configuration, which balances well between computational complexity and performance, to compare with current top-performing SE systems. Quantitative results on the WSJ0-SI84 dataset are presented in Table~{\ref{tbl:wsj0-si84-result}}. Compared with the other eight baselines, the proposed approach yields the highest scores on all three objective metrics, validating the superiority of our method in speech quality and intelligibility. Note that although our method has around 8.36\,M trainable parameters, it is rather advantageous in MACs, \emph{i.e.}, 2.70\,G/s, which shows that parameter count and computational complexity need not go hand in hand.
We also report the quantitative results on the DNS-Challenge non-blind test set, as shown in Table~{\ref{tbl:dns1}}. The wide-band version of PESQ (WB-PESQ)~{\cite{itu862}} and STOI~{\cite{taal2010short}} are also listed for evaluation. From the results, one can see that, again, our method achieves the highest scores on the different metrics over previous top-performing systems, which further attests to the superiority of our method in both reverberant and anechoic environments. Note that, different from previous literature where the mapping process lacks adequate interpretability, our method follows the MAP criterion and is explicitly optimized with the gradient descent method. We therefore think it is a promising direction to leverage the advantages of model-based methods to gradually open the black box of DNNs in the speech enhancement area.
\vspace{-0.35cm}
\section{Conclusions}
\label{sec:conclusion}
\vspace{-0.2cm}
In this paper, we propose a model-driven approach to tackle single-channel speech enhancement. Specifically, based on the maximum a posteriori criterion, the original problem is formulated as the joint posterior estimation of speech and noise, and the prior distributions are learned by networks from training data. The framework is devised with an unfolding structure, and the gradient descent method is employed to update the parameter estimates and facilitate the target reconstruction progressively. Besides, another network serves as a fusion module to further recover the speech component from the previous estimates. Experimental results on the WSJ0-SI84 and DNS-Challenge datasets show that the proposed approach performs favorably against previous top-performing SE systems.
\vfill\pagebreak
\bibliographystyle{IEEEtran}
\section*{Introduction}
We study a computing system that has memory locations containing some data and a set of operations (instructions, machine commands) that change the states of the memory.
Some instructions can be executed concurrently. The state of the system at the initial time is known, and a runtime is defined for each operation.
A sequential process is a sequence of instructions. Our first task is to find an algorithm that parallelizes such a process, i.e. that determines, for each point in time, which instructions are to be executed. The second task is to determine the minimum time needed to reach a given memory state from the initial state.
\section{ Basic notations and definitions}
An {\em asynchronous system} \cite{1} is a quintuple
$A = (S, s_0, E, I ,Tran)$
consisting of the set $S$ of {\em states}, {\em initial state} $s_0\in S$, set of the {\em instructions} $E$, subset
$Tran\subseteq S\times E\times S$ of {\em transitions},
and the irreflexive symmetric
{\em independence} relation $I\subseteq E\times E$, which satisfy the following conditions:
1. If $(s,a,s')\in Tran ~\&~ (s,a, s'')\in Tran$ then $s'=s'' $.
2. For all $s\in S$, if $(a,b)\in I ~\&~ (s,a,s')\in Tran ~\&~ (s',b,s'')\in Tran$, then
there is $s_1\in S$ such that $(s,b,s_1)\in Tran ~\&~ (s_1,a,s'')\in Tran$.
In particular, each Petri net can be considered as an asynchronous system whose states are markings and whose instructions are transitions. The independence relation consists of the pairs of transitions that have no common places.
Let $E$ be a set and let $I\subseteq E\times E$ be an irreflexive symmetric relation. The elements $a,b\in E$ are {\em independent} if $(a,b)\in I$. On the monoid of words $E^*$, we define the equivalence relation that identifies two words whenever one can be obtained from the other by a series of permutations of adjacent independent letters.
A {\em trace} is the equivalence class $[w]$ of a word $w \in E^*$.
It is easy to see that the operation on traces defined by the rule $[w_1][w_2]=[w_1w_2]$ turns the set of equivalence classes into a monoid. This monoid is denoted by $M(E,I)$ and is called the {\em trace monoid} or the {\em free partially commutative monoid}.
Traces $[w_1]$, $[w_2] \in M(E,I)$ are called {\em parallel} if for any letter $a_1$ of the word $w_1$ and any letter $a_2$ of $w_2$ we have $(a_1,a_2) \in I$. It is known \cite{2} that any asynchronous system $A = (S, s_0, E, I ,Tran)$ can be defined as a set $S$ with a partial right action of the monoid $M(E, I)$. The action is given by $s\cdot a = s'$ if $(s,a,s') \in Tran$. The action $s\cdot a$ is undefined if there is no $s'$ satisfying $(s,a,s')\in Tran$.
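Trace equivalence can be tested without enumerating permutations: by a classical characterization of Mazurkiewicz traces, two words represent the same trace iff they have equal letter counts and equal projections onto every pair of distinct dependent letters. A small sketch (the alphabet and independence relation below are toy choices):

```python
# Check whether two words represent the same trace in M(E, I), using
# the classical characterization: equal letter counts, and equal
# projections onto every pair of distinct dependent letters.
from collections import Counter

def trace_equivalent(u, v, alphabet, independent):
    if Counter(u) != Counter(v):
        return False
    letters = sorted(alphabet)
    for i, a in enumerate(letters):
        for b in letters[i + 1:]:
            if (a, b) in independent:
                continue  # independent letters may be freely swapped
            pu = [x for x in u if x in (a, b)]
            pv = [x for x in v if x in (a, b)]
            if pu != pv:
                return False
    return True

I = {("a", "c"), ("c", "a")}          # only a and c are independent
assert trace_equivalent("ac", "ca", "abc", I)       # [ac] = [ca]
assert not trace_equivalent("ab", "ba", "abc", I)   # a, b dependent
```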
This allows us to consider a morphism of asynchronous systems as a morphism of the corresponding sets with a partial trace monoid action.
\begin{definition}
A homomorphism of asynchronous systems $(\sigma,f): A\to A'$ is a pair consisting of a map $\sigma: S\to S'$ and a homomorphism of monoids $f: M(E,I)\to M(E',I')$
satisfying the conditions
1. $f$ maps parallel traces to parallel traces;
2. $\sigma (s_0)=s_0'$;
3. $\sigma (s\cdot a)= \sigma (s)\cdot f(a)$ whenever the action $s\cdot a$ is defined.
\end{definition}
Let $A = (S, s_0, E, I, Tran)$ be an asynchronous system.
A {\em function of time} on $A$ is an arbitrary function $\tau: E\to N$ taking values in the set of integers $N = \{0, 1, 2, ...\}$.
Triples $(s,e,s')\in Tran$ are denoted by arrows $s\stackrel{e}\to s'$.
Every sequence of instructions
$$
s\stackrel{e_1}\to s_1\stackrel{e_2}\to s_2\to ...\to s_{n-1}\stackrel{e_n}\to s_n=s'
$$
consisting of triples belonging to $Tran$ we call a {\em process} or a {\em path} connecting the states $s$ and $s'$. In this case, the action of the monoid $M(E,I)$
on $S$ maps the pair $(s, [e_1...e_n])$ to the element $s' \in S$.
\section{The minimum trace runtime}
If the execution times of all instructions are the same and equal to 1,
then the minimum execution time of a trace
equals the height of its Foata normal form \cite{3}. In general, if a time $\tau(e)\in N$ corresponds to an instruction $e\in E$, then we
decompose each instruction into a composition of small pairwise dependent instructions
whose execution times equal 1, and apply the algorithm constructing the Foata normal form to the resulting trace.
Each small instruction can be labelled by the instruction in whose decomposition it participates, so that the instruction $e$ decomposes as $e^{\tau(e)}$. We also have to introduce intermediate states. For this purpose we introduce a new asynchronous system associated with the function of time.
Let $A$ be an asynchronous system with a time function $\tau: E\to N$. We define a total order
on the set $E$ and consider the asynchronous system
$A_\tau = (S_\tau, s_0, E, I, Tran_\tau)$ defined as follows. Its set of states is
\begin{multline*}
S_\tau=\{(s,a_{1}^{i_1},a_{2}^{i_2}...a_{m}^{i_m})~| \\
s\in S,~
s\cdot a_1 a_2\ldots a_m\in S,~a_1<a_2<...<a_m,~
(a_i,a_j)\in I,\\
~for ~all ~ 1\leq i<j\leq m, ~1\leq i_1<\tau(a_1), \ldots,~1\leq i_m<\tau(a_m)\}.
\end{multline*}
For technical reasons, it will be convenient to consider the states
$(s,a_{1}^{i_1}a_{2}^{i_2}...a_{m}^{i_m})$ where for some
$q\in \{1, 2, ..., m\}$ we have $i_q = 0$ or $i_q = \tau (a_q)$.
They will be identified with the elements of $S_\tau$ using formulas
\begin{gather*}
(s,a_{1}^{i_1}a_{2}^{i_2}...a_{q-1}^{i_{q-1}}a_{q}^{0}a_{q+1}^{i_{q+1}}...a_{m}^{i_m})=
(s,a_{1}^{i_1}a_{2}^{i_2}...a_{q-1}^{i_{q-1}}a_{q+1}^{i_{q+1}}...a_{m}^{i_m})\\
(s,a_{1}^{i_1}a_{2}^{i_2}...a_{q-1}^{i_{q-1}}a_{q}^{\tau(a_q)}a_{q+1}^{i_{q+1}}...a_{m}^{i_m})=
(s\cdot a_q,a_{1}^{i_1}a_{2}^{i_2}...a_{q-1}^{i_{q-1}}a_{q+1}^{i_{q+1}}...a_{m}^{i_m}).
\end{gather*}
We define a partial action of the monoid $M(E,I)$ on $S_\tau$ by setting
$$(s,a_{1}^{i_1}a_{2}^{i_2}...a_{m}^{i_m})\cdot a =
(s,a_{1}^{i_1}a_{2}^{i_2}...a_{q-1}^{i_{q-1}}a_{q}^{i_{q}+1}a_{q+1}^{i_{q+1}}...a_{m}^{i_m})$$
if $a=a_q$ for some $q\in \{1,2,...,m\}$. If $(a,a_r)\in I$ for all $r\in \{1,2,...,m\}$,
then we insert the element $a\in E$ into the sequence so that the inequalities $a_1<a_2<...<a_{q-1}<a<a_q<...<a_m$ hold for some $q$, and let $(s,a_{1}^{i_1}a_{2}^{i_2}...a_{m}^{i_m})\cdot a =
(s,a_{1}^{i_1}a_{2}^{i_2}...a_{q-1}^{i_{q-1}}aa_{q}^{i_{q}}...a_{m}^{i_m})$.
The action is undefined in all other cases.
Define the map $i: S\to S_\tau$ by the formula $i(s) = (s, 1)$. Let $t: M(E,I)\to M(E,I)$ be the homomorphism defined on the elements $a\in E$ by $t(a) = a^{\tau(a)}$.
\begin{proposition}
The pair $(i,t)$ is a homomorphism of asynchronous systems $A\to A_\tau$.
\end{proposition}
A {\em parallel process realizing the trace} $\mu$ is a factorization of $\mu$ into a composition of traces
$$
[a_{i_1}a_{i_2}...a_{i_p}][a_{j_1}a_{j_2}...a_{j_p}]...[a_{k_1}a_{k_2}...a_{k_r}]=\mu
$$
in which the instructions within each factor are pairwise independent.
\begin{proposition}
The minimum runtime of a trace $[a_1a_2...a_n]$ transforming the system from a state $s$ to a state $s'$ is equal to the height of the Foata normal form of the trace $[a_1^{\tau (a_1)}a_2^{\tau (a_2)}...a_n^{\tau (a_n)}]$. A parallel process of minimum time is given by this normal form.
\end{proposition}
\begin{example}
Let us consider the pipeline Petri net consisting of three operating units
\begin{figure}[h]
$$
\xymatrix{
&*+[F]{a} \ar[rr]&&*+<10pt>[o][F]{ } \ar[rr]&&*+[F]{b} \ar[rr]&&*+<10pt>[o][F]{ } \ar[rr]&&*+[F]{c}}
$$
\caption{The pipeline Petri net}\label{pic1}
\end{figure}
\end{example}
Let the execution times be $\tau(a)=3$, $\tau(b)=1$, $\tau(c)=2$. If $n$ numbers arrive at the input, the trace of the process will be $[(a^3bc^2)^n]$. It is easy to see that the Foata normal form is
$$[a][a][a]([b][ac][ac][a])^{n-1}[b][c][c]$$
Its height is equal to $4n+2$. Hence the minimum runtime using three processors is $T_3=4n+2$. The runtime on a single processor is $T_1 = 6n$. Consequently, the average speedup is $T_1/T_3=6n/(4n+2)\approx 3/2$.
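The height $4n+2$ can also be verified algorithmically: scheduling each unit-time occurrence one step after the latest earlier occurrence it depends on gives exactly the height of the Foata normal form. A sketch, assuming, as the figure suggests, that only $a$ and $c$ are independent:

```python
# Height of the Foata normal form, computed greedily: each unit-time
# occurrence is scheduled one step after the latest earlier occurrence
# it depends on. For the pipeline, only a and c are independent.

def foata_height(word, independent):
    levels = []
    for i, x in enumerate(word):
        deps = [levels[j] for j in range(i)
                if (word[j], x) not in independent]
        levels.append(1 + max(deps, default=0))
    return max(levels, default=0)

I = {("a", "c"), ("c", "a")}
for n in (1, 2, 3):
    word = "aaabcc" * n          # the trace [(a^3 b c^2)^n], unit steps
    assert foata_height(word, I) == 4 * n + 2
```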
\section{Search for a parallel process with minimal time
to achieve a given reachable state from the initial state}
Let us consider an asynchronous system $A$ with a time function $\tau: E\to N$.
Let $A_\tau$ be the corresponding asynchronous system.
We construct a directed graph whose vertex set is $S_\tau$. If
$$(s,a_1^{i_1}a_2^{i_2}...a_p^{i_p})\cdot e_1\cdot e_2...\cdot e_n=(s',b_1^{j_1}b_2^{j_2}...b_q^{j_q})$$
for some vertices $(s,a_1^{i_1}a_2^{i_2}...a_p^{i_p})\in S_\tau$,$ (s',b_1^{j_1}b_2^{j_2}...b_q^{j_q})\in S_\tau$
and some $e_1,e_2,...,e_n\in E$ such that $(e_i,e_j)\in I$ for all $1\leq i<j\leq n$,
then these vertices are joined by a directed arrow of length 1.
The elements $s\in S$ are identified with pairs $(s,1)\in S_\tau$
where 1 is the neutral element of the monoid $M(E,I)$.
\begin{proposition}
A parallel process of the minimum time that takes the system $A$
from state $s_0$ to state $s$ corresponds to the shortest path
in the constructed graph connecting vertices $(s_0, 1)$ and $(s, 1)$.
\end{proposition}
Algorithms for finding the shortest directed path are well known. For example, the vertices are colored with the colors $0,1,2,...$ as follows: first, the vertex $s_0$ is painted with the color 0. Then the unpainted ends of the arrows coming out of it are painted with the color 1. Then the unpainted ends of arrows coming out of vertices of color 1 are painted with the color 2, and so on, until we color the vertex $s$. The color of the vertex $s$ is the length of the shortest path. A slight modification of this algorithm yields a method for finding a path of minimum length.
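The coloring procedure just described is breadth-first search; a compact sketch on a toy graph (the graph itself is purely illustrative):

```python
# The coloring algorithm above is breadth-first search: vertices
# painted at step k are exactly those at distance k from s0, so the
# color of s equals the length of the shortest directed path.
from collections import deque

def shortest_path_length(graph, s0, s):
    color = {s0: 0}
    queue = deque([s0])
    while queue:
        v = queue.popleft()
        if v == s:
            return color[v]
        for w in graph.get(v, ()):
            if w not in color:       # paint an unpainted arrow end
                color[w] = color[v] + 1
                queue.append(w)
    return None                      # s is unreachable from s0

g = {"s0": ["u", "v"], "u": ["s"], "v": ["u"], "s": []}
assert shortest_path_length(g, "s0", "s") == 2
```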
\section*{Conclusion}
The proposed time model $A_\tau$ can be interpreted
as a discrete model of E.~Goubault's timed automata \cite{4}.
A similar model can be constructed for the distributed asynchronous automata introduced in \cite{5}.
But in order to use it for building algorithms with time estimates,
some additional conditions on these automata have to be imposed.
\section{Introduction}
\label{sec:intro}
\
Type Ia supernovae (SNe Ia), as successful cosmological
distance indicators, are thought to result from thermonuclear
explosions of carbon-oxygen white dwarfs (CO WDs) in binaries.
However, the nature of their progenitors still remains unclear. It
is very likely that a CO WD grows somehow in mass close to the
Chandrasekhar mass (Ch mass) of $1.378\,{M}_\odot$ (if its
rotation is ignored) and then explodes as a SN Ia (e.g.,
\citealt{nty84}). The growth of a CO WD to the Ch mass limit has
been investigated widely and many progenitor models of SNe Ia have
been proposed in past years (for a detailed review, see
\citealt{wh12,hn00}). The most popular progenitor models are
single-degenerate (SD) and double-degenerate (DD) models. In the
SD model, a CO WD accretes material from its non-degenerate
companion to increase its mass close to the Ch mass (e.g.,
\citealt{wi73,nty84,hp04,wang2009}). In the DD model, two CO WDs
spiral in due to gravitational wave radiation and eventually
merge, leading to a SN Ia explosion (e.g.,
\citealt{it84,w84,yung94,han98,rui09}). The SD model has become
the favourite one in the past decades because it can explain the
similarities of SNe Ia and can reproduce very well
characteristic observational features of most normal SNe Ia (see a recent review by \citealt{wh12}).
The process of accretion onto the CO WD is crucial for the SD
scenario. At low accretion rates, the accreting WD undergoes
H-shell flashes similar to nova outbursts \citep{gs78,kp94,pk95}.
However, multicycle evolution of the H-shell flash is difficult to
compute, therefore only individual outbursts were followed in most
simulations of novae. If the accretion rate is too high, the
accreted matter will pile up on the surface of the WD and the WD
will evolve into a star like a red giant (e.g.,
\citealt{nomoto1982}; \citealt{nskh07}, hereafter NSKH07). Only in
a narrow regime is the H-shell burning steady, and the accreted
hydrogen is burnt completely. The SN Ia birth rate based on this
view was much lower than the observationally inferred one (e.g.,
\citealt{yung96,livio00}). To solve this problem, the optically
thick wind regime was proposed to replace the red giant regime
\citep{kh94,hkn96}. In this regime, the H-rich matter is
transformed into He at a critical rate, $\dot{M}_{\rm cr}$, while
the unprocessed material is blown off by the optically thick wind.
It significantly expanded the parameter space for the SN Ia
progenitors, which resulted in a higher theoretical birth rate of
SNe Ia from the SD scenario \citep{hp04,meng09}. However, those
studies did not pay much attention to the super-Eddington wind
which is triggered when the luminosity of the accreting WD exceeds
the Eddington luminosity. The Eddington accretion rate is around
$\sim 10^{-5}\,M_\odot\,\mbox{yr}^{-1}$ (\citealt{nomoto1982};
NSKH07), when only the accretion luminosity was used to evaluate
the radiation pressure on the surface of the WD. \citet{shen07}
proposed that the nuclear burning (H-shell burning during
accretion) near the surface of the WD has also to be considered
when estimating the Eddington accretion rate. They obtained a
range of Eddington accretion rate of
$2-7\times10^{-7}\,M_\odot\,\mbox{yr}^{-1}$ for various WD masses.
In this paper, we use the state-of-the-art stellar evolution code
of {\sc MESA} (version 3635) \citep{pea11,pea13} to simulate the
long-term evolution of CO WDs accreting solar-composition
material. Our aim is to investigate the stability of H-shell
burning on these WDs. The WD masses range from 0.5 to
$1.378\,{M}_\odot$, while the accretion rate is varied from
$10^{-8}$ to $10^{-5}\,M_\odot\,\mbox{yr}^{-1}$. The Eddington
accretion rate is estimated by including the nuclear energy,
gravothermal energy and radiation of the core.
\
\section{Simulation Code and Methods}
\label{sec:method}
\
\
In our study, we
employed the {\sc MESA} {\it default} opacity and EOS tables, i.e.,
the same as described in Figures 1 and 2 in \citet{pea11}. Our
nuclear network consisted of 21 isotopes, such as $^{1}$H,
$^{3}$He, $^{4}$He, $^{12}$C, $^{13}$C, $^{13}$N, $^{14}$N,
$^{15}$N, $^{14}$O, $^{15}$O, $^{16}$O, $^{17}$O, $^{18}$O,
$^{17}$F, $^{18}$F, $^{19}$F, $^{18}$Ne, $^{19}$Ne, $^{20}$Ne,
$^{22}$Mg, and $^{24}$Mg, coupled by 50 reactions, including those
of the pp chains and CNO cycles. Similar to NSKH07, the He-burning
reactions were neglected for simplicity. Two relevant {\sc MESA}
suite cases were selected for simulations: \texttt{make\_co\_wd}
and \texttt{wd2}.
The suite case \texttt{make\_co\_wd} was used to create CO WD
models. First, we chose a sufficiently massive pre-MS star and
evolved it until the mass of its He-exhausted core reached a value
close to the final WD's mass that we needed, but before its He
shell began to experience thermal pulses. Then, we artificially
removed the envelope, leaving a naked CO core that quickly evolved
to the WD cooling track. We selected hot CO WD models on the top
of the cooling track as initial models for our accretion
simulations\footnote{We encountered some convergence problems when
simulating cold CO WDs, possibly due to the high degeneracy
of materials on the surface of cold WDs and low resolution of WD
models in our study (the total number of mass zones is typically
around 2000). The evolution of cold CO WDs during accretion may
be slightly different from that of the hot models before and at
the onset of H burning, but the difference would become very small
when an equilibrium state is achieved. \cite{iben82} and
\cite{it89} also used hot WD models when they simulated accreting
WDs.}. Using these methods, we have obtained a series of CO WD
models with masses in the range of
$0.5-1.0\,{M}_\odot$. For CO WDs with masses larger
than $1.0\,{M}_\odot$, we increased the mass of our naked
$1.0\,{M}_\odot$ CO WD model by accreting material with its
surface composition until the required mass was reached
\citep{dhbp13}. As a result, we created 12 CO WD models with the
masses of $0.5$, $0.6$, $0.7$,
$0.8$, $0.9$, $1.0$,
$1.1$, $1.2$, $1.25$,
$1.3$, $1.35$ and $1.378\,{M}_\odot$, with central temperatures of (in units of $10^7{\rm K}$) 7.5, 7.1,
7.0, 6.9, 7.7, 10.0, 10.4, 12.2, 13.7, 16.0, 20.7 and 25.2,
respectively, at the onset of accretion.
The suite case of \texttt{wd2} was used to simulate accretion onto
CO WDs. The accreted material has the solar composition, in
particular, its H and heavy-element mass fractions are $X=0.7$ and
$Z=0.02$. We turned on a {\sc MESA} option that allows one to include
the acceleration term in the equation of hydrostatic equilibrium.
We have done a series of simulations for all of our CO WD models
with the accretion rate in the range from
$10^{-8}$ to
$\,10^{-5}\,M_\odot\,\mbox{yr}^{-1}$. Each simulation was carried
out long enough to obtain its detailed accretion properties.
We realize that a real CO WD accretes material from its companion
via an accretion disk. Therefore, the WD should initially
accumulate the accreted material at its equator. However, this
material will probably spread over the entire WD surface on a
dynamical timescale because of a dynamical instability, resulting
in its quasi-spherical distribution \citep{mac83}. This motivates
our assumption of spherically symmetric accretion, which is made
and used for simplicity, like it was done previously in other
similar studies (e.g. \citealt{it89,pk95,hkn96}, NSKH07,
\citealt{shen07}), although multi-dimensional simulations would
provide more consistent results.
If the accretion rate is high enough, the luminosity $L$ of the
accreting WD may exceed the Eddington luminosity \begin{eqnarray} {L}_{\rm
Edd}=\frac{4\pi GMc}{\kappa_{\rm T}}, \label{eq:Ledd} \end{eqnarray} where
${M}$ is the WD mass and $\kappa_{T}$ is the Thomson
opacity\footnote{ We use the Thomson opacity here for comparison
with previous work. However, in our calculation, $\kappa_{T}$ was
replaced by the opacity of the photosphere.}, in which case the
super-Eddington wind is triggered.
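As an illustrative sketch, Eq. (\ref{eq:Ledd}) can be evaluated numerically; the CGS constants and the electron-scattering opacity $\kappa_{\rm T}=0.2(1+X)\,{\rm cm^2\,g^{-1}}$ below are standard textbook values assumed here, not taken from this work.

```python
import math

# Standard CGS constants (assumed values, not from this work)
G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10       # speed of light [cm s^-1]
Msun = 1.989e33    # solar mass [g]
X = 0.7            # H mass fraction of the accreted material
kappa_T = 0.2 * (1 + X)   # electron-scattering (Thomson) opacity [cm^2 g^-1]

def L_edd(M):
    """Eddington luminosity L_Edd = 4*pi*G*M*c/kappa_T [erg s^-1]."""
    return 4 * math.pi * G * M * c / kappa_T
```

For a $1\,M_\odot$ WD this gives $L_{\rm Edd}\approx1.5\times10^{38}\,{\rm erg\,s^{-1}}$, and the limit scales linearly with the WD mass.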
The total luminosity ${L}$ consists of four parts, namely the
nuclear burning energy, gravothermal energy released by
compression, thermal radiation of the core, and accretion energy
released by the accreted material. The nuclear burning luminosity,
${L}_{\rm nuc}$, is always the main part of ${L}$. It can be
written as
\begin{eqnarray} {L}_{\rm nuc}=XQ\dot{M}_{\rm nuc},
\label{eq:nuc}
\end{eqnarray}
where $\dot{M}_{\rm nuc}$, ${X}$ and ${Q}$ are the rate
at which accreted material is processed by hydrogen burning, the hydrogen
mass fraction in the accreted material, and the energy released
per unit mass of H transformed into He, respectively.
The accretion luminosity released by accreted material, ${L}_{\rm
acc}$, is
\begin{eqnarray} {L}_{\rm acc}=\frac{{GM}\dot{M}_{\rm acc}}{R},
\label{eq:acc}
\end{eqnarray}
where ${M}$ and ${R}$ are the WD's mass and
radius, and $\dot{M}_{\rm acc}$ is the accretion rate.
By letting $L_{\rm Edd}\,=\,L_{\rm acc}$, the Eddington accretion
rate is $\dot{M}_{\rm Edd}=4\pi{cR}/\kappa_{\rm T}$, which was
used in NSKH07. If $L_{\rm Edd}\,=\,L_{\rm nuc}$ and $\dot{M}_{\rm
acc} = \dot{M}_{\rm nuc}$ (i.e. the accreted material is burnt
completely), the Eddington accretion rate then becomes
$\dot{M}_{\rm Edd}=4\pi GMc/(\kappa_{\rm T} XQ)$, which was
adopted in \citet{shen07}. Since $L_{\rm nuc}\gg L_{\rm acc}$,
the value of $\dot{M}_{\rm Edd}$ obtained in \citet{shen07} was
much lower than that in NSKH07. Recently, \citet{tsyl13} also
obtained a much lower Eddington accretion limit than that of
NSKH07 by setting $L_{\rm Edd}\,=\,L_{\rm nuc}+L_{\rm acc}$.
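The difference between the two definitions can be made concrete with a small numerical sketch; the constants, the assumed WD radius $R=0.01\,R_\odot$, and $Q\approx6.4\times10^{18}\,{\rm erg\,g^{-1}}$ are illustrative values chosen here, not numbers quoted above.

```python
import math

# Assumed standard CGS values (illustrative only)
G, c = 6.674e-8, 2.998e10          # [cm^3 g^-1 s^-2], [cm s^-1]
Msun, Rsun, yr = 1.989e33, 6.96e10, 3.156e7
X, Q = 0.7, 6.4e18                 # H mass fraction; energy per gram of H burnt
kappa_T = 0.2 * (1 + X)            # Thomson opacity [cm^2 g^-1]

def mdot_edd_acc(R):
    """L_Edd = L_acc  =>  Mdot_Edd = 4*pi*c*R/kappa_T (NSKH07 form) [g/s]."""
    return 4 * math.pi * c * R / kappa_T

def mdot_edd_nuc(M):
    """L_Edd = L_nuc, complete burning => 4*pi*G*M*c/(kappa_T*X*Q) [g/s]."""
    return 4 * math.pi * G * M * c / (kappa_T * X * Q)

# Illustrative 1 Msun WD with R = 0.01 Rsun
m_acc = mdot_edd_acc(0.01 * Rsun) * yr / Msun   # ~1e-5 Msun/yr
m_nuc = mdot_edd_nuc(Msun) * yr / Msun          # a few 1e-7 Msun/yr
```

With these numbers, $\dot{M}_{\rm Edd}$ from the nuclear condition is roughly twenty times lower than from the accretion condition, which is the sense in which the limit of \citet{shen07} is much lower than that of NSKH07.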
Usually, the accretion energy is not taken into account in ${L}$
because it is radiated away from the WD's surface very quickly
\citep{tb04}. Here, we define a luminosity ${L}_{\ast}$ as
${L}-{L}_{\rm acc}$ for convenience.
In our study, we first set ${L}_{\rm Edd}\,=\,{L}_{\rm acc}$ as
the wind triggering condition to reproduce the results of NSKH07
and examine the reliability of the method. Then, we let ${L}_{\rm
Edd}\,=\,{L}_{\ast}$ as the wind triggering condition to study the
behaviors of CO WDs during accretion, which is followed by a
discussion of the inclusion of ${L}_{\rm acc}$ to ${L}$.
\section{Results}
\label{sec:results}
\subsection{Reproducing Previous Results With the New Method}
\label{sec:rpr}
To directly compare with the results of the steady-state models of
NSKH07, we first employed ${L}_{\rm Edd}\,=\,{L}_{\rm acc}$ as the
triggering condition for the super-Eddington wind. However, the
super-Eddington wind obtained under this condition cannot blow off
a sufficient mass during an H-shell flash and, as a result,
the calculation of the subsequent evolution of the star is computationally
very expensive. Therefore, all simulations encountering H-shell
flashes were investigated by setting ${L}_{\rm
Edd}\,=\,{L}_{\ast}$, while ${L}_{\rm Edd}\,=\,{L}_{\rm acc}$ was
set for other cases. Figure 1 shows three typical examples of our
simulations: (a) an H-shell flash at a low accretion rate,
$10^{-7}\,{M}_\odot\,\mbox{yr}^{-1}$; (b) steady H burning at an
intermediate accretion rate, $2.1\times
10^{-7}\,{M}_\odot\,\mbox{yr}^{-1}$; and (c) a WD becoming a red
giant at a high accretion rate, $8\times
10^{-7}\,{M}_\odot\,\mbox{yr}^{-1}$ (hereafter, the red-giant
regime). In panels (b) and (c), there are also weak H-shell
flashes when H-rich material is ignited. Such an H-shell flash is
unavoidable at any given accretion rate because H-rich material
accumulated on the WD surface becomes electron-degenerate before
it is initially ignited. Therefore, we first employed ${L}_{\rm
Edd}\,=\,{L}_{\ast}$ as the triggering condition for the
super-Eddington wind at the first H-shell flash, and only after
that we set ${L}_{\rm Edd}\,=\,{L}_{\rm acc}$ to simulate the
following accretion evolution.
Figure 2 shows three boundary curves marked by their corresponding
mass accretion rates: $\dot{M}_{\rm stable}$ separates the steady
H burning from the H-shell flash, $\dot{M}_{\rm RG}$ separates the
steady H burning from the red-giant regime, and $\dot{M}_{\rm
Edd}$ separates the red-giant regime from the super-Eddington
wind. All the boundaries were determined via the bisection method for
each WD mass. The results of \cite{it89} and NSKH07 are also
presented in Figure~2 for comparison. It is seen that our results
are very close to theirs. However, some differences exist in the
exact locations of the boundary curves between our work and the
previous ones, which are likely caused by the different methods employed.
\cite{it89} used the static envelope analysis, NSKH07 used the
linear stability analysis to investigate the stability of the
steady-state models, while we carried out detailed stellar
evolution calculations that included a realistic accretion
process. A detailed comparison of parameters of the WD models in
the steady H burning regimes obtained in our work and in NSKH07 is
made in Table 1. Again, we see that the two models have very
similar properties.
\subsection{The Super-Eddington Wind Scenario}
\label{sec:Edd}
Here, we employ ${L}_{\rm Edd}\,=\,{L}_{\ast}$ as a condition for
triggering the super-Eddington wind to calculate the Eddington
accretion rate ($\dot{M}_{\rm Edd}$) for each WD mass. The results
are shown in Figure~3. We find that the values of $\dot{M}_{\rm
Edd}$ in this case are much lower than those from the previous
works, even lower than the values of $\dot{M}_{\rm RG}$ obtained
in section 3.1. The entire red-giant regime is now replaced by a
new regime that we call the ``super-Eddington wind regime''. In
this new regime, material accreted onto the surface of the WD
first undergoes an H-shell flash. After that, the H burning
becomes steady and then the super-Eddington wind is triggered. The
super-Eddington wind continues to blow off the surface material,
which prevents the envelope from expanding; the WD therefore
never becomes a red giant, and its luminosity remains constant at
the Eddington luminosity.
In the super-Eddington wind regime, H burning is stable, and the
accreted material is partially accumulated on the WD surface.
Furthermore, the WD mass growth rate is approximately equal to
$\dot{M}_{\rm Edd}$. The extra mass is blown away by the
super-Eddington wind. Thus, the super-Eddington wind is an
alternative to the optically thick wind in preventing an accreting
WD from expanding to a red giant at relatively high accretion
rates. For convenience, we have fitted our $\dot{M}_{\rm
Edd}$ and $\dot{M}_{\rm stable}$ data by the following
polynomials: \begin{eqnarray} \dot{M}_{\rm Edd}=5.975\times
10^{-6}\left({M}_{\rm WD}^4-3.496{M}_{\rm WD}^3+4.373{M}_{\rm
WD}^2 -2.226{M}_{\rm WD}+0.406\right), \label{eq:MdotEdd} \end{eqnarray}
\begin{eqnarray} \dot{M}_{\rm stable}=3.057\times 10^{-7}\left({M}_{\rm
WD}^2-0.386{M}_{\rm WD}+0.027\right), \label{eq:MdotStable} \end{eqnarray}
where ${M}_{\rm WD}$ is in units of ${M}_\odot$, and $\dot{M}_{\rm
Edd}$ and $\dot{M}_{\rm stable}$ are both in units of
$M_\odot\,\mbox{yr}^{-1}$.
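The fits can be evaluated directly. As a rough consistency check (a sketch made here, not a result quoted in the text), for a $1.0\,M_\odot$ WD the fitted stability boundary falls between the accretion rates of the flash case (a) and the steady case (b) of Figure~1.

```python
def mdot_edd(m):
    """Fitted Eddington accretion rate [Msun/yr]; m is the WD mass in Msun."""
    return 5.975e-6 * (m**4 - 3.496*m**3 + 4.373*m**2 - 2.226*m + 0.406)

def mdot_stable(m):
    """Fitted stability boundary [Msun/yr]; m is the WD mass in Msun."""
    return 3.057e-7 * (m**2 - 0.386*m + 0.027)
```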
In the above analysis, we neglected a contribution of $L_{\rm
acc}$ to $L$. However, for massive WDs, say $1.35{M}_\odot$,
${L}_{\rm acc}\sim 10^3{L}_\odot$ and it contributes to ${L}$ as
much as 10 per cent \citep{fkr85}. The inclusion of ${L}_{\rm
acc}$ in ${L}$ will make the super-Eddington wind occur more
easily, because an additional source of energy will contribute
to expelling the accreted material. Given that part
of the accretion luminosity is emitted by the disk, and part of
the remainder goes into spinning up the WD
\citep{lp74,bfgs09}\footnote{The energy deposited to winds is
usually less than $0.5{L}_{\rm acc}$.}, we examined the effect of
${L}_{\rm acc}$ for the following three cases: ${L}_{\rm
Edd}={L}_{\ast}+0.1{L}_{\rm acc}$, ${L}_{\rm
Edd}={L}_{\ast}+0.5{L}_{\rm acc}$, and ${L}_{\rm
Edd}={L}_{\ast}+0.8{L}_{\rm acc}$. The corresponding results are
also shown in Figure 3, in which we see that the Eddington
accretion limit \emph{does} decrease with the inclusion of ${L}_{\rm
acc}$ for massive CO WDs, while little difference is seen for
low-mass WDs.
\section{Discussion and Conclusion}
\label{sec:dis}
We have investigated the evolution of accreting CO WDs with masses
from 0.5 to $1.378\,M_{\odot}$ for accretion rates from $10^{-8}$
to $10^{-5}\,{M}_\odot\,\mbox{yr}^{-1}$. Our results are
consistent with those from the previous studies of the properties
of H burning on the surfaces of CO WDs during accretion, except
for some differences in the exact locations of the boundaries
between different regimes. We have proposed the super-Eddington
wind regime to replace the optically thick wind regime in preventing
the WD's envelope from expanding at high accretion rates. If a CO
WD accretes material at an appropriate rate (i.e. above
$\dot{M}_{\rm Edd}$), the WD will evolve through the
super-Eddington wind regime. The H in the accreted material is
burnt into He at a rate around $\dot{M}_{\rm Edd}$ and the
unprocessed material is blown away by the super-Eddington wind. If
the underlying He is further burnt into C and O as assumed in the
literature, the WD mass then increases and possibly reaches the Ch
mass. This picture thus provides a potential scenario for the
progenitors of SNe Ia. Note that we assumed a constant accretion
rate in our study, but the strong winds may hit the companion
surface and should affect the mass transfer rate to such an extent
that it could eventually stop \citep{hkn99}.
The characteristics of the super-Eddington wind are similar to
those of the optically thick wind \citep{hkn96}. However, the
efficiency of the optically thick wind strongly depends on the
metallicity because it is driven by a peak in the opacity due to
iron lines, therefore it does not work when the metallicity is
lower than 0.002 \citep{koba98,kobaN09}. In contrast, the
super-Eddington wind does not significantly depend on the
metallicity. We examined this for two CO WDs (with masses of
1.0 and $1.378\,{M}_\odot$) for
${Z}=10^{-6}$, and found that the super-Eddington wind could still
be triggered. The extra mass exceeding the Eddington accretion
rate is blown away by the super-Eddington wind, and the WDs
increase in mass with rates near the Eddington accretion rate (see
Figure 4). This indicates that our super-Eddington wind scenario
may produce SNe Ia at very low metallicities, which could explain
the SNe Ia at high redshifts, e.g. SN UDS10Wil with a redshift of
1.914 \citep{jrr13}, assuming that the metallicity decreases with
the redshift.
Note that the mass-loss rate ($\equiv \dot M_{\rm acc}-\dot
M_{\rm Edd}$ in our study) by the super-Eddington wind has an upper
limit, $\dot M_{\rm max}$, above which the super-Eddington wind
cannot blow away all the unprocessed material. From the energy
limit \citep{langer00}, $\dot M_{\rm max}=(\alpha L/L_{\rm
Edd})\,6\times10^{-6}(R/0.01R_\odot)M_\odot{\rm yr^{-1}}$, where
$L$ is the star luminosity and $\alpha$ is the efficiency of
stellar photon luminosity converting into kinetic wind energy
($\alpha=1$ if we assume all the photon energy is used to drive
the wind, i.e. all photons have been trapped and the WD is
invisible). If $\dot M_{\rm acc} > \dot M_{\rm Edd}+\dot M_{\rm
max}$, the WD may then become a red giant
eventually.\footnote{The mass-loss rate of the optically thick
wind \citep{hkn96} is also limited by the energy limit. For
Wolf-Rayet stars, $\alpha \simeq 0.05$ from the study of
\citet{la93}.}
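As a minimal sketch, the energy limit quoted above can be wrapped in a helper; the normalized arguments are a convenience assumed here.

```python
def mdot_max(L_over_LEdd, R_over_001Rsun, alpha=1.0):
    """Energy-limited maximum wind mass-loss rate [Msun/yr] (Langer 2000
    form as quoted in the text), with L/L_Edd and R in units of 0.01 Rsun."""
    return alpha * L_over_LEdd * 6e-6 * R_over_001Rsun
```

For $\alpha=1$, $L=L_{\rm Edd}$ and $R=0.01\,R_\odot$ the limit is $6\times10^{-6}\,M_\odot\,{\rm yr^{-1}}$, while the Wolf-Rayet value $\alpha\simeq0.05$ reduces it twentyfold.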
We have not considered abundance mixing induced by H-shell flashes
in our study, since the underlying He shell grows in each H-shell
flash and prevents any dredge-up of heavier nuclei from the core
to the surface zone \citep{kp94}. The abundance mixing might
affect the energy production via nuclear burning, and then the
$\dot{M}_{\rm stable}$ boundary slightly, but not the
super-Eddington wind boundary.
\acknowledgments We acknowledge useful discussions and suggestions
from Xiangdong Li.
This work is partly supported by the NSFC (Nos.11173055, 11033008)
and Yunnan Grant (2012HB037). PAD acknowledges the support of his
research by the NSF grants PHY 11-25915 and AST 11-09174 and by
JINA (NSF grant PHY 08-22648). The computations are made at the
Yunnan Observatories Supercomputing Platform.
\section{Introduction}
Throughout this paper, let $p$ be an odd prime number.
The symbols $\mathbb Z_p,$ $\mathbb Q_p$ and $\mathbb C_p$ denote
the ring of $p$-adic integers, the field of $p$-adic numbers and
the $p$-adic completion of the algebraic closure of $\mathbb Q_p,$ respectively.
The $p$-adic absolute value on $\mathbb C_p$ is normalized in such a way that
$|p|_p=p^{-1}.$
Let $\mathbb N$ denote the set of natural numbers and $\mathbb Z^+=\mathbb N\cup\{0\}.$
For the $q$-numbers, we use the following notation:
$$[x]_q=\frac{1-q^x}{1-q}\quad\text{and}\quad [x]_{-q}=\frac{1-(-q)^x}{1+q}.$$
Note that $\lim_{q\to1}[x]_q=x$ for $x\in\mathbb Z_p,$ where $q$ tends to 1 in the region $0<|q-1|_p<1.$
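In the real case this limit can be checked numerically; the small sketch below is only an illustration of the notation (the same algebra underlies the $p$-adic statement).

```python
def q_number(x, q):
    """[x]_q = (1 - q**x)/(1 - q); tends to x as q -> 1."""
    return (1 - q**x) / (1 - q)

def q_number_minus(x, q):
    """[x]_{-q} = (1 - (-q)**x)/(1 + q), for integer x."""
    return (1 - (-q)**x) / (1 + q)
```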
When one talks of $q$-analogue, $q$ is
variously considered as an indeterminate, a complex number $q\in\mathbb
C,$ or a $p$-adic number $q\in\mathbb C_p.$ If $q=1+t\in\mathbb C_p,$ one normally assumes $|t|_p<1.$
We shall further suppose that $\ord(t)>1/(p-1),$ so that $q^x=\exp(x\log q)$ for $|x|_p\leq1.$ If $q\in\mathbb C,$ then we assume that
$|q|<1.$
After Carlitz \cite{Ca48,Ca54} gave $q$-extensions of the classical Bernoulli numbers and polynomials,
the $q$-extensions of Bernoulli and Euler numbers and polynomials have been studied by
several authors (cf. \cite{CCKS,CKOS,Ca48,Ca54,KT1,KT,KT2,KT3,TK07,K07,K08,K08-1,K09,K09-1,K09-2,TK10,
KJKR,KKL,OS,OSRC,Sa}).
The Euler numbers and polynomials have been studied by researchers in the field of number theory, mathematical physics and
so on (cf. \cite{Ca48,Ca54,CSK,TK07,K08,K09,K09-1,K09-2,TK10,Ro}).
Recently, various $q$-extensions of these numbers and polynomials have been studied by many mathematicians
(cf. \cite{KT,KT2,KT3,K07,K08-1,KJKR,KKL,OSRC}).
Also, some authors have studied in the several area of $q$-theory (cf. \cite{CCKS,CKOS,GG,TK10,OS}).
It is known that the generating function of Euler numbers $F(t)$ is given by
\begin{equation}\label{Eu-gen}
F(t)=\frac 2{e^t+1}=\sum_{n=0}^\infty E_n\frac{t^n}{n!}.
\end{equation}
From (\ref{Eu-gen}), we know that the recurrence formula for the Euler numbers is given by
\begin{equation}\label{Eu-gen-recu}
E_0=1,\qquad (E+1)^n+E_{n}=0\quad\text{if } n>0
\end{equation}
with the usual convention of replacing $E^n$ by $E_n$ (see \cite{KT2,KKL}).
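Recurrence (\ref{Eu-gen-recu}) determines the $E_n$ completely; as a small sketch, they can be computed in exact rational arithmetic, and the first values $1,-\tfrac12,0,\tfrac14,0,-\tfrac12,\dots$ agree with the expansion of $2/(e^t+1)$.

```python
from fractions import Fraction
from math import comb

def euler_numbers(N):
    """E_0..E_N from E_0 = 1 and (E+1)^n + E_n = 0 (n > 0), where the
    symbolic power (E+1)^n is expanded with E^k replaced by E_k."""
    E = [Fraction(1)]
    for n in range(1, N + 1):
        s = sum(comb(n, k) * E[k] for k in range(n))  # terms with k < n
        E.append(-s / 2)      # the k = n term contributes a second E_n
    return E
```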
In \cite{KJKR}, the $q$-extension of Euler numbers $E_{n,q}^*$ are defined as
\begin{equation}\label{Eu*-recu}
E_{0,q}^*=1,\qquad
(qE^*+1)^n+E_{n,q}^*=
\begin{cases}
2&\text{if }n=0 \\
0&\text{if } n>0
\end{cases}
\end{equation}
with the usual convention of replacing $(E^*)^n$ by $E_{n,q}^*.$
As the same motivation of the construction in \cite{KKL}, Carlitz's type $q$-Euler numbers $E_{n,q}$ are defined as
\begin{equation}\label{Eu-recu}
E_{0,q}=\frac{2}{[2]_q},\qquad
q(qE+1)^n+E_{n,q}=\begin{cases}
2&\text{if }n=0 \\
0&\text{if } n>0
\end{cases}
\end{equation}
with the usual convention of replacing $E^n$ by $E_{n,q}.$
It was shown that $\lim_{q\to1}E_{n,q}=E_n,$ where $E_n$ is the $n$th Euler number.
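Recurrence (\ref{Eu-recu}) can be checked against the closed form $E_{n,q}=\frac{2}{(1-q)^n}\sum_{i=0}^n\binom ni\frac{(-1)^i}{1+q^{i+1}}$ of Theorem~\ref{witt-open} in exact rational arithmetic; the sketch below is illustrative only.

```python
from fractions import Fraction
from math import comb

def carlitz_q_euler(N, q):
    """E_{0,q}..E_{N,q} from E_{0,q} = 2/[2]_q and q(qE+1)^n + E_{n,q} = 0
    for n > 0, expanding (qE+1)^n with E^k replaced by E_{k,q}."""
    E = [2 / (1 + q)]
    for n in range(1, N + 1):
        s = sum(comb(n, k) * q**k * E[k] for k in range(n))
        E.append(-q * s / (1 + q**(n + 1)))   # isolate the k = n term
    return E

def witt_closed_form(n, q):
    """2/(1-q)^n * sum_i C(n,i) (-1)^i / (1 + q^(i+1))."""
    return 2 / (1 - q)**n * sum(comb(n, i) * (-1)**i / (1 + q**(i + 1))
                                for i in range(n + 1))
```

With $q$ a rational number the two representations agree exactly, and for $q$ close to $1$ the values approach the classical Euler numbers, e.g. $E_{1,q}\to-\tfrac12$.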
In the complex case, the generating function of Carlitz's type $q$-Euler numbers $F_q(t)$ is given by
\begin{equation}\label{Ca-q-Eu-int}
F_{q}(t)=\sum_{n=0}^\infty E_{n,q}\frac{t^n}{n!}=2\sum_{n=0}^\infty(-q)^ne^{[n]_qt},
\end{equation}
where $q$ is complex number with $|q|<1$ (see \cite{KKL}).
Note that the series on the right-hand side of (\ref{Ca-q-Eu-int}) is uniformly convergent in the wider sense.
In $p$-adic case, Kim et al. \cite{KKL} could not determine the generating function of
Carlitz's type $q$-Euler numbers and Witt's formula for Carlitz's type $q$-Euler numbers.
In this paper, we obtain the generating function of Carlitz's type $q$-Euler numbers in the $p$-adic case.
Also, we give Witt's formula for Carlitz's type $q$-Euler numbers,
which is a partial answer to the problem in \cite{KKL}.
Moreover, we obtain a new $p$-adic $q$-$l$-function $l_{p,q}(s,\chi)$ for a Dirichlet character $\chi,$
with the property that
$$l_{p,q}(-n,\chi)=E_{n,\chi_n,q}-\chi_n(p)[p]_q^nE_{n,\chi_n,q^p}$$
for $n\in \mathbb Z^+$
using the fermionic $p$-adic integral on $\mathbb Z_p.$
\section{Carlitz's type $q$-Euler numbers in the $p$-adic case}
Let $UD(\mathbb Z_p)$ be the space of uniformly differentiable functions on $\mathbb Z_p.$
Then the $p$-adic $q$-integral of a function $f\in UD(\mathbb Z_p)$ on $\mathbb Z_p$
is defined by
\begin{equation}\label{Iqf}
I_q(f)=\int_{\Z} f(a)d\mu_q(a)=\lim_{N\rightarrow\infty}\frac1{[p^{N}]_q}
\sum_{a=0}^{p^N-1}f(a)q^a
\end{equation}
(cf. \cite{CSK,KT1,KT,KT2,KT3,TK07,K07,K08,K08-1,K09,K09-1,K09-2,TK10,KJKR,OS,OSRC}).
The bosonic $p$-adic integral on $\mathbb Z_p$ is considered as the limit $q\to1,$ i.e.,
\begin{equation}\label{q=1}
I_{1}(f)=\int_{\mathbb Z_p}f(a)d\mu_{1}(a).
\end{equation}
From (\ref{Iqf}), we have the fermionic $p$-adic integral on $\mathbb Z_p$ as
follows:
\begin{equation}\label{q=-1}
I_{-1}(f)=\lim_{q\to-1}I_q(f)=\int_{\mathbb Z_p}f(a)d\mu_{-1}(a).
\end{equation}
Using formula (\ref{q=-1}), we can readily derive the classical Euler polynomials, $E_{n}(x),$ namely
\begin{equation}\label{h-Euer}
2\int_{\mathbb Z_p}
e^{(x+y)t} d\mu_{-1}(y)=\frac{2e^{xt}}{e^t+1}=\sum_{n=0}^\infty E_{n}(x)\frac{t^n}{n!}.
\end{equation}
In particular, when $x=0,$ $E_{n}(0)=E_n$ are the well-known Euler numbers (cf. \cite{KT2,TK10,OS}).
By the definition of $I_{-1}(f),$ we see that
\begin{equation}\label{-1-ft-eq}
I_{-1}(f_1)+I_{-1}(f)=2f(0),
\end{equation}
where $f_1(x)=f(x+1)$ (see \cite{KT2}). By (\ref{-1-ft-eq}) and induction, we obtain the following
fermionic $p$-adic integral equation
\begin{equation}\label{-1-ft-eq-1}
I_{-1}(f_n)+(-1)^{n-1}I_{-1}(f)=2\sum_{i=0}^{n-1}(-1)^{n-i-1}f(i),
\end{equation}
where $n=1,2,\ldots$ and $f_n(x)=f(x+n).$
From (\ref{-1-ft-eq-1}), we note that
\begin{align}
&I_{-1}(f_n)+I_{-1}(f)=2\sum_{i=0}^{n-1}(-1)^{i}f(i)\quad\text{if $n$ is odd};
\label{-1-ft-eq-1-o} \\
&I_{-1}(f_n)-I_{-1}(f)=2\sum_{i=0}^{n-1}(-1)^{i+1}f(i)\quad\text{if $n$ is even}.
\label{-1-ft-eq-1-e}
\end{align}
For $x\in\Z$ and any integer $i\geq0,$ we define
\begin{equation}\label{p-binom}
\binom xi=
\begin{cases}
\frac{x(x-1)\cdots(x-i+1)}{i!}&\text{if }i\geq1, \\
1, &\text{if }i=0.
\end{cases}
\end{equation}
It is easy to see that $\binom xi\in\Z$ (see \cite[p.\,172]{Ro}).
We put $x\in\mathbb C_p$ with $\ord(x)>1/(p-1)$ and $|1-q|_p<1.$
We define $q^x$ for $x\in\Z$ by
\begin{equation}\label{q^x}
q^x=\sum_{i=0}^\infty\binom xi(q-1)^i\quad\text{and}\quad [x]_q=\sum_{i=1}^\infty\binom xi(q-1)^{i-1}.
\end{equation}
If we set $f(x)=q^x$ in (\ref{-1-ft-eq-1-o}) and (\ref{-1-ft-eq-1-e}), we have
\begin{align}
&I_{-1}(q^x)=\frac{2}{q^n+1}\sum_{i=0}^{n-1}(-1)^{i}q^i=\frac2{q+1}\quad\text{if $n$ is odd};
\label{q-ex-o} \\
&I_{-1}(q^x)=\frac{2}{q^n-1}\sum_{i=0}^{n-1}(-1)^{i+1}q^{i}=\frac2{q+1}\quad\text{if $n$ is even}.
\label{q-ex-e}
\end{align}
Thus for each $l\in\mathbb N$ we obtain $I_{-1}(q^{lx})=\frac2{q^l+1}.$
Therefore we have
\begin{equation}\label{q-num}
\begin{aligned}
I_{-1}(q^x[x]_q^n)&=\frac1{(1-q)^n}\sum_{l=0}^{n}\binom nl(-1)^lI_{-1}(q^{(l+1)x})\\
&=\frac1{(1-q)^n}\sum_{l=0}^{n}\binom nl(-1)^l\frac2{q^{l+1}+1}.
\end{aligned}
\end{equation}
Also, if $f(x)=q^{lx}$ in (\ref{-1-ft-eq}), then
\begin{equation}\label{q-ft-0}
I_{-1}(q^{l(x+1)})+I_{-1}(q^{lx})=2f(0)=2.
\end{equation}
On the other hand, by (\ref{q-ft-0}), we obtain that
\begin{equation}\label{q-num-m}
\begin{aligned}
&I_{-1}(q^{x+1}[x+1]_q^n)+I_{-1}(q^x[x]_q^n) \\
&=\frac1{(1-q)^n}\sum_{l=0}^{n}\binom nl(-1)^l\left(I_{-1}((q^{l+1})^{x+1})+I_{-1}((q^{l+1})^x)\right)\\
&=\frac2{(1-q)^n}\sum_{l=0}^{n}\binom nl(-1)^l \\
&=0
\end{aligned}
\end{equation}
is equivalent to
\begin{equation}\label{q-num-q-int}
\begin{aligned}
0&=I_{-1}(q^{x+1}[x+1]_q^n)+I_{-1}(q^x[x]_q^n) \\
&=qI_{-1}\left(q^{x}(1+q[x]_q)^n\right)+I_{-1}(q^x[x]_q^n) \\
&=qI_{-1}\left(q^{x}\sum_{l=0}^n\binom nlq^l[x]_q^l\right)+I_{-1}(q^x[x]_q^n) \\
&=q\sum_{l=0}^n\binom nlq^lI_{-1}(q^x[x]_q^l)+I_{-1}(q^x[x]_q^n).
\end{aligned}
\end{equation}
From the definition of fermionic $p$-adic integral on $\mathbb Z_p$
and (\ref{q-num}), we can derive the following formula
\begin{equation}\label{re-f}
\begin{aligned}
I_{-1}(q^x[x]_q^n)&=\int_{\Z}[x]_q^nq^xd\mu_{-1}(x) \\
&= \lim_{N\to\infty}\sum_{a=0}^{p^N-1}
\frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^iq^{ia}(-q)^{a} \\
&= \frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^i\lim_{N\to\infty}\sum_{a=0}^{p^N-1}(-1)^a(q^{i+1})^a \\
&=\frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^i\frac{2}{1+q^{i+1}}
\end{aligned}
\end{equation}
is equivalent to
\begin{equation}\label{re-f-eq}
\begin{aligned}
\sum_{n=0}^\infty I_{-1}(q^x[x]_q^n)\frac{t^n}{n!}
&=\sum_{n=0}^\infty \frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^i\frac{2}{1+q^{i+1}}\frac{t^n}{n!} \\
&=2\sum_{n=0}^\infty(-q)^ne^{[n]_qt}.
\end{aligned}
\end{equation}
From (\ref{q-ft-0}), (\ref{q-num-m}), (\ref{q-num-q-int}), (\ref{re-f}) and (\ref{re-f-eq}), it is easy to show that
\begin{equation}\label{Ca-q-Eu-recu}
q\sum_{l=0}^n\binom nlq^lE_{l,q}+E_{n,q}=
\begin{cases}
2&\text{if }n=0 \\
0&\text{if } n>0,
\end{cases}
\end{equation}
where $E_{n,q}$ are Carlitz's type $q$-Euler numbers defined by (see \cite{KKL})
\begin{equation}\label{Ca-q-Eu}
F_{q}(t)=2\sum_{n=0}^\infty(-q)^ne^{[n]_qt}=\sum_{n=0}^\infty E_{n,q}\frac{t^n}{n!}.
\end{equation}
Therefore, we obtain the recurrence formula for Carlitz's type $q$-Euler numbers as follows:
\begin{equation}\label{Ca-q-Eu-recu-bi}
q(qE+1)^n+E_{n,q}=
\begin{cases}
2&\text{if }n=0 \\
0&\text{if } n>0
\end{cases}
\end{equation}
with the usual convention of replacing $E^n$ by $E_{n,q}.$
Therefore, by (\ref{re-f-eq}), (\ref{Ca-q-Eu}) and (\ref{Ca-q-Eu-recu-bi}),
we obtain the following theorem.
\begin{theorem}[Witt's formula for $E_{n,q}$]\label{witt-open}
For $n\in\mathbb Z^+,$
$$E_{n,q}=\frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^i\frac{2}{1+q^{i+1}}=\int_{\Z}[x]_q^nq^xd\mu_{-1}(x),$$
which is a partial answer to the problem in \cite{KKL}.
Carlitz's type $q$-Euler numbers $E_n=E_{n,q}$ can be determined inductively by
\begin{equation}
q(qE+1)^n+E_{n,q}=
\begin{cases}
2&\text{if }n=0 \\
0&\text{if } n>0
\end{cases}
\end{equation}
with the usual convention of replacing $E^n$ by $E_{n,q}.$
\end{theorem}
Carlitz type $q$-Euler polynomials $E_{n,q}(x)$ are defined by means of the generating
function $F_{q}(x,t)$ as follows:
\begin{equation}\label{tw-q-E-gen}
F_{q}(x,t)=2\sum_{k=0}^\infty(-1)^kq^ke^{[k+x]_qt}=\sum_{n=0}^\infty E_{n,q}(x)\frac{t^n}{n!}.
\end{equation}
In the cases $x=0,$ $E_{n,q}(0)=E_{n,q}$ will be called Carlitz type $q$-Euler numbers
(cf. \cite{KT3,OS}).
We also can see that the generating functions $F_{q}(x,t)$ are determined as solutions of the
following $q$-difference equation:
\begin{equation}\label{q-diff}
F_{q}(x,t)=2e^{[x]_qt}- q e^tF_{q}(x,qt).
\end{equation}
From (\ref{tw-q-E-gen}), we get the following:
\begin{lemma}\label{gen-le}
\begin{enumerate}
\item $F_{q}(x,t)=2e^{\frac{t}{1-q}}\sum_{j=0}^\infty\left(\frac1{q-1}\right)^jq^{xj}
\frac1{1+ q^{j+1}}\frac{t^j}{j!}.$
\item $E_{n,q}(x)=2\sum_{k=0}^\infty (-1)^kq^k[k+x]_q^n.$
\end{enumerate}
\end{lemma}
It is clear from (1) and (2) of Lemma \ref{gen-le} that
\begin{equation}\label{q-exi-eq}
E_{n,q}(x)=\frac{2}{(1-q)^n}\sum_{k=0}^n\binom nk \frac{(-1)^k}{1+ q^{k+1}}q^{xk}
\end{equation}
and
\begin{equation}\label{q-sums}
\begin{aligned}
\sum_{k=0}^{m-1}(-1)^kq^k[k+x]_q^n
&=\sum_{k=0}^{\infty}(-1)^{k}q^k[k+x]_q^n \\
&\quad\quad\quad-\sum_{k=0}^{\infty}(-1)^{k+m}q^{k+m}[k+m+x]_q^n \\
&=\frac1{2}\left(E_{n,q}(x)+(-1)^{m+1}q^m E_{n,q}(x+m)\right).
\end{aligned}
\end{equation}
From (\ref{q-exi-eq}) and (\ref{q-sums}), we may state
\begin{proposition}\label{q-E-re}
If $m\in\mathbb N$ and $n\in\mathbb Z^+,$ then
\begin{enumerate}
\item $E_{n,q}(x)=\frac{2}{(1-q)^n}\sum_{k=0}^n\binom nk \frac{(-1)^k}{1+ q^{k+1}}q^{xk}.$
\item $\sum_{k=0}^{m-1}(-1)^kq^k[k+x]_q^n
=\frac1{2}\left(E_{n,q}(x)+(-1)^{m+1}q^m E_{n,q}(x+m)\right).$
\end{enumerate}
\end{proposition}
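In the real case $|q|<1$ both parts of Proposition~\ref{q-E-re} can be verified numerically; the sketch below (illustrative values only) compares the series of Lemma~\ref{gen-le}(2) with the finite sum of part (1).

```python
from math import comb

def qnum(y, q):
    """[y]_q = (1 - q**y)/(1 - q) for real 0 < q < 1."""
    return (1 - q**y) / (1 - q)

def E_poly_series(n, x, q, terms=400):
    """Truncation of 2 * sum_k (-1)^k q^k [k+x]_q^n (Lemma (2))."""
    return 2 * sum((-1)**k * q**k * qnum(k + x, q)**n for k in range(terms))

def E_poly_closed(n, x, q):
    """Finite sum 2/(1-q)^n sum_k C(n,k)(-1)^k q^{xk}/(1+q^{k+1})."""
    return 2 / (1 - q)**n * sum(comb(n, k) * (-1)**k * q**(x * k)
                                / (1 + q**(k + 1)) for k in range(n + 1))
```

The alternating-sum identity of part (2) of the proposition can be tested in the same way, e.g. with $m=5$, so that $(-1)^{m+1}=1$.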
\begin{proposition}\label{q-e-witt}
For $n\in\mathbb Z^+,$ the value of $\int_{\mathbb Z_p}[x+y]_q^nq^yd\mu_{-1}(y)$ is $n!$
times the coefficient of $t^n$ in the formal expansion of
$2\sum_{k=0}^\infty(-1)^kq^ke^{[k+x]_qt}$ in powers of $t.$
That is,
$E_{n,q}(x)=\int_{\mathbb Z_p}[x+y]_q^nq^yd\mu_{-1}(y).$
\end{proposition}
\begin{proof}
From (\ref{q=-1}), we have the relation
$$\int_{\mathbb Z_p}q^{k(x+y)}q^{y}d\mu_{-1}(y)={q^{xk}}\lim_{N\rightarrow\infty}
\sum_{a=0}^{p^N-1}(-q^{k+1})^a
=\frac{2q^{xk}}{1+ q^{k+1}}$$
which leads to
$$\begin{aligned}
\int_{\mathbb Z_p}[x+y]_q^nq^yd\mu_{-1}(y)
&=2\sum_{k=0}^n\binom nk\frac1{(1-q)^n}(-1)^k\int_{\mathbb Z_p}q^{k(x+y)}q^yd\mu_{-1}(y) \\
&=\frac{2}{(1-q)^n}\sum_{k=0}^n\binom nk \frac{(-1)^k}{1+ q^{k+1}}q^{xk}.
\end{aligned}$$
The result now follows by using (1) of Proposition \ref{q-E-re}.
\end{proof}
\begin{corollary}\label{q-E-re-co}
If $n\in\mathbb Z^+,$ then
$$E_{n,q}(x)=\sum_{k=0}^n\binom nk [x]_q^{n-k}q^{kx}E_{k,q}.$$
\end{corollary}
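The corollary can likewise be checked numerically in the real case $|q|<1$; this is an illustrative sketch that uses the finite-sum form of Proposition~\ref{q-E-re}(1) on both sides.

```python
from math import comb

def qnum(y, q):
    """[y]_q = (1 - q**y)/(1 - q)."""
    return (1 - q**y) / (1 - q)

def E_closed(n, x, q):
    """E_{n,q}(x) via Proposition (1); E_{n,q} = E_closed(n, 0, q)."""
    return 2 / (1 - q)**n * sum(comb(n, k) * (-1)**k * q**(x * k)
                                / (1 + q**(k + 1)) for k in range(n + 1))

def E_via_corollary(n, x, q):
    """sum_k C(n,k) [x]_q^{n-k} q^{kx} E_{k,q}."""
    return sum(comb(n, k) * qnum(x, q)**(n - k) * q**(k * x) * E_closed(k, 0, q)
               for k in range(n + 1))
```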
Let $d\in\mathbb N$ with $d\equiv1\pmod{2}$ and $p$ be a fixed odd prime number.
We set
\begin{equation}\label{inv}
\begin{aligned}
&X=\varprojlim_N (\mathbb Z/dp^N\mathbb Z), \quad
X^*=\bigcup_{\substack{ 0<a<dp\\ (a,p)=1}} a+dp\mathbb Z_p,\\
&\,a+dp^N\mathbb Z_p=\{x\in X \mid x\equiv a\pmod{dp^N}\},
\end{aligned}
\end{equation}
where $a\in \mathbb Z$ with $0\leq a<dp^N$ (cf. \cite{KT2,TK07}).
Note that the natural map $\mathbb Z/dp^N\mathbb Z\to\mathbb Z/p^N\mathbb Z$
induces
\begin{equation}\label{pro}
\pi:X\to\Z.
\end{equation}
Hereafter, if $f$ is a function on $\Z,$ we denote by the same $f$ the function $f\circ\pi$ on $X.$
Namely we consider $f$ as a function on $X.$
Let $\chi$ be a Dirichlet character with odd conductor $d=d_\chi\in\mathbb N.$
Then the generalized Carlitz type $q$-Euler polynomials attached to $\chi$ are defined by
\begin{equation}\label{open}
E_{n,\chi,q}(x)=\int_{X}\chi(a)[x+y]_q^nq^yd\mu_{-1}(y),
\end{equation}
where $n\in\mathbb Z^+$ and $x\in\mathbb Z_p.$
Then we have the generating function of generalized Carlitz type $q$-Euler polynomials attached to $\chi:$
\begin{equation}\label{prob-1}
F_{q,\chi}(x,t)=2\sum_{m=0}^\infty\chi(m)(-1)^mq^me^{[m+x]_qt} =\sum_{n=0}^\infty E_{n,\chi,q}(x)\frac{t^n}{n!}.
\end{equation}
Now fix any $t\in\mathbb C_p$ with $\ord(t)>1/(p-1)$ and $|1-q|_p<1.$
From (\ref{prob-1}), we have
\begin{equation}\label{prob-2}
\begin{aligned}
F_{q,\chi}(x,t)&=2\sum_{m=0}^\infty\chi(m)(-q)^m\sum_{n=0}^\infty\frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^iq^{i(m+x)}\frac{t^n}{n!} \\
&=2\sum_{n=0}^\infty\frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^iq^{ix} \\
&\quad\times
\sum_{j=0}^{d-1}\sum_{l=0}^\infty\chi(j+dl)(-q)^{j+dl}q^{i(j+dl)}\frac{t^n}{n!} \\
&=2\sum_{n=0}^\infty\frac1{(1-q)^n}\sum_{j=0}^{d-1}\chi(j)(-q)^j
\sum_{i=0}^n\binom ni(-1)^i\frac{q^{i(x+j)}}{1+q^{d(i+1)}}\frac{t^n}{n!},
\end{aligned}
\end{equation}
where $x\in\mathbb Z_p$ and $d\in\mathbb N$ with $d\equiv1\pmod2.$
By (\ref{prob-1}) and (\ref{prob-2}), we can derive the following formula
\begin{equation}\label{prob-3}
\begin{aligned}
E_{n,\chi,q}(x)&=\frac1{(1-q)^n}\sum_{j=0}^{d-1}\chi(j)(-q)^j
\sum_{i=0}^n\binom ni(-1)^iq^{i(x+j)} \frac{2}{1+q^{d(i+1)}} \\
&=\frac1{(1-q)^n}\sum_{j=0}^{d-1}\chi(j)(-q)^j
\sum_{i=0}^n\binom ni(-1)^iq^{i(x+j)} \\
&\quad\times\lim_{N\to\infty}\sum_{l=0}^{p^N-1}(-1)^l(q^{d(i+1)})^l \\
&=\lim_{N\to\infty}\sum_{j=0}^{d-1}\sum_{l=0}^{p^N-1}\chi(j+dl)
\frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^iq^{i(j+dl+x)} \\
&\quad\times(-1)^{j+dl}q^{j+dl} \\
&=\lim_{N\to\infty}\sum_{a=0}^{dp^N-1}\chi(a)
\frac1{(1-q)^n}\sum_{i=0}^n\binom ni(-1)^iq^{i(a+x)}(-q)^{a} \\
&=\int_{X}\chi(y)[x+y]_q^nq^yd\mu_{-1}(y),
\end{aligned}
\end{equation}
where $x\in\mathbb Z_p$ and $d\in\mathbb N$ with $d\equiv1\pmod2.$
Therefore, we obtain the following theorem.
\begin{theorem}
$$
E_{n,\chi,q}(x)=\frac1{(1-q)^n}\sum_{j=0}^{d-1}\chi(j)(-q)^j
\sum_{i=0}^n\binom ni(-1)^iq^{i(x+j)} \frac{2}{1+q^{d(i+1)}},
$$
where $n\in\mathbb Z^+$ and $x\in\mathbb Z_p.$
\end{theorem}
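In the real case $|q|<1$ the theorem can be illustrated numerically; the quadratic character mod 3 used below is an assumption chosen for the example (any odd conductor $d$ works), and the series side is the generating-function form (\ref{prob-1}) at $x=0$ and general $x$.

```python
from math import comb

def chi(m):
    """Quadratic character mod 3 (illustrative choice, conductor d = 3)."""
    return (0, 1, -1)[m % 3]

def qnum(y, q):
    return (1 - q**y) / (1 - q)

def E_chi_series(n, x, q, terms=600):
    """Truncation of 2 sum_m chi(m)(-1)^m q^m [m+x]_q^n."""
    return 2 * sum(chi(m) * (-1)**m * q**m * qnum(m + x, q)**n
                   for m in range(terms))

def E_chi_closed(n, x, q, d=3):
    """Finite double sum of the theorem, for conductor d."""
    tot = 0.0
    for j in range(d):
        inner = sum(comb(n, i) * (-1)**i * q**(i * (x + j))
                    * 2 / (1 + q**(d * (i + 1))) for i in range(n + 1))
        tot += chi(j) * (-q)**j * inner
    return tot / (1 - q)**n
```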
Let $\omega$ denote the Teichm\"uller character mod $p.$ For $x\in X^*,$ we set
\begin{equation}\label{Tei}
\langle x\rangle=[x]_q\omega^{-1}(x)=\frac{[x]_q}{\omega(x)}.
\end{equation}
Note that since $|\langle x\rangle-1|_p<p^{-1/(p-1)},$ $\langle x\rangle^s$ is defined
by $\exp(s\log_p\langle x\rangle)$ for $|s|_p\leq1$ (cf. \cite{K07,K08-1,Sa}).
We note that $\langle x\rangle^s$ is analytic for $s\in\Z.$
We define an interpolation function for Carlitz type $q$-Euler numbers.
For $s\in\Z,$
\begin{equation}\label{p-l-q}
l_{p,q}(s,\chi)=\int_{X^*}\langle x\rangle^{-s}\chi(x)q^xd\mu_{-1}(x).
\end{equation}
Then $l_{p,q}(s,\chi)$ is analytic for $s\in\Z.$
The values of this function at non-positive integers are given by
\begin{theorem}\label{int-p-ell}
For integers $n\geq0,$
$$l_{p,q}(-n,\chi)=E_{n,\chi_n,q}-\chi_n(p)[p]_q^nE_{n,\chi_n,q^p},$$
where $\chi_n=\chi\omega^{-n}.$ In particular, if $\chi=\omega^n,$ then
$l_{p,q}(-n,\omega^n)=E_{n,q}-[p]_qE_{n,q^p}.$
\end{theorem}
\begin{proof}
$$
\begin{aligned}
l_{p,q}(-n,\chi)&=\int_{X^*}\langle x\rangle^{n}\chi(x)q^xd\mu_{-1}(x) \\
&=\int_{X}[x]_q^n\chi_n(x)q^xd\mu_{-1}(x)-\int_{X}[px]_q^n\chi_n(px)q^{px}d\mu_{-1}(px) \\
&=\int_{X}[x]_q^n\chi_n(x)q^xd\mu_{-1}(x)-[p]_q^n\chi_n(p)\int_{X}[x]_{q^p}^n\chi_n(x)q^{px}d\mu_{-1}(x).
\end{aligned}
$$
Therefore by (\ref{open}), the theorem is proved.
\end{proof}
Let $\chi$ be a Dirichlet character with odd conductor $d=d_\chi\in\mathbb N.$
Let $F$ be a positive integer multiple of $p$ and $d.$ Then by (\ref{tw-q-E-gen}) and (\ref{prob-1}), we have
\begin{equation}\label{rabbe}
\begin{aligned}
F_{q,\chi}(x,t)&=2\sum_{m=0}^\infty\chi(m)(-1)^mq^me^{[m+x]_qt} \\
&=2\sum_{a=0}^{F-1}\chi(a)(-q)^a\sum_{k=0}^\infty(-q)^{Fk}e^{[F]_q\left[k+\frac{x+a}{F}\right]_{q^F}t} \\
&=\sum_{n=0}^\infty\left([F]_q^n\sum_{a=0}^{F-1}\chi(a)(-q)^aE_{n,q^F}\left(\frac{x+a}{F}\right)\right)\frac{t^n}{n!}.
\end{aligned}
\end{equation}
Therefore we obtain the following formula:
\begin{equation}\label{rabbe-lem}
E_{n,\chi,q}(x)=[F]_q^n\sum_{a=0}^{F-1}\chi(a)(-q)^aE_{n,q^F}\left(\frac{x+a}{F}\right).
\end{equation}
If $\chi_n(p)\neq0,$ then $(p,d_{\chi_n})=1,$ so that $F/p$ is a multiple of $d_{\chi_n}.$
From (\ref{rabbe-lem}), we derive
\begin{equation}\label{rabbe-ell}
\begin{aligned}
\chi_n(p)[p]_q^nE_{n,\chi_n,q^p}
&=\chi_n(p)[p]_q^n[F/p]_{q^p}^n\sum_{a=0}^{F/p-1}\chi_n(a)(-q^p)^aE_{n,(q^p)^{F/p}}\left(\frac{a}{F/p}\right)\\
&=[F]_q^n\sum_{\substack{a=0\\p\mid a}}^F\chi_n(a)(-q)^aE_{n,q^F}\left(\frac{a}{F}\right).
\end{aligned}
\end{equation}
Thus we have
\begin{equation}\label{ell-le}
\begin{aligned}
E_{n,\chi_n,q}-\chi_n(p)[p]_q^nE_{n,\chi_n,q^p}
&=[F]_q^n\sum_{\substack{a=0\\p\nmid a}}^{F-1}\chi_n(a)(-q)^aE_{n,q^F}\left(\frac{a}{F}\right).
\end{aligned}
\end{equation}
By Corollary \ref{q-E-re-co}, we easily see that
\begin{equation}\label{dis-poly}
\begin{aligned}
E_{n,q^F}\left(\frac{a}{F}\right)&=\sum_{k=0}^n\binom nk \left[\frac{a}{F}\right]_{q^F}^{n-k}q^{ka}E_{k,q^F} \\
&=[F]_q^{-n}[a]_q^n\sum_{k=0}^n\binom nk \left[\frac{F}{a}\right]_{q^a}^{k}q^{ka}E_{k,q^F}.
\end{aligned}
\end{equation}
From (\ref{ell-le}) and (\ref{dis-poly}), we have
\begin{equation}\label{ell-le-ft}
\begin{aligned}
E_{n,\chi_n,q}&-\chi_n(p)[p]_q^nE_{n,\chi_n,q^p} \\
&=[F]_q^n\sum_{\substack{a=0\\p\nmid a}}^{F-1}\chi_n(a)(-q)^aE_{n,q^F}\left(\frac{a}{F}\right)\\
&=\sum_{\substack{a=0\\p\nmid a}}^{F-1}\chi(a)\langle a\rangle^n(-q)^a
\sum_{k=0}^\infty\binom nk \left[\frac{F}{a}\right]_{q^a}^{k}q^{ka}E_{k,q^F},
\end{aligned}
\end{equation}
since $\chi_n(a)=\chi(a)\omega^{-n}(a).$ From Theorem \ref{int-p-ell} and (\ref{ell-le-ft}),
\begin{equation}\label{ne-val}
\begin{aligned}
l_{p,q}(-n,\chi)=\sum_{\substack{a=0\\p\nmid a}}^{F-1}\chi(a)\langle a\rangle^n(-q)^a
\sum_{k=0}^\infty\binom nk \left[\frac{F}{a}\right]_{q^a}^{k}q^{ka}E_{k,q^F}
\end{aligned}
\end{equation}
for $n\in\mathbb Z^+.$
Therefore we have the following theorem.
\begin{theorem}
Let $F$ be a positive integer multiple of $p$ and $d=d_\chi,$ and let
$$
l_{p,q}(s,\chi)=\int_{X^*}\langle x\rangle^{-s}\chi(x)q^xd\mu_{-1}(x), \quad s\in\Z.
$$
Then $l_{p,q}(s,\chi)$ is analytic for $s\in\Z$ and
$$l_{p,q}(s,\chi)=\sum_{\substack{a=0\\p\nmid a}}^{F-1}\chi(a)\langle a\rangle^{-s}(-q)^a
\sum_{k=0}^\infty\binom{-s}k \left[\frac{F}{a}\right]_{q^a}^{k}q^{ka}E_{k,q^F}.$$
Furthermore, for $n\in\mathbb Z^+$
$$l_{p,q}(-n,\chi)=E_{n,\chi_n,q}-\chi_n(p)[p]_q^nE_{n,\chi_n,q^p}.$$
\end{theorem}
\section{Introduction}
The ability to effectively prove theorems, by both human and mechanical means, is crucial to formal methods. Formal proofs in mathematics and computer science are studied because they can be verified by a very simple computer program. An open problem in the Computer Mathematics community is the feasibility of fully formalizing mathematical proofs \cite{Bar02a}. Here, feasibility is understood as the capability to generate correct formal mathematics with an effort comparable to that of writing a mathematical paper in, say, \LaTeX.
Traditionally, proofs of theorems and formal deductions in deduction systems
are defined, expressed, reasoned about, and performed, in principle, through
formal objects called deduction trees. Typical of these structured forms of
defining formal deductions are the natural deduction and sequent systems due
to Gentzen. Formal deductions are considered too strict and detailed to be used
in practice by the working mathematician. In fact, except for very short
proofs, the use of deduction trees easily gets messy, hard to read, and awkward
to explain and reason about.
Notwithstanding, for more than thirty years now, a revolution on the way of
reasoning and proving in mathematics has gained a substantial community of
enthusiastic practitioners. The \textit{calculational style} of presenting
proofs introduced by Dijkstra and Scholten \cite{DS90} is a \textit{formal}
deduction method based on formula manipulation through linear calculational
formats \cite{vGast90}. This deduction method has been adopted in some books on
theoretical computer science \cite{Gri93,Bac03,FvG99,Misra01} and appeared in
papers on set theory, discrete mathematics and combinatorics \cite{AAB17,
Boh05, Boh15}.
It was originally devised as an informal but rigorous and practical
theorem-proving discipline, in which, on one hand, use of equational reasoning
(understood as mainly based on the preeminence of logical equivalence and
equalities) is preferred over the traditional one based on logical implication;
and, on the other hand, the tree-like way of representing formal derivations is
replaced by what Lifschitz called \textit{calculations} \cite{Lif01}.
\textit{Calculational logic} and proof methods were formalized for
\textit{classical} predicate logic by Gries and
Schneider~\cite{Gri93,DBLP:conf/procomet/gries98} and, subsequently,
streamlined by Lifschitz \cite{Lif01}. An analogous approach for the case
of intuitionistic predicate logic was developed by one of the authors in
\cite{Boh08}.
The purpose of this article is to introduce in HoTT a calculational form of reasoning and proving similar to that proposed in \cite{Boh08} for intuitionistic logic. In order to formally express HoTT with equality and equivalence playing a preeminent role, we find inspiration in the Curry-Howard isomorphism, based on the facts that, on one hand, HoTT is strongly based on the homotopic character of equality and equivalence, and, on the other hand, a calculational version of intuitionistic first order logic (ICL) is well established \cite{Boh08}. For this, homotopic equivalence in HoTT plays the role of logical equivalence in ICL, and \textit{deductive chains}, introduced in this work, play the role of formal \textit{calculations}, the term introduced by Lifschitz \cite{Lif01} to formalize the Dijkstra-Scholten calculational format. Through this form of reasoning, we identify judgments in HoTT that represent, under the Curry-Howard isomorphism, the equational rules of the ICL system. In other words, we want not only to give equivalence a preeminent role in HoTT, but also to endow HoTT with a deduction method based on equational algebraic manipulations that allows for elegant and formal proof constructions, providing a calculational formalization of theorem proving for the case of HoTT by producing (hopefully) human-readable formal proofs based on the linear formats characteristic of the calculational style.
In order to do so, we extend the syntax of type theory by introducing an additional judgment that gives rise to a conservative extension which facilitates readable proof calculations. We also introduce, as mentioned above, an {\it inhabitation format}, that is, a syntactic tool corresponding to the calculational proof format introduced by Dijkstra and Scholten and formalized by Lifschitz under the name of \textit{calculation}.
Additionally, we prove the judgments in HoTT corresponding to the basic equational rules of the ICL system. Some of these rules show that the induction operators of some of the basic types in HoTT are actually homotopic equivalences, a fact that turns out to be true for the remaining induction operators as well.
In section 2, we present a brief overview of the main logic principles or rules (algebraic properties, mainly given by equivalences) and notations (Eindhoven quantifiers) used to prove logic theorems calculationally, together with the type judgments which correspond, under the Curry-Howard isomorphism, to those equational rules. In section 3, we extend HoTT conservatively by introducing a new inhabitation judgment, which corresponds to a forgetful version of the usual inhabitation judgment, and present some structural rules which will be needed in later sections. In section 4, we define deductive chains as an alternative way of expressing certain derivations of judgments which are sufficient for argumentation in HoTT.
In section 5, we present the basic types of HoTT following the usual four rules: formation, construction, elimination and computation, but giving the elimination rules a fundamental role as links of deductive chains. In section 6, we introduce the notion of equivalence of types following \cite{hottbook} and study the identification of pairs, functions and natural numbers using deductive chains. Section 7 presents the replacement-of-equivalents-by-equivalents property of homotopic type-equivalence, which we call the Leibniz properties of type-equivalence. In section 8, we prove that all induction operators are actually equivalences, which gives equality and equivalence a preeminent role in HoTT. In section 9, we prove the equational rules stated in section 2 which were not proved in the preceding sections. In section 10, we present an informal method to find canonical functions between types.
\section{Eindhoven quantifier logic and notation}\label{Eind}
At the \textit{THE} project in Eindhoven, researchers led by E.W. Dijkstra, in
the 1970's, devised a uniform notation for quantification in first order logic
and related areas \cite{DS90}. By
$\cuan{\mathcal{Q}}{x\!:\!T}{range}{term}$\footnote{The original Eindhoven
style uses colons as separators; the syntax with $|$ and $\boldsymbol{\cdot}$ is one of
the many subsequent notational variations based on their innovation.} was meant
that quantifier $\mathcal{Q}$ binds variable $x$ of type $T$ to be constrained
to satisfy formula \textit{range} within the textual scope delimited by the
outer parentheses $(...)$, that expression \textit{term} is evaluated for
each such $x$ and that those values then are combined via an associative and
commutative operator related to quantifier $\mathcal{Q}$. For brevity, we refer to Eindhoven quantifiers as \textit{operationals}. For the case of
logical operationals (corresponding to the universal and existential
quantifiers), the associated operators are respectively, conjunction and
disjunction considered as binary boolean operations.
\begin{center}
$\cuan{\forall}{x\!:\!T}{range}{term}$ means\; for all $x$ in $T$ satisfying
\textit{range} we have \textit{term},\\
$\cuan{\exists}{x\!:\!T}{range}{term}$ means\; for some $x$ in $T$ satisfying
\textit{range} we have \textit{term}.
\end{center}
A general shorthand applying to these notations is that an omitted
$|$\textit{range} defaults to $|$\textit{true}. The following so called
\textit{trade rules} translate these logical notations to the usual first order
logic formulas\footnote{$\lor$ and $\land$ denote disjunction and conjunction
respectively, $\Rightarrow$ denote implication and $\equiv$ denotes equivalence.
If $E$ is a symbolic expression, $E[k/x]$ is the expression obtained by
replacing every free occurrence of `$x$' in $E$ by `$k$'.}.
\begin{description}
\item [[Trade\!\!]]\quad $\cuan{\forall}{x\!:\!T}{P}{Q} \equiv
\cuant{\forall}{x\!:\!T}{P \!\Rightarrow\! Q}$\\
\hspace*{7mm} $\cuan{\exists}{x\!:\!T}{P}{Q} \equiv \cuant{\exists}{x\!:\!T}{P
\!\land\! Q}$
\end{description}
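Over a finite type, the [Trade] rules can be checked mechanically. The following sketch (with a placeholder finite range $T$ and placeholder predicates $P$, $Q$, none of which come from the text) expresses both sides via Python's bounded quantifiers \texttt{all} and \texttt{any}:

```python
T = range(10)
P = lambda x: x % 2 == 0      # range predicate (placeholder)
Q = lambda x: x * x < 50      # term predicate (placeholder)

# (forall x : T | P . Q)  ==  (forall x : T . P => Q)
assert all(Q(x) for x in T if P(x)) == all((not P(x)) or Q(x) for x in T)
# (exists x : T | P . Q)  ==  (exists x : T . P and Q)
assert any(Q(x) for x in T if P(x)) == any(P(x) and Q(x) for x in T)
```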
The following equational rules (i.e. expressed as logical equivalences)
correspond to some of the most basic logical axioms and theorems of a
calculational version of intuitionistic first order logic \cite{Boh08}.
\begin{description}
\item [[One-Point\!\!]]\quad $\cuan{\forall}{x\!:\!T}{x\!=\!a}{P}
\equiv P[a/x]$\\
\hspace*{14mm} $\cuan{\exists}{x\!:\!T}{x\!=\!a}{P} \equiv
P[a/x]$
\item [[Equality\!\!]]\quad $\cuan{\forall}{x,y\!:\!T}{x\!=\!y}{P}
\equiv \cuant{\forall}{x\!:\!T}{P[x/y]}$\\
\hspace*{11mm} $\cuan{\exists}{x,y\!:\!T}{x\!=\!y}{P} \equiv
\cuant{\exists}{x\!:\!T}{P[x/y]}$
\item [[Range Split\!\!]]\quad $\cuan{\forall}{x\!:\!T}{P\lor Q}{R} \equiv
\cuan{\forall}{x\!:\!T}{P}{R} \land \cuan{\forall}{x\!:\!T}{Q}{R}$\\
\hspace*{17mm} $\cuan{\exists}{x\!:\!T}{P\lor Q}{R} \equiv
\cuan{\exists}{x\!:\!T}{P}{R} \lor \cuan{\exists}{x\!:\!T}{Q}{R}$
\item [[Term Split\!\!]]\quad $\cuan{\forall}{x\!:\!T}{P}{Q\land R} \equiv
\cuan{\forall}{x\!:\!T}{P}{Q} \land \cuan{\forall}{x\!:\!T}{P}{R}$\\
\hspace*{15mm} $\cuan{\exists}{x\!:\!T}{P}{Q\lor R} \equiv
\cuan{\exists}{x\!:\!T}{P}{Q} \lor \cuan{\exists}{x\!:\!T}{P}{R}$
\item [[Translation\!\!]]\quad $\cuan{\forall}{x\!:\!J}{P}{Q} \equiv
\cuan{\forall}{y\!:\!K}{P[f(y)/x]}{Q[f(y)/x]}$\\
\hspace*{16mm} $\cuan{\exists}{x\!:\!J}{P}{Q} \equiv
\cuan{\exists}{y\!:\!K}{P[f(y)/x]}{Q[f(y)/x]}$\\
where $f$ is a bijection that maps values of type $K$ to values of type $J$.
\item [[Congruence\!\!]]\quad $\cuan{\forall}{x\!:\!T}{P}{Q\equiv R}
\Rightarrow (\cuan{\forall}{x\!:\!T}{P}{Q} \equiv
\cuan{\forall}{x\!:\!T}{P}{R})$\\
\hspace*{17mm} $\cuan{\forall}{x\!:\!T}{P}{Q\equiv R} \Rightarrow
(\cuan{\exists}{x\!:\!T}{P}{Q} \equiv \cuan{\exists}{x\!:\!T}{P}{R})$
\item[[Antecedent\!\!]] \quad $R\Rightarrow \cuan{\forall}{x\!:\!T}{P}{Q}\equiv \cuan{\forall}{x\!:\!T}{P}{R\Rightarrow Q}$\\
\hspace*{18mm}$R\Rightarrow \cuan{\exists}{x\!:\!T}{P}{Q}\equiv \cuan{\exists}{x\!:\!T}{P}{R\Rightarrow Q}$\\
when there are no free occurrences of $x$ in $R$.
\item[[Leibniz principles\!\!]] \quad $ \cuan{\forall}{x,y\!:\!T}{x=y}{f(x)=f(y)}$\\
\hspace*{29.5mm}$\cuan{\exists}{x,y\!:\!T}{x=y}{P(x)\equiv P(y)}$\\
where $f$ is a function that maps values of type $T$ to values of any other type and $P$ is a predicate.
\end{description}
All of these rules have their counterpart in HoTT. In fact, we derive the following judgments, which correspond to the above equational rules. In order to write these judgments we have to use the basic types of HoTT and the homotopic equivalence\footnote{The judgment $A\simeq B\!<:$ means that types $A$ and $B$ are equivalent.} that undertakes the role of logical equivalence in logical equational deductions, as well as the new judgment $A\!<:$, which asserts that $A$ is inhabited without specifying any object. The definition of homotopic equivalence will be presented in a later section. These are the corresponding rules in HoTT:
\begin{description}
\item [[One-Point\!\!]]\quad $\prod_{x:A}\prod_{p:x=a}P(x,p) \simeq P(a,\text{refl}_a)\!<:$\\ [0.1cm]
\hspace*{14mm} $\sum_{x:A}\sum_{p:x=a}P(x,p) \simeq P(a,\text{refl}_a)\!<:$
\item [[Equality\!\!]]\quad $\prod_{x:A}\prod_{y:A}\prod_{p:x=y}P(x,y,p) \simeq \prod_{x:A}P(x,x,\text{refl}_x)\!<:$\\ [0.1cm]
\hspace*{11mm} $\sum_{x:A}\sum_{y:A}\sum_{p:x=y}P(x,y,p) \simeq \sum_{x:A}P(x,x,\text{refl}_x)\!<:$
\item [[Range Split\!\!]]\quad $\prod_{x:A+B} P(x) \simeq \prod_{x:A}P(\text{inl}(x))\times \prod_{x:B}P(\text{inr}(x))\!<:$\\ [0.1cm]
\hspace*{16mm} $\sum_{x:A+B} P(x) \simeq \sum_{x:A}P(\text{inl}(x))+ \sum_{x:B}P(\text{inr}(x))\!<:$
\item [[Term Split\!\!]]\quad $\prod_{x:A} (P(x)\times Q(x)) \simeq \prod_{x:A}P(x)\times \prod_{x:A}Q(x)\!<:$\\ [0.1cm]
\hspace*{16mm} $\sum_{x:A} (P(x)+Q(x)) \simeq \sum_{x:A}P(x)+ \sum_{x:A}Q(x)\!<:$
\item [[Translation\!\!]]\quad $\prod_{x:A} P(x)\simeq \prod_{y:B}P(g(y))\!<:$\\ [0.1cm]
\hspace*{16mm} $\sum_{x:A} P(x) \simeq \sum_{y:B} P(g(y))\!<:$\\
where $g$ is an inhabitant of $B\simeq A$.
\item [[Congruence\!\!]]\quad $\prod_{x:A} (P(x)\simeq Q(x))\rightarrow (\prod_{x:A} P(x)\simeq \prod_{x:A}Q(x))\!<:$\\ [0.1cm]
\hspace*{16mm} $\prod_{x:A} (P(x)\simeq Q(x))\rightarrow (\sum_{x:A} P(x)\simeq \sum_{x:A}Q(x))\!<:$
\item[[Antecedent\!\!]] \quad $(R\rightarrow \prod_{x:A}Q(x))\simeq \prod_{x:A}(R\rightarrow Q(x))\!<:$\\ [0.1cm]
\hspace*{18mm}a) $\sum_{x:A}(R\rightarrow Q(x))\rightarrow (R\rightarrow \sum_{x:A}Q(x))\!<:$\\
when $R$ does not depend on $x$.\\ [0.1cm]
\hspace*{18mm}b) $\sum_{x:A}({\mathds 1}\rightarrow Q(x))\simeq ({\mathds 1}\rightarrow \sum_{x:A}Q(x))\!<:$\\
\item[[Leibniz principles\!\!]] \quad $\prod\limits_{x,y:A}x\!=\!y \rightarrow f(x) \!=\! f(y)\!<:$\\
\hspace*{29.5mm}$\prod\limits_{x,y:A}x\!=\!y \rightarrow P(x) \!\simeq\! P(y)\!<:$\\
where $f\!:\!A \rightarrow B$ and $P\!:\!A\to {\cal U}$ is a type family.
\end{description}
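Under the Curry-Howard reading, each of these equivalences has computational content. For instance, the $\Sigma$-version of [Term Split] is realized by an explicit pair of mutually inverse functions; the following sketch uses a hypothetical encoding in which a dependent pair is a tuple and a coproduct element is a tagged pair \texttt{('inl', v)} / \texttt{('inr', v)}:

```python
def split(pair):
    """(Sum x:A. P(x)+Q(x))  ->  (Sum x:A. P(x)) + (Sum x:A. Q(x))"""
    x, (tag, v) = pair
    return (tag, (x, v))

def merge(tagged):
    """The inverse direction of the equivalence."""
    tag, (x, v) = tagged
    return (x, (tag, v))

sample = (3, ('inl', 'proof-of-P(3)'))
assert merge(split(sample)) == sample
assert split(merge(('inr', (7, 'q')))) == ('inr', (7, 'q'))
```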
A surprising fact about these judgments is that some correspond to homotopic equivalence versions of elimination rules of basic types. In fact, we prove that all elimination rules of the basic types are homotopic equivalences.
\section{Extended Syntax of type theory} \label{sec::STT}
In this section we present a formulation of Martin-L\"of theory defining
terms, judgments and rules of inference inductively in the style of natural
deduction formalizations. To this formulation, we adjoin an additional
judgment yielding (by applying its deriving inference rules) a conservative
extension that allows to perform agile and readable proof calculations.
We suppose the reader is familiar with the syntax of Martin-L\"of type
theories, and give an overview of the version appearing in \cite{hottbook}.
\subsubsection*{Contexts} \label{subsub::1}
Contexts are finite lists of variable declarations
$(x_1\!:\!A_1,...,x_n\!:\!A_n)$, for $n\!\geq\!0$, where free variables
occurring in the $A_i$'s belong to $\{x_1,...,x_{i-1}\}$ when
$1\!\leq\!i\!\leq\!n$. This list may be empty and indicates that the distinct
variables $x_1,...,x_n$ are assumed to have types $A_1,...,A_n$, respectively.
We denote contexts with letters $\Sigma$ and $\Delta$, which may be juxtaposed
to form larger contexts.
The judgment $\Gamma\; ctx$ formally denotes the fact that $\Gamma$ is a well
formed context, introduced by the following rules of inference
\begin{align*}
& \; \inferrule*[right=ctx-EMP]{ }{\cdot ctx} & &
\inferrule*[right=ctx-EXT]{x_1\!:\!A_1,...,x_{n-1}\!:\!A_{n-1}
\vdash A_n\!:\!\mathcal{U}_i}{(x_1\!:\!A_1,... , x_n\!:\!A_n)\,ctx}
\end{align*}
with a side condition for the rule \textsc{ctx-EXT}: the variable $x_n$ must be
distinct from the variables $x_1,...,x_{n-1}$.
\subsubsection*{Forms of judgment} \label{subsub::2}
We first, consider the three usual basic judgments of type theory.
\begin{align*}
\Gamma\; ctx && \Gamma \vdash a\!:\!A && \Gamma \vdash a\equiv_A a'
\end{align*}
$\Gamma\; ctx$ expresses that $\Gamma$ is a (well-formed) context.
$\Gamma \vdash a\!:\!A$ denotes that a term $a$ has (inhabits) type $A$ in
context $\Gamma$. $\Gamma \vdash a\equiv_A a'$ means that $a$ and $a'$ are
definitionally equal objects of type $A$ in context $\Gamma$.
A fourth weaker and derived judgment, the \textit{inhabitation judgment}, will be useful for our purposes:
\[ \Gamma \vdash A\!<: \]
means that the type $A$ is inhabited in context $\Gamma$, that is, for some
term $a$, judgment $\Gamma \vdash a\!:\!A$ holds. This judgment corresponds to
a forgetful version of $\Gamma \vdash a\!:\!A$ where the mention of the term
$a$ inhabiting type $A$ is suppressed.
Since the main inference rule for introducing this judgment is
\[\inferrule*[]{\Gamma \vdash a\!:\!A}{\Gamma \vdash A\!<:}\]
and its remaining deriving inference rules correspond to forgetful versions of derived inference rules from judgments of the form $\Gamma \vdash a\!:\!A$,
this addition only brings forth a conservative extension of the theory.
\subsubsection*{Structural rules} \label{subsub::3}
The following rule expresses that a context holds assumptions, basically by saying that the typing judgments listed in the context may be derived.
\[\inferrule*[right=Vble]{(x_1\!:\!A_1,... , x_n\!:\!A_n)\,ctx}{x_1\!:\!A_1,...,x_n\!:\!A_n \vdash x_n\!:\!A_n}\]
Although the following rules, corresponding to the principles of \textit{substitution} and \textit{weakening}, are derivable by induction on all possible derivations, we state them explicitly. The principles corresponding to typing judgments are given by
\begin{align*}
& \; \inferrule*[right=Subst1]{\Gamma \vdash a\!:\!A \\ \Gamma,x\!:\!A,\Delta \vdash b\!:\!B}{\Gamma,\Delta[a/x] \vdash b[a/x]\!:\!B[a/x]} & &
\inferrule*[right=Wkg1]{\Gamma \vdash A\!:\!\mathcal{U}_i \\ \Gamma,\Delta \vdash b\!:\!B}{\Gamma,x\!:\!A,\Delta \vdash b\!:\!B}
\end{align*}
and the rules for the principles of judgmental (definitional) equality are
\begin{align*}
& \; \inferrule*[right=Subst2]{\Gamma \vdash a\!:\!A \\ \Gamma,x\!:\!A,\Delta \vdash b\!\equiv_B\!c}{\Gamma,\Delta[a/x] \vdash b[a/x]\!\equiv_{B[a/x]}\!c[a/x]} & &
\inferrule*[right=Wkg2]{\Gamma \vdash A\!:\!\mathcal{U}_i \\ \Gamma,\Delta \vdash b\equiv_B\!c}{\Gamma,x\!:\!A,\Delta \vdash b\!\equiv_B\!c}
\end{align*}
The following inference rules express the fact that definitional equality is an equivalence relation preserved by typing.
\begin{align*}
& \; \inferrule*[]{\Gamma \vdash a\!:\!A}{\Gamma \vdash a\!\equiv_A\!a} & & \inferrule*[]{\Gamma \vdash a\!\equiv_A\!b}{\Gamma \vdash b\!\equiv_A\!a} & & \inferrule*[right=Tran]{\Gamma \vdash a\!\equiv_A\!b \\ \Gamma \vdash b\!\equiv_A\!c}{\Gamma \vdash a\!\equiv_A\!c}
\end{align*}
\begin{align*}
& \; \inferrule*[]{\Gamma \vdash a\!:\!A \\ \Gamma \vdash A\!\equiv\!B\!:\!\mathcal{U}_i}{\Gamma \vdash a\!:\!B} & &
\inferrule*[]{\Gamma \vdash a\!\equiv_A\!b \\ \Gamma \vdash A\!\equiv\!B\!:\!\mathcal{U}_i}{\Gamma \vdash a\!\equiv_B\!b}
\end{align*}
Besides the inference rule
\[\inferrule*[right=Inhab]{\Gamma \vdash a\!:\!A}{\Gamma \vdash A\!<:}\]
introducing the inhabitation judgment, we present the following deriving inference rules for this judgment.
\begin{align*}
& \; \inferrule*[right=Fappl]{\Gamma \vdash A\!<: \\ \Gamma \vdash A\!\rightarrow\! B\!<:}{\Gamma\vdash B\!<:} & &
\inferrule*[right=Fcomp]{\Gamma \vdash A\!\rightarrow\!B\!<: \\ \Gamma \vdash B\!\rightarrow\!C\!<:}{\Gamma \vdash A\!\rightarrow\!C\!<:}
\end{align*}
These rules correspond to forgetful versions of the following rules that are easily derived from the original unextended syntax of type theory.
\begin{align*}
& \; \inferrule*[]{\Gamma \vdash a\!:\!A \\ \Gamma \vdash f\!:\!A\!\rightarrow\! B}{\Gamma\vdash f(a)\!:\!B} & &
\inferrule*[]{\Gamma \vdash f\!:\!A\!\rightarrow\!B \\ \Gamma \vdash g\!:\!B\!\rightarrow\!C\!}{\Gamma \vdash g\!\circ\!f\!:\!A\!\rightarrow\!C}
\end{align*}
An additional structural rule applying definitional equality of types to the
inhabitation judgment, that we explicitly use, is
\[\inferrule*[right=Tsubs]{\Gamma \vdash A\!<: \\ \Gamma \vdash
A\!\equiv\! B}{\Gamma\vdash B\!<:}\]
\section{Deductive Chains in Type Theory}
\bigskip
In classical logic, the task is to derive arbitrary valid formulas from a small
set of axiom schemata. In type theory, the basic task is to show that a certain
type can be inhabited from the inhabitation of other types which are related to
the first through the inference rules introduced before. This will be done by
means of an {\it inhabitation format}, a syntactic tool that is analogous to the
calculational proof format introduced by Dijkstra and Scholten \cite{DS90}. \\ [0.1cm]
Before defining an inhabitation format, we present the following inference rule, which can be derived easily from the definition of homotopic equivalence (\cite{hottbook}, (2.4.11), p. 79):
\[\inferrule*[right=Heq]{\Gamma \vdash A\simeq B\!<:}{\Gamma\vdash A\to B\!<:},\]
and make explicit four fairly obvious inference rules, which are used implicitly in type theory most of the time, and correspond to the fact that judgmentally equal things can always be substituted for each other:
\begin{align*}
& \; \inferrule*[right=Repl1l]{\Gamma \vdash A\equiv B}{\Gamma \vdash A\to C\equiv B\to C} &
& \inferrule*[right=Repl1r]{\Gamma \vdash A\equiv B}{\Gamma \vdash C\to A\equiv C\to B} &
\end{align*}
\begin{align*}
& \; \inferrule*[right=Repl2l]{\Gamma \vdash A\equiv B}{\Gamma \vdash A\simeq C\equiv B\simeq C} &
& \inferrule*[right=Repl2r]{\Gamma \vdash A\equiv B}{\Gamma \vdash C\simeq A\equiv C\simeq B} &
\end{align*}
Given types $A$ and $B$, we temporarily write $A\leadsto B$ to represent the judgment $A\to B\!<:$, the judgment $A\equiv B$, or the judgment $A\simeq B\!<:$. We claim that for all $n\geq 3$, and given a context $\Gamma$, we have the derivation
\[\inferrule*[right=]{\Gamma \vdash A_1\leadsto A_2 \\ \Gamma \vdash A_2\leadsto A_3\\ \cdots\\ \Gamma \vdash A_{n-1}\leadsto A_{n}}{\Gamma \vdash A_1\leadsto A_n}\]
where the conclusion $\Gamma \vdash A_1\leadsto A_n$ corresponds to
$\Gamma \vdash A_1\to A_n\!<:$ if at least one of the premises is a judgment of the form $\Gamma \vdash A\to B\!<:$, or to $\Gamma \vdash A_1\simeq A_n\!<:$ if none of the premises is of the form $\Gamma \vdash A\to B\!<:$ and at least one is of the form $\Gamma \vdash A\simeq B\!<:$, or to $\Gamma \vdash A_1\equiv A_n$ if all the premises are of the form $\Gamma \vdash A\equiv B$.\\ [0.1cm]
We prove our claim by induction. If $n=3$, we have to show that
\[\inferrule*[right=BaseCase]{\Gamma \vdash A_1\leadsto A_2 \\ \Gamma \vdash A_2\leadsto A_3}{\Gamma \vdash A_1\leadsto A_3}\]
Combining the possibilities for $\leadsto$ we have nine cases.\\ [0.1cm]
Cases $(\equiv, \equiv)$, $(\to,\to)$ and $(\simeq,\simeq)$ are \textsc{Tran}, \textsc{Fcomp}, and transitivity of $\simeq$ (\cite{hottbook},Lemma 2.4.12, p. 79), respectively. \\ [0.1cm]
We only derive the first one of the cases $(\to, \equiv)$, $(\equiv, \to)$, $(\simeq, \equiv)$, and $(\equiv, \simeq)$:
\[\inferrule*[right=]{\Gamma \vdash A_1\to A_2\!<: \\ \Gamma \vdash A_2\equiv A_3}{\Gamma \vdash A_1\to A_3\!<:},\]
because the rest are derived in the same way. In fact,
\begin{prooftree}
\AxiomC{$\Gamma \vdash A_1\to A_2\!<:$}
\RightLabel{\textsc{Repl1l}}
\AxiomC{$\Gamma \vdash A_2\equiv A_3$}
\UnaryInfC{$\Gamma \vdash A_1\to A_2\equiv A_1\to A_3$}
\RightLabel{\textsc{Tsubs}}
\BinaryInfC{$\Gamma \vdash A_1\to A_3\!<:$}
\end{prooftree}
From cases $(\to,\simeq)$ and $(\simeq,\to)$ we derive only the first one
\[\inferrule*[right=]{\Gamma \vdash A_1\to A_2\!<: \\ \Gamma \vdash A_2\simeq A_3\!<:}{\Gamma \vdash A_1\to A_3\!<:},\]
the second is done in the same way. In fact,
\begin{prooftree}
\AxiomC{$\Gamma \vdash A_1\to A_2\!<:$}
\RightLabel{\textsc{Heq}}
\AxiomC{$\Gamma \vdash A_2\simeq A_3\!<:$}
\UnaryInfC{$\Gamma \vdash A_2\to A_3\!<:$}
\RightLabel{\textsc{Fcomp}}
\BinaryInfC{$\Gamma \vdash A_1\to A_3\!<:$}
\end{prooftree}
Now, let us suppose that we have the derivation
\[\inferrule*[right=IndHyp]{\Gamma \vdash A_1\leadsto A_2 \\ \Gamma \vdash A_2\leadsto A_3\\ \cdots\\ \Gamma \vdash A_{n-2}\leadsto A_{n-1}}{\Gamma \vdash A_1\leadsto A_{n-1}}.\]
Then,
\begin{prooftree}
\AxiomC{$\Gamma \vdash A_1\leadsto A_2\cdots \Gamma \vdash A_{n-2}\leadsto A_{n-1}$}
\RightLabel{\textsc{IndHyp}}
\UnaryInfC{$\Gamma \vdash A_1\leadsto A_{n-1}$}
\AxiomC{$\Gamma \vdash A_{n-1}\leadsto A_{n}$}
\RightLabel{\textsc{BaseCase}}
\BinaryInfC{$\Gamma \vdash A_1\leadsto A_{n} $}
\end{prooftree}
This proves our claim.\\ [0.1cm]
Due to the rules \textsc{Fappl}, \textsc{Tsubs} and \textsc{Heq} we have the derivation
\[\inferrule*[right=]{\Gamma \vdash a:A \\ \Gamma \vdash A\leadsto B}{\Gamma \vdash B\!<:}.\]
Let us suppose a given context $\Gamma$. A \textit{deductive chain} is a derivation of the form
\begin{equation}\label{Deriva}
\inferrule*[right=]{\overset{\vdots}{\Gamma \vdash a:A_1} \\ \overset{\vdots}{\Gamma \vdash A_1\leadsto A_2} \\ \cdots\\ \overset{\vdots}{\Gamma \vdash A_{n-1}\leadsto A_{n}}}{\Gamma \vdash A_{n}\!<:}.
\end{equation} represented schematically as a vertical deductive chain:
\[
\begin{array}{rl}
& A_n \\ \leftrightarrows & \\ & A_{n-1} \\ & \vdots \\ \leftrightarrows &\\ & A_2 \\ \leftrightarrows & \\ & A_1 \\ \stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}} & \left\langle \textit{inhabitation statement} \right\rangle\\ & a
\end{array}
\]
These chains, and their concrete versions, will be referred to as \textit{inhabitation formats}.
Each link
\[
\begin{array}{rl}
& B \\ \leftrightarrows & \\ & A
\end{array}
\]
in the above format, corresponds to one of the following concrete versions:
\[
\begin{array}{l}
\phantom{\leftarrow } B \\ \leftarrow \left\langle\!:\,\, ; \textit{statement of inhabitation} \right\rangle \\ \phantom{\leftarrow } A
\end{array}
\]
called \textit{consequence link},
\[
\begin{array}{l}
\phantom{\leftarrow } B \\ \equiv \left\langle \textit{evidence of equivalence} \right\rangle \\ \phantom{\leftarrow } A
\end{array}\]
called \textit{equivalence link}, or
\[
\begin{array}{l}
\phantom{\leftarrow } B \\ \simeq \left\langle\!:\,\, ; \textit{statement of inhabitation} \right\rangle \\ \phantom{\leftarrow } A
\end{array}
\]
called \textit{homotopic equivalence link}. The closing link, that is, the link at the bottom of the deductive chain,
\[
\begin{array}{rl}
& A\\ \stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}} & \left\langle \textit{inhabitation statement} \right\rangle\\ & a
\end{array}
\]
is called \textit{inhabitation link}.\\ [0.1cm]
In short, this inhabitation format is a deductive chain that represents the concatenation of the premises of a derivation of the form (\ref{Deriva}). Each link of the chain is a judgment of the form $A\!\rightarrow\! B\!<:$, $A\!\equiv\!B$, $A\!\simeq\!B\!<:$ or $a\!:\!A$ written vertically, together with evidence or a statement supporting it, which is written between angular parentheses. \\ [0.1cm]
If $f:A\rightarrow B$, $g:B\rightarrow C$,
$h:C\rightarrow D$ and $a\!:\!A$ then $h(g(f(a)))\!:\!D$. This detailed account
of inhabitation is represented by the following chain:
\[
\begin{calcu}
\expro{D}
\\
\explo{\leftarrow}{\!\!:\,\,$h$}
\\
\expro{C}
\\
\explo{ \leftarrow}{\!\!:\,\,$g$}
\\
\expro{B}
\\
\explo{ \leftarrow}{\!\!:\,\,$f$}
\\
\expro{A}
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{evidence of inhabitation}
\\
\expro{a.}
\end{calcu}
\]
This chain derives not only that $D$ is inhabited, but also that $D$ is inhabited by $h(g(f(a)))$. \\ [0.1cm]
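Read computationally, this chain is nothing but the composite $h\circ g\circ f$ applied to the initial inhabitant $a$; a sketch with placeholder functions:

```python
f = lambda a: a + 1      # f : A -> B  (placeholder)
g = lambda b: b * 2      # g : B -> C  (placeholder)
h = lambda c: str(c)     # h : C -> D  (placeholder)

a = 20
# Each consequence link applies one function; the chain yields h(g(f(a))).
assert h(g(f(a))) == '42'
```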
Before illustrating the use of deduction chains we introduce some basic
types in order to present some consequence links which come with their
specifications.
\section{Basic Types}\label{DistArr}
We follow the general pattern for introducing new types in Type Theory
presented in the HoTT book \cite{hottbook}. The specification of a type
consists mainly of four steps: (i) \textit{Formation rules}, (ii) \textit{Construction rules},
(iii) \textit{Elimination rules}, and (iv) \textit{Computation rules}. Here, we express the elimination rules in terms of consequence links.\\ [0.1cm]
We assign a special Greek letter to each induction operator introduced in the respective elimination rule. Namely
\[\begin{array}{|c|c|c|c|c|c|c|} \hline
\text{Type} & \Sigma & + & \mathbb N & = & \mathds O & \mathds 1\\ \hline
\text{Induction operator}& \boldsymbol{\sigma} & \boldsymbol{\kappa} & \boldsymbol{\nu} & \boldsymbol{\iota} & \boldsymbol{o} & \boldsymbol{\mu} \\ \hline
\end{array}
\]
\textbf{$\Pi$-types}. The dependent function types, or $\Pi$-types, are the most fundamental basic
types, and their elimination rule does not provide links for deductive chains. \\
[0.3 cm]
Given types $A\!:\!{\cal U}$ and $B\!:\!A\rightarrow{\cal U}$ we form the type
$\prod_{x:A}B(x)\!:\!\mathcal{U}$. For $b\!:\!B(x)$, in context $x\!:\!A$, we construct
$\lambda(x\!:\!A).b$ of type
$\prod_{x:A}B(x)$.\\ [0.1cm]
If $f\!:\!\prod_{x:A}B(x)$ and $a\!:\!A$, then $f(a)\!:\!B[a/x]$, and the
computation rule is
\[(\lambda(x:A).b)(a)\equiv b[a/x]\]
When $B$ does not depend on the objects of $A$, the product type is the function
type $A\rightarrow B$:
\[
\prod\limits_{x:A}B(x)\; \equiv \; A\rightarrow B.
\]
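Computationally, the rule $(\lambda(x:A).b)(a)\equiv b[a/x]$ is ordinary $\beta$-reduction, i.e., function application in any language with first-class functions (a sketch with a placeholder body $b$):

```python
b = lambda x: x * x + 1      # a placeholder body b, with x abstracted
a = 5
# (lam x. b)(a) computes to b[a/x]
assert (lambda x: b(x))(a) == b(a) == 26
```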
The propositional reading of $f:\prod_{x: A} B (x)$ is that $f$ is a proof
that all objects of type $A$ satisfy the property $B$. We use this {\it
semantics} throughout the paper as necessary. Incidentally, the elimination
rules
of $\Sigma$-types, co-product types, $\mathbb N$-type, and $W$-types, establish
that to prove that all objects of these types satisfy a property,
you have to prove that their constructed objects satisfy the property, and for
this, the rule introduces an induction operator fulfilling that task.\\ [0.1cm]
One useful property of $\Pi$-types is \textsl{$\Pi$-distribution over arrows}. Let us suppose that for each $x\!:\!A$ we have a function $\varphi_x:P(x)\to
Q(x)$. Then we can define the function
$$\Delta : (\prod_{x:A}P(x))\to
(\prod_{x:A}Q(x))$$ by
$
\Delta(u)(x):\equiv \varphi_x(u(x)).
$
This shows that if $\prod_{x:A}(P(x)\to Q(x))<:$ then $(\prod_{x:A}P(x))\to
\prod_{x:A}Q(x)<:$. This property is known as $\Pi$-distribution over arrows
and
is frequently used in deductive chains as the following consequence link
\begin{equation}\label{DistArrow}
\begin{calcu}
\expro{\prod_{x:A}Q(x)}
\\
\explo{\leftarrow}{\!:\,$\Delta$\,;\, Definition of $\varphi_x$}
\\
\expro{\prod_{x:A}P(x)}
\end{calcu}
\end{equation}
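The definition of $\Delta$ can be mirrored directly when $A$ is finite, modelling a dependent function as a dictionary (a sketch; the family $\varphi$ and the elements of $A$ are placeholders):

```python
A = ['x0', 'x1']
# phi_x : P(x) -> Q(x), one function per point of A (placeholders)
phi = {'x0': lambda p: p + 1, 'x1': lambda p: p * 10}

def delta(u):
    """Delta(u)(x) := phi_x(u(x)), sending Prod_x P(x) to Prod_x Q(x)."""
    return {x: phi[x](u[x]) for x in A}

u = {'x0': 1, 'x1': 2}       # an inhabitant of Prod_x P(x)
assert delta(u) == {'x0': 2, 'x1': 20}
```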
Later, in section \ref{InhArr}, we explain a method to find
definitions of functions such as the one for $\Delta$.\\ [0.1cm]
\textbf{$\Sigma$-types}. The dependent pair types, or $\Sigma$-types, are the types whose inhabitants are
dependent pairs.\\ [0.3 cm]
Given $A\!:\!\mathcal{U}$ and $B\!:\!A\rightarrow\mathcal{U}$ we form
$\sum_{x:A}B(x)\!:\!\mathcal{U}$ and if $a\!:\!A$ and $b\!:\!B[a/x]$ then
$(a,b)\!:\! \sum_{x:A}B(x)$.\\ [0.1cm]
In order to prove a property $C:\sum_{x:A}B(x)\rightarrow
{\cal U}$ for all objects of the $\Sigma$-type, i.e., to inhabit
$\prod_{p:\sum_{x:A}B(x)}C(p)$, we must prove the property for its constructed
objects,
i.e., to inhabit $\prod_{x:A}\prod_{y:B(x)}C((x,y))$. For this there is a
function $\boldsymbol{\sigma}(C)$ carrying a proof $g$ of this latter expression
to the proof $\boldsymbol{\sigma}(C)(g)$ of the former expression. Therefore,
the elimination rule is given by the following consequence link
\[
\begin{calcu}
\expro{\prod\limits_{p:\sum_{x:A}B(x)}C(p)}
\\
\explo{\leftarrow}{\!:\,\,$\boldsymbol{\sigma}_C$}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:B(x)}C((x,y))}
\end{calcu}
\]
The computation rule states the definition of the function
$\boldsymbol{\sigma}_C$:
\[
\boldsymbol{\sigma}_C(g)((a,b))\equiv g(a)(b).
\]
For the case when $C$ is a constant family, the link given by the induction operator reduces to
\[
\begin{calcu}
\expro{(\sum\limits_{x:A}B(x))\to C}
\\
\explo{\leftarrow}{\!:\,\,$\boldsymbol{\sigma}_C$}
\\
\expro{\prod\limits_{x:A}(B(x)\to C)}
\end{calcu}
\]
With the induction operator we can also define functions on $\Sigma$-types. For
instance, projection functions $\text{pr}_1$ and $\text{pr}_2$ are defined by
\[\text{pr}_1 :\equiv \boldsymbol{\sigma}_A (g)\;\;\text{and\;\;}\text{pr}_2 :\equiv \boldsymbol{\sigma}_{B\circ \text{pr}_1}(h),\]
where $g:\equiv \lambda(x:A).\lambda(y:B(x)).x$, and $h:\equiv \lambda(x:A).\lambda(y:B(x)).y$.\\ [0.1cm]
When $B$ does not depend on the objects of $A$, the $\Sigma$-type is the type
$A\times B$, the Cartesian product type of $A$ and $B$:
\[
\sum_{x:A}B(x)\; \equiv \; A\times B.
\]
\textbf{Coproduct types}. The coproduct corresponds to the disjoint union of sets in Set Theory. \\ [0.1cm]
Given $A\!:\!\mathcal{U}$ and $B\!:\!\mathcal{U}$ we form $A+B\!:\!\mathcal{U}$
and if $a\!:\!A$ and $b\!:\!B$ then $\text{inl}(a)\!:A+B$ and
$\text{inr}(b)\!:\!A+B$.\\ [0.1cm]
In order to prove a property $C:A+B\rightarrow
{\cal U}$ for all objects of the coproduct type, i.e., to inhabit
$\prod_{p:A+B}C(p)$, we must prove the property for its constructed objects,
i.e., to inhabit $\prod_{x:A}C(\text{inl}(x))\times\prod_{y:B}C(\text{inr}(y))$.
For this there is a
function $\boldsymbol{\kappa}_C$ carrying a proof $g$ of the latter type to the
proof $\boldsymbol{\kappa}_C(g)$ of the former one. Therefore, the elimination
rule is given by the following consequence link
\[
\begin{calcu}
\expro{\prod\limits_{p:A+B}C(p)}
\\
\explo{\leftarrow}{\!:\,\,$\boldsymbol{\kappa}_C$}
\\
\expro{\prod\limits_{x:A}C(\text{inl}(x))\times\prod\limits_{y:B}C(\text{inr}
(y))}
\end{calcu}
\]
The computation rule states the definition of the function $\boldsymbol{\kappa}_C$:
\[\boldsymbol{\kappa}_C(g)(\text{inl}(a)):\equiv (\text{pr}_1g)(a)\;\; \text{and}\;\; \boldsymbol{\kappa}_C(g)(\text{inr}(b)):\equiv (\text{pr}_2g)(b)\]
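For the constant-family case, the operator $\boldsymbol{\kappa}$ is ordinary case analysis; a Lean 4 sketch (names ours):

```lean
-- Coproduct recursion: a pair of functions, one per constructor,
-- determines a function on the coproduct A ⊕ B.
def kappa {A B C : Type} (g : (A → C) × (B → C)) : A ⊕ B → C
  | Sum.inl a => g.1 a
  | Sum.inr b => g.2 b
```

The two computation rules above correspond to the two match arms and hold definitionally.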
\textbf{Empty type}. It is presented as $\mathds O$. This type has no objects
and its elimination rule is given by the function
\[
\boldsymbol{o}_C : \prod\limits_{x:{\mathds O}}C(x),
\]
which states that all the objects of ${\mathds O}$ satisfy any property $C:\mathds O\to {\cal U}$, and there is no computation rule.\\ [0.1cm]
\textbf{Unit type}. It is presented as ${\mathds 1}$. This type has just one
object, its constructor is $*\!:\!\mathds 1$, and its elimination rule is
given by the following link:
\[
\begin{calcu}
\expro{\prod\limits_{x:\mathds 1}C(x)}
\\
\explo{\leftarrow}{\!:\, $\boldsymbol{\mu}_C$}
\\
\expro{C(*)}
\end{calcu}
\]
which states that in order to prove a property $C:\mathds 1\to {\cal U}$ it is enough to inhabit $C(*)$. Its computation rule is $\boldsymbol{\mu}_C(u)(*):\equiv u$.\\ [0.1cm]
\textbf{The type of natural numbers} is presented as $\mathbb N$ and its constructors are $0\!:\!\mathbb N$ and $s\!:\!\mathbb
N\rightarrow\mathbb N$.\\ [0.1cm]
In order to prove a property $C:\mathbb N\rightarrow
{\cal U}$ for all objects of $\mathbb N$, i.e., to inhabit
$\prod_{p:\mathbb N}C(p)$, we must prove the property for its constructed
objects,
i.e., to inhabit $C(0)\times \left( \prod_{p:\mathbb N}C(p)\rightarrow
C(s(p))\right) $. For this, there is a
function $\boldsymbol{\nu}_C$ carrying a proof $g$ of the latter type to the
proof $\boldsymbol{\nu}_C(g)$ of the former one. Therefore, the elimination rule
is given by the following consequence link
\[
\begin{calcu}
\expro{\prod\limits_{p:\mathbb N}C(p)}
\\
\explo{\leftarrow}{\!:\,\,$\boldsymbol{\nu}_C$}
\\
\expro{C(0)\times \prod\limits_{p:\mathbb N}(C(p)\rightarrow C(s(p)))}
\end{calcu}
\]
The computation rule states the definition of the function
$\boldsymbol{\nu}_C$:
\[
\boldsymbol{\nu}_C(g)(0):\equiv \text{pr}_1g
\;\; \text{and}\;\; \boldsymbol{\nu}_C(g)(s(p)):\equiv (\text{pr}_2g)(p)(\boldsymbol{\nu}_C(g)(p)).
\]
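A Lean 4 sketch of $\boldsymbol{\nu}_C$ (names ours; Lean's own eliminator is `Nat.rec`):

```lean
-- ℕ-induction: a base case together with a step function determines
-- a dependent function on Nat; the two computation rules are the
-- two equations of the definition.
def nu {C : Nat → Type} (g : C 0 × ((p : Nat) → C p → C (p + 1))) :
    (n : Nat) → C n
  | 0 => g.1
  | p + 1 => g.2 p (nu g p)
```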
\textbf{Identity type}. Given any pair of objects $a$ and $b$ of a type $P:{\cal U}$, there is a type
$(a=_{\phantom{.}_P}\!b):{\cal U}$, called the identity type. There is only one
constructor:
\[
\text{refl}: \prod\limits_{x:P} (x=_{\phantom{.}_P}\!x)
\]
which states the identification of an object with itself. The objects of $x=y$ are
called paths from $x$ to $y$.\\ [0.3 cm]
In order to prove a property $C:\prod_{x,y:P}x=y\rightarrow
{\cal U}$ for all objects of the identity type, i.e., to inhabit
$\prod_{x,y:P}\prod_{p:x=y}C(x,y,p)$, we must prove the property for its constructed
objects,
i.e., to inhabit $\prod_{x:P}C(x,x,\text{refl}_x)$. For this, there is a
function $\boldsymbol{\iota}_C$ carrying a proof $g$ of the latter type to the
proof $\boldsymbol{\iota}_C(g)$ of the former one. Therefore,
the elimination rule is given by the following consequence link
\[
\begin{calcu}
\expro{\prod\limits_{x,y:P}\prod\limits_{p:x=y}C(x,y,p)}
\\
\explo{\leftarrow}{\!:\,\,$\boldsymbol{\iota}_C$}
\\
\expro{\prod\limits_{x:P}C(x,x,\text{refl}_x)}
\end{calcu}
\]
The computation rule states the definition of the function
$\boldsymbol{\iota}_C$:
\[
\boldsymbol{\iota}_C(g)(x,x, \text{refl}_x):\equiv g(x).
\]
\textbf{Remark}. Induction operators depend on a type family; however, the corresponding computation rules do not. Recall that computation rules for $\boldsymbol{\sigma}$, $\boldsymbol{\kappa}$, $\boldsymbol{\iota}$ and $\boldsymbol{\mu}$, for example, are respectively:
$\boldsymbol{\sigma}(u)((x,y)):\equiv u(x)(y)$, $\boldsymbol{\kappa}(u,v)(\text{inl}(x)):\equiv u(x)$, $\boldsymbol{\kappa}(u,v)(\text{inr}(y)):\equiv v(y)$,
$\boldsymbol{\iota}(u)(x,x,\text{refl}_x):\equiv u(x)$, and
$\boldsymbol{\mu}(u)(*):\equiv u$.
These computations are independent of the type family to which they apply. From now on, we do not mention these type families.\\ [0.1cm]
With the induction operators just introduced, one can characterize the inhabitants of
Cartesian product types and coproduct types; this allows us to present the first
examples of deductive chains.
For the case of the Cartesian product type,
if $A$ and $B$ are types, then
\begin{equation}\label{UniqPairs}
\prod\limits_{u:A\times B}u=(\text{pr}_1(u),\text{pr}_2(u))<:
\end{equation}
In fact,
\[
\begin{calcu}
\expro{\prod\limits_{u:A\times B}u=(\text{pr}_1(u),\text{pr}_2(u))}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\sigma}$}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:B} (x,y)=(\text{pr}_1((x,y)),\text{pr}_2((x,y)))}
\\
\explo{\equiv}{Definition of $\text{pr}_1$ and $\text{pr}_2$}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:B} (x,y)=(x,y)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h(x)(y):\equiv \text{refl}_{(x,y)}$}
\\
\expro{h.}
\end{calcu}
\]
And, for the case of the coproduct type, if $A$ and $B$ are types, then
\[\prod\limits_{p:A+B}(\sum\limits_{x:A}(p=\text{inl}(x))+\sum\limits_{y:B}(p=\text{inr}(y)))<:\]
In fact,
\[
\begin{calcu}
\expro{\prod\limits_{p:A+B}\sum\limits_{x:A}(p=\text{inl}(x))+\sum\limits_{y:B}(p=\text{inr}(y))}
\\
\explo{\leftarrow}{\!:$\boldsymbol{\kappa}$}
\\
\expro{\phantom{\times}\prod\limits_{a:A}(\sum\limits_{x:A}(\text{inl}(a)=\text{inl}
(x))+\sum\limits_{y:B}\text{inl}(a)=\text{inr}(y))}
\\
&\times\prod\limits_{b:B}\sum\limits_{x:A}(\text{inr}(b)=\text{inl}(x))+\sum\limits_{y:B}
\text {inr}(b)=\text{inr}(y)
\\
\\
\explo{\leftarrow}{\!:\,$\varphi\,\,;\,\, \varphi(u,v):\equiv (\text{inl}\circ
u, \text{inr}\circ v )$ }
\\
\expro{\prod\limits_{a:A}(\sum\limits_{x:A}\text{inl}(a)=\text{inl}(x))
\,\times\, \prod\limits_{b:B}\sum\limits_{y:B}\text{inr}(b)=\text{inr}(y)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h:\equiv
(\lambda a.(a,\text{refl}_{\text{inl}(a)}),\lambda
b.(b,\text{refl}_{\text{inr}(b)}))$}
\\
\expro{h}
\end{calcu}
\]
\section{Equivalence of types}
Now, we introduce the notion of equivalence of types; first, we need the
notion of homotopic functions. Details of this topic may be found in \cite{hottbook}. \\
[0.3 cm]
Let $f$ and $g$ be two dependent functions inhabiting $\prod_{x:A}P(x)$. We
say that $f$ and $g$ are homotopic if the type $f\sim g$ defined by
\[
f \sim g :\equiv \prod\limits_{x:A} (f(x)=g(x))
\]
is inhabited.
Two types $A$ and $B$ are equivalent if there is a function $f:A\rightarrow B$
such that the type isequiv($f$) defined by
\[
\text{isequiv}(f) :\equiv ( \sum\limits_{g:B \to A} f\circ g \sim
\text{id}_B) \times ( \sum\limits_{h:B \to A} h\circ f \sim
\text{id}_A )
\]
is inhabited. Therefore, $A$ and $B$ are equivalent if the type $A\simeq B$
defined by $\sum_{f:A\to B}\text{isequiv}(f)$ is inhabited. However, in order
to
prove equivalence in this paper, we do not use the type isequiv($f$), but the
type qinv($f$), which is a simpler equivalent version (see \cite{hottbook},
2.4
p. 76) and is defined by
\[
\text{qinv}(f) :\equiv \sum\limits_{g:B \to A} \left( (f\circ g \sim
\text{id}_B) \times (g\circ f \sim \text{id}_A) \right) .
\]
This means that in order to show that types $A$ and $B$ are equivalent we must
exhibit a 4-tuple
\[
\boldsymbol{f}:\equiv (f,f',\alpha, \alpha')
\]
where
\[ f:A\to B, \quad f':B\to A,\quad \alpha: f\circ f' \sim \text{id}_B, \quad
\text{and}\quad \alpha': f'\circ f \sim \text{id}_A.\]
For instance, let us show that given types $A$ and $B$,
\begin{equation}\label{comm+}
A+B\simeq B+A<:
\end{equation}
In fact, let $f\!:\!A+B\to B+A$ and $f'\!:\!B+A\to A+B$ be defined by\linebreak
$f(\text{inl}(a)):\equiv \text{inr}(a)$, $f(\text{inr}(b)):\equiv
\text{inl}(b)$, $f'(\text{inl}(b)):\equiv \text{inr}(b)$ and
$f'(\text{inr}(a)):\equiv \text{inl}(a)$. Then, the following deductive chain
shows that $f\circ f'\sim \text{id}_{B+A}$ is inhabited:
\[
\begin{calcu}
\expro{f\circ f'\sim \text{id}_{B+A}}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{p:B+A} f(f'(p))= p}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\kappa}$}
\\
\expro{\prod\limits_{b:B} (f(f'(\text{inl}(b)))= \text{inl}(b)) \times
\prod\limits_{a:A} (f(f'(\text{inr}(a)))= \text{inr}(a))}
\\
\explo{\equiv }{Definition of $f$ and $f'$}
\\
\expro{\prod\limits_{b:B} (\text{inl}(b)= \text{inl}(b)) \times
\prod\limits_{a:A} (\text{inr}(a)= \text{inr}(a))}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}} }{$u:\equiv \lambda b.\text{refl}_{\text{inl}(b)}\,\,;\,\,
v:\equiv \lambda a.\text{refl}_{\text{inr}(a)}$}
\\
\expro{(u,v)}
\end{calcu}
\]
We prove $f'\circ f \sim \text{id}_{A+B}<:$ in the same way.\\ [0.1cm]
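The maps $f$ and $f'$ of this proof are both the evident swap, and the two homotopies are verified by the same case analysis as in the deductive chain; a Lean 4 sketch (names ours):

```lean
def swap {A B : Type} : A ⊕ B → B ⊕ A
  | Sum.inl a => Sum.inr a
  | Sum.inr b => Sum.inl b

-- f ∘ f' ∼ id (and symmetrically f' ∘ f ∼ id): both reduce to rfl
-- after case analysis on the coproduct.
theorem swap_swap {A B : Type} : ∀ p : A ⊕ B, swap (swap p) = p
  | Sum.inl _ => rfl
  | Sum.inr _ => rfl
```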
We present three equivalences characterizing the identification of objects of
certain types: pairs, functions, and natural numbers. \\
[0.1cm]
\textbf{Identification of pairs}. Let $A$, $B$ be types. Then for all $u$ and $v$ inhabitants of $A\times B$ we
have that
\[
u=v\;\simeq\; (\text{pr}_1(u)=\text{pr}_1(v))\times
(\text{pr}_2(u)=\text{pr}_2(v))\, <:
\]
\textit{Proof}. First, we define $P_1(u,v):\equiv \text{pr}_1(u)=\text{pr}_1(v)$ and $P_2(u,v):\equiv \text{pr}_2(u)=\text{pr}_2(v)$. Now, we define $f\!:\!u\!=\!v\to P_1(u,v)\times P_2(u,v)$ by means of the following deductive chain:
\[
\begin{calcu}
\expro{\prod\limits_{u,v:A\times B} \prod\limits_{p:u=v}P_1(u,v)\times P_2(u,v)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\iota_1}$}
\\
\expro{\prod\limits_{u:A\times B} P_1(u,u)\times P_2(u,u)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}} }{$h:\equiv \lambda u.(\text{refl}_{\text{pr}_1(u)},\text{refl}_{\text{pr}_2(u)})$}
\\
\expro{h}
\end{calcu}
\]
Therefore we may define $f:\equiv \boldsymbol{\iota_1}(h)(u,v)$.\\ [0.1cm]
In order to define a function $f':P_1(u,v)\times P_2(u,v) \to u\!=\!v$, let us consider the following
deductive chain:
\[
\begin{calcu}
\expro{\prod\limits_{u,v:A\times B}P_1(u,v)\times P_2(u,v)\to u\!=\!v}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\sigma}\,\,;\,\,\boldsymbol{\sigma}(w)((a,
c),(b,d),(p,q)):\equiv w(a)(b)(c)(d)(p)(q)$}
\\
\expro{\prod\limits_{a,b:A}\prod\limits_{c,d:B}\prod\limits_{p:a=b}\prod\limits_
{q:c=d}(a,c)=(b,d)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\iota_2}\,\,;\,\,\boldsymbol{\iota_2}(z)(a,
a , c ,
c,\text{refl}_a,\text{refl}_c):\equiv z(a)(c)$}
\\
\expro{\prod\limits_{a:A}\prod\limits_{c:B}(a,c)=(a,c)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}} }{$k(a,c):\equiv \text{refl}_{(a,c)}$}
\\
\expro{k}
\end{calcu}
\]
Therefore, we can put $f':\equiv
(\boldsymbol{\sigma}\!\circ\!\boldsymbol{\iota_2})(k)(u,v)$.\\ [0.3 cm]
Now, let us show that $\prod_{u,v:A\times B}f\circ f'\sim \text{id}<:$
\small{\[
\begin{calcu}
\expro{\prod\limits_{u,v:A\times
B}\prod\limits_{g:P_1(u,v)\times P_2(u,v)}f(f'(g))=g}
\\
\explo{\equiv}{Definition of $f$ and $f'$}
\\
\expro{\prod\limits_{u,v:A\times
B}\prod\limits_{g:P_1(u,v)\times P_2(u,v)}
(\boldsymbol{\iota_1}(h)(u,v))\left((\boldsymbol{\sigma}
\!\circ\!\boldsymbol{\iota_2})(k)(u,v))(p,q)\right)=(p,q)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\sigma}$}
\\
\expro{\prod\limits_{a,b:A}\prod\limits_{c,d:B}\prod\limits_{p:a=b}\prod\limits_
{q:c=d}
(\boldsymbol{\iota_1}(h)((a,c),(b,d)))\left((\boldsymbol{\sigma}
\!\circ\!\boldsymbol{\iota_2})(k)((a,c),(b,d)) (p,q)\right)=(p,q)}
\\
\explo{\leftarrow}{\!:$\boldsymbol{\iota}$}
\\
\expro{\prod\limits_{a:A}\prod\limits_{c:B}
(\boldsymbol{\iota_1}(h)((a,c),(a,c)))\left((\boldsymbol{\sigma}
\!\circ\!\boldsymbol{\iota_2})(k)((a,c),(a,c))
(\text{refl}_a,\text{refl}_c)\right)=(\text{refl}_a,\text{refl}_c)}
\\
\explo{\equiv}{Definition of $\boldsymbol{\sigma}$, $\boldsymbol{\iota_2}$, and
$k$}
\\
\expro{\prod\limits_{a:A}\prod\limits_{c:B}(\boldsymbol{\iota_1}(h)((a,c),(a,
c)))(\text{refl}_{(a,c)})=(\text{refl}_a,\text{refl}_c)}
\\
\explo{\equiv}{Definition of $\boldsymbol{\iota_1}$, and $h$}
\\
\expro{\prod\limits_{a:A}\prod\limits_{c:B}(\text{refl}_a,\text{refl}_c)=(\text{refl}_a,\text{refl}_c)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}} }{$j:\equiv \lambda a.\lambda c.\text{refl}_{(\text{refl}_a,\text{refl}_c)}$}
\\
\expro{j}
\end{calcu}
\] }
The proof of $\prod_{u,v:A\times B}f'\circ f\sim \text{id}<:$ is done in the same way.\\ [0.1cm]
As a particular case, we have that if $a, c: A$, and $b, d: B$, then
\begin{equation}
(a,b)=(c,d)\; \simeq \; a\!=\!c\times b\!=\!d <:
\end{equation}
\textbf{Identification of functions}. Let $A$ and $B$ be two types, and $f$ and $g$ objects of $A\to B$. Then
\begin{equation}\label{FuncExt}
f=g\; \simeq \; f\sim g \,<:
\end{equation}
The inhabitation cannot be proved with the theory introduced so far; it is
introduced as an axiom in \cite{hottbook}, known as \textit{function
extensionality}.\\[2mm]
\textbf{Identification of natural numbers}. If one introduces the type family
\[
\text{code}: \mathbb N \to \mathbb N \to {\cal U}
\]
defined by
\[\text{code}(0,0):\equiv \mathds 1, \; \text{code}(s(n),0):\equiv \mathds
O,\;\text{code}(0,s(n)):\equiv \mathds O,\; \text{and}\]
\[\text{code}(s(m),s(n)):\equiv \text{code}(m,n)\]
then Theorem 2.13.1 in \cite{hottbook} states that, for all $m,n:\mathbb N$,
we have that
\begin{equation}
m=n\; \simeq\; \text{code}(m,n)<:
\end{equation}
Its proof introduces the functions $\textit{encode}\!\!:\prod_{m,n:\mathbb N}
m\!=\!n\to \text{code}(m,n)$ and $\textit{decode}\!:\!\prod_{m,n:\mathbb N}
\text{code}(m,n)\to m=n$, and shows that the functions $\text{encode}(m,n)$ and
$\text{decode}(m,n)$ are q-inverses of each other.\\ [0.1cm]
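A Lean 4 sketch of the family code and of encode (names ours; `PUnit` and `Empty` play the roles of $\mathds 1$ and $\mathds O$):

```lean
def code : Nat → Nat → Type
  | 0,     0     => PUnit
  | _ + 1, 0     => Empty
  | 0,     _ + 1 => Empty
  | m + 1, n + 1 => code m n

-- Reflexivity inhabits code m m, by induction on m.
def idCode : (m : Nat) → code m m
  | 0 => PUnit.unit
  | m + 1 => idCode m

-- encode: a path m = n yields an inhabitant of code m n,
-- by path induction.
def encode {m n : Nat} : m = n → code m n
  | rfl => idCode m
```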
In the next sections, we explore several properties related to equivalence.
\section{Leibniz properties of type equivalence}
By Leibniz properties, we refer to the replacement-of-equivalents-by-equivalents
(or congruence) property of, in this case, homotopic type equivalence.
\subsection{Leibniz principles.}
These are precisely the [\textbf{Leibniz principles}] mentioned in Section 2; they state that equality is preserved by function application and by type dependency (through equivalence), respectively.\\ [0.1cm]
Let $A,B:{\cal U}$, $f\!:\!A \rightarrow B$ and $P\!:\!A\to {\cal U}$. Then
\[\prod\limits_{x,y:A}x\!=\!y \rightarrow f(x) \!=\! f(y)<:\quad\text{and}\quad
\prod\limits_{x,y:A}x\!=\!y \rightarrow P(x) \!\simeq\! P(y)<:\]
In fact,
\[
\begin{calcu}
\expro{\prod\limits_{x,y:A}\prod\limits_{p:x\!=\!y} f(x) \!=\! f(y)}
\\
\explo{\simeq}{\!:\! $\boldsymbol{\iota}$}
\\
\expro{\prod\limits_{x:A} f(x) \!=\! f(x)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h(x):\equiv \text{refl}_{f(x)}$}
\\
\expro{h}
\end{calcu}
\]
One defines $\text{ap}_{f}(x,y,p):\equiv \boldsymbol{\iota}(h)(x,y,p)$, and by
definition of $\boldsymbol{\iota}$, we get\\
$\text{ap}_{f}(x,x,\text{refl}_x):\equiv \boldsymbol{\iota}
(h)(x,x,\text{refl}_x):\equiv h(x):\equiv \text{refl}_{f(x)}$.\\ [0.1cm]On the
other hand,
\[
\begin{calcu}
\expro{\prod\limits_{x,y:A}\prod\limits_{p:x\!=\!y} P(x) \!\simeq\! P(y)}
\\
\explo{\simeq}{\!:\! $\boldsymbol{\iota}$}
\\
\expro{\prod\limits_{x:A} P(x) \!\simeq\! P(x)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$k(x):\equiv \text{id}_{P(x)}$}
\\
\expro{k}
\end{calcu}
\]
One defines $\text{tr}^{P}(x,y,p)\!\!:\equiv \!\!\boldsymbol{\iota}(k)(x,y,p)$\footnote{This object is called transport$^P$ in the HoTT book \cite{hottbook} }, and by
definition of $\boldsymbol{\iota}$, we get\linebreak
$\text{tr}^{P}(x,x,\text{refl}_x):\equiv
\boldsymbol{\iota}(k)(x,x,\text{refl}_x):\equiv k(x):\equiv \text{id}_{P(x)}$.
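Both Leibniz principles can be sketched in Lean 4 (names ours), where the stated computation rules hold by definitional equality:

```lean
-- ap: equality is preserved by function application.
def ap {A B : Type} (f : A → B) {x y : A} : x = y → f x = f y
  | rfl => rfl

-- tr: transport of an inhabitant of P x along a path x = y.
def tr {A : Type} (P : A → Type) {x y : A} : x = y → P x → P y
  | rfl => id
```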
\subsection{Leibniz inference rules.}\label{LeibInf}
Leibniz inference rules generally express the fact that type equivalence is preserved by replacement, in any given type expression, of any of its subexpressions by an equivalent one. We derive Leibniz inference rules for coproduct types, and for $\Pi$ and $\Sigma$ types, which are precisely [\textbf{Congruence}] and [\textbf{Translation}] rules, endowing HoTT, by this means, with a calculational style of proof.\\ [0.1cm]
Let $A,B,C:{\cal U}$ and $P,Q: A\to {\cal U}$. Then
\begin{description}
\item[[Congruence\!\!]]\quad
$\prod_{x:A} (P(x)\simeq Q(x))\to(\prod_{x:A} P(x)\simeq \prod_{x:A} Q(x))\!<:$ \quad $\Pi${\sc Eq}1\\ [0.3cm]
\hspace*{16mm} $\prod_{x:A} (P(x)\simeq Q(x))\to(\sum_{x:A} P(x)\simeq \sum_{x:A} Q(x))\!<:$\quad $\Sigma${\sc Eq}1
\vspace{0.3cm}
\item [[Translation\!\!]]\quad
$\prod_{\boldsymbol{f}:A\simeq B}\left( \prod_{x:A} P(x)\simeq \prod_{y:B} P(f'(y))\right) <:$\quad $\Pi${\sc Eq}2 \\ [0.3cm]
\hspace*{16mm} $\prod_{\boldsymbol{f}:A\simeq B}\left( \sum_{x:A} P(x)\simeq \sum_{y:B} P(f'(y))\right) <:$\quad $\Sigma${\sc Eq}2
\vspace{0.3cm}
\item[[Coproduct Monotony\!\!]] \quad $(A\simeq B)\to (A+C\simeq B+C)<:$\quad $+${\sc Eq}1\\ [0.3cm]
\hspace*{34mm}$(A\simeq B) \to (C+A\simeq C+B)<:$ \quad $+${\sc Eq}2
\end{description}
\vspace{0.3cm}
\textit{Proof of $\Pi${\sc Eq}1.}
Suppose that $\boldsymbol{\Phi}\!:\!\prod_{x:A}P(x)\simeq Q(x)$, with
$\boldsymbol{\Phi}(x)\equiv (\phi_x,\phi_x', \alpha, \alpha')$,
$\alpha \!:\!\phi_x\!\circ\! \phi_x' \sim \text{id}_{Q(x)}$ and $\alpha':
\phi_x'\!\circ\! \phi_x \sim \text{id}_{P(x)}$. Let
\[
\psi:\prod_{x:A}P(x)\to \prod_{x:A}Q(x)
\]
be defined by $\psi(f)(x):\equiv \phi_x(f(x))$\footnote{$\psi$ is precisely the function $\Delta$ of $\Pi$-distribution over arrows, see (\ref{DistArrow})} and let
\[\psi':\prod_{x:A}Q(x)\to \prod_{x:A}P(x)
\]
be defined by $\psi'(g)(x):\equiv \phi_x' (g(x))$. Observe that
\begin{equation}\label{CalPeq1}
\psi(\psi'(g))(x)
\equiv
\phi_x(\psi'(g)(x))
\equiv
\phi_x(\phi_x'(g(x)))
\equiv
(\phi_x\circ \phi_x')(g(x))
\end{equation}
Then, in order to prove $\psi\circ \psi'\sim \text{id}<:$\,, it is enough to prove $(\psi\circ\psi')(g)=g<:$ for all $g:\prod_{x:A}Q(x)$. In fact,
\[
\begin{calcu}
\expro{(\psi\circ\psi')(g)=g}
\\
\explo{\simeq}{Function extensionality (\ref{FuncExt})}
\\
\expro{\prod\limits_{x:A}(\psi\circ\psi')(g)(x)=
g(x)}
\\
\explo{\equiv}{See above calculations (\ref{CalPeq1})}
\\
\expro{\prod\limits_{x:A}(\phi_x\circ
\phi_x')(g(x))= g(x)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$u(g)(x):\equiv \alpha (g(x))$}
\\
\expro{u}
\end{calcu}
\]
The proof of $\psi'\circ \psi\sim \text{id}<:$ is done similarly.\\ [0.1cm]
\textit{Proof of $\Sigma${\sc Eq}1.}
Suppose that $\boldsymbol{\Phi}:\prod_{x:A}P(x)\simeq Q(x)$ with
$\boldsymbol{\Phi}(x)\equiv (\phi_x,\phi_x', \alpha, \alpha')$, $\alpha
:\phi_x\circ \phi_x' \sim \text{id}_{Q(x)}$, and $\alpha': \phi_x'\circ \phi_x
\sim \text{id}_{P(x)}$. Let
\[
\psi:\sum_{x:A}P(x)\rightarrow \sum_{x:A}Q(x),
\]
be defined by $\psi(p):\equiv
(\text{pr}_1(p),\phi_{\text{pr}_1(p)}(\text{pr}_2(p)))$ and let
\[
\psi':\sum_{x:A}Q(x)\rightarrow \sum_{x:A}P(x)
\]
be defined by $\psi'(q):\equiv
(\text{pr}_1(q),\phi_{\text{pr}_1(q)}'(\text{pr}_2(q))) $. Observe that
\begin{equation}\label{CalSeq1}
\psi(\psi'((x,y)))
\equiv
\psi((x,\phi_x'(y)))
\equiv
(x,\phi_x(\phi_x'(y)))
\equiv
(x,(\phi_x\circ \phi_x')(y))
\end{equation}
Then,
\[
\begin{calcu}
\expro{\psi\circ \psi'\sim \text{id}}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{q:\sum_{x:A}Q(x)}(\psi\circ\psi')(q)=q}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\sigma}$}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:Q(x)}\psi(\psi'((x,y)))=(x,y)}
\\
\explo{\equiv}{See above computations (\ref{CalSeq1})}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:Q(x)}(x,(\phi_x\circ\phi_x')(y))=(x,y)}
\\
\explo{\simeq}{$(a,b)=(c,d)\simeq a=c\times b=d\, <:$ \,;\, $\Pi${\sc Eq}1}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:Q(x)}(x\!=\!x)\times ((\phi_x\circ\phi_x')(y)\!=\!y)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h(x,y):\equiv
(\text{refl}_x,\alpha(y))\,\,;\,\,\alpha:\phi_x\circ \phi_x'\sim \text{id}$}
\\
\expro{h}
\end{calcu}
\]
We prove $\psi'\circ \psi\sim \text{id}<:$ similarly.\\ [0.1cm]
\textit{Proof of $\Pi${\sc Eq}2.}
Suppose that $\boldsymbol{f}:A\simeq B$. Let
\[\psi:\prod_{x:A} P(x)\rightarrow \prod_{y:B}P(f'(y))
\]
be defined by $\psi(u)(y):\equiv u(f'(y))$, and let
\[
\psi':\prod_{y:B} P(f'(y))\rightarrow \prod_{x:A}P(x)
\]
be defined by $\psi'(v)(x):\equiv v(f(x))$. Let us see that $\psi'$ is a
quasi-inverse of $\psi$. On one hand, we have
\[
\begin{calcu}
\expro{\psi\circ \psi'\sim \text{id}}
\\
\explo{\equiv}{Definition of $\sim $}
\\
\expro{\prod\limits_{v:\prod_{y:B} P(f'(y))}\psi(\psi'(v))=v}
\\
\explo{\equiv}{Definition of $\psi$ and $\psi'$}
\\
\expro{\prod\limits_{v:\prod_{y:B} P(f'(y))}v\circ f \circ f'=v}
\\
\explo{\simeq}{Function extensionality (\ref{FuncExt})\,\,;\, $\Pi${\sc Eq}1}
\\
\expro{\prod\limits _{v:\prod_{y:B} P(f'(y))}v\circ f \circ f'\sim v}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{v:\prod_{y:B} P(f'(y))}\prod\limits_{y:B}v(f( f'(y)))=v(y)}
\\
\explo{\leftarrow}{\!:\,$\Delta$ \,;\,
$\varphi_{(v,y)}:\equiv \text{ap}_v(f( f'(y)),y)$,\, see (\ref{DistArrow}) }
\\
\expro{\prod\limits_{v:\prod_{y:B} P(f'(y))}\prod\limits_{y:B}f( f'(y))=y}
\\
\explo{\leftarrow}{\!:\, $\lambda z.(\lambda v.z)$}
\\
\expro{\prod\limits_{y:B}f(f'(y))=y}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{f\circ f'\sim \text{id}_B}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{Hypothesis}
\\
\expro{\alpha}
\end{calcu}
\]
On the other hand, we can show, exactly in the same way, that
\[\psi'\circ \psi\sim \text{id}_{\prod_{x:A} P(x)}<:\]
\textbf{Application of $\Pi$-translation rule} (to prove
$\text{isSet}(\mathbb N)<:$).\\[1mm]
We can use the translation rule to prove $\text{isSet}(\mathbb
N)<:$\footnote{See definition 3.1.1 in \cite{hottbook}}\,. In
fact, let $\Phi:m=n\rightarrow \text{code}(m,n)$ be defined by $\Phi:\equiv
\text{encode}(m,n)$ and let $\Psi:\text{code}(m,n)\rightarrow m=n$ be defined by
$\Psi:\equiv \text{decode}(m,n)$. Then,
\[
\begin{calcu}
\expro{\text{isSet}(\mathbb N)}
\\
\explo{\equiv}{Definition of isSet}
\\
\expro{\prod\limits_{m,n:\mathbb N}\prod\limits_{p,q:m=n}p=q}
\\
\explo{\simeq}{$\Pi$-translation rule\,;\,$m=n\simeq\text{code}(m,n)$}
\\
\expro{\prod\limits_{m,n:\mathbb
N}\prod\limits_{s,t:\text{code}(m,n)}\Psi(s)=\Psi(t)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{See definition of $h$ below}
\\
\expro{h}
\end{calcu}
\]
where $h$ is defined by
\[
h(m,n,s,t)=
\begin{cases}
\boldsymbol{\mu}_{1}(\boldsymbol{\mu}_{2}(\text{refl}_{\Psi(*)}))&\text{ if }
\text{code}(m,n)={\mathds
1}\\
\boldsymbol{o}_{C}(s)(t), &\text{ if } \text{code}(m,n)={\mathds O}
\end{cases}
\]
with $C\equiv \prod\limits_{t:{\mathds O}}\Psi(s)=\Psi(t)$.
The definition of $h$ is justified by
\[
\begin{calcu}
\expro{\prod\limits_{s,t:{\mathds 1}}\Psi(s)=\Psi(t)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\mu}_{1}$}
\\
\expro{\prod\limits_{t:{\mathds1}}\Psi(\ast)=\Psi(t)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\mu}_{2}$}
\\
\expro{\Psi(\ast)=\Psi(\ast)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$u:\equiv \text{refl}_{\Psi(\ast)}$}
\\
\expro{u}
\end{calcu}
\]
\textit{Proof of $\Sigma${\sc Eq}2.}
Suppose that $\boldsymbol f:A\simeq B$. Let
\[
\psi:\sum_{x:A}P(x)\rightarrow \sum_{y:B}P(f'(y))
\]
be defined by $\psi(u):\equiv (f(\text{pr}_1(u)),\text{pr}_2(u))$ and let
\[
\psi':\sum_{y:B}P(f'(y))\rightarrow \sum_{x:A}P(x)
\]
be defined by $\psi'(v):\equiv (f'(\text{pr}_1(v)),\text{pr}_2(v))$.
Observe that
\begin{equation}\label{CalSeq2}
\psi(\psi'(v))\equiv\psi((f'(\text{pr}_1(v)),\text{pr}_2(v)))\equiv((f\circ
f')(\text{pr}_1(v)),\text{pr}_2(v))\end{equation}
Then we have that
\[
\begin{calcu}
\expro{\psi\circ \psi'\sim \text{id}}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{v:\sum_{y:B}P(f'(y))}\psi(\psi'(v))=v}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\sigma}$}
\\
\expro{\prod\limits_{y:B}\prod\limits_{z:P(f'(y))}\psi(\psi'(y,z))=(y,z)}
\end{calcu}
\]
\[
\begin{calcu}
\explo{\equiv}{See above calculations (\ref{CalSeq2})}
\\
\expro{\prod\limits_{y:B}\prod\limits_{z:P(f'(y))}((f\circ f')(y),z)=(y,z)}
\\
\explo{\simeq}{$(a,b)=(c,d)\simeq (a\!=\!c)\times (b\!=\!d)\,<:$ \,;\,
$\Pi${\sc Eq}1}
\\
\expro{\prod\limits_{y:B}\prod\limits_{z:P(f'(y))}((f\circ f')(y)\!=\!y)\times
(z\!=\!z)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h(y,z):\equiv (\alpha(y),\text{refl}_z)$}
\\
\expro{h}
\end{calcu}
\]
The proof of $\psi'\circ \psi\sim \text{id}_{\sum_{x:A}P(x)}<:$ is similar. \\ [0.1cm]
We can use $\Sigma${\sc Eq}1, $\Sigma${\sc Eq}2, and transitivity of equivalence to derive the following inference rule, which we will use later:
\begin{equation}\label{EqProd}
\inferrule*[right=eq$_{\times}$]
{f:A\simeq B \\ g: C\simeq D}
{f\times g:A\times C \simeq B\times D}
\end{equation}
\textit{Proof of $+${\sc Eq}1.} Suppose that $\boldsymbol{f}:A\simeq B$. Let
$\psi:A+C\rightarrow B+C$
be defined by $\psi:\equiv \boldsymbol{\kappa}(\text{inl}\circ f,
\text{inr}\circ \text{id}_C)$, and let $\psi':B+C\rightarrow A+C$
be defined by $\psi':\equiv \boldsymbol{\kappa}(\text{inl}\circ f',
\text{inr}\circ\text{id}_C)$. Let us see that $\psi'$ is a quasi-inverse of
$\psi$. Observe that, by definition of $\psi$ and $\psi'$, we have
\begin{multicols}{2}
$
\begin{array}{rl}
\phantom{\equiv}&\psi(\psi'(\text{inl}(x)))\\
\equiv & \psi(\boldsymbol{\kappa}(\text{inl}\circ f', \text{inr}\circ\text{id}_C)(\text{inl}(x)))\\
\equiv & \psi(\text{inl}(f'(x)))\\
\equiv & \boldsymbol{\kappa}(\text{inl}\circ f, \text{inr}\circ \text{id}_C)(\text{inl}(f'(x)))\\
\equiv& \text{inl}(f(f'(x))),\quad \text{and}
\end{array}
$
\columnbreak
\begin{equation}\label{phis}
\begin{array}{rl}
\phantom{\equiv}&\psi(\psi'(\text{inr}(y))) \\
\equiv& \psi(\boldsymbol{\kappa}(\text{inl}\circ f', \text{inr}\circ\text{id}_C)(\text{inr}(y)))\\
\equiv&\psi(\text{inr}(y))\\
\equiv& \boldsymbol{\kappa}(\text{inl}\circ f, \text{inr}\circ \text{id}_C)(\text{inr}(y))\\
\equiv&\text{inr}(y).
\end{array}
\end{equation}
\end{multicols}
Then we have
\[
\begin{calcu}
\expro{\psi\circ \psi'\sim \text{id}}
\\
\explo{\equiv}{Definition of $\sim $}
\\
\expro{\prod\limits_{p:B+C}\psi(\psi'(p))=p}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\kappa}$}
\\
\expro{\prod\limits_{x:B}(\psi(\psi'(\text{inl}(x)))=\text{inl}(x))\times
\prod\limits_{y:C}\psi(\psi'(\text{inr}(y)))=\text{inr}(y)}
\\
\explo{\equiv}{Definition of $\psi$ and $\psi'$ (\ref{phis})}
\\
\expro{\prod\limits_{x:B}(\text{inl}(f(f'(x)))=\text{inl}(x))\times
\prod\limits_{y:C}\text{inr}(y)=\text{inr}(y)}
\\
\explo{\leftarrow}{\!:\,$k$\,\,;\, $k(u,v):\equiv (\lambda x.\text{ap}_{\text{inl}}(u(x)),\lambda y.\text{ap}_{\text{inr}}(v(y)))$}
\\
\expro{\prod\limits_{x:B}(f(f'(x))=x)\times \prod\limits_{y:C}y=y}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h:\equiv (\alpha,\text{refl})$}
\\
\expro{h}
\end{calcu}
\]
We can prove $\psi'\circ \psi\sim \text{id}_{A+C}<:$ similarly.\\ [0.1cm]
\textit{Proof of $+${\sc Eq}2.}
\[
\begin{calcu}
\expro{C+A}
\\
\explo{\simeq}{Commutativity of + (\ref{comm+})}
\\
\expro{A+C}
\\
\explo{\simeq}{$+${\sc Eq}1}
\\
\expro{B+C}
\\
\explo{\simeq}{Commutativity of + (\ref{comm+})}
\\
\expro{C+B}
\end{calcu}
\]
\section{Induction operators as equivalences}
In order to restate HoTT giving equality and equivalence a
preeminent role, it is convenient (and possible) to show that the induction
operators for the identity type, the $\Sigma$-type, and the coproduct are
actually equivalences. We now proceed to show this.
\subsection{Identity type induction operator}\label{IdType}
We prove that, for all $P: \prod_{x,y:A}x=y\rightarrow {\cal U}$, $\boldsymbol{\iota}$
is an equivalence, and then
\[
\prod_{x,y:A}(\prod_{p:x=y}P(x,y,p)) \simeq \prod_{x:A}P(x,x,\text{refl}_x) <:
\]
This equivalence is precisely $\Pi$-[\textbf{Equality}] rule in section \ref{Eind}.
Recall that
\[
\boldsymbol{\iota}:(\prod_{x:A}P(x,x,\text{refl}_x))\rightarrow
\prod_{x,y:A}\prod_{p:x=y}P(x,y,p).
\]
Now, let us define
\[
k:\prod_{x,y:A}(\prod_{p:x=y}P(x,y,p))\rightarrow
\prod_{x:A}P(x,x,\text{refl}_x)
\]
by
\[
k(v)(x):\equiv v(x,x,\text{refl}_x).
\]
Let us prove that $k\circ \boldsymbol{\iota}\sim \text{id}$ and that
$\boldsymbol{\iota}\circ k\sim \text{id}$.
First, observe that for all $u\!:\!\prod_{x:A}P(x,x,\text{refl}_x)$, by definition of $k$ and $\boldsymbol{\iota}$,
\begin{equation}\label{label1.8.1}
k(\boldsymbol{\iota}(u))(x)\;\equiv\;\boldsymbol{\iota}(u)(x,x,\text{refl}_x)\;\equiv\; u(x),
\end{equation}
and for all $v\!:\!\prod_{x,y:A}\prod_{p:x=y}P(x,y,p)$,
\begin{equation}\label{label2.8.1}
\boldsymbol{\iota}(k(v))(x,x,\text{refl}_x)\;\equiv\; k(v)(x)\;\equiv\; v(x,x,\text{refl}_x).
\end{equation}
Then, on one hand, because of (\ref{label1.8.1}), we have that $k\circ \boldsymbol{\iota}\sim \text{id}$. On
the
other, for each $v:\prod\limits_{x,y:A}\prod\limits_{p:x=y}P(x,y,p)$, let us
show that $\boldsymbol{\iota}(k(v))=v<:$
\[
\begin{calcu}
\expro{ \boldsymbol{\iota}(k(v))=v}
\\
\explo{\simeq}{Function extensionality (\ref{FuncExt})}
\\
\expro{\prod\limits_{x,y:A}\prod\limits_{p:x=y}\boldsymbol{\iota}(k(v))(x,y,
p)=v(x,y,p)}
\end{calcu}
\]
\[
\begin{calcu}
\explo{\leftarrow}{\!:\,\, $\boldsymbol{\iota}$}
\\
\expro{\prod\limits_{x:A}\boldsymbol{\iota}(k(v))(x,x,\text{refl}_x)=v(x,x,
\text{refl}_x)}
\\
\explo{\equiv}{See computation (\ref{label2.8.1}) above}
\\
\expro{\prod\limits_{x:A}v(x,x,\text{refl}_x)=v(x,x,\text{refl}_x)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$u(x):\equiv\text{refl}_{v(x,x,\text{refl}_x)}$}
\\
\expro{u}
\end{calcu}
\]
Therefore, the equivalence is proved.
\subsection{Identity type based-path induction operator}\label{BasePath}
Let us suppose that $a\!:\!A$ and that $D:\prod_{x:A}\prod_{p:a=x}{\cal U}$.
Based path induction states the existence of a function $\boldsymbol{\iota}'$
presented by the following consequence link
\[
\begin{calcu}
\expro{\prod\limits_{x:A}\prod\limits_{p:a=x}D(x,p)}
\\
\explo{\leftarrow}{\!:\,\,$\boldsymbol{\iota}'_D$;\,\,
$\boldsymbol{\iota}'_D(z)(a,\text{refl}_a):\equiv z$}
\\
\expro{D(a,\text{refl}_a)}
\end{calcu}
\]
We also have that $\boldsymbol{\iota}'$, the based path induction operator,
is an equivalence, and then
\[\prod_{x:A}(\prod_{p:a=x}P(x,p)) \simeq P(a,\text{refl}_a)<:
\]
This equivalence corresponds to $\Pi$-[\textbf{One-Point}] rule in section \ref{Eind}.\\[0.1cm]
Let us prove that the functions
\[
\begin{calcu}
\expro{\prod_{x:A}\prod_{p:a=x}P(x,p)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\iota }'\,\,; \,\,\boldsymbol{\iota
}'(u)(a,\text{refl}_a):\equiv u$}
\\
\expro{P(a,\text{refl}_a)}
\end{calcu}
\]
and
\[
\begin{calcu}
\expro{P(a,\text{refl}_a)}
\\
\explo{\leftarrow}{\!:\, $k\,\,; \,\,k(v):\equiv v(a,\text{refl}_a)$}
\\
\expro{\prod_{x:A}\prod_{p:a=x}P(x,p)}
\end{calcu}
\]
are quasi-inverses. In fact,
\[k(\boldsymbol{\iota}'(u))\equiv
\boldsymbol{\iota}'(u)(a,\text{refl}_a)\equiv u,
\]
which shows that
$k\circ\boldsymbol{\iota}'\sim \text{id}$, and
\begin{equation}\label{CalBasPath}
\boldsymbol{\iota}'(k(v))(a,\text{refl}_a)\equiv k(v)\equiv
v(a,\text{refl}_a).
\end{equation}
And so, to prove $\boldsymbol{\iota}'\circ k\sim \text{id}$, it is enough to
perform the following calculation for all $v:\prod_{x:A}\prod_{p:a=x}P(x,p)$,
\[
\begin{calcu}
\expro{\boldsymbol{\iota}'(k(v))=v}
\\
\explo{\simeq}{Function extensionality (\ref{FuncExt})}
\\
\expro{\prod\limits_{x:A}\prod\limits_{p:a=x}\boldsymbol{\iota}'(k(v))(x,
p)=v(x,p)}
\end{calcu}
\]
\[
\begin{calcu}
\explo{\leftarrow}{\!: $\boldsymbol{\iota}'$}
\\
\expro{\boldsymbol{\iota}'(k(v))(a,\text{refl}_a)=v(a,\text{refl}_a)}
\\
\explo{\equiv}{See (\ref{CalBasPath}), above}
\\
\expro{v(a,\text{refl}_a)=v(a,\text{refl}_a)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{Definition of $\text{refl}$}
\\
\expro{\text{refl}_{v(a,\text{refl}_a)}}
\end{calcu}
\]
Therefore,
$\prod_{x:A}\prod_{p:a=x}P(x,p)\simeq P(a,\text{refl}_a)<:$
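Lean's eliminator for `=` (the term that the tactic `cases` elaborates to) is exactly this based path induction. A Lean 4 sketch, with our names `basedInd` and `evalAtRefl` for $\boldsymbol{\iota}'$ and $k$ (again only an approximation, since Lean's `=` is proof-irrelevant):

```lean
universe u v

variable {A : Type u} {a : A} {P : (x : A) → a = x → Type v}

-- ι′ : based path induction at the fixed left endpoint a
def basedInd (z : P a rfl) : ∀ (x : A) (p : a = x), P x p :=
  fun x p => by cases p; exact z

-- k : evaluation at (a, refl_a)
def evalAtRefl (v : ∀ (x : A) (p : a = x), P x p) : P a rfl := v a rfl

-- k ∘ ι′ ∼ id holds by computation
example (z : P a rfl) : evalAtRefl (basedInd z) = z := rfl

-- ι′ ∘ k ∼ id uses function extensionality, as in the text
example (v : ∀ (x : A) (p : a = x), P x p) : basedInd (evalAtRefl v) = v := by
  funext x p
  cases p
  rfl
```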
\subsection{$\sum$-type induction operator}
Now, we prove that, for all $P: \left(\sum_{x:A}B(x)\right)\rightarrow {\cal U}$, $\boldsymbol{\sigma}$, the $\sum$-type induction operator, is an equivalence.
And so,
\begin{equation}\label{SConRule}
(\prod_{x:A}\prod_{y:B(x)}P((x,y))) \simeq \prod_{g:\sum_{x:A}B(x)}P(g)<:
\end{equation}
For the case of $P$ being a non-dependent type, the intuitionistic logical
theorem corresponding to this equivalence is
\[
\cuan{\forall}{x\!:\!T}{B}{P} \equiv \cuant{\exists}{x\!:\!T}{B} \Rightarrow P
\]
where $x$ does not occur free in $P$.\\ [0.1cm]
This motivates us to call the equivalence (\ref{SConRule}) {\it $\Sigma$-consequent rule}.\\[1mm]
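The non-dependent logical reading can be checked in one line; a Lean 4 sketch (with `T`, `B`, `P` arbitrary, `P` not depending on `x`):

```lean
universe u

-- ∀-with-range versus ∃-antecedent: the propositions-as-types shadow of the
-- Σ-consequent rule
example {T : Sort u} {B : T → Prop} {P : Prop} :
    (∀ x, B x → P) ↔ ((∃ x, B x) → P) :=
  ⟨fun h g => g.elim h, fun h x b => h ⟨x, b⟩⟩
```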
Recall that
\[
\boldsymbol{\sigma}:(\prod_{x:A}\prod_{y:B(x)}P((x,y)))
\rightarrow
\prod_{g:\sum_{x:A}B(x)}P(g)
\]
and $\boldsymbol{\sigma}(u)((x,y)):\equiv u(x)(y)$. Let
\[
\Phi:(\prod\limits_{g:\sum_{x:A}B(x)}P(g)) \rightarrow
\prod\limits_{x:A}\prod\limits_{y:B(x)}P((x,y))
\]
be defined by $\Phi(v)(x)(y):\equiv v((x,y))$. Composing
$\boldsymbol{\sigma}$ with $\Phi$ we get
\[
\Phi(\boldsymbol{\sigma}(u))(x)(y)\equiv \boldsymbol{\sigma}(u)((x,y))\equiv u(x)(y).
\]
Then $\Phi\circ \boldsymbol{\sigma}$ is homotopic to the identity function.
Conversely, let $v$ be an inhabitant of $\prod_{g:\sum_{x:A}B(x)}P(g)$, then
\[
\begin{calcu}
\expro{\boldsymbol{\sigma}(\Phi (v))=v}
\\
\explo{\simeq}{Function extensionality (\ref{FuncExt})}
\\
\expro{\prod\limits_{g:\sum_{x:A}B(x)}\boldsymbol{\sigma}(\Phi(v))(g)=v(g)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\sigma}$}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:B(x)}\boldsymbol{\sigma}(\Phi(v))((x,y))=v((x,y))}
\\
\explo{\equiv}{$\boldsymbol{\sigma}(\Phi(v))((x,y))\equiv \Phi(v)(x)(y) \equiv v((x,y))$}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:B(x)}v((x,y))=v((x,y))}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h:\equiv \lambda x.\lambda y.\text{refl}_{v((x,y))}$}
\\
\expro{h}
\end{calcu}
\]
So, $\boldsymbol{\sigma}\circ\Phi$ is homotopic to the identity function.
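In Lean 4 the operator $\boldsymbol{\sigma}$ is uncurrying over `Sigma`; there both round trips even hold definitionally, because Lean has $\eta$-rules for pairs and for functions, whereas the direction just proved needs function extensionality in HoTT. A sketch (the names `sigmaInd` and `sigmaCurry` are ours):

```lean
universe u v w

variable {A : Type u} {B : A → Type v} {P : ((x : A) × B x) → Type w}

-- σ : the Σ-type induction operator (uncurrying)
def sigmaInd (u : ∀ (x : A) (y : B x), P ⟨x, y⟩) :
    ∀ g : (x : A) × B x, P g :=
  fun g => u g.1 g.2

-- Φ : its quasi-inverse (currying)
def sigmaCurry (v : ∀ g : (x : A) × B x, P g) :
    ∀ (x : A) (y : B x), P ⟨x, y⟩ :=
  fun x y => v ⟨x, y⟩

-- Both round trips hold by rfl in Lean, thanks to definitional η for Σ
example (u : ∀ (x : A) (y : B x), P ⟨x, y⟩) :
    sigmaCurry (sigmaInd u) = u := rfl
example (v : ∀ g : (x : A) × B x, P g) :
    sigmaInd (sigmaCurry v) = v := rfl
```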
\subsection{Coproduct induction operator}\label{CopIndOp}
For all $A,B:\mathcal U$ and $P:A+B\rightarrow
\mathcal U$ we have that
\[
(\prod\limits_{x:A+B}P(x)) \simeq
(\prod\limits_{x:A}P(\text{inl}(x)))\times
\prod\limits_{y:B}P(\text{inr}(y))<:
\]
This equivalence corresponds to the $\Pi$-[\textbf{Range Split}] rule in
section \ref{Eind}.\\[0.5mm]
\textit{Proof.} We have the induction operator $\boldsymbol{\kappa}$:
\[
\begin{calcu}
\expro{\prod\limits_{x:A+B}P(x)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\kappa}$\,;\, $\boldsymbol{\kappa}(u,v)(\text{inl}(x)):\equiv u(x)\,;\,\boldsymbol{\kappa}(u,v)(\text{inr}(x)):\equiv v(x)$}
\\
\expro{(\prod\limits_{x:A}P(\text{inl}(x)))\times
\prod\limits_{y:B}P(\text{inr}(y))}
\end{calcu}
\]
and let us define
\[
\Psi: (\prod\limits_{x:A+B}P(x))\,\to\,
(\prod\limits_{x:A}P(\text{inl}(x)))\times
\prod\limits_{y:B}P(\text{inr}(y))
\]
by $\Psi(g):\equiv (g\circ \text{inl},g\circ \text{inr})$.
Let us see that $\Psi$ is a quasi-inverse of $\boldsymbol{\kappa}$. We
show that the type $\boldsymbol{\kappa}\circ\Psi\sim \text{id}$, which by
definition is equivalent to
\[
\prod\limits_{g:\prod\limits_{x:A+B}P(x)}\boldsymbol{\kappa}(\Psi(g))=g,
\]
is inhabited. Let $g$ be an object of type $\prod_{x:A+B}P(x)$, then:
\[
\begin{calcu}
\expro{\boldsymbol{\kappa}(\Psi(g))=g}
\\
\explo{\equiv}{Definition of $\Psi$}
\\
\expro{\boldsymbol{\kappa}(g\circ \text{inl}, g\circ \text{inr})=g}
\\
\explo{\simeq}{Function extensionality (\ref{FuncExt})}
\\
\expro{\boldsymbol{\kappa}(g\circ \text{inl}, g\circ \text{inr})\sim g}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{z:A+B}\boldsymbol{\kappa}(g\circ \text{inl}, g\circ
\text{inr})(z
)=g(z)}
\\
\explo{\leftarrow}{\!:\,$\boldsymbol{\kappa}$}
\\
\expro{\phantom{\times}\prod\limits_{x:A}\boldsymbol{\kappa}(g\circ
\text{inl}, g\circ \text{inr})(\text{inl} (x))=g(\text{inl} (x))\\
&\times\prod\limits_{y:B}\boldsymbol{\kappa}(g\circ \text{inl}, g\circ
\text{inr})(\text{inr}(y))=g(\text{inr}(y))}
\\
\explo{\equiv}{Definition of $\boldsymbol{\kappa}$}
\\
\expro{\prod\limits_{x:A}((g\circ \text{inl})(x)=( g\circ \text{inl})(x))
\times \prod\limits_{y:B}(g\circ \text{inr})(y)=( g\circ \text{inr})(y)}\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h(g):\equiv (\lambda x.\text{refl}_{g(\text{inl}(x))},\lambda y.\text{refl}_{g(\text{inr}(y))})$}
\\
\expro{h(g)}
\\
\end{calcu}
\]
And now, we show that $\Psi\circ\boldsymbol{\kappa}\sim
\text{id}<:$\,. In other words, that
\[
\prod\limits_{u:\prod\limits_{x:A}P(\text{inl}(x))\times
\prod\limits_{y:B}P(\text{inr}(y))}\Psi(\boldsymbol{\kappa}(u))=u\,<:
\]
Let $u$ be an object of type $\prod_{x:A}P(\text{inl}(x))\times
\prod_{y:B}P(\text{inr}(y))$, $p:u\!=\!(\text{pr}_1(u),\text{pr}_2(u))$ and
$Q$ the type family defined by $Q(u):\equiv (\boldsymbol{\kappa}(u)\!\circ\!
\text{inl},\,\boldsymbol{\kappa}(u) \!\circ\!
\text{inr})\!=\!u $, and so, by the second Leibniz principle,
\[
\text{tr}^Q(u,(\text{pr}_1(u),\text{pr}_2(u)),p) : Q(u)\simeq Q((\text{pr}_1(u),\text{pr}_2(u)))
\]
Then:
\[
\begin{calcu}
\expro{\Psi(\boldsymbol{\kappa}(u))=u}
\\
\explo{\equiv}{Definition of $\Psi$}
\\
\expro{(\boldsymbol{\kappa}(u)\circ \text{inl},\boldsymbol{\kappa}(u)\circ
\text{inr})=u}
\\
\explo{\simeq}{\!:\, $\text{tr}^Q(u,(\text{pr}_1(u),\text{pr}_2(u)),p)$}
\\
\expro{(\boldsymbol{\kappa}(\text{pr}_1(u),\text{pr}_2(u))\circ
\text{inl},\boldsymbol{\kappa}(\text{pr}_1(u),\text{pr}_2(u))\circ \text{inr})=(\text{pr}_1(u),\text{pr}_2(u))}
\\
\explo{\simeq}{$(a,b)=(c,d)\simeq (a=c)\times (b=d)\,<:$}
\\
\expro{(\boldsymbol{\kappa}(\text{pr}_1(u),\text{pr}_2(u))\circ \text{inl}=\text{pr}_1(u)) \times
(\boldsymbol{\kappa}(\text{pr}_1(u),\text{pr}_2(u))\circ \text{inr}=\text{pr}_2(u))}
\\
\explo{\equiv}{Definition of $\boldsymbol{\kappa}$}
\\
\expro{(\text{pr}_1(u)=\text{pr}_1(u))\,\,\times\,\, (\text{pr}_2(u)=\text{pr}_2(u))}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h:\equiv \text{refl}_{\text{pr}_1(u)}\,\,;\,\,k:\equiv \text{refl}_{\text{pr}_2(u)}$}
\\
\expro{(h,k)}
\\
\end{calcu}
\]
In fact, the induction operators corresponding to the $W$ type, the $\mathds O$ type and the $\mathds 1$ type can be proved to be equivalences in the same way.
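A Lean 4 sketch of $\boldsymbol{\kappa}$ and $\Psi$ (our names `kappa` and `psi`); the first round trip holds by computation and the second again uses function extensionality, here via the `funext` tactic:

```lean
universe u v w

variable {A : Type u} {B : Type v} {P : Sum A B → Type w}

-- κ : the coproduct induction operator
def kappa (u : ∀ x : A, P (Sum.inl x)) (v : ∀ y : B, P (Sum.inr y)) :
    ∀ z : Sum A B, P z
  | Sum.inl x => u x
  | Sum.inr y => v y

-- Ψ(g) :≡ (g ∘ inl, g ∘ inr)
def psi (g : ∀ z : Sum A B, P z) :
    (∀ x : A, P (Sum.inl x)) × (∀ y : B, P (Sum.inr y)) :=
  (fun x => g (Sum.inl x), fun y => g (Sum.inr y))

example (u : ∀ x : A, P (Sum.inl x)) (v : ∀ y : B, P (Sum.inr y)) :
    psi (kappa u v) = (u, v) := rfl

example (g : ∀ z : Sum A B, P z) :
    kappa (psi g).1 (psi g).2 = g := by
  funext z; cases z <;> rfl
```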
\section{Operational properties of $\Pi$ and $\Sigma$ types}
Now we come back to the operational rules enumerated in section \ref{Eind} and prove the ones that we have not proved yet.\\ [0.1cm]
[\textbf{One-Point}] rules. In first order logic, quantifying a property over exactly one element is equivalent to the property applied to just this element. For the case of HoTT, these properties are slightly more general.
\[\prod_{x:A}(\prod_{p:a=x}P(x,p)) \simeq P(a,\text{refl}_a)<:
\]
and
\[
\sum_{x:A}(\sum_{p:x=a}P(x,p))\simeq P(a,\text{refl}_a)<:.
\]
We have proved $\Pi$-[\textbf{One-Point}] rule in subsection \ref{BasePath}. We now prove
the $\Sigma$-[\textbf{One-Point}] rule.\\ [0.1cm]
Given $A:{\cal U}$, $a:A$ and $P:\prod_{x:A}\prod_{p:x=a}{\cal U}$, let us
construct
\[
\Phi:\sum_{x:A}(\sum_{p:x=a}P(x,p))\rightarrow P(a,\text{refl}_a).
\]
This can be done by means of the following deductive chain:
\[
\begin{calcu}
\expro{\prod\limits_{g:\sum_{x:A}\sum_{p:x=a}P(x,p)} P(a,\text{refl}_a)}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}$,\, $\Sigma$-consequent rule}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:\sum_{p:x=a}P(x,p)}P(a,\text{refl}_a)}
\\
\explo{\simeq}{\!:\,$\Delta\,;\,\varphi_x:\equiv \boldsymbol{\sigma}_{x}$,\, $\Pi${\sc eq1}}
\\
\expro{\prod\limits_{x:A}\prod\limits_{p:x=a}\prod\limits_{z:P(x,p)}P(a,\text{
refl}_a)}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\iota}'$,\, $\Pi$-one-point rule}
\\
\expro{\prod\limits_{z:P(a,\text{refl}_a)}P(a,\text{refl}_a)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$u:\equiv \text{id}_{P(a,\text{refl}_a)}$}
\\
\expro{u}
\end{calcu}
\]
In the chain above, $\boldsymbol{\sigma}_{x}$ is the induction operator for $\sum_{p:x=a}P(x,p)$ evaluated at the constant type
family $C(x,y):\equiv P(a,\text{refl}_a)$. \\ [0.1cm]
Now, let $\Psi:P(a,\text{refl}_a)\rightarrow \sum_{x:A}\sum_{p:x=a}P(x,p) $ be
defined by \[\Psi(u):\equiv (a,(\text{refl}_a,u)).\]
Let us verify that $\Phi\circ\Psi\sim \text{id}$ and that $\Psi\circ\Phi\sim
\text{id}$. First of all, observe that,
composing the functions in the above chain, we get
\[
\Phi :\equiv \boldsymbol{\sigma}(\Delta(\boldsymbol{\iota}'(\text{id}_{P(a,\text{refl}_a)}))).
\]
On the one hand, we have
\begin{multicols}{2}
$\begin{array}{rl}
\phantom{\equiv}& \Phi(\Psi(t))\\
\equiv &\boldsymbol{\sigma}(\Delta(\boldsymbol{\iota}'(\text{id}_{P(a,\text{refl}_a)})))((a,(\text{refl}_a,t)))\\
\equiv &\Delta(\boldsymbol{\iota}'(\text{id}_{P(a,\text{refl}_a)}))(a)((\text{refl}_a,t))\\
\end{array}$
\columnbreak
$\begin{array}{rll}
\equiv &\boldsymbol{\sigma}_a(\boldsymbol{\iota}'(\text{id}_{P(a,\text{refl}_a)})(a))((\text{refl}_a,t))\\
\equiv& \boldsymbol{\iota}'(\text{id}_{P(a,\text{refl}_a)})(a)(\text{refl}_a)(t)\\
\equiv& \text{id}_{P(a,\text{refl}_a)}(t)
\equiv t&
\end{array}$
\end{multicols}
and, on the other hand,
\[
\begin{calcu}
\expro{\Psi\circ\Phi \sim \text{id}}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{g:\sum_{x:A}\sum_{p:x=a}P(x,p)} \Psi(\Phi(g))=g}
\\
\explo{\equiv}{Definition of $\Psi$}
\\
\expro{\prod\limits_{g:\sum_{x:A}\sum_{p:x=a}P(x,p)}
(a,(\text{refl}_a,\Phi(g)))=g}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}$,\, $\Sigma$-consequent rule}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:\sum_{p:x=a}P(x,p)}(a,(\text{refl}_a,
\Phi((x,y))))=(x,y)}
\end{calcu}
\]
\[
\begin{calcu}
\explo{\simeq}{\!:\,$\Delta$\,\,;\,\,$\varphi_x:\equiv \boldsymbol{\sigma}_{x}$\,;\, $\Pi${\sc eq1}}
\\
\expro{\prod\limits_{x:A}\prod\limits_{p:x=a}\prod\limits_{z:P(x,p)}(a,(\text
{refl}_a,\Phi((x,(p,z)))))=(x,(p,z))}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\iota}'$ ($\Pi$-one-point rule)}
\\
\expro{\prod\limits_{z:P(a,\text{refl}_a)}(a,(\text{refl}_a,\Phi((a,(\text{refl}
_a,z)))))=(a,(\text{refl}_a,z))}
\\
\explo{\equiv}{Property of $\Phi$}
\\
\expro{ \prod\limits_{z:
P(a,\text{refl}_a)}(a,(\text{refl}_a,z))=(a,(\text{refl}_a,z))}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h(z):\equiv \text{refl}_{(a,(\text{refl}_a,z))}$}
\\
\expro{h}
\end{calcu}
\]
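A Lean 4 sketch of $\Phi$ and $\Psi$ for the $\Sigma$-[\textbf{One-Point}] rule (our names `onePointPhi`, `onePointPsi`). In Lean `x = a` lives in `Prop`, so the nested sum must be written with `PSigma`, and proof irrelevance of Lean's `=` makes this only an approximation of the HoTT statement:

```lean
universe u v

variable {A : Type u} {a : A} {P : (x : A) → x = a → Type v}

-- Φ : based path induction packaged for the Σ-one-point rule
def onePointPhi :
    PSigma (fun x : A => PSigma (fun p : x = a => P x p)) → P a rfl
  | ⟨_, rfl, z⟩ => z

-- Ψ(z) :≡ (a, (refl_a, z))
def onePointPsi (z : P a rfl) :
    PSigma (fun x : A => PSigma (fun p : x = a => P x p)) :=
  ⟨a, rfl, z⟩

example (z : P a rfl) : onePointPhi (onePointPsi z) = z := rfl

example : ∀ g : PSigma (fun x : A => PSigma (fun p : x = a => P x p)),
    onePointPsi (onePointPhi g) = g
  | ⟨_, rfl, z⟩ => rfl
```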
[\textbf{Equality}] rules. These equivalences correspond, in first
order logic, to the case in which we quantify over two variables that happen
to be equal: one of the quantified variables may then be replaced by the
other and, in this way, eliminated.
\[
\prod_{x,y:A}(\prod_{p:x=y}P(x,y,p)) \simeq \prod_{x:A}P(x,x,\text{refl}_x)<:
\]
and
\[
\sum_{x,y:A}(\sum_{p:x=y}P(x,y,p)) \simeq \sum_{x:A}P(x,x,\text{refl}_x)<:
\]
$\Pi$-[\textbf{Equality}] rule was proved in subsection \ref{IdType}. The proof of
$\Sigma$-[\textbf{Equality}] rule follows analogous steps to those of the
$\Sigma$-[\textbf{One-Point}] rule. We omit it.\\ [0.1cm]
[\textbf{Range Split}] rules. The range split rule is a property of
operationals in general. In the case of logical quantifications, it allows separating them into two quantifiers of the same kind as the original one: universal or existential. These operational parts are joined by conjunctions for the first kind, and by disjunctions for the second. Their ranges correspond to disjoint components of the range of the original quantification. In the case of HoTT, this splitting is possible when the range of a $\Pi$-type or a $\Sigma$-type is a coproduct type. For the case
of a $\Pi$-type, $\Pi$-[\textbf{Range Split}], its parts are joined by a Cartesian product, and in the case of a
$\Sigma$-type, $\Sigma$-[\textbf{Range Split}], they are joined by a coproduct operator, namely,
\[
\prod\limits_{x:P+Q}R(x)\simeq (\prod\limits_{x:P}R(\text{inl}(x))
)
\times (\prod\limits_{x:Q}R(\text{inr}(x)) )
\]
and
\[\sum\limits_{x:P+Q}R(x)\simeq (\sum\limits_{x:P}R(\text{inl}(x)) )
+(\sum\limits_{x:Q}R(\text{inr}(x)) )
\]
The $\Pi$-[\textbf{Range Split}] rule is related to the coproduct induction operator and was proved in subsection \ref{CopIndOp}. We now prove $\Sigma$-[\textbf{Range Split}] rule.\\ [0.1cm]
In order to get a function
$$\Phi:(\sum_{x:P+Q}R(x))\,\rightarrow\,
(\sum_{y:P}R(\text{inl}(y)))+\sum_{z:Q}R(\text{inr}(z))
$$
let us consider the following deductive chain:
\[
\begin{calcu}
\expro{(\sum\limits_{x:P+Q}R(x))\rightarrow
(\sum\limits_{y:P}R(\text{inl}(y)))+\sum\limits_{z:Q}R(\text{inr}(z))}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}$,\, $\Sigma$-consequent rule}
\\
\expro{\prod\limits_{x:P+Q}(R(x) \,\rightarrow\,
(\sum\limits_{y:P}R(\text{inl}(y))+\sum\limits_{z:Q}R(\text{inr}(z))))}
\end{calcu}
\]
\[
\begin{calcu}
\explo{\simeq}{\!:\,$\boldsymbol{\kappa}$, ($\Pi$-range split rule)}
\\
\expro{\phantom{\times} (\prod\limits_{u:P}(R(\text{inl}(u))\rightarrow
(\sum\limits_{y:P}R(\text{inl}(y)))+\sum\limits_{z:Q}R(\text{inr}(z)))) }
& \times (\prod\limits_{v:Q}(R(\text{inr}(v))\rightarrow
(\sum\limits_{y:P}R(\text{inl}(y)))+\sum\limits_{z:Q}R(\text{inr}(z))))
\\
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$\phi_0(u)(a):\equiv \text{inl}((u,a));\quad
\phi_1(v)(b):\equiv \text{inr}((v,b))$}
\\
\expro{(\phi_0,\phi_1)}
\end{calcu}
\]
Then we can put $\Phi:\equiv
\boldsymbol{\sigma}(\boldsymbol{\kappa}(\phi_0,\phi_1))$.\\ [0.1cm]
Now, in order to get a function
\[\Psi:\sum_{y:P}R(\text{inl}(y))+\sum_{z:Q}R(\text{inr}(z))\rightarrow
\sum_{x:P+Q}R(x)
\]
let us consider the following deductive chain:
\[
\begin{calcu}
\expro{((\sum\limits_{y:P}R(\text{inl}(y)))+\sum\limits_{z:Q}R(\text{inr}
(z)))\rightarrow \sum\limits_{x:P+Q}R(x)}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\kappa}$,\, ($\Pi$-range split rule)}
\\
\expro{((\sum\limits_{y:P}R(\text{inl}(y)))\rightarrow
\sum\limits_{x:P+Q}R(x))
\,\times\,
((\sum\limits_{z:Q}R(\text{inr}(z)))\rightarrow
\sum\limits_{x:P+Q}R(x)) }
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}_1\!\times\!\boldsymbol{\sigma}_2$,\, {\sc eq}$_{\times}$ (\ref{EqProd})}
\\
\expro{(\prod\limits_{y:P}(R(\text{inl}(y))\rightarrow
\sum\limits_{x:P+Q}R(x))) \times (\prod\limits_{z:Q}
(R(\text{inr}(z))\rightarrow \sum\limits_{x:P+Q}R(x))) }
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$\psi_0(y)(a):\equiv (\text{inl}(y),a);\quad
\psi_1(z)(b):\equiv (\text{inr}(z),b)$}
\\
\expro{(\psi_0,\psi_1)}
\end{calcu}
\]
Then we may define $\Psi:\equiv \boldsymbol{\kappa}((\boldsymbol{\sigma}_1\!\times\!\boldsymbol{\sigma}_2)(\psi_0,\psi_1)):\equiv \boldsymbol{\kappa}(\boldsymbol{\sigma}_1(\psi_0),\boldsymbol{\sigma}_2(\psi_1))$.\\ [0.1cm]
Observe that
\begin{multicols}{2}
$\begin{array}{rl}
\phantom{\equiv} &\Phi(\Psi(\text{inl}(f_1,f_2)))\\
\equiv&\Phi(\boldsymbol{\kappa}(\boldsymbol{\sigma}_1(\psi_0),\boldsymbol{\sigma}_2(\psi_1))(\text{inl}(f_1,f_2)))\\
\equiv&\Phi(\boldsymbol{\sigma}_1(\psi_0)(f_1,f_2))\\
\equiv&\Phi(\psi_0(f_1)(f_2))\\
\end{array}$
\columnbreak
$\begin{array}{rl}
\equiv&\Phi(\text{inl}(f_1),f_2)\\
\equiv&\boldsymbol{\kappa}(\phi_0,\phi_1)(\text{inl}(f_1))(f_2)\\
\equiv&\phi_0(f_1)(f_2)\\
\equiv&\text{inl}(f_1,f_2).
\end{array}$
\end{multicols}
In the same way we can prove that $\Phi(\Psi(\text{inr}(g_1,g_2)))\equiv \text{inr}(g_1,g_2)$.
Then
\[
\begin{calcu}
\expro{\prod\limits_{p:\sum_{y:P}R(\text{inl}(y))+\sum_{z:Q}R(\text{inr}(z))}
\Phi(\Psi(p))=p}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\kappa}$,\, ($\Pi$-range split rule)}
\\
\expro{\phantom{\times}\prod\limits_{f:\sum_{y:P}R(\text{inl}(y))}
\Phi(\Psi(\text{inl}(f)))=\text{inl}(f)\\
&\times
\prod\limits_{g:\sum_{x:Q}R(\text{inr}(x))}\Phi(\Psi(\text{inr}(g)))=\text{inr}
(g)}
\\
\explo{\simeq}{\!:\, $\boldsymbol{\sigma}_1\times \boldsymbol{\sigma}_2$, {\sc eq}$_{\times}$ (\ref{EqProd})}
\\
\expro{\phantom{\times}\prod\limits_{f_1:P}\prod\limits_{f_2:R(\text{inl}(f_1))}
\Phi(\Psi(\text{inl}(f_1,f_2)))=\text{inl}(f_1,f_2)\\
&\times
\prod\limits_{g_1:Q}\prod\limits_{g_2:R(\text{inr}(g_1))}\Phi(\Psi(\text{inr}
(g_1,g_2)))=\text{inr}(g_1,g_2)}
\end{calcu}
\]
\[
\begin{calcu}
\explo{\equiv}{Above computations}
\\
\expro{\phantom{\times}\prod\limits_{f_1:P}\prod\limits_{f_2:R(\text{inl}(f_1))}
\text{inl}(f_1,f_2)=\text{inl}(f_1,f_2)\\
&\times
\prod\limits_{g_1:Q}\prod\limits_{g_2:R(\text{inr}(g_1))}\text{inr}(g_1,
g_2)=\text{inr}(g_1,g_2)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$u(f_1,f_2):\equiv
\text{refl}_{\text{inl}(f_1,f_2)}\,\,;\,\,v(g_1,g_2):\equiv
\text{refl}_{\text{inr}(g_1,g_2)}$}
\\
\expro{(u,v)}
\end{calcu}
\]
In the other direction, observe that
\begin{multicols}{2}
$\begin{array}{rl}
\phantom{\equiv}&\Psi(\Phi(\text{inl}(w), u_2))\\
\equiv&\Psi(\boldsymbol{\sigma}(\boldsymbol{\kappa}(\phi_0,\phi_1))(\text{inl}(w),u_2))\\
\equiv&\Psi(\boldsymbol{\kappa}(\phi_0,\phi_1)(\text{inl}(w))(u_2))\\
\equiv&\Psi(\phi_0(w)(u_2))\\
\end{array}$
\columnbreak
$\begin{array}{rll}
\equiv&\Psi(\text{inl}(w,u_2))\\
\equiv&\boldsymbol{\kappa}(\boldsymbol{\sigma}_1(\psi_0),\boldsymbol{\sigma}_2(\psi_1))(\text{inl}(w,u_2))\\
\equiv&\boldsymbol{\sigma}_1(\psi_0)(w,u_2)\\
\equiv&\psi_0(w)(u_2)\equiv (\text{inl}(w),u_2).
\end{array}$
\end{multicols}
In the same way we can prove that $\Psi(\Phi(\text{inr}(z), u_2))\equiv
(\text{inr}(z), u_2)$.
Then
\[
\begin{calcu}
\expro{\prod\limits_{u:\sum_{x:P+Q}R(x)}\Psi(\Phi(u))=u}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}$,\, $\Sigma$-consequent rule}
\\
\expro{\prod\limits_{u_1:P+Q}\prod\limits_{u_2:R(u_1)}\Psi(\Phi(u_1,u_2))=(u_1,
u_2)}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\kappa}$,\, ($\Pi$-range split rule)}
\\
\expro{\phantom{\times}\prod\limits_{w:P}\prod\limits_{u_2:R(\text{inl}(w))}
\Psi(\Phi(\text{inl}(w),u_2))=
(\text{inl}(w),u_2)\\
&\times
\prod\limits_{z:Q}\prod\limits_{u_2:R(\text{inr}(z))}\Psi(\Phi(\text{inr}(z),
u_2))=(\text{inr}(z),u_2)}
\\
\explo{\equiv}{Above computations}
\\
\expro{\phantom{\times}\prod\limits_{w:P}\prod\limits_{u_2:R(\text{inl}(w))}
(\text{inl}(w),u_2)=
(\text{inl}(w),u_2)\\
&\times
\prod\limits_{z:Q}\prod\limits_{u_2:R(\text{inr}(z))}(\text{inr}(z),u_2)=(\text{
inr}(z),u_2)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h:\equiv (\lambda w.\lambda u_2.\text{refl}_{(\text{inl}(w),u_2)}, \lambda z.\lambda u_2.\text{refl}_{(\text{inr}(z),u_2)})$}
\\
\expro{h}
\end{calcu}
\]
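The two canonical maps of the $\Sigma$-[\textbf{Range Split}] rule, and the computations just performed, can be sketched in Lean 4 (the names `splitPhi` and `splitPsi` are ours):

```lean
universe u v w

variable {P : Type u} {Q : Type v} {R : Sum P Q → Type w}

-- Φ : split a Σ over a coproduct range into a coproduct of Σ-types
def splitPhi : ((x : Sum P Q) × R x) →
    Sum ((y : P) × R (Sum.inl y)) ((z : Q) × R (Sum.inr z))
  | ⟨Sum.inl y, r⟩ => Sum.inl ⟨y, r⟩
  | ⟨Sum.inr z, r⟩ => Sum.inr ⟨z, r⟩

-- Ψ : the quasi-inverse
def splitPsi : Sum ((y : P) × R (Sum.inl y)) ((z : Q) × R (Sum.inr z)) →
    (x : Sum P Q) × R x
  | Sum.inl ⟨y, r⟩ => ⟨Sum.inl y, r⟩
  | Sum.inr ⟨z, r⟩ => ⟨Sum.inr z, r⟩

example : ∀ p : Sum ((y : P) × R (Sum.inl y)) ((z : Q) × R (Sum.inr z)),
    splitPhi (splitPsi p) = p
  | Sum.inl ⟨y, r⟩ => rfl
  | Sum.inr ⟨z, r⟩ => rfl

example : ∀ u : (x : Sum P Q) × R x, splitPsi (splitPhi u) = u
  | ⟨Sum.inl y, r⟩ => rfl
  | ⟨Sum.inr z, r⟩ => rfl
```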
[\textbf{Term Split}] rules. In logic, universal quantifications of conjunctions
split (through an equivalence) into universal quantifications of each conjunct,
joined by conjunctions too. Dually, existential quantifications of disjunctions split into existential quantifications of each disjunct joined by disjunctions. In the case of HoTT, $\Pi$-types mapping into Cartesian products split into $\Pi$-types for each factor joined by Cartesian products, the $\Pi$-[\textbf{Term Split}] rule. Dually, for
$\Sigma$-types, we have an analogous situation replacing Cartesian products by coproducts, the $\Sigma$-[\textbf{Term Split}] rule. Namely,
\[
\prod\limits_{x:A}(P(x)\times Q(x)) \simeq (\prod\limits_{x:A}P(x)
) \times (\prod\limits_{x:A}Q(x) )
\]
and
\[
\sum\limits_{x:A}(P(x)+ Q(x)) \simeq (\sum\limits_{x:A}P(x)
)
+ (\sum\limits_{x:A}Q(x) )
\]
\\[0.1cm]
To prove $\Pi$-[\textbf{Term Split}] rule, let $\Phi:(\prod_{x:A}P(x)) \times
(\prod_{y:A}Q(y)) \rightarrow \prod_{x:A}(P(x) \times Q(x))$ be
defined by $\Phi(u)(x)\!:\equiv\! ((\text{pr}_1u)(x),(\text{pr}_2u)(x))$, and also, let
$\Psi\!:\!\prod_{x:A}(P(x)\!\times\! Q(x)) \rightarrow
(\prod\limits_{x:A}P(x)) \times\prod_{y:A}Q(y)$ be defined by
$\Psi(g):\equiv (\text{pr}_1\circ g, \text{pr}_2\circ g)$. Let us see that
$\Psi$ is a quasi-inverse of $\Phi$:
\[
\begin{calcu}
\expro{\Psi\circ \Phi \sim
\text{id}_{\prod\limits_{x:A}P(x)\times\prod\limits_{y:A}Q(y)}}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{u:\prod\limits_{x:A}P(x)\times\prod\limits_{y:A}Q(y)}
\Psi(\Phi(u))=u}
\\
\explo{\equiv}{Definition of $\Psi$}
\\
\expro{\prod\limits_{u:\prod\limits_{x:A}P(x)\times\prod\limits_{y:A}Q(y)}(\text
{pr}_1 \circ \Phi(u), \text{pr}_2 \circ \Phi(u))=u}
\\
\explo{\equiv}{Definition of $\Phi$}
\\
\expro{\prod\limits_{u:\prod\limits_{x:A}P(x)\times\prod\limits_{y:A}Q(y)}(\text
{pr}_1u, \text{pr}_2u)=u}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{Uniqueness principle of pairs (\ref{UniqPairs})}
\\
\expro{h}
\\
\end{calcu}
\]
Now let us show that $\Phi\circ \Psi \sim \text{id}<:$
\[
\begin{calcu}
\expro{\Phi\circ \Psi \sim \text{id}_{\prod\limits_{x:A}P(x)\times Q(x)}}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{g:\prod\limits_{x:A}P(x)\times Q(x)}\Phi(\Psi(g))=g}
\\
\explo{\equiv}{Definition of $\Psi$}
\\
\expro{\prod\limits_{g:\prod\limits_{x:A}P(x)\times Q(x)}\Phi((\text{pr}_1
\circ
g, \text{pr}_2 \circ g))=g}
\\
\explo{\simeq}{Function extensionality (\ref{FuncExt})}
\\
\expro{\prod\limits_{g:\prod\limits_{x:A}P(x)\times Q(x)}\Phi((\text{pr}_1
\circ
g, \text{pr}_2 \circ g))\sim g}
\\
\explo{\equiv}{Definition of $\sim$}
\\
\expro{\prod\limits_{g:\prod\limits_{x:A}P(x)\times
Q(x)}\prod\limits_{x:A}\Phi((\text{pr}_1 \circ g, \text{pr}_2 \circ g))(x)=
g(x)}
\\
\explo{\equiv}{Definition of $\Phi$}
\\
\expro{\prod\limits_{g:\prod\limits_{x:A}P(x)\times
Q(x)}\prod\limits_{x:A}(\text{pr}_1 (g(x)), \text{pr}_2 (g(x)))=g(x)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{Uniqueness principle of pairs (\ref{UniqPairs})}
\\
\expro{k.}
\\
\end{calcu}
\]
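A Lean 4 sketch of $\Phi$ and $\Psi$ for the $\Pi$-[\textbf{Term Split}] rule (our names `termSplitPhi`, `termSplitPsi`); in Lean both round trips hold definitionally ($\eta$ for pairs and for functions), while the text derives them from function extensionality and the uniqueness principle of pairs:

```lean
universe u v w

variable {A : Type u} {P : A → Type v} {Q : A → Type w}

-- Φ(u)(x) :≡ (pr₁u(x), pr₂u(x))
def termSplitPhi (u : (∀ x : A, P x) × (∀ x : A, Q x)) :
    ∀ x : A, P x × Q x :=
  fun x => (u.1 x, u.2 x)

-- Ψ(g) :≡ (pr₁ ∘ g, pr₂ ∘ g)
def termSplitPsi (g : ∀ x : A, P x × Q x) :
    (∀ x : A, P x) × (∀ x : A, Q x) :=
  (fun x => (g x).1, fun x => (g x).2)

example (u : (∀ x : A, P x) × (∀ x : A, Q x)) :
    termSplitPsi (termSplitPhi u) = u := rfl
example (g : ∀ x : A, P x × Q x) :
    termSplitPhi (termSplitPsi g) = g := rfl
```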
And now, we prove the $\Sigma$-[\textbf{Term Split}] rule:\\ [0.1cm]
In order to get a function
\[
\Phi:\sum_{x:A} (P(x)+Q(x)) \rightarrow (\sum_{x:A}P(x)) + (\sum_{x:A}Q(x))
\]
let us consider the following deductive chain:
\[
\begin{calcu}
\expro{\sum\limits_{x:A}(P(x)+Q(x)) \rightarrow (\sum\limits_{x:A}P(x))
+(\sum\limits_{x:A}Q(x))}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}$,\, $\Sigma$-consequent rule}
\\
\expro{\prod\limits_{x:A}((P(x)+Q(x)) \rightarrow (\sum\limits_{x:A}P(x))
+(\sum\limits_{x:A}Q(x)))}
\\
\explo{\simeq}{\!:\,$\Delta\,\,;\,\,\varphi_x:\equiv \boldsymbol{\kappa}_x$, $\Pi$\sc{eq1}}
\\
\expro{\prod\limits_{x:A}((P(x) \rightarrow
(\sum\limits_{x:A}P(x)) +(\sum\limits_{x:A}Q(x))) \times (Q(x) \rightarrow
(\sum\limits_{x:A}P(x)) +(\sum\limits_{x:A}Q(x))))}
\\
\explo{\simeq}{\!:\,$\eta\,\,;\,\,\eta(u,v):\equiv\lambda x.(u(x),v(x))$, ($\Pi$-term split rule) }
\\
\expro{\prod\limits_{x:A}(P(x) \rightarrow
\sum\limits_{x:A}P(x)+\sum\limits_{x:A}Q(x))\times \prod\limits_{x:A}(Q(x) \rightarrow
\sum\limits_{x:A}P(x)+\sum\limits_{x:A}Q(x))}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$\phi_1:\equiv\lambda x.\lambda y.\text{inl}(x,y)$;
$\phi_2:\equiv\lambda x.\lambda z.\text{inr}(x,z)$}
\\
\expro{(\phi_1,\phi_2)}
\end{calcu}
\]
In the chain above, $\boldsymbol{\kappa}_{x}$ is the induction operator for $P(x)+Q(x)$ evaluated at the constant type
family $D(y):\equiv(\sum_{x:A}P(x))+(\sum_{x:A}Q(x))$. Then, we may define $\Phi:\equiv
\boldsymbol{\sigma}(\Delta(\eta(\phi_1,\phi_2)))$.\\ [0.1cm]
In order to get a function
\[
\Psi:(\sum_{x:A}P(x))+(\sum_{x:A}Q(x))\rightarrow \sum_{x:A}(P(x)+Q(x))
\]
let us consider the following deductive chain:
\[
\begin{calcu}
\expro{(\sum\limits_{x:A}P(x)) +(\sum\limits_{x:A}Q(x)) \rightarrow
\sum\limits_{x:A}(P(x)+Q(x))}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\kappa}$, ($\Pi$-range split rule)}
\\
\expro{((\sum\limits_{x:A}P(x)) \rightarrow \sum\limits_{x:A}(P(x)+Q(x)))\times
((\sum\limits_{x:A}Q(x)) \rightarrow \sum\limits_{x:A}(P(x)+Q(x)))}
\\
\explo{\simeq}{\!:\, $\boldsymbol{\sigma}_1\!\times\!\boldsymbol{\sigma}_2$, {\sc eq}$_{\times}$ (\ref{EqProd})}
\\
\expro{\prod\limits_{x:A}(P(x) \rightarrow \sum\limits_{x:A}(P(x)+Q(x)))\times
\prod\limits_{x:A}(Q(x) \rightarrow \sum\limits_{x:A}(P(x)+Q(x)))}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$\psi_1:\equiv\lambda x.\lambda y.(x,\text{inl}(y))$\,;\,
$\psi_2:\equiv \lambda x.\lambda z.(x,\text{inr}(z))$}
\\
\expro{(\psi_1,\psi_2)}
\end{calcu}
\]
Then we may define $\Psi:\equiv \boldsymbol{\kappa}((\boldsymbol{\sigma}_1\!\times\!\boldsymbol{\sigma}_2)(\psi_1,\psi_2)):\equiv \boldsymbol{\kappa}(\boldsymbol{\sigma}_1(\psi_1),\boldsymbol{\sigma}_2(\psi_2))$.\\
Observe that
\begin{multicols}{2}
$\begin{array}{rl}
\phantom{\equiv}&\Phi(\Psi(\text{inl}(a_1,a_2)))\\
\equiv&\Phi(\boldsymbol{\kappa}(\boldsymbol{\sigma}_1(\psi_1),\boldsymbol{\sigma}_2(\psi_2))(\text{inl}(a_1,a_2)))\\
\equiv & \Phi(\boldsymbol{\sigma}_1(\psi_1)(a_1,a_2))\\
\equiv&\Phi(\psi_1(a_1)(a_2))\\
\equiv& \Phi(a_1, \text{inl}(a_2))
\end{array}$
\columnbreak
$
\begin{array}{rl}
\equiv&\boldsymbol{\sigma}(\Delta(\eta(\phi_1,\phi_2)))(a_1, \text{inl}(a_2)) \\
\equiv&\Delta(\eta(\phi_1,\phi_2))(a_1) (\text{inl}(a_2))\\
\equiv&\boldsymbol{\kappa}_{a_1}(\phi_1(a_1),\phi_2(a_1))(\text{inl}(a_2))\\
\equiv&\phi_1(a_1)(a_2)\;\;\,\equiv\text{inl}(a_1,a_2).
\end{array}
$
\end{multicols}
In the same way, $\Phi(\Psi(\text{inr}(b_1,b_2)))\equiv\text{inr}(b_1,b_2)$.
Then
\[
\begin{calcu}
\expro{\prod\limits_{p:\sum_{x:A}P(x)+\sum_{x:A}Q(x)} \Phi(\Psi(p))=p}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\kappa}$, ($\Pi$-range split rule)}
\\
\expro{\prod\limits_{a:\sum_{x:A}P(x)} \Phi(\Psi(\text{inl}(a)))=\text{inl}(a)
\times \prod\limits_{b:\sum_{x:A}Q(x)}
\Phi(\Psi(\text{inr}(b)))=\text{inr}(b)}
\end{calcu}
\]
\[
\begin{calcu}
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}_1\!\times\!\boldsymbol{\sigma}_2$, {\sc eq}$_{\times}$ (\ref{EqProd})}
\\
\expro{\phantom{\times}\prod\limits_{a_1:A}\prod\limits_{a_2:P(a_1)}
\Phi(\Psi(\text{inl}(a_1,a_2)))=\text{inl}(a_1,a_2)\\
& \times \prod\limits_{b_1:A}\prod\limits_{b_2:Q(b_1)}
\Phi(\Psi(\text{inr}(b_1,b_2)))=\text{inr}(b_1,b_2)}
\\
\explo{\equiv}{Above computations}
\\
\expro{\phantom{\times}\prod\limits_{a_1:A}\prod\limits_{a_2:P(a_1)}
\text{inl}(a_1,a_2)=\text{inl}(a_1,a_2)\\
& \times \prod\limits_{b_1:A}\prod\limits_{b_2:Q(b_1)}
\text{inr}(b_1,b_2)=\text{inr}(b_1,b_2)}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$u:\equiv\lambda a_1.\lambda a_2.\text{refl}_{\text{inl}(a_1,a_2)}$\,\,;\,\,$v:\equiv\lambda b_1.\lambda b_2.\text{refl}_{\text{inr}(b_1,b_2)}$}
\\
\expro{(u,v)}
\end{calcu}
\]
In the other direction,
\[
\begin{calcu}
\expro{\prod\limits_{p:\sum_{x:A}P(x)+Q(x)} \Psi(\Phi(p))=p}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}$,\, $\Sigma$-consequent rule}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:P(x)+Q(x)} \Psi(\Phi(x,y))=(x,y)}
\\
\explo{\equiv}{Definition of $\Phi$}
\\
\expro{\prod\limits_{x:A}\prod\limits_{y:P(x)+Q(x)}
\Psi(\boldsymbol{\kappa}(\phi_1(x),\phi_2(x))(y))=(x,y)}
\\
\explo{\simeq}{\!:\,$\Delta$\,;\, $\varphi_x:\equiv \boldsymbol{\kappa}_x$\,;\, $\Pi${\sc Eq}1}
\\
\expro{\prod\limits_{x:A}\phantom{\times} (\prod\limits_{w:P(x)}
\Psi(\boldsymbol{\kappa}(\phi_1(x),\phi_2(x))(\text{inl}(w)))=(x,\text{inl}
(w)) \\
& \phantom{\times\times} \times \prod\limits_{z:Q(x)}
\Psi(\boldsymbol{\kappa}(\phi_1(x),\phi_2(x))(\text{inr}(z)))=(x,\text{inr}
(z))) }
\\
\explo{\equiv}{Definition of $\boldsymbol{\kappa}$}
\\
\expro{\prod\limits_{x:A}\phantom{\times}( \prod\limits_{w:P(x)}
\Psi(\phi_1(x)(w))=(x,\text{inl}(w)) \\
& \phantom{\times\times} \times \prod\limits_{z:Q(x)}
\Psi(\phi_2(x)(z))=(x,\text{inr}(z))) }
\\
\explo{\equiv}{Definition of $\phi_1$ and $\phi_2$}
\\
\expro{\prod\limits_{x:A}\phantom{\times}( \prod\limits_{w:P(x)}
\Psi(\text{inl}(x,w))=(x,\text{inl}(w)) \\
& \phantom{\times\times} \times \prod\limits_{z:Q(x)}
\Psi(\text{inr}(x,z))=(x,\text{inr}(z))) }
\\
\explo{\equiv}{Definition of $\Psi$}
\\
\expro{\prod\limits_{x:A}\phantom{\times}( \prod\limits_{w:P(x)}
(x,\text{inl}(w))=(x,\text{inl}(w)) \\
& \phantom{\times\times} \times \prod\limits_{z:Q(x)}
(x,\text{inr}(z))=(x,\text{inr}(z))) }
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$u:\equiv \lambda x.(\lambda w.\text{refl}_{(x,\text{inl}(w))},\lambda z.\text{refl}_{(x,\text{inr}(z))})$}
\\
\expro{u}
\end{calcu}
\]
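The maps $\Phi$ and $\Psi$ of the $\Sigma$-[\textbf{Term Split}] rule compute exactly as above; a Lean 4 sketch (our names `sumSplitPhi`, `sumSplitPsi`):

```lean
universe u v w

variable {A : Type u} {P : A → Type v} {Q : A → Type w}

-- Φ : distribute a Σ over a coproduct in its term
def sumSplitPhi : ((x : A) × Sum (P x) (Q x)) →
    Sum ((x : A) × P x) ((x : A) × Q x)
  | ⟨x, Sum.inl y⟩ => Sum.inl ⟨x, y⟩
  | ⟨x, Sum.inr z⟩ => Sum.inr ⟨x, z⟩

-- Ψ : the quasi-inverse
def sumSplitPsi : Sum ((x : A) × P x) ((x : A) × Q x) →
    (x : A) × Sum (P x) (Q x)
  | Sum.inl ⟨x, y⟩ => ⟨x, Sum.inl y⟩
  | Sum.inr ⟨x, z⟩ => ⟨x, Sum.inr z⟩

example : ∀ p : Sum ((x : A) × P x) ((x : A) × Q x),
    sumSplitPhi (sumSplitPsi p) = p
  | Sum.inl ⟨x, y⟩ => rfl
  | Sum.inr ⟨x, z⟩ => rfl

example : ∀ g : (x : A) × Sum (P x) (Q x),
    sumSplitPsi (sumSplitPhi g) = g
  | ⟨x, Sum.inl y⟩ => rfl
  | ⟨x, Sum.inr z⟩ => rfl
```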
[\textbf{Translation}] rules correspond to the derived
inference rules $\Pi${\sc Eq}2 and $\Sigma${\sc Eq}2, which were proved in subsection \ref{LeibInf}.\\[2mm]
[\textbf{Congruence}] rules correspond to the derived inference rules $\Pi${\sc Eq}1 and $\Sigma${\sc Eq}1, stated and proved in subsection
\ref{LeibInf}.\\[2mm]
[\textbf{Antecedent}] rules correspond to equivalences in first
order logic that allow introducing the antecedent of an implication into the term of a logical operational when the quantified variables do not occur free in this antecedent. For HoTT, we only have an equivalence for the case of $\Pi$-types, the $\Pi$-[\textbf{Antecedent}] rule. For $\Sigma$-types we have an equivalence only if the antecedent is a mere proposition. Namely,
\[
( P\rightarrow \prod\limits_{x:A}Q(x)) \; \simeq \;
\prod\limits_{x:A} (P\rightarrow Q(x))<:
\]
and
\[\sum_{x:A}(P\rightarrow Q(x))
\rightarrow ( P\rightarrow \sum_{x:A}Q(x)) <:
\]
If $P\simeq \mathds 1<:$ then we get the equivalence.\\ [0.1cm]
The proof of $\Pi$-[\textbf{Antecedent}] rule appears in section \ref{InhArr}. We prove $\Sigma$-[\textbf{Antecedent}] rule.
Let us consider the following deductive chain.
\[
\begin{calcu}
\expro{\sum\limits_{x:A}(P\rightarrow Q(x))\rightarrow (P\rightarrow
\sum\limits_{x:A}Q(x))}
\\
\explo{\simeq}{\!:\,$\boldsymbol{\sigma}$,\, $\Sigma$-consequent rule}
\\
\expro{\prod\limits_{x:A}((P\to Q(x))\rightarrow (P\to \sum\limits_{x:A}Q(x)))}
\\
\explo{\stackrel{\mbox{\tiny $\wedge$}}{\mbox{\tiny :}}}{$h(x)(u)(y):\equiv (x,u(y))$}
\\
\expro{h}
\end{calcu}
\]
This proves the first part. Now, if $P\simeq \mathds 1<:$, let
\[
\psi:({\mathds 1}\to \sum_{x:A}Q(x))\rightarrow \sum_{x:A}({\mathds 1}\to Q(x))
\]
be defined by
\[
\psi(u):\equiv (\text{pr}_1(u(*)),\text{pr}_2\!\circ\! u).
\]
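A Lean 4 sketch of the $\Sigma$-[\textbf{Antecedent}] map $h$ and, for the case $P\simeq\mathds 1$ (modelled by `Unit`), of the inverse $\psi$ (the names `antecedent` and `antecedentInv` are ours):

```lean
universe u v w

variable {A : Type u} {P : Type v} {Q : A → Type w}

-- h(x)(u)(y) :≡ (x, u(y))
def antecedent : ((x : A) × (P → Q x)) → (P → (x : A) × Q x)
  | ⟨x, u⟩ => fun y => ⟨x, u y⟩

-- ψ(u) :≡ (pr₁(u(*)), pr₂ ∘ u), specialised to the one-point antecedent
def antecedentInv (u : Unit → (x : A) × Q x) : (x : A) × (Unit → Q x) :=
  ⟨(u ()).1, fun _ => (u ()).2⟩

-- on the unique point of Unit the round trip computes away
example (u : Unit → (x : A) × Q x) :
    antecedent (antecedentInv u) () = u () := rfl
```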
\section{Inhabiting arrows}\label{InhArr}
One of the tasks in homotopy type theory is to determine a formula for a
function from type $A$ to a type $B$. We found that in several cases the
structures of types $A$ and $B$ determine a natural matching of their objects
defining a function from $A$ to $B$. We call such a mapping a {\it
canonical function}. An attempt to systematize this task is to make precise the way
in which we can get out of type $A$ through its eliminators and the way in
which we can get into type $B$ through its constructors. To do so, we define the
\textit{exit door} and the \textit{entry door} of a type. Of course, there will
be types $A$ and $B$ for which there is no canonical function. This procedure is rather informal and has no relation with deductive chains, but it allows us, in several cases, to find the canonical function.
The entry door of a type is a $\lambda$-expression that represents a constructed
object of the type, i.e., an object of the type obtained from its
constructors. The exit door of a type is a $\lambda$-expression that represents an
eliminated object of the type, i.e., an object of the type constructed from the
elimination of a generic object. For instance, the entry door of the type
$\sum_{x:A}C(x)$ is the $\lambda$-expression
\[\left(u_1\!:\!A, u_2:C(u_1)\right)
\]
because a constructed object of the type is a dependent pair of objects $u_1$
of
type $A$ and $u_2$ of type $C(u_1)$. Then, we write
\[
\begin{calcu}
\expro{\sum\limits_{x:A}C(x)}
\\
\explo{\uparrow}{entry door}
\\
\expro{\left(\,u_1\!:\!A\,,\, u_2:C(u_1)\,\right)}
\end{calcu}
\]
The exit door of this type is the $\lambda$-expression
\[\left(\, \text{pr}_1 (u)\!:\!A\,,\, \text{pr}_2(u)\!:\!C(\text{pr}_1(u))\,\right)
\]
because it is the dependent pair constructed from the elimination of a generic
object $u$ of type $\sum\limits_{x:A}C(x)$ through their projections. We write
\[
\begin{calcu}
\expro{\left(\, \text{pr}_1 (u)\!:\!A\,,\, \text{pr}_2(u)\!:\!C(\text{pr}_1(u))\,\right)}
\\
\explo{\downarrow}{exit door}
\\
\expro{\sum\limits_{x:A}C(x).}
\end{calcu}
\]
The doors of a type can be used to determine a formula for a canonical function
from a type to another, by matching the exit door of the source type with the
entry door of the destination type. For instance, let us determine a function
from
$\sum_{x:A}C(x)$ to itself.
This means that we have to determine an object $\Phi$ in the following link
\[
\begin{calcu}
\expro{\sum\limits_{x:A}C(x)}
\\
\explo{\leftarrow}{\!:\,$\Phi$}
\\
\expro{\sum\limits_{x:A}C(x),}
\end{calcu}
\]
i.e., we have to match the exit door $\left(\, \text{pr}_1 (u)\!:\!A\,,\,
\text{pr}_2(u)\!:\!C(\text{pr}_1(u))\,\right)$ and the entry door $\left(\Phi(u)_1\!:\!A,
\Phi(u)_2:C(\Phi(u)_1)\right)$ of the type $\sum_{x:A}C(x)$, task that
we represent with the following matching diagram
\[
\begin{calcu}
\expro{\sum\limits_{x:A}C(x)}
\\
\explo{\uparrow}{entry door}
\\
\expro{\left(\Phi(u)_1\!:\!A, \Phi(u)_2:C(\Phi(u)_1)\right)}
\\
\explo{\mapsfrom}{Looking for definition}
\\
\expro{\left(\, \text{pr}_1 (u)\!:\!A\,,\, \text{pr}_2(u)\!:\!C(\text{pr}_1(u))\,\right)}
\\
\explo{\downarrow}{exit door}
\\
\expro{\sum\limits_{x:A}C(x),}
\end{calcu}
\]
where $\mapsfrom$ means that some sort of symbolic matching between two
expressions must be discovered. By matching the doors we get
\[
\Phi(u):\equiv (\text{pr}_1(u), \text{pr}_2(u)).
\]
Observe that the canonical function in this case is not the identity
function.\\
[0.1cm]
Let us determine the canonical function $\Phi$ from $\prod_{x:A}B(x)$ to
itself. The corresponding matching diagram is
\[
\begin{calcu}
\expro{\prod\limits_{x:A}B(x)}
\\
\explo{\uparrow}{entry door}
\\
\expro{\lambda(x\!:\!A).(\Phi(f)(x)\!:\!B(x))}
\\
\explo{\mapsfrom}{?}
\\
\expro{\lambda(x\!:\!A).(f(x)\!:\!B(x))}
\\
\explo{\downarrow}{exit door}
\\
\expro{\prod\limits_{x:A}B(x).}
\end{calcu}
\]
Therefore, by matching, we get
\[
\Phi(f)(x):\equiv f(x),
\]
which, by uniqueness, is the identity function. \\ [0.1cm]
We now present some examples illustrating this technique.\\[0.1cm]
\textbf{$\Pi$-distribution over arrows}. As promised in section \ref{DistArr}, we show how to obtain the canonical function $\Phi:\equiv \lambda u.\Phi(u)$ of the type
\[\prod\limits_{x\!:\!A} (P(x) \rightarrow Q(x))
\rightarrow
( \prod\limits_{x\!:\!A} P (x) \rightarrow \prod\limits_{x: A} Q (x)) .
\]
For that, the corresponding entry and exit doors are matched,
\[
\begin{calcu}
\expro{\prod\limits_{x: A} P (x) \rightarrow \prod\limits_{x: A} Q (x)}
\\
\explo{\uparrow}{entry door}
\\
\expro{\lambda (z\!:\!\prod\limits_{x: A} P (x)). \lambda (x\!:\!A).
\Phi(u)(z)(x)}
\\
\explo{\mapsfrom}{?}
\\
\expro{\lambda (x\!:\!A).\lambda (y\!:\!P(x)).u(x)(y)}
\\
\explo{\downarrow}{exit door}
\\
\expro{\prod\limits _{x:A}(P(x) \rightarrow Q(x))}
\end{calcu}
\]
obtaining
\[\Phi(u)(z)(x):\equiv u(x)(z(x)).
\]
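As an illustration outside the paper's formalism, the canonical function just obtained can be type-checked in a proof assistant; the following Lean~4 sketch (our own rendering) mirrors $\Phi(u)(z)(x):\equiv u(x)(z(x))$:

```lean
-- Lean 4 sketch (illustrative): the canonical Pi-distribution over arrows,
-- Phi u z x := u x (z x), exactly as obtained from the matching diagram.
def Phi {A : Type} {P Q : A → Type}
    (u : (x : A) → (P x → Q x)) : ((x : A) → P x) → (x : A) → Q x :=
  fun z x => u x (z x)
```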
$\Pi$-[\textbf{Antecedent}] rule. In order to prove that
\[
( P\rightarrow \prod\limits_{x:A}Q(x) ) \; \simeq \;
\prod\limits_{x:A} (P\rightarrow Q(x))
\]
we have to determine a 4-tuple $(\Phi, \Phi',\alpha, \alpha')$ inhabiting the equivalence type.
Consider the following entry-exit door arguments:
\[
\begin{calcu}
\expro{P\rightarrow \prod\limits_{x:A}Q(x)}
\\
\explo{\uparrow}{entry door}
\\
\expro{\lambda(y\!:\!P).\lambda(x\!:\!A).(\Phi(u)(y)(x):Q(x) )}
\\
\explo{\mapsfrom}{$\Phi(u)(y)(x):\equiv u(x)(y)$}
\\
\expro{\lambda(x\!:\!A).\lambda(y\!:\!P).(u(x)(y):Q(x)) }
\\
\explo{\downarrow}{exit door}
\\
\expro{\prod\limits_{x:A} (P\rightarrow Q(x)),}
\end{calcu}
\]
and
\[
\begin{calcu}
\expro{\prod\limits_{x:A} (P\rightarrow Q(x))}
\\
\explo{\uparrow}{entry door}
\\
\expro{\lambda(x\!:\!A).\lambda(y\!:\!P).(\Phi'(v)(x)(y)\!:\!Q(x)) }
\\
\explo{\mapsfrom}{$\Phi'(v)(x)(y):\equiv v(y)(x)$}
\\
\expro{\lambda(y\!:\!P).\lambda(x\!:\!A).(v(y)(x)\!:\!Q(x))}
\\
\explo{\downarrow}{exit door}
\\
\expro{P\rightarrow \prod\limits_{x:A}Q(x).}
\end{calcu}
\]
Observe that, by definition of $\Phi$ and $\Phi'$,
\[\Phi'(\Phi(u))(x)(y)\equiv \Phi(u)(y)(x)\equiv u(x)(y)\]
and
\[\Phi(\Phi'(v))(y)(x)\equiv\Phi'(v)(x)(y)\equiv v(y)(x).\]
This shows that $\Phi'$ and $\Phi$ are inverses of each other, and hence that $\Phi'\circ\Phi\sim \text{id}$ and $\Phi\circ\Phi'\sim \text{id}$.
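The pair $(\Phi,\Phi')$ can also be written down in a proof assistant; in the following Lean~4 sketch (ours) the two homotopies become definitional equalities, thanks to the eta rule for functions:

```lean
-- Lean 4 sketch (illustrative): the mutually inverse canonical functions of
-- the Pi-antecedent rule.  Both composites reduce to the identity
-- definitionally (function eta), so `rfl` closes the two examples.
def Phi {A P : Type} {Q : A → Type}
    (u : (x : A) → (P → Q x)) : P → (x : A) → Q x :=
  fun y x => u x y

def Phi' {A P : Type} {Q : A → Type}
    (v : P → (x : A) → Q x) : (x : A) → (P → Q x) :=
  fun x y => v y x

example {A P : Type} {Q : A → Type} (u : (x : A) → (P → Q x)) :
    Phi' (Phi u) = u := rfl

example {A P : Type} {Q : A → Type} (v : P → (x : A) → Q x) :
    Phi (Phi' v) = v := rfl
```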
\section{Conclusions}
We were able to obtain a formal deduction method in HoTT based on deductive chains, and we found that the most important equational axioms and rules of a calculational version of intuitionistic logic (ICL) have counterparts as derivable judgments in HoTT. Some of these judgments correspond to homotopy-equivalence versions of the induction operators of basic types in HoTT.
We think that the use of deductive chains to formally prove HoTT theorems is more effective, clear, and readable than rigorous proofs written on paper by a human. This is so because the proofs are made of formally precise, linearly chained modules, which characterize the linear proof formats we call deductive chains. In our view, this way of proving has the advantage of, on the one hand, preserving formality and avoiding the ambiguities and imprecisions that may come with rigorous but colloquial proofs typical of the working mathematician; on the other hand, the proofs are constructed via very simple and precise steps, amenable to being carried out by hand. We hope to have helped demystify the widespread belief that formal proofs are too messy and too long to be readable and performable, in a practical way, by humans.\\
This work appears to make possible the restatement of the whole of HoTT in terms of an appropriate calculus of equational deduction.\\
Finally, we expect that our research will motivate exploring the proof theory associated with calculational methods of proof. We also think that it would be worthwhile to develop proof assistants and verifiers to support the automation of these methods.
\bibliographystyle{abbrv}
\section{Introduction}
For subluminal particles (``tardyons''),
the dispersion relation for the energy $E$ in terms of the velocity $v$ is given by
$E = m/\sqrt{1-v^2}$ (with $v < 1$),
and for superluminal particles (``tachyons''),
it reads as $E = m/\sqrt{v^2-1}$ with~$v > 1$.
Therefore, the ``light barrier'' at $v = 1$ (we set the speed of light
equal to one) looks like an (infinitely) elevated mountain
in terms of the energy of a relativistic particle.
Recami~\cite{Re2009} quotes Sudarshan with
reference to an imaginary demographer who studies population patterns on
the Indian subcontinent:
``Suppose a demographer calmly asserts that there are no people North of the
Himalayas, since none could climb over the mountain ranges! That would be an
absurd conclusion. People of central Asia are born there and live there: they
did not have to be born in India and cross the mountain range. So with
faster-than-light particles.''
In the early morning hours of 23 February 1987 (at 2'52'36''),
an unexpected neutrino bunch arrived at the LSD detector under the Mont Blanc
roughly 4.5 hours before the rest of the neutrinos from SN1987A,
and before the supernova became visible~\cite{DaEtAl1987}.
We are currently facing mounting evidence that
neutrinos may be genuinely superluminal particles (``tachyons'').
The MINOS experiment~\cite{AdEtAl2007} has measured
superluminal neutrino propagation velocities
which differ from the speed of light by a relative factor of
$(5.1 \pm 2.9) \times 10^{-5}$ at an energy of
about $E_\nu \approx 3 \, \mathrm{GeV}$,
supporting an earlier FERMILAB experiment
where the trend of the data also pointed toward
superluminal neutrinos~\cite{KaEtAl1979}.
This result has recently been confirmed by OPERA~\cite{OPERA2011v2}
with better statistics and in a wider energy interval,
as detailed below.
One of the prime candidates for a genuinely
superluminal particle is the neutrino, which has
never been observed at rest.
A number of experimental groups have measured negative
mass squares for the electron neutrino from tritium beta decay
endpoints~\cite{RoEtAl1991,AsEtAl1994,StDe1995,AsEtAl1996}
with mean values in the interval $-147 \,\mathrm{eV}^2 < m^2_{\nu} < 0$
for the electron neutrino mass square, at an energy of the order
of $E_\nu \approx 18 \, \mathrm{keV}$.
While some recent measurements indicate values consistent
with a vanishing neutrino mass~\cite{WeEtAl1999,LoEtAl1999,BeEtAl2008}
at even lower energies, the mean value of the experimental data for
$m^2_{\nu}$ (electron neutrino)
still is negative and of the order of a few negative $\mathrm{eV}^2$
(for an excellent overview, see Ref.~\cite{LABneutrino}).
The idea that neutrinos might be of tachyonic character
is not new~\cite{ChHaKo1985,Ch2000,Ch2002,JeWu2011jpa}.
Tachyonic neutrinos fulfill the dispersion relation
$E_\nu^2 - \vec p^{\,2} = -m_\nu^2$
with an (initially) constant parameter $m_\nu$.
The quantity $-m^2_\nu$ can be interpreted as the
negative mass square of the neutrino.
The current situation indicates the need for a
convenient description of tachyonic fermions.
Ever since the early days of relativity, the notion of superluminal propagation
has intrigued physicists~\cite{So1905}, and the name ``tachyon'' was eventually
coined in Ref.~\cite{Fe1967}. The main problem in the description of a quantum
field theory with superluminal propagation is not the superluminal velocity
itself~\cite{BiDeSu1962}, but the construction of field operators and the time
ordering, which is in disarray because the time ordering of two space-time
points which are separated by a space-like interval is not invariant
under (subluminal) Lorentz boosts.
Generally, it has been assumed that
any particle in relativistic quantum theory should be
described by a unitary irreducible representation of
the Poincar\'{e} algebra or its supersymmetric
generalization. It may be necessary to relax this
restriction somewhat in order to accommodate
a field theory of supersymmetric tachyons~\cite{BaSh1974,vDNgBi1985,XiJi1987}.
Three recent review articles~\cite{Re2009,Bi2009,Bo2009}
provide rather detailed background information on the
development of the theory of superluminal particles.
The recent OPERA experiment~\cite{OPERA2011v2}
uses a baseline of $L = (731278.0 \pm 0.2) \, \mathrm{m}$.
Two clocks used in the measurement are
accurately synchronized by a technique used
to compare atomic clocks~\cite{DePe2003,Le2008}.
It is of particular importance that the synchronization of the
two systems was calibrated
by the Federal Swiss Metrology
Institute METAS (Bundesamt f\"{u}r Metrologie) in 2008
and verified in 2011 by the Federal
German Metrology Institute PTB (Physikalisch-Technische
Bundesanstalt). As reported in Ref.~\cite{OPERA2011v2},
the difference between the time base of the CERN and OPERA
receivers was measured to be $(2.3 \pm 0.9) \, \mathrm{ns}$ and is
taken into account in the evaluation of the measurement.
The data bins are
\begin{subequations}
\label{eee}
\begin{align}
E_\nu =& \; 13.8 \, \mathrm{GeV} \,, \qquad
\delta t = (54.7 \pm 18.4 \pm 7.1) \, \mathrm{ns} \,, \qquad
\Delta = \frac{v-c}{c} = ( 2.24 \pm 0.75 \pm 0.29 ) \times 10^{-5} \,,
\\[0.77ex]
E_\nu =& \; 28.2 \, \mathrm{GeV} \,, \qquad
\delta t = (61.1 \pm 13.2 \pm 7.1) \, \mathrm{ns} \,, \qquad
\Delta = \frac{v-c}{c} = ( 2.50 \pm 0.54 \pm 0.29 ) \times 10^{-5} \,,
\\[0.77ex]
E_\nu =& \; 40.7 \, \mathrm{GeV} \,, \qquad
\delta t = (68.1 \pm 19.1 \pm 7.1) \, \mathrm{ns} \,, \qquad
\Delta = \frac{v-c}{c} = ( 2.53 \pm 0.78 \pm 0.29 ) \times 10^{-5} \,,
\end{align}
and the overall average is
\begin{align}
\label{e}
E_\nu =& \; 17 \, \mathrm{GeV} \,, \qquad
\delta t = (57.8 \pm 7.2 \pm 7.1) \, \mathrm{ns} \,, \qquad
\Delta = \frac{v-c}{c} = ( 2.37 \pm 0.32 \pm 0.29 ) \times 10^{-5} \,.
\end{align}
\end{subequations}
While the OPERA data point to a slight increase of the ratio
$\Delta = (v-c)/c$ with the neutrino energy, rather than to a trend
in the opposite direction, the data
are generally consistent with a constant ratio
$\Delta = (v-c)/c$ in the entire
energy interval $13.8 \, \mathrm{GeV} < E_\nu < 40.7 \, \mathrm{GeV}$.
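As a numerical cross-check (ours, not part of the OPERA analysis), the overall velocity excess in Eq.~\eqref{e} follows directly from the baseline $L$ and the measured early-arrival time $\delta t$:

```python
# Cross-check (illustrative): the OPERA velocity excess Delta = (v - c)/c,
# reconstructed from the baseline L and the early-arrival time delta_t.
c = 299792458.0      # speed of light in m/s
L = 731278.0         # OPERA baseline in m
delta_t = 57.8e-9    # measured early arrival time in s (overall average)

T_light = L / c                              # light travel time over L
Delta = T_light / (T_light - delta_t) - 1.0  # (v - c)/c, exact
print(f"Delta = {Delta:.3e}")                # close to the quoted 2.37e-5
```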
Tachyonic neutrinos fulfill the space-like dispersion relation $E_\nu^2 - \vec
p^{\,2} = -m_\nu^2$ and travel faster than light. Superluminality is conserved
under Lorentz boosts (see Ref.~\cite{BiDeSu1962} and Fig.~\ref{fig2} below).
It has been argued that neutrinos traveling at
velocities consistent with the recent OPERA data should decay by neutral
massive analogues of Cerenkov radiation~\cite{CoGl2011}. The noncovariant
dispersion relation $E_\nu = |\vec p| \, v_\nu$ has been used in recent work on
the subject~\cite{CoGl2011} (here, $v_\nu$ denotes the
neutrino velocity). Freely propagating subluminal relativistic
particles as well as tachyons~\cite{Re2009,Bi2009,Bo2009} fulfill the
``opposite'' relation $ |\vec p| = E_\nu \, v_\nu$. Both relations $E_\nu =
|\vec p| \, v_\nu$ and $|\vec p| = E_\nu\,v_\nu$ lead to a large virtuality $|
E_\nu^2 - \vec p^{\,2}|$ on the order of $(117\, \mathrm{MeV})^2$ when applied to the
recently measured OPERA data [see Eqs.~\eqref{mmE1} and~\eqref{mmE2} below].
These observations are inconsistent with beta decay end point
measurements~\cite{RoEtAl1991,AsEtAl1994,StDe1995,AsEtAl1996,%
WeEtAl1999,LoEtAl1999,BeEtAl2008}
which have led to values of a few $\mathrm{eV}^2$,
for neutrinos in the $\mathrm{keV}$ energy range.
This confusing situation raises a number of questions.
Starting from the tachyonic Dirac equation, we
conclude that additional interactions, hitherto not accounted for, are required
in order to explain the OPERA data which exhibit
a larger-than-expected virtuality at higher energies,
or, expressed differently, an energy-dependent mass.
At the current, early stage in the development of theoretical
models describing superluminal particles, a certain degree
of speculation cannot be avoided. For completeness, we should note
that we neither consider models based on
deformed special relativity~\cite{AC2000,AC2010,ACEtAl2011a,ACEtAl2011b}
nor kinematic constraints resulting from such
models~\cite{CoGl2011,BiYiYuYu2011,CoNuSa2011,GM2011}
in any greater detail. Lorentz symmetry is conserved in our approach.
We start with a digression on the kinematic
constraints to the observation of neutrinos along the
OPERA baseline in Sec.~\ref{kc}.
The tachyonic Dirac equation and
its solutions are being reviewed in Sec.~\ref{td}.
Chiral Yukawa interactions, which induce a neutrino mass
running via the renormalization group (RG), are studied in Sec.~\ref{running}.
Conclusions are reserved for Sec.~\ref{conclu}.
We always carefully distinguish between
$|\vec p|$ and the four-vector $p$, and we use natural units with
$\hbar = c = \epsilon_0 = 1$.
\section{Kinematic Constraints}
\label{kc}
The recent OPERA experiment has analyzed the propagation of muon neutrinos.
If neutrinos propagate faster than the speed of light,
then a number of decay processes are
kinematically allowed which are otherwise
forbidden. These include the following decays (see Fig.~\ref{fig1}),
\begin{subequations}
\label{decays}
\begin{align}
\label{gamma}
\nu_\mu \to & \; \nu_\mu + \gamma \,, \\[0.77ex]
\label{ee}
\nu_\mu \to & \; \nu_\mu + e^+ + e^- \,, \\[0.77ex]
\label{nunu}
\nu_\mu \to & \; \nu_\mu + \nu_e + \bar\nu_e \,.
\end{align}
\end{subequations}
In Ref.~\cite{CoGl2011}, these decay processes are analyzed
under the assumption of the Lorentz-violating dispersion relation
\begin{equation}
\label{displor}
\frac{\mathrm d E_\nu}{\mathrm d | \vec p_\nu | } = \mathrm{const.} \,,
\quad
E_\nu = | \vec p_\nu | \, v_\nu \,,
\quad
v_\nu \approx 1 + \Delta \,,
\end{equation}
where $\Delta = 2.37 \times 10^{-5}$ corresponds to the
value given in Ref.~\cite{OPERA2011v2}.
Processes (a)~and~(c) are parametrically suppressed with respect
to process~(b), and therefore process~(b) is deemed to constitute
the dominant decay channel.
One may observe that the dispersion relation
$E_\nu = |\vec p_\nu| \, v_\nu$ is at variance with both the
subluminal (also called tardyonic, see Ref.~\cite{Fe1967})
dispersion relation for freely
propagating massive neutrinos,
\begin{equation}
\label{dispsub}
E_\nu = \frac{m_\nu}{\sqrt{1 - v_\nu^2}} \,,
\qquad
|\vec p_\nu| = \frac{m_\nu \, v_\nu}{\sqrt{1 - v_\nu^2}} = E_\nu \, v_\nu\,,
\qquad
v_\nu < 1 \,,
\end{equation}
as well as with the dispersion relation for superluminal (tachyonic)
particles~\cite{ArSu1968,DhSu1968, SuSh1986,Re2009,Bi2009,Bo2009},
which reads
\begin{equation}
\label{dispsup}
E_\nu = \frac{m_\nu}{\sqrt{v_\nu^2 - 1}} \,,
\qquad
|\vec p_\nu| = \frac{m_\nu \, v_\nu}{\sqrt{v_\nu^2 - 1}} = E_\nu \, v_\nu\,,
\qquad
v_\nu > 1 \,.
\end{equation}
In both cases~\eqref{dispsub} and~\eqref{dispsup},
one obtains $|\vec p_\nu| = E_\nu \, v_\nu$, not the
opposite relation $E_\nu = |\vec p_\nu| \, v_\nu$
used in Ref.~\cite{CoGl2011}.
Under Lorentz transformations, superluminality of tachyonic
particles is conserved
(see Fig.~\ref{fig2}).
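As a sanity check (ours), the covariant tachyonic relations~\eqref{dispsup} can be verified numerically; they imply $|\vec p_\nu| = E_\nu\, v_\nu$ and $E_\nu^2 - \vec p_\nu^{\,2} = -m_\nu^2$:

```python
# Sanity check (illustrative): the covariant tachyonic dispersion relation,
# E = m/sqrt(v^2 - 1) and |p| = m v/sqrt(v^2 - 1), implies |p| = E v and
# E^2 - p^2 = -m^2 (not E = |p| v).
import math

m = 1.0              # tachyonic mass parameter (arbitrary units)
v = 1.0 + 2.37e-5    # superluminal velocity, OPERA overall average

E = m / math.sqrt(v**2 - 1.0)
p = m * v / math.sqrt(v**2 - 1.0)

assert abs(p - E * v) < 1e-9 * p             # |p| = E v
assert abs((E**2 - p**2) + m**2) < 1e-6      # E^2 - p^2 = -m^2
print(E, p)
```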
In two recent papers~\cite{MoRa2011,LiLiMeWaZh2011},
it has been observed that the conclusions of~\cite{CoGl2011}
would change if the dispersion relation were different.
Here, we are concerned with a more general question:
Namely, to investigate how the kinematic
constraints change when we assume a tachyonic dispersion
relation for the neutrino, and whether the
process~\eqref{decays} is still kinematically allowed
when $E_\nu^2 - \vec p_\nu^{\,2} < 0$.
\begin{figure}[t!]
\includegraphics[width=0.8\linewidth]{fig1.pdf}
\caption{\label{fig1}Feynman diagrams for the decay processes
of a tachyonic superluminal neutrino, as given in
Eq.~\eqref{decays}. The tachyonic neutrino may emit
a photon via a $W$ loop [Fig.~(a)], an electron-positron
pair [Fig.~(b)], or a neutrino-antineutrino pair [Fig.~(c)].
The processes scale with the quantum electrodynamic
(QED) coupling constant $\alpha$
and the weak coupling constant $G_F$ as follows,
(a)~is proportional to $\alpha \, G_F^2$,
(b)~is proportional to $\alpha \, G_F$,
and (c)~is proportional to $\alpha^2 \, G_F^2$.}
\end{figure}
For the process~\eqref{gamma},
an easy calculation based on the
energy and momentum conservation conditions reveals that
\begin{equation}
\label{kgamma}
E_\nu = E'_\nu + E_\gamma \,,
\qquad
\vec p_\nu = \vec p'_\nu + \vec k_\gamma \,,
\qquad
E_\nu = \sqrt{\vec p^{\,2}_\nu - m_\nu^2} \,,
\qquad
E'_\nu = \sqrt{\vec p'^{\,2}_\nu - m_\nu^2} \,,
\qquad
E_\gamma = | \vec k_\gamma | \,.
\end{equation}
Squaring the energy conservation condition, one obtains
\begin{subequations}
\begin{align}
\label{kgamma2}
E^2_\nu =& \;
\left( \vec p'_\nu + \vec k_\gamma \right)^{\,2} - m_\nu^2 =
\vec p'^{\,2}_\nu + \vec k_\gamma^{\,2}
- m_\nu^2 + 2 \, \vec p'_\nu \cdot \vec k_\gamma \,,
\\[2ex]
E^2_\nu =& \; (E'_\nu + E_\gamma)^2 =
\vec p'^{\,2}_\nu + \vec k_\gamma^{\,2} - m_\nu^2 +
2 \, | \vec k_\gamma | \, \sqrt{ \vec p'^{\,2}_\nu - m_\nu^2} \,,
\\[2ex]
\label{sevenc}
\vec p'_\nu \cdot \vec k_\gamma =& \;
| \vec k_\gamma | \, \sqrt{ \vec p'^{\,2}_\nu - m_\nu^2} \,.
\end{align}
\end{subequations}
We conclude that under the assumption of the
Lorentz-covariant, tachyonic dispersion relation~\eqref{dispsup},
weak-interaction Cerenkov radiation is allowed.
In view of Eq.~\eqref{sevenc}, the photon
is radiated off at a Cerenkov angle
\begin{equation}
\label{thetagamma}
\cos \theta_\gamma =
\frac{ \vec p'_\nu \cdot \vec k_\gamma }%
{| \vec k_\gamma | \, |\vec p'_\nu |} =
\frac{\sqrt{\vec p'^2_\nu - m_\nu^2}}{|\vec p'_\nu|} =
\frac{E'_\nu}{|\vec p'_\nu|} =
\frac{1}{v'_\nu} < 1 \,,
\end{equation}
under the assumption of a tachyonic neutrino with dispersion~\eqref{kgamma2}.
One may add that the kinematic consideration is somewhat
analogous to that for the emission of ordinary Cerenkov radiation.
The important observation is that under the tachyonic
dispersion relation~\eqref{dispsup}, the emission of a photon
by the neutrino is always allowed, i.e., there is no threshold
energy for the neutrino and there is no threshold for the
tachyonic mass $-m_\nu^2$. Once the particle becomes
tachyonic, weak Cerenkov radiation is kinematically allowed,
but the Cerenkov cone narrows as $-m_\nu^2 \to 0$.
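For the OPERA velocity excess, the resulting Cerenkov cone is indeed narrow; a short numerical illustration (ours):

```python
# Illustrative estimate: opening angle of the weak Cerenkov cone,
# cos(theta) = 1/v with v = 1 + Delta, so theta ~ sqrt(2 Delta) << 1.
import math

Delta = 2.37e-5
theta = math.acos(1.0 / (1.0 + Delta))   # exact Cerenkov angle
theta_approx = math.sqrt(2.0 * Delta)    # small-Delta expansion
print(theta, theta_approx)               # both about 6.9e-3 rad
```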
For a particle fulfilling the noncovariant
dispersion relation $E_\nu = |\vec p'_\nu| \, v_\nu$, with
$v_\nu > 1$, the modified Cerenkov angle $\cos \theta'_\gamma$ is
easily computed as
\begin{equation}
\cos \theta'_\gamma =
\frac{1}{v'_\nu} + \frac{(v'^2_\nu-1) |\vec k_\gamma|}{2v'_\nu \,E'_\nu}
\approx \frac{1}{v'_\nu} < 1 \,,
\end{equation}
assuming a neutrino with the dispersion $E'_\nu = p'_\nu \, v'_\nu$ and
$v'_\nu > 1$. This is very well approximated by $\cos \theta'_\gamma \approx 1/v'_\nu$
for $v'_\nu \approx 1$.
As a second step, let us consider a process in which
a tachyonic neutrino fulfilling Eq.~\eqref{dispsup}
emits a massive neutral vector meson of mass $m_0$.
This is not depicted in Fig.~\ref{fig1} but is still instructive.
The kinematic conditions change,
\begin{equation}
\label{kmeson}
E_\nu = E'_\nu + E_0 \,,
\qquad
\vec p_\nu = \vec p'_\nu + \vec k_0 \,,
\qquad
E_\nu = \sqrt{\vec p^{\,2}_\nu - m_\nu^2} \,,
\qquad
E'_\nu = \sqrt{\vec p'^{\,2}_\nu - m_\nu^2} \,,
\qquad
E_0 = \sqrt{\vec k^{\,2}_0 + m_0^2} \,.
\end{equation}
The Cerenkov angle then becomes
\begin{equation}
\label{theta0}
\cos \theta_0 = \frac{ m_0^2 +
2 \, \sqrt{ \vec k_0^2 + m_0^2 } \, \sqrt{ \vec p'^2_\nu - m_\nu^2}}%
{2 \, | \vec k_0 | \, |\vec p'_\nu|}
\approx
\frac{\sqrt{ \vec k_0^2 + m_0^2 } \, \sqrt{ \vec p'^2_\nu - m_\nu^2}}%
{| \vec k_0 | \, |\vec p'_\nu|} \,,
\end{equation}
where the last expression is valid in the high-energy limit,
i.e., for $| \vec k_0 | \gg m_0$ and $|\vec p'_\nu| \gg m_\nu$.
If the vector meson carries away the bulk of the energy,
i.e. $E_0 = x \, E_\nu$ and $E'_\nu = (1 - x) \, E_\nu$,
with $x \lesssim 1$,
then for highly energetic incoming superluminal neutrinos,
one can always find a narrow cone near $\theta_0 \approx 0$
in which vector meson emission is possible.
Again, for highly energetic tachyonic superluminal
neutrinos, we conclude that there is no kinematic constraint
on the size of the tachyonic mass term $-m_\nu^2$
which would restrict massive vector meson emission.
Once the particle becomes
tachyonic and the energy of the tachyonic particle
is large enough,
massive vector emission becomes kinematically allowed in a
narrow angular region.
By contrast, if we replace in Eq.~\eqref{theta0}
$-m_\nu^2 \to +m_\nu^2$,
we would have $\cos \theta_0 > 1$, forbidding vector meson emission.
Also, the Cerenkov angle $\theta_0$ vanishes in the limit $m_\nu \to 0$.
Using more extensive calculations, we have checked
that the same statement applies to the light fermion pair
emission given in Eq.~\eqref{ee} and
depicted in Fig.~\ref{fig1}(b). Cerenkov-type pair emission
becomes kinematically possible for highly
energetic neutrinos, in a narrow angular region.
\begin{figure}[t!]
\includegraphics[width=0.5\linewidth]{fig2.pdf}
\caption{\label{fig2}(Color online.) Illustration of the Einstein velocity
addition theorem $w = (u+v)/(1 + u \, v)$, in
the superluminal domain with $u \in (-1,1)$ and $v \in (1,3)$.
For superluminal $v$, the range $w \in [-1,1]$ of values is excluded,
as shown by the rectangular box.}
\end{figure}
In the application of the tachyonic dispersion relation~\eqref{dispsup}
to the OPERA data, we face a dilemma which also plagues the
application of the Lorentz-noncovariant dispersion relation~\eqref{displor}.
Namely, we have for the OPERA data according to Eq.~\eqref{e},
\begin{align}
\label{mmE1}
-m_\nu^2 = E_\nu^2 - \vec p^{\,2}_\nu =
E_\nu^2 \, \left[ 1 - (1 + \Delta)^2 \right] =
- \left( 117 \, \mathrm{MeV} \right)^2
\qquad
\mbox{[dispersion relation~\eqref{dispsup}]}
\end{align}
which is at least six orders of magnitude larger than the
neutrino masses at
low energy~\cite{RoEtAl1991,AsEtAl1994,AsEtAl1996,%
StDe1995,WeEtAl1999,LoEtAl1999,BeEtAl2008}.
Likewise, assuming the dispersion relation~\eqref{displor}
implies that
\begin{align}
\label{mmE2}
E_\nu^2 - \vec p^{\,2}_\nu =
\vec p^{\,2}_\nu \, \left[ (1 + \Delta)^2 - 1 \right] \approx
E^2_\nu \, \left[ (1 + \Delta)^2 - 1 \right] =
\left( 117 \, \mathrm{MeV} \right)^2
\quad
\mbox{[dispersion relation~\eqref{displor}]} \,.
\end{align}
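The quoted number can be reproduced directly (our numerical check):

```python
# Numerical check (illustrative): the virtuality implied by the OPERA
# average, |E^2 - p^2| = E^2 [(1 + Delta)^2 - 1], for E = 17 GeV.
import math

E_nu = 17.0e3        # neutrino energy in MeV
Delta = 2.37e-5
m_sq = E_nu**2 * ((1.0 + Delta)**2 - 1.0)              # virtuality in MeV^2
print(f"sqrt|E^2 - p^2| = {math.sqrt(m_sq):.1f} MeV")  # about 117 MeV
```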
If we define the expression $| E_\nu^2 - \vec p^{\,2}_\nu|$
as the ``virtuality'' of the neutrino which measures
the deviation of the neutrino propagation velocity from the speed of light,
then we can say that at the high OPERA energies, the neutrino velocity
was not expected to deviate so much from the speed of light,
neither in the superluminal nor in the subluminal direction.
For example, if OPERA had hypothetically found a result of
\begin{equation}
\label{hypothetical}
\widetilde\Delta = -2.37 \times 10^{-5}
\qquad
\mbox{[opposite sign as compared to Eq.~\eqref{e}]}\,,
\end{equation}
then this would have been equally surprising.
In the latter case, one would probably have concluded
immediately that the neutrino must be subject to
a hitherto unknown interaction at high energy,
modifying its effective (running) mass.
We here advocate the viewpoint that the same
conclusion should be drawn from the OPERA data: namely,
the neutrino is genuinely tachyonic and subject to an
unknown interaction at high energy which modifies its mass
and its decay channels. Otherwise, it seems that the
high-energy OPERA data~\cite{OPERA2011v2} (in the
$\mathrm{GeV}$ range) cannot be reconciled with the
low-energy experimental
results (in the $\mathrm{keV}$ range) of Refs.~\cite{RoEtAl1991,AsEtAl1994,AsEtAl1996,StDe1995,%
WeEtAl1999,LoEtAl1999,BeEtAl2008}.
Of course, this statement holds provided the OPERA data
are not subject to a hitherto undiscovered systematic error.
The data bins given in Eq.~\eqref{eee}
are consistent with an energy-independent propagation
velocity. While the quantity $\Delta$ need not be
energy independent over large energy intervals,
it appears to be so in the energy interval
$13.8 \, \mathrm{GeV} < E_\nu < 40.7 \, \mathrm{GeV}$.
Therefore, in this energy interval observed by OPERA~\cite{OPERA2011v2},
the effective mass is assumed to be close to the linear relationship
\begin{equation}
\label{mmE}
m_\nu = m(E_\nu) \approx \eta \, E_\nu \,,
\qquad
13.8 \, \mathrm{GeV} < E_\nu < 40.7 \, \mathrm{GeV} \,,
\qquad
\eta = \sqrt{(1 + \Delta)^2 - 1} = 6.88\times 10^{-3} \approx \frac{1}{145} \,.
\end{equation}
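A short numerical illustration (ours; for simplicity, the overall average $\Delta$ is used for every bin):

```python
# Illustrative check of the running-mass parametrization m(E) = eta * E,
# with eta = sqrt((1 + Delta)^2 - 1) evaluated at the overall average Delta.
import math

Delta = 2.37e-5
eta = math.sqrt((1.0 + Delta)**2 - 1.0)
print(f"eta = {eta:.3e}, 1/eta = {1.0/eta:.0f}")   # about 6.88e-3, ~145
for E_GeV in (13.8, 28.2, 40.7):                   # OPERA energy bins
    print(f"E = {E_GeV:5.1f} GeV -> m(E) ~ {eta * E_GeV * 1e3:5.1f} MeV")
```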
The unknown interaction leading to the energy-dependent mass
must now be investigated. When calculating
decay rates, the existence of the additional
interaction implies that one should use eigenstates of the
neutrino in the additional, hitherto unknown
interaction potential (i.e., taking into account the running mass)
rather than freely propagating tachyonic states, with an effective
energy-dependent tachyonic mass $m_\nu = m_\nu(E_\nu) \propto E_\nu$.
\section{Tachyonic Dirac Equation}
\label{td}
Given the obvious inconsistency of the OPERA data~\cite{OPERA2011v2}
with low-energy neutrino data~\cite{RoEtAl1991,AsEtAl1994,AsEtAl1996,StDe1995,%
WeEtAl1999,LoEtAl1999,BeEtAl2008},
as manifest in the energy-dependent effective mass~\eqref{mmE},
one may ask why an equation that describes a genuinely
tachyonic neutrino with an energy-independent, fixed
tachyonic mass $m_\nu$
should be considered at all in the following.
The reason is that if the neutrino is genuinely tachyonic,
then one has to start from an equation which describes
a genuinely tachyonic particle, with the possibility to
describe additional perturbative
interactions that modify the high-energy behaviour.
Expressed differently, we would expect the tachyonic Dirac
equation given below to describe the low-energy behaviour
of neutrinos~\cite{RoEtAl1991,AsEtAl1994,AsEtAl1996,StDe1995,%
WeEtAl1999,LoEtAl1999,BeEtAl2008},
while the large deviation from the light cone seen at high
energies~\cite{AdEtAl2007,OPERA2011v2} should be ascribed to
additional interactions.
We briefly recall here that
the Lorentz-covariant tachyonic Dirac equation reads
\begin{equation}
\label{Dirac5}
\left( \mathrm i \gamma^\mu \partial_\mu -
\gamma^5 \,m_\nu\right) \psi(x) = 0 \,,
\qquad
\gamma^0 =\left( \begin{array}{cc} \mathbbm{1}_{2\times 2} & 0 \\
0 & -\mathbbm{1}_{2\times 2} \\
\end{array} \right) \,,
\qquad
\vec\gamma = \left( \begin{array}{cc} 0 & \vec\sigma \\ -\vec\sigma & 0 \\
\end{array} \right) \,,
\qquad
\gamma^5 = \left( \begin{array}{cc} 0 & \mathbbm{1}_{2\times 2} \\
\mathbbm{1}_{2\times 2} & 0 \\
\end{array} \right) \,.
\end{equation}
Here, $x = (t, \vec x)$, and we use the Dirac matrices
in the Dirac representation~\cite{JeWu2011jpa}.
The partial derivatives are $\partial_\mu = \partial/\partial x^\mu$,
while $\gamma^5 = \mathrm i \, \gamma^0 \, \gamma^1 \, \gamma^2 \, \gamma^3$
is the fifth current matrix. The tachyonic Dirac equation has been briefly discussed
in Refs.~\cite{ChHaKo1985,Ch2000,Ch2002}.
It has recently been verified that
this equation is $\mathcal{CP}$ as well as $\mathcal{T}$ invariant~\cite{JeWu2011jpa}.
These symmetry properties apply to neutrinos.
The positive-energy plane-wave solutions~\cite{JeWu2011jpa}
of the tachyonic Dirac equation have the properties
\begin{equation}
\label{solutions}
\Psi(x) = \frac{1}{\sqrt{V}} U_\pm(\vec k_\nu) \, \mathrm e^{-\mathrm i k_\nu \cdot x} \,,
\quad
k_\nu = (E_\nu, \vec k_\nu) \,,
\quad
E_\nu = \sqrt{\vec k_\nu^{2} - m_\nu^2} \,, \quad | \vec k_\nu | \geq m_\nu \,.
\end{equation}
The negative-energy solutions~\cite{JeWu2011jpa} are given by
\begin{equation}
\Phi(x) = \frac{1}{\sqrt{V}} V_\pm(\vec k_\nu) \, \mathrm e^{\mathrm i k_\nu \cdot x} \,,
\quad
k_\nu = (E_\nu, \vec k_\nu) \,,
\quad
E_\nu = \sqrt{\vec k_\nu^2 - m_\nu^2} \,,
\quad |\vec k_\nu| \geq m_\nu \,,
\end{equation}
where $V$ is the normalization volume.
These states are normalized, with
$U^{\mbox{{\bf{\tiny +}}}}_+(\vec k_\nu) \, U_+(\vec k_\nu) =
U^{\mbox{{\bf{\tiny +}}}}_-(\vec k_\nu) \, U_-(\vec k_\nu) =
V^{\mbox{{\bf{\tiny +}}}}_+(\vec k_\nu) \, V_+(\vec k_\nu) =
V^{\mbox{{\bf{\tiny +}}}}_-(\vec k_\nu) \, V_-(\vec k_\nu) = 1$.
The spinors entering these expressions read as
\begin{equation}
\label{UU}
U_+(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{m_\nu-E_\nu+|\vec k_\nu|}{\sqrt{2} \,
\sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \; a_+(\vec k_\nu) \\[0.77ex]
\dfrac{m_\nu+E_\nu-|\vec k_\nu|}{\sqrt{2} \,
\sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \; a_+(\vec k_\nu) \\
\end{array} \right) \,,
\qquad
U_-(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{m_\nu+E_\nu-|\vec k_\nu|}{\sqrt{2} \,
\sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \;
a_-(\vec k_\nu) \\[0.77ex]
\dfrac{-m_\nu+E_\nu-|\vec k_\nu|}%
{\sqrt{2} \, \sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \;
a_-(\vec k_\nu) \\
\end{array} \right) \,,
\end{equation}
where the helicity spinors $a_\pm(\vec k_\nu)$ are given below
in Eq.~\eqref{aplusminus}.
If we are interested in the massless limit, then we
should first take into account the fact that massless
particles propagate at velocities very close to the light cone.
For $v = 1 + \Delta$, we have $E - |\vec k_\nu| \approx -m \, \sqrt{\Delta/2}
\ll m$.
Therefore, letting $\Delta \to 0$,
the dominant term for the massless limit actually
is the mass $m \gg E - |\vec k_\nu|$. This implies, e.g., that
$U_+(\vec k_\nu) \to \frac{1}{\sqrt{2}}
\left( \begin{array}{c} a_+(\vec k_\nu) \\
a_+(\vec k_\nu) \\ \end{array} \right)$ for the massless case.
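As a purely numerical cross-check (illustrative only, not part of the derivation; the variable names are ours), the normalization and the massless limit of $U_+$ can be verified with a short Python sketch:

```python
import numpy as np

theta, phi = 0.7, 1.3   # arbitrary polar and azimuthal angles of k_nu
a_plus = np.array([np.cos(theta / 2),
                   np.sin(theta / 2) * np.exp(1j * phi)])

def U_plus(m, k):
    """Positive-energy spinor U_+ of Eq. (UU) for |k| > m."""
    E = np.sqrt(k**2 - m**2)                       # tachyonic dispersion
    N = np.sqrt(2.0) * np.sqrt((E - k)**2 + m**2)  # normalization factor
    return np.concatenate([(m - E + k) / N * a_plus,
                           (m + E - k) / N * a_plus])

# normalization: U_+^dagger U_+ = 1 (a_+ itself is unit-norm)
U = U_plus(m=0.5, k=2.0)
assert np.isclose(np.vdot(U, U).real, 1.0)

# near the light cone, E - |k| -> 0 and the mass term dominates,
# so U_+ approaches (a_+, a_+)/sqrt(2)
U_lc = U_plus(m=1.0, k=1.0e3)
limit = np.concatenate([a_plus, a_plus]) / np.sqrt(2.0)
assert np.allclose(U_lc, limit, atol=1.0e-3)
```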
The negative-energy eigenstates are given by
\begin{equation}
\label{VV}
V_+(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{-m_\nu-E_\nu+|\vec k_\nu|}%
{\sqrt{2} \, \sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \;
a_+(\vec k_\nu) \\[0.77ex]
\dfrac{-m_\nu+E_\nu-|\vec k_\nu|}%
{\sqrt{2} \, \sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \;
a_+(\vec k_\nu) \\
\end{array} \right) \,,
\qquad
V_-(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{-m_\nu+E_\nu-|\vec k_\nu|}%
{\sqrt{2} \, \sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \;
a_-(\vec k_\nu) \\[0.77ex]
\dfrac{m_\nu+E_\nu-|\vec k_\nu|}%
{\sqrt{2} \, \sqrt{(E_\nu - |\vec k_\nu|)^2 + m_\nu^2}} \; a_-(\vec k_\nu) \\
\end{array} \right) \,.
\end{equation}
The helicity spinors entering these expressions are
given in terms of the polar and azimuthal angles
$\theta$ and $\varphi$ of the three-vector $\vec k_\nu$,
\begin{equation}
\label{aplusminus}
a_+(\vec k_\nu) = \left( \begin{array}{c}
\cos\left(\frac{\theta}{2}\right) \\[1ex]
\sin\left(\frac{\theta}{2}\right) \, \mathrm e^{\mathrm i \, \varphi} \\
\end{array} \right) \,,
\qquad
a_-(\vec k_\nu) = \left( \begin{array}{c}
-\sin\left(\frac{\theta}{2}\right) \, \mathrm e^{-\mathrm i \, \varphi} \\[1ex]
\cos\left(\frac{\theta}{2}\right) \\
\end{array} \right) \,,
\end{equation}
and fulfill
\begin{equation}
\frac{\vec \sigma \cdot \vec k_\nu}{|\vec k_\nu|} \,
a_+(\vec k_\nu) = a_+(\vec k_\nu) \,,
\qquad
\frac{\vec \sigma \cdot \vec k_\nu}{|\vec k_\nu|} \,
a_-(\vec k_\nu) = -a_-(\vec k_\nu) \,.
\end{equation}
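The helicity eigenvalue equations admit a quick numerical spot check (a Python sketch; the angles and names are ours):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta, phi = 0.7, 1.3   # arbitrary direction of k_nu
khat = np.array([np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta)])
sigma_k = khat[0] * sx + khat[1] * sy + khat[2] * sz

a_plus = np.array([np.cos(theta / 2),
                   np.sin(theta / 2) * np.exp(1j * phi)])
a_minus = np.array([-np.sin(theta / 2) * np.exp(-1j * phi),
                    np.cos(theta / 2)])

assert np.allclose(sigma_k @ a_plus, a_plus)     # helicity eigenvalue +1
assert np.allclose(sigma_k @ a_minus, -a_minus)  # helicity eigenvalue -1
```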
For plane waves, $E_\nu = \sqrt{\vec k_\nu^{\,2} - m_\nu^2}$ and
$\vec p_\nu = \vec k_\nu$ fulfill the tachyonic dispersion relation~\eqref{dispsup},
which we recall for convenience,
\begin{equation}
\label{recall}
E_\nu = \frac{m_\nu}{\sqrt{v_\nu^2 - 1}} \,,
\qquad
|\vec k_\nu| = \frac{m_\nu \, v_\nu}{\sqrt{v_\nu^2 - 1}} = E_\nu \, v_\nu\,,
\qquad
v_\nu > 1 \,,
\end{equation}
so that $\sqrt{\vec k_\nu^{\,2} - m_\nu^2}$ never becomes
imaginary. For $\vec k_\nu^{\,2} < m_\nu^2$, we have resonance and antiresonance
energies. We start with the resonances, whose energies have a negative
imaginary part,
\begin{subequations}
\label{RR}
\begin{align}
R_+(\vec k_\nu) = & \;
\left( \begin{array}{c}
\dfrac{m_\nu+\tfrac{\mathrm i}{2} \Gamma_\nu +|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_+(\vec k_\nu) \\[0.77ex]
\dfrac{m_\nu-\tfrac{\mathrm i}{2} \Gamma_\nu -|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_+(\vec k_\nu) \\
\end{array} \right) \,,
\qquad
R_-(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{m_\nu-\tfrac{\mathrm i}{2} \Gamma_\nu -|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_-(\vec k_\nu) \\[0.77ex]
\dfrac{-m_\nu-\tfrac{\mathrm i}{2} \Gamma_\nu-|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_-(\vec k_\nu) \\
\end{array} \right) \,,
\\[0.77ex]
E_\nu =& \; -\tfrac{\mathrm i}{2}\, \Gamma_\nu = -\mathrm i\,
\sqrt{m_\nu^2 - \vec k_\nu^{2}} \,,
\qquad
\vec k_\nu^{\,2} < m_\nu^2 \,.
\end{align}
\end{subequations}
The antiresonance energies have a positive imaginary part,
\begin{subequations}
\label{SS}
\begin{align}
S_+(\vec k_\nu) = & \;
\left( \begin{array}{c}
\dfrac{-m_\nu-\tfrac{\mathrm i}{2} \Gamma_\nu+|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_+(\vec k_\nu) \\[0.77ex]
\dfrac{-m_\nu+\tfrac{\mathrm i}{2} \Gamma_\nu-|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_+(\vec k_\nu) \\
\end{array} \right) \,,
\qquad
S_-(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{-m_\nu+\tfrac{\mathrm i}{2}\Gamma_\nu -|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_-(\vec k_\nu) \\[0.77ex]
\dfrac{m_\nu+\tfrac{\mathrm i}{2}\Gamma_\nu -|\vec k_\nu|}{\sqrt{2} \,
\sqrt{\vec k_\nu^{\,2} + m_\nu^2 + \tfrac14 \, \Gamma_\nu^2}} \;
a_-(\vec k_\nu) \\
\end{array} \right) \,,
\\[0.77ex]
E_\nu =& \; \tfrac{\mathrm i}{2}\, \Gamma_\nu =
\mathrm i \, \sqrt{m_\nu^2 - \vec k_\nu^{2}} \,,
\qquad \vec k_\nu^{\,2} < m_\nu^2 \,.
\end{align}
\end{subequations}
These states are also normalized, with $R^{\mbox{{\bf{\tiny +}}}}_+(\vec k_\nu) \, R_+(\vec k_\nu) =
R^{\mbox{{\bf{\tiny +}}}}_-(\vec k_\nu) \, R_-(\vec k_\nu) = S^{\mbox{{\bf{\tiny +}}}}_+(\vec k_\nu) \, S_+(\vec k_\nu) =
S^{\mbox{{\bf{\tiny +}}}}_-(\vec k_\nu) \, S_-(\vec k_\nu) = 1$.
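The normalization of the resonance spinors, and the fact that the complex resonance energy still satisfies $E_\nu^2 = \vec k_\nu^{\,2} - m_\nu^2$, can be spot-checked numerically (an illustrative sketch; the unit-norm helicity spinor drops out of the norm):

```python
import numpy as np

m, k = 2.0, 1.5                        # |k|^2 < m^2: resonance regime
Gamma = 2.0 * np.sqrt(m**2 - k**2)     # E = -(i/2) Gamma
N = np.sqrt(2.0) * np.sqrt(k**2 + m**2 + Gamma**2 / 4)

# upper/lower coefficients of R_+ (a_+ is unit-norm and factors out)
up = (m + 0.5j * Gamma + k) / N
lo = (m - 0.5j * Gamma - k) / N
assert np.isclose(abs(up)**2 + abs(lo)**2, 1.0)

# the complex resonance energy still solves the tachyonic dispersion relation
E = -0.5j * Gamma
assert np.isclose(E**2, k**2 - m**2)
```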
The term ``resonances'' is used in the physics literature
in two contexts: (i)~in order to designate the
complex energy eigenvalue of a Hamiltonian,
and (ii)~in order to designate the peak in a cross section
or a quantum state which can decay into a final state
with a different particle content. In the current case,
the interpretation~(i) is relevant. The resonances have complex
resonance energies; the waves are evanescent
(exponentially damped) just like the diffracted wave under total reflection,
or a wave in a waveguide below the minimum frequency
for the TE$_{1,0}$ mode necessary for propagation,
and the resonance energies are complex just as in the
case of a resonance energy of the Stark effect~\cite{Je2001pra}.
Resonances are damped for propagation forward in time,
antiresonances for propagation backward in time,
in accordance with the Feynman prescription.
The wavelength of the resonance states is too long
to be supported in a genuinely superluminal wave packet of
tachyonic mass square $-m_\nu^2$.
The noncovariant, Hamiltonian form of Eq.~\eqref{Dirac5}
reads as
\begin{equation}
H_5 \psi(\vec x) =
\left( \vec \alpha \cdot \vec p + \beta \, \gamma^5 \, m_\nu \right) \, \psi(\vec x) =
E_\nu\, \psi(\vec x) \,,
\end{equation}
where $\beta = \gamma^0$, and $\vec \alpha = \gamma^0 \, \vec\gamma$.
The Hamiltonian $H_5$ has the
pseudo-Hermitian~\cite{Pa1943,BeBo1998,BeDu1999,BeBoMe1999,%
BeBrJo2002,Mo2002i,Mo2002ii,Mo2002iii,Mo2003npb} property
\begin{equation}
H_5(\vec x)
= \mathcal P \, H_5^{\mbox{{\bf{\tiny +}}}}(\vec x) \, \mathcal P^{-1}
= P \, H_5^{\mbox{{\bf{\tiny +}}}}(-\vec x) \, P^{-1} \,,
\end{equation}
where $\mathcal P$ is the full parity transformation and
$P$ is the parity matrix $P = \gamma^0$.
The eigenvalues of a
pseudo-Hermitian operator come in complex-conjugate pairs and are real if the
tachyonic dispersion relations~\eqref{dispsup} are fulfilled.
This can be seen as follows. Because the spectrum of a Hermitian
adjoint operator consists of the complex conjugate eigenvalues,
we have an eigenvector $\phi(\vec x)$ with eigenvalue
$E^*$ provided there exists an eigenvector $\psi(\vec x)$ with
eigenvalue $E$,
\begin{equation}
H_5(\vec x) \, \psi(\vec x) = E \, \psi(\vec x) \,,
\qquad
H_5^{\mbox{{\bf{\tiny +}}}}(\vec x) \, \phi(\vec x) = E^* \, \phi(\vec x) \,.
\end{equation}
Then, the transformation $\vec x \to - \vec x$ and
the introduction of the $P =\gamma^0$ parity matrix leads to
\begin{equation}
H_5^{\mbox{{\bf{\tiny +}}}}(-\vec x) \, \phi(-\vec x) = E^* \, \phi(-\vec x) \,,
\qquad
P \, H_5^{\mbox{{\bf{\tiny +}}}}(-\vec x) \, P^{-1} \, \left( P \phi(-\vec x) \right) = E^* \, P \phi(-\vec x) \,.
\end{equation}
By assumption, $P \, H_5^{\mbox{{\bf{\tiny +}}}}(-\vec x) \, P^{-1} = H_5(\vec x)$ and thus
\begin{equation}
H_5(\vec x) \, P \phi(-\vec x) = E^* \, P \phi(-\vec x) \,,
\qquad
H_5(\vec x) \, \widetilde\psi(\vec x) = E^* \widetilde\psi(\vec x) \,,
\qquad
\widetilde\psi(\vec x) = P \phi(-\vec x) \,.
\end{equation}
This implies that $\widetilde\psi(\vec x) = P \phi(-\vec x)$ is an eigenvector
with eigenvalue $E^*$. The
eigenvalues of $H_5$ thus come in complex-conjugate pairs,
and furthermore, they are real for plane waves
fulfilling the dispersion relation~\eqref{dispsup}.
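The pseudo-Hermiticity property can be spot-checked numerically in momentum space, where the transformation $\vec x \to -\vec x$ amounts to $\vec p \to -\vec p$; the sketch below uses the Dirac representation of the $\gamma$ matrices (a convention we assume for illustration):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)     # gamma^0 = beta
g5 = np.block([[Z2, I2], [I2, Z2]]).astype(complex)      # gamma^5
gammas = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
alphas = [g0 @ g for g in gammas]                        # alpha = gamma^0 gamma

def H5(p, m):
    """H_5 = alpha.p + beta gamma^5 m in momentum space."""
    return sum(pi * ai for pi, ai in zip(p, alphas)) + m * (g0 @ g5)

p, m = np.array([0.3, -1.1, 0.7]), 0.5
# parity x -> -x corresponds to p -> -p; the parity matrix is P = gamma^0
assert np.allclose(g0 @ H5(-p, m).conj().T @ g0, H5(p, m))
```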
The covariant Green function corresponding to the Hamiltonian
$H_5$ thus reads as
\begin{equation}
S_T(p) = \frac{1}{E_\nu - H_5} \, \gamma^0 =
\frac{\cancel{p} - \gamma^5 \, m_\nu }{p^2 + m_\nu^2} \,.
\end{equation}
The tachyonic poles at $E_\nu^2 - \vec p^{\,2} = -m_\nu^2$
have to be encircled in a way consistent with the
boundary conditions imposed on the Green function.
Eigenvalues with
$E_\nu^2 = \vec p^{\,2} - m_\nu^2 < 0$
represent evanescent waves.
If one encircles the poles of the Green function
according to the Feynman prescription,
\begin{equation}
\label{ST}
S_T(p) = \frac{1}{\cancel{p} - \gamma^5 \, (m_\nu + \mathrm i\,\epsilon)} =
\frac{\cancel{p} - \gamma^5 \, m_\nu}{p^2 + m^2_\nu + \mathrm i \, \epsilon} \,,
\end{equation}
then the
energy-momentum dispersion relation is infinitesimally
displaced to read
$E_\nu = \pm \sqrt{\vec p^{\,2} - m_\nu^2 - \mathrm i \, \epsilon} $.
This is consistent with the evanescent wave picture
because positive-energy solutions have the form
$E_\nu = \epsilon -\mathrm i \, \sqrt{|\vec p^{\,2} - m_\nu^2|}$
and are thus exponentially damped for the propagation into the future,
whereas negative-energy solutions have the form
$E_\nu = -\epsilon +\mathrm i \, \sqrt{|\vec p^{\,2} - m_\nu^2|}$
and are thus exponentially damped for the propagation into the past.
In general, the Feynman prescription assigns an infinitesimal
negative imaginary part to energies whose real part is
positive, and vice versa.
Thus, while the time propagation of strictly tachyonic
wave packets (superpositions of the tachyonic plane-wave solutions)
is fully unitary (they have real eigenvalues),
a slight violation of unitarity cannot be avoided
if one allows eigenstates with $E_\nu^2 = \vec p^{\,2} - m_\nu^2 < 0$.
The complex resonance energies (the real part is only infinitesimal)
\begin{equation}
E_\nu = \epsilon -\mathrm i \sqrt{| \vec p^{\,2} - m_\nu^2 |} \,,
\qquad
E_\nu = -\epsilon + \mathrm i \sqrt{| \vec p^{\,2} - m_\nu^2 |} \,,
\qquad
\vec p^{\,2} < m_\nu^2 \,,
\end{equation}
describe the suppression of subluminal components of a
superluminal wave packet under time evolution.
One has to allow these solutions in the propagator~\eqref{ST}
if one would like to carry out the Fourier transformation
consistently, i.e., over the entire range $p^\mu \in \mathbbm{R}^4$,
or describe the time evolution of a general
wave packet under the Green function~\eqref{ST}.
It seems that a slight violation of unitarity,
relevant to the small sector $\vec p^{\,2} < m_\nu^2$,
where $m_\nu$ initially is of the order of a few $\mathrm{eV}$,
is the price to pay for the introduction of tachyonic particles~\cite{XiJi1987}.
Note that full unitarity cannot be preserved anyway in
a tachyonic theory if one goes beyond tree-level amplitudes,
as shown in Ref.~\cite{Bo1970}.
The time propagation of wave packets in potentials
with manifestly complex resonance energies
has been described in Refs.~\cite{MoiseyevMcCurdy,JeSuLuZJ2008}.
The evanescence of the subluminal neutrino
wave function components, which are excluded from the
real neutrino plane-wave eigenstates but included in the
propagator, is somewhat analogous
to the photon propagator, where one includes the so-called
scalar and longitudinal photons in the photon propagator but
leaves them out from the real, physical states of the photon
field, which are composed of transverse photons.
In Ref.~\cite{JeWu2011jpa},
the tachyonic propagator~\eqref{ST} is derived not by inversion
of the Hamiltonian, but by a quantization of the tachyonic
field operators. We briefly sketch the essential elements of the derivation.
The field operator is written as
\begin{align}
\hat\psi(x) =& \;
\int \frac{\mathrm d^3 k_\nu}{(2\pi)^3} \,
\frac{m_\nu}{E_\nu} \sum_{\sigma = \pm}
\left[ b_\sigma(k_\nu) \, \mathcal U_\sigma(\vec k_\nu) \,
\mathrm e^{-\mathrm i \, k_\nu \cdot x}
+ b_\sigma(-k_\nu) \, \mathcal V_\sigma(\vec k_\nu) \,
\mathrm e^{\mathrm i \, k_\nu \cdot x} \right]
\nonumber\\[2ex]
=& \;
\int \frac{\mathrm d^3 k_\nu}{(2\pi)^3} \,
\frac{m_\nu}{E_\nu} \sum_{\sigma = \pm}
\left[ b_\sigma(k_\nu) \, \mathcal U_\sigma(\vec k_\nu) \,
\mathrm e^{-\mathrm i \, k_\nu \cdot x}
+ d^{\mbox{{\bf{\tiny +}}}}_\sigma(k_\nu) \, \mathcal V_\sigma(\vec k_\nu) \,
\mathrm e^{\mathrm i \, k_\nu \cdot x} \right] \,,
\end{align}
where $E_\nu = \sqrt{\vec k_\nu^2 - m_\nu^2 - \mathrm i \, \epsilon}$
is the tachyonic energy and the four-vector $k_\nu$ equals
$k_\nu = (E_\nu, \vec k_\nu)$.
Here, the $b$ operators annihilate particles,
whereas the $d^{\mbox{{\bf{\tiny +}}}}$ operators create antiparticles.
We here explicitly accept a Lorentz-covariant
vacuum state, which transforms according to Ref.~\cite{Fe1967}.
The Lorentz-transformed vacuum is filled with all
particle and antiparticle states whose energy changes
sign under a Lorentz transformation (Lorentz boost),
as outlined in Eqs.~(5.6) and (5.7) of Ref.~\cite{Fe1967}.
The spinors $\mathcal U$ and $\mathcal V$ are given by
\begin{subequations}
\label{covariant}
\begin{equation}
\mathcal U_+(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{m_\nu-E_\nu+|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_+(\vec k_\nu) \\[0.77ex]
\dfrac{m_\nu+E_\nu-|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_+(\vec k_\nu) \\
\end{array} \right) \,,
\qquad
\mathcal U_-(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{m_\nu+E_\nu-|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_-(\vec k_\nu) \\[0.77ex]
\dfrac{-m_\nu+E_\nu-|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_-(\vec k_\nu) \\
\end{array} \right)
\end{equation}
for positive energy, and by
\begin{equation}
\mathcal V_+(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{-m_\nu-E_\nu+|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_+(\vec k_\nu) \\[0.77ex]
\dfrac{-m_\nu+E_\nu-|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_+(\vec k_\nu) \\
\end{array} \right) \,,
\qquad
\mathcal V_-(\vec k_\nu) =
\left( \begin{array}{c}
\dfrac{-m_\nu+E_\nu-|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_-(\vec k_\nu) \\[0.77ex]
\dfrac{m_\nu+E_\nu-|\vec k_\nu|}{2 \, \sqrt{m_\nu} \, \sqrt{|\vec k_\nu|- E_\nu}} \;
a_-(\vec k_\nu) \\
\end{array} \right)
\end{equation}
for negative energy (in both cases, we assume
that $|\vec k_\nu| > m_\nu$). The normalization conditions are given by
\begin{equation}
\overline{\mathcal U}_\sigma(\vec k_\nu) \; \mathcal U_\sigma(\vec k_\nu) =
\mathcal U^{\mbox{{\bf{\tiny +}}}}_\sigma(\vec k_\nu) \gamma^0 \mathcal U_\sigma(\vec k_\nu) = \sigma \,,
\qquad
\overline{\mathcal V}_\sigma(\vec k_\nu) \; \mathcal V_\sigma(\vec k_\nu) =
\mathcal V^{\mbox{{\bf{\tiny +}}}}_\sigma(\vec k_\nu) \gamma^0 \mathcal V_\sigma(\vec k_\nu) = -\sigma \,.
\end{equation}
\end{subequations}
Quantizing the theory according to Fermi--Dirac statistics,
\begin{subequations}
\label{quantization}
\begin{align}
\left\{ b_\sigma(k_\nu) , b_{\rho}(k_\nu') \right\} = & \;
\left\{ b^{\mbox{{\bf{\tiny +}}}}_\sigma(k_\nu) , b^{\mbox{{\bf{\tiny +}}}}_{\rho}(k_\nu') \right\} =
\left\{ d_\sigma(k_\nu) , d_{\rho}(k_\nu') \right\} =
\left\{ d^{\mbox{{\bf{\tiny +}}}}_\sigma(k_\nu) , d^{\mbox{{\bf{\tiny +}}}}_{\rho}(k_\nu') \right\} = 0 \,,
\\[0.77ex]
\left\{ b_\sigma(k_\nu) , b^{\mbox{{\bf{\tiny +}}}}_{\rho}(k_\nu') \right\} = & \;
(-\sigma) \,
(2 \pi)^3 \, \frac{E_\nu}{m_\nu} \delta^3(\vec k_\nu - \vec k_\nu') \, \delta_{\sigma\rho}\,,
\qquad
\left\{ d_\sigma(k_\nu) , d^{\mbox{{\bf{\tiny +}}}}_{\rho}(k_\nu') \right\} =
(-\sigma) \,
(2 \pi)^3 \, \frac{E_\nu}{m_\nu} \delta^3(\vec k_\nu - \vec k_\nu') \,
\delta_{\sigma\rho}\,,
\end{align}
\end{subequations}
one can easily show that
\begin{equation}
\label{tensor}
\sum_\sigma (-\sigma) \; \mathcal U_\sigma(\vec k_\nu) \otimes
\overline{\mathcal U}_\sigma(\vec k_\nu) \,\gamma^5 =
\frac{\cancel{k}_\nu - \gamma^5 \, m_\nu}{2 m_\nu} \,,
\qquad
\qquad
\sum_\sigma (-\sigma) \; \mathcal V_\sigma(\vec k_\nu) \otimes
\overline{\mathcal V}_\sigma(\vec k_\nu) \,\gamma^5 =
\frac{\cancel{k}_\nu + \gamma^5 \, m_\nu}{2 m_\nu} \,.
\end{equation}
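The first of these tensor sums can be spot-checked numerically by building the $\mathcal U_\sigma$ spinors explicitly (a sketch; the Dirac representation is assumed, and the normalization is chosen such that $\overline{\mathcal U}_\sigma \, \mathcal U_\sigma = \sigma$):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g5 = np.block([[Z2, I2], [I2, Z2]]).astype(complex)
gammas = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

m, k, theta, phi = 0.5, 2.0, 0.7, 1.3          # |k| > m
E = np.sqrt(k**2 - m**2)                       # tachyonic dispersion
kvec = k * np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi), np.cos(theta)])
a = {+1: np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)]),
     -1: np.array([-np.sin(theta / 2) * np.exp(-1j * phi), np.cos(theta / 2)])}

N = 2.0 * np.sqrt(m) * np.sqrt(k - E)          # note |k| > E for a tachyon
U = {+1: np.concatenate([(m - E + k) / N * a[+1], (m + E - k) / N * a[+1]]),
     -1: np.concatenate([(m + E - k) / N * a[-1], (-m + E - k) / N * a[-1]])}

# Ubar_sigma U_sigma = sigma
for s in (+1, -1):
    assert np.isclose((U[s].conj() @ g0 @ U[s]).real, s)

kslash = E * g0 - sum(kv * g for kv, g in zip(kvec, gammas))
lhs = sum(-s * np.outer(U[s], U[s].conj() @ g0) @ g5 for s in (+1, -1))
assert np.allclose(lhs, (kslash - m * g5) / (2.0 * m))
```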
The field anticommutator is
\begin{align}
& \{ \hat\psi_\xi(x), \overline{\hat\psi}_{\xi'}(y) \} =
\left< 0 \left| \{ \hat\psi_\xi(x),
\overline{\hat\psi}_{\xi'}(y) \} \right| 0 \right>
\nonumber\\[2ex]
& \qquad = \int \frac{\mathrm d^3 k_\nu}{(2 \pi)^3}
\frac{m_\nu}{E_\nu} \,
\sum_{\sigma = \pm} \left\{
\mathrm e^{-\mathrm i k_\nu \cdot (x-y)} \,
\left(-\sigma\right) \, \left[ \mathcal U_{\sigma}(\vec k_\nu) \right]_\xi \,
\left[ \overline{\mathcal U}_{\sigma}(\vec k_\nu) \right]_{\xi'}
+
\mathrm e^{\mathrm i k_\nu \cdot (x-y)} \,
\left(-\sigma\right) \, \left[ \mathcal V_{\sigma}(\vec k_\nu) \right]_\xi \,
\left[ \overline{\mathcal V}_{\sigma}(\vec k_\nu) \right]_{\xi'} \right\} \,,
\end{align}
where $\xi$ denotes the spinor index.
It follows that
\begin{equation}
\label{anticom2}
\{ \hat\psi_\xi(x), \overline{\hat\psi}_{\xi'}(y) \} \, \gamma^5 =
\left( \mathrm i \, \cancel{\partial} - \gamma^5 \, m_\nu \right)_{\xi \xi'}
\mathrm i \, \Delta(x - y) \,,
\qquad
\qquad
\Delta(x-y) =
-\mathrm i \int \frac{\mathrm d^3 k_\nu}{(2 \pi)^3} \,
\frac{1}{2 E_\nu} \, \left( \mathrm e^{-\mathrm i k_\nu \cdot (x-y)} -
\mathrm e^{\mathrm i k_\nu \cdot (x-y)} \right)\,,
\end{equation}
where $\Delta(x-y)$ is introduced as in Chap.~3 of Ref.~\cite{ItZu1980}.
Furthermore, Eq.~(3.170) of Ref.~\cite{ItZu1980} generalizes to
\begin{equation}
\left.
\{ \hat\psi_\xi(x), \overline{\hat\psi}_{\xi'}(y) \} \, \gamma^5
\right|_{x_0 = y_0}
= -
\left.
\left( \gamma^0 \right)_{\xi\xi'} \, \partial_0 \, \Delta(x - y)
\right|_{x_0 = y_0}
= \left( \gamma^0 \right)_{\xi\xi'} \, \delta^3(\vec x - \vec y) \,.
\end{equation}
In full analogy with Eq.~(3.174) of Ref.~\cite{ItZu1980}
and in agreement with Ref.~\cite{JeWu2012},
the tachyonic ($T$) propagator is then found as
\begin{equation}
\left< 0 \left| T \, \hat\psi_\xi(x) \,
\overline{\hat\psi}_{\xi'}(y) \gamma^5 \right| 0 \right> =
\mathrm i \, S_T(x - y)_{\xi\xi'} \,,
\qquad
\qquad
S_T(x - y) =
\int \frac{\mathrm d^4 k_\nu}{(2 \pi)^4} \,
\mathrm e^{-\mathrm i k_\nu \cdot (x-y)} \,
\frac{\cancel{k}_\nu - \gamma^5 \, m_\nu}{k_\nu^2 + m^2_\nu + \mathrm i \, \epsilon} \,,
\end{equation}
which confirms Eq.~\eqref{ST}. Indeed, the propagator obtained
from the quantized theory is equal to the propagator obtained
from the inversion of the Hamiltonian in the Lorentz-covariant
formulation, as it should.
The couplings of the neutrino involve the chirality
projector $(1 - \gamma^5)/2$, and in view
of $\gamma^5 \, (1 \pm \gamma^5)/2 = \pm (1 \pm \gamma^5)/2$,
the introduction of the $\gamma^5$ matrix in Eq.~\eqref{ST} is
reabsorbed into the interaction Lagrangian.
The non-unitarity is small because $m_\nu^2 $ is very small.
For a tachyonic particle, the evanescence
of non-tachyonic wave packet components is natural because its
tachyonic components remain tachyonic upon
Lorentz transformation (see Fig.~\ref{fig2}).
The tachyonic Dirac equation thus provides a convenient
framework for the description of freely propagating, superluminal,
electromagnetically neutral particles.
\section{Neutrino Mass Running}
\label{running}
In Secs.~\ref{kc} and~\ref{td}, we have seen that a neutrino
mass running with the energy can conceivably suppress
Cerenkov-type decay processes, and the quantization of the
tachyonic Dirac equation has been discussed as a convenient description of
tachyonic spin-$1/2$ particles;
it naturally implies the suppression of the right-handed neutrino.
If current experimental data~\cite{OPERA2011v2}
are confirmed, then we now have to explain why the
effective mass of the neutrino, which needs to be inserted into the
tachyonic Dirac equation, changes from a few eV in the keV neutrino energy range,
to a mass on the order of MeV in the GeV energy range.
We note that neutrino mass running is usually assumed
to set in at the energy scales of grand unification (see-saw mechanism).
However, the experimental data~\cite{DaEtAl1987,AdEtAl2007,OPERA2011v2,%
RoEtAl1991,AsEtAl1994,AsEtAl1996,StDe1995}
all point to a neutrino mass running which sets in at much
lower energy scales. We assume that the mass term is genuinely tachyonic.
The scenario that we would like to propose is as follows:
We conjecture that the neutrino mass running is due to
an interaction with a hitherto unknown field that modifies its
effective mass with the energy.
At low energy, the interaction with the unknown field is
weak, so that the apparent neutrino mass is in the $\mathrm{eV}$ range,
whereas at higher energies, the interaction becomes stronger
and leads to the observed~\cite{OPERA2011v2} large tachyonic masses.
We thus assume that the (bulk of the) neutrino mass is
created dynamically~\cite{Oi2011}.
Possibly, there is some threshold region where the
effective mass of the neutrino intersects with the
mass of the field it interacts with, and this might help explain
consistency with astrophysical data~\cite{DaEtAl1987}.
In the following, we would like to present a
semi-quantitative analysis which supports these conjectures.
We investigate a scalar-minus-pseudoscalar
($S-P$) interaction Lagrangian of the form
\begin{equation}
\label{Lint}
\mathcal L_{\rm int} = G \,\hat{\phi}_X \,
\overline{\hat{\psi}} \, (1 - \gamma^5) \, \hat{\psi} \,.
\end{equation}
Here, $\hat{\phi}_X$ is a scalar field operator, $G$ is a dimensionless
coupling, and the fermionic field operators
for the neutrino are denoted as $\hat{\psi}$.
The operator is of dimension~4 and therefore renormalizable;
it describes a Yukawa interaction with a chirality projector.
The complete Lagrangian of the tachyonic neutrino field,
the scalar field plus the $S-P$ interaction reads
\begin{align}
\label{LLLLL}
\mathcal L(x) =& \; \frac{\mathrm i}{2}
\left[ \overline{\hat \psi}(x) \gamma^\mu \left( \partial_\mu \hat{\psi}(x) \right) -
\left( \partial_\mu \overline{\hat \psi}(x) \right) \gamma^\mu \hat{\psi}(x) \right]
- \overline{\hat \psi}(x) \,\gamma^5 \, m_\nu \, \hat{\psi}(x)
\nonumber\\[2ex]
& \; - \frac12 \, \hat{\phi}_X(x) \, \left( \Box + M^2_X \right) \hat{\phi}_X(x)
+ G \,\hat{\phi}_X(x) \, \overline{\hat \psi}(x) \, (1 - \gamma^5) \, \hat{\psi}(x) \,.
\end{align}
At low energy, from dimensional analysis alone,
the induced one-loop neutrino mass running via the renormalization group (RG)
can be written down as
\begin{equation}
\frac{\mathrm d m_\nu}{\mathrm d \ln(\mu)} = \mu \, \frac{\mathrm d m_\nu}{\mathrm d \mu}
\propto \left[m_\nu(\mu)\right]^3 \; \left[G_X(\mu)\right]^2 \,,
\qquad
\qquad
\left[G_X(\mu)\right]^2 =
G^2_X \, \ln(\mu) =
\frac{G^2}{M_X^2} \, \ln(\mu) \,,
\end{equation}
where we assume a logarithmic running of the coupling
constant with the scale $\mu$.
Integrating the RG evolution equation,
\begin{equation}
\int \frac{\mathrm d m_\nu}{m_\nu^3} = G_X^2 \, \int \frac{\mathrm d\mu}{\mu} \, \ln(\mu) \,,
\qquad
\int\limits^{m_\nu(17 \, \mathrm{GeV})}_{m_\nu(18\,\mathrm{keV})} \frac{\mathrm d m_\nu}{m_\nu^3} =
G_X^2 \, \int^{17 \, \mathrm{GeV}}_{18\,\mathrm{keV}} \frac{\mathrm d\mu}{\mu} \, \ln(\mu) \,,
\end{equation}
with $m_\nu(18 \, \mathrm{keV}) \approx 100 \, \mathrm{eV}$ (see Ref.~\cite{RoEtAl1991})
and $m_\nu(17 \, \mathrm{GeV}) \approx 117 \, \mathrm{MeV}$ (see Ref.~\cite{OPERA2011v2}),
we find that an $X$ particle of mass in the range $M_X \approx 1.4 \,\mathrm{keV}$
could potentially induce a neutrino mass running from about 100~eV
at 18~keV energies~\cite{RoEtAl1991} to 117~MeV at energies
of 17~GeV~\cite{OPERA2011v2}. Here, we assume that $G \approx 1$ and
a universal running
of the electron neutrino mass~\cite{RoEtAl1991} and the
muon neutrino mass~\cite{OPERA2011v2} with the energy.
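The order of magnitude of this estimate can be reproduced in a few lines (a rough sketch: the scale inside the logarithm is a convention, and here $\mu$ is measured in $\mathrm{eV}$, so only the order of magnitude of $M_X$ is meaningful):

```python
import numpy as np

# RG estimate of M_X; all quantities in eV
m1, m2 = 100.0, 117e6      # m_nu at 18 keV and at 17 GeV
mu1, mu2 = 18e3, 17e9      # corresponding energy scales
G = 1.0                    # dimensionless coupling, assumed of order unity

lhs = 0.5 * (1.0 / m1**2 - 1.0 / m2**2)              # integral of dm/m^3
rhs_log = 0.5 * (np.log(mu2)**2 - np.log(mu1)**2)    # integral of ln(mu) dmu/mu
M_X = G * np.sqrt(rhs_log / lhs)                     # lhs = (G/M_X)^2 * rhs_log

# keV-scale X particle, consistent (up to the logarithm's scale
# convention) with the estimate M_X ~ 1.4 keV quoted in the text
assert 1e3 < M_X < 3e3
```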
The difference between the observed OPERA neutrino mass~\cite{OPERA2011v2} of $117 \, \mathrm{MeV}$
and the low-energy neutrino data of Refs.~\cite{RoEtAl1991,AsEtAl1994,AsEtAl1996,StDe1995,%
WeEtAl1999,LoEtAl1999,BeEtAl2008}, where masses in the $\mathrm{eV}$ range were
observed, suggests that significant neutrino mass running has to set in
at energies much below $17\,\mathrm{GeV}$, so that we can safely assume that $M_X \ll 17\,\mathrm{GeV}$.
This finding and the interaction~\eqref{Lint} are not described by any known
particle in the standard model, and thus,
our model constitutes a pertinent extension.
However, one may object that this treatment amounts to
an application of a one-loop running of the mass in a
domain which in view of $G \approx 1$ clearly is
nonperturbative.
This high-energy limit could be analyzed as follows.
We first recall that in the
high-energy domain, where the effective neutrino mass
is in the $\mathrm{MeV}$ range (see Refs.~\cite{OPERA2011v2,KaEtAl1979})
we assume that the neutrino mass is (almost) exclusively
generated by the strong
(nonperturbative) self-interaction with the $X$ field.
It is interesting to observe that {\em polynomial}
behaviour of RG functions in the {\em strong-coupling domain}
has recently been obtained by a sophisticated
analysis of higher-order perturbative terms,
for the $\beta$ functions of $\phi^4$ theories
and of quantum electrodynamics~\cite{Su2001phi4,Su2001}.
If the mass of the $X$ particle is negligible as compared
to the mass of the neutrino in the high-energy domain,
then the mass scaling must be independent of $M_X$, and
again, from dimensional analysis alone, we may conjecture that
in the high-energy, strong-coupling limit,
\begin{equation}
\mu \, \frac{\mathrm d m_\nu}{\mathrm d \mu}
\propto G^2 \, m_\nu \,,
\qquad
\qquad
\int \frac{\mathrm d m_\nu}{m_\nu} =
K \, G^2\, \int \frac{\mathrm d \mu}{\mu} \,,
\qquad
\qquad
m_\nu(\mu) = m_\nu(\mu_0) \,
\left( \frac{\mu}{\mu_0} \right)^{K \,G^2} \,,
\end{equation}
where $K$ is a constant of order unity.
In view of Eq.~\eqref{mmE}, if we assume that $G \approx 1/\sqrt{K}$, then
\begin{equation}
\label{sun}
m_\nu = m_\nu(E_\nu)
= \eta \, \left( E_\nu \right)^{K \, G^2}
\approx \eta \, E_\nu \,,
\qquad
G \approx \frac{1}{\sqrt{K}} \,,
\qquad
\eta \approx \frac{1}{145} \,,
\end{equation}
where the value of $\eta$ is chosen such as to be
consistent with Eq.~\eqref{mmE}.
\begin{figure}[t!]
\begin{minipage}[b]{0.45\linewidth}
\begin{center}
\includegraphics[height=0.6\linewidth]{fig3a.pdf} \\
{\bf (a)}
\end{center}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\begin{center}
\includegraphics[height=0.6\linewidth]{fig3b.pdf} \\
{\bf (b)}
\end{center}
\end{minipage}
\caption{\label{fig3}(Color online.) Measured neutrino velocities in the
range $E_\nu = 3 \, \mathrm{GeV}$ (Ref.~\cite{AdEtAl2007})
up to $E_\nu = 195 \, \mathrm{GeV}$ (Ref.~\cite{KaEtAl1979}).
The OPERA data are given in Eq.~\eqref{eee}
and correspond to the data bins at
$E = 13.8 \, \mathrm{GeV}$, $E = 28.2 \, \mathrm{GeV}$, and $E = 40.7 \, \mathrm{GeV}$ (circles).
The data point at $E_\nu = 3 \, \mathrm{GeV}$ is from Ref.~\cite{AdEtAl2007} (square).
All remaining data points (triangles) are from Ref.~\cite{KaEtAl1979}.
Panel~(a) corresponds to the data plotted in Fig.~3 of Ref.~\cite{AdEtAl2007},
while panel~(b) applies a path length correction of
$\Delta_{\rm path} = -0.5 \times 10^{-4}$ to the data (triangles) of
Ref.~\cite{AdEtAl2007}, as discussed near the end of Ref.~\cite{AdEtAl2007}.
Here, $\Delta$ is the relative
deviation from the speed of light in vacuum,
which we multiply by a scaling factor of $10^4$ on the $y$ axis.
The solid line at $\Delta = 2.4 \times 10^{-5}$
corresponds to the result~\eqref{mres} based on our model.}
\end{figure}
We are now in the position to add some more,
somewhat speculative, remarks on the
experimental findings of Refs.~\cite{KaEtAl1979,AdEtAl2007,OPERA2011v2}.
Based on the numerical entries in Table~II and Fig.~3 of Ref.~\cite{KaEtAl1979},
one may investigate the observed neutrino velocities as a function of the
propagation energy.
The authors of the somewhat inconclusive
1979 paper (Ref.~\cite{KaEtAl1979})
suggest ascribing a path length correction of
$\Delta_{\rm path} = -0.5^{+0.2}_{-0.1} \times 10^{-4}$
to their data, because the muons that were ``racing'' against the
neutrinos in the experiment were assumed to be artificially
delayed due to multiple scattering events, which
extend the muon path length in comparison to the
muon neutrino path length.
The path length correction was assumed to be constant over the
energy range analyzed in Ref.~\cite{KaEtAl1979},
uniformly affecting neutrinos in the energy range of
$32 \, \mathrm{GeV} < E_\nu < 195 \, \mathrm{GeV}$ in an experiment
over a relatively short baseline of about 900~m (which is
smaller than the OPERA baseline by a factor of roughly $10^3$).
We find that the discussion on the derivation of the path length correction
in Ref.~\cite{KaEtAl1979} is rather short and therefore
present data with and without this correction in Fig.~\ref{fig3};
the same approach was recently taken in Figs.~1 and~2 of
Ref.~\cite{ACEtAl2011a}.
The model~\eqref{sun} leads to a constant deviation of the
neutrino velocity of
\begin{equation}
\label{mres}
\Delta = \frac{v-c}{c} = \sqrt{1 + \eta^2} - 1 = 2.4 \times 10^{-5} \,,
\end{equation}
independent of the neutrino energy. This result is compared to the
available experimental data~\cite{KaEtAl1979,AdEtAl2007,OPERA2011v2}
in Fig.~\ref{fig3}.
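The numerical value in Eq.~\eqref{mres} follows from a one-line evaluation (illustrative sketch):

```python
import numpy as np

eta = 1.0 / 145.0
# tachyonic dispersion with m_nu = eta * E_nu:
# E = m/sqrt(v^2 - 1)  =>  sqrt(v^2 - 1) = eta  =>  v = sqrt(1 + eta^2)
delta = np.sqrt(1.0 + eta**2) - 1.0   # (v - c)/c, approximately eta^2/2
assert abs(delta - 2.4e-5) < 1.0e-6
```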
While our model is somewhat speculative at the current stage,
it is intriguing to observe that the solution of the
simple-minded RG equation~\eqref{sun} is in good
agreement with the observed neutrino velocities over a wide
energy interval (see Fig.~\ref{fig3}).
We also recall that the concomitant significant
neutrino mass running will suppress decays because the
tachyonic mass in the exit channel is much lower than in the incoming channel
(see Sec.~\ref{kc}).
\section{Conclusions}
\label{conclu}
Tachyons have the potential to fundamentally alter our view of physical law,
but they can be incorporated into the framework of Lorentz transformations,
despite obvious problems with the causality principle.
In Ref.~\cite{ArSu1968}, the authors
argue that a ``sensible'' theory is obtained if one insists
that the only physical quantities are transition amplitudes,
and a negative-energy in
(out) state is understood to be a positive-energy out (in) state.
This statement is in need of further explanation.
Suppose that observer $A$ sees event $E$
before $E'$, and observer $A'$ sees event $E'$ before $E$, because the two
events are separated by a space-like interval, and the Lorentz transform
for the frames $A$ and $A'$ reverses the time-ordering of
events $E$ and $E'$. According to Ref.~\cite{BiDeSu1962}, the reversed
time ordering occurs if and only if
the energy of the exchanged particle also changes sign between the two frames. So,
provided one reinterprets the negative-energy eigenstates of the tachyonic Dirac
Hamiltonian propagating backward in time (the antiresonances included) as
positive-energy solutions propagating forward in time,
the creation and absorption of a particle can be reinterpreted consistently,
as long as the transition amplitude
is unaffected by the reinterpretation. This point has also been stressed in
Refs.~\cite{BiDeSu1962,BiSu1969}.
One problem, though, in the consistency of observations of
tachyons lies in conceivable decay processes~\cite{CoGl2011}.
In this paper, we investigate threshold conditions for the
emission (see Sec.~\ref{kc}) of real particles by analogues of Cerenkov radiation
emitted by superluminal, tachyonic neutrinos that fulfill
the dispersion relation~\eqref{dispsup}. We find that such
emissions, as shown in Fig.~\ref{fig1},
are possible at high energies for small Cerenkov angles
in a narrow cone of emission angles $\theta$
[see Eqs.~\eqref{thetagamma} and~\eqref{theta0}]. Furthermore, at sufficiently
large energy, a nonvanishing emission probability exists
even for very small tachyonic mass squares $-m_\nu^2$.
However, the calculation of the corresponding decay rates
crucially depends on the dispersion relation used in the calculation.
The tachyonic relation~\eqref{dispsup} is Lorentz-invariant,
and the effective mass $m_\nu$ crucially influences the decay rate.
We then investigate, based on the tachyonic Dirac equation
(see Sec.~\ref{td}), how the effective neutrino mass $m_\nu$ could
possibly change from a few $\mathrm{eV}$ at low energies in the $\mathrm{keV}$
range to energies of a hundred $\mathrm{MeV}$ in the $\mathrm{GeV}$ range
[see Eqs.~\eqref{mmE1},~\eqref{mmE2} and~\eqref{mmE}].
We here come to the conclusion that
a viable explanation for the large virtuality
$E_\nu^2 - \vec p^{\,2}$ of the OPERA neutrinos
could lie in an additional interaction that modifies the
neutrino propagation at high energies.
At an energy in the $\mathrm{GeV}$ range, as measured by OPERA,
the propagation velocity of
a particle with a rest mass on the order of a few $\mathrm{eV}$
is not expected to deviate from the speed of light by
a factor on the order of $10^{-5}$.
It does not really matter in this case that the OPERA experiment
has measured a deviation of $v_\nu$ from $c$ in the {\em super}luminal direction.
A hypothetical experimental result for $v_\nu - c < 0$ in the {\em sub}luminal
direction, of the same order-of-magnitude,
as indicated in Eq.~\eqref{hypothetical}, would have been
equally surprising. According to previous neutrino
data~\cite{RoEtAl1991,AsEtAl1994,AsEtAl1996,StDe1995,%
WeEtAl1999,LoEtAl1999,BeEtAl2008},
OPERA was not expected to find a deviation $| v_\nu - c|$ in the
neutrino propagation velocity of the order-of-magnitude given in
Eq.~\eqref{mmE}. In light of~Eqs.~\eqref{mmE1} and~\eqref{mmE2}, the OPERA
signal would otherwise
correspond to a particle with a rest mass in the
range of a hundred $\mathrm{MeV}$, or to an effective
neutrino mass that grows linearly with the energy.
Unfortunately, neither the
Higgs mechanism nor the Gross-Neveu model
induces a mass that depends on the energy.
Once the vacuum expectation value of the background field
that generates the mass is fixed, the mass of the
constituent particle is also fixed.
These considerations indicate that one should
investigate genuine neutrino mass running due to interactions
which have hitherto not been introduced into the standard model.
In Sec.~\ref{running} of this
paper, we write down a chiral Yukawa interaction
which might induce a neutrino mass running with the
experimentally observed parameters.
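Before moving on, the order-of-magnitude reasoning above can be checked numerically. The beam energy and velocity excess used below are representative assumptions (an energy of order $10$~GeV and $|v_\nu-c|/c\sim10^{-5}$), not values taken from the equations of this paper:

```python
import math

# Representative values (assumptions for illustration only):
E = 17e9      # neutrino energy in eV
delta = 1e-5  # velocity excess (v - c)/c

# Tachyonic dispersion E^2 - p^2 = -m^2 gives v = p/E = sqrt(1 + m^2/E^2),
# hence m = E*sqrt(v^2 - 1) ~ E*sqrt(2*delta) for v = 1 + delta.
m_eff = E * math.sqrt((1 + delta)**2 - 1)  # in eV
print(f"effective mass ~ {m_eff/1e6:.0f} MeV")  # -> 76 MeV, the "hundred MeV" scale

# Conversely, a few-eV rest mass yields a negligible velocity deviation:
m_low = 2.0  # eV
print(f"|v - c|/c for a 2 eV mass: {m_low**2 / (2 * E**2):.1e}")  # -> 6.9e-21
```

The two numbers illustrate the tension discussed above: a $10^{-5}$ velocity anomaly at GeV energies maps onto a mass scale of order $10^{8}$~eV, many orders of magnitude away from what a few-eV rest mass would produce.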
\section*{Acknowledgments}
Helpful conversations with B.~J.~Wundt are gratefully acknowledged.
The author acknowledges support from the National Science Foundation
and by a Precision Measurement Grant from the National Institute of Standards
and Technology.
\section{Introduction}
Lanthanide-based heavy-fermion (HF) systems are suitable model
systems to study emergent phenomena at a quantum critical point
(QCP), where collective quantum fluctuations drive the system
continuously from a magnetically ordered to a non-magnetic ground
state~\cite{FocusIssue2008,Mathur1998,Stewart2001,Park2006,Friedemann2009,Stockert2011}.
However, despite intense research, to the best of our knowledge, no
4$f$-based material is known with a continuous ferromagnetic (FM) to
paramagnetic (PM) quantum phase transition (QPT). The existence of
such a QPT is also controversially discussed from a theoretical
point of
view~\cite{Kirkpatrick2003,Conduit2009,Yamamoto2010,Peters2012,Green2012}.
Recently, Krellner \textsl{et al.} suggested that the HF metal
YbNi$_{4}$P$_{2}$ with a quasi-one-dimensional (1-D) electronic
structure exhibits FM quantum criticality above a low FM transition
temperature $T_{C}=170$ mK~\cite{Krellner2011}. YbNi$_{4}$P$_{2}$
crystallizes in the tetragonal ZrFe$_{4}$Si$_{2}$ structure
containing isolated chains of edge-connected Ni tetrahedra along the
$c-$axis. The Yb atoms are located in the channels between these Ni
tetrahedral chains. The reduced dimensionality in the Yb and Ni
network and the geometrical frustration between neighboring Yb
chains give rise to enhanced quantum spin fluctuations of the
magnetic Yb$^{3+}$ ions. In the PM state above 50~K, the magnetic
susceptibility shows Curie-Weiss behavior with an effective moment
$\mu_{eff}=4.52\mu_{B}$ that is characteristic of magnetic
Yb$^{3+}$ ions. Analysis of the magnetic entropy reveals a Kondo
energy scale of $T_{K}\approx8$ K for the crystal electric field
ground state doublet. The FM transition is evidenced by distinct
anomalies in magnetic susceptibility, specific heat, and resistivity
measurements. Low-$T$ magnetization measurements suggest an ordered
FM moment of $m_{ord}\approx0.05(4)\mu_{B}$. Pronounced
non-Fermi-liquid (NFL) behavior is reflected by a
stronger-than-logarithmic diverging Sommerfeld coefficient and a
linear-in-$T$ resistivity state apparent in a $T$ range larger than
a decade above $T_{C}$. In external magnetic fields, the NFL
behavior is suppressed and FL behavior gradually recovers.
Therefore, YbNi$_{4}$P$_{2}$ is considered a clean system
situated in the very close vicinity of a FM QCP, with FM quantum
fluctuations dominating thermodynamic and transport quantities at
$T>T_{C}$.
The present knowledge of YbNi$_{4}$P$_{2}$ is based on measurements
of macroscopic magnetic, thermodynamic, and transport properties.
The next step in a deeper investigation of this prospective FM
quantum critical system is to get insight on a microscopic level.
Beside the nature of the magnetic order, a central issue in the
present context of critical behavior is the spin dynamics. Since in
systems close to a QCP, the ordered moment is usually strongly
reduced, muon spin relaxation ($\mu$SR) has proven to be an
extremely valuable technique to collect appropriate
information~\cite{MacLaughlin2001,Ishida2003,MacLaughlin2004}.
Here, we present $\mu$SR experiments on polycrystalline
YbNi$_{4}$P$_{2}$, providing microscopic evidence for static
magnetism at $T\leq T_{C}\approx140$ mK with an ordered moment of
$m_{ord}=(2.5-4.6)\times10^{-2}\mu_{B}$/Yb, depending on the assumed
muon site. Above $T_{C}$, the muon-spin polarization $P(t)$ obeys
the time-field scaling relation $P(t)=P(t/B^{0.81(5)})$, indicating
cooperative and critical spin dynamics.
In a $\mu$SR experiment positive spin-polarized muons are implanted
into the sample, and the subsequent time evolution of the muon spin
polarization is monitored by detecting the asymmetric spatial
distribution of positrons emitted from the muon decay~\cite{Schenk}.
$\mu$SR in longitudinal applied magnetic fields is dominated by
Yb-4$f$ electronic spin fluctuations that couple to the implanted
muons. The $\mu$SR experiments on YbNi$_{4}$P$_{2}$ in zero field
(ZF) and longitudinal (LF) applied field -- with respect to the
initial muon spin polarization -- were performed on the $\pi$M3 beam
line at the Swiss Muon Source (S$\mu$S) at the
Paul-Scherrer-Institut, Switzerland. The sample was prepared by
crushing $\sim270$~mg of single crystalline material, grown in a
self-flux at 1400$\,^{\circ}\mathrm{C}$ in a closed tantalum crucible
and characterized by powder x-ray diffraction experiments, proving
the absence of any foreign phases. Detailed low-$T$ measurements on
polycrystalline YbNi$_{4}$P$_{2}$ were reported
elsewhere~\cite{Krellner2011}.
Figure~1(a) displays typical time dependencies of the ZF muon-spin
polarization $P(t)$ in YbNi$_{4}$P$_{2}$ at representative
temperatures above and below $T_{C}$. A finite $T$-independent
background signal due to muons that stopped in a Ag sample holder
(signal fraction $\approx50\%$) was taken into account. At
$T\geq160$~mK, an exponential muon-spin relaxation is associated
with fast fluctuating paramagnetic electron spins with a relaxation
rate $\lambda$(160~mK)$ \approx0.152(2)\mu s^{-1}$. Note that dense
static nuclear dipole moments would give rise to a weak Gaussian
relaxation in the PM regime. While cooling through $T_{C}$, an
additional magnetic relaxation mechanism is apparent, strongly
increasing with lowering $T$. Below $T_{C}$, a low-frequency
oscillation with a Gaussian relaxation of the muon-spin polarization
is observed indicating magnetic ordering of weak electronic
Yb$^{3+}$ moments. The muon-spin asymmetry data in the FM regime can
be described best using the functional
form~\cite{Barsov1990,Kornilov1991}:
\begin{figure}
\begin{center}
\includegraphics[width=0.90\columnwidth]{OrderParameter9.eps}
\end{center}
\caption[1]{(Color online)~(a) Corrected muon spin polarization
$P(t)$ at ZF for representative $T$ above and below
$T_{C}\approx140$~mK. At $T\leq T_{C}$, solid lines are fitting
curves according to Eq.~(1). (b) $T$ dependence of the ZF $\mu$SR
frequency $f_{\mu}(T)$. The solid line is a fit to the
phenomenological function:
$f_{\mu}=f_{\mu}(0)\cdot(1-\frac{T}{T_{C}})^{n}$. (c) $T$ dependence
of the ZF static internal field distribution $\sigma$ in Eq.~(1).
The solid line is a guide to the eye. (d) $T$-dependence of
$1/T_{1}T$ in the PM regime. The line describes power-law behavior
as $\frac{1}{T_{1}T}\propto T^{-1.5}$.}
\end{figure}
\begin{eqnarray}
P(t)=\frac{1}{3}+\frac{2}{3}[\cos(2\pi
f_{\mu}t)-\frac{\sigma^{2}t}{2\pi f_{\mu}}\cdot\sin(2\pi
f_{\mu}t)]\cdot e^{-\frac{1}{2}\sigma^{2}t^{2}},
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[width=0.90\columnwidth]{20mKdecoupling5.eps}
\end{center}
\caption[1]{(Color online) Corrected muon spin polarization at
$T=20$~mK and various longitudinal magnetic fields $B_{LF}$. The
lines represent theoretical depolarization curves for the static GKT
function in corresponding longitudinal fields.}
\end{figure}
where $f_{\mu}$ and $\sigma$ are the muon spin precession frequency
and the Gaussian field width, respectively. The 2/3 oscillating and
the 1/3 non-oscillating terms originate from the spatial averaging
in polycrystalline samples, where 2/3 (1/3) of the internal magnetic
field components are directed perpendicular (parallel) to the
initial muon spin, causing a precession (no precession) of the muon
spin. The observation of a 2/3 and 1/3 signal fraction below
$T_{C}$ implies dense magnetic moments and proves that 100\% of the
sample volume shows static magnetic order. The latter is supported
by LF-$\mu$SR measurements as discussed in detail below. In the
limit $2\pi f_{\mu}\gg\sigma$, Eq.~(1) becomes a Gaussian damped
cosine function. For $2\pi f_{\mu}\rightarrow0$, close to the
magnetic transition, Eq.~(1) is equivalent to the Gaussian
Kubo-Toyabe (GKT) function~\cite{Hayano}, which describes a
muon-spin relaxation due to a static Gaussian field distribution
centered around $B_{local}=0$. ZF-$\mu$SR on the
antiferromagnetically ordered system
YbRh$_{2}$Si$_{2}$~\cite{Ishida2003} reveals a similar crossover
from a Lorentzian to a Gaussian damped $\mu$SR signal in the
vicinity of the PM to magnetic phase transition, attributed to a
transition from dynamic to static magnetism of magnetic Yb$^{3+}$
moments.
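The two limits of Eq.~(1) discussed above can be verified numerically; the parameter values in the sketch below are illustrative, chosen only to be of the order of the measured width:

```python
import math

def polarization(t, f_mu, sigma):
    """Eq. (1): 2/3 oscillating + 1/3 non-oscillating term (t in us)."""
    w = 2 * math.pi * f_mu
    osc = math.cos(w * t) - (sigma**2 * t / w) * math.sin(w * t)
    return 1/3 + 2/3 * osc * math.exp(-0.5 * sigma**2 * t**2)

def gkt(t, sigma):
    """Static Gaussian Kubo-Toyabe function."""
    return 1/3 + 2/3 * (1 - sigma**2 * t**2) * math.exp(-0.5 * sigma**2 * t**2)

sigma = 0.5  # us^-1, illustrative width of the order of the measured one

# Limit 2*pi*f_mu -> 0: Eq. (1) reduces to the Kubo-Toyabe form.
assert abs(polarization(1.7, 1e-6, sigma) - gkt(1.7, sigma)) < 1e-6

# Limit 2*pi*f_mu >> sigma: Eq. (1) approaches a Gaussian-damped cosine.
t, f_mu = 1.234, 50.0
cosine = 1/3 + 2/3 * math.cos(2*math.pi*f_mu*t) * math.exp(-0.5*sigma**2*t**2)
assert abs(polarization(t, f_mu, sigma) - cosine) < 1e-2
```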
For YbNi$_{4}$P$_{2}$ a finite $\mu$SR frequency is clearly observed
below 150~mK. From the measured frequency value
$f_{\mu}=0.188(1)$~MHz at 20~mK one can determine the internal local
field at the muon site to $B_{local}=13.87$~G using $B_{local}=2\pi
f_{\mu}/\gamma_{\mu}$ with $\gamma_{\mu}=2\pi\times13.55$~kHz/G as
the muon gyromagnetic ratio. The local field $B_{local}$ as well as
the local static field width $\Delta
B_{local}=\sigma/\gamma_{\mu}\approx6$~G are very small for
conventional rare-earth magnets with large ordered moments. The
fractional width $\Delta B_{local}/B_{local}$ of the spontaneous
field distribution is $\sim0.4$ at low $T$ and remains constant as
$T\rightarrow T_{C}$, which is a reasonable value for a magnetically
ordered HF system; in CeRhIn$_{5}$, e.g., $\Delta
B_{local}/B_{local} = 0.5$ is observed~\cite{Schenk2002}. Thus, the
local field distribution is nearly uniform and homogeneous in the FM
regime. The spontaneous muon-spin precession and the Gaussian shape
of the internal field distribution below $T_{C}$ arise from a dense
system of weak magnetic moments with small, static magnetic
inhomogeneities. The presence of a finite $B_{local}\neq0$ proves
coherent magnetic order.
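The frequency-to-field conversion quoted above is a one-line arithmetic check using only numbers given in the text:

```python
f_mu = 0.188e6            # measured precession frequency at 20 mK, in Hz
gamma_over_2pi = 13.55e3  # muon gyromagnetic ratio / (2*pi), in Hz per gauss

B_local = f_mu / gamma_over_2pi  # B = 2*pi*f_mu / gamma_mu
print(f"B_local = {B_local:.2f} G")  # -> 13.87 G

# Fractional width of the field distribution, with Delta B ~ 6 G:
print(f"Delta B / B = {6.0 / B_local:.2f}")  # -> 0.43, i.e. ~0.4
```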
ZF-$\mu$SR allows a precise determination of the $T$ dependence of
the magnetic order parameter, which is proportional to the measured
$\mu$SR frequency $f_{\mu}$. The $T$ dependence of $f_{\mu}$ and
$\sigma$ is shown in Figs.~1(b) and 1(c). For $T\leq140$~mK, both
observables exhibit a continuous increase. The $T$ dependence of
$f_{\mu}$ can be fit to the phenomenological function
$f_{\mu}=f_{\mu}(0)\cdot(1-\frac{T}{T_{C}})^{n}$ for $T<T_{C}$ with
$n=0.208\pm0.02$, $f_{\mu}(0)=0.199(3)$~MHz, and $T_{C}=140(2)$~mK.
The value of the effective critical exponent $n$, describing the
critical behavior close to $T_{C}$, is between $n=0.125$ and 0.325,
which are theoretically expected for two-dimensional (2D) and
isotropic three-dimensional (3D) Ising magnets, respectively. This
is not in contradiction with the claim of a quasi-1D system. In such
a system, the weak inter-chain coupling results in an evolution from
a 1D behavior at high $T$ to a 2D Ising or 3D behavior at low $T$,
which is intimately linked with (and is a prerequisite for) the
long-range ordering at finite $T$. The low data point density
between $0.6\leq\frac{T}{T_{C}}\leq1$, however, precludes the
determination of the precise critical exponent. The obtained value
for $T_{C}$ agrees well with the value found in specific heat
measurements on these single crystals~\cite{Steppke2011}.
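Using the fit parameters quoted above, the order-parameter curve can be evaluated directly; the helper below is ours, not from the analysis code:

```python
def f_mu_fit(T, f0=0.199, Tc=0.140, n=0.208):
    """Phenomenological order parameter f_mu(0)*(1 - T/Tc)^n (MHz; T in K)."""
    return f0 * (1 - T / Tc) ** n if T < Tc else 0.0

# Fit evaluated at base temperature vs. the measured 20 mK frequency:
print(round(f_mu_fit(0.020), 3))  # -> 0.193 MHz, close to the measured 0.188 MHz

# The exponent sits between the 2D Ising (0.125) and 3D Ising (0.325) values:
assert 0.125 < 0.208 < 0.325
```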
\begin{figure}
\begin{center}
\includegraphics[width=0.90\columnwidth]{LambdaL4.eps}
\end{center}
\caption[1]{(Color online) Main panel: Field dependence of the
dynamic muon spin relaxation rate $\lambda_{L}$. The solid curve
represents a Redfield fit. For display reasons, the ZF value is set
at $B_{LF}=0.01$~G. Inset: Field dependence of the corrected muon
spin polarization $P(t)$ at $T=190$~mK.}
\end{figure}
For all examined $T\leq T_{C}$, the sample signal is analyzed with a
well-defined single $f_{\mu}$ and $\sigma$, signaling that the
magnetic order is a bulk effect and that only one dominant muon
stopping site is present. In general, for the determination of the
muon stopping site(s) it is important to deduce the hyperfine
coupling constant. One way to find potential muon sites is to
compare calculated and measured quantities for the local field
$B_{local}$ at the muon site. The muon preferentially settles at
tetrahedral or octahedral interstitial crystallographic sites. From
simple symmetry arguments the most probable muon stopping sites,
using Wyckoff's notation, are 4$f$(1/4,1/4,0), 8$j$(1/4,1/4,1/4),
4$f$(1/4,1/4,1/2), 8$i$(1/4,1/2,1/2), 4$c$(1/2,0,0),
4$c$(1/2,0,1/2), 2$b$(0,0,1/2), and 2$a$(1/2,1/2,1/2). For a
particular FM structure with the magnetic Yb$^{3+}$ moments aligned
within the $ab$~plane and a dominant 4$f$-$\mu$ dipolar
interaction, one can determine the expected internal field values
for the proposed sites. Our lattice sum calculations reveal that
only at the 4$c$(1/2,0,1/2) site and 8$i$(1/4,1/2,1/2) site a local
field $B_{local}$ of the measured absolute magnitude is found. For
the 4$c$ and 8$i$ sites the measured local field of
$B_{local}=13.87$~G corresponds to a static ordered moment of the Yb
ions of $m_{ord}=0.046\mu_{B}$ and 0.025$\mu_{B}$, respectively.
Both values are in good agreement with the value deduced from recent
magnetization measurements~\cite{Krellner2011}.
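A crude single-dipole estimate (not a substitute for the lattice sum) shows that the inferred moment and the measured field are mutually consistent in order of magnitude; the muon--Yb distance below is an assumed value, not taken from the structure refinement:

```python
mu0_over_4pi = 1e-7   # T m / A
mu_B = 9.274e-24      # Bohr magneton, J/T
m_ord = 0.046 * mu_B  # ordered moment from the 4c-site analysis, J/T
r = 3.0e-10           # assumed nearest Yb-muon distance, m

B_dipole = mu0_over_4pi * m_ord / r**3  # dipole field scale, tesla
print(f"{B_dipole * 1e4:.0f} G")  # -> ~16 G, same order as the 13.87 G measured
```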
\begin{figure}
\begin{center}
\includegraphics[width=0.90\columnwidth]{TimeFieldSCaling_3rd.eps}
\end{center}
\caption[1]{(Color online) Corrected muon decay asymmetry at
$T=190$~mK for various magnetic fields as function of the scaling
variable $t/B_{LF}^{0.81}$. The dashed-dotted line is a fit of the
13~G data with $P(t)/P(0)=\exp[-\lambda_{L}t]^{-0.9}$.}
\end{figure}
The temperature dependence of the exponential relaxation rate
$\lambda_{L}=\frac{1}{T_{1}}$, observed above $T_{C}$, is plotted in
Fig.~1(d) on a log-log scale as $\frac{1}{T_{1}T}$. Cooling down
from 800~mK, $\frac{1}{T_{1}T}$ exhibits power-law behavior
according to $\frac{1}{T_{1}T}\propto T^{-1.40(6)}$. At
$T\leq190$~mK, the power-law behavior in $\frac{1}{T_{1}T}$ persists
in the PM regime down to $T_{C}$, however, with a slight change of
the critical exponent, i.e., $\frac{1}{T_{1}T}\propto T^{-1.5(1)}$
(dashed-dotted line). The observed $\frac{1}{T_{1}T}$ behavior is
close to the $T^{-4/3}$ temperature dependence predicted by the
self-consistent renormalization (SCR) theory for a system close to a
3D ferromagnetic QCP~\cite{Moriya1995}. There is no prediction for
an itinerant quasi-1D system in the $T$ range between the exchange
energy scale and the ordering temperature. For an insulating
ferromagnetic quasi-1D spin chain the $T$ dependence of the
relaxation rate above $T_{C}$ depends strongly on the details of the
interactions -- see, e.g.,~\cite{Sato2011}.
LF-$\mu$SR experiments allow one to separate the dynamic contribution to
the relaxation of the muon-spin polarization. Investigations of the
low-$T$ muon-spin dynamics yield additional information about the
origin of the NFL behavior in YbNi$_{4}$P$_{2}$. Figure~2 displays
the muon-spin asymmetry function $P(t)$ at $T=20$~mK for different
applied LF's. The muon-spin relaxation is completely suppressed in
an applied field $B_{LF}\approx300$~G, demonstrating that the
internal field distribution is static in nature. However, the
observed decoupling cannot be described accurately by a standard
muon asymmetry function that considers an internal field
distribution which is symmetric around $B_{local}=0$. For
comparison, Fig.~2 shows theoretical depolarization curves for the
static GKT function in the corresponding longitudinal magnetic
fields. This supports the ZF data, i.e., the observation of a broad
field distribution centered around a finite but small internal field
$B_{local}$(20~mK) $\approx13.87$~G in the FM phase. Finally, when
$B_{LF}\gg B_{local}$, the muon spin relaxation is decoupled from
the static $B_{local}$ as observed for $B_{LF}\geq23$~G.
At $T>T_{C}$, the field dependence of the muon-spin relaxation
probes the Fourier transform of the dynamic spin-spin
autocorrelation function
$q(t)=\langle\textbf{S}_{i}(t)\cdot\textbf{S}_{i}(0)\rangle$, which
exhibits exponential behavior for homogeneous systems and power-law
(or cutoff power-law) or stretched exponential behavior for
inhomogeneous systems. The inset of Fig.~3 displays the muon-spin
polarization $P(t)$ at $T=190$~mK, both in magnetic LF between 13
and 143~G and ZF. The relaxation rate $\lambda_{L}$ is reduced with
increasing field. The field dependence of $\lambda_{L}$ is given in
the main panel of Fig.~3. It shows nearly no field dependence for
magnetic fields of less than $\sim13$~G, but varies more strongly,
as $B_{LF}^{-\kappa}$ with $\kappa\approx0.79(7)$, for higher fields.
From the field dependence of $\lambda_{L}$, the spin autocorrelation
time $\tau_{c}$ can be estimated using the Redfield formalism for
$\lambda_{L}(B_{LF})=(2\gamma_{\mu}^{2}\langle
B_{fluc}^{2}\rangle\tau_{c})/[1+(\gamma_{\mu}^{2}B_{LF}^{2}\tau_{c}^{2})]$
considering $\tau_{c}$ as independent of the applied field $B_{LF}$.
Here, $B_{fluc}(t)$ describes the time-varying local magnetic field
at the muon site due to fluctuations of neighboring Yb$^{3+}$
moments, with a local time averaged second moment
$\Delta^{2}=\gamma_{\mu}^{2}\langle B_{fluc}^{2}\rangle$ and a
single fluctuation time $\tau_{c}$. For $\hbar\omega\ll k_{B}T$
($\omega$ giving the spin fluctuation rate), the
fluctuation-dissipation theorem~\cite{Toll1956} relates $\tau_{c}$
to the imaginary component of the local $q$-independent $f$-electron
dynamic susceptibility, i.e.
$\tau_{c}(B)=(k_{B}T)[\chi^{\prime\prime}(\omega)/\omega]$. The fit
to the data (solid curve in the main panel of Fig.~3) yields
$\Delta^{2}\approx0.1$~(MHz)$^{2}$ and $\tau_{c}\approx6\times10^{-7}$~s,
the latter value nearly three orders of magnitude larger than the
one obtained for YbRh$_{2}$Si$_{2}$ at $T=20$~mK~\cite{Ishida2003},
suggesting very slow critical fluctuations.
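With the fitted values of $\Delta^{2}$ and $\tau_{c}$, the Redfield expression can be evaluated directly; carrying all rates in $\mu s^{-1}$ and fields in gauss keeps the units trivial:

```python
import math

gamma_mu = 2 * math.pi * 13.55e-3  # muon gyromagnetic ratio, us^-1 per gauss
Delta2 = 0.1                       # fitted second moment, (us^-1)^2
tau_c = 0.6                        # fitted autocorrelation time, us

def redfield(B_lf):
    """Redfield relaxation rate lambda_L(B_LF), in us^-1."""
    return 2 * Delta2 * tau_c / (1 + (gamma_mu * B_lf * tau_c) ** 2)

print(round(redfield(0), 3))    # zero-field plateau -> 0.12 us^-1
print(round(redfield(143), 4))  # -> 0.0022 us^-1, strongly suppressed
```

The zero-field value $2\Delta^{2}\tau_{c}=0.12~\mu s^{-1}$ is of the same order as the paramagnetic rate $\lambda(160$~mK$)\approx0.152~\mu s^{-1}$ quoted earlier, as it should be.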
The $\mu$SR time spectra in Fig.~3 are well described with a
stretched exponential relaxation function of the form
$P(t)=P(0)\exp[-(\lambda t)^{\beta}]$. An exponent of
$\beta\approx0.9$ shows that the relaxation rate is nearly uniform
throughout the sample, indicating that YbNi$_{4}$P$_{2}$ exhibits
quasihomogeneous spin fluctuations for $T\ll T_{K}$. The spin
dynamics is characterized by a narrow distribution of correlation
times ($\beta=1$ corresponds to one single correlation time). Thus,
disorder-driven theories, including Kondo
disorder~\cite{Miranda1996,Miranda} and the Griffith phase
scenario~\cite{Neto1998} as primary mechanisms for the observed NFL
behavior, can be ruled out. It further implies that the crystalline
disorder in YbNi$_{4}$P$_{2}$ is quite small, which is consistent
with a small residual resistivity ($\rho_{0}\sim2.4\mu\Omega~cm$)
and the stoichiometric occupation of the crystallographic lattice
sites revealed by the x-ray structure
refinement~\cite{Krellner2011}.
A sensitive test to identify power-law or stretched exponential
behavior of $q(t)$ is a time-field scaling analysis of the muon-spin
relaxation function. In both cases a specific time-field scaling can
be found, i.e., the muon-spin relaxation function $P(t,B_{LF})$
obeys the scaling relation $P(t,B_{LF})=P(t/B_{LF}^{\gamma})$. This
relation applies only in the asymptotic strong field limit, i.e., as
long as $2\pi
f_{\mu}=\gamma_{\mu}B_{LF}\gg\lambda_{L}$~\cite{Keren}. If
time-field scaling is obeyed, a plot of $P(t,B_{LF})$ versus
$t/B_{LF}^{\gamma}$ at $T>T_{C}$ will be universal for the correct
choice of $\gamma$, and distinguishes between power-law ($\gamma<1$)
and stretched exponential ($\gamma\geq1$) correlations. For small
$B_{LF}$, the field dependence is expected to be due to the change
of $f_{\mu}$ rather than an effect of field on $q(t)$. A breakdown
of time-field scaling would occur for high fields where $q(t)$ is
directly affected by the applied fields. Figure~4 shows the same
asymmetry data, as displayed in Fig.~3, as functions of the scaling
variable $t/B_{LF}^{\gamma}$. For $\gamma=0.81(5)$ the data scale
well over $\sim2.5$ orders of magnitude in $t/B_{LF}^{\gamma}$ and
for all fields between 13 and 143~G, except for 293~G. Here, at
large $t$, the data fall above the low-field scaling curve. Fields
$\mu_{B}B_{LF}\geq k_{B}T$ (with $k_{B}$ the Boltzmann constant) would
be expected to affect the spin dynamics. The scaling exponent
$\gamma=0.81(5)<1$ implies that within the $\mu$SR frequency range,
the spin-spin correlation function $q(t)$ is approximated by a power
law (or a cutoff power law) rather than a stretched exponential or
exponential~\cite{Keren}, consistent with the Redfield analysis. The
power-law is time-scale invariant and dynamical modulations should
therefore be observable in any time window. The obtained time-field
scaling of the relaxation data is a signature of slow homogeneous
spin dynamics. It strongly indicates that the critical slowing down
of spin fluctuations at the magnetic phase transition occurs
cooperatively throughout the sample. In stoichiometric, homogeneous
NFL systems such behavior may arise from the effect of disorder on
quantum critical fluctuations inherent to a QCP. This is suggested
for the NFL compound
YbRh$_{2}$Si$_{2}$~\cite{Ishida2003,MacLaughlin2004}.
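The origin of the scaling can be illustrated with a toy model: if the rate varies as $\lambda_{L}\propto B^{-\kappa}$ and $P(t)=\exp[-(\lambda_{L}t)^{\beta}]$, then $P$ depends on $t$ and $B$ only through $t/B^{\gamma}$ with $\gamma=\kappa$, for any $\beta$. The parameter values below are illustrative, not fit results:

```python
import math

def toy_polarization(t, B, c=0.5, kappa=0.81, beta=0.9):
    """Stretched-exponential relaxation with a field-dependent rate c*B^-kappa
    (illustrative parameters; not fitted values from the text)."""
    lam = c * B ** (-kappa)
    return math.exp(-((lam * t) ** beta))

gamma = 0.81  # scaling exponent: equals kappa in this toy model
x = 2.0       # fixed value of the scaling variable t / B^gamma
for B in (13.0, 46.0, 143.0):
    print(round(toy_polarization(x * B**gamma, B), 6))  # identical for all B
```

Points with equal $t/B^{\gamma}$ collapse onto a single value, which is exactly the behavior exhibited by the data in Fig.~4.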
In conclusion, ZF-$\mu$SR in the stoichiometric NFL compound
YbNi$_{4}$P$_{2}$ clearly proves static magnetic ordering of
strongly reduced Yb$^{3+}$ moments below $T_{C}=140$~mK. Above
$T_{C}$, the muon spin polarization $P(t)$ obeys the time-field
scaling relation $P(t)=P(t/B^{0.81(5)})$ for applied magnetic fields
$B$ between 13 and 143~G, indicating cooperative and critical spin
dynamics. Power-law behavior of the dynamic spin-spin
autocorrelation function is implied by the observation of
$\gamma<1$~\cite{Keren}. The LF-$\mu$SR results suggest that the NFL
behavior observed at $T>T_{C}$ is induced by quasi homogeneous
critical spin fluctuations.
We acknowledge with thanks the help of A. Amato and the PSI
accelerator crew as well as financial support by the German Science
Foundation (DFG) in the framework of the priority program 1458,
Grant No. KL1086/10-1.
\section{Introduction}\label{se:intro}
It is typical in Computer Science to classify problems according to the amount of resources that are needed to solve them. Hence, problems are usually classified according to the amount of time or to the amount of memory that a specific model of computation requires for their solution.
This epistemological need of classifying problems finds, in the Graph Drawing field, a very original interpretation. A Graph Drawing problem can be broadly described as follows: Given a graph of a certain family and a drawing convention (e.g.\ all edges should be straight-line segments), draw the graph optimizing some specific features. Among those features a fundamental one is the amount of geometric space that the drawing spans and a natural question is: Which is the amount of space that is required for drawing a planar graph, or a tree, or a bipartite graph? Hence, besides classifying problems according to the above classical coordinates, Graph Drawing classifies problems according to the amount of geometric space that a drawing that solves that problem requires.
Of course, such a space requirement can be influenced by the class of graphs (one can expect that the area required to draw an $n$-vertex tree is less than the one required to draw an $n$-vertex general planar graph) and by the drawing convention (straight-line drawings look more constrained than drawings where edges can be polygonal lines).
The attempt of classifying graph drawing problems with respect to the space required spurred, over the last fifty years, a large body of research. On one hand, techniques have been devised to compute geometric lower bounds that are completely original and do not find counterparts in the techniques adopted in Computer Science to find time or memory lower bounds. On the other hand, the uninterrupted upper bound hunting has produced several elegant algorithmic techniques.
In this paper we survey the state of the art on such algorithmic and lower bound techniques for several families of planar graphs. Indeed, drawing planar graphs without crossings is probably the most classical Graph Drawing topic and many researchers have made fundamental contributions to planar drawings of trees, outerplanar graphs, series-parallel graphs, etc.
We survey the state of the art focusing on the impact of the most popular drawing conventions on the geometric space requirements. In Section~\ref{se:straight-line} we discuss straight-line drawings. In Section~\ref{se:poly-line} we analyze drawings where edges can be polygonal lines. In Section~\ref{se:upward} we describe upward drawings, i.e.\ drawings of directed acyclic graphs where edges follow a common vertical direction. In Section~\ref{se:convex} we describe convex drawings, where the faces of a planar drawing are constrained to be convex polygons. Proximity drawings, where vertices and edges should enforce some proximity constraints, are discussed in Section~\ref{se:proximity}. Section~\ref{se:clustered} is devoted to drawings of clustered graphs.
We devote special attention to put in evidence those that we consider the main open problems of the~field.
\section{Preliminaries}\label{se:preliminaries}
In this section we present preliminaries and definitions. For more about graph drawing, see~\cite{dett-gd-99,kw-dgmm-01}.
\subsection*{Planar Drawings, Planar Embeddings, and Planar Graphs} \label{se:graphs-planarembeddings}
All the graphs that we consider are \emph{simple}, i.e., they contain no multiple edges and loops. A \emph{drawing} of a graph $G(V,E)$ is a mapping of each vertex of $V$ to a point in the plane and of each edge of $E$ to a simple curve connecting its endpoints. A drawing is \emph{planar} if no two edges intersect except, possibly, at common endpoints. A \emph{planar graph} is a graph admitting a planar drawing.
A planar drawing of a graph determines a circular ordering of the edges incident to each vertex. Two drawings of the same graph are \emph{equivalent} if they determine the same circular ordering around each vertex and a \emph{planar embedding} (sometimes also called {\em combinatorial embedding}) is an equivalence class of planar drawings. A graph is \emph{embedded} when an embedding of it has been decided. A planar drawing partitions the plane into topologically connected regions, called \emph{faces}. The unbounded face is the \emph{outer face}, while the bounded faces are the \emph{internal faces}. The outer face of a graph $G$ is denoted by $f(G)$. A graph together with a planar embedding and a choice for its outer face is a \emph{plane graph}. In a plane graph, \emph{external} and \emph{internal} vertices are defined as the vertices incident and not incident to the outer face, respectively. Sometimes, the distinction is made between \emph{planar embedding} and \emph{plane embedding}, where the former is an equivalence class of planar drawings and the latter is a planar embedding together with a choice for the outer face. The \emph{dual graph} of an embedded planar graph $G$ has a vertex for each face of $G$ and has an edge $(f,g)$ for each two faces $f$ and $g$ of $G$ sharing an edge.
\subsection*{Maximality and Connectivity} \label{se:graphs-connectivity}
A plane graph is \emph{maximal} (or equivalently is a \emph{triangulation}) when all its faces are delimited by \emph{$3$-cycles}, that is, by cycles of three vertices. A planar graph is \emph{maximal} when it can be embedded as a triangulation. Algorithms for drawing planar graphs usually assume to deal with maximal planar graphs. In fact, any planar graph can be augmented to a maximal planar graph by adding some ``dummy'' edges to the graph. Then the algorithm can draw the maximal planar graph and finally the inserted dummy edges can be removed obtaining a drawing of the input graph.
A graph is \emph{connected} if every pair of vertices is connected by a path. A graph with at least $k+1$ vertices is \emph{$k$-connected} if removing any (at most) $k-1$ vertices leaves the graph connected; $3$-connected, $2$-connected, and $1$-connected graphs are also called \emph{triconnected}, \emph{biconnected}, and \emph{connected} graphs, respectively. A \emph{separating cycle} is a cycle whose removal disconnects the graph.
\subsection*{Classes of Planar Graphs} \label{se:graphs-classes}
A \emph{tree} is a connected acyclic graph. A \emph{leaf} in a tree is a node of degree one. A \emph{caterpillar} $C$ is a tree such that the removal from $C$ of all the leaves and of their incident edges turns $C$ into a path, called the \emph{backbone} of the caterpillar.
A \emph{rooted tree} is a tree with one distinguished node called the \emph{root}. In a rooted tree, each node $v$ at distance (i.e., length of the shortest path) $d$ from the root is the \emph{child} of the unique node at distance $d-1$ from the root to which $v$ is connected. A \emph{binary tree} (a \emph{ternary tree}) is a rooted tree such that each node has at most two children (resp.\ three children). Binary and ternary trees can be supposed to be rooted at any node of degree at most two and three, respectively. The \emph{height} of a rooted tree is the maximum number of nodes in any path from the root to a leaf. Removing a non-leaf node $u$ from a tree disconnects the tree into connected components. Those containing children of $u$ are the \emph{subtrees} of $u$.
A \emph{complete tree} is a rooted tree such that each non-leaf node has the same number of children and such that each leaf has the same distance from the root. Complete trees of degree three and four are also called \emph{complete binary trees} and \emph{complete ternary trees}, respectively.
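With the height convention above (number of nodes on a root-to-leaf path), a complete tree with branching factor $d$ and height $h$ has exactly $(d^h-1)/(d-1)$ nodes; a two-line check (the helper name is ours):

```python
def complete_tree_size(d, h):
    """Nodes of a complete tree with branching factor d and height h,
    where the height counts the nodes on a root-to-leaf path."""
    return (d**h - 1) // (d - 1)

print(complete_tree_size(2, 4))  # complete binary tree of height 4 -> 15
print(complete_tree_size(3, 3))  # complete ternary tree of height 3 -> 13
```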
A rooted tree is \emph{ordered} if a clockwise order of the neighbors of each node (i.e., a planar embedding) is specified. In an ordered binary tree and in an ordered ternary tree, fixing a linear ordering of the children of the root makes it possible to define the \emph{left} and \emph{right child} of a node, and the \emph{left}, \emph{middle}, and \emph{right child} of a node, respectively. If the tree is ordered and binary (ternary), the subtrees rooted at the left and right child (at the left, middle, and right child) of a node $u$ are the \emph{left} and the \emph{right subtree} of $u$ (the \emph{left}, the \emph{middle}, and the \emph{right subtree} of $u$), respectively. Removing a path $P$ from a tree disconnects the tree into connected components. The ones containing children of nodes in $P$ are the \emph{subtrees} of $P$. If the tree is ordered and binary (ternary), then each component is a \emph{left} or \emph{right subtree} (a \emph{left}, \emph{middle}, or \emph{right subtree}) of $P$, depending on whether the root of such a subtree is a left or right child (is a left, middle, or right child) of a node in $P$, respectively.
An \emph{outerplane graph} is a plane graph such that all the vertices are incident to the outer face. An \emph{outerplanar embedding} is a planar embedding such that all the vertices are incident to the same face. An \emph{outerplanar graph} is a graph that admits an outerplanar embedding. A \emph{maximal outerplane graph} is an outerplane graph such that all its internal faces are delimited by cycles of three vertices. A \emph{maximal outerplanar embedding} is an outerplanar embedding such that all its faces, except for the one to which all the vertices are incident, are delimited by cycles of three vertices. A \emph{maximal outerplanar graph} is a graph that admits a maximal outerplanar embedding. Every outerplanar graph can be augmented to maximal by adding dummy edges to it.
Consider the dual graph of an outerplane graph $G$; if we do not take into account the vertex corresponding to the outer face of $G$ and its incident edges, then this dual graph is a tree. Hence, when dealing with outerplanar graphs, we talk about the \emph{dual tree} of an outerplanar graph (meaning the dual graph, restricted as above, of an outerplanar embedding of the outerplanar graph). The nodes of the dual tree of a maximal outerplane graph $G$ have degree at most three. Hence, the dual tree of $G$ can be rooted to be a binary tree.
\emph{Series-parallel graphs} are the graphs that can be inductively constructed as follows. An edge $(u,v)$ is a series-parallel graph with \emph{poles} $u$ and $v$. Denote by $u_i$ and $v_i$ the poles of a series-parallel graph $G_i$. Then, a \emph{series composition} of a sequence $G_1,G_2,\dots,G_k$ of series-parallel graphs, with $k\geq 2$, constructs a series-parallel graph that has poles $u=u_1$ and $v=v_k$, that contains graphs $G_i$ as subgraphs, and such that vertices $v_i$ and $u_{i+1}$ have been identified to be the same vertex, for each $i=1,2,\dots,k-1$. A \emph{parallel composition} of a set $G_1,G_2,\dots,G_k$ of series-parallel graphs, with $k\geq 2$, constructs a series-parallel graph that has poles $u=u_1=u_2=\cdots=u_k$ and $v=v_1=v_2=\cdots=v_k$, that contains graphs $G_i$ as subgraphs, and such that vertices $u_1,u_2,\cdots,u_k$ (vertices $v_1,v_2,\cdots,v_k$) have been identified to be the same vertex. A \emph{maximal series-parallel graph} is such that all its series compositions construct a graph out of exactly two smaller series-parallel graphs $G_1$ and $G_2$, and such that all its parallel compositions have a component which is the edge between the two poles. Every series-parallel graph can be augmented to maximal by adding dummy edges to it. The \emph{fan-out} of a series-parallel graph is the maximum number of components in a parallel composition.
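The inductive construction above is easy to make concrete. The following sketch (ours, in Python; the triple representation of a series-parallel graph as an edge list plus its two poles, and the helper names, are our own assumptions, and only binary compositions are shown) builds series-parallel graphs exactly as in the definition.

```python
import itertools

_ids = itertools.count()  # supply of fresh vertex labels


def sp_edge():
    """A single edge: the base case of the inductive definition."""
    u, v = next(_ids), next(_ids)
    return ([(u, v)], u, v)


def series(g1, g2):
    """Series composition: identify the second pole of g1 with the first pole of g2."""
    e1, u1, v1 = g1
    e2, u2, v2 = g2
    e2 = [(v1 if a == u2 else a, v1 if b == u2 else b) for (a, b) in e2]
    return (e1 + e2, u1, v2)


def parallel(g1, g2):
    """Parallel composition: identify the poles of g1 and g2 pairwise."""
    e1, u1, v1 = g1
    e2, u2, v2 = g2
    ren = {u2: u1, v2: v1}
    e2 = [(ren.get(a, a), ren.get(b, b)) for (a, b) in e2]
    return (e1 + e2, u1, v1)
```

For instance, the parallel composition of three series compositions of two edges produces a graph with five vertices and six edges (a subdivision of the multigraph with three parallel edges between the poles).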
A graph $G$ is \emph{bipartite} if its vertex set $V$ can be partitioned into two subsets $V_1$ and $V_2$ so that every edge of $G$ is incident to a vertex of $V_1$ and to a vertex of $V_2$. A \emph{bipartite planar graph} is both bipartite and planar. A \emph{maximal bipartite planar graph} admits a planar embedding in which all its faces have exactly four incident vertices. Every bipartite planar graph with at least four vertices can be augmented to maximal by adding dummy edges to it.
\subsection*{Drawing Standards}
A {\em straight-line drawing} is a drawing such that each edge is represented by a straight-line segment. A {\em poly-line drawing} is a drawing such that each edge is represented by a sequence of consecutive segments. The points in which two consecutive segments of the same edge touch are called \emph{bends}. A {\em grid drawing} is a drawing such that vertices and bends have integer coordinates. An {\em orthogonal drawing} is a poly-line drawing such that each edge is represented by a sequence of horizontal and vertical segments. A {\em convex drawing} (resp. {\em strictly-convex drawing}) is a planar drawing such that each face is delimited by a convex polygon (resp. strictly-convex polygon), that is, every interior angle of the drawing is at most $180^{\circ}$ (resp. less than $180^{\circ}$) and every exterior angle is at least $180^{\circ}$ (resp. more than $180^{\circ}$). An \emph{order-preserving drawing} is a drawing such that the order of the edges incident to each vertex respects an order fixed in advance. An \emph{upward drawing} (resp. \emph{strictly-upward drawing}) of a rooted tree is a drawing such that each edge is represented by a non-decreasing curve (resp. increasing curve). A \emph{visibility representation} is a drawing such that each vertex is represented by a horizontal segment $\sigma(u)$, each edge $(u,v)$ is represented by a vertical segment connecting a point of $\sigma(u)$ with a point of $\sigma(v)$, and no two segments cross, except if they represent a vertex and one of its incident edges.
\subsection*{Area of a Drawing}
The \emph{bounding box} of a drawing is the smallest rectangle with sides parallel to the axes that contains the drawing completely. The \emph{height} and \emph{width} of a drawing are the height and width of its bounding box. The \emph{area} of a drawing is the area of its bounding box. The \emph{aspect ratio} of a drawing is the ratio between the maximum and the minimum of the height and width of the drawing. Observe that the concept of area of a drawing only makes sense once a \emph{resolution rule} is fixed, i.e., a rule that does not allow vertices to be arbitrarily close (\emph{vertex resolution rule}), or edges to be arbitrarily short (\emph{edge resolution rule}). Without any of such rules, one could just construct drawings with arbitrarily small area. It is usually assumed in the literature that graph drawings in small area have to be constructed on a grid. In fact all the algorithms we will present in Sects.~\ref{se:straight-line},~\ref{se:poly-line},~\ref{se:upward},~\ref{se:convex}, and~\ref{se:clustered} assign integer coordinates to vertices. The assumption of constructing drawings on the grid is usually relaxed in the context of proximity drawings (hence in Sect.~\ref{se:proximity}), where in fact it is assumed that no two vertices have distance less than one unit.
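These quantities are immediate to compute from the vertex (and bend) coordinates of a drawing. A minimal sketch (ours, in Python) makes the definitions concrete; note that some papers measure a $w\times h$ grid by its numbers of grid lines rather than by its side lengths, a convention this sketch does not attempt to capture.

```python
def bounding_box(points):
    """Smallest axis-parallel rectangle containing all points of a drawing."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))


def area(points):
    """Area of the bounding box of the drawing."""
    x0, y0, x1, y1 = bounding_box(points)
    return (x1 - x0) * (y1 - y0)


def aspect_ratio(points):
    """Ratio between max and min of height and width (undefined for degenerate boxes)."""
    x0, y0, x1, y1 = bounding_box(points)
    w, h = x1 - x0, y1 - y0
    return max(w, h) / min(w, h)
```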
\subsection*{Directed Graphs and Planar Upward Drawings}
A \emph{directed acyclic graph} (\emph{DAG} for short) is a graph whose edges are oriented and that contains no directed cycle, that is, no cycle $(v_1,\dots,v_n)$ such that edge $(v_i,v_{i+1})$ is directed from $v_i$ to $v_{i+1}$, for $i=1,\dots,n-1$, and edge $(v_n,v_1)$ is directed from $v_n$ to $v_1$. The \emph{underlying graph} of a DAG $G$ is the undirected graph obtained from $G$ by removing the directions on its edges. An \emph{upward drawing} of a DAG is such that each edge is represented by an increasing curve. An \emph{upward planar drawing} is a drawing which is both upward and planar. An \emph{upward planar DAG} is a DAG that admits an upward planar drawing. In a directed graph, the \emph{outdegree} of a vertex is the number of edges leaving the vertex and the \emph{indegree} of a vertex is the number of edges entering the vertex. A \emph{source} (resp. \emph{sink}) is a vertex with indegree zero (resp. with outdegree zero). An \emph{st-planar DAG} is a DAG with exactly one source $s$ and one sink $t$ that admits an upward planar embedding in which $s$ and $t$ are on the outer face. \emph{Bipartite DAGs} and \emph{directed trees} are DAGs whose underlying graphs are bipartite graphs and trees, respectively. A \emph{series-parallel DAG} is a DAG that can be inductively constructed as follows. An edge $(u,v)$ directed from $u$ to $v$ is a series-parallel DAG with \emph{starting pole} $u$ and \emph{ending pole} $v$. Denote by $u_i$ and $v_i$ the starting and ending poles of a series-parallel DAG $G_i$, respectively. Then, a \emph{series composition} of a sequence $G_1,G_2,\dots,G_k$ of series-parallel DAGs, with $k\geq 2$, constructs a series-parallel DAG that has starting pole $u=u_1$, that has ending pole $v=v_k$, that contains DAGs $G_i$ as subgraphs, and such that vertices $v_i$ and $u_{i+1}$ have been identified to be the same vertex, for each $i=1,2,\dots,k-1$. 
A \emph{parallel composition} of a set $G_1,G_2,\dots,G_k$ of series-parallel DAGs, with $k\geq 2$, constructs a series-parallel DAG that has starting pole $u=u_1=u_2=\cdots=u_k$, that has ending pole $v=v_1=v_2=\cdots=v_k$, that contains DAGs $G_i$ as subgraphs, and such that vertices $u_1,u_2,\cdots,u_k$ (vertices $v_1,v_2,\cdots,v_k$) have been identified to be the same vertex. We remark that series-parallel DAGs are a subclass of the upward planar DAGs whose underlying graph is a series-parallel graph.
\subsection*{Proximity Drawings}
A \emph{Delaunay drawing} of a graph $G$ is a straight-line drawing such that no three vertices are on the same line, no four vertices are on the same circle, and three vertices $u$, $v$, and $z$ form a $3$-cycle $(u,v,z)$ in $G$ if and only if the circle passing through $u$, $v$, and $z$ in the drawing contains no vertex other than $u$, $v$, and $z$. A \emph{Delaunay triangulation} is a graph that admits a Delaunay drawing.
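For a small point set in general position, the empty-circumcircle condition can be checked directly by brute force. The following sketch (ours, in Python; exhaustive over triples and thus only for illustration, and assuming no three collinear and no four cocircular points) enumerates the edges of the triangles with empty circumcircles.

```python
from itertools import combinations


def circumcircle(a, b, c):
    """Center and squared radius of the circle through three non-collinear points."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), (ax - ux) ** 2 + (ay - uy) ** 2


def delaunay_edges(points):
    """Edges of the Delaunay triangulation of points in general position."""
    edges = set()
    for i, j, k in combinations(range(len(points)), 3):
        (ux, uy), r2 = circumcircle(points[i], points[j], points[k])
        # keep the triangle iff no other vertex lies inside its circumcircle
        if all((px - ux) ** 2 + (py - uy) ** 2 > r2
               for m, (px, py) in enumerate(points) if m not in (i, j, k)):
            edges |= {(i, j), (j, k), (i, k)}
    return sorted(edges)
```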
The \emph{Gabriel region} of two vertices $x$ and $y$ is the disk having segment $\overline{xy}$ as diameter. A \emph{Gabriel drawing} of a graph $G$ is a straight-line drawing of $G$ having the property that two vertices $x$ and $y$ of the drawing are connected by an edge if and only if the Gabriel region of $x$ and $y$ does not contain any other vertex. A \emph{Gabriel graph} is a graph admitting a Gabriel drawing.
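The Gabriel condition is a simple per-pair test. A minimal sketch (ours, in Python; it treats a vertex on the boundary of the diametral disk as blocking, one of the two conventions in use):

```python
def gabriel_edges(points):
    """Pairs of vertices whose diametral (Gabriel) disk contains no other vertex."""
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # center of the Gabriel region
            r2 = ((x1 - x2) ** 2 + (y1 - y2) ** 2) / 4.0   # squared radius
            if all((px - cx) ** 2 + (py - cy) ** 2 > r2
                   for k, (px, py) in enumerate(points) if k not in (i, j)):
                edges.append((i, j))
    return edges
```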
A \emph{relative neighborhood drawing} of a graph $G$ is a straight-line drawing such that two vertices $x$ and $y$ are adjacent if and only if there is no vertex whose distance to both $x$ and $y$ is less than the distance between $x$ and $y$. A \emph{relative neighborhood graph} is a graph admitting a relative neighborhood drawing.
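The relative neighborhood condition translates into an equally short test (a sketch of ours, in Python; it uses the non-strict convention that a vertex at distance exactly $d(x,y)$ from both does not block the edge). On any point set, the resulting edge set is a subset of the Gabriel edges.

```python
import math


def rng_edges(points):
    """Pairs x, y with no third vertex closer to both than they are to each other."""
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dij = math.dist(points[i], points[j])
            # a vertex k blocks (i, j) iff it is strictly closer to both endpoints
            if all(max(math.dist(points[k], points[i]),
                       math.dist(points[k], points[j])) >= dij
                   for k in range(n) if k not in (i, j)):
                edges.append((i, j))
    return edges
```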
A \emph{nearest neighbor drawing} of a graph $G$ is a straight-line drawing of $G$ such that each vertex has a unique closest vertex and such that two vertices $x$ and $y$ of the drawing are connected by an edge if and only if $x$ is the vertex of $G$ closest to $y$ or vice versa. A \emph{nearest neighbor graph} is a graph admitting a nearest neighbor drawing.
A \emph{$\beta$-drawing} of a graph $G$ is a straight-line drawing of $G$ having the property that two vertices $x$ and $y$ of the drawing are connected by an edge if and only if the $\beta$-region of $x$ and $y$ does not contain any other vertex. The \emph{$\beta$-region} of $x$ and $y$ is the line segment $\overline{xy}$ if $\beta=0$, it is the intersection of the two closed disks of radius $d(x,y)/(2\beta)$ passing through both $x$ and $y$ if $0<\beta <1$, it is the intersection of the two closed disks of radius $\beta\, d(x,y)/2$ that are centered on the line through $x$ and $y$ and that respectively pass through $x$ and through $y$ if $1\leq \beta <\infty$, and it is the closed infinite strip perpendicular to the line segment $\overline{xy}$ if $\beta =\infty$.
\emph{Weak proximity drawings} are such that there is no geometric requirement on the pairs of vertices not connected by an edge. For example, a \emph{weak Gabriel drawing} of a graph $G$ is a straight-line drawing of $G$ having the property that if two vertices $x$ and $y$ of the drawing are connected by an edge then the Gabriel region of $x$ and $y$ does not contain any other vertex, while there might exist two vertices whose Gabriel region is empty and that are not connected by an edge.
A \emph{Euclidean minimum spanning tree} $T$ of a set $P$ of points is a tree spanning the points in $P$ (that is, the nodes of $T$ coincide with the points of $P$ and no ``Steiner points'' are allowed) and having minimum total edge length.
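Since no Steiner points are allowed, a Euclidean minimum spanning tree can be computed by any classical MST algorithm on the complete graph with Euclidean edge weights. A sketch of ours, in Python, using Prim's algorithm (quadratic time; assuming the points are pairwise distinct so that the tree is unique up to ties):

```python
import math


def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph; returns the tree edges."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest known connection cost to the current tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            d = math.dist(points[u], points[v])
            if not in_tree[v] and d < best[v]:
                best[v] = d
                parent[v] = u
    return edges
```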
A \emph{greedy drawing} of a graph $G$ is a straight-line drawing of $G$ such that, for every pair of nodes $u$ and $v$, there exists a \emph{distance-decreasing path} from $u$ to $v$, where a path $(v_0,v_1,\ldots,v_m)$ is distance-decreasing if $d(v_{i},v_{m})<d(v_{i-1},v_{m})$, for $i=1,\ldots,m$, where $d(p,q)$ denotes the Euclidean distance between two points $p$ and $q$.
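Whether a given straight-line drawing is greedy can be verified directly: since every step of a distance-decreasing path strictly decreases the distance to the target, it suffices to search, for each ordered pair $(s,t)$, over the moves that get strictly closer to $t$. The following checker is a sketch of ours, in Python (the function name and the adjacency-list representation are our own assumptions).

```python
import math


def is_greedy_drawing(points, adj):
    """True iff every ordered pair (s, t) admits a distance-decreasing path,
    found by exploring only moves that strictly decrease the distance to t."""
    n = len(points)
    for s in range(n):
        for t in range(n):
            if s == t:
                continue
            stack, seen = [s], {s}
            found = False
            while stack:
                u = stack.pop()
                if u == t:
                    found = True
                    break
                du = math.dist(points[u], points[t])
                for v in adj[u]:
                    if v not in seen and math.dist(points[v], points[t]) < du:
                        seen.add(v)
                        stack.append(v)
            if not found:
                return False
    return True
```

For example, a star drawn with its center between the two leaves is greedy, while the same star with one leaf placed between the center and the other leaf is not, since from the middle leaf no neighbor gets closer to the far leaf.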
For more about proximity drawings, see Chapter 7 in~\cite{gd-handbook}.
\subsection*{Clustered Graphs and $c$-Planar Drawings}
A \emph{clustered graph} is a pair $C(G,T)$, where $G$ is a graph, called \emph{underlying graph}, and $T$ is a rooted tree, called \emph{inclusion tree}, such that the leaves of $T$ are the vertices of $G$. Each internal node $\nu$ of $T$ corresponds to the subset of vertices of $G$, called \emph{cluster}, that are the leaves of the subtree of $T$ rooted at $\nu$. A clustered graph $C(G,T)$ is \emph{$c$-connected} if each cluster induces a connected subgraph of $G$, it is \emph{non-$c$-connected} otherwise.
A \emph{drawing} $\Gamma$ of a clustered graph $C(G,T)$ consists of a drawing of $G$ (each vertex is a point in the plane and each edge is a Jordan curve between its endvertices) and of a representation of each node $\mu$ of $T$ as a simple closed region containing all and only the vertices that belong to $\mu$. A drawing is \emph{$c$-planar} if it has no edge crossings (i.e., the drawing of the underlying graph is planar), no edge-region crossings (i.e., an edge intersects the boundary of a cluster at most once), and no region-region crossings (i.e., no two cluster boundaries cross).
A \emph{$c$-planar embedding} is an equivalence class of $c$-planar drawings of $C$, where two $c$-planar drawings are equivalent if they have the same order of the edges incident to each vertex and the same order of the edges incident to each cluster.
\section{Straight-line Drawings}\label{se:straight-line}
In this section, we discuss algorithms and bounds for constructing small-area planar straight-line drawings
of planar graphs and their subclasses.
In Sect.~\ref{se:straight-planar} we deal with general planar graphs,
in Sect.~\ref{se:straight-bipartite} we deal with $4$-connected and bipartite graphs,
in Sect.~\ref{se:straight-series-parallel} we deal with series-parallel graphs,
in Sect.~\ref{se:straight-outerplanar} we deal with outerplanar graphs,
and in Sect.~\ref{se:straight-trees} we deal with trees.
Table~\ref{ta:straight-line} summarizes the best known area bounds for straight-line planar drawings of planar graphs and their subclasses. Observe that the lower bounds of the table that refer to general planar graphs, $4$-connected planar graphs, and bipartite planar graphs hold true for {\em plane} graphs.
\begin{table}[!htb]\footnotesize
\centering
\linespread{1.2}
\selectfont
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \emph{Upper Bound} & \emph{Refs.} & \emph{Lower Bound} & \emph{Refs.} \\
\hline
{\em General Planar Graphs} & $\frac{8n^2}{9}+O(n)$ & \cite{fpp-hdpgg-90,s-epgg-90,b-dpg89a-08} & $\frac{4n^2}{9}-O(n)$ & \cite{Val81,fpp-hdpgg-90,FratiP07,mnra-madp3t-10}\\
\hline
{\em $4$-Connected Planar Graphs} & $\lfloor\frac{n}{2}\rfloor \times (\lceil\frac{n}{2}\rceil-1)$ & \cite{mnn-gd4pg-01} & $\lfloor\frac{n}{2}\rfloor \times (\lceil\frac{n}{2}\rceil-1)$ & \cite{mnn-gd4pg-01}\\
\hline
{\em Bipartite Planar Graphs} & $\lfloor\frac{n}{2}\rfloor \times (\lceil\frac{n}{2}\rceil-1)$ & \cite{bb-dpbgsa-05} & $\lfloor\frac{n}{2}\rfloor \times (\lceil\frac{n}{2}\rceil-1)$ & \cite{bb-dpbgsa-05}\\
\hline
{\em Series-Parallel Graphs} & $O(n^2)$ & \cite{fpp-hdpgg-90,s-epgg-90,zhn-sgdpgbb-10} & $\Omega(n 2^{\sqrt{\log n}})$ & \cite{f-lbarspg-j10}\\
\hline
{\em Outerplanar Graphs} & $O(n^{1.48})$ & \cite{df-sadog-j09} & $\Omega(n)$ & \emph{trivial}\\
\hline
{\em Trees} & $O(n \log n)$ & \cite{cdp-noad-92} & $\Omega(n)$ & \emph{trivial}\\
\hline
\end{tabular}
\vspace{2mm}
\caption{\small A table summarizing the area requirements for straight-line planar drawings of several classes of planar graphs. Notice that $4$-connected planar graphs have been studied only with the additional constraint of having at least four vertices on the outer face.}
\label{ta:straight-line}
\end{table}
\subsection{General Planar Graphs} \label{se:straight-planar}
In this section, we discuss algorithms and bounds for constructing small-area planar straight-line drawings of general planar graphs. Observe that, in order to derive bounds on the area requirements of general planar graphs, it suffices to restrict the attention to maximal planar graphs, as every planar graph can be augmented to maximal by the insertion of ``dummy'' edges. Moreover, such an augmentation can be performed in linear time~\cite{r-nmdg-87}.
We start by proving that every plane graph admits a planar straight-line drawing~\cite{w-bzv-36,s-cm-51}. The simplest and most elegant proof of such a statement is, in our opinion, the one presented by F\'ary in 1948~\cite{f-srpg-48}.
F\'ary's algorithm works by induction on the number $n$ of vertices of the plane graph $G$; namely, the algorithm inductively assumes that a straight-line planar drawing of $G$ can be constructed with the further constraint that the outer face $f(G)$ is drawn as an arbitrary triangle $\Delta$. The inductive hypothesis is trivially satisfied when $n=3$. If $n>3$, then two cases are possible. In the first case $G$ contains a separating $3$-cycle $c$. Then let $G_1$ (resp. $G_2$) be the graph obtained from $G$ by removing all the vertices internal to $c$ (resp. external to $c$). Both $G_1$ and $G_2$ have less than $n$ vertices, hence the inductive hypothesis applies first to construct a straight-line planar drawing $\Gamma_1$ of $G_1$ in which $f(G_1)$ is drawn as an arbitrary triangle $\Delta$, and second to construct a straight-line planar drawing $\Gamma_2$ of $G_2$ in which $f(G_2)$ is drawn as $\Delta(c)$, where $\Delta(c)$ is the triangle representing $c$ in $\Gamma_1$ (see Fig.~\ref{fig:fary1}(a)). Thus, a straight-line drawing $\Gamma$ of $G$ in which $f(G)$ is represented by $\Delta$ is obtained.
\begin{figure}[htb]
\centering
\begin{tabular}{c c c}
\mbox{\epsfig{figure=Fary2.eps,scale=0.2,clip=}} \hspace{5mm} &
\mbox{\epsfig{figure=Fary3.eps,scale=0.27,clip=}} \hspace{5mm} &
\mbox{\epsfig{figure=Fary4.eps,scale=0.27,clip=}} \\
(a) \hspace{5mm} & (b) \hspace{5mm} & (c)\\
\end{tabular}
\caption{(a) Induction in F\'ary's algorithm if $G$ contains a separating $3$-cycle. (b)--(c) Induction in F\'ary's algorithm if $G$ contains no separating $3$-cycle.}
\label{fig:fary1}
\end{figure}
In the second case, $G$ does not contain any separating $3$-cycle, i.e. $G$ is $4$-connected. Then, consider any internal vertex $u$ of $G$ and consider any neighbor $v$ of $u$. Construct an $(n-1)$-vertex plane graph $G'$ by removing $u$ and all its incident edges from $G$, and by inserting ``dummy'' edges between $v$ and all the neighbors of $u$ in $G$, except for the two vertices $v_1$ and $v_2$ forming faces with $u$ and $v$. The graph $G'$ is simple, as $G$ contains no separating $3$-cycle. Hence, the inductive hypothesis applies to construct a straight-line planar drawing $\Gamma'$ of $G'$ in which $f(G')$ is drawn as $\Delta$. Further, dummy edges can be removed and vertex $u$ can be introduced in $\Gamma'$ together with its incident edges, without altering the planarity of $\Gamma'$. In fact, $u$ can be placed at a suitable point in the interior of a small disk centered at $v$, thus obtaining a straight-line drawing $\Gamma$ of $G$ in which $f(G)$ is represented by $\Delta$ (see Figs.~\ref{fig:fary1}(b)--(c)).
The first algorithms for constructing planar straight-line grid drawings of planar graphs in polynomial area were presented (fifty years later than F\'ary's algorithm!) by de Fraysseix, Pach, and Pollack~\cite{fpp-sssfepg-88,fpp-hdpgg-90} and, simultaneously and independently, by Schnyder~\cite{s-epgg-90}. The approaches of the two algorithms, that we sketch below, are still today the base of every known algorithm to construct planar straight-line grid drawings of triangulations.
The algorithm by de Fraysseix \emph{et al.}~\cite{fpp-sssfepg-88,fpp-hdpgg-90} relies on two main ideas.
First, any $n$-vertex maximal plane graph $G$ admits a total ordering $\sigma$ of its vertices, called \emph{canonical ordering}, such that (see Fig.~\ref{fig:canonical1}(a)): (i) the subgraph $G_k$ of $G$ induced by the first $k$ vertices in $\sigma$ is biconnected, for each $k=3,\dots,n$; and (ii) the $k$-th vertex in $\sigma$ lies in the outer face of $G_{k-1}$, for each $k=4,\dots,n$.
Second, a straight-line drawing of an $n$-vertex maximal plane graph $G$ can be constructed starting from a drawing of the $3$-cycle induced by the first three vertices in a canonical ordering $\sigma$ of $G$ and incrementally adding vertices to the partially constructed drawing in the order defined by $\sigma$.
To construct the drawing of $G$ one vertex at a time, the algorithm maintains the invariant that the outer face of $G_k$ is delimited by a polygon composed of a sequence of segments having slopes equal to either $45\degree$ or $-45\degree$. When the next vertex $v_{k+1}$ in $\sigma$ is added to the drawing of $G_k$ to construct a drawing of $G_{k+1}$, a subset of the vertices of $G_k$ undergoes a horizontal shift that allows for $v_{k+1}$ to be introduced in the drawing still maintaining the invariant that the outer face of $G_{k+1}$ is delimited by a polygon composed of a sequence of segments having slopes equal to either $45\degree$ or $-45\degree$ (see Fig.~\ref{fig:canonical1}(b)--(c)).
The area of the constructed drawings is $(2n-4) \times (n-2)$. The described algorithm has been proposed by de Fraysseix~\emph{et~al.} together with an $O(n \log n)$-time implementation. The authors conjectured that its complexity could be improved to $O(n)$. This bound was in fact achieved a few years later by Chrobak and Payne in~\cite{chrobak95lineartime}.
\begin{figure}[htb]
\centering
\begin{tabular}{c c c}
\mbox{\epsfig{figure=deFraysseix1.eps,scale=0.28,clip=}} \hspace{3mm} &
\mbox{\epsfig{figure=deFraysseix2.eps,scale=0.35,clip=}} \hspace{3mm} &
\mbox{\epsfig{figure=deFraysseix3.eps,scale=0.35,clip=}} \\
(a) \hspace{3mm} & (b) \hspace{3mm} & (c)\\
\end{tabular}
\caption{(a) A canonical ordering of a maximal plane graph $G$. (b) The drawing of $G_k$ constructed by the algorithm of de Fraysseix~\emph{et~al.} (c) The drawing of $G_{k+1}$ constructed by the algorithm of de Fraysseix~\emph{et~al.}}
\label{fig:canonical1}
\end{figure}
The ideas behind the algorithm by Schnyder~\cite{s-epgg-90} are totally different from the ones of de Fraysseix~\emph{et~al.} In fact, Schnyder's algorithm constructs the drawing by determining the coordinates of all the vertices in one shot. The algorithm relies on results concerning planar graph embeddings that are indeed less intuitive than the canonical ordering of a plane graph used by de Fraysseix~\emph{et~al.}
First, Schnyder introduces the concept of \emph{barycentric representation} of a graph $G$ as an injective function $v\in V(G)\rightarrow (x(v),y(v),z(v))$ such that $x(v)+y(v)+z(v)=1$, for all vertices $v\in V(G)$, and such that, for each edge $(u,v)\in E(G)$ and each vertex $w\notin \{u,v\}$, $x(u)<x(w)$ and $x(v)<x(w)$ hold, or $y(u)<y(w)$ and $y(v)<y(w)$ hold, or $z(u)<z(w)$ and $z(v)<z(w)$ hold. Schnyder proves that, given any graph $G$, given any barycentric representation $v\rightarrow (x(v),y(v),z(v))$ of $G$, and given any three non-collinear points $\alpha$, $\beta$, and $\gamma$ in the three-dimensional space, the mapping $f:v\in V(G)\rightarrow x(v) \alpha + y(v) \beta + z(v) \gamma$ is a straight-line planar embedding of $G$ in the plane spanned by $\alpha$, $\beta$, and $\gamma$.
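The placement step in this statement is a plain affine combination of the three anchor points. A minimal sketch (ours, in Python, taking the anchors in the plane itself; the function name is our own):

```python
def place(bary, alpha, beta, gamma):
    """Map barycentric coordinates (x, y, z), with x + y + z = 1,
    to the point x*alpha + y*beta + z*gamma in the plane."""
    x, y, z = bary
    return (x * alpha[0] + y * beta[0] + z * gamma[0],
            x * alpha[1] + y * beta[1] + z * gamma[1])
```

For instance, a vertex with coordinates $(1,0,0)$ is placed exactly on $\alpha$, and coordinates with all entries positive land strictly inside the triangle $\alpha\beta\gamma$.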
Second, Schnyder introduces the concept of a \emph{realizer} of $G$ as an orientation and a partition of the interior edges of a plane graph $G$ into three sets $T_1$, $T_2$, and $T_3$, such that: (i) the set of edges in $T_i$, for each $i=1,2,3$, is a tree spanning all the internal vertices of $G$ and exactly one external vertex; (ii) all the edges of $T_i$ are directed towards this external vertex, which is the root of $T_i$; (iii) the external vertices belonging to $T_1$, to $T_2$, and to $T_3$ are distinct and appear in counter-clockwise order on the border of the outer face of $G$; and (iv) the counter-clockwise order of the edges incident to each internal vertex $v$ is: leaving $T_1$, entering $T_3$, leaving $T_2$, entering $T_1$, leaving $T_3$, and entering~$T_2$. Fig.~\ref{fig:schnyder}(a) illustrates a realizer for a plane graph~$G$. Trees $T_1$, $T_2$, and $T_3$ are sometimes called \emph{Schnyder woods}.
Third, Schnyder describes how to get a barycentric representation of a plane graph $G$ starting from a realizer of $G$; this is essentially done by looking, for each vertex $v\in V(G)$, at the paths $P_i(v)$, which are the unique paths composed entirely of edges of $T_i$ connecting $v$ to the root of $T_i$ (see Fig.~\ref{fig:schnyder}(b)), and by counting the number of faces or the number of vertices in the regions $R_1(v)$, $R_2(v)$, and $R_3(v)$ that are defined by $P_1(v)$, $P_2(v)$, and $P_3(v)$. The area of the constructed drawings is $(n-2)\times (n-2)$.
\begin{figure}[htb]
\centering
\begin{tabular}{c c}
\mbox{\epsfig{figure=Schnyder1.eps,scale=0.25,clip=}} \hspace{5mm} &
\mbox{\epsfig{figure=Schnyder2.eps,scale=0.25,clip=}} \\
(a) \hspace{5mm} & (b) \\
\end{tabular}
\caption{(a) A realizer for a plane graph $G$. (b) Paths $P_1(v)$, $P_2(v)$, and $P_3(v)$ (represented by green, red, and blue edges, respectively) and regions $R_1(v)$, $R_2(v)$, and $R_3(v)$ (delimited by $P_1(v)$, $P_2(v)$, and $P_3(v)$, and by the edges incident to the outer face of $G$).}
\label{fig:schnyder}
\end{figure}
Schnyder's upper bound remained unbeaten for almost twenty years. Only recently did Brandenburg~\cite{b-dpg89a-08} propose an algorithm for constructing planar straight-line drawings of triangulations in $\frac{8n^2}{9} + O(n)$ area. Such an algorithm is based on a geometric refinement of the de Fraysseix \emph{et al.}~\cite{fpp-sssfepg-88,fpp-hdpgg-90} algorithm combined with some topological properties of planar triangulations due to Bonichon \emph{et al.}~\cite{bsm-wtr-02}, which will be discussed in Sect.~\ref{se:poly-line}.
A quadratic area upper bound for straight-line planar drawings of plane graphs is asymptotically optimal. In fact, almost ten years before the publication of such algorithms, Valiant observed in~\cite{Val81} that there exist $n$-vertex plane graphs (see Fig.~\ref{fig:lowerboundnested}(a)) requiring $\Omega(n^2)$ area in any straight-line planar drawing (in fact, in every poly-line planar drawing). It was then proved by de Fraysseix \emph{et al.}~in~\cite{fpp-hdpgg-90} that \emph{nested triangles graphs} (see Fig.~\ref{fig:lowerboundnested}(b)) require $\left(\frac{2n}{3}-1\right) \times\left(\frac{2n}{3}-1\right)$ area in any straight-line planar drawing (in fact, in every poly-line planar drawing). Such a lower bound was only recently improved to $\frac{4n^2}{9}-\frac{2n}{3}$ by Frati and Patrignani~\cite{FratiP07}, for all $n$ multiple of $3$ (see Fig.~\ref{fig:lowerboundnested}(c)), and then by Mondal \emph{et al.}~\cite{mnra-madp3t-10} to $\left \lfloor \frac{2n}{3}-1 \right\rfloor \times \left \lfloor \frac{2n}{3} \right\rfloor $, for all $n\geq 6$.
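The nested triangles graphs underlying these lower bounds are easy to generate. The following sketch (ours, in Python) builds one common variant with $k$ concentric triangles, joining corresponding corners of consecutive triangles; the actual lower-bound constructions additionally triangulate the annuli between consecutive triangles, which this sketch omits.

```python
def nested_triangles(k):
    """Edge list of k nested triangles; vertex 3*j + i is corner i of triangle j."""
    edges = []
    for j in range(k):
        for i in range(3):
            edges.append((3 * j + i, 3 * j + (i + 1) % 3))   # cycle of triangle j
            if j + 1 < k:
                edges.append((3 * j + i, 3 * (j + 1) + i))   # spoke to triangle j+1
    return edges
```

A graph on $n = 3k$ vertices built this way has $3k$ triangle edges plus $3(k-1)$ connecting edges.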
\begin{figure}[htb]
\centering
\begin{tabular}{c c c}
\mbox{\epsfig{figure=LowerBoundValiant.eps,scale=0.3,clip=}} \hspace{5mm} &
\mbox{\epsfig{figure=NestedTrianglesGraph.eps,scale=0.3,clip=}} \hspace{5mm} &
\mbox{\epsfig{figure=LowerBoundG6big.eps,scale=0.3,clip=}}\\
(a) \hspace{5mm} & (b) \hspace{5mm} & (c) \\
\end{tabular}
\caption{(a) A graph~\cite{Val81} requiring quadratic area in any straight-line and poly-line drawing. (b) A graph~\cite{fpp-hdpgg-90} requiring $\left(\frac{2n}{3}-1\right) \times\left(\frac{2n}{3}-1\right)$ area in any straight-line and poly-line drawing. (c) A graph~\cite{FratiP07} requiring $\frac{4n^2}{9}-\frac{2n}{3}$ area in any straight-line drawing.}
\label{fig:lowerboundnested}
\end{figure}
However, the following remains open:
\begin{problem}
Close the gap between the $\frac{8n^2}{9} + O(n)$ upper bound and the $\frac{4n^2}{9} - O(n)$ lower bound for the area requirements of straight-line drawings of plane graphs.
\end{problem}
\subsection{$4$-Connected and Bipartite Planar Graphs} \label{se:straight-bipartite}
In this section, we discuss algorithms and bounds for constructing planar straight-line drawings of $4$-connected and bipartite planar graphs. Such different families of graphs are discussed in the same section since the best known upper bound for the area requirements of bipartite planar graphs uses a preliminary augmentation to $4$-connected planar graphs.
Concerning $4$-connected plane graphs, tight bounds are known for the area requirements of planar straight-line drawings if the graph has at least four vertices incident to the outer face. Namely, Miura \emph{et al.}~proved in~\cite{mnn-gd4pg-01} that every such graph has a planar straight-line drawing in $(\lceil{\frac{n}{2}}\rceil-1) \times (\lfloor{\frac{n}{2}}\rfloor)$ area, improving upon previous results of He~\cite{h-gefcpg-97}. The authors show that this bound is tight by exhibiting a class of $4$-connected plane graphs with four vertices incident to the outer face requiring $(\lceil{\frac{n}{2}}\rceil-1) \times (\lfloor{\frac{n}{2}}\rfloor)$ area (see Fig.~\ref{fig:straight-line-fourconnected}(a)).
\begin{figure}[htb]
\centering
\begin{tabular}{c c}
\mbox{\epsfig{figure=Four-Connected-LB.eps,scale=0.4,clip=}} \hspace{5mm} &
\mbox{\epsfig{figure=Bipartite-LB.eps,scale=0.4,clip=}} \\
(a) \hspace{5mm} & (b) \\
\end{tabular}
\caption{(a) A $4$-connected plane graph requiring $(\lceil{\frac{n}{2}}\rceil-1) \times (\lfloor{\frac{n}{2}}\rfloor)$ area in any straight-line planar drawing. (b) A bipartite plane graph requiring $(\lceil{\frac{n}{2}}\rceil-1) \times (\lfloor{\frac{n}{2}}\rfloor)$ area in any straight-line planar drawing.}
\label{fig:straight-line-fourconnected}
\end{figure}
The algorithm of Miura \emph{et al.}~divides the input $4$-connected plane graph $G$ into two graphs $G'$ and $G''$ with the same number of vertices. This is done by computing a \emph{4-canonical ordering} of $G$ (see~\cite{kh-rel4cpgiagdp-97}). The graph $G'$ (resp.\ $G''$) is then drawn inside an isosceles right triangle $\Delta'$ (resp.\ $\Delta''$) whose width is $\frac{n}{2}-1$ and whose height is half of its width. To construct such drawings of $G'$ and $G''$, Miura \emph{et al.}~design an algorithm that is similar to the algorithm by de Fraysseix \emph{et al.}~\cite{fpp-hdpgg-90}. In the drawings produced by their algorithm, the slopes of the edges incident to the outer faces of $G'$ and $G''$ have absolute value at most $45\degree$. The drawing of $G''$ is then rotated by $180\degree$ and placed on top of the drawing of $G'$. This allows for drawing the edges connecting $G'$ with $G''$ without creating crossings. Fig.~\ref{fig:Miura} depicts the construction of Miura \emph{et al.}'s algorithm.
As far as we know, no bound better than the one for general plane graphs is known for $4$-connected plane graphs (possibly having three vertices incident to the outer face), hence the following is open:
\begin{problem}
Close the gap between the $\frac{8n^2}{9} + O(n)$ upper bound and the $\frac{n^2}{4} - O(n)$ lower bound for the area requirements of straight-line drawings of $4$-connected plane graphs.
\end{problem}
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=Miura-algorithm.eps,scale=0.4,clip=}}}
\caption{The algorithm by Miura \emph{et al.}~to construct straight-line drawings of $4$-connected plane graphs~\cite{mnn-gd4pg-01}.}
\label{fig:Miura}
\end{figure}
Biedl and Brandenburg~\cite{bb-dpbgsa-05} show how to construct planar straight-line drawings of bipartite planar graphs in $(\lceil{\frac{n}{2}}\rceil-1) \times (\lfloor{\frac{n}{2}}\rfloor)$ area. To achieve such a bound, they exploit a result of Biedl \emph{et al.}~\cite{bkk-tpgfcc-98} stating that all planar graphs without separating triangles, except those ``containing a star'' (see~\cite{bb-dpbgsa-05}, and observe that in this case a star is not just a vertex plus some incident edges), can be augmented to $4$-connected by the insertion of dummy edges; once such an augmentation is done, Biedl and Brandenburg use the algorithm of Miura \emph{et al.}~\cite{mnn-gd4pg-01} to draw the resulting $4$-connected plane graph. In order to be able to use Miura \emph{et al.}'s algorithm, Biedl and Brandenburg prove that no bipartite plane graph ``contains a star'' and that Miura \emph{et al.}'s algorithm works, more generally, for plane graphs that become $4$-connected if an edge is added to them. The upper bound of Biedl and Brandenburg is tight, as the authors exhibit a bipartite plane graph requiring $(\lceil{\frac{n}{2}}\rceil-1) \times (\lfloor{\frac{n}{2}}\rfloor)$ area in any straight-line planar drawing (see Fig.~\ref{fig:straight-line-fourconnected}(b)).
\subsection{Series-Parallel Graphs} \label{se:straight-series-parallel}
In this section, we discuss algorithms and bounds for constructing small-area planar straight-line drawings of series-parallel graphs.
No sub-quadratic area upper bound is known for planar straight-line drawings of series-parallel graphs; the best known upper bound, which is quadratic, is provided in~\cite{zhn-sgdpgbb-10}.
In~\cite{f-lbarspg-j10} Frati proved that there exist series-parallel graphs requiring $\Omega(n 2^{\sqrt{\log n}})$ area in any straight-line or poly-line grid drawing. Such a result is achieved in two steps. In the first one, an $\Omega(n)$ lower bound for the maximum between the height and the width of any straight-line or poly-line grid drawing of $K_{2,n}$ is proved, thus answering a question of Felsner \emph{et~al.}~\cite{journals/jgaa/FelsnerLW03} and improving upon previous results of Biedl \emph{et~al.}~\cite{journals/ipl/BiedlCL03}. In the second one, an $\Omega(2^{\sqrt{\log n}})$ lower bound for the minimum between the height and the width of any straight-line or poly-line grid drawing of certain series-parallel graphs is proved.
The proof that $K_{2,n}$ requires $\Omega(n)$ height or width in any straight-line or poly-line drawing has several ingredients. First, a simple ``optimal'' drawing algorithm for $K_{2,n}$ is exhibited, that is, an algorithm is presented that computes a drawing of $K_{2,n}$ inside an arbitrary convex polygon whenever such a drawing exists. Second, the drawings constructed by the mentioned algorithm inside a rectangle are studied. Such a study reveals that the slopes of the segments representing the edges of $K_{2,n}$ have a strong relationship with the pairs of relatively prime numbers as ordered in the \emph{Stern-Brocot} tree (see~\cite{s-uzf-58,b-cranm-60} and Fig.~\ref{fig:sternbrocot}). Such a relationship leads to the derivation of some arithmetical properties of the lines passing through infinitely many grid points in the plane, and hence to the $\Omega(n)$ lower bound.
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=SternBrocot.eps,scale =0.5,clip=}}}
\caption{The Stern-Brocot tree is a tree containing all the pairs of relatively prime numbers.}
\label{fig:sternbrocot}
\end{figure}
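The mediant construction that generates the Stern-Brocot tree is simple to reproduce. The following Python sketch (our own illustration, not code from~\cite{f-lbarspg-j10}) builds the first levels by inserting mediants between adjacent boundary fractions and checks that every generated pair is relatively prime:

```python
from math import gcd

def stern_brocot_levels(depth):
    """Return the first `depth` levels of the Stern-Brocot tree.

    Fractions are (numerator, denominator) pairs. The boundary sequence
    starts with the formal fractions 0/1 and 1/0; each level consists of
    the mediants of adjacent boundary fractions.
    """
    boundary = [(0, 1), (1, 0)]
    levels = []
    for _ in range(depth):
        level = []
        new_boundary = [boundary[0]]
        for (a, b), (c, d) in zip(boundary, boundary[1:]):
            mediant = (a + c, b + d)  # mediant of a/b and c/d
            level.append(mediant)
            new_boundary.extend([mediant, (c, d)])
        levels.append(level)
        boundary = new_boundary
    return levels

levels = stern_brocot_levels(6)
# Level k contains exactly 2^k fractions, all in lowest terms.
for k, level in enumerate(levels):
    assert len(level) == 2 ** k
    assert all(gcd(p, q) == 1 for p, q in level)
```

The classical first levels produced are $\frac{1}{1}$; then $\frac{1}{2},\frac{2}{1}$; then $\frac{1}{3},\frac{2}{3},\frac{3}{2},\frac{3}{1}$; and so on, with every pair of relatively prime numbers appearing exactly once.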
The results on the area requirements of $K_{2,n}$ are then used to construct series-parallel graphs (shown in Fig.~\ref{fig:straight-line-sp-improvedgraphs}) out of several copies of $K_{2,2^{\sqrt{\log n}}}$ and to prove that such graphs require $\Omega(2^{\sqrt{\log n}})$ height and width in any straight-line or poly-line grid drawing.
\begin{figure}[htb]
\centering{
\begin{tabular} {c c c}
\mbox{\epsfig{figure=G1.eps,scale =0.5,clip=}} \hspace{3mm} &
\mbox{\epsfig{figure=G2.eps,scale =0.5,clip=}} \hspace{3mm} &
\mbox{\epsfig{figure=G3.eps,scale =0.5,clip=}} \\
(a) \hspace{3mm} & (b) \hspace{3mm} & (c)
\end{tabular}}
\caption{The inductive construction of series-parallel graphs requiring $\Omega(2^{\sqrt{\log n}})$ height and width in any straight-line or poly-line grid drawing.}
\label{fig:straight-line-sp-improvedgraphs}
\end{figure}
As no sub-quadratic area upper bound is known for straight-line planar drawings of series-parallel graphs, the following is open.
\begin{problem}
Close the gap between the $O(n^2)$ upper bound and the $\Omega(n 2^{\sqrt{\log n}})$ lower bound for the area requirements of straight-line drawings of series-parallel graphs.
\end{problem}
Related to the above problem, Wood~\cite{w-08} conjectures the following: Let $p_1,\dots,p_k$ be positive integers. Let $G(p_1)$ be the graph obtained from $K_3$ by adding $p_1$ new vertices adjacent to $v$ and $w$ for each edge $(v,w)$ of $K_3$. For $k \geq 2$, let $G(p_1,p_2,\dots,p_k)$ be the graph obtained from $G(p_1,p_2,\dots,p_{k-1})$ by adding $p_k$ new vertices adjacent to $v$ and $w$ for each edge $(v,w)$ of $G(p_1,p_2,\dots,p_{k-1})$. Observe that $G(p_1,p_2,\dots,p_k)$ is a series-parallel graph.
\begin{conjecture} (D. R. Wood) Every straight-line grid drawing of $G(p_1,p_2,\dots,p_k)$ requires $\Omega(n^2)$ area for some choice of $k$ and $p_1,p_2,\dots,p_k$.
\end{conjecture}
\subsection{Outerplanar Graphs} \label{se:straight-outerplanar}
In this section, we discuss algorithms and bounds for constructing small-area planar straight-line drawings of outerplanar graphs.
The first non-trivial bound appeared in~\cite{gr-aepsdog-07}, where Garg and Rusu proved that every outerplanar graph with maximum degree $d$ has a straight-line drawing with $O(dn^{1.48})$ area. Such a result is achieved by means of an algorithm that works by induction on the dual tree $T$ of the outerplanar graph $G$. Namely, the algorithm finds a path $P$ in $T$, removes from $G$ the subgraph $G_P$ that has $P$ as its dual tree, inductively draws the outerplanar graphs that are disconnected by such a removal, and puts all the drawings of such outerplanar graphs together with a drawing of $G_P$, obtaining a drawing of the whole outerplanar graph.
The first sub-quadratic area upper bound for straight-line drawings of outerplanar graphs has been proved by Di Battista and Frati in~\cite{df-sadog-j09}. The result in~\cite{df-sadog-j09} uses the following ingredients. First, it is shown that the dual binary tree $T$ of a maximal outerplanar graph $G$ is a subgraph of $G$ itself. Second, a restricted class of straight-line drawings of binary trees, called \emph{star-shaped drawings}, is defined. Star-shaped drawings are straight-line drawings in which special visibility properties among the nodes of the tree are satisfied (see Fig.~\ref{fig:straight-starshaped}).
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=Star-shaped.eps,scale =0.5,clip=}}}
\caption{A star-shaped drawing $\Gamma$ of a binary tree $T$ (with thick edges and black vertices). The dashed edges and white vertices augment $\Gamma$ into a straight-line drawing of the outerplanar graph that $T$ is dual to.}
\label{fig:straight-starshaped}
\end{figure}
Namely, if a tree $T$ admits a star-shaped drawing $\Gamma$, then the edges that augment $T$ into $G$ can be drawn in $\Gamma$ without creating crossings, thus resulting in a straight-line planar drawing of $G$. Third, an algorithm is shown to construct a star-shaped drawing of any binary tree $T$ in $O(n^{1.48})$ area. Such an algorithm works by induction on the number of nodes of $T$ (Fig.~\ref{fig:straight-line-outerplanar} depicts two inductive cases of such a construction), making use of a strong combinatorial decomposition of ordered binary trees introduced by Chan in~\cite{c-nlabdbt-02} (discussed in Sect.~\ref{se:straight-trees}).
\begin{figure}[htb]
\centering{
\begin{tabular} {c c}
\mbox{\epsfig{figure=OuterplanarComposition.eps,scale =0.4,clip=}} \hspace{4mm} &
\mbox{\epsfig{figure=OuterplanarComposition2.eps,scale =0.4,clip=}}
\end{tabular}}
\caption{Two inductive cases of the algorithm to construct star-shaped drawings of binary trees, yielding an $O(n^{1.48})$ upper bound for straight-line drawings of outerplanar graphs. The rectangles and the half-circles represent subtrees recursively drawn by the construction in the right and in the left parts of the figure, respectively.}
\label{fig:straight-line-outerplanar}
\end{figure}
Frati used in~\cite{f-sdogdnlogna-07} the same approach as in~\cite{df-sadog-j09}, together with a different geometric construction (shown in Fig.~\ref{fig:straight-outerplanar-degreed}), to prove that every outerplanar graph with maximum degree $d$ has a straight-line drawing with $O(dn\log n)$ area.
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=OuterplanarComposition3.eps,scale =0.5,clip=}}}
\caption{The inductive construction of star-shaped drawings of binary trees yielding an $O(dn\log n)$ upper bound for straight-line drawings of outerplanar graphs with maximum degree $d$. The rectangles represent recursively constructed star-shaped drawings of subtrees.}
\label{fig:straight-outerplanar-degreed}
\end{figure}
As far as we know, no super-linear area lower bound is known for straight-line drawings of outerplanar graphs. In~\cite{b-dopgia-02} Biedl defined a class of outerplanar graphs, called \emph{snowflake graphs}, and conjectured that such graphs require $\Omega(n \log n)$ area in any straight-line or poly-line drawing. However, Frati disproved such a conjecture in~\cite{f-sdogdnlogna-07} by exhibiting $O(n)$ area straight-line drawings of snowflake graphs. In the same paper, he conjectured that an $O(n\log n)$ area upper bound for straight-line drawings of outerplanar graphs cannot be achieved by squeezing the drawing along one coordinate direction, as stated in the following.
\begin{conjecture} (F. Frati) There exist $n$-vertex outerplanar graphs for which, for any straight-line drawing in which the longest side of the bounding-box is $O(n)$, the smallest side of the bounding-box is $\omega(\log n)$.
\end{conjecture}
The following problem remains wide open.
\begin{problem}
Close the gap between the $O(n^{1.48})$ upper bound and the $\Omega(n)$ lower bound for the area requirements of straight-line drawings of outerplanar graphs.
\end{problem}
\subsection{Trees} \label{se:straight-trees}
In this section, we present algorithms and bounds for constructing planar straight-line drawings of trees.
The best known bound for straight-line drawings of general trees is, as far as we know, the $O(n \log n)$ area upper bound provided by a simple modification of the $hv$-drawing algorithm of Crescenzi, Di Battista, and Piperno~\cite{cdp-noad-92}. Such an algorithm proves that a straight-line drawing of any tree $T$ in $O(n) \times O(\log n)$ area can be constructed with the further constraint that the root of $T$ is placed at the bottom-left corner of the bounding box of the drawing. If $T$ has one node, such a drawing is trivially constructed. If $T$ has more than one node, then let $T_1, \dots, T_k$ be the subtrees of $T$ rooted at the children of the root, where we assume, w.l.o.g., that $T_k$ is the subtree of $T$ with the greatest number of nodes. Then, the root of $T$ is placed at $(0,0)$, the subtrees $T_1, \dots, T_{k-1}$ are placed one beside the other, with the bottom sides of their bounding boxes on the line $y=1$, and $T_{k}$ is placed beside the other subtrees, with the bottom side of its bounding box on the line $y=0$. The width of the drawing is clearly $O(n)$, while its height is $h(n)=\max\{h(n-1),1+h(n/2)\}=O(\log n)$, where $h(n)$ denotes the maximum height of a drawing of an $n$-node tree constructed by the algorithm. See Fig.~\ref{fi:trees-hvdrawing} for an illustration of such an algorithm. Interestingly, no super-linear area lower bound is known for the area requirements of straight-line drawings of trees.
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=HV.eps,scale =0.5,clip=}}}
\caption{Inductive construction of a straight-line drawing of a tree in $O(n\log n)$ area.}
\label{fi:trees-hvdrawing}
\end{figure}
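To see why the height stays logarithmic, note that a row of height is spent only on subtrees that are not the largest, and any such subtree has at most $n/2$ nodes. The toy Python implementation below (our own illustration of the placement strategy, with trees encoded as nested lists of children) computes the height produced by it:

```python
import math

def drawing_height(tree):
    """Height (number of grid rows minus one) of the drawing produced by
    the strategy above: the root sits at the bottom-left corner, the
    largest subtree shares the root's row (y = 0), and all other subtrees
    are placed one row higher (y = 1).

    A tree is the list of its root's subtrees; a leaf is [].
    """
    if not tree:
        return 0
    heights = [drawing_height(child) for child in tree]
    sizes = [size(child) for child in tree]
    k = sizes.index(max(sizes))  # index of the largest subtree
    h = heights[k]               # the largest subtree stays on the root's row
    for i, hi in enumerate(heights):
        if i != k:
            h = max(h, 1 + hi)   # the other subtrees start one row higher
    return h

def size(tree):
    return 1 + sum(size(child) for child in tree)

def complete_binary(depth):
    return [] if depth == 0 else [complete_binary(depth - 1), complete_binary(depth - 1)]

# A complete binary tree with n = 2^(d+1) - 1 nodes is drawn with height d,
# i.e., O(log n); a path is drawn with height 0.
t = complete_binary(10)
assert drawing_height(t) <= math.log2(size(t) + 1)
```

Each recursive step that increases the height by one unit at least halves the number of nodes, which is exactly the recurrence $h(n)=\max\{h(n-1),1+h(n/2)\}$.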
For the special case of bounded-degree trees, linear area bounds have been achieved. In fact, Garg and Rusu presented an algorithm to construct straight-line drawings of binary trees in $O(n)$ area~\cite{gr-sdbtlaaar-04} and an algorithm to construct straight-line drawings of trees with degree $O(\sqrt n)$ in $O(n)$ area~\cite{iccsa/GargR03}. Both algorithms rely on the existence of simple \emph{separators} for bounded-degree trees. Namely, every binary tree $T$ has a \emph{separator edge}, that is, an edge whose removal disconnects $T$ into two trees both having at most $2n/3$ vertices~\cite{Val81}, and every degree-$d$ tree $T$ has a vertex whose removal disconnects $T$ into at most $d$ trees, each having at most $n/2$ nodes~\cite{iccsa/GargR03}. Such separators are exploited by Garg and Rusu to design quite complex inductive algorithms that achieve linear area bounds and optimal aspect ratio.
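The separator edge for binary trees mentioned above can be found by computing subtree sizes and picking the edge that balances the two components best. The brute-force Python sketch below is our own illustration of this fact, not Garg and Rusu's implementation:

```python
def size(tree):
    """A tree is the list of its root's subtrees; a leaf is []."""
    return 1 + sum(size(c) for c in tree)

def separator_edge(tree):
    """Find a separator edge of a binary tree by brute force: among all
    edges, return the subtree below the edge minimizing the larger of the
    two component sizes, and check that this size is at most 2n/3."""
    n = size(tree)
    best = None  # (size of the larger component, subtree below the edge)
    stack = [tree]
    while stack:
        node = stack.pop()
        for child in node:
            s = size(child)
            worst = max(s, n - s)  # component sizes after removing the edge
            if best is None or worst < best[0]:
                best = (worst, child)
            stack.append(child)
    worst, sub = best
    assert 3 * worst <= 2 * n, "no separator edge (tree not binary?)"
    return sub, n - size(sub)

# Example: a complete binary tree with 127 nodes splits into components of
# size 63 and 64 by removing an edge at the root.
def _complete(d):
    return [] if d == 0 else [_complete(d - 1), _complete(d - 1)]

sub, rest = separator_edge(_complete(6))
assert {size(sub), rest} == {63, 64}
```

The assertion inside `separator_edge` reflects the guarantee of~\cite{Val81} for trees of maximum degree three; it can fail for non-binary trees (e.g., a star).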
The following problem remains open:
\begin{problem}
Close the gap between the $O(n \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of straight-line drawings of trees.
\end{problem}
A lot of attention has been devoted to studying the area requirements of straight-line drawings of trees satisfying additional constraints. Table~\ref{ta:trees-straight-line} summarizes the best known area bounds for various kinds of straight-line drawings of trees.
\begin{table}[!htb]\centering\footnotesize
\linespread{1.2}
\selectfont
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\cline{2-9}
\multicolumn{1}{c|}{} & \emph{Ord. Pres.} & \emph{Upw.} & \emph{Str. Upw.} & \emph{Orth.} & \emph{Upper Bound} & \emph{Refs.} & \emph{Lower Bound} & \emph{Refs.} \\
\cline{2-9}
\hline
\emph{Binary} & & & & & $O(n)$ & \cite{gr-sdbtlaaar-04} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{Binary} & \checkmark & & & & $O(n\log \log n)$ & \cite{gr-aeoppsdot-03} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{Binary} & & \checkmark & & & $O(n\log \log n)$ & \cite{skc-aeastd-00} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{Binary} & & & \checkmark & & $O(n\log n)$ & \cite{cdp-noad-92} & $\Omega(n \log n)$ & \cite{cdp-noad-92} \\
\hline
\emph{Binary} & \checkmark & & \checkmark & & $O(n \log n)$ & \cite{gr-aeoppsdot-03} & $\Omega(n\log n)$ & \cite{cdp-noad-92}\\
\hline
\emph{Binary} & & & & \checkmark & $O(n \log \log n)$ & \cite{cgkt-oaarsod-02,skc-aeastd-00} & $\Omega(n)$ & \emph{trivial}\\
\hline
\emph{Binary} & & \checkmark & & \checkmark & $O(n \log n)$ & \cite{cdp-noad-92,cgkt-oaarsod-02} & $\Omega(n\log n)$ & \cite{cgkt-oaarsod-02}\\
\hline
\emph{Binary} & \checkmark & & & \checkmark & $O(n^{1.5})$ & \cite{f-sodbtt-06} & $\Omega(n)$ & \emph{trivial}\\
\hline
\emph{Ternary} & & & & \checkmark & $O(n^{1.631})$ & \cite{f-sodbtt-06} & $\Omega(n)$ & \emph{trivial}\\
\hline
\emph{Ternary} & \checkmark & & & \checkmark & $O(n^2)$ & \cite{f-sodbtt-06} & $\Omega(n^2)$ & \cite{f-sodbtt-06}\\
\hline
\emph{General} & & & & & $O(n \log n)$ & \cite{cdp-noad-92} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{General} & \checkmark & & & & $O(n\log n)$ & \cite{gr-aeoppsdot-03} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{General} & & \checkmark & & & $O(n\log n)$ & \cite{cdp-noad-92} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{General} & & & \checkmark & & $O(n\log n)$ & \cite{cdp-noad-92} & $\Omega(n \log n)$ & \cite{cdp-noad-92} \\
\hline
\emph{General} & \checkmark & & \checkmark & & $O(n 4^{\sqrt{2\log n}})$ & \cite{c-nlabdbt-02} & $\Omega(n\log n)$ & \cite{cdp-noad-92}\\
\hline
\end{tabular}
\vspace{2mm}
\caption{\small Summary of the best known area bounds for straight-line drawings of trees. ``Ord. Pres.'', ``Upw.'', ``Str. Upw.'', and ``Orth.'' stand for order-preserving, upward, strictly-upward, and orthogonal, respectively.}
\label{ta:trees-straight-line}
\end{table}
Concerning \emph{straight-line upward drawings}, the algorithm of Crescenzi {\em et al.}~\cite{cdp-noad-92} illustrated above achieves the best known upper bound of $O(n\log n)$. For trees with constant degree, Shin \emph{et al.}~prove in~\cite{skc-aeastd-00} that upward straight-line drawings in $O(n\log \log n)$ area can be constructed. Their algorithm is based on nice inductive geometric constructions and suitable tree decompositions. No super-linear area lower bound is known for either binary or general trees, hence the following problems are open:
\begin{problem}
Close the gap between the $O(n \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of upward straight-line drawings of trees.
\end{problem}
\begin{problem}
Close the gap between the $O(n \log \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of upward straight-line drawings of binary trees.
\end{problem}
Concerning \emph{straight-line strictly-upward drawings}, tight bounds are known. In fact, the algorithm of Crescenzi {\em et al.}~\cite{cdp-noad-92} can be suitably modified in order to obtain strictly-upward drawings (instead of aligning the subtrees of the root with their bottom sides on the same horizontal line, it is sufficient to align them with their left sides on the same vertical line). The same authors also showed a binary tree $T^*$ requiring $\Omega(n \log n)$ area in any strictly-upward drawing, hence their bound is tight. The tree $T^*$, which is shown in Fig.~\ref{fig:trees-upward-LB}, is composed of a path with $\Omega(n)$ nodes (forcing the height of the drawing to be $\Omega(n)$) and of a complete binary tree with $\Omega(n)$ nodes (forcing the width of the drawing to be $\Omega(\log n)$).
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=Trees-upwardLB.eps,scale =0.5,clip=}}}
\caption{A binary tree $T^*$ requiring $\Omega(n \log n)$ area in any strictly-upward drawing.}
\label{fig:trees-upward-LB}
\end{figure}
Concerning \emph{straight-line order-preserving drawings}, Garg and Rusu have shown in~\cite{gr-aeoppsdot-03} how to obtain an $O(n\log n)$ area upper bound for general trees. The algorithm of Garg and Rusu inductively assumes that an \emph{$\alpha$-drawing} of a tree $T$ can be constructed, that is, a straight-line order-preserving drawing of $T$ can be constructed with the further constraints that the root $r$ of $T$ is on the upper left corner of the bounding-box of the drawing, that the children of $r$ are placed on the vertical line one unit to the right of $r$, and that the vertical distance between $r$ and any other node of $T$ is at least $\alpha$. Refer to Fig.~\ref{fig:gargrusu-ordered}(a). To construct a drawing of $T$, the algorithm considers inductively constructed drawings of all the subtrees rooted at the children of $r$, except for the node $u$ that is the root of the subtree of $r$ with the greatest number of nodes, and places such drawings one unit to the right of $r$, with their left sides aligned. Further, the algorithm considers inductively constructed drawings of all the subtrees rooted at the children of $u$, except for the node $v$ that is the root of the subtree of $u$ with the greatest number of nodes, and places such drawings two units to the right of $r$, with their left sides aligned. Finally, the subtree rooted at $v$ is inductively drawn; the resulting drawing is reflected and placed with its left side on the same vertical line as $r$. Thus, the height of the drawing is clearly $O(n)$, while its width is $w(n)=\max\{w(n-1),3+w(n/2)\}=O(\log n)$, where $w(n)$ denotes the maximum width of a drawing of an $n$-node tree constructed by the algorithm. Garg and Rusu also show how to combine this result with a decomposition scheme of binary trees due to Chan \emph{et al.}~\cite{cgkt-oaarsod-02} to obtain $O(n\log \log n)$ area straight-line order-preserving drawings of binary trees.
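The $O(\log n)$ width can be verified numerically from the recurrence: each recursive step either drops one node or pays three columns to halve the subtree size. A short sketch of ours, taking $w(1)=1$ and reading $n/2$ as $\lfloor n/2 \rfloor$:

```python
def width_bound(n_max):
    """Iteratively evaluate w(n) = max(w(n-1), 3 + w(n // 2)), w(1) = 1,
    the width recurrence of the order-preserving drawing algorithm."""
    w = [0, 1]  # w[0] is unused; w[1] = 1
    for n in range(2, n_max + 1):
        w.append(max(w[n - 1], 3 + w[n // 2]))
    return w

w = width_bound(1 << 16)
# w(n) <= 3 * ceil(log2 n) + 1, confirming the O(log n) width;
# (n - 1).bit_length() equals ceil(log2 n) for n >= 1.
for n in range(1, len(w)):
    assert w[n] <= 3 * (n - 1).bit_length() + 1
```

In fact equality holds at powers of two, $w(2^k)=3k+1$, so the constant $3$ in the recurrence translates directly into the constant in front of $\log_2 n$.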
As no super-linear lower bound is known for the area requirements of straight-line order-preserving drawings of trees, the following problems remain open:
\begin{figure}[htb]
\centering{
\begin{tabular} {c c c c}
\mbox{\epsfig{figure=GargRusu-Ordered.eps,scale =0.45,clip=}} \hspace{4mm} &
\mbox{\epsfig{figure=GargRusu-Binary1.eps,scale =0.45,clip=}} \hspace{4mm} &
\mbox{\epsfig{figure=GargRusu-Binary2.eps,scale =0.45,clip=}} \hspace{4mm} &
\mbox{\epsfig{figure=Chan-Binary.eps,scale =0.45,clip=}} \\
(a) \hspace{4mm} & (b) \hspace{4mm} & (c) \hspace{4mm} & (d)
\end{tabular}}
\caption{(a) The inductive construction of a straight-line order-preserving drawing of a tree in $O(n\log n)$ area. (b)--(c) The inductive construction of a straight-line strictly-upward order-preserving drawing of a binary tree in $O(n\log n)$ area. The construction in (b) (resp. in (c)) refers to the case in which the left (resp. the right) subtree of $r$ contains more nodes than the right (resp. the left) subtree of $r$. (d) The geometric construction of the algorithm of Chan.}
\label{fig:gargrusu-ordered}
\end{figure}
\begin{problem}
Close the gap between the $O(n \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of straight-line order-preserving drawings of trees.
\end{problem}
\begin{problem}
Close the gap between the $O(n \log \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of straight-line order-preserving drawings of binary trees.
\end{problem}
Concerning \emph{straight-line strictly-upward order-preserving drawings}, Garg and Rusu have shown in~\cite{gr-aeoppsdot-03} how to obtain an $O(n\log n)$ area upper bound for binary trees. Observe that such an upper bound is still matched by the $\Omega(n\log n)$ lower bound of Crescenzi {\em et al.}~\cite{cdp-noad-92} described above. The algorithm of Garg and Rusu, shown in Figs.~\ref{fig:gargrusu-ordered}(b)--(c), is similar to their algorithm, described above, for constructing straight-line order-preserving drawings of trees. The results of Garg and Rusu improved upon previous results by Chan in~\cite{c-nlabdbt-02}. In~\cite{c-nlabdbt-02}, the author proved that every binary tree admits a straight-line strictly-upward order-preserving drawing in $O(n^{1+\epsilon})$ area, for any constant $\epsilon>0$. In the same paper, the author proved the best known upper bound for the area requirements of straight-line strictly-upward order-preserving drawings of trees, namely $O(n 4^{\sqrt{2\log n}})$. The approach of Chan consists of using very simple geometric constructions together with non-trivial tree decompositions. The simplest geometric construction discussed by Chan consists of selecting a path $P$ in the input tree $T$, drawing $P$ on a vertical line $l$, and inductively constructing drawings of the subtrees of $P$ to be placed to the left and right of $l$ (see Fig.~\ref{fig:gargrusu-ordered}(d)). Thus, denoting by $w(n)$ the maximum width of a drawing constructed by the algorithm, it holds that $w(n)=1+w(n_1)+w(n_2)$, where $n_1$ and $n_2$ are the maximum numbers of nodes in a left subtree and in a right subtree of $P$, respectively (assuming that $w(n)$ is monotone in $n$). Thus, depending on the way in which $P$ is chosen, different upper bounds on the asymptotic behavior of $w(n)$ can be achieved. Chan proves that $P$ can be chosen so that $w(n)=O(n^{0.48})$.
Such a bound is at the basis of the best known upper bound for constructing straight-line drawings of outerplanar graphs (see~\cite{df-sadog-j09} and Sect.~\ref{se:straight-outerplanar}). An improvement on the following problem would likely improve the area upper bound for straight-line drawings of outerplanar graphs:
\begin{problem}
Let $w(n)$ be the function inductively defined as follows: $w(0)=0$, $w(1)=1$, and, for any $n>1$, let $w(n)=\max_T \{\min_P \{1 + w(n_1) + w(n_2)\}\}$, where the maximum is among all ordered rooted trees $T$ with $n$ vertices, the minimum is among all the root-to-leaf paths $P$ in $T$, where $n_1$ denotes the largest number of nodes in a left subtree of $P$, and where $n_2$ denotes the largest number of nodes in a right subtree of $P$. What is the asymptotic behavior of $w(n)$?
\end{problem}
It is easy to observe an $\Omega(\log n)$ lower bound for $w(n)$. We believe that in fact $w(n)=\Omega(2^{\sqrt{\log n}})$, but it is not clear to us whether the same bound can be achieved from above.
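For very small $n$, the function $w(n)$ of the problem above can be evaluated exactly by brute force, enumerating all ordered rooted trees and all root-to-leaf paths. The exponential-time Python sketch below is our own and is feasible only for tiny $n$:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def ordered_trees(n):
    """All ordered rooted trees with n nodes, encoded as tuples of subtrees."""
    if n == 1:
        return ((),)
    trees = []
    # Split off the first subtree (k nodes); the remaining n - k nodes form
    # a tree with the same root carrying the remaining subtrees.
    for k in range(1, n):
        for first, rest in product(ordered_trees(k), ordered_trees(n - k)):
            trees.append((first,) + rest)
    return tuple(trees)

def tsize(t):
    return 1 + sum(tsize(c) for c in t)

def min_path_cost(t, w):
    """min over root-to-leaf paths P of 1 + w[n1] + w[n2], where n1 (n2) is
    the largest size of a subtree hanging to the left (right) of P."""
    best = [float("inf")]
    def dfs(node, ml, mr):
        if not node:  # reached a leaf: the path is complete
            best[0] = min(best[0], 1 + w[ml] + w[mr])
            return
        sizes = [tsize(c) for c in node]
        for i, child in enumerate(node):
            dfs(child, max([ml] + sizes[:i]), max([mr] + sizes[i + 1:]))
    dfs(t, 0, 0)
    return best[0]

# w[n] = max over n-node ordered trees of the min path cost; w[0]=0, w[1]=1.
w = [0, 1]
for n in range(2, 9):
    w.append(max(min_path_cost(t, w) for t in ordered_trees(n)))
```

The number of ordered trees with $n$ nodes is the $(n-1)$-st Catalan number, so this approach does not scale; it is only meant to make the definition of $w(n)$ concrete. The complete binary tree with $7$ nodes already forces $w(7)\geq 3$.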
Turning the attention back to straight-line strictly-upward order-preserving drawings, the following problem remains open:
\begin{problem}
Close the gap between the $O(n 4^{\sqrt{2\log n}})$ upper bound and the $\Omega(n \log n)$ lower bound for the area requirements of straight-line strictly-upward order-preserving drawings of trees.
\end{problem}
Concerning \emph{straight-line orthogonal drawings}, Chan \emph{et al.}~in~\cite{cgkt-oaarsod-02} and Shin \emph{et al.}~in~\cite{skc-aeastd-00} have independently shown that $O(n \log \log n)$ area suffices for binary trees. Both algorithms are based on nice inductive geometric constructions and on non-trivial tree decompositions. Frati proved in~\cite{f-sodbtt-06} that every ternary tree admits a straight-line orthogonal drawing in $O(n^{1.631})$ area. The following problems are still open:
\begin{problem}
Close the gap between the $O(n \log \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of straight-line orthogonal drawings of binary trees.
\end{problem}
\begin{problem}
Close the gap between the $O(n^{1.631})$ upper bound and the $\Omega(n)$ lower bound for the area requirements of straight-line orthogonal drawings of ternary trees.
\end{problem}
Concerning \emph{straight-line upward orthogonal drawings}, Crescenzi \emph{et al.}~\cite{cdp-noad-92} and Chan \emph{et al.}~\cite{cgkt-oaarsod-02} have shown that $O(n \log n)$ area suffices for binary trees. Such an area bound is worst-case optimal, as proved in~\cite{cgkt-oaarsod-02}. The tree providing the lower bound, shown in Fig.~\ref{fig:trees-orthogonal-LB}, consists of a path to which some complete binary trees are attached.
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=Trees-orthogonalLB.eps,scale =0.5,clip=}}}
\caption{A binary tree requiring $\Omega(n \log n)$ area in any straight-line upward orthogonal drawing. The tree is composed of a path $P$ and of complete binary trees with size $n^{\alpha/2}$, where $\alpha>0$ is some constant, attached to the $i$-th node of $P$, for each $i$ that is a multiple of $n^{\alpha/2}$.}
\label{fig:trees-orthogonal-LB}
\end{figure}
Concerning \emph{straight-line order-preserving orthogonal drawings}, $O(n^{1.5})$ and $O(n^2)$ area upper bounds are known~\cite{f-sodbtt-06} for binary and ternary trees, respectively. Once again such algorithms are based on simple inductive geometric constructions. While the bound for ternary trees is tight, no super-linear lower bound is known for straight-line order-preserving orthogonal drawings of binary trees, hence the following is open:
\begin{problem}
Close the gap between the $O(n^{1.5})$ upper bound and the $\Omega(n)$ lower bound for the area requirements of straight-line order-preserving orthogonal drawings of binary trees.
\end{problem}
\section{Poly-line Drawings}\label{se:poly-line}
In this section, we discuss algorithms and bounds for constructing small-area planar poly-line drawings of planar graphs and their subclasses.
In Sect.~\ref{se:poly-planar} we deal with general planar graphs, in Sect.~\ref{se:poly-sp} we deal with series-parallel and outerplanar graphs, and in Sect.~\ref{se:poly-trees} we deal with trees. Table~\ref{ta:poly-line} summarizes the best known area bounds for poly-line planar drawings of planar graphs and their subclasses. Observe that the lower bound of the table referring to general planar graphs holds true for {\em plane} graphs.
\begin{table}[!htb]\footnotesize
\centering
\linespread{1.2}
\selectfont
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \emph{Upper Bound} & \emph{Refs.} & \emph{Lower Bound} & \emph{Refs.} \\
\hline
{\em General Planar Graphs} & $\frac{4(n-1)^2}{9}$ & \cite{bsm-wtr-02} & $\frac{4(n-1)^2}{9}$ & \cite{fpp-hdpgg-90}\\
\hline
{\em Series-Parallel Graphs} & $O(n^{1.5})$ & \cite{b-sdogspgopg-11} & $\Omega(n 2^{\sqrt{\log n}})$ & \cite{f-lbarspg-j10}\\
\hline
{\em Outerplanar Graphs} & $O(n \log n)$ & \cite{b-dopgia-02,b-sdogspgopg-11} & $\Omega(n)$ & \emph{trivial}\\
\hline
{\em Trees} & $O(n \log n)$ & \cite{cdp-noad-92} & $\Omega(n)$ & \emph{trivial}\\
\hline
\end{tabular}
\vspace{2mm}
\caption{\small A table summarizing the area requirements for poly-line planar drawings of several classes of planar graphs.}
\label{ta:poly-line}
\end{table}
\subsection{General Planar Graphs} \label{se:poly-planar}
Every $n$-vertex plane graph admits a planar poly-line drawing on a grid with $O(n^2)$ area. In fact, this has been known since the early 1980s~\cite{w-dpgt-82}. Tamassia and Tollis introduced in~\cite{tt-uavrpg-86} a technique that has since become pretty much a standard for constructing planar poly-line drawings. Namely, the authors showed that a poly-line drawing $\Gamma$ of a plane graph $G$ can be easily obtained from a visibility representation $R$ of $G$; moreover, $\Gamma$ and $R$ have asymptotically the same area. In order to obtain a visibility representation $R$ of $G$, Tamassia and Tollis design a very nice algorithm (an application is shown in Fig.~\ref{fig:tamassia-vr}). The algorithm assumes that $G$ is biconnected (if it is not, it suffices to augment $G$ to be biconnected by inserting dummy edges, apply the algorithm, and then remove the inserted dummy edges to obtain a visibility representation of $G$). The algorithm consists of the following steps:
\begin{figure}[htb]
\centering{
\begin{tabular} {c c}
\mbox{\epsfig{figure=Visibility1.eps,scale =0.4,clip=}} \hspace{7mm} &
\mbox{\epsfig{figure=Visibility2.eps,scale =0.4,clip=}} \\
(a) \hspace{7mm} & (b)
\end{tabular}
}
\caption{An illustration for the algorithm of Tamassia and Tollis~\cite{tt-uavrpg-86}. (a) White circles and solid edges represent $G$. Black circles and dashed edges represent $G^*$. An st-numbering of $G$ (and the corresponding orientation) is shown. An orientation of $G^*$ and the number $2\psi(f)$ for each face $f$ of $G$ are shown. (b) A visibility representation of $G$.}
\label{fig:tamassia-vr}
\end{figure}
(1) Consider an orientation of $G$ induced by an \emph{st-numbering} of $G$, that is, a bijective mapping $\phi : V(G)\rightarrow \{1,\dots,n\}$ such that, for a given edge $(s,t)$ incident to the outer face of $G$, $\phi(s)=1$, $\phi(t)=n$, and, for each $u\in V(G)$ with $u \neq s,t$, there exist two neighbors of $u$, say $v$ and $w$, such that $\phi(v)<\phi(u)<\phi(w)$; (2) consider the orientation of the dual graph $G^*$ of $G$ induced by the orientation of $G$; (3) the $y$-coordinate of each vertex-segment $u$ is given by $\phi(u)$; (4) the $y$-coordinates of the endpoints of each edge-segment $(u,v)$ are given by $\phi(u)$ and $\phi(v)$; (5) the $x$-coordinate of edge-segment $(s,t)$ is set equal to $-1$; (6) the $x$-coordinate of each edge-segment $(u,v)$ is chosen to be any number strictly between $2\psi(f)$ and $2\psi(g)$, where $f$ and $g$ are the faces adjacent to $(u,v)$ in $G$ and $\psi(f)$ denotes the length of the longest path from the source to $f$ in $G^*$; (7) finally, the $x$-coordinates of the endpoints of each vertex-segment $u$ are set equal to the smallest and largest $x$-coordinates of its incident edge-segments.
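The defining property of the st-numbering in step (1) is easy to check programmatically. The sketch below is a toy validator of ours (computing an st-numbering in linear time requires more machinery, namely the algorithm of Even and Tarjan); it verifies the property on a small biconnected example:

```python
def is_st_numbering(edges, phi, s, t):
    """Check that phi is an st-numbering of the graph given by the edge
    list `edges` with respect to the edge (s, t): phi is a bijection onto
    {1, ..., n} with phi(s) = 1 and phi(t) = n, and every other vertex has
    both a lower-numbered and a higher-numbered neighbor."""
    vertices = {v for e in edges for v in e}
    n = len(vertices)
    if sorted(phi[v] for v in vertices) != list(range(1, n + 1)):
        return False
    if phi[s] != 1 or phi[t] != n:
        return False
    if (s, t) not in edges and (t, s) not in edges:
        return False
    neighbors = {v: set() for v in vertices}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    return all(
        any(phi[x] < phi[u] for x in neighbors[u]) and
        any(phi[x] > phi[u] for x in neighbors[u])
        for u in vertices if u not in (s, t)
    )

# A biconnected example: the 4-cycle a-b-c-d with the chord (a, c).
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
assert is_st_numbering(edges, {"a": 1, "b": 2, "c": 3, "d": 4}, "a", "d")
# Here b (numbered 3) has no higher-numbered neighbor, so this fails:
assert not is_st_numbering(edges, {"a": 1, "c": 2, "b": 3, "d": 4}, "a", "d")
```

Biconnectivity is exactly what guarantees that such a numbering exists for every choice of the edge $(s,t)$.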
After the algorithm of Tamassia and Tollis, a large number of algorithms have been proposed to construct poly-line drawings of planar graphs (see, e.g.,~\cite{gm-ppdgar-98,gw-fdpgcp-00,cdgk-dpgca-01,zs-ppd-08,z-ppdgt-10}), offering several tradeoffs between area requirements, number of bends, and angular resolution. Here we briefly discuss an algorithm proposed by Bonichon \emph{et al.}~in~\cite{bsm-wtr-02}, the first one to achieve optimal area, namely $\frac{4(n-1)^2}{9}$. The algorithm consists of two steps. In the first one, a deep study of Schnyder realizers (see~\cite{s-epgg-90} and Sect.~\ref{se:straight-planar} for the definition of Schnyder realizers) leads to the definition of a \emph{weak-stratification} of a realizer. Namely, given a realizer $(T_0,T_1,T_2)$ of a triangulation $G$, a weak-stratification is a layering $L$ of the vertices of $G$ such that $T_0$ (which is rooted at a vertex incident to the outer face of $G$) is upward, $T_1$ and $T_2$ (which are rooted at the other two vertices incident to the outer face of $G$) are downward, and some further conditions are satisfied. Each vertex gets a $y$-coordinate equal to its layer in the weak stratification. In the second step, $x$-coordinates for vertices and bends are computed. The conditions of the weak stratification ensure that a planar drawing can in fact be obtained.
\subsection{Series-Parallel and Outerplanar Graphs} \label{se:poly-sp}
Biedl proved in~\cite{b-sdogspgopg-11} that every series-parallel graph admits a poly-line drawing with $O(n^{1.5})$ area and a poly-line drawing with $O(fn \log n)$ area, where $f$ is the fan-out of the series-parallel graph. In particular, since outerplanar graphs are series-parallel graphs with fan-out two, the latter result implies that outerplanar graphs admit poly-line drawings with $O(n \log n)$ area. Biedl's algorithm constructs a visibility representation $R$ of the input graph $G$ with $O(n^{1.5})$ area; a poly-line drawing $\Gamma$ with asymptotically the same area as $R$ can then be easily obtained from $R$.
\begin{figure}[htb]
\centering{
\begin{tabular} {c c c c}
\mbox{\epsfig{figure=Biedl-SP-1a.eps,scale =0.45,clip=}} \hspace{4mm} &
\mbox{\epsfig{figure=Biedl-SP-1.eps,scale =0.45,clip=}} \hspace{4mm} &
\mbox{\epsfig{figure=Biedl-SP-2a.eps,scale =0.33,clip=}} \hspace{4mm} &
\mbox{\epsfig{figure=Biedl-SP-2.eps,scale =0.33,clip=}} \\
(a) \hspace{4mm} & (b) \hspace{4mm} & (c) \hspace{4mm} & (d) \vspace{3mm}\\
\mbox{\epsfig{figure=Biedl-SP-3a.eps,scale =0.27,clip=}} \hspace{3mm} &
\mbox{\epsfig{figure=Biedl-SP-3.eps,scale =0.27,clip=}} \hspace{3mm} &
\mbox{\epsfig{figure=Biedl-SP-4a.eps,scale =0.27,clip=}} \hspace{3mm} &
\mbox{\epsfig{figure=Biedl-SP-4.eps,scale =0.27,clip=}} \\
(e) \hspace{3mm} & (f) \hspace{3mm} & (g) \hspace{3mm} & (h)
\end{tabular}
}
\caption{Biedl's algorithm for constructing visibility representations of series-parallel graphs. (a)--(b) The base case. (c)--(d) The parallel case. (e)--(h) The series case.}
\label{fig:biedl-sp}
\end{figure}
In order to construct a visibility representation $R$ of the input graph $G$, Biedl relies on a strong inductive hypothesis, namely that a small-area visibility representation $R$ of $G$ can be constructed with the further constraint that the poles $s$ and $t$ of $G$ are placed at the top right corner and at the bottom right corner of the representation, respectively. Figs.~\ref{fig:biedl-sp}(a)--(b) show how this is accomplished in the base case. The parallel case is also simple, as the visibility representations of the components of $G$ are just placed one beside the other (as in Figs.~\ref{fig:biedl-sp}(c)--(d)). The series case is much more involved. Namely, assume w.l.o.g. that $G$ is the series of two components $H_1$ and $H_2$, where $H_1$ has poles $s$ and $x$ and $H_2$ has poles $x$ and $t$, and that $H_2$ has more vertices than $H_1$. If $H_2$ is the parallel composition of a ``small'' number of components, the composition shown in Figs.~\ref{fig:biedl-sp}(e)--(f) is applied, while if $H_2$ is the parallel composition of a ``large'' number of components, the composition shown in Figs.~\ref{fig:biedl-sp}(g)--(h) is applied. The rough idea behind these constructions is that if $H_2$ is the parallel composition of a small number of components, then a vertical unit can be spent for each of them without increasing the height of the drawing by much; on the other hand, if $H_2$ is the parallel composition of a large number of components, then many of these components have few vertices, hence two of them can be placed one above the other without increasing the height of the drawing by much.
The following problems remain open:
\begin{problem}
Close the gap between the $O(n^{1.5})$ upper bound and the $\Omega(n 2^{\sqrt{\log n}})$ lower bound for the area requirements of poly-line drawings of series-parallel graphs.
\end{problem}
\begin{problem}
Close the gap between the $O(n \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of poly-line drawings of outerplanar graphs.
\end{problem}
\subsection{Trees} \label{se:poly-trees}
No known algorithm exploits the possibility of bending the edges of a tree to achieve area bounds better than the corresponding ones shown for straight-line drawings.
\begin{problem}
Close the gap between the $O(n \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of poly-line drawings of trees.
\end{problem}
However, better bounds can be achieved for poly-line drawings satisfying further constraints. Table~\ref{ta:trees-straight-line} summarizes the best known area bounds for various kinds of poly-line drawings of trees.
\begin{table}[!htb]\centering\footnotesize
\linespread{1.2}
\selectfont
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\cline{2-9}
\multicolumn{1}{c|}{} & \emph{Ord. Pres.} & \emph{Upw.} & \emph{Str. Upw.} & \emph{Orth.} & \emph{Upper Bound} & \emph{Refs.} & \emph{Lower Bound} & \emph{Refs.} \\
\cline{2-9}
\hline
\emph{Binary} & & & & & $O(n)$ & \cite{ggt-putdoa-96} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{Binary} & \checkmark & & & & $O(n\log \log n)$ & \cite{gr-aeoppsdot-03} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{Binary} & & \checkmark & & & $O(n)$ & \cite{ggt-putdoa-96} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{Binary} & & & \checkmark & & $O(n\log n)$ & \cite{cdp-noad-92} & $\Omega(n \log n)$ & \cite{cdp-noad-92} \\
\hline
\emph{Binary} & \checkmark & & \checkmark & & $O(n \log n)$ & \cite{ggt-putdoa-96} & $\Omega(n\log n)$ & \cite{cdp-noad-92}\\
\hline
\emph{Binary} & & & & \checkmark & $O(n)$ & \cite{Val81} & $\Omega(n)$ & \emph{trivial}\\
\hline
\emph{Binary} & & \checkmark & & \checkmark & $O(n \log \log n)$ & \cite{ggt-putdoa-96} & $\Omega(n \log \log n)$ & \cite{ggt-putdoa-96}\\
\hline
\emph{Binary} & \checkmark & & & \checkmark & $O(n)$ & \cite{dt-laepg-81} & $\Omega(n)$ & \emph{trivial}\\
\hline
\emph{Binary} & \checkmark & \checkmark & & \checkmark & $O(n \log n)$ & \cite{k-saoudbtt-95} & $\Omega(n \log n)$ & \cite{ggt-putdoa-96}\\
\hline
\emph{Ternary} & & & & \checkmark & $O(n)$ & \cite{Val81} & $\Omega(n)$ & \emph{trivial}\\
\hline
\emph{Ternary} & & \checkmark & & \checkmark & $O(n \log n)$ & \cite{k-saoudbtt-95} & $\Omega(n \log n)$ & \cite{k-saoudbtt-95}\\
\hline
\emph{Ternary} & \checkmark & & & \checkmark & $O(n)$ & \cite{dt-laepg-81} & $\Omega(n)$ & \emph{trivial}\\
\hline
\emph{Ternary} & \checkmark & \checkmark & & \checkmark & $O(n \log n)$ & \cite{k-saoudbtt-95} & $\Omega(n \log n)$ & \cite{ggt-putdoa-96}\\
\hline
\emph{General} & & & & & $O(n \log n)$ & \cite{cdp-noad-92} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{General} & \checkmark & & & & $O(n\log n)$ & \cite{gr-aeoppsdot-03} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{General} & & \checkmark & & & $O(n\log n)$ & \cite{cdp-noad-92} & $\Omega(n)$ & \emph{trivial} \\
\hline
\emph{General} & & & \checkmark & & $O(n\log n)$ & \cite{cdp-noad-92} & $\Omega(n \log n)$ & \cite{cdp-noad-92} \\
\hline
\emph{General} & \checkmark & & \checkmark & & $O(n 4^{\sqrt{2\log n}})$ & \cite{c-nlabdbt-02} & $\Omega(n\log n)$ & \cite{cdp-noad-92}\\
\hline
\end{tabular}
\vspace{2mm}
\caption{\small Summary of the best known area bounds for poly-line drawings of trees. ``Ord. Pres.'', ``Upw.'', ``Str. Upw.'', and ``Orth.'' stand for order-preserving, upward, strictly-upward, and orthogonal, respectively.}
\label{ta:trees-straight-line}
\end{table}
Concerning \emph{poly-line upward drawings}, a linear area bound is known, due to Garg {\em et al.}~\cite{ggt-putdoa-96}, for all trees whose degree is $O(n^{\delta})$, where $\delta$ is \emph{any} constant less than $1$. The algorithm of Garg {\em et al.}~first constructs a layering $\gamma(T)$ of the input tree $T$, in which each node $u$ is assigned a layer smaller than or equal to the layer of the leftmost child of $u$ and smaller than the layer of any other child of $u$; second, the authors show that $\gamma(T)$ can be converted into an upward poly-line drawing whose height is the number of layers and whose width is the maximum \emph{width of a layer}, that is, the number of nodes of the layer plus the number of edges crossing the layer; third, the authors show how to construct a layering of every tree whose degree is $O(n^{\delta})$ so that the number of layers times the maximum width of a layer is $O(n)$. No upper bound better than $O(n \log n)$ (from the results on straight-line drawings, see~\cite{cdp-noad-92} and Sect.~\ref{se:straight-trees}) and no super-linear lower bound is known for trees with unbounded degree.
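A minimal sketch of a layering satisfying the constraint above lets the leftmost child of each node share its parent's layer while every other child is placed one layer further down (a Python illustration; the nested \texttt{(label, children)} tree representation is an assumption, and the actual $O(n)$-area algorithm chooses the layering more carefully, balancing the number of layers against the layer widths):

```python
def layering(tree, layer=0, out=None):
    """Assign layers so that each node's layer is <= the layer of its
    leftmost child and < the layer of every other child.
    A tree is a (label, children) pair, with children a list of trees."""
    if out is None:
        out = {}
    label, children = tree
    out[label] = layer
    for i, child in enumerate(children):
        # the leftmost child may share its parent's layer;
        # every other child goes one layer further down
        layering(child, layer if i == 0 else layer + 1, out)
    return out
```

On a small tree this places the root and its chain of leftmost descendants on layer $0$, with the remaining children spilling onto lower layers.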
\begin{problem}
Close the gap between the $O(n \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of poly-line upward drawings of trees.
\end{problem}
Concerning \emph{poly-line order-preserving strictly-upward drawings}, Garg {\em et al.}~\cite{ggt-putdoa-96} give a simple algorithm achieving $O(n\log n)$ area for bounded-degree trees. The algorithm, whose construction is shown in Fig.~\ref{fig:changood}(a), consists of stacking inductively constructed drawings of the subtrees of the root of the input tree $T$, in such a way that the subtree with the greatest number of nodes is the bottommost in the drawing. The edges connecting the root to its subtrees are then routed beside the subtrees. The $O(n\log n)$ area upper bound is tight. Namely, there exist binary trees requiring $\Omega(n \log n)$ area in any strictly-upward order-preserving drawing~\cite{cdp-noad-92} and binary trees requiring $\Omega(n \log n)$ area in any (even non-strictly) upward order-preserving drawing~\cite{ggt-putdoa-96}. The lower bound tree of Garg {\em et al.}~is shown in Fig.~\ref{fig:changood}(b). As far as we know, no area bounds better than the ones for straight-line drawings have been proved for general trees, hence the following are open:
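The $O(n\log n)$ bound can be made concrete via the width recurrence of this stacking construction: the bottommost (largest) subtree costs no extra width, while each other subtree pays one extra column for the edge routed beside it; since those subtrees have at most half the nodes, the width is logarithmic while the height stays linear. A sketch of the recurrence (Python; the nested \texttt{(label, children)} tree representation is an assumption for illustration):

```python
def size(t):
    """Number of nodes of a (label, children) tree."""
    return 1 + sum(size(c) for c in t[1])

def width(t):
    """Width bound of the stacking construction: the largest subtree is
    placed bottommost at no extra width; every other subtree pays one
    column for the edge routed beside it."""
    label, kids = t
    if not kids:
        return 1
    kids = sorted(kids, key=size, reverse=True)
    best = width(kids[0])               # bottommost, largest subtree
    for k in kids[1:]:
        best = max(best, 1 + width(k))  # one column for the routed edge
    return best
```

On a chain the width is $1$, while on a complete binary tree with $n$ nodes it is $\log_2(n+1)$, matching the logarithmic width in the analysis.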
\begin{problem}
Close the gap between the $O(n \log n)$ upper bound and the $\Omega(n)$ lower bound for the area requirements of poly-line order-preserving drawings of trees.
\end{problem}
\begin{problem}
Close the gap between the $O(n 4^{\sqrt{\log n}})$ upper bound and the $\Omega(n \log n)$ lower bound for the area requirements of poly-line order-preserving strictly-upward drawings of trees.
\end{problem}
\begin{figure}[htb]
\centering{
\begin{tabular} {c c}
\mbox{\epsfig{figure=ChanGood.eps,scale =0.45,clip=}} \hspace{7mm} &
\mbox{\epsfig{figure=ChanGoodLower.eps,scale =0.45,clip=}} \\
(a) \hspace{7mm} & (b)
\end{tabular}
}
\caption{(a) The construction of Garg {\em et al.}~\cite{ggt-putdoa-96} to obtain $O(n\log n)$ area poly-line order-preserving strictly-upward drawings of bounded-degree trees. (b) A tree requiring $\Omega(n \log n)$ area in any upward order-preserving drawing. The triangle represents a complete binary tree with $n/3$ nodes.}
\label{fig:changood}
\end{figure}
Concerning \emph{orthogonal drawings}, Valiant proved in~\cite{Val81} that every $n$-node ternary tree (and every $n$-node binary tree) admits a $\Theta(n)$ area orthogonal drawing. Such a result was strengthened by Dolev and Trickey in~\cite{dt-laepg-81}, who proved that ternary trees (and binary trees) admit $\Theta(n)$ area order-preserving orthogonal drawings. The technique of Valiant is based on the use of separator edges (see~\cite{Val81} and Sect.~\ref{se:straight-trees}). The result of Dolev and Trickey is a consequence of a more general result on the construction of linear-area embeddings of degree-$4$ outerplanar graphs.
Concerning \emph{orthogonal upward drawings}, an $O(n \log \log n)$ area bound for binary trees was proved by Garg \emph{et al.} in~\cite{ggt-putdoa-96}. The algorithm has several ingredients. (1) A simple algorithm is shown to construct orthogonal upward drawings in $O(n \log n)$ area; such drawings exhibit the further property that no vertical line through a node of degree at most two intersects the drawing below such a node. (2) The \emph{separator tree} $S$ of the input tree $T$ is constructed; such a tree represents the recursive decomposition of a tree via separator edges; namely, $S$ is a binary tree that is recursively constructed as follows: The root $r$ of $S$ is associated with tree $T$ and with a separator edge of $T$, that splits $T$ into subtrees $T_1$ and $T_2$; the subtrees of $r$ are the separator trees associated with $T_1$ and $T_2$; observe that the leaves of $S$ are the nodes of $T$. (3) A \emph{truncated separator tree} $S'$ is obtained from $S$ by removing all the nodes of $S$ associated with subtrees of $T$ with less than $\log n$ nodes. (4) Drawings of the subtrees of $T$ associated with the leaves of $S'$ are constructed via the $O(n \log n)$ area algorithm. (5) Such drawings are stacked one on top of the other and the separator edges connecting them are routed (see Fig.~\ref{fig:changoodorthogonal}(a)). The authors prove that the constructed drawings have $O(\frac{n \log \log n}{\log n})$ height and $O(\log n)$ width, thus obtaining the claimed upper bound. The same authors also proved that the $O(n \log \log n)$ bound is tight, by exhibiting the class of trees shown in Fig.~\ref{fig:changoodorthogonal}(b). In~\cite{k-saoudbtt-95} Kim showed that $\Theta(n \log n)$ area is an optimal bound for upward orthogonal drawings of ternary trees. The upper bound comes from a stronger result on orthogonal order-preserving upward drawings cited below, while the lower bound comes from the tree shown in Fig.~\ref{fig:changoodorthogonal}(c).
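The separator edges underlying steps (2)--(4) can be found with a single subtree-size computation: pick the edge whose removal splits the tree most evenly. A sketch (Python; the \texttt{(label, children)} representation and the node labels are assumptions for illustration):

```python
def subtree_sizes(t, out=None):
    """t is a (label, children) tree; fills out with label -> subtree size."""
    if out is None:
        out = {}
    label, kids = t
    out[label] = 1
    for k in kids:
        subtree_sizes(k, out)
        out[label] += out[k[0]]
    return out

def separator_edge(t):
    """Return (cost, parent, child) for the edge whose removal splits the
    tree most evenly; cost is the size of the larger of the two parts.
    In a binary tree the cost is at most 2n/3."""
    s = subtree_sizes(t)
    n = s[t[0]]
    best = None

    def walk(node):
        nonlocal best
        label, kids = node
        for k in kids:
            cost = max(s[k[0]], n - s[k[0]])
            if best is None or cost < best[0]:
                best = (cost, label, k[0])
            walk(k)

    walk(t)
    return best
```

Recursing on the two parts produced by the chosen edge yields exactly the separator tree $S$ of step (2).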
\begin{figure}[htb]
\centering{
\begin{tabular} {c c c}
\mbox{\epsfig{figure=ChanGoodOrthogonal.eps,scale =0.45,clip=}} \hspace{7mm} &
\mbox{\epsfig{figure=ChanGoodLower2.eps,scale =0.45,clip=}} \hspace{7mm} &
\mbox{\epsfig{figure=KimLower.eps,scale =0.45,clip=}} \\
(a) \hspace{7mm} & (b) \hspace{7mm} & (c)
\end{tabular}
}
\caption{(a) The construction of Garg {\em et al.}~\cite{ggt-putdoa-96} to obtain $O(n\log \log n)$ area orthogonal upward drawings of binary trees. Rectangles represent drawings of small subtrees constructed via an $O(n \log n)$ area algorithm. (b) A binary tree requiring $\Omega(n \log \log n)$ area in any upward orthogonal drawing. The tree is composed of a chain with $n/3$ nodes, a complete binary tree with $n/3$ nodes (the large triangle in the figure), and $\frac{n}{3\sqrt{\log n}}$ subtrees (the small triangles in the figure) with $\sqrt{\log n}$ nodes rooted at the child of each $\sqrt{\log n}$-th node of the chain. (c) A ternary tree requiring $\Omega(n \log n)$ area in any upward orthogonal drawing. The tree is composed of a chain with $n/4$ nodes, two other children for each node of the chain, and a complete binary tree with $n/4$ nodes (the large triangle in the figure).}
\label{fig:changoodorthogonal}
\end{figure}
Concerning \emph{orthogonal order-preserving upward drawings}, $\Theta(n \log n)$ is an optimal bound both for binary and ternary trees. In fact, Kim~\cite{k-saoudbtt-95} proved the upper bound for ternary trees (such a bound can be immediately extended to binary trees). The simple construction of Kim is presented in Fig.~\ref{fig:kim}. The lower bound directly comes from the results of Garg \emph{et al.}~on order-preserving upward (non-orthogonal) drawings~\cite{ggt-putdoa-96}.
\begin{figure}[htb]
\centering{
\begin{tabular} {c c c}
\mbox{\epsfig{figure=Kim1.eps,scale =0.3,clip=}} \hspace{7mm} &
\mbox{\epsfig{figure=Kim2.eps,scale =0.3,clip=}} \hspace{7mm} &
\mbox{\epsfig{figure=Kim3.eps,scale =0.3,clip=}} \\
(a) \hspace{7mm} & (b) \hspace{7mm} & (c)
\end{tabular}
}
\caption{An algorithm to construct $O(n \log n)$ area orthogonal order-preserving upward drawings of ternary trees. The figures illustrate the cases in which: (a) the right subtree has the greatest number of nodes; (b) the middle subtree has the greatest number of nodes; and (c) the left subtree has the greatest number of nodes.}
\label{fig:kim}
\end{figure}
\section{Upward Drawings}\label{se:upward}
In this section, we discuss algorithms and bounds for constructing small-area planar straight-line/poly-line upward drawings of upward planar directed acyclic graphs. Table~\ref{ta:upward} summarizes the best known area bounds for straight-line upward planar drawings of upward planar DAGs and their subclasses.
\begin{table}[!htb]\footnotesize
\centering
\linespread{1.2}
\selectfont
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \emph{Upper Bound} & \emph{Refs.} & \emph{Lower Bound} & \emph{Refs.} \\
\hline
{\em General Upward Planar DAGs} & $O(c^n)$ & \cite{GargTam93} & $\Omega(b^n)$ & \cite{BattistaTT92}\\
\hline
{\em Fixed-Embedding Series-Parallel DAGs} & $O(c^n)$ & \cite{GargTam93} & $\Omega(b^n)$ & \cite{BertolazziCBTT94}\\
\hline
{\em Series-Parallel DAGs} & $O(n^2)$ & \cite{BertolazziCBTT94} & $\Omega(n^2)$ & \emph{trivial}\\
\hline
{\em Bipartite DAGs} & $O(c^n)$ & \cite{GargTam93} & $\Omega(b^n)$ & \cite{f-mapudt-08}\\
\hline
{\em Fixed-Embedding Directed Trees} & $O(c^n)$ & \cite{GargTam93} & $\Omega(b^n)$ & \cite{f-mapudt-08}\\
\hline
{\em Directed Trees} & $O(n \log n)$ & \cite{f-mapudt-08} & $\Omega(n \log n)$ & \cite{f-mapudt-08}\\
\hline
\end{tabular}
\vspace{2mm}
\caption{\small A table summarizing the area requirements for straight-line upward planar drawings of upward planar DAGs; $b$ and $c$ denote constants greater than $1$.}
\label{ta:upward}
\end{table}
It is known that testing the upward planarity of a DAG is an NP-complete problem if the DAG has a variable embedding~\cite{GargT01}, while it is polynomial-time solvable if the embedding of the DAG is fixed~\cite{BertolazziBLM94}, if the underlying graph is an outerplanar graph~\cite{Papakostas94}, if the DAG has a single source~\cite{HuttonL96}, or if the DAG is bipartite~\cite{BattistaLR90}. Di Battista and Tamassia~\cite{BattistaT88} showed that a DAG is upward planar if and only if it is a subgraph of an st-planar DAG. Some families of DAGs, such as series-parallel DAGs and directed trees, are always upward planar.
Di Battista and Tamassia proved in~\cite{BattistaT88} that every upward planar DAG admits an upward straight-line drawing. Such a result is achieved by means of an algorithm similar to F\'ary's algorithm for constructing planar straight-line drawings of undirected planar graphs (see Sect.~\ref{se:straight-planar}). However, while planar straight-line drawings of undirected planar graphs can be constructed in polynomial area, Di Battista \emph{et al.}~proved in~\cite{BattistaTT92} that there exist upward planar DAGs that require exponential area in any planar straight-line upward drawing. Such a result is achieved by considering the class $G_n$ of DAGs whose inductive construction is shown in Fig.~\ref{fig:upward}(a)--(b) and by using some geometric considerations to prove that the area of the smallest region containing an upward planar straight-line drawing of $G_{n}$ is a constant number of times larger than the area of a region containing an upward planar straight-line drawing of $G_{n-1}$. The techniques introduced by Di Battista \emph{et al.}~in~\cite{BattistaTT92} to prove the exponential lower bound for the area requirements of upward planar straight-line drawings of upward planar DAGs have later been strengthened by Bertolazzi \emph{et al.}~in~\cite{BertolazziCBTT94} and by Frati in~\cite{f-mapudt-08} to prove, respectively, that there exist series-parallel DAGs with fixed embedding (see Fig.~\ref{fig:upward}(c)) and there exist directed trees with fixed embedding (see Fig.~\ref{fig:upward}(d)) requiring exponential area in any upward planar straight-line drawing. Similar lower bound techniques have also been used to deal with straight-line drawings of clustered graphs (see Sect.~\ref{se:clustered}).
\begin{figure}[htb]
\centering
\begin{tabular}{c c c c}
\mbox{\epsfig{figure=upward-lowerbound1.eps,scale=0.55,clip=}} \hspace{2mm} &
\mbox{\epsfig{figure=upward-lowerbound2.eps,scale=0.55,clip=}} \hspace{2mm} &
\mbox{\epsfig{figure=upward-lowerbound3.eps,scale=0.55,clip=}} \hspace{2mm} &
\mbox{\epsfig{figure=upward-lowerbound4.eps,scale=0.55,clip=}}\\
(a) \hspace{2mm} & (b) \hspace{2mm} & (c) \hspace{2mm} & (d) \\
\end{tabular}
\caption{(a)--(b) Inductive construction of a class $G_n$ of upward planar DAGs requiring exponential area in any planar straight-line upward drawing. (c) Inductive construction of a class of series-parallel DAGs requiring exponential area in any planar straight-line upward drawing respecting a fixed embedding. (d) A class of directed trees requiring exponential area in any planar straight-line upward drawing respecting a fixed embedding.}
\label{fig:upward}
\end{figure}
On the positive side, area-efficient algorithms exist for constructing upward planar straight-line drawings for restricted classes of upward planar DAGs. Namely, Bertolazzi \emph{et al.}~in~\cite{BertolazziCBTT94} have shown how to construct upward planar straight-line drawings of series-parallel DAGs in optimal $\Theta(n^2)$ area, and Frati~\cite{f-mapudt-08} has shown how to construct upward planar straight-line drawings of directed trees in optimal $\Theta(n \log n)$ area. Both algorithms are based on the inductive construction of upward planar straight-line drawings satisfying some additional geometric constraints. We remark that for upward planar DAGs whose underlying graph is a series-parallel graph neither an exponential lower bound nor a polynomial upper bound is known for the area requirements of straight-line upward planar drawings. Observe that testing upward planarity for this family of graphs can be done in polynomial time~\cite{dgl-usupt-05}.
\begin{problem}
What are the area requirements of straight-line upward planar drawings of upward planar DAGs whose underlying graph is a series-parallel graph?
\end{problem}
Algorithms have been provided to construct upward planar poly-line drawings of upward planar DAGs. An optimal $\Theta(n^2)$ area bound for such drawings was first established by Di Battista and Tamassia in~\cite{BattistaT88}. Their algorithm consists of first constructing an upward visibility representation of the given upward planar DAG and then turning such a representation into an upward poly-line drawing. Such a technique has been discussed in Sect.~\ref{se:poly-line}.
\section{Convex Drawings}\label{se:convex}
In this section, we discuss algorithms and bounds for constructing small-area convex and strictly-convex drawings of planar graphs. Table~\ref{ta:convex} summarizes the best known area bounds for convex and strictly-convex drawings of planar graphs.
\begin{table}[!htb]\footnotesize
\centering
\linespread{1.2}
\selectfont
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \emph{Upper Bound} & \emph{Refs.} & \emph{Lower Bound} & \emph{Refs.} \\
\hline
{\em Convex} & $n^2+O(n)$ & \cite{ChrobakK97,st-cdpg-92,BattistaTV99,BonichonFM07} & $\frac{4n^2}{9} - O(n)$ & \cite{Val81,fpp-hdpgg-90,FratiP07,mnra-madp3t-10}\\
\hline
{\em Strictly-Convex} & $O(n^4)$ & \cite{br-scdpg-06} & $\Omega(n^3)$ & \cite{a-lbvscb-63,r-baclp-88,BaranyP92,BaranyT04} \\
\hline
\end{tabular}
\vspace{2mm}
\caption{\small A table summarizing the area requirements for convex and strictly-convex drawings of triconnected plane graphs.}
\label{ta:convex}
\end{table}
Not every planar graph admits a convex drawing. Tutte~\cite{t-crg-60,t-hdg-63} proved that every triconnected planar graph $G$ admits a strictly-convex drawing in which its outer face is drawn as an arbitrary strictly-convex polygon $P$. His algorithm consists of first drawing the outer face of $G$ as $P$ and then placing each vertex at the barycenter of the positions of its adjacent vertices. This results in a set of linear equations that always admits a unique solution.
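Tutte's barycentric method is straightforward to sketch: fix the outer face on a strictly-convex polygon and compute, for every interior vertex, the position at the barycenter of its neighbors. The following illustration (Python; the example graph in the usage note is an assumption) solves the linear system by plain iteration rather than by a direct solver, which suffices for small instances:

```python
def tutte_drawing(adj, outer_pos, iters=200):
    """adj: vertex -> list of neighbors; outer_pos: fixed positions of the
    outer-face vertices (a strictly-convex polygon). Every interior vertex
    is repeatedly moved to the barycenter of its neighbors; for a
    triconnected planar graph the fixed point is Tutte's convex drawing."""
    pos = dict(outer_pos)
    interior = [v for v in adj if v not in outer_pos]
    for v in interior:
        pos[v] = (0.0, 0.0)
    for _ in range(iters):
        for v in interior:
            xs = [pos[u][0] for u in adj[v]]
            ys = [pos[u][1] for u in adj[v]]
            pos[v] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return pos
```

For instance, with $K_4$ and its outer triangle fixed, the single interior vertex lands at the triangle's centroid, and all faces of the resulting drawing are strictly convex.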
Characterizations of the plane graphs admitting convex drawings were given by Tutte in~\cite{t-crg-60,t-hdg-63}, by Thomassen in~\cite{Thomassen80,t-prg-84}, by Chiba, Yamanouchi, and Nishizeki in~\cite{cyn-lacdp-84}, by Nishizeki and Chiba in~\cite{nc-pgta-88}, and by Di Battista, Tamassia, and Vismara in~\cite{BattistaTV01}. Roughly speaking, the plane graphs admitting convex drawings are biconnected, their separation pairs are composed of vertices both incident to the outer face, and distinct separation pairs do not ``nest''. Chiba, Yamanouchi, and Nishizeki presented in~\cite{cyn-lacdp-84} a linear-time algorithm for testing whether a graph admits a convex drawing and for producing a convex drawing if the graph allows one. The area requirements of convex and strictly-convex grid drawings have been widely studied, especially for triconnected plane graphs.
Convex grid drawings of triconnected plane graphs can be realized on a quadratic-size grid. This was first shown by Kant in~\cite{Kant96}. In fact, Kant proved that such drawings can always be realized on a $(2n-4)\times(n-2)$ grid. The result is achieved by defining a stronger notion of canonical ordering of a plane graph (see Sect.~\ref{se:straight-planar}). Such a strengthened canonical ordering makes it possible to construct every triconnected plane graph $G$ starting from a cycle delimiting an internal face of $G$ and repeatedly adding to the previously constructed biconnected graph $G_{k}$ a vertex or a path in the outer face of $G_{k}$, so that the newly formed graph $G_{k+1}$ is also biconnected (see Fig.~\ref{fig:kant}). Observe that this generalization of the canonical ordering makes it possible to deal with plane graphs containing non-triangular faces. Similarly to de Fraysseix \emph{et al.}'s algorithm~\cite{fpp-hdpgg-90}, Kant's algorithm exploits a canonical ordering of $G$ to incrementally construct a convex drawing of $G$ in which the outer face of the currently considered graph $G_k$ is composed of segments whose slopes are $-45^{\degree}$, $0^{\degree}$, or $45^{\degree}$.
\begin{figure}[htb]
\centering
\mbox{\epsfig{figure=KantCanonical.eps,scale=0.35,clip=}}
\caption{An illustration of the canonical ordering of a triconnected plane graph.}
\label{fig:kant}
\end{figure}
The bound of Kant was later improved to $(n-2)\times(n-2)$ by Chrobak and Kant~\cite{ChrobakK97}, and independently by Schnyder and Trotter~\cite{st-cdpg-92}. The result of Chrobak and Kant again relies on a canonical ordering. On the other hand, the result of Schnyder and Trotter relies on a generalization of the Schnyder realizers (see Sect.~\ref{se:straight-planar}) that deals with triconnected plane graphs. Such an extension was independently shown by Di~Battista, Tamassia, and Vismara~\cite{BattistaTV99}, who proved that every triconnected plane graph has a convex drawing on an $(f-2)\times(f-2)$ grid, where $f$ is the number of faces of the graph. The best bound is currently, as far as we know, the $(n-2-\Delta)\times(n-2-\Delta)$ bound achieved by Bonichon, Felsner, and Mosbah in~\cite{BonichonFM07}. The bound is again achieved using Schnyder realizers. The parameter $\Delta$ depends on the Schnyder realizers and can vary between $0$ and $\frac{n}{2}-2$. The following remains open:
\begin{problem}
Close the gap between the $(n-2-\Delta)\times(n-2-\Delta)$ upper bound and the $\frac{4n^2}{9}-O(n)$ lower bound for the area requirements of convex drawings of triconnected plane graphs.
\end{problem}
Strictly-convex drawings of triconnected plane graphs might require $\Omega(n^3)$ area. In fact, an $n$-vertex cycle needs $\Omega(n^3)$ area in any grid realization (see, e.g.,~\cite{a-lbvscb-63,BaranyP92,BaranyT04}). The current best lower bound for the area requirements of a strictly-convex polygon drawn on the grid, proved by Rabinowitz in~\cite{r-baclp-88}, is $\frac{n^3}{8\pi^2}$. The first polynomial upper bound for strictly-convex drawings of triconnected plane graphs was proved by Chrobak, Goodrich, and Tamassia in~\cite{ChrobakGT96}. The authors showed that every triconnected plane graph admits a strictly-convex drawing in an $O(n^3)\times O(n^3)$ grid. Their idea consists of first constructing a (non-strictly-) convex drawing of the input graph, and of then perturbing the positions of the vertices in order to achieve strict convexity. A more elaborate technique relying on the same idea allowed Rote to achieve an $O(n^{7/3})\times O(n^{7/3})$ area upper bound in~\cite{Rote05}, which was further improved by B\'ar\'any and Rote to $O(n^2)\times O(n^2)$ and to $O(n)\times O(n^3)$ in~\cite{br-scdpg-06}. The latter are, as far as we know, the best known upper bounds. One of the main differences between Chrobak \emph{et al.}'s algorithm and those of B\'ar\'any and Rote is that the former constructs the intermediate non-strictly-convex drawing by making use of a canonical ordering of the graph, while the latter make use of the Schnyder realizers. The following is, in our opinion, a very nice open problem:
\begin{problem}
Close the gap between the $O(n^4)$ upper bound and the $\Omega(n^3)$ lower bound for the area requirements of strictly-convex drawings of triconnected plane graphs.
\end{problem}
\section{Proximity Drawings}\label{se:proximity}
In this section, we discuss algorithms and bounds for constructing small-area proximity drawings of planar graphs.
Characterizing the graphs that admit a proximity drawing, for a certain definition of proximity, is a difficult problem. For example, despite several research efforts (see, e.g.,~\cite{d-rdt-90,ll-pdog-96,ds-gtcidr-96}), characterizing the graphs that admit a \emph{realization} (a term that often replaces \emph{drawing} in the context of proximity graphs) as Delaunay triangulations is still an intriguing open problem. Dillencourt showed that every maximal outerplanar graph can be realized as a Delaunay triangulation~\cite{d-rdt-90} and provided examples of small triangulations that cannot. The decision version of several realizability problems (that is, given a graph $G$ and a definition of proximity, can $G$ be realized as a proximity graph?) is $\mathcal{NP}$-hard. For example, Eades and Whitesides proved that deciding whether a tree can be realized as a minimum spanning tree is an $\mathcal{NP}$-hard problem~\cite{EadesW96}, and that deciding whether a graph can be realized as a nearest neighbor graph is an $\mathcal{NP}$-hard problem~\cite{ew-lerpnng-96}, as well. Both proofs rely on a mechanism for proving the hardness of graph drawing problems, called the \emph{logic engine}, which is interesting in itself. On the other hand, for several definitions of proximity graphs (such as Gabriel graphs and relative neighborhood graphs), the realizability problem is polynomial-time solvable for trees, as shown by Bose, Lenhart, and Liotta~\cite{bll-cpt-96}; further, Lubiw and Sleumer proved that maximal outerplanar graphs can be realized as relative neighborhood graphs and Gabriel graphs~\cite{ls-mogrng-93}, a result later extended by Lenhart and Liotta to all biconnected outerplanar graphs~\cite{ll-pdog-96}. For more results about proximity drawings, see~\cite{gll-pds-94,l-cpdg-95,gd-handbook}.
Most of the known algorithms to construct proximity drawings produce representations whose size increases exponentially with the number of vertices (see, e.g.,~\cite{ls-mogrng-93,bll-cpt-96,ll-pdog-96,dlw-swp-06}). This seems to be unavoidable for most kinds of proximity drawings, although few exponential area lower bounds are known. Liotta \emph{et al.}~\cite{ltt-argd-97} showed a class of graphs (whose inductive construction is shown in Fig.~\ref{fig:gabriel}) requiring exponential area in any Gabriel drawing, in any weak Gabriel drawing, and in any $\beta$-drawing.
\begin{figure}[htb]
\centering
\begin{tabular}{c c}
\mbox{\epsfig{figure=Gabrielgraphs.eps,scale=0.55,clip=}} \hspace{5mm} &
\mbox{\epsfig{figure=Gabrielgraphs2.eps,scale=0.55,clip=}}
\end{tabular}
\caption{Inductive construction of a class $G_n$ of graphs requiring exponential area in any Gabriel drawing, in any weak Gabriel drawing, and in any $\beta$-drawing.}
\label{fig:gabriel}
\end{figure}
Their proof is based on the observation that the circles whose diameters are the segments representing the edges incident to the outer face of $G_n$ cannot contain any point in their interior. Consequently, the vertices of $G_{n-1}$ may only be placed in a region whose area is a constant factor smaller than the area of the drawing of $G_{n}$. On the other hand, Penna and Vocca~\cite{pv-pdpav-04} showed algorithms to construct polynomial-area weak Gabriel drawings and weak $\beta$-drawings of binary and ternary trees.
Particular attention has been devoted to the area requirements of Euclidean minimum spanning trees. In their seminal paper on Euclidean minimum spanning trees, Monma and Suri~\cite{MonmaS92} proved that any tree of maximum degree $5$ admits a planar embedding as a Euclidean minimum spanning tree. Their algorithm, whose inductive construction is shown in Fig.~\ref{fig:monmasuri}, consists of placing the neighbors $r_i$ of the root $r$ of the tree on a circle centered at $r$, of placing the neighbors of $r_i$ on a much smaller circle centered at $r_i$, and so on. Monma and Suri~\cite{MonmaS92} proved that the area of the realizations constructed by their algorithm is $2^{\Omega(n^2)}$ and conjectured that exponential area is sometimes required to realize degree-$5$ trees as Euclidean minimum spanning trees.
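The exponential area of this construction stems from a geometric decay: each level of the tree is drawn on circles a fixed factor smaller than those of the previous level. A sketch (Python; the shrinking factor \texttt{0.05} is an illustrative assumption, not the value required to guarantee the minimum-spanning-tree property):

```python
import math

def place(tree, center=(0.0, 0.0), radius=1.0, shrink=0.05, pos=None):
    """Mimic the Monma-Suri construction: put the root at `center`, spread
    its children evenly on a circle of the given radius, and recurse with
    a much smaller radius."""
    if pos is None:
        pos = {}
    label, children = tree
    pos[label] = center
    cx, cy = center
    for i, child in enumerate(children):
        angle = 2 * math.pi * i / max(len(children), 1)
        place(child,
              (cx + radius * math.cos(angle), cy + radius * math.sin(angle)),
              radius * shrink, shrink, pos)
    return pos
```

Along any root-to-leaf path the edge lengths form a geometric series, so the ratio between the longest and shortest edge, and hence the required grid area, grows exponentially with the depth.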
\begin{figure}[htb]
\centering
\mbox{\epsfig{figure=MonmaSuri.eps,scale=0.35,clip=}}
\caption{An illustration of the algorithm of Monma and Suri to construct realizations of degree-$5$ trees as Euclidean minimum spanning trees.}
\label{fig:monmasuri}
\end{figure}
Frati and Kaufmann~\cite{fk-pabmemstrt-08} showed how to construct polynomial-area realizations of degree-$4$ trees as Euclidean minimum spanning trees. Their technique consists of using a decomposition of the input tree $T$ (similar to the ones presented in Sects.~\ref{se:straight-outerplanar} and~\ref{se:straight-trees}) in which a path $P$ is selected such that every subtree of $P$ has at most $n/2$ nodes. Euclidean minimum spanning tree realizations of such subtrees are then inductively constructed and placed together with a drawing of $P$ to obtain a drawing of $T$. Suitable angles and lengths for the edges in $P$ have to be chosen to ensure that the resulting drawing is a Euclidean minimum spanning tree realization of $T$. The sketched geometric construction is shown in Fig.~\ref{fig:mstupper}.
\begin{figure}[htb]
\centering
\mbox{\epsfig{figure=mstupper.eps,scale=0.4,clip=}}
\caption{An illustration of the algorithm of Frati and Kaufmann to construct polynomial-area realizations of degree-$4$ trees as Euclidean minimum spanning tree realizations.}
\label{fig:mstupper}
\end{figure}
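Whether a given drawing of a tree actually realizes it as a Euclidean minimum spanning tree can be verified directly by recomputing a minimum spanning tree of the vertex positions. The sketch below (our own illustration, not from the surveyed papers) runs Prim's algorithm on squared distances — which yields the same tree as Euclidean distances — and assumes the positions are generic enough that the EMST is unique:

```python
def prim_mst(points):
    """Edge set of a Euclidean minimum spanning tree of the points
    (indexed 0..n-1), computed by Prim's algorithm in O(n^2) time.
    Squared distances are used; the MST is unchanged since squaring
    is monotone on nonnegative reals."""
    n = len(points)
    d2 = lambda i, j: sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
    in_tree, best, parent = [False] * n, [float("inf")] * n, [None] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] is not None:
            edges.append(tuple(sorted((parent[u], u))))
        for v in range(n):
            if not in_tree[v] and d2(u, v) < best[v]:
                best[v], parent[v] = d2(u, v), u
    return sorted(edges)

def realizes_emst(points, tree_edges):
    """True iff `points` realizes `tree_edges` as a Euclidean minimum
    spanning tree (assuming generic positions, so the EMST is unique)."""
    return sorted(tuple(sorted(e)) for e in tree_edges) == prim_mst(points)
```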
Very recently, Angelini~\emph{et al.}~proved in~\cite{abcfks-aremst-11} that in fact there exist degree-$5$ trees requiring exponential area in any realization as a Euclidean minimum spanning tree. The tree $T^*$ exhibited by Angelini~\emph{et al.}, which is shown in Fig.~\ref{fig:treelowerbound}, consists of a degree-$5$ complete tree $T_c$ with a constant number of vertices and of a set of degree-$5$ caterpillars, each one attached to a distinct leaf of $T_c$. The complete tree $T_c$ forces the angles incident to an end-vertex of the backbone of at least one of the caterpillars to be very small, that is, between $60^{\degree}$ and $61^{\degree}$. Using this as a starting point, Angelini~\emph{et al.}~prove that each angle incident to a vertex of the caterpillar is either very small, that is, between $60^{\degree}$ and $61^{\degree}$, or is very large, that is, between $89.5^{\degree}$ and $90.5^{\degree}$. As a consequence, the lengths of the edges of the backbone of the caterpillar decrease exponentially along the caterpillar, thus obtaining the area bound. There is still some distance between the best known lower and upper bounds, hence the following is open:
\begin{figure}[htb]
\centering{
\mbox{\epsfig{figure=TreeLowerBound.eps,width=0.5\textwidth,height=5cm,clip=}}}
\caption{A tree $T^*$ requiring $2^{\Omega(n)}$ area in any Euclidean minimum spanning tree realization.}
\label{fig:treelowerbound}
\end{figure}
\begin{problem}
Close the gap between the $2^{O(n^2)}$ upper bound and the $2^{\Omega(n)}$ lower bound for the area requirements of Euclidean minimum spanning tree realizations.
\end{problem}
Greedy drawings are a kind of proximity drawings that have recently attracted a lot of attention, due to their application to network routing. Namely, consider a network in which each node $a$ that has to send a packet to some node $b$ forwards the packet to any node $c$ that is closer to $b$ than $a$ itself. If the position of any node $u$ is not its real geographic location, but rather the pair of coordinates of $u$ in a drawing $\Gamma$ of the network, it is easy to see that this routing protocol never gets stuck if and only if $\Gamma$ is a greedy drawing. Greedy drawings were introduced by Rao \emph{et al.}~in~\cite{rpss-grwli-03}. Much attention has been devoted to a conjecture of~\cite{pr-crgr-05} stating that every triconnected planar graph has a greedy drawing. Dhandapani verified the conjecture for triangulations in~\cite{d-gdt-10}, and later Leighton and Moitra~\cite{lm-srgems-44} and independently Angelini~\emph{et al.}~\cite{afg-acgdt-10} completely settled the conjecture in the positive. The approach of Leighton and Moitra (the one of Angelini~\emph{et al.}~is strikingly similar) consists of finding a certain subgraph of the input triconnected planar graph, called a \emph{cactus graph}, and of constructing a drawing of the cactus by induction. Greedy drawings have been proved to exist for every graph if the coordinates are chosen in the hyperbolic plane~\cite{k-gruhs-07}. Research efforts have also been devoted to constructing greedy drawings in small area. More precisely, because of the routing applications, attention has been devoted to the possibility of encoding the coordinates of a greedy drawing with a small number of bits. When this is possible, the drawing is called \emph{succinct}. 
Eppstein and Goodrich~\cite{eg-sgghp-09} and Goodrich and Strash~\cite{gs-sggrep-09} showed how to modify the algorithm of Kleinberg~\cite{k-gruhs-07} and the algorithm of Leighton and Moitra~\cite{lm-srgems-44}, respectively, in order to construct drawings in which the vertex coordinates are represented by a logarithmic number of bits. On the other hand, Angelini~\emph{et al.}~\cite{adf-sgddae-10} proved that there exist trees requiring exponential area in any greedy drawing (or equivalently requiring a polynomial number of bits to represent their Cartesian coordinates in the Euclidean plane). The following is however open:
\begin{problem}
Is it possible to construct greedy drawings of triconnected planar graphs in the Euclidean plane in polynomial area?
\end{problem}
Partially positive results on the mentioned open problem were achieved by He and Zhang, who proved in~\cite{hz-scgd3pg-11} that succinct convex \emph{weakly greedy} drawings exist for all triconnected planar graphs, where weakly greedy means that the distance between two vertices $u$ and $v$ in the drawing is not the usual Euclidean distance $D(u,v)$ but a function $H(u,v)$ such that $D(u,v) \leq H(u,v) \leq 2 \sqrt 2 D(u,v)$. On the other hand, Cao \emph{et al.}~proved in~\cite{csz-sggrep-09} that there exist triconnected planar graphs requiring exponential area in any \emph{convex} greedy drawing in the Euclidean plane.
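The greedy condition discussed above can be checked by brute force: for every ordered pair of distinct vertices $(s,t)$, some neighbor of $s$ must be strictly closer to $t$ than $s$ itself. A minimal sketch (our own illustration):

```python
def is_greedy_drawing(pos, adj):
    """Check whether a drawing is greedy in the Euclidean plane.

    pos: dict vertex -> (x, y); adj: dict vertex -> list of neighbors.
    Greedy forwarding from s towards t never gets stuck iff, for every
    ordered pair (s, t) with s != t, some neighbor of s is strictly
    closer to t than s is.
    """
    d2 = lambda a, b: (pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2
    for s in pos:
        for t in pos:
            if s == t:
                continue
            if not any(d2(c, t) < d2(s, t) for c in adj[s]):
                return False  # routing from s towards t is stuck
    return True
```

A path drawn with its vertices in order along a line is greedy, whereas the same path drawn with the middle vertex placed beyond an endpoint is not.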
\section{Clustered Graph Drawings}\label{se:clustered}
In this section, we discuss algorithms and bounds for constructing small-area $c$-planar drawings of clustered graphs. Table~\ref{ta:clustered} summarizes the best known area bounds for $c$-planar straight-line drawings of clustered graphs.
\begin{table}[!htb]\footnotesize
\centering
\linespread{1.2}
\selectfont
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \emph{Upper Bound} & \emph{Refs.} & \emph{Lower Bound} & \emph{Refs.}\\
\hline
{\em Clustered Graphs} & $O(c^n)$ & \cite{EadesFLN06,afk-srdcg-11} & $\Omega(b^n)$ & \cite{cocoon/FengCE95}\\
\hline
{\em $c$-Connected Trees} & $O(n^2)$ & \cite{BattistaDF07} & $\Omega(n^2)$ & \cite{BattistaDF07} \\
\hline
{\em Non-$c$-Connected Trees} & $O(c^n)$ & \cite{EadesFLN06,afk-srdcg-11} & $\Omega(b^n)$ & \cite{BattistaDF07} \\
\hline
\end{tabular}
\vspace{2mm}
\caption{\small A table summarizing the area requirements for $c$-planar straight-line drawings of clustered graphs in which clusters are convex regions; $b$ and $c$ denote constants greater than $1$.}
\label{ta:clustered}
\end{table}
Given a clustered graph, testing whether it admits a $c$-planar drawing is a problem of unknown complexity, and is perhaps the most studied problem in the Graph Drawing community during the last ten years~\cite{FengCE95,feng97algorithms,Dahlhaus98,GutwengerJLMPW02,GoodrichLS05,CorteseBPP05,cdpp-cccc-05,CornelsenW06,JelinkovaKKPSV07,df-ectefcgsf-07,cdfpp-cccg-j-08,JelinekSTV08,JelinekJKL08,afp-scgcp-09}.
Suppose that a $c$-planar clustered graph $C$ is given together with a $c$-planar embedding. How can the graph be drawn? Such a problem has been intensively studied in the literature and a number of papers have been presented for constructing $c$-planar drawings of $c$-planar clustered graphs within many drawing conventions.
Eades \emph{et al.}~show in~\cite{EadesFLN06} an algorithm for constructing $c$-planar straight-line drawings of $c$-planar clustered graphs in which each cluster is drawn as a convex region. Such a result is achieved by first studying how to construct planar straight-line drawings of hierarchical graphs. A \emph{hierarchical graph} is a graph such that each vertex $v$ is assigned a number $y(v)$, called the \emph{layer} of $v$; a drawing of a hierarchical graph has to place each vertex $v$ on the horizontal line $y=y(v)$. Eades \emph{et al.}~show an inductive algorithm to construct a planar straight-line drawing of any hierarchical-planar graph. Second, Eades \emph{et al.}~show how to turn a $c$-planar clustered graph $C$ into a hierarchical graph $H$ such that, for each cluster $\mu$ in $C$, all the vertices in $\mu$ appear in consecutive layers of the hierarchy. This implies that, once a planar straight-line drawing of $H$ has been constructed, as in Fig.~\ref{fig:clustered-convex}(a), each cluster $\mu$ can be drawn as a region surrounding the convex hull of the vertices in $\mu$, resulting in a straight-line $c$-planar drawing of $C$ in which each cluster is drawn as a convex region, as in Fig.~\ref{fig:clustered-convex}(b).
\begin{figure}[htb]
\centering
\begin{tabular}{c c}
\mbox{\epsfig{figure=Layered.eps,scale=0.3,clip=}} \hspace{8mm} &
\mbox{\epsfig{figure=LayeredToClustered.eps,scale=0.3,clip=}}\\
(a) \hspace{8mm} & (b)\\
\end{tabular}
\caption{(a) A planar straight-line drawing of a hierarchical graph $H$. Graph $H$ is obtained from a clustered graph $C$ by assigning consecutive layers to vertices of the same cluster. (b) A straight-line $c$-planar drawing of $C$.}
\label{fig:clustered-convex}
\end{figure}
Angelini \emph{et al.}, improving upon the described result of Eades \emph{et al.}~in~\cite{EadesFLN06} and answering a question posed in~\cite{EadesFLN06}, show in~\cite{afk-srdcg-11} an algorithm for constructing a \emph{straight-line rectangular drawing} of any clustered graph $C$, that is, a $c$-planar straight-line drawing of $C$ in which each cluster is drawn as an axis-parallel rectangle (more generally, the algorithm of Angelini \emph{et al.}~constructs straight-line $c$-planar drawings in which each cluster is an arbitrary convex shape). The algorithm of Angelini \emph{et al.}~is reminiscent of F\'ary's algorithm (see \cite{f-srpg-48} and Sect.~\ref{se:straight-planar}). Namely, the algorithm turns a clustered graph $C$ into a smaller clustered graph $C'$ by either removing a cluster, or splitting $C$ in correspondence of a separating $3$-cycle, or contracting an edge of $C$. A straight-line rectangular drawing of $C'$ can then be inductively constructed and easily augmented to a straight-line rectangular drawing of $C$. When none of the inductive cases applies, the clustered graph is an \emph{outerclustered graph}, that is, every cluster contains a vertex incident to the outer face (see Fig.~\ref{fig:clustered-rectangular}(a)). In order to draw an outerclustered graph $C$, Angelini \emph{et al.} show how to split $C$ into three \emph{linearly-ordered outerclustered graphs}, that is, outerclustered graphs such that the graph induced by the ``direct containment'' relationship among clusters is a path (see Fig.~\ref{fig:clustered-rectangular}(b)), where a cluster $\mu$ \emph{directly contains} a cluster $\nu$ if $\mu$ contains $\nu$ and $\mu$ contains no cluster $\rho$ containing $\nu$. Moreover, they show how to combine the drawings of such graphs to get a straight-line rectangular drawing of $C$. 
Finally, Angelini \emph{et al.}~show an inductive algorithm for constructing a straight-line rectangular drawing of any linearly-ordered outerclustered graph $C$. Such an algorithm finds a subgraph of $C$ (a path plus an edge) that splits $C$ into smaller linearly-ordered outerclustered graphs, inductively draws such subgraphs, and combines their drawings to get a straight-line rectangular drawing of $C$.
\begin{figure}[htb]
\centering
\begin{tabular}{c c}
\mbox{\epsfig{figure=outerclusteredSimple.eps,scale=0.38,clip=}} \hspace{8mm} &
\mbox{\epsfig{figure=linearly-ordered.eps,scale=0.25,clip=}}\\
(a) \hspace{8mm} & (b)\\
\end{tabular}
\caption{(a) An outerclustered graph. (b) A linearly-ordered outerclustered graph. Any two consecutive clusters in the sequence $\mu_1,\dots,\mu_{12}$ are one the parent of the other.}
\label{fig:clustered-rectangular}
\end{figure}
Both the algorithm of Eades \emph{et al.}~and the algorithm of Angelini \emph{et al.}~construct drawings requiring, in general, exponential area. However, Feng \emph{et al.}~proved in~\cite{cocoon/FengCE95} that there exists a clustered graph $C$ requiring exponential area in any straight-line $c$-planar drawing in which the clusters are represented by convex regions. The proof of such a lower bound is strongly based on the proof of Di Battista \emph{et al.}~that there exist directed graphs requiring exponential area in any upward straight-line drawing (see~\cite{BattistaTT92} and Sect.~\ref{se:upward}). Eades \emph{et al.}~showed in~\cite{EadesFN99} how to construct $O(n^2)$-area $c$-planar orthogonal drawings of clustered graphs with maximum degree $4$; the authors first construct a visibility representation of the given clustered graph and then turn such a representation into an orthogonal drawing. Di Battista \emph{et al.}~\cite{BattistaDF07} show algorithms for drawing clustered trees in small area. In particular, they show an inductive algorithm to construct straight-line rectangular drawings of $c$-connected clustered trees in $O(n^2)$ area; however, they prove that there exist non-$c$-connected trees requiring exponential area in any straight-line drawing in which the clusters are represented by convex regions, again using the tools designed by Di Battista \emph{et al.}~in~\cite{BattistaTT92}. The following problem has been left open by Di Battista \emph{et al.}~\cite{BattistaDF07}.
\begin{problem}
What are the area requirements of order-preserving straight-line $c$-planar drawings of clustered trees in which clusters are represented by convex regions?
\end{problem}
\section{INTRODUCTION}
Brain-computer interface (BCI) is a system that enables humans to control a computer based on their intentions, without physical interaction, by linking the brain and the computer \cite{suk2014predicting, zhang2017hybrid, won2017motion, zhang2021adaptive, lee2019connectivity, thung2018conversion, jeong2020decoding}. In particular, electroencephalogram (EEG)-based BCI systems are extensively studied due to their high portability and practicality. The ability to control a computer without physical actions provides an effective means of communication for patients who have difficulty moving their bodies, such as stroke patients. Beyond the rehabilitation perspective, the capacity to control a computer according to human intentions could also enhance safety and productivity in industry. Therefore, BCI systems have been studied in various applications, such as robotic arms, spellers, drones, and wheelchairs \cite{lee2018high, yu2018asynchronous, kim2019subject, lee2020continuous, jeong2020brain, cho2021neurograsp, lee2021subject}.
Superior decoding performance is necessary to construct EEG-based BCI systems. Since neural network-based deep learning models have shown remarkable performance by fitting, via gradient descent, the general distribution underlying large amounts of data, deep learning has also been studied in the BCI domain to decode EEG signals \cite{ang2008filter, schirrmeister2017deep, kwon2019subject, lawhern2018eegnet, kim2022rethinking}. However, some issues arise in applying deep learning to BCI. The first is the lack of data. Deep learning models need abundant data to find an optimal solution. In contrast, since EEG is a biological signal, acquiring data is difficult. The high difficulty of the tasks, such as imagination tasks, also makes data acquisition harder. The second is overconfidence. Deep learning traditionally suffers from overconfident predictions, and in the BCI domain, since the amount of data is small, deep learning models rapidly overfit the training data and become even more overconfident in their predictions.
These two issues both stem from the lack of data. Therefore, much research has addressed this issue using data augmentation. Cheng \textit{et al.} \cite{cheng2020subject} applied ten data augmentation methods to EEG signals to train a self-supervised model, which needs abundant data to obtain a good pre-trained model. Although this approach alleviated the lack of data, the overconfidence issue of deep learning models remains. Zhang \textit{et al.} \cite{zhang2022eeg} proposed a generative adversarial network (GAN)-based data augmentation method to improve classification performance in EEG-based emotion recognition. They enhanced performance by generating unseen data using a Wasserstein GAN. However, since the quantity of real data is small, the classifier might learn, in a biased way, artificially generated features from the GAN that are not present in real data.
To solve the lack of data and overconfidence issues, we propose a novel data augmentation method, CropCat, which augments data by cropping and concatenating data of different classes from the same subject. We successfully generate data originating from real data and improve classification performance by training with CropCat. In addition, we mitigate the overconfidence issue by smoothing the labels according to the class ratio in each augmented sample.
\section{MATERIALS AND METHODS}
\subsection{Datasets}
We used two open datasets to evaluate our method: BCI Competition IV 2a (dataset 2a) \cite{brunner2008bci} and BCI Competition IV 2b (dataset 2b) \cite{leeb2008bci}. These datasets are the most widely used public datasets for evaluating the performance of EEG decoding methods \cite{al2021deep}. We applied low-pass filtering at 38 Hz to retain only the frequencies relevant to motor imagery (MI), i.e., the mu and beta rhythms \cite{nicolas2012brain, hobson2017interpretation}. In addition, exponential moving standardization was performed to remove peaks irrelevant to the MI tasks \cite{kim2022rethinking}.
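For concreteness, a minimal sketch of the exponential moving standardization step is given below. The update rule and the default decay factor are assumptions on our part, following a common implementation of this preprocessing, and are not values prescribed in the text:

```python
import numpy as np

def exponential_moving_standardize(x, factor_new=1e-3, eps=1e-4):
    """Per-channel exponential moving standardization of an EEG trial.

    x: array of shape (channels, time). At each time step t, the running
    mean and variance are updated as m_t = (1 - a) m_{t-1} + a x_t and
    v_t = (1 - a) v_{t-1} + a (x_t - m_t)^2, and the sample is
    standardized as (x_t - m_t) / max(sqrt(v_t), eps).
    """
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    m = x[:, 0].copy()            # initialize the running mean at the first sample
    v = np.zeros(x.shape[0])
    for t in range(x.shape[1]):
        m = (1.0 - factor_new) * m + factor_new * x[:, t]
        v = (1.0 - factor_new) * v + factor_new * (x[:, t] - m) ** 2
        out[:, t] = (x[:, t] - m) / np.maximum(np.sqrt(v), eps)
    return out
```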
\subsection{CropCat}
Previous data augmentation studies \cite{banville2021uncovering, jiang2021self, zhang2022ganser} focused on deletion, addition of noise, and generation. These approaches performed well in visually verifiable domains, such as computer vision. For EEG signals, whose changes cannot be confirmed visually, it is difficult to verify whether the signals have been damaged by the augmentation, which makes applying these methods problematic. Hence, we focus on generating data based on real signals.
EEG signals are biosignals acquired from brain activity. They are measured by computing the voltage difference between the active and ground electrodes. Through this process, we obtain spatial and temporal information on brain activity. Since spatial and temporal features are thus the main characteristics of EEG signals, we designed our proposed data augmentation method to enrich both the spatial and the temporal information.
The dataset consists of pairs, each containing one EEG trial and its label. We select one pair and set it as the base pair ($(X_b, y_b)\in{B}, X_b \in \mathbbm{R}^{C \times T}, y_b \in \mathbbm{R}$) for applying the data augmentation. After choosing the base pair, we sample another pair, the material pair ($(X_m, y_m)\in{B}, X_m \in \mathbbm{R}^{C \times T}, y_m \in \mathbbm{R}, y_m \neq y_b$), from the same subject, whose label differs from that of the base pair. For ease of computation, since the batch size is sufficiently large, we sample the material pair from the same mini-batch as the base pair. We denote by ${B}$ the mini-batch, by ${C}$ the number of channels, and by ${T}$ the number of time points. Here, $X_b$ and $X_m$ are the EEG signals, and $y_b$ and $y_m$ are the labels of the corresponding data. We can express $X_b$ and $X_m$ as follows:
\vspace{-0.3cm}
\begin{equation}
\begin{split}
&{X_b} = [ c_{b1} ; c_{b2} ; \cdots ; c_{bC} ], \quad c_{bi} \in \mathbbm{R}^{1 \times T}, \; 1 \leq i \leq C \\
&\quad\; = [ t_{b1}, t_{b2}, \cdots, t_{bT} ], \quad t_{bj} \in \mathbbm{R}^{C \times 1}, \; 1 \leq j \leq T \\
&{X_m} = [ c_{m1} ; c_{m2} ; \cdots ; c_{mC} ], \quad c_{mi} \in \mathbbm{R}^{1 \times T}, \; 1 \leq i \leq C \\
&\quad\;\; = [ t_{m1}, t_{m2}, \cdots, t_{mT} ], \quad t_{mj} \in \mathbbm{R}^{C \times 1}, \; 1 \leq j \leq T
\end{split}
\end{equation}
We designed the proposed method in a spatial variant (CropCat-spatial) and a temporal variant (CropCat-temporal). CropCat-spatial mixes the spatial information of two differently labeled trials. It may improve the decoding performance on ambiguous data, such as imagining the left and the right hand simultaneously at the same time point. CropCat-temporal fuses the temporal information of two trials. Using data augmented by CropCat-temporal, the model can learn from ambiguous data, such as imagining the left hand's movement for two seconds and then the right hand's movement for one second when the left-hand task is assigned.
To apply CropCat-spatial and CropCat-temporal, we set the center point $c$, the anchor for mixing the two trials, and the ratio $r$, which determines the proportion taken from the material pair. $c_s$ (the center point of CropCat-spatial) and $c_t$ (the center point of CropCat-temporal) are sampled from uniform distributions.
\vspace{-0.3cm}
\begin{equation}
\begin{split}
&{c_s} \sim \text{Unif} (0, {C}) \\
&{c_t} \sim \text{Unif} (0, {T})
\end{split}
\end{equation}
In addition, the mixing ratio $r$ is randomly sampled from a uniform distribution. We restricted the upper end of the interval to at most one half so that the effect of the base pair remains dominant.
\vspace{-0.3cm}
\begin{equation}
\begin{split}
&{r_s}, {r_t} \sim \text{Unif} (0, \lambda), \quad \lambda \in [0, 0.5]
\end{split}
\end{equation}
where $r_s$ and $r_t$ indicate the mixing ratios of CropCat-spatial and CropCat-temporal, respectively. In CropCat-spatial, we set $\lambda$ to 0.333 for both datasets. In CropCat-temporal, we set $\lambda$ to 0.125 for dataset 2a and 0.1 for dataset 2b.
We mix the base and material data by cropping and concatenating them according to the hyperparameters set above. In the case of CropCat-spatial, the two trials mix their spatial information as follows:
\vspace{-0.3cm}
\begin{equation}
\begin{split}
&\tilde{X}_{spatial} = {X}_{b}\mathbbm{1}_{i \ \in \ [1, \ c_s \ - \ rC/2) \ \cup \ (c_s \ + \ rC/2, \ C]} \\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad + {X}_{m}\mathbbm{1}_{i \ \in \ [c_s \ - \ rC/2, \ c_s \ + \ rC/2]} \\
&\quad\quad\quad\;\,=[ c_{b1} ; c_{b2} ; \cdots ; c_{b(c_s-rC/2-1)} ; c_{m(c_s-rC/2)}; \\
&\quad\quad\quad\quad\quad \cdots ; c_{m(c_s+rC/2)} ; c_{b(c_s+rC/2+1)} ; \cdots ; c_{bC} ]
\end{split}
\end{equation}
In the case of CropCat-temporal, the two trials fuse their temporal information as follows:
\vspace{-0.3cm}
\begin{equation}
\begin{split}
&\tilde{X}_{temporal} = {X}_{b}\mathbbm{1}_{t \ \in \ [1, \ c_t \ - \ rT/2) \ \cup \ (c_t \ + \ rT/2, \ T]} \\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad + {X}_{m}\mathbbm{1}_{t \ \in \ [c_t \ - \ rT/2, \ c_t \ + \ rT/2]} \\
&\quad\quad\quad\quad\,= [ t_{b1}, t_{b2}, \cdots, t_{b(c_t-rT/2-1)}, t_{m(c_t-rT/2)}, \\
&\quad\quad\quad\quad\quad\quad \cdots, t_{m(c_t+rT/2)}, t_{b(c_t+rT/2+1)}, \cdots, t_{bT} ]
\end{split}
\end{equation}
where $\tilde{X}_{spatial}, \tilde{X}_{temporal} \in\mathbbm{R}^{{C}\times{T}}$ are the mixed data in spatial and temporal ways, respectively.
Since the base and material data are mixed, we generated a fused label ($\tilde{y}$) based on the ratio applied to the data. The detailed formula is as follows:
\vspace{-0.3cm}
\begin{equation}
\begin{split}
\tilde{y} = (1 \ - \ {r}){y_b} \ + \ {r}{y_m}, \quad \tilde{y}\in\mathbbm{R}
\end{split}
\end{equation}
Based on CropCat-spatial and CropCat-temporal, we generate novel data containing diverse spatial and temporal information from real data. We train the decoding models by including these augmented pairs $(\tilde{X}, \tilde{y})$ in the training data.
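A minimal sketch of CropCat-temporal is given below. It follows Eqs. (2), (3), (5), and (6), with two illustrative simplifications of our own: labels are represented as one-hot vectors rather than scalars, and the label-mixing ratio is the realized fraction of replaced time points, so windows clipped at a trial boundary mix slightly less than $r$:

```python
import numpy as np

def cropcat_temporal(x_b, y_b, x_m, y_m, lam=0.125, rng=None):
    """CropCat-temporal: crop a centered time window from the material
    trial and paste it into the base trial; the (one-hot) labels are
    mixed according to the fraction of time points that were replaced.

    x_b, x_m: arrays of shape (C, T); y_b, y_m: one-hot label vectors.
    """
    rng = rng or np.random.default_rng()
    C, T = x_b.shape
    c_t = rng.uniform(0, T)                       # center point of the window
    r = rng.uniform(0, lam)                       # mixing ratio
    lo = max(0, int(round(c_t - r * T / 2)))
    hi = min(T, int(round(c_t + r * T / 2)))
    x = x_b.copy()
    x[:, lo:hi] = x_m[:, lo:hi]                   # paste the material window
    frac = (hi - lo) / T                          # realized replaced fraction
    y = (1.0 - frac) * np.asarray(y_b, float) + frac * np.asarray(y_m, float)
    return x, y
```

CropCat-spatial is obtained analogously by replacing a window of channel rows `x[lo:hi, :]` instead of time columns.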
\subfile{figures_and_tables/table1}
\subsection{Evaluation Settings}
We used three conventional data augmentation methods for comparison: time masking, Gaussian noise, and CutOut \cite{devries2017improved}. Time masking is a traditional data augmentation method that masks a specific interval of the data. We set the masking ratio to 0.1 and 0.05 for datasets 2a and 2b, respectively. Adding Gaussian noise is a common data augmentation applicable in various domains. We set the mean and standard deviation to zero and one, respectively, and scaled the noise by a factor of 0.05 before adding it. CutOut is a data augmentation method first proposed in the computer vision domain; it generates data by removing random regions from the data. We set the channel length, time length, and number of regions to remove to 0.25, 0.5, and 3, respectively.
We selected the three most commonly used models, ShallowConvNet, DeepConvNet, and EEGNet. In addition, we selected M-ShallowConvNet, which addresses issues that arose in ShallowConvNet. We trained the models for 1,000 epochs with a batch size of 64. The Adam optimizer was used with a learning rate of 2e-3, and a cosine learning rate scheduler was applied to improve training stability \cite{loshchilov2016sgdr}. The loss function was cross-entropy. We saved the checkpoint with the lowest loss value and used it as the final model. Since the data size is small, we applied five-fold cross-validation to train the model. After training the five models via five-fold cross-validation, we obtained the final predictions by voting. As evaluation metrics, we used accuracy and standard deviation (std.).
\section{RESULTS AND DISCUSSION}
Table 1 presents the decoding performances of four baseline models trained with various conventional data augmentation methods (time masking, Gaussian noise, and CutOut) and with the proposed methods (CropCat-spatial and CropCat-temporal) on datasets 2a and 2b. As shown in Table 1, with time masking, the decoding performances mostly drop on both datasets compared to training without data augmentation. Since the essential features of MI-based EEG signals are concentrated in specific regions, the quality of the augmented data decreases when the critical parts are masked. In the case of adding Gaussian noise, ShallowConvNet, DeepConvNet, and EEGNet achieved the best accuracy on dataset 2a, as did DeepConvNet on dataset 2b. However, the performance dropped compared to the baseline in the other cases. Based on these results, we believe that a fine hyperparameter optimization process is needed to apply Gaussian noise to EEG signals, whose features cannot be identified visually. CutOut shows different behavior depending on the dataset. Although the performances of all models drop on dataset 2a compared to the baseline, the performances of ShallowConvNet and EEGNet improve on dataset 2b. We believe that, unlike time masking, CutOut improved the performance of these two models because it obscures only part of the information, not all of it, at certain times. However, since the improvement occurred only in some cases, optimizing the hyperparameters appears to be as difficult as for Gaussian noise. In addition, we verified that the performances of all four models drop significantly with CropCat-spatial. We believe that, since the spatial information of EEG signals at a specific time is important, the spatially fused data produced by CropCat-spatial contain distorted information. 
In the case of CropCat-temporal, however, the decoding performances of all models improve compared to the baseline, reaching 0.783, 0.715, 0.730, and 0.811 on dataset 2a and 0.860, 0.843, 0.835, and 0.863 on dataset 2b, respectively. Since CropCat-temporal produces data that could plausibly occur, by generating it from real EEG signals, it alleviates the lack of data issue in the BCI domain.
\section{CONCLUSION AND FUTURE WORKS}
In this paper, we proposed a novel data augmentation method, CropCat, which can be applied in the BCI domain to decode EEG signals. Several issues arise when applying deep learning in the BCI domain. The first is the lack of data: since EEG signals are biosignals, the data acquisition cost is high, and acquiring sufficient data to train deep learning-based models is therefore difficult. The second is the overconfidence of deep learning models, a long-standing issue of deep learning-based methods that the lack of data in the BCI domain exacerbates. To solve these issues, we designed a data augmentation method that fuses two real EEG trials with different labels. Since the labels are adjusted according to the proportions of the combined data, the sharp distribution of the data's features becomes smoother. As a result, we improved the decoding performances of ShallowConvNet, DeepConvNet, and M-ShallowConvNet on dataset 2a to 0.783, 0.715, and 0.811, respectively. In addition, the decoding performances of ShallowConvNet, DeepConvNet, EEGNet, and M-ShallowConvNet improved on dataset 2b to 0.860, 0.843, 0.835, and 0.863, respectively. We demonstrated with Grad-CAM that the model focuses on important features in the data augmented by CropCat, and we verified that the inference results of the deep learning model are not overconfident on augmented data. In future work, we will study self-supervised learning algorithms, for which data augmentation is critical, and construct a pre-trained model that can be commonly used across various EEG signals.
\bibliographystyle{IEEEtran}
\section{Introduction}
\subsection{Motivation}
It frequently occurs in applications that, in the presence of an underlying dynamical system,
only limited knowledge is available about the actual time evolution of a particular state of the system; rather, one only has access to
certain measurements along its orbit. It is then a natural problem to determine what asymptotic measurements one is able to observe for a particular class of systems
and what kind of information about the underlying system can be recovered from these measurements.
In this paper we consider deterministic discrete-time dynamical systems given by a continuous map $f:X\to X$ on a compact metric space $X$. We will deal with finitely many measurements
that are given by a continuous $m$-dimensional potential $\Phi=(\phi_1,\cdots,\phi_m):X\to {\mathbb R}^m$. To recover information
about the ``typical'' dynamics of $f$, let us consider the set $\EuScript{M}$ of all Borel invariant probability measures and denote by $\EuScript{M}_E\subset \EuScript{M}$ the subset of ergodic measures. By Birkhoff's Ergodic Theorem, for each $\mu\in \EuScript{M}_E$ there exists a set ${\mathcal B}(\mu)$ of full $\mu$-measure (called the {\it basin} of $\mu$) such that the Birkhoff averages $\frac{1}{n}S_n\phi(x)$, where
\begin{equation}\label{eqsn}
S_n \phi(x)=\sum_{k=0}^{n-1} \phi(f^k(x)),
\end{equation}
converge to $\int \phi\ d\mu$ as $n\to\infty$ for all $x\in {\mathcal B}(\mu)$ and all $\phi\in C(X,{\mathbb R})$. Following \cite{GM, Je}
we call ${\rm Rot}(\Phi)={\rm Rot}(f,\Phi)$ defined by
\begin{equation} \label{defrotset}
{\rm Rot}(\Phi)= \left\{{\rm rv}(\mu): \mu\in\EuScript{M}\right\},
\end{equation}
the {\it generalized rotation set} of $\Phi$ with respect to $f$,
where
\begin{equation}
{\rm rv}(\mu)=\left(\int \phi_1\ d\mu,\cdots,\int \phi_m\ d\mu\right)
\end{equation}
denotes the rotation vector of the measure $\mu$. This terminology goes back to Poincar\'e's rotation numbers for circle homeomorphisms \cite{Po}. Generalization of rotation numbers to higher dimensional tori leads to studying limits of Birkhoff averages $\frac{1}{n}S_n \phi(x)$ for a displacement function $\phi$ (\cite{GM},\cite{KMG}).
More generally, for a potential $\phi$
one can vary $x$ and consider limits of statistical averages $\frac{1}{n_l}S_{n_l} \phi(x_l)$, where $(x_l)\subset X$ and $n_l\to\infty$. The set of all such limits is referred to as the {\it generalized pointwise rotation set} of $\phi$ and we will denote it by ${\rm Rot}_{Pt}(\phi)$. We refer to the overview article \cite{Mi1} and references therein for
further details about rotation sets.
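As a guiding example, recall the classical one-dimensional situation: for an orientation-preserving circle homeomorphism $f$ with lift $F:{\mathbb R}\to{\mathbb R}$, the displacement $\phi(x)=F(x)-x$ defines a continuous potential on the circle and
\[
\frac{1}{n}S_n\phi(x)=\frac{F^n(x)-x}{n}\longrightarrow \rho(F)
\]
as $n\to\infty$ for every $x$, where $\rho(F)$ denotes the Poincar\'e rotation number of $F$. Hence, in this case ${\rm Rot}(\phi)={\rm Rot}_{Pt}(\phi)=\{\rho(F)\}$ reduces to a single point, and the theory described above can be viewed as the natural higher-dimensional generalization of this situation.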
One way that indicates the relevance of the rotation set for the understanding of the typical behavior of the dynamical system is to consider a sequence of potentials $(\phi_k)_{k\in {\mathbb N}}$ that is dense in $C(X,{\mathbb R})$. Let ${\rm R}_m$ be the rotation set of the initial $m$-segment of potentials, that is ${\rm R}_m= {\rm Rot}(\phi_1,\cdots,\phi_m)$. It follows from the representation theorem that the rotation classes of the rotation sets ${\rm R}_m$
form a decreasing sequence of covers of $\EuScript{M}$ whose intersections contain a unique invariant measure.
Therefore, for large $m$ the set ${\rm R}_m$ provides a fine cover of $\EuScript{M}$ and can be considered as a finite dimensional approximation of the set of all invariant probability measures.
In this paper we consider various classes of dynamical systems and potentials and study the geometric possibilities for the associated rotation sets. Moreover, we consider a notion of entropy associated with rotation vectors and study several aspects of this entropy including variational properties, relation to periodic orbits and regularity of the rotation entropy map.
We will now describe our results in more detail.
\subsection{Statement of the Results. }
Let $f:X\to X$ be a continuous map on a compact metric space $X$, let $\Phi\in C(X,{\mathbb R}^m)$ be a continuous ($m$-dimensional) potential, and let ${\rm Rot}(\Phi)$ be the rotation set of $\Phi$. It follows from the definitions that ${\rm Rot}(\Phi)$ is a compact and convex subset of ${\mathbb R}^m$. In particular, ${\rm Rot}(\Phi)$ has a Lipschitz boundary and therefore $\partial {\rm Rot}(\Phi)$ is differentiable at ${\mathcal H}^{m-1}$-almost every boundary point, where ${\mathcal H}^{m-1}$ is the $(m-1)$-dimensional Hausdorff measure.
Considerable attention has been given to the question of which compact convex sets can arise as rotation sets. It was shown by Kwapisz \cite{Kw1, Kw2} that any polygon whose vertices are at rational points in the plane is the classical rotation set of some homeomorphism on the two-torus; however, such rotation sets are not necessarily polygons. He also proved that certain line segments cannot be realized as such rotation sets \cite{Kw3}. Ziemian studied the case when the dynamical system is a transitive subshift of finite type and the potential $\Phi$ is constant on cylinders of length two \cite{Z}. She showed that under these assumptions ${\rm Rot}(\Phi)$ is a polyhedron. In contrast to this restrictive geometry for rotation sets we have the following
result (see Theorems \ref{thm1} and \ref{thm2m} in the text).
\begin{thmA} Let $K\subset {\mathbb R}^m$ be compact and convex. Then there exists a one-sided full shift $f:X\to X$ on a shift space with a finite alphabet and an $m$-dimensional continuous potential
$\Phi$ such that ${\rm Rot}(\Phi)=K$.
\end{thmA}
The main idea in the proof of Theorem A is to approximate $K$ from inside by polytopes $\mathcal{P}_j$ and to construct potentials $\Phi_j$ with ${\rm Rot}(\Phi_j)=\mathcal{P}_j$ in such a way that $\Phi_j$ converges
uniformly to a continuous potential $\Phi$ with ${\rm Rot}(\Phi)=K$. While our result is formulated for one-sided full shifts it can be easily generalized to (one-sided and two-sided) mixing subshifts of finite type, hyperbolic systems
and expansive homeomorphisms with specification. As a consequence of Theorem A we obtain that for shift maps it is possible that the boundary of the rotation set
is non-differentiable at a countable dense set of boundary points. We note that the potential $\Phi$ constructed in Theorem A is in general not H\"older
continuous. In Corollary \ref{corende} we construct for every H\"older continuous potential a natural family of analytic hypersurfaces in ${\rm int }\ K$ associated with certain equilibrium measures that converge to $\partial K$ with respect to the Hausdorff metric. This might lead one to believe that for shift maps and H\"older continuous potentials the boundary of the rotation set is at least piecewise smooth. However, this
is in general not true due to the example by Bousch, see \cite{B, Je2}.
Next, we discuss closely related concepts of rotation sets. Let ${\rm Rot}_{Pt}(\Phi)$ be the set of all $w\in {\mathbb R}^m$ which are accumulation points of Birkhoff averages (see \eqref{defRf}
for the precise definition). The set ${\rm Rot}_{Pt}(\Phi)$ is frequently called the {\it pointwise rotation set} in the literature (see for example \cite{GM}).
Moreover, let ${\rm Rot}_E(\Phi)= \left\{{\rm rv}(\mu): \mu\in\EuScript{M}_E\right\}$. It follows from
Proposition \ref{prop456} and Examples 1 and 2 that for any continuous
map $f:X\to X$ and any $\Phi\in C(X,{\mathbb R}^m)$ we have
\[
{\rm Rot}_E(\Phi)\subset {\rm Rot}_{Pt}(\Phi)\subset {\rm Rot}(\Phi),
\]
and in general each of these inclusions can be strict. We also give conditions implying $ {\rm Rot}_{Pt}(\Phi)= {\rm Rot}(\Phi)$ which are satisfied for various classes of systems.
A natural invariant that quantifies the dynamical complexity of an invariant measure is the measure-theoretic entropy of $\mu$ denoted by $h_\mu(f)$ (see for example~\cite{Wal:81} for details). Following \cite{Je} we define the entropy of $w\in {\rm Rot}(\Phi)$ by
\begin{equation}\label{defH}
H(w)\overset{\text{def}}{=}\sup\{h_\mu(f): \mu \in \EuScript{M}\ {\rm and }\ {\rm rv}(\mu)=w\}.
\end{equation}
The entropy of rotation vectors was extensively studied by Geller and Misiurewicz in \cite{GM} as well as by Jenkinson, who added fundamental contributions in \cite{Je, Je1, Je2}. An alternative definition of entropy (denoted by $h(w)$), which is closely related to that of topological entropy, is in terms of the exponential growth rate of the cardinality of maximal $(n,\epsilon)$-separated sets of points whose Birkhoff averages are ``close'' to that of $w$.
We refer to \eqref{eqdefhw234} for the precise definition. We obtain that $h(w)\leq H(w)$ holds for all $w$ at which $H$ is continuous and $h(w)=H(w)$ holds for those $w\in {\rm Rot}_{Pt}(\Phi)$ whose entropy can be approximated by
ergodic measures (see Theorem \ref{thhwHw}). This result can be interpreted as a conditional variational principle, the condition being that only points and measures whose rotation vectors are close to, respectively equal to, $w$ are used.
It is known since the classical works of Bowen that in the framework of symbolic and hyperbolic dynamics entropy can be computed in terms of the growth rate of periodic orbits rather than using arbitrary $(n,\epsilon)$-separated sets.
In the papers \cite{GW1,GW2} Gelfert and Wolf extended these results to smooth dynamical systems exhibiting some non-uniformly hyperbolic behavior. In Theorem \ref{theoperrot} we apply techniques from \cite{GW2} to compute $H(w)$ in terms of the growth rate of certain hyperbolic periodic orbits.
Our last result deals with systems which have strong thermodynamic properties, i.e. shift maps, uniformly hyperbolic systems and expansive homeomorphisms with specification.
Since for these systems the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous, the rotation entropy map $w\mapsto H(w)$ is continuous. However, we are able to obtain a stronger regularity.
\begin{thmB}
Suppose $f:X\to X$ has strong thermodynamic properties and assume $\Phi:X\to{\mathbb R}^m$ is H\"older continuous.
Then $w\mapsto H(w)$ is real-analytic on the interior of $ {\rm Rot}(\Phi)$. Moreover, $H(w)>0$ for all $w\in {\rm int}\ {\rm Rot}(\Phi)$.
\end{thmB}
The proof of this theorem relies heavily on methods from the thermodynamic formalism and, in particular, on the analyticity
of the topological pressure for H\"older continuous potentials. Moreover, we use results of Jenkinson \cite{Je} as key ingredients.
This paper is organized as follows. In Section 2 we review some
basic concepts and results about symbolic and smooth dynamics and the thermodynamic formalism.
Section 3 is devoted to the proof of Theorem A.
In Section 4 we introduce different notions of rotation sets and rotation entropy and study their relations.
In Section 5 we consider non-uniformly hyperbolic dynamics and derive results about the relation between rotation entropy and the growth rates of periodic orbits.
Finally, in Section 6 we study the dependence of $H(w)$ on $w$ for systems with strong thermodynamic properties.
\section{Preliminaries}\label{sec:2}
In this section we discuss relevant background material which will be used later on. We will continue to use the notations from Section 1.
Let $f:X\to X$ be a continuous map on a compact metric space $(X,d)$. Let $\Phi=(\phi_1,\cdots,\phi_m)\in C(X,{\mathbb R}^m)$ and let ${\rm Rot}(f,\Phi)$
be the rotation set of $f$ with respect to $\Phi$. Since we will always work with a fixed dynamical system $f$ we frequently omit the dependence on $f$ in the notation of the rotation set and write ${\rm Rot}(\Phi)$ instead of ${\rm Rot}(f,\Phi)$.
Let $\EuScript{M}$ denote the space of all $f$-invariant Borel probability measures on $X$ endowed with the weak$\ast$ topology. This makes $\EuScript{M}$ a compact convex space. Moreover, let $\EuScript{M}_E\subset \EuScript{M}$ be the subset of ergodic measures.
Throughout the entire paper we use as a standing assumption
that $f$ has finite topological entropy.
\subsection{Rotation vectors of periodic points. }
We denote by ${\rm Fix}(f)$ the set of all fixed points of $f$, by ${\rm Per}(f)$ the set of all periodic points of $f$ and by ${\rm Per}_n(f)$ the set of all $x\in {\rm Fix}(f^n)$.
Given $x\in {\rm Per}_n(f)$ we denote by $\mu_x$ the unique invariant probability measure supported on the orbit of $x$, that is,
\begin{equation}\label{defpermes}
\mu_x=\frac{1}{n}\sum_{k=0}^{n-1} \delta_{f^k(x)},
\end{equation}
where $\delta_y$ denotes the Dirac measure supported on the point $y$. Moreover, we define the rotation vector of $x$ by
\begin{equation}
{\rm rv}(x)\overset{\text{def}}{=} {\rm rv}(\mu_x) = \frac{1}{n} \sum_{k=0}^{n-1} \Phi(f^k(x)).
\end{equation}
Given $w\in {\rm Rot}(\Phi), n\in {\mathbb N}$ and $r>0$ we write
\begin{equation}
{\rm Per}_n(w,r)=\{x\in {\rm Per}_n(f): {\rm rv}(x)\in D(w,r)\}
\end{equation}
and ${\rm Per}(w,r)=\bigcup_{n\in {\mathbb N}} {\rm Per}_n(w,r)$. Here $D(w,r)$ denotes the open ball about $w\in {\mathbb R}^m$ with radius $r$ with respect to the Euclidean norm.
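As a simple illustration of these notions, suppose $f$ is the full shift on two symbols and $x=(0,1,0,1,\dots)\in {\rm Per}_2(f)$. Then $\mu_x=\frac12(\delta_x+\delta_{f(x)})$ and therefore
\[
{\rm rv}(x)=\frac{\Phi(x)+\Phi(f(x))}{2},
\]
so $x\in {\rm Per}_2(w,r)$ if and only if this average of the two values of $\Phi$ along the orbit lies in $D(w,r)$.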
\subsection{Thermodynamic formalism}
Let us first recall the notion of topological pressure. Let
$(X,d)$ be a compact metric space and let $f\colon
X\to X$ be a continuous map. For $n \in {\mathbb N}$ we
define a new metric $ d_n $ on $ X $ by $
d_n(x,y)=\max_{k=0,\ldots ,n-1} d(f^k(x),f^k(y))$. The balls in the $d_n$ metric are frequently called Bowen balls. A set of points
$\{ x_i\colon i\in I \}\subset X$ is called
\emph{$(n,\varepsilon)$-separated} (with respect to $f$) if
$d_n(x_i,x_j)> \varepsilon$ holds for all $x_i,x_j$ with $x_i \ne
x_j$. Fix for all $\varepsilon>0$ and all $n\in{\mathbb N}$ a maximal
(with respect to the inclusion) $(n,\varepsilon)$-separated set
$F_n(\epsilon)$. The \emph{topological pressure} (with respect to $f$) is a mapping
$ P_{\rm top}(f,\cdot)\colon C(X,{\mathbb R})\to {\mathbb R}$ defined by
\begin{equation}\label{defdru}
P_{\rm top}(f,\phi) = \lim_{\varepsilon \to 0}
\limsup_{n\to \infty}
\frac{1}{n} \log \left(\sum_{x\in F_n(\epsilon)}
\exp S_n\phi(x) \right),
\end{equation}
where $S_n\phi(x)$ is defined as in \eqref{eqsn}.
Recall that the topological entropy of $f$ is defined by
$h_{\rm top}(f)=P_{\rm top}(f,0)$. We simply write $P_{\rm
top}(\phi)$ and $h_{\rm top}$ if there is no confusion about $f$.
Note that the definition of $P_{\rm top}(\phi)$ does not depend on the choice of the
sets $F_n(\epsilon)$ (see~\cite{Wal:81}).
The topological pressure satisfies the
well-known variational principle, namely,
\begin{equation}\label{eqvarpri}
P_{\rm top}(\phi)=
\sup_{\mu\in \EuScript{M}} \left(h_\mu(f)+\int_X \phi\,d\mu\right).
\end{equation}
Here $h_\mu(f)$ denotes the measure-theoretic entropy of $f$ with respect to $\mu$ (see~\cite{Wal:81} for details).
It is easy to see that the supremum in~\eqref{eqvarpri} can be replaced by
the supremum taken only over all $\mu\in\EuScript{M}_{\rm E}$.
If there exists a measure
$\mu\in \EuScript{M}$ at which the supremum in \eqref{eqvarpri} is
attained it is called an equilibrium state (or also equilibrium measure) of the potential $\phi$.
We denote by $ES(\phi)$ the set of all equilibrium states of $\phi$.
In general $ES(\phi)$ may be empty.
Note that if the entropy map
\begin{equation}\label{entup}
\mu\mapsto h_{\mu}(f)
\end{equation}
is upper semi-continuous on $\EuScript{M}$ then for each $\phi\in C(X,{\mathbb R})$ we have that $ES(\phi)\not=\emptyset$. Since $ES(\phi)$ is a compact, convex subset of $\EuScript{M}$ whose extremal points are the ergodic measures (see \cite{Wal:81}), we obtain in this case that
\begin{equation}\label{esne}
ES(\phi)\cap \EuScript{M}_E\not=\emptyset
\end{equation}
for all $\phi\in C(X,{\mathbb R})$.
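To illustrate these notions in the symbolic setting of Section 2.3 below, let $f$ be the full one-sided shift on the alphabet $\{0,\cdots,d-1\}$ and let $\phi(x)=\phi(x_1)$ be a potential depending only on the first symbol. For the Bernoulli measure $\mu_p$ with weights $p=(p_0,\cdots,p_{d-1})$ we have
\[
h_{\mu_p}(f)+\int\phi\, d\mu_p=\sum_{a=0}^{d-1}p_a\left(\phi(a)-\log p_a\right),
\]
which is maximized precisely for $p_a=e^{\phi(a)}/\sum_b e^{\phi(b)}$. This yields the classical formulas
\[
P_{\rm top}(\phi)=\log\sum_{a=0}^{d-1}e^{\phi(a)},\qquad h_{\rm top}(f)=\log d,
\]
and the Bernoulli measure with these weights is the unique equilibrium state of $\phi$.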
Given $\alpha\in(0,1]$, let
$C^\alpha(X,{\mathbb R})$ be the space of H\"older continuous functions
with H\"older exponent~$\alpha$.
We
recall that two functions $\phi$, $\psi\colon X\to{\mathbb R}$ are
said to be cohomologous if $\phi-\psi=\eta-\eta\circ f$ for
some continuous function $\eta\colon X\to{\mathbb R}$.
We now list several properties of the topological pressure which hold for certain classes of dynamical systems that will
be discussed later on.
We say $f:X\to X$ has strong thermodynamic properties (which we abbreviate by {\rm{(STP)}}) if the following conditions hold:
\begin{enumerate}
\item $h_{\rm top}(f)<\infty$;
\item The entropy map $\mu\mapsto h_\mu (f)$ is upper semi-continuous;
\item The map $\phi\mapsto P_{\rm top}(f,\phi)$ is real-analytic on $C^\alpha(X,{\mathbb R})$;
\item \label{ref4} Each potential $\phi \in C^\alpha(X,{\mathbb R})$ has a
unique equilibrium measure $\mu_\phi\in ES(\phi)$. Furthermore,
$\mu_\phi$ is ergodic and given $\psi\in C^\alpha(X,{\mathbb R})$ we have
\begin{equation}\label{eqdifpre}
\frac{d}{dt} P_{\rm top}(f,\phi + t\psi )\Big|_{t=0}= \int_X \psi
\,d\mu_\phi.
\end{equation}
\item For each $\phi$, $\psi\in C^\alpha(X,{\mathbb R})$ we have
$\mu_\phi=\mu_\psi$ if and only if $\phi-\psi$ is cohomologous
to a constant.
\item \label{ref5} For each $\phi$, $\psi\in C^\alpha(X,{\mathbb R})$ and
$t\in{\mathbb R}$ we have
\begin{equation}\label{gg33}
\frac{d^2}{dt^2} P_{\rm top}(f,\phi + t\psi )\ge 0,
\end{equation}
with equality if and only if $\psi$ is cohomologous to a constant.
\end{enumerate}
Note that for several classes of systems properties (3)-(6) hold even for a wider class of potentials, namely for potentials with summable variation (see for example \cite{Je}). For simplicity, we restrict our
considerations to H\"older continuous potentials.
Next we discuss some examples with strong thermodynamic properties.
\subsection{Shifts and subshifts}
Let $d\in {\mathbb N}$ and let ${\mathcal A}=\{0,\cdots,d-1\}$ be a finite alphabet in $d$ symbols. The (one-sided) shift space $X$ on the alphabet ${\mathcal A}$ is the set of
all sequences $x=(x_n)_{n=1}^\infty$ where $x_n\in {\mathcal A}$ for all $n\in {\mathbb N}$. We endow $X$ with the Tychonov product topology
which makes $X$ a compact metrizable space. For example, given $0<\alpha<1$ it is easy to see that
\begin{equation}\label{defmet}
d(x,y)=d_\alpha(x,y)=\alpha^{\inf\{n\in {\mathbb N}:\ x_n\not=y_n\}}
\end{equation}
defines a metric which induces the Tychonov product topology on $X$.
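For example, if $x=(0,1,0,1,\dots)$ and $y=(0,1,1,1,\dots)$ then $\inf\{n\in {\mathbb N}:\ x_n\not=y_n\}=3$ and hence $d_\alpha(x,y)=\alpha^3$. In general, $d_\alpha(x,y)\leq \alpha^{n+1}$ if and only if $x$ and $y$ coincide in their first $n$ coordinates; thus the balls of radius $\alpha^{n+1}$ in this metric are exactly the sets of sequences sharing a common initial block of length $n$.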
The shift map $f:X\to X$ (defined by $f(x)_n=x_{n+1}$) is a continuous $d$ to $1$ map on $X$.
If $Y\subset X$ is a closed $f$-invariant set we say that $f|_Y$ is a sub-shift. In particular, for a $d\times d$ matrix $A$ with values in $\{0,1\}$
we define $X_A=\{x\in X: A_{x_n,x_{n+1}}=1 \text{ for all } n\}$. It is easy to see that $X_A$ is a closed (and therefore compact) $f$-invariant set and we say that $f|_{X_A}$ is a subshift of finite type. A subshift of finite type is (topologically) mixing if $A$ is aperiodic, that is, if there exists $n\in {\mathbb N}$ such that $A^n_{i,j}>0$ for all $i,j\in {\mathcal A}$.
Analogously, we obtain the concept of two-sided shift spaces and shift maps by defining $X$ to be the space of all bi-infinite sequences $x=(x_n)_{n=-\infty}^\infty$ where $x_n\in {\mathcal A}$ for all $n\in {\mathbb Z}$.
It turns out that the shift map $f:X\to X$ (defined as in the case of one-sided shift maps) is a homeomorphism on $X$.
Analogously to the case of one-sided shift maps we obtain sub-shifts and sub-shifts of finite type.
While we consider in this paper mostly the case of one-sided shift maps, all our results also hold for two-sided shift maps. For details on how to make the connection between one-sided and two-sided shift maps we refer to \cite{Je}.
Let $(X_A,f)$ be a one-sided subshift of finite type.
Given $x\in X_A$ we write $\pi_n(x)=(x_1,\cdots,x_n)$. Moreover, for $\tau=(\tau_1,\cdots,\tau_n)\in {\mathcal A}^n$ we denote by ${\mathcal C}(\tau)=\{x\in X_A: x_1=\tau_1,\cdots, x_n=\tau_n\}$ the cylinder generated by $\tau$ and by $\mathcal{O}(\tau)=(\tau_1,...,\tau_n,\tau_1,...,\tau_n,...)$ the periodic orbit generated by $\tau$. In this case $n$ is referred to as the length of the cylinder or the orbit, respectively.
Similarly, we use the analogous definitions in the case of two-sided shift maps, namely $\pi_n(x)=(x_{-n},\cdots,x_n)$, ${\mathcal C}(\tau)=\{x\in X_A: x_{-n}=\tau_{-n},\cdots, x_n=\tau_n\}$ if $\tau=(\tau_{-n},\cdots, \tau_n)$ and $\mathcal{O}(\tau)=(...,\tau_{-n},...,\tau_n,\tau_{-n},...,\tau_n,...)$.
It is a well-known fact that topological mixing sub-shifts of finite type have strong thermodynamic properties (see \cite{Je} and the references therein).
\subsection{Expansive homeomorphisms with specification}
Let $f:X\to X$ be a homeomorphism on a compact metric space $(X,d)$. We say $f$ is expansive if there exists $\gamma>0$ (called an expansivity constant) such that if $x,y\in X$ with $d(f^n(x),f^n(y))<\gamma$ for all $n\in {\mathbb Z}$ then $x=y$.
Moreover, we say $f$ has the specification property (abbreviated to ``a homeomorphism with specification'') if for each $\delta>0$ there exists an integer $p=p(\delta)$ such that the following holds: if
\begin{enumerate}
\item[(a)]
$I_1,\cdots, I_n$ are intervals of integers, $ I_j\subset [a,b]$ for some $a,b\in {\mathbb Z}$ and all $j$.
\item[(b)] $dist(I_i,I_j)\geq p(\delta)$ for $i\not= j$,
\end{enumerate}
then for arbitrary $x_1,\cdots , x_n \in X$ there exists $x\in X$ such that
\begin{enumerate}
\item[(1)] $f^{b-a+p(\delta)}(x)=x$,
\item[(2)] $d(f^k(x),f^k(x_i))<\delta$ for $k\in I_i$.
\end{enumerate}
The specification property guarantees good mixing properties of $f$ and a sufficient number of periodic points. It turns out that topological mixing two-sided subshifts of finite type as well as diffeomorphisms
with a locally maximal topological mixing hyperbolic set are expansive homeomorphisms with specification (see \cite{KH}).
Moreover, expansive homeomorphisms with specification have strong thermodynamic properties, see \cite{Bo2,HRu, Ru}.
We refer to \cite{KH} for a compact presentation.
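To sketch why the specification property holds in the symbolic setting, consider a topologically mixing two-sided subshift of finite type and pick $N\in {\mathbb N}$ with $A^N_{i,j}>0$ for all $i,j$. Given points $x_1,\cdots,x_n$ and intervals $I_1,\cdots,I_n$ as above, one considers the blocks of symbols of each $x_i$ over $I_i$, enlarged on both sides by $m=m(\delta)$, where $m$ is chosen so that two sequences coinciding in all coordinates of absolute value at most $m$ are $\delta$-close. Choosing $p(\delta)\geq 2m(\delta)+N$, the mixing condition provides admissible connecting words of length $N$ between consecutive (enlarged) blocks, and repeating the resulting finite word periodically produces a periodic point $x$ satisfying (1) and (2). This is the standard argument showing that topologically mixing subshifts of finite type have specification.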
\subsection{Smooth dynamical systems}
Let $M$ be a smooth Riemannian manifold and let
$f\colon M\to M$ be a $C^{1+\epsilon}$ map. We consider a compact locally
maximal $f$-invariant set $X\subset M$. Here \emph{locally
maximal} means that there exists an open neighborhood $U\subset M$
of $X$ such that $X=\bigcap_{n\in{\mathbb Z}} f^n(U)$. To avoid trivialities we will always assume
that $h_{{\rm top}}(f|_X)>0$. This rules out the case that $X$ is
only a periodic orbit. Given $x\in X$ and $v\in T_xM$, we
define the \emph{Lyapunov exponent} of $v$ at $x$ (with respect to
$f$) by
\begin{equation}\label{deflya}
\lambda(x,v)\overset{\text{def}}{=}\limsup_{n\to\infty}\frac{1}{n}\log\,\lVert df^n(x)(v)\rVert
\end{equation}
with the convention that $\log 0=-\infty$.
For each $x\in X$ there exist a positive integer $s(x)\le \dim M$,
real numbers $\chi_1(x)< \cdots < \chi_{s(x)}(x)$, and linear spaces
$\{0\}=E^{0}_x\subset \cdots \subset E^{s(x)}_x=T_xM$ such that for
$i=1$, $\ldots$, $s(x)$ we have
\[
E^{i}_x=\{v\in T_xM\colon \lambda(x,v)\le \chi_i(x)\},
\]
and $\lambda(x,v)=\chi_i(x)$ whenever $v\in E^{i}_x\setminus
E^{i-1}_x$.
We will count the values of the Lyapunov exponents
$\chi_i(x)$ with their multiplicities, i.e. we consider the numbers
\[
\lambda_1(x)\le\cdots\le\lambda_{\dim M}(x),
\]
where $\lambda_j(x)=\chi_i(x)$ for each $j\in\{\dim E^{i-1}_x+1,\cdots,\dim
E^i_x\}$.
By the Oseledets theorem, given $\mu\in \EuScript{M}=\EuScript{M}(f|_X)$, the set of Lyapunov regular points (i.e. the set of points where the limit superior in~\eqref{deflya} is actually a limit) has full $\mu$-measure and
$\lambda_i(\cdot)$ is $\mu$-measurable. We denote by
\begin{equation}\label{deflyame}
\lambda_i(\mu)\overset{\text{def}}{=}\int\lambda_i(x) d\mu(x)
\end{equation}
the Lyapunov exponents of the measure $\mu$.
Note that if $\mu\in \EuScript{M}_{\rm E}$ then
$\lambda_i(\cdot)$ is constant $\mu$-a.e., and therefore the corresponding value
coincides with $\lambda_i(\mu)$.
For $\mu\in \EuScript{M}$
set
\[
\chi(\mu)\overset{\text{def}}{=}\min_{i=1,\ldots,\dim M} \lambda_i(\mu) =\lambda_1(\mu).
\]
Furthermore, we define the sets $\EuScript{M}^+=\{\mu\in \EuScript{M}\colon \chi(\mu)>0\}$ and
$\EuScript{M}^+_{\rm E}= \EuScript{M}^+\cap \EuScript{M}_{\rm E}$.
For $x\in {\rm Per}_n(f)$ we have that $\lambda_i(x)=\frac{1}{n}\log
\lvert \delta_i\rvert$, where $\delta_i$ are the eigenvalues of $Df^n(x)$.
We say a periodic point $x$ is \emph{expanding} if $\lambda_1(x)>0$. Let ${\rm EPer}_n(f)$ denote the fixed points of $f^n$ which are expanding. Hence,
${\rm EPer}(f)=\bigcup_{n\in {\mathbb N}} {\rm EPer}_n(f)$ is the set of all expanding periodic points.
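A basic example is provided by the doubling map $f(x)=2x\ ({\rm mod}\ 1)$ on the circle $M=S^1$. Here $Df^n(x)=2^n$ for every $x$, and therefore each $x\in {\rm Per}_n(f)$ satisfies
\[
\lambda_1(x)=\frac{1}{n}\log 2^n=\log 2>0.
\]
Hence every periodic point is expanding, i.e. ${\rm EPer}(f)={\rm Per}(f)$.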
Given $\alpha>0$ and $c>0$ we define
\begin{equation}\label{ne}
X_{\alpha,c}=\{x\in M\colon\lVert Df^k(x)\rVert_{\rm co} \geq ce^{k\alpha} \text{ for all }k\in{\mathbb N}\}.
\end{equation}
Here $\lVert Df^k(x) \rVert_{\rm co}$ is the minimum norm (also called co-norm), which coincides with $\lVert (Df^k(x))^{-1}\rVert^{-1}$ if $f$ is a local diffeomorphism at $x$.
Moreover, we say that a compact forward invariant set $X\subset M$ is \emph{uniformly expanding} if there exist constants $c>0$ and $\alpha>0$ such that $X\subset X_{\alpha, c}$.
For convenience we sometimes also refer to relatively compact forward invariant sets contained in some $X_{\alpha,c}$ as uniformly expanding sets.
We denote by $\chi(X)$ the largest $\alpha>0$ such that $X\subset X_{\alpha,c}$ for some $c>0$.
It is easy to see that if $x\in {\rm EPer}_n(f)$ then there exists $c=c(x)>0$ such that for all
integers $k\ge 0$ and $0\le i\le n-1$ we have
\begin{equation}\label{eqhi}
ce^{k\lambda_{1}(x)} \leq \lVert (Df^k(f^i(x)))^{-1} \rVert^{-1}.
\end{equation}
For $\alpha>0$, $c>0$ and $n\in{\mathbb N}$ we define
\begin{multline}\label{eqexpand}
{\rm EPer}_n(\alpha, c) = \{x\in {\rm Per}_n(f)\colon \lVert(Df^{k}(f^i(x)))^{-1}\rVert^{-1}
\ge c e^{k\alpha}\\
\text{ for all } k\in {\mathbb N} \text{ and } 0\le i\le n-1 \}.
\end{multline}
It follows that, if $\alpha\geq \alpha'$, $c\geq c'$, then
\begin{equation}\label{ni}
{\rm EPer}_n(\alpha,c) \subset {\rm EPer}_n(\alpha',c')
\end{equation}
and
\begin{equation}
{\rm EPer}(f)=\bigcup_{\alpha>0}\bigcup_{c>0}\bigcup_{n=1}^\infty
{\rm EPer}_n(\alpha,c).
\end{equation}
Given $\Phi\in C(X,{\mathbb R}^m)$, $w\in {\rm Rot}(\Phi), n\in {\mathbb N}$ and $r>0$ we define
\begin{equation}\label{eqsirot}
{\rm EPer}_n(w,r,\alpha,c) = {\rm EPer}_n(\alpha, c)\cap {\rm Per}_n(w,r).
\end{equation}
We will need the following construction of uniformly expanding sets.
Let $\alpha>0$, $c>0$, $w\in {\rm Rot}(\Phi)$ and $r>0$ such that ${\rm EPer}_n(w,r,\alpha,c)\not=\emptyset$ for some $n\in {\mathbb N}$. We define
\begin{equation}\label{wagner}
X= X_{w,r,\alpha,c}
= \overline{\bigcup_{n=1}^\infty {\rm EPer}_n(w,r,\alpha, c)} .
\end{equation}
A simple continuity argument shows that $X$ is a uniformly expanding set.
Moreover, for every $\mu\in \EuScript{M}(f|_{X})$ we have ${\rm rv}(\mu)\in D(w,r)$.
Furthermore, for every $n\in{\mathbb N}$ we have
\begin{equation}\label{eqsi}
{\rm Per}_n(f)\cap X = {\rm EPer}_n(w,r,\alpha,c).
\end{equation}
We will need the following classical result (see for example~\cite{Ru}).
\begin{proposition}\label{ha}
Let $f\colon M\to M$ be a $C^{1+\epsilon}$ map and let $X\subset M$ be
a compact uniformly expanding set of $f$. Then we have,
\begin{equation}\label{eqdruck}
\limsup_{n\to\infty}\frac{1}{n}\log {\rm card}\left( {\rm Per}_n(f)\cap X \right) \le h_{\rm top}(f|_X).
\end{equation}
Furthermore, if $f|_X$ is topologically mixing then we have
equality in~\eqref{eqdruck}, and the limsup is in fact a limit.
\end{proposition}
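As a simple illustration of Proposition \ref{ha}, let $f(x)=2x\ ({\rm mod}\ 1)$ be the doubling map on the circle and $X=S^1$, which is a compact uniformly expanding set on which $f$ is topologically mixing. The fixed points of $f^n$ are the solutions of $(2^n-1)x\in {\mathbb Z}$, hence ${\rm card}\, {\rm Per}_n(f)=2^n-1$ and
\[
\lim_{n\to\infty}\frac{1}{n}\log {\rm card}\left({\rm Per}_n(f)\cap X\right)=\log 2=h_{\rm top}(f),
\]
so that equality holds in \eqref{eqdruck}.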
\section{The geometry of rotation sets}
In this section we study the question whether every compact and convex set
can be attained as a rotation set within the class of subshifts of finite type.
Since $\EuScript{M}$ is a compact convex set and $\mu\mapsto {\rm rv}(\mu)$ is continuous and affine, the rotation set is always compact and convex. To show that for shift maps
these are the sole restrictions, we explicitly construct, for an arbitrary convex compact set $K$ in ${\mathbb R}^m$, a continuous potential on a shift space which generates $K$ as its rotation set.
Throughout this section we will use the notation from Section 2.3. The following theorem considers the 2-dimensional case. Subsequently, we will prove the corresponding statement in the $m$-dimensional setting.
\begin{theorem} \label{thm1} Let $K$ be a compact convex subset of $\mathbb{R}^2$ and $( X, f)$ be a full one-sided shift with alphabet ${\mathcal A}=\{0,1\}$. Then there exists a continuous potential $\Phi: X\to {\mathbb R}^2$ such that ${\rm Rot}(\Phi)=K$.
\end{theorem}
\begin{proof}
In the case of empty interior, $K$ is reduced to either a single point or a line segment. In those situations the construction of $\Phi$ is trivial. Hence, we only need to consider the case when $K$ has non-empty interior. Note also that since $K$ is a compact convex set in ${\mathbb R}^2$, the boundary of $K$ is Lipschitz and therefore rectifiable. After normalizing we may assume the boundary has length one.
The proof is based on an approximation argument. We create a sequence of continuous potentials $\Phi_n$ which converges uniformly to a potential $\Phi$. At the same time, we ensure that ${\rm Rot}(\Phi_n)$ converges to $K$. This is done by approximating $K$ by polygons with vertices on the boundary of $K$. Ideally, the rotation sets of $\Phi_n$ would be equal to these polygons, but this cannot be achieved without compromising the uniform convergence. It is sufficient, however, for ${\rm Rot}(\Phi_n)$ to be suitably close to those polygons.
We will construct a sequence $\{\Phi_n\}_{n=0}^\infty$ of potentials on $ X$ with the following properties:
\begin{enumerate}
\item \label{assumptions1} $\Phi_n: X\to {\mathbb R}^2$ is continuous
\item \label{assumptions2} $\|\Phi_{n+1}-\Phi_n\|_\infty\le\frac{11}{8}\cdot\frac{1}{2^{n}}$
\item \label{assumptions3} There exists a set of $2^{n+1}$ equidistant points $\{w_{n,j}\}_{j=1}^{2^{n+1}}$ on the boundary of $K$ and a corresponding set $\{w^*_{n,j}\}_{j=1}^{2^{n+1}}$ of points in $K$ such that
$${\text{conv}}\{w^*_{n,j}\}\subset {\rm Rot}(\Phi_n)\subset K\quad\text{and}\quad \|w_{n,j}-w^*_{n,j}\|_2\le\frac{5}{4}\cdot\frac{1}{6^{n}},$$
where ${\rm conv}\ A$ denotes the convex hull of the set $A$.
\end{enumerate}
The sequence will be explicitly constructed by induction on $n$. We hence start with the case $n=0$.
To initiate, we place two equidistant points $w_{0,1}$ and $w_{0,2}$ on the boundary of $K$. We split $ X$ into 2 cylinders of length one generated by (0) and (1) and define
$$\Phi_0(x)=\begin{cases} w_{0,1}\quad &\text{if } x\in{\mathcal C}(0)\\ w_{0,2}\quad &\text{if } x\in{\mathcal C}(1)\end{cases}$$
Then ${\rm Rot}(\Phi_0)$ is exactly the line segment connecting the points $w_{0,1}$ and $w_{0,2}$. We call ${\mathcal C}_{0,1}={\mathcal C}(0)$ and ${\mathcal C}_{0,2}={\mathcal C}(1)$ the original cylinders of step 0, which correspond to the points $w_{0,1}$ and $w_{0,2}$.
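This can be verified directly: every $\mu\in \EuScript{M}$ satisfies
\[
\int \Phi_0\, d\mu=\mu({\mathcal C}(0))\,w_{0,1}+\mu({\mathcal C}(1))\,w_{0,2},
\]
and since $\mu({\mathcal C}(0))+\mu({\mathcal C}(1))=1$, every rotation vector of $\Phi_0$ is a convex combination of $w_{0,1}$ and $w_{0,2}$. Conversely, each such combination is realized, for instance by the $(p,1-p)$-Bernoulli measures with $p\in[0,1]$.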
To illustrate the process, we will also show how to move from $n=0$ to $n=1$ here (see Figure \ref{RotSet}). Afterwards, we will demonstrate the general step from $n$ to $n+1$.
We move along the boundary of $K$ by $\frac18$ ($\frac14$ of the distance between $w_{0,1}$ and $w_{0,2}$) to the left and to the right of $w_{0,1}$ and place points $L_{0,1}$ and $R_{0,1}$ respectively. Similarly, we place points $L_{0,2}$ and $R_{0,2}$ around $w_{0,2}$. Then $L_{0,1}, R_{0,1}, L_{0,2}, R_{0,2}$ are four equidistant points on the boundary of $K$. This is shown on the left hand side of Figure \ref{RotSet}. To pass to the next potential $\Phi_1$ we start by taking $\Phi_1=\Phi_0$ and then modify it as follows.
We build the original cylinders of step 1 by
\begin{enumerate}
\item [(a)] replicating the generators of the original cylinders of the previous step three times: ${\mathcal C}(0)\leadsto{\mathcal C}(000),\,{\mathcal C}(1)\leadsto{\mathcal C}(111)$
\item [(b)] altering the last element to be either 0 or 1: ${\mathcal C}(000)\leadsto\{{\mathcal C}(000),{\mathcal C}(001)\}$ and ${\mathcal C}(111)\leadsto\{{\mathcal C}(111),{\mathcal C}(110)\}$
\end{enumerate}
We name the four cylinders obtained: ${\mathcal C}_{1,1}={\mathcal C}(000), {\mathcal C}_{1,2}={\mathcal C}(001), {\mathcal C}_{1,3}={\mathcal C}(111), \text{ and }{\mathcal C}_{1,4}={\mathcal C}(110)$. We assign $\Phi_1$ the value of the point to the left of the value of $\Phi_0$ if the last element was not changed, and the point to the right if the last element was changed, e.g. $\Phi_1=R_{0,1}$ on ${\mathcal C}(001)$. As a result, the rotation set of $\Phi_1$ is contained in the polygon spanned by all the points (see left hand side of Figure \ref{RotSet}). However, we need to make sure that ${\rm Rot}(\Phi_1)$ is sufficiently close to this polygon. To get rotation vectors which are close to the vertices, it is necessary to change the values of the potential on certain periodic orbits. A subtlety in doing this is that we should not change the values of $\Phi_1$ on the whole orbit, since we need these values to remain close to $\Phi_0$.
To be precise, each of the original cylinders contains a periodic orbit with the same generator. We define $\Phi_1$ to be the same on all cylinders generated by the first 3 elements of the shifted periodic orbits of this form. For example, since ${\mathcal C}(001)$ supports $\mathcal{O}(001)$ and $f\mathcal{O}(001)=\mathcal{O}(010)$, we set $\Phi_1=R_{0,1}$ on ${\mathcal C}(010)$. We keep $\Phi_1$ at all other points the same as $\Phi_0$. Summarizing all of the above we have
\begin{equation}\label{phi_1} \Phi_1(x)=\begin{cases} L_{0,1}, &\text{if } x\in{\mathcal C}(\pi_3\circ f^k\mathcal{O}(000)),\quad k=0,1\\
R_{0,1}, &\text{if } x\in{\mathcal C}(\pi_3\circ f^k\mathcal{O}(001)),\quad k=0,1\\
L_{0,2}, &\text{if } x\in{\mathcal C}(\pi_3\circ f^k\mathcal{O}(111)),\quad k=0,1\\
R_{0,2}, &\text{if } x\in{\mathcal C}(\pi_3\circ f^k\mathcal{O}(110)),\quad k=0,1\\
\Phi_0(x), &\text{otherwise}
\end{cases}
\end{equation}
The potential $\Phi_1$ is constant on cylinders of length $m_1=3$, and the value of $\Phi_1$ on each such cylinder is a point on the boundary of $K$. Since these cylinders form a disjoint partition of $ X$ by clopen sets, $\Phi_1$ is continuous with respect to the product topology.
Note that $\Phi_1(x)$ and $\Phi_0(x)$ may have different values only when $x$ is in one of the cylinders listed above. In this case $\|\Phi_0(x)-\Phi_1(x)\|$ is either $\|L_{0,i}-w_{0,i}\|$ or $\|R_{0,i}-w_{0,i}\|$ for $i=1,2$, and therefore cannot exceed $\frac18$. Hence, $$\|\Phi_1-\Phi_0\|_\infty=\sup_{x\in X}\|\Phi_1(x)-\Phi_0(x)\|\le\frac{1}{8}<\frac{11}{8}\cdot\frac{1}{2^0}$$
For $j=1,...,4$ let $w_{1,j}=\Phi_1(C\!\!\!\!I_{1,j})$ denote the point on the boundary of $K$, $\mathcal{O}_{1,j}$ be the orbit with the same generator as $C\!\!\!\!I_{1,j}$ and $w^*_{1,j}={\rm rv}_{\Phi_1}(\mu_{1,j})$, where $\mu_{1,j}$ is an $f$-invariant measure supported on $\mathcal{O}_{1,j}$ (see the right-hand side of Figure \ref{RotSet}). Note that from here on we use an additional index for rotation vectors to emphasize the underlying potential. Then $w^*_{1,1}=w_{1,1},\, w^*_{1,3}=w_{1,3}$. However,
\begin{equation}
\begin{split}
w^*_{1,2}&=\mu_{1,2}(\mathcal{O}(001))\Phi_1(C\!\!\!\!I(001))\\
&\quad +\mu_{1,2}(\mathcal{O}(010))\Phi_1(C\!\!\!\!I(010))+\mu_{1,2}(\mathcal{O}(100))\Phi_1(C\!\!\!\!I(100))\\
&=\frac13 w_{1,2}+\frac13 w_{1,2}+\frac13 w_{0,2}\\
&=\frac23 w_{1,2}+\frac13 w_{0,2}
\end{split}
\end{equation}
and $w^*_{1,4}=\frac23 w_{1,4}+\frac13 w_{0,1}$.
We compute the distance between corresponding points and obtain
$$\|w_{1,2}-w^*_{1,2}\|_2=\|w_{1,2}-\frac23 w_{1,2}-\frac13 w_{0,2}\|_2 =\frac13\|w_{1,2}-w_{0,2}\|_2\le\frac{1}{3}\cdot\frac38\le\frac54\cdot\frac16$$
and $\|w_{1,4}-w^*_{1,4}\|_2\le\frac54\cdot\frac16$.
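As a numerical illustration (not part of the proof), the convex combination $w^*_{1,2}=\frac23 w_{1,2}+\frac13 w_{0,2}$ can be checked by averaging $\Phi_1$ over the period-3 orbit $\mathcal{O}(001)$. The Python sketch below assumes hypothetical concrete boundary points on the unit circle, parameterized by the arclength fraction; only the orbit-averaging structure is taken from the text.

```python
import numpy as np

# Hypothetical boundary points of K (here: the unit circle), parameterized
# by the arclength fraction t in [0, 1).
def point(t):
    return np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])

w01, w02 = point(0.0), point(0.5)   # the two equidistant initial points
w12 = point(0.125)                  # w_{1,2} = R_{0,1}, a point to the right of w01

# Phi_1 on the three length-3 cylinders visited by O(001):
# C(001) -> R_{0,1}, C(010) -> R_{0,1}, C(100) -> the Phi_0 value w_{0,2}.
orbit_values = [w12, w12, w02]

# Rotation vector of the uniform invariant measure on O(001):
# the orbit average of Phi_1.
w_star = np.mean(orbit_values, axis=0)
```

The same orbit-averaging computation gives $w^*_{1,4}=\frac23 w_{1,4}+\frac13 w_{0,1}$.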
For any measure $\mu\in\EuScript{M}$, the point
\begin{equation}
{\rm rv}_{\Phi_1}(\mu)=\sum_{C\text{-cylinder of length 3}}\mu(C)\Phi_1(C)
\end{equation}
is a convex combination of the points $\{w_{i,j}: i=0,1;j=1,...,2^{i+1}\}$ in $K$. Therefore, ${\rm Rot}(\Phi_1)$ is a subset of $K$. On the other hand, ${\rm Rot}(\Phi_1)$ contains points $w^*_{1,j}$ and hence ${\text{conv}}\{w^*_{1,j}\}_{j=1}^4\subset {\rm Rot}(\Phi_1)$.
\begin{figure}[h]
\begin{center}
\psfragscanon
\psfrag{1}[c][l]{{\huge{${{\rm Rot} (\Phi_{1})}$}}}
\psfrag{2}[c][l]{{\huge{is inside}}}
\psfrag{9}[c][l]{{\huge{inside}}}
\psfrag{3}[c][l]{{\huge{$w_{0,1}$}}}
\psfrag{4}[c][l]{{\huge{$L_{0,1}$}}}
\psfrag{5}[c][l]{{\huge{$R_{0,2}$}}}
\psfrag{6}[c][l]{{\huge{$w_{0,2}$}}}
\psfrag{7}[c][l]{{\huge{$L_{0,2}$}}}
\psfrag{8}[l][l]{{\huge{$R_{0,1}$}}}
\psfrag{5*}[c][l]{{\huge{$w_{1,4}^*$}}}
\psfrag{8*}[c][l]{{\huge{$w_{1,2}^*$}}}
\psfrag{4'}[c][l]{{\huge{$w_{1,1}$}}}
\psfrag{5'}[c][l]{{\huge{$w_{1,4}$}}}
\psfrag{7'}[c][l]{{\huge{$w_{1,3}$}}}
\psfrag{8'}[l][l]{{\huge{$w_{1,2}$}}}
\scalebox{0.45}{\includegraphics{RotSet.eps}}
\caption{This figure illustrates the step from $n=0$ to $n=1$ in the proof of Theorem \ref{thm1}.}
\label{RotSet}
\end{center}
\end{figure}
Now we are ready to follow the general induction step from $n$ to $n+1$.
Suppose we have defined the sequence of potentials $\Phi_0,\Phi_1,...,\Phi_{n}$ satisfying (\ref{assumptions1}),(\ref{assumptions2}),(\ref{assumptions3}). Here $\Phi_{n}$ is constant on cylinders of size $m_{n}$. We introduce a new set of points on the boundary of $K$. For each $j$ denote by $L_{n,j}$ the point located exactly $\frac14\cdot\frac{1}{2^{n+1}}$ along the boundary to the left of $w_{n,j}$ and by $R_{n,j}$ the point at the same distance to the right. Then the set $\{L_{n,j},R_{n,j}\}_{j=1}^{2^{n+1}}$ gives $2^{n+2}$ equidistant points on the boundary of $K$. We enumerate these points starting from $L_{n,1}$ and moving to the right along the boundary: $w_{n+1,1}=L_{n,1},\, w_{n+1,2}=R_{n,1},\, w_{n+1,3}=L_{n,2},\, w_{n+1,4}=R_{n,2}$ and so on. The general formula is
\begin{equation}
w_{n+1,i}=\left\{
\begin{array}{ll}
L_{n,j}, & \hbox{if $i$ is odd and $j=\frac12(i+1)$;} \\
R_{n,j}, & \hbox{if $i$ is even and $j=\frac12i$.}
\end{array}
\right.
\end{equation}
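The passage from the points $w_{n,j}$ to $w_{n+1,i}$ is a simple refinement algorithm on the arclength parameter. The sketch below is an illustration only: it identifies each boundary point with its arclength fraction $t\in[0,1)$ (total boundary length one) and checks that each step again yields equidistant points.

```python
# One refinement step: each parameter t of w_{n,j} spawns the parameters of
# L_{n,j} and R_{n,j}, shifted by a quarter of the current spacing.
def refine(ts, n):
    new = []
    for t in ts:
        new.append((t - 0.25 / 2 ** (n + 1)) % 1.0)  # L_{n,j}
        new.append((t + 0.25 / 2 ** (n + 1)) % 1.0)  # R_{n,j}
    return new

ts = [0.0, 0.5]            # w_{0,1}, w_{0,2}: two equidistant points
for n in range(3):
    ts = refine(ts, n)     # 4, then 8, then 16 points

# All consecutive gaps (cyclically) are equal after each step.
ts = sorted(ts)
gaps = {round((ts[(i + 1) % len(ts)] - ts[i]) % 1.0, 9)
        for i in range(len(ts))}
```

After three steps the $16$ points are equidistant with spacing $\frac{1}{16}$, matching the count $2^{n+2}$ of the construction.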
We denote the original cylinders of step $n$ by $\{C\!\!\!\!I_{n,j}\}_{j=1}^{2^{n+1}}$. Then for $j=1,...,2^{n+1}$ the corresponding point on the boundary of $K$ is $w_{n,j}=\Phi_{n}(C\!\!\!\!I_{n,j})$. Furthermore, let $\mathcal{O}_{n,j}$ be the orbit with the same generator as $C\!\!\!\!I_{n,j}$ and $w^*_{n,j}={\rm rv}_{\Phi_{n}}(\mu_{n,j})$, where $\mu_{n,j}$ is an $f$-invariant measure supported on $\mathcal{O}_{n,j}$.
We construct the original cylinders of step $n+1$ by replicating the generator of each original cylinder of the previous step $3^{n+1}$ times, so that the new generators have length $m_{n+1}=3^{n+1} m_{n}$, and then changing the last entry to be either 0 or 1. For example, for $j=1,...,2^{n+1}$ define
\begin{equation}
\tau_{j}=(\pi_{m_{n+1}}\mathcal{O}_{n,j})\quad\text{and}\quad\bar{\tau}_{j}=(\pi_{m_{n+1}-1}\mathcal{O}_{n,j},a)
\end{equation}
where $a$ is a binary complement to the last entry of $\tau_j$. Then the sequence of $m_{n+1}$-tuples $\tau_j$ and $\bar{\tau}_j$ will be the generators of the original cylinders of step $n+1$. For each cylinder of step $n$ we obtain 2 new original cylinders of step $n+1$. Let $k_n=m_{n+1}-m_n=(3^n-1)m_n$. We define $\Phi_{n+1}$ as follows
$$\Phi_{n+1}(x)=\begin{cases}
L_{n,j},&\text{if}\, x\in C\!\!\!\!I(\pi_{m_{n+1}}\circ f^k\mathcal{O}(\tau_{j})),\, k=0,...,m_{n+1}-1\\
R_{n,j},&\text{if}\, x\in C\!\!\!\!I(\pi_{m_{n+1}}\circ f^k\mathcal{O}(\bar{\tau}_{j})),\, k=0,...,k_n-1\\
\Phi_n( f^{k_n}\mathcal{O}(\bar{\tau}_{j})),&\text{if}\, x\in C\!\!\!\!I(\pi_{m_{n+1}}\circ f^k\mathcal{O}(\bar{\tau}_{j})),\, k=k_n,...,m_{n+1}-1\\
\Phi_n(x),&\text{otherwise}
\end{cases}$$
By definition, $\Phi_{n+1}$ is constant on cylinders of length $m_{n+1}$ and therefore it is continuous on $ X$.
Note that $ f^{k_n}\mathcal{O}(\bar{\tau}_{j})$ and $\mathcal{O}_{n,j}$ have the same first $m_n-1$ entries and a different $m_n$-th entry. Therefore, these elements are mapped into consecutive points under $\Phi_n$. Since $\Phi_n(\mathcal{O}_{n,j})=w_{n,j}$, either $\Phi_n( f^{k_n}\mathcal{O}(\bar{\tau}_{j}))=w_{n,j-1}$ or $\Phi_n( f^{k_n}\mathcal{O}(\bar{\tau}_{j}))=w_{n,j+1}$. (Here additive operations on $j$ are performed modulo $2^{n+1}$.)
Now we will estimate $\sup_{x\in X}\|\Phi_{n+1}(x)-\Phi_n(x)\|_2$. First, suppose $x\in C\!\!\!\!I(\pi_{m_{n+1}}\circ f^k\mathcal{O}(\tau_{j}))$ for some $k$ between 0 and $m_{n+1}-1$ and $j$ between 1 and $2^{n+1}$. Then $\mathcal{O}(\tau_j)=\mathcal{O}_{n,j}$ and $\Phi_n(\mathcal{O}_{n,j})=w_{n,j}$.
If $j$ is odd, then $w_{n,j}=L_{n-1,i}$ for $i=\frac12(j+1)$ and hence
\begin{equation}
\Phi_n(x)=\Phi_n( f^k\mathcal{O}_{n,j})=L_{n-1,i}=w_{n,j}\quad\text{for all k}.
\end{equation}
If $j$ is even, then $w_{n,j}=R_{n-1,i}$ for $i=\frac12j$. Therefore $\Phi_n(x)=\Phi_n( f^k\mathcal{O}_{n,j})=R_{n-1,i}$ for $k<k_{n-1}$ (here $k_{n-1}=m_n-m_{n-1}$) and $\Phi_n(x)=\Phi_{n-1}( f^{k_{n-1}}\mathcal{O}_{n,j})$ for $k_{n-1}\le k<m_n$. Then the same values repeat periodically for $k$ on each of the intervals of length $m_n$, that is,
$$\Phi_n(x)=\begin{cases}R_{n-1,i}, &\text{ for }sm_n\le k<sm_n+k_{n-1};\\
\Phi_{n-1}( f^{k_{n-1}}\mathcal{O}_{n,j}), &\text{ for } sm_n+k_{n-1}\le k<(s+1)m_n,\\
\end{cases}$$
where $s=0,...,3^{n+1}-1$.
In this case either $\Phi_{n-1}( f^{k_{n-1}}\mathcal{O}_{n,j})=w_{n-1,i-1}$ or $\Phi_{n-1}( f^{k_{n-1}}\mathcal{O}_{n,j})=w_{n-1,i+1}$. Summarizing the above we get
$$\|\Phi_{n+1}(x)-\Phi_n(x)\|_2\le \max\{\|L_{n,j}-w_{n,j}\|,\,\|L_{n,j}-w_{n-1,i-1}\|,\,\|L_{n,j}-w_{n-1,i+1}\|\}$$
Since the distance along the boundary between $L_{n,j}$ and $w_{n,j}$ is $\frac14\cdot\frac{1}{2^{n+1}}$, \\ $\|L_{n,j}-w_{n,j}\|\le \frac{1}{2^{n+3}}$. Also,
\begin{align*}\|L_{n,j}-w_{n-1,i\pm 1}\| &\le\|L_{n,j}-w_{n,j}\|+\|w_{n,j}-w_{n-1,i\pm 1}\|\\
&=\|L_{n,j}-w_{n,j}\|+\|R_{n-1,i}-w_{n-1,i\pm 1}\|\\
&\le \|L_{n,j}-w_{n,j}\|+\|R_{n-1,i}-w_{n-1,i}\|+\|w_{n-1,i}-w_{n-1,i\pm1}\|\\
&\le \frac14\cdot\frac{1}{2^{n+1}}+\frac14\cdot\frac{1}{2^{n}}+\frac{1}{2^{n}}\\
&\le \frac{11}{8}\cdot\frac{1}{2^{n}}.
\end{align*}
The estimates are similar when $x\in C\!\!\!\!I(\pi_{m_{n+1}}\circ f^k\mathcal{O}(\bar{\tau}_{j})),\,\,\, k=0,...,k_n-1$. In this case, since $\tau_j$ and $\bar{\tau}_j$ differ only in the $m_{n+1}$-th entry, $\pi_{m_n}\circ f^k\mathcal{O}(\bar{\tau}_j)=\pi_{m_n}\circ f^k\mathcal{O}_{n,j}$. Thus, $\Phi_n(x)=\Phi_n( f^k\mathcal{O}_{n,j})$. By replacing $L_{n,j}$ by $R_{n,j}$ in the arguments above we obtain
$$\|\Phi_{n+1}(x)-\Phi_n(x)\|_2\le\max\{\|R_{n,j}-w_{n,j}\|_2,\,\|R_{n,j}-w_{n-1,i\pm 1}\|_2\}\le\frac{11}{8}\cdot\frac{1}{2^n}.$$
Finally, consider $x\in C\!\!\!\!I(\pi_{m_{n+1}}\circ f^k\mathcal{O}(\bar{\tau}_{j}))$ when $k=k_n,...,m_{n+1}-1$. As discussed before, either $\Phi_{n+1}(x)=w_{n,j-1}$ or $\Phi_{n+1}(x)=w_{n,j+1}$. We assume $\Phi_{n+1}(x)=w_{n,j-1}$; the other case is similar. If $j$ is even, $j-1$ is odd and $w_{n,j-1}=L_{n-1,i}$ for $i=\frac12j$. Thus $\Phi_n(x)=\Phi_n( f^k\mathcal{O}(\bar{\tau}_j))=L_{n-1,i}=\Phi_{n+1}(x)$ for all $k=k_n,...,m_{n+1}-1$. If $j$ is odd, $j-1$ is even and $w_{n,j-1}=R_{n-1,i}$ for $i=\frac12(j-1)$. Then $\Phi_n( f^k\mathcal{O}(\bar{\tau}_j))=R_{n-1,i}$ for $k_n\le k<k_n+k_{n-1}$ and $\Phi_n( f^k\mathcal{O}(\bar{\tau}_j))=\Phi_{n-1}( f^{k_{n-1}}\mathcal{O}(\bar{\tau}_j))=w_{n-1,i\pm1}$ for $k_n+k_{n-1}\le k<m_{n+1}$. Since
\begin{align*}\|R_{n-1,i}-w_{n-1,i\pm1}\|_2&\le\|R_{n-1,i}-w_{n-1,i}\|_2+\|w_{n-1,i}-w_{n-1,i\pm1}\|_2\\ &\le\frac14\cdot\frac{1}{2^n}+\frac{1}{2^n}\\
&\le\frac54\cdot\frac{1}{2^n}
\end{align*}
we conclude that
\begin{equation}
\|\Phi_{n+1}(x)-\Phi_n(x)\|_2\le\frac54\cdot\frac{1}{2^n}.
\end{equation}
It follows that
\begin{equation}
\sup_{x\in X}\|\Phi_{n+1}(x)-\Phi_n(x)\|_2\le\frac{11}{8}\cdot\frac{1}{2^n},
\end{equation}
which concludes the proof of (\ref{assumptions2}). Next, we prove part (\ref{assumptions3}).
On any cylinder of size $m_{n+1}$ the potential $\Phi_{n+1}$ is equal to one of the points on the boundary of $K$ that were obtained during this or one of the previous steps. Since these cylinders form a disjoint partition of $ X$, for any probability measure $\mu$ the rotation vector ${\rm rv}_{\Phi_{n+1}}(\mu)$ is a convex combination of the boundary points of $K$. Convexity of $K$ implies that ${\rm Rot}(\Phi_{n+1})\subset K$.
As before, we denote the original cylinders of step $n+1$ by $\{C\!\!\!\!I_{n+1,j}\}_{j=1}^{2^{n+2}}$. Then for $j=1,...,2^{n+2}$ let $w_{n+1,j}=\Phi_{n+1}(C\!\!\!\!I_{n+1,j})$ be the corresponding point on the boundary of $K$, $\mathcal{O}_{n+1,j}$ be the orbit with the same generator as $C\!\!\!\!I_{n+1,j}$ and $w^*_{n+1,j}={\rm rv}_{\Phi_{n+1}}(\mu_{n+1,j})$, where $\mu_{n+1,j}$ is a $ f$-invariant measure supported on $\mathcal{O}_{n+1,j}$. Since $w^*_{n+1,j}$ are rotation vectors of $\Phi_{n+1}$ and the rotation set is convex, ${\text{conv}}\{w^*_{n+1,j}\}_j\subset {\rm Rot}(\Phi_{n+1})$.
Using the fact that $\mu_{n+1,j}( f^k\mathcal{O}_{n+1,j})=\frac{1}{m_{n+1}}$ for all $k$, we compute
\begin{equation}
w^*_{n+1,j}={\rm rv}_{\Phi_{n+1}}(\mu_{n+1,j})=\frac{1}{m_{n+1}}\sum_{k=0}^{m_{n+1}-1}\Phi_{n+1}( f^k\mathcal{O}_{n+1,j}).
\end{equation}
If $j$ is odd, $\Phi_{n+1}(\mathcal{O}_{n+1,j})=w_{n+1,j}=L_{n,i}$ for $i=\frac12(j+1)$. Then $\Phi_{n+1}(\mathcal{O}^k_{n+1,j})=L_{n,i}=w_{n+1,j}$ for all $k$ and hence $w^*_{n+1,j}=w_{n+1,j}$. If $j$ is even, $\Phi_{n+1}(\mathcal{O}_{n+1,j})=w_{n+1,j}=R_{n,i}$ for $i=\frac12j$. Then $\Phi_{n+1}(\mathcal{O}^k_{n+1,j})=R_{n,i}=w_{n+1,j}$ for $k<k_n$, while $\Phi_{n+1}(\mathcal{O}^k_{n+1,j})$ is either $w_{n,i-1}$ or $w_{n,i+1}$ for $k\ge k_n$. Assuming $\Phi_{n+1}(\mathcal{O}^k_{n+1,j})=w_{n,i-1}$ we get
\begin{equation}
w^*_{n+1,j}=\frac{1}{m_{n+1}}\left(\sum_{k=0}^{k_{n}-1}w_{n+1,j}+\sum_{k=k_n}^{m_{n+1}-1}w_{n,i-1}\right).
\end{equation}
Hence,
\begin{align*}
\|w_{n+1,j}-w^*_{n+1,j}\|_2&=\frac{1}{m_{n+1}}\Big\|\sum_{k=k_n}^{m_{n+1}-1}(w_{n+1,j}-w_{n,i-1})\Big\|_2\\
&\le\frac{m_{n+1}-k_n}{m_{n+1}}\|w_{n+1,j}-w_{n,i-1}\|_2\\
&\le\frac{m_{n}}{m_{n+1}}(\|R_{n,i}-w_{n,i}\|_2+\|w_{n,i}-w_{n,i-1}\|_2)\\
&\le\frac{1}{3^{n+1}}\left(\frac14\cdot\frac{1}{2^n}+\frac{1}{2^n}\right)\\
&=\frac{5}{4}\cdot\frac{1}{6^{n+1}}.
\end{align*}
This proves (\ref{assumptions3}), and the induction is complete.\\
It follows from (\ref{assumptions1}) and (\ref{assumptions2}) that the sequence $(\Phi_n)_{n\in{\mathbb N}}$ converges uniformly to a continuous potential $\Phi$. For any $\epsilon>0$ there is $N$ such that $\|\Phi-\Phi_n\|_\infty<\epsilon$ whenever $n>N$. Then for $x\in {\rm Rot}(\Phi)$ and $x={\rm rv}_{\Phi}(\mu_x)$ we have
\begin{equation}
\begin{split}
d(x,{\rm Rot}(\Phi_n))&=\inf_{y\in {\rm Rot}(\Phi_n)}\|x-y\| \\
&\le \|{\rm rv}_{\Phi}(\mu_x)-{\rm rv}_{\Phi_n}(\mu_x)\|\le\int\|\Phi-\Phi_n\|d \mu_x<\epsilon.
\end{split}
\end{equation}
Moreover, for $y\in {\rm Rot}(\Phi_n)$ and $y={\rm rv}_{\Phi_n}(\mu_y)$ we have
\begin{equation}
\begin{split}
d(y,{\rm Rot}(\Phi))&=\inf_{x\in {\rm Rot}(\Phi)}\|x-y\|\le\|{\rm rv}_{\Phi}(\mu_y)-{\rm rv}_{\Phi_n}(\mu_y)\| \\
&\le\int\|\Phi-\Phi_n\|d \mu_y<\epsilon.
\end{split}
\end{equation}
Therefore, the Hausdorff distance $d_H({\rm Rot}(\Phi),{\rm Rot}(\Phi_n))<\epsilon$ for all $n>N$ and hence ${\rm Rot}(\Phi_n)$ converges to ${\rm Rot}(\Phi)$.
On the other hand, ${\text{conv}}\{w^*_{n,j}\}_j\subset {\rm Rot}(\Phi_n)\subset K$ and the polygons ${\text{conv}}\{w^*_{n,j}\}_j$ converge to $K$ with respect to the Hausdorff metric. Thus, ${\rm Rot}(\Phi_n)$ also converges to $K$ as $n\to\infty$. We obtain ${\rm Rot}(\Phi)=K$, which concludes the proof of the theorem.
\end{proof}
\begin{remarks}{\rm (i) }
The potential obtained in Theorem \ref{thm1} is not H\"older continuous. To see this, consider $x=(0000000...)$ and $x_n=\mathcal{O}_{n,2}$, which is a periodic point generated by an $m_n$-tuple $(000...01)$. Then $\Phi(x)$ is the point located $\frac14$ along $\partial K$ to the left of $w_{0,1}$. The point $\Phi(x_n)$ is found by starting at $w_{0,1}$, then moving $\frac14$ to the left and $\frac12\cdot\frac{1}{2^n}$ to the right along $\partial K$. Then the distance between $\Phi(x)$ and $\Phi(x_n)$ decreases as $2^{-n}$, whereas $d(x,x_n)=2^{-m_n}$ where $m_n\approx 3^{n^2}$.\\
{\rm (ii)}
While Theorem \ref{thm1} is formulated for a one-sided full shift, the procedure of the proof can be easily generalized to (one- and two-sided) shifts and topologically mixing subshifts of finite type.
In the case of a topologically mixing subshift $(X_A,f)$ of finite type, the fact that $f$ has positive topological entropy guarantees that $f$ has sufficiently many periodic points that can be used to construct the
sequence of potentials $(\Phi_n)_{n\in {\mathbb N}}$ converging to a potential $\Phi$ and satisfying conditions similar to {\rm (1), (2), (3)}.
\end{remarks}
We now generalize Theorem \ref{thm1} to ${\mathbb R}^m$.
\begin{theorem} \label{thm2m} Let $K$ be a compact convex subset of $\mathbb{R}^m$. Then there exist a full one-sided shift map $(X,f)$ with finite alphabet and a continuous potential $\Phi: X\to {\mathbb R}^m$ such that ${\rm Rot}(\Phi)=K$.
\end{theorem}
\begin{proof} We can assume that $K$ has non-empty interior, since otherwise we can repeat the argument of the proof in a lower dimension.
First note that any open bounded convex set in ${\mathbb R}^m$ has Lipschitz boundary (\cite[Corollary 1.2.2.3]{grisvard}). Then there is a finite covering of the boundary of $K$ by open balls $U_1,...,U_N$ with centers at points $A_1,...,A_N\in\partial K$ of radii $r_1,...,r_N$ and bijective maps $h_1,...,h_N$ from each neighborhood into the unit ball $B$ of ${\mathbb R}^m$ such that \begin{enumerate}
\item [(1)] $h_i$ and $h_i^{-1}$ are Lipschitz continuous with constant $L$.
\item [(2)] $h_i(\partial K\cap U_i)=\{(y_1,...,y_m)\in B:y_m=0\}=B_0$
\item [(3)] $h_i({\rm int}\ K\cap U_i)=\{(y_1,...,y_m)\in B:y_m>0\}=B_+$
\end{enumerate}
For convenience we endow ${\mathbb R}^m$ with the maximum norm. Let $ X$ be a one-sided shift space with alphabet ${\mathcal A}=\{0, 1, 2, 3,..., 2N-2, 2N-1\}$. For each coordinate $y_1,...,y_{m-1}$, we apply the procedure used in the proof of the previous theorem to the interval $[-1,1]$ instead of the boundary curve. Then the functions $h_1^{-1},...,h_N^{-1}$ map each point in $B_0$ into $N$ points on the boundary of $K$. To compensate, the potentials $\Phi_n$ will map the cylinders whose generators have only entries from the subset $\{i-1, i\}$ of the alphabet into the points in $U_i$.
Suppose $i$ is any integer between 1 and $N$. We place two initial points in $B_0$ with coordinates $w_{0,1}=(-\frac12,0,0,...,0)$ and $w_{0,2}=(\frac12,0,0,...,0)$. The original cylinders of step zero will be $C\!\!\!\!I_{0,1}^{(i)}=C\!\!\!\!I(i-1)$ and $C\!\!\!\!I_{0,2}^{(i)}=C\!\!\!\!I(i)$ and we set $\Phi_0(C\!\!\!\!I_{0,1}^{(i)})=h_i^{-1}(w_{0,1})$, $\Phi_0(C\!\!\!\!I_{0,2}^{(i)})=h_i^{-1}(w_{0,2})$. The original cylinders of step one $C\!\!\!\!I_{1,j}^{(i)}, \quad (j=1,2,3,4)$ are constructed in the same way as above using elements $i-1$ and $i$ instead of 0 and 1. To select the points in $B_0$ for this step we use the second coordinate $y_2$ and move to the ``left'' (negative direction of the $y_2$-axis) and to the ``right'' (positive direction) by $\frac12$. We obtain the following set of points in $B_0$:
$w_{1,1}=L_{0,1}=(-\frac12,-\frac12,0,...,0)$, $w_{1,2}=R_{0,1}=(-\frac12,\frac12,0,...,0)$, $w_{1,3}=L_{0,2}=(\frac12,-\frac12,0,...,0)$, $w_{1,4}=R_{0,2}=(\frac12,\frac12,0,...,0)$. Potential $\Phi_1$ is defined as in (\ref{phi_1}):
$$\Phi_1(x)=\begin{cases} h_i^{-1}(L_{0,1}),\quad \text{if}\,\, x\in C\!\!\!\!I(\pi_3\circ f^k\mathcal{O}(i-1,i-1,i-1)),\quad k=0,1\\
h_i^{-1}(R_{0,1}),\quad \text{if}\,\, x\in C\!\!\!\!I(\pi_3\circ f^k\mathcal{O}(i-1,i-1,i)),\quad k=0,1\\
h_i^{-1}(L_{0,2}),\quad \text{if}\,\, x\in C\!\!\!\!I(\pi_3\circ f^k\mathcal{O}(i,i,i)),\quad k=0,1\\
h_i^{-1}(R_{0,2}),\quad \text{if}\,\, x\in C\!\!\!\!I(\pi_3\circ f^k\mathcal{O}(i,i,i-1)),\quad k=0,1\\
\Phi_0(x),\quad \text{otherwise}
\end{cases}
$$
Then we repeat the process with the next coordinate $y_3$ and use the original cylinders of step two to define $\Phi_2$. After we are finished with coordinate $m-1$ we start again with $y_1$, but now we shift to the left and right by an additional $\frac14$. Since the number of points in $B_0$ doubles each time, we can use the original cylinders of the next step to define the next potential in the sequence. As in the proof of the previous theorem, we obtain a sequence of continuous potentials $\Phi_n$ such that
\begin{equation}
\|\Phi_{2(m-1)(n+1)}-\Phi_{2(m-1)n}\|_\infty\le L\cdot2\cdot\frac{11}{8}\cdot\frac{1}{2^n}.
\end{equation}
This estimate holds only for the subsequence $\Phi_{2(m-1)n}$, since we need $2(m-1)$ steps to reduce the distance between points in $B_0$ by half. We have also adjusted the quantity on the right-hand side of condition (2) of Theorem \ref{thm1}. One change is the introduction of the Lipschitz constant $L$. The other change is that the length of the interval $[-1,1]$ is two, in contrast to the previous theorem, where we assumed a boundary of length one. The set of points
\begin{equation}
\{h_i^{-1}(w_{2(m-1)n,j}): \quad j=1,...,2^{2(m-1)(n+1)},\quad i=1,...,N \}
\end{equation}
forms an $\frac{L}{2^n}$-net on the boundary of $K$. The corresponding points\\ $h_i^{-1}(w_{2(m-1)n,j}^*)$ in $K$ satisfy condition (3) of Theorem \ref{thm1} up to the constant $2L$ on the right-hand side of the inequality. Thus, the subsequence $\Phi_{2(m-1)n}$ converges uniformly to a continuous potential $\Phi$ and ${\rm Rot}(\Phi)=K$.
\end{proof}
\section{Entropy of rotation sets}\label{press}
In this section we give an alternative definition for rotation sets and their associated entropy function, which is closer to the traditional
definition of topological entropy and does not make use of the variational principle.
Let
$(X,d)$ be a compact metric space, and let $f\colon
X\to X$ be continuous.
Let $\Phi=(\phi_1,\cdots,\phi_m)$ be a continuous ($m$-dimensional) potential. For $x\in X$ and $n\in {\mathbb N}$ we define
the $m$-dimensional Birkhoff average $\frac{1}{n}S_n\Phi(x)$ at $x$ of length $n$ with respect to $\Phi$ by
\begin{equation}\label{defSnm}
\frac{1}{n}S_n\Phi(x)=\left( \frac{1}{n}S_n\phi_1(x),\cdots,\frac{1}{n}S_n\phi_m(x)\right),
\end{equation}
where $\frac{1}{n}S_n\phi_i(x)=\frac{1}{n}\sum_{k=0}^{n-1} \phi_i(f^k(x))$.
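For concreteness, the Birkhoff average can be computed directly on a full shift. The following sketch is illustrative only; it uses a hypothetical two-dimensional potential depending on the first symbol, not one of the potentials constructed in this paper.

```python
# The m-dimensional Birkhoff average (1/n) S_n Phi(x) on the one-sided
# 2-shift, where f is the left shift on sequences.
def birkhoff_average(x, phi, n):
    m = len(phi(x))
    total = [0.0] * m
    for k in range(n):
        value = phi(x[k:])              # evaluate Phi at f^k(x)
        total = [t + v for t, v in zip(total, value)]
    return [t / n for t in total]

# Hypothetical potential: Phi depends only on the first symbol.
phi = lambda x: (0.0, 1.0) if x[0] == 0 else (1.0, 0.0)

# On the periodic point (0 1 1 0 1 1 ...) the averages converge to the
# rotation vector (2/3, 1/3) of the invariant measure on this orbit.
x = [0, 1, 1] * 100
avg = birkhoff_average(x, phi, 300)
```

Limits of such averages along subsequences are exactly what the pointwise rotation set ${\rm Rot}_{Pt}(f,\Phi)$ defined next records.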
Moreover, we define
\begin{multline}\label{defRf}
{\rm Rot}_{Pt}(f,\Phi)=\\
\left\{w\in {\mathbb R}^m: \forall r>0\ \forall N\ \exists n\geq N\ \exists\ x\in X: \ \frac{1}{n}S_n\Phi(x)\in D(w,r)\right\}.
\end{multline}
Here $D(w,r)$ denotes the open Euclidean ball with center $w$ with radius $r$.
We frequently write ${\rm Rot}_{Pt}(f,\Phi)={\rm Rot}_{Pt}(\Phi)$ when it is clear to which dynamical system $f$ we refer. Recall the definition of the rotation set ${\rm Rot}(\Phi)$, see \eqref{defrotset}.
We define
\begin{equation}
{\rm Rot}_E(\Phi)=\{{\rm rv}(\mu) :\mu\in \EuScript{M}_E\}.
\end{equation}
We have the following.
\begin{proposition}\label{prop456}
Let $f:X\to X$ be a continuous map on a compact metric space and let $\Phi=(\phi_1,\cdots,\phi_m):X\to{\mathbb R}^m$ be continuous. Then
\begin{equation}\label{eqinc1}
{\rm Rot}_E(\Phi)\subset {\rm Rot}_{Pt}(\Phi) \subset {\rm Rot}(\Phi).
\end{equation}
\end{proposition}
\begin{proof}
Let $\mu\in \EuScript{M}_E$ and $r>0$. Define $w={\rm rv}(\mu)$. By Birkhoff's Ergodic Theorem, the basin of $\mu$
\begin{equation}
{\mathcal B}(\mu)=\left\{x\in X:\ \frac{1}{n}\sum_{k=0}^{n-1} \delta_{f^k(x)} \to \mu\ \text{ as }\ n\to\infty\right\},
\end{equation}
is a set of full $\mu$-measure, i.e. $\mu({\mathcal B}(\mu))=1$. By weak$*$ convergence, for all $x\in {\mathcal B}(\mu)$ there exists $N=N(x)\in {\mathbb N}$ such that $\frac{1}{n}S_n\Phi(x)\in D(w,r)$ for all $n\geq N$.
This proves the left-hand side inclusion in \eqref{eqinc1}.
\\
To prove the right-hand side inclusion in \eqref{eqinc1} let $w\in {\rm Rot}_{Pt}(\Phi)$ and consider sequences $(x_l)_{l\in {\mathbb N}}\subset X$ and $(n_l)_{l\in{\mathbb N}}\subset{\mathbb N}$ with $n_l\geq l$ such that
$\frac{1}{n_l}S_{n_l}\Phi(x_l)\in D(w,\frac{1}{l})$. The existence of these sequences follows from the definition of ${\rm Rot}_{Pt}(\Phi)$.
Define probability measures
$\mu_{n_l}=\frac{1}{n_l}\sum_{k=0}^{n_l-1} \delta_{f^k(x_l)} .$ Hence ${\rm rv}(\mu_{n_l})\in D(w,1/l)$. Note that $\mu_{n_l}$ is in general not an invariant measure.
Since the space
of all Borel probability measures on $X$ is compact, there exists a Borel probability measure $\mu$ on $X$ which is a weak$\ast$ accumulation point of the measures $\mu_{n_l}$.
It now follows from arguments similar to those in the proof of the Krylov-Bogolyubov Theorem (see for example \cite{KH}) that $\mu$ is invariant. Moreover, by construction ${\rm rv}(\mu)= w$. This completes the proof.
\end{proof}
\begin{proposition}\label{prop24}
Let $f:X\to X$ be a continuous map on a compact metric space and let $\Phi=(\phi_1,\cdots,\phi_m):X\to{\mathbb R}^m$ be continuous. Then
\begin{enumerate}
\item[(i)]
Both ${\rm Rot}_{Pt}(\Phi)$ and ${\rm Rot}(\Phi)$ are compact, and ${\rm Rot}(\Phi)$ is convex;
\item[(ii)]
If for all $w\in {\rm Rot}(\Phi)$ and all $r>0$ there exists $\mu\in\EuScript{M}_E$ such that ${\rm rv}(\mu)\in D(w,r)$ then ${\rm Rot}_{Pt}(\Phi) = {\rm Rot}(\Phi)$.
In particular, if $\overline{\EuScript{M}_E}=\EuScript{M}$ then ${\rm Rot}_{Pt}(\Phi) = {\rm Rot}(\Phi)$.
\end{enumerate}
\end{proposition}
\begin{proof}
As stated before, the weak$*$ compactness and convexity of $\EuScript{M}$ imply that ${\rm Rot}(\Phi)$ is compact and convex.
The statement that ${\rm Rot}_{Pt}(\Phi)$ is closed follows directly from the definition. This proves (i).\\
Suppose that for all $w\in {\rm Rot}(\Phi)$ and all $r>0$ there exists $\mu\in\EuScript{M}_E$ such that ${\rm rv}(\mu)\in D(w,r)$. Then $\overline{{\rm Rot}_E(\Phi)}={\rm Rot}(\Phi)$, and therefore (ii) is a consequence of \eqref{eqinc1} and (i).
\end{proof}
\noindent
{\rm Remark.\ } As a consequence of Proposition \ref{prop24} (ii)
we obtain that ${\rm Rot}_{Pt}(\Phi) = {\rm Rot}(\Phi)$ for topologically mixing subshifts of finite type, Axiom A basic sets and expansive systems with specification, since in these cases
$\overline{\EuScript{M}_E}=\EuScript{M}$ holds.
\\[0.2cm]
\noindent
The following examples show that both of the inclusions \eqref{eqinc1} can be strict.
\begin{example}\label{ex1}
Let $a,b,c,d\in {\mathbb R}$ with $a<b<c<d$. Let $X=[a,b]\cup [c,d]$ and $f:X\to X$ be continuous with $f([a,b])\subset [a,b]$ and $f([c,d])\subset [c,d]$.
Moreover, we assume that $f(a)=a$ and $f(d)=d$. Consider the potential $\Phi={\rm id}_X$. The fact that $[a,b]$ and $[c,d]$ are both invariant sets of $f$ implies that $ {\rm Rot}_{Pt}(\Phi)\subset X$. On the other hand, since $\delta_a,\delta_d\in \EuScript{M}$ the convexity of ${\rm Rot}(\Phi)$ implies ${\rm Rot}(\Phi)=[a,d]$.
In particular, the inclusion ${\rm Rot}_{Pt}(\Phi) \subset {\rm Rot}(\Phi)$ is strict.
\end{example}
\begin{example}\label{ex2}
Let $d\geq 6$ be even and let $f:X\to X$ be the one-sided full shift on the alphabet $\{0,\cdots, d-1\}$.
Let $K$ be a compact and convex subset of ${\mathbb R}^2$ whose boundary is a closed polygon with vertices $w_1,\cdots, w_{d/2}$, which we label counterclockwise. Since there are at least $3$ vertices $w_i$, the set $K$ has non-empty interior.
Pick $w_0$ in the interior of $K$.
Let $l_1,\cdots, l_{d/2}$ be the line segments joining $w_0$ and $w_i$ endowed with the canonical order induced by $w_0<w_i$.
For each $i=1,\cdots, d/2$ we pick a strictly increasing sequence $(w_i(k))_{k\in {\mathbb N}}\subset {\rm int}\ l_i$ with $|w_i(k)-w_i|<1/2^k$, in particular $\lim_{k\to \infty} w_i(k)=w_i$.
We also write $w_i(0)=w_0$.
Next, we define several subsets of $X$.
Let $S_1=\{0,1\}, \cdots, S_{d/2}=\{d-2,d-1\}$ and fix $ \alpha\in {\mathbb N}, \alpha\geq 3$.
For all $i=1,\cdots, d/2$ and all $k\geq \alpha$ we define $X_i(k)=\{x \in X: x_1,\cdots, x_{k}\in S_i\}$. Moreover, let $X_0(\alpha)= X\setminus (X_1(\alpha)\cup \cdots \cup X_{d/2}(\alpha))$.
\\[0.2cm]
\noindent
Finally, we define a potential $\Phi: X\to {\mathbb R}^2$ by
\begin{equation}
\Phi(x)=\begin{cases}
w_0\qquad & \text{if}\,\, x\in X_0(\alpha)\\
w_{i}(k-\alpha) & \text{if}\,\, x\in X_{i}(k)\ {\rm and}\ x\not\in X_i(k+1)\\
w_i & \text{if}\,\, x\in X_{i}(k)\ {\rm for\, all }\, k\geq \alpha
\end{cases}
\end{equation}
Note that $\Phi(x)=w_i$ if and only if $x_k\in \{2i-2,2i-1\}$ for all $k\in {\mathbb N}$, in particular $f|_{\Phi^{-1}(w_i)}$ is conjugate to a full shift in $2$ symbols.
\end{example}
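The case analysis defining $\Phi$ translates into a short procedure. The sketch below fixes hypothetical concrete data: $d=6$, $\alpha=3$, a triangle $K$ with vertices $w_1,w_2,w_3$ and interior point $w_0$ at the origin, and the choice $w_i(k)=w_i+2^{-(k+1)}(w_0-w_i)$, which satisfies $|w_i(k)-w_i|<1/2^k$ for these vertices; a finite word stands in for the cylinder it generates.

```python
import numpy as np

W = {0: np.array([0.0, 0.0]),     # w_0, an interior point
     1: np.array([1.0, 0.0]),     # w_1
     2: np.array([0.0, 1.0]),     # w_2
     3: np.array([-1.0, -1.0])}   # w_3
ALPHA = 3

def w_ik(i, k):
    """The points w_i(k) on the segment [w_0, w_i], with w_i(0) = w_0."""
    if k == 0:
        return W[0]
    return W[i] + 2.0 ** (-(k + 1)) * (W[0] - W[i])

def Phi(word):
    """Phi on the cylinder generated by a finite word over {0,...,5};
    the word is treated as exiting S_i right after it ends."""
    i = word[0] // 2 + 1              # S_i = {2i-2, 2i-1}
    k = 0
    while k < len(word) and word[k] // 2 + 1 == i:
        k += 1                        # longest prefix staying in S_i
    if k < ALPHA:
        return W[0]                   # word lies in X_0(alpha)
    return w_ik(i, k - ALPHA)         # word lies in X_i(k) \ X_i(k+1)
```

For instance, the word $(0,1,0,1,0,2)$ stays in $S_1$ for exactly $k=5$ symbols, so $\Phi$ maps its cylinder to $w_1(2)$.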
\begin{figure}[h]
\begin{center}
\psfragscanon
\psfrag{0}[c][l]{\LARGE{$w_0$}}
\psfrag{1}[c][l]{\LARGE{$w_1$}}
\psfrag{2}[c][l]{\LARGE{$w_2$}}
\psfrag{3}[c][l]{\LARGE{$w_3$}}
\psfrag{4}[c][l]{\LARGE{$w_4$}}
\psfrag{5}[c][l]{\LARGE{$w_5$}}
\psfrag{k}[l][l]{\LARGE{$\{w_1(k)\}$}}
\scalebox{0.65} {
\includegraphics{Ex2_new.eps}
}
\caption{This figure illustrates Example \ref{ex2}.}
\end{center}
\end{figure}
We now list several properties of the system in Example \ref{ex2}.
\begin{theorem}
Let $X$, $f$ and $\Phi$ be as in Example \ref{ex2}. Then
\begin{enumerate}
\item[(i)] $\Phi$ is Lipschitz continuous;
\item[(ii)] ${\rm Rot}(\Phi)=K$;
\item[(iii)] ${\rm Rot}_{Pt}(\Phi)={\rm Rot}(\Phi)$;
\item[(iv)] ${\rm Rot}_E(\Phi)={\rm int }\ K\cup \{w_1,\cdots, w_{d/2}\}$, in particular $${\rm Rot}_E(\Phi)\cap \left(\partial K\setminus \{w_1,\cdots, w_{d/2}\}\right)=\emptyset,$$ and therefore
the inclusion ${\rm Rot}_E(\Phi)\subset {\rm Rot}_{Pt}(\Phi)$ is strict;
\item[(v)] $H({\rm int }\ K)=(\log 2, \log d]$ and $H(\partial K)=\log 2$.
\end{enumerate}
\end{theorem}
\begin{proof}
{\rm (i) } We will work with the $d_{1/2}$ metric (see \eqref{defmet}) on $X$ to show that $\Phi$ is Lipschitz continuous. Set $C={\rm diam}(K)$. Let $x,y\in X$ with $\Phi(x)\not=\Phi(y)$. If $\Phi(x)=w_0$
then $x_k\not=y_k$ for some $k\leq \alpha$. Hence,
\begin{equation}\label{eqwsx}
\|\Phi(x)-\Phi(y)\|_2\leq C = C 2^\alpha \frac{1}{2^\alpha} \leq C 2^\alpha d(x,y).
\end{equation}
The case $\Phi(y)=w_0$ is analogous. The case $\Phi(x)\in l_i\setminus \{w_0\}$ and $\Phi(y)\in l_j\setminus \{w_0\}$ with $i\not=j$ can be treated analogously as in \eqref{eqwsx}.
It remains to consider the case $\Phi(x),\Phi(y)\in l_i\setminus \{w_0\}$ for some $i=1,\cdots,d/2$. Without loss of generality we assume $\Phi(x)<\Phi(y)$ (with respect to the canonical order on $l_i$).
Thus, $d(x,y)\geq\frac{1}{2^{k_x+1}}$ where $k_x$ is defined by $\Phi(x)=w_i(k_x)$. Since $|w_i(k)-w_i|<1/2^k$ for all $k \in {\mathbb N}$ we conclude that
\begin{equation}\label{eqwsx1}
\|\Phi(x)-\Phi(y)\|_2\leq \frac{1}{2^{k_x}} \leq 2 d(x,y),
\end{equation}
which completes the proof of (i).\\
\noindent
(ii) Since $\Phi(X)\subset K$ and since $K$ is compact and convex, it follows that ${\rm Rot}(\Phi)\subset K$. Moreover, $\{w_1,\cdots,w_{d/2}\}\subset {\rm Rot}_E(\Phi)$: for each $i$ there exists a fixed point $x(i)\in X$
with $\Phi(x(i))=w_i$, and the rotation vector of the ergodic measure $\delta_{x(i)}$ is $w_i$. Using that $w_1,\cdots,w_{d/2}$ are the extreme points of $K$ and that ${\rm Rot}(\Phi)$ is compact and convex, the inclusion $K\subset {\rm Rot}(\Phi)$ follows from the Krein-Milman theorem.\\
\noindent
(iii)
By a result of Sigmund (see \cite{DGS}) applied to one-sided full shifts, the set of periodic point measures (see \eqref{defpermes} for the definition) is weak$*$ dense in $\EuScript{M}$. Therefore, (iii) follows from Proposition
\ref{prop24} (ii). \\
\noindent
(iv) We already have shown in the proof of (ii) that $\{w_1,\cdots,w_{d/2}\}\subset {\rm Rot}_E(\Phi)$. The statement ${\rm int }\ K \subset {\rm Rot}_E(\Phi)$ will be proven in Corollary \ref{corintemp}
by using the thermodynamic formalism. Consider $w\in \partial K\setminus \{w_1,\cdots,w_{d/2}\}$. To prove (iv) we have to show that $w\not\in {\rm Rot}_E(\Phi)$. Let $i,j\in \{1,\cdots,d/2\}$ such that $w$ lies on the interior of the line segment $[w_i,w_j]$ joining $w_i$ and $w_j$. By construction of $K$ the line segment $[w_i,w_j]$ is a face of $K$ and since $\Phi(X)\cap (w_i,w_j)=\emptyset$ each invariant measure $\mu$ with ${\rm rv}(\mu)=w$ must put positive measure on each of the $f$-invariant sets $\Phi^{-1}(w_i)$ and $\Phi^{-1}(w_j)$. This implies that
$\mu$ is not ergodic.\\
\noindent
(v) Clearly, $H(w_i)=\log 2$ for all $i\in \{1,\cdots,d/2\}$. Let now $w\in \partial K\setminus \{w_1,\cdots,w_{d/2}\}$. It follows
from a similar argument as in the proof of (iv) that each $\mu\in \EuScript{M}$ with ${\rm rv}(\mu)=w$ must be a convex combination of invariant measures $\mu_1,\mu_2$ both of which have rotation vectors
in $\{w_1,\cdots,w_{d/2}\}$. Therefore, $H(\partial K)=\log 2$ follows from the fact that the measure-theoretic entropy is affine.
Next, we consider $w\in {\rm int }\ K$. Let $\mu_0\in \EuScript{M}$ be the unique measure of maximal entropy of $f$, i.e., the unique invariant measure satisfying $h_{\mu_0}(f)=\log d$. It follows from the construction
that ${\rm rv}(\mu_0)=w_0$. Therefore, it suffices to consider $w\not=w_0$.
Let $\tilde{w}$ be the unique point on $\partial K$ such that $w$ lies on the interior of the line segment $[w_0,\tilde{w}]$. Let $\tilde{\mu}\in\EuScript{M}$ such that ${\rm rv}(\tilde{\mu})=\tilde{w}$
and $h_{\tilde{\mu}}(f)=\log 2$. Let $t\in (0,1)$ such that for $\mu=t\mu_0+(1-t)\tilde{\mu}$ we have ${\rm rv}(\mu)=w$. Since the measure-theoretic entropy is affine we conclude that $h_{\mu}(f)>\log 2$.
Finally, $H({\rm int }\ K)=(\log 2, \log d]$ follows from the continuity of $w\mapsto H(w)$ and from the fact that ${\rm int}\ K$ is connected.
\end{proof}
\noindent
{\it Remark. } By property {\rm (v)}, the entropy $H(w)$ is strictly smaller on the boundary of $K$ than in the interior of $K$. This is, however, not true in general. Indeed, by slightly modifying
Example \ref{ex2} and concentrating a shift with more than $2$ symbols on one of the points $w_i$, the continuity of $w\mapsto H(w)$ implies that $\max_{w\in \partial K} H(w)>\inf_{w\in {\rm int }\ K} H(w)$. \\[0.2cm]
\noindent
Next, we introduce an alternative definition of an entropy function which is closely related to the traditional definition of topological entropy.
Recall the definition of the \emph{topological entropy} of $f$ (see \eqref{defdru}),
\begin{equation}\label{defent}
h_{\rm top}(f) \overset{\text{def}}{=} \lim_{\epsilon \to 0}
\limsup_{n\to \infty}
\frac{1}{n} \log {\rm card} \ F_n(\epsilon),
\end{equation}
where $ F_n(\epsilon)$ is a maximal $(n,\epsilon)$-separated set.
The topological entropy satisfies the variational principle (which is a special case of the variational principle for the topological pressure \eqref{eqvarpri}):
\begin{equation}\label{eqvarprient}
h_{\rm top}(f)= \sup_{\mu\in \EuScript{M}} h_\mu(f).
\end{equation}
Furthermore, the supremum in~\eqref{eqvarprient} can be replaced by
the supremum taken only over all $\mu\in\EuScript{M}_{\rm E}$. We denote by
$E_{\rm max}(f)$ the set of all measures of maximal entropy, that is the set of measures
$\mu\in\EuScript{M}$ which attain the supremum in~\eqref{eqvarprient}. In general $E_{\rm max}(f)$ may be empty (see for example \cite{Mi2}).
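\noindent
{\it Example. } As a standard illustration (not specific to the systems studied in this paper), for the full shift $\sigma$ on $d$ symbols the variational principle \eqref{eqvarprient} gives
\begin{equation*}
h_{\rm top}(\sigma)=\log d=h_{\mu_B}(\sigma),
\end{equation*}
where $\mu_B$ denotes the uniform $(\tfrac{1}{d},\cdots,\tfrac{1}{d})$-Bernoulli measure; in this case $E_{\rm max}(\sigma)=\{\mu_B\}$.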
Fix $w\in {\rm Rot}_{Pt}(\Phi)$.
Let $n\in {\mathbb N}$ and $\epsilon,r>0$. We say $F\subset X$ is an $(n,\epsilon,w,r)$-set if $F$ is $(n,\epsilon)$-separated and
$\frac{1}{n}S_n\Phi(x)\in D(w,r)$ for all $x\in F$. For all $n\in {\mathbb N}$ and $\epsilon,r>0$ we pick a maximal (with respect to the inclusion)
$(n,\epsilon,w,r)$-set $F_n(\epsilon,w,r)$ and define
\begin{equation}\label{defhw}
h(\epsilon,w,r)=\limsup_{n\to\infty}\frac{1}{n} \log {\rm card} \ F_n(\epsilon,w,r)
\end{equation}
and
\begin{equation}\label{eqdefhw234}
h(w)=\lim_{r\to 0} \lim_{\epsilon\to 0} h(\epsilon,w,r).
\end{equation}
Analogously to the case of $h_{\rm top}(f)$, one can show that $h(w)$ does not depend on the choice of the $(n,\epsilon,w,r)$-sets $F_n(\epsilon,w,r)$. Clearly, $h(w)$ and $h(\epsilon,w,r)$ are bounded above by $h_{\rm top}(f)$ and therefore finite.
We now review a standard construction of invariant measures with large entropy.
Given a finite set $F\subset X$ we define a probability measure $ \sigma(F)$ by
\begin{equation}\label{tilsig}
\sigma(F)=
\frac{1}{{\rm card}\ F}
\sum_{x\in F} \delta_x.
\end{equation}
Recall that for a Borel map $g:X\to X$ and a Borel measure $\mu$ on $X$ the push forward of $\mu$ is defined by $g_*\mu(A)=\mu(g^{-1}(A))$.
We will need the following result, which is typically shown by using the Misiurewicz argument when proving the variational principle (see, for example,~\cite[Section 4.5]{KH}).
\begin{lemma}\label{lemKH}
Let $f:X\to X$ be a continuous map on a compact metric space and let $\epsilon>0$.
Let $(F_n)_{n\in {\mathbb N}}$ be a sequence of $(n,\epsilon)$-separated sets in $X$, and define the measures
\begin{equation}\label{defmuin}
\nu_n= \sigma(F_n),
\quad
\mu_n=\frac{1}{n}\sum_{k=0}^{n-1}f^k_*\nu_n.
\end{equation}
Then there exists a weak$\ast$ accumulation point $\mu$ of the measures $(\mu_n)_{n\in{\mathbb N}}$ and any such accumulation point $\mu$ is invariant and satisfies
\begin{equation}\label{lemmis}
\limsup_{n\to \infty}\frac{1}{n}\log {\rm card}\ F_n
\leq h_\mu(f).
\end{equation}
\end{lemma}
The next result establishes an inequality between $h(w)$ and $H(w)$.
\begin{proposition}\label{propungleich}
Let $f:X\to X$ be a continuous map on a compact metric space, let $\Phi=(\phi_1,\cdots,\phi_m):X\to {\mathbb R}^m$ be continuous and let $w\in {\rm Rot}_{Pt}(\Phi)$.
Suppose $H$ is continuous at $w$.
Then $h(w)\leq H(w)$.
\end{proposition}
\begin{proof}
Note that $H(w)$ is well-defined by \eqref{eqinc1}. Let $\eta>0$. It follows from the definition of $h(w)$ (see \eqref{eqdefhw234}) and from
the continuity of $H$ at $w$ that there exists $r^*=r^*(\eta)>0$ such that for all $0<r\leq r^*$ and all $v\in D(w,r^*)$ we have
\begin{equation}\label{eqqaz}
h(w)\leq \lim_{\epsilon\to 0} h(\epsilon,w,r)<h(w)+\frac{\eta}{2}
\end{equation}
and
\begin{equation}\label{eqqaz1}
H(w)-\frac{\eta}{2}<H(v) < H(w)+\frac{\eta}{2}.
\end{equation}
Using \eqref{eqqaz} and the definition of $h(\epsilon,w,r)$ we can pick $0<r<r^*$, $\epsilon>0$ and an increasing sequence $(n_i)_{i\in{\mathbb N}}$ of positive integers such that
\begin{equation}
h(w)-\frac{\eta}{2}<\limsup_{i\to\infty}\frac{1}{n_i} \log {\rm card} \ F_{n_i}(\epsilon,w,r)<h(w)+\frac{\eta}{2}.
\end{equation}
We now apply Lemma \ref{lemKH} to the $(n_i,\epsilon)$-separated sets $F_{n_i}(\epsilon,w,r)$ and obtain the existence of $\mu\in \EuScript{M}$ with $h(w)-\eta/2\leq h_{\mu}(f)\leq H({\rm rv}(\mu))$ and ${\rm rv}(\mu)\in D(w,r)$. It follows that $h(w)-\eta<H(w)$ and since $\eta>0$ was arbitrary the claim follows.
\end{proof}
\noindent
\begin{remark}
We recall that the continuity of $w\mapsto H(w)$ holds for all $w\in {\rm Rot}(\Phi)$ if the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous. In particular, this is true, if $f$ is expansive \cite{Wal:81} or if $f$ is a $C^\infty$ map on a smooth Riemannian manifold \cite{N}.
\end{remark}
\noindent
In the following we establish under rather mild assumptions the identity of $h(w)$ and $H(w)$.
We say a metric space is a Besicovitch space if the Besicovitch covering theorem holds on it. Next, we give an alternative
formulation for the measure-theoretic entropy which is due to Katok.
Let $f:X\to X$ be a continuous map on a compact metric space and
let $\mu\in \EuScript{M}_E$. For $n\in {\mathbb N}$, $\epsilon>0$ and $0<\delta<1$
we denote by $N(n,\epsilon,\delta)$ the minimal number of $\epsilon$-balls in the $d_n$ metric that cover a set of measure greater than or equal to $1-\delta$.
It is shown in \cite{Kat:80} that
\begin{equation}\label{katent}
h_\mu(f)=\lim_{\epsilon\to 0}\liminf_{n\to \infty} \frac{\log N(n,\epsilon,\delta)}{n} = \lim_{\epsilon\to 0}\limsup_{n\to \infty} \frac{\log N(n,\epsilon,\delta)}{n}.
\end{equation}
Given a continuous potential $\Phi$ and $w\in {\rm Rot}(\Phi)$ we say that $H(w)$ is \emph{approximated by ergodic measures} if there exists $(\mu_n)_{n\in{\mathbb N}}\subset \EuScript{M}_E$ such that ${\rm rv}(\mu_n)\to w$ and $h_{\mu_n}(f)\to H(w)$ as $n\to\infty$. In this case we have $w\in {\rm Rot}_{Pt}(\Phi)$ (see Proposition \ref{prop24} {\rm (ii)}).
Being approximated by ergodic measures occurs for several classes of systems and potentials.
For example, we will show in Corollary \ref{corintemp} that if $f$ satisfies {\rm{(STP)}} and $\Phi$ is H\"older continuous then
$H(w)$ can be approximated by ergodic measures for all $w\in {\rm Rot}(\Phi)$.
We are now ready to state our main result about the identity of $h(w)$ and $H(w)$.
\begin{theorem}\label{thhwHw}
Let $f:X\to X$ be a continuous map on a compact metric space $X$ that is a Besicovitch space with respect to the induced topology. Let $\Phi=(\phi_1,\cdots,\phi_m):X\to{\mathbb R}^m$ be continuous
and let $w\in {\rm Rot}(\Phi)$ such that $H$ is continuous at $w$ and $H(w)$ is approximated by ergodic measures. Then $h(w)=H(w)$.
\end{theorem}
\begin{proof}
The inequality $h(w)\leq H(w)$ is shown in Proposition \ref{propungleich}. To show $H(w)\leq h(w)$ we only need to consider the case $H(w)>0$.
Let $\eta>0$ be arbitrary.
Pick $r_0>0$ such that
\begin{equation}
\left|\lim_{\epsilon\to 0} h(\epsilon,w,r)- h(w)\right|< \frac{\eta}{2}
\end{equation}
for all $0<r\leq r_0$.
It therefore suffices
to show that there exists $\epsilon_0>0$ such that
\begin{equation}
h(\epsilon,w,r_0)>H(w)-\frac{\eta}{2}
\end{equation}
for all $0<\epsilon\leq\epsilon_0$.
By uniform continuity of $\Phi$ there exists $\epsilon_0$ such that if $0<\epsilon\leq \epsilon_0$, $x\in X$ and $n\in {\mathbb N}$ then for all
$x_1,x_2\in B_n(x,\epsilon)$ we have
\begin{equation}\label{varpot}
\left|\frac{1}{n}S_n\Phi(x_1)-\frac{1}{n}S_n\Phi(x_2)\right|<r_0/4.
\end{equation}
Here $\frac{1}{n}S_n\Phi$ denotes the $m$-dimensional
Birkhoff average defined in \eqref{defSnm}.
Since $H(w)$ is approximated by ergodic measures there exists $\mu\in \EuScript{M}_E$ such that $|{\rm rv}(\mu)-w|<r_0/2$ and $h_\mu(f)>H(w)-\eta/2$.
Let $0<\delta<1$ be fixed. Applying \eqref{katent} and making $\epsilon_0$ smaller if necessary we conclude that
\begin{equation}\label{eqkatb}
\liminf_{n\to \infty} \frac{\log N(n,\epsilon,\delta)}{n} > H(w)-\eta/2
\end{equation}
for all $0<\epsilon\leq\epsilon_0$. From now we consider a fixed $0<\epsilon\leq\epsilon_0$. We define
\begin{equation}\label{defBn}
{\mathcal B}_{n,r_0/4}(\mu)=\left\{x\in {\mathcal B}(\mu): \left|\frac{1}{l}S_l\Phi(x)-{\rm rv}(\mu)\right|<r_0/4\, \ {\rm for\ all }\, \ l\geq n\right\}.
\end{equation}
Since $({\mathcal B}_{n,r_0/4}(\mu))_{n\in {\mathbb N}}$ is an increasing sequence of Borel sets whose union is a set of full $\mu$-measure we conclude that
\begin{equation}\label{eqvbn}
\lim_{n\to \infty} \mu({\mathcal B}_{n,r_0/4}(\mu))=1.
\end{equation}
We denote by $\tilde{N}(n,\epsilon,\delta)$ the number of Bowen balls determining $N(n,\epsilon,\delta)$ that have non-empty intersection with ${\mathcal B}_{n,r_0/4}(\mu)$. For all sufficiently large $n$ these balls, due to \eqref{defBn}, cover a set of measure at least $1-\delta'$ for some $\delta<\delta'<1$.
It follows from the fact that \eqref{katent} also holds for $\delta'$ that \eqref{eqkatb} remains true when we replace $N(n,\epsilon,\delta)$ by
$\tilde{N}(n,\epsilon,\delta)$.
For $t\in {\mathbb R}$ let $[t]$ denote the largest integer smaller than or equal to $t$.
Let $\beta$ be a Besicovitch constant of $X$. Note that this constant can be chosen independently of the metrics $d_n$ since the corresponding balls are decreasing in $n$.
It follows from the Besicovitch covering theorem that there exist $[\tilde{N}(n,\epsilon,\delta)/\beta]$ Bowen balls
in the collection of balls determining $\tilde{N}(n,\epsilon,\delta)$ that are pairwise disjoint. The centers of these balls form an $(n,\epsilon)$-separated set which we denote by
$F_n(\epsilon,\delta)$. It follows from \eqref{eqkatb} and the construction of the set $F_n(\epsilon,\delta)$ that
\begin{equation}\label{eqh(w)12}
\liminf_{n\to \infty} \frac{\log {\rm card} \ F_n(\epsilon,\delta)}{n} > H(w)-\eta/2.
\end{equation}
Moreover, since each of the balls defining $\tilde{N}(n,\epsilon,\delta)$ has non-empty intersection with ${\mathcal B}_{n,r_0/4}(\mu)$ we obtain from \eqref{varpot} and \eqref{defBn} that
$\frac{1}{n}S_n\Phi(x)\in D(w,r_0)$ for all $x\in F_n(\epsilon,\delta)$. Finally, we may conclude from \eqref{eqh(w)12} (also using the definition of $h(\epsilon,w,r_0)$, see \eqref{defhw}) that $h(\epsilon,w,r_0)>H(w)-\eta/2$, which completes the proof of the theorem.
\end{proof}
\section{Entropy via periodic orbits}
In this section we consider smooth non-uniformly expanding systems and show that under certain assumptions on $w\in {\rm Rot}(\Phi)$ the entropy $H(w)$ is entirely determined by the growth rate of those periodic orbits whose rotation vectors are sufficiently
close to $w$. Our approach heavily relies on previous work by Gelfert and the second author of this paper \cite{GW2} about the computation of the topological pressure via
periodic orbits. Throughout this section we use the notations from Section 2.5.
Let $M$ be a smooth Riemannian manifold and let
$f\colon M\to M$ be a $C^{1+\epsilon}$-map. Let $X\subset M$ be a compact locally maximal $f$-invariant set. We consider $f|_X$ and simply write $f$ instead of $f|_X$.
Let $\Phi=(\phi_1,\cdots,\phi_m): X\to {\mathbb R}^m$ be a continuous potential with rotation set ${\rm Rot}(\Phi)$.
To avoid trivialities we will always assume $h_{\rm top}(f)>0$.
We say that $H(w)$ is uniformly approximated by measures in $\EuScript{M}^+_E$ if there exist $\chi(w)>0$ and $(\mu_k)_{k\in{\mathbb N}}\subset \EuScript{M}_E^+$ such that $\chi(\mu_k)\geq \chi(w)$ for all $k\in{\mathbb N}$, and ${\rm rv}(\mu_k)\to w$ as well as $h_{\mu_k}(f)\to H(w)$ as $k\to\infty$.
We now introduce an entropy-like quantity which is entirely defined by the growth rate of those periodic points that have rotation vectors in a given ball about $w$ and have some uniform expansion.
Let $w\in {\rm Rot}(\Phi),r>0$ and let
$0<\alpha$, $0<c\leq 1$. Define
\begin{equation}
h_{\rm per}(w,r,\alpha,c,n) =
{\rm card} \ {\rm Per}_n(w,r,\alpha,c)
\end{equation}
if ${\rm Per}_n(w,r,\alpha,c)\ne\emptyset$ and
\begin{equation}
h_{\rm per}(w,r,\alpha,c,n)= 1
\end{equation}
otherwise. Furthermore, we define
\begin{equation}\label{hper}
h_{\rm per}(w,r,\alpha,c) = \limsup_{n\to\infty}
\frac{1}{n}\log h_{\rm per}(w,r,\alpha,c,n).
\end{equation}
We have the following.
\begin{proposition}\label{li}
Let $w\in {\rm Rot}(\Phi)$ and suppose $H(w)$ is uniformly approximated by measures in $\EuScript{M}^+_E$. Let $r>0$.
Then for all $0< \alpha< \chi(w)$ we have
\begin{equation}\label{eqas}
H(w) \le
\lim_{c\to 0} h_{\rm per}(w,r,\alpha,c).
\end{equation}
\end{proposition}
\begin{proof}
If $H(w)=0$ the statement is trivial; therefore, we can assume $H(w)>0$.
Let $0<\alpha<\chi(w)$ and let $\delta>0$. Since $H(w)$ is uniformly approximated by measures in $\EuScript{M}^+_E$ there exists $\mu\in \EuScript{M}_E^+$
with $\chi(\mu)>\alpha$, ${\rm rv}(\mu)\in D(w,r/2)$ and $|h_\mu(f)-H(w)|<\delta$.
It is a consequence of Katok's theory of approximation of hyperbolic measures by hyperbolic sets in its version
for non-uniformly expanding maps (see for example \cite{GW2} and the references therein) that there exists
a sequence $(\mu_n)_n$ of measures $\mu_n\in \EuScript{M}_{\rm E}$ supported on compact
invariant expanding sets $X_n\subset X$ such that
\begin{equation}\label{holl}
h_\mu(f) \le \liminf_{n\to\infty}h_{\rm top}(f|_{X_n}),
\end{equation}
$\mu_n\to\mu$ in the weak$\ast$ topology and that ${\rm rv}(x)\in D(w,r)$ for all $x\in {\rm Per}(f)\cap X_n$.
Furthermore, for each $n\in{\mathbb N}$ there exist $l,s\in{\mathbb N}$ such that $f^l|_{ X_n}$
is conjugate to the full shift in $s$ symbols. For every
$\eta>0$ there is a number $N=N(\eta)\ge 1$
such that
\begin{equation}\label{kuh}
h_\mu(f) -\eta
\le h_{\rm top}(f|_{X_n})
\end{equation}
for all $n\geq N$.
Moreover, there exists a number $c_0=c_0(n)$ with $0<c_0(n)\le1$ such that for
every $x\in X_n$ with $x\in {\rm Per}_k(f)$ we have $x\in {\rm Per}_k(\alpha,c_0)$.
Together we obtain
\begin{equation}\label{kir}
{\rm Per}_k(f)\cap X_n \subset {\rm Per}_k(w,r,\alpha,c_0)
\end{equation}
for every $k\in{\mathbb N}$.
Let $l, s\in{\mathbb N}$ such that $f^l|_{X_n}$ is topologically conjugate to the full
shift in $s$ symbols.
Since $l\cdot h_{\rm top}(f|_{X_n}) = h_{\rm top}(f^l|_{X_n})$
(see~\cite[Theorem 9.8]{Wal:81}), we may conclude that
\begin{equation*}
h_\mu(f)-\eta
\le \frac{1}{l}h_{\rm top}(f^l|_{X_n}).
\end{equation*}
It now follows from Proposition~\ref{ha} and an elementary calculation that
\begin{equation}\label{eqrep}
\begin{split}
&h_\mu(f) -\eta\\
&\le
\frac{1}{l}\limsup_{k\to\infty}\frac{1}{k}\log\left({\rm card} \left({\rm Per}_{lk}(f)\cap X_n\right)
\right)\\
&\le \limsup_{k\to\infty}\frac{1}{k}\log\left({\rm card} \left({\rm Per}_{k}(f)\cap X_n \right)\right).
\end{split}
\end{equation}
Combining~\eqref{kir} and~\eqref{eqrep} yields
\begin{equation*}
h_\mu(f)-\eta
\le \limsup_{k\to\infty}\frac{1}{k}\log\left( {\rm card} \ {\rm Per}_k(w,r,\alpha,c_0)\right).
\end{equation*}
Recall that by~\eqref{ni} the map $c\mapsto h_{\rm per}(w,r,\alpha,c)$ is
non-decreasing as $c\to 0$.
Since $\eta>0$ and $\delta>0$ can be chosen arbitrarily small the claim follows.
\end{proof}
The following theorem is the main result of this section.
\begin{theorem}\label{theoperrot}
Let $f:M\to M$ be a $C^{1+\epsilon}$-map, and let $X\subset M$ be a compact $f$-invariant locally maximal set. Let $\Phi=(\phi_1,\cdots,\phi_m): X\to {\mathbb R}^m$ be continuous and
let $w\in{\rm Rot}(\Phi)$ such that $H(w)$ is uniformly approximated by measures in $\EuScript{M}^+_E$ and that $H$ is continuous at $w$. Then for all $0<\alpha<\chi(w)$,
\begin{equation}\label{idmain}
H(w)=\lim_{r\to 0}\lim_{c\to 0}\limsup_{n\to\infty} \frac{1}{n} \log h_{\rm per}(w,r,\alpha,c,n).
\end{equation}
\end{theorem}
\begin{proof}
The ``$\leq$'' part in \eqref{idmain} already follows from Proposition \ref{li}. To prove the opposite inequality
pick $0<\alpha<\chi(w)$ and $\eta>0$.
Since the expression under the limit $r\to 0$ on the right-hand side of \eqref{idmain} is non-increasing as $r\to 0$, it suffices to show that
\begin{equation}\label{idmain2}
\lim_{c\to 0} h_{\rm per}(w,r,\alpha,c)< H(w)+\eta
\end{equation}
for some $r>0$.
By continuity of $H$ at $w$ there exists $r>0$ such that $H(v)<H(w)+\eta/2$ for all $v\in D(w,r)\cap {\rm Rot}(\Phi)$. Pick $c_0>0$ such that
\begin{equation}\label{main4}
\lim_{c\to 0}\limsup_{n\to\infty} \frac{1}{n} \log h_{\rm per}(w,r,\alpha,c,n)-\frac{\eta}{2}<
\limsup_{n\to\infty} \frac{1}{n} \log h_{\rm per}(w,r,\alpha,c_0,n)
\end{equation}
for all $0<c\leq c_0$.
If ${\rm Per}_n(w,r,\alpha,c_0)=
\emptyset$ for all $n\in{\mathbb N}$, then $h_{\rm per}(w,r,\alpha,c_0)=0$ and \eqref{idmain2} trivially holds.
Next we consider the case that ${\rm Per}_n(w,r,\alpha,c_0)\not=
\emptyset$ for some $n\in{\mathbb N}$.
We define
\[
X_{w,r,\alpha,c_0}
\overset{\text{def}}{=} \overline{\bigcup_{n=1}^\infty {\rm Per}_n(w,r,\alpha,c_0)} .
\]
It follows from the continuity of the minimum norm of $Df$ that $X_{w,r,\alpha,c_0} $ is a compact invariant uniformly expanding set for $f$.\footnote{This implies that the limit $c\to 0$ on the
left-hand side of \eqref{main4} is bounded above by $h_{\rm top}(f)$ and thus in particular finite.}
For $n\ge 1$ with ${\rm Per}_n(w,r,\alpha,c_0)\ne\emptyset$ we
define the measure $ \sigma_n= \sigma({\rm Per}_n(w,r,\alpha,c_0))$, where $\sigma(\cdot)$ is defined in \eqref{tilsig}; that is, $\sigma_n$ is the uniform average of the Dirac measures $\delta_x$ over $x\in{\rm Per}_n(w,r,\alpha,c_0)$.
Note that every measure $ \sigma_n$ belongs to $\EuScript{M}$ and is in the convex hull of the set
$\{\delta_x\colon x\in {\rm Per}_n(w,r,\alpha,c_0)\}$.
Since $\EuScript{M}$ is weak$\ast$ compact, there exists a subsequence $( \sigma_{n_k})_k$ converging to some measure
$\mu=\mu_{w,r,\alpha,c_0}\in\EuScript{M}$ in the weak$\ast$ topology.
It follows that $\chi(\mu)\ge \alpha$ and ${\rm rv}(\mu)\in D(w,r)$.
Since $X_{w,r,\alpha,c_0} $ is uniformly expanding, there exists $\delta=\delta(w,r,\alpha,c_0)$ which is an
expansivity constant for $f|_{X_{w,r,\alpha,c_0} }$. In
particular, for every $n\in {\mathbb N}$ and every $0<\varepsilon'\le\delta$ the set
${\rm Per}_n(w,r,\alpha,c_0)$ is
$(n,\varepsilon')$-separated. It now follows from Lemma \ref{lemKH} that
\begin{equation}\label{krey}
\limsup_{n\to\infty}\frac{1}{n}\log {\rm card} \ {\rm Per}_n(w,r,\alpha,c_0) \le
h_{\mu}(f)\le H({\rm rv}( \mu))<H(w)+\frac{\eta}{2}.
\end{equation}
Combining this with \eqref{main4} gives \eqref{idmain2} and the proof is complete.
\end{proof}
\noindent
\begin{remarks} {\rm (i)} While Theorem \ref{theoperrot} is stated in the context of non-uniformly expanding systems the analogous result holds for non-uniformly hyperbolic diffeomorphisms of saddle type. For such a map one defines $\chi(\mu)$ as the
smallest absolute value of the Lyapunov exponents of $\mu$. We refer to \cite{GW1} for more details about these classes of systems.\\
{\rm (ii)} We recall that the continuity of $w\mapsto H(w)$ holds for all $w\in {\rm Rot}(\Phi)$
if the entropy map $\mu\mapsto h_\mu(f)$ is upper semi-continuous (and therefore in particular if $f|_X$ is expansive).\\
{\rm (iii)} If $X$ is a topological mixing hyperbolic set (uniformly expanding or of saddle type) then it can be shown that Theorem \ref{theoperrot} holds for all H\"older continuous potentials $\Phi$ and all $w\in {\rm Rot}(\Phi)$.
Similar results hold for topological mixing subshifts of finite type and expansive homeomorphisms with specification.
\\
{\rm (iv)} Theorem \ref{theoperrot} even provides new information in the case when $X$ is a topological mixing hyperbolic set and $w$ is the rotation vector of the measure of maximal entropy. Indeed, Proposition \ref{ha} provides a version of \eqref{idmain} by considering all periodic points. On the other hand, our result shows that it is already sufficient to consider periodic points with sufficiently ``large'' Lyapunov exponents and rotation vectors sufficiently close to $w$.
\end{remarks}
\section{Dependence on parameters}
Let $f:X\to X$ be a continuous map on a compact metric space. In this section we assume that $f$ has strong thermodynamic properties {\rm{(STP)}}. Let $\Phi=(\phi_1,\cdots,\phi_m)$, where $\phi_1,\cdots,\phi_m\in C^\alpha(X,{\mathbb R})$ for some fixed $\alpha>0$.
The main goal of this section is to study the dependence of the entropy $H(w)$ on $w$ in the interior of the rotation set.
Recall that since the entropy map is upper semi-continuous (property (2) in the definition of {\rm{(STP)}}) it follows that $w\mapsto H(w)$ is continuous. Here we show
that under the assumption of strong thermodynamic properties $w\mapsto H(w)$
is even real analytic on the interior of ${\rm Rot}(\Phi)$.
We start by introducing some notation.
Given $T=(t_1,\cdots,t_m)\in {\mathbb R}^m$ we consider the linear combination $T\cdot \Phi = t_1\phi_1+\cdots + t_m\phi_m$ of the potentials $\phi_1,\cdots,\phi_m$.
We write
\begin{equation}
Q(T)=P_{\rm top}(T\cdot \Phi).
\end{equation}
It follows from property 3 of {\rm{(STP)}}\ that $Q$ is a real-analytic function on ${\mathbb R}^m$.
Let $\mu_T$ denote the unique equilibrium measure of the potential $T\cdot \Phi$ (which is well-defined by property 4 of {\rm{(STP)}}).
As a consequence of \eqref{eqdifpre} we obtain that
\begin{equation}\label{eqw1}
DQ(T)={\rm rv}(\mu_T).
\end{equation}
Hence, the map $T\mapsto {\rm rv}(\mu_T)$ is also real-analytic.
Writing $h(T)=h_{\mu_T}(f)$ gives
\begin{equation}\label{eqvarT}
Q(T)=h(T)+ T\cdot DQ(T)
\end{equation}
which implies that $T\mapsto h(T)$ is also real-analytic.
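\noindent
{\it Example. } As a simple illustration of these formulas (a standard computation, not needed in what follows), let $f$ be the full shift on two symbols, $m=1$, and let $\phi$ depend only on the zeroth coordinate, say $\phi(x)=a$ if $x_0=0$ and $\phi(x)=b$ if $x_0=1$ with $a\not=b$. Then
\begin{equation*}
Q(t)=P_{\rm top}(t\phi)=\log\left(e^{ta}+e^{tb}\right)
\qquad {\rm and} \qquad
Q'(t)={\rm rv}(\mu_t)=\frac{ae^{ta}+be^{tb}}{e^{ta}+e^{tb}},
\end{equation*}
so that $Q$ and $t\mapsto {\rm rv}(\mu_t)$ are real-analytic and $Q''>0$.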
First we apply results in \cite{Je} to obtain a characterization for ${\rm Rot}(\Phi)$ having non-empty interior.
\begin{proposition}\label{proptri}
The following are equivalent.
\begin{enumerate}
\item[(i)]
No non-trivial linear combination of $\Phi$ is cohomologous to a constant.
\item[(ii)]
${\rm int}\ {\rm Rot}(\Phi)\not=\emptyset$.
\end{enumerate}
\end{proposition}
\begin{proof}
Assume that no non-trivial linear combination of $\Phi$ is cohomologous to a constant. It follows from \eqref{gg33} that $Q$ is a strictly convex function on ${\mathbb R}^m$.
Therefore, \cite[Corollary 3]{Je} implies that $\{{\rm rv}(\mu_T): T\in {\mathbb R}^m\}={\rm int}\ {\rm Rot}(\Phi)$ and thus {\rm (ii)} holds.\\
Next, we assume that there exists a nontrivial linear combination of $\Phi$ that is cohomologous to a constant. It is easy to see that in this case $\{{\rm rv}(\mu_T): T\in {\mathbb R}^m\}$ must be contained in some
lower-dimensional affine subspace of ${\mathbb R}^m$. Since $\overline{\{{\rm rv}(\mu_T): T\in {\mathbb R}^m\}}= {\rm Rot}(\Phi)$ (see \cite[Theorem 1]{Je}), we conclude that ${\rm Rot}(\Phi)$ has empty interior.
\end{proof}
A by-product of the proof of Proposition \ref{proptri} is the following.
\begin{corollary}\label{corintemp}
If ${\rm int}\ {\rm Rot}(\Phi)\not=\emptyset$ then
$\{{\rm rv}(\mu_T):\ T\in {\mathbb R}^m\} = {\rm int}\ {\rm Rot}(\Phi)$.
In particular, $H(w)$ is approximated by ergodic measures for all $w\in {\rm Rot}(\Phi)$.
\end{corollary}
Next we show that unless $f$ has zero topological entropy the map $w\mapsto H(w)$ is strictly positive on the interior of the rotation set.
\begin{theorem}
Let $f:X\to X$ be a continuous map on a compact metric space satisfying {\rm{(STP)}} and let $\Phi=(\phi_1,\cdots,\phi_m):X\to{\mathbb R}^m$ be H\"older continuous. Then one and only
one of the following conditions holds.
\begin{enumerate}
\item[(i)]
$h_{\rm top}(f)=0$.
\item[(ii)]
$H(w)> 0$ for all $w\in {\rm int}\ {\rm Rot}(\Phi)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Assume $h_{\rm top}(f)>0$.
If ${\rm int}\ {\rm Rot}(\Phi)=\emptyset$ then (ii) trivially holds. From now on we assume that ${\rm int}\ {\rm Rot}(\Phi)\not=\emptyset$ and thus, by Proposition \ref{proptri} no nontrivial linear combination of $\Phi$ is cohomologous to a constant.
Let $w_0\in {\rm int}\ {\rm Rot}(\Phi)$. As a consequence of Corollary \ref{corintemp} there exists $T_0\in {\mathbb R}^m$ such that ${\rm rv}(\mu_{T_0})=w_0$. By definition,
$\mu_{T_0}$ is the unique equilibrium measure of $T_0\cdot \Phi$ which implies that $H(w_0)=h(T_0)=h_{\mu_{T_0}}(f)$.
If $T_0=0$, then $\mu_{T_0}$ is the unique measure of maximal entropy and we obtain $H(w_0)=h_{\rm top}(f)>0$.
Otherwise, there exists $w_1\in {\rm int }\ {\rm Rot}(\Phi)$ with
\begin{equation} \label{fgh}
T_0 \cdot w_1 > T_0\cdot w_0.
\end{equation}
It follows from Corollary \ref{corintemp} that there exists $T_1\in {\mathbb R}^m$ such that ${\rm rv}(\mu_{T_1})=w_1$.
Applying the variational principle and the fact that $\mu_{T_i}$ is the unique equilibrium measure of the potential $T_i\cdot \Phi$ yields
\begin{equation}\label{eqBS}
P_{\rm top}(T_0\cdot \Phi)= h_{\mu_{T_0}}(f) +T_0\cdot {\rm rv}(\mu_{T_0}) \geq h_{\mu_{T_1}}(f)+T_0\cdot {\rm rv}(\mu_{T_1}).
\end{equation}
Since $T_0\cdot {\rm rv}(\mu_{T_i})= T_0\cdot w_i$, \eqref{fgh} and \eqref{eqBS} imply $H(w_0)=h_{\mu_{T_0}}(f)> h_{\mu_{T_1}}(f)\geq 0$ and we are done.
\end{proof}
Finally we present the main result of this section.
\begin{theorem}\label{thanalytic}
Let $f:X\to X$ be a continuous map on a compact metric space satisfying {\rm{(STP)}} and let $\Phi=(\phi_1,\cdots,\phi_m):X\to{\mathbb R}^m$ be H\"older continuous. Then $w\mapsto H(w)$ is real-analytic
on $ {\rm int}\ {\rm Rot}(\Phi)$.
\end{theorem}
\begin{proof}
If $ {\rm int}\ {\rm Rot}(\Phi)$ is empty there is nothing to prove. From now on we assume $ {\rm int}\ {\rm Rot}(\Phi)\not=\emptyset$.
Without loss of generality we only consider the case $m=2$ (i.e. $\Phi=(\phi_1,\phi_2)$) and leave the general case to the reader.
Recall that
\begin{equation}
I:{\mathbb R}^2\to {\rm int}\ {\rm Rot}(\Phi),\ \ \ T=(t_1,t_2)\mapsto {\rm rv}(\mu_T)
\end{equation}
is a real-analytic surjective map. We will show that $I$ is a $C^\omega$-diffeomorphism.
Let $T, S \in {\mathbb R}^2$ with ${\rm rv}(\mu_T)={\rm rv}(\mu_S)$.
Combining that $\mu_{T}$ and $\mu_{S}$ are the unique equilibrium measures of the potentials $T\cdot \Phi$
and $S\cdot \Phi$ respectively, with \eqref{eqvarT} implies $h(T)=h(S)$. We conclude that $\mu_{T}=\mu_{S}$ which implies that
$T\cdot \Phi- S\cdot \Phi$ is cohomologous to a constant. Therefore, $T=S$ and we have shown that $I$ is a bijection. \\
\noindent
Finally, we have to prove that $I$ is a local $C^\omega$-diffeomorphism.
Equations \eqref{gg33} and \eqref{eqw1} combined with the fact that neither $\phi_1$ nor $\phi_2$ is cohomologous to a constant imply
that
\begin{equation}
\partial_{t_1}^2 Q>0\quad {\rm
and } \quad \partial_{t_2}^2 Q>0.
\end{equation}
Consider now the bilinear form
\[
A(\varphi_1,\varphi_2)=\partial_{\tau_1}\partial_{\tau_2}
P_{\rm top}(t_1\phi_1+t_2\phi_2+\tau_1\varphi_1+\tau_2\varphi_2)\rvert_{\tau_1=\tau_2=0}.
\]
Then $A(v\phi_1+w\phi_2,v\phi_1+w\phi_2)$ coincides with
\[
\begin{pmatrix}v&w\end{pmatrix}
\begin{pmatrix}A(\phi_1,\phi_1)&A(\phi_1,\phi_2)\\
A(\phi_2,\phi_1)&A(\phi_2,\phi_2)\end{pmatrix}
\begin{pmatrix}v\\w\end{pmatrix}
=
\begin{pmatrix}v&w\end{pmatrix}
B
\begin{pmatrix}v\\w\end{pmatrix},
\]
where
\[
B=\begin{pmatrix} \partial^2_{t_1}Q&
\partial_{t_1}\partial_{t_2}Q\\
\partial_{t_1}\partial_{t_2}Q& \partial^2_{t_2}Q\end{pmatrix}.
\]
Since no nontrivial linear combination of $\phi_1$ and $\phi_2$ is
cohomologous to a constant, if $(v,w)\ne0$ then
$A(v\phi_1+w\phi_2,v\phi_1+w\phi_2)>0$ (see \cite{Ru}) and hence
$B$ is positive definite. In particular $\det B$ is positive.
Since $B$ is the derivative $DI$ of $I$, we conclude that ${\rm det}\ DI(t_1,t_2)>0$ for all $(t_1,t_2)\in {\mathbb R}^2$. It now follows from the inverse function theorem that $I$ is a $C^\omega$-diffeomorphism.
Since $T\mapsto h(T)$ is real-analytic we conclude that
\[
w\mapsto H(w)= h\circ I^{-1}(w)
\]
is real-analytic in ${\rm int}\ {\rm Rot}(\Phi)$.
\end{proof}
We say a sequence of compact sets $(E_n)_{n\in {\mathbb N}}\subset {\mathbb R}^m$ is a real-analytic exhaustion of ${\mathbb R}^m$ if $E_n\subset {\rm int }\ E_{n+1}$ for all $n\in {\mathbb N}$, $\bigcup_{n\in{\mathbb N}} E_n = {\mathbb R}^m$ and each $\partial E_n$ is a real-analytic $(m-1)$-dimensional submanifold of ${\mathbb R}^m$. The following is a direct consequence of the proof of Theorem \ref{thanalytic}.
\begin{corollary}\label{corende}
Let $f:X\to X$ be a continuous map on a compact metric space satisfying {\rm{(STP)}} and let $\Phi=(\phi_1,\cdots,\phi_m):X\to {\mathbb R}^m$ be H\"older continuous with ${\rm int }\ {\rm Rot}(\Phi)\not=\emptyset. $
Let $(E_n)_{n\in {\mathbb N}}\subset {\mathbb R}^m$ be a real-analytic exhaustion of ${\mathbb R}^m$. Then $C_n=\{{\rm rv}(\mu_T): T\in \partial E_n\}\subset {\rm int }\ {\rm Rot}(\Phi)$ is a sequence of $(m-1)$-dimensional
real-analytic submanifolds that converges to $\partial {\rm Rot}(\Phi)$ in the Hausdorff metric. Moreover, $w\mapsto H(w)$ varies real-analytically on $C_n$ for all $n\in {\mathbb N}$.
\end{corollary}
The visual aesthetic quality of the image measures the visual attraction of the images for humans. Since visual aesthetics is a subjective attribute \cite{kairanbay2019beauty}, it always depends on personal emotions and preferences. This makes image aesthetic quality assessment a subjective task and evaluated only by experts. If there is a large number of image samples, the efficiency of artificial aesthetic quality assessment will be quite low. However, people tend to agree that some images do indeed more attractive than others in daily life, which creates a computational aesthetics \cite{2008STUDYING}. Computational aesthetics lets the computer mimic the process of human aesthetic assessment and compute the methods to predict the aesthetic quality of the images automatically.
The focus of computational aesthetics is to predict people's emotional responses to visual stimuli through computational technology, study the internal mechanism of human perception and explore the mystery of artificial aesthetic intelligence. Computational visual aesthetics \cite{2017Computational} is the computational processing of human visual information. Image aesthetic assessment is the most popular research direction in the field of computational visual aesthetics, and it is also the first step in studying computational visual aesthetics. Image aesthetic assessment is to simulate human perception and cognition by the computers, which can provide the quantitative assessment of aesthetic quality. Image aesthetic attributes assessment mainly focuses on the quantitative assessment formed by aesthetic attributes such as composition, color and light in the photographic images.
There are two main parts for the image aesthetics quality assessment: the feature extraction part and the assessment part. In the feature extraction, Yan et al. \cite{1640788} proposed the 7-dimensional aesthetic features based on photography knowledge and high-level semantics. The features include simplicity, spatial edges distribution, color distribution, hue count, blur, contrast and brightness. With the development of deep learning, researchers introduce deep convolutional neural networks in the task of image aesthetics assessment. Due to the ability of learning features automatically, people can extract aesthetic features from images by the deep convolutional neural networks without a lot of aesthetic knowledge and professionally photography experiences. However, there are deficiencies in general aesthetic benchmark datasets. For example, in AADB \cite{2016Photo}, each image is only marked by a small number of annotation. So the data labels is kind of subjective. It is difficult to extract the aesthetic attribute features and limits the extraction ability of features for the model. At the same time, the multidimensional attribute assessment will lead to a sharp increase in the number of network parameters, which is not conducive to the actual development and application.
To solve these problems, we propose an efficient method for image aesthetic attribute assessment based on a mixed multi-attribute dataset. Our work makes the following three contributions:
1. We screen and construct the aesthetic mixed dataset with attributes (AMD-A), which has more image aesthetic attribute annotations than traditional aesthetic attribute datasets.
2. We propose an aesthetic multitasking network architecture based on EfficientNet-B0 and ECA channel attention modules to reduce the number of model parameters and realize aesthetic classification, overall scoring, and attribute scoring.
3. We design several kinds of external attribute features and use feature fusion to improve the performance of image aesthetic attribute assessment. In addition, inspired by teacher-student networks, we propose a soft loss to improve the performance of overall aesthetic assessment.
\section{Related work}
\subsection{Image aesthetic assessment}
Image aesthetic assessment is usually treated as a classification or score regression task. In the classification task, images are divided into high-quality and low-quality images \cite{1969Interhemispheric}; in the regression task, images are evaluated by an overall aesthetic score. However, an overall score cannot accurately measure aesthetic quality, since it largely ignores the diversity, subjectivity, and personality within the human aesthetic consensus. An image also contains many aesthetic attributes, such as light and shadow, color, composition, blur, movement, and interest. Related work has therefore turned to multidimensional aesthetic attribute assessment.
In the early stage, handcrafted features were the main way to describe images \cite{2004Classification,2008Photo,2011Aesthetic,2011Assessing}. Datta et al. \cite{2008STUDYING} used low-level features (color, texture, shape, image size, etc.) and high-level features (depth of field, rule of thirds, regional contrast) as image aesthetic features. Marchesotti et al. \cite{6126444} directly used SIFT descriptors (with BOV or Fisher vector encodings) and local color descriptors to classify aesthetic images.
Today, deep learning is widely used in many fields, such as IoV \cite{wang2022software}, IoT \cite{wang2022time}, and edge intelligence \cite{zhang2020psac}. With its development, researchers introduced deep convolutional neural networks into image aesthetic assessment. Thanks to their powerful learning capabilities, aesthetic features can be extracted automatically without substantial theoretical knowledge or photographic experience. In recent years, deep convolutional neural networks have shown good performance in overall aesthetic regression and classification, and a series of strong models \cite{2017A,2015Visual,2017Aestheticd,2021Hierarchical,zhou2022double,xu2022mfgan,zhen2022towards} have emerged. In image aesthetic attribute assessment, scoring methods based on hierarchical multitasking networks \cite{2018Predicting} and incremental-learning multitasking networks \cite{2019Incremental} exploit attributes such as composition, light and color, exposure, and depth of field. However, limited by the amount of data and the categories of aesthetic attributes, these attribute assessment methods do not achieve high accuracy. As the number of attribute categories grows, the network models become extremely complex, which hinders extension and practical application.
\subsection{Aesthetic attribute datasets}
The emergence of large-scale aesthetic datasets provides a rich source of samples for training aesthetic assessment models. Murray et al. \cite{2012AVA} constructed the aesthetic visual analysis dataset (AVA), containing 255,530 images from www.dpchallenge.com. AVA is the benchmark dataset for image aesthetic quality assessment; each image includes a numerical overall aesthetic label, 66 semantic labels, and 14 style labels. Building on this dataset, Kang et al. \cite{kang2020eva} proposed the explainable visual aesthetics dataset (EVA), which contains 4070 images. Each image has at least 30 annotation scores, mainly an overall score and 4 different aesthetic attribute scores. Compared with AVA, EVA adopts a more rigorous annotation collection method and reduces the noisy labels caused by annotators' misunderstanding bias, which facilitates research on aesthetic understanding.
Kong et al. \cite{2016Photo} designed the aesthetics and attributes database (AADB). It contains 9958 images from professional and amateur photographers; each image has an overall score and eleven aesthetic attribute scores. Chang et al. \cite{2017Aesthetic} first proposed annotation information for aesthetic language assessment and designed the photo critique captioning dataset (PCCD), which contains 4235 valid images from the photography website Gurushots.com. Besides the overall score, each image has language comments and scores on composition and perspective, color and illumination, subject, depth of field, focus, and camera use. Fang et al. \cite{2020Perceptual} conducted the first systematic study on smartphone image aesthetic quality assessment and constructed the smartphone photography attribute and quality dataset (SPAQ). It consists of 11125 images taken by 66 smartphones; each image includes an overall label, image attribute labels, and a scene category label.
\subsection{Feature fusion}
Feature fusion is a common way to improve model performance. In aesthetic assessment, the low-level features produced by deep neural networks have high resolution and contain more low-level aesthetic information, such as color, texture, and structure, but having passed through only a few convolutional layers, they carry little high-level semantics and are prone to noise interference \cite{zhou2022contextual,zhou2019multi}. High-level features have stronger semantic information but very low resolution and poor perception of details. In the aesthetic field, local aesthetic attributes based on low-level information are as important as global aesthetic features based on high-level information, especially in aesthetic attribute regression. Deep convolutional networks usually produce their output from the last layer only, so the corresponding attribute features cannot be extracted effectively in aesthetic attribute assessment.
In related work, researchers improve detection and segmentation performance by integrating features from different layers of the neural network. Feature fusion and prediction can be classified as early or late fusion. Early fusion integrates high-level and low-level features and then trains and predicts on the fused features, usually via concatenation or addition; representative work includes Inside-Outside Net \cite{2016Inside} and HyperNet \cite{2016HyperNet}. Late fusion first obtains predictions from different network layers and then fuses the detection results; representative work includes Feature Pyramid Network \cite{2017Feature}, Single Shot MultiBox Detector \cite{2016SSD}, and DenseNet \cite{2016Densely}. This paper mainly adopts the early fusion strategy, integrating the high-level features of the sub-networks with external attribute features to improve the supervised assessment of image aesthetic attributes.
\section{Aesthetic mixed dataset with attributes (AMD-A)}
In order to construct a dataset with a reasonable distribution for both image aesthetic quality assessment and image aesthetic attribute assessment, we rebuild a dataset named the aesthetic mixed dataset with attributes (AMD-A), including 16924 images. According to the task, AMD-A is divided into two sets: one (11166 images) is used for aesthetic overall score regression, and the other (16924 images) is used for aesthetic attribute score classification and regression.
For attribute regression, we collect images from EVA \cite{kang2020eva}, AADB \cite{2016Photo}, PCCD \cite{2017Aesthetic}, PADB, and HADB, where PADB and HADB are self-built datasets. Each image has three attribute labels (light, color, and composition) and one overall label. To increase the training samples for aesthetic regression, we collected 5758 images with only overall scores from other sources: DPChallenge.com, SCUT-FBP5500 \cite{2018SCUT}, Photo.net, and SPAQ \cite{2020Perceptual}. Labels are continuous, each with a score range of 0-1. The distributions of the different labels are shown in Fig.2.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{The distribution of light, color, composition and overall labels in AMD-A}
\end{figure}
Table 1 reports the means and standard deviations of the labels. The results show that the four categories of labels have similar distributions.
\begin{table}[htbp]
\caption{Means and standard deviations for light, color, composition and overall labels}
\centering
\label{tab:1}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
& light & color & composition & overall \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Mean & 0.5324 & 0.5472 & 0.5420 & 0.5398 \\
Standard deviation & 0.1522 & 0.1319 & 0.1388 & 0.2116 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Network architecture}
We build a multitasking network architecture consisting of a backbone network and five sub-networks. The overall architecture is shown in Fig.3.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{The architecture of the network}
\end{figure}
We use EfficientNet-B0 as the backbone network \cite{2019EfficientNet}. EfficientNet has an efficient feature extraction ability and can achieve accurate aesthetic regression with a small number of parameters. The parameter model of EfficientNet-B0 is only 20M, smaller than many existing models, which makes it an ideal pre-trained model for development and application scenarios. The backbone network is shown in blue in Fig.3.
Between the backbone network and the sub-networks, there are ECA channel attention modules. In ECA, GAP denotes global average pooling, $\sigma$ is the sigmoid function, $\psi(C)$ is the function determining the parameter \textit{k}, and each layer is activated by the ReLU function. We introduce the module in Section 4.1.
The five sub-networks are one classification sub-network, three attribute sub-networks, and one regression sub-network, each consisting of fully connected layers. In Fig.3, the classification sub-network is shown in red, the attribute sub-networks in khaki, and the regression sub-network in green. Since the attribute sub-networks share a similar structure, only one is displayed. The yellow part represents the external attribute features (4 light features, 7 color features, and 10 composition features).
The architecture involves two kinds of feature fusion: the external features are fused with the 10 attribute features in each attribute regression task, and the 10 regression features are fused with the 3×10 attribute features in the overall regression task. The training details are explained in Section 4.2.
\subsection{ECA channel attention}
As shown in Fig.3, an ECA channel attention module is added between the backbone network and each sub-network. Recently, much research has improved channel and spatial attention mechanisms to boost performance. ECA channel attention \cite{2019ECA} improves the feature extraction ability and is derived from the SENet module: it proposes a local cross-channel interaction strategy without dimensionality reduction and a method for adaptively selecting the size of the one-dimensional convolution kernel. We therefore use it to improve image aesthetic attribute and overall assessment. To the best of our knowledge, this is also the first use of ECA channel attention in aesthetic attribute assessment.
Instead of reducing the channel dimension after global average pooling, ECA channel attention captures local cross-channel interactions by considering each channel and its \textit{k} neighbors, which can be implemented efficiently by a fast one-dimensional convolution. \textit{k} indicates how many neighbors of a channel participate in predicting its attention weight. To avoid tuning \textit{k} manually by cross-validation, \cite{2019ECA} proposes determining \textit{k} automatically, with the kernel size proportional to the channel dimension as follows:
\begin{equation}
k=\psi(C)\ =\left|\frac{\log_2(C)}{\gamma}+\frac{b}{\gamma}\right|_{\mathrm{odd}}
\end{equation}
${|t|}_{odd}$ denotes the nearest odd number to \emph{t}; \emph{$\gamma$} is set to 2 and \emph{b} to 1. For \emph{C} = 1280, we obtain \emph{k} = 7.
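The kernel-size rule above can be sketched in a few lines of Python. Note that the rounding convention for the "nearest odd number" varies between implementations; the sketch below follows the common reference code, which truncates and then bumps even results to the next odd number, so the value it returns for a given \emph{C} may differ slightly from other variants.

```python
import math

def eca_kernel_size(channels: int, gamma: int = 2, b: int = 1) -> int:
    # k = |log2(C)/gamma + b/gamma|_odd; truncate, then bump even results
    # to the next odd number (one common rounding convention).
    t = int(abs(math.log2(channels) / gamma + b / gamma))
    return t if t % 2 == 1 else t + 1
```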
\subsection{Multitasking network}
We use EfficientNet-B0 as the backbone network and design the classification sub-network for the classification task, with an ECA channel attention module between them to improve aesthetic feature extraction. The classification task divides images into ten categories according to their aesthetic scores, i.e., a coarse-grained aesthetic regression. During training, we unfreeze the parameters of the backbone network and the classification sub-network, obtain 1280×7×7 high-level aesthetic features from EfficientNet-B0, and reweight them through the ECA channel attention module. A global average pooling layer then turns the high-level features into 1280 features, which one fully connected layer with an activation function reduces to 10; each output is the probability of one aesthetic category.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{fig4.pdf}
\caption{Attribute feature fusion}
\end{figure}
After the classification task, we train the three attribute sub-networks in the order of light, color, and composition, using the attribute labels of each image. The attribute sub-networks are structured like the classification sub-network: one fully connected layer with an activation function reduces the 1280 features to 10 attribute features. As shown in Fig.4, we concatenate the external features with these attribute features and regress them to an attribute score through one fully connected layer and an activation function. The design and calculation of the external features are described in Section 5.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.35]{fig5.pdf}
\caption{Overall feature fusion}
\end{figure}
\begin{equation}
\mathrm{Loss}=\mathrm{Loss}_{\mathrm{MSE}}+\lambda\cdot\mathrm{Loss}_{\mathrm{soft}}
\end{equation}
As shown in Fig.5, the regression sub-network also reduces the 1280 features to 10. We then combine these 10 features with the 3×10 attribute features from the penultimate layers of the attribute sub-networks and obtain the overall aesthetic score through one fully connected layer and an activation function. Inspired by teacher-student networks \cite{2015Distilling}, we use the 10 output features of the classification sub-network to guide the 10 features of the regression sub-network: we compute the relative entropy between the two feature sequences and define it as the soft loss. The regression sub-network is trained with the MSE loss of the score regression plus the weighted soft loss. We set the weight of the soft loss to 0.1, as justified in Section 6.
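A minimal sketch of this combined objective, assuming the soft loss is the KL divergence (relative entropy) between the softmax distributions induced by the classification (teacher) and regression (student) feature vectors; the function names and the use of NumPy are illustrative, not the actual training code:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D feature vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def soft_loss(teacher_feats, student_feats):
    """Relative entropy (KL divergence) between the distributions induced
    by the classification (teacher) and regression (student) features."""
    p = softmax(np.asarray(teacher_feats, dtype=float))
    q = softmax(np.asarray(student_feats, dtype=float))
    return float(np.sum(p * np.log(p / q)))

def total_loss(pred_score, true_score, teacher_feats, student_feats, lam=0.1):
    """MSE regression loss plus the weighted soft loss (lambda = 0.1)."""
    mse = (pred_score - true_score) ** 2
    return mse + lam * soft_loss(teacher_feats, student_feats)
```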
\section{Attribute Features Fusion}
Before training the three attribute regression sub-networks, we design and calculate the external attribute features of the input images, comprising light features, color features, and composition features, and store them with the data labels. The design details are presented in this section.
\subsection{Light features}
Referring to the value and lightness information \cite{2008STUDYING}, we extract four light features: the mean of value (\textit{f1}), the standard deviation of value (\textit{f2}), the mean of lightness (\textit{f3}), and the standard deviation of lightness (\textit{f4}).
The light features are obtained by computing the means and standard deviations of the V (value) and L (lightness) channels, both ranging over 0-255. With \textit{x} denoting a pixel of image \textit{I}, the mean of value (\textit{f1}), the standard deviation of value (\textit{f2}), the mean of lightness (\textit{f3}) and the standard deviation of lightness (\textit{f4}) are computed as follows:
\begin{equation}
f_1=\frac{1}{|I|}{\mathrm{\sum}_{x\in{I}}V(x)}
\end{equation}
\begin{equation}
f_2=std(V(x))
\end{equation}
\begin{equation}
f_3=\frac{1}{|I|}{\mathrm{\sum}_{x\in{I}}L(x)}
\end{equation}
\begin{equation}
f_4=std(L(x))
\end{equation}
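The four light features can be sketched as follows, assuming value is computed as max(R, G, B) and lightness as (max + min)/2 on the 0-255 scale; the exact color-space conversion used in practice may differ slightly:

```python
import numpy as np

def light_features(rgb):
    """f1-f4 for an H x W x 3 RGB image: mean/std of value (HSV V, taken
    as max(R, G, B)) and mean/std of lightness (HSL L, taken as
    (max + min) / 2), both on the 0-255 scale."""
    rgb = np.asarray(rgb, dtype=np.float64)
    cmax = rgb.max(axis=2)
    cmin = rgb.min(axis=2)
    value = cmax
    lightness = (cmax + cmin) / 2.0
    return value.mean(), value.std(), lightness.mean(), lightness.std()
```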
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{fig6.pdf}
\caption{Samples with different light effects}
\end{figure}
\begin{table}[htbp]
\caption{Light feature values for samples}
\centering
\label{tab:1}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
Light feature & 6(a) & 6(b) & 6(c) & 6(d) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Mean of value (\textit{f1}) & 148.253016 & 34.985529 & 72.411960 & 139.248896 \\
Standard deviation of value (\textit{f2}) & 52.104682 & 40.385166 & 29.037064 & 39.939841 \\
Mean of lightness (\textit{f3}) & 183.823056 & 148.253016 & 79.659121 & 151.227269 \\
Standard deviation of lightness (\textit{f4}) & 50.643524 & 78.682690 & 28.775265 & 43.909947 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Fig.6 shows four sample images with different light effects: Fig.6(a) and Fig.6(b) have good light effects, while Fig.6(c) and Fig.6(d) have poor light effects. Table 2 lists the corresponding light feature values. Images with better light effects generally have higher means and standard deviations; an overly dark image has low means and standard deviations, while an overexposed image has high means but low standard deviations.
\subsection{Color features}
Referring to the color information \cite{2012The}, we extract seven color features: the weight of the color channel (\textit{f1}), the number of RGB dominant colors (\textit{f2}), the degree of RGB dominant colors (\textit{f3}), the number of HSV dominant colors (\textit{f4}), the degree of HSV dominant colors (\textit{f5}), the number of dominant hues (\textit{f6}), and the contrast ratio of dominant hues (\textit{f7}).
By measuring the similarity of the RGB color channels, images can be divided into colored three-channel images, approximately grayscale images, and grayscale single-channel images. To compute the weight of the color channel \textit{f1}, the image is converted into RGB channels. If the three channels are exactly identical, \textit{f1} = 0. Otherwise, we calculate the average difference between the RGB channels at each pixel: if the average difference is less than 10, \textit{f1} = 0.5; otherwise the image is considered a colored three-channel image and \textit{f1} = 1.
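A sketch of this three-way rule for \textit{f1}; the text does not specify how the per-pixel channel difference is aggregated, so the mean of the three pairwise absolute differences is an assumption:

```python
import numpy as np

def color_channel_weight(rgb):
    """f1: 0 for grayscale images whose R, G, B channels are identical,
    0.5 for near-grayscale images (mean per-pixel channel difference < 10),
    and 1 for colored three-channel images."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if np.array_equal(r, g) and np.array_equal(g, b):
        return 0.0
    # mean of the three pairwise absolute channel differences per pixel
    diff = (np.abs(r - g) + np.abs(g - b) + np.abs(r - b)) / 3.0
    return 0.5 if diff.mean() < 10 else 1.0
```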
The dominant colors are calculated from the color histogram in the RGB space and the hue histogram in the HSV space. For the RGB dominant colors, we quantize each RGB channel to 8 values, creating a 512-dimensional histogram {h$_{RGB}$} = \{{h$_{0}$}, {h$_{1}$}, ..., {h$_{511}$}\}, where {h$_{i}$} is the number of pixels in the i-th bin. {c$_{1}$} is a threshold parameter with {c$_{1}$} = 0.01: a bin whose count reaches {c$_{1}$} times the largest bin count is considered an RGB dominant color. We define the number of RGB dominant colors (\textit{f2}) as follows:
\begin{equation}
\begin{aligned}
f_{2}=\sum_{k=0}^{511} \mathbf{1}\left(h_{k} \geq c_{1} \max _{i} h_{i}\right)
\end{aligned}
\end{equation}
The degree of RGB dominant colors (\textit{f3}) indicates the proportion of the image occupied by the dominant color. The formula is as follows:
\begin{equation}
\begin{aligned}
f_{3}=\frac{\max _{i} h_{i}}{|I|}
\end{aligned}
\end{equation}
Similarly, replacing the RGB channels with the HSV channels, we obtain the number of HSV dominant colors (\textit{f4}) and the degree of HSV dominant colors (\textit{f5}).
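The RGB variant of the dominant-color features can be sketched as follows (the HSV variant is analogous after converting the color space); the bin-index layout is an implementation choice:

```python
import numpy as np

def rgb_dominant_colors(rgb, c1=0.01):
    """f2 (number of RGB dominant colors) and f3 (degree of the dominant
    color) from a 512-bin histogram, each channel quantized to 8 levels."""
    q = (np.asarray(rgb) // 32).astype(np.int64)   # 8 levels of width 32
    idx = q[..., 0] * 64 + q[..., 1] * 8 + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=512)
    f2 = int(np.sum(hist >= c1 * hist.max()))
    f3 = float(hist.max()) / idx.size
    return f2, f3
```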
For \textit{f6} and \textit{f7}, we first eliminate pixels with saturation values less than 0.2, in other words, all nearly white or dark pixels. We then compute the hue histogram of the remaining pixels with 20 uniform intervals, each covering an 18° sector of the hue ring. In the hue histogram {H$_{hue}$} = \{{h$_{0}$}, {h$_{1}$}, ..., {h$_{19}$}\}, {h$_{i}$} is the pixel count of the i-th interval. The formula for the number of dominant hues is as follows:
\begin{equation}
\begin{aligned}
f_{6}=\sum_{i=0}^{19} \mathbf{1}\left(\left|h_{i}\right| \geq c_{2}|I|\right)
\end{aligned}
\end{equation}
Similarly, we set {c$_{2}$} = 0.01. The contrast ratio of dominant hues (\textit{f7}) indicates the maximum contrast between two dominant hues in the image; the formula is as follows:
\begin{equation}
\begin{aligned}
f_{7}=\max _{i, j}\left\|\left|h_{i}\right|-\left|h_{j}\right|\right\|
\end{aligned}
\end{equation}
Fig.7 shows four sample images with different color effects: Fig.7(a) and Fig.7(b) have good color effects, while Fig.7(c) and Fig.7(d) have poor color effects. Table 3 lists the corresponding color feature values. Images with better color effects generally have more RGB and HSV dominant colors, a lower degree of dominance, and a higher contrast ratio of dominant hues; in other words, images with richer colors tend to have better color effects.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{fig7.pdf}
\caption{Samples with different color effects}
\end{figure}
\begin{table}[htbp]
\caption{Color feature values for samples}
\centering
\label{tab:2}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
Color feature & 7(a) & 7(b) & 7(c) & 7(d) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Weight of color channel \textit{f1} & 1.0 & 1.0 & 1.0 & 1.0 \\
Number of RGB dominant colors \textit{f2} & 102 & 100 & 19 & 14 \\
Degree of RGB dominant colors \textit{f3} & 0.087823 & 0.075602 & 0.253429 & 0.228055 \\
Number of HSV dominant colors \textit{f4} & 220 & 171 & 49 & 39 \\
Degree of HSV dominant colors \textit{f5} & 0.040854 & 0.052193 & 0.160361 & 0.197352 \\
Number of dominant hues \textit{f6} & 8 & 10 & 5 & 4 \\
Contrast ratio of dominant hues \textit{f7} & 162 & 162 & 18 & 18 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Composition Features}
We extract ten composition features: golden section (\textit{f1}), center (\textit{f2}), slant (\textit{f3}), triangle (\textit{f4}), guideline (\textit{f5}), rule of thirds (\textit{f6}), symmetry (\textit{f7}), diagonal (\textit{f8}), frame (\textit{f9}) and circle (\textit{f10}).
For the composition features \textit{f1} and \textit{f2}, we first obtain the salient area of the image through saliency detection \cite{liu2020dynamic}, and then compute the distance \textit{d} from the centroid of the salient area to the golden-section points or the central point of the image. We divide \textit{d} by the diagonal length of the image to indicate the proximity to the composition, and subtract this ratio from 1 so that the feature value correlates positively with compositional quality. The formula is as follows:
\begin{equation}
\begin{aligned}
f_{1,2}=1-\frac{d}{\sqrt{w^{2}+h^{2}}}
\end{aligned}
\end{equation}
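A sketch of the distance-based features \textit{f1} and \textit{f2}; the centroid (cx, cy) is assumed to come from the saliency map, and the golden-section points are assumed to lie at the reciprocal golden-ratio divisions of the width and height:

```python
import numpy as np

def golden_section_points(w, h):
    """The four golden-section points of a w x h image."""
    g = 1 / 1.618
    return [(x, y) for x in (w * (1 - g), w * g) for y in (h * (1 - g), h * g)]

def position_feature(cx, cy, w, h, anchors):
    """1 - d / diagonal, where d is the distance from the saliency centroid
    (cx, cy) to the nearest anchor point (golden-section points for f1,
    the image center for f2)."""
    d = min(np.hypot(cx - ax, cy - ay) for ax, ay in anchors)
    return 1.0 - d / np.hypot(w, h)
```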
We obtain the composition feature lines of the image with the model in \cite{HanEccv20SemLine} and compute \textit{f3}-\textit{f6}. We first determine the reference lines for the rule of thirds, symmetry, diagonal, and slant compositions. For each detected line, we calculate the distances between its two endpoints and the endpoints of the reference line, divide by the long side of the image, and average them; subtracting this value from 1 gives the feature value. To better distinguish accurate matches, we set a threshold of 0.3, below which the feature is set to 0. The calculation formula for \textit{f3}-\textit{f6} is as follows:
\begin{equation}
\begin{aligned}
f_{3-6}=1-\frac{d_{1}+d_{2}}{2 \max (w, h)}
\end{aligned}
\end{equation}
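A sketch of this line-proximity score for \textit{f3}-\textit{f6}; the endpoint pairing between the detected line and the reference composition line is assumed fixed here, whereas a full implementation would test both pairings and keep the better one:

```python
import numpy as np

def line_feature(p1, p2, q1, q2, w, h, threshold=0.3):
    """Closeness of a detected line (p1, p2) to a reference composition
    line (q1, q2): 1 - (d1 + d2) / (2 * max(w, h)), zeroed below the
    threshold."""
    d1 = np.hypot(p1[0] - q1[0], p1[1] - q1[1])
    d2 = np.hypot(p2[0] - q2[0], p2[1] - q2[1])
    value = 1.0 - (d1 + d2) / (2 * max(w, h))
    return value if value >= threshold else 0.0
```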
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{fig8.pdf}
\caption{Samples with different composition effects}
\end{figure}
\begin{table}[htbp]
\caption{Composition feature values for samples}
\centering
\label{tab:1}
\begin{tabular}{ccccccccccc}
\hline\noalign{\smallskip}
Composition feature & 8(a) & 8(b) & 8(c) & 8(d) & 8(e) & 8(f) & 8(g) & 8(h) & 8(i) & 8(j) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Golden Section \textit{f1} & 0.67 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Center \textit{f2} & 0 & 0.85 & 0 & 0 & 0 & 0.66 & 0.76 & 0 & 0.69 & 0.70 \\
Slant \textit{f3} & 0 & 0 & 0.48 & 0 & 0.41 & 0 & 0 & 0.71 & 0 & 0 \\
Triangle \textit{f4} & 0 & 0 & 0 & 0.44 & 0 & 0 & 0 & 0 & 0 & 0 \\
Guideline \textit{f5} & 0 & 0 & 0 & 0 & 0.71 & 0 & 0 & 0 & 0 & 0 \\
Rule of Thirds \textit{f6} & 0 & 0 & 0 & 0 & 0.48 & 0.84 & 0 & 0 & 0 & 0 \\
Symmetry \textit{f7} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
Diagonal \textit{f8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
Frame \textit{f9} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
Circle \textit{f10} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
We obtain features \textit{f7}-\textit{f10} by checking whether the intersections of the main composition lines lie within the image. If composition feature lines intersect on the image, we calculate their horizontal angles. If more than one pair of horizontal angles are complementary and the distances between the intersections of the lines are within 0.05max(w, h), \textit{f7} = 1. If the angles of any three composition feature lines with the horizontal sum to 360° and their intersections lie within the image, we consider it a triangle composition and \textit{f8} = 1. If two composition feature lines are approximately parallel in the horizontal or vertical direction and another line is almost perpendicular to them, we consider it a frame composition and \textit{f9} = 1. If Hough circle detection finds a circle whose radius is longer than 0.1min(w, h), we consider it a circle composition and \textit{f10} = 1. Otherwise, the features \textit{f7}-\textit{f10} are 0.
Fig.8 shows ten sample images with different composition effects, and Table 4 lists the corresponding composition feature values. Images with one or more nonzero composition features have better composition effects.
\subsection{Feature fusion}
The network structure in Fig.3 shows the external attribute features {f$_{external}$} in yellow (4 light features, 7 color features, or 10 composition features). We concatenate these features with the high-level aesthetic features extracted by the backbone network, as described in Section 4.2. The experimental results in Section 6 verify this feature fusion method.
\section{Experiment}
\subsection{Training details}
We set the classification batch size to 32, the regression batch size to 64, and the learning rate to 0.0001. We use Adam as the optimizer with betas (0.98, 0.999) and weight decay 0.0001. If the classification accuracy does not improve for two consecutive rounds, or the regression loss does not decrease for two consecutive rounds, the learning rate is multiplied by 0.5. Our environment is MindSpore \cite{mindspore} 1.6.0 with Nvidia TITAN XP GPUs. We split the dataset into training, validation, and test sets with a ratio of 8:1:1.
\subsection{Analysis of results}
This paper uses several indicators to measure the results.
The mean squared error (MSE) measures the estimation error between the predicted results and the true values; the formula is:
\begin{equation}
MSE=\frac{1}{N}\sum_{i}{(r_i-\widehat{r_i})}^2
\end{equation}
SROCC (Spearman rank-order correlation coefficient) measures the correlation between the predicted results and the true values; the formula is:
\begin{equation}
SROCC=1-\frac{6\sum_{i}{(r_i-\widehat{r_i})}^2}{N^3-N}
\end{equation}
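The SROCC formula can be evaluated from the ranks of the predictions and ground-truth scores (assuming no ties), e.g.:

```python
import numpy as np

def srocc(pred, true):
    """Spearman rank-order correlation 1 - 6*sum(d^2)/(N^3 - N), where d
    is the per-sample difference between prediction ranks and ground-truth
    ranks (no ties assumed)."""
    pred, true = np.asarray(pred), np.asarray(true)
    n = len(pred)
    rp = np.empty(n)
    rp[np.argsort(pred)] = np.arange(1, n + 1)   # ranks of predictions
    rt = np.empty(n)
    rt[np.argsort(true)] = np.arange(1, n + 1)   # ranks of ground truth
    return 1.0 - 6.0 * float(np.sum((rp - rt) ** 2)) / (n ** 3 - n)
```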
The two-classification accuracy indicates whether the predicted score and the real score fall on the same side of the boundary of 5, i.e., the most basic classification accuracy; the formula is:
\begin{equation}
ACCURACY=\frac{TP+TN}{P+N}
\end{equation}
The within-one-point accuracy indicates whether the absolute error between the predicted score and the real score is within 1 point; if so, the prediction is considered accurate. The formula is:
\begin{equation}
ACCURACY_{\left|error\right|\le1}=\frac{N_{\left|error\right|\le1}}{N}
\end{equation}
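The two accuracy metrics can be sketched as follows; the classification boundary of 5 follows the definition above:

```python
import numpy as np

def two_class_accuracy(pred, true, boundary=5.0):
    """Fraction of samples whose predicted and true scores fall on the
    same side of the classification boundary."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.mean((pred >= boundary) == (true >= boundary)))

def within_one_accuracy(pred, true):
    """Fraction of samples whose absolute prediction error is at most 1."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.mean(np.abs(pred - true) <= 1.0))
```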
\begin{table}[htbp]
\caption{Feature fusion experimental results}
\centering
\label{tab:3}
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Methods & Attributes & MSE & SROCC & Acc & $Acc_{\left|error\right|\le1}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& Color & 0.008660 & 0.6863 & 76.17\% & 73.68\% \\
Baseline (Mindspore) & Light & 0.011266 & 0.6939 & 77.72\% & 68.29\% \\
& Composition & 0.009491 & 0.6915 & 77.41\% & 70.57\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{Backbone +} & \textbf{Color} & \textbf{0.008307} & \textbf{0.7087} & \textbf{77.72\%} & \textbf{73.69\%} \\
\textbf{Feature fusion} & \textbf{Light} & \textbf{0.010675} & \textbf{0.7126} & \textbf{79.27\%} & \textbf{69.53\%} \\
\textbf{(Mindspore)} & \textbf{Composition} & \textbf{0.009285} & \textbf{0.7013} & \textbf{79.79\%} & \textbf{73.47\%} \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\iffalse
\begin{table}[htbp]
\caption{Feature fussion experimental results}
\centering
\label{tab:3}
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Methods & Attributes & MSE & SROCC & Acc & $Acc_{\left|error\right|\le1}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& Color & 0.009884 & 0.6614 & 79.79\% & 77.01\% \\
Baseline (Pytorch) & Light & 0.012159 & 0.7525 & 81.40\% & 66.91\% \\
& Composition & 0.010483 & 0.7634 & 85.69\% & 76.07\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Basebone + & Color & 0.007404 & 0.8368 & 85.15\% & 77.46\% \\
Feature fusion & Light & 0.009853 & 0.8265 & 87.75\% & 71.11\% \\
(Pytorch) & Composition & 0.007302 & 0.8520 & 90.07\% & 79.61\% \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textbf{Basebone +} & \textbf{Color} & \textbf{0.008307} & \textbf{0.7087} & \textbf{77.72\%} & \textbf{73.69\%} \\
\textbf{Feature fusion} & \textbf{Light} & \textbf{0.010675} & \textbf{0.7126} & \textbf{79.27\%} & \textbf{69.53\%} \\
\textbf{(Mindspore)} & \textbf{Composition} & \textbf{0.009285} & \textbf{0.7013} & \textbf{79.79\%} & \textbf{73.47\%} \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Soft loss experimental results}
\centering
\label{tab:3}
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Methods & Attributes & MSE & SROCC & Acc & $Acc_{\left|error\right|\le1}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Baseline (Pytorch) & Score & 0.012575 & 0.7301 & 79.48\% & 62.07\% \\
Baseline + Soft loss (Pytorch) & Score & 0.011789 & 0.8618 & 89.16\% & 69.42\% \\
\textbf{Baseline + Soft loss (Mindspore)} & \textbf{Score} & \textbf{0.011686} & \textbf{0.8574} & \textbf{87.61\%} & \textbf{68.25\%} \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\fi
\begin{table}[htbp]
\caption{Soft loss experimental results}
\centering
\label{tab:6}
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Methods & Attributes & MSE & SROCC & Acc & $Acc_{\left|error\right|\le1}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Baseline (Mindspore) & Score & 0.012940 & 0.8424 & 85.78\% & 64.99\% \\
\textbf{Baseline + Soft loss (Mindspore)} & \textbf{Score} & \textbf{0.011315} & \textbf{0.8604} & \textbf{88.08\%} & \textbf{68.72\%} \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
We use EfficientNet-B0 as the baseline and conduct several experiments as follows. For feature fusion, Table 5 shows that fusing the external attribute features for light, color, and composition yields more accurate attribute regression. For overall score regression, we use the soft loss introduced in Section 4.2; Table 6 shows that the model with the soft loss performs better on all indicators.
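The soft loss of Section 4.2 is not reproduced here, but the sketch below shows one common way such a teacher-guided loss is assembled: an MSE regression term plus a $\lambda$-weighted KL term between the class distributions of the classification (teacher) and regression (student) branches. The function names, the KL form, and the temperature are our illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def regression_with_soft_loss(pred_score, true_score,
                              student_logits, teacher_logits,
                              lam=0.1, temperature=2.0):
    """MSE regression loss plus a lambda-weighted soft (KL) term through
    which the classification sub-network guides the regression branch."""
    mse = (pred_score - true_score) ** 2
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))
    return mse + lam * kl
```

With $\lambda = 0$ the loss reduces to plain MSE; since KL divergence is non-negative, the soft term can only add a penalty when the student distribution departs from the teacher's.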
\iffalse
\begin{table}[htbp]
\caption{The comparison experiment for \textit{$\lambda$}}
\centering
\label{tab:3}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
$\lambda$ & MSE & SROCC & Acc & $Acc_{\left|error\right|\le1}$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
0 & 0.012575 & 0.7301 & 79.48\% & 62.07\%\\
\textbf{0.1} & \textbf{0.011789} & \textbf{0.8618} & \textbf{89.16\%} & \textbf{69.42\%} \\
0.2 & 0.01822 & 0.8568 & 88.24\% & 68.31\%\\
0.3 & 0.01854 & 0.8582 & 88.08\% & 68.21\%\\
0.4 & 0.01998 & 0.8497 & 86.59\% & 67.48\%\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\fi
\begin{table}[htbp]
\caption{The comparison experiment for $\lambda$}
\centering
\label{tab:7}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
$\lambda$ & MSE & SROCC & Acc & $Acc_{\left|error\right|\le1}$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
0 & 0.011408 & 0.8591 & 88.02\% & 68.31\%\\
\textbf{0.1} & \textbf{0.011315} & \textbf{0.8604} & \textbf{88.08\%} & \textbf{68.72\%} \\
0.2 & 0.011318 & 0.8603 & 87.88\% & 68.44\%\\
0.3 & 0.011402 & 0.8595 & 87.81\% & 68.58\%\\
0.4 & 0.011437 & 0.8591 & 87.74\% & 68.45\%\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
We conducted a comparison experiment over 40 epochs of training. Table 7 shows that the soft loss term does improve the regression accuracy, with the best results at $\lambda = 0.1$, so we set $\lambda$ to 0.1.
In addition, we selected some typical samples and used our method to score them, producing the overall aesthetic score and the scores of the three aesthetic attributes. Here S denotes the overall score, C the color score, L the light score, and CM the composition score, as shown in Fig. 9:
\begin{figure*}[btp!]
\centering
\includegraphics[width=\linewidth]{fig9.pdf}
\caption{Test samples on the testing set with scores from low to high. S represents the overall score, C represents the color score, L represents the light score, and CM represents the composition score}
\label{fig:10}
\end{figure*}
\section{Conclusion}
Constructing a new dataset for image aesthetic quality assessment is challenging: traditional datasets suffer from limited data and few attribute labels. By mixing and filtering massive datasets and designing external attribute features, we build a new dataset, the aesthetic mixed dataset with attributes (AMD-A), with a more reasonable distribution. We further propose a multi-task network comprising one classification sub-network, three attribute sub-networks, and one regression sub-network, an innovative exploration of training methods for the numerical assessment of image aesthetics. Moreover, we design and apply external attribute feature fusion to improve the regression of aesthetic attributes. Following the teacher-student paradigm, we use the classification sub-network to guide the regression sub-network through high-level features via the soft loss. Experimental results show that our model is more accurate than traditional deep-learning regression models, and that it improves the prediction of both aesthetic attributes and overall scores.
\section*{Acknowledgements}
This work is partially supported by the National Natural Science Foundation of China (62072014\& 62106118), the CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2021-022A), the Open Fund Project of the State Key Laboratory of Complex System Management and Control (2022111), the Project of Philosophy and Social Science Research, Ministry of Education of China (No.20YJC760115), and the Advanced Discipline Construction Project of Beijing Universities (20210051Z0401).
We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks), Ascend AI Processor and Zhongyuan AI Computing Center used for this research.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
One of the candidates for the role of dark matter is superheavy
particles~\cite{Kuzmin:1997jua, Berezinsky:1997hy} (see also Refs.~\cite{Khlopov:1987bh, Fargion:1995xs}), which we will denote
as $X$ particles. From the point of view of particle physics they
can be incorporated into various theories (see e.g. Ref.~\cite{Kolb:1998ki, Kuzmin:1999zk}
and references therein). In cosmology these particles could be created at some early stages of the Universe
evolution~\cite{Kuzmin:1997jua, Kofman:1994rk, Felder:1998vq, Berezinsky:1997hy, Chung:1998zb, Chung:1998rq, Kolb:1998ki, Kuzmin:1999zk}.
In this paper we consider indirect detection of superheavy dark matter (SHDM).
The parameters that can be experimentally constrained in this approach are mass, annihilation
cross-section and lifetime of the dark matter particles.
While there are several constraints on the $X$ particle mass $M_X$ imposed by various scenarios
of dark matter production~\cite{Kolb:1998ki, Kuzmin:1998kk, Chung:1998zb, Chung:2004nh, Gorbunov:2012ij},
in this study we conservatively consider the full range of $M_X$ accessible for indirect
observation in recent high energy cosmic ray experiments, namely: $10^{7} \lesssim M_X \lesssim 10^{16}$ GeV.
The detection of the annihilation signal of particles with these masses is far beyond reach
of the modern experiments because of the unitarity bound on the $X$ particles annihilation
cross-section~\cite{Aloisio:2006yi}: $\sigma_{ann.} \sim 1/M_{X}^2$.
Therefore, in this work we are focusing on the case of decaying DM with long lifetime $\tau \gg 10^{10}$ yr.
Modern cosmic ray experiments allow one to study the composition of primary particles from the
observed properties of extensive air showers (EAS). The spectrum of protons and nuclei with $E>100$ TeV
has been studied in detail in several experiments. In contrast, only upper
limits on gamma-ray fluxes in the same energy range have been obtained so far~\footnote{We should also mention a
tentative result of primary gamma detection in the EAS-MSU experiment~\cite{Fomin:2013mia, Fomin:2014ura}.}.
In this paper we are using these limits to build constraints on decaying SHDM.
For the highest energy ($E \gtrsim 10^{18}$ eV) the recent constraints on gamma
ray flux are given by Pierre Auger Observatory~\cite{Aab:2015bza, Abreu:2011pf},
Telescope Array experiment~\cite{Rubtsov:ICRC2015} and Yakutsk experiment~\cite{Glushkov:2009tn}.
Among the constraints of lower energy gamma flux are the results of KASCADE-Grande~\cite{Kang:2015gpa},
KASCADE~\cite{Schatz:2003} and CASA-MIA~\cite{Chantell:1997gs}.
The main motivation for this study is to refine constraints on SHDM parameters
using all currently available experimental data. For previous works on the same subject
see e.g.~\cite{Aloisio:2006yi, Kalashev:2008dh, Murase:2012xs}.
In recent years the interest to the subject has grown~\cite{Murase:2015gea, Aloisio:2015lva, Esmaili:2015xpa, Dev:2016qbd}
due to PeV neutrino events observation by IceCube~\cite{Aartsen:2013jdh}.
This paper is organized as follows. In Sec.~\ref{flux} we briefly review SHDM decay physics,
consider assumptions about source distribution and propagation of photons in cosmic medium and calculate
photon flux from the decay of SHDM. In Sec.~\ref{results} we compare our results with existing limits on
high energy photon flux and constraint SHDM mass and lifetime.
\section{Gamma-ray flux from SHDM decay}
\label{flux}
The decay of super-heavy particles $X$ was studied in detail in several
works~\cite{Berezinsky:2000up, Sarkar:2001se, Aloisio:2003xj, Barbot:2002ep, Barbot:2002gt}.
In this work we concentrate on QCD decay channels, since in
this case relatively large flux of photons is produced which
makes them easier to constrain with experimental photon flux limits.
Note that other decay modes (i.e. leptonic) may also lead to
some photon flux either via direct gamma production or by means
of interactions of products (i.e. electrons) with cosmic microwave
background (CMB) and galactic media. Though these channels may be
important only if QCD decay is relatively suppressed. For a review of
various DM decay modes see Ref.~\cite{Cirelli:2010xx}.
We consider the two-body decay into quark--antiquark
or gluon-gluon pair. The following QCD cascade develops down in energy until the hadronization
occurs. As a result of hadronization and subsequent decay of unstable hadrons
particles such as protons, photons, electrons and neutrinos are produced.
It is important to note that the impact of electroweak interactions
on the hadronic decay channels is subdominant with respect to other
uncertainties of this calculation (e.g. the choice of fragmentation functions, see below).
For the low-$M_X$, low-energy region we validate this assumption by
comparing the decay spectra of Refs.~\cite{Cirelli:2010xx, Ciafaloni:2010ti}
with and without EW corrections. For high $M_X$ and high energies we
compare the spectra obtained in Ref.~\cite{Barbot:2002ep, Barbot:2002gt},
where full MSSM was considered, with that of Ref.~\cite{Aloisio:2003xj},
where authors considered only SUSY QCD interactions. In both energy regions
the difference was found negligible.
In some earlier works (see e.g. Ref.~\cite{Kalashev:2008dh}) the observed
shape of proton spectrum was also used to constrain the SHDM parameters.
This method gives weaker results than the usage of $\gamma$--limits since
the proton flux is dominated by the particles of astrophysical origin. Therefore
in this study we do not consider proton flux from SHDM decay.
Technically, the spectrum of the $X$-particle decay is defined similarly to the spectrum of the
$e^+ e^- \rightarrow hadrons$ process~\cite{Hirai:2007cx}:
\begin{equation}
F^h (x,s) = \sum_i
\int\limits_x^1 \frac{dz}{z} C_i(z,\alpha_s(s)) D_i^h(\frac xz,s)
\end{equation}
where $x~\equiv~\frac{2 \cdot E}{M_X}$ is the energy of hadron as a fraction of the total
available energy, $D^h_i(x,s)$ are the fragmentation functions of hadron of the type $h$
from the parton of the type $i$, $C_i(z,\alpha_s(s))$ are the coefficient functions and
the summation goes over all types of partons $i= \{u,\bar{u},d,\bar{d},...,g\}$.
The normalization to the $X$ particle decay width is assumed.
For the leading order in $\alpha_s$ the coefficient functions $C_i$ are proportional to $\delta(1-z)$
and the total spectrum is equal to the sum of fragmentation functions $F^h(x,s) = \sum_i D^h_i(x,s)$.
Given the fragmentation function at some scale $s$ we can
evolve it to another scale using DGLAP equations~\cite{GLD, AP}:
\begin{equation}
\frac{\partial D_i^h(x,s)}{\partial \ln s} = \sum_j \frac{\alpha_s(s)}{2\pi}P_{ij}(x,\alpha_s(s)) \otimes
D_j^h(x,s)\,,
\end{equation}
where $\otimes$ denotes the convolution $f(x) \otimes g(x) \equiv \int_x^1 dz/z\, f(z)g(x/z) = \int_x^1 dz/z\, f(x/z)g(z)$ and $P_{ij}(x,s)$ is the splitting function for the parton branching $i \rightarrow j$.
Since we study the process at the scale $M_X \gg m_q$, we assume that all $N_f$ quark flavors couple to the gluon similarly,
so we can confine ourselves to considering only the mixing of the gluon fragmentation function with the quark
singlet fragmentation function:
\begin{equation}
D_q^h(x,s)=\frac{1}{N_f} \sum\limits_{i=1}^{N_f} [D_{q_i}^h(x,s) + D_{\bar{q_i}}^h(x,s)]\, .
\end{equation}
Then DGLAP equations take the form:
\begin{equation}
\frac{\partial}{\partial \ln s}\left(\begin{array}{c}
D_q^h(x,s)\\
D_g^h(x,s)\\
\end{array} \right)=
\left( \begin{array}{cc}
P_{qq}(x,s) & P_{gq}(x,s) \\
2N_f P_{qg}(x,s) & P_{gg}(x,s) \\
\end{array} \right)
\otimes
\left( \begin{array}{c}
D_q^h(x,s)\\
D_g^h(x,s)\\
\end{array} \right)\, .
\end{equation}
In this study we use the code kindly provided by the authors of Ref.~\cite{Aloisio:2003xj}.
This code evaluates the DGLAP equations numerically in the leading order in $\alpha(s)$.
We use the initial fragmentation functions from the Ref.~\cite{Hirai:2007cx}
parametrized on the scale $M_Z$ and extrapolated to the region $10^{-5} \le x \le 1$.
Although the low-$x$ tail is unreliable at this scale, the results obtained for the high scales $M_X$
agree with those obtained by Monte-Carlo simulation, as was shown in~\cite{Aloisio:2003xj}.
Fortunately, the spectra calculated in this region of $x$ suffice for comparison
with experiment in the mass range of interest: $10^{7} \le M_X \le 10^{16}$ GeV.
In this paper we calculate only the prompt photon spectra from $\pi^0$ decays and neglect the smaller
amount of photons from inverse Compton scattering (ICS) of prompt $e^\pm$ on the interstellar
background photons.
Compton photons to the full spectrum can be significant~\cite{Esmaili:2015xpa},
for hadronic channels it is at least by order of magnitude lower~\cite{Cirelli:2010xx},
so we neglect the contribution from prompt $e^\pm$ via ICS in this study.
Following~\cite{Aloisio:2003xj} we also neglect
the roughly $10\%$ contribution from decays of other mesons.
Then the photon spectrum of the $X$--particle decay is given by:
\begin{equation}
D^\gamma(x) = 2 \int\limits_x^1 \frac{dz}{z} \: D^{\pi^0}(z) \,,
\end{equation}
where $D^{\pi^0}(x,s) \equiv [D^{\pi^0}_q(x,s) + D^{\pi^0}_g(x,s)]$.
Examples of prompt photon spectra for the decay of $X$-particles
with different masses are shown in Fig.~\ref{prompt_spectra}.
\begin{figure}
\includegraphics[width=13.50cm]{fig1.pdf}
\caption{Prompt photon spectra of $X$--particle decay.}
\label{prompt_spectra}
\end{figure}
Having the injected photon spectra we can calculate the corresponding
photon flux reaching the Earth. We use the following assumptions.
First of all we neglect the flux coming from extragalactic region.
Starting at $E_{\gamma} = 2 \cdot 10^{14}$ eV, which is the lowest energy where
the EAS experiments provide photon limits, and up to $E_{\gamma} \simeq 2 \cdot 10^{18}$ eV,
the photon attenuation length does not exceed the size of our Galaxy halo.
Then, up to the highest experimentally tested energy $E_{\gamma} = 10^{20}$ eV,
photons can come from a region of size not exceeding $50$ Mpc, whose contribution to
the flux is about $1\%$ of that from our Galaxy~\cite{Dubovsky:1998pu}.
For the galactic photon flux calculation we use Navarro-Frenk-White dark matter
distribution~\cite{Navarro:1995iw, Navarro:1996gj} with the parametrization for Milky Way
from Ref.~\cite{Cirelli:2010xx}~\footnote{ For comparison we have also tested Burkert dark matter
profile~\cite{Burkert:1995yz}. }.
We assume that photons are radiated isotropically in the decay
of an $X$-particle. As mentioned above, for photons with $E \gtrsim 10^{18}$ eV
the attenuation length in the interstellar medium exceeds the size of our Galaxy halo.
This implies that for higher energy photons we can neglect absorption and cascade
radiation. Indeed, a comparison of the non-interacting and cascading $\gamma$ fluxes from
our Galaxy (the latter calculated using the numerical code from Ref.~\cite{Kalashev:2014xna}, see below)
shows that the discrepancy does not exceed a few percent for $E = 10^{18}$ eV (see Fig.~\ref{cascaded_flux}).
Therefore we neglect the photon interaction with the medium for $M_X \gtrsim 10^{14}$ GeV.
Using the above assumptions we obtain the following expression for the integral photon flux
received by a given cosmic ray observatory:
\begin{equation}
\label{int_flux}
F(E>E_{min}) = \frac{N(E>E_{min})}{4\pi M_X \tau} \cdot
\frac{\int\limits_{V} \frac{\rho(R) \omega(\delta, a_0, \theta_{max})}{r^2} d V}{ 2\pi \int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \omega(\delta, a_0, \theta_{max}) \cos(\delta) d \delta}\; ;
\end{equation}
where $\rho(R)$ is the DM density as a function of the distance $R$ from the Galactic center,
$r$ is the distance from Earth, $\omega$ is the relative exposure of the given
observatory, and $N(E>E_{min})$ is the integral number
of photons with energies higher than $E_{min}$ produced
in the decay of an $X$-particle. The integration in the numerator
is taken over the whole volume of the halo ($R_{max} = 260$ kpc)
and in the denominator over the whole sky (the averaging over
right ascension is included in the definition of $\omega$).
The relative exposure $\omega$ is a function of
declination $\delta$, geographical latitude
of the given experiment $a_0$ and the maximal
zenith angle $\theta_{max}$ of particles allowed
for observation in this experiment
(see Refs.~\cite{Sommers:2000us, Aab:2014ila} for details).
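As an illustration of the geometry factor in Eq.~(\ref{int_flux}), the line-of-sight integral over an NFW halo can be sketched numerically. The parameter values below are rough illustrative choices, not the fit of Ref.~\cite{Cirelli:2010xx}, and the exposure $\omega$ is set to unity for simplicity:

```python
import numpy as np

RHO_S, R_S = 0.26, 20.0    # illustrative NFW scale density [GeV/cm^3], radius [kpc]
R_SUN, R_MAX = 8.5, 260.0  # Sun-GC distance and halo size [kpc]

def rho_nfw(R):
    x = R / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def los_column(psi, n_r=2000):
    """Integral of rho along the line of sight at angle psi from the GC
    (rho/r^2 integrated over volume reduces to rho dr per unit solid angle)."""
    l_max = R_SUN * np.cos(psi) + np.sqrt(R_MAX**2 - (R_SUN * np.sin(psi))**2)
    r = np.linspace(1e-3, l_max, n_r)
    R = np.sqrt(R_SUN**2 + r**2 - 2.0 * R_SUN * r * np.cos(psi))
    vals = rho_nfw(R)
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(r))

def sky_average(n_psi=400):
    """All-sky average of the geometry factor for uniform exposure."""
    psi = np.linspace(1e-3, np.pi - 1e-3, n_psi)
    weights = np.sin(psi)
    cols = np.array([los_column(p) for p in psi])
    return np.sum(cols * weights) / np.sum(weights)
```

The strong anisotropy of the decay signal is immediate in this toy calculation: the column toward the Galactic centre exceeds the anticentre one by an order of magnitude.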
For $M_X \lesssim 10^{14}$ GeV we also take into account the
attenuation of photons on CMB using the numerical code~\cite{Kalashev:2014xna}.
The code simulates development of electron-photon cascades on CMB driven by
the chain of $e^\pm$ pair production and inverse Compton scattering. Although
the code allows to calculate the flux of the cascade photons it doesn't take
into account deflections of $e^\pm$ by the halo magnetic field.
Since electrons in the code propagate rectilinearly they produce
less cascade photons. Therefore the calculated flux of photons
should be considered as conservative lower bound. The propagation
code~\cite{Kalashev:2014xna} also includes attenuation of photons
on extragalactic background light (EBL), though the effect of
EBL is negligible on distances which we consider.
\begin{figure}
\includegraphics[width=13.50cm]{fig2.pdf}
\caption{Integral photon flux from SHDM decay in our Galaxy halo as
received by Telescope array experiment, $M_X=10^{12}$ GeV,
without interactions of photons with medium (solid line), with photon interactions with CMB and secondary cascade radiation
included (dashed curve)}
\label{cascaded_flux}
\end{figure}
\section{Comparison with photon limits}
\label{results}
Finally we compare the predicted SHDM signal with the existing experimental upper-limits
on photon flux. For the highest observable cosmic ray energies ($E_{CR} \gtrsim 10^{18}$ eV)
the recent constraints are provided by Pierre Auger Observatory~\cite{Aab:2015bza, Abreu:2011pf},
Telescope Array experiment~\cite{Rubtsov:ICRC2015} and Yakutsk experiment~\cite{Glushkov:2009tn},
while for the lower energies we use the results of CASA-MIA~\cite{Chantell:1997gs},
KASCADE~\cite{Schatz:2003}, KASCADE-Grande~\cite{Kang:2015gpa} and EAS-MSU~\cite{Fomin:2014ura}.
For a review of experimental results see e.g. Ref.~\cite{Karg:2015gxa} and references therein.
We should note that the higher energy limits are more effective for constraining SHDM
since its decay spectrum is quite hard, i.e. the SHDM photon flux grows more slowly than the experimental
limits with decreasing energy.
\begin{figure}
\includegraphics[width=13.50cm]{fig3.pdf}
\caption{Predicted integral photon flux from decay of SHDM with mass $M_X=10^{14}$ GeV and
lifetime $\tau= 2 \cdot 10^{22}$ yr compared with upper--limits of Pierre Auger
Observatory~\cite{Aab:2015bza, Abreu:2011pf} and its estimated sensitivity for 2020
(assuming the upgrade of facility)~\cite{Karg:2015gxa}. Estimates of
$\gamma$--ray background produced by attenuation of UHE protons~\cite{Gelmini:2005wu} (green shaded) and
UHE protons and iron induced cascades~\cite{Hooper:2010ze} (blue and orange shaded) are shown with their
theoretical uncertainties.}
\label{flux_limits_hi}
\end{figure}
\begin{figure}
\includegraphics[width=13.50cm]{fig4.pdf}
\caption{Predicted integral photon flux from decay of SHDM with mass $M_X=10^{9}$ GeV and lifetime $\tau= 3 \cdot 10^{21}$ yr
compared with upper limits of KASCADE and KASCADE-Grande experiments~\cite{Schatz:2003, Kang:2015gpa},
estimated sensitivity of Carpet $2+$ experiment~\cite{Dzhappuev:2015hxl}
and the estimate~\cite{Kalashev:2014vra} of $\gamma$ background from $pp$--interactions in halo.}
\label{flux_limits_low}
\end{figure}
Another possible contribution to UHE photon flux comes from astrophysics.
UHE protons and nuclei produced by extragalactic sources interact
with CMB and other interstellar background producing secondary electron-photon cascades and neutrinos.
The essentially isotropic flux of photons of this origin has been estimated in several scenarios including proton and nuclei
emitting sources (see e.g. Refs.~\cite{Gelmini:2005wu, Hooper:2010ze, Kalashev:2014vra, Joshi:2013aua, Ahlers:2013xia}).
Contrary to the astrophysical signal the SHDM contribution is anisotropic with maximum flux arriving from the center of Milky Way.
In Figs.~\ref{flux_limits_hi}--\ref{flux_limits_low} we show the $\gamma$--ray
flux limits by KASCADE, KASCADE-Grande and Pierre Auger Observatory
together with predicted SHDM decay photon flux (for certain parameters of SHDM)
and some estimates of astrophysical photon flux.
Also we show the estimated future sensitivity of
Carpet experiment~\cite{Dzhappuev:2015hxl} in Fig.~\ref{flux_limits_low} and upgraded
PAO~\cite{Karg:2015gxa} in Fig.~\ref{flux_limits_hi}\footnote{Because of the strong anisotropy
of the predicted SHDM signal, we do not show all the existing experimental limits in a single
figure. The KASCADE and Carpet experiments have approximately the same geographical latitude.}.
\begin{figure}
\includegraphics[width=13.50cm]{fig5.pdf}
\caption{Constraints on mass $M_X$ and lifetime $\tau$ of super heavy dark matter. White area is excluded.
For comparison we present the constraints obtained with Burkert DM profile (solid thin red line). We also
show the constraint obtained with neutrino limits: for $X \rightarrow \nu\bar{\nu}$ channel~\cite{Esmaili:2012us} (blue dots)
and for $X \rightarrow b\bar{b}$ channel~\cite{Murase:2012xs} (black dots).}
\label{exclusion_plot}
\end{figure}
We compare the constraints of various experiments on SHDM mass and lifetime in Fig.~\ref{exclusion_plot}.
The constraints are built by scanning the SHDM parameter space and matching the predicted photon signal with the limits
of the given experiment. A model is considered excluded as soon as the
signal touches the limit points from below. For the EAS-MSU result of photon
detection~\cite{Fomin:2014ura} we show the fit assuming the whole photon flux is produced by SHDM decay.
The constraints based on Pierre Auger Observatory limits are the strongest, since this
experiment has the largest exposure among UHECR experiments and is located in the Southern hemisphere,
where a higher $\gamma$-ray flux coming from the Galactic center could be detected.
The strongest constraint over all mass range is $\tau \gtrsim 3 \cdot 10^{22}$ yr at $M_X \simeq 3 \cdot 10^{12}$ GeV.
It slightly improves the result of Ref.~\cite{Aloisio:2015lva} for which the old PAO limits were used.
In the low energy region the best constraints are derived from KASCADE, CASA-MIA and KASCADE-Grande:
minimal lifetime increases from $\tau \simeq 6 \cdot 10^{19}$ yr at $M_X = 10^7$ GeV to $\tau \simeq 3 \cdot 10^{21}$ yr
at $M_X = 5 \cdot 10^{9}$ GeV being of the same order as the constraints of
Refs.~\cite{Murase:2012xs, Murase:2015gea, Esmaili:2015xpa} that were obtained in a wider theoretical
context. The constraints obtained with the Burkert dark matter profile are slightly weaker than those
of NFW in the high energy region, where PAO observes the Galactic center,
and stronger at low energies, where the constraints are set by Northern-hemisphere experiments.
It is also interesting to compare our constraints with those obtained from neutrino limits.
In Ref.~\cite{Murase:2012xs} the neutrino constraints on $\tau$ were imposed for $M_X < 10^{10}$ GeV
and for various decay channels. Our constraints are of the same order as these for $M_X \gtrsim 10^9$ GeV
but become weaker for $M_X \lesssim 10^9$ GeV. The case of direct decay of
dark matter into neutrino was studied in Ref.~\cite{Esmaili:2012us} for a wide region
$10 < M_X < 10^{19}$ GeV. The constraints on $\tau$ obtained there are of the same order
as ours for $M_X \lesssim 10^8$ GeV and weaker for all higher masses.
Our constraints have implications for the EAS-MSU tentative result of 100 PeV gamma detection~\cite{Fomin:2014ura}.
The curve interpreting it as the product of SHDM decay lies deep in the
parameter area excluded by the other experiments, which implies that the SHDM component
of the EAS-MSU photon signal cannot be dominant.
Discussing these results, we note that although the recent experimental limits
touch the astrophysically predicted region, due to the large uncertainty of the astrophysical
$\gamma$-ray flux one cannot yet exclude a dominant contribution of SHDM decay.
Nevertheless, one might use the guaranteed, i.e. minimal, predicted astrophysical gamma flux to
constrain the SHDM parameters even more strongly. Finally, if $\gamma$-rays are detected, the
discrimination between the astrophysical and the SHDM (or other exotic) origin scenarios
could in principle be made by analysing the flux anisotropy and energy spectrum.
\section*{Acknowledgements}
We thank S.~Troitsky and G.~Rubtsov for helpful discussions. We are especially indebted to
R.~Aloisio, V.~Berezinsky and M.~Kachelriess for providing the numerical code solving DGLAP equations.
This work has been supported by Russian Science Foundation grant 14-22-00161.
Numerical simulations have been performed in part at the computer cluster of
the Theoretical Physics Department of the Institute for Nuclear Research
of the Russian Academy of Sciences.
\suppressfloats
\section*{\refname}}
\begin{document}
\title{Universal size ratios {{of Gaussian polymers with complex architecture:}}\\ Radius of gyration vs hydrodynamic radius}
\author{Khristine Haidukivska}
\author{Viktoria Blavatska}
\affiliation{Institute for Condensed Matter Physics of the National Academy of Sciences of Ukraine, 1, Svientsitskii Str., 79011 Lviv, Ukraine,
}
\author{Jaros{\l}aw Paturej}
\affiliation{Institute of Physics, University of Silesia, 75 Pu\l{}ku Piechoty 1, 41-500 Chorz{\'o}w, Poland}
\affiliation{Leibniz Institute of Polymer Research Dresden e.V., Hohe Str. 6, 01069 Dresden, Germany}
\email[Correspondence to~]{[email protected]}
\begin{abstract}
{ The present research is dedicated to providing a deeper understanding of the impact of the complex architecture of branched polymers on their
behaviour in solvents. The folding dynamics of macromolecules and the hydrodynamics of polymer fluids
depend strongly on the size and shape measures of
single macromolecules, which in turn are determined by their topology.
To this end,} we use a combination of analytical theory, based on the path integration method, and molecular dynamics simulations to study
the structural properties of complex Gaussian polymers containing $f^c$ linear branches and $f^r$ closed loops grafted to the central core.
Using the theory we determine size measures such as the gyration radius $R_g$ and the hydrodynamic radius $R_H$,
and obtain estimates for the size ratio
$R_g /R_H$ and its dependence on the functionality $f=f^c+f^r$ of the grafted polymers.
{ In particular, we obtain a quantitative estimate of the compactification (decrease of size measures) of such complex polymer
architectures with an increasing number of closed loops $f^r$, as compared with
linear or star-shaped molecules of the same total molecular weight.} Numerical simulations corroborate the theoretical prediction that $R_g /R_H$ decreases towards unity with increasing $f$.
These findings provide a qualitative description of complex polymers with different arm architectures in $\theta$ solutions.
\end{abstract}
\pacs{36.20.-r, 36.20.Ey, 64.60.ae}
\date{\today}
\maketitle
\section{Introduction}
{{Polymer macromolecules of complex branched structure attract considerable
attention from both academic \cite{S,Ferber97} and applied \cite{Gao04,Jeon18} perspectives, being encountered as building blocks of materials like synthetic and biological
gels \cite{gels}, thermoplastics \cite{bates}, and melts and elastomers \cite{paturej1,paturej2}. The high functionality of polymers provides novel properties with
applications in diverse fields like drug delivery \cite{Li16}, tissue engineering
\cite{Lee01}, super-soft materials \cite{daniel}, and antibacterial surfaces \cite{Zhou10}. On the other hand,
multiple loop formation in macromolecules is often encountered and plays an important role in biological
processes such as the stabilization of globular proteins \cite{Nagi97} or the transcriptional regulation of
genes \cite{Towles09}.
In this respect, it is of fundamental interest to study the conformational properties of complex polymer architectures.
}}
In the statistical description of polymers, considerable attention is paid
to the universal quantities describing the equilibrium size and shape of the typical conformation
adopted by an individual macromolecule in a solvent \cite{Clo,Gennes}.
In particular, many physical properties are manifestations of the underlying polymer conformation, including
the hydrodynamic properties of polymer fluids \cite{Torre01} and the folding dynamics and catalytic activity of proteins \cite{Quyang08}.
As a size measure of a single macromolecule one usually considers the mean square radius of gyration $R_g^2$, which is directly measurable in static scattering experiments \cite{Ferri01,Smilgies15}.
Denoting coordinates of the monomers along the polymer chain by $\vec{r}_n$, $n = 1, \ldots,N$, this quantity is defined as:
\begin{equation}
\langle R_g^2 \rangle = \frac{1}{2N^2} \sum_{n, m}\langle (\vec{r}_n-\vec{r}_m)^2 \rangle,\label{Rg}
\end{equation}
and is thus given by a trace of gyration tensor $\bf{Q}$ \cite{Aronovitz86}.
Here and below, $\langle (\ldots ) \rangle$ denotes ensemble average over possible polymer conformations.
Another important quantity that characterizes the size of a polymer coil is hydrodynamic radius $R_H$, which is directly obtained in dynamic light scattering experiments \cite{Schmidt81,Varma84,Linegar10}.
This quantity was introduced based on the
following motivation \cite{Doi}.
According to the Stokes-Einstein equation, the diffusion coefficient $D$ of a spherical particle of radius $R_s$ in a solvent of viscosity $\eta$ at temperature $T$ is given by:
\begin{equation}
D=\frac{k_BT}{6\pi\eta R_s} \label{Stok}
\end{equation}
where $k_B$ is the Boltzmann constant. In order to generalize the above relation to molecules of more complex shape, their center-of-mass diffusion coefficient $D$
is given by Eq.~(\ref{Stok}) with $R_s$
replaced by
$R_H$. The latter is given as the average
of the reciprocal distances between all pairs of monomers \cite{TERAOKA}:
\begin{equation}
\langle R_H^{-1} \rangle = \frac{1}{N^2} \sum_{n, m} \left\langle \frac{1} { |\vec{r}_n-\vec{r}_m|} \right\rangle. \label{Rh}
\end{equation}
Namely, $R_H$ is related to the averaged components of the Oseen tensor ${\bf H}_{nm}$ characterizing the hydrodynamic interactions between monomers $n$ and $m$ \cite{Kirkwood54}.
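As a quick numerical illustration of how Eq.~(\ref{Stok}) is used in practice, the sketch below inverts the Stokes-Einstein relation to estimate a hydrodynamic radius from a measured diffusion coefficient. All numerical inputs (room temperature, the viscosity of water, a typical diffusion coefficient) are illustrative assumptions, not data from this work.

```python
import math

# Invert Eq. (Stok): R_H = kB*T / (6*pi*eta*D)
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # temperature, K (assumed room temperature)
eta = 8.9e-4        # solvent viscosity, Pa*s (water near 298 K, assumed)
D = 1.0e-10         # center-of-mass diffusion coefficient, m^2/s (assumed)

R_H = kB * T / (6.0 * math.pi * eta * D)
print(f"R_H = {R_H * 1e9:.2f} nm")  # -> R_H = 2.45 nm
```

In dynamic light scattering, this is precisely how the apparent hydrodynamic radius is extracted from the measured $D$.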
To compare $R^2_g$ and $R^{-1}_H$, it is convenient to introduce the universal size ratio
\begin{equation}
\rho=\sqrt{ R_g^2} / R_H , \label{ratio}
\end{equation}
which does not depend on any details of the chemical microstructure and is governed by the polymer architecture. In the present paper we restrict our consideration to
ideal (Gaussian) polymers, i.e., polymers whose monomers have no excluded volume. {{This to a certain extent corresponds to the behavior of flexible polymers in so-called $\theta$-solvents. Note that our theoretical approach is not capable of correctly capturing the structural properties of more rigid branched polymers such as dendrimers
or molecular bottlebrushes. The rigidity of these macromolecules is controlled by steric repulsions between connected branches or grafts}}.
This approach allows one to obtain exact analytical results for a set of universal quantities characterizing the conformational
properties of macromolecules. In particular, for a linear Gaussian polymer chain the exact analytical result for the ratio (\ref{ratio})
in $d=3$ dimensions
reads \cite{zimm,burchard,dunweg}:
\begin{equation}
\rho_{{\rm chain}}= \frac8{3\sqrt{\pi}}\approx 1.5045.
\label{ratiochain}
\end{equation}
The universal ratio of a Gaussian ring polymer was calculated in Refs.~\cite{burchard,fukatsu,Uehara2016} and is given by
\begin{equation}
\rho_{{\rm ring}} = \frac{\sqrt{2\pi}}2\approx 1.2533.
\label{ratioring}
\end{equation}
The validity of the theoretically derived ratios $\rho_{{\rm chain}}$ and $\rho_{{\rm ring}}$ was confirmed in several simulation studies~\cite{dunweg,Uehara2016,Clisby16}.
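To make the definitions (\ref{Rg}) and (\ref{Rh}) concrete, the short sketch below evaluates both double sums for a discrete ideal chain and a discrete ideal ring, using the standard Gaussian averages for the monomer separations in three dimensions. This is our own illustrative code (with $b=1$ and the diverging $n=m$ terms omitted from the reciprocal-distance sum, as usual), not code from the original analysis; it shows the finite-$N$ ratios approaching the asymptotic values (\ref{ratiochain}) and (\ref{ratioring}) from below.

```python
import math

def rho_ideal(N, ring=False):
    # Mean-square separation of monomers at contour distance d = |n - m| (b = 1):
    #   chain: <r^2> = d,   ring: <r^2> = d*(N - d)/N
    # For a 3D Gaussian separation, <1/r> = sqrt(6/pi) / sqrt(<r^2>).
    sum_r2, sum_rinv = 0.0, 0.0
    for d in range(1, N):
        msd = d * (N - d) / N if ring else float(d)
        npairs = 2.0 * (N - d)       # ordered pairs (n, m) with |n - m| = d
        sum_r2 += npairs * msd
        sum_rinv += npairs * math.sqrt(6.0 / math.pi) / math.sqrt(msd)
    rg = math.sqrt(sum_r2 / (2.0 * N**2))   # Eq. (Rg)
    rh_inv = sum_rinv / N**2                # Eq. (Rh), n = m terms omitted
    return rg * rh_inv                      # rho = sqrt(<Rg^2>) * <R_H^{-1}>

rho_chain, rho_ring = rho_ideal(2000), rho_ideal(2000, ring=True)
print(rho_chain, rho_ring)   # below 1.5045 and 1.2533, respectively
```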
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.25]{Topologies.eps}
\caption{ \label{fig:1} Schematic presentation of a rosette polymer topology comprised of $f^r=4$ rings (green) and $f^c=8$ linear chains (red) grafted to a central core (black).}
\end{center}\end{figure}
\begin{table}[b!]
\begin{tabular}{| c | c | c | c | c |}
\hline
Topology & $f^c$ & $f^r$ & $\rho_{\mbox{\tiny theory}}$ & $\rho_{\mbox{\tiny sim}}$\\ \hline
Chain & $1$ & $0$ & $1.5045$ Eq.~(\ref{ratiochain}) & $1.5045\pm0.0005$ \cite{Clisby16} \\ \hline
Ring & $0$ & $1$ & $1.253$ Eq.~(\ref{ratioring}) & $1.253\pm0.013$ \cite{Uehara2016} \\ \hline
Star & $3$ & $0$ & $1.40$ Eq.~(\ref{ratiostar}) & $1.11$ \cite{Shida04}\\ \hline
Star & $4$ & $0$ & $1.33$ Eq.~(\ref{ratiostar})& $1.04$ \cite{Shida04}\\ \hline
Tadpole & $1$ & $1$ & $1.415$ Eq.~(\ref{rhotadpole}) & $1.380\pm0.021$ \cite{Uehara2016} \\ \hline
Double ring & $0$ & $2$ & $1.217$ Eq.~(\ref{ratiodring}) & $1.215\pm0.011$ \cite{Uehara2016} \\ \hline\end{tabular}
\caption{Literature data for the universal size ratio for different polymer topologies, derived
using analytical theory $\rho_{\mbox{\tiny theory}}$ and numerical simulations $\rho_{\mbox{\tiny sim}}$.
The theoretical values for the tadpole and double ring architectures were calculated on the basis of our general analytical result, cf. Eq.~(\ref{rhorosette}).
}\label{table1}
\end{table}
A distinct example of a branched macromolecule is the so-called rosette polymer \cite{Blavatska15}, containing $f^c$ linear chains and $f^r$ closed loops (rings) radiating from the same branching point (see Fig.~\ref{fig:1}).
Note that for $f^r=0$ one recovers the architecture of a star polymer with $f^c$ linear chains radiating from a central core, for which an exact analytical result for the size ratio is known (Ref.~\cite{TERAOKA}):
\begin{equation}
\rho_{{\rm star}}=\frac{8\sqrt{f^c(3f^c-2)}}{3(f^c)^2\sqrt{\pi}}(\sqrt{2}-1)(\sqrt{2}+f^c). \label{ratiostar}
\end{equation}
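As a quick consistency check (our own illustrative snippet, not code from the original analysis), Eq.~(\ref{ratiostar}) can be evaluated numerically: $f^c=1$ recovers the linear-chain value (\ref{ratiochain}), while $f^c=3$ and $f^c=4$ give the values $1.40$ and $1.33$ quoted in Table~\ref{table1}.

```python
import math

def rho_star(fc):
    # Eq. (ratiostar) for a Gaussian star of fc linear arms
    return (8.0 * math.sqrt(fc * (3 * fc - 2))
            / (3.0 * fc**2 * math.sqrt(math.pi))
            * (math.sqrt(2.0) - 1.0) * (math.sqrt(2.0) + fc))

print(rho_star(1), rho_star(3), rho_star(4))  # ~1.5045, ~1.4006, ~1.3337
```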
The estimates for $\rho_{{\rm star}}$ have also been obtained by numerical Monte Carlo simulations \cite{Shida04}.
Using molecular dynamics (MD) simulations, Uehara and Deguchi derived the universal size ratios for macromolecules
such as the single ring ($f^c=0$, $f^r=1$), the tadpole ($f^r=1$, $f^c=1$) and the double ring ($f^r=2$, $f^c=0$) \cite{Uehara2016}. The
overview of the existing literature data
for universal size ratios obtained in analytical ($\rho_{\mbox{\tiny theory}}$) and numerical ($\rho_{\mbox{\tiny sim}}$) investigations
is listed in
Table~\ref{table1}. Note the large discrepancy between the previous numerical study of star polymers \cite{Shida04}
and the theoretical result of Eq.~(\ref{ratiostar}). This significant difference between theory and simulations is due to
the short chains used in Ref.~\cite{Shida04}, with a maximum degree of polymerization of $N=150$.
As will be shown below, finite-size effects
strongly affect the measured value of $\rho$. In our numerical study we therefore calculate $\rho$ in the asymptotic limit; for this purpose we simulated
long polymer chains with a degree of polymerization of up to $N=6400$.
The aim of the present work is to extend the previous analysis of rosette-like polymers \cite{Blavatska15} by thoroughly studying their universal size characteristics. For this purpose we apply
analytical theory, based on the path-integration method, and extensive molecular dynamics simulations. The layout of the paper is as follows.
In the next section, we introduce the continuous chain model and provide the details of the analytical calculation of the universal size ratio $\rho$ for various
polymer architectures using the path integration method.
In Section~\ref{sec:MD} we describe the numerical model and the details of the MD simulations. In the same section we present the numerical results and compare them with our theoretical predictions.
We draw conclusions in Section~\ref{Con}.
\section{Analytical approach}\label{An}
\subsection{The model}
Within the frame of the continuous chain model \cite{Edwards}, a single Gaussian polymer chain of length $L$ is represented as a path $\vec{r}(s)$, parameterized by $0<s<L$.
We adapt this model to more complicated branched polymer topologies, containing in general $f^c$ linear branches and $f^r$ closed rings (see Fig.~\ref{fig:1}).
In what follows, we use the notation $f=f^c+f^r$ for the total functionality of such a structure.
The weight of the $i$th path ($i=1,\ldots,f$) is given by
\begin{equation}
W_i={\rm e}^{-\frac{1}{2}\int\limits_0^L ds \left(\frac{{\rm d}\vec{r}_i}{{\rm d}s}\right)^2}.
\end{equation}
The corresponding partition function of the rosette polymer is thus:
\begin{equation}
Z_{f^c,f^r} = \frac{\int\!{\cal D}\{\vec{r}\} \prod\limits_{j=1}^{f^r} \delta(\vec{r}_j(L){-}\vec{r}_j(0))\prod\limits_{i=1}^{f} \delta(\vec{r}_i(0))\, W_i}
{\int\!{\cal D}\{\vec{r}\} \prod\limits_{i=1}^{f} \delta(\vec{r}_i(0)) \, W_i},\label{Z}
\end{equation}
where ${\cal D}\,\{\vec{r}\}$ denotes multiple path integration over the trajectories $\vec{r}_i(s)$ ($i=1,\ldots,f$), assumed to be of equal length
$L_i=L$; the first product of $\delta$-functions reflects the fact that all $f^c+f^r$ trajectories start
at the same point (the central core), while the second product of $\delta$-functions (running up to $f^r$) describes the closed structure of the $f^r$ ring trajectories (their starting and end points coincide). Note that (\ref{Z}) is normalised in such a way that
the partition function of a system consisting of $f^c+f^r$ open linear Gaussian chains (a star-like structure) is unity.
The partition function of the rosette-like polymer architecture has been evaluated in {{Ref.~\cite{Blavatska15}
and in the Gaussian approximation reads:}}
\begin{equation}
Z_{f^c,f^r} =(2\pi L)^{-df^r/2},
\end{equation}
{{where $d$ denotes the spatial dimensionality.}}
Within the frame of the presented model, the expression for the mean square gyration radius, Eq.~(\ref{Rg}), can be rewritten as
\begin{equation}
\langle R_g^2 \rangle = \frac{1}{2(fL)^2} \sum_{i,j=1}^{f}\int_0^L\int_0^{L}\,ds_2\,ds_1 \langle (\vec{r}_i(s_2)-\vec{r}_j(s_1))^2 \rangle,
\end{equation}
whereas expression (\ref{Rh}) for the hydrodynamic radius reads:
\begin{equation}
\langle R_H^{-1} \rangle = \frac{1}{(fL)^2} \sum_{i,j=1}^{f} \int_0^L\int_0^{L}\,ds_2\,ds_1 \langle |\vec{r}_i(s_2)-\vec{r}_j(s_1)|^{-1} \rangle, \label{rhc}
\end{equation}
where $\langle (\ldots) \rangle$ denotes averaging over an ensemble of all possible configurations defined as:
\begin{eqnarray}
&&\langle (\ldots) \rangle = \frac{1}{Z_{f_c,f_r}} \times\label{av}\\
&& \times \frac{\int\!{\cal D}\{\vec{r}\} \prod\limits_{j=1}^{f^r} \delta(\vec{r}_j(L){-}\vec{r}_j(0))\prod\limits_{i=1}^{f} \delta(\vec{r}_i(0))(\ldots\,) W_i}
{\int\!{\cal D}\{\vec{r}\} \prod\limits_{i=1}^{f} \delta(\vec{r}_i(0)) \, W_i}. \nonumber
\end{eqnarray}
\subsection{Calculation of hydrodynamic radius and universal size ratio}
The crucial point in the calculation of the hydrodynamic radius is the use of the following identity \cite{Haydukivska14}:
\begin{equation}
|\vec{r}|^{-1}{ =} (2\pi)^{-d}\! \int {\rm d}\vec{k} \,2^{d-1} \pi^{\frac{d-1}{2}} \Gamma\left(\!\frac{d-1}{2}\!\right) k^{1-d}{\rm e}^{i\vec{r}\vec{k}},
\end{equation}
where $\Gamma(x)$ is the Gamma function.
Applying the above expression to Eq.~(\ref{rhc}) allows us to rewrite the mean reciprocal distance entering the definition of $R_H$ as
\begin{eqnarray}
&&\langle|\vec{r}_i(s_2)-\vec{r}_j(s_1)|^{-1}\rangle { =} (2\pi)^{-d}\! \int {\rm d}\vec{k}\, 2^{d-1} \pi^{\frac{d-1}{2}}\times \nonumber\\
&& \times \Gamma\left(\!\frac{d-1}{2}\!\right) \, k^{1-d} \langle \xi(s_1,s_2) \rangle \label{defk}
\end{eqnarray}
with notation
\begin{equation}
\xi(s_1,s_2) \equiv {\rm e}^{i\vec{k}(\vec{r}_i(s_2)-\vec{r}_j(s_1))}.
\end{equation}
Below we apply the path integration approach to calculate the mean reciprocal distances.
Exploiting the Fourier transform of the $\delta$-functions in the definition (\ref{av})
\begin{equation}
\delta (\vec{r}_j(L)-\vec{r}_j(0)) =(2\pi)^{-d}\int {\rm d}\vec{q}_j\, {\rm e}^{-i\vec{q}_j(\vec{r}_j(L)-\vec{r}_j(0))} \label{d}
\end{equation}
we get a set of wave vectors $\vec{q}_j$, $j=1,\ldots,f^r$, associated with the $f^r$ closed loop trajectories, which is an important point in the following evaluation.
To visualize the different contributions to $\langle |\vec{r}_i(s_2)-\vec{r}_j(s_1)|^{-1}\rangle$,
it is convenient to use a diagrammatic technique (see Fig.~\ref{fig:2}).
Taking into account
the general rules of diagram calculations \cite{Clo}, each segment between any two restriction points $s_a$ and $s_b$
is oriented and bears a wave vector $\vec{p}_{ab}$ given by the sum of the incoming and outgoing wave vectors injected
at the restriction points and end points.
At these points, the flow of wave vectors is
conserved.
A factor $\exp\left(-{{p}_{ab}}^{\,\,2}(s_b-s_a)/2\right)$ is associated with each segment. An integration is to be performed
over all independent segment areas and over the wave vectors injected at the end points.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=90mm]{diagramnew.eps}
\caption{ \label{fig:2} Diagrammatic presentation of the contributions to $\langle R_H^{-1} \rangle$ according to (\ref{rhc}). Solid lines schematically represent polymer paths; arrows denote the points $s_1$, $s_2$. }
\end{center}\end{figure}
To make these rules clearer, let us start with diagram (1), corresponding to the case when both points $s_1$ and $s_2$ are
located along one of the linear arms of the rosette polymer. The vector $\vec{k}$ is injected at the restriction point $s_1$,
and the segment $s_2-s_1$ is associated with the factor $\exp\left(-{{k}}^{\,\,2}(s_2-s_1)/2\right)$.
The next step is to perform the integration over $k$. Passing to $d$-dimensional spherical coordinates, we have:
\begin{equation}
\int {\rm d}\vec{k}\, k^{1-d} f(k^2)= \frac{2\pi^{d/2}}{\Gamma\left( \frac{d}{2}\right)}\int{\rm d} {k}\, f(k^2),
\end{equation}
and thus the integration over $k$ can be easily performed
\begin{equation}
\int_0^{\infty}\!{\rm d} k\, {\rm e}^{-\frac{k^2(s_2-s_1)}{2}}=\sqrt{ \frac{\pi}{2} }(s_2-s_1)^{-1/2}.
\end{equation}
The analytic expression corresponding to the contribution of diagram (1) thus reads
\begin{equation}
\langle \xi(s_1,s_2)\rangle_{(1)} {=}\frac{(2\pi^{d+1})^{\frac{1}{2}}}{\Gamma\left(
\frac{d}{2}\right)}\!\int_0^L\!\!{\rm d}s_2\int_0^{s_2}\!\!\!{\rm d}s_1\left(s_2{-}s_1\right)^{-\frac{1}{2}}. \label{r1}
\end{equation}
Diagram (2) describes the situation when the restriction points $s_1$ and $s_2$ are
located along two different linear arms of the rosette polymer. We thus have a segment of length $(s_2+s_1)$ between them, associated with the factor
$\exp\left(-{{k}}^{\,\,2}(s_2+s_1)/2\right)$. After performing the integration over $k$ we obtain
\begin{equation}
\langle \xi(s_1,s_2)\rangle_{(2)} { =}\frac{(2\pi^{d+1})^{\frac{1}{2}}}{\Gamma\left(
\frac{d}{2}\right)}\int_0^L\!\!{\rm d}s_2\int_0^{L}\!\!\!{\rm d}s_1\left(s_2{+}s_1\right)^{-\frac{1}{2}}.
\end{equation}
In case (3), both $s_1$ and $s_2$ are located on a closed loop, say the loop with $j=1$. Here, we
need to take into account the wave vector $\vec{q}_1$ ``circulating'' along this loop, so that three segments have to be considered,
with lengths
$s_1$, $s_2-s_1$, and $L-s_2$, respectively, and with associated factors $\exp\left(-{{q_1}}^{\,\,2}s_1/2\right)$,
$\exp\left(-(q_1+k)^{\,\,2}(s_2-s_1)/2\right)$ and $\exp\left(-{{q_1}}^{\,\,2}(L-s_2)/2\right)$. Integration over the wave vector $q_1$ gives
\begin{eqnarray}
&&(2\pi)^{-d}\!\int{\rm d}\vec{q}_1\, {\rm e}^{-\frac{q_1^2L}{2}-\vec{q}_1\vec{k}(s_2-s_1)}=\nonumber\\
&&=(2\pi L)^{-d/2}{\rm e}^{\frac{k^2(s_2-s_1)^2}{2L}}.
\end{eqnarray}
After performing the final integration over $k$ we obtain
\begin{eqnarray}
&&\langle \xi(s_1,s_2)\rangle_{(3)} {=}\frac{(2\pi^{d+1})^{\frac{1}{2}}}{\Gamma\left(
\frac{d}{2}\right)}\int_0^L\!\!ds_2\int_0^{s_2}\!\!ds_1 \nonumber\\
&&\times\left(s_2{-}s_1-\frac{(s_2{-}s_1)^2}{L}\right)^{-\frac{1}{2}}.
\end{eqnarray}
Following the same scheme, we obtain the analytic expressions corresponding to diagrams (4) and (5) in Fig.~\ref{fig:2}:
\begin{eqnarray}
&&\langle \xi(s_1,s_2)\rangle_{(4)} {=}\frac{(2\pi^{d+1})^{\frac{1}{2}}}{\Gamma\left(
\frac{d}{2}\right)}\int_0^L\!\!ds_2\int_0^{L}\!\!ds_1\nonumber\\
&&\times\left(s_2{+}s_1- \frac{s_2^2}{L}{-}\frac{s_1^2}{L}\right)^{-\frac{1}{2}}, \\
&&\langle \xi(s_1,s_2)\rangle_{(5)} {=}\frac{(2\pi^{d+1})^{\frac{1}{2}}}{\Gamma\left(
\frac{d}{2}\right)}\int_0^L\!\!ds_2\int_0^{L}\!\!ds_1\nonumber\\
&&\times\left(s_2{+}s_1-\frac{s_1^2}{L}\right)^{-\frac{1}{2}}.\label{r5}
\end{eqnarray}
Note that each diagram in Fig.~\ref{fig:2} is associated with a combinatorial factor.
Namely, contribution (1) in the above expressions is taken with the pre-factor $f^c$, contribution (2) with $\frac{f^c(f^c-1)}{2}$, (3) with $f^r$, (4) with $\frac{f^r(f^r-1)}{2}$, and the last contribution (5) with the pre-factor $f^r f^c$.
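As an independent numerical check of the diagram rules (our own verification sketch, not part of the original calculation), each of the five double integrals in Eqs.~(\ref{r1})--(\ref{r5}) can be reduced by hand to a one-dimensional integral (at $L=1$, with the common prefactor stripped) and evaluated by quadrature. The reference values $4/3$, $\frac43(2\sqrt2-2)$, $\pi/2$, $\pi(2-\sqrt2)$ and $\frac14\left(10\arcsin(\sqrt5/5)-\pi+4\right)$ are the per-diagram contributions implied by the bracket of Eq.~(\ref{rha}).

```python
import math

def midpoint(f, a, b, n=100000):
    # composite midpoint rule; the substitutions below remove all 1/sqrt singularities
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Diagram (1): u = s2 - s1, then u = t^2  ->  integrand 2*(1 - t^2)
D1 = midpoint(lambda t: 2.0 * (1.0 - t * t), 0.0, 1.0)
# Diagram (2): w = s1 + s2 has density min(w, 2 - w); substitute w = t^2
D2 = midpoint(lambda t: 2.0 * min(t * t, 2.0 - t * t), 0.0, math.sqrt(2.0))
# Diagram (3): u = s2 - s1, then u = t^2  ->  integrand 2*sqrt(1 - t^2)
D3 = midpoint(lambda t: 2.0 * math.sqrt(1.0 - t * t), 0.0, 1.0)
# Diagram (4): inner s2 integral done analytically -> 2*arcsin(1/(2*sqrt(1/4 + s - s^2)))
D4 = midpoint(lambda s: 2.0 * math.asin(0.5 / math.sqrt(0.25 + s - s * s)), 0.0, 1.0)
# Diagram (5): inner s2 integral done analytically -> 2*(sqrt(1 + s - s^2) - sqrt(s - s^2))
D5 = midpoint(lambda s: 2.0 * (math.sqrt(1.0 + s - s * s) - math.sqrt(s - s * s)), 0.0, 1.0)
print(D1, D2, D3, D4, D5)
```

Reassembling these with the combinatorial pre-factors $f^c$, $\frac{f^c(f^c-1)}{2}$, $f^r$, $\frac{f^r(f^r-1)}{2}$ and $f^cf^r$ reproduces the bracket entering Eqs.~(\ref{rha}) and (\ref{rhorosette}).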
Summing up all the contributions from Eqs.~(\ref{r1})--(\ref{r5}), with the corresponding pre-factors taken into account,
on the basis of Eq.~(\ref{defk}) we finally obtain the expression for the hydrodynamic radius of a rosette structure:
\begin{eqnarray}
&&\langle R_{H,{\mbox{\tiny rosette}}}\rangle=\frac{6\sqrt{2}\,\Gamma\left(\frac{d}{2}\right)}{\Gamma\left(\frac{d-1}{2}\right)}(f^c+f^r)^2\sqrt{L}\times\nonumber\\
&&\left[-6 f^r\pi \left(\sqrt{2}(f^r-1)-2 f^r+1\right)+\right.\nonumber\\
&&16 \left( \sqrt {2}-1 \right) f^c \left( \sqrt {2}+ f^c \right) +\nonumber\\
&&\left.3f^cf^r \left( 10 \arcsin \left( \frac{\sqrt {5}}{5} \right) -\pi +4 \right)\right]^{-1}.\label{rha}
\end{eqnarray}
The expression for the mean square gyration radius of a rosette architecture is \cite{Blavatska15}:
\begin{eqnarray}
&&\langle R^2_{g,{\mbox{\tiny rosette}}}\rangle=\frac{Ld}{12(f^r+f^c)^2}[f^r(2f^r-1)+\nonumber\\
&&2f^c(3f^c-2)+8f^rf^c]. \label{rga}
\end{eqnarray}
Finally, using Eqs.~(\ref{rha}) and (\ref{rga}), we calculate the universal size ratio (\ref{ratio}) of the rosette-like polymer architecture in the Gaussian approximation:
\begin{eqnarray}
&&\rho_{{\mbox{\tiny rosette}}}=\frac{\sqrt{6\,d}\,\Gamma\left(\frac{d-1}{2}\right)}{72(f^r+f^c)^3\Gamma\left(\frac{d}{2}\right)}\times\nonumber\\
&&\sqrt{6(f^c)^2+8f^cf^r+2(f^r)^2-4f^c-f^r}\times\nonumber\\
&&\left[-6 f^r\pi \left(\sqrt{2}(f^r-1)-2 f^r+1\right)+\right.\nonumber\\
&&16 \left( \sqrt {2}-1 \right) f^c \left( \sqrt {2}+ f^c \right) +\nonumber\\
&&\left.3f^cf^r \left( 10 \arcsin \left( \frac{\sqrt {5}}{5} \right) -\pi +4 \right)\right]. \label{rhorosette}
\end{eqnarray}
Substituting $d=3$ into expression (\ref{rhorosette}),
for $f^r=0$, { both at $f^c=1$ and $f^c=2$,} we recover the universal size ratio of a linear polymer (\ref{ratiochain}),
whereas $f^c>2$ and $f^r=0$ gives the expression for a star polymer (\ref{ratiostar}).
For $f^c=0$ and $f^r=1$ we reproduce the known analytical expression for a single ring, Eq.~(\ref{ratioring}).
Consequently, for $f^c=0$ and $f^r=2$, Eq.~(\ref{rhorosette}) provides the formula for the universal size ratio of a star comprised of
two ring polymers:
\begin{equation}
\rho_{{\mbox{\tiny double ring}}}=\frac{\sqrt{3\pi}}{4}(3-\sqrt{2})\approx 1.217. \label{ratiodring}
\end{equation}
For $f^c=1$ and $f^r=1$ we find the analytic expression for the so-called tadpole architecture:
\begin{eqnarray}
&&\rho_{{\mbox{\tiny tadpole}}}=\frac{\sqrt{22}}{96\sqrt{\pi}}\left[3\pi+28+30\arcsin\left(\frac{\sqrt{5}}{5}\right)\right]\nonumber\\
&&\;\;\;\;\;\;\;\;\;\;\;\approx 1.415. \label{rhotadpole}
\end{eqnarray}
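The limiting cases above can all be checked at once by evaluating Eq.~(\ref{rhorosette}) directly. The sketch below implements the formula in our own notation (an illustration, not the code used to produce the results of this paper) and reproduces the chain, ring, double-ring, tadpole and symmetric-rosette values at $d=3$.

```python
import math

def rho_rosette(fc, fr, d=3):
    # Universal size ratio of a Gaussian rosette, Eq. (rhorosette)
    f = fc + fr
    pref = (math.sqrt(6.0 * d) * math.gamma((d - 1) / 2.0)
            / (72.0 * f**3 * math.gamma(d / 2.0)))
    root = math.sqrt(6 * fc**2 + 8 * fc * fr + 2 * fr**2 - 4 * fc - fr)
    bracket = (-6.0 * fr * math.pi * (math.sqrt(2.0) * (fr - 1) - 2 * fr + 1)
               + 16.0 * (math.sqrt(2.0) - 1.0) * fc * (math.sqrt(2.0) + fc)
               + 3.0 * fc * fr * (10.0 * math.asin(math.sqrt(5.0) / 5.0) - math.pi + 4.0))
    return pref * root * bracket

for fc, fr in [(1, 0), (0, 1), (0, 2), (1, 1), (2, 2)]:
    print(fc, fr, round(rho_rosette(fc, fr), 4))
```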
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.3]{theory.eps}
\caption{Summary of theoretical results for the universal size ratio $\rho$, as given by (\ref{rhorosette}),
vs the functionality $f=f^c+f^r$
for different polymer topologies.
Data are shown for architectures containing: only linear chains (star-like polymers with $f^r=0$) as a function of $f=f^c$ (red symbols),
only rings (with $f^c=0$) as a function of $f=f^r$ (blue symbols), and
the ``symmetric'' rosette structure with an equal number of rings and linear branches, $f=f^r+f^c$ (purple symbols).}\label{theory}
\end{center}\end{figure}
In Fig.~\ref{theory} we plot the calculated theoretical values of the universal size ratio vs the number of grafted arms for stars comprised of linear polymers ($f^c>0$, $f^r=0$; red symbols) and of ring polymers ($f^r>0$, $f^c=0$; blue symbols), as well as for rosette polymers
with an equal number of grafted linear chains and rings ($f^r=f^c>0$; purple symbols).
For all architectures we observe a decrease in $\rho$ with increasing functionality.
In the next section we compare our theoretical predictions with the results of MD simulations.
\section{Numerical approach}
\label{sec:MD}
\subsection{The method}
The numerical data in this work have been obtained from
MD simulations.
We consider a simple three-dimensional numerical model of a rosette polymer whose arms are $f^c$ linear chains and/or $f^r$ ring polymers.
Each arm is composed of $N$ sizeless particles
of equal mass $m$ connected by bonds.
We study ideal (Gaussian) conformations of rosette polymers, {{corresponding to a certain extent to the conformations of real rosette polymers under dilute $\theta$-solvent conditions.}}
In our numerical model the connectivity along the polymer backbone is assured via the harmonic potential
\begin{equation}
V(r)=\frac k2(r-r_0)^2,
\label{harmonic}
\end{equation}
where $k=200$~$k_BT/b^2$ is the interaction strength measured in units of the thermal energy $k_BT$, and the equilibrium bond distance is
$r_0=b$.
The molecular dynamics simulations were performed
by solving the Langevin equation of motion for
the position $\vec{r}_i=[x_i,y_i,z_i]$ of each monomer,
\begin{equation}
m\ddot{\vec{r}}_i = \vec{F}_i -\zeta\dot{\vec{r}}_i + \vec{F}_i^{\mbox{\tiny R}}, \,\,\,
i=1,\ldots,fN,
\label{langevin}
\end{equation}
which describes the motion of bonded monomers.
The forces $\vec{F}_i$ in Eq.~(\ref{langevin}) above
are obtained from the harmonic interaction potential between bonded monomers (Eq.~\ref{harmonic}).
The second and third terms on the right-hand side of Eq.~(\ref{langevin})
are a slowly evolving viscous force $-\zeta\dot{\vec{r}}_i$
and a rapidly fluctuating stochastic force
$\vec{F}_i^{\mbox{\tiny R}}$, respectively. The random force $\vec{F}_i^{\mbox{\tiny R}}$ is related to the friction coefficient $\zeta$ by the fluctuation-dissipation
theorem $\langle \vec{F}_i^{\mbox{\tiny R}}(t) \vec{F}_j^{\mbox{\tiny R}}(t')\rangle = k_BT\zeta \delta_{ij}\delta(t-t')$.
The friction coefficient used in the simulations was $\zeta=0.5\,m\tau^{-1}$, where $\tau = [mb^2/(k_BT)]^{1/2}$
is the unit of time.
A Langevin thermostat was
used to keep the temperature constant. The integration step employed to solve the equations of motion was
$\Delta t=0.0025\tau$.
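As a minimal, self-contained illustration of the thermostat described above (a one-dimensional toy in our own reduced units, not the actual LAMMPS setup), the sketch below integrates the overdamped limit of Eq.~(\ref{langevin}) for a single coordinate in the harmonic potential (\ref{harmonic}) and checks equipartition, $\langle x^2\rangle=k_BT/k$; the noise amplitude follows from the fluctuation-dissipation theorem quoted above.

```python
import math
import random

random.seed(2021)
kBT, k, zeta, dt = 1.0, 200.0, 0.5, 1.0e-4   # reduced units, b = kBT = 1
sigma = math.sqrt(2.0 * kBT * dt / zeta)     # fluctuation-dissipation noise amplitude

x, acc, nsamp = 0.0, 0.0, 0
for step in range(400000):
    # overdamped Euler-Maruyama step: zeta * dx = -k*x*dt + random kick
    x += -(k / zeta) * x * dt + sigma * random.gauss(0.0, 1.0)
    if step >= 50000:                        # discard equilibration
        acc += x * x
        nsamp += 1

var = acc / nsamp
print(var, kBT / k)   # sampled variance should be close to the equipartition value
```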
All simulations were performed in a cubic box
with periodic boundary conditions imposed in all spatial dimensions.
We used Large-scale Atomic/Molecular Massively Parallel Simulator
(LAMMPS) \cite{lammps} to perform simulations. Simulation snapshots were rendered using Visual
Molecular Dynamics (VMD) \cite{vmd}.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.22]{lin_ring.eps}
\caption{Molecular dynamics data for the universal size ratio $\rho$ of linear chains (red symbols) and ring polymers (blue symbols), plotted
as a function of the correction-to-scaling variable $N^{-1/2}$, with corresponding simulation snapshots for architectures with degree of polymerization $N=6400$. Solid lines represent fitting functions of the general form given in Eq.~(\ref{corr}). Horizontal dotted lines correspond to the
asymptotic values $\rho_{\infty}$ predicted by theory, cf. Eqs.~(\ref{ratiochain}) and (\ref{ratioring}).
\label{MD1}
}
\end{center}
\end{figure}
\subsection{Results }
Simulations of rosette polymers were performed for the following numbers of monomer beads per arm: $N=100, 200, 400, 800, 1600$ and $6400$.
The number of arms for stars composed solely of linear chains (i.e.~with $f^r=0$) or solely of ring polymers (i.e.~with $f^c=0$) was varied from 1 to 4. For rosette polymers, which are hybrid architectures comprised of linear chains and ring polymers, we considered two arm functionalities, $f^c=f^r=1$ and $2$.
To increase the conformational sampling, each simulation was carried out with 50 identical molecules in the simulation box.
In the course of the simulations the universal size ratio was measured, cf.~Eq.~(\ref{ratio}).
In the numerical calculation of quantities like $\rho$, a crucial aspect is the finite degree of polymerization $N$ that one deals with in simulations, while the theoretically obtained values of $\rho$ hold in the asymptotic limit $N\to\infty$.
Thus, finite-size effects (corrections to scaling) must be appropriately taken into account.
For the size ratio of an ideal linear chain, this correction is given by
\begin{equation}
\rho=\rho_{\infty}(1+aN^{-\Delta}), \label{corr}
\end{equation}
where $\rho_{\infty}$ is the asymptotic value obtained for $N\to\infty$, $a$ is a non-universal amplitude, and $\Delta$ is the correction-to-scaling exponent, which for a
$\theta$-solvent is $\Delta=1/2$ \cite{dunweg}, whereas for good solvent conditions
it is $\Delta\simeq0.53$ \cite{Clisby16}.
In our numerical analysis we use Eq.~(\ref{corr}) to obtain the universal size ratio in the asymptotic limit for all considered architectures.
For this purpose we plot $\rho$ vs the correction-to-scaling variable $N^{-1/2}$ and extract $\rho_{\infty}$ in the limit $N\rightarrow \infty$.
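The extrapolation can be illustrated on the exactly summable discrete ideal chain (our own sketch, independent of the MD data): computing $\rho(N)$ from the closed-form sums and removing the leading $aN^{-1/2}$ term of Eq.~(\ref{corr}) by a two-point fit recovers the asymptotic value (\ref{ratiochain}) to within $\sim10^{-3}$.

```python
import math

def rho_chain(N):
    # Exact finite-N size ratio of a discrete ideal chain (b = 1):
    # <Rg^2> = (N^2 - 1)/(6N),  <R_H^{-1}> = (2/N^2) * sqrt(6/pi) * sum_d (N - d)/sqrt(d)
    s = sum((N - d) / math.sqrt(d) for d in range(1, N))
    rg = math.sqrt((N * N - 1) / (6.0 * N))
    rh_inv = 2.0 * math.sqrt(6.0 / math.pi) * s / N**2
    return rg * rh_inv

# Two-point elimination of the a*N^(-1/2) term: intercept of a line in x = N^(-1/2)
N1, N2 = 1600, 6400
x1, x2 = N1 ** -0.5, N2 ** -0.5
y1, y2 = rho_chain(N1), rho_chain(N2)
rho_inf = (y2 * x1 - y1 * x2) / (x1 - x2)
print(y1, y2, rho_inf)   # rho(N) increases toward the asymptote 8/(3*sqrt(pi))
```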
In Fig.~\ref{MD1} we display the results of our MD simulations for the two ``benchmark'' systems, the Gaussian linear chain (red circles)
and the Gaussian ring (blue circles).
For both architectures a systematic increase in the size ratio is observed with increasing $N$.
In the asymptotic limit $N\rightarrow \infty$ we obtain
$\rho_{\mbox{\tiny chain}}=1.499\pm0.005$ and $\rho_{\mbox{\tiny ring}}=1.244\pm0.004$.
These numerical values reproduce the known theoretical results with very good accuracy,
given by Eq.~(\ref{ratiochain}) for linear chains and by Eq.~(\ref{ratioring}) for rings.
The complete list of numerically derived universal size ratios and their comparison to theoretical values can be found in
Table \ref{table2}.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.23]{stars_v2.eps}
\caption{Molecular dynamics data for the universal size ratio $\rho$ of a) stars of linear chains, b) stars of ring polymers and c) rosette polymers,
plotted
as a function of the correction-to-scaling variable $N^{-1/2}$.
Data are displayed for different numbers of arms $f^c$ and $f^r$, as indicated. For rosette polymers the data are for a symmetric number of arms, $f^c=f^r$.
Solid lines represent fitting functions according to Eq.~(\ref{corr}). Horizontal dotted lines correspond to asymptotic
values obtained from the analytical theory, see Table \ref{table2}. Insets show simulation snapshots for $N=6400$ and: a) $f^c=3$, b) $f^r=2$ and c) $f^c=f^r=1$.
\label{MD2}
}
\end{center}\end{figure}
\begin{table}[!h]
\begin{tabular}{| c | c | c | c |}
\hline
$f^c$ & $f^r$ & $\rho_{\mbox{\tiny theory}}$ & $\rho_{\mbox{\tiny sim}}$ \\ \hline
$1$ & $0$ & $1.504$ & $1.499\pm0.005$ \\ \hline
$2$ & $0$ & $1.504$ & $1.499\pm0.005$ \\ \hline
$3$ & $0$ & $1.401$ & $1.395\pm0.006$ \\ \hline
$4$ & $0$ & $1.334$ & $1.336\pm0.006$ \\ \hline\hline
$0$ & $1$ & $1.253$ & $1.244\pm0.004$ \\ \hline
$0$ & $2$ & $1.217$ & $1.204\pm0.010$ \\ \hline
$0$ & $3$ & $1.171$ & $1.165\pm0.011$ \\ \hline
$0$ & $4$ & $1.143$ & $1.135\pm0.012$ \\ \hline\hline
$1$ & $1$ & $1.415$ & $1.401\pm0.008$ \\ \hline
$2$ & $2$ & $1.305$ & $1.295\pm0.018$ \\ \hline
\end{tabular}
\caption{Summary of theoretical results for the size ratio $\rho_{\mbox{\tiny theory}}$ calculated using Eq.~(\ref{rhorosette}) and asymptotic values $\rho_{\mbox{\tiny sim}}$ obtained from MD simulations for rosette polymer architectures
comprised of different numbers of linear chains $f^c$ and ring polymers $f^r$.
}\label{table2}
\end{table}
In Fig.~\ref{MD2} we show the numerically derived universal size ratios as a function of $N^{-1/2}$ for more complex architectures.
We investigated conformations of stars comprised of linear chains, stars of ring polymers, and
rosette polymers with an equal number of grafted linear and ring chains. For all architectures we observe
a systematic approach to the asymptotic values predicted by theory with increasing $N$ per arm.
For
stars of linear chains with functionality $f^c=3$ and 4 (cf.~Fig.~\ref{MD2}a), the simulations provide the following universal size ratios: $1.395\pm0.006$ and $1.336\pm0.006$. Both values are in very good agreement with the theoretical prediction
given by Eq.~(\ref{ratiostar}).
Note that the values of $\rho$ calculated in the course of our simulations are much closer to the analytical theory than the existing numerical data \cite{Shida04}.
For stars comprised of cyclic macromolecules (cf.~Fig.~\ref{MD2}b) we reproduce the theoretical value of Eq.~(\ref{ratiodring}) for the double ring architecture ($f^r=2$),
as well as the values for stars with a larger number of grafted rings, cf.~Eq.~(\ref{rhorosette}) with $f^c=0$ and $f^r=3$ or 4.
Namely, we get $1.204\pm0.010$ for $f^r=2$, $1.165\pm0.011$ for $f^r=3$ and $1.135\pm0.012$ for $f^r=4$.
For the tadpole architecture, the simplest rosette polymer, comprised of $f^c=1$ and $f^r=1$ arms (see snapshot in Fig.~\ref{MD2}c), we
obtain a size ratio of $1.401\pm0.008$, which matches the theoretically predicted value for this type of polymer from Eq.~(\ref{rhotadpole}).
For rosette polymers with $f^c=2$ and $f^r=2$ our simulations provide $1.295\pm0.018$, which is consistent with the corresponding
value calculated from Eq.~(\ref{rhorosette}). The full list of calculated values of $\rho$ is given in Table \ref{table2}.
\section{Conclusions}\label{Con}
We have studied, by a combination of analytical theory and molecular dynamics
simulations, the conformational properties of rosette polymers: complex macromolecules
consisting of $f^c$ linear chains (branches) and $f^r$ closed loops (rings) radiating from a central branching point.
Our focus was on characterizing the structure of ideal polymer conformations with no excluded volume interactions. For this purpose we investigated
basic structural quantities
such as the mean square radius of gyration $R_g^2$, the hydrodynamic radius $R_H$ and, most importantly, the universal size ratio $\rho \equiv\sqrt{R_g^2}/R_H$.
Our calculations demonstrate a gradual decrease in $\rho$ with increasing functionality $f=f^c+f^r$ of the grafted polymers.
The analytical results are in perfect agreement with our numerical simulation data.
Since both $R_g^2$ and $R_H$ are directly accessible via static and dynamic scattering techniques, respectively, we hope
that our results will stimulate further experimental studies of the behavior of complex polymer architectures in solutions.
\begin{acknowledgements}
J.P. acknowledges support from the National Science Center, Poland (Grant No. 2018/30/E/ST3/00428) and
the computational time at PL-Grid, Poland.
\end{acknowledgements}
As we know from the pioneering paper by Jackiw and Rebbi \cite{Jackiw:1975fn} and from the subsequent development reported in \cite{Niemi:1984vz}, the fermion number of solitons can take fractional and even irrational values. In the known cases, the fermion number is topological. This means that it depends on the boundary or asymptotic values of the background fields and is not sensitive to smooth variations of these fields in the interior of manifold. From the very beginning, the fermion number fractionization had applications to condensed matter physics \cite{Jackiw:1981wc}. More recently, this mechanism was applied to the physics of topological insulators, see \cite{Qi:2011zya}.
Among the planar ($2+1$-dimensional) solitonic systems, a prominent role is played by the Abrikosov-Nielsen-Olesen (ANO) vortex. There are many possible ways to couple fermions to this system, and thus there are many quantum systems which include the ANO vortex as a bosonic sector. In a supersymmetric model, the one-loop shift of the mass of the vortex was calculated in \cite{Vassilevich:2003xk,Rebhan:2003bu}. In a pure bosonic model, this was done in \cite{AlonsoIzquierdo:2004ru,izquerdo2005,izquierdo2016} while (non-supersymmetric) fermions were added in \cite{bordag2003,Graham:2004jb}.
The fermion number fractionization in $2+1$ dimensions on a pure gauge field background was calculated in \cite{Niemi:1983rq}. For a singular magnetic vortex this effect was considered in \cite{Sitenko:1996np}. For a pair of fermions coupled to both the gauge and Higgs fields of the ANO vortex, half-integer fermion number fractionization was obtained in \cite{Chamon:2007pf,Chamon:2007hx} (see also the preceding papers \cite{Hou:2006qc,Jackiw:2007rr}). In these models, the fermions carry the elementary electric charge $e$ while the scalar fields possess charge $2e$; thus, the ANO vortex acquires a fractional flux. There is a way of coupling a single generation of fermions to the ANO system which is given by the Jackiw-Rossi model \cite{Jackiw:1981ee}. This coupling is reminiscent of planar superconducting systems. A candidate for a fractional flux vortex in such systems was recently found experimentally \cite{Tanaka:2018}. This discovery motivated a study \cite{Yanagisawa:2020} of fermion charge fractionization in the Jackiw-Rossi model. The computations in that paper were based on the usual relation between the vacuum fermion number and the $\eta$-function of the Hamiltonian, which is not correct in the Jackiw-Rossi model, as will be demonstrated below.
The purpose of this paper is to analyze the vacuum fermion number in the Jackiw-Rossi model paying special attention to its topological (or rather non-topological) nature. First, we observe that the interaction between scalar and spinor fields does not allow one to immediately relate the fermion number to the $\eta$-function of an operator. One has to double the spinor components. This is similar to what has been done in \cite{Jackiw:1981ee,Weinberg:1981eu} to analyze the zero-energy spectrum, but we do this in the path integral formalism following the method of \cite{Novozhilov:1994xs}. The fermion density is then related to an $\eta$-function which, however, is weighted with a matrix. The presence of this matrix destroys the standard proof of vanishing local variations of $\eta$ and of the topological nature of this quantity. We go on by computing a few leading terms in the large mass expansion of the fermion number with heat kernel methods and confirm the presence of non-topological contributions depending on the profiles of magnetic and Higgs fields rather than on their global characteristics.
This paper is organized as follows. In the next section, we derive an expression for the vacuum fermion number in the Jackiw-Rossi model in terms of an $\eta$ function with a matrix weight. Section \ref{sec:Heat} is dedicated to the heat kernel evaluation of the fermion number. First, we show why the standard proof of the topological nature does not work for the weighted $\eta$ function. Then, we pinpoint the non-topological fermion number within the large mass expansion. Some concluding remarks are given in the last section.
\section{Fermion number in the Jackiw-Rossi model}
The Jackiw-Rossi model \cite{Jackiw:1981ee} in $(2+1)$ dimensions is described by the Lagrangian
\begin{equation}
\mathcal{L}=\bar \psi (\gamma^\mu (\mathrm{i} \partial_\mu -eA_\mu))\psi -\frac 12 \mathrm{i} g \phi \bar\psi\psi^C +\frac 12 \mathrm{i} g^*\phi^* \bar\psi^C \psi -m\bar\psi\psi \label{JRL}
\end{equation}
governing the dynamics of a two-component complex spinor field $\psi$ coupled to a gauge field $A_\mu$ and a complex scalar field $\phi$.
As compared to the original work \cite{Jackiw:1981ee}, a mass term has been added.
For convenience, let us take the $\gamma$-matrices in the Majorana representation:
\begin{equation}
\gamma^0=\sigma^2,\qquad \gamma^1=\mathrm{i} \sigma^1,\qquad \gamma^{2} = \mathrm{i} \sigma^3.\label{JRgam}
\end{equation}
Then the charge conjugation matrix can be taken as $C=-\gamma^0$. We have the usual relations $C\gamma^\mu C^{-1}=-\gamma^{\mu\, T}$, $\psi^C=\psi^*$, etc.
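As a quick sanity check, the representation (\ref{JRgam}) and the stated charge-conjugation properties can be verified numerically (a sketch using numpy; the metric signature $(+,-,-)$ is assumed): the matrices satisfy $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$ with $\eta=\mathrm{diag}(1,-1,-1)$, and $C=-\gamma^0$ obeys $C\gamma^\mu C^{-1}=-\gamma^{\mu\,T}$.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Majorana representation of Eq. (JRgam): gamma^0, gamma^1, gamma^2
g = [s2, 1j * s1, 1j * s3]
eta = np.diag([1.0, -1.0, -1.0])     # metric signature (+,-,-)

# Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu}
for mu in range(3):
    for nu in range(3):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(2))

# Charge conjugation C = -gamma^0 satisfies C gamma^mu C^{-1} = -gamma^{mu T}
C = -g[0]
Cinv = np.linalg.inv(C)
for mu in range(3):
    assert np.allclose(C @ g[mu] @ Cinv, -g[mu].T)
print("Clifford algebra and charge-conjugation relations hold")
```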
We assume that the bosonic fields $A_\mu$ and $\phi$ belong to the topological class of an ANO vortex. This configuration is static, so that $A_0=0$ and none of the fields depend on time. We are not going to use the exact profile functions, though it will be important to us that this configuration is localized somewhere near the origin. If $r$ is the radial coordinate, for $r\to\infty$ we have
\begin{equation}
|\phi|\to v,\qquad D_j\phi\to 0,\qquad F_{jk}\to 0 .\label{asANO}
\end{equation}
Here and in what follows $x^j$, $x^k$, etc.\ denote spatial coordinates. $D_j\phi =(\partial_j +2\mathrm{i} e A_j)\phi$ is a gauge covariant derivative, depending on the charge of the field it acts upon; therefore, in our notation, $D_j \phi^{*} \equiv (D_j\phi)^{*}$. Note that the electric charge of $\phi$ is $2e$, $F_{jk}\equiv \partial_jA_k-\partial_kA_j$, and $v$ is a minimum of the Higgs potential. All functions in (\ref{asANO}) go to their asymptotic values exponentially fast. Let $\mathcal{N}\in\mathbb{Z}$ be the topological charge of the vortex. The magnetic flux quantization condition
\begin{equation}
\frac e{\pi}\int d^2x\, F_{12}=\mathcal{N} \label{flux}
\end{equation}
has an unusual factor on the right hand side due to the charge $2e$ of $\phi$. This is why we say that the vortex has a fractional flux.
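The quantization condition (\ref{flux}) can also be checked numerically on a model profile (a sketch: the specific profile below is an illustrative assumption chosen only to reproduce the asymptotics (\ref{asANO}) for a winding-$\mathcal{N}$ configuration of the charge-$2e$ scalar, with signs fixed by an orientation convention).

```python
import numpy as np

e, N = 1.0, 2                      # elementary charge and winding (illustrative)

# Model profile with ANO asymptotics for a charge-2e scalar of winding N:
# 2 e A_theta -> N / r as r -> infinity, so that D_j phi -> 0
r = np.linspace(1e-6, 10.0, 40001)
A_theta = N / (2 * e * r) * (1 - np.exp(-r**2))

# F_12 = (1/r) d(r A_theta)/dr in polar coordinates;
# total flux = 2 pi * integral of d(r A_theta)/dr over r
d_rA = np.gradient(r * A_theta, r)
flux = 2 * np.pi * np.sum(0.5 * (d_rA[1:] + d_rA[:-1]) * np.diff(r))

flux_number = e / np.pi * flux     # left-hand side of Eq. (flux)
print(flux_number)                 # close to N = 2
```

The magnetic flux itself is $\pi\mathcal{N}/e$, i.e.\ half the conventional $2\pi\mathcal{N}/e$, which is the fractional flux referred to in the text.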
Besides the $\bar\psi\psi$ term, the Lagrangian (\ref{JRL}) also contains the $\bar\psi \psi^*$ and $\bar\psi^*\psi$ couplings to the Higgs field. Thus, it does not have a form that allows one to immediately relate the states to eigenfunctions of some differential operator. To overcome this difficulty, we pass to doubled spinors following the approach developed in the paper \cite{Novozhilov:1994xs} (see also \cite{Ball:1989hn,Novozhilov:1994he}). We introduce
\begin{equation}
\Psi:=\begin{pmatrix} \psi \\ \bar\psi^T \end{pmatrix}. \label{JRPsi}
\end{equation}
With the help of identities
\begin{eqnarray*}
&&\int d^3x \bar \psi \gamma^\mu (\mathrm{i} \partial_\mu -eA_\mu)\psi =
\int d^3x \psi^T \gamma^{\mu\, T} (\mathrm{i} \partial_\mu +eA_\mu)\bar\psi^T,\\
&&\bar\psi^C\psi=\psi^T\gamma^0\psi,\qquad \bar\psi\psi^C=-\bar\psi \gamma^0\bar\psi^T,\qquad -m\bar\psi\psi =m\psi^T \bar\psi^T
\end{eqnarray*}
we rewrite the action as
\begin{equation}
S=\frac 12 \int d^3x \Psi^T \widehat F \Psi \label{JRS}
\end{equation}
with
\begin{equation}
\widehat F=\begin{pmatrix}
\mathrm{i} g^*\phi^* \gamma^0 & \gamma^{\mu\, T} (\mathrm{i}\partial_\mu + eA_\mu) +m \\
\gamma^\mu(\mathrm{i}\partial_\mu -eA_\mu) -m & \mathrm{i} g\phi \gamma^0
\end{pmatrix}.
\end{equation}
The corresponding Hamiltonian reads
\begin{equation}
H=\begin{pmatrix}
\alpha^j (\mathrm{i} \partial_j -eA_j) - \beta m & \mathrm{i} g \phi \\
-\mathrm{i} g^* \phi^* & -\alpha^j (\mathrm{i} \partial_j + eA_j) -\beta m
\end{pmatrix}. \label{JRH}
\end{equation}
Here, as usual, $\beta\equiv \gamma^0$ and $\alpha^j=\beta\gamma^j$.
Let us consider the effective action $W$ which is obtained by integrating out the fermionic degrees of freedom,
\begin{equation}
e^{\mathrm{i} W}=\int\mathcal{D}\psi\, \mathcal{D}\bar\psi\, \exp\left( \mathrm{i} \int d^3 x \mathcal{L}\right).\label{defW}
\end{equation}
This action depends on the background bosonic fields $\phi$ and $A_\mu$. The charge density is given by the variational derivative
\begin{equation}
j^0=-\frac 1e \, \frac{\delta W}{\delta A_0}.\label{j0}
\end{equation}
The same effective action $W$ can be written through a path integral over the doubled spinors $\Psi$ as
\begin{equation}
W=-\mathrm{i} \, \ln \int\mathcal{D}\Psi\, \exp\left( \mathrm{i} \tfrac 12 \int d^3x\, \Psi^T \widehat F \Psi \right)=-\frac {\mathrm{i}}2 \ln\det (\widehat F),\label{WF}
\end{equation}
see \cite{Novozhilov:1994xs}. The functional integration measure becomes $\mathcal{D}\Psi =\mathcal{D}\psi\, \mathcal{D}\bar\psi$.
Symbolically, we may write
\begin{equation}
j^0=\frac{\mathrm{i}}{2e} \, \mathrm{Tr}\left( \frac{\delta\widehat F}{\delta A_0} \, \widehat{F}^{-1} \right).\label{j01}
\end{equation}
To give a precise meaning to this formula one has to invert $\widehat F$ and regularize the functional trace.
After having calculated the variational derivative in (\ref{j01}), one puts the background fields to their values for the static vortex configuration. On such a background, the eigenfunctions of the Hamiltonian (\ref{JRH}) can be taken depending on the spatial coordinates $\vec{x}$ only,
\begin{equation}
H\Psi_n (\vec{x})=E_n\Psi_n (\vec{x}). \label{HPsin}
\end{equation}
The energy spectrum has both discrete and continuous parts. To avoid notation clutter we write the formulas below as if the whole spectrum were discrete.
For $g=0$, the Hamiltonian $H$ consists of two hermitian anticommuting parts. Thus one can easily show that $(H(g=0))^2\geq m^2$. Consequently, if $|m|>|g\phi|$ the full Hamiltonian does not have zero energy eigenstates.
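This spectral gap can be illustrated in a plane-wave sketch with constant background fields (the values of $m$ and $g\phi$ and the reduction to momentum space are illustrative assumptions only): by Weyl's inequality, the eigenvalues of the resulting $4\times 4$ matrix obey $|E|\geq \sqrt{p^2+m^2}-|g\phi|\geq |m|-|g\phi|$.

```python
import numpy as np

rng = np.random.default_rng(0)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
beta, alpha = s2, [s3, -s1]        # beta = gamma^0, alpha^j = beta gamma^j

m, gphi = 2.0, 0.5 + 0.3j          # illustrative constants with |m| > |g phi|
I2 = np.eye(2)

min_abs_E = np.inf
for _ in range(200):
    p = rng.normal(size=2)         # plane-wave momentum
    hp = p[0] * alpha[0] + p[1] * alpha[1] - m * beta
    hm = -(p[0] * alpha[0] + p[1] * alpha[1]) - m * beta
    # Hamiltonian (JRH) for constant fields in momentum space
    H = np.block([[hp, 1j * gphi * I2],
                  [-1j * np.conjugate(gphi) * I2, hm]])
    min_abs_E = min(min_abs_E, np.abs(np.linalg.eigvalsh(H)).min())

# no zero-energy eigenstates: every |E| stays above |m| - |g phi|
assert min_abs_E >= abs(m) - abs(gphi) - 1e-9
print(min_abs_E)
```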
The vectors
\begin{equation}
\Psi_{\omega,n}(\vec{x},t)=(2\pi)^{-1/2}e^{-\mathrm{i} \omega t} \, \Psi_n(\vec{x})
\end{equation}
form a basis for the space of square integrable 4-spinors on $\mathbb{R}^3$. To compute (\ref{j01}) one has to sandwich the expression under the trace between $\Psi_{\omega,n}^\dag$ and $\Psi_{\omega,n}$, integrate over $\omega$ and sum over $n$. To regularize the $\omega$-integral we use a symmetric time-splitting regularization. Namely, we take two eigenvectors at shifted time arguments, $\Psi_{\omega,n}^\dag(\vec{x},t)$ and $\Psi_{\omega,n}(\vec{x},t+\Delta t)$. After computing the integral, we take the limits $\tfrac 12(\lim_{\Delta t\to +0} + \lim_{\Delta t \to -0})$. To regularize the sum, we multiply the expression by $|E_n|^{-s}$ with $\Re s$ sufficiently large to ensure the convergence and analytically continue to $s= 0$ afterwards.
\begin{eqnarray}
&&j^0(\vec{x},t)=-\frac{\mathrm{i}}2 \frac 12 \left( \lim_{\Delta t\to +0} + \lim_{\Delta t \to -0} \right) \int_{-\infty}^\infty \frac{d\omega}{2\pi}\, \sum_n |E_n|^{-s} \nonumber \\
&&\qquad\qquad \times \Psi_n^\dag (\vec{x}) \begin{pmatrix}
1 & 0 \\ 0 & -1 \end{pmatrix} \Psi_n(\vec{x}) \frac{e^{-\mathrm{i} \omega\, \Delta t}}{\omega +\mathrm{i}\, 0 \, \mathrm{sgn}(\omega) -E_n }\,.\label{j02}
\end{eqnarray}
After performing the integration over $\omega$ one obtains
\begin{equation}
j^0(\vec{x},t)=-\frac 14 \sum_n \Psi_n^\dag (\vec{x}) \begin{pmatrix}
1 & 0 \\ 0 & -1 \end{pmatrix} \Psi_n(\vec{x})\, \mathrm{sgn}\, (E_n)\, |E_n|^{-s} \,. \label{j03}
\end{equation}
The analytic continuation to $s=0$ is understood in both formulas (\ref{j02}) and (\ref{j03}).
Let $Q$ be a smooth bounded matrix-valued function (a smooth endomorphism). The $\eta$ function of $H$ smeared with $Q$ is defined as
\begin{equation}
\eta(s,H;Q)=\mathrm{Tr}\, \left(Q\, \mathrm{sgn}\, (H)\, |H|^{-s} \right)=
\mathrm{Tr}\, \left( Q \cdot (H^2)^{-s/2} \cdot (H/|H|) \right)=\mathrm{Tr}\, \left( Q \cdot (H^2)^{-\frac{s+1}2} H\right). \label{eta}
\end{equation}
Here again $s$ is a complex parameter. The trace in (\ref{eta}) exists if $\Re\, s$ is sufficiently large. This function can be analytically continued as a meromorphic function to the whole complex plane. At $s=0$, equation (\ref{j03}) yields
\begin{equation}
\int d^2x\, j^0(x)\, \rho(x)=-\frac 14\, \eta(0,H;\rho \tau_3)\,, \label{j0eta}
\end{equation}
where $\rho$ is a smooth localizing function of compact support, and $\tau_3=\left(\begin{array}{cc}1 & 0\\ 0 &-1\end{array}\right)$. An integrated version of (\ref{j03}) gives an expression for the fermion number $N$ in terms of the $\eta$ function,
\begin{equation}
N\equiv \int d^2x\, j^0(x)=-\frac 14\, \eta(0,H; \tau_3). \label{Neta}
\end{equation}
There are two important differences from the corresponding formula derived in the seminal paper \cite{Niemi:1983rq}. These are the coefficient $1/4$ instead of $1/2$ and the presence of $\tau_3$ in the $\eta$ function. Both are caused by our spinor field doubling procedure. The presence of $\tau_3$ has a profound consequence: the standard proof that $\eta(0)$ is topological in 2D does not work any more.
\section{Heat kernel computations of the fermion current}\label{sec:Heat}
\subsection{Why the standard proof of $N$ being topological does not work for the JR model}\label{sec:why}
Here we study local variations of the $\eta$-function with and without a matrix weighting factor. Our method goes back to the paper by Atiyah, Patodi and Singer \cite{Atiyah:1980jh}. We closely follow the procedure presented in \cite{Gilkey:book}. A slightly different method was used in \cite{AlvarezGaume:1984nf}.
Let $H(\varepsilon)=H+\varepsilon\, h$ where $h$ is a perturbation caused by an infinitesimal localized variation of the background bosonic fields $\phi$ and $A$. Let us consider the case $Q=1$. By using Lemma 1.10.2 of \cite{Gilkey:book} we can express the variation of the $\eta$ function through a $\zeta$ function weighted with $h$:
\begin{equation}
\left.\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\right\vert_{\varepsilon=0} \eta(s,H(\varepsilon)) =
-s \mathrm{Tr}\, \left[ h\, (H^2)^{-\frac{s+1}2} \right]=-s\zeta\left(\frac{s+1}2 ,H^2;h\right).\label{deleta}
\end{equation}
Now we need some basics of spectral functions. Let $L$ be a Laplace type operator on a manifold $M$ of dimension $n$ with or without boundary. Let $h$ be a smooth matrix-valued function. Then, residues of the $\zeta$ function can be expressed by the formula
\begin{equation}
\mathrm{Res}_{u=\frac{n-k}2} \left( \Gamma(u)\zeta(u,L;h)\right)=a_k(L;h) \label{Res}
\end{equation}
in terms of the heat kernel coefficients defined by the following asymptotic expansion as $t\to +0$:
\begin{equation}
\mathrm{Tr}\, \left( h e^{-tL}\right)\simeq \sum_{k=0}^\infty t^{\frac{k-n}2} a_k(L;h). \label{hkex}
\end{equation}
By Eq.\ (\ref{deleta}), the derivative $(\mathrm{d} \eta(0,H))/(\mathrm{d}\varepsilon)\vert_{\varepsilon=0}$ is given by the residue of the $\zeta(u,H^2;h)$ function at $u=\tfrac 12$, which is in turn proportional to $a_1(H^2;h)$. Since $h$ is localized inside the manifold and does not extend to boundaries or asymptotic regions, the coefficient $ a_1(H^2;h)$ vanishes. We conclude that $\eta(0,H)$ does not change under local variations $H\to H(\varepsilon)=H+\varepsilon h$ and thus is a topological invariant.
The key point of the proof presented above was Eq.\ (\ref{deleta}) relating the variation of the $\eta$ function to a residue of the $\zeta$ function, which happened to be local and vanishing in the dimension $n=2$. Roughly speaking, to get (\ref{deleta}) one needs to differentiate $\eta(0,H)$ as if it were a usual function of a commutative variable. This property is ensured by the possibility of reordering operators under the trace. This possibility is (partially) lost if $Q$ does not commute with $H$. In such a case, the variation of $\eta(0,H;Q)$ cannot be written in the simple form of (\ref{deleta}) and all subsequent arguments break down.
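The breakdown can be visualized in a finite-dimensional toy model (a sketch: the matrices below are purely illustrative and are not the JR Hamiltonian). For a finite Hermitian matrix, $\eta(0,H;Q)=\mathrm{Tr}\,(Q\,\mathrm{sgn}\,H)$; the unweighted trace stays constant under a perturbation as long as no eigenvalue crosses zero, while the $\tau_3$-weighted trace drifts continuously.

```python
import numpy as np

n = 6
H0 = np.diag([1.0, 2.0, 3.0, -1.0, -2.0, -3.0])
Q = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])   # analogue of tau_3

# "localized" perturbation mixing a positive and a negative level;
# no eigenvalue of H0 + eps*h crosses zero for eps in [0, 1]
h = np.zeros((n, n))
h[0, 3] = h[3, 0] = 0.3

def eta0(M, W):
    # eta(0, M; W) = Tr(W sgn(M)) for a finite Hermitian matrix M
    w, U = np.linalg.eigh(M)
    return float(np.real(np.trace(W @ U @ np.diag(np.sign(w)) @ U.conj().T)))

eps = np.linspace(0.0, 1.0, 11)
unweighted = [eta0(H0 + e * h, np.eye(n)) for e in eps]
weighted = [eta0(H0 + e * h, Q) for e in eps]

# Tr(sgn H) behaves "topologically": constant while no level crosses zero ...
assert np.ptp(unweighted) < 1e-9
# ... but the tau_3-weighted trace varies continuously: 6 -> 4 + 2/sqrt(1.09)
assert np.ptp(weighted) > 1e-2
print(weighted[0], weighted[-1])
```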
\subsection{Computations for the JR model}
To evaluate the large mass expansion of the current (\ref{j0eta}) we shall use the method proposed in \cite{Alonso-Izquierdo:2019tms,MateosGuilarte:2019eem}.
With the help of the identity
\begin{equation}
\int_0^\infty dt\, t^a e^{-bt}=b^{-(1+a)}\Gamma(1+a) \label{iden}
\end{equation}
we write
\begin{equation}
\eta(s,H,\rho\tau_3)=\frac 1{\Gamma\left( \frac{s+1}2 \right)} \int_0^\infty dt\, t^{\frac{s-1}2}\mathrm{Tr}\, \left(\rho\tau_3He^{-tH^2} \right).\label{etasH}
\end{equation}
Let us introduce a shifted operator $H_\rho=H-\varepsilon\rho\tau_3$ with $\varepsilon$ being a real parameter. Then
\begin{equation}
\eta(s,H,\rho\tau_3)=\frac 1{2\Gamma\left( \frac{s+1}2 \right)} \int_0^\infty dt\, t^{\frac{s-3}2} \left.\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\right\vert_{\varepsilon=0} \mathrm{Tr}\, \left(e^{-tH_\rho^2} \right).\label{eta21}
\end{equation}
To evaluate this expression by using a large mass expansion we isolate $m^2$ in $H_\rho^2$ and take the limit $s\to 0$ to obtain
\begin{equation}
\eta(0,H,\rho\tau_3)=\frac 1{2\sqrt{\pi}} \int_0^\infty dt\, t^{-\frac{3}2} \left.\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\right\vert_{\varepsilon=0}e^{-tm^2} \mathrm{Tr}\, \left(e^{-t\tilde H_\rho^2} \right), \label{eta22}
\end{equation}
where $\tilde{H}_\rho^2\equiv H_\rho^2 -m^2$. Next, we insert the heat kernel expansion (\ref{hkex}) and integrate over $t$:
\begin{eqnarray}
\eta(0,H;\rho\tau_3)&\simeq& \frac 1{2\sqrt{\pi}} \int_0^\infty dt\,\sum_{k=0}^\infty t^{\frac{k-5}2}\left. \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\right\vert_{\varepsilon=0}a_k(\tilde H_\rho^2)\, e^{-tm^2}\nonumber\\
&=&\frac 1{2\sqrt{\pi}}\sum_k \Gamma\left( \frac{k-3}2\right)\, |m|^{3-k} \left.\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\right\vert_{\varepsilon=0}a_k(\tilde{H}_\rho^2).\label{JRetaex}
\end{eqnarray}
Here $a_k(\tilde H_\rho^2)\equiv a_k(\tilde H_\rho^2;1)$. The integral above is convergent if the contributions of heat kernel coefficients $a_k$ with $k\leq 3$ vanish. We shall check this condition below.
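The $t$-integration producing the $\Gamma\left(\frac{k-3}2\right)|m|^{3-k}$ factors in (\ref{JRetaex}) can be checked numerically for the contributing orders (a sketch; the mass value is arbitrary and illustrative).

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

m = 1.7
errs = []
for k in (4, 6, 8):   # orders contributing in Eq. (JRetaex)
    # int_0^infty dt t^{(k-5)/2} e^{-t m^2} = Gamma((k-3)/2) |m|^{3-k}
    val, _ = quad(lambda t: t ** ((k - 5) / 2) * np.exp(-t * m ** 2), 0, np.inf)
    exact = gamma((k - 3) / 2) * abs(m) ** (3 - k)
    errs.append(abs(val - exact) / exact)
max_rel_err = max(errs)
print(max_rel_err)
```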
To be able to use universal expressions for the heat kernel coefficients (see, e.g., \cite{Vassilevich:2003xt}) we represent the operator $\tilde{H}_\rho^2$ in the canonical form
\begin{equation}
\tilde{H}_\rho^2=-(\nabla_j\nabla_j +E), \label{canH}
\end{equation}
where $\nabla_j=\partial_j+\omega_j$ plays the role of a covariant derivative while $E$ is a matrix valued potential. For our operator they read
\begin{eqnarray}
&&E=\begin{pmatrix}
\frac e2 \beta\, \epsilon^{jk}F_{jk}-|g\phi|^2 & g\alpha^jD_j\phi +2\mathrm{i} \beta g \phi m\\
g^*\alpha^jD_j\phi^* -2\mathrm{i} \beta g^* \phi^*m & -\frac e2 \beta\, \epsilon^{jk}F_{jk}-|g\phi|^2
\end{pmatrix} -2\varepsilon\rho\beta m \begin{pmatrix}
1 & 0 \\ 0 & -1
\end{pmatrix} \label{E} \\
&& \omega_j=\begin{pmatrix}
\mathrm{i} e A_j & 0 \\ 0 & -\mathrm{i} e A_j
\end{pmatrix} +\mathrm{i} \alpha^j \varepsilon\rho \label{omega}
\end{eqnarray}
In this section, we are working in a Euclidean space with a positive unit metric. We still keep the distinction between upper and lower indices of some quantities which have a $(2+1)$-dimensional origin. For example, $A$ always appears with a subscript, while $\alpha$ and $\gamma$ come with superscripts. The summation over repeated indices is always done with the Kronecker symbol independently of the position of indices. This prescription destroys the balance between upper and lower indices within formulas, but keeps the notation simple and unambiguous.
Each heat kernel coefficient $a_k$ is an integral of a trace of a local polynomial constructed from $E$, the field strength $\Omega_{ij}=[\nabla_i,\nabla_j]$, and their repeated covariant derivatives (e.g., $E_{;j}=[\nabla_j,E]$, etc). For example,
\begin{eqnarray}
&&E_{;j}=\begin{pmatrix}
\frac e2 \beta \partial_j(\epsilon^{kl}F_{kl}) -2\varepsilon \beta m\partial_j\rho -\partial_j |g\phi|^2 &
g\alpha^k (D_jD_k\phi)+2\mathrm{i} g \beta m (D_j\phi) \\
g^*\alpha^k (D_jD_k\phi^*)-2\mathrm{i} g^* \beta m (D_j\phi^*) &
-\frac e2 \beta \partial_j(\epsilon^{kl}F_{kl}) +2\varepsilon \beta m \partial_j\rho -\partial_j|g\phi|^2
\end{pmatrix}\nonumber\\
&&\qquad + \begin{pmatrix}
-\mathrm{i}\gamma^j e \hspace{1.5pt} \epsilon^{kl}F_{kl} & 2g\beta \epsilon^{jk}(D_k\phi) +4\gamma^jmg\phi \\
2g^*\beta \epsilon^{jk}(D_k\phi^*) -4\gamma^jmg^*\phi^* & \mathrm{i}\gamma^j e \hspace{1.5pt} \epsilon^{kl}F_{kl}
\end{pmatrix}\varepsilon\rho \label{Ej}
\end{eqnarray}
and
\begin{equation}
\Omega_{ij}=\begin{pmatrix}
\mathrm{i} e F_{ij} +\mathrm{i} \varepsilon \left(\alpha^j (\partial_i\rho) - \alpha^i (\partial_j\rho)\right) & 0 \\ 0 & -\mathrm{i} e F_{ij} +\mathrm{i} \varepsilon \left(\alpha^j (\partial_i\rho) - \alpha^i (\partial_j\rho)\right) \label{Oij}
\end{pmatrix}\,.
\end{equation}
All invariants entering $a_k$ have the canonical mass dimension $k$. On manifolds without boundaries, all coefficients with odd values of $k$ vanish.
By a direct computation with the expressions from \cite{Vassilevich:2003xt}, one obtains that the contributions of $a_0$ and $a_2$ to (\ref{JRetaex}) vanish thus fulfilling the consistency condition presented below (\ref{JRetaex}).
Let us compute the current as an expansion in $\phi$ and its derivatives keeping the terms up to $D^2$ and $\phi^2$. Since $[D_j,D_k]\propto F_{jk}$ we shall also keep the terms with $F$ and $F\phi^2$, while $m$ can enter with any power.
It is important to establish upper bounds on the number $k$ of the heat kernel coefficient which contains the required invariants. Consider the term $\rho\epsilon^{jk}F_{jk}$. It has canonical mass dimension $3$. Thus, in $a_k$ it has to be multiplied by $m^{k-3}$. Since $\omega_j$ does not contain $m$, this requires a product of at least $k-3$ factors of $E$ or its derivatives -- an expression whose mass dimension is greater than or equal to $2(k-3)$. Since the mass dimension of $a_k$ is $k$, we have the upper bound $k\leq 6$. In a similar way one comes to the conclusion that the $\rho (D\phi)(D\phi^*)$ and $\rho |g\phi|^2 F$ terms may appear for $k\leq 10$. By refining these arguments one can exclude a lot of possible terms in the expansion and even improve the bounds mentioned above. At any rate, with the explicit expressions from \cite{Fliegner:1997rk} for flat space heat kernel coefficients up to $a_{12}$, the rest may be done by a Wolfram Mathematica script.
We obtain that, up to the order considered, just a few terms in the heat kernel expansion contribute. The result reads
\begin{eqnarray}
&&a_4=\frac 1{4\pi}\int d^2x\, \mathrm{tr}\left( \tfrac 12 E^2 +\dots \right)=-\frac 1{\pi} \int d^2x \, e\epsilon^{jk}F_{jk}\varepsilon\rho m +\dots ,\label{a4E2}\\
&&a_6=\frac 1{4\pi}\int d^2x\, \mathrm{tr}\left( \tfrac 16 E^3 -\tfrac 1{12}E_{;j}E_{;j}+\dots \right)=
\frac 1{\pi}\int d^2 x\, \varepsilon\rho m\, |g|^2 \epsilon^{jk}\left( \frac{\mathrm{i}}3 (D_j\phi)(D_k\phi^*) +\frac{5e}3 |\phi|^2F_{jk} \right)+\dots , \label{a6E3}\\
&&a_8=\frac 1{4\pi}\int d^2x\, \mathrm{tr}\left( \tfrac 1{24} E^4 +\dots \right)=
-\frac 2{3\pi} \int d^2x\, em^3 |g\phi|^ 2 \varepsilon\rho\, \epsilon^{jk}F_{jk}+\dots \,,\label{a8E4}
\end{eqnarray}
where dots denote irrelevant terms.
Thus, in the approximation adopted here,
\begin{equation}
j^0=\frac 1{8\pi}\, \frac m{|m|} \left[ e\epsilon^{jk}F_{jk} -\frac {\mathrm{i}|g|^2}{6m^2}\, \epsilon^{jk} (D_j\phi)(D_k\phi^*) - \frac{|g\phi|^2}{3m^2} e\epsilon^{jk}F_{jk} \right].\label{j05}
\end{equation}
The integral of $j^0$ gives the vacuum fermion number
\begin{equation}
N=\frac{\mathcal{N}}4 \, \frac m{|m|} -\frac {|g|^2e}{48\pi\, m\,|m|}\int d^2 x\, |\phi|^2 \epsilon^{jk}F_{jk} .\label{Nfin}
\end{equation}
To obtain this expression we integrated by parts and used the asymptotic conditions (\ref{asANO}) together with the relation (\ref{flux}). The first term on the right-hand side of (\ref{Nfin}) describes the (expected) quarter-integer quantization of the fermion number in the absence of the scalar field $\phi$. The second term depends on the profiles of $|\phi|^2$ and $F_{jk}$ in the interior of the manifold and thus is not topological.
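For the reader's convenience, the integration by parts can be sketched as follows. Since $\phi^*$ carries charge $-2e$, one has $[D_j,D_k]\phi^*=-2\mathrm{i} e F_{jk}\phi^*$, and therefore
\begin{eqnarray*}
\epsilon^{jk}(D_j\phi)(D_k\phi^*)&=&\partial_j\left(\epsilon^{jk}\phi\, D_k\phi^*\right)-\tfrac 12\,\epsilon^{jk}\phi\,[D_j,D_k]\phi^*
=\partial_j\left(\epsilon^{jk}\phi\, D_k\phi^*\right)+\mathrm{i} e\,\epsilon^{jk}F_{jk}\,|\phi|^2 .
\end{eqnarray*}
The total derivative does not contribute to the integral because $D_k\phi^*\to 0$ exponentially fast by (\ref{asANO}), so that $-\mathrm{i}\int d^2x\, \epsilon^{jk}(D_j\phi)(D_k\phi^*)=e\int d^2x\, \epsilon^{jk}F_{jk}|\phi|^2$, which combines the last two terms of (\ref{j05}) into the second term of (\ref{Nfin}).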
\section{Conclusions}\label{sec:con}
In this paper, we have expressed the vacuum fermion number of the Jackiw-Rossi model through an $\eta$-invariant of a matrix-weighted Hamiltonian. We have pinpointed the reason why the standard proof of the topological nature of the (fractional) fermion number fails and we have also explicitly computed a non-topological contribution to this quantity.
We have computed the fermion current in just a few leading orders of the large mass expansion, since this was enough for our purposes. If needed, further terms can also be calculated with the help of the flat space heat kernel expansion from the paper \cite{Fliegner:1997rk}. One can also use resummations of the heat kernel, see \cite{Barvinsky:1990up,Avramidi:1997jy}.
The fact that the fermion number depends on the profiles of the magnetic field and of the Higgs field should have some consequences for condensed matter physics. We are not ready to go deeper into this subject. We just mention a potentially related work which studies, both theoretically and experimentally, the influence of non-uniformity of the magnetic field on Hall conductivity for various planar systems \cite{Schirmer_2020}.
Speaking about future prospects, we would also like to mention the work \cite{Bazeia:2020nmz} which studies relations between the parameters of solitons and the fermion spectrum. Probably, these results can be lifted to the quantum level to gain information about the fermion fractionization and other similar effects.
\begin{acknowledgments}
This work was done in the framework of an agreement between the S\~ao Paulo Research Foundation (FAPESP) and the University of Salamanca, project 2017/50294-1 (SPRINT). The work was supported in parts by the project 2016/03319-6 of FAPESP, by the grants 305594/2019-2 and 428951/2018-0 of CNPq. Besides, D.V. was supported by the Tomsk State University Competitiveness Improvement Program. AAI and JMG acknowledge the Junta de Castilla y Leon for partial financial support, under Grants No. BU229P18 and No. SA067G19. CA received a graduate scholarship directly provided by the Federal University of ABC and, hence, would like to thank the institution for the financial support.
\end{acknowledgments}
\section{Introduction}
Policy gradient (PG) has served as one fundamental principle of a plethora of benchmark reinforcement learning algorithms \citep{degris2012offpac, lillicrap2016Continuous,gu2017q,mnih2016asynchronous}.
In addition to the empirical success, PG algorithms have recently been shown to enjoy provable global convergence guarantees in the \textit{on-policy} settings, including the true gradient settings \citep{agarwal2020theory,bhandari2019global,mei2020global,Cen2022fast} and the Monte-Carlo stochastic gradient settings \citep{liu2020improved,mei2021understanding}.
However, on-policy PG is known to suffer from data inefficiency and lack of exploration due to the tight coupling between the learned target policy and the sampled trajectories. As a result, in many cases, \textit{off-policy} learning is preferred to achieve better exploration with an aim to either increase sample efficiency or address the committal behavior in the on-policy learning scenarios \citep{mei2021understanding,chung2021beyond}.
To address this, the off-policy PG theorem \citep{degris2012offpac,imani2018off,maei2018convergent} and the corresponding off-policy actor-critic methods, which are established to optimize a \textit{surrogate objective} defined as the total discounted return of the target policy in expectation with respect to the state distribution of the \textit{behavior policy}, has been proposed and widely adopted to decouple policy learning from trajectory sampling \citep{wang2016sample,gu2017interpolated,chung2021beyond,ciosek2018expected,espeholt2018impala}.
Despite the better exploration capability, off-policy PG methods are subject to the following fundamental issues: (i) \textit{Correction for distribution mismatch}: The standard off-policy PG methods resort to a surrogate objective, which ignores the mismatch between on-policy and the off-policy state distributions. Notably, it has been shown that such mismatch could lead to sub-optimal policies as well as poor empirical performance \citep{liu2020off}. As a result, substantial efforts are needed to correct this distribution mismatch \citep{imani2018off,liu2020off,zhang2020provably}. (ii) \textit{Fixed behavior policy and importance sampling}: The formulation of off-policy PG presumes the use of a static behavior policy throughout training as it is designed to optimize a surrogate objective with respect to the behavior policy. However, in many cases, we do prefer that the behavior policy varies with the target policy (e.g., epsilon-greedy exploration) as it is widely known that importance sampling could lead to significant variance in gradient estimation, especially when the behavior policy substantially deviates from the current policy.
As a result, one fundamental research question that we would like to answer is: ``\textit{How to achieve off-policy policy optimization with global convergence guarantees, but without the above limitations of off-policy PG?}''
To answer this question, in this paper we take a different approach and propose an alternative off-policy policy optimization framework termed Coordinate Ascent Policy Optimization (CAPO), which revisits the policy optimization problem through the lens of {coordinate ascent}.
\pch{Our key insight is that \textit{the distribution mismatch and the fixed behavior policy issues in off-policy PG both result from the tight coupling between the behavior policy and the objective function in policy optimization.} To address this issue, we propose to still adopt the original objective of standard on-policy PG, but from the perspective of \textit{coordinate ascent} with the update coordinates determined by the behavior policy.
Through this design, we can completely decouple the objective function from the behavior policy while still enabling off-policy policy updates.}
Under the canonical tabular softmax parameterization, where each ``coordinate" corresponds to a parameter specific to each state-action pair, CAPO iteratively updates the policy by performing coordinate ascent for those state-action pairs in the mini-batch, without resorting to the full gradient information or any gradient estimation.
While being a rather simple method in the optimization literature, coordinate ascent and the resulting CAPO enjoy two salient features that appear rather useful in the context of RL:
\vspace{-2mm}
\begin{itemize}[leftmargin=*]
\vspace{-1mm}
\item With the simple coordinate update, CAPO is capable of improving the policy by following any policy under a mild condition, directly enabling off-policy policy updates with an adaptive behavior policy. This feature addresses the issue of fixed behavior policy.
\vspace{-1mm}
\item Unlike PG, which requires having either full gradient information (the true PG setting) or an unbiased estimate of the gradient (the stochastic PG setting), updating the policy in a coordinate-wise manner allows CAPO to obviate the need for the true gradient or unbiased gradient estimates while still retaining strict policy improvement in each update. As a result, this feature also obviates the need for distribution correction or importance sampling in the policy update.
\end{itemize}
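As a concrete, deliberately minimal illustration of a coordinate-wise policy update, the sketch below runs a sign-based coordinate step on a single-state tabular softmax policy. The action values, step size, and cyclic coordinate visitation are hypothetical choices for illustration only; they are not the CAPO update rule, whose state-action-dependent step sizes are specified later in the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# One parameter ("coordinate") per action of a single-state tabular softmax policy
q = np.array([1.0, 0.0, -1.0])       # illustrative action values
theta = np.zeros(3)
eta = 0.5                            # illustrative step size

for t in range(60):
    pi = softmax(theta)
    a = t % 3                        # the coordinate visited at this step
    adv = q[a] - pi @ q              # advantage of the visited action
    theta[a] += eta * np.sign(adv)   # update only the visited coordinate

print(softmax(theta))                # probability mass concentrates on action 0
```

Even with only the sign of the advantage, each coordinate step pushes probability toward above-average actions, which is the coordinate-wise policy-improvement property that CAPO builds on.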
\vspace{-1mm}
\begin{comment}
Notably, under softmax policy parameterization, the design of CAPO takes a similar form as the Natural Policy Gradient (NPG), and main difference lies in that CAPO allows state-action-dependent learning rate and could achieve policy improvement based only on the sign of the advantage function.
\citep{mei2021understanding, chung2020variance} have shown that in the stochastic settings, NPG does not always converges to the optimal policy. The underlying reason for that NPG can get stuck in local optimum has been argued to be the early committal to some sub-optimal action, which comes from the vicious cycle induced by on-policy learning that updating the policy towards the incorrect direction will lead to future exploration using sub-optimal policies. We show that CAPO can reach global optimum in the stochastic settings, including the on-policy setting where both early committal and vicious cycle could exist.
We also show the interesting relationship between NPG and CAPO, {giving useful insight into why CAPO does not suffer the same problem as NPG.}
\end{comment}
To establish the global convergence of CAPO, we need to tackle the following main challenges: (i) In the coordinate descent literature, one common property is that the coordinates selected for the update are either determined according to a deterministic sequence (e.g., cyclic coordinate descent) or drawn independently from some distribution (e.g., randomized block coordinate descent) \citep{nesterov2012efficiency}.
By contrast, given the highly stochastic and non-i.i.d. nature of RL environments, in the general update scheme of CAPO, we impose no assumption on the data collection process, except for the standard condition of infinite visitation to each state-action pair \citep{singh2000convergence,munos2016safe}.
(ii) The function of the total discounted expected return is in general non-concave, and for general non-concave functions, coordinate ascent methods are only guaranteed to converge to a stationary point.
Despite the above, we are able to show that the proposed CAPO algorithm attains a globally optimal policy with properly-designed step sizes under the canonical softmax parameterization.
\pch{(iii) In the optimization literature, it is known that coordinate ascent methods can typically converge slowly compared to their gradient-based counterparts. Somewhat surprisingly, we show that CAPO achieves comparable convergence rates as the true on-policy PG \citep{mei2020global}. Through our convergence analysis, we found that this can be attributed to the design of the state-action-dependent variable step sizes.}
Built on the above results, we further generalize CAPO to the case of neural policy parameterization for practical implementation.
Specifically, Neural CAPO (NCAPO) proceeds by the following two steps: (i) Given a mini-batch of state-action pairs, we leverage the tabular CAPO as a subroutine to obtain a collection of reference action distributions for those states in the mini-batch. (ii) By constructing a loss function (e.g., Kullback-Leibler divergence), we guide the policy network to update its parameters towards the state-wise reference action distributions.
Such update can also be interpreted as solving a distributional regression problem.
\textbf{Our Contributions.}
In this work, we revisit off-policy policy optimization and propose a novel policy-based learning algorithm from the perspective of coordinate ascent. The main contributions can be summarized as follows:
\vspace{-2mm}
\begin{itemize}[leftmargin=*]
\vspace{-1mm}
\item We propose CAPO, a simple yet practical off-policy actor-critic framework with global convergence, which naturally enables direct off-policy policy updates with more flexible use of adaptive behavior policies, without the need for distribution correction or importance sampling correction to the policy gradient.
\vspace{-1mm}
\item We show that the proposed CAPO converges to a globally optimal policy under tabular softmax parameterization for general coordinate selection rules and further characterize the convergence rates of CAPO under multiple popular variants of coordinate ascent.
We then extend the idea of CAPO to learning general neural policies to address practical RL settings.
\vspace{-1mm}
\item Through experiments, we demonstrate that NCAPO achieves comparable or better empirical performance than various popular benchmark methods in the MinAtar environment \citep{young19minatar}.
\end{itemize}
\textbf{Notations}. Throughout the paper, we use $[n]$ to denote the set of integers $\{1,\cdots, n\}$. For any $x\in \mathbb{R}\backslash \{0\}$, we use $\sgn(x)$ to denote $\frac{x}{\lvert x\rvert}$ and set $\sgn(0)=0$. We use $\mathbb{I}\{\cdot\}$ to denote the indicator function.
\section{Preliminaries}
\label{section:prelim}
\textbf{Markov Decision Processes.} We consider an infinite-horizon Markov decision process (MDP) characterized by a tuple $(\cS, \cA, \mathcal{P}, r, \gamma, \rho)$, where (i) $\mathcal{S}$ denotes the state space, (ii) $\cA$ denotes a \textit{finite} action space, (iii) $\cP:\cS\times\cA\rightarrow \Delta(\cS)$ is the transition kernel determining the transition probability $\mathcal{P}(s'\rvert s,a)$ from each state-action pair $(s,a)$ to a next state $s'$, where $\Delta(\cS)$ is a probability simplex over $\cS$, (iv) $r: \mathcal{S} \times \mathcal{A} \rightarrow[0,1]$ is the reward function,
(v) $\gamma \in (0,1)$ is the discount factor, and (vi) \pch{$\rho$ is the initial state distribution.}
In this paper, we consider learning a stationary parametric stochastic policy denoted as $\pi_{\theta}:\cS \rightarrow \Delta(\cA)$, which specifies through a parameter vector $\theta$ the action distribution from a probability simplex $\Delta(\cA)$ over $\cA$ for each state.
For a policy $\pi_\theta$, the value function $V^{\pi_\theta}: \mathcal{S} \rightarrow \mathbb{R}$ is defined as the sum of discounted expected future rewards obtained by starting from state $s$ and following $\pi_\theta$, i.e.,
\begin{equation}
V^{\pi_\theta}(s):=\mathbb{E}\bigg[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t}, a_{t})\bigg\vert \pi_\theta, s_{0}=s\bigg],
\end{equation}
where $t$ represents the timestep of the trajectory $\{(s_t, a_t)\}^{\infty}_{t=0}$ induced by the policy $\pi_\theta$ with the initial state $s_0=s$.
The goal of the learner is to search for a policy that maximizes the following objective function as
\begin{equation}
V^{\pi_\theta}(\rho):=\mathbb{E}_{s\sim\rho}[V^{\pi_{\mathbf{\theta}}}(s)].
\end{equation}
For ease of exposition, we use $\pi^*$ to denote an optimal policy and let $V^*(s)$ be a shorthand notation for $V^{\pi^*}(s)$.
Moreover, for any given policy $\pi_\theta$, we define the $Q$-function $Q^{\pi_\theta}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ as
\begin{equation}
Q^{\pi_\theta}(s, a):=\mathbb{E}\bigg[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t}, a_{t})\bigg\vert \pi_\theta, s_{0}=s,a_0=a\bigg].
\end{equation}
We also define the advantage function $A^{\pi_\theta}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ as
\begin{equation}
A^{\pi_{\theta}}(s, a):=Q^{\pi_\theta}(s, a)-V^{\pi_{\theta}}(s),
\end{equation}
which reflects the relative benefit of taking the action $a$ at state $s$ under policy $\pi_{\theta}$. Moreover, throughout this paper, we use $m$ as the index of the training iterations and use $\pi_m$ and $\pi_{\theta_m}$ interchangeably to denote the parameterized policy at iteration $m$.
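For concreteness, on a small tabular MDP these quantities can be computed exactly by solving the Bellman equations. The snippet below is an illustrative sketch with made-up transition probabilities and rewards (not taken from the paper):

```python
import numpy as np

# Hedged illustration: evaluate V, Q, and A for a fixed policy on a toy
# 2-state, 2-action MDP by solving the Bellman equations in closed form.
gamma = 0.9
# P[s, a, s'] : transition kernel; r[s, a] : reward in [0, 1]
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],
              [0.5, 0.2]])
pi = np.array([[0.6, 0.4],      # pi(a|s) for s = 0, 1
               [0.5, 0.5]])

# Policy-averaged dynamics P_pi[s, s'] and rewards r_pi[s]
P_pi = np.einsum('sa,sab->sb', pi, P)
r_pi = np.einsum('sa,sa->s', pi, r)

# V^pi = (I - gamma * P_pi)^{-1} r_pi  (exact solution of the Bellman equation)
V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
# Q^pi(s, a) = r(s, a) + gamma * sum_{s'} P(s'|s, a) V(s')
Q = r + gamma * np.einsum('sab,b->sa', P, V)
A = Q - V[:, None]              # advantage A^pi(s, a) = Q^pi(s, a) - V^pi(s)

# Sanity check: the advantage averages to zero under the policy.
assert np.allclose((pi * A).sum(axis=1), 0.0)
```

Note that the advantage necessarily averages to zero under $\pi$, since $\sum_a \pi(a\rvert s)Q^{\pi}(s,a)=V^{\pi}(s)$.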
\textbf{Policy Gradients.}
The policy gradient is a popular policy optimization method that updates the parameterized policy $\pi_\theta$ by applying gradient ascent with respect to an objective function $V^{\pi_\theta}(\mu)$, where $\mu$ is some starting state distribution.
The standard stochastic policy gradient theorem states that the policy gradient $\nabla_{\theta} V^{\pi_\theta}({\mu})$ takes the form as \citep{sutton1999policy}
\begin{align}
\label{eq:pg}
&\nabla_{\theta} V^{\pi_\theta}({\mu})\nonumber\\
&\hspace{-6pt}= \frac{1}{1-\gamma} \mathbb{E}_{s \sim d_{\mu}^{\pi_{\theta}},a \sim \pi_{\theta}(\cdot \mid s)}\big[\nabla_{\theta} \log \pi_{\theta}(a \rvert s) A^{\pi_{\theta}}(s, a)\big],
\end{align}
where the outer expectation is taken over the \textit{discounted state visitation distribution} under $\mu$ as
\begin{equation}
d_{\mu}^{\pi_\theta}(s):=\mathbb{E}_{s_{0} \sim \mu}\bigg[(1-\gamma) \sum_{t=0}^{\infty} \gamma^{t} \bbP\big(s_{t}=s \rvert s_{0},{\pi_\theta}\big)\bigg].
\end{equation}
Note that $d_{\mu}^{\pi_\theta}(s)$ reflects how frequently the learner would visit the state $s$ under $\pi_\theta$.
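The discounted visitation distribution can be estimated by Monte Carlo rollouts, weighting each visit at time $t$ by $(1-\gamma)\gamma^{t}$. The sketch below uses a randomly generated toy MDP (illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n_states, n_actions = 0.9, 3, 2

# Hypothetical toy MDP (illustrative numbers).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] over s'
pi = np.full((n_states, n_actions), 1.0 / n_actions)              # uniform policy
mu = np.array([1.0, 0.0, 0.0])                                    # start in state 0

# Monte Carlo estimate of d_mu^pi: a visit at time t contributes
# (1 - gamma) * gamma^t to the visitation mass of the visited state.
d_hat = np.zeros(n_states)
n_traj, horizon = 2000, 200
for _ in range(n_traj):
    s = rng.choice(n_states, p=mu)
    for t in range(horizon):
        d_hat[s] += (1 - gamma) * gamma ** t
        a = rng.choice(n_actions, p=pi[s])
        s = rng.choice(n_states, p=P[s, a])
d_hat /= n_traj

# d_hat sums to 1 - gamma**horizon, i.e. essentially a probability
# distribution once the horizon covers most of the discounted mass.
print(d_hat, d_hat.sum())
```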
Regarding PG for off-policy learning, the learner's goal is to learn an optimal policy $\pi^*$ by following a behavior policy.
\citet{degris2012offpac} proposed to optimize the following surrogate objective defined as
\begin{equation}
J^{\pi_\theta}(\beta):=\sum_{s\in\cS}\bar{d}^{\beta}(s)V^{\pi_\theta}(s),
\label{eq:offpac_fixed}
\end{equation}
where $\beta:\cS\rightarrow \Delta(\cA)$ is a \textit{fixed} behavior policy and $\bar{d}^{\beta}(s)$ is the stationary state distribution under $\beta$ (which is assumed to exist in \citep{degris2012offpac}).
The resulting off-policy PG enjoys a closed-form expression as
\begin{align}
\nabla_\theta J^{\pi_\theta}(\beta)=\mathbb{E}_{s\sim \bar{d}^{\beta}(s)}\Big[&\sum_{a\in\cA}\Big(\nabla_\theta \pi_\theta(a\rvert s)Q^{\pi_\theta}(s,a)\nonumber\\
&+\pi_{\theta}(a\rvert s)\nabla_{\theta}Q^{\pi_\theta}(s,a)\Big)\Big].\label{eq:OPPG}
\end{align}
Moreover, \citet{degris2012offpac} showed that one can ignore the term $\pi_{\theta}(a\rvert s)\nabla_{\theta}Q^{\pi_\theta}(s,a)$ in (\ref{eq:OPPG}) under tabular parameterization without introducing any bias and proposed the corresponding Off-Policy Actor-Critic algorithm (Off-PAC)
\begin{equation}
\theta_{m+1}=\theta_{m}+\eta\cdot \omega_m(s,a)Q^{\pi_{m}}(s,a)\nabla_{\theta}\log \pi_{\theta_m}(a\rvert s),
\end{equation}
where $s$ is drawn from $\bar{d}^{\beta}$, $a$ is sampled from $\beta(\cdot\rvert s)$, and $\omega_m(s,a):=\frac{\pi_m(a\rvert s)}{\beta(a\rvert s)}$ denotes the importance ratio.
Subsequently, the off-policy PG has been generalized by incorporating state-dependent emphatic weightings \citep{imani2018off} and introducing a counterfactual objective \citep{zhang2019general}.
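A minimal tabular sketch of the Off-PAC step above is given below. This is a hedged illustration: the critic $Q^{\pi}$ is replaced by a made-up array, and the behavior policy is uniform:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, eta = 4, 3, 0.1
theta = np.zeros((n_states, n_actions))        # tabular softmax parameters

def pi_theta(theta, s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

beta = np.full(n_actions, 1.0 / n_actions)     # fixed uniform behavior policy
Q_hat = rng.random((n_states, n_actions))      # stand-in critic estimate of Q^pi

# One Off-PAC step on a behavior sample (s, a): importance-weighted
# log-pi update, as in the update rule quoted above.
s = rng.integers(n_states)
a = rng.choice(n_actions, p=beta)
pi_s = pi_theta(theta, s)
omega = pi_s[a] / beta[a]                      # importance ratio pi/beta
grad_log = -pi_s                               # d log pi(a|s) / d theta(s, .)
grad_log[a] += 1.0                             # = 1{a'=a} - pi(a'|s)
theta[s] += eta * omega * Q_hat[s, a] * grad_log
```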
\textbf{Coordinate Ascent.} Coordinate ascent (CA) methods optimize a parameterized objective function $f(\theta):\mathbb{R}^{n}\rightarrow\mathbb{R}$ by iteratively updating the parameters along coordinate directions or coordinate hyperplanes.
Specifically, in the $m$-th iteration, the CA update along the $i_m$-th coordinate is
\begin{equation}
\theta_{m+1}= \theta_{m} +\eta\cdot [\nabla_\theta f(\theta)]_{i_m} e_{i_m},
\end{equation}
where $e_{i_m}$ denotes the one-hot vector of the $i_m$-th coordinate and $\eta$ denotes the step size. The main difference among the CA methods mainly lies in the selection of coordinates for updates. Popular variants of CA methods include:
\vspace{-2mm}
\begin{itemize}[leftmargin=*]
\item {\textbf{Cyclic CA}: The choice of coordinate proceeds in a predetermined cyclic order \citep{saha2013nonasymptotic}. For example, one possible configuration is $i_m\leftarrow m \bmod n$.}
\vspace{-2mm}
\item {\textbf{Randomized CA}: In each iteration, one coordinate is drawn randomly from some distribution with support $[n]$ \citep{nesterov2012efficiency}.}
\end{itemize}
\vspace{-2mm}
Moreover, the CA updates can be extended to the \textit{blockwise} scheme \citep{tseng2001convergence,beck2013convergence}, where multiple coordinates are selected in each iteration.
{Despite their simplicity, CA methods have been widely used in variational inference \citep{jordan1999introduction} and large-scale machine learning \citep{nesterov2012efficiency}, owing in part to their parallelization capability.
To the best of our knowledge, CA has remained largely unexplored in the context of policy optimization.}
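As a toy illustration of the two selection rules (a hedged sketch on a synthetic concave quadratic, not an RL objective), exact coordinate maximization under both cyclic and randomized selection recovers the unique maximizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # positive definite, so f is strictly concave
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)       # unique maximizer of f(x) = -x'Ax/2 + b'x

results = {}
for rule in ("cyclic", "randomized"):
    x = np.zeros(n)
    for m in range(2000):
        i = m % n if rule == "cyclic" else rng.integers(n)
        # Exact maximization along coordinate i (a Gauss-Seidel step):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    results[rule] = np.linalg.norm(x - x_star)

print(results)                        # both errors are numerically negligible
```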
\section{The Effect of Stochasticity in Policy Gradient}
\label{section:SGD}
While it has been shown that exact NPG converges to the global optimum \citep{agarwal2020theory, bhandari2019global}, this property is not preserved in the stochastic setting. \citet{chung2020variance} and \citet{mei2021understanding} argued that, due to the vicious cycle of on-policy sampling, NPG can converge to a local optimum with positive probability.
\subsection{Setting: Multi-Arm Bandit}
\label{SGD:multi-arm-bandit}
We consider a deterministic $K$-armed bandit with a single state, action set $a \in [K]$, and a softmax policy $\pi_{\theta}: [K] \rightarrow [0, 1]$; the reward vector $r \in \mathbb{R}^{K}$ collects the reward of each action. This setting is the same as the one in Section 2 of \citet{mei2021understanding}, where we also assume the rewards are bounded, with $r(a) \in [0, 1)$ for all $a \in [K]$.
Our goal here is to find the optimal policy $\pi^{*}$ that maximizes the expected total reward. Since there is only a single state, the objective function can be written as
\begin{equation}
J(\theta) = \mathbb{E}_{{a \sim \pi_{\theta}(\cdot)}}[r(a)].
\end{equation}
\subsection{Stochastic Policy Gradient with Importance Sampling}
\citet{mei2021understanding} showed that, by using the importance sampling (IS) reward estimator $\hat{r}_{m}(a)$:
\begin{equation}
\hat{r}_{m}(a)=\frac{\mathbb{I}\left\{a_{m}=a\right\}}{\pi_{\theta_{m}}(a)} \cdot r(a) \text { for all } a \in[K],
\end{equation}
the stochastic policy gradient with the IS update in \eqref{eq:is_spg_update} converges to the global optimum with probability $1$:
\begin{equation}
\label{eq:is_spg_update}
\theta_{m+1} \longleftarrow \theta_{m}+\eta \cdot \frac{d \pi_{m}^{\top} \hat{r}_{m}}{d \theta_{m}},
\text{where }
\frac{d \pi_{m}^{\top} \hat{r}_{m}}{d \theta_{m}(a)} =
\pi_{m}(a) \cdot\left(\hat{r}_{m}(a)-\pi_{m}^{\top} \hat{r}_{m}\right)
\end{equation}
\subsection{Vanilla Stochastic Policy Gradient}
However, another unbiased estimator of the PG, given in \eqref{eq:spg_update}, which does not utilize the IS reward estimator, is more widely used:
\begin{equation}
\label{eq:spg_update}
\theta_{m+1} \longleftarrow \theta_{m}+\eta \cdot \frac{d \pi_{m}^{\top} r}{d \theta_{m}},
\text{where }
\frac{d \pi_{m}^{\top} r}{d \theta_{m}(a^{\prime})} =
\big(\mathbb{I}\left\{a=a^{\prime}\right\} - \pi_{m}(a^{\prime})\big) \cdot r(a)
\end{equation}
and $a \sim \pi_{m}(\cdot)$ denotes the sampled action.
\begin{theorem}
\label{theorem:on_pg_stuck}
With positive probability, $\sum_{a \neq a^{*}} \pi_{\theta_m}(a) \rightarrow 1$ as $m \rightarrow \infty$ under the vanilla stochastic policy gradient update in \eqref{eq:spg_update}.
\end{theorem}
The proof can be found in Appendix \ref{app:on_pg}. The main idea is that early updates toward a sub-optimal action increase the probability of visiting that action in future iterations. This vicious cycle of on-policy sampling causes the policy to become deterministic too quickly, and the inability of SPG to escape from a local optimum then leaves the policy stuck there.
Theorem \ref{theorem:on_pg_stuck} shows that \eqref{eq:spg_update}, unlike its IS counterpart \eqref{eq:is_spg_update}, can get stuck in a local optimum.
Two main problems can be observed:
\vspace{-2mm}
\begin{itemize}[leftmargin=*]
\vspace{-1mm}
\item On-policy sampling is guided by early updates, potentially causing early commitment to sub-optimal actions.
%
\vspace{-1mm}
\item Besides the commitment to sub-optimal actions, the inability to escape from a local optimum is another reason why PG can get stuck.
\end{itemize}
To address the above two problems, we introduce an off-policy actor-critic method, Off-CAPO, in Section \ref{section:method} to avoid the vicious cycle of on-policy sampling. Moreover, with a well-designed learning rate that allows CAPO to escape from local optima, we show in Section \ref{section:method:onCAPO} that CAPO can guarantee global convergence even under the on-policy setting.
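To make the contrast concrete, the following hedged Python sketch (illustrative reward values and step size, not from the paper) runs both the vanilla update in \eqref{eq:spg_update} and the IS update in \eqref{eq:is_spg_update} on a small softmax bandit:

```python
import numpy as np

rng = np.random.default_rng(0)
K, eta = 3, 0.1
r = np.array([0.9, 0.5, 0.1])                    # arm 0 is optimal (made-up rewards)

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def step(theta, use_is):
    pi = softmax(theta)
    a = rng.choice(K, p=pi)                      # on-policy sample
    if use_is:
        r_hat = np.zeros(K)
        r_hat[a] = r[a] / pi[a]                  # IS reward estimator
        g = pi * (r_hat - pi @ r_hat)            # gradient with the IS estimator
    else:
        g = -pi * r[a]                           # vanilla stochastic PG:
        g[a] += r[a]                             # (1{a'=a} - pi(a')) * r(a)
    return theta + eta * g

for use_is in (False, True):
    theta = np.zeros(K)
    for _ in range(5000):
        theta = step(theta, use_is)
    print("IS" if use_is else "vanilla", softmax(theta).round(3))
```

A single run is of course anecdotal; the theorem concerns the positive probability of failure of the vanilla update over random realizations of the sampling.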
\section{Methodology}
\label{section:method}
In this section, we present the proposed CAPO algorithm, which improves the policy through coordinate ascent updates.
Throughout this section, we consider the class of tabular softmax policies.
Specifically, for each state-action pair $(s, a)$, let $\theta(s,a)$ denote the corresponding parameter. The probability of selecting action $a$ given state $s$ is given by $\pi_\theta(a \rvert s) =\frac{\exp({\theta(s,a)})}{\sum_{a' \in \mathcal{A}}\exp({\theta(s,a')})}$.
\subsection{Coordinate Ascent Policy Optimization}
\label{subsection:CAPO}
To begin with, we present the general policy update scheme of CAPO. The discussion about the specific instances of CAPO along with their convergence rates will be provided subsequently in \Cref{subsection:convergence_rate}.
To motivate the policy improvement scheme of CAPO, we first state the following lemma \citep{agarwal2020theory,mei2020global}.
\begin{lemma}
\label{lemma:PG}
Under tabular softmax policies, the standard policy gradient with respect to $\theta$ is given by
\begin{equation}
\frac{\partial V^{\pi_\theta}(\mu)}{\partial \theta(s,a)}=\frac{1}{1-\gamma}d^{\pi_\theta}_{\mu}(s)\cdot \pi_{\theta}(a\rvert s)\cdot A^{\pi_\theta}(s,a).
\end{equation}
\end{lemma}
Based on Lemma \ref{lemma:PG}, we see that the update direction of each coordinate is completely determined by the \textit{sign} of the advantage function. Accordingly, the proposed general CAPO update scheme is as follows: In each update iteration $m$, let ${B}_m$ denote the mini-batch of state-action pairs sampled by the behavior policy. The batch ${B}_m$ determines the coordinates of the policy parameter to be updated. Specifically, the policy is updated by
\begin{align}
\label{eq:CAPO_form}
&\theta_{m+1}(s, a)\nonumber\\
&=\theta_m(s, a) +\alpha_m(s, a)\mathbb{I}\{(s,a) \in {B}_m\} \cdot \sgn\left(A^{\pi_{\theta_m}}(s, a)\right),
\end{align}
where $\alpha_m: \cS\times \cA\rightarrow \bR_{+}$ is the function that controls the \textit{magnitude} of the update and plays the role of the learning rate, the term $\sgn(A^{\pi_{\theta_m}}(s,a))$ controls the update \textit{direction}, and ${B}_m$ is the sampled batch of state-action pairs in the $m$-th iteration and determines the \textit{coordinate selection}. Under CAPO, only those parameters associated with the sampled state-action pairs will be updated accordingly, as suggested by (\ref{eq:CAPO_form}).
Based on this, we could reinterpret $B_m$ as produced by a \textit{coordinate generator}, which could be induced by the behavior policies.
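In code, the general CAPO update in (\ref{eq:CAPO_form}) amounts to touching only the sampled coordinates. Below is a hedged tabular sketch in which the advantage estimates, the batch, and the step-size function are all made-up placeholders:

```python
import numpy as np

def capo_update(theta, batch, advantage, alpha):
    """One CAPO iteration: only the coordinates in the sampled batch are
    updated, each along the sign of its advantage estimate."""
    theta = theta.copy()
    for (s, a) in batch:
        theta[s, a] += alpha(s, a) * np.sign(advantage[s, a])
    return theta

# Illustrative usage with made-up quantities:
n_states, n_actions = 3, 2
theta = np.zeros((n_states, n_actions))
advantage = np.array([[0.3, -0.3], [-0.1, 0.1], [0.2, -0.2]])  # critic estimate
batch = [(0, 0), (2, 1)]                 # coordinates picked by the behavior policy
alpha = lambda s, a: 0.5                 # state-action-dependent step size
theta = capo_update(theta, batch, advantage, alpha)

def softmax_rows(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

# Only pi(.|0) and pi(.|2) change; pi(.|1) remains uniform.
print(softmax_rows(theta).round(3))
```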
\begin{remark}
\normalfont Note that under the general CAPO update, the learning rate $\alpha$ is state-action-dependent. This is one salient difference from the learning rates of conventional coordinate ascent methods in the optimization literature \citep{nesterov2012efficiency,saha2013nonasymptotic}. As will be shown momentarily in \Cref{subsection:analysis}, this design allows CAPO to attain global optimality without statistical assumptions about the samples (i.e., the selected coordinates).
On the other hand, while it appears that the update rule in (\ref{eq:CAPO_form}) only involves the sign of the advantage function, the magnitude of the advantage $\lvert A(s,a)\rvert$ could also be taken into account if needed through $\alpha(s,a)$, which is also state-action-dependent. As a result, (\ref{eq:CAPO_form}) indeed provides a flexible expression that separates the effect of the sign and magnitude of the advantage.
Interestingly, as will be shown in the next subsections, we establish that CAPO can achieve global convergence without the knowledge of the magnitude of the advantage.
\end{remark}
\begin{remark}
\normalfont Compared to the off-policy PG methods \citep{degris2012offpac,wang2016sample,imani2018off}, one salient property of CAPO is that it allows \textit{off-policy learning} through coordinate ascent on the original \textit{on-policy} total expected reward $\mathbb{E}_{s\sim\rho}[V^{\pi}(s)]$, instead of the \textit{off-policy} total expected reward over the discounted state visitation distribution induced by the behavior policy.
On the other hand, regarding the learning of a critic, similar to the off-policy PG methods, CAPO can be integrated with any off-policy policy evaluation algorithm, such as Retrace \citep{munos2016safe} or V-trace \citep{espeholt2018impala}.
\end{remark}
\input{./method_sub/tabular}
\input{./method_sub/analysis}
\input{./method_sub/convergence_rate}
\input{./method_sub/onCAPO}
\input{./method_sub/neural}
\section{Experimental Results}
\label{section:exp}
In this section, we empirically evaluate the performance of CAPO on several benchmark RL tasks.
We evaluate NCAPO in MinAtar \citep{young19minatar}, a simplified Arcade Learning Environment (ALE), and consider a variety of environments, including \textit{Seaquest}, \textit{Breakout}, \textit{Asterix}, and \textit{Space Invaders}. Each environment is associated with a $10 \times 10 \times n$ binary state representation, which corresponds to the $10 \times 10$ grid and $n$ channels (the value of $n$ depends on the game).
\begin{comment}
\subsection{Difficulty in Estimating state visitation distribution}
\label{section:exp:dpi}
It is often assumed that $d^\pi$ can be estimated by simply obtaining trajectories by following $\pi$, however we show in this section that in order to correctly estimate such distribution, a large number of samples are required even under a simple toy example.
We consider a simple $100 \times 100$ grid world and a uniform policy, where the agent will perform movement to all 4 directions (up, down, left, right) with equal probability. The agent starts at the upper left corner $(1, 1)$, and its goal is to reach the bottom left corner $(100, 100)$.
To add a little more complexity, there will be a vertical wall with a single open stopping the agent from passing through. We first compute the ground truth $d^\pi$ with $10^6$ trajectories by recording every $(s, a)$ visited, then we compute an estimated $\hat{d}^{\pi}$ with a much smaller number of trajectories. Finally, we compute the one-norm between $\hat{d}^{\pi}$ and $d^\pi$, the result is shown in \Cref{tab:grid_dpi}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|| c || c | c | c | c | c | c | c ||}
\hline
Batch size & 1 & 64 & 128 & 512 & 1024 & 4096 & 8192 \\ [0.5ex]
\hline
1-norm & 1.955 & 0.872 & 0.609 & 0.311 & 0.238 & 0.116 & 0.0819\\
\hline
\end{tabular}
\caption{One-norm between $d^\pi$ and $\hat{d}^{\pi}$ with different numbers of trajectories.}
\label{tab:grid_dpi}
\end{center}
\end{table}
As we can see in \Cref{tab:grid_dpi}, the difference between the estimated $\hat{d}^{\pi}$ and $d^\pi$ is high even when 1024 trajectories are used to estimate $\hat{d}^\pi$, and even more so when only 128 trajectories are used. In practice, an update is often performed with only a small number (much less than 1024) of trajectories, or even only a couple transitions. This simple experiment shows the difficulty in accurately estimating $d^\pi$, and provide an insight to the advantage of eliminating $d^\pi$ from the update, as in CAPO.
\begin{remark}
While the result of this experiment might seem intuitive, we include this experiment as this problem is often overlooked in various literature in RL where they somehow assume $\hat{d}^\pi$ is correctly estimated, when such assumption is not reasonable. On the other hand, practical applications of RL often deal with environments much more complicated than a simple $100 \times 100$ grid, where batch update using $\hat{d}^\pi$ becomes an unjustified weight correction.
\end{remark}
\end{comment}
\textbf{Benchmark Methods.}
We select several benchmark methods for comparison, including Rainbow \citep{hessel2018rainbow,obando2020revisiting}, PPO \citep{schulman2017ppo}, Off-PAC \citep{degris2012offpac}, and Advantage Actor-Critic (A2C) \citep{mnih2016asynchronous}, to demonstrate the effectiveness of NCAPO.
For Rainbow, we use the code provided by \citet{obando2020revisiting} without any change.
For the other methods, we use the open-source implementation provided by Stable Baselines3 \citep{stable-baselines3}.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.25\textwidth]{./figures/seaquest.pdf}
\hspace{-5mm}
\includegraphics[width=0.245\textwidth]{./figures/breakout.pdf}
\hspace{-5mm}
\includegraphics[width=0.25\textwidth]{./figures/asterix.pdf}
\hspace{-5mm}
\includegraphics[width=0.25\textwidth]{./figures/invaders.pdf}
\caption{A comparison between the performance of NCAPO and the benchmark methods in MinAtar. All the results are averaged over 10 random seeds (with the shaded area showing the range of $\text{mean} \pm 0.5\cdot\text{std}$).}
\label{fig:minatar}
\end{figure*}
\textbf{Empirical Evaluation.}
The detailed implementation of NCAPO is provided in Appendix \ref{app:exp}.
From \Cref{fig:minatar}, we observe that NCAPO achieves the best performance in \textit{Seaquest}, \textit{Breakout}, and \textit{Space Invaders}.
We also see that NCAPO is more robust across tasks than PPO and Rainbow. For example, Rainbow performs especially well in \textit{Asterix}, but relatively poorly in \textit{Space Invaders}.
PPO performs relatively strongly in \textit{Breakout} and \textit{Seaquest}, but converges rather slowly in \textit{Asterix}.
Off-PAC with a uniform behavior policy shows very little improvement throughout training in all the tasks due to its fixed behavior policy, which can hardly generate sufficiently long trajectories with high scores.
By contrast, NCAPO outperforms all the benchmark methods in three out of four environments, while being on par with other methods in the remaining environment.
\begin{comment}
\begin{remark}
\normalfont Both PG-based methods and DQN-based methods have been extensively studied and experimented. For example, Rainbow has evolved from the vanilla DQN through multiple enhancements, including double Q-learning, prioritized experience replay, dueling networks, multi-step learning, distributional RL and noisy networks. On the other hand, as the first attempt of coordinate ascent method for RL, NCAPO could surely benefit from future enhancements. The competitive empirical performance of NCAPO further manifests its potential .
\end{remark}
\end{comment}
\section{Related Work}
\label{section:related}
\begin{comment}
\textbf{Stochastic Policy Gradient}
While it has been shown that exact policy gradient converges to the global optimal policy \citep{agarwal2020theory, bhandari2019global}, it was also shown that not all stochastic settings of policy gradient will converge to the global optimum. \citep{chung2021beyond} showed that the choice of baseline not only affect the variance of SGD, it can directly impact the convergence of the algorithm due to the early commitment to a sub-optimal action. They showed that under three-arm bandit, natural policy gradient can stuck in local optimum. \citet{mei2021understanding} extends the result from three-arm bandit to a more general K-arm bandit case under the NPG setting.
Despite the difficulties PG-based methods are facing, CAPO is invariant to the stochasticity of sampling and guarantees global convergence.
On the other hand, to accelerate the unlearning process of PG, \cite{laroche2022beyond} proposes a cross-entropy policy update scheme.
\end{comment}
\textbf{Off-Policy Policy Gradients}.
Off-policy learning via PG has been an ongoing research topic.
Built on the off-policy PG theorem \citep{degris2012offpac, silver2014deterministic,zhang2019general, imani2018off}, various off-policy actor-critic algorithms have been developed with an aim to achieve more sample-efficient RL \citep{wang2016sample,gu2017q,chung2021beyond,ciosek2018expected,espeholt2018impala,schmitt2020off}.
In the standard off-policy PG formulation, the main idea lies in the use of a {surrogate objective}, which is the expected total return with expectation taken over the stationary distribution induced by the behavior policy.
While this design avoids the issue of an exponentially-growing importance sampling ratio, it has been shown that this surrogate objective can suffer from convergence to sub-optimal policies due to distribution mismatch, and distribution correction is therefore needed, either via a learned density correction ratio \citep{liu2020off} or emphatic weighting \citep{maei2018convergent,zhang2019general,zhang2020provably}.
On the other hand, off-policy actor-critic based on NPG has been recently shown to achieve provable sample complexity guarantees in both tabular \citep{khodadadian2021finite} and linear function approximation setting
\citep{chen2022sample,chen2022finite}.
Another line of research is on characterizing the convergence of off-policy actor-critic methods in the \textit{offline} setting, where the learner is given only a fixed dataset of samples \citep{xu2021doubly,huang2022convergence}.
Some recent attempts propose to enable off-policy learning beyond the use of policy gradient.
For example, \citet{romain2021JnH} extends the on-policy PG to an off-policy policy update by generalizing the role of the discounted state visitation distribution.
\citet{laroche2022beyond} proposes to use the gradient of the cross-entropy loss with respect to the action with the maximum Q-value.
Both approaches are shown to attain similar convergence rates as the on-policy true PG.
Different from all the above, CAPO serves as the first attempt to address off-policy policy optimization through the lens of coordinate ascent, without using the policy gradient.
\textbf{Exploiting the Sign of Advantage Function.}
As pointed out in Section \ref{section:prelim}, the sign of the advantage function (or temporal difference (TD) residual as a surrogate) can serve as an indicator of policy improvement.
For example, \citet{van2007reinforcement} proposed the Actor Critic Learning Automaton (ACLA), which is designed to reinforce only those state-action pairs with a positive TD residual and ignore those pairs with a non-positive TD residual.
The idea of ACLA was later extended by \citet{zimmer2016neural} to the Neural Fitted Actor Critic (NFAC), which learns neural policies for continuous control, and to a penalized version of NFAC for improved empirical performance \citep{zimmer2019exploiting}.
On the other hand, \citet{tessler2019distributional} proposes generative actor critic (GAC), a distributional policy optimization approach that leverages the actions with positive advantage to construct a target distribution.
By contrast, CAPO takes the first step towards understanding the use of coordinate ascent with convergence guarantees for off-policy RL.
\section{Conclusion}
\label{section:conclusion}
We propose CAPO, which takes the first step towards addressing off-policy policy optimization by exploring the use of coordinate ascent in RL.
Through CAPO, we enable off-policy learning without the need for importance sampling or distribution correction.
We show that the general CAPO can attain asymptotic global convergence and establish the convergence rates of CAPO with several popular coordinate selection rules.
Moreover, through experiments, we show that the neural implementation of CAPO can serve as a competitive solution compared to the benchmark RL methods, thereby demonstrating the future potential of CAPO.
\newpage
\section{On-Policy CAPO With Global Convergence}
\label{app:OnCAPO}
The main focus and motivation of CAPO is off-policy RL.
Despite this, we show that it is also possible to apply CAPO to on-policy learning.
While on-policy learning is a fairly natural RL setting, one fundamental issue with on-policy learning is the \textit{committal issue}, which was recently identified by \citet{chung2021beyond,mei2021understanding}.
In this section, we show that CAPO can tackle the committal issue with the help of variable learning rates.
Consider on-policy CAPO with state-action dependent learning rate:
\begin{equation}
\label{eq:onCAPO_alpha_update}
\theta_{m+1}(s, a) = \theta_{m}(s, a) + \alpha^{(m)}(s, a) \cdot \sgn(A^{(m)}(s,a)) \cdot \mathbb{I}\{a = a_m\},
\end{equation}
where $N^{(m)}(s,a) = \sum^m_{j=0} \mathbb{I}\{(s, a) \in \mathcal{B}_j\}$ and $\alpha^{(m)}(s,a)$ is given by:
\begin{align}
\alpha^{(m)}(s, a)=\begin{cases}
\log\big(\frac{1}{\pi^{(m)}(a\rvert s)}\big),& \text{ if } A^{(m)}(s,a)\leq 0\\
\log\big(\frac{\beta}{1-\beta}\cdot\frac{1}{\pi^{(m)}(a\rvert s)}\big), &\text{ if }A^{(m)}(s,a)>0 \text{ and } \pi^{(m)}(a\rvert s) < \beta \\
\zeta\log\big(\frac{N^{(m)}(s,a)+1}{N^{(m)}(s,a)}\big), &\text{ otherwise }
\end{cases}
\end{align}
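For concreteness, the case analysis in the update rule above can be sketched in code. The following is a minimal Python sketch of one tabular update; the dictionary-based storage, the guard for unvisited pairs, and the default values of $\beta$ and $\zeta$ are illustrative assumptions, not part of the algorithm's specification:

```python
import math

def capo_step(theta, visits, s, a, advantage, pi, beta=0.2, zeta=0.25):
    """One on-policy CAPO update for the selected pair (s, a).

    theta:  dict (s, a) -> logit theta(s, a)
    visits: dict (s, a) -> N(s, a), the number of past selections
    pi:     current probability pi(a | s) of the selected action
    """
    if advantage <= 0:
        alpha = math.log(1.0 / pi)
    elif pi < beta:
        alpha = math.log(beta / (1.0 - beta) / pi)
    else:
        n = max(1, visits.get((s, a), 1))      # guard: the count-based rate needs N >= 1
        alpha = zeta * math.log((n + 1) / n)
    sign = (advantage > 0) - (advantage < 0)   # sgn(A(s, a))
    theta[(s, a)] += alpha * sign
    visits[(s, a)] = visits.get((s, a), 0) + 1
    return theta[(s, a)]
```

Note that the count-based third branch shrinks like $\zeta\log\frac{n+1}{n}\approx \zeta/n$, which is what keeps the cumulative increase of any logit bounded by a logarithmic term (cf.\ the lemmas below).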
\subsection{Global Convergence of On-Policy CAPO}
\label{app:OnCAPO:convergence}
Recall that in the on-policy setting, we choose the step size of CAPO as
\begin{align}
\alpha^{(k)}(\pi^{(k)}(a\rvert s))=\begin{cases}
\log\big(\frac{1}{\pi^{(k)}(a\rvert s)}\big),& \text{ if } A^{(k)}(s,a)\leq 0\\
\log\big(\frac{\beta}{1-\beta}\cdot\frac{1}{\pi^{(k)}(a\rvert s)}\big), &\text{ if }A^{(k)}(s,a)>0 \text{ and } \pi^{(k)}(a\rvert s) < \beta \\
\zeta\log\big(\frac{N^{(k)}(s,a)+1}{{N^{(k)}(s,a)}}\big), &\text{ otherwise }
\end{cases}
\label{eq:on-policy CAPO update}
\end{align}
\begin{theorem}
\label{app:thm:on-policy CAPO}
Under on-policy CAPO with $0<\beta\leq \frac{1}{\lvert \cA\rvert+1}$ and $0<\zeta\leq \frac{1}{\lvert \cA\rvert}$, we have $V_k(s)\rightarrow V^*(s)$ as $k\rightarrow \infty$, for all $s\in \cS$, almost surely.
\end{theorem}
To prove this result, we first establish several supporting lemmas.
\begin{lemma}[A Lower Bound of Action Probability]
\label{lemma:pi lower bound}
Under on-policy CAPO, in any iteration $k$, if an action $a$ that satisfies $\pi^{(k)}(a\rvert s)<\beta$ and $A^{(k)}(s,a)>0$ is selected for policy update, then we have $\pi^{(k+1)}(a\rvert s)>\beta$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:pi lower bound}]
By the on-policy CAPO update in (\ref{eq:on-policy CAPO update}), we know that if the selected action $a$ satisfies $\pi^{(k)}(a\rvert s)<\beta$ and $A^{(k)}(s,a)>0$, we have
\begin{align}
\theta^{(k+1)}_{s,a}&=\theta^{(k)}_{s,a}+\log\big(\frac{\beta}{1-\beta}\cdot\frac{1}{\pi^{(k)}(a\rvert s)}\big)\\
&=\theta^{(k)}_{s,a}+\log\Big(\frac{\beta}{1-\beta}\cdot\frac{\sum_{a'\in\cA} \exp(\theta^{(k)}_{s,a'})}{\exp(\theta^{(k)}_{s,a})}\Big)\\
&=\log\Big(\frac{\beta}{1-\beta}\cdot\sum_{a'\in\cA} \exp(\theta^{(k)}_{s,a'}) \Big).
\end{align}
Therefore, by the softmax policy parameterization, we have
\begin{align}
\pi^{(k+1)}(a\rvert s)&=\frac{\frac{\beta}{1-\beta}\cdot\sum_{a'\in\cA} \exp(\theta^{(k)}_{s,a'})}{\frac{\beta}{1-\beta}\cdot\sum_{a'\in\cA} \exp(\theta^{(k)}_{s,a'})+\sum_{a''\in\cA, a''\neq a} \exp(\theta^{(k)}_{s,a''})}\\
&=\frac{\frac{\beta}{1-\beta}}{\frac{\beta}{1-\beta}+(1-\pi^{(k)}(a\rvert s))}>\beta.
\end{align}
\end{proof}
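The lemma can also be checked numerically. Here is a small Python sketch with hypothetical logits (the specific values of $\theta$ and $\beta$ are arbitrary choices for illustration):

```python
import math

def softmax(theta):
    z = sum(math.exp(t) for t in theta)
    return [math.exp(t) / z for t in theta]

beta = 0.2
theta = [-3.0, 0.0, 1.0, 2.0]       # hypothetical logits; action 0 has small probability
pi = softmax(theta)
assert pi[0] < beta                  # precondition of the lemma

# CAPO update for action 0 with positive advantage and pi < beta:
theta[0] += math.log(beta / (1.0 - beta) / pi[0])
pi_new = softmax(theta)
print(pi_new[0] > beta)  # True: the updated probability exceeds beta
```

This matches the closed form $\pi^{(k+1)}(a\rvert s)=\frac{\beta/(1-\beta)}{\beta/(1-\beta)+(1-\pi^{(k)}(a\rvert s))}$ derived in the proof.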
As we consider tabular policy parameterization, we could discuss the convergence behavior of each state separately.
For ease of exposition, we first fix a state $s\in\cS$ and analyze the convergence regarding the policy at state $s$.
Define the following events:
\begin{align}
E_0:=&\Big\{\omega: I_s^{+}(\omega)\neq \varnothing \Big\},\\
E_1:=&\Big\{\omega: \lim_{k\rightarrow \infty}\pi_{s,a}^{(k)}(\omega)= 0 , \forall a\in I_{s}^{-}(\omega)\Big\},\\
{E}_{1,1}:=&\Big\{\omega: \exists a\in I_s^{-} \text{ with } N^{(\infty)}_{s,a}(\omega)=\infty \Big\},\\
E_{1,2}:=&\Big\{\omega:\exists a\in I_s^{+} \text{ with } N^{(\infty)}_{s,a}(\omega)=\infty\Big\},\\
E_{1,3}:=&\Big\{\omega:\exists a'\in I_s^{0}(\omega) \text{ with } N^{(\infty)}_{s,a'}(\omega)=\infty\Big\}.
\end{align}
Since there shall always exist at least one action $a\in\cA$ with $N^{(\infty)}_{s,a}=\infty$ for each sample path, then we have $E_{1,1}\cup E_{1,2} \cup E_{1,3}=\Omega$.
Therefore, we can rewrite the event ${E}_1^{c}$ as ${E}_{1}^{c}=({E}_1^{c}\cap E_{1,1})\cup({E}_1^{c}\cap E_{1,2})\cup({E}_1^{c}\cap E_{1,3})$.
By the union bound, we have
\begin{equation}
\bbP({E}_{1}^{c}\rvert E_0)\leq \sum_{i=1}^{3}\bbP({E}_{1}^{c}\cap E_{1,i}\rvert E_0).\label{eq:E1 tilde complement}
\end{equation}
\begin{lemma}
\label{lemma:E1,1 and E1 tilde}
Under on-policy CAPO and the condition that $\bbP(E_0)>0$, we have $\bbP({E}_1^{c}\cap {E}_{1,1}\rvert E_0)=0$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:E1,1 and E1 tilde}]
Under on-policy CAPO and the condition that $E_0$ happens, for each $\omega$, there exists an action $a'\in I_{s}^{+}(\omega)$ and some finite constant $B_0$ such that $\theta^{(k)}(s,a')\geq B_0$, for all sufficiently large $k\geq T^{+}_{s,a'}(\omega)$.
On the other hand, for each $a''\in I_{s}^{-}(\omega)$, we know that $\theta^{(k)}(s,a'')$ is non-increasing for all $k\geq T^{-}_{s,a''}$.
Therefore, $\pi^{(k)}(s,a'')\leq \frac{\exp\Big(\theta^{\big(T^{-}_{s,a''}\big)}(s,a'')\Big)}{\exp\Big(\theta^{\big(T^{-}_{s,a''}\big)}(s,a'')\Big)+\exp(B_0)}$, for all $k\geq \max\{T^{+}_{s,a'},T^{-}_{s,a''}\}$.
As a result, we know if $(s,a'')$ is contained in $\cB^{(k)}$ with $k\geq \max\{T^{+}_{s,a'},T^{-}_{s,a''}\}$, under CAPO, we must have
\begin{equation}
\theta^{(k+1)}_{s,a''}-\theta^{(k)}_{s,a''}\leq -\log\bigg(\frac{\exp\big(\theta^{(T^{-}_{s,a''})}(s,a'')\big)+\exp(B_0)}{\exp\big(\theta^{(T^{-}_{s,a''})}(s,a'')\big)}\bigg).
\end{equation}
Therefore, for each $\omega\in E_0$ and for each $a''\in I_{s}^{-}(\omega)$, if $N_{s,a''}^{(\infty)}(\omega)=\infty$, then we have $\theta^{(k)}_{s,a''}(\omega)\rightarrow -\infty$ as $k\rightarrow \infty$.
This implies that $\bbP(E_1^{c}\cap E_{1,1}\rvert E_0)=0$.
\end{proof}
\begin{lemma}
\label{lemma:E1,2 and E1 tilde}
Under on-policy CAPO and the condition that $\bbP(E_0)>0$, we have $\bbP({E}_1^{c}\cap {E}_{1,2}\rvert E_0)=0$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:E1,2 and E1 tilde}]
By Lemma \ref{lemma:E1,1 and E1 tilde}, we have $\bbP(E_1^{c}\cap E_{1,2}\rvert E_0)= \bbP(E_1^{c}\cap E_{1,1}^{c}\cap E_{1,2}\rvert E_0)$.
Let $a\in I_{s}^{+}$ be an action with $N^{(\infty)}_{s,a}(\omega)=\infty$, and suppose $N_{s,a'}^{(\infty)}$ are finite for all $a'\in I_{s}^{-}$ (which also implies that $\theta^{(k)}_{s,a'}$ are finite for all $k\in \mathbb{N}$).
Let $\{k_m\}_{m=1}^{\infty}$ be the sequence of iteration indices where $(s,a)$ is included in the batch.
Now we discuss two possible cases as follows:
\begin{itemize}[leftmargin=*]
\item Case 1: $\pi^{(k_m)}(a\rvert s)\rightarrow 1$ as $m\rightarrow \infty$:
Conditioning on $E_0$, both $I_s^+$ and $I_s^-$ are non-empty. Since $\theta^{(k)}_{s,a'}$ is finite for each $a'\in I_s^-$, we know that $\pi^{(k_m)}(a\rvert s)\rightarrow 1$ implies that
\begin{equation}
\label{eq:E1,2 eq 1}
\theta^{(k_m)}_{s,a}\rightarrow \infty, \text{ as } m\rightarrow \infty.
\end{equation}
Moreover, under CAPO, as $\theta^{(k_m)}_{s,a}$ shall be increasing for all sufficiently large $m$ (given that $a\in I_{s}^{+}$), we know (\ref{eq:E1,2 eq 1}) implies that $\theta^{(k)}_{s,a}\rightarrow \infty, \text{ as } k\rightarrow \infty$.
Therefore, we have $\lim_{k\rightarrow \infty}\pi^{(k)}(a'\rvert s)=0$, for all $a'\in I_{s}^{-}$.
\item Case 2: $\pi^{(k_m)}(a\rvert s)\nrightarrow 1$ as $m\rightarrow \infty$:
Since $A^{(k_m)}(s,a)$ shall be positive for all sufficiently large $m$ (given that $a\in I_{s}^{+}$), we know: (i) If $\pi^{(k_m)}(a\rvert s)\geq \beta$, we have $\theta^{(k_m+1)}_{s,a}-\theta^{(k_m)}_{s,a}\geq\zeta\log(\frac{N^{(k_m)}(s,a)+1}{N^{(k_m)}(s,a)})=\zeta\log(\frac{m+1}{m})$; (ii) Otherwise, if $\pi^{(k_m)}(a\rvert s)<\beta$, we shall have $\theta^{(k_m+1)}_{s,a}-\theta^{(k_m)}_{s,a}\geq\log(\frac{1}{1-\beta})>\zeta\log(\frac{m+1}{m})$, for all sufficiently large $m$.
This implies that $\theta^{(k_m)}(s,a)\rightarrow \infty$ as $m\rightarrow \infty$.
As $\theta^{(k_m)}_{s,a}$ shall be increasing for all sufficiently large $m$ (given that $a\in I_{s}^{+}$), we also have $\theta^{(k)}_{s,a}\rightarrow \infty$, as $k\rightarrow \infty$.
As $\theta^{(k)}_{s,a'}$ remains finite for all $a'\in I_{s}^{-}$, we therefore have that $\lim_{k\rightarrow \infty}\pi^{(k)}(a'\rvert s)=0$, for all $a'\in I_{s}^{-}$.
\end{itemize}
\end{proof}
\begin{lemma}
\label{lemma:E1,3 and E1 tilde}
Under on-policy CAPO and the condition that $\bbP(E_0)>0$, we have $\bbP({E}_1^{c}\cap {E}_{1,3}\rvert E_0)=0$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:E1,3 and E1 tilde}]
By Lemma \ref{lemma:E1,1 and E1 tilde} and Lemma \ref{lemma:E1,2 and E1 tilde}, we have $\bbP({E}_1^{c}\cap {E}_{1,3}\rvert E_0)=\bbP({E}_1^{c}\cap E_{1,2}^{c}\cap E_{1,1}^{c} \cap {E}_{1,3}\rvert E_0)$.
Under $E_{1,1}^{c}\cap E_{1,2}^{c}$, we know that any action in $I_{s}^{+}\cup I_{s}^{-}$ can appear in $\cB^{(k)}$ only for finitely many times.
This implies that there exists $T_0\in \mathbb{N}$ such that $\cB^{(k)}$ contains only actions in $I_{s}^{0}$, for all $k\geq T_0$.
In order for the above to happen, we must have $\sum_{a\in I_{s}^{0}}\pi^{(k)}(a\rvert s)\rightarrow 1$, as $k\rightarrow \infty$ (otherwise there would exist some $\epsilon>0$ such that $\sum_{a\in I_{s}^{0}}\pi^{(k)}(a\rvert s)\leq 1-\epsilon$ for infinitely many $k$).
This implies that $\lim_{k\rightarrow\infty} \pi^{(k)}(a'\rvert s)=0$, for any $a'\in I_{s}^{-}$.
Hence, $\bbP({E}_1^{c}\cap E_{1,2}^{c}\cap E_{1,1}^{c} \cap {E}_{1,3}\rvert E_0)=0$.
\end{proof}
\begin{lemma}
\label{lemma:E1}
Under on-policy CAPO and the condition that $\bbP(E_0)>0$, we have $\bbP(E_1\rvert E_0)=1$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:E1}]
By (\ref{eq:E1 tilde complement}), Lemma \ref{lemma:E1,1 and E1 tilde}, Lemma \ref{lemma:E1,2 and E1 tilde}, and Lemma \ref{lemma:E1,3 and E1 tilde}, we know $\bbP(E_1^{c}\rvert E_0)=0$.
\end{proof}
Before we proceed, we define the following events:
\begin{align}
E_2:=&\Big\{\omega: \lim_{k\rightarrow \infty}\pi_{s,a}^{(k)}(\omega)= 0 , \forall a\in I_{s}^{+}(\omega)\Big\},\\
E_3:=&\Big\{\omega:\exists a\in I_{s}^{+}(\omega) \text{ with } N^{(\infty)}_{s,a}(\omega)=\infty\Big\}
\end{align}
\begin{lemma}
\label{lemma:E2}
Under on-policy CAPO and the condition that $\bbP(E_0)>0$, we have $\bbP(E_2\rvert E_0)=1$.
\end{lemma}
\begin{proof}
This is a direct result of Lemma \ref{lemma:E1}.
\end{proof}
\begin{lemma}
\label{lemma:E2 and E3}
Under on-policy CAPO and the condition that $\bbP(E_0)>0$, we have $\bbP(E_2\cap E_3\rvert E_0)=0$.
\end{lemma}
\begin{proof}
Under the event $E_2$, we know that for each action $a\in I_{s}^{+}$, for any $\epsilon>0$, there exists some $T_{a,\epsilon}$ such that $\pi_{s,a}^{(k)}< \epsilon$ for all $k\geq T_{a,\epsilon}$.
On the other hand, by Lemma \ref{lemma:pi lower bound}, under $E_3$, {we know that $\pi_{s,a}^{(k)}>\beta$ infinitely often}.
Hence, we know $\bbP(E_2\cap E_3\rvert E_0)=0$.
\end{proof}
Note that by Lemma \ref{lemma:E2} and Lemma \ref{lemma:E2 and E3}, we have
$\bbP(E_2\cap E_3^{c}\rvert E_0)=1$.
The main idea of the proof of Theorem \ref{app:thm:on-policy CAPO} is to establish a contradiction by showing that under $E_0$, $E_3^{c}$ cannot happen with probability one.
Let us explicitly write down the event $E_3^{c}$ as follows:
\begin{equation}
E_3^{c}:=\big\{\omega: \exists \tau(\omega)<\infty \text{ such that } \cB^{(k)}\subseteq I_s^{0}\cup I_{s}^{-}, \forall k\geq \tau(\omega)\big\}.
\end{equation}
Define
\begin{equation}
\theta^{(k)}_{s,\max}:=\max_{a\in \cA} \theta^{(k)}_{s,a}.
\end{equation}
\begin{lemma}
\label{lemma:theta diff}
For any $t\in \mathbb{N}$ and any $K\in \mathbb{N}$, we have
\begin{equation}
\theta_{s,\max}^{(t+K)} - \theta_{s,\max}^{(t)} \leq \log (K+1),
\end{equation}
for every sample path.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:theta diff}]
We consider the changes of $\theta_{s,a}$ of each action separately:
\begin{itemize}[leftmargin=*]
\item $\theta^{(k)}_{s,a} <\theta^{(k)}_{s,\max}$, and $\pi^{(k)}(a\rvert s)<\beta$:
For such an action $a$, we have
\begin{align}
\theta^{(k+1)}_{s,a}&\leq \theta^{(k)}_{s,a}+\log\Big(\frac{\beta}{(1-\beta) \cdot\pi^{(k)}(a\rvert s)} \Big)\label{eq:theta max 1}\\
&\le \log\Big(\frac{\beta}{(1-\beta)} \cdot\lvert\cA \rvert \exp({\theta^{(k)}_{s,\max}})\Big) ,\label{eq:theta max 2}\\
&\leq \log\Big(\frac{\frac{1}{\lvert\cA \rvert+1}}{(1-\frac{1}{\lvert\cA \rvert+1})} \cdot\lvert\cA \rvert \exp({\theta^{(k)}_{s,\max}})\Big)\label{eq:theta max 3}\\
&=\theta^{(k)}_{s,\max},
\end{align}
where (\ref{eq:theta max 1}) holds by the design of on-policy CAPO, (\ref{eq:theta max 2}) follows from the softmax policy parameterization, and (\ref{eq:theta max 3}) follows from the definition of $\theta^{(k)}_{s,\max}$ and the condition of $\beta$.
Note that (\ref{eq:theta max 1}) would be an equality if $A^{(k)}(s,a)>0$.
As a result, this change cannot lead to an increase in $\theta_{s,\max}^{(k)}$.
\item $\theta^{(k)}_{s,a}<\theta^{(k)}_{s,\max}$, and $\pi^{(k)}(a\rvert s)\geq \beta $:
For such an action $a$, we have
\begin{align}
\theta^{(k+1)}_{s,a}&\leq \theta^{(k)}_{s,a}+\zeta\log\Big( \frac{N^{(k)}_{s,a}+1}{N^{(k)}_{s,a}}\Big),\label{eq:theta max 4}
\end{align}
where (\ref{eq:theta max 4}) holds by the design of on-policy CAPO and would be an equality if $A^{(k)}(s,a)>0$.
\item $\theta^{(k)}_{s,a}=\theta^{(k)}_{s,\max}$:
Similarly, we have
\begin{align}
\theta^{(k+1)}_{s,a}&\leq \theta^{(k)}_{s,a}+\zeta\log\Big( \frac{N^{(k)}_{s,a}+1}{N^{(k)}_{s,a}}\Big),\label{eq:theta max 5}
\end{align}
where (\ref{eq:theta max 5}) holds by the design of on-policy CAPO and would be an equality if $A^{(k)}(s,a)>0$.
\end{itemize}
Based on the above discussion, we thereby know
\begin{equation}
\theta^{(k+1)}_{s,\max}- \theta^{(k)}_{s,\max}\leq \zeta \sum_{a\in \cA}\log\Big( \frac{N^{(k+1)}_{s,a}}{N^{(k)}_{s,a}}\Big), \quad\forall k.\label{eq:theta max 6}
\end{equation}
Therefore, for any $t\in\mathbb{N}$, the maximum possible increase in $\theta_{s,\max}^{(k)}$ between the $t$-th and the $(t+K)$-th iterations shall be upper bounded as
\begin{align}
\theta_{s,\max}^{(t+K)} - \theta_{s,\max}^{(t)}&\leq \sum_{k=t}^{t+K-1}\zeta \sum_{a\in \cA}\log\Big( \frac{N^{(k+1)}_{s,a}}{N^{(k)}_{s,a}}\Big)\label{eq:theta max 7}\\
&\leq \zeta \cdot \sum_{a\in \cA} \log\Big(\frac{N^{(t)}_{s,a}+K}{N^{(t)}_{s,a}}\Big)\label{eq:theta max 8}\\
&\leq \log(K+1),\label{eq:theta max 9}
\end{align}
where (\ref{eq:theta max 7}) follows directly from (\ref{eq:theta max 6}), (\ref{eq:theta max 8}) is obtained by interchanging the order of summation, and (\ref{eq:theta max 9}) holds by the condition that $\zeta \leq \frac{1}{\lvert \cA\rvert }$ together with $N^{(t)}_{s,a}\geq 1$.
Hence, we know $\theta_{s,\max}^{(t+K)} - \theta_{s,\max}^{(t)} \leq \log (K+1)$.
\end{proof}
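A quick numerical sanity check of the bound in Lemma \ref{lemma:theta diff} can be done by randomizing the allocation of $K$ single-action updates across $\lvert\cA\rvert$ actions with counts $N^{(t)}_{s,a}\geq 1$ (the specific ranges below are arbitrary):

```python
import math
import random

random.seed(0)
A, zeta = 4, 1 / 4           # |A| actions, zeta = 1/|A|
K = 10                       # number of consecutive iterations considered

for _ in range(1000):
    n = [random.randint(1, 5) for _ in range(A)]   # initial counts N(s, a) >= 1
    k = [0] * A
    for _ in range(K):                             # K single-action updates
        k[random.randrange(A)] += 1
    increase = zeta * sum(math.log((n[a] + k[a]) / n[a]) for a in range(A))
    assert increase <= math.log(K + 1) + 1e-12
print("bound log(K+1) holds on all sampled allocations")
```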
For any fixed action set $I_{s}^{\dagger}\subset \cA$, define
\begin{equation}
E_4(I_{s}^{\dagger}):=\big\{\omega: \text{ For every } a\in I_{s}^{\dagger}, N^{(\infty)}(s,a)<\infty\big\}.
\end{equation}
\begin{lemma}
\label{lemma:E4}
For any $I_{s}^{\dagger}\subset \cA$, we have $\bbP(E_4(I_{s}^{\dagger}))=0$.
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:E4}]
For a given action set $I_{s}^{\dagger}\subset \cA$, define a sequence of events as follows: For each $n\in \mathbb{N}$,
\begin{equation}
E_{4,n}(I_{s}^{\dagger}):=\big\{\omega: \text{ For every } a\in I_{s}^{\dagger}, (s,a)\notin\cB^{(k)},\forall k\geq n\big\}.\label{eq:theta max 10}
\end{equation}
$\{E_{4,n}(I_{s}^{\dagger})\}_{n=1}^{\infty}$ form an increasing sequence of events, i.e., $E_{4,1}(I_{s}^{\dagger})\subseteq E_{4,2}(I_{s}^{\dagger})\cdots \subseteq E_{4,n}(I_{s}^{\dagger}) \subseteq E_{4,n+1}(I_{s}^{\dagger})\cdots$.
Moreover, we have $E_4(I_{s}^{\dagger})=\bigcup_{n=1}^{\infty} E_{4,n}(I_{s}^{\dagger})$.
By the continuity of probability, we have
\begin{equation}
\bbP(E_{4}(I_{s}^{\dagger}))=\bbP(\lim_{n\rightarrow\infty} E_{4,n}(I_{s}^{\dagger})) = \lim_{n\rightarrow \infty} \bbP(E_{4,n}(I_{s}^{\dagger})).\label{eq:theta max 11}
\end{equation}
Next, we proceed to evaluate $\bbP(E_{4,n}(I_{s}^{\dagger}))$.
\begin{align}
\log\big(\bbP(E_{4,n}(I_{s}^{\dagger}))\big)&\leq\log\bigg(\prod_{k\geq n} \frac{\sum_{a'\notin I_{s}^{\dagger}} \exp(\theta_{s,a'}^{(k)})}{\sum_{a'\notin I_{s}^{\dagger}} \exp(\theta_{s,a'}^{(k)})+\sum_{a \in I_{s}^{\dagger}} \exp(\theta_{s,a}^{(k)})}\bigg)\label{eq:theta max 12}\\
&\leq \log\bigg(\prod_{k\geq n} \frac{\lvert \cA\rvert\exp(\theta_{s,\max}^{(k)})}{\lvert \cA\rvert\exp(\theta_{s,\max}^{(k)})+\sum_{a \in I_{s}^{\dagger}} \exp(\theta_{s,a}^{(n)})} \bigg)\label{eq:theta max 13}\\
&\leq \log\bigg(\prod_{m\geq 1} \frac{\lvert \cA\rvert\exp\big(\theta_{s,\max}^{(n)}+\log(m+1)\big)}{\lvert \cA\rvert\exp\big(\theta_{s,\max}^{(n)}+\log(m+1)\big)+\sum_{a \in I_{s}^{\dagger}} \exp(\theta_{s,a}^{(n)})} \bigg)\label{eq:theta max 14}\\
&= \sum_{m\geq 1} \log\bigg(1-\frac{\sum_{a \in I_{s}^{\dagger}} \exp(\theta_{s,a}^{(n)})}{\lvert \cA\rvert(m+1)\exp(\theta_{s,\max}^{(n)})+\sum_{a \in I_{s}^{\dagger}} \exp(\theta_{s,a}^{(n)})} \bigg)=-\infty,\label{eq:theta max 15}
\end{align}
where (\ref{eq:theta max 12}) holds by the softmax policy parameterization and the definition of $E_{4,n}(I_{s}^{\dagger})$, (\ref{eq:theta max 13}) holds by the definition of $\theta^{(k)}_{s,\max}$ and the fact that the parameters of the actions in $I_{s}^{\dagger}$ stay unchanged after iteration $n$ under $E_{4,n}(I_{s}^{\dagger})$, (\ref{eq:theta max 14}) follows directly from Lemma \ref{lemma:theta diff}, and the divergence in (\ref{eq:theta max 15}) follows from $\log(1-x)\leq -x$ and the divergence of the harmonic series.
Equivalently, we have $\bbP(E_{4,n}(I_{s}^{\dagger}))=0$, for all $n\in\mathbb{N}$.
By (\ref{eq:theta max 11}), we conclude that $\bbP(E_4(I_s^{\dagger}))=0$.
\end{proof}
Now we are ready to prove Theorem \ref{app:thm:on-policy CAPO}.
\begin{proof}[Proof of Theorem \ref{app:thm:on-policy CAPO}]
Recall that the main idea is to establish a contradiction by showing that conditioning on $E_0$, $E_3^{c}$ cannot happen with probability one.
Note that by Lemma \ref{lemma:E2} and Lemma \ref{lemma:E2 and E3}, we have
$\bbP(E_2\cap E_3^{c}\rvert E_0)=1$.
However, by Lemma \ref{lemma:E4} and a union bound over the finitely many subsets of $\cA$, the event that all the actions in some subset $I_{s}^{\dagger}\subset \cA$ are selected for policy updates only finitely many times has probability zero. This contradicts $\bbP(E_2\cap E_3^{c}\rvert E_0)=1$.
Therefore, we must have $\bbP(E_0)=0$.
\end{proof}
\subsection{On-Policy CAPO with Fixed Learning Rate}
\label{app:OnCAPO:onCAPO_fixed}
One interesting question is whether on-policy CAPO can be applied with a fixed learning rate. Through a simple single-state bandit example, we show that without the help of a variable learning rate, on-policy CAPO with a fixed learning rate gets stuck in a local optimum with positive probability.
This fact further motivates the use of variable learning rates in CAPO.
We provide the detailed discussion in \Cref{app:example}.
\section{Sub-Optimality of On-Policy CAPO Due to Improper Step Sizes}
\label{app:example}
In this section, we construct a toy example to further showcase how the proposed CAPO benefits from the properly-designed step sizes in Algorithm \ref{algo:CAPO}.
We consider a deterministic $K$-armed bandit with a single state, an action set $\left[K\right]$, and a softmax policy $\pi_{\theta}: [K] \rightarrow [0, 1]$; the reward vector $r \in \mathbb{R}^{K}$ collects the reward of each action. This setting is the same as the one in Section 2 of \citet{mei2021understanding}, except that we drop the assumption of positive rewards $r(a) \in [0, 1), \forall a \in [K]$; here the rewards can be arbitrary real numbers, $r \in \mathbb{R}^K$.
Our goal is to find the optimal policy $\pi^{*}$ that maximizes the expected total reward. Since there is only a single state, the objective function can be written as:
\begin{equation}
J(\theta) = \mathbb{E}_{{a \sim \pi_{\theta}(\cdot)}}[r(a)].
\end{equation}
The on-policy CAPO with fixed learning rate updates the policy parameters by:
\begin{equation}
\label{eq:onCAPO_fixed_update}
\theta_{m+1}(s, a) = \theta_{m}(s, a) + \eta \cdot \sign(A(s,a)) \cdot \mathbb{I}\{a = a_m\},
\end{equation}
where $\eta$ is a constant representing the fixed learning rate.
To demonstrate that on-policy CAPO with a fixed learning rate can get stuck in a sub-optimal policy, we consider a simple three-armed bandit with $K=3$ (i.e., a single state with three actions). We set $r = [1, 0.99, -1]$. Then we have:
\begin{theorem} Given a uniform initial policy $\pi_1$ such that $\pi_{1}(a) = \frac{1}{K}, \forall a \in [K]$, under the policy update of (\ref{eq:onCAPO_fixed_update}), we have $\bbP(\pi_{\infty}(a_2) = 1) > 0$.
\label{theorem:onCAPO_fixed_stuck}
\end{theorem}
The idea is that with $\pi_1(a_1) = \pi_1(a_3)$ and $r(a_1) = -r(a_3)$, if only $a_2$ is sampled in the first $t$ steps, then $A_m(a_2) > 0$ for all $m \le t$.
Thus, $\pi_{m}(a_2)$ is strictly increasing, and the probability of sampling $a_2$ increases accordingly, creating a vicious cycle.
\Cref{theorem:onCAPO_fixed_stuck} shows that a naive fixed learning rate is insufficient. As shown in \Cref{app:OnCAPO:convergence}, with a properly chosen variable learning rate, on-policy CAPO guarantees global convergence. Empirical results can be found in \Cref{app:extra_exp:bandit}.
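As a quick illustration (not part of the formal proof), the dynamics of this example can be simulated. The following Python sketch uses hypothetical settings ($\eta=1$, 300 steps, 1000 runs) and estimates the fraction of runs in which the policy concentrates on the sub-optimal arm $a_2$:

```python
import math
import random

def softmax(theta):
    z = sum(math.exp(t) for t in theta)
    return [math.exp(t) / z for t in theta]

def run(rng, eta=1.0, T=300):
    """On-policy CAPO with a fixed learning rate eta on r = [1, 0.99, -1]."""
    r = [1.0, 0.99, -1.0]
    theta = [0.0, 0.0, 0.0]                              # uniform initial policy
    for _ in range(T):
        pi = softmax(theta)
        a = rng.choices(range(3), weights=pi)[0]         # on-policy sampling
        adv = r[a] - sum(p * ri for p, ri in zip(pi, r)) # A(a) = r(a) - pi^T r
        if adv != 0:
            theta[a] += eta * math.copysign(1.0, adv)    # fixed-step sign update
    return softmax(theta)

rng = random.Random(0)
stuck = sum(run(rng)[1] > 0.9 for _ in range(1000))
print(stuck / 1000)  # a non-negligible fraction of runs lock onto the sub-optimal a_2
```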
\begin{proof}[Proof of Theorem \ref{theorem:onCAPO_fixed_stuck}]
\label{app:proof:onCAPO_fixed}
Inspired by the proof of Theorem 3 (second part) in \citet{mei2021understanding}, we consider the event $\mathcal{E}_{t}$ that $a_2$ is chosen in the first $t$ time steps. We will show that there exists some sequence $b_s$ such that $\bbP(\mathcal{E}_{t}) \ge \prod_{s=1}^{t} b_{s}$ and $\prod_{s=1}^{\infty} b_{s} > 0$.
The first part of the argument is the same as in \citet{mei2021understanding}; we restate it for completeness:
Let $\mathcal{B}_m=\left\{a_m = a_2 \right\}$ be the event that $a_2$ is sampled at time $m$.
Define the event $\mathcal{E}_{t} = \mathcal{B}_{1} \cap \cdots \cap \mathcal{B}_{t}$ that $a_2$ is chosen in the first $t$ time steps, and let $\mathcal{E}:=\bigcap_{t\geq 1}\mathcal{E}_{t}$. Since $\left\{\mathcal{E}_{t}\right\}_{t \geq 1}$ is a nested decreasing sequence, we have $\lim _{t \rightarrow \infty} \bbP\left(\mathcal{E}_{t}\right)=\bbP(\mathcal{E})$ by the continuity of probability.
Following equations (197) and (198) in \citet{mei2021understanding}, we choose $b_t = \pi_t(a_2)$ and show that
\begin{equation}
\prod_{t=1}^{\infty} b_t \geq \exp\left\{- \frac{\sum_{a \neq a_2}\exp\left\{\theta_1(a)\right\}}{\exp\left\{\theta_1(a_2)\right\}} \cdot \frac{\exp\left\{\eta\right\}}{\eta} \right\} > 0.
\end{equation}
\begin{lemma} $\pi_m(a_1) = \pi_m(a_3), \forall 1 \le m \le t$.
\label{prop:OnCAPO_fixed:theta_13}
\end{lemma}
\begin{proof}[Proof of \Cref{prop:OnCAPO_fixed:theta_13}]
Under uniform initialization $\theta_1(a_1) = \theta_1(a_3)$, since only $a_2$ is sampled in the first $t$ steps, we have $\forall 1 \le m \le t$:
\begin{align}
&\pi_m(a_1) = \frac{\exp(\theta_m(a_1))}{\sum_{a}\exp(\theta_m(a))} \\
&= \frac{\exp(\theta_1(a_1))}{\sum_{a}\exp(\theta_m(a))} = \frac{\exp(\theta_1(a_3))}{\sum_{a}\exp(\theta_m(a))} \\
&= \pi_m(a_3).
\end{align}
\end{proof}
\begin{lemma} For all $1 \le m \le t$, we have $A_m(a_2) \ge 0$.
\label{lemma:OnCAPO_fixed:pos_A}
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:OnCAPO_fixed:pos_A}]
Note that under the CAPO update (\ref{eq:onCAPO_fixed_update}), we have
\begin{align}
&A_m(a_2) = r(a_2) - \sum_{a}\pi_m(a) \cdot r(a) \\
&\quad= (1-\pi_m(a_2))r(a_2) - \sum_{a \neq a_2}\pi_m(a) \cdot r(a) \\
&\quad= (1-\pi_m(a_2))r(a_2) \ge 0,
\end{align}
where the last equality follows from \Cref{prop:OnCAPO_fixed:theta_13} and $r(a_1) = - r(a_3)$, so that the terms of $a_1$ and $a_3$ cancel.
\end{proof}
\begin{lemma} $\theta_t(a_2) = \theta_1(a_2) + \eta \cdot (t-1)$.
\label{lemma:OnCAPO_fixed:theta t-step diff}
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:OnCAPO_fixed:theta t-step diff}]
By \Cref{lemma:OnCAPO_fixed:pos_A} and (\ref{eq:onCAPO_fixed_update}), we have:
\begin{align}
&\theta_t(a_2) = \theta_1(a_2) + \eta \cdot \sum_{s=1}^{t-1} \sign(A_s(a_2)) \cdot \mathbb{I}\{a_2 = a_s\} \\
&= \theta_1(a_2) + \eta \cdot \sum_{s=1}^{t-1} 1 \\
&= \theta_1(a_2) + \eta \cdot (t-1).
\end{align}
\end{proof}
\begin{lemma} For all $x \in(0,1)$, we have
\label{lemma:exp_lower_bound}
\begin{equation}
\label{eq:exp_lower_bound}
1-x \geq \exp \left\{\frac{-x}{1 - x}\right\}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:exp_lower_bound}]
This is a direct result of Lemma 14 in \citep{mei2021understanding}. Here we also include the proof for completeness.
\begin{align}
1-x &=\exp \{\log (1-x)\} \\
& \geq \exp \left\{1-e^{-\log (1-x)}\right\} \quad\left(y \geq 1-e^{-y}\right) \\
&=\exp \left\{\frac{-1}{1 / x-1}\right\}
\\
&=\exp \left\{\frac{-x}{1 - x}\right\}.
\end{align}
Then, plugging in $x = \frac{a}{b}$ for some $0 < a < b$ yields a more convenient form of this lemma:
\begin{equation}
\label{eq:exp_lower_bound2}
1 -\frac{a}{b} \ge \exp \left\{\frac{-a}{b-a} \right\}.
\end{equation}
\end{proof}
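The inequality can also be verified numerically on a grid, as a simple sanity check (not a proof):

```python
import math

# Check 1 - x >= exp(-x / (1 - x)) on a fine grid over (0, 1).
for i in range(1, 1000):
    x = i / 1000.0
    assert 1.0 - x >= math.exp(-x / (1.0 - x))
print("inequality holds on the grid")
```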
\begin{lemma} $\pi_t(a_2) \ge \exp\left\{\frac{-\sum_{a \neq a_2}\exp\left\{\theta_t(a)\right\} }{\exp\left\{\theta_t(a_2) \right\}}\right\}$.
\label{lemma:bandit positive prob}
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:bandit positive prob}]
\begin{align}
&\pi_t(a_2) = 1 - \sum_{a \neq a_2}\pi_t(a) \\
&= 1 - \frac{\sum_{a \neq a_2}\exp\left\{\theta_t(a)\right\} }{\exp\left\{\theta_t(a_2) \right\} + \sum_{a \neq a_2}\exp\left\{\theta_t(a)\right\}} \\
&\ge \exp\left\{\frac{-\sum_{a \neq a_2}\exp\left\{\theta_t(a)\right\} }{\exp\left\{\theta_t(a_2) \right\}}\right\},
\end{align}
where the last inequality uses (\ref{eq:exp_lower_bound2}).
\end{proof}
Finally, we have
\begin{align}
&\prod_{t=1}^{\infty} \pi_t(a_2) \ge \prod_{t=1}^{\infty} \exp\left\{\frac{-\sum_{a \neq a_2}\exp\left\{\theta_t(a)\right\} }{\exp\left\{\theta_t(a_2) \right\}}\right\} \\
&= \prod_{t=1}^{\infty} \exp\left\{\frac{-\sum_{a \neq a_2}\exp\left\{\theta_1(a)\right\} }{\exp\left\{\theta_1(a_2) + \eta \cdot (t-1) \right\}}\right\} \\
&= \exp\left\{ \sum_{t=1}^{\infty} \frac{-\sum_{a \neq a_2}\exp\left\{\theta_1(a)\right\} }{\exp\left\{\theta_1(a_2) + \eta \cdot (t-1) \right\}} \right\} \\
&= \exp\left\{- \frac{\sum_{a \neq a_2}\exp\left\{\theta_1(a)\right\}}{\exp\left\{\theta_1(a_2)\right\}} \cdot \exp\left\{\eta\right\} \cdot \sum_{t=1}^{\infty}\frac{1} {\exp\left\{\eta \cdot t \right\}} \right\} \\
&\ge \exp\left\{- \frac{\sum_{a \neq a_2}\exp\left\{\theta_1(a)\right\}}{\exp\left\{\theta_1(a_2)\right\}} \cdot \exp\left\{\eta\right\} \cdot \int_{0}^{\infty}\frac{1} {\exp\left\{\eta \cdot t \right\}}\, dt \right\} \\
&= \exp\left\{- \frac{\sum_{a \neq a_2}\exp\left\{\theta_1(a)\right\}}{\exp\left\{\theta_1(a_2)\right\}} \cdot \frac{\exp\left\{\eta\right\}}{\eta} \right\} \\
&= \Omega(1),
\end{align}
where the last line comes from the fact that $\sum_{a \neq a_2}\exp\left\{\theta_1(a)\right\} \in \Theta(1)$, $\exp\left\{\theta_1(a_2)\right\} \in \Theta(1)$ and $\frac{\exp\left\{\eta\right\}}{\eta} \in \Theta(1)$.
\end{proof}
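The chain of inequalities above can be checked numerically for a hypothetical configuration (uniform initialization and $\eta=1$): the truncated product $\prod_t \pi_t(a_2)$ along the "always sample $a_2$" path indeed dominates the closed-form lower bound:

```python
import math

eta = 1.0
theta1 = [0.0, 0.0, 0.0]                 # uniform initialization; a_2 has index 1
c = sum(math.exp(t) for i, t in enumerate(theta1) if i != 1) / math.exp(theta1[1])

# Evaluate prod_t pi_t(a_2) with theta_t(a_2) = theta_1(a_2) + eta*(t-1).
prod = 1.0
for t in range(1, 200):                  # the tail factors are essentially 1
    num = math.exp(theta1[1] + eta * (t - 1))
    prod *= num / (num + c * math.exp(theta1[1]))

bound = math.exp(-c * math.exp(eta) / eta)   # the closed-form lower bound derived above
assert 0 < bound <= prod
print(prod, bound)
```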
\section{Convergence of Stochastic Vanilla Policy Gradient}
\label{app:on_pg}
\begin{proof}
The proof follows a similar idea as that of Theorem 3 in \citet{mei2021understanding}, which shows that the stochastic NPG update can get stuck in a local optimum with positive probability. We now extend the argument to the SPG update without importance correction (\eqref{eq:spg_update}).
The proof consists of two steps:
\begin{itemize}[leftmargin=*]
\vspace{-1mm}
\item Show that under the above assumptions, $\operatorname{Pr}\left(\mathcal{E}_{t}\right) \geq \prod_{s=1}^{t} b_{s}$.
\vspace{-1mm}
\item Show that there exists a suitable choice of $b_s$ such that $\prod_{s=1}^{\infty} b_{s} > 0$.
\end{itemize}
\vspace{-1mm}
Let $\mathcal{E}_{t}$ be the event that the optimal action $a^{*}$ is not sampled in the first $t$ time steps. We want to show that $\lim_{t \rightarrow \infty} \operatorname{Pr}\left(\mathcal{E}_{t}\right) > 0$.
When $a^{*}$ is not sampled in the first $t$ steps, we have:
\begin{align}
\theta_{t}(a) &= \theta_{1}(a) + \eta \cdot \sum_{s=1}^{t-1} \frac{d \pi_{s}^{\top} r}{d \theta_{s}(a)} \\
&= \theta_{1}(a) + \eta \cdot \sum_{s=1}^{t-1} (\mathbb{I}\left[a=a_{s}\right] - \pi_{s}(a)) \cdot r(a_{s}) \\
&\ge \theta_{1}(a) + \eta \cdot r_{\min} \cdot \sum_{s=1}^{t-1} (\mathbb{I}\left[a=a_{s}\right] - \pi_{s}(a)), \quad \text{where } r_{\min }:=\min _{a \neq a^{*}} r(a).
\end{align}
Since
\begin{equation*}
\pi_{\theta_{t}}(a^{*}) = \frac{\exp \left\{\theta_{t}\left(a^{*}\right)\right\}}{\sum_{a \neq a^{*}} \exp \left\{\theta_{t}(a)\right\}+\exp \left\{\theta_{t}\left(a^{*}\right)\right\}},
\end{equation*}
we first lower bound the first term of the denominator:
\begin{align}
&\sum_{a \neq a^{*}} \exp \left\{\theta_{t}(a)\right\} \geq(K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{t}(a)}{K-1}\right\} \quad \text { (by Jensen's inequality) } \\
&\quad\geq (K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \sum_{a \neq a^{*}} \sum_{s=1}^{t-1} \left(\mathbb{I}\left\{a_{s}=a\right\} - \pi_s(a)\right)}{K-1}\right\} \\
&\quad= (K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \left[(t-1) - \sum^{t-1}_{s=1}\sum_{a \neq a^{*}} \pi_s(a)\right]}{K-1}\right\} \\
&\quad= (K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \left[(t-1) - \sum^{t-1}_{s=1}\left(1 - \pi_s(a^{*})\right)\right]}{K-1}\right\} \\
&\quad= (K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \sum^{t-1}_{s=1}\pi_s(a^{*})}{K-1}\right\} \\
&\quad\geq (K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \left[(t-1)\, \pi_t(a^{*})\right]}{K-1}\right\},
\end{align}
where we use the fact that $\sum_{a \neq a^{*}} \sum_{s=1}^{t-1} \mathbb{I}\left\{a_{s}=a\right\} = t-1$ and $\pi(a^{*}) = 1 - \sum_{a \neq a^{*}} \pi(a)$.
Note that since $a^{*}$ is not sampled in the first $t$ steps, we have $\theta_{t}(a^{*}) \leq \theta_{1}(a^{*})$, and hence:
\begin{align}
\label{eq:sum_a_lowerbound}
&\sum_{a \neq a^{*}} \pi_{\theta_{t}}(a) = 1 - \pi_{t}(a^{*}) \\
&\quad= 1 - \frac{\exp \left\{\theta_{t}\left(a^{*}\right)\right\}}{\sum_{a \neq a^{*}} \exp \left\{\theta_{t}(a)\right\}+\exp \left\{\theta_{t}\left(a^{*}\right)\right\}} \\
&\quad\geq 1 - \frac
{\exp\left\{\theta_{t}\left(a^{*}\right)\right\}}
{(K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \left[(t-1) \pi_{t}(a^*)\right]} {K-1}\right\} + \exp \left\{\theta_{t}\left(a^{*}\right)\right\}}
\end{align}
\begin{lemma}
\label{lemma:exp_lower_bound}
For all $x \in(0,1)$, we have:
\begin{equation}
\label{eq:exp_lower_bound}
1-x \geq \exp \left\{\frac{-x}{1 - x}\right\}
\end{equation}
\end{lemma}
\begin{proof}
\begin{align}
1-x &=\exp \{\log (1-x)\} \\
& \geq \exp \left\{1-e^{-\log (1-x)}\right\} \quad\left(y \geq 1-e^{-y}\right) \\
&=\exp \left\{\frac{-1}{1 / x-1}\right\}
\\
&=\exp \left\{\frac{-x}{1 - x}\right\}
\end{align}
We can plug in $x = \frac{a}{b}$ with $0 < a < b$ to obtain a more useful form of this lemma:
\begin{equation}
\label{eq:exp_lower_bound2}
1 -\frac{a}{b} \ge \exp \left\{\frac{-a}{b-a} \right\}
\end{equation}
\end{proof}
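As a quick numeric sanity check of the lemma (our illustration, not part of the analysis):

```python
import math

# Check 1 - x >= exp(-x / (1 - x)) on a fine grid over (0, 1).
def lemma_holds(x: float) -> bool:
    return 1.0 - x >= math.exp(-x / (1.0 - x))

grid = [i / 10000 for i in range(1, 10000)]
assert all(lemma_holds(x) for x in grid)

# The substituted form: 1 - a/b >= exp(-a / (b - a)) for 0 < a < b.
assert 1 - 3 / 7 >= math.exp(-3 / (7 - 3))
print("inequality verified on the grid")
```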
Using \Cref{lemma:exp_lower_bound} and \eqref{eq:sum_a_lowerbound}, we have:
\begin{align}
\sum_{a \neq a^{*}} \pi_{\theta_{t}}(a) &= 1 - \pi_{t}(a^{*}) \\
&\geq \exp \left\{\frac{-\exp\left\{\theta_{t}\left(a^{*}\right)\right\}}{(K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \left[(t-1) \pi_{t}(a^*)\right]} {K-1}\right\}}\right\}
\end{align}
So the probability that $a^{*}$ is not sampled in the first $t$ steps is:
\begin{align}
\operatorname{Pr}(\mathcal{E}) &\geq \prod_{t=1}^{\infty} \exp \left\{\frac{-\exp\left\{\theta_{t}\left(a^{*}\right)\right\}}{(K-1) \cdot \exp \left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)+\eta \cdot r_{\min } \cdot \left[(t-1) \pi_{t}(a^*)\right]} {K-1}\right\}}\right\} \\
&\geq \exp\left\{ \frac{-\exp\left\{\theta_{1}\left(a^{*}\right)\right\}}{(K-1)}
\frac{1}
{\exp\left\{\frac{\sum_{a \neq a^{*}} \theta_{1}(a)}{K-1}\right\}}
\sum_{t=1}^{\infty} \frac{1}{\exp \left\{\frac{\eta \cdot r_{\min } \cdot \left(\pi_{1}\left(a^*\right) + c_{t}\right)} {K-1}\right\}}\right\}
\end{align}
\end{proof}
\section{A Closer Look at the Learning Rate}
\label{app:extra_exp:bandit}
Unlike most RL algorithms, CAPO leverages a variable, state-action-dependent learning rate instead of a fixed learning rate.
In this section, we provide some insights into why this design is preferred under CAPO from both theoretical and empirical perspectives.
\subsection{Variable Learning Rate vs. Fixed Learning Rate}
In \Cref{lemma:lower_bdd}, we quantify the one-step improvement $V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s)$ in terms of state visitation distribution, policy weight, and advantage value under learning rate ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$. Now, we provide the one-step improvement \textit{under fixed learning rate}, $\alpha \in \mathbb{R}$, $\alpha > 0$:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) =
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{(e^{\alpha}-1) \cdot \pi_m(a_m|s_m)}{(e^{\alpha}-1) \cdot \pi_m(a_m|s_m) + 1} \cdot A^{m}(s_m, a_m) & \text{, if } A^{m}(s_m, a_m) > 0 \\
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{(1-e^{-\alpha}) \cdot \pi_m(a_m|s_m)}{(e^{-\alpha}-1) \cdot \pi_m(a_m|s_m) + 1} \cdot (-A^{m}(s_m, a_m)) & \text{, if } A^{m}(s_m, a_m) < 0\\
\end{cases}\\
\text{where $\alpha \in \mathbb{R}$, $\alpha > 0$}
\end{align}
Note that the result above can be obtained by using the same technique in \Cref{lemma:change_of_policy_weight_1}, \Cref{lemma:change_of_policy_weight_2} and \Cref{lemma:lower_bdd} by substituting the learning rate.
Compared to the one-step improvement under the variable learning rate, the one-step improvement under a fixed learning rate becomes tiny as the updated action's policy weight $\pi_m(a_m|s_m) \rightarrow 0$. This property makes it difficult for an action with a positive advantage value but a small policy weight to contribute enough to the overall improvement, i.e., for those actions, the change in policy weight $\pi_{m+1}(a_m|s_m) - \pi_{m}(a_m|s_m) \rightarrow 0$ under an improper fixed learning rate, leading to a small one-step improvement.\\
Now, to provide some further insights into the possible disadvantage of a fixed learning rate, we revisit the proof of the convergence rate of Cyclic CAPO in \Cref{subsection:cyclic_CAPO}.
By combining the one-step improvement above, the results from \hyperref[item:case1]{Case 1} and \hyperref[item:case2]{Case 2} under a fixed learning rate $\alpha \in \mathbb{R}$, $\alpha > 0$ can be rewritten as:
\begin{align}
V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s)
&\ge \frac{(1-\gamma)^2}{2} \cdot \frac{1}{{\max} \left \{ \frac{ (1-\pi_{m+T}(a_{m+T}|s_{m+T})) \cdot (e^{\alpha}-1 ) \cdot \pi_{m+T}(a_{m+T}|s_{m+T}) + 1 }{ (1-\gamma) \cdot (e^{\alpha}-1 ) \cdot \pi_{m+T}(a_{m+T}|s_{m+T}) } , \frac{c_m \cdot T}{(1-\gamma)^2} \right \}} \cdot \left ( V^{*}(s) - V^{\pi_m}(s) \right )^2&&\\
&=\frac{(1-\gamma)^2}{2} \cdot {\min} \left \{ \frac{ (1-\gamma) \cdot (e^{\alpha}-1 ) \cdot \pi_{m+T}(a_{m+T}|s_{m+T}) }{ (1-\pi_{m+T}(a_{m+T}|s_{m+T})) \cdot (e^{\alpha}-1 ) \cdot \pi_{m+T}(a_{m+T}|s_{m+T}) + 1 } , \frac{(1-\gamma)^2}{c_m \cdot T} \right \}&&\\
& \quad \cdot \left ( V^{*}(s) - V^{\pi_m}(s) \right )^2 &&
\end{align}
where $c_m = \underset{k \in [m,m+T-1]}{\max} \left \{ c_{k1}, c_{k2} \right \} \in [0,1]$ \\
and $c_{k1} = \mathbbm{1} \left \{ A^k(s_k,a_k)>0 \right \} \cdot d^{\pi_{k+1}}_{s}(s_k) \cdot \frac{(e^{\alpha}-1 ) \cdot \pi_{k}(a_k|s_k) \cdot (1-\pi_{k}(a_k|s_k)) }{ (e^{\alpha}-1 ) \cdot \pi_{k}(a_k|s_k) + 1 }$, $c_{k2} = \mathbbm{1} \left \{ A^k(s_k,a_k) < 0 \right \} \cdot d^{\pi_{k+1}}_{s}(s_k) \cdot \frac{(1-e^{-\alpha} ) \cdot \pi_{k}(a_k|s_k) \cdot (1-\pi_{k}(a_k|s_k)) }{ (e^{-\alpha}-1 ) \cdot \pi_{k}(a_k|s_k) + 1 }$.\\
Note that the first term $\frac{ (1-\gamma) \cdot (e^{\alpha}-1 ) \cdot \pi_{m+T}(a_{m+T}|s_{m+T}) }{ (1-\pi_{m+T}(a_{m+T}|s_{m+T})) \cdot (e^{\alpha}-1 ) \cdot \pi_{m+T}(a_{m+T}|s_{m+T}) + 1 }$ in the ``min'' operator is derived from \hyperref[item:case2]{Case 2} and the second term $\frac{(1-\gamma)^2}{c_m \cdot T}$ is derived from \hyperref[item:case1]{Case 1}.
Once we cannot guarantee that \hyperref[item:case1]{Case 1} provides a sufficient amount of improvement, we must show that the rest of the required improvement can be obtained in \hyperref[item:case2]{Case 2}. However, there is a term $\pi_{m+T}(a_{m+T}|s_{m+T})$ in the numerator of the first term in the ``min'' operator, which is provided by \hyperref[item:case2]{Case 2}, implying that the multi-step improvement $V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s)$ might also be tiny when the improvement provided by \hyperref[item:case1]{Case 1} is insufficient and the policy weight $\pi_{m+T}(a_{m+T}|s_{m+T}) \rightarrow 0$ in \hyperref[item:case2]{Case 2}.\\
Accordingly, we highlight the importance of the choice of the learning rate, especially when the visitation frequency of the coordinate generator is extremely unbalanced (e.g., sampling the optimal action only every $(|\mathcal{S}||\mathcal{A}|)^{1000}$ epochs) or when the approximated advantage value oscillates between positive and negative during the update. The design of the variable learning rate ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$ tackles the difficulty of insufficient one-step improvement by assigning a larger step size to actions with tiny policy weight, thereby avoiding vanishingly small changes in the policy weight. Therefore, under this design of the learning rate, the one-step improvement is more robust to the policy weight of the action chosen for the policy update.
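To make the vanishing-improvement effect concrete: under the softmax parameterization, raising a single logit by a fixed step $\alpha$ changes that action's weight by exactly $\frac{(e^{\alpha}-1) \cdot \pi \cdot (1-\pi)}{(e^{\alpha}-1) \cdot \pi + 1}$, the factor appearing in $c_{k1}$ above. The following snippet (our illustration; the helper function is ours) verifies this closed form:

```python
import math, random

# Softmax over a logit vector (no stability shift needed for small logits).
def softmax(theta):
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [v / s for v in z]

random.seed(0)
for _ in range(100):
    theta = [random.uniform(-2, 2) for _ in range(5)]
    alpha = random.uniform(0.1, 3.0)
    p = softmax(theta)[0]
    theta2 = list(theta)
    theta2[0] += alpha                       # fixed-step update on action 0
    diff = softmax(theta2)[0] - p            # actual policy-weight change
    closed = (math.exp(alpha) - 1) * p * (1 - p) / ((math.exp(alpha) - 1) * p + 1)
    assert abs(diff - closed) < 1e-12
print("fixed-step weight change matches the closed form")
```

As $\pi \rightarrow 0$, this factor behaves like $(e^{\alpha}-1) \cdot \pi \rightarrow 0$, which is exactly the vanishing one-step improvement discussed above.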
\subsection{Demonstrating the Effect of Learning Rate in a Simple Bandit Environment}
In this section, we compare the empirical convergence behavior of On-policy CAPO and Off-policy CAPO. Specifically, we evaluate the following four algorithms: (i) On-Policy CAPO with a state-action-dependent learning rate (cf. (\ref{eq:on-policy CAPO update})), (ii) On-Policy CAPO with a fixed learning rate (cf. (\ref{eq:onCAPO_fixed_update})), (iii) Off-Policy CAPO with a state-action-dependent learning rate (cf. (\ref{eq:CAPO_form})), and (iv) Off-Policy CAPO with a fixed learning rate.
We consider the multi-armed bandit as in \Cref{app:example} with $K=4$, and $r = [10, 9.9, 9.9, 0]$. To further demonstrate the ability of CAPO in escaping from the sub-optimal policies, instead of considering the uniform initial policy where $\pi_1(a) = \frac{1}{K}, \forall a \in [K]$, we initialize the policy to a policy that already prefers the sub-optimal actions ($a_2, a_3$) such that $\theta_1 = [0, 3, 3, 0]$ and $\pi_1 \approx [0.0237, 0.4762, 0.4762, 0.0237]$ under the softmax parameterization. For each algorithm, we run the experiments under 100 random seeds. For all the variants of CAPO, we set $\lvert B_m\rvert=1$.
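To complement the figures, the following is a simplified, deterministic sketch of this setting (our illustration, not the code used for the reported experiments): it runs Cyclic CAPO, i.e., a coordinate generator that cycles through all actions, with exact advantages $A(a) = r(a) - \langle \pi, r \rangle$ and the variable learning rate $\alpha(a) = \log(1/\pi(a))$, starting from the same skewed initialization $\theta_1 = [0, 3, 3, 0]$:

```python
import math

def softmax(theta):
    m = max(theta)
    z = [math.exp(t - m) for t in theta]  # shift for numerical stability
    s = sum(z)
    return [v / s for v in z]

r = [10.0, 9.9, 9.9, 0.0]        # rewards; the optimal action is action 0
theta = [0.0, 3.0, 3.0, 0.0]     # skewed initialization preferring a_2, a_3

for _ in range(50):              # cyclic coordinate generator over all actions
    for a in range(len(r)):
        pi = softmax(theta)
        adv = r[a] - sum(p * ri for p, ri in zip(pi, r))  # exact advantage
        alpha = math.log(1.0 / max(pi[a], 1e-300))        # variable learning rate
        if adv > 0:
            theta[a] += alpha
        elif adv < 0:
            theta[a] -= alpha

pi = softmax(theta)
assert pi[0] > 0.99              # escapes the sub-optimal initialization
print("final policy:", [round(p, 6) for p in pi])
```

After 50 cycles the sketch places more than $0.99$ probability on the optimal action despite the skewed initialization, mirroring the escape behavior of the variable learning rate discussed above.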
In \Cref{fig:bandit}, On-policy CAPO with a fixed learning rate can get stuck in a sub-optimal policy due to the skewed policy initialization that leads to insufficient visitation to each action, and this serves as an example demonstrating the effect described in \Cref{theorem:onCAPO_fixed_stuck}. On the other hand, On-policy CAPO with a state-action-dependent learning rate always converges to the global optimum despite the extremely skewed policy initialization. This corroborates the importance of the variable learning rate for on-policy CAPO: without such a design, the policies failed to escape from a sub-optimal policy under all the random seeds.
Next, we look at the results of off-policy CAPO: we notice that off-policy CAPO with a fixed learning rate is able to identify the optimal action. However, it learns much more slowly than its variable-learning-rate counterpart (notice that the x-axis (Iteration) in each graph is scaled differently for better visualization).
Also, we notice that the choice of the fixed learning rate has a direct impact on the learning speed, and this introduces a hyperparameter that depends on the MDP. On the other hand, $\alpha_m(s, a)$ can be used as a general-purpose learning rate across different cases (for example, in \Cref{app:generator}, where a different environment, Chain, is introduced, the learning rate for off-policy Actor-Critic has to be tuned, while $\alpha_m(s,a)$ can be used as the go-to learning rate).
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{./figures/oncapo_alpha-short.pdf}
\includegraphics[width=0.4\textwidth]{./figures/offcapo-short.pdf}
\includegraphics[width=0.4\textwidth]{./figures/oncapo_fixed.pdf}
\includegraphics[width=0.4\textwidth]{./figures/off_CAPO_fix.pdf}
\caption{The probability weight of the trained policies on the optimal action at different iterations.}
\label{fig:bandit}
\end{figure}
\section{Proofs of the Convergence Rates of CAPO in Section \ref{subsection:convergence_rate}}
\label{app:convergence_rate}
\begin{lemma} $|A^m(s,a)| \le \frac{1}{1-\gamma} \cdot (1-\pi_m(a|s))$, \quad \text{for all $(s,a) \in \mathcal{S} \times \mathcal{A}$.}
\label{lemma:upper_bound_of_advantage}
\begin{proof}[Proof of \Cref{lemma:upper_bound_of_advantage}]
\phantom{}
If $A^m(s,a) > 0$ :
\begin{align}
|A^m(s,a)|
&=Q^{\pi_m}(s,a) - V^{\pi_m}(s) &&\\
&=Q^{\pi_m}(s,a) - \sum_{a' \in \mathcal{A}}\pi_m(a'|s) \cdot Q^{\pi_m}(s,a') &&\\
&\le Q^{\pi_m}(s,a) - \pi_m(a|s) \cdot Q^{\pi_m}(s,a) &&\\
&= Q^{\pi_m}(s,a) \cdot (1 - \pi_m(a|s)) &&\\
&\le \frac{1}{1-\gamma} \cdot (1 - \pi_m(a|s)) &&
\end{align}
If $A^m(s,a) \le 0$ :
\begin{align}
|A^m(s,a)|
&= V^{\pi_m}(s) - Q^{\pi_m}(s,a) &&\\
&= \sum_{a' \in \mathcal{A}}\pi_m(a'|s) \cdot Q^{\pi_m}(s,a') - Q^{\pi_m}(s,a) &&\\
&= \sum_{a' \ne a}\pi_m(a'|s) \cdot Q^{\pi_m}(s,a') - (1 - \pi_m(a|s)) \cdot Q^{\pi_m}(s,a) &&\\
&\le \sum_{a' \ne a}\pi_m(a'|s) \cdot Q^{\pi_m}(s,a') &&\\
&\le \frac{1}{1-\gamma} \cdot \sum_{a' \ne a}\pi_m(a'|s) &&\\
&= \frac{1}{1-\gamma} \cdot (1 - \pi_m(a|s)) &&
\end{align}
\end{proof}
\end{lemma}
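As a numeric sanity check of \Cref{lemma:upper_bound_of_advantage} (our illustration; it assumes rewards in $[0,1]$, so that $Q$-values lie in $[0, \frac{1}{1-\gamma}]$, matching the bound used in the last step of each case):

```python
import random

# For any Q-values in [0, 1/(1-gamma)] and any policy pi,
# |A(s, a)| = |Q(s, a) - sum_a' pi(a') Q(s, a')| <= (1 - pi(a)) / (1 - gamma).
random.seed(1)
gamma = 0.9
for _ in range(1000):
    K = random.randint(2, 8)
    q = [random.uniform(0.0, 1.0 / (1.0 - gamma)) for _ in range(K)]
    w = [random.random() for _ in range(K)]
    pi = [x / sum(w) for x in w]             # a random policy over K actions
    v = sum(p * qi for p, qi in zip(pi, q))  # V = E_{a ~ pi}[Q(a)]
    for a in range(K):
        assert abs(q[a] - v) <= (1.0 - pi[a]) / (1.0 - gamma) + 1e-12
print("advantage bound holds on random instances")
```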
\begin{lemma} $\left ( V^{*}(s) - V^{\pi_m}(s) \right )^2 \le \left ( \frac{1}{1-\gamma} \cdot A^{m}(\tilde{s_m},\tilde{a_m}) \right )^2$, \text{for all $m \ge 1$} where $(\tilde{s_m},\tilde{a_m}) = \underset{(s,a) \in \mathcal{S} \times \mathcal{A}}{\argmax} A^{m}(s,a)$.
\label{lemma:lower_bound_of_performance_difference}
\begin{proof}[Proof of \Cref{lemma:lower_bound_of_performance_difference}]
\phantom{}
\begin{align}
\left ( V^{*}(s) - V^{\pi_m}(s) \right )^2
&= \left ( \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi^*}_{s} (s') \sum_{a' \in \mathcal{A}} \pi^*(a'|s') \cdot A^{m}(s',a') \right )^2&&\\
&\le \left ( \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi^*}_{s} (s') \cdot \underset{a' \in \mathcal{A}}{\max}A^{m}(s',a') \right )^2&&\\
&\le \left ( \frac{1}{1-\gamma} \cdot \underset{(s',a') \in \mathcal{S} \times \mathcal{A}}{\max} A^{m}(s',a') \right )^2&&\\
&= \left ( \frac{1}{1-\gamma} \cdot A^{m}(\tilde{s_m},\tilde{a_m}) \right )^2&&
\end{align}
The first equation holds by the performance difference lemma in \Cref{lemma:perf_diff}.\\
The two inequalities hold since replacing a weighted average with the corresponding maximum can only increase the value, and since both sides are non-negative, squaring preserves the inequalities.
\end{proof}
\end{lemma}
\begin{lemma} $V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{1}{1-\gamma} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )$
\label{lemma:performance_difference_in_rho}
\begin{proof}[Proof of \Cref{lemma:performance_difference_in_rho}]
\phantom{}
\begingroup
\allowdisplaybreaks
\begin{align}
V^{*}(\rho) - V^{\pi_m}(\rho)
&= \frac{1}{1-\gamma} \cdot \sum_{s \in \mathcal{S}} d^{\pi^*}_{\rho} (s) \sum_{a \in \mathcal{A}} \pi^*(a|s) \cdot A^{m}(s,a)&&\\
&= \frac{1}{1-\gamma} \cdot \sum_{s \in \mathcal{S}} d^{\pi^*}_{\mu}(s) \cdot \frac{d^{\pi^*}_{\rho}(s)}{d^{\pi^*}_{\mu}(s)} \sum_{a \in \mathcal{A}} \pi^*(a|s) \cdot A^{m}(s,a) &&\\
&\le \frac{1}{1-\gamma} \cdot \left \| \frac{1}{d^{\pi^*}_{\mu}} \right \|_{\infty} \cdot \sum_{s \in \mathcal{S}} d^{\pi^*}_{\mu}(s) \sum_{a \in \mathcal{A}} \pi^*(a|s) \cdot A^{m}(s,a)&&\\
&\le \frac{1}{(1-\gamma)^2} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \sum_{s \in \mathcal{S}} d^{\pi^*}_{\mu}(s) \sum_{a \in \mathcal{A}} \pi^*(a|s) \cdot A^{m}(s,a)&&\\
&= \frac{1}{1-\gamma} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right ) &&
\end{align}
\endgroup
The first and the last equation hold by the performance difference lemma in \Cref{lemma:perf_diff}.\\
The first inequality holds since the value inside the summation is non-negative, and the second inequality holds since $d^{\pi^*}_{\mu}(s) \ge (1-\gamma) \cdot \mu(s)$ by \Cref{lemma:lower_bound_of_state_visitation_distribution}.
\end{proof}
\end{lemma}
\begin{lemma} $d^{\pi}_{\mu}(s) \ge (1-\gamma) \cdot \mu(s)$, \quad \text{for any policy $\pi$ and any $s \in \mathcal{S}$,} where $\mu$ is the starting state distribution of the MDP.
\label{lemma:lower_bound_of_state_visitation_distribution}
\begin{proof}[Proof of \Cref{lemma:lower_bound_of_state_visitation_distribution}]
\phantom{}
\begin{align}
d^{\pi}_{\mu}(s)
&= \underset{s_0 \sim \mu}{\mathbb{E}} \left [ d^{\pi}_{s_0}(s) \right]&&\\
&= \underset{s_0 \sim \mu}{\mathbb{E}} \left [ (1-\gamma) \cdot \sum_{t=0}^{\infty}\gamma^t \cdot \mathbb{P}(s_t=s \: | \: s_0, \pi) \right]&&\\
&\ge \underset{s_0 \sim \mu}{\mathbb{E}} \left [ (1-\gamma) \cdot \mathbb{P}(s_0=s \: | \: s_0, \pi) \right] &&\\
&= (1-\gamma) \cdot \mu(s)&&
\end{align}
The first equation holds by taking the expectation over the starting state $s_0 \sim \mu$.\\
The second equation holds by the definition of the discounted state visitation distribution, and the inequality holds by keeping only the $t=0$ term of the non-negative sum.
\end{proof}
\end{lemma}
\begin{lemma} Given $\delta_{m+1} \le \delta_{m} - c \cdot \delta_{m}^2$ where $\delta_{m} \le \frac{1}{1-\gamma}$ for all $m \ge 1$ and $c \le \frac{1-\gamma}{2}$, then $\delta_{m} \le \frac{1}{c} \cdot \frac{1}{m}$ for all $m \ge 1$, and $\sum_{m=1}^{M} \delta_m \le \min \left \{ \sqrt{\frac{M}{c \cdot (1-\gamma)}}, \frac{\log M +1}{c} \right \}$ for all $M \ge 1$.
\label{lemma:induction}
\begin{proof}[Proof of \Cref{lemma:induction}]
\phantom{}
We prove this lemma by induction. For $m \le 2$, $\delta_{m} \le \frac{1}{c} \cdot \frac{1}{m}$ directly holds since $c \le \frac{1-\gamma}{2}$ and $\delta_{m} \le \frac{1}{1-\gamma}$.\\
Let $f(x) = x - c \cdot x^2 = -c(x-\frac{1}{2c})^2 + \frac{1}{4c}$. Then $f(x)$ is monotonically increasing on $[0,\frac{1}{2c}]$, and so we have:
\begin{align}
\delta_{m+1}
&\le f(\delta_{m})&&\\
&\le f(\frac{1}{c} \cdot \frac{1}{m})&&\\
&= \frac{1}{c} \cdot (\frac{1}{m}-\frac{1}{m^2})&&\\
&\le \frac{1}{c} \cdot \frac{1}{m+1}
\end{align}
and by summing up $\delta_{m}$, we have:
\begin{align}
\sum_{m=1}^{M} \delta_{m}
&\le \sum_{m=1}^{M} \frac{1}{c} \cdot \frac{1}{m}&&\\
&= \frac{1}{c} \cdot \sum_{m=1}^{M} \frac{1}{m}&&\\
&\le \frac{1}{c} \cdot (\ln{M}+1)&&
\end{align}
On the other hand, we also have:
\begin{align}
\sum_{m=1}^{M} \delta_{m}^2
&\le \frac{1}{c} \cdot \sum_{m=1}^{M} (\delta_{m}-\delta_{m+1})&&\\
&\le \frac{1}{c} \cdot (\delta_{1}-\delta_{M+1})&&\\
&\le \frac{1}{c} \cdot \frac{1}{1-\gamma}&&
\end{align}
Therefore, by Cauchy-Schwarz,
\begin{align}
\sum_{m=1}^{M} \delta_{m}
&\le \sqrt{M} \cdot \sqrt{\sum_{m=1}^{M} \delta_{m}^2}&&\\
&\le \sqrt{M} \cdot \sqrt{\frac{1}{c} \cdot \frac{1}{1-\gamma}}&&\\
&=\sqrt{\frac{M}{c \cdot (1-\gamma)}}&&
\end{align}
\end{proof}
\end{lemma}
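\Cref{lemma:induction} can also be checked numerically on the tightest sequence allowed by the recursion, $\delta_{m+1} = \delta_m - c \cdot \delta_m^2$ (our illustration):

```python
# Check delta_m <= 1/(c*m) for the worst-case sequence allowed by the lemma:
# delta_1 = 1/(1-gamma) and c = (1-gamma)/2, updating with equality each step.
gamma = 0.9
c = (1.0 - gamma) / 2.0          # the largest c allowed by the lemma
delta = 1.0 / (1.0 - gamma)      # the largest admissible starting value
for m in range(1, 10001):
    assert delta <= 1.0 / (c * m) + 1e-12
    delta = delta - c * delta * delta   # tightest update allowed
print("delta_m <= 1/(c*m) verified for m up to 10000")
```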
\begin{comment}
\begin{prop} $\sum\limits_{a \in \mathcal{A}}\pi(a|s) \cdot A^{\pi}(s,a) = 0, \forall s \in \mathcal{S}$ .
\label{prop:pi_s_Adv}
\end{prop}
\end{comment}
\begin{lemma} Under the CAPO update (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, if $B_m = \left \{ (s_m, a_m) \right \} $ and $A^{m}(s_m, a_m) > 0$, then the policy weight difference $\pi_{m+1}(a|s) - \pi_{m}(a|s)$ can be written as:
\label{lemma:change_of_policy_weight_1}
\begin{align}
\pi_{m+1}(a|s) - \pi_{m}(a|s) =
\begin{cases}
\frac{(1-\pi_{m}(a_m|s_m))^2}{2-\pi_{m}(a_m|s_m)} & \text{, if } s = s_m, a = a_m \\
- \frac{1-\pi_{m}(a_m|s_m)}{2-\pi_{m}(a_m|s_m)} \cdot \pi_{m}(a|s) & \text{, if } s = s_m, a \ne a_m\\
0 & \text{, else }
\end{cases}
\end{align}
\begin{proof}[Proof of \Cref{lemma:change_of_policy_weight_1}]
\phantom{}
For $s = s_m, a = a_m$:
\begingroup
\allowdisplaybreaks
\begin{align}
\pi_{m+1}(a_m|s_m) - \pi_{m}(a_m|s_m)
&= \frac{e^{\theta_{m+1}(s_m, a_m)}}{ \sum\limits_{a \in \mathcal{A} } e^{\theta_{m+1}(s_m, a)} } - \pi_m(a_m|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a_m)+\ln(\frac{1}{\pi_m(a_m|s_m)}) \cdot \operatorname{sign}(A^{m}(s_m, a_m))}}{ e^{\theta_{m}(s_m, a_m)+\ln(\frac{1}{\pi_m(a_m|s_m)}) \cdot \operatorname{sign}(A^{m}(s_m, a_m))} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a_m|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a_m)+\ln\left(\frac{\sum_a e^{\theta_m(s_m, a)}}{e^{\theta_m(s_m, a_m)}}\right)}}{ e^{\theta_{m}(s_m, a_m)+\ln\left(\frac{\sum_a e^{\theta_m(s_m, a)}}{e^{\theta_m(s_m, a_m)}}\right)} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a_m|s_m)&&\\
&= \frac{ \frac{e^{\theta_m(s_m, a_m)}}{\pi_m(a_m|s_m)} }{ \frac{e^{\theta_m(s_m, a_m)}}{\pi_m(a_m|s_m)} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a_m|s_m)&&\\
&= \frac{ \frac{e^{\theta_m(s_m, a_m)}}{\pi_m(a_m|s_m)} }{ \frac{e^{\theta_m(s_m, a_m)}}{\pi_m(a_m|s_m)} + (\frac{1}{\pi_m(a_m|s_m)}-1) \cdot e^{\theta_m(s_m, a_m)} } - \pi_m(a_m|s_m)&&\\
&= \frac{ \frac{1}{\pi_m(a_m|s_m)} }{ \frac{2}{\pi_m(a_m|s_m)}-1} - \pi_m(a_m|s_m)&&\\
&= \frac{(1-\pi_{m}(a_m|s_m))^2}{2-\pi_{m}(a_m|s_m)}&&
\end{align}
\endgroup
For $s = s_m, a \ne a_m$:
\begin{align}
\pi_{m+1}(a|s_m) - \pi_{m}(a|s_m)
&= \frac{e^{\theta_{m+1}(s_m, a)}}{ \sum\limits_{a \in \mathcal{A} } e^{\theta_{m+1}(s_m, a)} } - \pi_m(a|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a)}}{ e^{\theta_{m}(s_m, a_m)+\ln(\frac{1}{\pi_m(a_m|s_m)}) \cdot \operatorname{sign}(A^{m}(s_m, a_m))} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a)}}{ e^{\theta_{m}(s_m, a_m)+\ln\left(\frac{\sum_a e^{\theta_m(s_m, a)}}{e^{\theta_m(s_m, a_m)}}\right)} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a|s_m)&&\\
&= \frac{e^{\theta_{m}(s_m, a)} }{ \frac{e^{\theta_m(s_m, a_m)}}{\pi_m(a_m|s_m)} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a|s_m)&&\\
&= \frac{e^{\theta_{m}(s_m, a)} }{ \frac{e^{\theta_m(s_m, a_m)}}{\pi_m(a_m|s_m)} + (\frac{1}{\pi_m(a_m|s_m)}-1) \cdot e^{\theta_m(s_m, a_m)} } - \pi_m(a|s_m)&&\\
&= \left ( \frac{e^{\theta_{m}(s_m, a)} }{ (\frac{2}{\pi_m(a_m|s_m)}-1) \cdot e^{\theta_m(s_m, a_m)} } \cdot \frac{1}{\pi_m(a|s_m)} - 1 \right ) \cdot \pi_m(a|s_m) &&\\
&= \left ( \frac{ \frac{1}{\pi_m(a_m|s_m)} }{ \frac{2}{\pi_m(a_m|s_m)}-1 } -1 \right ) \cdot \pi_m(a|s_m) &&\\
&= - \frac{1-\pi_{m}(a_m|s_m)}{2-\pi_{m}(a_m|s_m)} \cdot \pi_{m}(a|s)
\end{align}
\end{proof}
\end{lemma}
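The closed form in \Cref{lemma:change_of_policy_weight_1} is easy to verify numerically (our illustration; the softmax helper is ours):

```python
import math, random

def softmax(theta):
    m = max(theta)
    z = [math.exp(t - m) for t in theta]  # shift for numerical stability
    s = sum(z)
    return [v / s for v in z]

random.seed(0)
for _ in range(100):
    K = random.randint(2, 6)
    theta = [random.uniform(-3, 3) for _ in range(K)]
    pi = softmax(theta)
    p = pi[0]
    theta[0] += math.log(1.0 / p)        # CAPO step with sign(A) = +1
    new_pi = softmax(theta)
    # Updated action gains (1-p)^2 / (2-p); others shrink by (1-p)/(2-p).
    assert abs(new_pi[0] - pi[0] - (1 - p) ** 2 / (2 - p)) < 1e-12
    for a in range(1, K):
        assert abs(new_pi[a] - pi[a] + (1 - p) / (2 - p) * pi[a]) < 1e-12
print("closed form of Lemma (positive advantage) verified")
```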
\begin{lemma} Under the CAPO update (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) = \log (\frac{1}{\pi_{\theta_{m}}(a\rvert s)})}$, if $B_m = \left \{ (s_m, a_m) \right \} $ and $A^{m}(s_m, a_m) < 0$, then the policy weight difference $\pi_{m+1}(a|s) - \pi_{m}(a|s)$ can be written as:
\label{lemma:change_of_policy_weight_2}
\begin{align}
\pi_{m+1}(a|s) - \pi_{m}(a|s) =
\begin{cases}
\frac{-\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))^2}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1} & \text{, if } s = s_m, a = a_m \\
\frac{\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1} \cdot \pi_{m}(a|s) & \text{, if } s = s_m, a \ne a_m\\
0 & \text{, else }
\end{cases}
\end{align}
\begin{proof}[Proof of \Cref{lemma:change_of_policy_weight_2}]
\phantom{}
For $s = s_m, a = a_m$ :
\begingroup
\allowdisplaybreaks
\begin{align}
\pi_{m+1}(a_m|s_m) - \pi_{m}(a_m|s_m)
&= \frac{e^{\theta_{m+1}(s_m, a_m)}}{ \sum\limits_{a \in \mathcal{A} } e^{\theta_{m+1}(s_m, a)} } - \pi_m(a_m|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a_m)+\ln(\frac{1}{\pi_m(a_m|s_m)}) \cdot \operatorname{sign}(A^{m}(s_m, a_m))}}{ e^{\theta_{m}(s_m, a_m)+\ln(\frac{1}{\pi_m(a_m|s_m)}) \cdot \operatorname{sign}(A^{m}(s_m, a_m))} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a_m|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a_m)-\ln\left(\frac{\sum_a e^{\theta_m(s_m, a)}}{e^{\theta_m(s_m, a_m)}}\right)}}{ e^{\theta_{m}(s_m, a_m)-\ln\left(\frac{\sum_a e^{\theta_m(s_m, a)}}{e^{\theta_m(s_m, a_m)}}\right)} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a_m|s_m)&&\\
&= \frac{ e^{\theta_m(s_m, a_m)} \cdot \pi_m(a_m|s_m) }{ e^{\theta_m(s_m, a_m)} \cdot \pi_m(a_m|s_m) + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a_m|s_m)&&\\
&= \frac{ e^{\theta_m(s_m, a_m)} \cdot \pi_m(a_m|s_m) }{ e^{\theta_m(s_m, a_m)} \cdot \pi_m(a_m|s_m) + (\frac{1}{\pi_m(a_m|s_m)}-1) \cdot e^{\theta_m(s_m, a_m)} } - \pi_m(a_m|s_m)&&\\
&= \frac{ \pi_m(a_m|s_m) }{ \pi_m(a_m|s_m) - 1 + \frac{1}{\pi_m(a_m|s_m)}} - \pi_m(a_m|s_m)&&\\
&= \frac{-\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))^2}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1}&&
\end{align}
\endgroup
For $s = s_m, a \ne a_m$ :
\begin{align}
\pi_{m+1}(a|s_m) - \pi_{m}(a|s_m)
&= \frac{e^{\theta_{m+1}(s_m, a)}}{ \sum\limits_{a \in \mathcal{A} } e^{\theta_{m+1}(s_m, a)} } - \pi_m(a|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a)}}{ e^{\theta_{m}(s_m, a_m)+\ln(\frac{1}{\pi_m(a_m|s_m)}) \cdot \operatorname{sign}(A^{m}(s_m, a_m))} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a|s_m) &&\\
&= \frac{e^{\theta_{m}(s_m, a)}}{ e^{\theta_{m}(s_m, a_m)-\ln\left(\frac{\sum_a e^{\theta_m(s_m, a)}}{e^{\theta_m(s_m, a_m)}}\right)} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a|s_m)&&\\
&= \frac{e^{\theta_{m}(s_m, a)} }{ \pi_m(a_m|s_m) \cdot e^{\theta_m(s_m, a_m)} + \sum\limits_{a \ne a_m } e^{\theta_{m}(s_m, a)} } - \pi_m(a|s_m)&&\\
&= \frac{e^{\theta_{m}(s_m, a)} }{ \pi_m(a_m|s_m) \cdot e^{\theta_m(s_m, a_m)} + (\frac{1}{\pi_m(a_m|s_m)}-1) \cdot e^{\theta_m(s_m, a_m)} } - \pi_m(a|s_m)&&\\
&= \left ( \frac{e^{\theta_{m}(s_m, a)} }{ ( \pi_m(a_m|s_m) - 1 + \frac{1}{\pi_m(a_m|s_m)}) \cdot e^{\theta_m(s_m, a_m)} } \cdot \frac{1}{\pi_m(a|s_m)} - 1 \right ) \cdot \pi_m(a|s_m) &&\\
&= \left ( \frac{ \frac{1}{\pi_m(a_m|s_m)} }{ \pi_m(a_m|s_m) - 1 + \frac{1}{\pi_m(a_m|s_m)} } -1 \right ) \cdot \pi_m(a|s_m) &&\\
&= \frac{\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1} \cdot \pi_{m}(a|s)&&
\end{align}
\end{proof}
\end{lemma}
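Likewise, the closed form in \Cref{lemma:change_of_policy_weight_2} can be verified numerically (our illustration; the softmax helper is ours):

```python
import math, random

def softmax(theta):
    m = max(theta)
    z = [math.exp(t - m) for t in theta]  # shift for numerical stability
    s = sum(z)
    return [v / s for v in z]

random.seed(0)
for _ in range(100):
    K = random.randint(2, 6)
    theta = [random.uniform(-3, 3) for _ in range(K)]
    pi = softmax(theta)
    p = pi[0]
    theta[0] -= math.log(1.0 / p)        # CAPO step with sign(A) = -1
    new_pi = softmax(theta)
    denom = p * p - p + 1
    # Updated action loses p(1-p)^2 / denom; others gain p(1-p)/denom each.
    assert abs(new_pi[0] - pi[0] + p * (1 - p) ** 2 / denom) < 1e-12
    for a in range(1, K):
        assert abs(new_pi[a] - pi[a] - p * (1 - p) / denom * pi[a]) < 1e-12
print("closed form of Lemma (negative advantage) verified")
```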
\begin{lemma} Under the CAPO update (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, if $B_m = \left \{ (s_m, a_m) \right \} $ and $A^{m}(s_m, a_m) > 0$, then the policy weight difference $\pi_{m+1}(a|s) - \pi_{m}(a|s)$ can be written as:
\label{lemma:policy_weight_distribution}
\begin{align}
\pi_{m+1}(a|s) - \pi_{m}(a|s) =
\begin{cases}
W^+ & \text{, if } s = s_m, a = a_m \\
- W^+ \cdot \frac{\pi_{m}(a|s)}{1-\pi_{m}(a_m|s_m)} & \text{, if } s = s_m, a \ne a_m\\
0 & \text{, else }
\end{cases}
\\ \text{where $(1-\pi_{m}(a_m|s_m)) \ge W^+ \ge \frac{(1-\pi_{m}(a_m|s_m))^2}{2-\pi_{m}(a_m|s_m)}$}
\end{align}
\begin{proof}[Proof of \Cref{lemma:policy_weight_distribution}]
\phantom{}
By \Cref{lemma:change_of_policy_weight_1}, we have $W^+ = \frac{(1-\pi_{m}(a_m|s_m))^2}{2-\pi_{m}(a_m|s_m)}$ under ${\alpha_m(s, a) = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$.
Since the resulting policy weight $\pi_{m+1}(a_m|s_m)$ is monotonically increasing in the learning rate $\alpha_m(s,a)$, the closed form in \Cref{lemma:change_of_policy_weight_1} directly serves as the lower bound of $W^+$.
The upper bound of $W^+$ holds since $\pi_{m+1}(a_m|s_m) \le 1$, so the improvement is at most $1 - \pi_{m}(a_m|s_m)$.\\
Also, for $s = s_m, a \ne a_m$, we have:
\begin{align}
\frac{\pi_{m+1}(a|s)}{\pi_{m}(a|s)} = \frac{ \frac{e^{\theta_{m+1}(s,a)}}{Z_{m+1}(s)} }{\frac{e^{\theta_{m}(s,a)}}{Z_{m}(s)}} = \frac{ \frac{e^{\theta_{m}(s,a)}}{Z_{m+1}(s)} }{\frac{e^{\theta_{m}(s,a)}}{Z_{m}(s)}} = \frac{Z_{m}(s)}{Z_{m+1}(s)}
\end{align}
Since $\sum_{a \ne a_m} \left ( \pi_{m+1}(a|s) - \pi_{m}(a|s) \right ) = -W^+$, we have:
\begin{align}
\sum_{a \ne a_m} \left ( \pi_{m+1}(a|s) - \pi_{m}(a|s) \right ) = \sum_{a \ne a_m} \left ( \frac{Z_{m}(s)}{Z_{m+1}(s)}-1 \right ) \cdot \pi_{m}(a|s) = \left ( \frac{Z_{m}(s)}{Z_{m+1}(s)}-1 \right ) \cdot (1-\pi_{m}(a_m|s)) = -W^+
\end{align}
Hence, for $s = s_m, a \ne a_m$, we get:
\begin{align}
\pi_{m+1}(a|s) - \pi_{m}(a|s) = \frac{Z_{m}(s) \cdot \pi_m{(a|s)}}{Z_{m+1}(s)} - \pi_{m}(a|s) = \left ( \frac{Z_{m}(s)}{Z_{m+1}(s)}-1 \right ) \cdot \pi_{m}(a|s) = \frac{-W^+}{1-\pi_{m}(a_m|s)} \cdot \pi_{m}(a|s)
\end{align}
\end{proof}
\end{lemma}
\begin{lemma} Under the CAPO update (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, if $B_m = \left \{ (s_m, a_m) \right \} $ and $A^{m}(s_m, a_m) < 0$, then the policy weight difference $\pi_{m+1}(a|s) - \pi_{m}(a|s)$ can be written as:
\label{lemma:policy_weight_distribution_2}
\begin{align*}
\pi_{m+1}(a|s) - \pi_{m}(a|s) =
\begin{cases}
-W^- & \text{, if } s = s_m, a = a_m \\
W^- \cdot \frac{\pi_{m}(a|s)}{1-\pi_{m}(a_m|s_m)} & \text{, if } s = s_m, a \ne a_m\\
0 & \text{, else }
\end{cases}
\\ \text{where $\pi_m(a_m|s_m) \ge W^- \ge \frac{\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))^2}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1} $}
\end{align*}
\begin{proof}[Proof of \Cref{lemma:policy_weight_distribution_2}]
\phantom{}
By \Cref{lemma:change_of_policy_weight_2}, we have $W^- = \frac{\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))^2}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1}$ under ${\alpha_m(s, a) = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$.
Since the magnitude of the policy weight change of $a_m$ is monotonically increasing in the learning rate $\alpha_m(s,a)$, the closed form in \Cref{lemma:change_of_policy_weight_2} directly serves as the lower bound of $W^-$.
The upper bound of $W^-$ holds since $\pi_{m+1}(a_m|s_m) \ge 0$, so the decrease is at most $\pi_{m}(a_m|s_m)$.\\
Also, for $s = s_m, a \ne a_m$, we have:
\begin{align}
\frac{\pi_{m+1}(a|s)}{\pi_{m}(a|s)} = \frac{ \frac{e^{\theta_{m+1}(s,a)}}{Z_{m+1}(s)} }{\frac{e^{\theta_{m}(s,a)}}{Z_{m}(s)}} = \frac{ \frac{e^{\theta_{m}(s,a)}}{Z_{m+1}(s)} }{\frac{e^{\theta_{m}(s,a)}}{Z_{m}(s)}} = \frac{Z_{m}(s)}{Z_{m+1}(s)}
\end{align}
Moreover, since $\sum_{a \ne a_m} \left ( \pi_{m+1}(a|s) - \pi_{m}(a|s) \right ) = W^-$, we have:
\begin{align}
\sum_{a \ne a_m} \left ( \pi_{m+1}(a|s) - \pi_{m}(a|s) \right ) = \sum_{a \ne a_m} \left ( \frac{Z_{m}(s)}{Z_{m+1}(s)}-1 \right ) \cdot \pi_{m}(a|s) = \left ( \frac{Z_{m}(s)}{Z_{m+1}(s)}-1 \right ) \cdot (1-\pi_{m}(a_m|s)) = W^-
\end{align}
Hence, for $s = s_m, a \ne a_m$, we get:
\begin{align}
\pi_{m+1}(a|s) - \pi_{m}(a|s) = \frac{Z_{m}(s) \cdot \pi_m{(a|s)}}{Z_{m+1}(s)} - \pi_{m}(a|s) = \left ( \frac{Z_{m}(s)}{Z_{m+1}(s)}-1 \right ) \cdot \pi_{m}(a|s) = \frac{W^-}{1-\pi_{m}(a_m|s)} \cdot \pi_{m}(a|s)
\end{align}
\end{proof}
\end{lemma}
\begin{lemma} Under the CAPO update (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, if $B_m = \left \{ (s_m, a_m) \right \} $ then the improvement of the performance $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$ can be written as:
\label{lemma:lower_bdd}
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) =
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{W^+}{1-\pi_{m}(a_m|s_m)} \cdot A^{m}(s_m, a_m) & \text{, if } A^{m}(s_m, a_m) > 0 \\
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{W^-}{1-\pi_{m}(a_m|s_m)} \cdot (-A^{m}(s_m, a_m)) & \text{, if } A^{m}(s_m, a_m) < 0\\
\end{cases}\\
\text{where }
\begin{cases}
(1-\pi_{m}(a_m|s_m)) \ge W^+ \ge \frac{(1-\pi_{m}(a_m|s_m))^2}{2-\pi_{m}(a_m|s_m)}\\
\pi_m(a_m|s_m) \ge W^- \ge \frac{\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))^2}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1}
\end{cases}
\end{align}
and it can also be lower bounded by :
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) \ge
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(s_m)}{2} \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) > 0 \\
d^{\pi_{m+1}}_{s}(s_m) \cdot \pi_m(a_m|s_m) \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) < 0\\
\end{cases}
\end{align}
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:lower_bdd}]
\phantom{}
\begin{comment}
\begin{prop} Updating $\theta_m$ with (\ref{eq:CAPO_form}), if $B_m = \left \{ (s_m, a_m) \right \} $ then $\pi_{m+1}(s,a) - \pi_{m}(s,a) = 0, \forall s \ne s_m, a \in \mathcal{A}$.
\label{prop:pi_s_independent}
\end{prop}
\end{comment}
If $A^{m}(s_m, a_m) > 0$, then:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s)
&= \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \sum_{a \in \mathcal{A}} \pi_{m+1}(a|s') \cdot A^{m}(s', a) &&\\
&= \frac{1}{1-\gamma} \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \sum_{a \in \mathcal{A}} \left ( \pi_{m+1}(a|s') - \pi_{m}(a|s') \right ) \cdot A^{m}(s', a) &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \sum_{a \in \mathcal{A}} \left ( \pi_{m+1}(a|s_m) - \pi_{m}(a|s_m) \right ) \cdot A^{m}(s_m, a) &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \left [ W^+ \cdot A^{m}(s_m, a_m) - \sum_{a \ne a_m} \frac{W^+}{1-\pi_{m}(a_m|s_m)} \cdot \pi_{m}(a|s_m) \cdot A^{m}(s_m, a) \right ] &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \left [ W^+ \cdot A^{m}(s_m, a_m) - \frac{W^+}{1-\pi_{m}(a_m|s_m)} \cdot \sum_{a \ne a_m} \pi_{m}(a|s_m) \cdot A^{m}(s_m, a) \right ] &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \left [ W^+ \cdot A^{m}(s_m, a_m) + \frac{W^+}{1-\pi_{m}(a_m|s_m)} \cdot \pi_{m}(a_m|s_m) \cdot A^{m}(s_m, a_m) \right ] &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \frac{W^+}{1-\pi_{m}(a_m|s_m)} \cdot A^{m}(s_m, a_m) &&\\
&\ge \frac{d^{\pi_{m+1}}_{s}(s_m)}{2} \cdot A^{m}(s_m, a_m)^2 &&
\end{align}
The first equation holds by the performance difference lemma in \Cref{lemma:perf_diff}.\\
The second equation holds since $\sum_{a \in \mathcal{A}} \pi_{m}(a|s) \cdot A^{m}(s, a) = 0$ for all $s \in \mathcal{S}$.\\
The third equation holds since $\pi_{m+1}(a|s) = \pi_{m}(a|s)$, $\quad \forall s \ne s_m$.\\
The fourth equation holds by the difference of the updated policy weight that we have shown in \Cref{lemma:change_of_policy_weight_1} and \Cref{lemma:policy_weight_distribution}.\\
The last inequality holds by the lower bound of $W^+$ and the bound of $A(s,a)$ in \Cref{lemma:upper_bound_of_advantage}.\\
If $A^{m}(s, a) < 0$, then:
\begingroup
\allowdisplaybreaks
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s)
&= \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \sum_{a \in \mathcal{A}} \pi_{m+1}(a|s') \cdot A^{m}(s', a) &&\\
&= \frac{1}{1-\gamma} \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \sum_{a \in \mathcal{A}} \left ( \pi_{m+1}(a|s') - \pi_{m}(a|s') \right ) \cdot A^{m}(s', a) &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \sum_{a \in \mathcal{A}} \left ( \pi_{m+1}(a|s_m) - \pi_{m}(a|s_m) \right ) \cdot A^{m}(s_m, a) &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \left [ -W^- \cdot A^{m}(s_m, a_m) + \sum_{a \ne a_m} \frac{W^-}{1-\pi_{m}(a_m|s_m)} \cdot \pi_{m}(a|s_m) \cdot A^{m}(s_m, a) \right ] &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \left [ -W^- \cdot A^{m}(s_m, a_m) + \frac{W^-}{1-\pi_{m}(a_m|s_m)} \cdot \sum_{a \ne a_m} \pi_{m}(a|s_m) \cdot A^{m}(s_m, a) \right ] &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \left [ -W^- \cdot A^{m}(s_m, a_m) - \frac{W^-}{1-\pi_{m}(a_m|s_m)} \cdot \pi_{m}(a_m|s_m) \cdot A^{m}(s_m, a_m) \right ] &&\\
&= \frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma} \cdot \frac{W^-}{1-\pi_{m}(a_m|s_m)} \cdot (-A^{m}(s_m, a_m)) &&\\
&\ge d^{\pi_{m+1}}_{s}(s_m) \cdot \pi_{m}(a_m|s_m) \cdot A^{m}(s_m, a_m)^2 &&
\end{align}
\endgroup
The first equation holds by the performance difference lemma in \Cref{lemma:perf_diff}.\\
The second equation holds by the definition of $A(s,a)$.\\
The third equation holds since $\pi_{m+1}(a|s) = \pi_{m}(a|s)$, $\quad \forall s \ne s_m$.\\
The fourth equation holds by the change of the updated policy weight shown in \Cref{lemma:change_of_policy_weight_2} and \Cref{lemma:policy_weight_distribution_2}.\\
The last inequality holds by the bound of $A(s,a)$ in \Cref{lemma:upper_bound_of_advantage}.
\end{proof}
\subsection{Convergence Rate of Cyclic CAPO}
\label{subsection:cyclic_CAPO}
For ease of exposition, we restate \Cref{theorem:cyclic_convergence_rate} as follows.
\begin{theorem*}
Consider a tabular softmax parameterized policy $\pi_\theta$.
Under Cyclic CAPO with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$ and $|B_m| = 1$, $\bigcup_{i=1}^{|\mathcal{S}||\mathcal{A}|} B_{m \cdot |\mathcal{S}||\mathcal{A}| + i} = \mathcal{S} \times \mathcal{A}$, we have:
\begin{align}
V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{|\mathcal{S}||\mathcal{A}|}{c} \cdot \frac{1}{m} , \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} V^{*}(\rho) - V^{\pi_m}(\rho) \le |\mathcal{S}||\mathcal{A}| \cdot \min{ \left \{ \sqrt{\frac{M}{c \cdot (1-\gamma)}}, \frac{\log M +1}{c} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c = \frac{(1-\gamma)^4}{2} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot {\min} \left \{ \frac{\min_s{\mu(s)} }{2} , \frac{ (1-\gamma)}{|\mathcal{S}||\mathcal{A}|} \right \} > 0$.
\label{app:cyclic_convergence_rate}
\end{theorem*}
\begin{proof}[Proof of \Cref{theorem:cyclic_convergence_rate}]
\phantom{}
The proof can be summarized as:
\begin{enumerate}
\itemsep0em
\item We first express the performance improvement $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$ in terms of the state visitation distribution, the policy weight, and the advantage value in \Cref{lemma:lower_bdd}, and construct a lower bound on it.
%
\item We then construct the upper bound of the performance difference $ V^{*}(s) - V^{\pi_m}(s)$ using $V^{\pi_{m+ |\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s)$.
%
\item Finally, we can show the desired result inductively by \Cref{lemma:induction}.
\end{enumerate}
By \Cref{lemma:lower_bdd}, we have for all $m \ge 1$:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) =
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{W^+}{1-\pi_{m}(a_m|s_m)} \cdot A^{m}(s_m, a_m) & \text{, if } A^{m}(s_m, a_m) > 0 \\
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{W^-}{1-\pi_{m}(a_m|s_m)} \cdot (-A^{m}(s_m, a_m)) & \text{, if } A^{m}(s_m, a_m) < 0\\
\end{cases}\\
\text{where }
\begin{cases}
(1-\pi_{m}(a_m|s_m)) \ge W^+ \ge \frac{(1-\pi_{m}(a_m|s_m))^2}{2-\pi_{m}(a_m|s_m)}\\
\pi_m(a_m|s_m) \ge W^- \ge \frac{\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))^2}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1}
\end{cases}
\end{align}
and it can also be lower bounded by:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) \ge
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(s_m)}{2} \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) > 0 \\
d^{\pi_{m+1}}_{s}(s_m) \cdot \pi_m(a_m|s_m) \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) < 0
\end{cases}
\end{align}
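As an illustrative numerical sanity check (not part of the proof), the bounds on $W^+$ and $W^-$ stated above can be verified by applying a single CAPO update with the boundary step size $\alpha_m(s_m, a_m) = \log(1/\pi_m(a_m|s_m))$ to random softmax policies; the \texttt{softmax} helper and the sampling loop below are assumptions of the sketch:

```python
import math
import random

def softmax(theta):
    # theta: dict mapping action -> logit; returns the softmax policy at one state
    z = sum(math.exp(t) for t in theta.values())
    return {a: math.exp(t) / z for a, t in theta.items()}

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 6)
    theta = {a: random.uniform(-3.0, 3.0) for a in range(n)}
    pi = softmax(theta)
    a_m = random.randrange(n)
    p = pi[a_m]
    alpha = math.log(1.0 / p)  # smallest admissible step size

    # Positive-advantage update: theta(a_m) increases by alpha
    up = dict(theta); up[a_m] += alpha
    w_plus = softmax(up)[a_m] - p
    assert (1 - p) + 1e-9 >= w_plus >= (1 - p) ** 2 / (2 - p) - 1e-9

    # Negative-advantage update: theta(a_m) decreases by alpha
    down = dict(theta); down[a_m] -= alpha
    w_minus = p - softmax(down)[a_m]
    assert p + 1e-9 >= w_minus >= p * (1 - p) ** 2 / (p * p - p + 1) - 1e-9
print("W bounds verified")
```

With the boundary step size, $W^+$ attains its lower bound $(1-\pi_m(a_m|s_m))^2/(2-\pi_m(a_m|s_m))$ exactly; larger step sizes move it toward the upper bound $1-\pi_m(a_m|s_m)$.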
Now, we construct the upper bound of the performance difference $ V^{*}(s) - V^{\pi_m}(s)$ using $V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s)$. Note that by \Cref{lemma:lower_bound_of_performance_difference}, there exists $(\tilde{s_m},\tilde{a_m})$ such that $\left ( V^{*}(s) - V^{\pi_m}(s) \right )^2 \le \left ( \frac{1}{1-\gamma} \cdot A^{m}(\tilde{s_m},\tilde{a_m}) \right )^2$ for all $m \ge 1$.\\
Hence, if we upper-bound $A^{m}(\tilde{s_m},\tilde{a_m})^2$ by $V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s)$, the improvement of the performance over the whole cycle, then we obtain the upper bound of the performance difference $ V^{*}(s) - V^{\pi_m}(s)$ in terms of $V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s)$ for all $m \equiv 0 \pmod{|\mathcal{S}||\mathcal{A}|} $.\\
Without loss of generality, assume we update $(\tilde{s_m},\tilde{a_m})$ at episode $(m+T)$, where $T \in \left [ 0,|\mathcal{S}||\mathcal{A}| \right ) \cap \mathbb{N}$ and $m \equiv 0 \pmod{|\mathcal{S}||\mathcal{A}|} $. We discuss two possible cases as follows:
\begin{itemize}[leftmargin=*]
\item\label{item:case1} Case 1: $ V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \ge A^{m}(\tilde{s_m},\tilde{a_m})$:\\
\begingroup
\allowdisplaybreaks
\begin{align}
A^{m}(\tilde{s_m},\tilde{a_m})^2
&\le \left ( V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \right ) ^2 &&\\
&= \left ( \sum_{k=m}^{m+T-1} \left ( V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s) \right ) \right ) ^2 &&\\
&= \left ( \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)>0}} (V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s)) + \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k) < 0}} (V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s)) \right ) ^2 &&\\
&= \left( \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)>0}} \frac{d^{\pi_{k+1}}_{s}(s_k)}{1-\gamma } \cdot \frac{W^{k+}}{1-\pi_k(a_k|s_k)} \cdot A^{k}(s_k, a_k) \right. \\
&\left. \quad \quad + \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k) < 0}} \frac{d^{\pi_{k+1}}_{s}(s_k)}{1-\gamma } \cdot \frac{W^{k-}}{1-\pi_k(a_k|s_k)} \cdot (-A^{k}(s_k, a_k)) \right)^2 &&\\
&\le T \cdot \left( \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)>0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k)}{1-\gamma } \cdot \frac{W^{k+}}{1-\pi_k(a_k|s_k)} \cdot A^{k}(s_k, a_k) \right)^2 \right. \\
&\left. \quad \quad \enspace + \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)<0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k)}{1-\gamma } \cdot \frac{W^{k-}}{1-\pi_k(a_k|s_k)} \cdot A^{k}(s_k, a_k) \right)^2 \right) &&\\
&= T \cdot \left( \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)>0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k)}{1-\gamma } \cdot \frac{W^{k+}}{1-\pi_k(a_k|s_k)} \right)^2 \cdot A^{k}(s_k, a_k)^2 \right. \\
&\left. \quad \quad \enspace + \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)<0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k)}{1-\gamma } \cdot \frac{W^{k-}}{1-\pi_k(a_k|s_k)} \right)^2 \cdot A^{k}(s_k, a_k)^2 \right) &&\\
&= T \cdot \left( \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)>0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k+}}{1-\gamma } \right)^2 \cdot \frac{|A^{k}(s_k, a_k)|}{1-\pi_k(a_k|s_k)} \cdot \frac{|A^{k}(s_k, a_k)|}{1-\pi_k(a_k|s_k)} \right.&&\\
&\left. \quad\quad\quad + \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)<0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k-}}{1-\gamma } \right)^2 \cdot \frac{|A^{k}(s_k, a_k)|}{1-\pi_k(a_k|s_k)} \cdot \frac{|A^{k}(s_k, a_k)|}{1-\pi_k(a_k|s_k)} \right) &&\\
&\le T \cdot \left( \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)>0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k+}}{1-\gamma } \right)^2 \cdot \frac{1}{1-\gamma} \cdot \frac{1-\gamma }{d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k+}} \cdot \left( V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s) \right) \right.&&\\
&\left. \quad\quad\quad + \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)<0}} \left( \frac{d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k-}}{1-\gamma } \right)^2 \cdot \frac{1}{1-\gamma} \cdot \frac{1-\gamma }{d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k-}} \cdot \left( V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s) \right) \right) &&\\
&= \frac{T}{(1-\gamma)^2} \cdot \left( \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)>0}} d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k+} \cdot \left( V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s) \right) \right. \\
&\left. \qquad \qquad \qquad + \sum\limits_{\substack{k \in [m,m+T-1] \\ A^k(s_k,a_k)<0}} d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k-} \cdot \left( V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s) \right) \right) &&\\
&\le \frac{T}{(1-\gamma)^2} \cdot \underset{k \in [m,m+T-1]}{\max} \left \{ \mathbbm{1} \left \{ A^k(s_k,a_k)>0 \right \} \cdot d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k+} + \mathbbm{1} \left \{ A^k(s_k,a_k) < 0 \right \} \cdot d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k-} \right \} \\
& \quad \cdot \left ( \sum_{k=m}^{m+T-1} V^{\pi_{k+1}}(s) - V^{\pi_{k}}(s) \right ) &&\\
&\le c_m \cdot \frac{T}{(1-\gamma)^2} \cdot \left ( V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \right )&&\\
&\le c_m \cdot \frac{T}{(1-\gamma)^2} \cdot \left ( V^{\pi_{m+T+1}}(s) - V^{\pi_{m}}(s) \right )&&\\
&\le 2 \cdot {\max} \left \{ \frac{2}{d^{\pi_{m+T+1}}_{s}(s_{m+T})} , \frac{c_m \cdot T}{(1-\gamma)^2} \right \} \cdot \left ( V^{\pi_{m+T+1}}(s) - V^{\pi_{m}}(s) \right ) &&
\end{align}
\endgroup
where $c_m = \underset{k \in [m,m+T-1]}{\max} \left \{ c_{k1}, c_{k2} \right \} \in [0,1]$ \\
and $c_{k1} = \mathbbm{1} \left \{ A^k(s_k,a_k)>0 \right \} \cdot d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k+}$, $c_{k2} = \mathbbm{1} \left \{ A^k(s_k,a_k) < 0 \right \} \cdot d^{\pi_{k+1}}_{s}(s_k) \cdot W^{k-}$.\\
The third equation holds by \Cref{lemma:lower_bdd}.\\
The second inequality holds by the Cauchy--Schwarz inequality.\\
The third inequality holds by \Cref{lemma:upper_bound_of_advantage} and \Cref{lemma:lower_bdd}.\\
\item\label{item:case2} Case 2: $V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) < A^{m}(\tilde{s_m},\tilde{a_m})$:\\
\begin{align}
A^{m}(\tilde{s_m},\tilde{a_m})^2
&= \left ( \left ( Q^{\pi_m}(\tilde{s_m},\tilde{a_m}) - V^{\pi_{m+T}}(s) \right ) + \left ( V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \right ) \right ) ^2 &&\\
&\le \left ( \left ( Q^{\pi_{m+T}}(\tilde{s_m},\tilde{a_m}) - V^{\pi_{m+T}}(s) \right ) + \left ( V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \right ) \right ) ^2 &&\\
&= \left ( A^{m+T}(\tilde{s_m},\tilde{a_m}) + \left ( V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \right ) \right ) ^2 &&\\
&\le \left ( A^{m+T}(\tilde{s_m},\tilde{a_m} )^2 + \left ( V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \right )^2 \right ) \cdot (1^2 + 1^2) &&\\
&\le 2 \cdot \left ( \frac{2}{d^{\pi_{m+T+1}}_{s}(s_{m+T})} \cdot \left ( V^{\pi_{m+T+1}}(s) - V^{\pi_{m+T}}(s) \right ) \right. \\
&\left. \quad \quad \enspace + c_m \cdot \frac{T}{(1-\gamma)^2} \cdot \left ( V^{\pi_{m+T}}(s) - V^{\pi_{m}}(s) \right ) \right ) &&\\
&\le 2 \cdot {\max} \left \{ \frac{2}{d^{\pi_{m+T+1}}_{s}(s_{m+T})} , \frac{c_m \cdot T}{(1-\gamma)^2} \right \} \cdot \left ( V^{\pi_{m+T+1}}(s) - V^{\pi_{m}}(s) \right ) &&
\end{align}
The first inequality holds by the strict improvement of $V^{\pi}(s)$ in \Cref{app:proof_strict_improvement}, which implies the strict improvement of $Q^{\pi}(s,a)$.\\
The second inequality holds by the Cauchy--Schwarz inequality.\\
The third inequality holds by the result of Case 1 and \Cref{lemma:lower_bdd}.
\end{itemize}
Hence, in both cases we get:
\begin{align}
&V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s) \ge V^{\pi_{m+T+1}}(s) - V^{\pi_{m}}(s) \ge \frac{1}{2} \cdot \frac{1}{{\max} \left \{ \frac{2}{d^{\pi_{m+T+1}}_{s}(s_{m+T})} , \frac{c_m \cdot T}{(1-\gamma)^2} \right \}} \cdot A^{m}(\tilde{s_m},\tilde{a_m})^2
\end{align}
for all $m \equiv 0 \pmod{|\mathcal{S}||\mathcal{A}|} $.\\
Combining \Cref{lemma:lower_bound_of_performance_difference}, we can construct the upper bound of the performance difference $ V^{*}(s) - V^{\pi_m}(s)$ using $V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_m}(s)$:
\begin{align}
V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(s) - V^{\pi_{m}}(s)
&\ge \frac{(1-\gamma)^2}{2} \cdot \frac{1}{{\max} \left \{ \frac{2}{d^{\pi_{m+T+1}}_{s}(s_{m+T})} , \frac{c_m \cdot T}{(1-\gamma)^2} \right \}} \cdot \left ( V^{*}(s) - V^{\pi_m}(s) \right )^2&&\\
&=\frac{(1-\gamma)^2}{2} \cdot {\min} \left \{ \frac{d^{\pi_{m+T+1}}_{s}(s_{m+T})}{2} , \frac{(1-\gamma)^2}{c_m \cdot T} \right \} \cdot \left ( V^{*}(s) - V^{\pi_m}(s) \right )^2 &&
\end{align}
and if we consider the whole initial state distribution $\mu$, we have:
\begin{align}
V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(\mu) - V^{\pi_{m}}(\mu)
&\ge \frac{(1-\gamma)^2}{2} \cdot \frac{1}{{\max} \left \{ \frac{2}{d^{\pi_{m+T+1}}_{\mu}(s_{m+T})} , \frac{c_m \cdot T}{(1-\gamma)^2} \right \}} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )^2&&\\
&=\frac{(1-\gamma)^2}{2} \cdot {\min} \left \{ \frac{d^{\pi_{m+T+1}}_{\mu}(s_{m+T})}{2} , \frac{(1-\gamma)^2}{c_m \cdot T} \right \} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )^2 &&\\
&\ge\frac{(1-\gamma)^2}{2} \cdot {\min} \left \{ \frac{(1-\gamma) \cdot \min_s{\mu(s)} }{2} , \frac{ (1-\gamma)^2}{|\mathcal{S}||\mathcal{A}|} \right \} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )^2&&\\
&\ge \underbrace{ \frac{(1-\gamma)^3}{2} \cdot {\min} \left \{ \frac{\min_s{\mu(s)} }{2} , \frac{ (1-\gamma)}{|\mathcal{S}||\mathcal{A}|} \right \} }_{:= c' > 0} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )^2 &&
\end{align}
The second inequality holds since $d^{\pi}_{\mu}(s) \ge (1- \gamma) \cdot \mu(s)$ by \Cref{lemma:lower_bound_of_state_visitation_distribution}.\\
Since $V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(\mu) - V^{\pi_{m}}(\mu) = (V^{\pi^{*}}(\mu) - V^{\pi_{m}}(\mu)) - (V^{\pi^{*}}(\mu) - V^{\pi_{m+|\mathcal{S}||\mathcal{A}|}}(\mu))$, by rearranging the inequality above, we have:
\begin{align}
&\delta_{m+|\mathcal{S}||\mathcal{A}|} \le \delta_{m} - c' \cdot \delta_{m}^2 \quad \text{where $\delta_{m} = V^{\pi^*}(\mu) - V^{\pi_m}(\mu) $ for all $m \equiv 0 \pmod{|\mathcal{S}||\mathcal{A}|} $}
\end{align}
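The recursion above is what yields the $O(1/m)$ rate. As an illustrative sketch (with a placeholder constant standing in for the $c'$ derived above, and using the single-step analogue of the cycle-indexed recursion), iterating the worst case of $\delta_{m+1} \le \delta_m - c' \cdot \delta_m^2$ numerically confirms the inductive bound $\delta_m \le \frac{1}{c' \cdot m}$:

```python
c = 0.1      # placeholder contraction constant standing in for c'
delta = 1.0  # delta_1; any starting value with c * delta_1 <= 1 works
for m in range(1, 10001):
    # inductive claim: delta_m <= 1 / (c * m)
    assert delta <= 1.0 / (c * m) + 1e-12
    delta = delta - c * delta * delta  # worst case: the recursion holds with equality
print("O(1/m) bound holds along the worst-case recursion")
```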
Then, we can get the following result by induction based on \Cref{lemma:induction}:
\begin{align}
V^{*}(\mu) - V^{\pi_m}(\mu) \le \frac{1}{c'} \cdot \frac{1}{\max \left \{ \left \lfloor \frac{m}{|\mathcal{S}||\mathcal{A}|} \right \rfloor, 1 \right \} } \le \frac{1}{c'} \cdot \min \left \{ \frac{|\mathcal{S}||\mathcal{A}|}{m} , 1 \right \} \le \frac{|\mathcal{S}||\mathcal{A}|}{c'} \cdot \frac{1}{m} , \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} V^{*}(\mu) - V^{\pi_m}(\mu) \le |\mathcal{S}||\mathcal{A}| \cdot \min{ \left \{ \sqrt{\frac{M}{c' \cdot (1-\gamma)}}, \frac{\log M +1}{c'} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c' = \frac{(1-\gamma)^3}{2} \cdot {\min} \left \{ \frac{\min_s{\mu(s)} }{2} , \frac{ (1-\gamma)}{|\mathcal{S}||\mathcal{A}|} \right \} > 0$.
Finally, we get the desired result by \Cref{lemma:performance_difference_in_rho}:
\begin{align}
V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{1}{1-\gamma} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right ) \le \frac{|\mathcal{S}||\mathcal{A}|}{c} \cdot \frac{1}{m} , \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} V^{*}(\rho) - V^{\pi_m}(\rho) \le |\mathcal{S}||\mathcal{A}| \cdot \min{ \left \{ \sqrt{\frac{M}{c \cdot (1-\gamma)}}, \frac{\log M +1}{c} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c = \frac{(1-\gamma)^4}{2} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot {\min} \left \{ \frac{\min_s{\mu(s)} }{2} , \frac{ (1-\gamma)}{|\mathcal{S}||\mathcal{A}|} \right \} > 0$.
\end{proof}
\subsection{Convergence Rate of Batch CAPO}
For ease of exposition, we restate \Cref{theorem:batch_convergence_rate} as follows.
\begin{theorem*}
Consider a tabular softmax parameterized policy $\pi_\theta$. Under Batch CAPO with ${\alpha_m(s, a) = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$ and $B_m = \left \{ (s,a) : (s,a) \in \mathcal{S} \times \mathcal{A} \right \}$, we have:
\begin{align}
V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{1}{c} \cdot \frac{1}{m}, \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} V^{*}(\rho) - V^{\pi_m}(\rho) \le \min{ \left \{ \sqrt{\frac{M}{c \cdot (1-\gamma)}}, \frac{\log M +1}{c} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c = \frac{(1-\gamma )^4}{|\mathcal{A}|} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot \min_{s} \left \{ \mu(s) \right \} > 0$.
\label{app:batch_convergence_rate}
\end{theorem*}
\begin{proof}[Proof of \Cref{theorem:batch_convergence_rate}]
\phantom{}
The proof can be summarized as follows:
\begin{enumerate}
\itemsep0em
\item We first construct the lower bound of the performance improvement $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$ in terms of the state visitation distribution, the number of actions, and the advantage value in \Cref{lemma:lower_bdd_2}.
%
\item We then construct the upper bound of the performance difference $ V^{*}(s) - V^{\pi_m}(s)$ using $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$.
%
\item Finally, we can show the desired result inductively by \Cref{lemma:induction}.
\end{enumerate}
\begin{lemma} Under (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, if $B_m = \left \{ (s,a) : (s,a) \in \mathcal{S} \times \mathcal{A} \right \}$, then the updated policy weight $\pi_{m+1}(a|s)$ can be written as:
\label{lemma:updated_policy_weight}
\begin{align*}
\pi_{m+1}(a|s) =
\begin{cases}
\frac{1}{|s_{m}^+| + \underset{A^{m}(s,a) = 0}{\sum} \pi_{m}(a|s) + \underset{A^{m}(s,a) < 0}{\sum} \pi_{m}(a|s)^2} & \text{, if } A^{m}(s,a) > 0 \\
\frac{\pi_m(a|s)}{|s_{m}^+| + \underset{A^{m}(s,a) = 0}{\sum} \pi_{m}(a|s) + \underset{A^{m}(s,a) < 0}{\sum} \pi_{m}(a|s)^2} & \text{, if } A^{m}(s,a) = 0 \\
\frac{\pi_m(a|s)^2}{|s_{m}^+| + \underset{A^{m}(s,a) = 0}{\sum} \pi_{m}(a|s) + \underset{A^{m}(s,a) < 0}{\sum} \pi_{m}(a|s)^2} & \text{, if } A^{m}(s,a) < 0
\end{cases}
\end{align*}
where $s_{m}^{+} := \left \{ a \in \mathcal{A} \: | \: A^{m}(s,a) > 0 \right \}$
\begin{proof}[Proof of \Cref{lemma:updated_policy_weight}]
\phantom{}
For $A^{m}(s,a) > 0$:
\begin{align}
&\pi_{m+1}(a|s) = \frac{e^{\theta_m(s,a)+\ln \frac{1}{\pi_m(a|s)}}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)} } = \frac{e^{\theta_m(s,a)+\ln \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{e^{\theta_m(s,a)}}}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)} } = \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}} &&
\end{align}
For $A^{m}(s,a) = 0$:
\begin{align}
&\pi_{m+1}(a|s) = \frac{e^{\theta_m(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)} } = \frac{e^{ \theta_m(s,a)} \cdot \underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)} }{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)} } = \pi_m(a|s) \cdot \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}}&&
\end{align}
For $A^{m}(s,a) < 0$:
\begin{align}
&\pi_{m+1}(a|s) = \frac{e^{\theta_m(s,a)-\ln \frac{1}{\pi_m(a|s)}}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)} } = \frac{e^{2 \cdot \theta_m(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)} } = \pi_m(a|s)^2 \cdot \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}}&&
\end{align}
Moreover, since $\underset{a \in \mathcal{A}}{\sum} \pi_{m+1}(a|s) = 1$, we have:
\begin{align}
&\underset{A^{m}(s,a) > 0}{\sum} \pi_{m+1}(a|s) + \underset{A^{m}(s,a) = 0}{\sum} \pi_{m+1}(a|s) + \underset{A^{m}(s,a) < 0}{\sum} \pi_{m+1}(a|s) &&\\
&= |s_{m}^+| \cdot \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}} + \underset{A^{m}(s,a) = 0}{\sum} \pi_{m}(a|s) \cdot \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}} + \underset{A^{m}(s,a) < 0}{\sum} \pi_{m}(a|s)^2 \cdot \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}} &&\\
&= \left ( |s_{m}^+| + \underset{A^{m}(s,a) = 0}{\sum} \pi_{m}(a|s) + \underset{A^{m}(s,a) < 0}{\sum} \pi_{m}(a|s)^2 \right ) \cdot \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}} = 1,&&
\end{align}
where $s_{m}^{+} := \left \{ a \in \mathcal{A} \: | \: A^{m}(s,a) > 0 \right \}$\\
Hence, we get:
\begin{align}
& \frac{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m}(s,a)}}{\underset{a \in \mathcal{A}}{\sum} e^{\theta_{m+1}(s,a)}} = \frac{1}{|s_{m}^+| + \underset{A^{m}(s,a) = 0}{\sum} \pi_{m}(a|s) + \underset{A^{m}(s,a) < 0}{\sum} \pi_{m}(a|s)^2}.&&
\end{align}
Finally, we get the desired result by substitution.
\end{proof}
\end{lemma}
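As a quick numerical check of \Cref{lemma:updated_policy_weight} (illustrative only; the helper \texttt{batch\_capo\_update} is an assumption of the sketch, and only the signs of the advantages matter), one can apply the Batch CAPO update with $\alpha_m(s,a) = \log(1/\pi_m(a|s))$ at a single state and compare against the closed form:

```python
import math

def batch_capo_update(theta, adv):
    # One Batch CAPO step with alpha = log(1/pi_m(a|s)) at a single state.
    z = sum(math.exp(t) for t in theta.values())
    pi = {a: math.exp(t) / z for a, t in theta.items()}
    new_theta = {}
    for a, t in theta.items():
        if adv[a] > 0:
            new_theta[a] = t + math.log(1.0 / pi[a])
        elif adv[a] < 0:
            new_theta[a] = t - math.log(1.0 / pi[a])
        else:
            new_theta[a] = t
    z2 = sum(math.exp(t) for t in new_theta.values())
    return {a: math.exp(t) / z2 for a, t in new_theta.items()}, pi

theta = {0: 0.2, 1: -1.0, 2: 0.5, 3: 0.0}
adv = {0: 1.0, 1: 0.0, 2: -0.5, 3: 2.0}  # only the signs matter
new_pi, pi = batch_capo_update(theta, adv)

# Closed-form denominator: |s+| + sum_{A=0} pi(a|s) + sum_{A<0} pi(a|s)^2
denom = (sum(1 for a in adv if adv[a] > 0)
         + sum(pi[a] for a in adv if adv[a] == 0)
         + sum(pi[a] ** 2 for a in adv if adv[a] < 0))
for a in adv:
    num = 1.0 if adv[a] > 0 else (pi[a] if adv[a] == 0 else pi[a] ** 2)
    assert abs(new_pi[a] - num / denom) < 1e-12
print("closed form verified")
```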
\begin{lemma} Under (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, if $B_m = \left \{ (s,a) : (s,a) \in \mathcal{S} \times \mathcal{A} \right \}$, then the performance improvement $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$ can be bounded by:
\label{lemma:lower_bdd_2}
\begin{align}
&V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) \ge \frac{1}{|\mathcal{A}|} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \sum_{a \in s_m^{'+}} A^{m}(s', a)^2 &&
\end{align}
where $s_{m}^{'+} := \left \{ a \in \mathcal{A} \: | \: A^{m}(s',a) > 0 \right \}$
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:lower_bdd_2}]
\phantom{}
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s)
&= \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \sum_{a \in \mathcal{A}} \pi_{m+1}(a|s') \cdot A^{m}(s', a) &&\\
&= \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \cdot \frac{1}{|s_{m}^{'+}| + \underset{A^{m}(s',a) = 0}{\sum} \pi_{m}(a|s') + \underset{A^{m}(s',a) < 0}{\sum} \pi_{m}(a|s')^2}&&\\
& \quad \cdot \left ( \sum_{a \in s_{m}^{'+}} A^{m}(s', a) + \sum_{a \notin s_{m}^{'+}} \pi_{m}(a|s')^2 \cdot A^{m}(s', a) \right ) &&\\
&\ge \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \cdot \frac{1}{|s_{m}^{'+}| + \underset{A^{m}(s',a) = 0}{\sum} \pi_{m}(a|s') + \underset{A^{m}(s',a) < 0}{\sum} \pi_{m}(a|s')^2} &&\\
& \quad \cdot \left ( \sum_{a \in s_{m}^{'+}} A^{m}(s', a) + \sum_{a \notin s_{m}^{'+}} \pi_{m}(a|s') \cdot A^{m}(s', a) \right ) &&\\
&= \frac{1}{1-\gamma} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \cdot \frac{1}{|s_{m}^{'+}| + \underset{A^{m}(s',a) = 0}{\sum} \pi_{m}(a|s') + \underset{A^{m}(s',a) < 0}{\sum} \pi_{m}(a|s')^2} &&\\
& \quad \cdot \left ( \sum_{a \in s_{m}^{'+}} (1-\pi_{m}(a|s')) \cdot A^{m}(s', a) \right ) &&\\
&\ge \frac{1}{1-\gamma} \cdot \frac{1}{|\mathcal{A}|} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \cdot \left ( \sum_{a \in s_{m}^{'+}} (1-\pi_{m}(a|s')) \cdot A^{m}(s', a) \right ) &&\\
&\ge \frac{1}{|\mathcal{A}|} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \cdot \sum_{a \in s_{m}^{'+}} A^{m}(s', a)^2
\end{align}
The first equation holds by the performance difference lemma in \Cref{lemma:perf_diff}.\\
The second equation holds by \Cref{lemma:change_of_policy_weight_2}.\\
The third equation holds by the definition of $A(s,a)$.\\
The last inequality holds by the bound of $A(s,a)$ in \Cref{lemma:upper_bound_of_advantage}.\\
\end{proof}
Hence, combining \Cref{lemma:lower_bdd_2} and \Cref{lemma:lower_bound_of_performance_difference}, we can construct the upper bound of the performance difference $ V^{*}(s) - V^{\pi_m}(s)$ using $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s)
&\ge \frac{1}{|\mathcal{A}|} \cdot \sum_{s' \in \mathcal{S}} d^{\pi_{m+1}}_{s}(s') \cdot \sum_{a \in s_{m}^{'+}} A^{m}(s', a)^2 &&\\
&\ge \frac{1}{|\mathcal{A}|} \cdot d^{\pi_{m+1}}_{s}(\tilde{s_m}) \cdot A^{m}(\tilde{s_m}, \tilde{a_m})^2 &&\\
&= \frac{1}{|\mathcal{A}|} \cdot d^{\pi_{m+1}}_{s}(\tilde{s_m}) \cdot (1-\gamma)^2 \cdot (\frac{1}{1-\gamma})^2 \cdot A^{m}(\tilde{s_m}, \tilde{a_m})^2&&\\
&\ge \frac{1}{|\mathcal{A}|} \cdot d^{\pi_{m+1}}_{s}(\tilde{s_m}) \cdot (1-\gamma)^2 \cdot \left ( V^{*}(s) - V^{\pi_m}(s) \right )^2&&
\end{align}
Moreover, if we consider the whole initial state distribution $\mu$, we have:
\begin{align}
V^{\pi_{m+1}}(\mu) - V^{\pi_{m}}(\mu)
&\ge \frac{1}{|\mathcal{A}|} \cdot d^{\pi_{m+1}}_{\mu}(\tilde{s_m}) \cdot (1-\gamma)^2 \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )^2&&\\
&\ge \frac{1}{|\mathcal{A}|} \cdot \mu(\tilde{s_m}) \cdot (1-\gamma)^3 \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )^2&&\\
& \ge \underbrace{\frac{(1-\gamma)^3}{|\mathcal{A}|} \cdot \underset{s' \in \mathcal{S}}{\min}\left \{ \mu(s') \right \}}_{:= c' > 0} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right )^2
\end{align}
The second inequality holds since $d^{\pi}_{\mu}(s) \ge (1- \gamma) \cdot \mu(s)$ in \Cref{lemma:lower_bound_of_state_visitation_distribution}.\\
Since $V^{\pi_{m+1}}(\mu) - V^{\pi_{m}}(\mu) = (V^{\pi^{*}}(\mu) - V^{\pi_{m}}(\mu)) - (V^{\pi^{*}}(\mu) - V^{\pi_{m+1}}(\mu))$, by rearranging the inequality above, we have:
\begin{align}
&\delta_{m+1} \le \delta_{m} - c' \cdot \delta_{m}^2 \quad \text{where $\delta_{m} = V^{\pi^*}(\mu) - V^{\pi_m}(\mu) $}
\end{align}
Then, we can get the following result by induction based on \Cref{lemma:induction}:
\begin{align}
V^{*}(\mu) - V^{\pi_m}(\mu) \le \frac{1}{c'} \cdot \frac{1}{m}, \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} V^{*}(\mu) - V^{\pi_m}(\mu) \le \min{ \left \{ \sqrt{\frac{M}{c' \cdot (1-\gamma)}}, \frac{\log M +1}{c'} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c' = \frac{(1-\gamma )^3}{|\mathcal{A}|} \cdot \min_{s} \left \{ \mu(s) \right \} > 0$.
Finally, we get the desired result by \Cref{lemma:performance_difference_in_rho}:
\begin{align}
V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{1}{1-\gamma} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \left ( V^{*}(\mu) - V^{\pi_m}(\mu) \right ) \le \frac{1}{c} \cdot \frac{1}{m} , \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} V^{*}(\rho) - V^{\pi_m}(\rho) \le \min{ \left \{ \sqrt{\frac{M}{c \cdot (1-\gamma)}}, \frac{\log M +1}{c} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c = \frac{(1-\gamma )^4}{|\mathcal{A}|} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot \min_{s} \left \{ \mu(s) \right \} > 0$.\\
\begin{remark}
\normalfont In \Cref{theorem:batch_convergence_rate}, we choose the learning rate ${\alpha_m(s, a)}$ to be exactly $\log (\frac{1}{\pi_{\theta_m}(a\rvert s)})$ rather than merely greater than or equal to $\log (\frac{1}{\pi_{\theta_m}(a\rvert s)})$. The reason is that under ${\alpha_m(s, a)} = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})$, we can guarantee that, after every update, all actions with positive advantage value in the same state receive the same policy weight as one another (\Cref{lemma:updated_policy_weight}). This property directly leads to the result of \Cref{lemma:lower_bdd_2} that the one-step improvement $V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s)$ can be quantified by the sum of all squared positive advantage values $\sum_{a \in s_{m}^{'+}} A^{m}(s', a)^2$, and hence it guarantees that one of the terms $A^{m}(s', a)^2$ connects the one-step improvement with the performance difference $V^{\pi^{*}}(s) - V^{\pi_{m}}(s)$. This property also prevents extreme cases in which the learning rate of a state-action pair with an extremely tiny but positive advantage value dominates the updated policy weight, i.e., $\pi_{m+1}(a_m|s_m) \rightarrow 1$, leading to a tiny one-step improvement.
\end{remark}
\end{proof}
\subsection{Convergence Rate of Randomized CAPO}
For ease of exposition, we restate \Cref{theorem:randomized_convergence_rate} as follows.
\begin{theorem*}
Consider a tabular softmax parameterized policy $\pi_\theta$. Under (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$ and $|B_m| = 1$, if Condition \ref{condition:sa} is satisfied, then we have:
\begin{align}
\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\rho) - V^{\pi_m}(\rho) \right] \le \frac{1}{c} \cdot \frac{1}{m}, \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\rho) - V^{\pi_m}(\rho) \right] \le \min{ \left \{ \sqrt{\frac{M}{c \cdot (1-\gamma)}}, \frac{\log M +1}{c} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c = \frac{(1-\gamma )^4}{2} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot \min_{(s,a)} \left \{ d_{gen}(s,a)\cdot\mu(s) \right \} > 0$ and $d_{gen}:\mathcal{S} \times \mathcal{A} \rightarrow (0,1)$, $d_{gen}(s,a) = \mathbb{P}((s,a) \in B_m)$.
\label{app:randomized_convergence_rate}
\end{theorem*}
\begin{proof}[Proof of \Cref{theorem:randomized_convergence_rate}]
\phantom{}
The proof can be summarized as:
\begin{enumerate}
\itemsep0em
\item We first express the performance improvement $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$ in terms of the state visitation distribution, the policy weight, and the advantage value in \Cref{lemma:lower_bdd}, and construct a lower bound on it. Note that the result is the same as in \Cref{subsection:cyclic_CAPO}.
%
\item We then write the performance improvement $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$ in probabilistic form, conditioned on $(s_m,a_m)$.
%
\item By taking the expectation of the probabilistic form, we get the upper bound of the expected performance difference $\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\mu) - V^{\pi_m}(\mu) \right]$ using $\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{\pi_{m+1}}(\mu) - V^{\pi_m}(\mu) \right]$.
%
\item Finally, we can show the desired result by induction based on \Cref{lemma:induction}.
\end{enumerate}
By \Cref{lemma:lower_bdd}, we have for all $m \ge 1$:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) =
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{W^+}{1-\pi_{m}(a_m|s_m)} \cdot A^{m}(s_m, a_m) & \text{, if } A^{m}(s_m, a_m) > 0 \\
\frac{d^{\pi_{m+1}}_{s}(s_m)}{1-\gamma } \cdot \frac{W^-}{1-\pi_{m}(a_m|s_m)} \cdot (-A^{m}(s_m, a_m)) & \text{, if } A^{m}(s_m, a_m) < 0\\
\end{cases}\\
\text{where }
\begin{cases}
(1-\pi_{m}(a_m|s_m)) \ge W^+ \ge \frac{(1-\pi_{m}(a_m|s_m))^2}{2-\pi_{m}(a_m|s_m)}\\
\pi_m(a_m|s_m) \ge W^- \ge \frac{\pi_{m}(a_m|s_m) \cdot (1-\pi_{m}(a_m|s_m))^2}{\pi_{m}(a_m|s_m)^2 - \pi_{m}(a_m|s_m) + 1}
\end{cases}
\end{align}
and it can also be lower bounded by:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) \ge
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(s_m)}{2} \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) > 0 \\
d^{\pi_{m+1}}_{s}(s_m) \cdot \pi_m(a_m|s_m) \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) < 0
\end{cases}
\end{align}
Hence, considering the randomness of the generator, which chooses $(s,a)$ with probability $d_{gen}(s,a)$ to update in each episode $m$, we can rewrite \Cref{lemma:lower_bdd} in probabilistic form:
\begin{align}
V^{\pi_{m+1}}(s) - V^{\pi_{m}}(s) \ge
\begin{cases}
\frac{d^{\pi_{m+1}}_{s}(\tilde{s_m})}{2} \cdot A^{m}(\tilde{s_m},\tilde{a_m})^2 & \text{, if } (s_m, a_m) = (\tilde{s_m},\tilde{a_m}) \text{, w.p. } d_{gen}(\tilde{s_m},\tilde{a_m}) \\
\frac{d^{\pi_{m+1}}_{s}(s_m)}{2} \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) > 0 \text{, w.p. } d_{gen}(s_m,a_m)\\
d^{\pi_{m+1}}_{s}(s_m) \cdot \pi_m(a_m|s_m) \cdot A^{m}(s_m, a_m)^2 & \text{, if } A^{m}(s_m, a_m) < 0 \text{, w.p. } d_{gen}(s_m,a_m)
\end{cases}
\end{align}
Then, by taking expectation, we have :
\begin{align}
\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{\pi_{m+1}}(s) - V^{\pi_m}(s) \right]
&= \sum_{(s',a') \in \mathcal{S} \times \mathcal{A} } d_{gen}(s',a') \cdot \left [ V^{\pi_{m+1}}(s) - V^{\pi_m}(s) \:|\: (s_m,a_m)=(s',a') \right ] &&\\
&\ge d_{gen}(\tilde{s_m},\tilde{a_m}) \cdot \left [ V^{\pi_{m+1}}(s) - V^{\pi_m}(s) \:|\: (s_m,a_m)=(\tilde{s_m},\tilde{a_m}) \right ]&&\\
&\ge d_{gen}(\tilde{s_m},\tilde{a_m}) \cdot \frac{d^{\pi_{m+1}}_{s}(\tilde{s_m})}{2} \cdot A^{m}(\tilde{s_m},\tilde{a_m})^2 &&\\
&\ge d_{gen}(\tilde{s_m},\tilde{a_m}) \cdot \frac{d^{\pi_{m+1}}_{s}(\tilde{s_m})}{2} \cdot (1-\gamma)^2 \cdot \left ( V^{*}(s) - V^{\pi_m}(s) \right )^2&&\\
&= d_{gen}(\tilde{s_m},\tilde{a_m}) \cdot \frac{d^{\pi_{m+1}}_{s}(\tilde{s_m})}{2} \cdot (1-\gamma)^2 \cdot \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(s) - V^{\pi_m}(s) \right]^2 &&
\end{align}
The third inequality holds by \Cref{lemma:lower_bound_of_performance_difference}.\\
The last equality holds since the performance difference at episode $m$ is independent of $(s_m,a_m)$, the state-action pair chosen at episode $m$.\\
If we consider the whole starting state distribution $\mu$, we have:
\begin{align}
\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{\pi_{m+1}}(\mu) - V^{\pi_m}(\mu) \right]
&\ge d_{gen}(\tilde{s_m},\tilde{a_m}) \cdot \frac{d^{\pi_{m+1}}_{\mu}(\tilde{s_m})}{2} \cdot (1-\gamma)^2 \cdot \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\mu) - V^{\pi_m}(\mu) \right]^2 &&\\
&\ge d_{gen}(\tilde{s_m},\tilde{a_m}) \cdot \frac{\mu(\tilde{s_m})}{2} \cdot (1-\gamma)^3 \cdot \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\mu) - V^{\pi_m}(\mu) \right]^2 &&
&\ge \underbrace{\underset{(s',a') \in \mathcal{S} \times \mathcal{A}}{\min} \left \{ d_{gen}(s',a') \cdot \mu(s') \right \} \cdot \frac{(1-\gamma)^3}{2}}_{:= c' > 0} \cdot \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\mu) - V^{\pi_m}(\mu) \right]^2 &&
\end{align}
The second inequality holds since $d^{\pi}_{\mu}(s) \ge (1- \gamma) \cdot \mu(s)$ by \Cref{lemma:lower_bound_of_state_visitation_distribution}.\\
Since $\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{\pi_{m+1}}(\mu) - V^{\pi_m}(\mu) \right] = \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{\pi^*}(\mu) - V^{\pi_m}(\mu) \right] - \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{\pi^*}(\mu) - V^{\pi_{m+1}}(\mu) \right]$, by rearranging the inequality above, we have:
\begin{align}
&\delta_{m+1} \le \delta_{m} - c' \cdot \delta_{m}^2 \quad \text{where $\delta_{m} = \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{\pi^*}(\mu) - V^{\pi_m}(\mu) \right]$}
\end{align}
Then, we can get the following result by \Cref{lemma:induction} :
\begin{align}
\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\mu) - V^{\pi_m}(\mu) \right] \le \frac{1}{c'} \cdot \frac{1}{m}, \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\mu) - V^{\pi_m}(\mu) \right] \le \min{ \left \{ \sqrt{\frac{M}{c' \cdot (1-\gamma)}}, \frac{\log M +1}{c'} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c' = \frac{(1-\gamma )^3}{2} \cdot \underset{(s,a)}{\min} \left \{ d_{gen}(s,a)\cdot\mu(s) \right \} > 0$.
Finally, we get the desired result by \Cref{lemma:performance_difference_in_rho}:
\begin{align}
\underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\rho) - V^{\pi_m}(\rho) \right] \le \frac{1}{1-\gamma} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\mu) - V^{\pi_m}(\mu) \right] \le \frac{1}{c} \cdot \frac{1}{m} , \quad \text{for all $m \ge 1$}
\end{align}
\begin{align}
\sum_{m=1}^{M} \underset{({s_m},{a_m}) \sim d_{gen}}{\mathbb{E}} \left [ V^{*}(\rho) - V^{\pi_m}(\rho) \right] \le \min{ \left \{ \sqrt{\frac{M}{c \cdot (1-\gamma)}}, \frac{\log M +1}{c} \right \} }, \quad \text{for all $M \ge 1$}
\end{align}
where $c = \frac{(1-\gamma )^4}{2} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot \underset{(s,a)}{\min} \left \{ d_{gen}(s,a)\cdot\mu(s) \right \} > 0$.
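The induction step behind this rate can also be illustrated numerically. The sketch below is purely illustrative and not part of the proof: the constant $c'$ and the initial gap $\delta_1$ are arbitrary choices, and we iterate the recursion $\delta_{m+1} \le \delta_m - c' \delta_m^2$ with equality, i.e., the slowest admissible decay:

```python
# Illustrative numerical check (not part of the proof): the recursion
#   delta_{m+1} <= delta_m - c' * delta_m^2
# yields delta_m <= 1 / (c' * m), as in the induction lemma.
# c_prime and delta_1 below are arbitrary choices for illustration.

def worst_case_deltas(delta1, c_prime, num_steps):
    """Iterate delta_{m+1} = delta_m - c' * delta_m^2 (slowest decay)."""
    deltas = [delta1]
    for _ in range(num_steps - 1):
        d = deltas[-1]
        deltas.append(d - c_prime * d * d)
    return deltas

c_prime = 0.1
deltas = worst_case_deltas(1.0 / (2.0 * c_prime), c_prime, num_steps=1000)

# The induction bound delta_m <= 1 / (c' * m) holds along the trajectory.
assert all(d <= 1.0 / (c_prime * m) + 1e-12
           for m, d in enumerate(deltas, start=1))
```

The trajectory decays like $\Theta(1/m)$, matching the $\frac{1}{c'}\cdot\frac{1}{m}$ rate above.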
\end{proof}
\section{Detailed Configuration of Experiments}
\label{app:exp}
\subsection{Implementation Detail}
\label{exp:imple}
Algorithm \ref{algo:NCAPO_imple} shows the pseudocode of NCAPO.
In order to demonstrate the off-policy capability of NCAPO, we use a simple $\epsilon$-greedy scheme with initial exploration rate $\epsilon_{start}$ and a decaying exploration rate for off-policy exploration, and estimate $A(s,a)$ with Retrace \citep{munos2016safe}.
NCAPO uses four simple 2-layer feed-forward neural networks: a behavior network ($\theta^b$), a target network ($\theta$), a critic network ($\theta^Q$), and a target critic network ($\theta^{Q^{\prime}}$).
In each episode, $N_{\text{rollouts}}$ rollouts are collected,
and each rollout $r = [(s_t, a_t), ..., (s_{t+l}, a_{t+l})]$ has length $l$. Note that instead of storing a single $(s,a)$-pair in the replay buffer $R$, we store the entire rollout of length $l$ in $R$ to better compute $Q_{\text{retrace}}$.
Due to the limited representation capability of floating-point numbers, the term $\log \frac{1}{\pi}$ can grow unbounded as $\pi \rightarrow 0$ during the CAPO update. To address this, we clip the term so that $\alpha(s,a) = \min(\log \frac{1}{\pi(a \mid s)}, \text{clip})$.
As target networks have been shown to stabilize training, we use them and update them via Polyak averaging with coefficients $\tau_\theta$ and $\tau_{Q}$.
The experiments are conducted on a computational node equipped with a Xeon Platinum 8160M CPU with a total of 40 cores.
Off-PAC shares a similar code base with NCAPO; the major difference is the use of a fixed behavior policy, which we choose to be a uniform policy.
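For concreteness, the clipped CAPO step size described above can be sketched as follows. This is a minimal tabular sketch, not the actual NCAPO code (which uses neural networks and Retrace); the function name and the softmax parameterization are illustrative:

```python
import numpy as np

def capo_update(theta, s, a, advantage, clip_value=50.0):
    """One CAPO coordinate update on a tabular softmax policy.

    theta: (num_states, num_actions) array of logits, updated in place.
    The step size alpha = min(log(1 / pi(a|s)), clip_value) is clipped so
    that it stays finite as pi(a|s) -> 0, as described in the text.
    """
    logits = theta[s] - theta[s].max()           # numerically stable softmax
    pi = np.exp(logits) / np.exp(logits).sum()
    alpha = min(np.log(1.0 / pi[a]), clip_value)
    theta[s, a] += np.sign(advantage) * alpha    # move logit along sign(A)
    return theta

theta = np.zeros((2, 3))
theta = capo_update(theta, s=0, a=1, advantage=+1.0)
assert theta[0, 1] > 0.0                         # positive advantage raises the logit
```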
\subsection{Hyperparameters}
For PPO and A2C, we use the hyperparameters for Atari games from Stable-Baselines3 \citep{stable-baselines3}, and for Rainbow we use the exact same hyperparameters as \citep{obando2020revisiting}.
The hyperparameters are listed in Table \ref{table:hyper}.
\begin{table}[!ht]
\caption{Hyperparameters for CAPO, PPO, A2C, and OffPAC}
\label{table:hyper}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
Hyperparameters & CAPO & PPO & A2C & OffPAC\\
\midrule
Batch size & 32 & 16 & - & 32\\
Learning rate & 5e-4 & 2.5e-4 & 7e-4 & 5e-4 \\
Exploration fraction & 10\% & 0 & 0 & 10\%\\
Initial exploration rate* & 0.3 & 0 & 0 & 0\\
Final exploration rate & 0.05 & 0 & 0 & 0\\
Critic loss coefficient* & 1 & 0.38 & 0.25 & 1\\
Max gradient norm & 0.8 & 0.5 & 0.5 & 0.8\\
Gradient steps & 30 & 1 & 1 & 30\\
Train frequency & (64, steps) & (256, steps) & - & (64, steps)\\
$\tau_Q$ & 0.05 & - & - & 0.05\\
$\tau_\theta$ & 1 & - & - & 1\\
Discount factor $\gamma$ & 0.99 & 0.98 & 0.99 & 0.99\\
Replay buffer size & 6400 & - & - & 6400\\
Clip value & 50 & - & - & -\\
Entropy coefficient & 0 & - & 4.04e-6 & 0 \\
\bottomrule
\end{tabular}
\end{sc}
\par * For \textit{Asterix}, the critic loss coefficient is $0.25$ and the initial exploration rate is $0.8$.
\end{small}
\end{center}
\vskip -0.1in
\end{table}
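The exploration schedule implied by the table (exploration fraction, initial rate, and final rate) can be written out explicitly. The linear annealing form below is an assumption consistent with common $\epsilon$-greedy implementations, not necessarily the exact schedule in our code:

```python
def epsilon(step, total_steps, fraction=0.10, eps_start=0.3, eps_final=0.05):
    """Linearly anneal epsilon over the first `fraction` of training,
    then hold it at eps_final (values taken from the table above)."""
    progress = min(step / (fraction * total_steps), 1.0)
    return eps_start + progress * (eps_final - eps_start)

assert abs(epsilon(0, 100_000) - 0.3) < 1e-12        # starts at the initial rate
assert abs(epsilon(50_000, 100_000) - 0.05) < 1e-12  # held at the final rate
```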
\section{Exploration Capability Provided by a Coordinate Generator in CAPO}
\label{app:generator}
In this section, we demonstrate empirically the exploration capability provided by the coordinate generator in CAPO.
\subsection{Configuration}
The Chain environment is visualized in \Cref{fig:chain_env}.
This environment is meant to evaluate the agent's ability to resist the temptation of immediate reward and look for the better long-term return.
We compare the performance of Batch CAPO, Cyclic CAPO, and Off-policy Actor Critic on Chain with $n=10$, and the results can be found in \Cref{fig:generator}.
To eliminate the factor of critic estimation, the true value function is used during training. All agents are trained for 1000 iterations with learning rate $0.001$. The policies are represented by a neural network with a single hidden layer of size 256.
Both Cyclic CAPO and Off-policy Actor Critic are trained with a batch size of 16 and a replay buffer size of 100. As Batch CAPO takes all state-action pairs into account by design, its effective batch size is $|\mathcal{S}| \times |\mathcal{A}|$. Unlike the CAPO methods, Off-policy Actor Critic presumes the use of a fixed behavior policy. As a result, similar to the experimental setup of various prior works (e.g., \citep{liu2020off}), we use a uniform behavior policy for Off-policy Actor Critic.
The use of a fixed behavior policy makes it difficult to identify an optimal policy, which highlights the benefit of a coordinate generator in terms of exploration.
\subsection{Discussion}
From \Cref{fig:generator}, we can see that it is difficult for Off-policy Actor Critic to escape from a sub-optimal policy, even though the true value function is provided. Since both Cyclic CAPO and Batch CAPO satisfy \Cref{condition:sa}, such coordinate selection rules provide sufficient exploration for CAPO to identify the optimal policy. This feature can be particularly useful when the reward is sparse and the trajectory is long.
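For completeness, the Chain environment can be reproduced in a few lines. The sketch below is illustrative; the class interface is our own and not the actual experiment code:

```python
class Chain:
    """Chain MDP with states 1..n: action 0 terminates with reward 0.1,
    action 1 moves one state to the right with reward 0, except that the
    transition from S_{n-1} to S_n yields a reward of 100."""

    def __init__(self, n=10):
        self.n = n
        self.state = 1

    def reset(self):
        self.state = 1
        return self.state

    def step(self, action):
        if action == 0:                  # take the immediate reward and stop
            return self.state, 0.1, True
        if self.state == self.n - 1:     # delayed reward on the final move
            self.state = self.n
            return self.state, 100.0, True
        self.state += 1                  # keep moving right, no reward
        return self.state, 0.0, False

env = Chain(n=10)
env.reset()
total, done = 0.0, False
while not done:                          # always move right
    _, r, done = env.step(1)
    total += r
assert total == 100.0                    # the delayed reward dominates 0.1
```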
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.16]
\tikzstyle{every node}+=[inner sep=0pt]
\draw [black] (4.9,-35.5) circle (3);
\draw (4.9,-35.5) node {$S_1$};
\draw [black] (16.4,-35.5) circle (3);
\draw (16.4,-35.5) node {$S_2$};
\draw [black] (20.4,-21.1) circle (3);
\draw (20.4,-21.1) node {$S_0$};
\draw [black] (20.4,-21.1)circle (2.4);
\draw [black] (52,-35.5) circle (3);
\draw (52,-35.5) node {$S_n$};
\draw [black] (52,-35.5) circle (2.4);
\draw [black] (40.2,-35.5) circle (3);
\draw (40.2,-35.5) node {$S_{n-1}$};
\draw [black] (27.6,-35.5) circle (3);
\draw (27.6,-35.5) node {$S_3$};
\draw [black] (7.1,-33.46) -- (18.2,-23.14);
\fill [black] (18.2,-23.14) -- (17.28,-23.32) -- (17.96,-24.05);
\draw (10.88,-27.81) node [above] {$0.1$};
\draw [black] (17.2,-32.61) -- (19.6,-23.99);
\fill [black] (19.6,-23.99) -- (18.9,-24.63) -- (19.86,-24.9);
\draw (19.17,-28.84) node [right] {$0.1$};
\draw [black] (7.9,-35.5) -- (13.4,-35.5);
\fill [black] (13.4,-35.5) -- (12.6,-35) -- (12.6,-36);
\draw (10.65,-36) node [below] {$0$};
\draw [black] (43.2,-35.5) -- (49,-35.5);
\fill [black] (49,-35.5) -- (48.2,-35) -- (48.2,-36);
\draw (46.1,-36) node [below] {$100$};
\draw [black] (37.77,-33.74) -- (22.83,-22.86);
\fill [black] (22.83,-22.86) -- (23.18,-23.74) -- (23.77,-22.93);
\draw (32.05,-27.8) node [above] {$0.1$};
\draw [black] (19.4,-35.5) -- (24.6,-35.5);
\fill [black] (24.6,-35.5) -- (23.8,-35) -- (23.8,-36);
\draw (22,-36) node [below] {$0$};
\draw [black] (26.26,-32.82) -- (21.74,-23.78);
\fill [black] (21.74,-23.78) -- (21.65,-24.72) -- (22.55,-24.28);
\draw (24.7,-27.19) node [right] {$0.1$};
\draw [dash pattern=on 3*\pgflinewidth off 8pt] (30.6,-35.5) -- (37.2,-35.5);
\end{tikzpicture}
\caption{The Chain environment has a total of $n+1$ states, and the agent always starts at state $S_1$. At every state, the agent has two actions to choose from: either receive a reward of $0.1$ and terminate immediately, or move one state to the right. While moving right yields no reward in most states, the transition from $S_{n-1}$ to $S_{n}$ yields a large reward of $100$. A well-performing policy should prefer the delayed reward of $100$ over the immediate reward of $0.1$.}
\label{fig:chain_env}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{./figures/Chain.pdf}
\caption{Comparison between Cyclic CAPO, Batch CAPO, and Off-policy Actor Critic, where the result is averaged over 30 runs. We can see that even though the true value function is given and the optimal reward is much larger than the immediate reward ($100$ vs.\ $0.1$), Off-policy Actor Critic still gets stuck at a sub-optimal policy.}
\label{fig:generator}
\end{figure}
\section{Proofs of the Theoretical Results in Section \ref{subsection:analysis}}
\subsection{Proof of Lemma \ref{lemma:strict_improvement}}
\begin{lemma}[Performance Difference Lemma in \citep{Kakade2002approx}]
\label{lemma:perf_diff}
For each state $s_0$, the difference in the value of $s_0$ between two policies $\pi$ and $\pi^{\prime}$ can be characterized as:
\begin{equation}
V^{\pi}\left(s_{0}\right)-V^{\pi^{\prime}}\left(s_{0}\right)=\frac{1}{1-\gamma} \mathbb{E}_{s \sim d_{s_{0}}^{\pi}} \mathbb{E}_{a \sim \pi(\cdot \rvert s)}\left[A^{\pi^{\prime}}(s, a)\right]
\end{equation}
\end{lemma}
\label{app:method}
Now we are ready to prove Lemma \ref{lemma:strict_improvement}. For ease of exposition, we restate Lemma \ref{lemma:strict_improvement} as follows.
\begin{lemma*}
Under the CAPO update given by (\ref{eq:CAPO_form}), we have $V^{\pi_{m+1}}(s) \geq V^{\pi_{m}}(s)$ for all $s \in \mathcal{S}$ and all $m\in \mathbb{N}$.
\label{app:proof_strict_improvement}
\end{lemma*}
\begin{proof}[Proof of \Cref{lemma:strict_improvement}]
Note that by the definition of $A(s, a)$, we have
\begin{equation}
\label{eq:sumAdvZero}
\sum_{a \in \mathcal{A}}\pi_m(a|s)A^{m}(s,a) = 0, \quad\forall s \in \mathcal{S}
\end{equation}
To simplify notation, let $Z_{m}(s) := \sum_{a \in \mathcal{A}}\exp({\theta_{m}(s,a)})$. Then, $\pi_{m} (a|s)$ and $\pi_{m+1} (a|s)$ can be simplified as:
\begin{equation}
\pi_{m} (a|s) = \frac{\exp({\theta_m(s,a)})}{Z_{m}(s)},\quad
\pi_{m+1} (a|s) = \frac{\exp({\theta_{m+1}(s,a)})}{Z_{m+1}(s)}.\label{eq:sum pi and A}
\end{equation}
By Lemma \ref{lemma:perf_diff}, in order to show that
$V^{\pi_{m+1}}(s) \geq V^{\pi_{m}}(s)$ for all $s \in \mathcal{S}$,
it is sufficient to show that
\begin{equation}
\label{eq:posA}
\sum_{a\in \cA}\pi_{m+1}(a|s)A^{m}(s,a) > 0, \quad\forall s \in \mathcal{S}.
\end{equation}
For ease of notation, we define ${B}_m(s) := \{a\rvert (s,a)\in {B}_m \}$.
To establish (\ref{eq:posA}), we have that for all $s \in \mathcal{S}$,
\begin{align}
\sum_{a \in \mathcal{A}}\pi_{m+1}(a|s)A^{m}(s,a)
&= \sum_{a \in \mathcal{A}}\frac{\exp({\theta_{m+1}(s,a)})}{Z_{m+1}(s)}A^m(s,a) &&\\
&= \frac{Z_{m}(s)}{Z_{m+1}(s)} \sum_{a \in \mathcal{A} }\frac{\exp({\theta_{m+1}(s,a)})}{Z_m(s)}A^m(s,a) &&\\
&= \frac{Z_{m}(s)}{Z_{m+1}(s)} \left [ \sum_{a \in {B}_m(s)}
\frac{\exp({\theta_{m+1}(s,a)})}{Z_m(s)}A^m(s, a)+\sum_{a \not\in {B}_m(s)}\frac{\exp({\theta_{m}(s,a)})}{Z_m(s)}A^m(s, a) \right ] &&\\
&>\frac{Z_{m}(s)}{Z_{m+1}(s)} \left[ \sum_{a \in {B}_m(s)}\frac{\exp({\theta_{m}(s,a)})}{Z_m(s)}A^m(s, a)+ \sum_{a \not\in {B}_m(s)}\frac{\exp({\theta_{m}(s,a)})}{Z_m(s)} A^m(s, a)
\right] \label{eq:lemma2 ineq}&&\\
&=\frac{Z_{m}(s)}{Z_{m+1}(s)} \sum_{a \in \mathcal{A}}\pi_m(a|s)A^{m}(s,a)&& \\
&= 0, &&
\end{align}
where (\ref{eq:lemma2 ineq}) holds by the CAPO update given by (\ref{eq:CAPO_form}).
\end{proof}
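The key inequality in this proof can also be sanity-checked numerically. The sketch below is illustrative and not part of the formal argument: it draws a random tabular softmax policy and a random advantage vector satisfying (\ref{eq:sumAdvZero}), applies one CAPO coordinate update with $\alpha = \log\frac{1}{\pi(a|s)}$, and checks that the updated policy puts positive weight on the advantage:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=5)
pi = np.exp(theta) / np.exp(theta).sum()

# Advantage vector satisfying the identity sum_a pi(a) A(a) = 0.
A = rng.normal(size=5)
A = A - (pi * A).sum()

# One CAPO coordinate update with alpha = log(1 / pi(a)) on a
# coordinate with nonzero advantage.
a = int(np.argmax(np.abs(A)))
theta_new = theta.copy()
theta_new[a] += np.sign(A[a]) * np.log(1.0 / pi[a])
pi_new = np.exp(theta_new) / np.exp(theta_new).sum()

# The improvement direction is strictly positive, as in the proof:
#   sum_a pi_{m+1}(a) A^m(a) > 0.
assert (pi_new * A).sum() > 0.0
```

The assertion holds for any coordinate with $A(a) \neq 0$, mirroring the strict inequality (\ref{eq:lemma2 ineq}).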
\subsection{Proof of Theorem \ref{theorem:convergeoptimal}}
Since $\{V^m\}$ is bounded above and enjoys strict improvement by \Cref{lemma:strict_improvement}, the limit of $\{V^m\}$ is guaranteed to exist by the monotone convergence theorem.
Similarly, the limit of $\{Q^m\}$ also exists.
We use $V^{(\infty)}(s)$ and $Q^{(\infty)}(s,a)$ to denote the limits of $\{V^{(m)}(s)\}$ and $\{Q^{(m)}(s,a)\}$, respectively.
We also define $A^{(\infty)}(s, a):=Q^{(\infty)}(s, a)-V^{(\infty)}(s)$.
Our concern is whether the corresponding policy $\pi_{\infty}$ is optimal.
Inspired by \citep{agarwal2020theory}, we first define the following three sets as
\begin{align}
I_{0}^{s} &:=\left\{a \rvert Q^{(\infty)}(s, a)=V^{(\infty)}(s)\right\}, \\
I_{+}^{s} &:=\left\{a \rvert Q^{(\infty)}(s, a)>V^{(\infty)}(s)\right\}, \\
I_{-}^{s} &:=\left\{a \rvert Q^{(\infty)}(s, a)<V^{(\infty)}(s)\right\}.
\end{align}
By definition, $V^{(\infty)}$ is optimal if and only if $I_{+}^{s}$ is empty for all $s$.
We prove by contradiction that $V^{(\infty)}$ is optimal by showing that $I_{+}^{s} = \emptyset$ for all $s \in \mathcal{S}$.
\noindent \textbf{Main steps of the proof}. The proof procedure can be summarized as follows:
\begin{itemize}[leftmargin=*]
\item Step 1: We first assume $V^{(\infty)}$ is not optimal so that by definition $\exists s \in \mathcal{S}, I_{+}^{s} \neq \emptyset$.
%
\item Step 2: We then show in \Cref{lemma:sumIzero}, $\forall s \in \mathcal{S}$, actions $a_{-} \in I_{-}^{s}$ have zero weights in policy (i.e.\ $\pi_{\infty}(a_{-} | s) = 0$, $\forall a_{-} \in I_{-}^{s}$).
%
\item Step 3: Since the actions in $I_{-}^{s}$ have zero probability, by (\ref{eq:sumAdvZero}), this directly implies \Cref{lemma:sumI+zero}: $\forall I_{+}^{s} \neq \emptyset$, $a \in I_{+}^{s}$ must also have zero probability (i.e.\ $\pi_{\infty}(a_{+} | s) = 0$, $\forall a_{+} \in I_{+}^{s}$).
%
\item Step 4: Moreover, under CAPO, in the sequel we can show \Cref{lemma:contradiction_of_a+}, which states that as long as \Cref{condition:sa} is satisfied, there must exist one action $a_+ \in I_{+}^{s}$ such that $\lim_{m \rightarrow \infty} \pi_m(a_+|s) = 1$. This contradicts the assumption that $\exists s \in \mathcal{S}, I_{+}^{s} \neq \emptyset$, proving that $I_{+}^{s} = \emptyset, \forall s \in \mathcal{S}$.
\end{itemize}
\begin{lemma} Under CAPO, there exists $M_1$ such that for all $m > M_1$, $s \in \mathcal{S}$, $a \in \mathcal{A}$, we have:
\label{lemma:A_strict}
\begin{align}
A^{(m)}(s, a)<-\frac{\Delta}{4}, &\quad\text{ for } a \in I_{-}^{s},\\
A^{(m)}(s, a)>\frac{\Delta}{4}, &\quad\text{ for } a \in I_{+}^{s},
\end{align}
where $\Delta:=\min_{\left\{s, a \rvert A^{(\infty)}(s, a) \neq 0\right\}}\left|A^{(\infty)}(s, a)\right|$.
\end{lemma}
\begin{proof}[Proof of \Cref{lemma:A_strict}]
Given the strict policy improvement property of CAPO in Lemma \ref{lemma:strict_improvement}, this can be shown by applying Lemma C.4 in \citep{agarwal2020theory}.
\end{proof}
\begin{lemma}
\label{lemma:sumIzero}
Under CAPO, $\pi_{\infty}(a_{-}|s) = 0, \forall s \in \mathcal{S}, a_{-} \in I^{s}_{-}$.
\begin{proof}[Proof of \Cref{lemma:sumIzero}]
Lemma \ref{lemma:A_strict} shows that for all $m > M_1$, the sign of $A^{(m)}(s, a)$ is fixed.
Moreover, we know that under CAPO update, $\theta_m(s, a_{-})$ is non-increasing, $\forall a_{-} \in I^{s}_{-}, \forall m> M_1$.
Similarly, $\forall a_{+} \in I^{s}_{+}, m > M_1$, $\theta_m(s, a_{+})$ is non-decreasing.
By \Cref{condition:sa}, all the state-action pairs with negative advantage are guaranteed to be sampled for infinitely many times as $m \rightarrow \infty$.
Under the CAPO update in (\ref{eq:CAPO_form}), we have
\begin{equation}
\theta_{m+1}(s, a_{-}) - \theta_{m}(s, a_{-}) \le -\log \left( \frac{1}{\pi_m(a_{-} \rvert s)} \right) < 0.
\end{equation}
Given the infinite visitation, we know that $\lim_{m \rightarrow \infty}\theta_m(s, a_{-}) = -\infty$, and hence $\pi_{\infty}(a_{-} \rvert s) = 0$.
\end{proof}
\end{lemma}
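The divergence argument above can be observed numerically. The following is an illustrative tabular sketch (the three-action softmax and the number of iterations are arbitrary): repeatedly applying the negative-advantage update with $\alpha_m = \log\frac{1}{\pi_m(a_-|s)}$ drives $\pi_m(a_-|s)$ to zero within a handful of steps:

```python
import numpy as np

# Repeatedly apply the update theta <- theta - log(1 / pi) to a single
# negative-advantage action (index 0) of a 3-action softmax policy.
theta = np.zeros(3)
for _ in range(10):
    pi = np.exp(theta) / np.exp(theta).sum()
    theta[0] -= np.log(1.0 / pi[0])   # alpha_m = log(1 / pi_m(a_-|s)) > 0

pi = np.exp(theta) / np.exp(theta).sum()
assert pi[0] < 1e-6                    # pi_m(a_-|s) -> 0
assert abs(pi[1] - 0.5) < 1e-9         # remaining mass splits over actions 1, 2
```

Note that $\theta_m(s, a_-)$ decreases at an accelerating rate, since $\log\frac{1}{\pi_m(a_-|s)}$ grows as $\pi_m(a_-|s)$ shrinks; this is exactly why the implementation in Appendix \ref{exp:imple} clips the step size.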
We now show in \Cref{lemma:sumI+zero} that \Cref{lemma:sumIzero} implies $\sum_{a_{+} \in I_{+}^{s}} \pi_{\infty}(a_{+} \rvert s) = 0$.
\begin{lemma}
\label{lemma:sumI+zero}
If $I_{+}^{s} \neq \emptyset$ is true, then \Cref{lemma:sumIzero} implies $\sum_{a_{+} \in I_{+}^{s}} \pi_{\infty}(a_{+}|s) = 0$.
\begin{proof}[Proof of \Cref{lemma:sumI+zero}]
Recall from (\ref{eq:sumAdvZero}) that
$\sum_{a \in \mathcal{A}} \pi_{m}(a \rvert s) A^{m}(s, a)=0, \forall s \in \mathcal{S}, m>0$.
By definition, $\sum_{a_{0} \in I_{0}^{s}} \pi_{\infty}(a_{0}|s)A^{\infty}(s, a_0) = 0$, which directly implies that
\begin{align}
\sum_{a_{+} \in I_{+}^{s}} \pi_{\infty}(a_{+}|s)A^{\infty}(s, a_{+})
&= \sum_{a \in \mathcal{A}} \pi_{\infty}(a \rvert s) A^{\infty}(s, a) - \sum_{a_{0} \in I_{0}^{s}} \pi_{\infty}(a_{0} \rvert s)A^{\infty}(s, a_0) -\sum_{a_{-} \in I_{-}^{s}} \pi_{\infty}(a_{-} \rvert s)A^{\infty}(s, a_{-}) &&\\
&= 0 - 0 - 0 = 0, &&
\end{align}
where the second equality holds by \Cref{lemma:sumIzero}.
Since $A^{\infty}(s, a_{+}) > 0$ and $\pi_{\infty}(a_{+} \rvert s) \ge 0$, we have $\sum_{a_{+} \in I_{+}^{s}} \pi_{\infty}(a_{+} \rvert s) = 0$.
This completes the proof of Lemma \ref{lemma:sumI+zero}.
\end{proof}
\end{lemma}
By \Cref{lemma:sumI+zero}, if $I_{+}^{s} \neq \emptyset$ is true, then $\pi_{m}(a_{+} \rvert s) \rightarrow 0$ as $m \rightarrow \infty$ for every $a_{+} \in I_{+}^{s}$.
To establish a contradiction, we proceed to show in the following \Cref{lemma:contradiction_of_a+} that there must exist one action $a \in I_{+}^{s}$ such that $\lim_{m \rightarrow \infty} \pi_m(a|s) = 1$, which contradicts \Cref{lemma:sumI+zero} and hence implies the desired result that $I_{+}^{s} = \emptyset$.
If $I_{+}^{s} \neq \emptyset$ is true, then there exists some $K \in \mathbb{N}$ such that for all $m > K$ and all $s \in \mathcal{S}$, we have:
\begin{align}
Q^{m}(s,a^{+}) > Q^{m}(s,a^{0}) > Q^{m}(s,a^{-}) , \quad \text{for all $a^{+} \in I^{s}_{+}$, $a^{0} \in I^{s}_{0}$, $a^{-} \in I^{s}_{-}$}.
\end{align}
Without loss of generality, assume that the ordering of $Q^{m}$, $\forall m > K$, can be written as
\begin{align}
Q^{m}(s,\tilde {a}^{+}) > Q^{m}(s,a_1) > Q^{m}(s,a_2) > \dots > Q^{m}(s,a_{|\mathcal{A}|-1}), \quad \text{provided that $I_{+}^{s} \neq \emptyset$},
\end{align}
where $\tilde {a}^{+} := \argmax_{a^{+} \in I^{s}_{+}} Q^{(\infty)}(s, a^{+})$. Note that we simplify the case above by considering ``strictly greater than'' instead of ``greater than or equal to'', but the simplification can be relaxed with a little extra work.
\begin{claim}
\label{lemma:contradiction_of_a+}
If $I_{+}^{s} \neq \emptyset$ is true, then there must exist one action $a_+ \in I_{+}^{s}$ such that $\lim_{m \rightarrow \infty} \pi_m(a_+|s) = 1$ under (\ref{eq:CAPO_form}) with $\alpha_m(s, a) \ge \log \frac{1}{\pi_m(a|s)}$.
\end{claim}
To establish \Cref{lemma:contradiction_of_a+}, we show that if $I_{+}^{s} \neq \emptyset$, then $\lim_{m \rightarrow \infty} \pi_m(a|s) = 0$ for all $a \ne \tilde {a}^{+}$ by induction.
For ease of exposition, we first present the following propositions.
\begin{prop} For any $m \ge 1, s \in \mathcal{S}, a \in \mathcal{A}$, if $A^{m}(s,a) \le 0$ and $\exists \enspace a' \ne a$, $a' \in {B}_{m}(s)$, satisfying $A^{m}(s,a') > 0$, then $\pi_{m+1}(a|s) \le \frac{1}{2}$, regardless of whether $a \in {B}_{m}(s)$ or not.
\label{prop:upperbdd_of_pi}
\end{prop}
\begin{proof}[Proof of \Cref{prop:upperbdd_of_pi}]
\phantom{}
Since $A^{m}(s,a) \le 0$, we have $\sign(A^m(s,a)) \cdot \alpha_{m}(s,a) \le 0$. As a result, we have:
\begin{align}
\begin{cases}
\pi_{m+1}(a|s) = \frac{ \exp({\theta_{m}(s, a) + \sign(A^m(s,a)) \cdot \alpha_{m}(s,a)}) }{ Z_{m+1}(s) } \le \frac{ \exp({\theta_{m}(s, a)})}{ Z_{m+1}(s) } \le \frac{Z_{m}(s)}{Z_{m+1}(s)} \\
\pi_{m+1}(a'|s) = \frac{ \exp({\theta_{m}(s, a') + \alpha_{m}(s,a')}) }{ Z_{m+1}(s) } \ge \frac{ \exp({\theta_{m}(s, a') + \log(\frac{1}{ \pi_m(a'|s)})})}{ Z_{m+1}(s) } = \frac{Z_{m}(s)}{Z_{m+1}(s)}
\end{cases}
\end{align}
Hence, we have $\pi_{m+1}(a'|s) \ge \pi_{m+1}(a|s)$. Since $\pi_{m+1}(a'|s) + \pi_{m+1}(a|s) \le 1$, we get $\pi_{m+1}(a|s) \le \frac{1}{2}$.
\end{proof}
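The two displayed bounds can also be checked numerically. The following sketch (ours, with arbitrary illustrative logits, not values from the paper) verifies that under $\alpha_m(s,a') = \log\frac{1}{\pi_m(a'|s)}$ the boosted action receives exactly $Z_m(s)/Z_{m+1}(s)$ of the new probability mass, so any non-boosted or decreased action ends up with probability at most $\frac{1}{2}$.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())  # numerically stabilized softmax
    return z / z.sum()

# Arbitrary logits for three actions at a fixed state s.
theta = np.array([0.3, -0.2, 0.1])
pi = softmax(theta)
Z_old = np.exp(theta).sum()

# Action 0 is in the batch with positive advantage: boosted by log(1/pi).
# Actions 1 and 2 (non-positive advantage) are left untouched here.
theta_new = theta.copy()
theta_new[0] += np.log(1.0 / pi[0])

pi_new = softmax(theta_new)
Z_new = np.exp(theta_new).sum()

# exp(theta + log(1/pi)) = Z_old, so the boosted action gets Z_old/Z_new.
print(np.isclose(pi_new[0], Z_old / Z_new))   # True
print(pi_new[1] <= 0.5, pi_new[2] <= 0.5)     # True True
```

Since the boosted action dominates every other action and the two probabilities sum to at most one, the $\frac{1}{2}$ bound follows exactly as in the proof.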
\begin{prop}
\label{prop:adv_all_neg}
For any $s \in \mathcal{S}, a \in \mathcal{A}\setminus \left \{ \tilde {a}^{+} \right \} $, if $\exists \enspace T \in \mathbb{N}$ such that $\forall m > T$, $A^m(s,a) \le 0$, then $\exists \enspace n \in \mathbb{N}$, $\bar{K} \in \mathbb{N}$ such that $A^{m+n+1}(s,a) < 0 \text{, } \forall \enspace m > \bar{K} $.
\end{prop}
\begin{proof}[Proof of \Cref{prop:adv_all_neg}]
\phantom{}
By Condition \ref{condition:sa} and $I_{+}^{s} \neq \emptyset$, there exists some finite $n \in \mathbb{N}$ such that $\exists \enspace a' \ne a$, $a' \in {B}_{m+n}(s)$, satisfying $A^{m+n}(s,a') > 0$.
Then, by \Cref{prop:upperbdd_of_pi}, we have
\begin{align}
\pi_{m+n+1}(a|s) \le \frac{1}{2}, \quad \text{$\forall m > T$}.
\end{align}
Hence, we have
\begin{align}
V^{m+n+1}(s) = \sum_{a \in \mathcal{A}} \pi_{m+n+1}(a|s) \cdot Q^{m+n+1}(s,a) \ge \frac{1}{2} \cdot \left ( Q^{m+n+1}(s,a) + Q^{m+n+1}(s,a') \right ) \\
\text{where $a' = \underset{\substack{a'' \in \mathcal{A} \\ Q^{\infty}(s,a'')>Q^{\infty}(s,a)}}{\mathrm{argmin}} Q^{\infty}(s,a'')$}
\end{align}
Moreover, by the ordering of $Q^{m}$ and that $\lim_{m \rightarrow \infty}Q^m(s, a) = Q^{\infty}(s, a)$, for $\epsilon = \frac{1}{4} \cdot \left ( Q^{\infty}(s, a')-Q^{\infty}(s, a) \right ) > 0 $, $\exists \enspace \bar{T}$ such that for all $m > \bar{T}$:
\begin{align}
\begin{cases}
Q^{m}(s, a) \in \left ( Q^{\infty}(s, a) - \epsilon, Q^{\infty}(s, a) + \epsilon \right )\\
Q^{m}(s, a') \in \left ( Q^{\infty}(s, a') - \epsilon, Q^{\infty}(s, a') + \epsilon \right )
\end{cases}
\end{align}
Finally, we have that for all $m > \max \left \{ T,\bar{T} \right \}$:
\begin{align}
V^{m+n+1}(s) &\ge \frac{1}{2} \cdot \left ( Q^{m+n+1}(s,a) + Q^{m+n+1}(s,a') \right ) \\
& > \frac{1}{2} \cdot \left ( (Q^{\infty}(s, a) - \epsilon) + (Q^{\infty}(s, a') - \epsilon) \right ) \\
& = \frac{1}{2} \cdot \left ( Q^{\infty}(s,a) + Q^{\infty}(s,a') \right ) - \epsilon \\
& = Q^{\infty}(s,a) + \epsilon > Q^{m+n+1}(s,a).
The above is equivalent to $A^{m+n+1}(s,a) < 0 \enspace \forall \enspace m > \bar{K}$, where $\bar{K} = \max \left \{ T,\bar{T} \right \}$.
\end{proof}
\begin{prop}
\label{prop:ratio}
If $V^m(s) \in \left ( Q^m(s,a_{|\mathcal{A}|-k}), Q^m(s,a_{|\mathcal{A}|-(k+2)}) \right )$, then $\exists \enspace T' \in \mathbb{N}$ such that for all $m > T'$:
\begin{align}
\frac{\sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s)}{\sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)<Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s)} \ge \frac{Q^m(s,a_{|\mathcal{A}|-(k+1)}) - Q^m(s,a_{|\mathcal{A}|-1})}{Q^m(s,a_{|\mathcal{A}|-(k+2)}) - Q^m(s,a_{|\mathcal{A}|-(k+1)})}.
\end{align}
\end{prop}
\begin{proof}[Proof of \Cref{prop:ratio}]
\phantom{}
Since $V^m(s) \in \left ( Q^m(s,a_{|\mathcal{A}|-k}), Q^m(s,a_{|\mathcal{A}|-(k+2)}) \right ) $, we have $A^m(s,a_{|\mathcal{A}|-j}) < 0, \enspace \forall \enspace j = 1, 2, \dots , k$.
By Condition \ref{condition:sa}, there exists some finite $n \in \mathbb{N}$ such that $\bar{a}^{+} \in \mathcal{B}_{m+n}(s)$ for some $\bar{a}^{+} \in \left \{ \tilde{a}^{+}, a_1, a_2, \dots, a_{|\mathcal{A}|-(k+2)} \right \} $. \\
Hence, we have that for all $\bar{a}^{-} \in \left \{ a_{|\mathcal{A}|-k}, a_{|\mathcal{A}|-(k-1)}, \dots, a_{|\mathcal{A}|-1} \right \} $,
\begin{align}
\frac{\pi_{m+n+1}(\bar{a}^{+}|s)}{\pi_{m+n+1}(\bar{a}^{-}|s)} \ge \frac{ \frac{ e^{\theta_{m+n}(s, \bar{a}^+ ) + \log{\frac{1}{\pi_{m+n}(\bar{a}^+|s)} } }}{ Z_{m+n+1}(s) }}{ \frac{ e^{\theta_{m+n}(s, \bar{a}^- ) }}{ Z_{m+n+1}(s) }} = \frac{ Z_{m+n}(s) }{ e^{\theta_{m+n}(s, \bar{a}^-)} } = \frac{1}{\pi_{m+n}(\bar{a}^{-}|s)}.
\end{align}
Since $\lim_{m \rightarrow \infty}\pi_m(\bar{a}^{-}|s) = 0$, we have that for any $z > 0$, $\exists \enspace T \in \mathbb{N}$ such that $\frac{\pi_{m+n+1}(\bar{a}^{+}|s)}{\pi_{m+n+1}(\bar{a}^{-}|s)} \ge z, \enspace \forall m > T$.
For $m > K$, we have $Q^m(s,a_{|\mathcal{A}|-(k+2)}) - Q^m(s,a_{|\mathcal{A}|-(k+1)}) > 0$.
Hence, by simply choosing $z = |\mathcal{A}| \cdot \frac{Q^m(s,a_{|\mathcal{A}|-(k+1)}) - Q^m(s,a_{|\mathcal{A}|-1})}{Q^m(s,a_{|\mathcal{A}|-(k+2)}) - Q^m(s,a_{|\mathcal{A}|-(k+1)})}$ and taking the summation of the ratio over $\bar{a}^{+}$ and $\bar{a}^{-}$, we can reach the desired result with $T'=\max \left \{ K, T \right \} $.
\end{proof}
Now, we are ready to prove \Cref{lemma:contradiction_of_a+} by an induction argument.
\textit{Proof of \Cref{lemma:contradiction_of_a+}.}
\begin{itemize}[leftmargin=*]
\item Show that if $I_{+}^{s} \neq \emptyset$, then $\lim_{m \rightarrow \infty} \pi_m(a_{|\mathcal{A}|-1}|s) = 0$:\\
By the ordering of $Q^{m}$, we have:
\begin{align}
V^m(s) = \sum_{a \in \mathcal{A}} \pi_m(a|s) \cdot Q^m(s,a) \ge 1 \cdot Q^{m}(s,a_{|\mathcal{A}|-1}) \text{, \quad $\forall m > K$ }.
\end{align}
Hence, for all $m > K$, we have:
\begin{align}
A^m(s,a_{|\mathcal{A}|-1}) = Q^m(s,a_{|\mathcal{A}|-1}) - V^m(s) \le Q^m(s,a_{|\mathcal{A}|-1}) - Q^m(s,a_{|\mathcal{A}|-1}) = 0.
\end{align}
Therefore, by \Cref{prop:adv_all_neg}, we have $\exists \enspace n_{|\mathcal{A}|-1} \in \mathbb{N}$, $K_{|\mathcal{A}|-1} \in \mathbb{N}$ such that:
\begin{align}
A^{m+n_{|\mathcal{A}|-1}+1}(s,a_{|\mathcal{A}|-1}) < 0 \text{, \quad $\forall m > K_{|\mathcal{A}|-1}$ }.
\end{align}
Moreover,
\begin{align}
\sign(A^m(s,a_{|\mathcal{A}|-1})) \cdot \alpha_m\left ( s,a_{|\mathcal{A}|-1} \right ) < 0 \text{, \quad $\forall m > K_{|\mathcal{A}|-1}$. }
\end{align}
With the monotone-decreasing property and the infinite visitation condition, it is guaranteed that $\lim_{m \rightarrow \infty}\theta_m(s, a_{|\mathcal{A}|-1}) = -\infty$. Hence, we have $\lim_{m \rightarrow \infty}\pi_m(a_{|\mathcal{A}|-1}|s) = 0$.
\item Suppose that $\lim_{m \rightarrow \infty}\pi_m(a_{|\mathcal{A}|-1}|s) = \lim_{m \rightarrow \infty}\pi_m(a_{|\mathcal{A}|-2}|s) = \dots = \lim_{m \rightarrow \infty}\pi_m(a_{|\mathcal{A}|-k}|s) = 0$, where $k \in \left [ 1, (|\mathcal{A}|-2) \right ] $. Then we would like to derive $\lim_{m \rightarrow \infty}\pi_m(a_{|\mathcal{A}|-(k+1)}|s)$:\\
By the above assumption, we have:
\begin{align}
\lim_{m \rightarrow \infty} \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)<Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) = 0
\end{align}
By \Cref{prop:ratio}, $\exists \enspace K'_{|\mathcal{A}|-(k+1)} \in \mathbb{N}$ such that $\forall \enspace m > K'_{|\mathcal{A}|-(k+1)}$, we can bound the ratio between the total policy weight of the actions better than $a_{|\mathcal{A}|-(k+1)}$ and that of the actions worse than $a_{|\mathcal{A}|-(k+1)}$:
\begin{align}
\frac{\sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s)}{\sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)<Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s)} \ge \frac{Q^m(s,a_{|\mathcal{A}|-(k+1)}) - Q^m(s,a_{|\mathcal{A}|-1})}{Q^m(s,a_{|\mathcal{A}|-(k+2)}) - Q^m(s,a_{|\mathcal{A}|-(k+1)})}\\
\text{provided $V^m(s) \in \left ( Q^m(s,a_{|\mathcal{A}|-k}), Q^m(s,a_{|\mathcal{A}|-(k+2)}) \right ) $}
\end{align}
And by the ordering of $Q^{m}$, we have:
\begin{align}
V^m(s) &= \sum_{a \in \mathcal{A}} \pi_m(a|s) \cdot Q^m(s,a) &&\\
&= \left [ \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) \cdot Q^{m}(s,a) + Q^{m}(s,a_{|\mathcal{A}|-(k+1)}) \cdot \pi_m(a_{|\mathcal{A}|-(k+1)}|s) \right. &&\\
&\left. \quad + \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)<Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) \cdot Q^{m}(s,a) \right ] &&\\
&\ge \left [ Q^{m}(s,a_{|\mathcal{A}|-(k+2)}) \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s)+ Q^{m}(s,a_{|\mathcal{A}|-(k+1)}) \cdot \pi_m(a_{|\mathcal{A}|-(k+1)}|s) \right. &&\\
&\left. \quad + Q^{m}(s,a_{|\mathcal{A}|-1}) \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)<Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) \right ] \text{, \quad $\forall m > K$ }.
\end{align}
Hence, for all $m > K'_{|\mathcal{A}|-(k+1)}$, we have:
\begingroup
\allowdisplaybreaks
\begin{align}
A^m(s,a_{|\mathcal{A}|-(k+1)})
& = Q^m(s,a_{|\mathcal{A}|-(k+1)}) - V^m(s) &&\\
& \le Q^m(s,a_{|\mathcal{A}|-(k+1)}) - \left [ Q^{m}(s,a_{|\mathcal{A}|-(k+2)}) \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) \right. &&\\
&\left. \qquad \qquad \qquad \qquad \qquad \quad + Q^{m}(s,a_{|\mathcal{A}|-(k+1)}) \cdot \pi_m(a_{|\mathcal{A}|-(k+1)}|s) \right. &&\\
&\left. \qquad \qquad \qquad \qquad \qquad \quad + Q^{m}(s,a_{|\mathcal{A}|-1}) \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)<Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) \right ] &&\\\\
&= \left ( Q^{m}(s,a_{|\mathcal{A}|-(k+1)}) - Q^{m}(s,a_{|\mathcal{A}|-(k+2)}) \right ) \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) &&\\
& \quad + \left ( Q^{m}(s,a_{|\mathcal{A}|-(k+1)}) - Q^{m}(s,a_{|\mathcal{A}|-1}) \right ) \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)<Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) &&\\
&\le \left ( Q^{m}(s,a_{|\mathcal{A}|-(k+1)}) - Q^{m}(s,a_{|\mathcal{A}|-(k+2)}) \right ) \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) &&\\
& \quad + \left ( Q^{m}(s,a_{|\mathcal{A}|-(k+1)}) - Q^{m}(s,a_{|\mathcal{A}|-1}) \right ) \cdot \frac{Q^m(s,a_{|\mathcal{A}|-(k+2)}) - Q^m(s,a_{|\mathcal{A}|-(k+1)})}{Q^m(s,a_{|\mathcal{A}|-(k+1)}) - Q^m(s,a_{|\mathcal{A}|-1})} &&\\
& \qquad \cdot \sum_{\substack{a \in \mathcal{A} \\ Q^{\infty}(s,a)>Q^{\infty}(s,a_{|\mathcal{A}|-(k+1)})}} \pi_{m} (a|s) &&\\
& = 0
\end{align}
By \Cref{prop:adv_all_neg}, we have $\exists \enspace n_{|\mathcal{A}|-(k+1)} \in \mathbb{N}$, $K_{|\mathcal{A}|-(k+1)} \in \mathbb{N}$ such that:
\begin{align}
A^{m+n_{|\mathcal{A}|-(k+1)}+1}(s,a_{|\mathcal{A}|-(k+1)}) < 0 \text{, \quad $\forall m > K_{|\mathcal{A}|-(k+1)}$ }
\end{align}
Moreover,
\begin{align}
\sign(A^m(s,a_{|\mathcal{A}|-(k+1)})) \cdot \alpha_m\left ( s,a_{|\mathcal{A}|-(k+1)} \right ) < 0 \text{, \quad $\forall m > K_{|\mathcal{A}|-(k+1)}$ }
\end{align}
\endgroup
With the monotone-decreasing property and the infinite visitation, it is guaranteed that $\lim_{m \rightarrow \infty}\theta_m(s, a_{|\mathcal{A}|-(k+1)}) = -\infty$. Hence we have $\lim_{m \rightarrow \infty}\pi_m(a_{|\mathcal{A}|-(k+1)}|s) = 0$.\\
Finally, we complete the induction and conclude that $\forall a \ne \tilde{a}^{+}$, $\lim_{m \rightarrow \infty}\pi_m(a|s) = 0$, which is equivalent to $\lim_{m \rightarrow \infty}\pi_m(\tilde{a}^{+}|s) = 1$. This completes the proof of \Cref{lemma:contradiction_of_a+}.
\end{itemize}
\hfill $\qed$
Now we are ready to put everything together and prove Theorem \ref{theorem:convergeoptimal}.
For ease of exposition, we restate Theorem \ref{theorem:convergeoptimal} as follows.
\begin{theorem*}
Consider a tabular softmax parameterized policy $\pi_\theta$, under (\ref{eq:CAPO_form}) with ${\alpha_{m}(s, a) \ge \log (\frac{1}{\pi_{\theta_{m}}(a\rvert s)})}$, if Condition \ref{condition:sa} is satisfied, then we have $V^{\pi_{m}}(s) \rightarrow V^{*}(s)$ as $m\rightarrow \infty$, for all $s \in \mathcal{S}$.
\end{theorem*}
\label{app:proof_convergeoptimal}
\begin{proof}[Proof of \Cref{theorem:convergeoptimal}]
In \Cref{lemma:contradiction_of_a+}, we have shown that if $I_{+}^{s} \neq \emptyset$ is true, then there must exist one action $a \in I_{+}^{s}$ such that $\lim_{m \rightarrow \infty} \pi_m(a|s) = 1$. This contradicts \Cref{lemma:sumI+zero}, and hence we obtain the desired result that $I_{+}^{s} = \emptyset$, which implies that $V^{(\infty)}$ is optimal.
\end{proof}
\begin{comment}
However, we now show in \Cref{lemma:pi_not_zero} that at some time $m > M^\epsilon_s$, batch $\mathcal{B}_m$ is sampled and for some $(s, a_{+}) \in \mathcal{B}_m, a_{+} \in I_{+}^{s}$, we can show that $\pi_{m+1}(s, a) > \frac{1}{|\mathcal{B}_m| + 1}$, which contradicts with lemma \Cref{lemma:sumI+zero}.
\begin{lemma}
\label{lemma:pi_not_zero}
Regardless of the value of $\pi_{m}(s, a_{+})$, if $(s, a_{+})$ is sampled at episode $m > M_1$ (i.e. $(s, a_{+}) \in \mathcal{B}_{m}$), then
$\pi_{m+1}(s, a_{+}) > \frac{1}{|\mathcal{B}_{m}| + 1}$, $\forall a_{+} \in I^{s}_{+}$.
\end{lemma}
\begin{proof}
\begin{prop} $\pi_{m+1}(s, a_{+}) =\frac{Z_{m}(s)}{Z_{m+1}(s) }$
\label{prop:pi_update}
\end{prop}
\begin{proof}
Note that by \Cref{lemma:A_strict}, $\forall m > M_1, a_{+} \in I^{s}_{+}$, $A(s,a_{+}) > 0$ :
\begin{flalign*}
&\theta_{m+1}(s, a_{+}) &&\\
&= \theta_{m}(s, a_{+}) + \log(\frac{1}{ \pi_m(a_{+}|s)})\sign(A^{\pi_{m}}(s,a_{+})) &&\\
&=\theta_{m}(s, a_{+}) + \log(\frac{1}{ \pi_m(a_{+}|s)})
\end{flalign*}
So we have:
\begin{align}
&\pi_{m+1}(s, a_{+}) =\frac{e^{\theta_{m+1}(s, a_{+})}} {Z_{m+1}(s)}
= \frac{e^{\theta_{m}(s, a_{+}) + \log(\frac{1}{ \pi_m(a_{+}|s)})}}{Z_{m+1}(s) } \\
&\quad= \frac{e^{\theta_{m}(s, a_{+}) + \log(\frac{Z_{m}(s)}{e^{ \theta_m(s, a_{+})}})}}{Z_{m+1}(s) }
= \frac{Z_{m}(s)}{Z_{m+1}(s)}
\end{align}
\end{proof}
\begin{prop} $\pi_{m+1}(s, a_{+}) \ge \sum_{a \not\in\mathcal{B} }\pi_{m+1}(a | s)$
\label{prop:policy_lowerbound}
\end{prop}
\begin{proof}
If we summed up the probability for $(s, a) \not\in \mathcal{B}$, for a given state $s$, we have
\begin{equation}
\sum_{a \not\in\mathcal{B} }\pi_{m+1}(a | s) = \frac{\sum_{a \not\in\mathcal{B} }e^{ \theta_m(s, a)} }{Z_{m+1}(s)} \le \frac{\sum_{a \in \mathcal{A}} e^{\theta_m(s,a)}}{Z_{m+1}(s)} = \frac{Z_{m}(s)}{Z_{m+1}(s)} = \pi_{m+1}(s, a_{+})
\end{equation}
Proposition \ref{prop:policy_lowerbound} shows that all state-action pairs that are not sampled at episode $m$, their collective policy weights will be less than those $(s, a_+)$ that are sampled at episode $m$ with $A^m(s, a_+) > 0$.
\end{proof}
Thus combining proposition \ref{prop:pi_update} and \ref{prop:policy_lowerbound}, for any $s \in \mathcal{S}$ we have:
\begin{align}
&(|\mathcal{B}| + 1)\pi_{m+1}(a_{+} \rvert s) \\
&\ge \pi_{m+1}(a_{+} \rvert s) + \sum_{a \in \mathcal{B}(s)}\pi_{m+1}(a \rvert s) \\
&\ge\sum_{a \not\in \mathcal{B}(s)}\pi_{m+1}(a \rvert s) + \sum_{a \in\mathcal{B}(s)}\pi_{m+1}(a \rvert s) = 1
\end{align}
The first inequality holds since $\pi_{m+1}(a_{+} \rvert s) \ge \pi_{m+1}(a \rvert s)$ for $a_{+} \in I^{s}_{+}$ and $a \in \mathcal{A}$.
\end{proof}
\begin{remark}
\normalfont \Cref{lemma:pi_not_zero} shows the special choice of step size $\alpha(\pi(a \rvert s)) = \log\frac{1}{\pi(a \rvert s)}$ allows CAPO to re-distribute policy weights to all $\pi(a \rvert s)$ with $A(s, a) > 0$ while improving the policy, this special property keeps CAPO from getting stuck in the local optimum.
\end{remark}
\end{comment}
\section{Coordinate Descent Methods}
\label{app:related:coord}
Coordinate descent is a classic optimization method \citep{tseng2001convergence} and has been widely used for large-scale machine learning problems \citep{nesterov2012efficiency}.
There are various ways of selecting the coordinates for update, such as cyclic coordinate descent \citep{saha2013nonasymptotic,gurbuzbalaban2017cyclic}, randomized coordinate descent, and more.
The above list is by no means exhaustive but only provides an overview of the relevant works on applying coordinate descent to learning problems.
Coordinate descent has also been utilized in deep learning \citep{zeng2019global}.
Recently, \citet{zhong2021coordinate} proposed applying coordinate descent to the design of baseline functions.
\section{Pseudo Code of the Proposed Algorithms}
\begin{algorithm}[ht]
\caption{Coordinate Ascent Policy Optimization}
\label{algo:CAPO}
\begin{algorithmic}[1]
\STATE Initialize policy $\pi_{\theta}$ with parameters $\theta(s, a)$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$
\FOR{$m = 1, \cdots, M$}
\STATE Generate $|\mathcal{B}|$ state-action pairs $((s_1, a_1), \dots, (s_{|\mathcal{B}|}, a_{|\mathcal{B}|}))$ from some coordinate selection rule satisfying \Cref{condition:sa}.
\FOR{$i = 1, \cdots, |\mathcal{B}|$}
\STATE $\theta_{m+1}(s_i, a_i) \leftarrow \theta_{m}(s_i, a_i) + \alpha_{m}(s_i, a_i) \sign\left(A^{m}(s_i, a_i)\right)$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
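As a concrete companion to \Cref{algo:CAPO}, the sketch below transcribes the tabular update in line 5 into Python. It is an illustration of ours, not released code: the one-step MDP, the exact-advantage oracle, and the cyclic $|\mathcal{B}|=1$ coordinate rule are placeholder assumptions, and the step size is the $\alpha_m(s,a)=\log\frac{1}{\pi_{\theta_m}(a|s)}$ choice from \Cref{theorem:convergeoptimal}.

```python
import numpy as np

n_states, n_actions = 2, 3
theta = np.zeros((n_states, n_actions))          # tabular softmax logits

def policy(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

# Placeholder one-step MDP so the advantage has a closed form:
# A(s, a) = r(s, a) - V(s), with V(s) = sum_a pi(a|s) r(s, a).
reward = np.array([[1.0, 0.5, 0.2],
                   [0.1, 0.8, 0.3]])

def advantage(theta):
    pi = policy(theta)
    return reward - (pi * reward).sum(axis=1, keepdims=True)

# Cyclic coordinate rule with |B| = 1: every pair is visited infinitely often.
for sweep in range(6):
    for s in range(n_states):
        for a in range(n_actions):
            pi = policy(theta)
            adv = advantage(theta)
            alpha = np.log(1.0 / pi[s, a])       # alpha_m >= log(1/pi_m(a|s))
            theta[s, a] += alpha * np.sign(adv[s, a])

print(policy(theta).argmax(axis=1))              # -> [0 1], the greedy actions
```

A handful of sweeps already concentrates almost all policy mass on the optimal action of each state, matching the strict-improvement behavior established above.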
\begin{algorithm}[h]
\caption{Neural Coordinate Ascent Policy Optimization}
\label{algo:NCDPO}
\begin{algorithmic}[1]
\STATE Initialize actor network $f_{\theta}$, where policy is parameterized as $\pi_{\theta}(a | s) = \frac{f_\theta(s, a)}{\sum_{a' \in \mathcal{A}} f_\theta(s, a')}$
\FOR{$m = 1, \cdots, M$}
\STATE Generate state-action pairs $((s_1, a_1), \dots, (s_{|\mathcal{B}|}, a_{|\mathcal{B}|}))$ from some coordinate selection rule satisfying \Cref{condition:sa}.
\STATE Evaluate Advantage $A^{\pi_{m}}$ with arbitrary policy evaluation algorithm.
\STATE Compute target $\hat{\theta}$ by
(\ref{eq:NCAPO_update}).
\STATE Compute target policy $\hat{\pi}$ by taking softmax over $\hat{\theta}$.
\STATE Update the policy network with NCAPO loss:
\STATE $\nabla_{\theta}L = \nabla_{\theta} D_{KL}\left(\pi_{f_{\theta_{m}}} \| \hat{\pi}\right)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[ht]
\caption{Neural Coordinate Ascent Policy Optimization with Replay Buffer}
\label{algo:NCAPO_imple}
\begin{algorithmic}[1]
\STATE Initialize behavior network $f(s,a \mid \theta^b)$, critic $Q(s,a \mid \theta^Q)$
\STATE Initialize replay buffer $R$.
\STATE Initialize target networks $f(s, a \mid \theta) \leftarrow f(s, a \mid \theta^b)$, $Q(s,a | \theta^{Q^{\prime}}) \leftarrow Q(s,a | \theta^Q)$
\FOR{episode $m = 1, \cdots, M$}
\STATE Generate behavior policy and target policy by computing softmax $\pi_{\theta}(a \mid s) = \frac{ e^{f(s, a \mid \theta)} }{\sum_{a^{\prime} \in \mathcal{A}} e^{f\left(s, a^{\prime} \mid \theta \right)}}$.
\STATE Collect $N_{rollouts}$ rollouts with length $l$ by following $\pi_{\theta_b}$ with decayed ${\epsilon}$-greedy, store rollouts to $R$.
\STATE Replace the oldest rollouts if $\mathrm{len}(R) > R_{max}$.
\FOR{gradient steps $= 1,...,\mathcal{G}$}
\STATE Sample rollout $r$ from $R$.
\STATE Compute $Q_{retrace}(s, a)$ for $(s,a) \in r$
\STATE $\theta^Q_{Loss} \leftarrow \sum_{(s,a) \in r}(Q_{retrace}(s,a) - Q_{\theta^Q}(s,a))^2$
\STATE $\theta_{Loss} \leftarrow D_{KL}\left(\pi_{m}(\cdot \mid s) \,\|\, \pi_{\hat{\theta}}(\cdot \mid s)\right)$
\STATE Update $Q(s,a \mid \theta^Q)$ with gradient $\nabla_{\theta^{Q}}\theta^{Q}_{Loss}$
\STATE Update $f(s,a \mid \theta^b)$ with gradient $\nabla_{\theta}\theta_{Loss}$
\ENDFOR
\STATE Update target networks: \\
$\theta^{Q^{\prime}} \leftarrow \tau_{Q} \theta^{Q}+(1-\tau_{Q}) \theta^{Q^{\prime}}$ \\
$\theta \leftarrow \tau_{\theta} \theta^{b}+(1-\tau_{\theta}) \theta$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Theoretical Analysis for the Tabular Case}
We proceed to substantiate the benefit of the state-action-dependent learning rate used in the general CAPO update in (\ref{eq:CAPO_form}) by showing that CAPO can attain a globally optimal policy with a properly designed learning rate $\alpha(\cdot,\cdot)$.
\begin{theorem}
\label{theorem:convergeoptimal}
Consider a tabular softmax parameterized policy $\pi_\theta$. Under (\ref{eq:CAPO_form}) with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, if Condition \ref{condition:sa} is satisfied, then we have $V^{\pi_{m}}(s) \rightarrow V^{*}(s)$ as $m\rightarrow \infty$, for all $s\in\cS$.
\end{theorem}
\begin{proof}[Proof Sketch]
The detailed proof can be found in Appendix \ref{app:proof_convergeoptimal}.
To highlight the main ideas of the analysis, we provide a sketch of the proof as follows: (i) Since the expected total reward is bounded above, with the strict policy improvement property of CAPO update (cf. Lemma \ref{lemma:strict_improvement}), the sequence of value functions is guaranteed to converge, i.e., the limit of $V^{\pi_m}(s)$ exists. (ii) The proof proceeds by contradiction. We suppose that CAPO converges to a sub-optimal policy, which implies that there exists at least one state-action pair $(s',a')$ such that $A^{(\infty)}(s',a')>0$ and $\pi_{\infty}(a'' \rvert s'') = 0$ for all state-action pair $(s'',a'')$ satisfying $A^{(\infty)}(s'',a'')>0$.
As a result, this implies that for any $\epsilon>0$, there must exist a time $M^{{\epsilon}}$ such that $\pi_m(a'' \rvert s'') < {\epsilon}$, $\forall m > M^{{\epsilon}}$. (iii) However, under the CAPO update, we show that the policy weight of the state-action pair with the greatest advantage value shall approach $1$, and this leads to a contradiction.
\end{proof}
\begin{remark}
\normalfont \pch{The proof of Theorem \ref{theorem:convergeoptimal} is inspired by \citep{agarwal2020theory}. Nevertheless, the analysis of CAPO presents its own salient challenge: Under true PG, the policy updates in all the iterations can be fully determined once the initial policy and the step size are specified. By contrast, under CAPO, the policy obtained in each iteration depends on the selected coordinates, which can be almost arbitrary under Condition \ref{condition:sa}. This makes it challenging to establish a contradiction under CAPO, compared to the argument of directly deriving the policy parameters in the limit in true PG \citep{agarwal2020theory}. Despite this, we address the challenge by using a novel induction argument based on the action ordering w.r.t. the $Q$ values in the limit.}
\end{remark}
\begin{remark}
\normalfont Notably, the condition of the learning rate $\alpha$ in Theorem \ref{theorem:convergeoptimal} does not depend on the advantage, but only on the action probability $\pi_{\theta}(a\rvert s)$. As a result, the CAPO update only requires the sign of the advantage function, without the knowledge of the magnitude of the advantage. Therefore, CAPO can still converge even under a low-fidelity critic that merely learns the sign of the advantage function.
\end{remark}
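To illustrate this remark (a toy sketch of ours, not an experiment from the paper), consider a $3$-armed bandit whose critic reports an advantage with random, meaningless magnitude but the correct sign; the CAPO update with $\alpha_m(a) = \log\frac{1}{\pi_m(a)}$ still concentrates the policy on the optimal arm.

```python
import numpy as np

rng = np.random.default_rng(1)
r = np.array([1.0, 0.5, 0.2])                    # 3-armed bandit, arm 0 optimal
theta = np.zeros(3)

def softmax(t):
    z = np.exp(t - t.max())
    return z / z.sum()

for m in range(8):                               # a few cyclic sweeps suffice
    for a in range(3):
        pi = softmax(theta)
        true_adv = r[a] - pi @ r
        # Low-fidelity critic: the magnitude is random noise,
        # only the sign of the advantage is preserved.
        noisy_adv = np.sign(true_adv) * rng.uniform(0.1, 10.0)
        theta[a] += np.log(1.0 / pi[a]) * np.sign(noisy_adv)

print(softmax(theta))                            # nearly all mass on arm 0
```

Because the update uses only $\sign(A)$, corrupting the advantage magnitude has no effect on the iterates, which is exactly the robustness the remark describes.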
\subsection{Natural Policy Gradient}
The natural policy gradient (NPG) \citep{kakade2001natural} exploits the landscape of the parameter space and updates the policy by:
\begin{equation}
\label{eq:natural_pg_update}
\theta_{m+1} = \theta_{m}+\eta\left(F_{\rho}^{\theta_{m}}\right)^{\dagger} \nabla_{\theta} J^{\pi_{\theta}}(\rho),
\end{equation}
where $\eta$ is the step size and $\left(F_{\rho}^{\theta_{m}}\right)^{\dagger}$ is the Moore-Penrose pseudo inverse of the Fisher information matrix
$F_{\rho}^{\theta_{m}}:={\mathbb{E}}_{s \sim d_{\rho}^{\pi_{\theta_{m}}}, a \sim \pi_{\theta_{m}}(\cdot \mid s)}[(\nabla_{\theta} \log \pi_{\theta_{m}}(a \rvert s))(\nabla_{\theta} \log \pi_{\theta_{m}}(a \rvert s))^{\top}]$.
Moreover, under softmax parameterization, the true NPG update takes the following form \citep{agarwal2020theory}:
\begin{equation}
\label{eq:npg_softmax_update}
\theta_{m+1}=\theta_{m}+\frac{\eta}{1-\gamma} A^{\pi_{\theta_m}},
\end{equation}
where $A^{\pi_{\theta_m}}$ denotes the $\lvert \mathcal{S}\rvert \lvert \mathcal{A}\rvert$-dimensional vector of all the advantage values of ${\pi_{\theta_m}}$.
It has been shown that the true NPG can attain linear convergence \citep{mei2021understanding,khodadadian2021linear}.
Given the expression in (\ref{eq:npg_softmax_update}), CAPO can be interpreted as adapting NPG to the mini-batch or stochastic settings. That said, compared to true NPG, CAPO only requires the sign of the advantage function, not the magnitude of the advantage.
On the other hand, it has recently been shown that some variants of on-policy stochastic NPG could exhibit committal behavior and thereby suffer from convergence to sub-optimal policies \citep{mei2021understanding}.
The analysis of CAPO could also provide useful insights into the design of stochastic NPG methods.
Interestingly, in the context of variational inference, a theoretical connection between coordinate ascent and the natural gradient has also been recently discovered \citep{ji2021marginalized}.
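To make the comparison with (\ref{eq:npg_softmax_update}) concrete, the sketch below (our illustration; a one-step bandit with $\gamma = 0$, so that $A^{\pi}(a) = r(a) - V^{\pi}$ and the step size $\eta$ is an arbitrary choice of ours) applies one true-NPG update, which shifts each logit by the advantage magnitude, and one Batch-CAPO update, which shifts each logit by $\pm\log\frac{1}{\pi(a)}$; both strictly improve the expected return.

```python
import numpy as np

r = np.array([1.0, 0.5, 0.2])                    # bandit rewards, gamma = 0
theta0 = np.zeros(3)

def softmax(t):
    z = np.exp(t - t.max())
    return z / z.sum()

def value(t):
    return softmax(t) @ r

adv = r - value(theta0)                          # A(a) = r(a) - V

# True NPG under softmax parameterization: theta += eta/(1-gamma) * A.
eta = 0.5
theta_npg = theta0 + eta * adv                   # needs the advantage magnitude

# Batch CAPO: theta += sign(A) * log(1/pi); needs only the sign of A.
pi0 = softmax(theta0)
theta_capo = theta0 + np.sign(adv) * np.log(1.0 / pi0)

print(value(theta0), value(theta_npg), value(theta_capo))
```

Both updates move the logits in the same coordinate-wise directions; they differ only in how far each coordinate moves.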
\begin{comment}
While being a classic algorithm, stochastic NPG has recently been shown to exhibit committal behavior and thereby suffer from convergence to sub-optimal policies \cite{mei2021understanding}.
Specifically, \citet{mei2021understanding} has demonstrated this through a simple multi-armed bandit example, where the stochastic NPG update (without a baseline) is
\begin{equation}
\label{eq:snpg_bandit_update}
\theta_{m+1} \leftarrow \theta_{m}+\eta \cdot \hat{r}_{m},
\end{equation}
where $\hat{r}_{m}$ is the on-policy importance sampling reward estimator $\hat{r}_{m}(a)=\frac{\mathbb{I}\left\{a_{m}=a\right\}}{\pi_{\theta_{m}}(a)} \cdot r(a), \forall a \in \mathcal{A}$.
The above stochastic NPG has been shown to converge to a sub-optimal policy with positive probability.
This motivates us to look for an alternative update scheme that can provably attain global optimality in the stochastic sampling scenario.
Consider the stochastic setting where the batch size $|\mathcal{B}| = 1$, and $(s_m, a_m)$ denotes the sample at training episode $m$. We can then write down the update of SNPG in the following form:
\begin{equation}
\label{eq:snpg_softmax_update}
\theta^{(m+1)}(s,a)=\theta^{(m)}(s,a) + \frac{\eta}{1-\gamma} \cdot\frac{1}{d^{\pi_{\theta}}(s) \cdot \pi_{\theta}(a \mid s)} \cdot A^{(m)}(s, a) \cdot \mathbb{I}\{s=s_m, a=a_m\}
\end{equation}
since $A^{(m)} = E_{s \sim d^{\pi}, a \sim \pi(\cdot \mid s)}\left[ \frac{1}{d^{\pi_{\theta}}(s) \cdot \pi_{\theta}(a \mid s)} \cdot A^{(m)}(s, a) \cdot \mathbb{I}\{s=s_m, a=a_m\} \right]$.
(\ref{eq:snpg_softmax_update}) can be viewed as coordinate update by considering specific $(s,a)$-pair within each batch. To see this even more clearly, we can rewrite $\frac{1}{d^{\pi_{\theta}}(s) \cdot \pi_{\theta}(a \mid s)} \cdot A^{(m)}(s, a)$ as $f^{(m)}(s, a)$ and $\frac{\eta}{1 - \gamma}$ as $\eta^{\prime}$
\begin{equation}
\label{eq:snpg_softmax_update_simp}
\theta^{(m+1)}=\theta^{(m)}+ \eta^{\prime} \cdot f^{(m)}(s, a) \cdot \mathbb{I}\{s=s_m, a=a_m\}.
\end{equation}
\end{comment}
\textbf{CAPO for Low-Fidelity RL Tasks.}
One salient feature of CAPO is that it requires only the sign of the advantage function, instead of the exact advantage value. It has been shown that accurate estimation of the advantage value could be rather challenging under benchmark RL algorithms \citep{ilyas2019closer}. As a result, CAPO could serve as a promising candidate solution for RL tasks with low-fidelity or multi-fidelity value estimation \citep{cutler2014reinforcement,kandasamy2016multi,khairy2022multifidelity}.
\textbf{CAPO for On-Policy Learning.}
The original motivation of CAPO is to achieve off-policy policy updates without the issues of distribution mismatch and fixed behavior policy.
Despite this, the CAPO scheme in (\ref{eq:CAPO_form}) can also be used in an \textit{on-policy} manner.
Notably, the design of on-policy CAPO is subject to a similar challenge of committal behavior in on-policy stochastic PG and stochastic NPG \citep{chung2021beyond,mei2021understanding}.
Specifically: (i) We show that on-policy CAPO with a fixed step size could converge to sub-optimal policies through a multi-armed bandit example similar to that in \citep{chung2021beyond}. (ii) We design a proper step size for on-policy CAPO and establish asymptotic global convergence. Through a simple bandit experiment, we show that this variant of on-policy CAPO can avoid the committal behavior. Due to space limitation, all the above results are provided in Appendix \ref{app:OnCAPO}.
\subsection{Convergence Rates of CAPO With Specific Coordinate Selection Rules}
\label{subsection:convergence_rate}
In this section, we proceed to characterize the convergence rates of CAPO under softmax parameterization and the three specific coordinate generators, namely, Cyclic, Batch, and Randomized CAPO.
\begin{itemize}[leftmargin=*]
\item \textbf{Cyclic CAPO}: Under Cyclic CAPO, every state action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$ will be chosen for policy update by the coordinate generator cyclically. Specifically, Cyclic CAPO sets $\lvert B_m\rvert=1$ and $\bigcup_{i=1}^{|\mathcal{S}||\mathcal{A}|} B_{m \cdot |\mathcal{S}||\mathcal{A}| + i} = \mathcal{S} \times \mathcal{A}$.
\item \textbf{Randomized CAPO}: {Under Randomized CAPO, in each iteration, one state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$ is chosen randomly from some coordinate generator distribution $d_{\text{gen}}$ with support $\cS\times \cA$ for policy update, where $d_{\text{gen}}(s,a)>0$ for all $(s,a)$. For ease of exposition, we focus on the case of a fixed $d_{\text{gen}}$. Our convergence analysis can be readily extended to the case of time-varying $d_{\text{gen}}$.}
\item \textbf{Batch CAPO}: Under Batch CAPO, we let each batch contain all of the state-action pairs, i.e., $B_m = \left \{ (s,a) : (s,a) \in \mathcal{S} \times \mathcal{A} \right \}$, in each iteration. While Batch CAPO may not be a very practical choice, we use this variant to further highlight the difference in convergence rate between CAPO and the true PG.
\end{itemize}
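The three coordinate selection rules above can be mimicked by simple batch generators. Below is a minimal Python sketch; the index sets \texttt{S}, \texttt{A} and the generator interface are our own illustrative choices, not part of the CAPO formulation:

```python
import itertools
import random

def cyclic_batches(S, A, ordering=None):
    """Yield singleton batches so that every (s, a) appears once per cycle.

    Any fixed or per-cycle ordering is admissible; the Cyclic CAPO guarantee
    does not depend on the ordering."""
    pairs = ordering if ordering is not None else list(itertools.product(S, A))
    while True:
        for pair in pairs:
            yield [pair]

def randomized_batches(S, A, d_gen, rng=None):
    """Yield one pair per iteration, drawn i.i.d. from d_gen (support S x A)."""
    rng = rng or random.Random(0)
    pairs = list(itertools.product(S, A))
    weights = [d_gen[p] for p in pairs]
    while True:
        yield [rng.choices(pairs, weights=weights, k=1)[0]]

def batch_batches(S, A):
    """Yield the full coordinate set S x A in every iteration."""
    pairs = list(itertools.product(S, A))
    while True:
        yield list(pairs)
```

Note that \texttt{cyclic\_batches} accepts an arbitrary ordering, mirroring the fact that the convergence guarantee of Cyclic CAPO below is ordering-free.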
We proceed to state the convergence rates of the above three instances of CAPO as follows.
\begin{theorem}[Cyclic CAPO]
\label{theorem:cyclic_convergence_rate}
Consider a tabular softmax policy $\pi_\theta$. Under Cyclic CAPO with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$ and $|B_m| = 1$, $\bigcup_{i=1}^{|\mathcal{S}||\mathcal{A}|} B_{m \cdot |\mathcal{S}||\mathcal{A}| + i} = \mathcal{S} \times \mathcal{A}$, we have:
\begin{align}
V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{|\mathcal{S}||\mathcal{A}|}{c} \cdot \frac{1}{m} , \quad\text{for all $m \ge 1$}
\end{align}
where $c = \frac{(1-\gamma)^4}{2} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot {\min} \left \{ \frac{\min_s{\mu(s)} }{2} , \frac{ (1-\gamma)}{|\mathcal{S}||\mathcal{A}|} \right \} > 0$.
\end{theorem}
\begin{proof}[Proof Sketch]
The detailed proof and the upper bound of the partial sum can be found in Appendix \ref{app:convergence_rate}.
To highlight the main ideas of the analysis, we provide a sketch of the proof as follows: (i) We first write the one-step improvement of the performance $V^{\pi_{m+1}}(s) - V^{\pi_m}(s)$ in terms of the state visitation distribution, the policy weights, and the advantage values, and also construct a lower bound on it. (ii) We then construct an upper bound on the performance difference $ V^{*}(s) - V^{\pi_m}(s)$. (iii) Since the bounds in (i) and (ii) both involve the advantage values, we can connect them and upper bound the performance difference by the one-step improvement of the performance. (iv) Finally, we obtain the desired convergence rate by induction.
\end{proof}
Notably, it is somewhat surprising that Theorem \ref{theorem:cyclic_convergence_rate} holds under Cyclic CAPO \textit{without} any further requirement on the specific cyclic ordering. This indicates that Cyclic CAPO is rather flexible in the sense that it provably attains $\cO(\frac{1}{m})$ convergence rate under any cyclic ordering or even cyclic orderings that vary across cycles.
{On the flip side, such a flexible coordinate selection rule also imposes significant challenges on the analysis: (i) While Lemma \ref{lemma:strict_improvement} ensures strict improvement in each iteration, it remains unclear how much improvement each Cyclic CAPO update can actually achieve, especially under an \textit{arbitrary cyclic ordering}. This is one salient difference compared to the analysis of the true PG \citep{mei2020global}. (ii) Moreover, an update along one coordinate can already significantly change the advantage value (and its sign as well) of other state-action pairs. Therefore, it appears possible that there might exist a well-crafted cyclic ordering that leads to only minimal improvement in each coordinate update within a cycle.}
{Despite the above, we tackle the challenges by arguing that in each cycle, under a properly-designed variable step size $\alpha$, there must exist at least one state-action pair such that the one-step improvement is sufficiently large, regardless of the cyclic ordering. Moreover, by the same proof technique, Theorem \ref{theorem:cyclic_convergence_rate} can be readily extended to CAPO with almost-cyclic coordinate selection, where the cycle length is greater than $\lvert S\rvert\lvert A\rvert$ and each coordinate appears at least once.}
We extend the proof technique of Theorem \ref{theorem:cyclic_convergence_rate} to establish the convergence rates of the Batch and Randomized CAPO.
\begin{theorem}[Batch CAPO]
\label{theorem:batch_convergence_rate}
Consider a tabular softmax policy $\pi_\theta$. Under Batch CAPO with ${\alpha_m(s,a) = \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$ and $B_m = \left \{ (s,a) : (s,a) \in \mathcal{S} \times \mathcal{A} \right \}$, we have:
\begin{equation}
V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{1}{c} \cdot \frac{1}{m}, \quad \text{for all $m \ge 1$}
\end{equation}
where $c = \frac{(1-\gamma )^4}{|\mathcal{A}|} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot \underset{s}{\min} \left \{ \mu(s) \right \} > 0$.
\end{theorem}
\begin{proof}
The proof and the upper bound of the partial sum can be found in Appendix \ref{app:batch_convergence_rate}.
\end{proof}
\begin{theorem}[Randomized CAPO]
\label{theorem:randomized_convergence_rate}
Consider a tabular softmax policy $\pi_\theta$. Under Randomized CAPO with ${\alpha_m(s, a) \ge \log (\frac{1}{\pi_{\theta_m}(a\rvert s)})}$, we have:
\begin{equation}
\underset{({s_m},{a_m}) \sim d_{\text{gen}}}{\mathbb{E}} \left [ V^{*}(\rho) - V^{\pi_m}(\rho) \right] \le \frac{1}{c} \cdot \frac{1}{m}, \quad \text{for all $m \ge 1$}
\end{equation}
where $c = \frac{(1-\gamma )^4}{2} \cdot \left \| \frac{1}{\mu} \right \|_{\infty}^{-1} \cdot \underset{(s,a)}{\min} \left \{ d_{\text{gen}}(s,a)\cdot\mu(s) \right \} > 0$ and $d_{\text{gen}}:\mathcal{S} \times \mathcal{A} \rightarrow (0,1)$, $d_{\text{gen}}(s,a) = \mathbb{P}((s,a) \in B_m)$.
\end{theorem}
\begin{proof}
The proof and the upper bound of the partial sum can be found in Appendix \ref{app:randomized_convergence_rate}.
\end{proof}
\begin{remark}
\normalfont The above three specific instances of CAPO all converge to a globally optimal policy at a rate $\cO(\frac{1}{m})$ and attain a better pre-constant than the standard policy gradient \citep{mei2020global} under tabular softmax parameterization. Moreover, as the CAPO update can be combined with a variety of coordinate selection rules, one interesting future direction is to design coordinate generators that improve over the convergence rates of the above three instances.
\end{remark}
\begin{table*}[!ht]
\caption{A summary of convergence rates of different algorithms under tabular softmax parameterization.} \label{tab:constant_conv_rate}
\begin{center}
\begin{tabular}{ll}
\textbf{Algorithm} &\textbf{Convergence Rate} \\
\hline \\
Policy Gradient \citep{mei2020global} &$V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{16 \cdot |\mathcal{S}|}{\inf_{m \ge 1} \pi_m(a^*|s)^2\cdot (1-\gamma)^6} \cdot \left \| \frac{d^{\pi^*}_{\mu}}{\mu} \right \|_\infty^2 \cdot\left \| \frac{1}{\mu} \right \|_{\infty } \cdot \frac{1}{m} $ \\
Cyclic CAPO (Theorem \ref{theorem:cyclic_convergence_rate}) &$ V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{2 \cdot |\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot {\max} \left \{ \frac{2}{\min_s{\mu(s)} } , \frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)} \right \} \cdot \frac{1}{m}$ \\
Batch CAPO (Theorem \ref{theorem:batch_convergence_rate}) &$ V^{*}(\rho) - V^{\pi_m}(\rho) \le \frac{|\mathcal{A}|}{(1-\gamma )^4} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \frac{1}{\underset{s}{\min} \left \{ \mu(s) \right \}} \cdot \frac{1}{m}$ \\
Randomized CAPO (Theorem \ref{theorem:randomized_convergence_rate}) &$ \underset{({s_m},{a_m}) \sim d_{\text{gen}}}{\mathbb{E}} \left [ V^{*}(\rho) - V^{\pi_m}(\rho) \right] \le \frac{2}{(1-\gamma )^4} \cdot \left \| \frac{1}{\mu} \right \|_{\infty} \cdot \frac{1}{\underset{(s,a)}{\min} \left \{ d_{\text{gen}}(s,a)\cdot\mu(s) \right \}} \cdot \frac{1}{m}$\\
\end{tabular}
\end{center}
\end{table*}
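To get a feel for the pre-constants in Table \ref{tab:constant_conv_rate}, they can be evaluated on a toy instance. The following sketch assumes $|\mathcal{S}| = |\mathcal{A}| = 2$, $\gamma = 0.9$, and uniform $\mu$ and $d_{\text{gen}}$; all of these choices are illustrative:

```python
def cyclic_coeff(S, A, gamma, mu_min, inv_mu_max):
    # Pre-constant of the Cyclic CAPO bound in the table.
    return (2 * S * A / (1 - gamma) ** 4) * inv_mu_max \
        * max(2 / mu_min, S * A / (1 - gamma))

def batch_coeff(A, gamma, mu_min, inv_mu_max):
    # Pre-constant of the Batch CAPO bound in the table.
    return (A / (1 - gamma) ** 4) * inv_mu_max / mu_min

def randomized_coeff(gamma, dmu_min, inv_mu_max):
    # Pre-constant of the Randomized CAPO bound;
    # dmu_min = min over (s, a) of d_gen(s, a) * mu(s).
    return (2 / (1 - gamma) ** 4) * inv_mu_max / dmu_min

S, A, gamma = 2, 2, 0.9
mu_min, inv_mu_max = 0.5, 2.0   # uniform mu over two states
dmu_min = 0.25 * 0.5            # uniform d_gen over the four pairs
```

With these illustrative values, the Batch bound carries the smallest pre-constant and the Cyclic bound the largest, consistent with the extra $|\mathcal{S}||\mathcal{A}|$ factors in the table.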
\section{Practical Implementation of CAPO}
To address the large state and action spaces of practical RL problems, we proceed to parameterize the policy for CAPO by a neural network and make use of its powerful representation ability.
As presented in \Cref{section:method:onCAPO}, the coordinate update and variable learning rate are two salient features of CAPO.
These features are difficult to preserve if the policy is trained in a completely end-to-end manner.
Instead, we take a two-step approach by first leveraging the tabular CAPO to derive target action distributions and then designing a loss function that moves the output of the neural network towards the target distribution.
Specifically, we design a neural version of CAPO, called Neural Coordinate Ascent Policy Optimization (NCAPO): Let $f_\theta(s,a)$ denote the output of the policy network parameterized by $\theta$, for each $(s,a)$. In NCAPO, we use neural softmax policies, i.e., ${\pi}_{\theta}(a\rvert s) =
\frac{\exp({f_{\theta}\left(s, a\right)})}{\sum_{a^{\prime} \in \mathcal{A}} \exp({f_{\theta}\left(s, a^{\prime}\right)})}$.
\begin{itemize}[leftmargin=*]
\item Inspired by the tabular CAPO, we compute a target softmax policy $\tilde{\pi}(a \rvert s)$ by following the CAPO update (\ref{eq:CAPO_form}):
\begin{equation}
\label{eq:NCAPO_update}
\tilde{\theta}(s, a) = f_{\theta}(s,a) +\alpha(s, a) \mathbb{I}\{(s,a) \in {B}\}\cdot \sign\left(A^{\pi_{\theta}}\left(s, a\right)\right).
\end{equation}
The target action distribution is then computed w.r.t. $\tilde{\theta}$ as
$\tilde{\pi}(a\rvert s) =
\frac{\exp({\tilde{\theta}\left(s, a\right)})}{\sum_{a^{\prime} \in \mathcal{A}} \exp({\tilde{\theta}\left(s, a^{\prime}\right)})}$.
\item Finally, we learn $f_\theta$ by minimizing the NCAPO loss, which is the KL-divergence loss between the current policy and the target policy:
\begin{equation}
\label{eq:NCAPO_loss}
\mathcal{L}(\theta)=\sum_{s \in B} D_{\text{KL}}\left(\pi_{\theta}(\cdot \rvert s) \| \tilde{\pi}(\cdot \rvert s)\right).
\end{equation}
\end{itemize}
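The two-step construction above can be sketched for a single state. In the minimal pure-Python illustration below, the two-action set, the advantage estimates, and the step size are assumptions made for concreteness:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def ncapo_target(f_vals, advantages, in_batch, alpha):
    """Target distribution of the NCAPO update:
    theta~ = f + alpha * 1{(s,a) in B} * sign(A), then softmax."""
    sign = lambda x: (x > 0) - (x < 0)
    shifted = [f + alpha * int(b) * sign(a)
               for f, a, b in zip(f_vals, advantages, in_batch)]
    return softmax(shifted)

def kl(p, q):
    """KL(p || q), the per-state term of the NCAPO loss (full support assumed)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

For instance, with logits \texttt{[0, 0]} and advantages \texttt{[0.3, -0.3]}, the target distribution shifts mass towards the positive-advantage action, and minimizing the KL loss moves $\pi_\theta$ in the same direction.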
\section{Discussions}
\label{section:method:onCAPO}
In this section, we describe the connection between CAPO and the existing policy optimization methods and present additional useful features of CAPO.
\input{method_sub/connecting_npg}
\subsection{Asymptotic Global Convergence of CAPO With General Coordinate Selection}
\label{subsection:analysis}
In this section, we discuss the convergence result of CAPO under softmax parameterization.
In the subsequent analysis, we assume that the following \Cref{condition:sa} is satisfied.
\begin{condition}
\label{condition:sa}
$\lim_{M \rightarrow \infty} \sum^{M}_{m=1} \bbI\{(s,a) \in {B}_m\} = \infty$, \quad for all $(s,a) \in \mathcal{S} \times \mathcal{A}$.
\end{condition}
Note that \Cref{condition:sa} is rather mild as it could be met by exploratory behavior policies (e.g., $\epsilon$-greedy policies) given the off-policy capability of CAPO.
Moreover, Condition \ref{condition:sa} is similar to the standard condition of infinite visitation required by various RL methods \citep{singh2000convergence,munos2016safe}.
Notably, \Cref{condition:sa} indicates that under CAPO the coordinates are not required to be selected by following a specific policy, as long as infinite visitation to every state-action pair is satisfied.
This feature naturally enables flexible off-policy learning, justifies the use of a replay buffer, and enables the flexibility to decouple policy improvement from value estimation.
We first show that CAPO guarantees strict improvement under tabular softmax parameterization.
\begin{lemma}[Strict Policy Improvement]
Under the CAPO update given by (\ref{eq:CAPO_form}), we have $V^{\pi_{m+1}}(s) \geq V^{\pi_{m}}(s)$, for all $s \in \mathcal{S}$, for all $m\in \mathbb{N}$.
\label{lemma:strict_improvement}
\end{lemma}
\begin{proof}
The proof can be found in Appendix \ref{app:proof_strict_improvement}.
\end{proof}
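Lemma \ref{lemma:strict_improvement} can be checked on a toy instance. The following sketch runs the sign-of-advantage coordinate update with the variable step size $\alpha = \log(1/\pi_{\theta_m}(a \rvert s))$ on a single-state two-armed bandit; the reward values are an illustrative assumption:

```python
import math

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    z = sum(e)
    return [x / z for x in e]

def value(theta, rewards):
    """Expected reward of the softmax policy in the single-state bandit."""
    return sum(p * r for p, r in zip(softmax(theta), rewards))

def capo_step(theta, rewards, coord):
    """One CAPO coordinate update: theta[a] += alpha * sign(A(a)),
    with the variable step size alpha = log(1 / pi(a))."""
    pi = softmax(theta)
    v = value(theta, rewards)
    adv = rewards[coord] - v          # bandit advantage of the chosen arm
    sign = (adv > 0) - (adv < 0)
    alpha = math.log(1.0 / pi[coord])
    new = list(theta)
    new[coord] += alpha * sign
    return new
```

Starting from the uniform policy with rewards $(1, 0)$, each coordinate update strictly increases the value, regardless of which arm is selected.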
\section{Introduction}
\label{section: introduction}
Probability theory, information theory, learning theory,
statistical signal processing and other related disciplines,
greatly benefit from non-negative measures of dissimilarity (a.k.a.
divergence measures) between pairs of probability measures defined
on the same measurable space (see, e.g., \cite{Basseville13}, \cite{LieseV_book87},
\cite{LieseV_IT2006}, \cite{ReidW11}, \cite{Tsybakov09}, \cite{Vapnik98}, \cite{Verdu_book}).
An axiomatic characterization of information measures, including divergence
measures, was provided by Csisz\'ar \cite{Csiszar08}.
Many useful divergence measures belong to the set of $f$-divergences,
independently introduced by Ali and Silvey
\cite{AliS}, Csisz\'{a}r (\cite{Csiszar63}--\cite{Csiszar67b}), and Morimoto
\cite{Morimoto63} in the early sixties. The family of $f$-divergences
generalizes the relative entropy (a.k.a. the Kullback-Leibler
divergence) while also satisfying the data processing inequality
among other pleasing properties (see, e.g., \cite{LieseV_IT2006}
and references therein).
Integral representations of $f$-divergences serve to study properties
of these information measures, and they are also used to establish relations
among these divergences. An integral representation of
$f$-divergences, expressed by means of the DeGroot statistical
information, was provided in \cite{LieseV_IT2006} with a simplified
proof in \cite{Liese2012}.
The importance of this integral representation stems from the
operational meaning of the DeGroot statistical information \cite{DeGroot62},
which is strongly linked to Bayesian binary hypothesis testing.
Some earlier specialized versions
of this integral representation were introduced in \cite{CZK98}, \cite{FeldmanO89},
\cite{Guttenbrunner92}, \cite{OV93} and \cite{Torgersen}, and a variation
of it also appears in \cite[Section~5.B]{ISSV16}.
Implications of the integral
representation of $f$-divergences, by means of the DeGroot statistical information,
include an alternative proof of the data processing inequality, and a study of
conditions for the sufficiency or $\varepsilon$-deficiency of observation channels
(\cite{LieseV_IT2006}, \cite{Liese2012}).
Since many distance measures of interest fall under the
paradigm of an $f$-divergence \cite{GibbsSu02}, bounds among $f$-divergences
are very useful in many instances such as the analysis of rates of
convergence and concentration of measure bounds,
hypothesis testing, testing goodness of fit, minimax risk in estimation and modeling,
strong data processing inequalities and contraction coefficients, etc.
Earlier studies developed systematic approaches to obtain $f$-divergence inequalities
while dealing with pairs of probability measures defined on arbitrary alphabets. A list
of some notable existing $f$-divergence inequalities is provided, e.g., in \cite[Section~3]{GibbsSu02}
and \cite[Section~1]{ISSV16}.
State-of-the-art techniques which serve to derive bounds among
$f$-divergences include:
\begin{enumerate}[1)]
\item Moment inequalities which rely on log-convexity arguments (\cite{AnwarHP09}, \cite[Section~5.D]{ISSV16},
\cite{Simic07}, \cite{Simic08}, \cite{Simic09}, \cite{Simic15});
\item Inequalities which rely on the characterization of the exact locus of the joint range of
$f$-divergences \cite{HarremoesV_2011};
\item $f$-divergence inequalities via functional domination
(\cite{SV16a}, \cite[Section~3]{ISSV16}, \cite{Taneja05b}, \cite{Taneja13});
\item Sharp $f$-divergence inequalities by using numerical tools for maximizing or minimizing an
$f$-divergence subject to a finite number of constraints on other $f$-divergences \cite{GSS_IT14};
\item Inequalities which rely
on powers of $f$-divergences defining a distance (\cite{EndresS03}, \cite{KafkaOV91}, \cite{LL2015}, \cite{Vajda09});
\item Vajda and Pinsker-type inequalities for $f$-divergences (\cite{Csiszar63}--\cite{Csiszar67b},
\cite{Gilardoni10}, \cite[Section~6-7]{ISSV16}, \cite{ReidW11}, \cite{Topsoe_IT00});
\item Bounds among $f$-divergences when the relative information is bounded (\cite{Dragomir00a}, \cite{Dragomir00b},
\cite{Dragomir00c}, \cite{DragomirG_01}, \cite{Dragomir06},
\cite{KumarC04}, \cite{SV15}, \cite[Sections~4-5]{ISSV16}, \cite{Taneja05}), and reverse Pinsker inequalities
(\cite{Binette18}, \cite{SV15}, \cite[Section~6]{ISSV16});
\item Inequalities which rely on the minimum of an $f$-divergence for a given total variation
distance and related bounds (\cite{Gilardoni06}, \cite{Gilardoni06-cor}, \cite{Gilardoni10}, \cite[p.~115]{GSS_IT14},
\cite{Gushchin16}, \cite{ReidW11}, \cite{IS15}, \cite{IS16}, \cite{Vajda09});
\item Bounds among $f$-divergences (or functions of $f$-divergences such as the R\'enyi divergence) via integral
representations of these divergence measures \cite[Section~8]{ISSV16};
\item Inequalities which rely on variational representations of $f$-divergences (e.g., \cite[Section~2]{Liu17}).
\end{enumerate}
Following earlier studies of the local behavior of $f$-divergences and their asymptotic properties (see
related results by Csisz\'{a}r and Shields \cite[Theorem~4.1]{CsiszarS_FnT}, Pardo and Vajda \cite[Section~3]{PardoV03},
and Sason and Verd\'{u} \cite[Section~3.F]{ISSV16}), it is known that the local behavior of $f$-divergences
scales like the chi-square divergence (up to a scaling factor which depends on $f$) provided that the
first distribution approaches the reference measure in a certain strong sense. The study of the local
behavior of $f$-divergences is an important aspect of their properties, and we further study it in this work.
This paper considers properties of $f$-divergences, while first introducing in
Section~\ref{section: preliminaries} the basic definitions and notation needed,
and in particular the various measures of dissimilarity between probability measures
used throughout this paper. The presentation of our new results is then structured as follows:
Section~\ref{section: integral representation} is focused on
the derivation of new integral representations of $f$-divergences,
expressed as a function of the relative information spectrum of
the pair of probability measures, and the convex function $f$.
The novelty of Section~\ref{section: integral representation} is in
the unified approach which leads to integral representations of $f$-divergences
by means of the relative information spectrum, where the latter cumulative distribution function
plays an important role in information theory and statistical decision theory (see, e.g., \cite{Liu17} and
\cite{Verdu_book}). Particular integral representations of the type
of results introduced in Section~\ref{section: integral representation}
have been recently derived by Sason and Verd\'{u} on a case-by-case basis
for some $f$-divergences (see \cite[Theorems~13 and 32]{ISSV16}), while
lacking the approach which is developed in Section~\ref{section: integral representation}
for general $f$-divergences. In essence,
an $f$-divergence $D_f(P\|Q)$ is expressed in Section~\ref{section: integral representation}
as an inner product of a simple function of the relative
information spectrum (depending only on the probability measures $P$ and $Q$),
and a non-negative weight function $\omega_f \colon (0, \infty) \mapsto [0, \infty)$
which only depends on $f$. This kind of representation, followed by a generalized result,
serves to provide new integral representations of various useful $f$-divergences.
This also enables in Section~\ref{section: integral representation}
to characterize the interplay between the DeGroot statistical information (or
between another useful family of $f$-divergences, namely the $E_\gamma$ divergence with $\gamma \geq 1$)
and the relative information spectrum.
Section~\ref{section: new inequalities} provides a new approach for the
derivation of $f$-divergence inequalities, where an arbitrary $f$-divergence
is lower bounded by means of the $E_\gamma$ divergence \cite{PPV10}
or the DeGroot statistical information \cite{DeGroot62}.
The approach used in Section~\ref{section: new inequalities} yields several
generalizations of the Bretagnolle-Huber inequality \cite{BretagnolleH79},
which provides a closed-form and simple upper bound on the total variation
distance as a function of the relative entropy; the Bretagnolle-Huber inequality
has been proved to be useful, e.g., in the context of lower bounding the minimax
risk in non-parametric estimation (see, e.g., \cite[pp.~89--90, 94]{Tsybakov09}),
and in the problem of density estimation (see, e.g., \cite[Section~1.6]{Vapnik98}).
Although Vajda's tight lower bound in \cite{Vajda70} is slightly tighter everywhere
than the Bretagnolle-Huber inequality, our motivation for the generalization of
the latter bound is justified later in this paper.
The utility of the new inequalities is exemplified in the setup of Bayesian binary
hypothesis testing.
Section~\ref{section: local behavior} finally derives new results on the local behavior
of $f$-divergences, i.e., the characterization of their scaling when the pair of probability
measures are sufficiently close to each other. The starting point of our analysis in
Section~\ref{section: local behavior} relies on the analysis in \cite[Section~3]{PardoV03},
regarding the asymptotic properties of $f$-divergences.
The reading of Sections~\ref{section: integral representation}--\ref{section: local behavior}
can be done in any order since the analysis in these sections is independent.
\section{Preliminaries and Notation}
\label{section: preliminaries}
We assume throughout that the probability measures $P$ and $Q$ are
defined on a common measurable space $(\ensuremath{\mathcal}{A}, \mathscr{F})$, and $P \ll Q$
denotes that $P$ is {\em absolutely continuous} with respect to $Q$, namely
there is no event $\ensuremath{\mathcal}{F} \in \mathscr{F}$ such that $P(\ensuremath{\mathcal}{F}) > 0 = Q(\ensuremath{\mathcal}{F})$.
\begin{definition}
\label{def:RI}
The {\em relative information} provided by $a \in \ensuremath{\mathcal}{A}$
according to $(P,Q)$, where $P \ll Q$, is given by
\begin{align} \label{eq:RI}
\imath_{P\|Q}(a) := \log \, \frac{\text{d}P}{\text{d}Q} \, (a).
\end{align}
More generally, even if $P \not\ll Q$, let $R$ be an arbitrary dominating probability measure such that
$P,Q \ll R$ (e.g., $R = \tfrac12 (P+Q)$); irrespectively of the choice of $R$, the relative information
is defined to be
\begin{align} \label{eq2:RI}
\imath_{P\|Q}(a) := \imath_{P\|R}(a) - \imath_{Q\|R}(a), \quad a \in \ensuremath{\mathcal}{A}.
\end{align}
\end{definition}
The following asymmetry property follows from \eqref{eq2:RI}:
\begin{align}
\label{eq: asymmetry RI}
\imath_{P\|Q} = -\imath_{Q\|P}.
\end{align}
\begin{definition} \label{def:RIS}
The {\em relative information spectrum} is the cumulative distribution function
\begin{align} \label{eq:RIS}
\mathds{F}_{P \| Q}(x) = \ensuremath{\mathbb{P}}\bigl[\imath_{P\|Q}(X) \leq x \bigr], \quad x \in \ensuremath{\mathbb{R}}, \; X \sim P.
\end{align}
The {\em relative entropy} is the expected value of the relative information when it is distributed according to
$P$:
\begin{align} \label{eq: KL div}
D(P\|Q) := \mathbb{E}\bigl[\imath_{P\|Q}(X)\bigr], \quad X \sim P.
\end{align}
\end{definition}
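For discrete distributions, the relative information, its spectrum, and the relative entropy of Definitions \ref{def:RI} and \ref{def:RIS} can be evaluated directly. A minimal Python sketch follows, with logarithms in nats; the three-point distributions used to exercise it are illustrative:

```python
import math

def relative_information(P, Q):
    """i_{P||Q}(a) = log(P(a)/Q(a)) on the support of P (in nats)."""
    return {a: math.log(P[a] / Q[a]) for a in P if P[a] > 0}

def spectrum(P, Q, x):
    """Relative information spectrum F_{P||Q}(x) = P[i_{P||Q}(X) <= x], X ~ P."""
    info = relative_information(P, Q)
    return sum(P[a] for a in info if info[a] <= x)

def relative_entropy(P, Q):
    """D(P||Q) = E_P[i_{P||Q}(X)] (in nats)."""
    info = relative_information(P, Q)
    return sum(P[a] * info[a] for a in info)
```

For example, with $P = (\tfrac12, \tfrac14, \tfrac14)$ and $Q$ uniform on three points, the spectrum jumps from $\tfrac12$ to $1$ between $x = 0$ and $x = \log \tfrac32$.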
Throughout this paper, $\ensuremath{\mathcal}{C}$ denotes the set of convex functions $f \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$
with $f(1)=0$. Hence, the function $f \equiv 0$ is in $\ensuremath{\mathcal}{C}$;
if $f \in \ensuremath{\mathcal}{C}$, then $a f \in \ensuremath{\mathcal}{C}$ for all $a>0$;
and if $f,g \in \ensuremath{\mathcal}{C}$, then $f+g \in \ensuremath{\mathcal}{C}$.
We next provide a general definition for the family of $f$-divergences (see \cite[p.~4398]{LieseV_IT2006}).
\begin{definition} \label{def:fD} ($f$-divergence \cite{AliS, Csiszar63, Csiszar67a})
Let $P$ and $Q$ be probability measures, let $\mu$ be a dominating measure of $P$ and $Q$
(i.e., $P, Q \ll \mu$; e.g., $\mu = P+Q$),
and let $p := \frac{\text{d}P}{\text{d}\mu}$ and $q := \frac{\text{d}Q}{\text{d}\mu}$.
The {\em $f$-divergence from $P$ to $Q$} is given, independently of $\mu$, by
\begin{align} \label{eq:fD}
D_f(P\|Q) := \int q \, f \Bigl(\frac{p}{q}\Bigr) \, \text{d}\mu,
\end{align}
where
\begin{align}
& f(0) := \underset{t \downarrow 0}{\lim} \, f(t), \\
& 0 f\Bigl(\frac{0}{0}\Bigr) := 0, \\
& 0 f\Bigl(\frac{a}{0}\Bigr) := \lim_{t \downarrow 0} \, t f\Bigl(\frac{a}{t}\Bigr) = a \lim_{u \to \infty} \frac{f(u)}{u}, \quad a>0.
\end{align}
\end{definition}
We rely in this paper on the following properties of $f$-divergences:
\begin{proposition} \label{proposition: uniqueness}
Let $f, g \in \ensuremath{\mathcal}{C}$. The following conditions are equivalent:
\begin{enumerate}[1)]
\item
\begin{align}
D_f(P\|Q) = D_g(P\|Q), \quad \forall \, P, Q;
\end{align}
\item
there exists a constant $c \in \ensuremath{\mathbb{R}}$ such that
\begin{align} \label{eq: invariance of f-div}
f(t) - g(t) = c \; (t-1), \quad \forall \, t \in (0, \infty).
\end{align}
\end{enumerate}
\end{proposition}
\begin{proposition} \label{proposition: conjugate}
Let $f \in \ensuremath{\mathcal}{C}$, and let
$f^\ast \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ be the {\em conjugate function}, given by
\begin{align} \label{eq: conjugate f}
f^\ast(t) = t \, f\left(\tfrac1t\right)
\end{align}
for $t > 0$. Then,
\begin{enumerate}[1)]
\item $f^\ast \in \ensuremath{\mathcal}{C}$;
\item $f^{\ast\ast} = f$;
\item for every pair of probability measures $(P,Q)$,
\begin{align} \label{eq: Df and Df^ast}
D_f(P \| Q) = D_{f^\ast}(Q \| P).
\end{align}
\end{enumerate}
\end{proposition}
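Propositions \ref{proposition: uniqueness} and \ref{proposition: conjugate} can be verified numerically on finite alphabets. In the sketch below, the two-point distributions, the relative-entropy kernel $f(t) = t \log t$, and the constant $c$ are illustrative assumptions:

```python
import math

def f_div(P, Q, f):
    """D_f(P||Q) = sum_a Q(a) f(P(a)/Q(a)), assuming P and Q have full support."""
    return sum(Q[a] * f(P[a] / Q[a]) for a in Q)

# Relative-entropy kernel and its shift by c(t - 1); by Proposition 1
# both kernels induce the same f-divergence.
f = lambda t: t * math.log(t)
c = 2.5
g = lambda t: f(t) + c * (t - 1.0)   # note g(1) = 0, so g is admissible

# Conjugate kernel f*(t) = t f(1/t); by Proposition 2,
# D_f(P||Q) = D_{f*}(Q||P).
f_star = lambda t: t * f(1.0 / t)
```

The identity $D_f(P\|Q) = D_g(P\|Q)$ holds because $\sum_a Q(a) \, c \, (P(a)/Q(a) - 1) = c \sum_a (P(a) - Q(a)) = 0$.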
By an analytic extension of $f^\ast$ in \eqref{eq: conjugate f} at $t=0$, let
\begin{align}
\label{eq: conjugate f at 0}
f^\ast(0) := \lim_{t \downarrow 0} f^\ast(t) = \lim_{u \to \infty} \frac{f(u)}{u}.
\end{align}
Note that the convexity of $f^\ast$ implies that $f^\ast(0) \in (-\infty, \infty]$.
In continuation to Definition~\ref{def:fD}, we get
\begin{align} \label{eq:fD2}
D_f(P\|Q) &= \int q \; f\left(\frac{p}{q}\right) \, \text{d}\mu \\
\label{eq2:fD2}
& = \int_{\{pq > 0\}} q \, f \left( \frac{p}{q} \right) \,\mathrm{d} \mu
+ Q( p = 0 ) \, f(0) + P (q = 0) \, f^\ast(0)
\end{align}
with the convention in \eqref{eq2:fD2} that $0 \cdot \infty = 0$.
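On a finite alphabet, the decomposition above translates directly into a sum over three cases. A Python sketch follows; the distributions used below are illustrative, and $f^\ast(0)$ is passed in explicitly since it may be infinite:

```python
def f_divergence(P, Q, f, f_at_0, fstar_at_0):
    """D_f(P||Q) over a common finite alphabet, following the conventions
    0 f(0/0) = 0, the p = 0 terms contribute q f(0), and the q = 0 terms
    contribute p f*(0)."""
    total = 0.0
    for a in set(P) | set(Q):
        p, q = P.get(a, 0.0), Q.get(a, 0.0)
        if p > 0 and q > 0:
            total += q * f(p / q)
        elif p == 0 and q > 0:
            total += q * f_at_0
        elif p > 0 and q == 0:
            total += p * fstar_at_0
    return total
```

Instantiating $f(t) = |t - 1|$ with $f(0) = f^\ast(0) = 1$ recovers the total variation distance, including its boundary terms on the parts of the alphabet where the supports of $P$ and $Q$ do not overlap.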
\vspace*{0.2cm}
We refer in this paper to the following $f$-divergences:
\begin{enumerate}[1)]
\item {\em Relative entropy}:
\begin{align}
\label{eq: KL divergence}
D(P\|Q) &= D_f(P\|Q)
\end{align}
with
\begin{align}
\label{eq: f for KL}
f(t) = t\, \log t, \quad t>0.
\end{align}
\item {\em Jeffrey's divergence} \cite{jeffreys46}:
\begin{align}
\label{eq1: jeffreys}
J(P\|Q) &:= D( P \| Q) + D(Q\|P) \\
\label{eq2: jeffreys}
& \, = D_f(P\|Q)
\end{align}
with
\begin{align}
\label{f - jeffreys}
f(t) = (t-1) \, \log t, \quad t>0.
\end{align}
\item {\em Hellinger divergence of order $\alpha \in (0,1) \cup (1, \infty)$}
\cite[Definition~2.10]{LieseV_book87}:
\begin{align} \label{eq: Hel-divergence}
\mathscr{H}_{\alpha}(P \| Q) = D_{f_\alpha}(P \| Q)
\end{align}
with
\begin{align} \label{eq: H as fD}
f_\alpha(t) = \frac{t^\alpha-1}{\alpha-1}, \quad t > 0.
\end{align}
Some of the significance of the Hellinger divergence stems from the following facts:
\begin{enumerate}[a)]
\item
The analytic extension of $\mathscr{H}_{\alpha}(P \| Q)$ at $\alpha=1$ yields
\begin{align}
\label{eq1: KL}
D(P \| Q) = \mathscr{H}_1(P\| Q) \, \log e.
\end{align}
\item
The {\em chi-squared divergence} \cite{Pearson1900x} is
the second order Hellinger divergence (see, e.g.,
\cite[p.~48]{LeCam86}), i.e.,
\begin{align}
\label{eq2: chi^2}
\chi^2(P \| Q) = \mathscr{H}_2(P \| Q).
\end{align}
Note that, due to Proposition~\ref{proposition: uniqueness},
\begin{align} \label{eq: chi squared}
\chi^2(P \| Q) = D_f(P\|Q),
\end{align}
where $f \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ can be defined as
\begin{align} \label{eq: f for chi^2}
f(t) = (t-1)^2, \quad t>0.
\end{align}
\item The \textit{squared Hellinger distance} (see, e.g., \cite[p.~47]{LeCam86}),
denoted by $\mathscr{H}^2(P \| Q)$, satisfies the identity
\begin{align}
\label{eq3: Sq Hel}
\mathscr{H}^2(P \| Q) = \tfrac12 \, \mathscr{H}_{\frac12}(P \| Q).
\end{align}
\item
The {\em Bhattacharyya distance} \cite{Kailath67}, denoted by
$B(P\|Q)$, satisfies
\begin{align}
\label{eq: B dist}
B(P\|Q) = \log \frac1{1 - \mathscr{H}^2(P \| Q)}.
\end{align}
\item
The {\em R\'enyi divergence} of order $\alpha \in (0,1) \cup (1,\infty)$ is a
one-to-one transformation of the Hellinger divergence of the same order \cite[(14)]{Csiszar66}:
\begin{align}\label{renyimeetshellinger}
D_\alpha(P \| Q ) = \frac1{\alpha -1} \; \log \bigl( 1 + (\alpha - 1)
\, \mathscr{H}_\alpha(P \| Q) \bigr).
\end{align}
\item
The {\em Alpha-divergence} of order $\alpha$, as it is defined in \cite{AmariN00} and \cite[(4)]{CichockiA10}, is
a generalized relative entropy which (up to a scaling factor) is equal to the Hellinger divergence of the
same order $\alpha$. More explicitly,
\begin{align}
D_{\text{A}}^{(\alpha)}(P\|Q) = \frac1\alpha \, \mathscr{H}_{\alpha}(P \| Q),
\end{align}
where $D_{\text{A}}^{(\alpha)}(\cdot \| \cdot)$ denotes the Alpha-divergence of order $\alpha$.
Note, however, that the Beta and Gamma-divergences in \cite{CichockiA10}, as well as the
generalized divergences in \cite{CichockiCA11} and \cite{CichockiCA15}, are
not $f$-divergences in general.
\end{enumerate}
\item {\em $\chi^s$ divergence for $s \geq 1$} \cite[(2.31)]{LieseV_book87}, and the {\em total variation distance}:
The function
\begin{align}
\label{eq: f - chi^s div}
f_s(t) = |t-1|^s, \quad t>0
\end{align}
results in
\begin{align}
\label{eq: chi^s div}
\chi^s(P\|Q) &= D_{f_s}(P\|Q).
\end{align}
Specifically, for $s=1$, let
\begin{align} \label{eq: f-TV}
f(t):=f_1(t)=|t-1|, \quad t>0,
\end{align}
and the total variation distance is expressed as an $f$-divergence:
\begin{align}
\label{eq1: TV distance}
|P-Q| &= D_f(P\|Q).
\end{align}
\item {\em Triangular Discrimination \cite{Topsoe_IT00} (a.k.a. Vincze-Le Cam distance)}:
\begin{align} \label{eq:delta}
\Delta(P\|Q) = D_f(P\|Q)
\end{align}
with
\begin{align} \label{eq:tridiv}
f(t) = \frac{(t-1)^2}{t+1}, \quad t > 0.
\end{align}
Note that
\begin{align}
\label{eq: Delta and chi^2}
\tfrac12 \, \Delta (P \| Q) & = \chi^2 (P \, \| \, \tfrac12 P + \tfrac12 Q)
= \chi^2 (Q \, \| \, \tfrac12 P + \tfrac12 Q).
\end{align}
\item {\em Lin's measure} \cite[(4.1)]{Lin91}:
\begin{align} \label{eq: Lin91}
L_\theta(P\|Q) &:= H\bigl(\theta P + (1-\theta)Q\bigr) - \theta H(P) - (1-\theta) H(Q) \\
\label{eq2: Lin91}
& \, = \theta \, D\bigl(P \, \| \, \theta P + (1-\theta)Q\bigr) + (1-\theta) \, D\bigl(Q \, \| \, \theta P + (1-\theta)Q \bigr),
\end{align}
for $\theta \in [0,1]$. This measure can be expressed by the following $f$-divergence:
\begin{align}
\label{eq: Lin div as Df}
L_\theta(P\|Q) = D_{f_\theta}(P\|Q),
\end{align}
with
\begin{align}
\label{eq: f of Lin div}
f_{\theta}(t) := \theta t \log t - \bigl(\theta t + 1-\theta \bigr) \, \log \bigl(\theta t + 1-\theta \bigr), \quad t>0.
\end{align}
The special case of \eqref{eq: Lin div as Df} with $\theta = \tfrac12$ gives the
{\em Jensen-Shannon divergence} (a.k.a. capacitory discrimination):
\begin{align}
\label{eq:js1}
\mathrm{JS}(P\|Q) & := L_{\frac12}(P\|Q) \\
\label{eq:js2}
&= \tfrac12 D\bigl(P \, \| \, \tfrac12 P + \tfrac12 Q\bigr) + \tfrac12 D\bigl(Q \, \| \, \tfrac12 P + \tfrac12 Q \bigr).
\end{align}
\item {\em $E_\gamma$ divergence} \cite[p.~2314]{PPV10}: For $\gamma \geq 1$,
\begin{align}
\label{eq1:E_gamma}
E_{\gamma}(P\|Q) &:= \max_{\ensuremath{\mathcal}{U} \in \mathscr{F}} \bigl( P(\ensuremath{\mathcal}{U}) - \gamma \, Q(\ensuremath{\mathcal}{U}) \bigr) \\
\label{eq2:E_gamma}
& \; = \ensuremath{\mathbb{P}}[\imath_{P\|Q}(X) > \log \gamma] - \gamma \, \ensuremath{\mathbb{P}}[\imath_{P\|Q}(Y) > \log \gamma]
\end{align}
with $X \sim P$ and $Y \sim Q$, and where \eqref{eq2:E_gamma} follows from the Neyman-Pearson
lemma. The $E_\gamma$ divergence can be identified as an $f$-divergence:
\begin{align} \label{eq:Eg f-div}
E_{\gamma}(P \| Q) = D_{f_\gamma}(P\|Q)
\end{align}
with
\begin{align} \label{eq: f for EG}
f_\gamma(t) := (t-\gamma)^+, \quad t > 0
\end{align}
where $(x)^+ := \max\{x,0\}$.
The following relation to the total variation distance holds:
\begin{align} \label{eq:EG-TV}
E_1(P\|Q) = \tfrac12 \, |P-Q|.
\end{align}
\item {\em DeGroot statistical information} (\cite{DeGroot62}, \cite{LieseV_IT2006}):
For $\omega \in (0,1)$,
\begin{align} \label{eq:DG f-div}
\mathcal{I}_\omega(P\|Q) = D_{\phi_\omega}(P\|Q)
\end{align}
with
\begin{align} \label{eq: f for DG}
\phi_\omega(t) = \min \{\omega, 1-\omega\}
- \min \{\omega t, 1-\omega\}, \quad t > 0.
\end{align}
The following relation to the total variation distance holds:
\begin{align} \label{eq:DG-TV}
\mathcal{I}_{\frac12}(P\|Q) = \tfrac14 \, |P-Q|,
\end{align}
and the DeGroot statistical information and the $E_\gamma$ divergence
are related as follows \cite[(384)]{ISSV16}:
\begin{align}
\label{eq: DG-EG}
\mathcal{I}_\omega(P\|Q) =
\begin{dcases}
\omega \, E_{\frac{1-\omega}{\omega}}(P\|Q), & \quad \mbox{$\omega \in \bigl(0, \tfrac12\bigr]$,} \\[0.2cm]
(1-\omega) \, E_{\frac{\omega}{1-\omega}}(Q\|P), & \quad \mbox{$\omega \in \bigl(\tfrac12, 1\bigr)$.}
\end{dcases}
\end{align}
\end{enumerate}
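Several of the identities above can be verified numerically on a small finite alphabet. The following Python sketch (an illustration only, with an arbitrary pair of three-point distributions and natural logarithms) checks \eqref{eq: Delta and chi^2}, the two expressions \eqref{eq: Lin div as Df} and \eqref{eq:js2} for the Jensen-Shannon divergence, \eqref{eq:EG-TV}, and \eqref{eq: DG-EG}:

```python
import math

# Arbitrary example distributions on a three-point alphabet; all logs in nats.
P = [0.6, 0.3, 0.1]
Q = [0.1, 0.3, 0.6]

def f_div(f, A, B):
    # D_f(A||B) = sum_b b * f(a/b), assuming strictly positive B
    return sum(b * f(a / b) for a, b in zip(A, B))

def kl(A, B):
    # relative entropy D(A||B) in nats
    return sum(a * math.log(a / b) for a, b in zip(A, B))

M = [(p + q) / 2 for p, q in zip(P, Q)]            # the mixture (P+Q)/2

# (1/2) * Delta(P||Q) = chi^2(P || (P+Q)/2) = chi^2(Q || (P+Q)/2)
delta = f_div(lambda t: (t - 1) ** 2 / (t + 1), P, Q)
chi2 = lambda A, B: f_div(lambda t: (t - 1) ** 2, A, B)

# Jensen-Shannon divergence: as an f-divergence vs. its mixture form
f_js = lambda t: 0.5 * t * math.log(t) - (0.5 * t + 0.5) * math.log(0.5 * t + 0.5)
js_fdiv = f_div(f_js, P, Q)
js_direct = 0.5 * kl(P, M) + 0.5 * kl(Q, M)

# E_gamma divergence: sum of (p - gamma*q)^+, with E_1(P||Q) = (1/2)|P-Q|
def E_gamma(A, B, g):
    return sum(max(a - g * b, 0.0) for a, b in zip(A, B))
tv = sum(abs(p - q) for p, q in zip(P, Q))          # total variation |P-Q|

# DeGroot statistical information via the f-divergence with phi_omega
def degroot(A, B, w):
    return sum(b * (min(w, 1 - w) - min(w * a / b, 1 - w)) for a, b in zip(A, B))
```

The tolerances used in checking these quantities are loose floating-point margins; the identities themselves hold exactly.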
\vspace*{0.2cm}
\section{New Integral Representations of $f$-divergences}
\label{section: integral representation}
The main result in this section provides new integral representations
of $f$-divergences as a function of the relative information spectrum
(see Definition~\ref{def:RIS}). The reader is referred to other integral
representations (see \cite[Section~2]{Liese2012}, \cite[Section~5]{ReidW11},
\cite[Section~5.B]{ISSV16}, and references therein), expressing a general
$f$-divergence by means of the DeGroot statistical information or the
$E_\gamma$ divergence.
\begin{lemma} \label{lemma: g}
Let $f \in \ensuremath{\mathcal}{C}$ be a strictly convex function at~1. Let $g \colon \ensuremath{\mathbb{R}} \mapsto \ensuremath{\mathbb{R}}$ be defined as
\begin{align} \label{eq:g}
g(x) := \exp(-x) \, f\bigl(\exp(x)\bigr) - f'_{+}(1) \, \bigl(1 - \exp(-x) \bigr), \qquad x \in \ensuremath{\mathbb{R}}
\end{align}
where $f'_{+}(1)$ denotes the right-hand derivative of $f$ at 1 (due to the convexity of $f$ on $(0, \infty)$,
it exists and it is finite). Then, the function $g$ is non-negative, it is strictly monotonically decreasing
on $(-\infty, 0]$, and it is strictly monotonically increasing on $[0, \infty)$ with $g(0)=0$.
\end{lemma}
\begin{proof}
For any function $u \in \ensuremath{\mathcal}{C}$, let $\widetilde{u} \in \ensuremath{\mathcal}{C}$ be given by
\begin{align} \label{eq:widetilde u}
\widetilde{u}(t) = u(t) - u'_{+}(1) (t-1), \quad t \in (0, \infty),
\end{align}
and let $u^\ast \in \ensuremath{\mathcal}{C}$ be the conjugate function, as given in \eqref{eq: conjugate f}.
The function $g$ in \eqref{eq:g} can be expressed in the form
\begin{align}
\label{eq:decompose g}
g(x) = (\widetilde{f})^{\ast} \bigl( \exp(-x) \bigr), \quad x \in \ensuremath{\mathbb{R}},
\end{align}
as it is next verified. For $t>0$, we get from \eqref{eq: conjugate f} and \eqref{eq:widetilde u},
\begin{align}
\label{eq:verify g}
(\widetilde{f})^{\ast}(t) = t \widetilde{f}\left(\frac1t\right)
= t f\left( \frac1t \right) + f'_{+}(1) \, (t-1),
\end{align}
and the substitution $t := \exp(-x)$ for $x \in \ensuremath{\mathbb{R}}$
yields \eqref{eq:decompose g} in view of \eqref{eq:g}.
By assumption, $f \in \ensuremath{\mathcal}{C}$ is strictly convex at~1, and these properties are inherited by
$\widetilde{f}$. Since also $\widetilde{f}(1) = \widetilde{f}'(1) = 0$, it follows from \cite[Theorem~3]{LieseV_IT2006}
that both $\widetilde{f}$ and $\widetilde{f}^\ast$ are non-negative on $(0, \infty)$, and they are also
strictly monotonically decreasing on $(0,1]$. Hence, from \eqref{eq: conjugate f}, it follows that the
function $(\widetilde{f})^\ast$ is strictly monotonically increasing on $[1, \infty)$.
Finally, the claimed properties of the function $g$ follow from \eqref{eq:decompose g}, and in view
of the fact that the function $(\widetilde{f})^\ast$ is non-negative with $(\widetilde{f})^\ast(1)=0$,
strictly monotonically decreasing on $(0,1]$ and strictly monotonically increasing on $[1, \infty)$.
\end{proof}
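To illustrate Lemma~\ref{lemma: g}, consider relative entropy with $f(t) = t \log t$ and $f'_{+}(1) = 1$; working in nats, \eqref{eq:g} simplifies to $g(x) = x - 1 + e^{-x}$. The following Python sketch (an illustration only) confirms, over a finite grid, the non-negativity of $g$, the equality $g(0) = 0$, and the monotonicity on each side of the origin:

```python
import math

# f(t) = t*ln(t) with f'(1) = 1, so, in nats,
#   g(x) = exp(-x)*f(exp(x)) - (1 - exp(-x)) = x - 1 + exp(-x)
def g(x):
    return math.exp(-x) * (math.exp(x) * x) - (1.0 - math.exp(-x))

xs = [i / 10.0 for i in range(-50, 51)]   # grid on [-5, 5]; xs[50] == 0.0
vals = [g(x) for x in xs]
```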
\begin{lemma} \label{lemma: 1st int. for f-div}
Let $f \in \ensuremath{\mathcal}{C}$ be a strictly convex function at~1, and let $g \colon \ensuremath{\mathbb{R}} \mapsto \ensuremath{\mathbb{R}}$ be as
in \eqref{eq:g}. Let
\begin{align}
\label{eq:a}
a & := \lim_{x \to \infty} g(x) \in (0, \infty], \\
\label{eq:b}
b & := \lim_{x \to -\infty} g(x) \in (0, \infty],
\end{align}
and let $\ell_1 \colon [0, a) \mapsto [0, \infty)$ and $\ell_2 \colon [0, b) \mapsto (-\infty, 0]$ be
the two inverse functions of $g$. Then,
\begin{align} \label{eq: 1st int. for f-div}
D_f(P \| Q) = \int_0^a \bigl[ 1 - \mathds{F}_{P \| Q}\bigl( \ell_1(t) \bigr) \bigr] \, \mathrm{d}t
+ \int_0^b \mathds{F}_{P \| Q}\bigl( \ell_2(t) \bigr) \, \mathrm{d}t.
\end{align}
\end{lemma}
\begin{proof}
In view of Lemma~\ref{lemma: g}, it follows that $\ell_1 \colon [0, a) \mapsto [0, \infty)$ is strictly monotonically increasing
and $\ell_2 \colon [0, b) \mapsto (-\infty, 0]$ is strictly monotonically decreasing with $\ell_1(0) = \ell_2(0) = 0$.
Let $X \sim P$, and let $V := \imath_{P\|Q}(X)$. Then, we have
\begin{align}
\label{eq: linear shift}
D_f(P\|Q) &= D_{\widetilde{f}}(P \| Q) \\
\label{eq: P and Q switched}
&= D_{(\widetilde{f})^\ast}(Q\|P) \\
&= \int (\widetilde{f})^\ast \bigl( \exp\bigl(\imath_{Q\|P}(x) \bigr) \bigr) \, \mathrm{d}P(x) \\
\label{eq: plus/minus RI}
&= \int (\widetilde{f})^\ast \bigl( \exp\bigl(-\imath_{P\|Q}(x) \bigr) \bigr) \, \mathrm{d}P(x) \\
\label{eq: by the expression for g}
&= \int g \bigl( \imath_{P\|Q}(x) \bigr) \, \mathrm{d}P(x) \\
\label{eq: by V}
&= \ensuremath{\mathbb{E}}\bigl[ g(V) \bigr] \\
\label{eq: expectation of non-negative RV}
&= \int_0^{\infty} \ensuremath{\mathbb{P}} \bigl[ g(V) > t \bigr] \, \mathrm{d}t \\
\label{eq: sum pf prob}
&= \int_0^a \ensuremath{\mathbb{P}} \bigl[V \geq 0, \, g(V) > t \bigr] \, \mathrm{d}t
+ \int_0^b \ensuremath{\mathbb{P}} \bigl[ V < 0, \, g(V)>t \bigr] \, \mathrm{d}t \\
\label{eq: ell's}
&= \int_0^a \ensuremath{\mathbb{P}} \bigl[V > \ell_1(t) \bigr] \, \mathrm{d}t
+ \int_0^b \ensuremath{\mathbb{P}} \bigl[ V \leq \ell_2(t) \bigr] \, \mathrm{d}t \\
\label{eq: expression with RIS}
&= \int_0^a \bigl[ 1 - \mathds{F}_{P \| Q}\bigl( \ell_1(t) \bigr) \bigr] \, \mathrm{d}t
+ \int_0^b \mathds{F}_{P \| Q}\bigl( \ell_2(t) \bigr) \, \mathrm{d}t
\end{align}
where \eqref{eq: linear shift} relies on Proposition~\ref{proposition: uniqueness};
\eqref{eq: P and Q switched} relies on Proposition~\ref{proposition: conjugate};
\eqref{eq: plus/minus RI} follows from \eqref{eq: asymmetry RI};
\eqref{eq: by the expression for g} follows from \eqref{eq:decompose g};
\eqref{eq: by V} holds by the definition of the random variable $V$;
\eqref{eq: expectation of non-negative RV} holds since, in view of Lemma~\ref{lemma: g},
$Z := g(V) \geq 0$, and $\ensuremath{\mathbb{E}}[Z] = \int_0^{\infty} \ensuremath{\mathbb{P}}[Z > t] \, \mathrm{d}t$
for any non-negative random variable $Z$;
\eqref{eq: sum pf prob} holds in view of the monotonicity properties of $g$ in Lemma~\ref{lemma: g},
the definition of $a$ and $b$ in \eqref{eq:a} and \eqref{eq:b}, and by expressing the event
$\{g(V) > t\}$ as a union of two disjoint events;
\eqref{eq: ell's} holds again by the monotonicity properties of $g$ in Lemma~\ref{lemma: g},
and by the definition of its two inverse functions $\ell_1$ and $\ell_2$ as above;
in \eqref{eq: expectation of non-negative RV}--\eqref{eq: ell's}
we are free to substitute $>$ by $\geq$, and $<$ by $\leq$; finally, \eqref{eq: expression with RIS}
holds by the definition of the relative information spectrum in \eqref{eq:RIS}.
\end{proof}
\begin{remark} \label{remark: invariance of g}
The function $g \colon \ensuremath{\mathbb{R}} \mapsto \ensuremath{\mathbb{R}}$ in \eqref{eq:g} is invariant to the mapping
$f(t) \mapsto f(t) + c \, (t-1)$, for $t>0$, with an arbitrary $c \in \ensuremath{\mathbb{R}}$. This invariance
of $g$ (and, hence, also the invariance of its inverse functions $\ell_1$ and $\ell_2$) is
well expected in view of Proposition~\ref{proposition: uniqueness} and Lemma~\ref{lemma: 1st int. for f-div}.
\end{remark}
\begin{example} \label{example: chi-squared divergence}
For the chi-squared divergence in \eqref{eq: chi squared}, letting $f$ be as in \eqref{eq: f for chi^2},
it follows from \eqref{eq:g} that
\begin{align}
g(x) = 4 \sinh^2\left(\tfrac1{2 \log e} \, x \right), \quad x \in \ensuremath{\mathbb{R}},
\end{align}
which yields, from \eqref{eq:a} and \eqref{eq:b}, $a=b=\infty$. Calculation of the two inverse functions of $g$,
as defined in Lemma~\ref{lemma: 1st int. for f-div}, yields the following closed-form expression:
\begin{align} \label{ell_1,2 chi^2}
\ell_{1,2}(u) &= \pm 2 \log \left(\frac{\sqrt{u}+\sqrt{u+4}}{2}\right), \quad u \geq 0.
\end{align}
Substituting \eqref{ell_1,2 chi^2} into \eqref{eq: 1st int. for f-div} provides an integral
representation of $\chi^2(P\|Q)$.
\end{example}
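The closed form in Example~\ref{example: chi-squared divergence} is the exception rather than the rule; the representation in Lemma~\ref{lemma: 1st int. for f-div} can, however, always be evaluated with a numerically inverted $g$. The following Python sketch (an illustration only, in nats, with an arbitrary pair of three-point distributions) recovers $\chi^2(P\|Q)$ from \eqref{eq: 1st int. for f-div} by bisection:

```python
import math

P = [0.2, 0.5, 0.3]
Q = [0.4, 0.4, 0.2]

def g(x):                   # g for the chi-squared divergence (in nats)
    return 4.0 * math.sinh(x / 2.0) ** 2

def ell1(t):                # inverse of g on [0, 10], computed by bisection
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
# this g is even, so the inverse branch on the negative axis is ell2(t) = -ell1(t)

def F(x):                   # relative information spectrum, a step function here
    return sum(p for p, q in zip(P, Q) if math.log(p / q) <= x)

# midpoint Riemann sums for the two integrals; for this example both
# integrands vanish well before t = 1, so the range [0, 1] suffices
dt, n = 1e-3, 1000
I1 = sum((1.0 - F(ell1((k + 0.5) * dt))) * dt for k in range(n))
I2 = sum(F(-ell1((k + 0.5) * dt)) * dt for k in range(n))

chi2 = sum((p - q) ** 2 / q for p, q in zip(P, Q))   # direct computation
```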
\begin{lemma} \label{lemma: identity with RIS}
\begin{align} \label{eq: int. identity with RIS}
\int_0^\infty \frac{\mathds{F}_{P \| Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta = 1.
\end{align}
\end{lemma}
\begin{proof}
Let $X \sim P$. Then, we have
\begin{align}
\int_0^\infty \frac{\mathds{F}_{P \| Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta
\label{eq1: int. identity with RIS}
& = \int_0^\infty \frac1{\beta^2} \, \ensuremath{\mathbb{P}}[ \imath_{P\|Q}(X) \leq \log \beta ] \, \mathrm{d}\beta \\[0.1cm]
\label{eq2: int. identity with RIS}
& = \int_0^\infty \frac1{\beta^2} \, \ensuremath{\mathbb{P}}\biggl[ \exp\bigl(\imath_{Q\|P}(X)\bigr) \geq \frac1\beta \biggr] \, \mathrm{d}\beta \\[0.1cm]
\label{eq3: int. identity with RIS}
& = \int_0^\infty \, \ensuremath{\mathbb{P}}\bigl[ \exp\bigl(\imath_{Q\|P}(X)\bigr) \geq u \bigr] \, \mathrm{d}u \\[0.1cm]
\label{eq4: int. identity with RIS}
& = \ensuremath{\mathbb{E}}\bigl[ \exp\bigl(\imath_{Q\|P}(X)\bigr)\bigr] \\
\label{eq5: int. identity with RIS}
& = 1,
\end{align}
where \eqref{eq1: int. identity with RIS} holds by \eqref{eq:RIS};
\eqref{eq2: int. identity with RIS} follows from \eqref{eq: asymmetry RI};
\eqref{eq3: int. identity with RIS} holds by the substitution $u := \frac1\beta$;
\eqref{eq4: int. identity with RIS} holds since $\exp\bigl(\imath_{Q\|P}(X)\bigr) \geq 0$,
and finally \eqref{eq5: int. identity with RIS} holds since $X \sim P$.
\end{proof}
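For distributions on a finite alphabet, the identity in Lemma~\ref{lemma: identity with RIS} can be verified in closed form, since $\mathds{F}_{P\|Q}(\log \beta)$ is then a step function of $\beta$ with jumps at the likelihood ratios. A Python sketch (an illustration only, with an arbitrary example pair):

```python
P = [0.2, 0.5, 0.3]
Q = [0.4, 0.4, 0.2]

# F(log beta) jumps by the P-mass of each outcome at its ratio p/q; since
# the antiderivative of 1/beta^2 is -1/beta, the integral is piecewise exact.
pts = sorted(zip([p / q for p, q in zip(P, Q)], P))
total, cum = 0.0, 0.0
for i, (r, mass) in enumerate(pts):
    cum += mass                          # value of F(log beta) on [r, next ratio)
    if i + 1 < len(pts):
        total += cum * (1.0 / r - 1.0 / pts[i + 1][0])
    else:
        total += cum * (1.0 / r)         # tail piece: F = 1 on [r_max, infinity)
```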
\begin{remark}
Unlike Example~\ref{example: chi-squared divergence}, in general, the inverse functions
$\ell_1$ and $\ell_2$ in Lemma~\ref{lemma: 1st int. for f-div} are not expressible in
closed form, motivating our next integral representation in Theorem~\ref{theorem: Int. rep.}.
\end{remark}
\vspace*{0.2cm}
The following theorem provides our main result in this section.
\begin{theorem} \label{theorem: Int. rep.}
The following integral representations of $f$-divergences hold:
\begin{enumerate}[a)]
\item
\label{theorem: int. rep. - part 1}
Let
\begin{itemize}
\item $f \in \ensuremath{\mathcal}{C}$ be differentiable on $(0, \infty)$;
\item $w_f \colon (0, \infty) \mapsto [0, \infty)$ be the non-negative weight
function given, for $\beta>0$, by
\begin{align} \label{eq: weight function}
w_f(\beta) & := \frac1{\beta} \left| f'(\beta) - \frac{f(\beta) + f'(1)}{\beta} \right|;
\end{align}
\item the function $G_{P\|Q} \colon (0, \infty) \mapsto [0,1]$ be given by
\begin{align} \label{eq: G function}
G_{P\|Q}(\beta) :=
\begin{dcases}
1 - \mathds{F}_{P \| Q}(\log \beta), & \beta \in [1, \infty), \\[0.1cm]
\mathds{F}_{P \| Q}(\log \beta), & \beta \in (0,1).
\end{dcases}
\end{align}
\end{itemize}
Then,
\begin{align} \label{eq: new int rep Df}
D_f(P \| Q) = \langle w_f, \, G_{P\|Q} \rangle
= \int_0^{\infty} w_f(\beta) \, G_{P\|Q}(\beta) \, \mathrm{d}\beta.
\end{align}
\item \label{theorem: int. rep. - part 2}
More generally, for an arbitrary $c \in \ensuremath{\mathbb{R}}$, let $\widetilde{w}_{f,c} \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$
be a modified real-valued function defined as
\begin{align} \label{eq: generalized w_f}
\widetilde{w}_{f,c}(\beta) := w_f(\beta) + \frac{c}{\beta^2} \, \bigl( 1\{\beta \geq 1\} - 1\{0 < \beta < 1\} \bigr).
\end{align}
Then,
\begin{align} \label{eq2: new int rep Df}
D_f(P \| Q) = \langle \widetilde{w}_{f,c}, \, G_{P\|Q} \rangle.
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
We start by proving the special integral representation in \eqref{eq: new int rep Df},
and then extend our proof to the general representation in \eqref{eq2: new int rep Df}.
\begin{enumerate}[a)]
\item We first assume an additional requirement that $f$ is strictly
convex at~1. In view of Lemma~\ref{lemma: 1st int. for f-div},
\begin{align}
\label{eq: ell_1 of g}
& \ell_1 \bigl( g(u) \bigr) = u, \quad u \in [0, \infty), \\
\label{eq: ell_2 of g}
& \ell_2 \bigl( g(u) \bigr) = u, \quad u \in (-\infty, 0].
\end{align}
Since by assumption $f \in \ensuremath{\mathcal}{C}$ is differentiable on $(0, \infty)$
and strictly convex at~1,
the function $g$ in \eqref{eq:g} is differentiable on $\ensuremath{\mathbb{R}}$.
In view of \eqref{eq: ell_1 of g} and \eqref{eq: ell_2 of g},
substituting $t := g \bigl( \log \beta \bigr)$ in \eqref{eq: 1st int. for f-div}
for $\beta > 0$ implies that
\begin{align} \label{eq: int with mod w_f}
D_f(P \| Q) = \int_1^\infty \bigl[ 1 - \mathds{F}_{P \| Q}\bigl( \log \beta \bigr) \bigr] \,
\overline{w}_f(\beta) \, \mathrm{d}\beta
- \int_0^1 \mathds{F}_{P \| Q}\bigl( \log \beta \bigr) \, \overline{w}_f(\beta) \, \mathrm{d}\beta,
\end{align}
where $\overline{w}_f \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ is given by
\begin{align} \label{eq: mod w_f - g}
\overline{w}_f(\beta) & := \frac{g'\bigl( \log \beta \bigr)}{\beta} \, \log \mathrm{e} \\
\label{eq: mod w_f - f}
& = \frac1\beta \left[ f'(\beta) - \frac{f(\beta) + f'(1)}{\beta} \right]
\end{align}
for $\beta>0$, where \eqref{eq: mod w_f - f} follows from \eqref{eq:g}.
Due to the monotonicity properties of $g$ in Lemma~\ref{lemma: g},
\eqref{eq: mod w_f - g} implies that $\overline{w}_f(\beta) \geq 0$
for $\beta \geq 1$, and $\overline{w}_f(\beta) < 0$ for
$\beta \in (0,1)$. Hence, the weight function $w_f$ in
\eqref{eq: weight function} satisfies
\begin{align} \label{eq: w_f/ mod w_f}
w_f(\beta) = \bigl| \overline{w}_f(\beta) \bigr| = \overline{w}_f(\beta) \, \bigl( 1\{\beta \geq 1\}
- 1\{0 < \beta < 1\} \bigr), \quad \beta>0.
\end{align}
The combination of \eqref{eq: G function}, \eqref{eq: int with mod w_f} and \eqref{eq: w_f/ mod w_f}
gives the required result in \eqref{eq: new int rep Df}.
\par
We now extend the result in \eqref{eq: new int rep Df} when $f \in \ensuremath{\mathcal}{C}$ is differentiable on $(0, \infty)$,
but not necessarily strictly convex at~1.
To that end, let $s \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ be defined as
\begin{align} \label{eq: s function}
s(t) := f(t) + (t^2-1), \quad t>0.
\end{align}
This implies that $s \in \ensuremath{\mathcal}{C}$ is differentiable on $(0, \infty)$, and it is also strictly convex at~1.
In view of the proof of \eqref{eq: new int rep Df} under the additional assumption of strict convexity of $f$ at~1, the application
of this result to the function $s$ in \eqref{eq: s function} yields
\begin{align}
\label{eq: apply for s}
D_s(P\|Q) = \langle w_s, \, G_{P\|Q} \rangle.
\end{align}
In view of \eqref{eq:fD}, \eqref{eq: Hel-divergence}, \eqref{eq: H as fD}, \eqref{eq2: chi^2}
and \eqref{eq: s function},
\begin{align}
\label{eq: D_s}
D_s(P\|Q) = D_f(P\|Q) + \chi^2(P\|Q);
\end{align}
from \eqref{eq: weight function}, \eqref{eq: w_f/ mod w_f}, \eqref{eq: s function}
and the convexity and differentiability of $f \in \ensuremath{\mathcal}{C}$, it follows that the weight function
$w_s \colon (0, \infty) \mapsto [0, \infty)$ satisfies
\begin{align}
\label{eq: w_s}
w_s(\beta) = w_f(\beta) + \left(1-\frac1{\beta^2}\right) \left(1\{\beta \geq 1\} - 1\{0<\beta<1\}\right)
\end{align}
for $\beta>0$. Furthermore, by applying the result in \eqref{eq: new int rep Df} to the chi-squared
divergence $\chi^2(P\|Q)$ in \eqref{eq2: chi^2} whose corresponding function $f_2(t) := t^2-1$ for $t>0$
is strictly convex at~1, we obtain
\begin{align}
\label{eq: application to chi^2}
\chi^2(P\|Q) = \int_0^\infty \left(1-\frac1{\beta^2}\right) \left(1\{\beta \geq 1\} - 1\{0<\beta<1\}\right)
\, G_{P\|Q}(\beta) \, \mathrm{d}\beta.
\end{align}
Finally, the combination of \eqref{eq: apply for s}--\eqref{eq: application to chi^2} yields
$D_f(P\|Q) = \langle w_f, \, G_{P\|Q} \rangle$; this asserts that \eqref{eq: new int rep Df} also holds
by relaxing the condition that $f$ is strictly convex at~1.
\item In view of \eqref{eq: G function}, \eqref{eq: new int rep Df} and \eqref{eq: generalized w_f},
in order to prove \eqref{eq2: new int rep Df} for an arbitrary $c \in \ensuremath{\mathbb{R}}$, it is required to prove
the identity
\begin{align} \label{eq7: identity RIS}
\int_1^{\infty} \frac{1-\mathds{F}_{P \| Q}\bigl( \log \beta \bigr)}{\beta^2} \, \mathrm{d}\beta =
\int_0^1 \frac{\mathds{F}_{P \| Q}\bigl( \log \beta \bigr)}{\beta^2} \, \mathrm{d}\beta.
\end{align}
Equality~\eqref{eq7: identity RIS} can be verified by Lemma~\ref{lemma: identity with RIS}:
by rearranging terms in \eqref{eq7: identity RIS}, we get the identity in \eqref{eq: int. identity with RIS}
(since $\int_1^{\infty} \frac{\mathrm{d}\beta}{\beta^2} = 1$).
\end{enumerate}
\end{proof}
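As a numerical sanity check of \eqref{eq: new int rep Df} (a Python sketch, an illustration only, in nats, with an arbitrary pair of three-point distributions): for $f(t) = t \log t$, the weight function \eqref{eq: weight function} evaluates to $w_f(\beta) = \frac1\beta \bigl| 1 - \frac1\beta \bigr|$, and the inner product $\langle w_f, G_{P\|Q} \rangle$ can be compared with a direct computation of the relative entropy.

```python
import math

P = [0.2, 0.5, 0.3]
Q = [0.4, 0.4, 0.2]

def F(x):        # relative information spectrum (nats), a step function here
    return sum(p for p, q in zip(P, Q) if math.log(p / q) <= x)

def G(beta):     # the function G_{P||Q}
    return 1.0 - F(math.log(beta)) if beta >= 1.0 else F(math.log(beta))

def w_f(beta):   # weight function for f(t) = t*ln(t)
    return (1.0 / beta) * abs(1.0 - 1.0 / beta)

# G vanishes off [min p/q, max p/q] = [0.5, 1.5]; midpoint Riemann sum on [0.4, 2]
db = 1e-4
n = int(1.6 / db)
lhs = sum(w_f(0.4 + (k + 0.5) * db) * G(0.4 + (k + 0.5) * db) * db
          for k in range(n))

kl = sum(p * math.log(p / q) for p, q in zip(P, Q))   # D(P||Q) directly, nats
```

Note that $w_f(1) = 0$, so the discontinuity of $G_{P\|Q}$ at $\beta = 1$ does not affect the numerical integration.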
\begin{remark} \label{remark 2: w_f}
Due to the convexity of $f$, the absolute value in the right side of \eqref{eq: weight function}
is only needed for $\beta \in (0,1)$ (see \eqref{eq: mod w_f - f} and \eqref{eq: w_f/ mod w_f}).
Also, $w_f(1)=0$ since $f(1)=0$.
\end{remark}
\begin{remark} \label{remark 1: w_f}
The weight function $w_f$ only depends on $f$, and the function $G_{P\|Q}$ only depends on the
pair of probability measures $P$ and $Q$. In view of Proposition~\ref{proposition: uniqueness},
it follows that, for $f, g \in \ensuremath{\mathcal}{C}$, the equality $w_f = w_g$ holds on $(0, \infty)$ if and
only if \eqref{eq: invariance of f-div} is satisfied with an arbitrary constant $c \in \ensuremath{\mathbb{R}}$.
It is indeed easy to verify that \eqref{eq: invariance of f-div} yields $w_f = w_g$ on $(0, \infty)$.
\end{remark}
\begin{remark} \label{remark: G function}
An equivalent way to write $G_{P\|Q}$ in \eqref{eq: G function} is
\begin{align} \label{eq2: G function}
G_{P\|Q}(\beta) =
\begin{dcases}
\ensuremath{\mathbb{P}}\left[ \frac{\text{d}P}{\text{d}Q} \, (X) > \beta \right], & \beta \in [1, \infty) \\[0.2cm]
\ensuremath{\mathbb{P}}\left[ \frac{\text{d}P}{\text{d}Q} \, (X) \leq \beta \right], & \beta \in (0, 1)
\end{dcases}
\end{align}
where $X \sim P$.
Hence, the function $G_{P\|Q} \colon (0, \infty) \mapsto [0,1]$ is monotonically increasing in $(0,1)$,
and it is monotonically decreasing in $[1, \infty)$; note that this function is in general discontinuous
at~1 unless $\mathds{F}_{P \| Q}(0) = \tfrac12$.
If $P \ll \gg Q$, then
\begin{align}
\lim_{\beta \downarrow 0} G_{P\|Q}(\beta) = \lim_{\beta \to \infty} G_{P\|Q}(\beta) = 0.
\end{align}
Note that if $P = Q$, then $G_{P\|Q}$ is zero everywhere, which is consistent with the fact that $D_f(P \| Q)=0$.
\end{remark}
\begin{remark} \label{remark: strict convexity}
In the proof of Theorem~\ref{theorem: Int. rep.}-\ref{theorem: int. rep. - part 1}),
the relaxation of the condition of strict convexity at~1 for a differentiable function $f \in \ensuremath{\mathcal}{C}$ is crucial, e.g., for
the $\chi^s$ divergence with $s>2$. To clarify this claim, note that in view of \eqref{eq: f - chi^s div}, the function
$f_s \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ is differentiable if $s>1$, and $f_s \in \ensuremath{\mathcal}{C}$ with $f_s'(1)=0$;
however, $f_s''(1)=0$ if $s>2$, so $f_s$ is not strictly convex at~1 unless $s \in [1,2]$.
\end{remark}
\begin{remark} \label{remark: utility of part 2}
Theorem~\ref{theorem: Int. rep.}-\ref{theorem: int. rep. - part 2}) with $c \neq 0$ enables, in some cases,
to simplify integral representations of $f$-divergences. This is next exemplified in the proof of
Theorem~\ref{theorem: some int. representations}.
\end{remark}
\par
Theorem~\ref{theorem: Int. rep.} yields integral representations for various $f$-divergences
and related measures; some of these representations were previously derived by Sason and Verd\'{u} in
\cite{ISSV16} on a case-by-case basis, without the unified approach of Theorem~\ref{theorem: Int. rep.}.
We next provide such integral representations.
Note that, for some $f$-divergences, the function $f \in \ensuremath{\mathcal}{C}$ is not differentiable
on $(0, \infty)$; hence, Theorem~\ref{theorem: Int. rep.} is not necessarily directly applicable.
\begin{theorem} \label{theorem: some int. representations}
The following integral representations hold as a function of the relative information spectrum:
\begin{enumerate}[1)]
\item Relative entropy \cite[(219)]{ISSV16}:
\begin{align} \label{eq: int. rep. KL}
\tfrac1{\log e} \, D(P\|Q) &= \int_1^{\infty} \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta} \, \mathrm{d}\beta
- \int_0^1 \frac{\mathds{F}_{P\|Q}(\log \beta)}{\beta} \, \mathrm{d}\beta.
\end{align}
\item Hellinger divergence of order $\alpha \in (0,1) \cup (1, \infty)$ \cite[(434) and (437)]{ISSV16}:
\begin{align}
\label{eq: int. rep. Hel}
\mathscr{H}_{\alpha}(P \| Q) &=
\begin{dcases}
\frac1{1-\alpha} - \int_0^\infty \beta^{\alpha-2} \, \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta, &\; \alpha \in (0,1) \\[0.2cm]
\int_0^{\infty} \beta^{\alpha-2} \left(1 - \mathds{F}_{P\|Q}(\log \beta) \right) \, \mathrm{d}\beta - \frac1{\alpha-1}, &\; \alpha \in (1, \infty).
\end{dcases}
\end{align}
In particular, the chi-squared divergence, squared Hellinger distance and Bhattacharyya distance satisfy
\begin{align}
\label{eq: int. rep. chi^2 div}
\chi^2(P \| Q)
&= \int_0^{\infty} \left(1 - \mathds{F}_{P\|Q}(\log \beta) \right) \, \mathrm{d}\beta - 1; \\[0.1cm]
\label{eq: int. rep. H^2 dist}
\mathscr{H}^2(P\|Q)
&= 1 - \tfrac12 \int_0^\infty \beta^{-\frac32} \, \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta; \\[0.1cm]
\label{eq: int. rep. B dist}
B(P\|Q) &= \log 2 - \log \left( \int_0^\infty \beta^{-\frac32} \, \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta \right),
\end{align}
where \eqref{eq: int. rep. chi^2 div} appears in \cite[(439)]{ISSV16}.
\item R\'enyi divergence \cite[(426) and (427)]{ISSV16}:
For $\alpha \in (0,1) \cup (1,\infty)$,
\begin{align} \label{eq: int. rep. RenyiD}
D_{\alpha}(P\|Q) &=
\begin{dcases}
\frac1{\alpha-1} \, \log \left( (1-\alpha) \int_0^\infty \beta^{\alpha-2} \,
\mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta \right), &\; \alpha \in (0,1) \\[0.2cm]
\frac1{\alpha-1} \, \log \left( (\alpha-1) \int_0^{\infty} \beta^{\alpha-2}
\left(1 - \mathds{F}_{P\|Q}(\log \beta) \right) \, \mathrm{d}\beta \right), &\; \alpha \in (1, \infty).
\end{dcases}
\end{align}
\item $\chi^s$ divergence: For $s \geq 1$
\begin{align}
\label{eq: int. rep. chi^s}
\chi^s(P\|Q) &= \int_1^\infty \frac1{\beta} \left(s-1+\frac1\beta \right)
(\beta-1)^{s-1} \left(1 - \mathds{F}_{P\|Q}(\log \beta) \right) \, \mathrm{d}\beta \nonumber \\[0.1cm]
& \hspace*{0.4cm} + \int_0^1 \frac1{\beta} \left(s-1+\frac1\beta \right)
(1-\beta)^{s-1} \, \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta.
\end{align}
In particular, the following identities hold for the total variation distance:
\begin{align} \label{eq: int. rep. TV}
|P-Q| &= 2 \int_1^{\infty} \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta \\[0.1cm]
\label{eq2: int. rep. TV}
&= 2 \int_0^1 \frac{\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta,
\end{align}
where \eqref{eq: int. rep. TV} appears in \cite[(214)]{ISSV16}.
\item DeGroot statistical information:
\begin{align}
\label{eq: int. rep. DeGroot Info}
\ensuremath{\mathcal}{I}_w(P \| Q) &=
\begin{dcases}
(1-w) \int_0^{\frac{1-w}{w}} \frac{\mathds{F}_{P\|Q}(\log \beta)}{\beta^2}
\, \mathrm{d}\beta, & \; w \in \bigl(\tfrac12, 1\bigr) \\[0.2cm]
(1-w) \int_{\frac{1-w}{w}}^{\infty} \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2}
\, \mathrm{d}\beta, & \; w \in \bigl(0, \tfrac12\bigr].
\end{dcases}
\end{align}
\item Triangular discrimination:
\begin{align} \label{eq: int. rep. TD}
\Delta(P\|Q) &= 4 \int_0^\infty \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{(\beta+1)^2} \, \mathrm{d}\beta - 2.
\end{align}
\item Lin's measure: For $\theta \in [0,1]$,
\begin{align} \label{eq: int. rep. Lin's div}
L_\theta(P \| Q) &= h(\theta) - (1-\theta) \int_0^{\infty}
\frac{\log \left(1 + \frac{\theta \beta}{1-\theta}\right)}{\beta^2}
\; \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta,
\end{align}
where $h \colon [0,1] \mapsto [0, \log 2]$ denotes the binary entropy function. Specifically, the
Jensen-Shannon divergence admits the integral representation:
\begin{align} \label{eq: int. rep. JS div}
\mathrm{JS}(P\|Q) &= \log 2 - \int_0^\infty \frac{\log(\beta+1)}{2 \beta^2}
\; \mathds{F}_{P\|Q}(\log \beta) \; \mathrm{d}\beta.
\end{align}
\item Jeffrey's divergence:
\begin{align}
J(P\|Q) &= \int_1^\infty \bigl(1 - \mathds{F}_{P\|Q}(\log \beta) \bigr) \left( \frac{\log e}{\beta}
+ \frac{\log \beta}{\beta^2} \right) \, \mathrm{d}\beta \nonumber \\[0.1cm]
\label{eq: int. rep. Jefreey's div}
& \hspace*{0.4cm} - \int_0^1 \mathds{F}_{P\|Q}(\log \beta) \, \left( \frac{\log e}{\beta}
+ \frac{\log \beta}{\beta^2} \right) \, \mathrm{d}\beta.
\end{align}
\item $E_\gamma$ divergence: For $\gamma \geq 1$,
\begin{align} \label{eq: int. rep. E_gamma}
E_\gamma(P\|Q) = \gamma \int_{\gamma}^{\infty} \frac{1 - \mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta.
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
See Appendix~\ref{appendix: proof of identities}.
\end{proof}
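On a finite alphabet, where the relative information spectrum is a step function, several of these representations can be evaluated in closed form. A Python sketch (an illustration only, with an arbitrary example pair) checks \eqref{eq: int. rep. chi^2 div} and \eqref{eq: int. rep. TV}:

```python
P = [0.2, 0.5, 0.3]
Q = [0.4, 0.4, 0.2]

# F(log beta) is a step function of beta, jumping at the likelihood ratios p/q
pts = sorted(zip([p / q for p, q in zip(P, Q)], P))
def F(beta):
    return sum(m for r, m in pts if r <= beta)

# chi^2(P||Q) = int_0^inf (1 - F(log beta)) dbeta - 1, evaluated piece by
# piece (the integrand vanishes for beta >= max p/q)
chi2_int, prev = 0.0, 0.0
for r, _ in pts:
    chi2_int += (1.0 - F(prev)) * (r - prev)
    prev = r
chi2_int -= 1.0

# |P-Q| = 2 * int_1^inf (1 - F(log beta))/beta^2 dbeta, using the
# antiderivative -1/beta on each piece above 1
tv_int, prev = 0.0, 1.0
for r, _ in pts:
    if r > 1.0:
        tv_int += (1.0 - F(prev)) * (1.0 / prev - 1.0 / r)
        prev = r
tv_int *= 2.0

chi2_direct = sum((p - q) ** 2 / q for p, q in zip(P, Q))
tv_direct = sum(abs(p - q) for p, q in zip(P, Q))
```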
\vspace*{0.2cm}
An application of \eqref{eq: int. rep. E_gamma} yields the following interplay
between the $E_\gamma$ divergence and the relative information spectrum.
\begin{theorem} \label{theorem: RIS -- EG}
Let $X \sim P$, and let the random variable $\imath_{P\|Q}(X)$ have no probability masses. Denote
\begin{align}
\label{eq: A1}
& \ensuremath{\mathcal}{A}_1 := \bigl\{ E_\gamma(P\|Q) : \gamma \geq 1 \bigr\}, \\[0.1cm]
\label{eq: A2}
& \ensuremath{\mathcal}{A}_2 := \bigl\{ E_\gamma(Q\|P) : \gamma > 1 \bigr\}.
\end{align}
Then,
\begin{enumerate}[a)]
\item
$E_{\gamma}(P\|Q)$ is a continuously differentiable function of $\gamma$ on
$(1, \infty)$, and $E'_{\gamma}(P\|Q) \leq 0$;
\item
the sets $\ensuremath{\mathcal}{A}_1$ and $\ensuremath{\mathcal}{A}_2$ determine, respectively, the relative
information spectrum $\mathds{F}_{P\|Q}(\cdot)$ on $[0, \infty)$ and $(-\infty, 0)$;
\item
for $\gamma > 1$,
\begin{align}
\label{eq: RIS - positive arguments}
& \mathds{F}_{P\|Q}(+\log \gamma) = 1 - E_\gamma(P\|Q) + \gamma E'_\gamma(P\|Q), \\
\label{eq: RIS - negative arguments}
& \mathds{F}_{P\|Q}(-\log \gamma) = -E'_\gamma(Q\|P), \\
\label{eq1: RIS at 0}
& \mathds{F}_{P\|Q}(0) = 1 - E_1(P\|Q) + \lim_{\gamma \downarrow 1} E'_\gamma(P\|Q) \\
\label{eq2: RIS at 0}
& \hspace*{1.4cm} = -\lim_{\gamma \downarrow 1} E'_\gamma(Q\|P).
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
We first prove Item~a).
By our assumption, $\mathds{F}_{P\|Q}(\cdot)$ is continuous on $\ensuremath{\mathbb{R}}$. Hence, it follows from
\eqref{eq: int. rep. E_gamma} that $E_\gamma(P\|Q)$ is continuously differentiable in $\gamma \in (1, \infty)$;
furthermore, \eqref{eq1:E_gamma} implies that $E_\gamma(P\|Q)$ is monotonically decreasing
in $\gamma$, which yields $E'_\gamma(P\|Q) \leq 0$.
We next prove Items~b) and c) together. Let $X \sim P$ and $Y \sim Q$.
From \eqref{eq: int. rep. E_gamma}, for $\gamma > 1$,
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}\gamma} \left(\frac{E_\gamma(P\|Q)}{\gamma}\right)
= -\frac{1-\mathds{F}_{P\|Q}(\log \gamma)}{\gamma^2},
\end{align}
which yields \eqref{eq: RIS - positive arguments}. Due to the continuity of
$\mathds{F}_{P\|Q}(\cdot)$, it follows that the set $\ensuremath{\mathcal}{A}_1$ determines
the relative information spectrum on $[0, \infty)$.
To prove \eqref{eq: RIS - negative arguments}, we have
\begin{align}
\label{RIS - eq1}
E_{\gamma}(Q\|P) &= \ensuremath{\mathbb{P}}[\imath_{Q\|P}(Y) > \log \gamma] - \gamma \, \ensuremath{\mathbb{P}}[\imath_{Q\|P}(X) > \log \gamma] \\
\label{RIS - eq2}
&= 1 - \mathds{F}_{Q\|P}(\log \gamma) - \gamma \, \ensuremath{\mathbb{P}}[\imath_{Q\|P}(X) > \log \gamma] \\
\label{RIS - eq3}
&= E_\gamma(Q\|P) - \gamma E'_\gamma(Q\|P) - \gamma \, \ensuremath{\mathbb{P}}[\imath_{Q\|P}(X) > \log \gamma] \\
\label{RIS - eq4}
&= E_\gamma(Q\|P) - \gamma E'_\gamma(Q\|P) - \gamma \, \ensuremath{\mathbb{P}}[\imath_{P\|Q}(X) < -\log \gamma] \\
\label{RIS - eq5}
&= E_\gamma(Q\|P) - \gamma E'_\gamma(Q\|P) - \gamma \, \mathds{F}_{P\|Q}(-\log \gamma)
\end{align}
where \eqref{RIS - eq1} holds by switching $P$ and $Q$ in \eqref{eq2:E_gamma}; \eqref{RIS - eq2}
holds since $Y \sim Q$; \eqref{RIS - eq3} holds by switching $P$ and $Q$ in \eqref{eq: RIS - positive arguments}
(correspondingly, also $X \sim P$ and $Y \sim Q$ are switched); \eqref{RIS - eq4} holds since
$\imath_{Q\|P} = -\imath_{P\|Q}$; \eqref{RIS - eq5} holds by the assumption that
$\frac{\mathrm{d}P}{\mathrm{d}Q} \, (X)$ has no probability masses, which implies that the sign $<$
can be replaced with $\leq$ in the term $\ensuremath{\mathbb{P}}[\imath_{P\|Q}(X) < -\log \gamma]$ in the right side
of \eqref{RIS - eq4}. Finally, \eqref{eq: RIS - negative arguments} readily follows from
\eqref{RIS - eq1}--\eqref{RIS - eq5}, which implies that the set $\ensuremath{\mathcal}{A}_2$ determines
$\mathds{F}_{P\|Q}(\cdot)$ on $(-\infty, 0)$.
Equalities \eqref{eq1: RIS at 0} and \eqref{eq2: RIS at 0} finally follow by letting
$\gamma \downarrow 1$, respectively, on both sides of \eqref{eq: RIS - positive arguments}
and \eqref{eq: RIS - negative arguments}.
\end{proof}
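Theorem~\ref{theorem: RIS -- EG} can be illustrated with a pair of Gaussian distributions, for which $\imath_{P\|Q}(X)$ has no probability masses and $E_\gamma(P\|Q)$ admits a closed form. The following Python sketch (an illustration only, in nats, for the example pair $P = \mathcal{N}(1,1)$ and $Q = \mathcal{N}(0,1)$) checks \eqref{eq: RIS - positive arguments} against a numerical derivative $E'_\gamma(P\|Q)$:

```python
import math

def Phi(x):                     # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# For P = N(1,1) and Q = N(0,1): i_{P||Q}(x) = x - 1/2 (in nats), which has
# no probability masses, and the relative information spectrum is Phi(t - 1/2).
def F(t):
    return Phi(t - 0.5)

def E_gamma(g):                 # E_gamma(P||Q) in closed form for this pair
    c = math.log(g) + 0.5       # i(x) > ln(gamma)  iff  x > c
    return (1.0 - Phi(c - 1.0)) - g * (1.0 - Phi(c))

h = 1e-5
errs, slopes = [], []
for gam in (1.2, 2.0, 5.0):
    dE = (E_gamma(gam + h) - E_gamma(gam - h)) / (2.0 * h)   # central difference
    slopes.append(dE)
    # check: F(log gamma) = 1 - E_gamma + gamma * E'_gamma
    errs.append(abs(F(math.log(gam)) - (1.0 - E_gamma(gam) + gam * dE)))
```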
A similar application of \eqref{eq: int. rep. DeGroot Info} yields an interplay
between DeGroot statistical information and the relative information spectrum.
\begin{theorem}
\label{theorem: RIS -- DG}
Let $X \sim P$, and let the random variable $\imath_{P\|Q}(X)$ have no probability masses. Denote
\begin{align}
\label{eq: B1}
& \ensuremath{\mathcal}{B}_1 := \Bigl\{ \mathcal{I}_\omega(P\|Q) : \omega \in \bigl(0, \tfrac12 \bigr] \Bigr\}, \\[0.1cm]
\label{eq: B2}
& \ensuremath{\mathcal}{B}_2 := \Bigl\{ \mathcal{I}_\omega(P\|Q) : \omega \in \bigl(\tfrac12, 1 \bigr) \Bigr\}.
\end{align}
Then,
\begin{enumerate}[a)]
\item
$\mathcal{I}_\omega(P\|Q)$ is a continuously differentiable function of $\omega$
on $(0, \tfrac12) \cup (\tfrac12, 1)$,
\begin{align}
\lim_{\omega \uparrow \tfrac12} \, \mathcal{I}'_\omega(P\|Q) - \lim_{\omega \downarrow \tfrac12} \, \mathcal{I}'_\omega(P\|Q) = 2,
\end{align}
and $\mathcal{I}'_\omega(P\|Q)$ is, respectively, non-negative or non-positive on $\bigl(0, \tfrac12\bigr)$ and $\bigl(\tfrac12, 1 \bigr)$;
\item
the sets $\ensuremath{\mathcal}{B}_1$ and $\ensuremath{\mathcal}{B}_2$ determine, respectively, the relative
information spectrum $\mathds{F}_{P\|Q}(\cdot)$ on $[0, \infty)$ and $(-\infty, 0)$;
\item
for $\omega \in \bigl(0, \tfrac12 \bigr)$
\begin{align}
\label{eq2: RIS - pos. arg.}
\mathds{F}_{P\|Q}\left(\log \tfrac{1-\omega}{\omega}\right)
= 1 - \mathcal{I}_\omega(P\|Q) - (1-\omega) \, \mathcal{I}'_\omega(P\|Q),
\end{align}
for $\omega \in \bigl(\tfrac12, 1\bigr)$
\begin{align}
\label{eq2: RIS - neg. arg.}
\mathds{F}_{P\|Q}\left(\log \tfrac{1-\omega}{\omega}\right)
= - \mathcal{I}_\omega(P\|Q) - (1-\omega) \, \mathcal{I}'_\omega(P\|Q),
\end{align}
and
\begin{align}
\label{eq3: RIS at 0}
\mathds{F}_{P\|Q}(0) &=
-\mathcal{I}_{\frac12}(P\|Q) - \tfrac12 \lim_{\omega \downarrow \tfrac12}
\, \mathcal{I}'_\omega(P\|Q).
\end{align}
\end{enumerate}
\end{theorem}
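As an illustration of part c), consider a simple binary example (the distributions below are arbitrary). Binary distributions violate the no-mass assumption of the theorem, but the identity still holds at continuity points of the relative information spectrum (cf. the remark below), and the check is made at such a point; the derivative $\mathcal{I}'_\omega$ is approximated by a central difference. A short Python sketch:

```python
import math

# Binary example: P = (0.3, 0.7), Q = (0.5, 0.5).
P = [0.3, 0.7]
Q = [0.5, 0.5]

def degroot(w):
    # DeGroot statistical information I_w(P||Q) (natural logarithms throughout).
    return min(w, 1 - w) - sum(min(w * p, (1 - w) * q) for p, q in zip(P, Q))

def spectrum(x):
    # Relative information spectrum: F_{P||Q}(x) = P[log(dP/dQ)(X) <= x], X ~ P.
    return sum(p for p, q in zip(P, Q) if math.log(p / q) <= x)

w = 0.45                    # a point in (0, 1/2)
x = math.log((1 - w) / w)   # the argument log((1-w)/w) > 0 of the spectrum
h = 1e-6
dI = (degroot(w + h) - degroot(w - h)) / (2 * h)  # numerical derivative I'_w
rhs = 1 - degroot(w) - (1 - w) * dI               # right side of the identity
lhs = spectrum(x)                                 # left side of the identity
```

Both sides evaluate to $0.3$ for this example, in agreement with the identity for positive arguments.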
\begin{remark} \label{remark: discontinuities}
By relaxing the condition in Theorems~\ref{theorem: RIS -- EG} and \ref{theorem: RIS -- DG}
that $\frac{\mathrm{d}P}{\mathrm{d}Q} \, (X)$ has no probability masses when $X \sim P$,
it follows from the proof of Theorem~\ref{theorem: RIS -- EG} that each one of the sets
\begin{align}
\label{eq: union A}
& \ensuremath{\mathcal}{A} := \ensuremath{\mathcal}{A}_1 \cup \ensuremath{\mathcal}{A}_2 = \Bigl\{ \bigl(E_\gamma(P\|Q), E_\gamma(Q\|P) \bigr): \gamma \geq 1 \Bigr\}, \\
\label{eq: union B}
& \ensuremath{\mathcal}{B} := \ensuremath{\mathcal}{B}_1 \cup \ensuremath{\mathcal}{B}_2 = \Bigl\{ \mathcal{I}_\omega(P\|Q): \omega \in (0,1) \Bigr\}
\end{align}
determines $\mathds{F}_{P\|Q}(\cdot)$ at every point on $\ensuremath{\mathbb{R}}$ where this
relative information spectrum is continuous. Note that, as a cumulative distribution
function, $\mathds{F}_{P\|Q}(\cdot)$ is discontinuous at a countable number of points.
Consequently, under the condition that $f \in \ensuremath{\mathcal}{C}$ is differentiable on $(0, \infty)$,
the integral representations of $D_f(P\|Q)$ in Theorem~\ref{theorem: Int. rep.} are not
affected by the countable number of discontinuities of $\mathds{F}_{P\|Q}(\cdot)$.
\end{remark}
In view of Theorems~\ref{theorem: Int. rep.}, \ref{theorem: RIS -- EG} and \ref{theorem: RIS -- DG}
and Remark~\ref{remark: discontinuities}, we get the following result.
\begin{corollary} \label{corollary: DG-EG/ D_f}
Let $f \in \ensuremath{\mathcal}{C}$ be a differentiable function on $(0, \infty)$, and let $P \ll \gg Q$
be probability measures. Then, each one of the sets $\ensuremath{\mathcal}{A}$ and $\ensuremath{\mathcal}{B}$ in \eqref{eq: union A}
and \eqref{eq: union B}, respectively, determines $D_f(P\|Q)$.
\end{corollary}
\begin{remark}
Corollary~\ref{corollary: DG-EG/ D_f} is supported by the integral representation of $D_f(P\|Q)$
in \cite[Theorem~11]{LieseV_IT2006}, expressed as a function of the set of values in $\ensuremath{\mathcal}{B}$,
and its analogous representation in \cite[Proposition~3]{ISSV16} as a function of the set of values
in $\ensuremath{\mathcal}{A}$. More explicitly, \cite[Theorem~11]{LieseV_IT2006} states that if $f \in \ensuremath{\mathcal}{C}$, then
\begin{align} \label{eq1: LieseV06}
D_f(P\|Q) = \int_0^1 \ensuremath{\mathcal}{I}_\omega(P\|Q) \, \text{d}\Gamma_f(\omega)
\end{align}
where $\Gamma_f$ is a certain $\sigma$-finite measure defined on the Borel subsets of $(0,1)$;
it is also shown in \cite[(80)]{LieseV_IT2006} that if $f \in \ensuremath{\mathcal}{C}$ is twice
differentiable on $(0, \infty)$, then
\begin{align}
\label{eq4: LieseV06}
D_f(P\|Q)
&= \int_0^1 \ensuremath{\mathcal}{I}_\omega(P\|Q) \; \frac1{\omega^3}
\; f''\left(\frac{\omega}{1-\omega}\right) \, \text{d}\omega.
\end{align}
\end{remark}
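As a numerical sanity check of \eqref{eq4: LieseV06}, the following Python sketch (with an arbitrary pair of binary distributions, so the likelihood ratio is bounded and the integrand has compact support in $(0,1)$) recovers $\chi^2(P\|Q)$ from the DeGroot statistical information by taking $f(t)=(t-1)^2$, for which $f'' \equiv 2$:

```python
import math

# Binary example with bounded likelihood ratio.
P = [0.3, 0.7]
Q = [0.5, 0.5]

chi2 = sum((p - q) ** 2 / q for p, q in zip(P, Q))  # chi^2(P||Q) = 0.16 here

def degroot(w):
    # DeGroot statistical information I_w(P||Q).
    return min(w, 1 - w) - sum(min(w * p, (1 - w) * q) for p, q in zip(P, Q))

# Riemann-sum evaluation of  int_0^1 I_w(P||Q) * (1/w^3) * f''(w/(1-w)) dw
# with f(t) = (t-1)^2, i.e. f'' = 2; the result should recover chi^2(P||Q).
n = 200_000
h = 1.0 / n
integral = 0.0
for k in range(1, n):
    w = k * h
    integral += degroot(w) * 2.0 / w ** 3
integral *= h
```

For this example the integral can also be evaluated in closed form, and it equals $0.16 = \chi^2(P\|Q)$ exactly.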
\vspace*{0.2cm}
\section{New $f$-divergence Inequalities}
\label{section: new inequalities}
Various approaches for the derivation of $f$-divergence inequalities were studied
in the literature (see Section~\ref{section: introduction} for references). This section suggests
a new approach, leading to a lower bound on an arbitrary $f$-divergence by
means of the $E_\gamma$ divergence of an arbitrary order $\gamma \geq 1$
(see \eqref{eq1:E_gamma}) or the DeGroot statistical information (see \eqref{eq:DG f-div}).
This approach leads to generalizations of the Bretagnolle-Huber inequality
\cite{BretagnolleH79}; the utility of these generalizations is motivated later in this section.
The utility of the $f$-divergence inequalities
in this section is exemplified in the setup of Bayesian binary hypothesis testing.
In the following, we provide the first main result in this section for the
derivation of new $f$-divergence inequalities by means of the $E_\gamma$
divergence. Generalizing the total variation distance, the $E_\gamma$ divergence
in \eqref{eq1:E_gamma}--\eqref{eq:Eg f-div} is an $f$-divergence whose
utility in information theory has been exemplified in \cite[Chapter~3]{CZK98},
\cite{Liu17}, \cite[p.~2314]{PPV10} and \cite{PW15}; the properties of
this measure were studied in \cite[Section~2.B]{Liu17} and
\cite[Section~7]{ISSV16}.
\vspace*{0.1cm}
\begin{theorem} \label{theorem: f-div ineq}
Let $f \in \ensuremath{\mathcal}{C}$, and let $f^\ast \in \ensuremath{\mathcal}{C}$ be the
conjugate convex function as defined in \eqref{eq: conjugate f}.
Let $P$ and $Q$ be probability measures.
Then, for all $\gamma \in [1, \infty)$,
\begin{align} \label{eq: f-div ineq}
D_f(P\|Q) \geq f^\ast\left(1 + \tfrac1\gamma \, E_\gamma(P\|Q) \right)
+ f^\ast\left(\tfrac1\gamma \, \bigl(1- E_\gamma(P\|Q) \bigr) \right)
- f^\ast\left( \tfrac1\gamma \right).
\end{align}
\end{theorem}
\begin{proof}
Let $p = \frac{\mathrm{d}P}{\mathrm{d}\mu}$
and $q = \frac{\mathrm{d}Q}{\mathrm{d}\mu}$
be the densities of $P$ and $Q$
with respect to a dominating measure
$\mu$ $(P, Q \ll \mu)$. Then,
for an arbitrary $a \in \ensuremath{\mathbb{R}}$,
\begin{align}
D_f(P \| Q) &= D_{f^\ast}(Q \| P) \\[0.1cm]
&= \int p \, f^\ast\left( \frac{q}{p} \right) \, \mathrm{d}\mu \\[0.1cm]
&= \int p \left[ f^\ast\left( \max\left\{a, \frac{q}{p} \right\} \right)
+ f^\ast\left( \min\left\{a, \frac{q}{p} \right\} \right) - f^\ast(a) \right] \, \mathrm{d}\mu \\[0.1cm]
\label{eq1: Jensen}
&\geq f^\ast\left( \int p \max\left\{a, \frac{q}{p} \right\} \, \mathrm{d}\mu \right)
+ f^\ast\left( \int p \min\left\{a, \frac{q}{p} \right\} \, \mathrm{d}\mu \right) - f^\ast(a)
\end{align}
where the third equality holds pointwise since either $\max\bigl\{a, \tfrac{q}{p}\bigr\} = a$ or
$\min\bigl\{a, \tfrac{q}{p}\bigr\} = a$, and \eqref{eq1: Jensen} follows from the convexity of
$f^\ast$ and by invoking Jensen's inequality.
Setting $a := \frac1\gamma$ with $\gamma \in [1, \infty)$ gives
\begin{align}
\int p \max\left\{a, \frac{q}{p} \right\} \, \mathrm{d}\mu
&= \int \max \left\{ \frac{p}{\gamma}, q \right\} \, \mathrm{d}\mu \\[0.1cm]
&= \int q \, \mathrm{d}\mu + \int \max \left\{ \frac{p}{\gamma} - q, 0 \right\} \, \mathrm{d}\mu \\[0.1cm]
&= 1 + \frac1\gamma \int q \max\left\{\frac{p}{q} - \gamma, 0 \right\} \, \mathrm{d}\mu \\[0.1cm]
\label{eq: 1st integral}
&= 1 + \tfrac1\gamma \, E_\gamma(P \| Q),
\end{align}
and
\begin{align}
\int p \min\left\{a, \frac{q}{p} \right\} \, \mathrm{d}\mu
&= \int p \left( a + \frac{q}{p} - \max\left\{a, \frac{q}{p} \right\} \right) \mathrm{d}\mu \\[0.1cm]
&= a+1 - \int p \max\left\{a, \frac{q}{p} \right\} \, \mathrm{d}\mu \\[0.1cm]
\label{eq: 2nd integral}
&= \tfrac1\gamma \, \bigl( 1 - E_\gamma(P \| Q) \bigr)
\end{align}
where \eqref{eq: 2nd integral} follows from \eqref{eq: 1st integral} by setting $a := \frac1\gamma$.
Substituting \eqref{eq: 1st integral} and \eqref{eq: 2nd integral} into the right side of
\eqref{eq1: Jensen} gives \eqref{eq: f-div ineq}.
\end{proof}
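Theorem~\ref{theorem: f-div ineq} can be checked numerically. For instance, for the relative entropy one takes $f(t) = t \log t$, whose conjugate is $f^\ast(t) = t \, f(1/t) = -\log t$; the Python sketch below (with an arbitrary pair of discrete distributions, natural logarithms) evaluates the lower bound in \eqref{eq: f-div ineq} for several values of $\gamma$:

```python
import math

P = [0.2, 0.5, 0.3]
Q = [0.4, 0.4, 0.2]

kl = sum(p * math.log(p / q) for p, q in zip(P, Q))  # D(P||Q) in nats

def E_gamma(gamma):
    # E_gamma(P||Q): total positive part of (p - gamma * q).
    return sum(max(p - gamma * q, 0.0) for p, q in zip(P, Q))

def f_star(t):
    # Conjugate of f(t) = t log t (for which D_f = relative entropy):
    # f*(t) = t f(1/t) = -log t.
    return -math.log(t)

def lower_bound(gamma):
    # Right side of the inequality in Theorem "f-div ineq".
    E = E_gamma(gamma)
    return f_star(1 + E / gamma) + f_star((1 - E) / gamma) - f_star(1 / gamma)

bounds = {g: lower_bound(g) for g in (1.0, 1.1, 1.2, 2.0)}
```

For this example $D(P\|Q) \approx 0.0946$ nats, and the bound at $\gamma=1$ evaluates to about $0.0408$; the inequality holds for every $\gamma$ tested.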
An application of Theorem~\ref{theorem: f-div ineq} gives the following lower bounds
on the Hellinger and R\'enyi divergences with arbitrary positive orders, expressed
as a function of the $E_\gamma$ divergence with an arbitrary order $\gamma \geq 1$.
\begin{corollary} \label{cor: Hel RD}
For all $\alpha > 0$ and $\gamma \geq 1$,
\begin{align} \label{eq: cor - Hel}
\mathscr{H}_{\alpha}(P \| Q) \geq
\begin{dcases}
\frac1{\alpha-1} \left[ \left(1 + \frac1\gamma \, E_\gamma(P\|Q) \right)^{1-\alpha}
+ \left( \frac{1-E_\gamma(P\|Q)}{\gamma} \right)^{1-\alpha} - 1 -
\gamma^{\alpha-1} \right], & \quad \alpha \neq 1 \\[0.2cm]
-\log_{\mathrm{e}} \Biggl( \left(1 + \frac1\gamma \, E_\gamma(P\|Q) \right)
\bigl(1-E_\gamma(P\|Q)\bigr) \Biggr), &\quad \alpha=1,
\end{dcases}
\end{align}
and
\begin{align} \label{eq: cor - RD}
D_{\alpha}(P \| Q) \geq
\begin{dcases}
\frac1{\alpha-1} \log \Biggl( \left(1 + \frac1\gamma \, E_\gamma(P\|Q) \right)^{1-\alpha}
+ \gamma^{\alpha-1} \left[ \bigl(1-E_\gamma(P\|Q) \bigr)^{1-\alpha} - 1 \right] \Biggr),
& \quad \alpha \neq 1 \\[0.2cm]
-\log \Biggl( \left(1 + \frac1\gamma \, E_\gamma(P\|Q) \right) \bigl(1-E_\gamma(P\|Q)\bigr)
\Biggr), &\quad \alpha=1.
\end{dcases}
\end{align}
\end{corollary}
\begin{proof}
Inequality~\eqref{eq: cor - Hel}, for $\alpha \in (0,1) \cup (1, \infty)$, follows from
Theorem~\ref{theorem: f-div ineq} and \eqref{eq: Hel-divergence}; for $\alpha=1$,
it holds in view of Theorem~\ref{theorem: f-div ineq}, and equalities \eqref{eq: KL divergence}
and \eqref{eq1: KL}. Inequality~\eqref{eq: cor - RD}, for $\alpha \in (0,1) \cup (1, \infty)$,
follows from \eqref{renyimeetshellinger} and \eqref{eq: cor - Hel}; for $\alpha=1$,
it holds in view of \eqref{eq1: KL}, \eqref{eq: cor - Hel} and since $D_1(P\|Q)=D(P\|Q)$.
\end{proof}
\vspace*{0.1cm}
Specialization of Corollary~\ref{cor: Hel RD} for $\alpha=2$ in \eqref{eq: cor - Hel} and $\alpha=1$
in \eqref{eq: cor - RD} gives the following result.
\begin{corollary} \label{corollary 3}
For $\gamma \in [1, \infty)$, the following upper bounds on the $E_\gamma$ divergence hold as a function
of the relative entropy and $\chi^2$ divergence:
\begin{align}
\label{eq: chi^2-EG}
& E_\gamma(P \| Q) \leq \tfrac12 \left[ 1-\gamma + \sqrt{(\gamma-1)^2
+ \frac{4 \gamma \, \chi^2(P \| Q)}{1 + \gamma + \chi^2(P \| Q)}} \; \right], \\[0.2cm]
\label{eq: generalized BH}
& E_\gamma(P \| Q) \leq \tfrac12 \left[ 1-\gamma + \sqrt{(\gamma-1)^2
+ 4 \gamma \bigl( 1 -\exp(-D(P\|Q)) \bigr)} \right].
\end{align}
\end{corollary}
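The bounds in Corollary~\ref{corollary 3} are easy to verify numerically; the Python sketch below (arbitrary discrete distributions, natural logarithms) checks \eqref{eq: chi^2-EG} and \eqref{eq: generalized BH} for several values of $\gamma$:

```python
import math

P = [0.2, 0.5, 0.3]
Q = [0.4, 0.4, 0.2]

kl   = sum(p * math.log(p / q) for p, q in zip(P, Q))   # D(P||Q) in nats
chi2 = sum((p - q) ** 2 / q for p, q in zip(P, Q))      # chi^2(P||Q)

def E_gamma(gamma):
    # E_gamma(P||Q): total positive part of (p - gamma * q).
    return sum(max(p - gamma * q, 0.0) for p, q in zip(P, Q))

def ub_chi2(gamma):
    # Right side of the chi-squared-based bound on E_gamma.
    return 0.5 * (1 - gamma + math.sqrt((gamma - 1) ** 2
                  + 4 * gamma * chi2 / (1 + gamma + chi2)))

def ub_kl(gamma):
    # Right side of the generalized Bretagnolle-Huber bound on E_gamma.
    return 0.5 * (1 - gamma + math.sqrt((gamma - 1) ** 2
                  + 4 * gamma * (1 - math.exp(-kl))))
```

For this example $\chi^2(P\|Q) = 0.175$ and, e.g., $E_1(P\|Q) = 0.2$ is upper bounded by roughly $0.284$ and $0.300$ by the two bounds, respectively.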
\begin{remark}
From \cite[(58)]{ReidW11},
\begin{align} \label{eq: lb chi-square - TV}
\chi^2(P\|Q) \geq
\begin{dcases}
|P-Q|^2, & \quad \mbox{$|P-Q| \in \bigl[0, 1)$} \\[0.2cm]
\frac{|P-Q|}{2-|P-Q|}, & \quad \mbox{$|P-Q| \in \bigl[1, 2)$}
\end{dcases}
\end{align}
is a tight lower bound on the chi-squared divergence as a function of the
total variation distance. In view of \eqref{eq:EG-TV}, we compare
\eqref{eq: lb chi-square - TV} with the specialized version of
\eqref{eq: chi^2-EG} when $\gamma=1$. The latter bound is expected to be
looser than the tight bound in \eqref{eq: lb chi-square - TV}, as a result of
the use of Jensen's inequality in the proof of Theorem~\ref{theorem: f-div ineq};
however, it is interesting to examine how much we lose in the tightness of
this specialized bound with $\gamma=1$. From \eqref{eq:EG-TV}, the substitution
of $\gamma=1$ in \eqref{eq: chi^2-EG} gives
\begin{align} \label{eq2: LB Chi^2-TV}
\chi^2(P\|Q) \geq \frac{2 |P-Q|^2}{4-|P-Q|^2}, & \qquad |P-Q| \in [0,2),
\end{align}
and, it can be easily verified that
\begin{itemize}
\item if $|P-Q| \in [0,1)$, then the lower bound in the right side of \eqref{eq2: LB Chi^2-TV} is
smaller than the tight lower bound in the right side of \eqref{eq: lb chi-square - TV}
by a factor of at most~2;
\item if $|P-Q| \in [1,2)$, then the lower bound in the right side of \eqref{eq2: LB Chi^2-TV} is
smaller than the tight lower bound in the right side of \eqref{eq: lb chi-square - TV}
by a factor of at most~$\tfrac{3}{2}$.
\end{itemize}
\end{remark}
\begin{remark}
Setting $\gamma=1$ in \eqref{eq: generalized BH}, and using \eqref{eq:EG-TV},
recovers the Bretagnolle-Huber inequality \cite{BretagnolleH79}:
\begin{align} \label{eq: BretagnolleH79}
|P-Q| \leq 2 \sqrt{1- \exp\bigl(-D(P\|Q)\bigr)}.
\end{align}
\end{remark}
\par
Inequality \eqref{eq: BretagnolleH79} forms a counterpart to Pinsker's inequality:
\begin{align} \label{eq: Pinsker}
\tfrac12 |P-Q|^2 \log e \leq D(P \| Q),
\end{align}
proved by Csisz\'{a}r \cite{Csiszar67a} and Kullback \cite{kullbackTV67}, and independently,
a bit later, by Kemperman \cite{kemperman}. As upper bounds on the total variation distance,
\eqref{eq: Pinsker} outperforms \eqref{eq: BretagnolleH79} if $D(P\|Q) \leq 1.594$ nats,
and \eqref{eq: BretagnolleH79} outperforms \eqref{eq: Pinsker} for larger values of
$D(P\|Q)$.
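The threshold of 1.594 nats can be reproduced numerically: in nats, the two upper bounds on $|P-Q|$ are $\sqrt{2D}$ (Pinsker) and $2\sqrt{1-\exp(-D)}$ (Bretagnolle-Huber), which coincide at the positive root of $g(D) = D - 2 + 2\exp(-D)$. A short bisection in Python:

```python
import math

# Pinsker (nats): |P-Q| <= sqrt(2 D);  Bretagnolle-Huber: |P-Q| <= 2 sqrt(1 - exp(-D)).
# They coincide where 2 D = 4 (1 - exp(-D)), i.e., at the positive root of g below.
def g(d):
    return d - 2 + 2 * math.exp(-d)

lo, hi = 0.5, 3.0   # g(0.5) < 0 < g(3), with a single sign change in between
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
crossover = 0.5 * (lo + hi)   # approximately 1.594 nats
```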
\begin{remark}
In \cite[(8)]{Vajda70}, Vajda introduced a lower bound on the relative entropy
as a function of the total variation distance:
\begin{align} \label{eq: Vajda's LB}
D(P\|Q) \geq \log\left(\frac{2+|P-Q|}{2-|P-Q|}\right)-\frac{2|P-Q| \, \log e}{2+|P-Q|}, \quad |P-Q| \in [0,2).
\end{align}
The lower bound in the right side of \eqref{eq: Vajda's LB} is asymptotically tight
in the sense that it tends to $\infty$ if $|P-Q| \uparrow 2$, and the difference between
$D(P\|Q)$ and this lower bound is everywhere upper bounded
by $\frac{2|P-Q|^3}{(2+|P-Q|)^2} \leq 4$ (see \cite[(9)]{Vajda70}). The Bretagnolle-Huber
inequality in \eqref{eq: BretagnolleH79}, on the other hand, is equivalent to
\begin{align} \label{eq2: BretagnolleH79}
D(P\|Q) \geq -\log\left(1 - \tfrac14 |P-Q|^2 \right), \quad |P-Q| \in [0,2).
\end{align}
Although it can be verified numerically that the lower bound on the relative entropy
in \eqref{eq: Vajda's LB} is everywhere slightly tighter than the lower bound in \eqref{eq2: BretagnolleH79}
(for $|P-Q| \in [0,2)$), both lower bounds on $D(P\|Q)$ are of the same asymptotic tightness
in the sense that they both tend to $\infty$ as $|P-Q| \uparrow 2$ and their ratio tends to~1.
Apart from their asymptotic tightness, the Bretagnolle-Huber inequality in \eqref{eq2: BretagnolleH79}
is appealing since it provides a simple closed-form upper bound on $|P-Q|$ as a function of $D(P\|Q)$
(see \eqref{eq: BretagnolleH79}), whereas such a simple closed-form upper bound cannot be obtained from
\eqref{eq: Vajda's LB}.
In fact, by the substitution $v := -\frac{2-|P-Q|}{2+|P-Q|}$ and the exponentiation of both sides
of \eqref{eq: Vajda's LB}, we get the inequality $v e^v \geq -\tfrac1e \, \exp\bigl(-D(P\|Q)\bigr)$
whose solution is expressed by the Lambert $W$ function \cite{Corless96};
it can be verified that \eqref{eq: Vajda's LB} is equivalent to the following upper bound
on the total variation distance as a function of the relative entropy:
\begin{align}
\label{eq: UB igal}
& |P-Q| \leq \frac{2 \bigl(1+W(z)\bigr)}{1-W(z)}, \\[0.1cm]
\label{eq2: UB igal}
& z := -\tfrac1{e} \, \exp\bigl(-D(P\|Q)\bigr),
\end{align}
where $W$ in the right side of \eqref{eq: UB igal} denotes the principal real branch of the Lambert $W$ function.
The difference between the upper bounds in \eqref{eq: BretagnolleH79} and \eqref{eq: UB igal}
can be verified to be marginal if $D(P\|Q)$ is large (e.g., if $D(P\|Q)=4$ nats, then the upper bounds
on $|P-Q|$ are respectively equal to 1.982 and 1.973), though the former upper bound in \eqref{eq: BretagnolleH79}
is clearly simpler and more amenable to analysis.
The Bretagnolle-Huber inequality in \eqref{eq: BretagnolleH79} has proved useful in the context of lower
bounding the minimax risk (see, e.g., \cite[pp.~89--90, 94]{Tsybakov09}), and the problem of density estimation
(see, e.g., \cite[Section~1.6]{Vapnik98}). The utility of this inequality motivates its generalization in this section
(see Corollaries~\ref{cor: Hel RD} and~\ref{corollary 3}, and also see later Theorem~\ref{theorem: DG UBs} followed by
Example~\ref{example: Poisson}).
\end{remark}
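The two numerical values quoted above (for $D(P\|Q)=4$ nats) can be reproduced with a short Python sketch; the principal real branch of the Lambert $W$ function is computed here by Newton's method, which is adequate since the argument is small in magnitude:

```python
import math

def lambert_w0(z):
    # Principal real branch of the Lambert W function via Newton's method
    # on w * exp(w) = z (adequate here: z is small and z > -1/e).
    w = z
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1 + w))
    return w

D = 4.0                                  # relative entropy in nats
bh = 2 * math.sqrt(1 - math.exp(-D))     # Bretagnolle-Huber upper bound on |P-Q|
z = -math.exp(-D) / math.e               # argument of W in the Vajda-based bound
w = lambert_w0(z)
vajda = 2 * (1 + w) / (1 - w)            # upper bound obtained from Vajda's inequality
```

Rounded to three decimals, the two bounds evaluate to 1.982 and 1.973, respectively.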
\vspace*{0.2cm}
In \cite[Section~7.C]{ISSV16}, Sason and Verd\'{u} generalized Pinsker's inequality
by providing an upper bound on the $E_\gamma$ divergence, for $\gamma>1$, as a function
of the relative entropy. In view of \eqref{eq:EG-TV} and the optimality of the constant in
Pinsker's inequality \eqref{eq: Pinsker}, it follows
that the minimum achievable $D(P\|Q)$ is quadratic in $E_1(P\|Q)$ for small values of $E_1 (P\|Q)$.
It has been proved in \cite[Section~7.C]{ISSV16} that this situation ceases to be the case for
$\gamma > 1$, in which case it is possible to upper bound $E_\gamma(P\|Q)$ as a constant times
$D(P\|Q)$ where this constant tends to infinity as we let $\gamma \downarrow 1$.
We next cite the result in \cite[Theorem~30]{ISSV16}, extending \eqref{eq: Pinsker}
by means of the $E_\gamma$ divergence for $\gamma>1$, and compare it numerically to
the bound in \eqref{eq: generalized BH}.
\begin{theorem} (\cite[Theorem~30]{ISSV16})
\label{thm:EG vs. RE}
For every $\gamma > 1$,
\begin{align} \label{eq:sup-EG and RE}
\sup \frac{E_{\gamma}(P\|Q)}{D(P\|Q)} = c_{\gamma}
\end{align}
where the supremum is over $P \ll Q, P \neq Q$, and $c_{\gamma}$ is a universal
function (independent of $(P, Q)$), given by
\begin{align}
\label{eq: c_gamma}
& c_{\gamma} = \frac{t_\gamma-\gamma}{t_\gamma \, \log t_\gamma + (1-t_\gamma) \, \log e},
\\[0.1cm]
\label{eq: t_gamma}
& t_\gamma = - \gamma \, W_{-1}\left(-\tfrac1{\gamma} \, e^{-\frac1{\gamma}} \right)
\end{align}
where $W_{-1}$ in \eqref{eq: t_gamma} denotes the secondary real branch of the
Lambert $W$ function \cite{Corless96}.
\end{theorem}
As an immediate consequence of \eqref{eq:sup-EG and RE}, it follows that
\begin{align} \label{eq:SLB-EG-RE}
E_{\gamma}(P\|Q) \leq c_{\gamma} D(P\|Q),
\end{align}
which forms a straight-line bound on the $E_{\gamma}$ divergence as a function of
the relative entropy for $\gamma>1$.
Similarly to the comparison of the Bretagnolle-Huber inequality \eqref{eq: BretagnolleH79} and
Pinsker's inequality \eqref{eq: Pinsker}, we exemplify numerically that the extension of
Pinsker's inequality to the $E_\gamma$ divergence in \eqref{eq:SLB-EG-RE} forms a counterpart
to the generalized version of the Bretagnolle-Huber inequality in \eqref{eq: generalized BH}.
\begin{figure}[h]
\vspace*{-4.5cm}
\centerline{\includegraphics[width=12cm]{E_gamma_RE.pdf}}
\vspace*{-3.6cm}
\caption{\label{figure:RE-EG}
Upper bounds on the $E_\gamma$ divergence, for $\gamma > 1$, as a function of the relative entropy
(the curvy and straight lines follow from \eqref{eq: generalized BH} and \eqref{eq:SLB-EG-RE}, respectively).}
\end{figure}
Figure~\ref{figure:RE-EG} plots an upper bound on the $E_\gamma$ divergence,
for $\gamma \in \{1.1, 2.0, 3.0, 4.0\}$, as a function of the relative entropy
(or, alternatively, a lower bound on the relative entropy as a function of the
$E_\gamma$ divergence).
The upper bound on $E_\gamma(P\|Q)$ for $\gamma > 1$, as a function of $D(P\|Q)$,
is composed of the following two components:
\begin{enumerate}[a)]
\item the straight-line bound, which refers to the right side of \eqref{eq:SLB-EG-RE}, is tighter than
the bound in the right side of \eqref{eq: generalized BH} if the relative entropy is below a certain value
that is denoted by $d(\gamma)$ in nats (it depends on $\gamma$);
\item the curvy line, which refers to the bound in the right side of \eqref{eq: generalized BH},
is tighter than the straight-line bound in the right side of \eqref{eq:SLB-EG-RE} for larger values
of the relative entropy.
\end{enumerate}
Figure~\ref{figure:RE-EG} supports the observation that $d \colon (1, \infty) \to (0, \infty)$ is positive
and monotonically increasing, with $\underset{\gamma \downarrow 1}{\lim} \, d(\gamma)=0$; e.g., it
can be verified that $d(1.1) \approx 0.02$, $d(2) \approx 0.86$, $d(3) \approx 1.61$, and
$d(4) \approx 2.10$ (see Figure~\ref{figure:RE-EG}).
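The value $d(2) \approx 0.86$, for instance, can be reproduced numerically; the Python sketch below computes $c_2$ from \eqref{eq: c_gamma}--\eqref{eq: t_gamma} (with $W_{-1}$ evaluated by bisection, and natural logarithms so that $\log e = 1$) and locates the crossover of the two bounds:

```python
import math

def lambert_wm1(x):
    # Secondary real branch W_{-1} on (-1/e, 0): bisection on w <= -1,
    # where w * exp(w) decreases from 0^- to -1/e as w increases to -1.
    lo, hi = -50.0, -1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def c_gamma(g):
    # Constant of the straight-line bound (natural logarithms).
    t = -g * lambert_wm1(-math.exp(-1.0 / g) / g)
    return (t - g) / (t * math.log(t) + 1 - t)

def curvy(d, g):
    # Generalized Bretagnolle-Huber upper bound on E_gamma as a function of D.
    return 0.5 * (1 - g + math.sqrt((g - 1) ** 2 + 4 * g * (1 - math.exp(-d))))

g = 2.0
c = c_gamma(g)                 # approximately 0.796
lo, hi = 0.1, 2.0              # the difference changes sign once on this interval
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if c * mid - curvy(mid, g) < 0:
        lo = mid
    else:
        hi = mid
d2 = 0.5 * (lo + hi)           # crossover d(2), approximately 0.86 nats
```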
\subsection*{Bayesian Binary Hypothesis Testing}
The DeGroot statistical information \cite{DeGroot62} has the following meaning:
consider two hypotheses $H_{\mathtt{0}}$ and $H_{\mathtt{1}}$, and let
$\ensuremath{\mathbb{P}}[H_{\mathtt{0}}]=\omega$ and $\ensuremath{\mathbb{P}}[H_{\mathtt{1}}]=1-\omega$ with $\omega \in (0,1)$.
Let $P$ and $Q$ be probability measures, and consider an observation $Y$ where
$Y | H_{\mathtt{0}} \sim P$, and $Y | H_{\mathtt{1}} \sim Q$. Suppose that one wishes
to decide which hypothesis is more likely given the observation $Y$. The operational meaning
of the DeGroot statistical information, denoted by $\ensuremath{\mathcal}{I}_\omega(P\|Q)$, is that this
measure is equal to the minimal difference between the {\em a-priori} error probability
(without side information) and the {\em a-posteriori} error probability (given the observation $Y$).
This measure was later identified as an $f$-divergence by Liese and Vajda \cite{LieseV_IT2006}
(see \eqref{eq:DG f-div} here).
\begin{theorem} \label{theorem: DG UBs}
The DeGroot statistical information satisfies the following upper bound as a function of the
chi-squared divergence:
\begin{align}
\label{eq: DG-chi^2 UB}
\mathcal{I}_\omega(P\|Q) \leq
\begin{dcases}
\omega - \tfrac12 + \sqrt{ \tfrac14 - \frac{\omega (1-\omega)}{1+\omega \, \chi^2(P\|Q)}} \, ,
& \quad \omega \in \bigl(0, \tfrac12 \bigr], \\[0.2cm]
\tfrac12 - \omega + \sqrt{ \tfrac14 - \frac{\omega (1-\omega)}{1+\omega \, \chi^2(Q\|P)}} \, ,
& \quad \omega \in \bigl(\tfrac12, 1\bigr),
\end{dcases}
\end{align}
and the following bounds as a function of the relative entropy:
\begin{enumerate}[1)]
\item
\begin{align}
\label{eq1: DG-KL UB}
\mathcal{I}_\omega(P\|Q) \leq
\begin{dcases}
\omega \, c_{\frac{1-\omega}{\omega}} \, D(P\|Q)\, ,
& \quad \omega \in \bigl(0, \tfrac12\bigr), \\[0.2cm]
\sqrt{ \tfrac1{8 \log e} \, \min\bigl\{ D(P\|Q), D(Q\|P) \bigr\}} \, , & \quad \omega = \tfrac12, \\[0.2cm]
(1-\omega) \, c_{\frac{\omega}{1-\omega}} \, D(Q\|P) \, ,
& \quad \omega \in \bigl(\tfrac12, 1\bigr),
\end{dcases}
\end{align}
where $c_\gamma$ for $\gamma > 1$ is introduced in \eqref{eq: c_gamma};
\item
\begin{align}
\label{eq2: DG-KL UB}
\mathcal{I}_\omega(P\|Q) \leq
\begin{dcases}
\omega - \tfrac12 + \sqrt{\tfrac14 - \omega (1-\omega) \, \exp\bigl(-D(P\|Q)\bigr)}\, ,
& \quad \omega \in \bigl(0, \tfrac12\bigr], \\[0.2cm]
\tfrac12 - \omega + \sqrt{\tfrac14 - \omega (1-\omega) \, \exp\bigl(-D(Q\|P)\bigr)}\, ,
& \quad \omega \in \bigl(\tfrac12, 1\bigr).
\end{dcases}
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
The first bound in \eqref{eq: DG-chi^2 UB} holds by combining \eqref{eq: DG-EG} and \eqref{eq: chi^2-EG};
the second bound in \eqref{eq1: DG-KL UB} follows from \eqref{eq:SLB-EG-RE} and \eqref{eq: DG-EG} for
$\omega \in \bigl(0, \tfrac12\bigr) \cup \bigl(\tfrac12, 1\bigr)$, and it follows from
\eqref{eq:DG-TV} and \eqref{eq: Pinsker} when $\omega = \tfrac12$; finally,
the third bound in \eqref{eq2: DG-KL UB} follows from \eqref{eq: generalized BH} and \eqref{eq: DG-EG}.
\end{proof}
\begin{remark}
The bound in \eqref{eq1: DG-KL UB} forms an extension of Pinsker's inequality \eqref{eq: Pinsker}
when $\omega \neq \tfrac12$ (i.e., in the asymmetric case where the hypotheses $H_{\mathtt{0}}$ and
$H_{\mathtt{1}}$ are not equally probable).
Furthermore, in view of \eqref{eq:DG-TV}, the bound in \eqref{eq2: DG-KL UB} is specialized
to the Bretagnole-Huber inequality in \eqref{eq: BretagnolleH79} by letting $\omega = \tfrac12$.
\end{remark}
\begin{remark}
Numerical evidence shows that none of the bounds in \eqref{eq: DG-chi^2 UB}--\eqref{eq2: DG-KL UB}
supersedes the others.
\end{remark}
\begin{remark}
The upper bounds on $\ensuremath{\mathcal}{I}_\omega(P \| Q)$ in \eqref{eq: DG-chi^2 UB} and
\eqref{eq2: DG-KL UB} are asymptotically tight when we let $D(P\|Q)$ and $D(Q\|P)$ tend
to infinity. To verify this, first note that (see \cite[Theorem~5]{GibbsSu02})
\begin{align} \label{grout425 - introduction}
D ( P \| Q) &\leq \log \bigl( 1 + \chi^2(P\| Q) \bigr),
\end{align}
which implies that also $\chi^2(P\| Q)$ and $\chi^2(Q\| P)$ tend to infinity. In this case,
it can be readily verified that the bounds in \eqref{eq: DG-chi^2 UB} and \eqref{eq2: DG-KL UB} are
specialized to $\ensuremath{\mathcal}{I}_\omega(P\|Q) \leq \min\{\omega, 1-\omega\}$; this upper bound, which
is equal to the {\em a-priori} error probability, coincides in the limit with the DeGroot
statistical information since the {\em a-posteriori} error probability tends to zero in the considered
extreme case where $P$ and $Q$ are sufficiently far from each other, so that $H_{\mathtt{0}}$
and $H_{\mathtt{1}}$ are distinguishable with high probability when the observation $Y$
is available.
\end{remark}
\begin{remark}
Due to the one-to-one correspondence between the $E_\gamma$ divergence and DeGroot statistical
information in \eqref{eq: DG-EG}, which shows that the two measures are related by a multiplicative
scaling factor, the numerical results shown in Figure~\ref{figure:RE-EG} also apply to the bounds
in \eqref{eq1: DG-KL UB} and \eqref{eq2: DG-KL UB}; i.e., for $\omega \neq \tfrac12$, the first bound
in \eqref{eq1: DG-KL UB} is tighter than the second bound in \eqref{eq2: DG-KL UB} for small values
of the relative entropy, whereas \eqref{eq2: DG-KL UB} becomes tighter than \eqref{eq1: DG-KL UB}
for larger values of the relative entropy.
\end{remark}
\begin{corollary} \label{corolary: DG}
Let $f \in \ensuremath{\mathcal}{C}$, and let $f^\ast \in \ensuremath{\mathcal}{C}$ be as defined in \eqref{eq: conjugate f}.
Then,
\begin{enumerate}[1)]
\item
for $w \in \bigl(0, \tfrac12\bigr]$,
\begin{align} \label{eq1: ineq. with DG}
D_f(P\|Q) \geq f^\ast\left( 1 + \frac{\ensuremath{\mathcal}{I}_w(P \| Q)}{1-w} \right)
+ f^\ast\left( \frac{w - \ensuremath{\mathcal}{I}_w(P \| Q)}{1-w} \right)
- f^\ast\left(\frac{w}{1-w}\right);
\end{align}
\item
for $w \in \bigl(\tfrac12, 1 \bigr)$,
\begin{align} \label{eq2: ineq. with DG}
D_f(P\|Q) \geq f^\ast\left( 1 + \frac{\ensuremath{\mathcal}{I}_w(Q \| P)}{w} \right)
+ f^\ast\left( \frac{1-w - \ensuremath{\mathcal}{I}_w(Q \| P)}{w} \right)
- f^\ast\left(\frac{1-w}{w}\right).
\end{align}
\end{enumerate}
\end{corollary}
\begin{proof}
Inequalities~\eqref{eq1: ineq. with DG} and \eqref{eq2: ineq. with DG} follow by combining \eqref{eq: f-div ineq} and \eqref{eq: DG-EG}.
\end{proof}
We end this section by exemplifying the utility of the bounds in Theorem~\ref{theorem: DG UBs}.
\begin{example} \label{example: Poisson}
Let $\ensuremath{\mathbb{P}}[H_{\mathtt{0}}]=\omega$ and $\ensuremath{\mathbb{P}}[H_{\mathtt{1}}]=1-\omega$ with $\omega \in (0,1)$,
and assume that the observation $Y$ given that the hypothesis is $H_{\mathtt{0}}$ or
$H_{\mathtt{1}}$ is Poisson distributed with the positive parameter $\mu$ or $\lambda$, respectively:
\begin{align}
& Y | H_{\mathtt{0}} \sim P_\mu, \\
& Y | H_{\mathtt{1}} \sim P_\lambda
\end{align}
where
\begin{align}
P_\lambda[k] = \frac{e^{-\lambda} \lambda^k}{k!}, \quad k \in \{0, 1, \ldots\}.
\end{align}
Without any loss of generality, let $\omega \in \bigl(0, \tfrac12\bigr]$.
The bounds on the DeGroot statistical information $\ensuremath{\mathcal}{I}_\omega(P_\mu \| P_\lambda)$
in Theorem~\ref{theorem: DG UBs} can be expressed in a closed form by relying on the
following identities:
\begin{align}
& D(P_\mu \| P_\lambda) = \mu \log\Bigl(\frac{\mu}{\lambda}\Bigr) + (\lambda-\mu) \log e, \\
& \chi^2(P_\mu \| P_\lambda) = e^{\frac{(\mu-\lambda)^2}{\lambda}} - 1.
\end{align}
In this example, we compare the simple closed-form bounds on $\ensuremath{\mathcal}{I}_\omega(P_\mu \| P_\lambda)$
in \eqref{eq: DG-chi^2 UB}--\eqref{eq2: DG-KL UB} with its exact value
\begin{align} \label{eq1: exact DG}
\ensuremath{\mathcal}{I}_\omega(P_\mu \| P_\lambda) &= \min\{\omega, 1-\omega\} - \sum_{k=0}^{\infty} \min \Bigl\{ \omega P_\mu[k], (1-\omega) P_\lambda[k] \Bigr\}.
\end{align}
To simplify the right side of \eqref{eq1: exact DG}, let $\mu > \lambda$, and define
\begin{align} \label{eq: k_0}
k_0 = k_0(\lambda, \mu, \omega) := \left\lfloor \frac{\mu-\lambda
+ \ln \frac{1-\omega}{\omega}}{\ln \frac{\mu}{\lambda}} \right\rfloor,
\end{align}
where, for $x \in \ensuremath{\mathbb{R}}$, $\lfloor x \rfloor$ denotes the largest integer
that is smaller than or equal to $x$. It can be verified that
\begin{align}
\begin{dcases} \label{eq: compare PMFs}
\omega P_\mu[k] \leq (1-\omega) P_\lambda[k], & \quad \mbox{for $k \leq k_0$}\\
\omega P_\mu[k] > (1-\omega) P_\lambda[k], & \quad \mbox{for $k > k_0$.}
\end{dcases}
\end{align}
Hence, from \eqref{eq1: exact DG}--\eqref{eq: compare PMFs},
\begin{align} \label{eq2: exact DG}
\ensuremath{\mathcal}{I}_\omega(P_\mu \| P_\lambda) &= \min\{\omega, 1-\omega\}
- \omega \sum_{k=0}^{k_0} P_\mu[k] - (1-\omega) \sum_{k=k_0+1}^{\infty} P_\lambda[k] \\
\label{eq3: exact DG}
&= \min\{\omega, 1-\omega\} - \omega \sum_{k=0}^{k_0} P_\mu[k]
- (1-\omega) \left(1- \sum_{k=0}^{k_0} P_\lambda[k] \right).
\end{align}
To exemplify the utility of the bounds in Theorem~\ref{theorem: DG UBs}, suppose that
$\mu$ and $\lambda$ are close, and we wish to obtain a guarantee on how small
$\ensuremath{\mathcal}{I}_\omega(P_\mu \| P_\lambda)$ is. For example, let $\lambda = 99$, $\mu = 101$,
and $\omega = \tfrac1{10}$. The upper bounds on $\ensuremath{\mathcal}{I}_\omega(P_\mu \| P_\lambda)$ in
\eqref{eq: DG-chi^2 UB}--\eqref{eq2: DG-KL UB} are, respectively, equal to $4.6 \cdot 10^{-4}$,
$5.8 \cdot 10^{-4}$ and $2.2 \cdot 10^{-3}$; we therefore get an informative guarantee by
easily calculable bounds. The exact value of $\ensuremath{\mathcal}{I}_\omega(P_\mu \| P_\lambda)$ is, on
the other hand, hard to compute since $k_0 = 209$ (see \eqref{eq: k_0}), and the calculation
of the right side of \eqref{eq3: exact DG} appears to be sensitive to the selected parameters in this setting.
\end{example}
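The numbers in this example can be reproduced with a few lines of Python (natural logarithms; the second bound in \eqref{eq1: DG-KL UB}, which requires $c_\gamma$, is omitted from this sketch):

```python
import math

lam, mu, w = 99.0, 101.0, 0.1

# Threshold k_0 from the closed-form expression.
k0 = math.floor((mu - lam + math.log((1 - w) / w)) / math.log(mu / lam))

# Closed-form divergences between the two Poisson distributions.
D    = mu * math.log(mu / lam) + (lam - mu)     # D(P_mu || P_lam) in nats
chi2 = math.exp((mu - lam) ** 2 / lam) - 1      # chi^2(P_mu || P_lam)

# Upper bounds on the DeGroot statistical information I_w(P_mu || P_lam),
# for w in (0, 1/2]: the chi-squared-based and the Bretagnolle-Huber-type bounds.
b_chi2 = w - 0.5 + math.sqrt(0.25 - w * (1 - w) / (1 + w * chi2))
b_bh   = w - 0.5 + math.sqrt(0.25 - w * (1 - w) * math.exp(-D))
```

This gives $k_0 = 209$, $b_{\chi^2} \approx 4.6 \cdot 10^{-4}$ and $b_{\mathrm{BH}} \approx 2.2 \cdot 10^{-3}$, matching the values quoted above.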
\section{Local Behavior of $f$-divergences}
\label{section: local behavior}
This section is focused on the local behavior of $f$-divergences;
the starting point relies on \cite[Section~3]{PardoV03} which studies
the asymptotic properties of $f$-divergences. The reader is also
referred to a related study in \cite[Section~4.F]{ISSV16}.
\begin{lemma} \label{lemma: tilted}
Let
\begin{itemize}
\item
$\{P_n\}$ be a sequence of probability measures on a measurable space $(\mathcal{A}, \mathscr{F})$;
\item
the sequence $\{P_n\}$ converge to a probability measure $Q$ in the sense that
\begin{align}
\label{eq: 1st condition}
\lim_{n \to \infty} \text{ess\,sup} \frac{\text{d}P_n}{\text{d}Q} \, (Y) = 1, \quad Y \sim Q
\end{align}
where $P_n \ll Q$ for all sufficiently large $n$;
\item $f, g \in \mathcal{C}$ have continuous second derivatives
at~1 and $g''(1) > 0$.
\end{itemize}
Then
\begin{align} \label{eq: limit of ratio of f-div}
\lim_{n \to \infty} \frac{D_f(P_n \| Q)}{D_g(P_n \| Q)} = \frac{f''(1)}{g''(1)}.
\end{align}
\end{lemma}
\begin{proof}
This follows from \cite[Theorem~3]{PardoV03}, even
without the additional restriction in \cite[Section~3]{PardoV03} which would require
that the second derivatives of $f$ and $g$ be locally Lipschitz in a neighborhood of~1.
More explicitly, in view of the analysis in \cite[p.~1863]{PardoV03}, we get by relaxing
the latter restriction that (cf. \cite[(31)]{PardoV03})
\begin{align}
& \bigl| D_f(P_n \| Q) - \tfrac12 \, f''(1) \, \chi^2(P_n \| Q) \bigr| \nonumber \\
\label{eq1: PardoV}
& \leq \tfrac12 \, \sup_{y \in [1-\varepsilon_n, \, 1+\varepsilon_n]} \bigl| f''(y)-f''(1) \bigr| \; \chi^2(P_n \| Q),
\end{align}
with $\varepsilon_n \downarrow 0$ as we let $n \to \infty$, and also
\begin{align} \label{eq2: PardoV}
\lim_{n \to \infty} \chi^2(P_n \| Q) = 0.
\end{align}
By our assumption, due to the continuity of $f''$ and $g''$ at~1, it follows from \eqref{eq1: PardoV}
and \eqref{eq2: PardoV} that
\begin{align}
\label{eq3: PardoV}
\lim_{n \to \infty} \frac{D_f(P_n \| Q)}{\chi^2(P_n \| Q)} = \tfrac12 \, f''(1), \\[0.2cm]
\label{eq4: PardoV4}
\lim_{n \to \infty} \frac{D_g(P_n \| Q)}{\chi^2(P_n \| Q)} = \tfrac12 \, g''(1),
\end{align}
which yields \eqref{eq: limit of ratio of f-div} (recall that, by assumption, $g''(1)>0$).
\end{proof}
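Lemma~\ref{lemma: tilted} can be illustrated numerically with $f(t) = t\log t$ and $g(t) = (t-1)^2$, for which $\frac{f''(1)}{g''(1)} = \tfrac12$; the sketch below (arbitrary discrete distributions) takes $P_n$ to be mixtures approaching $Q$, so that the condition \eqref{eq: 1st condition} holds:

```python
import math

P = [0.2, 0.5, 0.3]
Q = [0.4, 0.4, 0.2]

def kl(R, S):
    # Relative entropy D(R||S) in nats.
    return sum(r * math.log(r / s) for r, s in zip(R, S) if r > 0)

def chi2(R, S):
    # Chi-squared divergence chi^2(R||S).
    return sum((r - s) ** 2 / s for r, s in zip(R, S))

# Ratio D(P_n||Q) / chi^2(P_n||Q) for P_n = lam*P + (1-lam)*Q with lam -> 0;
# it should approach f''(1)/g''(1) * (1/2)/(1) ... i.e., 1/2 for f = t log t, g = (t-1)^2.
ratios = []
for lam in (1e-1, 1e-2, 1e-3):
    Pn = [lam * p + (1 - lam) * q for p, q in zip(P, Q)]
    ratios.append(kl(Pn, Q) / chi2(Pn, Q))
```

The ratios approach $\tfrac12$ as the mixture parameter decreases.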
\vspace*{0.1cm}
\begin{remark}
Since $f$ and $g$ in Lemma~\ref{lemma: tilted} are assumed
to have continuous second derivatives at~1, the left and right derivatives
of the weight function $w_f$ in \eqref{eq: weight function} at~1 satisfy,
in view of Remark~\ref{remark 2: w_f},
\begin{align}
w_f'(1^+) = -w_f'(1^-) = f''(1).
\end{align}
Hence, the limit in the right side of \eqref{eq: limit of ratio of f-div}
is equal to $\frac{w_f'(1^+)}{w_g'(1^+)}$ or also to $\frac{w_f'(1^-)}{w_g'(1^-)}$.
\end{remark}
\vspace*{0.1cm}
\begin{lemma} \label{lemma: chi^2 div}
\begin{align}
\label{eq: chi^2 identity}
\chi^2(\lambda P + (1-\lambda)Q \, \| \, Q) = \lambda^2 \, \chi^2(P \| Q), \quad \forall \, \lambda \in [0,1].
\end{align}
\end{lemma}
\begin{proof}
Let $p = \frac{\mathrm{d}P}{\mathrm{d}\mu}$
and $q = \frac{\mathrm{d}Q}{\mathrm{d}\mu}$
be the densities of $P$ and $Q$
with respect to an arbitrary probability measure
$\mu$ such that $P, Q \ll \mu$. Then,
\begin{align}
\chi^2(\lambda P + (1-\lambda)Q \, \| \, Q) &= \int \frac{\bigl( (\lambda p + (1-\lambda)q) - q \bigr)^2}{q} \, \mathrm{d}\mu \\
&= \lambda^2 \int \frac{(p-q)^2}{q} \, \mathrm{d}\mu \\
&= \lambda^2 \; \chi^2(P \| Q).
\end{align}
\end{proof}
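Outside the formal development, the identity in \eqref{eq: chi^2 identity} is easy to confirm numerically for finite discrete distributions. The short check below is only an illustration (the distributions $P, Q$ are arbitrary examples, not taken from the paper):

```python
def chi2(p, q):
    # chi-squared divergence between finite discrete distributions; assumes q > 0 everywhere
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

P = [0.7, 0.2, 0.1]   # example distributions, chosen arbitrarily for illustration
Q = [0.3, 0.3, 0.4]

for lam in [0.0, 0.25, 0.5, 1.0]:
    mix = [lam * pi + (1 - lam) * qi for pi, qi in zip(P, Q)]
    # identity: chi^2(lam*P + (1-lam)*Q || Q) = lam^2 * chi^2(P || Q)
    assert abs(chi2(mix, Q) - lam ** 2 * chi2(P, Q)) < 1e-12
```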
\begin{remark}
The result in Lemma~\ref{lemma: chi^2 div}, for the chi-squared divergence,
is generalized to the identity
\begin{align}
\label{eq: chi^s identity}
\chi^s(\lambda P + (1-\lambda)Q \, \| \, Q) = \lambda^s \, \chi^s(P \| Q), \quad \forall \, \lambda \in [0,1],
\end{align}
for all $s \geq 1$ (see \eqref{eq: chi^s div}). The special case of $s=2$ is required in the continuation of this section.
\end{remark}
\begin{remark}
The result in Lemma~\ref{lemma: chi^2 div} can be generalized as follows:
let $P, Q, R$ be probability measures, and $\lambda \in [0,1]$. Let
$P, Q, R \ll \mu$ for an arbitrary probability measure $\mu$, and
$p := \frac{\mathrm{d}P}{\mathrm{d}\mu}$, $q := \frac{\mathrm{d}Q}{\mathrm{d}\mu}$,
and $r := \frac{\mathrm{d}R}{\mathrm{d}\mu}$ be the corresponding
densities with respect to $\mu$. Calculation shows that
\begin{align} \label{eq: gen. identity chi^2}
\chi^2(\lambda P + (1-\lambda) Q \, \| \, R) - \chi^2(Q \| R)
&= c \lambda + \bigl[ \chi^2(P\|R) - \chi^2(Q\|R) - c\bigr] \lambda^2
\end{align}
with
\begin{align} \label{eq: c}
c & := \int \frac{(p-q)q}{r} \, \mathrm{d}\mu.
\end{align}
If $Q=R$, then $c=0$ in \eqref{eq: c}, and \eqref{eq: gen. identity chi^2}
is specialized to \eqref{eq: chi^2 identity}. However, if $Q \neq R$, then
$c$ may be non-zero. This shows that, for small $\lambda \in [0,1]$, the
left side of \eqref{eq: gen. identity chi^2} scales linearly in $\lambda$
if $c \neq 0$, and it has a quadratic scaling in $\lambda$ if $c=0$ and
$\chi^2(P\|R) \neq \chi^2(Q\|R)$ (e.g., if $Q=R$, as in Lemma~\ref{lemma: chi^2 div}).
The identity in \eqref{eq: gen. identity chi^2} yields
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}\lambda} \, \chi^2(\lambda P + (1-\lambda) Q \, \| \, R) \, \Bigl|_{\lambda=0}
= \lim_{\lambda \downarrow 0} \, \frac{\chi^2(\lambda P + (1-\lambda) Q \, \| \, R) - \chi^2(Q \| R)}{\lambda} = c.
\end{align}
\end{remark}
We next state the main result in this section.
\begin{theorem} \label{theorem: local behavior 2018}
Let
\begin{itemize}
\item $P$ and $Q$ be probability measures defined on a measurable space $(\mathcal{A}, \mathscr{F})$, $Y \sim Q$, and suppose that
\begin{align} \label{eq: bounded RND}
\text{ess\,sup} \frac{\mathrm{d}P}{\mathrm{d}Q} \, (Y) < \infty;
\end{align}
\item $f \in \mathcal{C}$, and $f''$ be continuous at 1.
\end{itemize}
Then,
\begin{align}
\lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_f( \lambda P + (1-\lambda)Q \, \| \, Q)
&= \lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_f(Q \, \| \, \lambda P + (1-\lambda)Q)
\label{eq1: local behavior} \\[0.1cm]
&= \tfrac12 \, f''(1) \, \chi^2(P \| Q).
\label{eq2: local behavior}
\end{align}
\end{theorem}
\begin{proof}
Let $\{\lambda_n\}_{n \in \ensuremath{\mathbb{N}}}$ be a sequence in $[0,1]$, which tends to zero.
Define the sequence of probability measures
\begin{align} \label{eq: R_n}
R_n := \lambda_n P + (1-\lambda_n) Q, \qquad n \in \ensuremath{\mathbb{N}}.
\end{align}
Note that $P \ll Q$ implies that $R_n \ll Q$ for all $n \in \ensuremath{\mathbb{N}}$. Since
\begin{align}
\frac{\mathrm{d}R_n}{\mathrm{d}Q} = \lambda_n \, \frac{\mathrm{d}P}{\mathrm{d}Q} + (1-\lambda_n),
\end{align}
it follows from \eqref{eq: bounded RND} that
\begin{align}
\lim_{n \to \infty} \text{ess\,sup} \frac{\mathrm{d}R_n}{\mathrm{d}Q} \; (Y) = 1.
\end{align}
Consequently, \eqref{eq3: PardoV} implies that
\begin{align} \label{eq: limit R_n}
\lim_{n \to \infty} \frac{D_f(R_n \| Q)}{\chi^2(R_n \| Q)} = \tfrac12 \, f''(1)
\end{align}
where $\{\lambda_n\}$ in \eqref{eq: R_n} is an arbitrary sequence which tends to zero.
Hence, it follows from \eqref{eq: R_n} and \eqref{eq: limit R_n} that
\begin{align} \label{eq: 101}
\lim_{\lambda \downarrow 0} \frac{D_f(\lambda P + (1-\lambda) Q
\, \| \, Q)}{\chi^2(\lambda P + (1-\lambda) Q \, \| \, Q)}
= \tfrac12 \, f''(1),
\end{align}
and, by combining \eqref{eq: chi^2 identity} and \eqref{eq: 101}, we get
\begin{align} \label{eq1: limit}
\lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_f( \lambda P
+ (1-\lambda)Q \, \| \, Q) = \tfrac12 \, f''(1) \, \chi^2(P \| Q).
\end{align}
We next prove the result for the limit in the right side of \eqref{eq1: local behavior}.
Let $f^\ast \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ be the conjugate function of $f$,
which is given in \eqref{eq: conjugate f}. Since $f$ is assumed to have a continuous second
derivative, so does $f^\ast$, and it is easy to verify that the second derivatives of $f$
and $f^\ast$ coincide at~1. Hence, from \eqref{eq: Df and Df^ast} and \eqref{eq1: limit},
\begin{align}
\lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_f(Q \, \| \, \lambda P + (1-\lambda)Q)
&= \lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_{f^\ast}( \lambda P + (1-\lambda)Q \, \| \, Q) \\
&= \tfrac12 \, f''(1) \, \chi^2(P \| Q).
\end{align}
\end{proof}
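As a concrete illustration of Theorem~\ref{theorem: local behavior 2018} (supplementary, not part of the proof), take $f(t) = t \log t$, so that $D_f$ is the relative entropy; working in nats gives $f''(1) = 1$, and the ratio $D_f(\lambda P + (1-\lambda) Q \,\|\, Q)/\lambda^2$ should approach $\tfrac12 \, \chi^2(P \| Q)$. A small numerical check with arbitrary example distributions:

```python
import math

def kl(p, q):
    # relative entropy D(p||q) in nats; assumes q > 0 wherever p > 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chi2(p, q):
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

P = [0.7, 0.2, 0.1]   # example distributions, chosen arbitrarily
Q = [0.3, 0.3, 0.4]
target = 0.5 * chi2(P, Q)    # f''(1) = 1 for f(t) = t*log(t) in nats

for lam in [1e-2, 1e-3, 1e-4]:
    mix = [lam * pi + (1 - lam) * qi for pi, qi in zip(P, Q)]
    print(lam, kl(mix, Q) / lam ** 2)   # approaches target as lam -> 0
```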
\begin{remark}
Although an $f$-divergence is in general not symmetric, in the sense that
the equality $D_f(P\|Q) = D_f(Q\|P)$ does not necessarily hold for all pairs
of probability measures $(P,Q)$, the reason for the equality in
\eqref{eq1: local behavior} stems from the fact that the second derivatives
of $f$ and $f^\ast$ coincide at~1 when $f$ is twice differentiable.
\end{remark}
\begin{remark}
Under the conditions in Theorem~\ref{theorem: local behavior 2018}, it follows
from \eqref{eq2: local behavior} that
\begin{align}
\label{first derivative D_f}
& \frac{\mathrm{d}}{\mathrm{d}\lambda} \; D_f(\lambda P + (1-\lambda)Q \, \| \, Q) \, \Bigl|_{\lambda=0}
= \lim_{\lambda \downarrow 0} \frac1\lambda \; D_f(\lambda P + (1-\lambda) Q \, \| \, Q) = 0, \\[0.1cm]
\label{second derivative D_f}
& \lim_{\lambda \downarrow 0} \; \frac{\mathrm{d^2}}{\mathrm{d}\lambda^2} \; D_f(\lambda P + (1-\lambda)Q \, \| \, Q)
= 2 \, \lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_f(\lambda P + (1-\lambda) Q \, \| \, Q) = f''(1) \, \chi^2(P\|Q)
\end{align}
where \eqref{second derivative D_f} relies on L'H\^{o}pital's rule.
The convexity of $D_f(P \| Q)$ in $(P,Q)$ also implies that, for all $\lambda \in (0,1]$,
\begin{align}
D_f(\lambda P + (1-\lambda) Q \, \| \, Q) \leq \lambda D_f(P \| Q).
\end{align}
\end{remark}
The following result refers to the local behavior of R\'{e}nyi divergences of an
arbitrary non-negative order.
\begin{corollary}
Under the condition in \eqref{eq: bounded RND}, for every $\alpha \in [0, \infty]$,
\begin{align}
\lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_\alpha( \lambda P + (1-\lambda)Q \, \| \, Q)
&= \lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; D_\alpha(Q \, \| \, \lambda P + (1-\lambda)Q)
\label{eq1: local behavior RD} \\[0.1cm]
&= \tfrac12 \, \alpha \, \chi^2(P \| Q) \, \log e.
\label{eq2: local behavior RD}
\end{align}
\end{corollary}
\begin{proof}
Let $\alpha \in (0,1) \cup (1, \infty)$. In view of \eqref{eq: H as fD} and Theorem~\ref{theorem: local behavior 2018},
it follows that the local behavior of the Hellinger divergence of order $\alpha$ satisfies
\begin{align}
\lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; \mathscr{H}_{\alpha}( \lambda P + (1-\lambda)Q \, \| \, Q)
&= \lim_{\lambda \downarrow 0} \frac1{\lambda^2} \; \mathscr{H}_{\alpha}(Q \, \| \, \lambda P + (1-\lambda)Q)
\label{eq1: local behavior Hel} \\[0.1cm]
&= \tfrac12 \, \alpha \, \chi^2(P \| Q).
\label{eq2: local behavior Hel}
\end{align}
The result now follows from \eqref{renyimeetshellinger}, which implies that
\begin{align}
\label{eq1: ratio RD/Hel}
\lim_{\lambda \downarrow 0} \frac{D_{\alpha}( \lambda P + (1-\lambda)Q \, \| \, Q)}{\mathscr{H}_{\alpha}( \lambda P + (1-\lambda)Q \, \| \, Q)}
&= \lim_{\lambda \downarrow 0} \frac{D_{\alpha}(Q \, \| \, \lambda P + (1-\lambda)Q)}{\mathscr{H}_{\alpha}(Q \, \| \, \lambda P + (1-\lambda)Q)} \\
&= \frac1{\alpha-1} \lim_{u \to 0} \frac{\log \bigl(1 + (\alpha-1) u \bigr)}{u} \\
&= \log e. \label{eq2: ratio RD/Hel}
\end{align}
The result in \eqref{eq1: local behavior RD} and \eqref{eq2: local behavior RD}, for $\alpha \in (0,1) \cup (1, \infty)$,
follows by combining the equalities in \eqref{eq1: local behavior Hel}--\eqref{eq2: ratio RD/Hel}.
Finally, the result in \eqref{eq1: local behavior RD} and \eqref{eq2: local behavior RD} for $\alpha \in \{0, 1, \infty\}$
follows from its validity for all $\alpha \in (0,1) \cup (1, \infty)$, and also due to the property where
$D_{\alpha}(\cdot \| \cdot)$ is monotonically increasing in $\alpha$ (see \cite[Theorem~3]{ErvenH14}).
\end{proof}
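The corollary can likewise be illustrated numerically (again, a supplementary sanity check with arbitrary example distributions): in nats ($\log e = 1$), the ratio $D_\alpha(\lambda P + (1-\lambda)Q \,\|\, Q)/\lambda^2$ should approach $\tfrac12 \, \alpha \, \chi^2(P\|Q)$ for every order $\alpha$:

```python
import math

def renyi(p, q, a):
    # Renyi divergence of order a (a != 0, 1) in nats, for finite distributions
    return math.log(sum(pi ** a * qi ** (1.0 - a) for pi, qi in zip(p, q))) / (a - 1.0)

def chi2(p, q):
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

P = [0.7, 0.2, 0.1]   # arbitrary example distributions
Q = [0.3, 0.3, 0.4]
lam = 1e-4
mix = [lam * pi + (1 - lam) * qi for pi, qi in zip(P, Q)]

for a in [0.5, 2.0, 4.0]:
    ratio = renyi(mix, Q, a) / lam ** 2
    # corollary (in nats, log e = 1): the limit is (a/2) * chi^2(P || Q)
    assert abs(ratio - 0.5 * a * chi2(P, Q)) < 1e-2
```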
\appendices
\section{Proof of Theorem~\ref{theorem: some int. representations}}
\label{appendix: proof of identities}
We prove in the following the integral representations of $f$-divergences and
related measures in Theorem~\ref{theorem: some int. representations}.
\begin{enumerate}[1)]
\item Relative entropy: The function $f \in \mathcal{C}$ in \eqref{eq: f for KL}
yields the following weight function in \eqref{eq: weight function}:
\begin{align}
w_f(\beta) = \left( \frac1\beta - \frac1{\beta^2} \right) \left( 1\{\beta \geq 1\} - 1\{0 < \beta < 1\}\right) \, \log e, \quad \beta > 0.
\end{align}
Consequently, setting $c:=\log e$ in \eqref{eq: generalized w_f} yields
\begin{align}
\label{eq: modified w for KL}
\widetilde{w}_{f,c}(\beta) = \frac1\beta \left( 1\{\beta \geq 1\} - 1\{0 < \beta < 1\}\right) \log e,
\end{align}
for $\beta>0$. Equality \eqref{eq: int. rep. KL} follows from the substitution of \eqref{eq: modified w for KL}
into the right side of \eqref{eq2: new int rep Df}.
\item Hellinger divergence: In view of \eqref{eq: Hel-divergence}, for $\alpha \in (0,1) \cup (1, \infty)$,
the weight function $w_{f_\alpha} \colon (0, \infty) \mapsto [0, \infty)$ in \eqref{eq: weight function}
which corresponds to $f_\alpha \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ in \eqref{eq: H as fD} can be
verified to be equal to
\begin{align} \label{eq1: w for Hel}
w_{f_\alpha}(\beta) = \left( \beta^{\alpha-2} - \frac1{\beta^2} \right) \left( 1\{\beta \geq 1\} - 1\{0 < \beta < 1\} \right)
\end{align}
for $\beta>0$. In order to simplify the integral representation of the Hellinger divergence $\mathscr{H}_{\alpha}(P\|Q)$, we
apply Theorem~\ref{theorem: Int. rep.}-\ref{theorem: int. rep. - part 2}). From \eqref{eq1: w for Hel}, setting $c:=1$ in
\eqref{eq: generalized w_f} implies that $\widetilde{w}_{f_\alpha, 1} \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ is given by
\begin{align} \label{eq2: w for Hel}
\widetilde{w}_{f_\alpha, 1}(\beta) = \beta^{\alpha-2} \left( 1\{\beta \geq 1\} - 1\{0 < \beta < 1\} \right)
\end{align}
for $\beta>0$. Hence, substituting \eqref{eq: G function} and \eqref{eq2: w for Hel} into \eqref{eq2: new int rep Df} yields
\begin{align} \label{eq3: int. rep. Hel}
\mathscr{H}_{\alpha}(P\|Q) = \int_1^\infty \beta^{\alpha-2} \, \bigl(1-\mathds{F}_{P\|Q}(\log \beta)\bigr) \, \mathrm{d}\beta
- \int_0^1 \beta^{\alpha-2} \, \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta.
\end{align}
For $\alpha > 1$, \eqref{eq3: int. rep. Hel} yields
\begin{align} \label{eq4: int. rep. Hel}
\mathscr{H}_{\alpha}(P\|Q) &= \int_0^\infty \beta^{\alpha-2} \, \bigl(1-\mathds{F}_{P\|Q}(\log \beta)\bigr) \,
\mathrm{d}\beta - \int_0^1 \beta^{\alpha-2} \, \mathrm{d}\beta \\[0.1cm]
&= \int_0^\infty \beta^{\alpha-2} \, \bigl(1-\mathds{F}_{P\|Q}(\log \beta)\bigr) \, \mathrm{d}\beta - \frac1{\alpha-1},
\end{align}
and, for $\alpha \in (0,1)$, \eqref{eq3: int. rep. Hel} yields
\begin{align} \label{eq5: int. rep. Hel}
\mathscr{H}_{\alpha}(P\|Q) &= \int_1^\infty \beta^{\alpha-2} \, \mathrm{d}\beta
- \int_0^\infty \beta^{\alpha-2} \, \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta \\[0.1cm]
\label{eq6: int. rep. Hel}
&= \frac1{1-\alpha} - \int_0^\infty \beta^{\alpha-2} \, \mathds{F}_{P\|Q}(\log \beta) \, \mathrm{d}\beta.
\end{align}
This proves \eqref{eq: int. rep. Hel}. We next consider the following special cases:
\begin{itemize}
\item In view of \eqref{eq2: chi^2}, equality \eqref{eq: int. rep. chi^2 div}
readily follows from \eqref{eq: int. rep. Hel} with $\alpha=2$.
\item In view of \eqref{eq3: Sq Hel}, equality \eqref{eq: int. rep. H^2 dist}
readily follows from \eqref{eq: int. rep. Hel} with $\alpha=\tfrac12$.
\item In view of \eqref{eq: B dist}, equality \eqref{eq: int. rep. B dist}
readily follows from \eqref{eq: int. rep. H^2 dist}.
\end{itemize}
\item R\'enyi divergence:
In view of the one-to-one correspondence in \eqref{renyimeetshellinger}
between the R\'enyi divergence and the Hellinger divergence of the same order,
\eqref{eq: int. rep. RenyiD} readily follows from \eqref{eq: int. rep. Hel}.
\item $\chi^s$ divergence with $s \geq 1$: We first consider the case where $s>1$.
From \eqref{eq: chi^s div}, the function $f_s \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$
in \eqref{eq: f - chi^s div} is differentiable and $f_s'(1)=0$. Hence, the respective
weight function $w_{f_s} \colon (0, \infty) \mapsto (0, \infty)$ can be verified from
\eqref{eq: weight function} to be given by
\begin{align} \label{eq: weight function chi^s}
w_{f_s}(\beta) = \frac1\beta \left( s-1+\frac1\beta \right) |\beta-1|^{s-1}, \quad \beta>0.
\end{align}
The result in \eqref{eq: int. rep. chi^s}, for $s > 1$, follows readily from
\eqref{eq: chi^s div}, \eqref{eq: G function}, \eqref{eq: new int rep Df} and
\eqref{eq: weight function chi^s}.
We next prove \eqref{eq: int. rep. chi^s} with $s=1$. In view of \eqref{eq: f - chi^s div},
\eqref{eq: f-TV}, \eqref{eq1: TV distance} and the dominated convergence theorem,
\begin{align}
\label{eq1: TV - s=1}
|P-Q| &= \lim_{s \downarrow 1} \, \chi^s(P \| Q) \\
\label{eq2: TV - s=1}
&= \int_1^{\infty} \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta +
\int_0^1 \frac{\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta.
\end{align}
This extends \eqref{eq: int. rep. chi^s} for all $s \geq 1$, although $f_1(t)=|t-1|$
for $t>0$ is not differentiable at~1. For $s=1$, in view of \eqref{eq7: identity RIS},
the integral representation in the right side of \eqref{eq2: TV - s=1} can be
simplified to \eqref{eq: int. rep. TV} and \eqref{eq2: int. rep. TV}.
\item DeGroot statistical information:
In view of \eqref{eq:DG f-div}--\eqref{eq: f for DG}, since the function
$\phi_\omega \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ is not differentiable at the point
$\frac{1-\omega}{\omega} \in (0, \infty)$ for $\omega \in (0,1)$,
Theorem~\ref{theorem: Int. rep.} cannot be applied directly to get an integral
representation of the DeGroot statistical information. To that end,
for $(\omega, \alpha) \in (0,1)^2$, consider the family of convex functions
$f_{\omega, \alpha} \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ given by (see \cite[(55)]{LieseV_IT2006})
\begin{align} \label{LieseV-55}
f_{\omega, \alpha}(t) = \frac1{1-\alpha} \left( \Bigl[ (\omega t )^{\frac1\alpha}
+ (1-\omega)^{\frac1{\alpha}} \Bigr]^\alpha - \Bigl[ \omega^{\frac1\alpha}
+ (1-\omega)^{\frac1{\alpha}} \Bigr]^\alpha \right),
\end{align}
for $t > 0$. These differentiable functions also satisfy
\begin{align}
\label{eq: f --> phi}
\lim_{\alpha \downarrow 0} f_{\omega, \alpha}(t) = \phi_\omega(t),
\end{align}
which holds due to the identities
\begin{align}
\label{eq1: 2 identities}
& \lim_{\alpha \downarrow 0} \left(a^{\frac1{\alpha}} + b^{\frac1{\alpha}}\right)^\alpha = \max\{a,b\}, \quad \; \; a, b \geq 0; \\
\label{eq2: 2 identities}
& \min\{a,b\} = a+b - \max\{a,b\}, \quad a, b \in \ensuremath{\mathbb{R}}.
\end{align}
The application of Theorem~\ref{theorem: Int. rep.}-\ref{theorem: int. rep. - part 2}) to the set
of functions $f_{\omega, \alpha} \in \mathcal{C}$ with
\begin{align}
c:= \frac{(1-\omega)^{\frac1\alpha}}{\alpha-1} \left[ \omega^{\frac1\alpha}
+ (1-\omega)^{\frac1{\alpha}} \right]^{\alpha-1}
\end{align}
yields
\begin{align}
\label{eq: w - Arimoto info.}
\widetilde{w}_{f_{\omega, \alpha}, \, c}(\beta) = \frac{1-\omega}{1-\alpha} \,
\frac1{\beta^2} \left[ 1 + \left( \frac{\omega \beta}{1-\omega} \right)^{\frac1\alpha} \right]^{\alpha-1} \;
\Bigl[ 1\{0<\beta<1\} - 1\{\beta \geq 1\} \Bigr],
\end{align}
for $\beta>0$, and
\begin{align}
\label{eq: Arimoto info.}
D_{f_{\omega, \alpha}}(P\|Q) = \int_0^{\infty} \widetilde{w}_{f_{\omega, \alpha}, \, c}(\beta) \, G_{P\|Q}(\beta) \, \mathrm{d}\beta
\end{align}
with $G_{P\|Q}(\cdot)$ as defined in \eqref{eq: G function}, and $(\omega, \alpha) \in (0,1)^2$.
From \eqref{eq1: 2 identities} and \eqref{eq: w - Arimoto info.}, it follows that
\begin{align}
\label{eq: limit of w for DeGroot info.}
\lim_{\alpha \downarrow 0} \, \widetilde{w}_{f_{\omega, \alpha}, \, c}(\beta)
= \frac{1-\omega}{\beta^2} \; \Bigl[ 1\{0<\beta<1\} - 1\{\beta \geq 1\} \Bigr] \;
\Bigl[ \tfrac12 \, 1\bigl\{\beta = \tfrac{1-\omega}{\omega}\bigr\} + 1\bigl\{0<\beta<\tfrac{1-\omega}{\omega}\bigr\} \Bigr],
\end{align}
for $\beta>0$. In view of \eqref{eq:DG f-div}, \eqref{eq: f for DG}, \eqref{eq: G function}, \eqref{eq: f --> phi},
\eqref{eq: Arimoto info.} and \eqref{eq: limit of w for DeGroot info.}, and the monotone convergence theorem,
\begin{align}
\mathcal{I}_{\omega}(P\|Q) &= D_{\phi_\omega}(P\|Q) \\
&=\lim_{\alpha \downarrow 0} D_{f_{\omega, \alpha}}(P\|Q)\\
\label{eq: to be simplified}
&= (1-\omega) \int_0^{\min\{1, \frac{1-\omega}{\omega}\}} \; \frac{\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta
- (1-\omega) \int_1^{\max\{1, \frac{1-\omega}{\omega}\}} \; \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta,
\end{align}
for $\omega \in (0,1)$. We next simplify \eqref{eq: to be simplified} as follows:
\begin{enumerate}[a)]
\item if $\omega \in \bigl(\tfrac12, 1 \bigr)$, then $\tfrac{1-\omega}{\omega} < 1$ and \eqref{eq: to be simplified} yields
\begin{align}
\mathcal{I}_{\omega}(P\|Q) &= (1-\omega) \int_0^{\frac{1-\omega}{\omega}} \; \frac{\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta;
\end{align}
\item if $\omega \in \bigl(0, \tfrac12 \bigr]$, then $\tfrac{1-\omega}{\omega} \geq 1$ and \eqref{eq: to be simplified} yields
\begin{align}
\mathcal{I}_{\omega}(P\|Q) &= (1-\omega) \int_0^1 \frac{\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta -
(1-\omega) \int_1^{\frac{1-\omega}{\omega}} \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta \\[0.2cm]
\label{eq: DeGroot + Lemma}
&= (1-\omega) \int_1^\infty \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta -
(1-\omega) \int_1^{\frac{1-\omega}{\omega}} \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta \\[0.2cm]
&= (1-\omega) \int_{\frac{1-\omega}{\omega}}^\infty \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta,
\end{align}
where \eqref{eq: DeGroot + Lemma} follows from \eqref{eq7: identity RIS} (or its equivalent form in \eqref{eq: int. identity with RIS}).
\end{enumerate}
This completes the proof of \eqref{eq: int. rep. DeGroot Info}. Note that, due to \eqref{eq7: identity RIS},
the integral representation of $\mathcal{I}_{\omega}(P\|Q)$ in \eqref{eq: int. rep. DeGroot Info} is indeed
continuous at $\omega=\tfrac12$.
\item Triangular discrimination: In view of \eqref{eq:delta}--\eqref{eq:tridiv},
the corresponding function $\widetilde{w}_{f,1} \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ in
\eqref{eq: generalized w_f} (i.e., with $c:=1$) can be verified to be given by
\begin{align} \label{eq: w for delta div}
\widetilde{w}_{f,1}(\beta) = \frac{4}{(\beta+1)^2} \left(1\{\beta \geq 1\} - 1\{0 < \beta < 1\} \right)
\end{align}
for $\beta>0$. Substituting \eqref{eq: G function} and \eqref{eq: w for delta div} into
\eqref{eq2: new int rep Df} proves \eqref{eq: int. rep. TD} as follows:
\begin{align}
\Delta(P\|Q) &= 4 \left( \int_1^\infty \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{(\beta+1)^2} \, \mathrm{d}\beta
- \int_0^1 \frac{\mathds{F}_{P\|Q}(\log \beta)}{(\beta+1)^2} \, \mathrm{d}\beta \right) \\[0.3cm]
&= 4 \left( \int_0^\infty \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{(\beta+1)^2} \, \mathrm{d}\beta
- \int_0^1 \frac1{(\beta+1)^2} \, \mathrm{d}\beta \right) \\[0.3cm]
&= 4 \int_0^\infty \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{(\beta+1)^2} \, \mathrm{d}\beta - 2.
\end{align}
\item Lin's measure and the Jensen-Shannon divergence: Let $\theta \in (0,1)$
(if $\theta \in \{0, 1\}$, then \eqref{eq: Lin91}--\eqref{eq2: Lin91}
imply that $L_\theta(P\|Q)=0$). In view of \eqref{eq: Lin div as Df},
the application of Theorem~\ref{theorem: Int. rep.}-\ref{theorem: int. rep. - part 1})
with the function $f_{\theta} \colon (0, \infty) \mapsto \ensuremath{\mathbb{R}}$ in
\eqref{eq: f of Lin div} yields the weight function
$w_{f_\theta} \colon (0, \infty) \mapsto [0, \infty)$ defined as
\begin{align} \label{eq: w_f - Lin}
w_{f_\theta}(\beta) &= \frac{(1-\theta) \, \log(\theta \beta + 1 - \theta)}{\beta^2}
\; \Bigl(1\{\beta \geq 1\} - 1\{0 < \beta < 1\}\Bigr).
\end{align}
Consequently, we get
\begin{align}
\label{eq11: Lin int. rep.}
L_{\theta}(P\|Q) = & \, (1-\theta) \left( \int_1^\infty \tfrac{\log(\theta \beta + 1 - \theta)}{\beta^2}
\left(1 - \mathds{F}_{P \| Q}(\log \beta)\right) \, \mathrm{d}\beta
- \int_0^1 \tfrac{\log(\theta \beta + 1 - \theta)}{\beta^2} \;
\mathds{F}_{P \| Q}(\log \beta) \, \mathrm{d}\beta \right) \\[0.3cm]
\label{eq12: Lin int. rep.}
= & \, (1-\theta) \left( \int_1^\infty \frac{\log(\theta \beta + 1 - \theta)}{\beta^2} \, \mathrm{d}\beta
- \int_0^\infty \frac{\log(\theta \beta + 1 - \theta)}{\beta^2} \;
\mathds{F}_{P \| Q}(\log \beta) \, \mathrm{d}\beta \right) \\[0.3cm]
\label{eq13: Lin int. rep.}
= & \, \theta \log \frac1\theta - (1-\theta) \int_0^\infty \frac{\log(\theta \beta + 1 - \theta)}{\beta^2} \;
\mathds{F}_{P \| Q}(\log \beta) \, \mathrm{d}\beta \\[0.3cm]
\label{eq14: Lin int. rep.}
= & \, h(\theta) - (1-\theta) \int_0^\infty
\frac1{\beta^2} \, \log\left(\frac{\theta \beta}{1 - \theta} + 1\right) \;
\mathds{F}_{P \| Q}(\log \beta) \, \mathrm{d}\beta
\end{align}
where \eqref{eq11: Lin int. rep.} follows from \eqref{eq: G function}, \eqref{eq: new int rep Df}
and \eqref{eq: w_f - Lin}; for $\theta \in (0,1)$, equality \eqref{eq13: Lin int. rep.} holds since
\begin{align}
\label{eq15: Lin int. rep.}
\int_1^\infty \frac{\log(\theta \beta + 1 - \theta)}{\beta^2} \, \mathrm{d}\beta
= \frac{\theta}{1-\theta} \, \log \frac1{\theta};
\end{align}
finally, \eqref{eq14: Lin int. rep.} follows from \eqref{eq: int. identity with RIS} where
$h \colon [0,1] \mapsto [0,\log 2]$ denotes the binary entropy function. This proves
\eqref{eq: int. rep. Lin's div}. In view of \eqref{eq:js1}, the identity in \eqref{eq: int. rep. JS div}
for the Jensen-Shannon divergence follows from \eqref{eq: int. rep. Lin's div} with $\theta = \tfrac12$.
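For completeness, the integral evaluation in \eqref{eq15: Lin int. rep.} can be verified by integration by parts (this supplementary derivation uses the same generic logarithm base as the rest of the paper):
\begin{align*}
\int_1^\infty \frac{\log(\theta \beta + 1 - \theta)}{\beta^2} \, \mathrm{d}\beta
&= \Bigl[ -\frac{\log(\theta \beta + 1 - \theta)}{\beta} \Bigr]_1^{\infty}
+ \int_1^\infty \frac{\theta \log e}{\beta \, (\theta \beta + 1 - \theta)} \, \mathrm{d}\beta \\
&= \frac{\theta}{1-\theta} \, \Bigl[ \log \frac{\beta}{\theta \beta + 1 - \theta} \Bigr]_1^{\infty}
= \frac{\theta}{1-\theta} \, \log \frac1{\theta},
\end{align*}
where the boundary term vanishes at both endpoints, and the second line uses the partial-fraction decomposition $\frac1{\beta(\theta\beta+1-\theta)} = \frac1{1-\theta} \bigl( \frac1\beta - \frac{\theta}{\theta\beta+1-\theta} \bigr)$.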
\item Jeffrey's divergence: In view of \eqref{eq2: jeffreys}--\eqref{f - jeffreys}, the corresponding
weight function $w_f \colon (0, \infty) \mapsto [0, \infty)$ in \eqref{eq: weight function} can be
verified to be given by
\begin{align} \label{eq1: w for jeffreys}
w_f(\beta) = \left( \frac{\log e}{\beta} + \frac1{\beta^2} \, \log \frac{\beta}{e} \right)
\left( 1\{\beta \geq 1\} - 1\{0<\beta<1\} \right).
\end{align}
Hence, setting $c := \log e$ in \eqref{eq: generalized w_f} implies that
\begin{align} \label{eq2: w for jeffreys}
\widetilde{w}_{f,c}(\beta)
= \left( \frac{\log e}{\beta} + \frac{\log \beta}{\beta^2} \right) \left( 1\{\beta \geq 1\} - 1\{0<\beta<1\} \right)
\end{align}
for $\beta > 0$. Substituting \eqref{eq: G function} and \eqref{eq2: w for jeffreys} into
\eqref{eq2: new int rep Df} yields \eqref{eq: int. rep. Jefreey's div}.
\item $E_\gamma$ divergence: Let $\gamma \geq 1$, and let
$\omega \in \bigl(0, \tfrac12 \bigr]$ satisfy $\frac{1-\omega}{\omega} = \gamma$; hence,
$\omega = \frac1{1+\gamma}$. From \eqref{eq: DG-EG}, we get
\begin{align} \label{eq1: EG-DG}
E_{\gamma}(P\|Q) = (1+\gamma) \, \mathcal{I}_{\frac1{1+\gamma}}(P\|Q).
\end{align}
The second line in the right side of \eqref{eq: int. rep. DeGroot Info} yields
\begin{align} \label{eq2: EG-DG}
\mathcal{I}_{\frac1{1+\gamma}}(P\|Q) = \frac{\gamma}{1+\gamma}
\int_\gamma^\infty \frac{1-\mathds{F}_{P\|Q}(\log \beta)}{\beta^2} \, \mathrm{d}\beta.
\end{align}
Finally, substituting \eqref{eq2: EG-DG} into the right side of \eqref{eq1: EG-DG}
yields \eqref{eq: int. rep. E_gamma}.
\end{enumerate}
\begin{remark}
In view of \eqref{eq7: identity RIS}, the integral representation
for the $\chi^s$ divergence in \eqref{eq: int. rep. chi^s} specializes to
\eqref{eq: int. rep. chi^2 div} and \eqref{eq: int. rep. TV}--\eqref{eq2: int. rep. TV}
by letting $s=2$ and $s=1$, respectively.
\end{remark}
\begin{remark}
In view of \eqref{eq:EG-TV}, the first identity for the total variation distance
in \eqref{eq: int. rep. TV} follows readily from \eqref{eq: int. rep. E_gamma}
with $\gamma=1$. The second identity in \eqref{eq2: int. rep. TV} follows from
\eqref{eq: int. identity with RIS} and \eqref{eq: int. rep. TV}, and since
$\int_1^\infty \frac{\mathrm{d}\beta}{\beta^2} = 1$.
\end{remark}
\section*{Acknowledgment}
The author is grateful to Sergio Verd\'{u} and the two anonymous reviewers, whose suggestions improved the
presentation in this paper.
Reinforcement learning (RL) has achieved state-of-the-art results in gaming~\cite{silver2017mastering}, robotics~\cite{andrychowicz2018learning} and other domains.
Our work relates to the growing literature of applying RL to optimization problems. \cite{bello2016neural} show RL techniques produce near optimal solutions for the traveling salesman (TSP) and knapsack problems. \cite{kool2018attention} use RL to solve TSP and its variants: vehicle routing, orienteering, and a stochastic variant of prize-collecting TSP. \cite{nazari2018reinforcement} solve both static and online versions of the vehicle routing problem. \cite{gijsbrechts2018can} apply RL to the dual sourcing inventory replenishment problem, and further demonstrate results on a real dataset. \cite{kong2018new} apply RL to online versions of the knapsack, secretary and adwords problems. \cite{oroojlooyjadid2017deep} apply RL to the beer game problem. \cite{lin2018efficient} use RL for fleet management of taxis on a real life dataset.
Our contribution is to extend the existing RL literature to a set of dynamic resource allocation problems which parallel real-world problems. In particular, we present benchmarks for three classic problems: Bin Packing, Newsvendor and Vehicle Routing. In each case, we show that trained policies from out-of-the-box RL algorithms with simple 2 layer neural networks are competitive with or superior to established approaches.
We open source our code\footnote{https://github.com/awslabs/or-rl-benchmarks} and parameterize the complexity of the problems in order to encourage fair comparisons of algorithmic contributions. Each environment is implemented with the OpenAI Gym interface~\cite{brockman2016openai} and integrated with the RLlib~\cite{liang2018rllib} library so researchers can replicate our results, test algorithms and tune hyperparameters.
\section{Bin Packing}
\label{binpacking}
In the classic version of the bin packing problem, we are given items of different sizes and need to pack them into as few bins as possible. In the online stochastic version, items arrive one at a time with item sizes drawn from a fixed but unknown distribution.
Many resource allocation problems in Operations Research and Computer Science face uncertain supply, and can be cast as variants of the online bin packing problem. In warehouse and transportation operations, variants of bin packing can be seen in: the order assignment problem (where we assign orders to fulfillment resources), the tote packing problem (where we fill items as they arrive into totes for shipment), and the trailer truck packing problem. In computing, bin packing problems arise in cloud computing scenarios, where virtual machines with varying memory and CPU requirements are allocated to servers with fixed capacity.
\subsection{Problem Formulation} \label{sec: bin_packing_prob_form}
We use a formulation of the bin packing problem similar to \citet{gupta2012online}. In the online stochastic bin packing problem, items arrive online, one in each time period $t$, with $t \in \{1, \ldots, T\}$. Items can be of different types $j \in \{1,...,J\}$. The size of type $j$ is $s_j$ and the probability that an item is of type $j$ is $p_j$. Without loss of generality, we assume item types are indexed in the increasing order of their size: $s_1 < s_2 < ... < s_J$. Upon arrival, the item needs to be packed into one of the bins, each with size $B$ (we assume that $s_J < B < \infty$). A packing is considered feasible if the total size of the items packed in each bin does not exceed the bin size. The task is to find a feasible packing that minimizes the number of bins used to pack all of the items that arrive within the time horizon. We assume the item sizes $s_j$ and bin size $B$ are integers. We assume the number of bins one can open is unlimited and denote the sum of item sizes in a bin $k$ as \emph{level} $h_{k}$. After $t$ items have been packed, we denote the number of bins at some level $h$ as $N_h(t)$, where $h \in \{1,...,B\}$.
It can be shown that minimizing the number of non-empty bins is equivalent to minimizing the total waste (i.e. empty space) in the partially filled bins. In real applications (e.g. packing trucks, or virtual machines), there is a dollar-value cost associated with the consumption of these resources, so at any time horizon our objective is to minimize total waste $\sum_{t=0}^{T} W(t)$, where
\begin{equation}
\label{waste}
W(t) \triangleq \sum_{h=1}^{B-1} N_h(t)(B-h).
\end{equation}
\noindent We use $W^{A}_{F}(t)$ to denote the total waste after step $t$ of algorithm $A$ when the input samples come from distribution $F$. For RL, we define the cumulative reward up to time step $t$ to be $-W^{RL}_{F}(t)$. \citet{courcobetis1990stability} showed that any discrete distribution falls into one of three categories based on the expected optimal waste $E[W^{OPT}_{F}(t)]$, where OPT is an offline optimal policy.
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Linear waste (LW): $E[W^{OPT}_{F}(t)] = \Theta(t)$, e.g. $B = 9$, two item types of size $\{2,3\}$ with probability $\{0.8, 0.2\}$ respectively.
\item Perfectly Packable (PP): $E[W^{OPT}_{F}(t)] = \Theta(\sqrt{t})$, e.g. $B = 9$, two item types of size $\{2,3\}$ with probability $\{0.75, 0.25\}$ respectively.
\item PP with bounded waste (BW): $E[W^{OPT}_{F}(t)] = \Theta(1)$, e.g. $B = 9$, two item types of size $\{2,3\}$ with probability $\{0.5, 0.5\}$ respectively.
\end{enumerate}
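To make the objective concrete: the waste $W(t)$ in \eqref{waste} depends only on the multiset of open-bin levels. A minimal computation (illustrative only, not taken from the benchmark code; `levels` is assumed to list the fill level of every open bin):

```python
from collections import Counter

def total_waste(levels, B):
    # W(t) = sum over levels h in {1, ..., B-1} of N_h(t) * (B - h);
    # completely full bins (h == B) contribute no waste
    counts = Counter(levels)
    return sum(n * (B - h) for h, n in counts.items() if 0 < h < B)

# bin size B = 9, open bins at levels 9 (full), 7 and 2:
assert total_waste([9, 7, 2], 9) == (9 - 7) + (9 - 2)   # = 9
```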
We will train an RL policy for each of the three distribution types and compare our policy to the appropriate baseline.
We formulate the bin packing problem as an MDP, where the state $S_t \in \mathcal{S}$ consists of the current item with size $s_j$ and the number of bins at each level, $N_h(t)$, where $h \in \{1,...,B\}$. The action $a$ is to pick a bin level which can fit the item. Thus, the number of actions possible is $B$, with one action for each level; action 0 corresponds to opening a new bin. An episode defines the start and end of a simulation. Initially, all the bins are empty. The reward $R_t$ is the negative of the incremental waste as each item of size $s_j$ is put into a bin. If the item is put into an existing bin, the waste decreases by the item size. If the item is put into a new bin, the waste increases by the empty space left in the new bin. $T$ items need to be placed in the bins, after which the episode ends. We leave a varying number of time steps for future work. We impose action masking for infeasible actions, such as picking a level for which no bins exist yet, by outputting a probability for every action (regardless of feasibility) but multiplying the infeasible action probabilities by zero so they are never executed.
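The transition and reward logic above can be sketched as follows (a simplified stand-in for the benchmark environment, not its exact implementation; `levels` holds the fill level of each open bin, and infeasible actions are assumed to be masked out by the environment):

```python
def step(levels, action, size, B):
    """Place an item of `size`; `action` is 0 (open a new bin) or an existing
    bin level h. Returns (reward, new_levels); reward = -(incremental waste)."""
    levels = list(levels)
    if action == 0:
        levels.append(size)
        return -(B - size), levels      # waste grows by the space left in the new bin
    assert action in levels and action + size <= B, "infeasible action (masked)"
    levels[levels.index(action)] = action + size
    return size, levels                 # waste shrinks by the item size
```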
\begin{figure*}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/bin_packing_rl_vs_baseline_bounded_waste_binsize_100.png}
\caption{RL vs baseline for BW distribution}
\label{fig:bin_packing_BW_dist1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/bin_packing_rl_vs_baseline_perfect_pack_binsize_100.png}
\caption{RL vs baseline for PP distribution}
\label{fig:bin_packing_PP_dist1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/bin_packing_rl_vs_baseline_linear_waste_binsize_100.png}
\caption{RL vs baseline for LW distribution}
\label{fig:bin_packing_LW_dist1}
\end{subfigure}
\caption{Comparison of episodic rewards between RL and Best Fit baseline during training.}
\label{fig:RLvsBF_binpacking}
\end{figure*}
\subsection{Related Work}
Bin packing is a well-studied problem in the operations research and computer science literature.
The problem is already NP-hard in its basic form. As a result, many of the classical approaches to bin packing analyze the performance of approximation algorithms. We refer the readers to the survey \cite{Coffmanetal2013} for algorithmic approaches to classical bin packing and its generalizations.
For online bin packing, a simple heuristic -- Best Fit -- is known to use at most $1.7$ times the optimal number of bins in the worst case \cite{Johnsonetal1974}. Best Fit places an item in the bin that, once the item is packed, would leave the least amount of free space.
Another competitive heuristic is the Sum of Squares (SS) heuristic \cite{csirik2006sum}. In particular, SS is proven to be asymptotically optimal (up to constants) as the episode length grows.
The simple heuristics described above are distribution agnostic. More sophisticated algorithms learn an empirical estimate of the item size distribution, leverage such distribution to solve a linear program, and use its dual to guide the online policy~\cite{IyengarSigman2004}. This approach has been used to solve online packing and covering problems \cite{GuptaMolinaro2014,AgrawalDevanur2015}.
\subsection{Baseline Algorithms}
We use the Sum of Squares (SS) heuristic and Best Fit (BF) as our baseline algorithms.
When the $t$th item of size $s$ arrives, SS picks a bin of level $h^*$ that minimizes the value of the following sum-of-squares potential:
\begin{equation}
\sum_{h=1}^{B-1} (N_{h}(t))^2 \label{SOS eq}.
\end{equation}
\noindent It can be shown that minimizing (\ref{SOS eq}) is equivalent to:
\begin{equation}
h^* = \underset{h:N_h(t-1)>0 \ \text{and} \ h+s \leq B}{\arg\min} [N_{h+s}(t-1) - N_h(t-1)], \label{SOS eq1}
\end{equation}
\noindent where $N_0 = N_B = 0$. Intuitively, SS tries to equalize the number of bins at each level. Due to its simplicity, we implemented the version of SS in (\ref{SOS eq1}).
BF selects a bin at the highest level that can fit the item:
\begin{equation}
h^* = \underset{h:N_h(t-1)>0 \ \text{and} \ h+s \leq B}{\arg\max} h.
\end{equation}
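Both baselines reduce to a one-line selection rule over bin levels; the following Python sketch (our own naming, with \texttt{levels[h]} holding $N_h(t-1)$) makes the comparison concrete. We include $h=0$ (opening a new bin) as a candidate in the SS rule so that it is always defined:

```python
def ss_action(levels, s, B):
    """Sum of Squares: pick the level h (0 = open a new bin) minimizing
    N[h+s] - N[h], with the convention N[0] = N[B] = 0."""
    def n(h):
        return 0 if h in (0, B) else levels[h]
    candidates = [0] + [h for h in range(1, B) if levels[h] > 0 and h + s <= B]
    return min(candidates, key=lambda h: n(h + s) - n(h))

def bf_action(levels, s, B):
    """Best Fit: pick the fullest open bin that still fits the item,
    or open a new bin (action 0) if none fits."""
    feasible = [h for h in range(1, B) if levels[h] > 0 and h + s <= B]
    return max(feasible) if feasible else 0
```

For example, with $B=9$, an item of size 3, two open bins at level 2 and one at level 4, BF picks level 4 (the fullest feasible bin), while SS picks level 2, since moving a bin from the crowded level 2 to the empty level 5 most reduces the sum-of-squares potential.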
\subsection{Reinforcement Learning Algorithm}
We use the Proximal Policy Optimization (PPO) algorithm~\cite{schulman2017proximal}.
We use a two-layer neural network with 256 hidden nodes each for both the actor and the critic. The input to both actor and critic network is the state, the output of the actor network is a vector giving the probabilities of taking any action in the action space, and the output of the critic network is predicted cumulative discounted reward from that state. During training, the agent explores the state space by sampling from the probability distribution of the actions generated by the policy network. We mask actions by reducing the probability of invalid actions to 0. We use a single machine with 4 GPUs and 32 CPUs for our experiments.
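The masking step described above can be sketched as follows; the renormalization is implicit when sampling from the masked distribution (names are illustrative, not the exact implementation inside the agent):

```python
def masked_probs(probs, mask):
    """Zero out infeasible actions and renormalize, so that sampling from
    the result never executes an infeasible action."""
    masked = [p * m for p, m in zip(probs, mask)]
    total = sum(masked)
    return [p / total for p in masked]
```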
We list all the hyperparameters used in Appendix \ref{appendix:bin_size_hp}.
\subsection{Results}
For each sample item size distribution (BW, PP, LW), we train the RL algorithm (PPO) and compare it to the baseline algorithms (SS and BF). We consider two variations: bin size 9, with 100 items and the distributions listed in Section \ref{sec: bin_packing_prob_form}; and bin size 100, with 1000 items and the following item size distribution:
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item item sizes: $[1, 2, 3, 4, 5, 6, 7, 8, 9]$
\item item probabilities for BW: \\ $[0.14, 0.10, 0.06, 0.13, 0.11, 0.13, 0.03, 0.11, 0.19]$
\item item probabilities for PP: \\ $[0.06, 0.11, 0.11, 0.22, 0, 0.11, 0.06, 0, 0.33]$
\item item probabilities for LW: $[0, 0, 0, 1/3, 0, 0, 0, 0, 2/3]$.
\end{enumerate}
Figure \ref{fig:RLvsBF_binpacking} plots the reward earned by the RL policy in training (blue) vs the Best Fit baseline (red) for bin size 100 and different item size distributions (BW, PP, and LW) as a function of training time (measured in minutes). The solid lines represent the mean reward of each policy, and the shaded bands represent the min/max rewards. By the end of training, RL either matches or outperforms the baseline policy for all three item size distributions. In particular, the reward gap between RL and baseline is the largest for LW distribution (which is expected, as both BF and SS are known to be sub-optimal for LW distribution).
In Table \ref{table:bin_packing_RL_baseline_comp}, we numerically compare the trained RL policy with the baselines for bin size 100.
Consistent with Figure \ref{fig:RLvsBF_binpacking}, the final RL policy outperforms or matches the baseline for each distribution.
We test the generalization of the RL policy by evaluating the trained policy on a different item distribution than the one it was trained on. For the PP and BW distributions, the trained policies translate well: both perform as well as the baseline solutions on the LW distribution. The policy trained on the LW distribution generalizes reasonably well but does not match the baseline solutions on the BW and PP distributions. We did observe overfitting when we picked model iterations from much later in training; we leave the study of overfitting and generalization across distributions as future work. A note on scaling: the training time for bin size 100 is about 3x, 4x and 10x that of bin size 9 for PP, BW and LW, respectively. The bin size 9 results can be found in the supplementary material.
\begin{table}[h!]
\resizebox{\columnwidth}{!}
{%
\begin{tabular}{|c|cc|cc|cc|}
\hline
\multicolumn{1}{|l|}{\multirow{2}{*}{Algorithm}} & \multicolumn{2}{c|}{Perfect Pack} & \multicolumn{2}{c|}{Bounded Waste} & \multicolumn{2}{c|}{Linear Waste} \\
\multicolumn{1}{|l|}{} & $\mu$ & $\sigma$ & $\mu$ & $\sigma$ & $\mu$ & $\sigma$ \\ \hline
RL with PP & -49.0 & 29.5 & -48.0 & 29.5 & -1358 & 44.2 \\ \hline
RL with BW & -47.6 & 29.3 & -53.9 & 26.4 & -1368 & 48.0 \\ \hline
RL with LW & -258.6 & 69.3 & -143.9 & 84.9 & -880.2 & 43 \\ \hline
SS & -56.54 & 28.9 & -56.61 & 30.2 & -2091 & 92 \\ \hline
Best Fit & -52.01 & 29.5 & -51.4 & 28.9 & -1314 & 53 \\ \hline
\end{tabular}
}
\caption{RL and baseline solution comparison for bin packing. Mean and standard deviations are across 100 episodes.}
\label{table:bin_packing_RL_baseline_comp}
\end{table}
Finally, we inspect the relative structure of the policies to ensure that RL is learning a sensible solution. In particular, we plot the state variable values as a function of the number of steps in an episode. Intuitively, the integral of these plots represents the waste, which we want to minimize, so an optimal policy should show a (relatively) flat surface. We use a bin size of 9 for this analysis for ease of manual inspection, and study the linear waste distribution, which highlights the difference between the Sum of Squares baseline and RL most distinctly. From Figure \ref{fig:bin_packing_LW_dist}, we see that the baseline policy leaves more open bins at a lower fullness, whereas RL only leaves open bins at level 8 (which cannot be closed once they reach that level). This indicates that the learned RL policy is reasonable. For the other distributions, the graphs for the baseline and RL policies look similar to each other.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{images/linear_waste_sol.png}
\caption{RL vs baseline solution for LW distribution}
\label{fig:bin_packing_LW_dist}
\end{figure}
\section{Multi-Period Newsvendor with Lead Times}
\label{sec:newsvendor}
The Newsvendor problem (see e.g. \citet{zipkin2000foundations}) is a seminal problem in inventory management wherein we must decide on an ordering decision (how much of an item to purchase from a supplier) to cover a single period of uncertain demand. The objective is to trade-off the various costs incurred and revenues achieved during the period, usually consisting of sales revenue, purchasing and holding costs, loss of goodwill in the case of missed sales, and the terminal salvage value of unsold items.
In practice, decisions are rarely isolated to a single period; they are taken repeatedly and periodically, and thus have a downstream impact. \emph{This makes the problem non-trivial, and it has no known optimal solution}, in contrast to the single-period Newsvendor, which has a known optimal solution when the demand distribution is known. Additionally, purchased units do not, in general, arrive quasi-instantaneously, but rather after a few periods of transit from the vendor to their final destination, known as the lead time. The presence of lead times further complicates the problem. The multi-period newsvendor problem with lead times and lost sales is notoriously difficult \cite{zipkin2008old}. It requires keeping track of orders placed in different periods, leading to what is known as the curse of dimensionality, rendering any exact solution impractical even for small lead times of 2 and 3 periods, and outright infeasible at higher dimensions. As a result, the problem forms a good test-bed for RL algorithms, given that the observation of rewards is delayed by the lead time and that it can be formulated as a Markov Decision Process. A number of heuristics have been developed for the lost sales problem, often based on order-up-to policies for the equivalent model with backlogged demand. Comparisons of the performance of these two policies have been studied \cite{janakiraman2007comparison}, and it has been shown that order-up-to policies are asymptotically optimal \cite{huh2009asymptotic}, thus making them good benchmark policies.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{images/newsvendor_ppo_reward.png}
\caption{RL training reward in the multi-period newsvendor problem with Poisson demand and vendor lead period of 5.}
\label{fig:newsvendor_TRPO}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/newsvendor_ckpt_50.png}
\caption{PPO Checkpoint 50}
\label{fig:nv_ckpt_50}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/newsvendor_ckpt_1000.png}
\caption{PPO Checkpoint 1000}
\label{fig:nv_ckpt_1000}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/newsvendor_ckpt_1500.png}
\caption{PPO Checkpoint 1500}
\label{fig:nv_ckpt_1500}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/newsvendor_ckpt_10050.png}
\caption{PPO Checkpoint 10050}
\label{fig:nv_ckpt_10050}
\end{subfigure}
\begin{subfigure}[b]{0.035\textwidth}
\centering
\includegraphics[width=\textwidth]{images/newsvendor_colormap.png}
\label{fig:nv_ckpt_10050}
\end{subfigure}
\caption{The Newsvendor policy graphs show the RL-learned buy quantity as a function of how much inventory we have already ordered. The axes show the inventory to arrive in three and four periods, respectively, and the contour lines show the number of items bought by the RL policy. The agent's policy improves over the training period: in the final policy, if we have already ordered a lot of inventory, we order less at this time step.}
\label{fig:newsvendor_policy}
\end{figure*}
\subsection{Problem Formulation}
We consider the stationary, single-product, multi-period dynamic inventory management problem with vendor lead time (VLT) and stochastic demand. Here, the VLT $l$ refers to the number of time steps between the placement and receipt of an order. The demand $D$ is assumed to be stationary and Poisson distributed with mean $\mu$. Items are purchased at a cost $c$ and sold at a price $p$, and incur a penalty for lost sales $k$ for each unit of unmet demand while any unit left over at the end of a period incurs a holding cost $h$. Items do not perish. A discount factor $\gamma$ is used. No terminal value is awarded for the inventory state at end of episode.
The problem is formulated as a Markov Decision Process:
\begin{description}[style=unboxed,leftmargin=0cm]
\item[State:] The state $S$ of the problem is given by
\begin{align*}
S=(p,c,h,k,\mu,x_0,\ldots,x_{l-1})
\end{align*}
where $x_0$ is the on-hand inventory, $x_1$ the units to be received one period hence, and so on.
\item[Action:] In each period the state of the system is observed and an action $A=a$ is taken, consisting of the size of the order placed, to arrive $l$ time periods later.
\item[Reward:] We first incur the purchasing cost corresponding to the procured units given the action $a$.
A realization $d$ of the demand $D$ (Poisson distributed with mean $\mu$) is then observed, and demand is satisfied as much as is possible given on-hand levels. Missed sales incur a loss of goodwill $k$ per unit, while leftover units incur a holding cost $h$:
\begin{align*}
R &= p\min(x_0,d) - c a - h (x_0 - d)^+ - k (d - x_0)^+,
\end{align*}
where $(x)^+=\max(x,0)$.
\item[Transition:] The state of the system $S$ is then updated to $S_+$ by moving all pipeline units downstream and incorporating the newly purchased units:
\begin{align*}
S_+ &= (p,c,h,k,\mu,(x_0 - d)^+ +x_1,x_2,\ldots,x_{l-1},a).
\end{align*}
We do not impose action masking because every action in the pre-specified, non-negative continuous action space is feasible. We convert the continuous buy quantity to an integer by rounding in post-processing.
\end{description}
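One transition of this MDP can be sketched in Python (the function name is ours; the sketch assumes a lead time $l \ge 2$):

```python
def newsvendor_step(state, a, d):
    """Apply action a (order size) and realized demand d to
    state = (p, c, h, k, mu, x0, ..., x_{l-1}); returns (reward, next_state)."""
    p, c, h, k, mu = state[:5]
    x = list(state[5:])
    x0 = x[0]
    # Revenue on satisfied demand, minus purchasing, holding and goodwill costs
    reward = (p * min(x0, d) - c * a
              - h * max(x0 - d, 0) - k * max(d - x0, 0))
    # Shift the pipeline downstream and append the newly purchased units
    next_state = (p, c, h, k, mu, max(x0 - d, 0) + x[1], *x[2:], a)
    return reward, next_state
```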
\subsection{Related Work}
Data-centric approaches \cite{rudin2014big} and reinforcement learning approaches \cite{oroojlooyjadid2016applying} have recently been suggested for the newsvendor problem. These have so far remained focused on the single-period problem, often trying to learn some of the inputs, such as demand. A few other papers have considered Reinforcement Learning in the context of inventory management, such as \cite{gijsbrechts2018can}, where a dual sourcing problem is tackled using RL.
\subsection{Baseline Algorithm}
As noted at the beginning of this section, it is impractical or even infeasible to solve the multi-period newsvendor problem exactly. However, heuristics can provide good approximations to the optimal solution. In particular, one way to tackle the problem is to approximate it by its backlogging counterpart, where orders are not lost if unsatisfied. For that variant, a closed-form optimal policy exists in the form of an order-up-to policy characterized by the following critical ratio:
\begin{align*}
CR = \frac{p-\gamma c + k}{p-\gamma c + k + h}.
\end{align*}
As a result, letting $z^* = F_l^{-1}(CR)$, where $F_l$ is the cumulative distribution function of the $l$ period demand, the policy is given by:
\begin{align*}
a&= \left(z^* - \sum_{i=0}^{l-1} x_i\right)^+.
\end{align*}
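A sketch of this baseline in pure Python follows (function names are ours). Since per-period demand is Poisson with mean $\mu$, the $l$-period demand is Poisson with mean $l\mu$; the quantile loop below is exact but is best replaced by a statistics library for large $l\mu$:

```python
import math

def poisson_quantile(ratio, lam):
    """Smallest z with P(Poisson(lam) <= z) >= ratio (requires ratio < 1)."""
    z, pmf = 0, math.exp(-lam)
    cdf = pmf
    while cdf < ratio:
        z += 1
        pmf *= lam / z      # recurrence: pmf(z) = pmf(z-1) * lam / z
        cdf += pmf
    return z

def base_stock_order(p, c, h, k, gamma, mu, pipeline, l):
    """Order-up-to policy above: a = (z* - sum_i x_i)^+ with z* = F_l^{-1}(CR),
    where pipeline = [x_0, ..., x_{l-1}]."""
    cr = (p - gamma * c + k) / (p - gamma * c + k + h)
    z_star = poisson_quantile(cr, l * mu)
    return max(z_star - sum(pipeline), 0)
```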
\subsection{Reinforcement Learning Algorithm}
We use Proximal Policy Optimization (PPO) \cite{schulman2017proximal} as implemented in the RLlib package \cite{rllab}, where the policy is represented by a neural network. We use a single machine with 4 GPUs and 32 CPUs for our experiments. We use a fully-connected neural network with hidden layers of size (64,32) and the hyperparameters presented in Appendix \ref{appendix:newsvendor_hp}.
\subsection{Results}
We present the results obtained using a VLT of 5 and time horizon of 40. The economic parameters were chosen so that $p,c\in[0,100]$, $h\in[0,5]$ and $k\in[0,10]$, while the demand mean $\mu$ was such that $\mu\in[0,200]$.
We sampled problem parameters as follows: $p\sim U[0,100]$, $c\sim U[0,p]$, $h\sim U[0,\min(c, 5)]$, $k\sim U[0,10]$ and $\mu\sim U[0,200]$, where $U[a,b]$ denotes a uniform random variable on $[a,b]$. The initial state was simply set to $\mathbf{0}$.
Figure~\ref{fig:newsvendor_TRPO} compares the results obtained by the RL algorithm to the baseline. The RL solution eventually beats the benchmark.
While solving this problem numerically is intractable, the optimal inventory policy structures are well known. It is thus of interest to check whether their properties are being learned by the RL algorithm. Given the dimension of the problem, we cannot observe the entire policy, but can investigate slices thereof. We thus fix price, cost, holding cost, penalty for lost sale and mean demand to 50, 25, 0.5, 5 and 100, respectively, and plot the optimal policy in the space $(0,0,0,x_3,x_4)$ in Figure~\ref{fig:newsvendor_policy}, where $x_3$ and $x_4$ stand for the inventory to arrive in three and four periods respectively. The figure shows contour curves of the buying quantity as a function of the inventory state. Intuitively, a good policy will buy less if we already have inventory on-hand (or in the pipeline). Visually, this should look like a smooth, decreasing frontier. We observe that the algorithm is learning this desired policy structure over the training period and we can start to observe monotonicity of the policy along most directions.
\section{Vehicle Routing Problem}
\label{sec_vrp_intro}
One of the most widely studied problems in combinatorial optimization is the traveling salesman problem (TSP), which involves finding the shortest route that visits each node in a graph exactly once and returns to the starting node. TSP is an NP-hard problem and has a variety of practical applications from logistics to DNA sequencing. The vehicle routing problem (VRP) is a generalization of TSP where one or more vehicles are expected to visit the nodes in a graph, for example to satisfy customer demand. VRP is also a well-studied topic and has several applications, especially in supply chain and logistics.
An important extension of VRP is one where some of the information about the graph is revealed over time, such as the demand at each node and travel times. This class of VRP is called dynamic VRP (DVRP, also known as real-time or online VRP). Stochastic VRP (SVRP) is one where one or more problem parameters are stochastic with known probability distributions (as opposed to arbitrary or adversarial distributions). In many real-life applications, the relevant VRP is both stochastic and dynamic (SDVRP), which is also the focus of this work. We formulate a variant of SDVRP and compare solution approaches from the Operations Research (OR) and Reinforcement Learning (RL) literature.
\subsection{Problem Formulation}
\label{sec_vrp_pf}
We consider a VRP variant faced by an on-demand delivery driver. Orders arrive on the driver's phone app dynamically throughout the problem horizon. Each order has a delivery charge (reward) known to the driver at the time of order creation, and is assigned to a pickup location (e.g. a restaurant) in the city. ``City'' here refers to the Euclidean space in a grid map where the VRP environment is created. The city consists of mutually exclusive zones that generate orders at different rates. At each time step, an order is generated with a constant probability and assigned to a zone (i.e. $p_1=0.5,p_2=0.3,p_3=0.1,p_4=0.1$ for zones 1,\ldots,4). Order rewards come from zone-specific truncated normal distributions with different ranges (i.e. with minimum and maximum dollar values of [8,12], [5,8], [2,5] and [1,3] for each zone, respectively). Each order has a delivery time window of 60 minutes from its creation. The driver has to accept an order and pick up the package from the given location prior to delivery. Orders that are not accepted disappear probabilistically (i.e. with a time-out probability of 0.15 per time step) and are assumed to be taken by some other competitor driver in the city. The vehicle has a capacity limit of 4 orders on board, but the driver can accept unlimited orders and plan the route accordingly. The driver incurs a cost per time step and per unit distance traveled (0.1 for both), representing the money value of time and travel costs. The driver's goal is to maximize the total net reward over an episode of 1000 time steps. This version of VRP is known as the stochastic and dynamic capacitated vehicle routing problem with pickup and delivery, time windows and service guarantee. We choose this particular variant, which is less studied in the literature, because it more closely resembles real-world instances of the problem and gives us higher confidence that RL can generalize beyond our toy setup.
\begin{description}[style=unboxed,leftmargin=0cm]
\item [State:]
We include pickup location $p_t$, driver info, and order info. Driver info contains the driver's position $h_t$ and the capacity left $c_t$. Order info contains the orders' location $l_t$, status $w_t$ (open, accepted, picked up or delivered/inactive), the time elapsed since each order's generation $e_t$ and the corresponding dollar value $v_t$. Thus, the state is $S_t = (p_t, h_t, c_t, l_t, w_t, e_t, v_t)$.
\item [Action:]
The agent chooses an action $A_t$ from five options: accept an open order $i \in P$, pick up an accepted order $i \in A$, deliver an order in transit $i \in T$, head to a pickup location $j \in R$, or wait and stay unmoved.
\item [Reward:] The reward $R_t$ is the total value of all delivered orders $f_t$ minus the cost $q_t$. $f_t$ is divided into $3$ equal parts for reward shaping: when the order gets accepted, picked up, and delivered respectively. Thus we have:
\begin{equation*}
R_t = \frac{1}{3}\Big(\mathbbm{1}_{accepted} + \mathbbm{1}_{picked-up} + \mathbbm{1}_{delivered} \Big) f_t - q_t,
\end{equation*}
where $q_t = q_{time} + q_{move} + q_{failure}$; $q_{time}$ is the time cost per time step, $q_{move}$ is the moving cost per unit distance, and $q_{failure}$ is a large penalty ($50$) incurred if the agent accepts an order but fails to deliver it within the promised time.
\end{description}
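For a single order, the shaped reward above can be sketched as follows (the function signature is illustrative, and the per-step time and movement costs are folded into constants):

```python
def shaped_reward(order_value, accepted, picked_up, delivered,
                  missed=False, time_cost=0.1, move_cost=0.1,
                  miss_penalty=50.0):
    """One step of the shaped reward for a single order: a third of the
    order value at each of accept, pickup and delivery, minus costs."""
    f = (accepted + picked_up + delivered) / 3.0 * order_value
    q = time_cost + move_cost + (miss_penalty if missed else 0.0)
    return f - q
```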
The vehicle's capacity remains unchanged if an order is accepted but not picked up. In effect, this grants the agent the flexibility to accept more orders than the available capacity; the excess can be picked up later when space allows. The action of heading to a specific pickup location enables the agent to learn to stay near popular pickup locations.
We impose action masking during the policy training. The agent cannot perform the following invalid actions: \textit{(i)} pick up an order when its remaining capacity is $0$; \textit{(ii)} pick up an order that is not yet accepted; \textit{(iii)} deliver an order that is not in transit.
\subsection{Related Work} \label{sec_vrp_related_work}
There is a substantial literature on VRP~\cite{EKSIOGLU20091472}. The closest VRP variant to the problem considered in this paper is the Pickup and Delivery Problem with Time Windows (PDPTW) \cite{Cordeau2008}, which has some additional complexities over vanilla VRP. Due to these complexities, there are fewer exact solution approaches \cite{LuDessouky2004,MAHMOUDI201619}, and the majority of the literature focuses on heuristics. When the problem is also stochastic and dynamic, exact solution methods become intractable except for very specific problem settings. In such cases, anticipatory algorithms that simulate sample future scenarios and merge the solutions to those samples are a common choice \cite{Ulrike2016}.
Reinforcement Learning (RL) methods have been successfully used for solving the Traveling Salesman Problem (TSP). \citet{bello2016neural} employ a pointer network to optimize the policy, and train an actor-critic algorithm with the negative tour length as the reward signal. \citet{khalil2017learning} develop a single model based on graph embeddings, using the DQN algorithm to train a greedy policy and the graph embedding network simultaneously. For VRP, \citet{kool2018attention} utilize the transformer architecture to develop a model fully based on attention layers. Their proposed model is trained by policy gradients with a greedy baseline, and evaluated on both the standard Capacitated VRP (CVRP) and the Split Delivery VRP (SDVRP). \citet{nazari2018reinforcement} further improve the algorithm using embedded inputs, and allow the customers and their demands to be stochastic.
\subsection{Baseline Algorithm}
We modify the classical three-index Mixed Integer Programming (MIP) formulation \cite{RopkeCordeau2009,FURTADO2017334}. This deterministic MIP is solved for the orders currently available in the environment, and re-solved whenever a new order arrives, an existing order expires, or all of the planned actions have been executed. When we solve the MIP, orders that have already been accepted or are in transit are modeled as starting conditions. The details of our MIP model are in Appendix \ref{appendix:vrp_mip}. We leave anticipatory models to future work.
\ifx
\subsubsection*{Sets}
\begin{vardefs*}
V & Current vehicle location, $V=\{0\}$ \\
P & Pickup nodes (copies of the restaurant nodes, associated with the orders that are not in transit) \\
D & Delivery nodes representing the orders that are not in transit, $D = \{j | j= i + n, i \in P, n=|P| \}$ \\
A & Nodes representing the orders that are accepted by the driver; $A \subset D$ \\
T & Delivery nodes representing the orders that are in transit \\
R & Nodes representing the restaurants, used for the final return \\
N & Set of all nodes in the graph, $N = V \cup P \cup D \cup T \cup R $\\
E & Set of all edges, $E=\{(i, j), \forall i, j \in N\}$
\end{vardefs*}
\subsubsection*{Decision variables}
\begin{vardefs*}
x_{ij} & Binary variable, 1 if the vehicle uses the arc from node $i$ to $j$, 0 otherwise; $i, j \in N$ \\
y_{i} & Binary variable, 1 if the order $i$ is accepted, 0 otherwise; $i \in P$\\
Q_{i} & Auxiliary variable to track the capacity usage as of node $i$; $i \in N$ \\
B_{i} & Auxiliary variable to track the time as of node $i$; $i \in N$
\end{vardefs*}
\subsubsection*{Parameters}
\begin{vardefs*}
n & Number of orders available to pick up, $n = |P|$ \\
c_{ij} & Symmetric Manhattan distance (in miles) matrix between node $i$ and $j$, $(i, j) \in E$ \\
q_i & Supply (demand) at node $i$, $q_0 = |T|; q_i = 1, \forall i \in P; q_i = -1, \forall i \in D \cup T; q_i = 0 \in R$ \\
l_i & Remaining time to deliver order $i$, $i \in D \cup T$ \\
m & Travel cost per mile \\
r_i & Revenue for order associated with pick up node $i$, $i \in P$ \\
U & Vehicle capacity \\
M & A very big number \\
t & Time to travel one mile \\
d & A constant positive service time spent on accept, pickup, delivery
\end{vardefs*}
\subsubsection*{Model}
\begin{equation}
\begin{array}{rrclcl}
& \max_{x, y, Q, B} & \multicolumn{3}{l}{ \sum_{i \in P} r_i y_i - m \sum_{(i,j) \in E} c_{ij} x_{ij} } \\
& \textrm{s.t.} \qquad \sum_{j \in N} x_{ij} &=& y_i & \forall i \in P \\
& \sum_{j \in N} x_{ij} - \sum_{j \in N} x_{i+n,j} & = & 0 & \forall i \in P \\
& y_i & =& 1 & \forall i \in A \\
& \sum_{j \in N} x_{ij} &=& 1 & \forall i \in V \cup T \\
& \sum_{i \in N \setminus R } \sum_{j \in R } x_{ij} &=& 1 &\\
& \sum_{j \in N \setminus R } x_{ji} - \sum_{j \in N} x_{ij} & = & 0 & \forall i \in P \cup D \cup T \\
& Q_i + q_j - M (1-x_{ij} ) &\leq& Q_j & \forall i,j \in N \\
& \max{(0, q_i)} &\leq& Q_i & \forall i \in N \\
& \min{(U, U+q_i)} &\geq& Q_i & \forall i \in N \\
& B_i + d + c_{ij} t - M (1-x_{ij} ) &\leq& B_j & \forall i,j \in N \\
& B_i + c_{i, i+n} t - M (1- y_i ) &\leq& B_{i+n} & \forall i \in P \\
& d \sum_{i \in P \setminus A} y_i & = & B_0 \\
& B_i &\leq& l_i & \forall i \in D \cup T \\
& x_{ij}, y_i &\in& \{0, 1\} & \forall i,j \in N \\
\end{array}
\end{equation}
\fi
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{vrp_images/vrp_mapsize_black.png}
\caption{RL vs baseline solution for VRP with 3 pick-up locations, 5 orders and map sizes $5 \times 5$ and $8 \times 8$ }
\label{fig:vrp_mapsize}
\end{subfigure}
\hspace{2em}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=1\linewidth]{vrp_images/vrp_ordernumber_black.png}
\caption{RL vs baseline solution for VRP with 2 pick-up locations, map $5 \times 5$, and number of orders 5 and 10. }
\label{fig:vrp_ordernumber}
\end{subfigure}
\caption{RL vs baseline during policy training process.}
\label{fig:vrp}
\end{figure*}
\subsection{Reinforcement Learning Algorithm}
\label{subsec:vrp_rl}
To train the policy, we apply the APE-X DQN algorithm~\cite{horgan2018distributed} due to its ability to scale by generating more experience replays and picking from them in a distributed prioritized fashion\footnote{We also applied PPO with default hyperparameters provided in RLLib. The reward increased much slower than that of APEX-DQN and was not able to beat the baseline after 1 day of training.}. We use a two-layer neural network with 512 hidden units each. We list all hyperparameters in Appendix \ref{appendix:vrp_hp}. We use a machine with 1 GPU and 8 CPUs for our experiments.
\subsection{Results}
For multiple problem scales determined by map size ($\{5 \times 5, 8 \times 8\}$), maximum number of orders ($order \in \{5, 10\}$) and number of pick-up locations in the map ($n \in \{2, 3\}$), we conduct experiments to compare the behavior of RL and the MIP baseline solutions. We examine the trained RL policy's ability to generalize to different order distributions. The hyperparameters used for algorithm training are taken from RLLib robotics examples without fine-tuning. Overall, the RL approach outperforms the baseline across different instance sizes, and generalizes well for unseen order patterns.
Figures \ref{fig:vrp_mapsize} and \ref{fig:vrp_ordernumber} compare the episodic rewards of the RL policy and the baseline algorithm during training. The shaded band around the mean line shows the minimum and maximum rewards. For readability, the graphs are clipped to skip the initial $3.5$ hours of training, as the rewards are highly negative and skew the Y-axis scale. With a larger map size or a higher order number, the training time required for the agent to achieve rewards equivalent to the baseline is longer. This is expected: as both the observation and action spaces grow, the agent requires more exploration to converge to a reasonable policy. Even after three days of training, the rewards for larger instances keep growing gradually. The agent slowly learns to fully utilize the vehicle capacity and to not accept orders that are likely to incur a penalty.
As the agent is trained longer, there is potential for the policy to overfit. In order to test generalizability, we train another policy with a shifted hot order-zone distribution $(0.1, 0.5, 0.3, 0.1)$, and evaluate against the baseline results both using the original order-zone distribution $(0.5, 0.3, 0.1, 0.1)$. Table \ref{table:vrp_baseline_comp} summarizes the evaluation results. It is observed that this policy is able to outperform the baseline consistently during evaluation phase.
We also present the rewards with and without the order miss penalty $q_{failure}$ for the same trained policy, to further understand the agent's behavior around order delivery misses. The reward values are close for problems with fewer pick-up locations and fewer orders. As the number of pick-up locations grows, the gap between the rewards increases. One explanation is that the agent struggles with the increased complexity of order deliveries from different pick-up locations: its action often changes in the middle of a delivery, so the likelihood of missing the order delivery increases. The same behavior is seen when the number of orders is higher. Even though the RL agent's reward is better than the baseline's, there is still scope for improvement by reducing the number of order delivery misses.
\begin{table}[h!]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|cc|c|}
\hline
\multicolumn{1}{|l|}{\multirow{2}{*}{Problem Instance}} & \multicolumn{2}{|c|}{RL Evaluation Reward} & \multirow{2}{*}{MIP Reward} \\
\multicolumn{1}{|l|}{} & \multicolumn{1}{l}{Without $q_{failure}$ } & \multicolumn{1}{l|}{With $q_{failure}$ } & \\ \hline
\begin{tabular}[c]{@{}c}5 by 5 map, 5 orders\\ 2 pick-up locations \end{tabular} & \begin{tabular}[c]{@{}c@{}}854.45 \\ (136.03)\end{tabular} & \begin{tabular}[c]{@{}c@{}}838.30 \\ (154.12)\end{tabular} & 595.91 \\ \hline
\begin{tabular}[c]{@{}c}5 by 5 map, 5 orders\\ 3 pick-up locations\end{tabular} & \begin{tabular}[c]{@{}c@{}}754.27 \\ (116.48)\end{tabular} & \begin{tabular}[c]{@{}c@{}}730.40 \\ (132.75)\end{tabular} & 642.62 \\ \hline
\begin{tabular}[c]{@{}c}5 by 5 map, 10 orders\\ 2 pick-up locations \end{tabular} & \begin{tabular}[c]{@{}c@{}}774.63 \\ (143.34)\end{tabular} & \begin{tabular}[c]{@{}c@{}}692.34\\ (200.65)\end{tabular} & 640.01 \\ \hline
\begin{tabular}[c]{@{}c}8 by 8 map, 5 orders\\ 2 pick-up locations\end{tabular} & \begin{tabular}[c]{@{}c@{}}548.53 \\ (107.40)\end{tabular} & \begin{tabular}[c]{@{}c@{}}536.55 \\ (112.33)\end{tabular} & 410.58 \\ \hline
\begin{tabular}[c]{@{}c}8 by 8 map, 5 orders\\ 3 pick-up locations \end{tabular} & \begin{tabular}[c]{@{}c@{}}429.20 \\ (102.37)\end{tabular} & \begin{tabular}[c]{@{}c@{}}373.7 \\ (129.98)\end{tabular} & 246.25 \\ \hline
\end{tabular}
}
\caption{RL and baseline solution comparison for VRP. Values in brackets are standard deviations; mean rewards are computed over 50 episodes.}
\label{table:vrp_baseline_comp}
\end{table}
\section{Conclusion and Future Work}
In this paper, we have established Deep Reinforcement Learning (DRL) benchmarks for three canonical dynamic resource allocation problems: Bin Packing, Newsvendor, and Vehicle Routing. We formulated a Markov Decision Process for each problem and compared established algorithms with vanilla RL techniques. In each case, the RL policy either outperforms or is competitive with the baseline. While we do not overcome the NP-hardness of the problems (wall-clock training time still scales with problem size), we find that DRL is a good tool for them.
These results illustrate the potential value of RL for a wide range of real-world industrial online stochastic problems, from order assignment, to retail buying, to real-time routing. Our experiments indicate the following issues as important for making RL solutions more practical in the future: building effective simulators, learning from historical data, initialization of the RL model, overfitting to a particular distribution and enforcement of constraints (e.g. via action masking).
We used out-of-the-box RL algorithms, with almost no problem-specific tweaking, and simple 2-layer neural networks. Further research can add value by testing various RL algorithms, neural network structures, etc., and assessing their relative value in each problem, especially as problem complexity scales up (i.e., solving real-world instances of these problems will likely require innovation on the RL side). In this paper we only looked at canonical, theoretical models. Further research should endeavor to apply these RL techniques to real-world industrial problems.
\bibliographystyle{aaai}
\section{Background}
\subsection{Objectives}
In order to make full use of both table-text pair data and raw text data, the above marginal log-likelihood should be optimized jointly:
\begin{equation}
\mathcal{L}(\theta) = \mathbb{E}_{(x,y)\sim \mathcal{D}_p} [\log p_\theta(y|x)] + \mathbb{E}_{y \sim \mathcal{D}_r} [\log p_\theta(y)].
\label{eq:total_loss}
\end{equation}
Directly optimizing Equation \ref{eq:total_loss} is intractable.
Following the idea of variational inference~\citep{kingma2013auto}, a variational posterior $q_\phi(\cdot)$ is constructed as an inference model (dashed lines in Figure \ref{fig:graphic_model}) to approximate the true posterior. Instead of optimizing the marginal log-likelihood in Equation \ref{eq:total_loss}, we maximize the evidence lower bound~($\ELBO$). In Sections \ref{sec:object_parallel} and \ref{sec:object_non-parallel}, the $\ELBO$ objectives for table-text pair data and raw text data are discussed, respectively.
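Concretely, ELBO objectives of this form are typically optimized with a single-sample reparameterized Monte Carlo estimate. The sketch below is an illustrative toy, not the paper's implementation: the decoder likelihood `log_lik` is a made-up stand-in, and only the two standard ingredients are shown, the closed-form KL to a standard normal prior and the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo_estimate(mu, log_var, log_likelihood):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # giving a single-sample estimate of
    #   ELBO = E_q[log p(y|z)] - KL(q(z|y) || p(z)).
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    return log_likelihood(z) - kl_to_standard_normal(mu, log_var)

# Hypothetical decoder likelihood: log p(y|z) of a unit-variance Gaussian
# observation (up to an additive constant).
y = np.array([0.5, -1.0])
log_lik = lambda z: -0.5 * np.sum((y - z) ** 2)
print(elbo_estimate(np.zeros(2), np.zeros(2), log_lik))
```

With `mu = 0` and `log_var = 0` the posterior equals the prior, so the KL term vanishes and the estimate reduces to the sampled log-likelihood alone.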
\subsection{Learning from Table-text Pair Data}
\label{sec:object_parallel}
In this section, we present the training loss for table-text pair data.
According to the aforementioned assumption, the content variable ${\bm{c}}$ is observable and follows a delta distribution centered at the hidden representation of the table $x$.
\noindent\textbf{ELBO objective}. \quad Assuming that the template variable ${\bm{z}}$ relies only on the template of the target sentence, we introduce $q_{\phi_z}(z|y)$ as an approximation of the true posterior $p(z|y,c,x)$.
The $\ELBO$ loss of Equation \ref{eq:mll_parallel} is written as
\begin{align*}
\mathcal{L}_{\text{ELBO}_p}(x,y) = - \mathbb{E}_{q_{\phi_z}(z|y)}\log p_\theta(y|z,c=f_{\text{enc}}(x),x) + D_{\mathrm{KL}} (q_{\phi_z}(z|y) \Vert p(z)), \quad (x,y) \in \mathcal{D}_p.
\end{align*}
The variational posterior $q_{\phi_z}(z|y)$ is assumed to be a multivariate Gaussian $\mathcal{N}(\mu_{\phi_z}(y), \Sigma_{\phi_z}(y))$, while the prior $p(z)$ is taken to be the standard normal distribution $\mathcal{N}(0, I)$.
\noindent\textbf{Preserving-Template Loss}. \quad
Without any supervision, the ELBO loss alone does not guarantee that a good template representation space is learned. Inspired by work on style transfer \citep{hzticml17_disent, disent_in_ST1, baoyu, disent_in_ST2_ACL19}, an auxiliary loss is introduced to embed the template information of sentences into the template variable ${\bm{z}}$.
Given the table, we can roughly align the tokens in a sentence with the records in the table. By replacing these tokens with a special token \textit{$<$ent$>$}, we remove the content information from the sentence and obtain a sketchy sentence template, denoted $\Tilde{y}$. We introduce the preserving-template loss $\mathcal{L}_{\text{pt}}$ to ensure that the latent variable $z$ contains only the template information.
\begin{equation*}
\mathcal{L}_{\text{pt}}(x,y,\Tilde{y}) = - \mathbb{E}_{q_{\phi_z}(z|y)}\log p_{\eta}(\Tilde{y}|z) = - \mathbb{E}_{q_{\phi_z}(z|y)} \sum_{t=1}^m \log p_{\eta}(\Tilde{y}_t|z, \Tilde{y}_{<t})
\end{equation*}
where $m$ is the length of $\Tilde{y}$, and $\eta$ denotes the parameters of the extra template generator.
$\mathcal{L}_{\text{pt}}$ is computed on parallel data. In practice, due to the limited amount of parallel data, the template generator $p_\eta$ may not be well learned. However, experimental results show that this loss suffices to provide guidance for learning a template space.
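As a concrete illustration of how $\Tilde{y}$ and $\mathcal{L}_{\text{pt}}$ are computed, the sketch below delexicalizes a sentence by replacing tokens that match table record values with \textit{$<$ent$>$} and sums per-token negative log-probabilities. The exact-match rule and the probability values are invented for illustration; the example sentence mirrors the \textsc{SpNlg} case study.

```python
import numpy as np

def delexicalize(tokens, table_values):
    # Replace any token that appears as a table record value with <ent>,
    # producing the sketchy template y~ (toy exact-match alignment).
    return ["<ent>" if t in table_values else t for t in tokens]

def nll(token_probs):
    # L_pt is the sum over positions of -log p_eta(y~_t | z, y~_<t);
    # token_probs holds those per-step probabilities directly.
    return -np.sum(np.log(token_probs))

y = ["nameVariable", "is", "a", "French", "pub", "in", "riverside"]
table_values = {"nameVariable", "French", "pub", "riverside"}
print(delexicalize(y, table_values))
# Hypothetical per-token probabilities from the template generator:
print(nll(np.array([0.9, 0.8, 0.95, 0.7])))
```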
\subsection{Learning from Raw Text Data}
\label{sec:object_non-parallel}
Our model can exploit a large amount of raw text data without tables, since the content information is captured by the content latent variable.
\noindent\textbf{ELBO objective}. \quad
According to the definition of the generative model in Equation~\ref{eq:mll_non-parallel}, the $\ELBO$ of raw text data satisfies
\begin{equation*}
\log p_\theta(y) \geq \mathbb{E}_{q_\phi(z,c|y)}\log \frac{p_\theta(y,z,c)}{q_\phi(z,c | y)}, \quad y \in \mathcal{D}_r.
\end{equation*}
With the mean-field approximation~\citep{xing2002generalized}, $q_\phi(z,c|y)$ can be factorized as $q_\phi(z,c|y) = q_{\phi_z}(z|y) q_{\phi_c}(c|y)$.
We then have:
\begin{align*}
\mathcal{L}_{\text{ELBO}_r}(y) = & - \mathbb{E}_{q_{\phi_z}(z|y) q_{\phi_c}(c|y)}\log p_{\theta}(y|z,c) \\
& + D_{\mathrm{KL}}(q_{\phi_z}(z|y)||p(z)) + D_{\mathrm{KL}}(q_{\phi_c}(c|y)||p(c)), \quad y \in \mathcal{D}_r.
\end{align*}
To effectively exploit the template information contained in raw text data, the parameters of the generation network $p_{\theta}(y|z,c)$ and the posterior network $q_{\phi_z}(z|y)$ are shared between pairwise and raw data. During decoding, for raw text data we use the content variable $c$ as the table embedding, since the table $x$ is missing.
The variational posterior for $c$ is modeled as another multivariate Gaussian, $q_{\phi_c}(c|y) = \mathcal{N}(\mu_{\phi_c}(y), \Sigma_{\phi_c}(y))$. Both $p(z)$ and $p(c)$ are taken to be the standard normal distribution $\mathcal{N}(0, I)$.
\noindent\textbf{Preserving-Content Loss.}\quad
To make the posterior $q_{\phi_c}(c|y)$ correctly infer the content information, the table-text pairs are used as supervision to train the recognition network of $q_{\phi_c}(c|y)$. To this end, we add a preserving-content loss
\begin{equation*}
\mathcal{L}_{\text{pc}}(x,y) = \mathbb{E}_{q_{\phi_c}(c|y)}\Vert c-h \Vert^2 + D_{\mathrm{KL}} (q_{\phi_c}(c|y) \Vert p(c)), \quad (x,y) \in \mathcal{D}_p,
\end{equation*}
where $h=f_{\text{enc}}(x)$ is the embedding of table obtained by the table encoder.
Minimizing $\mathcal{L}_{\text{pc}}$ also helps bridge the gap in $c$ between pairwise data (taking $c=h$) and raw training data (sampling from $q_{\phi_c}(c|y)$).
Moreover, the squared-distance term in $\mathcal{L}_{\text{pc}}$ is equivalent to (1) pulling the mean of $q_{\phi_c}(c|y)$ toward $h$ and (2) minimizing the trace of the covariance of $q_{\phi_c}(c|y)$; the KL term serves as a regularization. Detailed explanations and a proof are given in the supplementary materials.
\subsection{Mutual Information Loss}
\label{sec:mutual_info}
As shown in previous works \citep{chen2016infogan,zhao2017infovae,zhao2018unsupervised}, adding a mutual information term to the $\ELBO$ can effectively alleviate KL collapse and improve the quality of the variational posterior. The mutual information terms directly encourage the content and template latent variables to be associated with the target sentences.
Moreover, a theoretical argument\footnote{Proof can be found in Appendix \ref{app:elbo_pt_hinder}} and experimental results show that introducing the mutual information bias is necessary in the presence of the preserving-template loss $\mathcal{L}_{\text{pt}}({\bm{x}}^p,{\bm{y}}^p)$.
As a result, in our work, the following mutual information term is added to objective
\begin{equation*}
\mathcal{L}_{\text{MI}}(y) = - I(z,y) - I(c,y).
\end{equation*}
\subsection{Training Process}
The final loss of VTM\xspace is made up of the $\ELBO$ losses and extra losses:
\begin{align*}
\mathcal{L}_{tot}(x^p,y^p,y^r) & = \mathcal{L}_{\text{ELBO}_p}(x^p,y^p) + \mathcal{L}_{\text{ELBO}_r}(y^r) + \lambda_{\text{MI}}( \mathcal{L}_{\text{MI}}(y^p) + \mathcal{L}_{\text{MI}}(y^r))\\
& + \lambda_{\text{pt}} \mathcal{L}_{\text{pt}}(x^p,y^p) + \lambda_{\text{pc}} \mathcal{L}_{\text{pc}}(x^p,y^p), \qquad (x^p, y^p) \in \mathcal{D}_p, y^r \in \mathcal{D}_r.
\end{align*}
$\lambda_{\text{MI}}, \lambda_{\text{pt}}$ and $\lambda_{\text{pc}}$ are hyperparameters weighting the auxiliary losses.
The training procedure is shown in Algorithm \ref{alg:train}.
The parameters of the generation network $\theta$ and the posterior network $\phi_{z,c}$ can be trained jointly on both table-text pair data and raw text data. In this way, a large amount of raw text data can be used to enrich the generation diversity.
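As a minimal sketch of how the loss terms are combined, the snippet below forms the weighted sum defining $\mathcal{L}_{tot}$; the $\lambda$ default values are those reported for the \textsc{Wiki} dataset in the implementation details, while the individual loss values passed in are placeholders, not real measurements.

```python
def total_loss(elbo_p, elbo_r, mi_p, mi_r, l_pt, l_pc,
               lam_mi=0.5, lam_pt=1.0, lam_pc=0.5):
    # L_tot = L_ELBO_p + L_ELBO_r + lam_MI * (L_MI(y^p) + L_MI(y^r))
    #         + lam_pt * L_pt + lam_pc * L_pc
    return (elbo_p + elbo_r
            + lam_mi * (mi_p + mi_r)
            + lam_pt * l_pt
            + lam_pc * l_pc)

# Placeholder loss values, purely for illustration:
print(total_loss(1.0, 1.2, 0.3, 0.4, 0.5, 0.6))
```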
\begin{algorithm}[t]
\footnotesize
\caption{Training procedure}
\hspace*{0.02in} {\bf Input:}
Model parameters $\phi_z, \phi_c, \theta, \eta$\\
\hspace*{0.49in}
Table-text pair data $\mathcal{D}_p = \{({\bm{x}}, {\bm{y}})_i\}_{i=1}^N$; raw text data $\mathcal{D}_r = \{{\bm{y}}_j\}_{j=1}^M$; $M \gg N$\\
\hspace*{0.02in} {\bf Procedure \textsc{Train}($\mathcal{D}_p, \mathcal{D}_r$):}
\begin{algorithmic}[1]
\State \quad Update $\phi_z,\phi_c,\theta,\eta$ by gradient descent on $\mathcal{L}_{\text{ELBO}_p} + \mathcal{L}_{\text{MI}} + \mathcal{L}_{\text{pt}}+ \mathcal{L}_{\text{pc}}$
\State \quad Update $\phi_z,\phi_c,\theta$ by gradient descent on $\mathcal{L}_{\text{ELBO}_r} + \mathcal{L}_{\text{MI}}$
\State \quad Update $\phi_z,\phi_c,\theta,\eta$ by gradient descent on $\mathcal{L}_{tot}$
\end{algorithmic}
\label{alg:train}
\end{algorithm}
\subsection{Datasets and Baseline models}
\label{sec:dataset}
\input{041data_baselines.tex}
\subsection{Experimental results on SpNLG dataset}
\label{sec:spnlg_exp}
\input{043spnlg_results.tex}
\subsection{Experimental results on Wiki dataset}
\label{sec:wiki_exp}
\input{042wiki_results.tex}
\section{Derivation of the ELBO for parallel data}
\section{Explanation for Preserving-Content loss}
\label{app:explain_commitloss}
The squared-distance term $\mathbb{E}_{q_{\phi_c}(c|y)}\Vert c-h \Vert^2$ appearing in the preserving-content loss can be rewritten as:
\begin{align*}
\mathbb{E}_{q_{\phi_c}(c|y)}\Vert c-h\Vert^2 & = \mathbb{E}_{q_{\phi_c}(c|y)} \sum_{i=1}^K (c_i-h_i)^2 \\
& = \sum_{i=1}^K \mathbb{E}_{q_{\phi_c}(c|y)} (c_i-h_i)^2 \\
& = \sum_{i=1}^K [(\mathbb{E}(c_i-h_i))^2 + \mathrm{var}(c_i)] \\
& = \sum_{i=1}^K [(\mathbb{E}(c_i)-h_i)^2 + \mathrm{var}(c_i)] \\
& = \sum_{i=1}^K [(\mu_i-h_i)^2 + \Sigma_{ii}] \\
& = \Vert\mu-h\Vert^2 + \operatorname{tr}(\Sigma)
\end{align*}
where $K$ is the dimension of $c$. When we minimize this term, we jointly minimize the distance between the mean $\mu$ of the approximate posterior and the table embedding $h$, and the trace of the covariance matrix $\Sigma$.
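This identity is easy to verify numerically. The sketch below (illustrative only) compares a Monte Carlo estimate of $\mathbb{E}\Vert c-h\Vert^2$ under a diagonal Gaussian with the closed form $\Vert\mu-h\Vert^2 + \operatorname{tr}(\Sigma)$:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
mu = rng.normal(size=K)
sigma = rng.uniform(0.5, 1.5, size=K)   # diagonal standard deviations
h = rng.normal(size=K)

# Monte Carlo estimate of E_{c ~ N(mu, diag(sigma^2))} ||c - h||^2 ...
c = mu + sigma * rng.standard_normal((200_000, K))
mc = np.mean(np.sum((c - h) ** 2, axis=1))

# ... versus the closed form ||mu - h||^2 + tr(Sigma).
closed = np.sum((mu - h) ** 2) + np.sum(sigma ** 2)
print(mc, closed)   # the two agree up to Monte Carlo error
```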
\section{Proof for anti-information property of ELBO}
\label{app:elbo_mini_mi}
Considering the KL divergence over the whole dataset (or a mini-batch of data), we have
\begin{align*}
\mathbb{E}_{x\sim p(x)}[D_{\mathrm{KL}}(q(z|x) \Vert p(z))] = & \mathbb{E}_{q(z|x)p(x)} [\log q(z|x) - \log p(z)] \\
= & - H(z|x) - \mathbb{E}_{q(z)} \log p(z) \\
= & - H(z|x) + H(z) + D_{\mathrm{KL}} (q(z) \Vert p(z)) \\
= & I(z,x) + D_{\mathrm{KL}} (q(z) \Vert p(z))
\end{align*}
where $q(z) = \mathbb{E}_{x\sim \mathcal{D}} [q(z|x)]$ and $I(z,x) = H(z) - H(z|x)$.
Since the KL divergence can be viewed as a regularization term in the $\ELBO$ loss, maximizing the $\ELBO$ minimizes the KL term, and hence minimizes the mutual information $I(z,x)$ between $x$ and the latent $z$. This implies that $z$ and $x$ eventually become more independent.
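The decomposition can also be checked numerically. In the toy sketch below (not from the paper), $x$ takes two values, $q(z|x)$ is a unit-variance Gaussian, and the averaged per-datum KL (closed form) is compared against $I(z,x) + D_{\mathrm{KL}}(q(z)\Vert p(z))$ estimated by Monte Carlo with shared samples:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_normal(z, mu, sd):
    # Log-density of N(mu, sd^2), evaluated elementwise.
    return -0.5 * np.log(2 * np.pi * sd**2) - (z - mu) ** 2 / (2 * sd**2)

means = np.array([-2.0, 1.0])   # q(z|x) = N(means[x], 1) for x in {0, 1}
px = np.array([0.3, 0.7])

# Left side: E_x KL(q(z|x) || p(z)) with p(z) = N(0, 1); closed form 0.5 * mean^2.
lhs = np.sum(px * 0.5 * means**2)

# Right side: I(z, x) + KL(q(z) || p(z)), both by Monte Carlo.
x = rng.choice(2, size=500_000, p=px)
z = means[x] + rng.standard_normal(x.size)
log_q_cond = log_normal(z, means[x], 1.0)
log_q_marg = np.log(px[0] * np.exp(log_normal(z, means[0], 1.0))
                    + px[1] * np.exp(log_normal(z, means[1], 1.0)))
mi = np.mean(log_q_cond - log_q_marg)                    # I(z, x)
kl_agg = np.mean(log_q_marg - log_normal(z, 0.0, 1.0))   # KL(q(z) || p(z))
print(lhs, mi + kl_agg)   # the two sides agree up to Monte Carlo error
```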
\section{Proof for the preserving-template loss when posterior collapse happens}
\label{app:elbo_pt_hinder}
When posterior collapse happens, $q_{\phi_z}(z|y) \approx p(z)$, i.e., $D_{\mathrm{KL}}(q(z|y) \Vert p(z)) \approx 0$. Then
\begin{align*}
\mathcal{L}_{pt} (Y,\Tilde{Y}) = & -\mathbb{E}_{\Tilde{y} \sim p(\Tilde{y}), y\sim p(y)} \mathbb{E}_{z\sim q(z|y)} \log p_\eta(\Tilde{y}|z) \\
\approx & -\mathbb{E}_{\Tilde{y} \sim p(\Tilde{y})} \mathbb{E}_{z\sim p(z)} \log p_\eta(\Tilde{y}|z),
\end{align*}
which no longer depends on the posterior parameters $\phi_z$. During back-propagation,
\begin{equation*}
\Vert \nabla_{\phi_z} \mathcal{L}_{pt} (Y,\Tilde{Y}) \Vert = 0,
\end{equation*}
and thus $\phi_z$ is not updated.
\section{Implementation Details}
\label{app:detail}
For the model trained on the \textsc{Wiki} dataset, the dimension of the latent template variable is set to 100, and the dimension of the latent content variable is set to 200. The dimension of the table's hidden representation is 300.
For the hyperparameters of the total loss $\mathcal{L}_{tot}$, we set $\lambda_{\text{MI}} = 0.5$, $\lambda_{\text{pt}} = 1.0$ and $\lambda_{\text{pc}} = 0.5$.
For the model trained on the \textsc{SpNlg} dataset, the dimension of the latent template variable is set to 64, and the dimension of the latent content variable is set to 100. The dimension of the table's hidden representation is also 300. For the hyperparameters of the total loss $\mathcal{L}_{tot}$, we set $\lambda_{\text{MI}} = \lambda_{\text{pt}} = \lambda_{\text{pc}} = 1.0$.
\section{Case study on \textsc{SpNlg} experiment}
\label{sec: case_spnlg}
\begin{table}[thb]
\centering
\resizebox{\textwidth}{!} {
\begin{tabular}{cp{15cm}}
\toprule
Table & \textbf{name}[nameVariable], \textbf{eatType}[pub], \textbf{food}[French], \textbf{priceRange}[20-25], \textbf{area}[riverside] \\
\hline \hline
Reference & nameVariable is a French place with a price range of £20-25. It is in riverside. It is a pub. \\
\hline
\hline
\multirow{5}{*}{Table2seq-sample} & \textbf{1:} nameVariable is a pub with a price range of £20-25. It is a French restaurant in riverside. \\
& \textbf{2:} nameVariable is a French restaurant in riverside with a price range of £20-25. nameVariable is a pub. \\
& \textbf{3:} nameVariable is a pub with a price range of £20-25 and nameVariable is a French restaurant in riverside. \\
& \textbf{4:} nameVariable is a pub with a price range of £20-25, also it is in riverside. it is a Japanese place. \\
& \textbf{5:} nameVariable is a pub with a average rating and it is a French place in riverside. \\
\hline
\multirow{5}{*}{Temp-KN}& \textbf{1:} nameVariable is in riverside, also it is in riverside. \\
& \textbf{2:} nameVariable is a French restaurant.\\
& \textbf{3:} nameVariable is the best restaurant. \\
& \textbf{4:} nameVariable is in riverside, and nameVariable is in [location]. \\
& \textbf{5:} nameVariable is in. It’s a French restaurant and it is in [location] with food and, even if nameVariable is [food\_qual], it is the best place.\\
\hline
\multirow{5}{*}{VTM\xspace-noraw}& \textbf{1:} nameVariable is a pub with a price range of £20-25. It is a French place in riverside. \\
& \textbf{2:} nameVariable is a pub with a price range of £20-25. it is a pub. It is in riverside. \\
& \textbf{3:} nameVariable is a French place in riverside with a price range of £20-25. It is a pub. \\
& \textbf{4:} nameVariable is a French place in riverside with a price range of £20-25. It is a pub. \\
& \textbf{5:} nameVariable is a French place in riverside with a price range of £20-25. It is a pub. \\
\hline
\multirow{5}{*}{VTM\xspace}& \textbf{1:} nameVariable is a French place in riverside with a price range of £20-25. It is a pub. \\
& \textbf{2:} nameVariable is a pub with a price range of £20-25. It is in riverside. It is a French place. \\
& \textbf{3:} nameVariable is a French pub in riverside with a price range of £20-25, and it is a pub. \\
& \textbf{4:} nameVariable is a French restaurant in riverside and it is a pub. \\
& \textbf{5:} nameVariable is a French place in riverside with a price range of £20-25. It is a pub. \\
\bottomrule
\end{tabular}}
\caption{An example of the generated text by our model
and the baselines on \textsc{SpNlg} dataset.}
\label{tab:qual_spnlg}
\end{table}
\section{Introduction}
\label{sec:intro}
\input{010intro.tex}
\section{Problem Formulation and Notations}
\label{sec:problem}
\input{020problem.tex}
\section{Variational Template Machine}
\label{sec:method}
\label{sec:disent_temp_content}
\input{030method.tex}
\section{Experiment}
\label{sec:exp}
\input{040experiment.tex}
\section{Related Work}
\label{sec:relate_work}
\input{050relatedwork.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{060conclusion.tex}
\subsubsection*{Acknowledgments}
We thank the anonymous reviewers for their insightful comments.
Hao Zhou and Zhongyu Wei are the corresponding authors of this paper.
|
2,877,628,088,928 | arxiv | \section{\@startsection{section}{1}%
\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
{\bfseries
\centering
}}
\def\@secnumfont{\bfseries}
\makeatother
\setlength{\textheight}{19.5 cm}
\setlength{\textwidth}{12.5 cm}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\numberwithin{equation}{section}
\setcounter{page}{1}
\begin{document}
\title[Bifractional noises]{Limits of bifractional Brownian noises}
\author{Makoto Maejima and Ciprian A. Tudor}
\address{Makoto Maejima: Department of Mathematics, Keio University, 3-14-1,
Hiyoshi, Kohoku-ku, Yokohama 223-8522,
Japan} \email{[email protected]}
\address{Ciprian A. Tudor: SAMOS-MATISSE, Centre d'Economie de La
Sorbonne, Universit\'{e} de Panth\'eon-Sorbonne Paris 1, 90, rue de
Tolbiac, 75634 Paris Cedex 13, France.} \email{[email protected]}
\subjclass[2000] {Primary 60F05; Secondary 60H05, 60G18}
\keywords{limit theorems, (bi)fractional Brownian motion, fractional noise}
\begin{abstract}
Let $B^{H,K}=\left (B^{H,K}_{t}, t\geq 0\right )$ be a bifractional Brownian motion
with two parameters $H\in (0,1)$ and $K\in(0,1]$.
The main result of this paper is that the increment process
generated by the bifractional Brownian motion $\left( B^{H,K}_{h+t}
-B^{H,K} _{h}, t\geq 0\right)$ converges when $h\to \infty$ to
$\left (2^{(1-K)/{2}}B^{HK} _{t}, t\geq 0\right )$, where $\left (B^{HK}_{t}, t\geq 0\right)$
is the fractional Brownian motion with Hurst index $HK$. We also
study the behavior of the noise associated to the bifractional
Brownian motion and limit theorems to $B^{H,K}$.
\end{abstract}
\maketitle
\allowdisplaybreaks
\noindent
\section{Introduction}
Introduced in \cite{HV03}, the bifractional Brownian motion, a
generalization of the fractional Brownian motion, has been studied
in many aspects (see \cite{AL}, \cite{ET07}, \cite{KRT}, \cite{LN08}, \cite{NO2}, \cite{RT06} and
\cite{TX07}). This stochastic process
is defined as follows. Let $H\in (0,1)$ and $K\in (0,1]$. Then
$B^{H,K}= \left (B_t^{H,K}, t\ge 0\right)$ is a centered Gaussian process with
covariance
$$
E\left [B^{H,K}_{t}B^{H,K}_{s}\right ] = 2^{-K}\left (
(t^{2H}+s^{2H})^K- |t-s|^{2HK}\right).
$$
When $K=1$, it is the fractional Brownian motion $B^H=\left (B_t^H, t\ge0\right )$
with the Hurst index $H\in (0,1)$.
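Indeed, setting $K=1$ in the covariance above recovers the familiar fractional Brownian motion covariance:

```latex
$$
E\left[ B^{H,1}_{t}B^{H,1}_{s}\right]
= \frac{1}{2}\left( t^{2H}+s^{2H}-|t-s|^{2H}\right)
= E\left[ B^{H}_{t}B^{H}_{s}\right].
$$
```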
In general, the process
$B^{H,K}$ has the following basic properties: It is a selfsimilar
stochastic process of order $HK\in (0,1)$, the increments are not
stationary and it is a quasi-helix in the sense of \cite{Kahane}
since for every $s,t\geq 0$, we have
\begin{equation*}
\label{ineq} 2^{-K}\vert t-s \vert ^{2HK}\leq E\left[\left(
B_{t}^{H,K}-B_{s}^{H,K}\right )^2\right] \leq 2^{1-K}\vert t-s \vert
^{2HK}.
\end{equation*}
The trajectories of the process $B^{H,K}$ are $\delta$-H\"older
continuous for any $\delta <HK$ and they are nowhere differentiable.
A better understanding of this process has been presented in the
paper \cite{LN08}, where the authors showed a decomposition of
$B^{H,K}$ with $H,K\in (0,1)$ as follows. Let
$(W_{\theta}, \theta\ge 0)$ be a standard Brownian motion
independent of $B^{H,K}$. For any $K\in (0,1)$, they defined a
centered Gaussian process $X^K=\left (X_t^K, t\ge 0\right)$ by
\begin{equation}
X_t^K = \int_0^{\infty} (1-e^{-\theta t})\theta ^{-(1+K)/2}dW_{\theta}.
\end{equation}
Its covariance is
\begin{equation}
\label{eq:x-cov} E\left [X_t^KX_s^K\right] = \Gamma(1-K)K^{-1}\left
(t^K+s^K-(t+s)^K\right).
\end{equation}
Then they showed, by setting
\begin{equation}
\label{eq:x-hk} X_t^{H,K}:=X^K_{t^{2H}},
\end{equation}
that \begin{equation} \label{deco}
\left (C_1X_t^{H,K}+B_t^{H,K}, t\ge 0\right ) \overset{\mathrm d}{=} \left ( C_2B_t^{HK}, t\ge 0\right ),
\end{equation}
where $C_1= (2^{-K}K(\Gamma(1-K))^{-1})^{1/2}, C_2= 2^{(1-K)/2}$ and
$\overset{\mathrm d}{=}$ means equality of all finite dimensional distributions.
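For later use, let us record that (\ref{deco}) is an identity of covariances; the following short check, using (\ref{eq:x-cov}), (\ref{eq:x-hk}) and the independence of $X^{H,K}$ and $B^{H,K}$, verifies it.

```latex
\begin{align*}
E\left[ \left( C_1X_t^{H,K}+B_t^{H,K}\right)
\left( C_1X_s^{H,K}+B_s^{H,K}\right)\right]
&= 2^{-K}\left( t^{2HK}+s^{2HK}-\left( t^{2H}+s^{2H}\right)^K\right)\\
&\hskip 5mm + 2^{-K}\left( \left( t^{2H}+s^{2H}\right)^K-|t-s|^{2HK}\right)\\
&= 2^{1-K}\cdot \frac12\left( t^{2HK}+s^{2HK}-|t-s|^{2HK}\right)\\
&= C_2^2\, E\left[ B_t^{HK}B_s^{HK}\right].
\end{align*}
```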
The main purpose of this paper is to study the increment process
$$\left( B^{H,K}_{h+t} -B^{H,K} _{h}, t\geq 0\right)$$ (where $h\geq
0$) of $B^{H,K}$ and the noise generated by
$B^{H,K}$ and to see how close this process is to a process with
stationary increments. In principle, since the bifractional Brownian
motion is not a process with stationary increments, its increment
process depends on $h$.
But in this paper we show, by using the
decomposition (\ref{deco}), that for $h\to \infty$ the increment
process $\left( B^{H,K}_{t+h} -B^{H,K} _{h}, t\geq 0\right)$
converges, modulo a constant, to the fractional Brownian motion with Hurst index
$HK$ in the sense of finite dimensional distributions,
so the dependence of the increment process on $h$ weakens
for very large $h$. In this sense, one can say that,
for a very large starting point, the bifractional Brownian motion has
stationary increments. Then we will try to understand this property
from the perspective of the \lq\lq noise" generated by $B^{H,K}$
i.e. the Gaussian sequence $B^{H,K}_{n+1} -B^{H,K}_{n}$, where
$n\geq 0$ are integers. The behavior of the sequence
$$
Y_{a}(n)= E\left[ \left (B^{H,K}_{a+1} -B^{H,K} _{a}\right )\left
(B^{H,K}_{a+n+1} -B^{H,K}_{a+n}\right )\right],\quad a\in\mathbb N,
$$
(which, if $K=1$, is constant with respect to $a$ and of order
$n^{2H-2}$) is studied with respect to $a$ and with respect to $n$
in order to understand the contributions of $B^{HK}$ and $X^{H,K}$.
We organize our paper as follows. In Section 2 we prove our
principal result which says that the increment process of $B^{H,K}$
converges to the fractional Brownian motion $B^{HK}$. Sections 3--5
contain some consequences and different views of this main result.
We analyze the noise generated by the bifractional Brownian motion
and we study its asymptotic behavior, and we interpret the process
$X^{H,K}$ as the difference between \lq\lq the even part" and \lq\lq the odd
part" of the fractional Brownian motion. Finally, in Section 6 we
prove limit theorems to the bifractional Brownian motion from a correlated
non-stationary Gaussian sequence.
\vskip 10mm
\section{The limiting process of the bifractional Brownian motion}
In this section, we prove the following main result; it says that
the increment process of the bifractional Brownian motion converges
to the fractional Brownian motion with Hurst index $HK$.
\begin{theorem}Let $K\in (0,1)$. Then
\label{thm:limit}
$$
\left ( B_{h+t}^{H,K} - B_h^{H,K}, t\ge 0\right ) \overset{\mathrm d}{\Rightarrow} \left
(2^{(1-K)/2}B_t^{HK} ,t\ge 0\right) \quad\text{as}\,\, h\to \infty,
$$
where $\overset{\mathrm d}{\Rightarrow}$ means convergence of all finite dimensional
distributions.
\end{theorem}
To prove Theorem~\ref{thm:limit}, we use the decomposition
(\ref{deco}). It is enough to show that the increment process
associated to $X^{H,K}$ converges to zero; we prove it in the next
result, and actually we measure how fast it tends to zero with
respect to $L^{2}$ norm. It will be useful to compare this rate of
convergence with results in the following sections.
\begin{proposition}\label{Xh}Let $X^{H,K}$ be the process given by {\rm (\ref{eq:x-hk})}. Then as $h\to \infty$
\begin{equation*}
E\left [\left( X_{h+t}^{H,K} - X_h^{H,K} \right )^2\right ]=\Gamma
(1-K)K^{-1} 2^{K}H^{2}K (1-K) t^{2}{h^{2(HK-1)} }(1+o(1)).
\end{equation*}
As a consequence,
$$
\left ( X_{h+t}^{H,K} - X_h^{H,K}, t\ge 0\right ) \overset{\mathrm d}{\Rightarrow} (X(t)\equiv
0, t\ge 0)\quad\text{as} \,\, h\to \infty.
$$
\end{proposition}
\begin{proof} Note from \eqref{eq:x-cov} and \eqref{eq:x-hk} that
$$
E\left [ X_t^{H,K}X_s^{H,K}\right] =\Gamma(1-K)K^{-1}\left(
t^{2HK}+s^{2HK}-\left(t^{2H}+s^{2H}\right)^K\right )
$$
and in particular, for every $t\geq 0$
$$
E \left[ \left ( X_{t} ^{H,K} \right ) ^{2}\right] = \Gamma (1-K) K^{-1} (2-2^{K} )
t^{2HK}.
$$
We have
\begin{align*}
E & \left [ \left ( X_{h+t}^{H,K} - X_h^{H,K}\right )^2\right ]
= E\left [\left ( X_{h+t}^{H,K}\right )^2\right] -2 E\left [
X_{h+t}^{H,K}X_h^{H,K}\right]+ E\left [\left ( X_{h}^{H,K}\right
)^2\right].
\end{align*}
Then
\begin{align*}
I &:=K (\Gamma (1-K)) ^{-1}E \left [ \left( X_{h+t}^{H,K} -
X_h^{H,K}\right ) ^2\right ]\\
&= \Big( (2-2^{K} ) (h+t) ^{2HK} \\
&\hskip 10mm -2 \left( (h+t) ^{2HK} + h^{2HK}
-\left( (h+t)^{2H} + h^{2H} \right) ^{K} \right)
+ (2-2^{K}) h^{2HK}\Big)\\
&=-2^{K} \left( (h+t) ^{2HK} + h^{2HK} \right) + 2 \left( (h+t)
^{2H} + h^{2H} \right)^{K}\\
&= -2^{K} h^{2HK} \left(1+ (th^{-1} ) ^{2HK} +1 \right)+
2h^{2HK}\left( (1+th^{-1} ) ^{2H} + 1 \right) ^{K}.
\end{align*}
Therefore for very large $h>0$ we obtain by using Taylor's expansion
\begin{align*}
I= -2^{K} &h^{2HK} \left( 2 + 2HK th^{-1} + H(2H-1) t^{2} h^{-2}
(1+ o(1)) \right)\\
& + 2h^{2HK} \left(2 + 2Hth^{-1} + H(2H-1) t^{2}h^{-2} (1+ o(1))
\right)^{K}.
\end{align*}
Now we use Taylor expansion for the function $(2+Z) ^{K}$ for $Z$
close to zero. In our case $Z=2Hth^{-1} +H(2H-1) t^{2} h^{-2}+
r(h)$ with $r(h) h^{2} \to 0$ as $h\to \infty$. We obtain
\begin{align*}
I&=-2^{K} h^{2HK} \left ( 2 + 2HK th^{-1} + H(2H-1) t^{2} h^{-2}
(1+ o(1)) \right)\\
& \hskip 10mm + 2h^{2HK} \Big( 2^{K} + K2^{K-1} (2Hth^{-1} +H(2H-1) t^{2}
h^{-2} +r(h) ) \\
& \hskip 15mm + 2^{-1}{K(K-1)} 2^{K-2}(2Hth^{-1} +H(2H-1) t^{2}
h^{-2} +r(h) ) ^{2} + o(h^{-2} ) \Big)\\
&= h^{2HK}2^{K} HK ( -2HK +1 +2H -1+ HK-H)t^{2}h^{-2} (1+o(1))\\
& = h^{2HK} 2^{K} H^{2} K(1-K)t^{2}h^{-2} (1+o(1)).
\end{align*}
Consequently, we have
\begin{align*}
E \left [ \left( X_{h+t}^{H,K} -
X_h^{H,K}\right ) ^2\right ]= \Gamma (1-K)K^{-1}
2^{K}H^2K(1-K)t^2h^{2(HK-1)}(1+o(1)),
\end{align*}
which tends to $0$ as $h\to\infty$, since $HK-1 <0$. \end{proof}
\vskip 10mm
\section{Bifractional Brownian noise}
By considering the bifractional Brownian noise, that is, the increments
of the bifractional Brownian motion, we can understand
Theorem~\ref{thm:limit} in a different way. Define for every
integer $n\geq 0$, the bifractional Brownian noise
$$
Y_{n} =B^{H,K} _{n+1}-B^{H,K} _{n}.
$$
\vskip 3mm
\begin{remark}
Recall that in the fractional Brownian motion case ($K=1$) we have for
every $a\in\mathbb{N}$ and for every $n\ge 0$,
$E\left[ Y_{a} Y_{a+n} \right] = E\left[ Y_{0} Y_{n}\right].$
\end{remark}
Let us denote
$$
R(0,n)= E[Y_{0}Y_{n}] = E\left[ B^{H,K}_{1}
\left(B^{H,K}_{n+1}-B^{H,K}_{n}\right )\right]
$$
and
\begin{equation}
\label{rbi}R(a, a+n)=E\left [ Y_aY_{a+n}\right ]=E\left[ \left (B^{H,K}_{a+1} -B^{H,K}_{a}\right )
\left (B^{H,K}_{a+n+1} -B^{H,K}_{a+n}\right )\right].
\end{equation}
Let us compute the term $R(a,a+n)$ and understand how different it is from $R(0,n)$.
We have for every $n\ge 1$,
\begin{align}
R(a,a+n) &= 2^{-K} \left( \left( (a+1) ^{2H} + (a+n+1) ^{2H} \right) ^{K} -n^{2HK} \right.\nonumber \\
& \left.\hskip 20mm -\left( (a+1) ^{2H} + (a+n) ^{2H} \right) ^{K} +(n-1)^{2HK} \right. \nonumber \\
& \left. \hskip 20mm -\left( a ^{2H} + (a+n+1) ^{2H} \right) ^{K} +(n+1)^{2HK} \right. \nonumber \\
& \left. \hskip 20mm +\left( a ^{2H} + (a+n) ^{2H} \right) ^{K} -n^{2HK} \right) \nonumber \\
&=: 2^{-K}(f_{a} (n) + g(n)),\label{faga}
\end{align}
where
\begin{align*}
f_{a}(n)&=\left( (a+1) ^{2H} + (a+n+1) ^{2H} \right) ^{K}-\left( (a+1) ^{2H} + (a+n) ^{2H} \right) ^{K}\\
&\hskip 20mm -\left( a ^{2H} + (a+n+1) ^{2H} \right) ^{K} +\left( a
^{2H} + (a+n) ^{2H} \right) ^{K}
\end{align*}
and for every $n\ge 1$,
$$
g(n)= (n+1)^{2HK} +(n-1)^{2HK} -2n^{2HK}.
$$
\begin{remark}
\noindent (i) The function $g$ is, modulo a constant, the covariance
function of the fractional Brownian noise with Hurst index $HK$. Indeed, for
$n\ge 1$,
\begin{equation}\label{gn}
g(n) = 2 E\left [ B_1^{HK}(B_{n+1}^{HK}-B_n^{HK})\right ].
\end{equation}
\vskip 2mm \noindent (ii) $g$ vanishes if $2HK=1$. \vskip 2mm \noindent (iii) The
function $f_{a}$ is a \lq\lq new function" specific to the
bifractional Brownian case. (Note that $f_{a}$ vanishes in the case
$K=1$.) It corresponds to the noise generated by $X^{H,K}$. Indeed,
it follows easily from (\ref{deco}) that
\begin{align}
f_{a}(n) &=
-2^{K}C_{1}^{2} E\left[\left( X^{H,K}_{a+1}-X^{H,K}_{a}\right)
\left( X^{H,K}_{a+n+1}-X^{H,K}_{a+n}\right)\right] \nonumber\\
& \label{cor} = :-2^{K}C_{1}^{2}R^{X^{H,K}}(a,a+n)
\end{align}
for every $a$ and $n\in\mathbb N$.
\end{remark}
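For completeness, (\ref{cor}) can be derived from (\ref{deco}) by taking covariances of increments: since the fractional Brownian noise is stationary, (\ref{gn}) gives

```latex
\begin{align*}
C_1^2R^{X^{H,K}}(a,a+n) + R(a,a+n)
&= C_2^2\, E\left[ \left( B^{HK}_{a+1}-B^{HK}_{a}\right)
\left( B^{HK}_{a+n+1}-B^{HK}_{a+n}\right)\right]\\
&= 2^{1-K}\cdot 2^{-1}g(n) = 2^{-K}g(n),
\end{align*}
```

and hence, by (\ref{faga}), $2^{-K}f_a(n) = -C_1^2R^{X^{H,K}}(a,a+n)$.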
We need to analyze the function $f_{a}$ to understand \lq\lq how
far" the bifractional Brownian noise is from the fractional
Brownian noise.
In other words, how far is the bifractional
Brownian motion from a process with stationary increments?
\vskip3mm
\begin{theorem}
\label{thm:fa}For each $n $ it holds that as $a\to\infty$
\begin{equation*}
{f_{a}(n)} = 2^{K}H^{2}K (K-1)a^{2(HK-1)} (1+o(1)).
\end{equation*} Therefore $\displaystyle{\lim_{a\to\infty}f_a(n)=0}$ for each $n$.
\end{theorem}
The bifractional Brownian noise is not stationary. However, the
theorem above means that, as the time origin tends to infinity, it
becomes asymptotically stationary.
\vskip 3mm
\begin{proof} We have, for $a\to\infty$,
\begin{align*}
f_a(n)
& =a^{2HK}\Bigl[\bigl\{(1+a^{-1})^{2H}+(1+(n+1)a^{-1})^{2H}\bigr\}^K \\
& \hskip 10mm- \bigl\{(1+a^{-1})^{2H}+(1+na^{-1})^{2H}\bigr\}^K
-\bigl\{1+(1+(n+1)a^{-1})^{2H}\bigr\}^K \\
& \hskip 10mm+ \bigl\{1+(1+na^{-1})^{2H}\bigr\}^K\Bigr]\\
& = a^{2HK} \Bigl[ \bigl\{ 1+2Ha^{-1} + H(2H-1)a^{-2}(1+o(1)) \\
& \hskip 20mm+ 1+2H(n+1)a^{-1} + H(2H-1)(n+1)^2a^{-2}(1+o(1)) \bigr\}^K\\
& \hskip 10mm - \bigl\{ 1+2Ha^{-1} + H(2H-1)a^{-2}(1+o(1)) \\
& \hskip 20mm+ 1+2Hna^{-1} + H(2H-1)n^2a^{-2}(1+o(1)) \bigr\}^K\\
& \hskip 10mm- \bigl\{ 1+ 1+2H(n+1)a^{-1} + H(2H-1)(n+1)^2a^{-2}(1+o(1)) \bigr\}^K\\
& \hskip 10mm+ \bigl\{ 1+ 1+2Hna^{-1} + H(2H-1)n^2a^{-2}(1+o(1))\bigr\}^K\Bigr]\\
& = 2^{K}a^{2HK} \Bigl[ \bigl\{ 1+H(n+2)a^{-1} \\
& \hskip 20mm + 2^{-1}H(2H-1)(1+(n+1)^2)a^{-2}(1+o(1)) \bigr\}^K\\
& \hskip 10mm - \bigl\{ 1+H(n+1)a^{-1} + 2^{-1}H(2H-1)(1+n^2)a^{-2}(1+o(1)) \bigr\}^K\\
& \hskip 10mm- \bigl\{ 1+ H(n+1)a^{-1} + 2^{-1}H(2H-1)(n+1)^2a^{-2}(1+o(1)) \bigr\}^K\\
& \hskip 10mm+ \bigl\{ 1+ Hna^{-1} + 2^{-1}H(2H-1)n^2a^{-2}(1+o(1)) \bigr\}^K\Bigr]\\
& = 2^{K}a^{2HK} \Bigl[ \bigl\{1+ K\bigl(H(n+2)a^{-1}\\
& \hskip 20mm +2^{-1}H(2H-1)(1+(n+1)^2)a^{-2}(1+o(1))\bigr)\\
& \hskip 20mm +2^{-1}K(K-1)H^2(n+2)^2a^{-2}(1+o(1))\bigr\}\\
& \hskip 10mm-\bigl\{1+ K\bigl(H(n+1)a^{-1}+ 2^{-1}H(2H-1)(1+n^2)a^{-2}(1+o(1))\bigr)\\
& \hskip 20mm +2^{-1}K(K-1)H^2(n+1)^2a^{-2}(1+o(1))\bigr\}\\
& \hskip 10mm- \bigl\{1+ K\bigl(H(n+1)a^{-1}\\
& \hskip 20mm + 2^{-1}H(2H-1)(n+1)^2a^{-2}(1+o(1))\bigr)\\
& \hskip 20mm +2^{-1}K(K-1)H^2(n+1)^2a^{-2}(1+o(1))\bigr\}\\
& \hskip 10mm+\bigl\{1+ K\bigl(Hna^{-1}+ 2^{-1}H(2H-1)n^2a^{-2}(1+o(1))\bigr)\\
& \hskip 20mm +2^{-1} K(K-1)H^2n^2a^{-2}(1+o(1))\bigr\}\Bigr]\\
& = 2^{K}a^{2HK}\Bigl[ KH\bigl((n+2) - (n+1) -(n+1) + n\bigr)a^{-1}\\
& \hskip 10mm + \Bigl( 2^{-1}KH(2H-1)\bigl((1+(n+1)^2) -(1+n^2) -(n+1)^2 +n^2\bigr) \\
& \hskip 20mm + 2^{-1}K(K-1)H^2\bigl((n+2)^2 -2(n+1)^2 +n^2\bigr)\Bigr) a^{-2}(1+o(1))\Bigr]\\
& = 2^{K}H^2K(K-1)a^{2(HK-1)}(1+o(1)).
\end{align*}
Since $HK-1<0$, the last term tends to 0 when $a$ goes to
infinity. \end{proof}
\vskip3mm
\begin{remark}
The fact that the term $f_{a}(n)$ converges to zero as $a\to \infty$
could be seen by Proposition \ref{Xh} since, by the Cauchy--Schwarz
inequality,
$$
R^{X^{H,K}}(a, a+n) \leq \left( E\left[\left( X^{H,K}_{a+1}
-X^{H,K}_{a}\right) ^{2}\right]\right) ^{{1}/{2}} \left(
E\left[\left( X^{H,K}_{a+n+1} -X^{H,K}_{a+n}\right)
^{2}\right]\right) ^{{1}/{2}}
$$
and both factors on the right hand side above are of order
$a^{HK-1}$. The result actually confirms that for large $a$,
$X^{H,K}_{a+n+1}-X^{H,K}_{a+n}$ is very close to $X^{H,K}_{a+1}
-X^{H,K}_{a}$.
\end{remark}
\vskip 10mm
\section{The behavior of increments of the bifractional Brownian motion}
In this section we continue the study of the bifractional Brownian
noise (\ref{rbi}). We are now interested in the behavior with
respect to $n $ (as $n\to \infty$). We know that as $n\to \infty$
the fractional Brownian noise with Hurst index $HK$ behaves as
$HK(2HK-1) n^{2(HK-1)}$. Given the decomposition (\ref{deco}) it is
natural to ask what the contribution of the bifractional Brownian
noise to this is and what the contribution of the process $X^{H,K}$
is. We have the following.
\begin{theorem} For integers $a, n\ge 0$, let $R(a, a+n)$ be given by {\rm (\ref{rbi})}.
Then for large $n$, \label{thm:BHK}
\begin{align*}
R(a, a+n)& =
2^{-K}\left ( 2HK(2HK-1)n^{2(HK-1)} \right .\\
&\left .\hskip 10mm + 2HK(K-1)\left( (a+1) ^{2H} -a^{2H} \right)
n^{2(HK-1)+(1-2H)} + \cdots\right ) .
\end{align*}
\end{theorem}
\begin{proof} Recall first that by (\ref{faga}) and (\ref{gn}),
\begin{eqnarray*}
R(a, a+n)= 2^{-K} (f_{a}(n) + g(n))
\end{eqnarray*}
and the term $g(n)$ behaves as $2HK(2HK-1) n^{2(HK-1)}$ for large
$n$. Let us study the behavior of the term $f_{a}(n)$ for large $n$.
We have
\begin{align*}
f_{a}(n)&= \left( (a+1) ^{2H} + (a+n+1)^{2H} \right) ^{K} -\left( (a+1) ^{2H} + (a+n)^{2H} \right) ^{K}\\
&\hskip 10mm- \left( a ^{2H} + (a+n+1)^{2H} \right) ^{K}+ \left( a^{2H} + (a+n)^{2H} \right) ^{K}\\
&= n^{2HK} \Bigl[ \left( \left((a+1)n^{-1} \right) ^{2H} + \left( (a+1)n^{-1} +1\right) ^{2H} \right) ^{K} \\
& \hskip 20mm -\left(
\left( (a+1)n^{-1} \right) ^{2H} + \left( {a}{n}^{-1}+1 \right) ^{2H} \right) ^{K}\\
& \hskip 20mm- \left( \left( {a}{n} ^{-1}\right) ^{2H} + \left( (a+1){n}^{-1}+1\right) ^{2H} \right) ^{K}\\
& \hskip 20mm+ \left( \left(
{a}{n}^{-1} \right) ^{2H} + \left( {a}{n}^{-1}+1\right) ^{2H} \right) ^{K}\Bigr]\\
&= n^{2HK} \Bigg [ \bigg ( (a+1) ^{2H} n^{-2H} +1 \\
& \hskip 15mm
+ \sum_{j=0}^{\infty}((j+1)!)^{-1}2H(2H-1)\cdots (2H-j)(a+1) ^{j+1}
n^{-j-1}\bigg) ^{K} \\
&\hskip 12mm- \bigg( (a+1) ^{2H} n^{-2H} +1\\
&\hskip 15mm + \sum_{j=0}^{\infty}((j+1)!)^{-1}2H(2H-1)\cdots (2H-j)a ^{j+1}
n^{-j-1}\bigg) ^{K} \\
&\hskip 12mm-\bigg( a ^{2H} n^{-2H} +1\\
& \hskip 15mm +
\sum_{j=0}^{\infty}((j+1)!)^{-1}2H(2H-1)\cdots (2H-j)(a+1) ^{j+1}
n^{-j-1}\bigg) ^{K} \\
&\hskip 12mm+ \bigg( a ^{2H} n^{-2H} +1 \\
&
\hskip 15mm +\sum_{j=0}^{\infty}((j+1)!)^{-1} 2H(2H-1)\cdots (2H-j)a ^{j+1}
n^{-j-1}\bigg) ^{K} \Bigg]. \\
\end{align*}
By the expansion of $(1+ y) ^{K}$ as $y\to0$ we obtain
\begin{align*}
f_{a}(n)
&=n^{2HK}\sum_{\ell =0}^{\infty}((\ell +1)!)^{-1}K(K-1)\cdots (K-\ell)
\left( y_{1}^{\ell +1}-y_{2}^{\ell +1}-y_{3}^{\ell +1}+y_{4}^{\ell +1}\right),
\end{align*}
where
\begin{align*}
y_{1}&= (a+1) ^{2H} n^{-2H} +
\sum_{j=0}^{\infty}((j+1)!)^{-1}2H(2H-1)\cdots (2H-j)(a+1) ^{j+1} n^{-j-1},\\
y_{2}&= (a+1) ^{2H} n^{-2H} +
\sum_{j=0}^{\infty}((j+1)!)^{-1}2H(2H-1)\cdots (2H-j)a ^{j+1} n^{-j-1},\\
y_{3}&= a ^{2H} n^{-2H} +
\sum_{j=0}^{\infty}((j+1)!)^{-1}2H(2H-1)\cdots (2H-j)(a+1) ^{j+1} n^{-j-1},\\
y_{4}&= a ^{2H} n^{-2H} +
\sum_{j=0}^{\infty}((j+1)!)^{-1}2H(2H-1)\cdots (2H-j)a ^{j+1} n^{-j-1}.
\end{align*}
Since $y_{1}-y_{2}-y_{3}+y_{4}=0$, the terms with $\ell =0$ cancel, and
the dominant contribution comes from $\ell =1$. Using
$y_{1}-y_{3}=y_{2}-y_{4}=\left( (a+1)^{2H}-a^{2H}\right) n^{-2H}$ and
$y_{1}+y_{3}-y_{2}-y_{4}= 4Hn^{-1}(1+o(1))$, we get
\begin{align*}
f_{a}(n)&=2^{-1}K(K-1) n^{2HK}
\left( y_{1}^{2}-y_{2}^{2}-y_{3}^{2}+y_{4}^{2}\right) +\cdots\\
&=2^{-1}K(K-1) n^{2HK}
\left( (a+1)^{2H}-a^{2H}\right) n^{-2H}\left( y_{1}+y_{3}-y_{2}-y_{4}\right)
+\cdots\\
&=2HK(K-1) \left( (a+1) ^{2H}-a^{2H}\right) n^{2(HK-1)+(1-2H)}+
\cdots .
\end{align*}
This completes the proof. \end{proof} \vskip 3mm
Let us discuss some consequences of the theorem above.
\begin{remark}
What is the main term in $R(a, a+n)$?
Note that $H>\frac12$ \,\,if and only if\,\,
$2(HK-1)>2(HK-1)+(1-2H)$. Consequently the dominant term for
$R(a,a+n) $ is of order $n^{2HK-2}$ if $H>\frac{1}{2}$ and of order
$n^{2HK-1-2H}$ if $H<\frac{1}{2}$.
\end{remark}
\vskip 3mm
Another interesting observation is that, although the main term of
the covariance function $R(a,a+n)$ changes depending on
whether $H$ is bigger or less than one half, the long-range
dependence of the process $B^{H,K}$ depends on the value of the product $HK$.
\begin{corollary}
For integers $a\geq 1$ and $n\geq 0$, let $R(a,a+n)$ be given by
{\rm (\ref{rbi})}. Then for every $a\in\mathbb{N}$, we have
\begin{equation*}
\sum_{n\geq 0} R(a, a+n) =\infty \hskip0.5cm \mbox{ if } 2HK>1
\end{equation*}
and
\begin{equation*}
\sum_{n\geq 0} R(a, a+n) <\infty \hskip0.5cm \mbox{ if } 2HK\leq1 .
\end{equation*}
\end{corollary}
\begin{proof} Suppose first that $2HK>1$. Then it forces $H$ to
be bigger than $\frac{1}{2}$ and the dominant term of $R(a, a+n)$
is $n^{2HK-2}$ when $n$ is large. So the series diverges.
Suppose that $2HK<1$. If $H>\frac{1}{2}$, the main term of $R(a,a+n)$
is $n^{2HK-2} $ and the series converges. If $H<\frac{1}{2}$,
then the main term is $n^{2HK-2H-1} $ and the series converges again.
If $2HK=1$ (and thus $H>\frac{1}{2}$), then $g$ vanishes and $R(a, a+n)$
behaves as $n\to \infty $ as $n^{-2H}$, so the series is convergent.
\end{proof}
\begin{corollary}Let $R^{X^{H,K}}(a,a+n)$ be the noise defined in {\rm (\ref{cor})}. Then
\label{thm:XHK}
\begin{align*}
R&^{X^{H,K}}(a, a+n) \\
& = {\Gamma(1-K)}{K}^{-1}\left (-2HK(K-1)\left((a+1)^{2H}-a^{2H}\right) n^{2(HK-1)+(1-2H)} + \cdots\right ).
\end{align*}
\end{corollary}
\begin{proof} It follows from Theorem 4.1, the relation {\rm (\ref{cor})}, and
the fact that the covariance of the fractional Brownian noise with
Hurst index $HK$ behaves as $HK(2HK-1)n^{2(HK-1)}$ when $n\to\infty$.
\end{proof}
\vskip 10mm
\section{More on the process $X^{H,K}$}
We will give a few additional properties of the process $X^{K}$
defined in (1.1). Recall from (1.2) that for every $s,t\geq 0$
\begin{equation*}
R^{X^{K}}(s,t):=E[X^{K}_{s}X^{K}_{t}] =\Gamma(1-K)K^{-1}(t^{K} + s^{K}
-(t+s) ^{K}).
\end{equation*}
Denote by $B^{K/2}=(B^{K/2}_{t}, {t\in \mathbb{R}})$ a fractional
Brownian motion with Hurst index $K/2$ defined for all $t\in\mathbb{R}$ and
let
\begin{equation*}
B^{o, K/2}_{t}= {2}^{-1} \left( B^{K/2}_{t} -B^{K/2}_{-t}\right),
\hskip0.5cm B^{e, K/2}_{t}= {2}^{-1} \left(B^{K/2}_{t}
+B^{K/2}_{-t}\right).
\end{equation*}
The processes $B^{o, K/2} $ and $B^{e, K/2}$ are respectively the
odd part and the even part of the fractional Brownian motion
$B^{K/2}$. Denote by $R^{o, K/2} $ the covariance of the process
$B^{o, K/2}_{t}$, by $R^{e, K/2}$ the covariance of the $B^{e, K/2}$
and by $R^{B^{K/2}}$ the covariance of the fractional Brownian
motion $B^{K/2}$. We have the following facts:
\begin{equation*} \label{XB}
R^{X^{K}}(t,s)= C_{3}R^{B^{K/2}} (t, -s)=C_{3} R^{B^{K/2}} (s, -t)
\end{equation*}
where $C_{3}= 2\Gamma (1-K) K^{-1}$, and
\begin{equation*} \label{rx1}
R^{o, K/2}(t,s)=\frac{1}{2} \left( R^{B^{K/2}}(t,s)- R^{B^{K/2}}
(t,-s)\right)
\end{equation*}
and
\begin{equation*} \label{rx2}
R^{e, K/2}(t,s)=\frac{1}{2} \left( R^{B^{K/2}}(t,s)+ R^{B^{K/2}}
(t,-s)\right).
\end{equation*}
As a consequence
\begin{equation*}
R^{e, K/2}(t,s)-R^{o, K/2}(t,s)=R^{B^{K/2}}(t, -s)=
C_{3}^{-1}R^{X^{K}}(t,s) .
\end{equation*}
From the above computations, we obtain
\begin{proposition}
We have the following equality
$$
C_{3}^{-{1}/{2}}X^{K} + B^{o, K/2} \overset{\mathrm d}{=} B^{e, K/2}
$$
if $X^{K}$ and $B^{o, K/2}$ are independent.
\end{proposition}
Let us go back to the bifractional Brownian noise $R(a,a+n)$ given in (\ref{rbi}).
By the decomposition (\ref{deco}), we have
$$
C_{1} X^{H,K} + B^{H,K} \overset{\mathrm d}{=} C_{2} B^{HK},
$$
where $C_1$ and $C_2$ are as before, and thus we get
\begin{align*}
R(a, a+n)&= C_{2}^{2} R^{B^{HK}}(a, a+n) -C_{1}^{2}R^{X^{H,K}} (a, a+n) \\
&= C_{2}^{2} R^{B^{HK}}(0,n) -C_{1}^{2}C_{3}\left( R^{e, K/2, H}(a,a+n) -
R^{o,K/2, H}(a,a+n) \right)
\end{align*}
where $R^{e, K/2, H}(a,a+n)$ is the noise covariance of the process
$\left( B^{e, K/2} _{t^{2H}}, t\geq 0\right)$ and $R^{o,K/2,H}$ is defined analogously.
\begin{remark}
The fact that the covariance function $ R^{X^{K}}(a, a+n)$ of the
process $X^{K}$ converges to zero as $a \to \infty $ can be
interpreted as follows: \lq\lq the covariance of the odd part"
$C_{3}R^{B^{o, K/2}}(a,a+n)$ and \lq\lq the covariance of the even
part" $C_{3}R^{B^{e,K/2}}(a,a+n)$ have the same limit
$2^{-1}C_{3}R^{B^{K/2}}(0,n)$ when $a$ tends to infinity.
\end{remark}
\vskip 10mm
\section{Limit theorems to the bifractional Brownian motion}
In this section, we prove two limit theorems to the bifractional
Brownian motion. Throughout this section, we use the following
notation. Let $0<H<1, 0<K<1$ such that $2HK>1$ and let $(\xi_j,
j=1,2,...)$ be a sequence of standard normal random variables.
Define a function $g(x,y), x\ge 0, y\ge 0$ by
\begin{align}
g(x,y) & = 2^{2-K}H^2K(K-1)(x^{2H}+y^{2H})^{K-2}(xy)^{2H-1}\nonumber\\
& \hskip 20mm + 2^{1-K}HK(2HK-1)|x-y|^{2HK-2}\nonumber\\
& =: g_{1}(x,y) + g_{2}(x,y),\label{g}
\end{align}
for $(x,y)$ with $x\ne y$ and $x\ne 0$ and $y\ne 0$.
\begin{proposition}
\label{prop:conv} Under the notation above, assume that
$E[\xi_i\xi_j]= g(i,j)$. Then
$$
\left (n^{-HK}\sum_{j=1}^{[nt]} \xi_j, t\ge 0\right ) \overset{\mathrm d}{\Rightarrow} \left (B_t^{H,K},
t\ge 0\right ).
$$
\end{proposition}
To prove this, we need a lemma.
\begin{lemma}
$$
\int_0^t\int_0^s g(u,v)dudv = 2^{-K}\left [ (t^{2H}+
s^{2H})^K-|t-s|^{2HK}\right ].
$$
\end{lemma}
\begin{proof} It follows easily from the fact that
$\frac{\partial^{2}R} {\partial x\partial y} (x,y)= g(x,y)$ for
every $x,y\geq 0$ and by using that $2HK>1$.
\end{proof}
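A sketch of the differentiation behind the lemma: for $x\ne y$ with $x,y>0$, differentiating $R(x,y)=2^{-K}\left( (x^{2H}+y^{2H})^K-|x-y|^{2HK}\right)$ term by term gives

```latex
\begin{align*}
2^{-K}\frac{\partial^2}{\partial x\,\partial y}\left( x^{2H}+y^{2H}\right)^K
&= 2^{2-K}H^2K(K-1)\left( x^{2H}+y^{2H}\right)^{K-2}(xy)^{2H-1} = g_1(x,y),\\
-2^{-K}\frac{\partial^2}{\partial x\,\partial y}|x-y|^{2HK}
&= 2^{1-K}HK(2HK-1)|x-y|^{2HK-2} = g_2(x,y),
\end{align*}
```

so that $\frac{\partial^2 R}{\partial x\,\partial y}=g$; the condition $2HK>1$ makes $g_2$ integrable near the diagonal.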
\begin{proof}[Proof of Proposition~{\rm \ref{prop:conv}}] It is enough to
show that
\begin{align*}
I_n &:=E\left [\left (n^{-HK}\sum_{i=1}^{[nt]} \xi_i\right) \left ( n^{-HK}\sum_{j=1}^{[ns]}\xi_j\right )\right ]\\
& \to E[B_t^{H,K}B_s^{H,K}] = 2^{-K}\left ( (t^{2H}+s^{2H})^K-
|t-s|^{2HK}\right).
\end{align*}
We have
$$
I_n = n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]} E[\xi_i\xi_j] = n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}
g(i,j).
$$
Observe that
\begin{align}
g\left ( \frac{i}{n}, \frac{j}{n}\right ) & = 2^{2-K}H^2K(K-1)\left
(\left (\frac{i}{n}\right )^{2H} +\left (\frac{j}{n}\right )^{2H}
\right )^{K-2}
\left (\frac{ij}{n^2}\right )^{2H-1}\nonumber \\
& \hskip 20mm +2^{1-K}HK(2HK-1)\left |\frac{i}{n}- \frac{j}{n}\right |^{2HK-2}\nonumber \\
& = 2^{2-K}H^2K(K-1)n^{-2H(K-2)-2(2H-1)}(i^{2H}+j^{2H})^{K-2}(ij)^{2H-1}\nonumber \\
& \hskip 20mm + 2^{1-K}HK(2HK-1)n^{-2HK+2}|i-j|^{2HK-2}\nonumber \\
& = n^{2(1-HK)} \Bigl ( 2^{2-K}H^2K(K-1)(i^{2H}+j^{2H})^{K-2}(ij)^{2H-1}\nonumber \\
& \hskip 30mm + 2^{1-K}HK(2HK-1)|i-j|^{2HK-2}\Bigr )\nonumber \\
& = n^{2(1-HK)}g(i,j). \label{gij}
\end{align}
Thus, as $n\to\infty$,
\begin{align*}
I_n & = n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]} n^{2HK-2}g\left
(\frac{i}{n},\frac{j}{n}\right )
= n^{-2} \sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]} g\left (\frac{i}{n},\frac{j}{n}\right ) \\
& \to \int_0^t\int_0^s g(u,v)dudv
= 2^{-K}\left ((t^{2H}+s^{2H})^K - |t-s|^{2HK}\right )\\
&= E[B_t^{H,K}B_s^{H,K}].
\end{align*}
\end{proof} \vskip 3mm
\begin{remark}
This result can presumably be generalized to more general Gaussian
selfsimilar processes whose covariance $R$ satisfies
$\frac{\partial^{2} R}{\partial x \partial y} \in L^{2} \left( (0,
\infty ) ^{2} \right)$.
\end{remark}
We next consider a more general sequence of nonlinear functionals of
standard normal random variables. Let $f$ be a real valued
function such that $f(x)$ does not vanish on a set of positive
measure, $E[f(\xi_1)]=0$ and $E[(f(\xi_1))^2]<\infty$. Let $H_k(x)$
denote the $k$-th Hermite polynomial with highest coefficient 1. We
expand $f$ as follows (see e.g. \cite{DM}):
$$
f(x) = \sum_{k=1}^{\infty}c_kH_k(x),
$$
where $\sum_{k=1}^{\infty}c_k^2k!<\infty,
c_k=E[f(\xi_i)H_k(\xi_j)]$. This expansion is possible under the
assumption $Ef(\xi_1)=0$ and $E[(f(\xi_1))^2]<\infty$. Assume that
$c_1\ne 0$. Now consider a new sequence
$$
\eta_j = f(\xi_j), j=1,2,...,
$$
where $(\xi_j , j=1,2,...)$ is the same sequence of standard normal
random variables as before.
\begin{proposition}
\label{prop:gene-conv} Under the same assumptions as in
Proposition~\ref{prop:conv}, we have
$$
\left (n^{-HK}\sum_{j=1}^{[nt]} \eta_j, t\ge 0\right ) \overset{\mathrm d}{\Rightarrow} \left
(c_1B_t^{H,K}, t\ge 0\right ).
$$
\end{proposition}
\begin{proof} Note that $\eta_j = f(\xi_j) = c_1\xi_j +
\sum_{k=2}^{\infty}c_kH_k(\xi_j)$. We have
$$
n^{-HK}\sum_{j=1}^{[nt]}\eta_j = c_1 n^{-HK}\sum_{j=1}^{[nt]}\xi_j +
n^{-HK}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j).
$$
By Proposition~\ref{prop:conv}, it is enough to show that
$$
E\left [\left ( n^{-HK}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j)\right
)^2\right ]\to 0 \quad\text{as}\,\, n\to\infty.
$$
We have
\begin{align*}
J_n&:= E\left [\left ( n^{-HK}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j)\right )^2\right ]\\
& =
n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}\sum_{\ell=2}^{\infty}c_kc_{\ell}E[H_k(\xi_j)H_{\ell}(\xi_j)].
\end{align*}
In general, if $\xi$ and $\eta$ are jointly Gaussian random
variables with $E[\xi]=E[\eta]=0$, $E[\xi^2]=E[\eta^2]=1$ and
$E[\xi\eta]=r$, then
$$
E[H_k(\xi)H_{\ell}(\eta)] = \delta_{k,\ell}r^kk!,
$$
where
$$
\delta_{k,\ell}=
\begin{cases}
1, & k=\ell,\\
0, & k\ne\ell .
\end{cases}
$$
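This orthogonality relation is easy to verify numerically. The sketch below uses Gauss--Hermite quadrature for the weight $e^{-x^2/2}$ (NumPy's \texttt{hermite\_e} module implements exactly these monic, probabilists' Hermite polynomials):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Nodes/weights for the weight exp(-x^2/2); dividing by sqrt(2*pi)
# turns the quadrature sum into an expectation over N(0, 1).
nodes, weights = hermegauss(30)

def hermite_inner(k, ell):
    """E[H_k(xi) H_ell(xi)] for xi ~ N(0,1), H_k monic (leading coeff. 1)."""
    ck = [0.0] * k + [1.0]
    cl = [0.0] * ell + [1.0]
    vals = hermeval(nodes, ck) * hermeval(nodes, cl)
    return float(np.sum(weights * vals) / math.sqrt(2.0 * math.pi))

# E[H_k H_ell] = delta_{k,ell} * k!, as stated above.
for k in range(5):
    for ell in range(5):
        target = math.factorial(k) if k == ell else 0.0
        assert abs(hermite_inner(k, ell) - target) < 1e-8
```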
Thus
\begin{align*}
J_n&= n^{-2HK}{[nt]} \sum_{\ell=2}^{\infty } c_{\ell}^{2}\ell!+
n^{-2HK}\sum _{i,j=1; i\not= j}^{[nt]}\sum_{\ell=2}^{\infty}c_{\ell}^2 (E[\xi_i\xi_j])^{\ell}{\ell}!\\
& =n^{-2HK}{[nt]} \sum_{\ell=2}^{\infty }
c_{\ell}^{2}\ell!+n^{-2HK}\sum _{i,j=1; i\not= j}^{[nt]}\sum_{\ell
=2}^{\infty}c_{\ell}^2g(i,j)^{\ell}\ell !.
\end{align*}
Since for every $i,j\geq 1$ ($i\not=j$) one has $\vert g(i,j)\vert
\leq \left( E[\xi_{i} ^{2} ]\right) ^{{1}/{2} } \left( E[\xi_{j}
^{2}] \right) ^{{1}/{2} }= 1$, we get
\begin{align*}
J_{n} & \leq n^{-2HK}{[nt]} \sum_{\ell=2}^{\infty }
c_{\ell}^{2}\ell! +n^{-2HK}\sum_{\ell =2}^{\infty}c_{\ell }^{2}\ell!
\sum_{i,j=1; i\not=j } ^{[nt]} g(i,j)^{2} \\
&\leq t n^{1-2HK} \sum_{\ell =2} ^{\infty } c_{\ell}^{2} \ell! +
n^{2(HK-1)} \left (\sum_{\ell=2}^{\infty}c_{\ell}^2{\ell}!\right )
\left( n^{-2}\sum_{i,j=1; i\not= j} ^{[nt]} g\left( \frac{i}{n},
\frac{j}{n}\right)^{2}\right) ,
\end{align*}
where we have used (\ref{gij}).
Since $\sum _{\ell=2}^{\infty } c_{\ell}^{2} \ell ! <\infty $ and
$2HK>1$, the first term $t n^{1-2HK} \sum_{\ell =2} ^{\infty }
c_{\ell}^{2} \ell!$ converges to zero as $n\to \infty$. On the other
hand, with $C$ an absolute positive constant and $g_{1}$ and $g_{2}$
given by (\ref{g}),
$$
n^{-2}\sum_{i,j=1; i\not= j} ^{[nt]} g\left(
\frac{i}{n},\frac{j}{n}\right)^{2}\leq C n^{-2}\left(\sum_{i,j=1;
i\not= j} ^{[nt]} g_{1} \left( \frac{i}{n},\frac{j}{n}\right)^{2}+
\sum_{i,j=1; i\not= j} ^{[nt]} g_{2}\left(
\frac{i}{n},\frac{j}{n}\right)^{2}\right).
$$
The first sum $n^{-2}\sum_{i,j=1; i\not= j} ^{[nt]} g_{1} \left(
\frac{i}{n},\frac{j}{n}\right)^{2}$ is a Riemann sum converging to
the integral $\int_{0}^{t}\int_{0}^{t}g_{1}^{2}(x,y) dxdy $. Note
that this integral is finite because $\vert g_{1}(x,y)\vert \leq C
(xy) ^{HK-1}$ and the integral $\int_{0}^{t}\int_{0}^{t} \vert
x-y\vert ^{2HK-2} dxdy$ is finite when $2HK>1$. Since $ n^{2(HK-1)}\to
0$ we easily get $$n^{2(HK-1)} n^{-2}\sum_{i,j=1; i\not= j} ^{[nt]}
g_{1} \left( \frac{i}{n},\frac{j}{n}\right)^{2}\to 0$$ as $n\to
\infty$.
The second sum, involving $g_{2}$, also appears in the classical
fractional Brownian case, because $g_{2}$ is, modulo a constant, the
second mixed derivative of the covariance of the fractional Brownian
motion with Hurst parameter $HK$. The convergence of
$$n^{2(HK-1)}n^{-2}\sum_{i,j=1; i\not= j}
^{[nt]} g_{2}\left( \frac{i}{n},\frac{j}{n}\right)^{2}$$ has already
been proved in, e.g., \cite{DM}. This completes the proof.\end{proof}
\par\bigskip\noindent
{\bf Acknowledgment.}
The authors are grateful to a referee for useful comments that helped
improve the final version of the paper.
\bibliographystyle{amsplain}
\section{Region I: $x \lesssim 0.2$}
\begin{figure}[t]
\centering
\includegraphics[width=7.5cm]{fig2}
\caption{\label{fig:00} (a) Asymmetry data for $x=0$ measured at ISIS. (b) Relaxation rate used to determine $T_{\rm F}$, plus early-time asymmetry $\overline{A}_0$ used to locate $T_{\rm N}$ [non-normalized data from Fig.~\ref{fig:heatmap} (b) and (a), respectively]. Shaded regions show the ordering and freezing temperatures, $T_{\rm N}$ and $T_{\rm F}$, respectively. (c) Data measured at S$\mu$S, showing spontaneous precession at low temperature (1.51~K data shifted by $-5\%$ for clarity). (d) Temperature evolution of the two precession frequencies.}
\end{figure}
{\it Region I:}
Data measured on the $x=0$ compound La$_{2}$CoO$_4$ at S$\mu$S show spontaneous oscillations in the muon polarization for temperatures below $T \approx 75~{\rm K}$ [Fig.~\ref{fig:00}(c)], confirming a transition to SO. The asymmetry spectra within the ordered regime were found to be best fitted for $t \leq 0.5~{\rm \mu s}$ to the two-frequency relaxation function
$A(t) = \sum\nolimits_{i=1,2} A_i\cos(2 \pi \nu_i t) {\rm e}^{- \lambda_i t} + A_{\rm rel}{\rm e}^{- \lambda_{\rm rel} t},$
where the high and low frequency oscillatory components, with amplitudes $A_1$ and $A_2$, are fixed to their average values of 6\% and 10\%, respectively, and indicate two magnetically inequivalent muon stopping sites. The fitted values of the two precession frequencies $\nu_{1,2}$ are plotted in Fig.~\ref{fig:00}(d), and drop to zero at a temperature $T_{\rm N}$, as expected for a magnetic transition to a disordered, paramagnetic state.
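The two-frequency relaxation function above can be sketched as a simple Python routine. The amplitudes $A_1=6$ and $A_2=10$ follow the text; all other parameter values below are illustrative, not the fitted values:

```python
import math

# Two-frequency muon-spin relaxation function quoted above:
# A(t) = sum_i A_i cos(2*pi*nu_i*t) exp(-lambda_i*t) + A_rel exp(-lambda_rel*t)
def asymmetry(t, A1=6.0, A2=10.0, nu1=5.0, nu2=1.5,
              lam1=3.0, lam2=2.0, A_rel=8.0, lam_rel=0.4):
    osc = (A1 * math.cos(2*math.pi*nu1*t) * math.exp(-lam1*t)
           + A2 * math.cos(2*math.pi*nu2*t) * math.exp(-lam2*t))
    return osc + A_rel * math.exp(-lam_rel*t)

# At t = 0 the full initial asymmetry A1 + A2 + A_rel is recovered;
# at late times the oscillatory components are damped away and only
# the slowly relaxing term survives.
assert abs(asymmetry(0.0) - 24.0) < 1e-12
assert abs(asymmetry(10.0) - 8.0 * math.exp(-4.0)) < 1e-6
```

In practice such a function is fitted to the measured asymmetry with a non-linear least-squares routine.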
Measurements made at ISIS are well suited to probing the slow dynamics of the system [Fig.~\ref{fig:00}(a)]. An abrupt drop in average early-time asymmetry is apparent upon cooling below $T \approx 75~{\rm K}$ [Fig.~\ref{fig:00}(b)] and a fit to the broadened step function yielded an ordering temperature value of
$T_{\rm N} = 77(2)~{\rm K}$, consistent with the vanishing of the oscillations.
Upon further cooling below $T \approx 70~{\rm K}$ $\overline{A}_0$ does not change, but the longitudinal relaxation rate increases to a peak at around 22.5~K, before dropping again at lower temperatures. This is suggestive of a freezing of dynamics (as the correlation time $\tau$ increases) at much lower temperatures than the transition to magnetic long range SO. This feature is shared by the $x=0.01$ sample, which could indicate that the introduction of frustration by small concentrations of $S=0$ Co$^{3+}$ ions induces a freezing transition for a relaxation channel that has a characteristic timescale within the $\mu$SR dynamical range. The freezing is visible in the asymmetry spectra below $T_{\rm F}$ [see Fig.~\ref{fig:00}(a)] as an increase in the non-relaxing baseline amplitude, reflecting those components of muon-spin polarization parallel to the local magnetic field which can only be relaxed by dynamic field fluctuations.
We note that the value of $T_{\mathrm{N}}$ for $x=0$ is considerably suppressed compared to the accepted value of 275~K reported in Ref.~\cite{yamadaB}. We attribute this to oxygen non-stoichiometry, which is an alternative route to doping holes into the CoO$_2$ layers (for La$_{2-x}$Sr$_x$CoO$_{4+\delta}$ the hole density is given by $n_{\rm h}=x+2\delta$), as seen in the cuprate and nickelate systems \cite{wells,tranquadaC}.
Further annealing of an $x=0$ sample restored $T_{\rm N}$ to approximately 275~K; however, the lowered amplitude of the $\mu$SR signal and our subsequent NMR measurements indicate that the annealing process introduces impurity phases, particularly at the surfaces of the crystals.
As Sr is introduced within Region I ($x < 0.2$), $T_{\rm N}$ is found to drop abruptly to around 30~K by $x\approx 0.1$.
This effect, together with that of the oxygen non-stoichiometry, demonstrates the sensitivity of the magnetism within the LSCO system to the concentration of non-magnetic Co$^{3+}$ ions within the CoO$_2$ layers. The presence of holes due to excess oxygen dramatically suppresses the onset temperature compared to the pristine compound, and $T_{\rm N}$ is further reduced rapidly by further addition of holes via Sr doping.
This is in contrast to both the previously reported phase diagram proposed on the basis of neutron scattering \cite{cwik}, and the predictions of percolation theory for static non-magnetic impurities in two dimensions where long range AFM SO would persist up to $x\approx 0.41$ \cite{newman,vajk}.
However, the effect is much less abrupt than in the superconducting series La$_{2-x}$Sr$_x$CuO$_4$ where long range SO is replaced by an IC spin-glass phase by just $x=0.02$ \cite{matsuda} (we find $\frac{{\rm d} T_{\rm N}}{{\rm d} x} \approx -1000~{\rm K/doped~hole}$ at low $x$; an order of magnitude smaller than in the cuprate series). The freezing temperature $T_{\rm F}$ does not decrease in the same manner, but remains at around 25~K across Region I. The convergence of $T_{\rm N}$ and $T_{\rm F}$ above around $x=0.1$ suggests that for these concentrations the peak in relaxation rate is sensitive to the critical divergence in correlation times accompanying the transition to magnetic SO on the muon timescale.
{\it Region II:} As more holes are introduced through further Sr substitution, a marked change in behavior is encountered for concentrations $x \geq 0.2$, with samples in the region $0.2\leq x\leq 0.5$ showing similar responses, suggesting that they share common features which might relate to stripe ordering.
Stripe order in LSCO has been proposed for $x \geq 0.3$ on the basis of neutron scattering experiments \cite{cwik}.
Doping away from the parent compound introduces disorder and frustration into the planes, and intermediate doping levels lead to short-range stripe correlations which stabilize IC magnetic order. The most robust CO within the LSCO system is checkerboard ordering, which occurs at half doping ($x=0.5$) \cite{zaliznyak00,zaliznyak01}, where in-plane CO correlation lengths are largest \cite{cwik}, compared to $x=1/8$ for the cuprate system.
Despite having different electronic properties from the cuprates, LSCO samples with $x=0.33$ and $0.25$ which exhibit disordered stripe CO correlations share the distinctive ``hourglass'' magnetic excitation spectrum \cite{tranquadaD,haydenB,hinkov,lipscombe,xu}, as revealed by inelastic neutron scattering (INS) studies \cite{boothroyd,gaw}.
However, the origin of the hourglass spectrum and the nature of the CO in this region is controversial.
Due to the insulating nature of the material and the disparate energy scales of CO and SO \cite{savici} it was suggested that the origin of the IC magnetic order is not stripe-like physics. Furthermore, nanoscale phase separation of regions of the undoped compound and the stable checkerboard CO exhibited by $x=0.5$ doping has been proposed as the source of the hourglass excitation spectrum observed by INS \cite{drees13,drees14}.
Simulations using a cluster spin glass model (CSGM) \cite{andrade} have reproduced the hourglass excitation spectrum \cite{gaw}, where frustrated magnetic ions decorate a background of short-range stripe CO correlations with quenched disorder, strengthening the argument that stripes and hourglass excitation spectrum are intimately linked.
The results of $\mu$SR and NMR measurements of the $x=1/3$ compound revealed the onset of magnetic order within partially disordered charge and spin stripes at around 35~K, with a further glassy freezing of dynamics involving the slow, collective motion of spins within the magnetic stripes at lower temperatures.
We find that the $\mu$SR of $x=0.25$ is similar to the $x=0.33$ material, although the slightly broader features preclude the identification of a second freezing feature at lower temperatures.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig3}
\caption{\label{fig:20} (a) Asymmetry data for $x=0.2$ measured at S$\mu$S showing temperature evolution of oscillations, and the non-relaxing baseline (dashed horizontal lines). Solid lines are fits (see text) (b) Amplitude components; (c) longitudinal relaxation rate and exponent $\beta$; (d) amplitude and frequency of the oscillatory component.}
\end{figure}
A key observation is that the behavior of the $x=0.2$ sample is clearly different from that of the samples in Region I, suggesting that stripe correlations exist at lower concentrations $x$ than previously believed. Asymmetry spectra obtained at S$\mu$S [Fig.~\ref{fig:20}(a)] show spontaneous oscillations for temperatures below around 40~K, which were best fitted to the single-frequency relaxation function
$A(t) = A_{\rm osc}\cos(2 \pi \nu t + \varphi) {\rm e}^{- \lambda_{\rm osc} t} + A_{\rm rel}{\rm e}^{-( \Lambda t)^{\beta}} + A_{\rm b},$
for the time window $t \leq 4.6~{\rm \mu s}$, where the transverse relaxation rate $\lambda_{\rm osc}$ was fixed to its average value of $39~{\rm \mu s^{-1}}$.
Broad peaks in the temperature dependence of both longitudinal relaxation rate $\Lambda$ and amplitude $A_{\rm rel}$ [Fig.~\ref{fig:20}(b), (c)]
suggest a freezing temperature $T_{\rm F}=38(7)~{\rm K}$.
Upon cooling below 20~K, there is a gradual increase in the non-relaxing contribution $A_{\rm b}$ [Fig.~\ref{fig:20}(a), (b)] which points to a more static field distribution.
Taken together, these results indicate that slow dynamics within spatially inhomogeneous disordered stripes of magnetic Co$^{2+}$ ions start to freeze out upon cooling below around 40~K, with regions of static, glassy SO gradually appearing as temperatures are lowered further. This description is consistent with a cluster glass, previously observed only at higher $x$ \cite{gaw}.
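The single-frequency relaxation function used for these fits can be sketched as follows. Parameter values are illustrative only, except $\lambda_{\rm osc}$, which the text fixes at $39~{\rm \mu s^{-1}}$:

```python
import math

# Single-frequency relaxation with a stretched-exponential term, as in
# the x = 0.2 fits above:
# A(t) = A_osc cos(2*pi*nu*t + phi) exp(-lam_osc*t)
#        + A_rel exp(-(Lam*t)^beta) + A_b
def asymmetry_x02(t, A_osc=10.0, nu=2.0, phi=0.0, lam_osc=39.0,
                  A_rel=6.0, Lam=1.0, beta=0.7, A_b=4.0):
    return (A_osc * math.cos(2*math.pi*nu*t + phi) * math.exp(-lam_osc*t)
            + A_rel * math.exp(-(Lam*t)**beta) + A_b)

# beta < 1 "stretches" the relaxation: it decays more slowly than a
# simple exponential at late times, consistent with a distribution of
# local relaxation rates in a glassy system.
assert math.exp(-(1.0 * 5.0)**0.7) > math.exp(-1.0 * 5.0)
# At late times only the baseline A_b and the slow tail survive.
assert abs(asymmetry_x02(5.0) - (4.0 + 6.0 * math.exp(-5.0**0.7))) < 1e-6
```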
The $x=0.5$ compound has been of special interest as it has been found to support checkerboard CO below around $T_{\rm CO} \approx 800~{\rm K}$ \cite{zaliznyak00,zaliznyak01,helme}. Susceptibility and neutron scattering measurements reveal magnetic correlations appearing for $T \lesssim 60~{\rm K}$ \cite{moritomo,helme} and a spin freezing transition at around 30~K dominated by $180^{\circ}$ antiferromagnetic interactions between Co$^{2+}$ ions across the non-magnetic Co$^{3+}$ sites \cite{helme}. Our muon data show a broad drop in $\overline{A}_0$ on cooling through around 30~K, coinciding with a peak in relaxation rate and the appearance of spontaneous (heavily damped) oscillations indicating the onset of long range SO. No features are observed around 60~K, where magnetic moments are still fluctuating outside of the muon timescale.
{\it Region III:} For Sr concentrations $x\gtrsim 0.5$, neutron measurements have revealed short range IC magnetic correlations on the neutron timescale \cite{sakiyama}, where glassy SO is likely to be accompanied by short-range CO.
Our measurements in Region III \cite{SIfootnote} detect no long range SO on the muon timescale down to the lowest measured temperatures ($\approx 1.5$~K) for the compounds $x=0.67,~0.75$ and 0.9. For these samples, no muon precession is resolvable and full initial polarization ($\propto A_0$) is maintained across all temperatures.
Spectra were found to be best fitted to
$A(t) = A_{\rm f}{\rm e}^{- \lambda_{\rm f} t} +A_{\rm s}{\rm e}^{- \lambda_{\rm s} t} + A_{\rm b}$,
where the initial asymmetry $A_0 = A_{\rm f} +A_{\rm s} + A_{\rm b}$ was fixed to the high-$T$ value of $\overline{A}_0$ for each compound, and the ratio between fast ($\lambda_{\mathrm{f}}$) and slow ($\lambda_{\mathrm{s}}$) relaxation rates was found to be approximately 100.
The fit parameters show qualitatively similar behavior for the three compounds: both the fast component amplitude $A_{\rm f}$ and relaxation rate $\lambda_{\rm f}$ increase as temperature is decreased from around 15~K in a very similar manner to the gradual increase in IC magnetic superstructure Bragg peak intensity observed using neutron scattering for samples with $x \geq 0.6$ \cite{cwik,boothroydA}. We attribute this behavior to the electronic magnetic moments fluctuating more slowly as temperature is lowered. The magnitude of $\lambda_{\rm f}$ as $T$ approaches zero is greatest for $x=0.75$, indicating longer magnetic correlation times for this concentration. The fraction of the total asymmetry contained within the fast relaxing component behaves in the same manner for the three compounds and does not scale with the concentration of non-magnetic Co$^{3+}$ ions, suggesting that there is no phase separation within these samples.
If SO occurs above 1.5~K, then it could be either short-ranged or still fluctuating too rapidly with respect to the muon timescale to be detectable.
In summary, our $\mu$SR study has elucidated the nature of the magnetic correlations across the phase diagram of the non-cuprate hole-doped layered antiferromagnet La$_{2-x}$Sr$_x$CoO$_4${} on the muon ($\mu {\rm s}$) time scale, enabling us to identify three distinct regions of behavior.
For $x \lesssim 0.2$ the ordering temperature $T_{\rm N}$ for the commensurate nnAFM SO is heavily suppressed by the introduction of holes into the CoO$_2$ layers.
For $0.2 \lesssim x \lesssim 0.6$ ordering temperatures are larger and IC magnetic ordering is likely to be stabilized by stripe correlations in the CO. This region extends to lower hole concentrations than previously thought, implying that the phase diagram bears a greater similarity to that of La$_{2-x}$Sr$_{x}$NiO$_4$ \cite{ulbrich,hayden,chen,sachan}. Finally, above $x \approx 0.6$ spin fluctuations slow upon cooling, but the system remains paramagnetic down to temperatures of 1.5~K.
\begin{acknowledgements}
Part of this work was performed at S$\mu$S, Paul Scherrer Institut, Villigen, Switzerland and ISIS, Rutherford Appleton Laboratory, UK. We are grateful for the provision of beamtime, and to A. Amato and P. J. Baker for experimental assistance. This research project has been supported by the European Commission
under the 7\textsuperscript{th} Framework Programme through the `Research Infrastructures' action of the `Capacities' Programme, NMI3-II Grant number 283883. This work is supported by the EPSRC (UK).
\end{acknowledgements}
\section{Introduction\blfootnote{This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/}}
\label{section:1}
Commonsense is the unstated background knowledge that is used to perceive, infer, and understand the physical world, human emotions and reactions, and the common facts that most people agree on. Ordinary commonsense helps us quickly differentiate simple false statements from true ones, or answer questions such as ``can an elephant fit into the fridge'', but such questions can be difficult for automatic systems \cite{c206796aef5c4a1e8fda075d6fd94673}. Recent advances in machine learning emphasize the importance of commonsense reasoning in natural language processing (NLP) and as a critical component of artificial intelligence (AI). In the fifty-year history of AI research, progress in automated commonsense reasoning was extremely slow \cite{4aba22e1f5b0492bab5811af4028ff48}. However, since transfer learning~\cite{Yosinski2014HowTA,Goodfellow-et-al-2016} and then transformers were introduced to the NLP world \cite{vaswani1706attention}, breakthroughs and developments have occurred at an unprecedented pace \cite{pan2009survey,Tan2018ASO}. Advances in machine learning and deep learning methods have been achieved in numerous studies across a wide range of disciplines \cite{Panahi2019word2ketSW,nemati2020machine,article,rs12091361,arodz2019quantum,oh2018effects}.
This paper describes a system participating in the SemEval-2020 ``Commonsense Validation and Explanation (ComVE) Challenge'', a set of commonsense reasoning and Natural Language Understanding (NLU) tasks designed by \cite{wang-etal-2020-semeval}. The competition is divided into three subtasks, which involve testing commonsense reasoning in automatic systems, multiple-choice questions, and text generation. In these tasks, participants are asked to improve on the performance of previous efforts \cite{wang2019does}. We apply statistical language modeling, or language modeling (LM) for short, one of the most important components of modern NLP, and then transfer learning to reuse a model pretrained on a different data distribution and feature space as the starting point for our target tasks. Applying transfer learning to NLP significantly improves the learning process in terms of time and computation through the transfer of knowledge from a related task that has already been learned \cite{10.5555/1803899}.
Language modeling is the task of estimating a probability distribution over sequences of words; a language model assigns a probability to the likelihood of a given word (or sequence of words) following a sequence of words. Language models are applied to many kinds of tasks, such as machine translation, speech recognition, question answering, and sentiment analysis. AWD-LSTM (ASGD Weight-Dropped LSTM) \cite{merity2017regularizing} is a fundamental building block for language modeling; it uses a different gradient update step, returning the average of the weights over previous iterations instead of the weights of the current iteration.
Recently, there have been excellent advancements in transfer learning, whose success is exemplified by Bidirectional Encoder Representations from Transformers (BERT) \cite{devlin2018bert}, the OpenAI transformer (GPT-2) \cite{radford2019language}, Universal Language Model Fine-tuning for Text Classification (ULMFiT) by fast.ai founder Jeremy Howard \cite{howard2018universal}, ELMo \cite{Peters:2018}, and other waves of cutting-edge methods and architectures such as XLNet \cite{yang2019xlnet}, Facebook AI's RoBERTa, a robustly optimized BERT pretraining approach \cite{liu2019roberta}, ALBERT, a lite BERT for self-supervised learning of language representations \cite{lan2019albert}, T5 by the Google team \cite{raffel2019exploring}, and CTRL \cite{keskar2019ctrl}. For this work, we employ and fine-tune several of these models.
When BERT was published, it achieved state-of-the-art performance on a number of natural language understanding tasks. As opposed to directional models like word2vec \cite{mikolov2015computing}, which generate a single representation for each word in the vocabulary and read the text input sequentially, BERT is deeply bidirectional and reads the entire sequence of words at once. This allows the model to learn the context of a word based on all of its surroundings, using two training strategies: Masked Language Model (MLM) and Next Sentence Prediction (NSP). The MLM technique masks out some of the words in the input and then conditions on the remaining words bidirectionally to predict the masked words. A random sample of the tokens in the input sequence is selected and replaced with the special token `[MASK]', and the objective is a cross-entropy loss on predicting the masked tokens \cite{devlin2018bert}. In this paper, we use a method inspired by MLM.
The paper ``Attention Is All You Need'' \cite{vaswani1706attention} describes a sequence-to-sequence architecture, the transformer, that relies entirely on the self-attention mechanism and does not use any recurrent network such as a GRU \cite{chung2014empirical} or LSTM \cite{hochreiter1997long}. Transformers consist of encoders and decoders. The encoder takes the input sequence and decides which other parts of the sequence are important by attributing different weights to them. The decoder turns the encoded sequence into another sequence corresponding to the target task.
A huge variety of downstream tasks have been devised to test a model's understanding of specific aspects of language. The General Language Understanding Evaluation (GLUE) \cite{wang2018glue} and the Natural Language Decathlon (decaNLP) \cite{mccann2018natural} benchmarks consist of difficult and diverse natural language task datasets. These benchmarks span complex tasks such as question answering, machine translation, textual entailment, natural language inference, and commonsense pronoun resolution. The majority of state-of-the-art transformer models publish their results for all tasks on the GLUE benchmark; for example, modified versions of BERT, RoBERTa, and T5 outperform the human baseline \cite{zhu2019freelb,wang2019structbert}. For the evaluation phase, GLUE follows the same approach as SemEval.
Our attempts at the SemEval-2020 Task 4 competition boost performance on two subtasks significantly. Our idea of reframing the first subtask helps us outperform state-of-the-art architectures and language models like BERT, AlBERT, and ULMFiT. The general-domain stage of ULMFiT predicts the next word in a sequence using an AWD-LSTM network pretrained on the widely used WikiText-103 dataset. ULMFiT has outperformed prior systems on many text classification tasks, such as emotion detection, early detection of depression, and medical image analysis \cite{lundervold2019overview,xiao2019figure,trotzek2018utilizing}. We ranked 11th, with a very competitive result, on the first subtask and 6th on the second subtask, among 40 and 27 teams, respectively.
This paper is organized as follows. In Section 2, we introduce the three subtasks and their datasets. In Section 3, we describe our different applied models and various strategies that were used to fine tune the models for each individual subtask. In Section 4, we present the performance of our system. Finally, we conclude the paper in Section 5.
\section{Task Definition and Datasets}
As discussed, SemEval-2020 Task 4 consists of three subtasks, each designed for a different natural language inference task. Figure \ref{fig:Figure1} shows a sample for each subtask and the corresponding answer of the model.
\begin{itemize}
\item {\bf SubtaskA (Commonsense Validation)}: Given two English statements with similar wordings, decide which one does not make sense. We had access to 10,000 and 2,021 human-labeled pairs of sentences for training and trialing models, respectively. After the dev set was released, we combined these two datasets for the training phase and used the dev set to test our models.
\item {\bf SubtaskB (Explanation)}: Given the nonsense statement, select the correct option among three to reason why the statement conflicts with human knowledge. The number of samples in the datasets for this task is similar to subtaskA. Each sample contains the incorrect statement from subtaskA and three candidate reasons to explain why this is against commonsense knowledge.
\item {\bf SubtaskC (Reason Generating)}: Given the nonsense statement, generate an understandable reason, in the form of a sequence of words, to explain why the statement is against human knowledge. The training samples for this subtask are all of the false sentences from subtaskA, and the trial and dev sets are constructed in the same way.
\end{itemize}
\begin{figure}[ht]
\includegraphics[width=9cm, height=9cm]{figure}
\centering
\caption{Sample of training data for each subtask}
\label{fig:Figure1}
\end{figure}
\section{Model Description}
Large pretrained language models are the main trend behind the latest NLP breakthroughs. As transformers occupy the NLP leaderboards, we choose several state-of-the-art architectures and significantly outperform the baselines on all subtasks. For each subtask, we describe our system separately below.
\subsection{SubtaskA (Commonsense Validation)}
We consider two approaches to address this task: the first is based on language models, and the second uses classifiers. Our experimental process begins with language models; the key idea behind the first approach is to compute the probability of each word appearing in a statement and then select the statement with the higher product of probabilities.
Our first attempt involves fine-tuning a pretrained AWD-LSTM model (as described in Section \ref{section:1}), which performs as poorly as a random guess. We also try two other language models: BERT, an MLM that predicts the original value of each masked word based on the non-masked words in the sequence, and a transformer language model.
Original BERT uniformly selects 15\% of the input tokens for possible replacement. Of the selected tokens, 80\% are replaced with `[MASK]', 10\% are left unchanged, and 10\% are replaced by a randomly selected vocabulary token. However, our way of using MLM follows these steps:
\begin{enumerate}
\item Add special tokens to the beginning and end of each sentence. \newline [`[CLS]', `He', `drinks', `apple', `[SEP]']
\item Replace each token from left to right by `[MASK]' each time.
[`[MASK]', `He', `drinks', `apple', `[SEP]'],
[`[CLS]', `[MASK]', `drinks', `apple', `[SEP]'],
[`[CLS]', `He', `[MASK]', `apple', `[SEP]'],
[`[CLS]', `He', `drinks', `[MASK]', `[SEP]'],
[`[CLS]', `He', `drinks', `apple', `[MASK]']
\item Feed them to MLM for predicting the probabilities of the original masked tokens.
\item Normalize predicted probabilities using softmax activation function in the output layer.
\item Multiply the predicted probabilities of the masked tokens for each statement of the pair; the correct sentence has the higher probability.
\end{enumerate}
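The masking-and-scoring procedure above can be sketched in plain Python. Here \texttt{token\_prob} is a hypothetical stand-in for the masked language model's prediction head, and the toy probabilities are purely illustrative, not BERT outputs:

```python
import math

def masked_variants(tokens):
    # Steps 1-2: add special tokens, then mask one position at a time,
    # yielding (masked sequence, original token at the masked position).
    seq = ["[CLS]"] + tokens + ["[SEP]"]
    for i in range(len(seq)):
        yield seq[:i] + ["[MASK]"] + seq[i + 1:], seq[i]

def score(tokens, token_prob):
    # Steps 3-5: summing log-probabilities is equivalent to multiplying
    # the predicted probabilities of the masked tokens.
    return sum(math.log(token_prob(masked, original))
               for masked, original in masked_variants(tokens))

# Hypothetical stand-in for the MLM head: a fixed table of token
# probabilities (illustrative values only).
def toy_prob(masked, original):
    table = {"[CLS]": 0.9, "[SEP]": 0.9, "He": 0.3, "drinks": 0.2, "water": 0.15}
    return table.get(original, 0.001)

sensible = score(["He", "drinks", "water"], toy_prob)
nonsense = score(["He", "drinks", "apple"], toy_prob)
assert sensible > nonsense  # the statement that makes sense scores higher
```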
While examining the homogeneity of the dataset, we observe that some samples end with periods and others do not. The most frequent use of a period is to mark the end of a sentence that is not a question or exclamation. By adding a period at the end of all statements, we increase the accuracy by 4\%, which is remarkable. We also try normalization and padding to minimize the impact of sequence length on the performance of the model. Surprisingly, we observe that normalizing the product of probabilities from step 5 by taking its sequence-length-th root does not improve the performance of the model. Similarly, normalization using perplexity, a standard way to evaluate language models, does not increase the accuracy of the model. Perplexity is the inverse probability of the test set normalized by the number of words, so minimizing perplexity is the same as maximizing probability. We also observe that padding makes no difference in terms of accuracy. As a result, the accuracy of the BERT MLM is almost the same as the baseline achieved by fine-tuned ELMo, as reported by \cite{wang2019does}. Our observations suggest that the BERT MLM model is better suited to long-document understanding, whereas the maximum length of our samples (27 tokens) is very short.
As mentioned, we consider classifiers as the second approach to this task. We show that the classification-based approach is more effective at recognizing nonsense statements, with the exception of ULMFiT for text classification. Our main reasons for applying ULMFiT to subtaskA are its techniques for dealing with a small in-domain dataset: discriminative fine-tuning, slanted triangular learning rates instead of the same learning rate throughout training, and gradual unfreezing of the neural network layers. In practice, however, applying ULMFiT to this task performs only slightly better than choosing between the two statements randomly. Similarly, our results show that classifiers fine-tuned on the pretrained AWD-LSTM and on the transformer yield accuracies close to the 50\% of a random guess. As shown in Table 1, these models cannot properly differentiate sentences that make sense from those that do not.
In addition, we apply ubiquitous transformer architectures for classification, such as BERT, Albert, and RoBERTa. All of these models allow us to pretrain on a large corpus of data, such as all Wikipedia articles and the English book corpus, and then fine-tune on downstream tasks. Looking at Table 1, we see that RoBERTa outperforms all the other models. We also find a significant difference between the fine-tuned Albert and BERT classifiers. Table \ref{table:kysymys} summarizes the performance of these systems on the dev set in terms of accuracy.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ | l | l | }
\hline \bf Models & \bf Accuracy \\ \hline
AWD-LSTM & 52.45 \\ \hline
Transformer & 53.8 \\ \hline
ULMFiT & 59.8 \\ \hline
BERT MLM & 74.29 \\ \hline
BERT classification & 88 \\ \hline
Albert classification & 92 \\ \hline
RoBERTa classification & 95 \\ \hline
RoBERTa multiple choice question & 96.08\\ \hline
\end{tabular}
\end{center}
\caption{Experimental results for subtaskA on dev set. }
\label{table:kysymys}
\end{table}
Our idea for boosting the performance of all these models is to reframe the input of subtaskA from a binary classification task into the input of another downstream task: multiple choice question answering. As a result, we show that RoBERTa fine-tuned for the multiple choice question task gives better results than RoBERTa for classification on both the dev and test sets (see Table \ref{table:kysymys}).
The difference between these two models lies in how attention is applied to the statements. In the self-attention layer, the encoder looks at other words in the input sentence as it encodes a specific word. For binary classification models such as BERT, RoBERTa, and Albert, we concatenate the two statements, and the self-attention layer then attends to every position in the input sequence, including both statements. For the RoBERTa multiple choice question task, in contrast, we feed each statement to the network separately. The attention layer therefore attends to the word sequence of each individual statement, gathering information that can lead to a better encoding of each word.
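The two input formats can be sketched as follows; the special tokens shown are illustrative stand-ins, since the exact markers depend on each model's tokenizer:

```python
def classification_input(s1, s2, cls="[CLS]", sep="[SEP]"):
    # One joint sequence: self-attention attends across both statements.
    return f"{cls} {s1} {sep} {s2} {sep}"

def multiple_choice_inputs(s1, s2, cls="<s>", sep="</s>"):
    # Two separate sequences: each statement is encoded on its own,
    # so attention stays within a single statement.
    return [f"{cls} {s1} {sep}", f"{cls} {s2} {sep}"]

pair = classification_input("He drinks milk.", "He drinks apple.")
options = multiple_choice_inputs("He drinks milk.", "He drinks apple.")
assert "He drinks milk." in pair and "He drinks apple." in pair
assert len(options) == 2 and all(o.count("He drinks") == 1 for o in options)
```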
A question answering task usually provides a paragraph of context and a question, and the goal is to answer the question based on the information in the context. For subtaskA, we have neither context nor question; all we have are two options corresponding to the statements, which are fed to the network separately. Our goal is to select the correct statement (answer) from the two options.
As expected, the choice of hyper-parameters has a significant impact on model performance, and optimizing them requires careful evaluation of several key hyper-parameters. We primarily follow the default hyper-parameters of RoBERTa, except for the maximum sequence length, the weight decay, and the learning rate $\in\{1e-5, 2e-5, 3e-5\}$, which is warmed up over 320 steps to a peak value and then linearly decayed over a maximum of 5336 steps. The other hyper-parameters remain at their defaults during the training process of 5 epochs. By searching the hyper-parameter space for the optimum values, the fine-tuned model achieves 96.08\% and 94.7\% accuracy on the dev and test sets, respectively. This result is a large jump from the 74.1\% baseline accuracy and approaches the 99.1\% accuracy of human performance.
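The warmup-then-linear-decay schedule described above can be written compactly; the values below mirror the reported 320 warmup steps and 5336 maximum steps, with the peak learning rate as a parameter:

```python
def learning_rate(step, peak_lr=1e-5, warmup_steps=320, max_steps=5336):
    """Linear warmup to peak_lr over warmup_steps, then linear decay
    down to zero at max_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max_steps - step
    return peak_lr * max(0.0, remaining / (max_steps - warmup_steps))

assert learning_rate(0) == 0.0          # start of warmup
assert learning_rate(320) == 1e-5       # peak reached after warmup
assert learning_rate(5336) == 0.0       # fully decayed at max_steps
assert 0 < learning_rate(2828) < 1e-5   # mid-training: decaying
```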
\subsection{SubtaskB (Explanation)}
As described earlier, subtaskB requires world knowledge and targets commonsense reasoning to answer why nonsense statements do not make sense. This type of task seems trivial for humans with basic knowledge but is still one of the most challenging tasks in NLP. Even the human performance baseline of 97.8\% shows that such reasoning is difficult, despite comprehensive commonsense knowledge.
Our goal is to investigate whether transformers such as RoBERTa (whose performance was confirmed on subtaskA) can learn commonsense inference given a nonsense statement. The RoBERTa-large architecture comprises 24 layers, a hidden dimension of 1024, 16 self-attention heads, and 355M parameters, and is pretrained on the book corpus plus English Wikipedia, English CommonCrawl News, and the WebText corpus.
SubtaskB is a multiple choice question task, and we fine-tune the hyper-parameters of the RoBERTa model to answer the questions. In this setting, we concatenate the nonsense statement (context) with each option (ending) and use the three resulting statements as model inputs. For example, `He drinks apple.' is the context and [`Apple juice are very tasty and milk too.', `Apple can not be drunk.', `Apple cannot eat a human.'] is the list of endings. We want to select, from the three options, the ending that is entailed by the context:
\begin{itemize}
\item ``He drinks apple. Apple juice are very tasty and milk too.''
\item ``He drinks apple. Apple can not be drunk.''
\item ``He drinks apple. Apple cannot eat a human.''
\end{itemize}
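Selecting the answer then amounts to scoring each concatenated context--ending pair and taking the argmax; the scorer below is a toy stand-in for the fine-tuned model's output head, not the actual RoBERTa scoring function:

```python
def pick_ending(context, endings, score):
    """Concatenate the context with each candidate ending and
    return the index of the highest-scoring pair."""
    scored = [score(f"{context} {ending}") for ending in endings]
    return max(range(len(endings)), key=lambda i: scored[i])

# Toy scorer (illustrative only): prefer endings mentioning "drunk".
toy_score = lambda text: text.count("drunk")

context = "He drinks apple."
endings = ["Apple juice are very tasty and milk too.",
           "Apple can not be drunk.",
           "Apple cannot eat a human."]
assert pick_ending(context, endings, toy_score) == 1  # correct option
```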
The set of concatenated examples is fed into the model to predict the answer to questions that require reasoning. We considered several hyper-parameter settings and found that the model with the hyper-parameters in Table \ref{Table 2} yields a surprising 93.7\% accuracy, compared to the baseline accuracy of 45.6\%.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ | l | l | }
\hline \bf hyper-parameters & \bf value \\ \hline
batch size & 16 \\ \hline
learning rate & $1e-5$ \\ \hline
weight decay & 0.1 \\ \hline
adam epsilon & $1e-8$ \\ \hline
num\_train\_epochs & 5 \\ \hline
max\_steps & 5336 \\ \hline
warmup\_steps & 320 \\ \hline
\end{tabular}
\end{center}
\caption{Tuned hyper-parameters of RoBERTa for subtaskB.}
\label{Table 2}
\end{table}
\subsection{SubtaskC (Reason Generating)}
Based on the subtaskC definition, we can frame subtaskC as a conditional text generation problem. Given a nonsense statement, we expect the language model to generate commonsense reasons explaining why the statement conflicts with our knowledge. We applied the full version of OpenAI GPT-2 (Generative Pre-Training), a large-scale unsupervised language model with over a billion parameters, trained on a very large corpus of text data. The goal of this model is to automatically generate text given a sequence of natural language words. The performance of GPT-2 in a zero-shot setting is competitive on many language modeling datasets and on various tasks such as reading comprehension, translation, and question answering.
The GPT-2 authors report that the model performs well in generating coherent samples for contexts that are fairly represented in the training data. However, we observed that employing GPT-2 to generate texts against the given nonsense statements performs poorly, with unnatural topic switching and a BLEU score of 6.1732. We used the PyTorch implementation of GPT-2 (with all default hyper-parameters) provided by the Huggingface transformers library \cite{wolf2019huggingface} for natural language generation.
GPT-2 is built from transformer decoder blocks. The key idea behind GPT-2 is ``auto-regression'': the model outputs one token at a time, and after each token is produced, it is appended to the input sequence; the newly produced sequence then becomes the input to the model for the next step.
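This auto-regressive loop can be sketched in a few lines; the `model' here is a trivial stand-in that simply echoes its last token, not an actual GPT-2 decoder:

```python
def autoregressive_generate(next_token, prompt, n_tokens):
    """Generate one token at a time; each new token is appended to
    the input before the next prediction is made."""
    sequence = list(prompt)
    for _ in range(n_tokens):
        sequence.append(next_token(sequence))
    return sequence

# Toy "model" (illustrative only): always echoes the last token seen.
echo = lambda seq: seq[-1]
assert autoregressive_generate(echo, ["he", "drinks"], 3) == \
    ["he", "drinks", "drinks", "drinks", "drinks"]
```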
Notably, we submitted the original test set (including nonsense statements) for the evaluation phase on the SemEval-2020 portal and, surprisingly, ranked among the top four teams. A BLEU score of 17.2, competitive with the top team, shows that subtaskC is challenging enough to deserve more research attention. We believe that our simple and naive efforts indicate significant opportunities for future research on reasoning over commonsense knowledge.
\section{Conclusion}
We evaluated architectures for three commonsense reasoning tasks. First, we found that RoBERTa-large performs substantially better than other cutting-edge architectures (e.g., Albert, BERT, and ULMFiT) at differentiating sentences that make sense from those that do not. We reframed this classification task as a question answering task, which enhances the performance of the fine-tuned RoBERTa to 96.08\%. Second, we achieved significant results on reasoning about why false statements do not make sense. We showed that RoBERTa performs well in selecting the correct option among three to infer the commonsense reason, yielding 93.7\% accuracy compared to the 45.6\% BERT baseline. With modest effort on generating reasons to explain why a false statement conflicts with commonsense knowledge, we obtained a BLEU score of 17.2 on the original test set, which ranked us among the top four teams in the competition with very competitive results. Our experiments showed that GPT-2 performs as poorly as randomly generating a sequence of words for this task. We believe this task holds many potentials and challenges for future NLP research. As future work, we believe that ensemble learning can reduce the variance of predictions and improve predictive performance: multiple models are generated and combined to address the subtasks, reducing the likelihood of an unfortunate selection of a poor one.
\bibliographystyle{coling}
\section{Introduction}
\label{sec:introduction}
\input{tikz-flowdiagram}
The use of multiple, \textit{connected} robots in place of individually uncommunicative robots provides evident gains by facilitating the inter-robot coordination that allows for work distribution, spatial coverage, and specialization.
An increasing variety of applications leverage such networks of robots, including
logistics~\cite{tilley_Automation_2017,kamagaew_Concept_2011},
resource distribution~\cite{enright_Optimization_2011b, ma_Lifelong_2017a},
transport systems~\cite{hyldmar_Fleet_2019b, dressler_Intervehicle_2014a, ferreira_Selforganized_2010a},
manufacturing~\cite{cherubini_Collaborative_2016}, and agriculture~\cite{noguchi_Robot_2011, albani_Monitoring_2017}.
These applications depend on an orchestration of robots over time and space that allows them to jointly work towards common higher-order goals, to deconflict individual actions in shared environments, and to share information in distributed computing schemes.
Communication and the mutual exchange of information (state and control) are key to facilitating such interactions.
Early work in the multi-robot domain drew from nature-inspired paradigms~\cite{bonabeau1999swarm},
and consequently focused on devising collective behaviors that depended purely on \textit{local} interactions of robots in close proximity~\cite{nagpal2003organizing}. A variety of transmission media (e.g., infrared) are used for such near-field communication schemes~\cite{rubenstein2014programmable, pugh2006relative}.
Other nature-inspired work built on \textit{implicit} communication and self-organization through stigmergy, by which robots coordinate indirectly through traces left in the environment~\cite{beckers2000fom}.
The benefits of such peer-to-peer \textit{decentralized} communication paradigms are manyfold, in particular due to their inherent robustness and scalability~\cite{brambilla2013swarm}.
\textit{Centralized} radio-based communication architectures have become increasingly popular in various instances, especially when the task requires performance guarantees; representative applications include product pickup and delivery~\cite{grippa_Drone_2019a}, item retrieval in warehouses~\cite{enright_Optimization_2011b}, and mobility-on-demand services~\cite{spieser2016shared}.
Improvements in communication technologies, both hardware and software, have furthered more data-intensive applications, such as cloud robotics~\cite{chinchali2021network, kehoe2015survey}.
Explicit communication methods generally assume that robots can broadcast information within a local neighborhood that comprises of tens to hundreds of individuals, or that a fixed network infrastructure is available.
Yet, in reality, densely populated workspaces adversely affect communication capabilities because of practical contention over channel bandwidth and airtime~\cite{gielis2021improving}.
Such networks are additionally burdened by clutter that can induce signal fading, leading to a drastic decrease in the expected communication range.
This problem is compounded by the need for real-time transmission requirements in highly dynamic robot networks. Indeed, topologies and capabilities \textit{demanded} by robotic applications are practically hostile to radio performance (because these radio networks were not initially designed with robotics in mind).
As a consequence, the vast majority of robot applications are designed to merely work around available network technologies, and optimize their performance within the given constraints.
Our review is motivated by a lack of studies that provide a high-level overview of the interplay between communication networks and their role in robotic applications.
Figure~\ref{fig:flowdiagram} graphically demonstrates the typical architecture of a multi-robot control scheme.
As mentioned above, robot control algorithms generally do not actively employ the output of the simulated communications network.
The result is a wide array of optimizations that work in favor of the network, but often not for autonomy, or vice-versa.
Hence, we argue that a better co-optimization scheme (illustrated on the left in Figure~\ref{fig:flowdiagram}) will consider all aspects of the architecture simultaneously.
In this survey, we capture a variety of network architectures and technologies, and a variety of multi-robot applications that employ them.
A careful choice of communications architecture, medium and algorithm are key to ensuring that a given robot task can be completed.
Therefore, we will also explore some of the newer approaches that consider bypassing such hand-crafted selections, and attempt to model inter-robot communications in a data-driven fashion.
\section{Factors Influencing Robot Network Design}
\label{sec:factors}
Choosing an adequate communications architecture, medium, and algorithm are key to ensuring desired robot performance. In the following, we distill the factors that influence the robot network design choices. We elaborate upon them in the following four categories, \textit{(i)} application, \textit{(ii)} robot, \textit{(iii)} algorithm, and \textit{(iv)} environment, and give an illustrative example for each.
\textbf{The application:}
The application defines \textit{what the shared information is for}, and \textit{how the robots need to interact} to solve the problem at hand. \emph{Examples}: Real-world applications such as in environmental monitoring and agriculture require groups of robots to act over large distances (often operating with robots separated by $\sim$1000x body lengths). Such sparsely distributed robot systems, hence, necessitate networking capabilities that can span larger spaces~\cite{tarapore2020sparse}.
Other applications, such as cooperative driving~\cite{hyldmar_Fleet_2019b}, formation control~\cite{preiss2017crazyswarm}, and flocking~\cite{Tolstaya19-Flocking} require uninterrupted, situated, close-range communication for tight inter-robot coordination and control.
\textbf{The robot:} The robot (and the physical hardware) define local \textit{constraints on the frequency and format} of information to be transmitted and received. \textit{Examples:} A quadrotor that uses state information for local stability control requires an update frequency in the order of several hundred Hertz; while on-board IMUs can provide the necessary information for body stabilization, extrinsic pose estimates are still required for tasks, and must be received at relatively high rates (e.g., \SI{100}{Hz})~\cite{preiss2017crazyswarm, kushleyev2013towards}.
Lack of reliable updates naturally poses a significant risk to tasks that require tight coordination, such as outdoor flocking and formation control; while sparse outdoor flight has been demonstrated in a team of 30 drones~\cite{vasarhelyi2018optimized}, there is a dearth of results on \textit{dense and agile} outdoor flight.
Moreover, in GPS-denied environments, robots resort to on-board sensing,
and consequently, require dependable inter-robot communication to achieve group behavior.
\textbf{The algorithm:} The algorithm connects the \textit{application} to the \textit{robot}, and essentially sets conditions on the nature of information that needs to be received (e.g., global or local), and when (e.g., asynchronously or synchronously, and how often). \textit{Examples:} In allocation problems, the optimization objective is often global, and to achieve optimality, we deploy centralized algorithms that collect all robot-to-task assignment costs (e.g., expected travel times) to determine the optimal assignment (e.g., by running the Hungarian algorithm)~\cite{prorok_Redundant_2019a, khamis2015multi}. Similarly, multi-robot path planning has an optimal solution (for both makespan and flowtime objectives), but only when the computational unit has access to full system information~\cite{yu_Optimal_2015}. In the absence of full observability, robots need to resort to locally available knowledge. In \textit{decoupled} prioritized path planning, robots communicate to mutually deconflict their path plans in time-space~\cite{wu_MultiRobot_2019, cap_Asynchronous_2013, desaraju_Decentralized_2012}. Each time a robot's plan changes, its robot neighborhood changes, or a new conflict arises, the deconfliction process restarts.
\textbf{The environment:}
The environment defines \textit{under what conditions} shared information is delivered. \textit{Examples:} Are the robots operating indoors or outdoors, or both~\cite{dong2015distributed}?
Does the workspace afford a fixed (and possibly centralized) communications infrastructure, or must we instead rely on ad-hoc networking?
Is the environment cluttered with obstacles that interfere with wireless signals? What medium can we use, e.g., are the robots operating in air, under water, or in space?
What legal jurisdictions regulate the communication infrastructure?
And finally, is the communication channel safe, or can it be spoofed~\cite{gil_Guaranteeing_2017}, or robots attacked~\cite{saulnier_Resilient_2017a,guerrero-bonilla_Dense_2020}?
\input{tikz-timeline.tex}
\section{Communication Schemes}
\label{sec:comms}
In this section we discuss multi-robot communication from the perspective of the underlying communications technologies, focusing upon the challenges, limitations and optimizations that are relevant in multi-robot system networks. Fig.~\ref{fig:timeline} shows a timeline with key wireless communication mechanisms, and some representative multi-robot applications that they enabled for the first time.
\subsection{Challenges}
\textbf{Synchronicity.} Specifying robotic data flows is often the first consideration in discussing the challenges of a wireless data protocol.
For example, it is often implicitly assumed that multi-robot control algorithms are executed synchronously by every participant~\cite{vandenberg_Reciprocal_2008}. This introduces a hard timing constraint on the maximum allowable delay in message delivery between those participants.
This is, however, a feature that commonly deployed communications protocols are not designed to meet, with `best-effort' message delivery being the standard paradigm~\cite{Koutsiamanis.2018}.
\textbf{Dynamic topologies}. Hard timing constraints are often exacerbated by highly connected communications topologies that are dynamic, where a robot must communicate its status with many different (or sometimes every) participating robot(s). This can lead to a high degree of contention for radio resources since many messages may need to be sent at every control loop. While there are schemes that aim to minimize redundant data transmissions (see Section~\ref{sec:comms_aware}), it remains true that, as multi-robot networks increase in scale, communications technologies must be selected and designed specifically to manage the dynamics of the application~\cite{Fan.2018}, something that is generally overlooked in robotic networks today.
\textbf{Message frequency.} Bandwidth is often employed as a metric to specify the demands on a communications link~\cite{Sarr.2008}. However, this is often an insufficient characterization by itself, since the underlying technology may have significant overheads per message, and robot teams often depend on low-latency messaging as well. This is particularly true for ad-hoc networks where there is no central entity enforcing message scheduling, to the extent that many communications protocols will not approach their rated bandwidths in highly connected ad-hoc topologies, where overheads such as contention dominate radio resource consumption~\cite{gielis2021improving}.
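To make the point concrete, the following sketch contrasts goodput with the airtime actually consumed under a fixed per-message overhead; the 80-byte overhead figure is illustrative and not tied to any specific protocol:

```python
def effective_throughput(payload_bytes, overhead_bytes, msgs_per_sec):
    """Goodput (payload bits/s) versus raw airtime consumed, given a
    fixed per-message overhead (headers, preambles, ACKs)."""
    goodput = payload_bytes * 8 * msgs_per_sec
    airtime = (payload_bytes + overhead_bytes) * 8 * msgs_per_sec
    return goodput, airtime

# Small, frequent state updates: 50-byte payload at 100 Hz with an
# assumed 80 bytes of per-message protocol overhead.
goodput, airtime = effective_throughput(50, 80, 100)
assert goodput == 40_000    # 40 kbit/s of useful data...
assert airtime == 104_000   # ...costs 104 kbit/s on the air.
```

For small, frequent messages, well over half of the consumed radio resource is overhead in this example, which is why a bandwidth figure alone under-specifies the link requirement.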
\textbf{Connectivity.}
Since a greater connectivity range implies reachability and information exchange with more robots, it has an obvious impact on the overall messaging rate any specific robot must handle.
The geographical density of robots must be considered in the discussion of range, and there are two key factors of interest. Firstly, as ranges increase, radio-based links are more prone to fading and interference~\cite{Yan.2019} even when transmission power is commensurately increased. Secondly, robot control algorithms often assume a fixed range~\cite{Khodayimehr.2019}, which may result in greater inter-connectivity, and sometimes increased messaging rates in dense scenarios.
\textbf{Dynamic routing.} The case where the required range of communication exceeds the capabilities of the on-board radio hardware must also be considered, as it implies a mesh-type network in which a message must traverse multiple robots (network nodes). This first requires planning the robots' paths (discussed in Section~\ref{subsec:comm-aware-planning}), before accounting for the computational and protocol overheads of robots processing messages other than their own. The problem then becomes one of message routing decisions over dynamic topologies. The routing decision problem is central to ad-hoc mesh networks in general; it is only made more challenging by the potential for rapid shifts in communications topology, especially in highly mobile or large-scale robotic scenarios~\cite{Gupta.2016}. Hard timing guarantees, in the range required by robotic control, are not currently available at non-trivial scales (especially with multi-hop routing over dynamic topologies), though some attempts have been made in this direction~\cite{Deng.2021}.
\textbf{Operational environment.} Robotic networks will invariably be required to operate in environments with external noise and interference, which cause unpredictable impacts on link quality. This informs the selection of communications protocols, since some protocols operate in a licensed spectrum with reduced external interference, or are otherwise less prone to external noise due to atmospheric attenuation (60GHz). Doppler shift requires similar consideration, because many communications technologies fail at high relative velocities. Generally speaking, protocols that depend upon fine-grained frequency division multiplexing are more prone to Doppler related errors~\cite{Elmezughi.2021c}, and such schemes are often used in high bandwidth techniques.
\subsection{Communications Scheme Selection}
\label{subsec:comms_protocols}
Despite considerable research interest, there are no current wireless data standards explicitly designed for exchanging information between autonomous robots~\cite{IEEE.2022}. Currently deployed robot-to-robot networks (such as ~\cite{vasarhelyi2018optimized}) depend upon more generic wireless data networking standards which are not typically optimized for the challenges discussed above. In the absence of a specific standard, we will discuss the strengths and weaknesses of existing technologies for the multi-robot control application.
Ad-hoc networks map well to the communications patterns required of decentralized robotic control; the most relevant for this survey are Mobile Ad-Hoc Networks (MANETs), which address the problem of facilitating communication between mobile nodes without coordination from infrastructure~\cite{D.Ramphull.2021}. More specific forms of interest are Vehicular Ad-Hoc Networks (VANETs)~\cite{AlHeety.2020} and Flying Ad-Hoc Networks (FANETs)~\cite{Srivastava.2021}, where the former generally address automotive use cases and the latter drones and UAVs; both are more exposed to the dynamic conditions expected of robotic ad-hoc networks. Local area networking technologies are well suited to ad-hoc networking.
In contrast with ad-hoc operation, infrastructure networks map more closely to a centralized robotic control, where communications patterns are more similar to traditional bandwidth-focused networking applications. Despite this, hard latency requirements and rapid robot movement require specific mechanisms at the protocol level, which are not common for either Cellular or local area standards.
\subsubsection{Local Area Networks}
The IEEE 802.11 protocol suite, commonly known as `wi-fi', is frequently used due to abundant hardware availability, IP networking interoperability, high data rates, and license-free operation. It is also capable of both infrastructure and ad-hoc operation, which simplifies deployment from laboratory environments into the real world. The failures of 802.11 become apparent during such deployments~\cite{Tahir.2021, HameedMir.2014}, because larger ranges, robot counts, and velocity-induced Doppler shift cause lower message delivery rates than are seen in static 802.11 deployments. 802.11p, and its successor 802.11bd, have both introduced specific modifications to the \textit{physical} layer~\cite{Naik.2019} to more robustly handle both range and Doppler-induced problems for VANET use cases, which potentially transfer to robotic control as well.
IEEE 802.15.4 has been commonly used as the basis for a number of different higher-layer protocols, including ZigBee.
It has been deployed in the context of wireless sensor networks and multi-robot systems~\cite{Houda.2015} due to hardware availability, license-free operation, low power usage, and flexible communications models that permit both IP-based and more simplified serial-like messaging. The major drawbacks are relatively low range and data rates. LoRaWAN~\cite{Erturk.2019} is an attractive alternative that maintains the positive aspects of 802.15.4 for the robotic use cases, but with a focus upon long range transmission (up to 16km) and a physical layer that is resilient to Doppler errors; however, its communications model is infrastructure based.
Both 802.11's and 802.15's underlying dependence upon the CSMA/CA collision avoidance scheme allows for the minimization of contention related losses without an authoritative central scheduler~\cite{Ziouva.2002}; however, they have unbounded maximum latency on message delivery, and reduced message delivery rates with higher numbers of robots on the network. LoRaWAN uses a pure ALOHA protocol mechanism~\cite{Adelantado.2017}, and therefore scales even more poorly than the IEEE schemes. These characteristics make these protocols unsuitable for deployment on robots without modifications; fortunately, there are techniques proposed in the literature to help make these technologies more scalable~\cite{Shahin.2018, Petrosky.2019}.
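The scaling gap between these access schemes is visible even in the classical textbook model: pure ALOHA throughput $S = G e^{-2G}$ peaks at only $1/(2e)$, roughly 18.4\% of channel capacity. A minimal sketch of that curve:

```python
import math

def pure_aloha_throughput(G):
    """Classical pure-ALOHA channel throughput S = G * exp(-2G),
    where G is the offered load in frames per frame-time."""
    return G * math.exp(-2 * G)

# Throughput peaks at G = 0.5 with S = 1/(2e) ~ 18.4% of channel
# capacity, which is why pure-ALOHA schemes scale poorly under load.
peak = pure_aloha_throughput(0.5)
assert abs(peak - 1 / (2 * math.e)) < 1e-12
assert pure_aloha_throughput(0.25) < peak > pure_aloha_throughput(1.0)
```

Offering more load beyond the peak only increases collisions, so delivery rates fall as robot counts grow, matching the scalability concerns raised above.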
All of the protocols mentioned within this section share similar routing problems when it comes to highly dynamic topologies, in that they depend upon a network-layer routing mechanism to direct traffic without reliably converged information about the current disposition of other nodes. This issue is well covered by \cite{ShumeyeLakew.2020}, which categorizes and surveys many of the different approaches towards this routing problem. \cite{Hentati.2020} includes the routing issue amongst a general survey of the issues in UAV networking.
\subsubsection{Cellular Networks}
Cellular networks avoid many of the problems encountered by local networking standards by making use of a nearly universal infrastructure-based communications model, licensed radio spectrum access, as well as economics of scale, all of which make extremely complex base station hardware and protocols commonplace. The centralized message scheduler and sophisticated radio resource management~\cite{Chataut.2020, Liu.2016} are significantly more scalable than typical ad-hoc networks. These characteristics appear to be a good fit for robot control, however, the financial cost of network access, coupled with limited flexibility in logical network configuration have limited robotic deployments outside controlled environments.
For decentralized robot control, peer to peer traffic is routed through the infrastructure, inducing a minimum latency overhead~\cite{Chen.2018b} that could exceed timing constraints. 4G in particular has an access latency on the order of 50ms~\cite{Tahir.2021}. Additionally, cellular standards are naturally dependent upon the presence of infrastructure, which cannot always be assumed. Furthermore, the radiation pattern of cellular networks is typically setup assuming ground based users, and so aerial robots could experience degraded performance due to leaving the vertical coverage of cell antennas~\cite{Zeng.2019}.
4G LTE supports direct Device-to-Device (D2D) modes that permit devices to communicate with each other in a local region by reserving some subset of radio resources in their local area from the network operator. LTE-V2V is a variant of this specifically for automotive use cases ~\cite{MolinaMasegosa.2017}. This avoids the overhead of using the infrastructure as a relay but also has a cost in minimum association time and is dependent upon the network operator ceding resources on demand. Some proposals extend D2D cellular radio techniques into the unlicensed spectrum, or specifically, licensed sub-bands~\cite{Miao.2021}, however, this also has not been widely used in real world systems due to the recency of the specification and a dearth of capable hardware.
5G introduced the Ultra-Reliable Low Latency Communications (URLLC) service to address the issues of low-latency medium access~\cite{Feng.2021} in a centralized or decentralized manner, with a variety of proposed physical-layer~\cite{Le.2021} and MAC-layer~\cite{Ali.2021} techniques. Despite the promise of these approaches, and although 3GPP Release 15 (including URLLC) was published in late 2017, roll-out of these technologies into real-world networks has been limited, and considerable research is ongoing as to the best implementation methods for 5G's technical goals~\cite{NavarroOrtiz.2020}.
\subsubsection{Hybrid Schemes}
Due to advancements in radio hardware, the most recent 802.11 revision, 802.11ax, has specified a physical radio resource allocation scheme that is far more similar to Cellular standards, with multiple fine-grained frequency divisions available within a single logical channel. Many proposals have been made to have future 802.11 standards directly inter-operate with Cellular networks to leverage the best capabilities of both~\cite{Lagen.2020}. For robot networks, this may prove highly valuable, permitting human control overrides over Cellular infrastructure and low-latency robot-to-robot communications, with a unified logical network addressing system for easier lab-to-world deployment and efficient operation in control schemes whose requirements evolve over a single deployment. Though exciting, these proposals are still in their nascent phase and many issues remain to be addressed.
Though the state-of-the-art use of OFDMA in both 5G and 802.11ax significantly alleviates the contention problem, owing to the larger number of transmission slots made available, extremely dense ad-hoc robot networks may still run into the limits of CSMA/CA. Non-orthogonal multiple access (NOMA) is a very promising technology that could further extend the effective simultaneous radio resources available~\cite{DiZhang.2019}, thereby reducing the contention problem. In cellular systems, the network operator retains authoritative control over its radio resources, so grant-based schemes induce significant overhead despite the reduced contention, though grant-free access has attracted significant attention~\cite{Shahab.2020}. Even in grant-free schemes, NOMA still requires coordination to ensure that node configurations do not overlap; hence, message loss due to contention remains an open problem in decentralized networks without a coordinating infrastructure.
\section{Communication-Aware Algorithms}
\label{sec:comms_aware}
Regardless of which underlying communication scheme or protocol is employed, unlimited and unconstrained communication cannot be assumed for any interactive scenario.
A significant amount of literature in multi-robot applications, however, has generally focused on designing control schemes that do not explicitly model this dependency.
This is reflected in the vast majority of literature in robot flocking \cite{soria2021distributed,zhou2021decentralized,tordesillas2021mader,luis2020online}.
We argue that the problem becomes more pronounced in cases where the robots need to deconflict and replan their motions in tight and constrained spaces \cite{zhou2021decentralized,tordesillas2021mader}.
While some consideration for communication asynchronicity is made in some of the more recent works \cite{tordesillas2021mader}, the challenge is generally far from being solved.
One straightforward approach to handle this is to simply \textit{reduce} the amount of data (frequency, packet size, etc.) that needs to be communicated between agents.
In an exploration problem, this is often done through various novelty metrics that determine whether a new datapoint needs to be communicated \cite{kepler_approach_2020}.
Trawny \etal{} \cite{trawny_cooperative_2009} have proposed localization estimators that perform well by quantizing the transmitted information to very small packets, thereby tackling severe link constraints.
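As a concrete illustration of this data-reduction style, the sketch below gates transmissions on a simple novelty metric and coarsely quantizes the payload before sending. The threshold, the uniform quantizer, and the class interface are our own illustrative assumptions, not the schemes of the cited works.

```python
import numpy as np

class NoveltyGatedLink:
    """Transmit a state estimate only when it is sufficiently 'novel' relative
    to the last value the neighbours received, and quantize it into a small
    packet. Illustrative sketch only: threshold and step are arbitrary."""

    def __init__(self, threshold=0.5, step=0.25):
        self.threshold = threshold   # novelty gate (same units as the state)
        self.step = step             # quantization step for transmitted values
        self.last_sent = None        # what the neighbours currently hold

    def maybe_send(self, estimate):
        """Return a quantized packet, or None if the update is not worth sending."""
        if self.last_sent is not None:
            novelty = np.linalg.norm(estimate - self.last_sent)
            if novelty < self.threshold:
                return None          # suppress: neighbours' copy is close enough
        packet = np.round(estimate / self.step) * self.step  # coarse quantization
        self.last_sent = packet
        return packet
```

A link configured this way stays silent while the estimate drifts within the novelty threshold, and emits a small quantized packet only on significant change.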
On the other hand, there is also a sustained research interest in modeling the communication channels between the agents, and factoring that as a constraint into the motion planning problem.
This is done primarily to ensure robustness of a control scheme against imperfect and noisy communications.
Alternatively, planning schemes have also considered communication as a sub-task (almost as if ``scheduling'' communications at intervals).
Finally, there are several approaches that consider a joint optimization scheme, where path planning and communication planning are carried out in tandem.
We divide this body of work into these three broad styles.
\subsection{Communication-Aware Planning}
\label{subsec:comm-aware-planning}
As mentioned earlier, planning robot motions or trajectories that consider some model of the underlying communication links is an active area of research.
Muralidharan and Mostofi provide a comprehensive overview of such methods \cite{muralidharan_communication-aware_2021}.
For instance, several authors have considered the task of coverage \& formation control by a team of robots.
Evidently, these domains require explicit factoring of communication constraints into the planning problem \cite{kepler_approach_2020}.
One way this is approached is by analyzing the stability of a formation under various communication link latencies \cite{zeng_joint_2019}.
This can then be integrated into the control problem for a more reliable system \cite{zeng_wireless_2018, zeng_joint_2019} that is ``aware'' of such latencies.
Formation control laws are also explored that allow agents to maintain some degree of coordination while respecting limited communication ranges of their neighbors \cite{dimarogonas_bounded_2010,ji_distributed_2007}.
This approach is also sometimes utilized in the context of cooperative target localization \cite{stachura_cooperative_2011}, or under constraints of 3G/4G mobile networks \cite{olivieri_de_souza_coordinating_2015}.
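To illustrate the flavour of such latency- and range-aware laws, a toy consensus-style formation update is sketched below. The gain, the range-limited neighbourhood test, and the use of stale (delayed) neighbour positions are illustrative simplifications, not the certified-stability controllers of the cited works.

```python
import numpy as np

def formation_step(positions, stale_positions, offsets, comm_range, gain=0.1):
    """One consensus-style formation-control update in which each robot
    (i) only uses neighbours inside its communication range, and
    (ii) acts on *stale* neighbour positions, mimicking link latency."""
    n = len(positions)
    new_positions = positions.copy()
    for i in range(n):
        update = np.zeros_like(positions[i])
        for j in range(n):
            if i == j:
                continue
            # range test: neighbour j is usable only if its (last known)
            # position lies within robot i's communication range
            if np.linalg.norm(positions[i] - stale_positions[j]) > comm_range:
                continue
            desired = offsets[i] - offsets[j]            # desired relative position
            error = (stale_positions[j] + desired) - positions[i]
            update += gain * error
        new_positions[i] = positions[i] + update
    return new_positions
```

With sufficiently small gains and mild staleness, repeated application drives the team toward the desired offsets; out-of-range neighbours are simply ignored, which is the bounded-communication behaviour discussed above.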
Path planning has also been developed such that connectivity with a subset of base-stations \cite{mardani_communication-aware_2019}, or with some agents \cite{hsieh2008maintaining,schouwenaars2006multivehicle} is maintained.
Similar multi-robot planning methods with connectivity constraints use ant colony optimization (ACO) \cite{fridman2008distributed} or genetic algorithms \cite{hayat2017multi}.
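A minimal version of connectivity-constrained planning can be sketched as a breadth-first search that refuses to leave base-station coverage. The grid world, the Chebyshev-distance range model, and the single-hop coverage test are illustrative assumptions rather than the formulations of the cited works.

```python
from collections import deque

def connected_path(grid, start, goal, stations, comm_range):
    """Breadth-first search on a 4-connected grid (0 = free cell) that only
    visits cells within comm_range (Chebyshev distance) of some base station."""
    def connected(cell):
        return any(max(abs(cell[0] - s[0]), abs(cell[1] - s[1])) <= comm_range
                   for s in stations)

    rows, cols = len(grid), len(grid[0])
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0
                    and nxt not in parent and connected(nxt)):
                parent[nxt] = cell
                frontier.append(nxt)
    return None   # no path exists that stays connected
```

Tightening `comm_range` shrinks the searchable region, so a goal that is geometrically reachable may become unreachable once connectivity must be maintained.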
\subsection{Plan-Aware Communications}
In heavily constrained spaces, it is often desirable to design a network architecture that considers the planned path, and seeks opportunities to communicate therein.
Underwater robots, for instance, have very limited communication capacities \cite{doniec2010using,vasilescu2005data}, and this is an active field of interest; Zolich \etal{} \cite{zolich2019survey} provide a comprehensive survey on the various challenges and solutions.
Hollinger \etal{} have considered scheduling algorithms for underwater robotic sensor networks and show how path planning algorithms depend on these \cite{hollinger_communication_2011,anderson_communication_2021}.
Since bandwidth and interference constraints are much more severe in these environments, such scheduling algorithms often model the \textit{value} of communicating at a particular timestep \cite{anderson_communication_2021,best_planning-aware_2018}.
This also plays a role in determining whether communicating has a positive impact on the state of the robot system \cite{alshehri_modeling_2021,unhelkar2016contact}, and is also studied as an online decision problem \cite{tsiogkas_towards_2019}, and an optimization problem that considers when/what to communicate \cite{marcotte2020optimizing}.
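The scheduling intuition above can be captured by a toy value-of-information rule: send only when the expected uncertainty reduction at the receiver outweighs the message cost. The linear value model below is an assumption for illustration, not the formulations of the cited works.

```python
def should_communicate(var_if_silent, var_if_sent, cost, value_per_var=1.0):
    """Communicate only when the expected reduction in the teammate's estimate
    variance outweighs the bandwidth/energy cost of the message."""
    expected_gain = value_per_var * (var_if_silent - var_if_sent)
    return expected_gain > cost

def best_slot(var_profile, var_after_send, cost):
    """Among candidate timesteps (with predicted variances var_profile), pick
    the one where sending is most valuable, or None if it never pays off."""
    gains = [v - var_after_send for v in var_profile]
    best = max(range(len(gains)), key=lambda t: gains[t])
    return best if gains[best] > cost else None
```

In severely bandwidth-limited media such as the underwater channel, rules of this shape let a node stay silent through low-value timesteps and spend its scarce transmissions where they matter most.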
\subsection{Joint Planning}
Several of the works listed in the previous subsections may also be seen as jointly optimizing for communication quality as well as path qualities.
However, there are other approaches that attempt to explicitly model this optimization problem.
For instance, Kantaros and Zavlanos \cite{kantaros_distributed_2016} propose a scheme that \textit{alternates} between the two optimization problems sequentially.
The nature of this scheme often makes it difficult to prove hard guarantees regarding optimality; however, a more hybrid approach in which the two controllers interact can offer more guarantees on network integrity and available data-rates \cite{zavlanos_network_2013}.
A joint optimization scheme, on the other hand, can formulate this problem well; for instance, using an LQ (linear-quadratic) form can additionally offer robustness guarantees as well \cite{kassir_decentralised_2012}.
Yet another means of joint optimization is to consider the system as a cyber-physical system (CPS), where the `cyber' controller handles the communications domain, and the `physical' controller handles the kinematics of the robot \cite{fink_robust_2012}.
Such models allow designers to factor in various other elements of a CPS, such as dynamically adapting one of the subsystems (communication capacity) while still maintaining the coupling with the other \cite{le_ny_adaptive_2012,stephan_concurrent_2017}.
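The structure of alternating schemes is easiest to see on a toy quadratic objective coupling a scalar ``path'' variable $p$ and ``communication'' variable $c$: the sketch below holds one block fixed and minimizes over the other, in closed form. The objective is entirely illustrative.

```python
def alternate(target, capacity, coupling=1.0, iters=50):
    """Toy alternating optimization of
        J(p, c) = (p - target)^2 + coupling*(p - c)^2 + (c - capacity)^2,
    where p abstracts a path parameter and c a communication parameter.
    Each step minimizes J in one variable with the other held fixed."""
    p, c = 0.0, 0.0
    for _ in range(iters):
        # path step: dJ/dp = 2(p - target) + 2*coupling*(p - c) = 0
        p = (target + coupling * c) / (1.0 + coupling)
        # communication step: dJ/dc = 2*coupling*(c - p) + 2(c - capacity) = 0
        c = (coupling * p + capacity) / (coupling + 1.0)
    return p, c
```

For this convex toy problem the alternation converges to the joint optimum; for the nonconvex planning/communication objectives in the literature, convergence to a joint optimum is exactly what is hard to guarantee, which motivates the hybrid and joint formulations above.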
\section{Leveraging Machine Learning for Communication}
\label{sec:learning}
Designing bespoke, handcrafted communication protocols and behaviors is tedious and difficult. Firstly, numerous works point to the hardness of synthesizing decentralized policies (that have to operate in a partially observable regime), even when a centralized template is known~\cite{halsted_Survey_2021, amato2013decentralized}, and they leave the question of \textit{how} (what, when, and to whom) to communicate unanswered. Secondly, the vast majority of existing robot communication strategies are based on idealistic operational assumptions, and besides a few specialized approaches to dealing with message loss, delay, or corruption, e.g.,~\cite{gil_Guaranteeing_2017, saulnier_Resilient_2017a, parker_ALLIANCE_1998}, it is not at all clear how to approach such problems in a manner that is transferable across applications. Leveraging machine learning methods is a promising new avenue to tackle some of these challenges.
\subsection{Learning Communication Mechanisms}
Message routing decisions in robotic mesh networks are complicated by highly dynamic topologies. While many routing mechanisms exist in ad-hoc networks, these generally depend upon relatively slowly changing network conditions to function effectively. Many manually specified heuristic methods exist~\cite{ShumeyeLakew.2020}; however, these may lead to suboptimal decisions, as they may be built upon incorrect assumptions about the target network environment. Learning methods provide an attractive alternative and have been explored in some depth in routing generally~\cite{Mammeri.2019}. An interesting example in the context of FANETs can be found in Zheng \etal{}~\cite{Zheng.2018}, who propose RLSRP, which applies an online reinforcement learning method to the routing decision problem and shows improved performance across several metrics, including delivery latency.
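To give a flavour of such learned routing, the sketch below maintains tabular Q-values over next-hop choices and updates them online from delivery feedback. RLSRP itself is considerably more elaborate; the state, reward, and exploration choices here are our own simplifications.

```python
import random

class QRouter:
    """Tabular Q-learning over next-hop choices: the table maps
    (node, destination, next_hop) to an estimated delivery value, learned
    online as packets succeed or fail."""

    def __init__(self, alpha=0.3, gamma=0.9, eps=0.1):
        self.q = {}
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, node, dest, neighbours):
        """Epsilon-greedy next-hop selection among current neighbours."""
        if random.random() < self.eps:
            return random.choice(neighbours)     # explore
        return max(neighbours, key=lambda n: self.q.get((node, dest, n), 0.0))

    def update(self, node, dest, hop, reward, next_neighbours):
        """Standard Q-learning backup after observing the hop's outcome."""
        best_next = max((self.q.get((hop, dest, n), 0.0) for n in next_neighbours),
                        default=0.0)
        key = (node, dest, hop)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward + self.gamma * best_next - old)
```

Because the table adapts continuously, a router of this shape can track topology changes that would invalidate the assumptions baked into a fixed heuristic.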
Channel modeling and resource allocation are also key networking problems that are challenging for first-principles methods and can be improved with learning~\cite{Bithas.2019}. Unsupervised learning has been applied to channel modeling, which allows for the optimization of transmission power by accurately estimating the quality of links to other network participants~\cite{Wang.2019c}.
\subsection{Learning Communication Behaviors}
Learning-based methods have proven effective at designing robot control policies for an increasing number of tasks~\cite{rajeswaran_Generalization_2017, tobin_Domain_2017}. Recent work utilizes a data-driven approach to solve multi-robot problems, for example for multi-robot motion planning in the continuous domain~\cite{everett_Motion_2018} or path finding in the discrete domain~\cite{sartoretti_PRIMAL_2019}.
Yet, research on learning how to \textit{synthesize robot-to-robot communication policies} is nascent. From the point of view of an individual robot, its local decision-making system is incomplete, since other agents' unobservable states affect future values. While the manner in which information is shared is crucial to the system's performance, the problem is not well addressed by hand-crafted (bespoke) approaches. Learning-based methods, instead, promise to find solutions that balance optimality and real-world efficiency, by bridging the gap between the qualities of full-information centralized approaches and partial-information decentralized approaches~\cite{prorok2021holy}.
Key to the decentralization of centralized (optimal) policies is the property of \textit{permutation equivariance}. Permutation equivariance ensures that at the robot network level, the set of actions automatically rearranges itself as the agents swap order. One of the earliest works that satisfies this property is~\cite{paulos2019decentralization}. This was concurrently developed by a line of work that builds on Graph Neural Networks (GNNs), which are permutation equivariant by design~\cite{scarselli_graph_2009, Gama19-Architectures, prorok_Graph_2018}. GNNs have since then shown promising results in learning explicit communication strategies that enable complex multi-agent coordination~\cite{modgnn,khan_2020, tolstaya_Learning_2019a, li_Graph_2020a}.
When deploying GNNs in the context of multi-robot systems, individual robots are modeled as nodes, the communication links between them as edges, and the internal state of each robot as graph signals. By sending messages over the communication links, each robot in the graph indirectly receives access to the global state. A key attribute of GNNs is that they compress data as it flows through the communication graph.
In effect, this compresses the global state, affording agents access to relevant encodings of global data.
Since encodings are performed locally (with parameters that can be shared across the entire graph), the policies are intrinsically decentralized.
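The aggregation-plus-shared-weights structure, and the permutation equivariance it induces, can be seen in a single linear message-passing layer (a deliberately minimal sketch; real architectures add nonlinearities, multiple hops, and learned weights):

```python
import numpy as np

def gnn_layer(adj, x, w_self, w_neigh):
    """One linear message-passing layer: every robot combines its own state
    with the sum of its neighbours' states, using the same weight matrices
    everywhere. Shared weights plus purely local sums make the computation
    intrinsically decentralizable."""
    return x @ w_self + adj @ x @ w_neigh
```

Relabelling the robots with a permutation matrix $P$ permutes the outputs identically, i.e., $\mathrm{gnn}(PAP^\top, Px) = P\,\mathrm{gnn}(A, x)$, which is exactly the permutation-equivariance property discussed above.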
In cases where the downstream task is tightly coupled with the communication requirements, it is beneficial to optimize the communication strategy jointly with perception and action policies. This was done in~\cite{hu_scalable_2022}, for multi-robot flocking, and in~\cite{li_Graph_2020a}, for multi-agent path planning. These frameworks implement a cascade of a convolutional neural network (CNN) and a GNN, which they jointly train so that image features and communication messages are learned in conjunction to better address the specific task.
Recent work also shows how GNNs can be augmented by \textit{attention modules} to produce \textit{message-aware} communication strategies that allow robots to discern between important and less important message elements~\cite{li_Messageaware_2021}.
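A bare-bones version of such message weighting is a dot-product soft attention over incoming messages; the dimensions and the scoring function are illustrative assumptions, not the architecture of the cited work.

```python
import numpy as np

def attention_aggregate(query, messages, keys):
    """Soft-attention weighting of incoming messages: each robot scores
    neighbour messages against its own query vector and forms a convex
    combination, emphasising the most relevant content."""
    scores = keys @ query                    # one relevance score per message
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ messages
```

A message whose key aligns strongly with the receiver's query dominates the aggregate, while low-relevance messages are effectively attenuated.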
Approaches from within the multi-agent reinforcement learning (MARL) community tackle the learning of continuous communication protocols by formulating the problem as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP)~\cite{sukhbaatar_learning_2016, singh2018learning, jiang_learning_2018}. The work in~\cite{das_tarmac:_2019} learns a targeted multi-agent communication strategy by exploiting a signature-based soft attention mechanism (whereby \textit{message relevance} is learned). Similarly, the work in~\cite{serra-gomez_whom_2020} has each robot learn to reason about other robots’ states and to more efficiently communicate trajectory information (i.e., when and to whom), and applies the solution to the problem of collision avoidance. While efficient \textit{cooperative} communication strategies are desirable, the work in~\cite{blumenkamp_Emergence_2020b} shows how separate robot teams can learn to communicate with \textit{adversarial} strategies that contribute to manipulative (non-cooperative) behaviors. Clearly, underlying training paradigms need to be carefully designed to avoid such outcomes.
\section{Challenges and Open Problems}
\label{sec:future}
We finally present some avenues of research and engineering that are worth exploring in order to address our critiques discussed so far.
We categorize them into four broad \textit{Open Problems}.
\textbf{1. Co-design.}
An emergent theme throughout this survey is the lack of approaches that co-design the robot and its communication capabilities.
A variant of this concept \cite{mechraoui2009co,mechraoui2011co} considers a basic parallel reconfiguration of a network as well as the robot's controller that can be beneficial when the robot moves across network stations.
However, a true co-design scheme will jointly evolve all layers of the networking stack to favor the robotic task at-hand.
The design of a meta-system that can reason about both robotic requirements and network capabilities, and dynamically throttle each, may be essential to the safe deployment of robots into the real world.
Any robotic control algorithm that uses explicit communications is vulnerable to failure if the network unexpectedly under-delivers, and performs sub-optimally if the network over-delivers; managing this resource allocation problem in a real-world multi-robot setting is a subject we will tackle in our future work.
\textbf{2. Data-driven optimization.}
Machine learning, and specifically, reinforcement learning, can drive the development of multi-robot communications into new and interesting paradigms.
Existing approaches that already learn what/when to send (and whom to send to) \cite{serra-gomez_whom_2020, li_Messageaware_2021, paulos2019decentralization} still often depend on hand-designed architectures and specific task groups.
With sufficiently large datasets, novel machine-learning architectures also have the potential to learn to optimize multiple aspects of multi-robot systems at once (e.g., perception, action and communication \cite{hu_scalable_2022}).
\textbf{3. Sim-to-real of robot networks.}
The problems in sim-to-real transfer of robot coordination strategies are generally exacerbated by the ``reality gap'' found in communications~\cite{prorok2021holy}.
Practical communication links suffer from message dropouts, asynchronous and out-of-order reception, and decentralized mesh topologies that may not offer reliability guarantees.
Since multi-robot policies are typically trained in a synchronous fashion, these factors are hard to capture and simulate~\cite{blumenkamp2021framework}.
Furthermore, very few studies have captured any of these network effects in a large-scale setting \cite{gielis2021improving}.
Consequently, we find that embedding the reality gap of robot networking into data-driven approaches to multi-robot planning is an open research domain.
\textbf{4. New technologies/schemes.}
As discussed in Section~\ref{subsec:comms_protocols}, there is a need for wireless data standards that specifically target the communication requirements of connected robots.
The \textsc{IEEE 1920} working group, formed to propose a protocol intended for autonomous robotic networks~\cite{IEEE.2022}, is a significant step in this direction.
Such a protocol is likely to be founded on 802.11bd, since it is already a significant leap forward \cite{Naik.2019} from the legacy 802.11p standard used in V2V communications today.
Additionally, future 5G updates and 6G cellular communications promise dramatic improvements that hold the potential to bring cloud- and edge-computing at the forefront of many data-intensive multi-robot collaborations.
Finally, we also note that geographic routing in FANETs may be an enabling technology for practically dealing with highly dynamic routing topologies.
This will, however, require holistic developments in robot control algorithms that work in tandem to avoid an additional information distribution problem.
\section{Conclusion}
\label{sec:conclusion}
Through this manuscript, we have presented a survey of communication technologies and their role in enabling multi-robot applications.
We have broadly covered the various technologies that have played key roles in networked robotics, and have also discussed how state-of-the-art robot applications typically deal with network constraints.
Our approach to this has been mostly critical, and thus, has identified several deficiencies in the way robotics and networks have evolved.
Towards the end, we also cover machine learning approaches and their role in developing data-driven communication strategies.
We conclude the article with a list of challenges and open problems that the community currently faces, and also provide an outlook for how learning-based approaches can tackle several of them.
\section{Acknowledgments}
\label{sec:acks}
Jennifer Gielis was supported by an EPSRC Doctoral Training studentship. The authors also gratefully acknowledge support from ARL DCIST CRA W911NF-17-2-0181 and the European Research Council (ERC) Project 949940 (gAIa).
\subsection*{Declarations}
\textbf{Conflict of interest.}
The authors declare that they have no conflicts of interest.
\textbf{Human and Animal Rights, and Informed Consent.}
This article does not contain any studies with human or animal subjects performed by any of the authors.
\printbibliography[title=References]
\end{document}
\section{Electronic Submission}
\label{submission}
Submission to ICML 2021 will be entirely electronic, via a web site
(not email). Information about the submission process and \LaTeX\ templates
are available on the conference web site at:
\begin{center}
\textbf{\texttt{http://icml.cc/}}
\end{center}
The guidelines below will be enforced for initial submissions and
camera-ready copies. Here is a brief summary:
\begin{itemize}
\item Submissions must be in PDF\@.
\item Submitted papers can be up to eight pages long, not including references, plus unlimited space for references. Accepted papers can be up to nine pages long, not including references, to allow authors to address reviewer comments. Any paper exceeding this length will automatically be rejected.
\item \textbf{Do not include author information or acknowledgements} in your
initial submission.
\item Your paper should be in \textbf{10 point Times font}.
\item Make sure your PDF file only uses Type-1 fonts.
\item Place figure captions \emph{under} the figure (and omit titles from inside
the graphic file itself). Place table captions \emph{over} the table.
\item References must include page numbers whenever possible and be as complete
as possible. Place multiple citations in chronological order.
\item Do not alter the style template; in particular, do not compress the paper
format by reducing the vertical spaces.
\item Keep your abstract brief and self-contained, one paragraph and roughly
4--6 sentences. Gross violations will require correction at the
camera-ready phase. The title should have content words capitalized.
\end{itemize}
\subsection{Submitting Papers}
\textbf{Paper Deadline:} The deadline for paper submission that is
advertised on the conference website is strict. If your full,
anonymized, submission does not reach us on time, it will not be
considered for publication.
\textbf{Anonymous Submission:} ICML uses double-blind review: no identifying
author information may appear on the title page or in the paper
itself. Section~\ref{author info} gives further details.
\textbf{Simultaneous Submission:} ICML will not accept any paper which,
at the time of submission, is under review for another conference or
has already been published. This policy also applies to papers that
overlap substantially in technical content with conference papers
under review or previously published. ICML submissions must not be
submitted to other conferences and journals during ICML's review
period.
Informal publications, such as technical
reports or papers in workshop proceedings which do not appear in
print, do not fall under these restrictions.
\medskip
Authors must provide their manuscripts in \textbf{PDF} format.
Furthermore, please make sure that files contain only embedded Type-1 fonts
(e.g.,~using the program \texttt{pdffonts} in linux or using
File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3)
might come from graphics files imported into the document.
Authors using \textbf{Word} must convert their document to PDF\@. Most
of the latest versions of Word have the facility to do this
automatically. Submissions will not be accepted in Word format or any
format other than PDF\@. Really. We're not joking. Don't send Word.
Those who use \textbf{\LaTeX} should avoid including Type-3 fonts.
Those using \texttt{latex} and \texttt{dvips} may need the following
two commands:
{\footnotesize
\begin{verbatim}
dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi
ps2pdf paper.ps
\end{verbatim}}
It is a zero following the ``-G'', which tells dvips to use
the config.pdf file. Newer \TeX\ distributions don't always need this
option.
Using \texttt{pdflatex} rather than \texttt{latex}, often gives better
results. This program avoids the Type-3 font problem, and supports more
advanced features in the \texttt{microtype} package.
\textbf{Graphics files} should be a reasonable size, and included from
an appropriate format. Use vector formats (.eps/.pdf) for plots,
lossless bitmap formats (.png) for raster graphics with sharp lines, and
jpeg for photo-like images.
The style file uses the \texttt{hyperref} package to make clickable
links in documents. If this causes problems for you, add
\texttt{nohyperref} as one of the options to the \texttt{icml2021}
usepackage statement.
\subsection{Submitting Final Camera-Ready Copy}
The final versions of papers accepted for publication should follow the
same format and naming convention as initial submissions, except that
author information (names and affiliations) should be given. See
Section~\ref{final author} for formatting instructions.
The footnote, ``Preliminary work. Under review by the International
Conference on Machine Learning (ICML). Do not distribute.'' must be
modified to ``\textit{Proceedings of the
$\mathit{38}^{th}$ International Conference on Machine Learning},
Online, PMLR 139, 2021.
Copyright 2021 by the author(s).''
For those using the \textbf{\LaTeX} style file, this change (and others) is
handled automatically by simply changing
$\mathtt{\backslash usepackage\{icml2021\}}$ to
$$\mathtt{\backslash usepackage[accepted]\{icml2021\}}$$
Authors using \textbf{Word} must edit the
footnote on the first page of the document themselves.
Camera-ready copies should have the title of the paper as running head
on each page except the first one. The running title consists of a
single line centered above a horizontal rule which is $1$~point thick.
The running head should be centered, bold and in $9$~point type. The
rule should be $10$~points above the main text. For those using the
\textbf{\LaTeX} style file, the original title is automatically set as running
head using the \texttt{fancyhdr} package which is included in the ICML
2021 style file package. In case that the original title exceeds the
size restrictions, a shorter form can be supplied by using
\verb|\icmltitlerunning{...}|
just before $\mathtt{\backslash begin\{document\}}$.
Authors using \textbf{Word} must edit the header of the document themselves.
\section{Format of the Paper}
All submissions must follow the specified format.
\subsection{Dimensions}
The text of the paper should be formatted in two columns, with an
overall width of 6.75~inches, height of 9.0~inches, and 0.25~inches
between the columns. The left margin should be 0.75~inches and the top
margin 1.0~inch (2.54~cm). The right and bottom margins will depend on
whether you print on US letter or A4 paper, but all final versions
must be produced for US letter size.
The paper body should be set in 10~point type with a vertical spacing
of 11~points. Please use Times typeface throughout the text.
\subsection{Title}
The paper title should be set in 14~point bold type and centered
between two horizontal rules that are 1~point thick, with 1.0~inch
between the top rule and the top edge of the page. Capitalize the
first letter of content words and put the rest of the title in lower
case.
\subsection{Author Information for Submission}
\label{author info}
ICML uses double-blind review, so author information must not appear. If
you are using \LaTeX\/ and the \texttt{icml2021.sty} file, use
\verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information
will not be printed unless \texttt{accepted} is passed as an argument to the
style file.
Submissions that include the author information will not
be reviewed.
\subsubsection{Self-Citations}
If you are citing published papers for which you are an author, refer
to yourself in the third person. In particular, do not use phrases
that reveal your identity (e.g., ``in previous work \cite{langley00}, we
have shown \ldots'').
Do not anonymize citations in the reference section. The only exception are manuscripts that are
not yet published (e.g., under submission). If you choose to refer to
such unpublished manuscripts \cite{anonymous}, anonymized copies have
to be submitted
as Supplementary Material via CMT\@. However, keep in mind that an ICML
paper should be self contained and should contain sufficient detail
for the reviewers to evaluate the work. In particular, reviewers are
not required to look at the Supplementary Material when writing their
review.
\subsubsection{Camera-Ready Author Information}
\label{final author}
If a paper is accepted, a final camera-ready copy must be prepared.
For camera-ready papers, author information should start 0.3~inches below the
bottom rule surrounding the title. The authors' names should appear in 10~point
bold type, in a row, separated by white space, and centered. Author names should
not be broken across lines. Unbolded superscripted numbers, starting 1, should
be used to refer to affiliations.
Affiliations should be numbered in the order of appearance. A single footnote
block of text should be used to list all the affiliations. (Academic
affiliations should list Department, University, City, State/Region, Country.
Similarly for industrial affiliations.)
Each distinct affiliations should be listed once. If an author has multiple
affiliations, multiple superscripts should be placed after the name, separated
by thin spaces. If the authors would like to highlight equal contribution by
multiple first authors, those authors should have an asterisk placed after their
name in superscript, and the term ``\textsuperscript{*}Equal contribution"
should be placed in the footnote block ahead of the list of affiliations. A
list of corresponding authors and their emails (in the format Full Name
\textless{}[email protected]\textgreater{}) can follow the list of affiliations.
Ideally only one or two names should be listed.
A sample file with author names is included in the ICML2021 style file
package. Turn on the \texttt{[accepted]} option to the stylefile to
see the names rendered. All of the guidelines above are implemented
by the \LaTeX\ style file.
\subsection{Abstract}
The paper abstract should begin in the left column, 0.4~inches below the final
address. The heading `Abstract' should be centered, bold, and in 11~point type.
The abstract body should use 10~point type, with a vertical spacing of
11~points, and should be indented 0.25~inches more than normal on left-hand and
right-hand margins. Insert 0.4~inches of blank space after the body. Keep your
abstract brief and self-contained, limiting it to one paragraph and roughly 4--6
sentences. Gross violations will require correction at the camera-ready phase.
\subsection{Partitioning the Text}
You should organize your paper into sections and paragraphs to help
readers place a structure on the material and understand its
contributions.
\subsubsection{Sections and Subsections}
Section headings should be numbered, flush left, and set in 11~pt bold
type with the content words capitalized. Leave 0.25~inches of space
before the heading and 0.15~inches after the heading.
Similarly, subsection headings should be numbered, flush left, and set
in 10~pt bold type with the content words capitalized. Leave
0.2~inches of space before the heading and 0.13~inches afterward.
Finally, subsubsection headings should be numbered, flush left, and
set in 10~pt small caps with the content words capitalized. Leave
0.18~inches of space before the heading and 0.1~inches after the
heading.
Please use no more than three levels of headings.
\subsubsection{Paragraphs and Footnotes}
Within each section or subsection, you should further partition the
paper into paragraphs. Do not indent the first line of a given
paragraph, but insert a blank line between succeeding ones.
You can use footnotes\footnote{Footnotes
should be complete sentences.} to provide readers with additional
information about a topic without interrupting the flow of the paper.
Indicate footnotes with a number in the text where the point is most
relevant. Place the footnote in 9~point type at the bottom of the
column in which it appears. Precede the first footnote in a column
with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can
appear in each column, in the same order as they appear in the text,
but spread them across columns and pages if possible.}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{icml_numpapers}}
\caption{Historical locations and number of accepted papers for International
Machine Learning Conferences (ICML 1993 -- ICML 2008) and International
Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was
produced, the number of accepted papers for ICML 2008 was unknown and instead
estimated.}
\label{icml-historical}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Figures}
You may want to include figures in the paper to illustrate
your approach and results. Such artwork should be centered,
legible, and separated from the text. Lines should be dark and at
least 0.5~points thick for purposes of reproduction, and text should
not appear on a gray background.
Label all distinct components of each figure. If the figure takes the
form of a graph, then give a name for each axis and include a legend
that briefly describes each curve. Do not include a title inside the
figure; instead, the caption should serve this function.
Number figures sequentially, placing the figure number and caption
\emph{after} the graphics, with at least 0.1~inches of space before
the caption and 0.1~inches after it, as in
Figure~\ref{icml-historical}. The figure caption should be set in
9~point type and centered unless it runs two or more lines, in which
case it should be flush left. You may float figures to the top or
bottom of a column, and you may set wide figures across both columns
(use the environment \texttt{figure*} in \LaTeX). Always place
two-column figures at the top or bottom of the page.
\subsection{Algorithms}
If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic''
environments to format pseudocode. These require
the corresponding stylefiles, algorithm.sty and
algorithmic.sty, which are supplied with this package.
Algorithm~\ref{alg:example} shows an example.
\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}
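As a sanity check on Algorithm~\ref{alg:example}, here is a direct Python transcription of the same pseudocode (illustrative only; submissions themselves should use the \LaTeX{} environments described above):

```python
def bubble_sort(x):
    """Bubble Sort as in Algorithm 1: repeatedly swap adjacent
    out-of-order elements until a full pass makes no change."""
    x = list(x)                # work on a copy; m = len(x)
    m = len(x)
    no_change = False
    while not no_change:       # REPEAT ... UNTIL noChange is true
        no_change = True
        for i in range(m - 1):          # i = 1 to m-1 in the pseudocode
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]
                no_change = False
    return x
```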
\subsection{Tables}
You may also want to include tables that summarize material. Like
figures, these should be centered, legible, and numbered consecutively.
However, place the title \emph{above} the table with at least
0.1~inches of space before the title and the same after it, as in
Table~\ref{sample-table}. The table title should be set in 9~point
type and centered unless it runs two or more lines, in which case it
should be flush left.
\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible
Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
Data set & Naive & Flexible & Better? \\
\midrule
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Tables contain textual material, whereas figures contain graphical material.
Specify the contents of each row and column in the table's topmost
row. Again, you may float tables to a column's top or bottom, and set
wide tables across both columns. Place two-column tables at the
top or bottom of the page.
\subsection{Citations and References}
Please use APA reference format regardless of your formatter
or word processor. If you rely on the \LaTeX\/ bibliographic
facility, use \texttt{natbib.sty} and \texttt{icml2021.bst}
included in the style-file package to obtain this format.
Citations within the text should include the authors' last names and
year. If the authors' names are included in the sentence, place only
the year in parentheses, for example when referencing Arthur Samuel's
pioneering work \yrcite{Samuel59}. Otherwise place the entire
reference in parentheses with the authors and year separated by a
comma \cite{Samuel59}. List multiple references separated by
semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.'
construct only for citations with three or more authors or after
listing all authors to a publication in an earlier reference \cite{MachineLearningI}.
Authors should cite their own work in the third person
in the initial version of their paper submitted for blind review.
Please refer to Section~\ref{author info} for detailed instructions on how to
cite your own papers.
Use an unnumbered first-level section heading for the references, and use a
hanging indent style, with the first line of the reference flush against the
left margin and subsequent lines indented by 10 points. The references at the
end of this document give examples for journal articles \cite{Samuel59},
conference publications \cite{langley00}, book chapters \cite{Newell81}, books
\cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports
\cite{mitchell80}, and dissertations \cite{kearns89}.
Alphabetize references by the surnames of the first authors, with
single author entries preceding multiple author entries. Order
references for the same authors by year of publication, with the
earliest first. Make sure that each reference includes all relevant
information (e.g., page numbers).
Please put some effort into making references complete, presentable, and
consistent. If using bibtex, please protect capital letters of names and
abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz
in your .bib file.
\section*{Software and Data}
If a paper is accepted, we strongly encourage the publication of software and data with the
camera-ready version of the paper whenever appropriate. This can be
done by including a URL in the camera-ready copy. However, \textbf{do not}
include URLs that reveal your institution or identity in your
submission for review. Instead, provide an anonymous URL or upload
the material as ``Supplementary Material'' into the CMT reviewing
system. Note that reviewers are not required to look at this material
when writing their review.
\section*{Acknowledgements}
\textbf{Do not} include acknowledgements in the initial version of
the paper submitted for blind review.
If a paper is accepted, the final camera-ready version can (and
probably should) include acknowledgements. In this case, please
place such acknowledgements in an unnumbered section at the
end of the paper. Typically, this will include thanks to reviewers
who gave useful comments, to colleagues who contributed to the ideas,
and to funding agencies and corporate sponsors that provided financial
support.
\nocite{langley00}
\section{Introduction}
Recent years have seen intense investigation of electron transport through semiconducting quantum dot devices~\cite{kouwenhoven}, leading in turn to a strong resurgence of interest in basic Kondo physics~\cite{KG}. The classic example---namely the spin-$1/2$ Kondo effect~\cite{hewson} in which the magnetic moment of an odd-electron dot is quenched by coupling to metallic leads---was observed in quantum dots almost a decade ago. Since then, a continuing goal has been to understand how \emph{coupled}, multiple quantum dot systems may lead to variants of the Kondo effect involving both spin and orbital degrees of freedom.
In this Letter we consider probably the simplest example of such---a symmetrical double quantum dot system in which the interdot coupling is
capacitive. Experimental realisations of such devices have appeared in the literature \cite{blick, wilhelm, chan}, and various aspects of the rich inherent physics have been uncovered in a number of theoretical papers, see e.g.~\cite{pohjola, boese, borda, lopez, ourprl, ourjpcm, mravlje}.
Here we elucidate theoretically a key underlying issue:
the effect on transport of sweeping the dot energy levels through a wide range of values by means of a suitably applied gate voltage.
The low-temperature transport through each dot is heavily dependent on the magnitude of the gate voltage: both the characteristic low-energy Kondo scale of the system, and the zero-bias differential conductance, vary significantly as the energy levels of the dots are lowered from the Fermi level down. If the gate voltage is tuned in such a way that only low-energy spin, or spin/orbital-pseudospin, degrees of freedom are relevant, then the Kondo scale is found (as one usually expects) to be exponentially-small. By contrast, at points where low-energy charge fluctuations can arise, we show that the Kondo scale is much larger---even when the system is intrinsically strongly correlated; leading thereby to correlated-electron physics on
energy scales more amenable to experimental interrogation. We consider, and analyse, this behaviour within a renormalization group (RG) framework.
\section{Model and atomic limit}
The capacitively-coupled double dot model we consider consists of two correlated Anderson impurities coupled by an interdot Coulomb repulsion, with each dot
$i=L,R$ coupled to its own separate lead; viz
\begin{equation}
\label{eq:h}
H = \sum_{i,\mathbf{k},\sigma}\epsilon_\mathbf{k}\acre{\mathbf{k} i \sigma}\ades{\mathbf{k} i \sigma}
+ \sum_{i,\mathbf{k},\sigma}V(\acre{\mathbf{k} i \sigma}\cdes{i\sigma} + \mathrm{h. c.})
+ \sum_{i}(\epsilon \hat n_i + U \hat n_{i\uparrow}\hat n_{i\downarrow})
+ U' \hat n_{L}\hat n_{R}
\end{equation}
with $\hat n_{i\sigma}=\ccre{i\sigma}\cdes{i\sigma}$ and $\hat n_i = \sum_\sigma\hat n_{i\sigma}$. The first term here describes the two metallic leads (with $i=L,R$
again). The second term represents the tunnel coupling of each lead to its corresponding dot, and the final terms define the energies of the double dot itself (including both intra- and interdot Coulomb repulsions $U$ and $U'$). The beauty of a typical experimental quantum dot device is that the parameters entering its effective Hamiltonian are readily adjusted. In particular the dot levels $\epsilon$ can be tuned over a wide energy range by varying the gate voltage. It is this on which we focus here, considering \eref{eq:h} as a function of $\epsilon$ when $U'<U$.
We also consider $\epsilon \le 0$ only, i.e. $\epsilon = -|\epsilon|$, since the alternative corresponds to a `frozen impurity' regime with the dots in essence unoccupied and the basic physics relatively straightforward. Moreover, the behaviour of \eref{eq:h} for $|\epsilon|$ above the particle-hole symmetric point $|\epsilon|=U' + U/2$ is obtainable from that below it by a simple particle-hole transformation; we can thus restrict our analysis to the range $0 < |\epsilon| \le U' + U/2$.
In the $V=0$ (`atomic') limit the dots are decoupled from their leads. The total dot occupation number $n$ versus $|\epsilon|$ then follows the familiar Coulomb blockade (CB) `staircase' pattern, increasing stepwise from $n=0$ to $n=4$ as $|\epsilon|$ is increased. The parameter range of interest involves only the $n=0$, $1$ and $2$ steps, the edges between them lying at $|\epsilon|=0$ ($n=0 \leftrightarrow 1$) and $|\epsilon|=U'$ ($n=1\leftrightarrow 2$). For all points along the $n=1$ step, the degenerate $(n_L,n_R)=(0,1)$ and $(1,0)$ configurations lie lowest in energy, whereas on the $n=2$ step the ground state is $(1,1)$; and at the $n=1\leftrightarrow 2$ CB edge these states are all degenerate.
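The atomic-limit configurations quoted above can be checked by brute force. The following Python sketch (not part of the original analysis; the values $(\tilde U,\tilde U')=(14,12)$, in units of $\pi\Gamma$, are those used in the results section below) enumerates all sixteen occupation configurations of \eref{eq:h} at $V=0$:

```python
from itertools import product

def ground_states(eps_abs, U, U_p):
    """Enumerate the 16 occupation configurations of the V=0 double dot
    (eq. (1) with epsilon = -|epsilon|) and return the minimum energy
    together with the list of ground (n_L, n_R) configurations, counted
    with their spin degeneracy."""
    best, states = None, []
    for occ in product((0, 1), repeat=4):     # (L_up, L_dn, R_up, R_dn)
        nL, nR = occ[0] + occ[1], occ[2] + occ[3]
        E = (-eps_abs) * (nL + nR) \
            + U * (occ[0] * occ[1] + occ[2] * occ[3]) \
            + U_p * nL * nR
        if best is None or E < best - 1e-12:
            best, states = E, [(nL, nR)]
        elif abs(E - best) <= 1e-12:
            states.append((nL, nR))
    return best, states

# Step centres and edge for (U, U') = (14, 12) in units of pi*Gamma:
_, s1 = ground_states(6.0, 14.0, 12.0)    # centre of the n = 1 step
_, s2 = ground_states(19.0, 14.0, 12.0)   # centre of the n = 2 step
_, se = ground_states(12.0, 14.0, 12.0)   # CB edge |eps| = U'
```

At the centres of the $n=1$ and $n=2$ steps the ground configurations and their fourfold degeneracies match the degenerate $(0,1)/(1,0)$ and $(1,1)$ states described above, while at $|\epsilon|=U'$ all eight states are degenerate.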
In what follows we elucidate the behaviour of \eref{eq:h} with increasing
$|\epsilon|$ when the interacting dots are connected to their leads. We show that for all $U'<U$, the `atomic limit' degrees of freedom described above are always quenched as $T\to 0$, such that the ground state is a non-degenerate Fermi liquid phase throughout.\footnote{It is known \cite{ourprl, ourjpcm} that in the $n=2$ regime, for $U'$ above a critical value $U'_\mathrm{c}>U$, the model undergoes a quantum phase transition to a non-Fermi-liquid, doubly-degenerate `charge ordered' phase.}
More central, however, is how the \emph{nature} of this quenching depends on the gate voltage $|\epsilon|$. Near the midpoints of the CB steps, the dot degrees of freedom (be they spin or a combination of spin and orbital-pseudospin) are Kondo-quenched on exponentially small temperature scales. Near the edges of the steps however, where charge fluctuations are important, the behaviour is markedly different. The Kondo temperature $T_K$ is found to be \emph{much} larger, on the order of the dot-lead hybridisation parameter $\Gamma = \pi\rho V^{2}$ (with $\rho$ the host/lead density of states). Since $T_K$ sets the natural scale below which the many-body physics arises, its enhancement means that non-trivial coherent electron transport occurs over a much larger temperature window.
\section{NRG and fixed points}
Results discussed in the next section have been obtained using Wilson's powerful
numerical renormalization group (NRG) approach \cite{kww1,kww2}; a technical discussion of its application to the Hamiltonian \eref{eq:h} can be found in \cite{ourjpcm}. The NRG is of course an iterative procedure that converges ultimately to a stable \emph{fixed point} \cite{kww1,kww2}, the nature of which indicates the structure of the ground state; the NRG flows that lead to it (via the unstable fixed points) can be used to obtain static and dynamic properties on all important temperature and energy scales respectively.
Our results will be interpreted in terms of six distinct fixed points, the Hamiltonians for which may be obtained simply by setting each of the bare parameters of \eref{eq:h} to either $0$ or $\infty$. The simplest is the unstable free-orbital (FO) fixed point\cite{kww1}, where $\epsilon = U = U' = \Gamma = 0$. Four more unstable fixed points arise from $\Gamma = 0$ and $U=\infty$ (with fixed ratios of $\epsilon/U$ and $U'/U$), each corresponding to one of the atomic limit states discussed above. Finally there is the stable strong-coupling (SC) fixed point in which $\Gamma = \infty$, to which all NRG flows ultimately tend. \Tref{tab:fp} lists all these fixed points, the conditions under which they obtain, the allowed dot configurations and their degeneracies.
\begin{table}
\caption{\label{tab:fp}NRG fixed points discussed in this work. For detailed discussion, see text.}
\begin{center}
\begin{tabular}{lccccll}
Label & $\Gamma$ & $U$ & $U'$ & $|\epsilon|$ & Allowed dot configurations& Degeneracy\\
\hline
FO & 0 & 0 & 0 & 0 & All & 16\\
VF$_{01}$ & 0 & $\infty$ & $U/2$ & 0 & (0,0), (0,1), (1,0) & 5\\
LM$^{SU(4)}_{1}$ & 0 & $\infty$ & $U/2$ & $U/4$ &(0,1), (1,0)& 4\\
VF$_{12}$ & 0 & $\infty$ & $U/2$ & $U/2$ & (0,1), (1,0), (1,1) &8\\
LM$^{SU(2)}_{2}$ & 0 & $\infty$ & $0$ & $U/2$ & (1,1) & 4\\
SC & $\infty$ & $0$ & $0$ & $0$ & N/A (since $\Gamma = \infty$) & 1
\end{tabular}
\end{center}
\end{table}
As seen in \tref{tab:fp}, the unstable fixed points divide into two sets. The first comprises the `local moment' (LM) fixed points, in which the total dot charge is fixed and hence the only possible dot degrees-of-freedom are those of spin and orbital pseudospin. The second set contains the `valence fluctuation' (VF) fixed points, in which charge fluctuations of the double dot also enter. It is the existence of these two distinct types of fixed point that leads to
interesting physics, as now explained. With decreasing temperature/energy scale, all RG flows begin close to the FO fixed point (we take the host bandwidth $D$ to be the largest energy scale of the model). But for fixed $U$ and $U'$, the subsequent flows then depend solely on the magnitude of $|\epsilon|$ (as controlled by a gate voltage); such that the flow from the final unstable fixed point to the stable SC fixed point determines the characteristic low-energy scale.
For most values of $|\epsilon|$---sufficiently far from the CB edges---the NRG flows are found to pass essentially from FO to one of the LM fixed points, before crossing over to the stable SC fixed point where the dot local moments are quenched. The effective low-energy models here are thus Kondoesque and the low-energy Kondo scale $T_K$ on which the SC fixed point is reached is exponentially-small in $U/\Gamma$ and $U'/\Gamma$ (as considered further below).
If however $|\epsilon|$ is instead tuned to within $\mathcal{O}(\Gamma)$ of a CB edge, the VF fixed points also come into play and the resulting physics is quite different. This is most easily seen at one of the CB edges itself, where there is no sign of any LM fixed points at all. The RG flow instead goes from FO to SC via VF, and the Kondo scale is found to be a simple multiple of $\Gamma$, virtually independent of the bare $U$ and $U'$. So by tuning the system from one CB step to another, one can produce dramatic changes in its Kondo scale and low-energy degrees-of-freedom, which are of course evident in both statics and dynamics.
Before discussing numerical results, we make one further remark. The Kondo physics---and in particular the exponentially small $T_K$---associated with the LM $\to$ SC crossover is a well-known manifestation of strong electron correlation. But while the low-energy scales associated with the CB edges depend only on the bare energy scale $\Gamma$, the corresponding electron dynamics are nonetheless still correlated. Indeed for $U/\Gamma$ sufficiently large, the only region in which electron correlation is \emph{not} important is $\epsilon \gtrsim \Gamma$. As we shall see below, this means that the physics associated with crossing the $n=1\leftrightarrow2$ CB edge is distinct from that arising near $0\leftrightarrow1$ \cite{pohjola}: only the former involves correlated physics throughout, and it is this to which we direct the majority of our attention.
\section{Results}
We first present some static properties, beginning with the entropy. Since each of the fixed points in \tref{tab:fp} has a well defined dot degeneracy, the contribution to the entropy of the system from the dots alone, denoted by $S_\mathrm{imp}$, can be used to determine how the NRG flows depend on the bare parameters. In what follows, it is convenient to work with the dimensionless parameters $|\tilde\epsilon|=|\epsilon|/\pi\Gamma$, $\tilde U=U/\pi\Gamma$ and $\tilde U'=U'/\pi\Gamma$.
\Fref{fig:fig1}(a) shows $S_\mathrm{imp}$ as a function of temperature for fixed $(\tilde U,\tilde U') = (14,12)$ at four representative points along the $|\tilde\epsilon|$ line. These are the three points of `symmetry' at $|\tilde\epsilon| = \tilde U'/2$ (centre of $n=1$ CB step, solid line), $|\tilde\epsilon| =\tilde U'$ ($n=1\leftrightarrow2$ CB edge, long dashes) and
$|\tilde\epsilon| =\tilde U'+\tilde U/2$ (centre of $n=2$ CB step, dot-dashed); plus a fourth point at
$|\tilde\epsilon| = 11$ (short dashes) which illustrates the behaviour slightly away from the
$n=1\leftrightarrow2$ CB edge. In all cases, the high-temperature $\ln 16$
free-orbital behaviour is seen to cross over ultimately to the singlet
strong-coupling regime below some characteristic low temperature scale, but at intermediate temperatures the physics is clearly different in each case.
The $|\tilde\epsilon| = \tilde U'/2$ and $\tilde U'+\tilde U/2$ curves, corresponding to the centres of the $n=1$ and $n=2$ CB steps respectively, show the generic Kondoesque behaviour described above. The long intermediate plateaus of $\ln 4$ each indicate a flow that passes very close to one of the two LM fixed points in \tref{tab:fp}. Specifically, for $|\tilde\epsilon| = \tilde U'/2$ the fixed point is LM$_1^{SU(4)}$ and the low-energy effective model is the one-electron $SU(4)$ spin/orbital Kondo model \cite{borda, boese}. For $|\tilde\epsilon| = \tilde U'+\tilde U/2$ by contrast, it is the LM$_2^{SU(2)}$ fixed point that is approached, and at low energies one observes an effective pair of uncoupled $SU(2)$ spin-Kondo models\cite{ourprl}.
The $|\tilde\epsilon|=\tilde U'$ entropy curve in \fref{fig:fig1}(a) illustrates the distinct behaviour that arises when the RG flow passes close to a VF fixed point---in this case the VF$_{12}$ fixed point of \tref{tab:fp}. No local moment plateaus are observed; low-energy charge fluctuations are now possible and the entropy instead drops to zero on a much higher temperature scale characterised by the hybridisation parameter $\Gamma$. When one is sufficiently close to, but not at, the CB edge at $|\tilde\epsilon|=\tilde U'$, both VF and LM fixed points are observed, as in the $|\tilde\epsilon| = 11$ ($= \tilde U'- 1$) curve in \fref{fig:fig1}(a); here the entropy follows the $|\tilde\epsilon|=\tilde U'$ form at relatively high temperatures before crossing over to a characteristic $\ln 4$ LM shoulder and then finally zero as $T\to 0$.
\begin{figure}[t]
\caption{\label{fig:fig1} For $(\tilde U, \tilde U')=(14,12)$, (a) entropy,
$S_\mathrm{imp}$ \emph{vs} $T/\Gamma$
for $|\tilde\epsilon|=\tilde U'/2=6$ (solid line), $|\tilde\epsilon|=11$ (short dashed), $|\tilde\epsilon|=\tilde U'=12$ (long dashed) and $|\tilde\epsilon|=\tilde U'+\tilde U/2=19$ (dot-dashed). (b) $T=0$
charge susceptibility $D\chi^+_{c,\mathrm{imp}}(0)$ \emph{vs} $|\tilde\epsilon|$.}
\begin{center}
\includegraphics{figs/fig1/fig1.eps}
\end{center}
\end{figure}
To illustrate charge fluctuations near the $1\leftrightarrow2$ CB edge, \fref{fig:fig1}(b) shows the $T=0$ charge susceptibility $D\chi_{c,\mathrm{imp}}^+(0)$ \emph{vs} $|\tilde\epsilon|$ (with $4T\chi_{c}^{+}(T)=\langle[\hat{N}-\langle\hat{N}\rangle]^2\rangle$ and $\hat{N}$ the total charge operator \cite{ourjpcm}). A significant peak occurs at the CB edge at $|\tilde\epsilon|=\tilde U' = 12$, reflecting the availability here of the low-energy charge degrees-of-freedom. Away from this point however, the low-energy effective model involves only spin or spin/orbital degrees of freedom
and $\chi_{c,\mathrm{imp}}^+(0)$ is very small (vanishing as $\tilde U\to\infty$).
The HWHM of the peak gives an estimate of the range over which low-energy charge fluctuations are significant: here we see it is $\mathcal O(1)$, and hence the charge degrees-of-freedom are important only when $|\epsilon|$ is within
$\mathcal O(\Gamma)$ of the CB edge.
We now analyse the Kondo temperature $T_K$ as a function of $|\tilde\epsilon|$ (with $T_K$ taken to be the temperature at which the impurity spin susceptibility satisfies $k_B T\chi_{s,\mathrm{imp}}(T)/(g\mu_B)^2 = 0.1$). As seen already in \fref{fig:fig1}(a), $T_K$ varies by many orders of magnitude as $\epsilon$ is lowered: this is shown more clearly in \fref{fig:fig2}(a) where we plot $\ln T_K/D$ versus $-\epsilon/\pi\Gamma$ for the same fixed $(\tilde U, \tilde U')=(14,12)$ (solid line). Also shown is the total dot charge $n$ (dotted line), showing the characteristic `rounding' of the CB staircase on coupling to the leads. It is quite clear that $T_K$ is a minimum when $n\simeq 1$ or $2$, and a maximum of $\mathcal{O}(\Gamma)$ at the $1\leftrightarrow 2$ CB edge when $n$ can fluctuate easily between the two. Note that $T_K$ also increases as one moves from the $n\simeq 1$ $SU(4)$ Kondo regime towards the $0\leftrightarrow1$ CB edge; but in contrast to the behaviour just described continues to increase as $\epsilon$ becomes positive, reflecting the essential absence of electron correlation in the $n\simeq 0$ regime and a Kondo scale naturally on the order of
$\Gamma$.
The dependence of $T_K$ on $|\tilde\epsilon|$ can be understood more precisely from the effective Kondo models in the $n=1$ and $n=2$ regimes, obtained by straightforward Schrieffer-Wolff transformations and then analysed using Anderson's ``poor man's'' scaling\cite{hewson}. We find in both cases that the leading exponential form of $T_K/D$ is $\exp[-(2\rho J)^{-1}]$, with $\rho J$ given by
\begin{equation}
\label{eq:rhoj}
\pi^2\rho J \sim
\begin{cases}
2/|\tilde\epsilon|-(|\tilde\epsilon|-\tilde U')^{-1} - (|\tilde\epsilon|-\tilde U)^{-1}& \mbox{for }|\tilde\epsilon|\sim\tilde U'/2 \\
({|\tilde\epsilon|-\tilde U'})^{-1} - ({|\tilde\epsilon|-\tilde U'-\tilde U})^{-1} & \mbox{for }|\tilde\epsilon|\sim\tilde U'+\tilde U/2
\end{cases}
\end{equation}
in the strongly correlated ($U\gg\Gamma$) limit. These results for $T_K$ are plotted as dashed lines in \fref{fig:fig2}(a), for the same fixed $\tilde U$ and $\tilde U'$, each scaled by a single multiplicative factor onto the NRG result at the appropriate minimum. The strong agreement between the numerical and analytical results over a wide range of $|\tilde\epsilon|$ provides further confirmation that the low-energy physics of the dots is essentially captured by Kondo models of fixed $n$, except when $|\tilde\epsilon|$ is within $\mathcal{O}(\Gamma)$ of a CB edge and charge fluctuations become important.
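As a numerical illustration (a sketch only: eq.~\eref{eq:rhoj} gives just the leading exponential behaviour, and all prefactors of order unity are dropped), the scaling estimates can be evaluated directly in Python:

```python
import math

def rho_J(eps, U, Up, regime):
    """Dimensionless Kondo coupling of eq. (2); eps, U, Up are in units
    of pi*Gamma. regime = 1 for the n ~ 1 step, 2 for the n ~ 2 step."""
    if regime == 1:
        val = 2.0 / eps - 1.0 / (eps - Up) - 1.0 / (eps - U)
    else:
        val = 1.0 / (eps - Up) - 1.0 / (eps - Up - U)
    return val / math.pi ** 2

def t_kondo(eps, U, Up, regime):
    """Leading exponential estimate T_K / D ~ exp[-1/(2 rho J)],
    with order-unity prefactors dropped."""
    return math.exp(-1.0 / (2.0 * rho_J(eps, U, Up, regime)))

U, Up = 14.0, 12.0
tk_centre1 = t_kondo(Up / 2, U, Up, 1)       # SU(4) point, |eps| = 6
tk_near_edge1 = t_kondo(11.0, U, Up, 1)      # approaching the 1<->2 edge
tk_centre2 = t_kondo(Up + U / 2, U, Up, 2)   # SU(2)xSU(2) point, |eps| = 19
tk_near_edge2 = t_kondo(14.0, U, Up, 2)
```

Consistent with \fref{fig:fig2}(a), the estimated $T_K$ is exponentially small at both step centres and grows steeply as $|\tilde\epsilon|$ approaches the $1\leftrightarrow2$ CB edge from either side.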
\begin{figure}[t]
\caption{\label{fig:fig2}
Variation of $T_K$ with $|\tilde\epsilon|$.
(a) $T_K/\Gamma$ \emph{vs} $|\tilde\epsilon|$ for $(\tilde U,\tilde U')=(14,12)$: solid line from NRG, dashed lines from poor man's scaling.
The dotted line shows the average total dot occupation number $n$ (right-hand scale). (b) NRG $T_K$ versus $|\tilde\epsilon|$ for different inter- and intradot coupling strengths: $(\tilde U,\tilde U')=(10,8)$ (solid line), $(12,10)$ (long dashed), $(14,12)$ (short dashed) and $(18,12)$ (dot dashed).}
\begin{center}
\includegraphics{figs/fig2/fig2.eps}
\end{center}
\end{figure}
In \Fref{fig:fig2}(b) we also plot $\ln T_K/D$ versus $|\tilde\epsilon|$ for a range of different coupling strengths. The key point here is that while the scales in the Kondo regimes clearly vary significantly with all the bare parameters of the model (as seen from \eref{eq:rhoj}), those associated with the VF fixed points depend essentially on the hybridisation strength $\Gamma$ alone. For this reason, one would expect all the low-temperature thermodynamics of the model to be essentially independent of $\tilde U$ and $\tilde U'$ near the CB edges, and we have indeed found this to be true.
We now turn to dynamics, focusing on the local single-particle spectrum
$D_i(\omega)$ defined in terms of the
retarded Green function $G_i(t)=-i\theta(t)\langle\{\cdes{i\sigma}(t),\ccre{i\sigma}\}\rangle$ via $D_i(\omega)=-\mathrm{Im}G_i(\omega)$, since this quantity probes the charge fluctuations that control transport across a given dot $i$ (the model considered, where each dot hybridises to its respective lead, is formally equivalent to a 4-lead device where each dot hybridises to two leads with strength $\Gamma/2$).
As shown in \cite{meir, ourjpcm}, the $T=0$ zero-bias differential conductance ($G$) of each dot is related exactly to the
zero-frequency spectrum by $G/(2e^2/h) = \pi\Gamma D_i(0)$
(and the spectrum at non-zero $\omega$ provides an approximation to the conductance at source-drain bias $V=\omega/e$).
The NRG technique can be used in practice to calculate $D_i(\omega)$ on all frequency scales of interest \cite{CHZ}. Near the centre of each CB step, we find that the high-frequency spectrum is dominated by its Hubbard satellites, corresponding physically to sequential tunnelling through the dot. Since these features are readily explained simply as broadened versions of the poles arising in the atomic limit $V=0$, we do not discuss them further here. More interesting is the low-frequency form of the spectrum, since this corresponds to coherent tunnelling through the dot mediated by the Kondo effect. In \fref{fig:fig3}(a) and (b), we illustrate the evolution of the low-energy Kondo resonance with increasing $|\tilde\epsilon|$ for fixed $(\tilde U,\tilde U')=(10,8)$. \Fref{fig:fig3}(a) shows the spectrum for $|\tilde\epsilon|= 4$, $6$, $7$ and $8$ (dot-dashed, dotted, dashed and solid respectively), and \fref{fig:fig3}(b) shows $|\tilde\epsilon|=10$, $11$ and $13$ (dashed, dotted and solid). Note the very different frequency scales in each case. The width of the resonance, being proportional to $T_K$, is seen to first increase with $|\tilde\epsilon|$ as one moves from the centre of the $n=1$ CB step ($|\tilde\epsilon|=\tilde U'/2=4$) to the maximum at the $1\leftrightarrow2$ CB edge $|\tilde\epsilon|=\tilde U'=8$, where $T_K\sim \mathcal{O}(\Gamma)$.
Thereafter, the resonance narrows until the particle-hole symmetric point at $|\tilde\epsilon|=\tilde U'+\tilde U/2=13$ where $T_K$ is a minimum. In addition to the changing width of the resonance with $|\tilde\epsilon|$, there is also a definite increase in asymmetry near the $1\leftrightarrow2$ CB edge, since here the addition and removal of low-energy electrons to and from the ground state clearly occur with different weights.
\begin{figure}[t]
\caption{\label{fig:fig3}Spectral density of dot $i$ for fixed $(\tilde U,\tilde U')=(10,8)$. (a) and (b) show the evolution of $\pi\Gamma D_i(\omega)$ with increasing $|\tilde\epsilon|$: specifically, $|\tilde\epsilon|= 4$, $6$, $7$ and $8$ $(=\tilde U')$ [(a): dot-dashed, dotted, dashed and solid respectively]; and $10$, $11$ and $13$ [(b): dashed, dotted and solid]. (c) $T=0$
zero-bias conductance $G/(2e^2/h)=\pi\Gamma D_{i}(0)$, as a function of $|\tilde\epsilon|$.}
\begin{center}
\includegraphics{figs/fig3/fig3.eps}
\end{center}
\end{figure}
Finally, the zero-frequency spectral density $\pi\Gamma D_{i}(0)$, i.e. the
zero-bias differential conductance, is shown as a function of $|\tilde\epsilon|$ in \fref{fig:fig3}(c). Its stepwise increase is readily explained by Fermi-liquid theory, as one can show that $\pi\Gamma D_i(0) = \sin^2(\pi n_i/2)$ with $n_i$ ($\equiv n/2$) the occupation number of dot $i$, whence $\pi\Gamma D_i(0)$ itself mimics the characteristic CB staircase form. For $T \ll T_K$, the zero-bias differential conductance thus provides a direct means of measuring the average occupation number of each dot.
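The Fermi-liquid result just quoted is simple enough to encode directly; the small function below (illustrative only) returns $G$ in units of $2e^2/h$:

```python
import math

def zero_bias_G(n_i):
    """Fermi-liquid zero-bias conductance of one dot, in units of
    2e^2/h:  G = pi*Gamma*D_i(0) = sin^2(pi n_i / 2)."""
    return math.sin(math.pi * n_i / 2.0) ** 2
```

The unitary limit $G=2e^2/h$ is reached at $n_i=1$, i.e. on the $n=2$ CB step, while $G$ vanishes for an empty dot, so $G$ indeed traces out the staircase as $n_i$ rises from $0$ to $1$.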
\section{Conclusion}
The capacitively coupled double dot system described by \eref{eq:h} shows a rich range of behaviour as the energy levels of the dots are varied, which we have
elucidated by a combination of NRG and poor man's scaling techniques. By applying a suitable gate voltage, the system can be tuned to display either an $SU(2)\times SU(2)$ spin-Kondo effect ($n=2$), an $SU(4)$ spin/orbital Kondo effect ($n=1$), or correlated mixed-valence physics where the Kondo scale is substantially enhanced and non-trivial coherent electron transport thus arises over a much wider range of temperatures. It can moreover be shown that this essential underlying physics is robust to inclusion of a direct interdot hopping (`$t$'), provided $|t| \lesssim T_{K}$.
\acknowledgments
We are grateful to the EPSRC for financial support.
|
\section{Introduction} \label{sec:intro}
Observations with the $Einstein$ Observatory and some earlier observations
established that large quantities of gas are cooling below
X--ray emitting temperatures in the cores of many clusters
(see Fabian et al.\ [1984, 1991], and
Fabian [1994] for reviews).
Typical cooling rates are $\sim$100 $M_\odot$ yr$^{-1}$.
I will discuss some recent observations of cluster cooling flows,
and also present some things which I find puzzling.
Of course, the greatest mystery about cooling flows is the existence
and nature of the ultimate repository of the gas seen to cool through
the X-ray band.
If cooling flows are long--lived phenomena,
roughly
$M_{cool} \sim 10^{12} \, M_\odot$
of material would cool over the lifetime of the cluster.
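The quoted figure follows from simple arithmetic: a typical cooling rate of $\sim$100 $M_\odot$ yr$^{-1}$ sustained over a cluster lifetime of order the Hubble time (taken here, for illustration, as $10^{10}$ yr) gives

```python
mdot = 100.0           # typical cooling rate from the text, M_sun / yr
t_cluster = 1.0e10     # assumed cluster lifetime in yr (~ Hubble time)
m_cool = mdot * t_cluster   # total mass cooled, in M_sun  -> 1e12
```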
\section{Excess X-ray Absorption} \label{sec:absorp}
\subsection{X-Ray Observations} \label{sec:absorp_obs}
One of the most exciting and potentially important recent discoveries
concerning cooling flows was the detection
of excess X-ray absorption in cluster cooling flows.
Through a re-analysis of $Einstein$ Solid State Spectrometer (SSS)
X-ray spectra of the central regions of cooling flow clusters,
White et al.\ (1991) found evidence for very large amounts of
excess soft X-ray absorption.
The excess column densities of X-ray absorbing material
were typically $\Delta N_H \approx 10^{21}$ cm$^{-2}$.
Excess absorptions have also been found using other detectors on
$Einstein$, $Ginga$, BBXRT, and ASCA
(Johnstone et al.\ 1992;
Lea et al.\ 1982;
Miyaji et al.\ 1993;
Fabian et al.\ 1994).
In several cases, ROSAT PSPC spectral images have shown that the
excess absorption is concentrated within the cooling flow region, which has
a radius of $\sim$200 kpc.
Thus, the total required mass of cold gas determined by multiplying the
excess column density by the area is about
\begin{equation} \label{eq:mcold}
M_{cold} \approx 1.4 \times 10^{12} \, M_\odot
\left( \frac{ \Delta N_H}{10^{21} \, {\rm cm}^{-2}} \right)
\left( \frac{ r_c}{200 \, {\rm kpc}} \right)^2 \, .
\end{equation}
This is comparable to the total mass expected to cool out of the X-ray band
over a Hubble time.
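Equation~\eref{eq:mcold} can be re-derived from first principles: the cold mass is the excess column density times the covered area times $1.4\,m_H$ (the factor 1.4 accounting for helium). A short Python check (the physical constants are standard values, not taken from the text) reproduces the quoted coefficient:

```python
import math

M_SUN = 1.989e33   # g
KPC = 3.086e21     # cm
M_H = 1.673e-24    # g; the factor 1.4 below accounts for helium by mass

def cold_gas_mass(delta_NH, r_kpc):
    """Cold gas mass (in M_sun) implied by an excess H column delta_NH
    (cm^-2) covering a circle of radius r_kpc:
    M = 1.4 m_H * delta_NH * pi r^2, as in eq. (1)."""
    area = math.pi * (r_kpc * KPC) ** 2
    return 1.4 * M_H * delta_NH * area / M_SUN

m = cold_gas_mass(1.0e21, 200.0)   # close to 1.4e12 M_sun
```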
Because of its soft X-ray band, moderate spectral resolution, reasonably
accurate calibration, and bimodal spectral
response,
the ROSAT PSPC is an excellent instrument for detecting excess
soft X-ray absorption.
Based on the common detection of very large excess absorptions
($\Delta N_H \approx 10^{21}$ cm$^{-2}$)
directly in the observed spectra with the $Einstein$ SSS spectra
by White et al.\ (1991), one would have expected to have found
many such cases with the ROSAT PSPC.
A number of large excess absorptions have been published based
on PSPC data (Allen et al.\ 1993, 1995; Irwin \& Sarazin 1995).
However, many PSPC spectra do not show such large excess absorptions
covering all of the emission toward the central cooling region of
the cluster
(Breen 1996).
The ROSAT PSPC spectrum of Abell 2029 illustrates this difference.
White et al.\ (1991) found an excess column of
$\Delta N_H = 1.8 \pm 0.5 \times 10^{21}$ cm$^{-2}$ covering all
of the emission in the central 3 arcmin radius of this cluster.
Figure~\ref{fig:a2029pspc} shows the ROSAT PSPC spectrum of the
same inner 3 arcmin circle
(Sarazin et al.\ 1996).
In the left panel, the solid line gives the best-fit spectral
model, including a cooling flow.
The right panel shows the best-fit model if an excess absorption
equal to the White et al.\ value is assumed.
No excess absorption is required to fit the spectrum of the total
emission in this region.
The 90\% confidence upper limit from the ROSAT PSPC spectrum
of the total emission with the inner 3 arcmin radius is
$\Delta N_H < 1.2 \times 10^{20}$ cm$^{-2}$, which is more
than an order of magnitude below the value found by
White et al.\ (1991).
In general, large excess columns are found much less commonly in
the ROSAT PSPC spectra of the total emission toward cooling flows
than in the $Einstein$ SSS spectra of the same regions
(Breen 1996).
However, significant excess columns have been found towards the cooling flow
components of the ROSAT PSPC X-ray spectra of many cooling flow
clusters by fitting these components separately
(Allen 1996).
\begin{figure}[t]
\vspace{2.1in}
\special{psfile=sarazin_fig_1a.ps
angle=-90 voffset=170 hoffset=-15 hscale=30 vscale=30}
\special{psfile=sarazin_fig_1b.ps
angle=-90 voffset=170 hoffset=+166 hscale=30 vscale=30}
\caption{The ROSAT PSPC X-ray spectrum for the central 3 arcmin radius
region of the A2029 cluster
(Sarazin et al.\ 1996).
In the left hand panel, the solid histogram is the best-fit single temperature
plus cooling flow model assuming only Galactic absorption.
In the right hand panel, the solid histogram is the best-fit model with
excess absorption fixed at the value from White et al.\ (1991).}
\label{fig:a2029pspc}
\end{figure}
Another possible source of concern is that some of the largest excess
columns have been found toward clusters which are themselves at
large Galactic columns
(Allen et al.\ 1993; Irwin \& Sarazin 1995; Breen 1996).
Obviously, this cannot be a selection effect, since excess absorption
should be easier to detect if the Galactic column is low.
Moreover, some clusters show excess absorption which is not
centrally condensed and/or which may be associated with Galactic
interstellar features
(David et al.\ 1996).
Now that all of the ROSAT PSPC data is public, it would be very useful
to analyze a large and complete sample of cluster cooling flows to
determine their excess absorption.
It would be very useful to know how common large excess absorptions
are, and what the distribution of columns is (e.g., the fraction
of cooling flow clusters with excess columns greater than $\Delta N_H$).
It would also be very good to look for correlations between
the excess absorption $\Delta N_H$ and the cooling rate
$\dot M$, and between the excess absorption and the Galactic
column.
The possible tendency for large excess columns to correlate
with large Galactic columns might be explained if the excess columns were
due in part to Galactic material.
In order to convincingly eliminate this possibility, it would be very
useful to map the Galactic interstellar atomic and molecular material
toward one or two very good cases of cooling flows with excess absorption.
Ideally, these cases would be chosen to have large and unambiguously
determined excess absorption columns and small Galactic columns in
the existing surveys.
The need for detailed mapping of the Galactic ISM in these directions
comes about because the existing surveys (e.g., Stark et al.\ 1992)
have been made with large beams which are widely spaced.
Given the angular size of nearby cooling flows, the best technique
would probably be to map the Galactic H~I with the VLA with a
fairly compact array, and to use a large single disk telescope to
map the galactic CO distribution in the same direction.
The most direct method to establish that the excess absorption is
associated with the cluster and not with our Galaxy is to measure the
redshift of the oxygen K absorption edge.
In principle, this should be possible with ASCA for the brightest cooling
flows at redshifts $z \ga 0.07$, but low energy calibration problems
have made this extremely difficult
(Sarazin et al.\ 1996).
\subsection{Cooling Flow Models with Intrinsic Absorption}
\label{sec:absorp_model}
The concentration of the excess absorption toward the center of
several cooling flow clusters
(Allen et al.\ 1993; Irwin \& Sarazin 1995) and the correspondence between
$M_{cool}$ and $M_{cold}$ suggest that the excess absorber is located
within the cooling flow.
For simplicity, in all existing analyses of X-ray spectra, the absorber
has been treated as a foreground screen in front of the cooling flow.
Of course, a foreground absorber and absorber mixed with the emitting gas
give different spectra and other properties.
Wise \& Sarazin (1996) have calculated models for the X-ray emission
of cooling flows with internal absorption.
We assume that the cold absorbing gas has the same distribution as that
of the gas cooling out of the X-ray temperature band.
\begin{figure}[t]
\vspace{1.92in}
\special{psfile=sarazin_fig_2.ps
angle=+90 voffset=-31 hoffset=336 hscale=35 vscale=35}
\caption{The effect of internal X-ray absorption on the X-ray surface
brightness profile of a cooling flow (Wise \& Sarazin 1996).
The thin curves show the ROSAT PSPC profile for a
$\dot{M} = 300 \, M_\odot$ yr$^{-1}$, nearly homogeneous
$q=0.1$ model with various amounts of absorption (corresponding to the
labeled fraction of the cooling gas going into the absorber).
The model labeled ``10\%'' has a spectrally determined excess column of
$\Delta N_H \approx 10^{21}$ cm$^{-2}$.
The heavy solid line represents the expected surface brightness
profile for an unabsorbed model with $\dot M(<r) \propto r$,
which is typical of the observed profiles.}
\label{fig:wise_sb}
\end{figure}
The most interesting result associated with the X-ray absorber
is its effect on the X-ray surface brightness profiles of cooling flows.
As shown in Figure~\ref{fig:wise_sb}, internal absorption flattens the
surface brightness profile of a cooling flow.
This occurs because both the absorber and emitter are concentrated to the
center of the cooling flow, and the absorber is thus particularly effective
at reducing the surface brightness in the center.
If the effects of the intrinsic absorption are ignored, this flattening
would be interpreted as evidence that the cooling flow gas is very
inhomogeneous.
For example, the cooling flow model assumed in Figure~\ref{fig:wise_sb} was a
nearly homogeneous model ($q = 0.1$ in the notation of
Wise \& Sarazin [1996]),
while the thick curve shows the surface brightness for a very inhomogeneous
model with $\dot M(<r) \propto r$.
\subsection{What is the Excess Absorber?}
\label{sec:absorp_clouds}
In general, the cold material producing this absorption has not been
detected at non--X-ray wavelengths, despite considerable efforts
(e.g., McNamara \& Jaffe 1993;
Antonucci \& Barvainis 1994;
O'Dea et al.\ 1994).
The observational limits based on observations of H~I or CO
have become quite restrictive
(e.g., Voit \& Donahue 1995).
It has been suggested that the absorber might be
very cold molecular clouds
(Ferland et al.\ 1994),
or cold clouds in which all of the volatiles have frozen onto
dust grains
(Daines et al.\ 1994;
Fabian 1994).
However, the X-ray absorbing clouds should be detectable;
they absorb $\sim 3 \times 10^{43}$ ergs s$^{-1}$ of X-rays,
and must be re-radiating this luminosity at some wavelength.
A number of theoretical models have been constructed in order to
determine the physical properties of cold clouds in cooling flows
and decide whether the X-ray absorbing material should have been
detected in the existing H~I and CO observations
(Daines et al.\ 1994;
Ferland et al.\ 1994;
O'Dea et al.\ 1994;
Voit \& Donahue 1995).
These calculations have reached very different conclusions about
the viability of cold clouds in cooling flows.
It would be extremely helpful to understand the origin of this
discrepancy, and to reach a theoretical consensus as to the
physical state of X-ray absorbing clouds or other cold clouds
in cooling flows.
At the moment, we have no generally accepted theoretical
model for the possible origin of the X-ray absorber.
Current observations with ISO should provide important new information
about the nature of X-ray absorbing clouds or other cold clouds in
cooling flows.
ISO observations of the atomic fine structure lines
(e.g., [C~I], [O~I], and [Si~II]) will either detect or strongly
limit atomic gas as a source of the X-ray absorption, as these
lines are the primary coolants of atomic gas under most circumstances.
ISO observations of the continuum emission from cooling flows should
detect or limit the amount of X-ray absorption in dusty clouds.
\section{Magnetic Fields in Cooling Flows} \label{sec:magnet}
In addition to the thermal plasma, the intracluster medium contains
magnetic fields.
In clusters with diffuse radio emission
(e.g., Jaffe 1992),
X-ray limits on the amount of inverse Compton emission
give lower limits to the strength of the magnetic field
which are typically
$B \ga 0.1 \, \mu$G
(e.g., Rephaeli et al.\ 1987).
Faraday rotation measurements
towards background and cluster radio sources have also been used to
determine the intracluster magnetic field
(e.g., Kim et al.\ 1990).
The measured values of and upper limits on the
Faraday rotation are $RM \la 100$ rad m$^{-2}$, where $RM$ is the
rotation measure.
These observations are consistent with an
intracluster field strength and coherence length of roughly
$B \sim 1 \, \mu$G and $l_B \la 10$ kpc.
With this value for the field strength, the ratio of
magnetic to gas pressure is roughly $( P_B / P ) \la 10^{-3}$,
implying that the field is too weak to affect the dynamics of the
outer parts of clusters.
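The quoted ratio can be checked with the fiducial numbers used elsewhere in this section ($B\sim1\,\mu$G, $T\sim7\times10^7$ K); the electron density $n_e\sim10^{-3}$ cm$^{-3}$ adopted below is an illustrative assumption:

```python
# Compare magnetic pressure B^2/(8 pi) with thermal gas pressure ~ 2 n_e k T
# for fiducial intracluster values (n_e and T here are assumed, not fitted).
import math

K_B = 1.381e-16          # Boltzmann constant [erg/K]
B = 1e-6                 # magnetic field [G] (~1 microgauss)
n_e = 1e-3               # electron density [cm^-3], fiducial
T = 7e7                  # gas temperature [K], fiducial

p_mag = B**2 / (8.0 * math.pi)   # magnetic pressure [erg cm^-3]
p_gas = 2.0 * n_e * K_B * T      # thermal pressure (electrons + ions)

ratio = p_mag / p_gas
print(f"P_B/P ~ {ratio:.1e}")
```

The result is of order $10^{-3}$, consistent with the quoted $( P_B / P ) \la 10^{-3}$ to within the order-of-magnitude uncertainties in $B$ and $n_e$.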
Although cluster magnetic fields may be generally weak, they are
enormously amplified by the compression and inflow in cooling flows
(Soker \& Sarazin 1990).
For frozen-in fields, the pressure associated with the magnetic field
increases dramatically;
e.g., $P_B \propto r^{-4}$ for homogeneous radial inflow.
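The $r^{-4}$ scaling can be seen directly from flux conservation: for a frozen-in field advected by a homogeneous radial inflow, the magnetic flux through concentric spheres is conserved, so the radial field component obeys
\[
B_r \, r^2 = {\rm const}
\quad \Longrightarrow \quad
B_r \propto r^{-2},
\qquad
P_B = \frac{B_r^2}{8\pi} \propto r^{-4} .
\]
(This is only the simplest, purely radial case; tangential field components behave differently.)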
Soker \& Sarazin showed that the fields should reach equipartition with the
thermal gas pressure within a typical radius of $\sim$20 kpc from the
center of the flow.
In the inner regions of cooling flows, the magnetic field
should then be very important dynamically.
The rapid amplification of the magnetic field in cooling flows also implies
a large increase in the rotation measure.
Soker \& Sarazin (1990) showed that the resulting rotation measures
will be
\begin{eqnarray}
RM & \approx & 4000
\left( \frac{n_c}{3 \times 10^{-3} \, {\rm cm}^{-3}} \right)^{2/3}
\left( \frac{l_{Bc}}{10 \, {\rm kpc}} \right)^{1/2}
\nonumber \\
&& \qquad
\times \left( \frac{{\dot M}_c}{100 \, M_\odot \, {\rm yr}^{-1}} \right)^{1/2}
\left( \frac{ T_c}{7 \times 10^7 \, {\rm K}} \right)^{1/2}
\, {\rm rad} \, {\rm m}^{-2} \, ,
\label{eq:rotm}
\end{eqnarray}
where $n_c$, $T_c$, ${\dot M}_c$, and $l_{Bc}$ are the electron density,
temperature, total cooling rate, and magnetic coherence length,
respectively, at the cooling radius.
In the inner regions of the cooling flow, the magnetic coherence length
is still expected to be about 10 kpc.
Some initial MHD simulations of cluster cooling flow by
Christodoulou \& Sarazin (1996) have confirmed these results.
Often, the central galaxy in a cluster cooling flow is a radio galaxy,
and these radio sources have been used to search for Faraday rotation.
In all cases observed so far, the central radio sources in cluster cooling
flows have either very large Faraday rotations or depolarization.
Examples include M87/Virgo
(Owen et al.\ 1990),
Cygnus A
(Dreher et al.\ 1987),
Hydra A
(Taylor \& Perley 1993),
3C295
(Perley \& Taylor 1991),
A1795, A2199, A2052
(Ge 1991;
Ge \& Owen 1993),
A2029, and A4059
(Taylor et al.\ 1994).
These radio sources have rotation measures of
$RM \approx 10^3 - 2 \times 10^4$ rad m$^{-2}$,
which imply magnetic fields with
$B \ga 10 \, \mu$G and $l_B \sim 10$ kpc.
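The implied field strengths can be estimated by inverting the standard rotation-measure expression, $RM \approx 812\, n_e B_\| L$ rad m$^{-2}$ (with $n_e$ in cm$^{-3}$, $B_\|$ in $\mu$G, and $L$ in kpc); the central density adopted below is an illustrative assumption:

```python
# Invert RM = 812 * n_e * B_par * L (RM in rad/m^2, n_e in cm^-3,
# B_par in microgauss, L in kpc) to estimate the field implied by the
# observed central rotation measures.
n_e = 0.03     # central electron density [cm^-3], assumed fiducial value
L = 10.0       # field coherence length [kpc]

rms = (1e3, 2e4)                                  # observed RM range [rad/m^2]
fields = [rm / (812.0 * n_e * L) for rm in rms]   # implied B_par [microgauss]
for rm, b in zip(rms, fields):
    print(f"RM = {rm:.0e} rad/m^2  ->  B_par ~ {b:.0f} microgauss")
```

Given the uncertain density and field geometry (reversals along the line of sight raise the inferred total field above $B_\|$), this is consistent with the quoted $B \ga 10\,\mu$G.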
From a survey of Faraday rotations, Ge (1991) concluded that ``all
sources in the centers of strong cooling flows have high $RM$ ($\ga
1000$ rad m$^{-2}$),'' and that all other sources (in the centers of
clusters without cooling flows and in the outer regions of clusters
with cooling flows) have much smaller $RM$'s ($\la 100$ rad m$^{-2}$).
Taylor et al.\ (1994) found that the rotation measures were positively
correlated with the total cooling rates.
A number of cases have also been found of amorphous radio sources at the
centers of cooling flows which are highly depolarized;
PKS0745-191 (Baum \& O'Dea 1991), and
2A0335+096 (Sarazin et al.\ 1995a)
are the best studied and clearest cases.
These observations confirm the prediction that the magnetic fields
in cooling flows are strong.
Since the amplification of the field is due to compression and inflow,
the large rotation measures provide indirect evidence that gas is
indeed flowing into cooling flows.
The implied fields in the inner regions give magnetic pressures which
are comparable to the very high gas pressures;
thus, it is likely that magnetic fields affect the dynamics of the
gas in these regions.
One important question is the ultimate fate of the magnetic flux which
is advected into cooling flows.
If the flux were not removed by some process, the field would grow
to very large levels.
The magnetic fields might be convected out of the cluster center, or
might be destroyed by field line reconnection.
Physical arguments and initial numerical simulations suggest that
reconnection is more important in cooling flows
(Soker \& Sarazin 1990;
Christodoulou \& Sarazin 1996).
\section{Radio Sources in Cooling Flows} \label{sec:radio}
Most of the central galaxies in cluster cooling flows host
radio galaxies
(Burns 1996).
In fact, many of the most famous, nearby radio galaxies
(e.g., Virgo A, Perseus A, Cygnus A) are located in the centers of
cluster cooling flows.
(However, there do exist cases of moderately strong cooling
flows without a radio source at the center
[e.g., A376, A2319, A2141]).
Most of the radio sources associated with the central galaxies
in cluster cooling flows are FR~I (edge-darkened) sources;
an exception is Cyg~A, which is an FR~II (edge-brightened)
source.
Recent observations of the FR~I radio sources in cluster cooling flows
suggest that they may be subdivided further into two separate morphologies.
Most of these radio sources show a radio jet or pair of jets which lead
from the nucleus to a pair of radio lobes.
I will refer to such sources as ``lobe-dominated sources.''
Examples of lobe-dominated sources in large cluster cooling
flows include Perseus~A, A1795, A2029, A2597, and A4059.
The second class of FR~I radio sources in cooling flows
are ``amorphous sources''
(Burns 1990;
Baum \& O'Dea 1991).
These seem to be less common than the lobe-dominated sources.
PKS0745-191 and 2A0335+096 are probably the best examples
(Baum \& O'Dea 1991;
Sarazin et al.\ 1995a).
In both of these radio sources, there is radio emission from
the galactic nucleus, but any jets are either very weak or
completely absent, even when the sources have been mapped with
a wide range of angular resolutions and at a wide range of radio
frequencies.
Most of the radio luminosity in these sources
comes from an extended region of diffuse, steep-spectrum
emission.
There is no clear evidence for radio lobes or strongly directed
outflow of radio plasma.
A key clue to the dynamics of these radio sources comes from
comparing the pressure of the nonthermal radio emitting plasma
$P_{rad}$ with the pressure of the ambient, thermal, X-ray emitting gas, $P_X$.
We derive the radio pressure from synchrotron theory,
making the usual ``minimum energy'' assumptions.
The average X-ray pressure at the radius of the extended radio
emission (either the lobes or the amorphous emission) is
derived from the azimuthally averaged X-ray surface brightness.
We find that the X-ray and radio pressures are generally
in fairly good agreement (factor of three) for the
lobe-dominated sources.
For example, in A2597
the average radio pressure in the two lobes is
$P_{rad} \approx 1.1 \times 10^{-9}$ dyn cm$^{-2}$,
while the X-ray pressure (at a slightly larger effective radius
because of the small size of the radio source) is
$P_{X} \approx 0.5 \times 10^{-9}$ dyn cm$^{-2}$
(Sarazin et al.\ 1995b).
This suggests that the radio lobes are distinct from and confined
by surrounding thermal gas.
In the amorphous sources, the radio pressures are much smaller
than the X-ray pressures.
In 2A0335+096,
the average radio pressure in the diffuse emission region is
$P_{rad} \approx 2.7 \times 10^{-12}$ dyn cm$^{-2}$,
while the X-ray pressure in the same region (in projection) is
$P_{X} \approx 1.1 \times 10^{-10}$ dyn cm$^{-2}$.
A similar result is found in PKS0745-191
(Baum \& O'Dea 1991).
While it is possible that the disagreement between the radio
pressure and thermal pressure in these objects is due to a
failure of the minimum energy assumptions,
I believe that this discrepancy indicates that the radio
plasma is mixed with the thermal plasma (either on a fine or
coarse scale).
If the relativistic particles which produce the radio emission
occupy the same region as the thermal plasma, the partial pressure of
the radio plasma need not balance the pressure of
the thermal plasma.
\begin{figure}
\vspace{2.25in}
\special{psfile=sarazin_fig_3.ps
voffset=-60 hoffset=85 hscale=35 vscale=35 angle=0}
\caption{Contours of the radio emission
from central galaxy in the cooling flow cluster A4059
are shown superposed on a greyscale representation of
the ROSAT HRI X-ray image
(Huang \& Sarazin 1996).}
\label{fig:a4059_x_radio}
\end{figure}
Another interesting result emerges if one compares the detailed
images of the central regions of cooling flows in radio and
in X-rays.
For the lobe-dominated sources, the X-ray emission and
the radio lobes appear to be anti-correlated.
That is, the projected region of the radio lobes is a region
of fainter X-ray emission, compared to the average X-ray surface
brightness at that radius.
The clearest example of this was shown in the ROSAT HRI X-ray
image of the Perseus cluster by
B\"ohringer et al.\ (1993).
However, we have found a similar effect in the ROSAT HRI
images of A1795, A2029, A2597, and A4059
(Sarazin et al.\ 1992, 1995b; Huang \& Sarazin 1996).
Figure~\ref{fig:a4059_x_radio} shows contours of the
radio emission from the central radio source in
the cooling flow cluster A4059 superposed on the ROSAT HRI
X-ray image
(Huang \& Sarazin 1996).
We see that the X-ray emission is elongated ENE to WSW,
and that the radio lobes appear to occupy regions of
lower X-ray brightness.
In the amorphous sources, the X-ray and radio emission occupy
the same projected regions.
If anything, the radio and X-ray emission appear to be
positively correlated
(e.g., Sarazin et al.\ 1995a).
This suggests that the radio emission and X-ray emission
come from the same volume of space.
Any process which compressed the thermal gas would increase
both the X-ray and radio emission, and this could produce
some level of correlation.
The polarization properties of these sources also connect the X-ray
emitting thermal plasma with the radio plasma.
As noted above (\S~\ref{sec:magnet}), the radio sources associated with
cluster cooling flows all show either large Faraday rotations or
depolarization.
In every lobe-dominated cooling flow source observed,
a very strong Faraday rotation
($RM \approx 10^{3} - 3 \times 10^{4}$ rad m$^{-2}$)
is observed.
On the other hand, both of the two amorphous sources
which have been observed (PKS0745-191, 2A0335+096)
showed complete depolarization
(Baum \& O'Dea 1991;
Sarazin et al.\ 1995a).
Large Faraday rotations (which imply that the polarization
vector undergoes many rotations) can only be produced if the
magnetized thermal plasma lies in front of the radio plasma.
If the thermal plasma and radio plasma are mixed (either
on a fine or coarse scale) the Faraday rotation will vary
along any line of sight through the radio source.
The differential Faraday rotation along each line of sight will
result in emission which is the superposition of all polarization
angles --- that is, unpolarized radiation.
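This argument can be made slightly more quantitative. Writing $\varepsilon(z)$ for the polarized emissivity along the line of sight and $RM(z)$ for the rotation measure of the magnetized thermal plasma in front of depth $z$, the observed complex polarization at wavelength $\lambda$ is
\[
P(\lambda) \propto \int \varepsilon(z)\, e^{2 i\, RM(z)\, \lambda^2}\, dz .
\]
When the radio and thermal plasmas are mixed, $RM(z)$ varies across the emitting region; once the spread satisfies $\Delta RM \ga \lambda^{-2}$, the phases wrap and the integral averages toward zero, depolarizing the emission.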
Thus, the strong Faraday rotation seen in lobe-dominated cooling
flow radio sources indicates that the radio plasma has displaced
the X-ray emitting thermal plasma.
Conversely, the depolarization of the amorphous sources indicates
that the radio plasma and thermal plasma are mixed.
In summary, the comparison of X-ray and radio pressure,
of X-ray and radio images, and of polarization properties
are all consistent with a picture in which the lobes in
the lobe-dominated radio sources have displaced the surrounding
thermal gas, while the radio plasma in the amorphous sources
appears to be mixed with the X-ray emitting gas.
\acknowledgments
This work was supported by NASA Astrophysical\linebreak
Theory Program grant
NAG 5-3057, NASA ASCA grant NAG 5-2526, and NASA ASCA grant 5-3308.
I would like to thank Noam Soker and others at Oranim for the wonderful
job they did in organizing this very useful meeting.
\section{Introduction}
In this work we deal with the non-local non-linear eigenvalue problem
\begin{equation}
\label{eq:autovalores}
\begin{cases}
(-\Delta_p)^r u = \lambda\dfrac{\alpha}p|u|^{\alpha-2}u|v|^{\beta} &\text{in } \Omega,\vspace{.2cm}\\
(-\Delta_p)^s v = \lambda\dfrac{\beta}p|u|^{\alpha}|v|^{\beta-2}v &\text{in } \Omega,\\
u=v=0 &\text{in }\Omega^c=\mathbb{R}^N\setminus\Omega,
\end{cases}
\end{equation}
where $p>1,$ $r,s\in(0,1),$ $\alpha,\beta\in(0,p)$ are such that
\begin{equation} \label{eq:a.b}
\alpha + \beta = p, \qquad \min\{\alpha;\beta\}\ge1,
\end{equation}
and $\lambda$ is the eigenvalue.
Here and subsequently $\Omega$ is a bounded smooth domain in $\mathbb{R}^N$ and
$(-\Delta_p)^t$ denotes the fractional
$(p,t)-$Laplacian, that is
\[
(-\Delta_p)^t u(x)\coloneqq 2\text{P.V.} \int_{\mathbb{R}^N}
\dfrac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+tp}} \, dy
\quad x\in\Omega.
\]
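For orientation, note that for $p=2$ the kernel is linear in $u$, since $|u(x)-u(y)|^{p-2}(u(x)-u(y))=u(x)-u(y)$, and the operator reduces to
\[
(-\Delta_2)^t u(x) = 2\,\text{P.V.}\int_{\mathbb{R}^N} \dfrac{u(x)-u(y)}{|x-y|^{N+2t}}\, dy,
\]
which is, up to a dimensional normalizing constant, the usual fractional Laplacian $(-\Delta)^t u(x)$.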
The natural functional space for our problem is
\[
\mathcal{W}^{(r,s)}_p(\Omega)\coloneqq\widetilde{W}^{r,p}(\Omega)\times\widetilde{W}^{s,p}(\Omega).
\]
Here $\widetilde{W}^{t,p}(\Omega)$ denotes the space of all
$u$ belong to the fractional Sobolev space
\[
W^{t,p}(\Omega)\coloneqq
\left\{
v\in L^p(\Omega)\colon
\int_{\Omega^2}\dfrac{|v(x)-v(y)|^p}{|x-y|^{N+tp}} dxdy<\infty
\right\}
\]
such that $\tilde{u}\in W^{t,p}(\mathbb{R}^N)$ where $\tilde{u}$ is the extension by
zero of $u$ and $\Omega^2=\Omega\times\Omega.$
For a more detailed description of these spaces and some of their properties, see
for instance \cite{Adams,Hitchhiker}.
Note that in our eigenvalue problem we are considering two different fractional operators (since we
allow for $r\neq s$) and therefore the natural space to consider here, that is
$\mathcal{W}^{(r,s)}_p(\Omega)=\widetilde{W}^{r,p}(\Omega)\times\widetilde{W}^{s,p}(\Omega)$,
is not symmetric.
In this context, an eigenvalue is a real value $\lambda$ for which
there is $(u,v)\in\mathcal{W}^{(r,s)}_p(\Omega)$ such that $uv\not\equiv0,$ and
$(u,v)$ is a weak solution of \eqref{eq:autovalores}, i.e.,
\begin{align*}
\int_{\mathbb{R}^{2N}}
\dfrac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(w(x)-w(y))}{|x-y|^{N+rp}}dxdy
&=\lambda\dfrac\alpha{p}\int_{\Omega}|u|^{\alpha-2}u|v|^{\beta}w dx\\
\int_{\mathbb{R}^{2N}}
\dfrac{|v(x)-v(y)|^{p-2}(v(x)-v(y))(z(x)-z(y))}{|x-y|^{N+sp}}dxdy
&=\lambda\dfrac{\beta}p\int_{\Omega}|u|^{\alpha}|v|^{\beta-2}vz dx
\end{align*}
for any $(w,z)\in\mathcal{W}^{(r,s)}_p(\Omega).$ The pair $(u,v)$ is called a corresponding eigenpair.
Observe that if $\lambda$ is an eigenvalue with eigenpair $(u,v)$
then $uv\not\equiv0$ and
\[
\lambda=\dfrac{[u]_{r,p}^p+[v]_{s,p}^p}
{|(u,v)|_{\alpha,\beta}^p},
\]
here
\[
[w]_{t,p}^p\coloneqq\int_{\mathbb{R}^{2N}}
\dfrac{|w(x)-w(y)|^p}{|x-y|^{N+tp}} dxdy
\quad \text{ and }\quad
|(u,v)|_{\alpha,\beta}^p\coloneqq\int_{\Omega} |u|^{\alpha}|v|^{\beta}dx.
\]
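This identity follows by taking $(w,z)=(u,v)$ in the weak formulation, which gives
\[
[u]_{r,p}^p=\lambda\dfrac{\alpha}{p}\,|(u,v)|_{\alpha,\beta}^p
\quad\text{and}\quad
[v]_{s,p}^p=\lambda\dfrac{\beta}{p}\,|(u,v)|_{\alpha,\beta}^p;
\]
adding the two equalities and using $\alpha+\beta=p$ from \eqref{eq:a.b} yields
$[u]_{r,p}^p+[v]_{s,p}^p=\lambda\,|(u,v)|_{\alpha,\beta}^p.$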
Thus
\[
\lambda\ge \lambda_{1,p}
\]
where
\begin{equation}\label{eq:1eraut}
\lambda_{1,p}\coloneqq
\inf\left\{
\dfrac{[u]_{r,p}^p+[v]_{s,p}^p}
{|(u,v)|_{\alpha,\beta}^p}\colon
(u,v)\in\mathcal{W}^{(r,s)}_p(\Omega), uv\not\equiv0
\right\}.
\end{equation}
Our first aim is to show that $\lambda_{1,p}$ is the first eigenvalue
of our problem. In fact, in Section \ref{section:1erAutov}, we prove the following result.
\begin{teo} \label{teo:1eraut}
There is a nontrivial minimizer
$(u_p,v_p)$ of \eqref{eq:1eraut} such that both components are positive, $u_p,v_p > 0$ in $\Omega$,
and $(u_p,v_p)$ is a weak solution of \eqref{eq:autovalores} with
$\lambda=\lambda_{1,p}.$
Moreover, $\lambda_{1,p}$ is simple.
Finally, there is a sequence of eigenvalues $\lambda_n$ such that
$\lambda_n\to\infty$ as $n\to\infty$.
\end{teo}
We do not know whether the first eigenvalue is isolated.
\medskip
Now our aim is to study $\lambda_{1,p}$ for large $p$; to this end we analyze the asymptotic behaviour of $\lambda_{1,p}$ as $p\to \infty$. From now on, for any $p>1,$ $(u_p,v_p)$ denotes the eigenpair associated with $\lambda_{1,p}$
normalized so that $|(u_p,v_p)|_{\alpha,\beta}=1$ (here the exponents $\alpha=\alpha_p$ and $\beta=\beta_p$ are allowed to depend on $p$). To study the limit as $p\to \infty$ we need to assume that
\begin{equation}
\label{eq:alfabeta}
p\min\{r,s\}\ge N,
\end{equation}
and
\begin{equation}\label{lim.Gamma}
\lim_{p\to \infty} \frac{\alpha_p}{p} = \Gamma, \qquad 0<\Gamma <1.
\end{equation}
Note that the last assumption and the fact that $\alpha_p +\beta_p=p$ implies
\begin{equation}\label{lim.Gamma.2}
\lim_{p\to \infty} \frac{\beta_p}{p} = 1 - \Gamma, \qquad 0<1-\Gamma <1.
\end{equation}
In order to state our main theorem concerning the limit as $p\to \infty$, we need to introduce the following notations:
\[
[w]_{t,\infty} \coloneqq \sup_{x,y\in\overline{\Omega},\, x\neq y} \frac{| w(y) - w(x)|}{|x-y|^{t}},
\]
\[
\widetilde{W}^{t,\infty}(\Omega)\coloneqq
\left\{w\in C_0(\overline{\Omega})\colon
[w]_{t,\infty}<\infty\right\},
\quad \mathcal{W}^{(r,s)}_\infty(\Omega)\coloneqq\widetilde{W}^{r,\infty}(\Omega)\times\widetilde{W}^{s,\infty}(\Omega)
\]
and
\[
R(\Omega)\coloneqq\max_{x\in\Omega}\mathop{\mbox{\normalfont dist}}\nolimits(x,\partial\Omega).
\]
Now we are ready to state our second result. It says that there is a limit for $[\lambda_{1,p}]^{\nicefrac{1}{p}}$ and that this limit verifies both a variational characterization and a simple geometrical characterization. In addition, concerning eigenfunctions there is a uniform limit (along subsequences) that is a viscosity solution to a limit PDE eigenvalue problem. The proofs of our results concerning limits as $p\to \infty$ are gathered in Section \ref{sec-p-infty}.
\begin{teo} \label{teo.2.intro}
Under the assumptions \eqref{eq:alfabeta} and \eqref{lim.Gamma}, we have that
$$
\lim_{p\to\infty } [\lambda_{1,p}]^{\nicefrac{1}{p}}= \Lambda_{1,\infty}
$$
where
$$
\Lambda_{1,\infty}
= \inf \left\{
\frac{\max \{ [u]_{r,\infty} ; [v]_{s,\infty} \} }{ \| |u|^{\Gamma} |v|^{1-\Gamma} \|_{L^\infty (\Omega)} }
\colon (u,v)\in \mathcal{W}^{(r,s)}_\infty(\Omega)\right\}.
$$
Moreover, we have the following geometric characterization of the limit eigenvalue:
$$
\Lambda_{1,\infty} = \left[ \frac{1}{R(\Omega)} \right]^{ (1-\Gamma) s + \Gamma r }.
$$
Lastly, there is a sequence $p_j \to \infty$ such that $(u_{p_j},v_{p_j})\to (u_\infty,v_\infty)$
uniformly in $\overline{\Omega},$ where
$(u_\infty,v_\infty)$ is a minimizer of $\Lambda_{1,\infty}$ and
a viscosity solution to
\begin{equation}\label{eq:limite.intro}
\begin{cases}
\min\left\{\mathcal{L}_{r,\infty}u(x);\mathcal{L}_{r,\infty}^+u(x)-\Lambda_{1,\infty} u^{\Gamma}(x) v^{1-\Gamma}(x)\right\}
=0
&\text{ in } \Omega,\\
\min\left\{\mathcal{L}_{s,\infty}v(x);\mathcal{L}_{s,\infty}^+v(x)-\Lambda_{1,\infty} u^{\Gamma}(x) v^{1-\Gamma}(x)\right\}
=0&\text{ in } \Omega,\\
u=v=0 &\text{ in } \mathbb{R}^N\setminus\Omega,
\end{cases}
\end{equation}
where
\[
\mathcal{L}_{t,\infty}w(x)\coloneqq\mathcal{L}_{t,\infty}^+w(x)
+\mathcal{L}_{t,\infty}^-w(x)= \sup_{y\in\mathbb{R}^N}\dfrac{w(x)-w(y)}{|x-y|^{t}}
+\inf_{y\in\mathbb{R}^N}\dfrac{w(x)-w(y)}{|x-y|^{t}}.
\]
\end{teo}
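To illustrate the geometric characterization, here is the easy inequality: let $\delta(x)\coloneqq\mathop{\mbox{\normalfont dist}}\nolimits(x,\partial\Omega)$ and take the competitor $(u,v)=(\delta^r,\delta^s)$. Since $\delta$ is $1$-Lipschitz and $t\mapsto t^\tau$ is $\tau$-H\"older with constant $1$ for $\tau\in(0,1)$, we get $[\delta^r]_{r,\infty}\le1$ and $[\delta^s]_{s,\infty}\le1$, while
\[
\| u^{\Gamma} v^{1-\Gamma}\|_{L^\infty(\Omega)}
=\max_{x\in\Omega}\,\delta(x)^{\Gamma r+(1-\Gamma)s}
=R(\Omega)^{\Gamma r+(1-\Gamma)s}.
\]
Hence $\Lambda_{1,\infty}\le R(\Omega)^{-[(1-\Gamma)s+\Gamma r]}$; for a ball $B_R$ this reads $\Lambda_{1,\infty}\le R^{-[(1-\Gamma)s+\Gamma r]}$, since $R(B_R)=R$. The matching lower bound is part of the proof of the theorem.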
To end the introduction let us briefly refer to previous references on this subject.
The limit of $p-$harmonic functions (solutions to the local $p-$Laplacian, that is,
$-\Delta_p u =-\mbox{div} (|\nabla u|^{p-2} \nabla u)= 0$) as $p\to\infty$ has been extensively studied in the literature
(see \cite{BBM} and the survey \cite{ACJ}) and leads naturally to solutions of the infinity Laplacian, given by
$-\Delta_{\infty} u = - \nabla u D^2 u (\nabla u)^t=0$. Infinity
harmonic functions (solutions to $-\Delta_\infty u =0$) are
related to the optimal Lipschitz extension problem (see the survey
\cite{ACJ}) and find applications in optimal transportation, image
processing and tug-of-war games (see, e.g.,\cite{CMS,GAMPR,PSSW,PSSW2} and the references therein).
Also limits of the eigenvalue problem related to the $p$-Laplacian with various boundary conditions
have been exhaustively examined, see \cite{GMPR,JL,JLM,RoSain,RoSain2},
and lead naturally to the infinity Laplacian eigenvalue problem (in the scalar case)
\begin{equation}\label{infty.1}
\min \left\{ |\nabla u| - \lambda u ,\ - \Delta_{\infty} u
\right\}=0.
\end{equation}
In particular, the limit as $p\to \infty$ of the first eigenvalue $\lambda_{p,D}$
of the $p$-Laplacian with Dirichlet boundary conditions and of its corresponding positive
normalized eigenfunction $u_p$ have been studied in \cite{JL,JLM}.
It was proved there that, up to a subsequence, the eigenfunctions $u_{p}$ converge uniformly to some Lipschitz
function $u_\infty$ satisfying $\|u_\infty\|_\infty=1$, and
\begin{equation}\label{DefInfEig}
(\lambda_{p,D})^{\nicefrac{1}{p}} \to \lambda_{\infty,D}
= \inf_{u\in W^{1,\infty}(\Omega)} \dfrac{\|\nabla u\|_\infty}{\|u\|_\infty}
= \dfrac{1}{R(\Omega)}.
\end{equation}
Moreover $u_\infty$ is an extremal for this limit variational problem and
the pair $u_\infty$, $\lambda_{\infty,D}$ is a
nontrivial solution to \eqref{infty.1}.
This problem has also been studied from an optimal mass-transport point of view in
\cite{ChdPG}. Note that here the fact that we are dealing with two different operators in the system is reflected in that the limit is given by $$
\Lambda_{1,\infty} = \left[ \frac{1}{R(\Omega)} \right]^{ (1-\Gamma) s + \Gamma r },$$ a quantity that depends on $r$ and $s$.
On the other hand, there is a rich recent literature concerning eigenvalues for systems of $p-$Laplacian type,
(we refer e.g. to \cite{BdF,dpr,FMST,dNP,Z} and references therein). The only references that we know concerning the asymptotic behaviour as $p$ goes to infinity
of the eigenvalues for a system are \cite{BRS} and \cite{dpr} where the authors study the behaviour of the first eigenvalue for a system with the usual local $p-$Laplacian operator.
Finally, concerning limits as $p\to \infty$ in fractional eigenvalue
problems (a single equation), we mention \cite{Brasco,FP,JL}.
In \cite{JL} the limit of the first eigenvalue for the fractional
$p-$Laplacian is studied while in \cite{FP} higher eigenvalues are
considered.
\section{Preliminaries}\label{A}
We begin with a review of the basic results that
will be needed in subsequent sections.
The known results are generally stated without proofs,
but we provide references where
the proofs can be found. Also,
we introduce some of our notational conventions.
\subsection{Fractional Sobolev spaces}
Let $s\in(0,1)$ and $p\in(1,\infty).$
There are several choices of norm for $W^{s,p}(\Omega);$ we choose the following:
\[
\|u\|_{s,p}^p\coloneqq \|u\|_{L^p(\Omega)}^p+|u|_{s,p}^p
\]
where
\[
|u|_{s,p}^p=\int_{\Omega^2}\dfrac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\, dxdy.
\]
Observe that for any $u\in\widetilde{W}^{s,p}(\Omega)$ we get
\[
|u|_{s,p}\le [u]_{s,p}.
\]
Our first aim is to show a Poincar\'e--type inequality.
\begin{lem} \label{lem:poincare}
Let $s\in(0,1).$ For any $p>1$ we have
\[
[u]_{s,p}^p\ge \dfrac{N\omega_N}{sp}({\rm{diam}}(\Omega)+1)^{-sp}
\|u\|_{L^p(\Omega)}^p
\quad\forall u\in\widetilde{W}^{s,p}(\Omega),
\]
where $\omega_N$ is the $N-$dimensional volume of a Euclidean ball of radius 1.
\end{lem}
\begin{proof}
Let $u\in\widetilde{W}^{s,p}(\Omega).$ Then
\[
[u]_{s,p}^p\geq \int_{\Omega}|u(x)|^p\int_{\Omega_1}\dfrac{1}{|x-y|^{N+sp}} dydx
\]
where $\Omega_1=\{y\in\Omega^c\colon \mathop{\mbox{\normalfont dist}}\nolimits(y,\Omega)\ge1\}.$ Now, we observe that
for any $x\in\Omega$ we have $B_{d+1}(x)^c\subset\Omega_1$ where $d={\rm{diam}}(\Omega).$ Thus
\[
\int_{\Omega_1}\dfrac{dy}{|x-y|^{N+sp}}\ge
\int_{B_{d+1}(x)^c}\dfrac{dy}{|x-y|^{N+sp}}=
N\omega_{N}\int_{d+1}^{\infty} \dfrac{d\rho}{\rho^{sp+1}}=\dfrac{N\omega_N}{sp}(d+1)^{-sp}
for all $x\in\Omega.$ Therefore, we conclude that,
\[
[u]_{s,p}^p\ge \dfrac{N\omega_N}{sp}(d+1)^{-sp}\|u\|_{L^p(\Omega)}^p.
\]
\end{proof}
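The key calculus step in the proof, namely $\int_{d+1}^{\infty}\rho^{-sp-1}\,d\rho=(d+1)^{-sp}/(sp)$, can be sanity-checked numerically; the values of $d$, $s$, $p$ below are illustrative samples, not taken from the text.

```python
import numpy as np

# Check: int_{d+1}^infty rho^(-sp-1) d rho = (d+1)^(-sp) / (sp).
d, s, p = 1.7, 0.6, 3.0   # illustrative sample values
sp = s * p
R = d + 1.0

exact = R ** (-sp) / sp

# Trapezoid rule on [R, T]; the tail beyond T = 1e4 is negligible here.
rho = np.linspace(R, 1.0e4, 1_000_001)
f = rho ** (-sp - 1.0)
numeric = np.sum((f[1:] + f[:-1]) * np.diff(rho)) / 2.0

print(exact, numeric)  # the two values should agree to about 1e-6
```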
The following result will be one of the keys in the proof of Theorem \ref{teo.2.intro}.
\begin{lem}\label{lem:inclusion}
Let $s\in(0,1)$ and $p>\nicefrac{N}{s}.$ If $q\in(\nicefrac{N}{s},p)$
and $t=s-\nicefrac{N}{q}$ then
\[
\|u\|_{L^{q}(\Omega)}\le |\Omega|^{\nicefrac1q-\nicefrac1p}
\|u\|_{L^p(\Omega)} \qquad \text{ and } \qquad |u|_{t,q}\le
{\rm{diam}}(\Omega)^{\nicefrac{N}p}|\Omega|^{\nicefrac2q-\nicefrac2p}|u|_{s,p}
\]
for all $u\in W^{s,p}(\Omega).$
\end{lem}
\begin{proof}
Since $q<p,$ the first inequality follows from H\"older's inequality; hence we only need to prove the second one.
Let $u\in W^{s,p}(\Omega).$ It follows from H\"older's inequality that
\begin{align*}
|u|_{t,q}^q&=\int_{\Omega^2}\dfrac{|u(x)-u(y)|^q}{|x-y|^{sq}} dxdy\\
&\le
\left(\int_{\Omega^2}\dfrac{|u(x)-u(y)|^p}{|x-y|^{sp}} dxdy\right)^{\nicefrac{q}p}
|\Omega|^{2-2\nicefrac{q}p}\\
&\le\text{diam}(\Omega)^{\nicefrac{Nq}p}
\left(\int_{\Omega^2}\dfrac{|u(x)-u(y)|^p}{|x-y|^{sp+N}} dxdy\right)^{\nicefrac{q}p}
|\Omega|^{2-2\nicefrac{q}p},
\end{align*}
as we wanted to show.
\end{proof}
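A numerical spot-check of the second inequality of the lemma, in dimension $N=1$ on $\Omega=(0,1)$ (so ${\rm diam}(\Omega)=|\Omega|=1$ and the bound reduces to $|u|_{t,q}\le|u|_{s,p}$): the test function $u(x)=x(1-x)$ and the exponents are illustrative choices, and both double integrals are discretized by the midpoint rule.

```python
import numpy as np

# Spot-check of |u|_{t,q} <= diam^(N/p) |Omega|^(2/q-2/p) |u|_{s,p}
# in N = 1 on Omega = (0,1); parameters and test function illustrative.
N = 1
s, p, q = 0.5, 4.0, 3.0              # q in (N/s, p) = (2, 4)
t = s - N / q                        # = 1/6

n = 400
x = (np.arange(n) + 0.5) / n         # midpoint grid on (0, 1)
u = x * (1.0 - x)

X, Y = np.meshgrid(x, x)
U, V = np.meshgrid(u, u)
D = np.abs(X - Y)
off = D > 0                          # drop the (removable) diagonal cells
h2 = (1.0 / n) ** 2                  # area of one grid cell

semi_sp = (np.sum(np.abs(U - V)[off] ** p / D[off] ** (N + s * p)) * h2) ** (1 / p)
semi_tq = (np.sum(np.abs(U - V)[off] ** q / D[off] ** (N + t * q)) * h2) ** (1 / q)

print(semi_tq <= semi_sp)  # expect True
```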
\subsection{Weak and Viscosity Solutions}
Let us discuss the relation between the weak solutions of
\begin{equation}\label{eq:viscosity1}
\begin{cases}
(-\Delta_p)^s u= f(x) &\text{ in }\Omega,\\
u=0 &\text{ in }\Omega^c,
\end{cases}
\end{equation}
and the viscosity solutions of the same problem.
\medskip
We begin by introducing the precise definitions of weak and viscosity solutions.
\medskip
\noindent{\bf Definition (weak solution).}
Let $f\in W^{-s,p}(\Omega)$ (the dual space of $\widetilde{W}^{s,p}(\Omega)$) and $u\in \widetilde{W}^{s,p}(\Omega).$
We say that $u$ is a weak solution of \eqref{eq:viscosity1} if and only if
\[
\int_{\mathbb{R}^{2N}}
\dfrac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+sp}}dxdy
=\langle f,v\rangle
\]
for every $v\in \widetilde{W}^{s,p}(\Omega)$.
Here $\langle \cdot,\cdot\rangle$ denotes the duality pairing of $\widetilde{W}^{s,p}(\Omega)$
with $W^{-s,p}(\Omega).$
\medskip
\noindent{\bf Definition (viscosity solution).}
Let $p\ge2,$ $f\in C(\overline{\Omega})$ and $u\in C(\mathbb{R}^N)$ be such that $u=0$ in
$\Omega^c.$
We say that $u$ is a viscosity subsolution
of \eqref{eq:viscosity1} at a point $x_0\in \Omega$ if and only if
for any test function $\varphi\in C^2_0(\mathbb{R}^N)$ such that
$u(x_0)=\varphi(x_0)$ and $u(x)\le\varphi(x)$ for all $x\in\mathbb{R}^N$ we have that
\[
2\int_{\mathbb{R}^N}
\dfrac{|\varphi(x_0)-\varphi(y)|^{p-2}(\varphi(x_0)-\varphi(y))}{|x_0-y|^{N+sp}} \, dy \le
f(x_0).
\]
We say that $u$ is a viscosity supersolution
of \eqref{eq:viscosity1} at a point $x_0\in \Omega$ if and only if
for any test function $\varphi\in C^2_0(\mathbb{R}^N)$ such that
$u(x_0)=\varphi(x_0)$ and $u(x)\ge\varphi(x)$
for all $x\in\mathbb{R}^N$ we have that
\[
2\int_{\mathbb{R}^N}
\dfrac{|\varphi(x_0)-\varphi(y)|^{p-2}(\varphi(x_0)-\varphi(y))}{|x_0-y|^{N+sp}} \, dy \ge
f(x_0).
\]
Finally, $u$ is called a viscosity solution
of \eqref{eq:viscosity1} if it is both a
viscosity super- and subsolution at $x_0$ for any $x_0\in\Omega$.
\medskip
Following carefully the proof of \cite[Proposition 11]{LL}, we have the following result.
\begin{teo}\label{teo:debilvisco}
Let $p\ge2$ and $f\in C(\overline{\Omega}).$ If $u$ is a weak solution of \eqref{eq:viscosity1}
then it is also a viscosity solution.
\end{teo}
The following result is one of the keys to showing that every eigen-pair associated to the first
eigenvalue has constant sign. For the proof we refer to \cite[Lemma 12]{LL}.
\begin{lem}\label{lema:viscopositivo}
Let $p\ge 2.$ Assume $u\ge0$ and $u\equiv0$ in $\Omega^c.$ If $u$ is a
viscosity supersolution of $(-\Delta_p)^su=0$ in $\Omega$ then either
$u>0$ in $\Omega$ or $u\equiv 0$ in $\mathbb{R}^N.$
\end{lem}
\section{The eigenvalue problem}\label{section:1erAutov}
We begin by showing that $\lambda_{1,p}$ is the first eigenvalue of
our problem.
\begin{lem}\label{lema:1A1}
There is a nontrivial minimizer
$(u,v)$ of \eqref{eq:1eraut} such that $u,v > 0$ a.e. in $\Omega$
and $(u,v)$ is a weak solution of \eqref{eq:autovalores} with
$\lambda=\lambda_{1,p}.$
\end{lem}
\begin{proof}
Since $C_0^\infty(\Omega)\times C_0^\infty(\Omega)\subset\mathcal{W}^{(r,s)}_p(\Omega),$ we have
\begin{equation}\label{eq:1A1.1}
0\le\inf\left\{
\dfrac{[u]_{r,p}^p+[v]_{s,p}^p}
{|(u,v)|_{\alpha,\beta}^p}\colon
(u,v)\in\mathcal{W}^{(r,s)}_p(\Omega), uv\not\equiv0
\right\}<\infty.
\end{equation}
Now, we consider a minimizing sequence $\{(u_n,v_n)\}_{n\in\mathbb{N}}$ normalized according to
$|(u_n,v_n)|_{(\alpha,\beta)}=1$. It follows from \eqref{eq:1A1.1} that $\{(u_n,v_n)\}$ is bounded in
$\mathcal{W}^{(r,s)}_p(\Omega).$ Then, by the compactness of the Sobolev embedding theorem, there is a subsequence
$\{(u_{n_j},v_{n_j})\}_{j\in\mathbb{N}}$ such that
\begin{align*}
&u_{n_j} \rightharpoonup u \text{ weakly in }\widetilde{\mathcal{W}}^{r,p}(\Omega),
\quad & v_{n_j} \rightharpoonup v \text{ weakly in }\widetilde{\mathcal{W}}^{s,p}(\Omega),\\
&u_{n_j} \to u \text{ strongly in } L^p(\Omega),
\quad & v_{n_j} \to v \text{ strongly in } L^p(\Omega).
\end{align*}
Thus, $|(u,v)|_{(\alpha,\beta)}=1$ and
\[
[u]_{r,p}^p+[v]_{s,p}^p\le\liminf_{j\to\infty}
\left\{ [u_{n_j}]_{r,p}^p+[v_{n_j}]_{s,p}^p \right\} =\lambda_{1,p}.
\]
Therefore $(u,v)$ is a minimizer of \eqref{eq:1eraut}. Moreover, since
\[
[|u|]_{r,p}^p+[|v|]_{s,p}^p\le [u]_{r,p}^p+[v]_{s,p}^p,
\]
we can assume that $u$ and $v$ are non-negative functions.
The fact that this minimizer is a weak solution of \eqref{eq:autovalores} with
$\lambda=\lambda_{1,p}$ is straightforward and can be obtained from the arguments in \cite{LL}.
Finally, since $u$ and $v$ are non-negative functions and $(u,v)$ is a weak solution of
\eqref{eq:autovalores} with $\lambda=\lambda_{1,p},$ by \cite[Theorem A.1]{Brasco1}, we obtain that
$u,v$ are positive a.e. in $\Omega.$
\end{proof}
The following result follows from the classical inequality
\[
||a|-|b||<|a-b| \quad \text{whenever } ab<0.
\]
\begin{co}\label{co:autopositivo}
If $(u,v)$ is an eigen-pair corresponding to $\lambda_{1,p}$
then $u$ and $v$ have constant sign.
\end{co}
Our next aim is to prove that all the eigen-pairs associated to $\lambda_{1,p}$ are bounded.
For this, we follow ideas from \cite[Theorem 3.2]{Brasco2}.
\begin{lem}\label{lema.cota}
If $(u,v)$ is an eigen-pair associated to $\lambda_{1,p},$
then $u,v \in L^\infty(\mathbb{R}^N).$
\end{lem}
\begin{proof}
Without loss of generality we can assume that $r\le s$ and $u,v>0$ a.e. in $\Omega.$
It follows from the fractional Sobolev embedding theorem
(see, e.g., \cite[Corollary 4.53 and Theorem 4.54]{DD}) that,
if $r>\nicefrac{N}{p}$ then the assertion holds.
Then we need to prove that the assertion also holds in the following
cases:
\begin{description}
\item[Case 1] $r<\nicefrac{N}{p};$
\item[Case 2] $r=\nicefrac{N}{p}.$
\end{description}
Before we start to analyze the different cases, we will show two inequalities.
For every $M>0,$ we define
\[
u_M(x)\coloneqq \min\{u(x),M\}
\quad\text{ and }\quad v_M(x)\coloneqq \min\{v(x),M\}.
\]
Since $(u,v)\in\mathcal{W}^{(r,s)}_p(\Omega),$ it is not hard to verify that $(u_M,v_M)\in\mathcal{W}^{(r,s)}_p(\Omega).$
Moreover if $q\ge 1$ then $(u_M^q,v_M^q)\in\mathcal{W}^{(r,s)}_p(\Omega).$ Then,
since $(u,v)$ is an eigen-pair associated to $\lambda_{1,p},$
$u_M\le u,$ $v_M\le v,$ and $\alpha,\beta\le p,$ we have
\begin{align*}
\int_{\mathbb{R}^{2N}}
\dfrac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(u_M^q(x)-u_M^q(y))}{|x-y|^{N+rp}}dxdy
&\le\lambda_{1,p}\int_{\Omega}u^{\alpha+q-1}v^{\beta} dx,\\
\int_{\mathbb{R}^{2N}}
\dfrac{|v(x)-v(y)|^{p-2}(v(x)-v(y))(v_M^q(x)-v_M^q(y))}{|x-y|^{N+sp}}dxdy
&\le\lambda_{1,p}\int_{\Omega}u^{\alpha}v^{\beta+q-1} dx.
\end{align*}
Hence, by using \cite[Lemma C2]{Brasco2}, we get
\begin{equation}\label{eq:brasco1}
\begin{aligned}
\dfrac{qp^p}{(q+p-1)^p}
\int_{\mathbb{R}^{2N}}\frac{|u_M^{\frac{q+p-1}p}(x)-u_M^{\frac{q+p-1}p}(y)|^{p}}{|x-y|^{N+rp}}dxdy
&\le\lambda_{1,p}\int_{\Omega}u^{\alpha+q-1}v^{\beta} dx,\\
\dfrac{qp^p}{(q+p-1)^p}\int_{\mathbb{R}^{2N}}
\dfrac{|v_M^{\frac{q+p-1}p}(x)-v_M^{\frac{q+p-1}p}(y)|^{p}}{|x-y|^{N+rp}}dxdy
&\le\lambda_{1,p}\int_{\Omega}u^{\alpha}v^{\beta+q-1} dx.
\end{aligned}
\end{equation}
We now begin to analyze the different cases.
\noindent{\bf Case 1:} $r<\nicefrac{N}p.$ Since $r\le s,$ we have $p_r^\star\le p_s^\star.$
Therefore, by Sobolev's embedding theorem,
\begin{align*}
\left(\int_{\Omega} u_M^{\frac{q+p-1}{p}p_r^\star} dx\right)^{\frac{p}{p_r^\star}}
& \le C(N,r,p,\Omega)\int_{\mathbb{R}^{2N}}\dfrac{|u_M^{\frac{q+p-1}{p}}(x)
-u_M^{\frac{q+p-1}{p}}(y)|^{p}}{|x-y|^{N+rp}}dxdy,\\
\left(\int_{\Omega} v_M^{\frac{q+p-1}{p}p_r^\star} dx\right)^{\frac{p}{p_r^\star}}
&\le C(N,r,s,p,\Omega)\int_{\mathbb{R}^{2N}}
\dfrac{|v_M^{\frac{q+p-1}{p}}(x)-v_M^{\frac{q+p-1}{p}}(y)|^{p}}{|x-y|^{N+rp}}dxdy.
\end{align*}
Then, by \eqref{eq:brasco1}, we get
\begin{align*}
\left(\int_{\Omega} u_M^{\frac{q+p-1}pp_r^\star} dx\right)^{\frac{p}{p_r^\star}}
& \le \dfrac{\lambda_{1,p}}{C(N,r,p,\Omega)}\left(\dfrac{q+p-1}{p}\right)^{p-1}
\int_{\Omega}u^{\alpha+q-1}v^{\beta} dx,\\
\left(\int_{\Omega} v_M^{\frac{q+p-1}p p_r^\star} dx\right)^{\frac{p}{p_r^\star}} &
\le \dfrac{\lambda_{1,p}}{C(N,r,s,p,\Omega)}\left(\dfrac{q+p-1}{p}\right)^{p-1}
\int_{\Omega}u^{\alpha}v^{\beta+q-1} dx.
\end{align*}
By using Fatou's lemma and Young's inequality, we obtain
\begin{align*}
\left(\int_{\Omega} u^{\frac{q+p-1}p p_r^\star} dx\right)^{\frac{p}{p_r^\star}}
& \le \dfrac{\lambda_{1,p}}{C(N,r,p,\Omega)}
\left(\dfrac{p+q-1}{p}\right)^{p-1}
\left(\int_{\Omega}u^{p+q-1} dx+\int_{\Omega} v^{p+q-1} dx\right),\\
\left(\int_{\Omega} v^{\frac{q+p-1}pp_r^\star} dx\right)^{\frac{p}{p_r^\star}} &
\le \dfrac{\lambda_{1,p}}{C(N,r,s,p,\Omega)}\left(\dfrac{q+p-1}{p}\right)^{p-1}
\left(\int_{\Omega}u^{p+q-1} dx+\int_{\Omega} v^{p+q-1} dx\right).
\end{align*}
Taking $\mathcal{Q}=\nicefrac{q+p-1}p,$ we get
\begin{align*}
\left(\int_{\Omega} u^{\mathcal{Q}\frac{Np}{N-rp}} dx\right)^{\frac{\mathcal{Q}(N-rp)}{\mathcal{Q}N}}
& \le \dfrac{\lambda_{1,p}}{C(N,r,p,\Omega)}
\mathcal{Q}^{p-1}
\left(\int_{\Omega}u^{\mathcal{Q}p} dx+\int_{\Omega} v^{\mathcal{Q}p} dx\right),\\
\left(\int_{\Omega} v^{\mathcal{Q}\frac{Np}{N-rp}}dx\right)^{\frac{\mathcal{Q}(N-rp)}{\mathcal{Q}N}} &
\le \dfrac{\lambda_{1,p}}{C(N,r,s,p,\Omega)}\mathcal{Q}^{p-1}
\left(\int_{\Omega}u^{\mathcal{Q}p} dx+\int_{\Omega} v^{\mathcal{Q}p} dx\right).
\end{align*}
Then
\begin{align*}
\|u\|_{L^{\frac{\mathcal{Q}N}{N-rp}p}(\Omega)}^{\mathcal{Q}p}
& \le \dfrac{\lambda_{1,p}}{C(N,r,p,\Omega)}
\mathcal{Q}^{p-1}
\left(\|u\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p} +
\|v\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p}\right),\\
\|v\|_{L^{\frac{\mathcal{Q}N}{N-rp}p}(\Omega)}^{\mathcal{Q}p}
& \le \dfrac{\lambda_{1,p}}{C(N,r,s,p,\Omega)}
\mathcal{Q}^{p-1}
\left(\|u\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p} +
\|v\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p}\right).\\
\end{align*}
Hence
\begin{align*}
&\left(\|u\|_{L^{\frac{\mathcal{Q}N}{N-rp}p}(\Omega)}^{\mathcal{Q}p}
+\|v\|_{L^{\frac{\mathcal{Q}N}{N-rp}p}(\Omega)}^{\mathcal{Q}p}\right)^{\frac1{\mathcal{Q}p}}\\
& \le \left(\dfrac{2\lambda_{1,p}}{C(N,r,s,p,\Omega)}\right)^{\frac{1}{\mathcal{Q}p}}
\left(\mathcal{Q}^{\frac1{\mathcal{Q}}}\right)^{\frac{p-1}p}
\left(\|u\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p} +
\|v\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p}\right)^{\frac{1}{\mathcal{Q}p}}.
\end{align*}
Now, taking the following sequence
\[
\mathcal{Q}_0=1 \quad\text{ and }\quad \mathcal{Q}_{n+1}=\mathcal{Q}_n\dfrac{N}{N-rp}
\]
we have
\begin{align*}
&\left(\|u\|_{L^{\mathcal{Q}_{n+1}p}(\Omega)}^{\mathcal{Q}_{n}p}
+\|v\|_{L^{\mathcal{Q}_{n+1}p}(\Omega)}^{\mathcal{Q}_{n}p}
\right)^{\frac1{\mathcal{Q}_{n}p}}\\
& \le \left(\dfrac{2\lambda_{1,p}}{C(N,r,s,p,\Omega)}\right)^{\frac{1}{\mathcal{Q}_{n}p}}
\left(\mathcal{Q}_n^{\frac1{\mathcal{Q}_n}}\right)^{\frac{p-1}p}
\left(\|u\|_{L^{\mathcal{Q}_np}(\Omega)}^{\mathcal{Q}_np} +
\|v\|_{L^{\mathcal{Q}_np}(\Omega)}^{\mathcal{Q}_np}
\right)^{\frac{1}{\mathcal{Q}_np}}
\end{align*}
for all $n\in\mathbb{N}.$ Moreover, since $\nicefrac{N}{N-rp}>1,$ the sequence $\{\mathcal{Q}_n\}$ is increasing, and the map $t\mapsto(a^t+b^t)^{\nicefrac1t}$ is non-increasing for $a,b\ge0;$ hence
\begin{align*}
&\left(\|u\|_{L^{\mathcal{Q}_{n+1}p}(\Omega)}^{\mathcal{Q}_{n}p}
+\|v\|_{L^{\mathcal{Q}_{n+1}p}(\Omega)}^{\mathcal{Q}_{n}p}
\right)^{\frac1{\mathcal{Q}_{n}p}}\\
& \le \left(\dfrac{2\lambda_{1,p}}{C(N,r,s,p,\Omega)}\right)^{\frac{1}{\mathcal{Q}_{n}p}}
\left(\mathcal{Q}_n^{\frac1{\mathcal{Q}_n}}\right)^{\frac{p-1}p}
\left(\|u\|_{L^{\mathcal{Q}_np}(\Omega)}^{\mathcal{Q}_{n-1}p} +
\|v\|_{L^{\mathcal{Q}_np}(\Omega)}^{\mathcal{Q}_{n-1}p}
\right)^{\frac{1}{\mathcal{Q}_{n-1}p}}
\end{align*}
for all $n\ge2.$
Then, iterating the last inequality, we get
\begin{equation}\label{eq:labestia}
\begin{aligned}
&\left(\|u\|_{L^{\mathcal{Q}_{n+1}p}(\Omega)}^{\mathcal{Q}_{n}p}
+\|v\|_{L^{\mathcal{Q}_{n+1}p}(\Omega)}^{\mathcal{Q}_{n}p}
\right)^{\frac1{\mathcal{Q}_{n}p}}\\
&\le \left(\dfrac{2\lambda_{1,p}}{C(N,r,s,p,\Omega)}\right)^{\frac1p\sum_{i=0}^{n}\frac{1}{\mathcal{Q}_i}}
\left(\prod_{i=0}^{n}\mathcal{Q}_i^{\frac{1}{\mathcal{Q}_i}}\right)^{\frac{p-1}p}
\left(\|u\|_{L^{p}(\Omega)}^p +\|v\|_{L^{p}(\Omega)}^p \right)^{\frac1p}
\end{aligned}
\end{equation}
for all $n\ge2.$
Observe that $\mathcal{Q}_n\to\infty$ as $n\to\infty$ due to the fact that $\nicefrac{N}{N-rp}>1.$ Moreover,
\[
\sum_{i=0}^{\infty}\frac{1}{\mathcal{Q}_i}=\dfrac{N}{rp}
\quad\mbox{ and }\quad
\prod_{i=0}^{\infty}\mathcal{Q}_i^{\frac{1}{\mathcal{Q}_i}}
=\left(\frac{N}{N-rp}\right)^{\frac{N(N-rp)}{(rp)^2}}.
\]
Hence, passing to the limit in \eqref{eq:labestia}, we deduce
\begin{align*}
&\max\{\|u\|_{L^{\infty}(\Omega)},\|v\|_{L^{\infty}(\Omega)}\}\\
&\le
\left(\dfrac{2\lambda_{1,p}}{C(N,r,s,p,\Omega)}\right)^{\frac{N}{rp^2}}
\left(\frac{N}{N-rp}\right)^{\frac{N(N-rp)}{(rp)^2}\frac{p-1}p}
\left(\|u\|_{L^{p}(\Omega)}^p +\|v\|_{L^{p}(\Omega)}^p \right)^{\frac1p},
\end{align*}
that is $u,v\in L^\infty(\Omega).$
\noindent{\bf Case 2:} $r=\nicefrac{N}p.$ In this case $\mathcal{W}^{(r,s)}_p(\Omega)\hookrightarrow
L^m(\Omega)\times L^m(\Omega)$ for all $m>1,$ and therefore
\begin{align*}
\left(\int_{\Omega} u_M^{\frac{q+p-1}{p}2p} dx\right)^{\frac{1}{2}}
& \le C(N,r,p,\Omega)\int_{\mathbb{R}^{2N}}\dfrac{|u_M^{\frac{q+p-1}{p}}(x)
-u_M^{\frac{q+p-1}{p}}(y)|^{p}}{|x-y|^{N+rp}}dxdy,\\
\left(\int_{\Omega} v_M^{\frac{q+p-1}{p}2p} dx\right)^{\frac{1}{2}}
&\le C(N,r,s,p,\Omega)\int_{\mathbb{R}^{2N}}
\dfrac{|v_M^{\frac{q+p-1}{p}}(x)-v_M^{\frac{q+p-1}{p}}(y)|^{p}}{|x-y|^{N+rp}}dxdy.
\end{align*}
Applying the previous reasoning, we get
\begin{align*}
&\left(\|u\|_{L^{2\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p}
+\|v\|_{L^{2\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p}\right)^{\frac1{\mathcal{Q}p}}\\
& \le \left(\dfrac{2\lambda_{1,p}}{C(N,r,s,p,\Omega)}\right)^{\frac{1}{\mathcal{Q}p}}
\left(\mathcal{Q}^{\frac1{\mathcal{Q}}}\right)^{\frac{p-1}p}
\left(\|u\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p} +
\|v\|_{L^{\mathcal{Q}p}(\Omega)}^{\mathcal{Q}p}\right)^{\frac{1}{\mathcal{Q}p}}.
\end{align*}
Now, taking the following sequence
\[
\mathcal{Q}_0=1 \quad\text{ and }\quad \mathcal{Q}_{n+1}=2\mathcal{Q}_n,
\]
the proof follows as in the previous case.
\end{proof}
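The series and infinite product of the iteration exponents $\mathcal{Q}_n$ in Case 1 admit a quick numerical spot-check; the sample values $N=2,$ $r=\nicefrac12,$ $p=2$ below are illustrative, and the expected closed-form values stated in the comments were computed independently for this sample.

```python
import math

# Iteration exponents: Q_0 = 1, Q_{n+1} = Q_n * N / (N - rp).
# For N = 2, r = 1/2, p = 2 the ratio N/(N - rp) equals 2, so we expect
#   sum 1/Q_i        = N/(rp)                        = 2,
#   prod Q_i^(1/Q_i) = (N/(N-rp))^(N(N-rp)/(rp)^2)   = 4.
N, r, p = 2.0, 0.5, 2.0
ratio = N / (N - r * p)                 # = 2

Q = [ratio ** n for n in range(200)]    # partial sequence; converges fast

series = sum(1.0 / q for q in Q)
product = math.exp(sum(math.log(q) / q for q in Q))

print(series, product)
```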
To show that $\lambda_{1,p}$ is simple, we first prove that $\lambda_{1,p}$
is the unique eigenvalue admitting an eigen-pair whose components have constant sign.
\begin{teo}\label{teo:autoval1}
Let $(u,v)$ be an eigen-pair associated to
$\lambda_{1,p}$ such that $u,v\ge0$ in $\Omega.$ If $\lambda>0$ is such that
there is an eigen-pair $(w,z)$ associated to $\lambda$ with
$w,z>0,$ then $\lambda=\lambda_{1,p}$ and
there exist $k_1,k_2\in\mathbb{R}$ such that $w = k_1 u$ and $z=k_2v$ a.e. in $\mathbb{R}^N.$
\end{teo}
\begin{proof}
Since $\lambda_{1,p}$ is the first eigenvalue we have
that $\lambda_{1,p}\le\lambda$. Moreover, by \cite[Theorem A.1]{Brasco1},
$u,v>0$ a.e. in $\Omega$ since $(u,v)$ is an eigen-pair associated to $\lambda_{1,p}$
and $u,v\ge0.$
Let $k\in\mathbb{N}$ and define $w_k\coloneqq w+\nicefrac1{k},$
and $z_k\coloneqq z+\nicefrac1{k}.$ We begin by proving that $u^{p} / w_k^{p-1}\in
\widetilde{\mathcal{W}}^{r,p}(\Omega).$
It is immediate that $u^{p} / w_k^{p-1}=0$ in $\Omega^c$
and $u^{p} / w_k^{p-1}\in L^{p}(\Omega),$ due to the fact that
$w_k\ge\nicefrac1k$ and $u\in L^{\infty}(\Omega),$ see
Lemma \ref{lema.cota}.
On the other hand, for any $x,y\in\mathbb{R}^N$
\begin{align*}
\Biggl|\frac{u^p}{w_k^{p-1}}(x)&-\frac{u^p}{w_k^{p-1}}(y)\Biggr|
= \Bigg|
\dfrac{u(x)^{p}-u(y)^p}{w_k(x)^{p-1}}
+\dfrac{u(y)^p\left(w_k(y)^{p-1}-w_k(x)^{p-1}\right)}
{w_k(x)^{p-1}w_k(y)^{p-1}}\Bigg|\\
\le& k^{p-1}\left|u(x)^{p}-u(y)^p\right|
+\|u\|_{L^{\infty}(\Omega)}^p
\dfrac{\left|w_k(x)^{p-1}-w_k(y)^{p-1}\right|}
{w_k(x)^{p-1}w_k(y)^{p-1}}\\
\le&2\|u\|_{L^{\infty}(\Omega)}^{p-1}k^{p-1}p
|u(x)-u(y)|\\
&+\|u\|_{L^{\infty}(\Omega)}^p(p-1)
\dfrac{w_k(x)^{p-2}+w_k(y)^{p-2}}
{w_k(x)^{p-1}w_k(y)^{p-1}}|w_k(x)-w_k(y)|\\
\le& 2\|u\|_{L^{\infty}(\Omega)}^{p-1}k^{p-1}p|u(x)-u(y)|\\
&+\|u\|_{L^{\infty}(\Omega)}^p(p-1)k^{p-1}
\left(\dfrac1{w_k(x)}+\dfrac1{w_k(y)}\right)|w(y)-w(x)|\\
\le& C(k,p,\|u\|_{L^{\infty}(\Omega)})
\left(|u(x)-u(y)|+|w(x)-w(y)|\right).
\end{align*}
Hence, we have that
$u^p/w_{k}^{p-1}\in \widetilde{\mathcal{W}}^{r,p}(\Omega)$
for all $k\in\mathbb{N}$ since $u,w\in
\widetilde{\mathcal{W}}^{r,p}(\Omega).$
Analogously $v^p/z_{k}^{p-1}\in \widetilde{\mathcal{W}}^{s,p}(\Omega).$
Set
\[
L(\varphi,\psi)(x,y)=
|\varphi(x)-\varphi(y)|^p -|\psi(x)-\psi(y)|^{p-2}(\psi(x)-\psi(y))
\left(\dfrac{\varphi(x)^p}{\psi(x)^{p-1}}
-\dfrac{\varphi(y)^p}{\psi(y)^{p-1}}\right)
\]
for all functions $\varphi\ge0$ and $\psi>0.$
By \cite[Lemma 6.2]{Am}, for any $\varphi\ge0$ and $\psi>0$
\[
L(\varphi,\psi)(x,y)\ge 0 \quad \forall(x,y)\in\mathbb{R}^{2N}.
\]
Then,
\begin{align*}
0&\le \int_{\Omega^2} \dfrac{L(u,w_k)(x,y)}{{|x-y|^{N+rp}}} dxdy +
\int_{\Omega^2} \dfrac{L(v,z_k)(x,y)}{|x-y|^{N+sp}} dxdy\\
&\le \int_{\mathbb{R}^{2N}} \dfrac{L(u,w_k)(x,y)}{{|x-y|^{N+rp}}} dxdy +
\int_{\mathbb{R}^{2N}} \dfrac{L(v,z_k)(x,y)}{|x-y|^{N+sp}} dxdy\\
&=\lambda_{1,p}\int_{\Omega} |u|^{\alpha}|v|^{\beta} \, dx
-\lambda\dfrac{\alpha}{p}\int_{\Omega} w^{\alpha-1}z^{\beta} \dfrac{u^{p}}{w_k^{p-1}} dx
-\lambda\dfrac{\beta}{p}\int_{\Omega} w^{\alpha}z^{\beta-1} \dfrac{v^{p}}{z_k^{p-1}} dx
\end{align*}
for all $k\in\mathbb{N},$
since $(u,v),(w,z)$ are eigen-pairs associated to $\lambda_{1,p}$ and $\lambda,$ respectively.
On the other hand, by Young's inequality,
\[
\int_{\Omega} w^{\alpha}z^{\beta}\dfrac{u^{\alpha}v^{\beta}}{w_k^{\alpha}z_k^{\beta}}
dx \le \dfrac{\alpha}{p}\int_{\Omega} w^{\alpha-1}z^{\beta} \dfrac{u^{p}}{w_k^{p-1}} dx
+ \dfrac{\beta}{p}\int_{\Omega} w^{\alpha}z^{\beta-1} \dfrac{v^{p}}{z_k^{p-1}} dx
\]
for all $k\in\mathbb{N}.$ Then
\begin{align*}
0&\le \int_{\Omega^2} \dfrac{L(u,w_k)(x,y)}{{|x-y|^{N+rp}}} dxdy +
\int_{\Omega^2} \dfrac{L(v,z_k)(x,y)}{|x-y|^{N+sp}} dxdy\\
&\le\lambda_{1,p}\int_{\Omega} |u|^{\alpha}|v|^{\beta} \, dx
-\lambda \int_{\Omega} w^{\alpha}z^{\beta}\dfrac{u^{\alpha}v^{\beta}}{w_k^{\alpha}z_k^{\beta}}dx.
\end{align*}
By Fatou's lemma and the dominated convergence theorem we obtain
\[
0\le \int_{\Omega^2} \dfrac{L(u,w)(x,y)}{{|x-y|^{N+rp}}} dxdy +
\int_{\Omega^2} \dfrac{L(v,z)(x,y)}{|x-y|^{N+sp}} dxdy
\le (\lambda_{1,p}-\lambda)\int_{\Omega} |u|^{\alpha}|v|^{\beta} \, dx.
\]
Then $\lambda=\lambda_{1,p}$ and $L(u,w)=0$ and $L(v,z)=0$ a.e. in $\Omega.$
Finally, again by \cite[Lemma 6.2]{Am}, there exist $k_1,k_2\in\mathbb{R}$ such that $w = k_1 u$
and $z=k_2v$ a.e. in $\mathbb{R}^N.$
\end{proof}
Now, we show that $\lambda_{1,p}$ is simple.
\begin{co}
Let $(u_1,v_1)$ be an eigen-pair associated to
$\lambda_{1,p}$ normalized according to $|(u_1,v_1)|_{\alpha,\beta}=1.$
If $(u,v)$ is an eigen-pair associated to
$\lambda_{1,p}$ then there is a constant $k$ such that
$(u,v)=k(u_1,v_1).$
\end{co}
\begin{proof}
By Theorem \ref{teo:autoval1}, there exist $k_1$ and $k_2$ such that
$u=k_1u_1$ and $v=k_2v_1.$ Without loss of generality, we can assume that $k_1\le k_2.$
Then, since $(u_1,v_1)$ and $(u,v)$ are eigen-pairs
associated to the first eigenvalue $\lambda_{1,p}$ and $|(u_1,v_1)|_{\alpha,\beta}=1,$ we get
\[
\left(\left(\dfrac{k_1}{k_2}\right)^{\beta}-1\right)
[u_1]^p_{r,p}+
\left(\left(\dfrac{k_2}{k_1}\right)^{\alpha}-1\right)
[v_1]^p_{s,p}=0.
\]
Taking $x=k_1/k_2,$ $a=[u_1]^p_{r,p}$ and $b=[v_1]^p_{s,p},$ we get
\[
a(x^{\beta}-1)+b\dfrac{1-x^\alpha}{x^{\alpha}}=0.
\]
Multiplying by $x^{\alpha}$ and by using that $\alpha+\beta=p,$ we obtain
\[
ax^{p}-(a+b)x^{\alpha}+b=0.
\]
To end the proof, we only need to show that 1 is the unique zero of the function
\[
f\colon[0,1]\to\mathbb{R}, \quad f(x)=ax^{p}-(a+b)x^{\alpha}+b.
\]
Observe that, for any $x\in(0,1)$ we have
\[
f^\prime(x)=pa x^{\alpha-1}\left(x^{p-\alpha}-\frac{a+b}{a}\dfrac{\alpha}{p}\right)
=pax^{\alpha-1}\left(x^{\beta}-\frac{a+b}{a}\dfrac{\alpha}{p}\right).
\]
On the other hand, since $(u_1,v_1)$ is an eigen-pair
associated to $\lambda_{1,p}$ such that $|(u_1,v_1)|_{\alpha,\beta}=1,$ we have
\[
a+b=\lambda_{1,p} \quad\text{ and }\quad a=\dfrac{\alpha}p\lambda_{1,p},
\]
then
\[
\dfrac{a+b}{a}=\dfrac{p}{\alpha},
\]
that is
\[
\dfrac{a+b}{a}\dfrac{\alpha}{p}=1.
\]
Hence
\[
f^\prime(x)<0 \quad\forall x\in(0,1),
\]
that is, $f$ is decreasing. Therefore $x=1$ is the unique zero of $f.$
\end{proof}
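The monotonicity argument for $f(x)=ax^{p}-(a+b)x^{\alpha}+b$ can be verified numerically. The values of $p,$ $\alpha,$ and the stand-in for $\lambda_{1,p}$ below are illustrative placeholders; the check only uses the structural relations $\beta=p-\alpha,$ $a=(\alpha/p)\lambda,$ $b=(\beta/p)\lambda$ from the proof.

```python
import numpy as np

# f(x) = a x^p - (a+b) x^alpha + b should have x = 1 as its only zero
# in (0, 1] and be strictly decreasing there, given a = (alpha/p) lambda
# and b = (beta/p) lambda.  Sample parameter values are illustrative.
p, alpha = 3.0, 1.2
beta = p - alpha
lam = 5.0                          # stands in for lambda_{1,p}
a, b = alpha / p * lam, beta / p * lam

f = lambda x: a * x**p - (a + b) * x**alpha + b

x = np.linspace(1e-6, 1.0, 100001)
vals = f(x)

print(abs(f(1.0)) < 1e-12)               # True: x = 1 is a zero
print(bool(np.all(np.diff(vals) < 0)))   # True: f strictly decreasing
```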
Recall that we made the assumption:
\[
\min\{\alpha,\beta\}\ge1.
\]
Now, if $(u,v)$ is an eigen-pair associated to $\lambda_{1,p}$ then
$$|u|^{\alpha-2}u|v|^\beta,|u|^{\alpha}|v|^{\beta-2}v\in L^\infty(\Omega)$$
due to Lemma \ref{lema.cota}.
Thus, by \cite[Theorem 1.1]{regularidad}, we have the following result.
\begin{lem} \label{lema:regularida}
If $(u,v)$ is an eigen-pair associated to $\lambda_{1,p},$
then there exist $\gamma_1=\gamma_1(N,p,r)\in(0,r]$ and
$\gamma_2=\gamma_2(N,p,s)\in(0,s]$ such that
$(u,v)\in C^{\gamma_1}(\overline{\Omega})\times
C^{\gamma_2}(\overline{\Omega}).$
\end{lem}
Thus, by Lemma \ref{lema:regularida} and Theorem \ref{teo:debilvisco},
we have that
\begin{co} \label{cor:autovisco}
If $(u,v)$ is an eigen-pair associated to $\lambda_{1,p}$
then $u$ is a viscosity solution of
\[
\begin{cases}
(-\Delta_p)^r u= \lambda_{1,p}\dfrac{\alpha}{p} |u|^{\alpha-2}u|v|^{\beta} &\text{ in } \Omega,\\
u=0&\text{ in } \mathbb{R}^N\setminus\Omega,
\end{cases}
\]
and $v$ is a viscosity solution of
\[
\begin{cases}
(-\Delta_p)^s v= \lambda_{1,p}\dfrac{\beta}{p} |u|^{\alpha}|v|^{\beta-2}v &\text{ in } \Omega,\\
v=0&\text{ in } \mathbb{R}^N\setminus\Omega,
\end{cases}
\]
\end{co}
Therefore, by Corollary \ref{cor:autovisco} and Lemma \ref{lema:viscopositivo}, we get
\begin{co}\label{co:autopositivo*}
If $(u,v)$ is an eigen-pair corresponding to the first eigenvalue $\lambda_{1,p}$,
then $|u|,|v|>0$ in $\Omega$.
\end{co}
Finally, we show that there is a sequence of eigenvalues.
\begin{lem}\label{lem:sucesion}
There is a sequence of eigenvalues $\lambda_n$ such that
$\lambda_n\to\infty$ as $n\to\infty$.
\end{lem}
\begin{proof}
We follow ideas from \cite{GAP} and hence we omit the details.
Let us consider
\[
M_\tau = \{(u,v) \in \mathcal{W}^{(r,s)}_p(\Omega)\colon
[u]_{r,p}^p+[v]_{s,p}^p= p \tau \}
\]
and
\[
\varphi (u,v) = \frac{1}{p}
\int_{\Omega} |u|^\alpha|v|^\beta dx.
\]
We are looking for critical points
of $\varphi$ restricted to the manifold $M_\tau$ using a minimax technique.
We consider the class
\[
\Sigma = \{A\subset \mathcal{W}^{(r,s)}_p(\Omega)\setminus\{0\}
\colon A \mbox{ is closed, } A=-A\}.
\]
Over this class we define the genus,
$\gamma\colon\Sigma\to {\mathbb{N}}\cup\{\infty\}$, as
\[
\gamma(A) = \min\{k\in {\mathbb{N}}\colon
\mbox{there exists } \phi\in C(A,{\mathbb{R}}^k\setminus\{0\}),
\ \phi(x)=-\phi(-x)\}.
\]
Now, we let $C_k = \{ C \subset M_\tau \colon C
\mbox{ is compact, symmetric and } \gamma ( C) \le k \} $
and let
\begin{equation}
\label{betak} \beta_k = \sup_{C \in C_k} \min_{(u,v) \in C} \varphi(u,v).
\end{equation}
Then $\beta_k >0$ and there is $(u_k,v_k) \in
M_\tau$ such that $\varphi (u_k,v_k) = \beta_k$ and $(u_k,v_k)$ is a weak
eigen-pair with $\lambda_k = \nicefrac{\tau}{\beta_k}.$
\end{proof}
\section{The limit as $p\to \infty$} \label{sec-p-infty}
From now on, we assume that \eqref{eq:alfabeta} and \eqref{lim.Gamma} hold.
Recall that we defined $\Lambda_{1,\infty} $ by
$$
\Lambda_{1,\infty}
= \inf \left\{
\frac{\max \{ [u]_{r,\infty} ; [v]_{s,\infty} \} }{ \| |u|^{\Gamma} |v|^{1-\Gamma} \|_{L^\infty (\Omega)} }
\colon (u,v)\in \mathcal{W}^{(r,s)}_\infty(\Omega)\right\}.
$$
First, we show the geometric characterization of
$\Lambda_{1,\infty}.$ Then, we prove that there exists a sequence of
eigen-pairs $(u_p,v_p)$ associated to $\lambda_{1,p}$ such that
$(u_p,v_p)\to(u_\infty,v_\infty)$ as $p\to \infty$ and $(u_\infty,v_\infty)$ is a minimizer for
$\Lambda_{1,\infty}.$ Finally we will show that
$(u_\infty,v_\infty)$ is a viscosity solution of \eqref{eq:limite}.
\medskip
\subsection{Geometric characterization} Observe that, by the Arzel\`a--Ascoli theorem, there exists a minimizer
for $\Lambda_{1,\infty}.$ Moreover, if $(u,v)$ is a minimizer for $\Lambda_{1,\infty}$ then
so is $(|u|,|v|).$ Now, we show the geometric characterization of
$\Lambda_{1,\infty}.$
\begin{lem} \label{lema:caracgeom}
The following equality holds
$$
\Lambda_{1,\infty} = \left[ \frac{1}{R(\Omega)} \right]^{ (1-\Gamma) s + \Gamma r }.
$$
\end{lem}
\begin{proof}
Let us take $(u,v)$ a minimizer for $\Lambda_{1,\infty}$
with $u,v\ge0$ normalized according to $\| u^{\Gamma} v^{1-\Gamma} \|_{L^\infty (\Omega)}=1.$
Therefore, there is a point $x_0 \in \Omega$ such that
$$
u^{\Gamma} (x_0) v^{1-\Gamma} (x_0) =1.
$$
Let us call
$$
a= u (x_0)\qquad \mbox{and} \qquad b= v (x_0).
$$
Then, since $u,v=0$ in $\Omega^c$,
$$
[u]_{r,\infty} = \sup_{x,y\in \overline{\Omega}} \frac{| u(y) - u(x)|}{|x-y|^{r}} \geq
\frac{a}{[\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial
\Omega)]^r}
$$
and
$$
[v]_{s,\infty} = \sup_{x,y\in\overline{\Omega}}
\frac{| v(y) - v(x)|}{|x-y|^{s}} \geq \frac{b}{[\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^s}.
$$
Therefore, we are left with
$$
\Lambda_{1,\infty} \geq
\inf_{ (a,b,x_0)\in \mathcal{A}} \left\{
\max\left\{ \frac{a}{[\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^r} ; \frac{b}{[\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^s}
\right\}
\right\},
$$
where
\[
\mathcal{A}\coloneqq\{(a,b,x_0)\in(0,\infty)\times(0,\infty)\times\overline{\Omega}
\colon a^{\Gamma} b^{1-\Gamma} =1\} .
\]
To compute the infimum we observe that we must have
$$
\frac{a}{[\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^r}
= \frac{b}{[\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^s}
$$
that is,
$$
a= b [\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^{r-s}.
$$
Then, using $a^{\Gamma} b^{1-\Gamma} =1$, we obtain
$$
b [\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^{\Gamma (r-s)} =1.
$$
Hence
$$
b = [\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^{\Gamma (s-r)}
$$
and
$$
a= [\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^{(r-s) (1-\Gamma)}.
$$
Therefore, we are left with
$$
\inf_{x_0} [\mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega)]^{- [ (1-\Gamma) s + \Gamma r ]},
$$
which is attained at a point $x_0\in \Omega$
that maximizes the distance to the boundary. That is, letting
$$
R(\Omega) = \mathop{\mbox{\normalfont dist}}\nolimits (x_0, \partial \Omega),
$$
we obtain that
$$
\Lambda_{1,\infty} \geq \left[ \frac{1}{R(\Omega)} \right]^{ (1-\Gamma) s + \Gamma r }.
$$
To end the proof, we need to show the reverse inequality. As before, let $x_0\in \Omega$
be a point at which the maximum distance to the boundary is attained. Set
\begin{align*}
u_0(x)&=R(\Omega)^{(r-s)(1-\Gamma)}\left(1-\dfrac{|x-x_0|}{R(\Omega)}\right)_+^{r},\\
v_0(x)&=R(\Omega)^{-(r-s)\Gamma}\left(1-\dfrac{|x-x_0|}{R(\Omega)}\right)_+^{s}.
\end{align*}
We can observe that $(u_0,v_0)\in C^r(\mathbb{R}^N)\times C^s(\mathbb{R}^N),$ $\|u^{\Gamma}_0
v^{1-\Gamma}_0 \|_{L^{\infty}(\Omega)}=1$
and
\[
\max\{[u_0 ]_{r,\infty};[v_0 ]_{s,\infty}\}\le \left[ \frac{1}{R(\Omega)} \right]^{ (1-\Gamma) s + \Gamma r }.
\]
Therefore
$$
\Lambda_{1,\infty} = \inf \left\{
\frac{\max \{ [u]_{r,\infty} ; [v]_{s,\infty} \} }{ \| |u|^{\Gamma} |v|^{1-\Gamma} \|_{L^\infty (\Omega)} }
\colon (u,v)\in \mathcal{W}^{(r,s)}_\infty(\Omega)\right\} \leq
\left[ \frac{1}{R(\Omega)} \right]^{ (1-\Gamma) s + \Gamma r }.
$$
\end{proof}
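The competitor pair $(u_0,v_0)$ can be tested numerically in dimension $N=1$: on $\Omega=(0,1)$ the inradius $R(\Omega)=\nicefrac12$ is attained at $x_0=\nicefrac12$. The values of $r,$ $s,$ $\Gamma$ below are illustrative samples, and the H\"older seminorms are approximated by a discrete supremum (which never exceeds the true one).

```python
import numpy as np

# Spot-check of (u_0, v_0) in N = 1: Omega = (0,1), R = 1/2, x_0 = 1/2.
# We expect || u_0^Gamma v_0^(1-Gamma) ||_inf = 1 and both Hoelder
# seminorms bounded by (1/R)^((1-Gamma)s + Gamma r).
r, s, Gamma = 0.3, 0.7, 0.4          # illustrative parameters
x0, R = 0.5, 0.5

x = np.linspace(0.0, 1.0, 20001)
cone = np.maximum(1.0 - np.abs(x - x0) / R, 0.0)
u0 = R ** ((r - s) * (1 - Gamma)) * cone ** r
v0 = R ** (-(r - s) * Gamma) * cone ** s

sup_mix = (u0 ** Gamma * v0 ** (1 - Gamma)).max()
target = (1.0 / R) ** ((1 - Gamma) * s + Gamma * r)

xc, uc, vc = x[::100], u0[::100], v0[::100]      # coarse grid for the sup
X, Y = np.meshgrid(xc, xc)
D = np.abs(X - Y)
off = D > 0
U1, U2 = np.meshgrid(uc, uc)
V1, V2 = np.meshgrid(vc, vc)
semi_u = (np.abs(U1 - U2)[off] / D[off] ** r).max()
semi_v = (np.abs(V1 - V2)[off] / D[off] ** s).max()

print(sup_mix)                                        # expect 1.0
print(semi_u <= target + 1e-9, semi_v <= target + 1e-9)  # expect True True
```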
\begin{re} Observe that $(u_0,v_0)$ is a minimizer of $\Lambda_{1,\infty}.$
\end{re}
\subsection{Convergence} Now, we prove that there exists a sequence of
eigen-pairs $(u_p,v_p)$ associated to $\lambda_{1,p}$ such that
$(u_p,v_p)\to(u,v)$ as $p\to \infty$ and $(u,v)$ is a minimizer for
$\Lambda_{1,\infty}.$
\begin{lem} \label{lema.conv.unif.autov}
Let $(u_{p},v_{p})$ be an
eigen-pair for $\lambda_{1,p}$ such that $u_p$ and $v_p$ are positive and
$|(u_p,v_p)|_{\alpha,\beta}=1$. Then, there exists a sequence
$p_j \to \infty$ such that
\[
(u_{p_j},v_{p_j}) \to (u_\infty,v_\infty)
\]
uniformly in ${\mathbb{R}}^N$. The limit $(u_\infty,v_\infty)$ belongs to the space
$\mathcal{W}^{(r,s)}_\infty(\Omega)$ and is a minimizer of $\Lambda_{1,\infty}.$
In addition, it holds that
$$
[\lambda_{1,p}]^{\nicefrac{1}{p}} \to \Lambda_{1,\infty}.
$$
\end{lem}
\begin{proof}
We start by showing that
\begin{equation}\label{eq:limt1}
\limsup_{p\to\infty}[\lambda_{1,p}]^{
\nicefrac1p}\le \Lambda_{1,\infty}.
\end{equation}
Let $\gamma>1$ be such that $\gamma\max\{r,s\}<1,$ and let $(u_0,v_0)$ be the minimizer of $\Lambda_{1,\infty}$ constructed in the proof of Lemma \ref{lema:caracgeom}.
Then $(u_\gamma,v_\gamma)=(u_0^\gamma,v_0^\gamma)\in\mathcal{W}^{(r,s)}_p(\Omega)\cap\mathcal{W}^{(r,s)}_\infty (\Omega)$ for all $p>1.$
Thus
\[
[\lambda_{1,p}]^{\nicefrac{1}{p}}\le \dfrac{\left([u_\gamma]_{r,p}^p+[v_\gamma]_{s,p}^p\right)^{\nicefrac{1}{p}}}{|(u_\gamma,
v_\gamma)|_{\alpha,\beta}}
\]
for all $p>1.$ In addition, we observe that $\|u^{\Gamma}_\gamma v^{1-\Gamma}_\gamma \|_{L^{\infty}(\Omega)}=1.$ Then
\begin{align*}
\limsup_{p\to\infty}
[\lambda_{1,p}]^{\nicefrac{1}{p}}&\le
\max\left\{[u_\gamma]_{r,\infty};[v_\gamma]_{s,\infty}\right\}\\
&\le \max\left\{2^{r(\gamma-1)}R(\Omega)^{\gamma(r-s)(1-\Gamma)-r};
2^{s(\gamma-1)}R(\Omega)^{-\gamma(r-s)\Gamma-s}
\right\}.
\end{align*}
Therefore, passing to the limit as $\gamma\to 1$ in the previous inequality and using Lemma \ref{lema:caracgeom},
we get \eqref{eq:limt1}.
Our next step is to show that
\begin{equation}\label{eq:limt2}
\Lambda_{1,\infty} \le
\liminf_{p\to\infty}[\lambda_{1,p}]^{\nicefrac1{p}}.
\end{equation}
Let $p_j>1$ be such that
\begin{equation}\label{eq:inf}
\liminf_{p\to\infty}[\lambda_{1,p}]^{\nicefrac1{p}}
=\lim_{j\to\infty}[\lambda_{j}]^{\nicefrac1{p_j}},
\end{equation}
where $\lambda_j=\lambda_{1,p_j}.$ By \eqref{eq:limt1},
without loss of generality, we can assume
$$
2\max\{\nicefrac{N}r,\nicefrac{N}s\}< p_1,\quad p_j\le p_{j+1},\quad \text{and }
$$
\begin{equation}\label{eq:vie}
[\lambda_{j}]^{\nicefrac1{p_j}}
=\left([u_{j}]_{r,p_j}^{p_j}+[v_{j}]_{s,p_j}^{p_j}\right)^{\nicefrac{1}{p_j}}
\le \Lambda_{1,\infty} + \varepsilon \qquad \forall j\in\mathbb{N},
\end{equation}
where $\varepsilon$ is any positive number and $(u_j,v_j)$ is an eigen-pair corresponding to $\lambda_{j}$
normalized according to $|(u_{j},v_j)|_{\alpha_j,\beta_j}=1$ ($\alpha_j=\alpha_{p_j},$
$\beta_j=\beta_{p_j}$) and such that $u_j,v_j>0$ in $\Omega.$
Let $q\in(2\max\{\nicefrac{N}r,\nicefrac{N}s\}, p_1),$ $t_1=r-\nicefrac{N}{q}$ and $t_2=s-\nicefrac{N}{q}.$
It follows from \eqref{eq:vie} and Lemmas \ref{lem:poincare} and \ref{lem:inclusion} that
$\{u_j\}$ and $\{v_j\}$ are bounded in $W^{t_1,q}(\Omega)$ and $W^{t_2,q}(\Omega),$ respectively.
Since $q\min\{t_1,t_2\}\ge N,$ taking a subsequence if necessary, we get
\begin{align*}
u_j\to u_\infty & \text{ strongly in } C^{0,\gamma_1}(\overline{\Omega}),\\
v_j\to v_\infty & \text{ strongly in } C^{0,\gamma_2}(\overline{\Omega}),
\end{align*}
due to the compact Sobolev embedding theorem. Here $0<\gamma_1<t_1-\nicefrac{N}q=r-2\nicefrac{N}{q}$
and $0<\gamma_2<t_2-\nicefrac{N}q=s-2\nicefrac{N}{q}.$ Therefore $u_\infty=v_\infty=0$ on $\partial\Omega.$
On the other hand, by Lemma \ref{lem:inclusion},
\begin{align*}
|u_j|_{t_1,q}&\le
{\rm{diam}}(\Omega)^{\nicefrac{N}{p_j}}|\Omega|^{\nicefrac2q-\nicefrac2{p_j}}|u_j|_{r,p_j}
\le {\rm{diam}}(\Omega)^{\nicefrac{N}{p_j}}|\Omega|^{\nicefrac2q-\nicefrac2{p_j}}
[\lambda_{j}]^{\nicefrac1{p_j}},\\
|v_j|_{t_2,q}&\le
{\rm{diam}}(\Omega)^{\nicefrac{N}{p_j}}|\Omega|^{\nicefrac2q-\nicefrac2{p_j}}|v_j|_{s,p_j}
\le {\rm{diam}}(\Omega)^{\nicefrac{N}{p_j}}|\Omega|^{\nicefrac2q-\nicefrac2{p_j}}
[\lambda_{j}]^{\nicefrac1{p_j}}.
\end{align*}
Then passing to the limit as $j\to\infty$ and using Fatou's lemma, we get
$(u_\infty,v_\infty)\in W^{t_1,q}(\Omega)\times W^{t_2,q}(\Omega)$
and
\begin{align*}
|u_\infty|_{t_1,q}&\le |\Omega|^{\nicefrac2q} \liminf_{p\to\infty}[\lambda_{1,p}]^{\nicefrac1p},\\
|v_\infty|_{t_2,q}&\le |\Omega|^{\nicefrac2q}\liminf_{p\to\infty}[\lambda_{1,p}]^{\nicefrac1p}.
\end{align*}
Now passing to the limit as $q\to\infty$ we obtain
\begin{align*}
[u_\infty]_{r,\infty}\le \liminf_{p\to\infty}[\lambda_{1,p}]^{\nicefrac1p},\\
[v_\infty]_{s,\infty}\le \liminf_{p\to\infty}[\lambda_{1,p}]^{\nicefrac1p},
\end{align*}
that is $(u_\infty,v_\infty)\in\mathcal{W}^{(r,s)}_\infty(\Omega)$ and
\begin{equation}\label{eq:yaestamos}
\max\{[u_\infty]_{r,\infty};[v_\infty]_{s,\infty}\}\le \liminf_{p\to\infty}[\lambda_{1,p}]^{\nicefrac1p}.
\end{equation}
To end the proof we only need to show that $\|u_\infty^{\Gamma}v_\infty^{1-\Gamma}\|_{L^{\infty}(\Omega)}=1.$
For all $q>1$ there exists $j_0\in\mathbb{N}$ such that
$p_j>q$ if $j>j_0$ and therefore, by Fatou's Lemma and H\"older's inequality,
we get
\[
\|u^{\Gamma}_\infty v^{1-\Gamma}_\infty\|_{L^q(\Omega)}^q\le
\liminf_{j\to\infty}\int_{\Omega} u_j^{{\nicefrac{\alpha_j}{p_j}}q} v_j^{{\nicefrac{\beta_j}{p_j}}q}dx
\le \liminf_{j\to\infty} |\Omega|^{1-\frac{q}{p_j}}=1
\]
due to $|(u_j,v_j)|_{\alpha_j,\beta_j}=1.$
Then passing to the limit as $q\to\infty$ we have
\[
\|u_\infty^{\Gamma}v_\infty^{1-\Gamma}\|_{L^{\infty}(\Omega)}\le 1.
\]
On the other hand
\[
1=|(u_j,v_j)|_{\alpha_j,\beta_j}^{\nicefrac{1}{p_j}}\le
\|u_j^{\nicefrac{\alpha_j}{p_j}}v_j^{\nicefrac{\beta_j}{p_j}}\|_{L^{\infty}(\Omega)}|\Omega|^{\nicefrac1{p_j}}\to
\|u_\infty^{\Gamma}v_\infty^{1-\Gamma}\|_{L^{\infty}(\Omega)}.
\]
Therefore $\|u_\infty^{\Gamma}v_\infty^{1-\Gamma}\|_{L^{\infty}(\Omega)}=1.$
\end{proof}
\subsection{Viscosity Solution}
Finally we will show that $(u_\infty,v_\infty)$ is a viscosity solution of
\begin{equation}\label{eq:limite}
\begin{cases}
\min\left\{\mathcal{L}_{r,\infty}u(x);\mathcal{L}_{r,\infty}^+u(x)-\Lambda_{1,\infty} u^{\Gamma}(x) v^{1-\Gamma}(x)\right\}
=0
&\text{ in } \Omega,\\
\min\left\{\mathcal{L}_{s,\infty}v(x);\mathcal{L}_{s,\infty}^+v(x)-\Lambda_{1,\infty} u^{\Gamma}(x) v^{1-\Gamma}(x)\right\}
=0&\text{ in } \Omega,\\
u=v=0 &\text{ in } \mathbb{R}^N\setminus\Omega,
\end{cases}
\end{equation}
where
\[
\mathcal{L}_{t,\infty}w(x)=\mathcal{L}_{t,\infty}^+w(x)
+\mathcal{L}_{t,\infty}^-w(x)= \sup_{y\in\mathbb{R}^N}\dfrac{w(x)-w(y)}{|x-y|^{t}}
+\inf_{y\in\mathbb{R}^N}\dfrac{w(x)-w(y)}{|x-y|^{t}}.
\]
Let us introduce the precise definition of viscosity solution of \eqref{eq:limite}.
\medskip
\noindent{\bf Definition.} Let $(u,v) \in C(\mathbb{R}^N)\times C(\mathbb{R}^N)$ be such that $u,v\ge0$ in $\Omega$
and $u=v=0$ in $\Omega^c.$
We say that $(u,v)$ is a viscosity subsolution
of \eqref{eq:limite} at a point $x_0\in \Omega$ if and only if
for any test pair $(\varphi,\psi)\in C^2_0(\mathbb{R}^N)\times C^2_0(\mathbb{R}^N)$
such that $u(x_0)=\varphi(x_0),$ $v(x_0)=\psi(x_0),$
$u(x)\le\varphi(x)$ and $v(x)\le\psi(x)$ for all $x\in\mathbb{R}^N$ we have that
\begin{align*}
&\min\{\mathcal{L}_{r,\infty}\varphi(x_0); \mathcal{L}_{r,\infty}^+\varphi(x_0)
-\Lambda_{1,\infty}u^{\Gamma}(x_0)v^{1-\Gamma}(x_0)\} \le0,\\
&\min\{\mathcal{L}_{s,\infty}\psi(x_0); \mathcal{L}_{s,\infty}^+\psi(x_0)
-\Lambda_{1,\infty}u^{\Gamma}(x_0)v^{1-\Gamma}(x_0)\} \le0.
\end{align*}
We say that $(u,v)$ is a viscosity supersolution
of \eqref{eq:limite} at a point $x_0\in \Omega$ if and only if
for any test pair $(\varphi,\psi)\in C^2_0(\mathbb{R}^N)\times C^2_0(\mathbb{R}^N)$
such that $u(x_0)=\varphi(x_0),$ $v(x_0)=\psi(x_0),$
$u(x)\ge\varphi(x)$ and $v(x)\ge\psi(x)$ for all $x\in\mathbb{R}^N$ we have that
\begin{align*}
&\min\{\mathcal{L}_{r,\infty}\varphi(x_0); \mathcal{L}_{r,\infty}^+\varphi(x_0)
-\Lambda_{1,\infty}u^{\Gamma}(x_0)v^{1-\Gamma}(x_0)\} \ge0,\\
&\min\{\mathcal{L}_{s,\infty}\psi(x_0); \mathcal{L}_{s,\infty}^+\psi(x_0)
-\Lambda_{1,\infty}u^{\Gamma}(x_0)v^{1-\Gamma}(x_0)\} \ge0.
\end{align*}
Finally, $(u,v)$ is a viscosity solution
of \eqref{eq:limite} if it is both a viscosity super- and subsolution at every $x_0\in\Omega$.
\medskip
\begin{lem}
$(u_\infty,v_\infty)$ is a viscosity solution of \eqref{eq:limite}.
\end{lem}
\begin{proof}
The proof follows as in \cite[Section 8]{LL}; we include a sketch here for completeness.
Let us show that $u_\infty$ is a viscosity supersolution of the first
equation in \eqref{eq:limite} (the fact that it is a viscosity subsolution is similar).
Assume that $\varphi$ is a test function touching $u_\infty$ strictly from below at a point $x_0 \in \Omega$.
We have that $u_j -\varphi$ has a minimum at points $x_j \to x_0$. Since $u_j$ is a weak solution (and hence a viscosity solution) to the first equation in our system, we have the inequality
$$
- (-\Delta_{p_j})^r \varphi (x_j) + \lambda_{1,p_j}\dfrac{\alpha_j}{p_j} |\varphi|^{\alpha_j-2}\varphi |v|^{\beta_j} (x_j) \leq 0.
$$
Writing (as in \cite{LL})
$$
A_j^{p_j-1} = 2\int_{\mathbb{R}^N} \dfrac{|\varphi(x_j)-\varphi(y)|^{p_j-2}(\varphi(x_j)-\varphi(y))^+}{|x_j-y|^{N+rp_j}} \, dy, $$
$$
B_j^{p_j-1} = 2\int_{\mathbb{R}^N}\dfrac{|\varphi(x_j)-\varphi(y)|^{p_j-2}(\varphi(x_j)-\varphi(y))^-}{|x_j-y|^{N+rp_j}} \, dy $$
and
$$
C_j^{p_j-1} = \lambda_{1,p_j}\dfrac{\alpha_j}{p_j} |\varphi|^{\alpha_j-2}\varphi |v|^{\beta_j} (x_j)
$$
we get
$$
A_j^{p_j-1} + C_j^{p_j-1} \leq B_j^{p_j-1} .
$$
Using that
$$
A_j \to \mathcal{L}_{r,\infty}^+\varphi(x_0),
\qquad B_j \to - \mathcal{L}_{r,\infty}^-\varphi(x_0)
\qquad \mbox{and} \qquad C_j \to \Lambda_{1,\infty}u^{\Gamma}(x_0)v^{1-\Gamma}(x_0)
$$
we obtain
$$
\min\{\mathcal{L}_{r,\infty}\varphi(x_0); \mathcal{L}_{r,\infty}^+\varphi(x_0)
-\Lambda_{1,\infty}u^{\Gamma}(x_0)v^{1-\Gamma}(x_0)\} \leq 0.
$$
\end{proof}
\bibliographystyle{amsplain}
There is a considerable number of works dedicated to measurements of the complex permittivity of soils, certain minerals, and other similar materials \cite{klein_methods_1997, oh_factors_2007, wang_experimental_2016, datsios_characterization_2019, curtis_durable_2001, huynen_wideband_2001, pauli_versatile_2007, kupfer_tdr_2007, tong_complex_2014, piuzzi_measurement_2016, bore_broadband_2016, bore_large_2017, lewandowski_0053_2017, belyaeva_effect_2016, portela_earth_2006, visacro_frequency_2012}.
Most of these works, however, conduct measurements only with small samples \cite{klein_methods_1997, oh_factors_2007, wang_experimental_2016, datsios_characterization_2019, curtis_durable_2001, huynen_wideband_2001, pauli_versatile_2007, kupfer_tdr_2007, tong_complex_2014, piuzzi_measurement_2016, bore_broadband_2016, bore_large_2017, lewandowski_0053_2017, belyaeva_effect_2016}, which do not represent the large volumes of soil relevant to grounding, since most soils are inhomogeneous. Besides, a significant portion of the works does not cover the frequency range of lightning currents \cite{curtis_durable_2001, huynen_wideband_2001, pauli_versatile_2007, kupfer_tdr_2007, tong_complex_2014, piuzzi_measurement_2016, bore_broadband_2016, bore_large_2017, lewandowski_0053_2017}.
In summary, most of the existing studies address problems unrelated to grounding and lightning protection, and they cannot be utilized for \textit{in situ} measurements of the frequency-dependent soil properties over the frequency range from several kHz (or lower) to several MHz. The current work presents an easy-to-use measurement device that makes such measurements possible.
There are two approaches (known to the author) for the field measurements of the frequency-dependent soil properties: the first performs measurements with large soil samples \cite{portela_earth_2006}, and the second uses a hemispheric electrode \cite{visacro_frequency_2012}. The method of \cite{portela_earth_2006} requires a significant amount of time and work to prepare for measurements because it involves collecting relatively large samples from an appreciable depth. Additionally, if the soil is inhomogeneous (which is frequently the case), several soil samples should be collected. The method of \cite{visacro_frequency_2012} needs much less work but still requires some preparation (related to the hemispheric electrode and the remote earth). Besides, all methods that use at least one common electrode for the current and voltage can potentially lead to inaccurate results due to electrode polarization and the contact resistance between the electrode and the surrounding soil (if not used correctly).
A measurement device using electrode arrays can eliminate these drawbacks. The work \cite{kuklin_prototype_2019} presents a prototype of such a measurement device, and the work \cite{kuklin_measurements_2019} provides measurement results of a slightly improved version of the device. These preliminary works pointed out measurement inaccuracies and other problems of the device and suggested several improvements. The present paper describes a measurement device that takes into account the previously proposed improvements. The main differences from the previous preliminary device versions are the significantly improved measurement accuracy and added capability to measure low-frequency resistivity (needed for the calculation of the imaginary part of permittivity).
The article has the following structure: part II discusses the reasons behind the basic parameters of the device, part III presents a detailed description of the device (including the functional scheme and main algorithms), part IV presents several measurement results and a comparison with a calculation result. Additionally, the last part addresses important aspects during measurements.
Instead of the imaginary part of permittivity, this work will mostly use resistivity: these quantities are easily derived from each other when needed. The imaginary part of permittivity, however, is used for the calculations.
\section{Device Parameters and Previous Work}
There is a need to choose several basic parameters for the measurement device: generated waveform, amplitude, type of isolation between the current and voltage measurement circuits, and general design of the device. This part presents the reasoning behind the chosen parameters, describes other tested approaches, and suggests additional possible measurement strategies.
\subsection{Electrode Arrays}
The most critical decision made in the device design is that the voltage measurement circuit should be isolated from the generator, which allows using electrode arrays. Usage of electrode arrays is inherently more accurate than other methods (as it eliminates the influence of the contact resistance); it is also convenient (especially if no remote earth is needed) and well understood \cite{loke_recent_2013} due to longstanding applications in geophysics. However, measurements with electrode arrays are usually conducted at relatively low frequencies. At higher frequencies, certain factors should be taken into account to achieve accurate measurement results.
\subsection{Measurement Problems at High Frequencies}
Calculations have shown that for particular array configurations, electromagnetic (EM) coupling effects can lead to significant measurement errors \cite{kuklin_using_2018, kuklin_numerical_2019}. Later, these effects were also observed in measurements \cite{kuklin_measurements_2019}. Another important factor is sufficient isolation between the current and voltage measurement circuits. While the EM coupling can be eliminated by choosing appropriate array configurations, isolation between the circuits is a more challenging problem at high frequencies. Several different approaches were tested to achieve a proper level of isolation.
The usage of an oscilloscope with isolated channels has shown that the parasitic capacitance between the channels does not allow obtaining sufficient isolation.
Then, a measurement device was created, consisting of a generator and two separate blocks that measured current and voltage. Each of these blocks contained an analog-to-digital converter (ADC), and the blocks were isolated with optocouplers (used for the data transmission between the blocks). Due to difficulty in synchronizing two ADCs with accuracy better than one ADC clock cycle, measurement results were inaccurate. Besides, optocouplers still could not provide sufficient isolation.
After that, differential probes were used for the current and voltage measurements. With this approach, measurements of the electrical soil characteristics were successful; however, the probes still have some parasitic capacitance that can influence measurement results.
Finally, an optically isolated voltage probe made it possible to achieve complete isolation between the measurement circuits and to minimize the error related to the isolation.
\subsection{Main Parameters of the Measurement Device}
Apart from the chosen type of isolation between the measurement circuits, there is a need to make several important decisions concerning other device parameters.
Developing high-quality measurement probes with perfect characteristics over a wide frequency range would make the measurement device unreasonably expensive. However, usage of calibration makes it possible to achieve accurate measurement results while keeping the measurement probes inexpensive and easy to produce. Dividing the measurement process into two parts (calibration and the measurement itself) allows taking into account the imperfections of the amplitude and phase characteristics of the measurement circuits. Therefore, this approach was utilized for the measurement device.
Another critical parameter is the generated waveform. Among many possible waveforms, the sine wave seems to be one of the easiest to generate and control: it does not change its form depending on the load (due to the low output impedance of the generator) and needs only several microchips to generate a signal with the needed frequency and amplitude. Also, the sine wave allows performing measurements in the presence of noise: in the case of periodic signals, the fast Fourier transform (FFT) filters out unwanted noise efficiently, so even a small amplitude is enough for the generated signal; in many cases a 10 V amplitude (or less) is sufficient.
The electrical soil properties are calculated based on the measured complex admittance value according to the equation:
\begin{equation}\label{eq:admit}
\hat{Y}(\omega) = \frac{\hat{I}(\omega)}{\hat{V}(\omega)} = k\left(\frac{1}{\rho}+j\omega\epsilon'\right),
\end{equation}
where $k$ is the geometric factor that depends on a particular electrode array \cite{sumner_principles_1976}.
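As an illustration, the conversion from one measured voltage--current phasor pair to the soil properties in \eqref{eq:admit} can be sketched as follows (a minimal sketch; the function name is hypothetical, and the geometric factor $k$ is assumed known):

```python
def soil_properties(v_phasor, i_phasor, k, omega):
    """Recover resistivity and real permittivity from one complex
    admittance sample, per Y = k*(1/rho + j*omega*eps')."""
    y = i_phasor / v_phasor          # complex admittance
    rho = k / y.real                 # conductive part gives resistivity
    eps_real = y.imag / (k * omega)  # capacitive part gives permittivity
    return rho, eps_real
```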
In the case of the sine wave, it is possible to measure the admittance (and properties) in different ways: the measured signals can be either directly digitized by ADC or processed by analog circuits and digitized after that. As an example of the second approach, the phase difference and the amplitude ratio between the current and the voltage can be measured by the chip AD8302. However, this chip needs special techniques to measure phase difference correctly \cite{krok_low-cost_2006}. It is also not very clear how noise in the measured signal would influence measurement results. For the first approach, the noise can lead to errors only in particular cases (see below).
\section{Device Description}
This part provides a detailed description of the device: functional scheme, pseudocodes for the most critical parts of the microcontroller software, used electronic components, smartphone application review.
The measurement device consists of three main parts: a block for generation and measurement \textbf{1}, a voltage probe \textbf{2}, and a smartphone \textbf{3} (see Fig.~\ref{fig_func}). The block \textbf{1} is connected between the two current electrodes, and it is responsible for the main functions of the device: digitizing signals, calculation of soil properties, controlling the calibration and measurement. The voltage probe \textbf{2} is connected between the two voltage electrodes. The purpose of the probe is to amplify the measured voltage and convert it to the optical signal. Then the signal is transmitted from the probe \textbf{2} to block \textbf{1} through the optical fiber. The smartphone \textbf{3} is used to control the block \textbf{1} and to perform needed procedures with the measured data.
\begin{figure}
\includegraphics[width=3.3in]{scheme}
\caption{Functional scheme of the measurement device.}
\label{fig_func}
\end{figure}
Fig.~\ref{fig_photo} shows a picture of the device. The length of the rods in the figure is 25 cm. As the connection wires, regular copper wires with 1 mm$^2$ cross-sectional area were used (unless wires are much thinner, their resistance even at higher frequencies is very low compared to other parameters, such as grounding resistance of the rods or input impedance of the voltage probe). Several measurements with coaxial cables did not lead to improvements in accuracy (although the measurements were not extensive, so it is possible that in other circumstances, the coaxial cable could be preferable).
Housings for the block \textbf{1} and voltage probe \textbf{2} are plastic.
\begin{figure}
\includegraphics[width=3.4in]{photo}
\caption{View of the measurement device (smartphone is not shown). The captions correspond to those in Fig.~\ref{fig_func}.}
\label{fig_photo}
\end{figure}
\subsection{Block for Generation and Measurement}
The block for generation and measurement \textbf{1} consists of a power source \textbf{1.1}, sampling circuit \textbf{1.2}, microcontroller \textbf{1.3}, generator \textbf{1.4}, and analog processing circuit \textbf{1.5} (see Fig.~\ref{fig_func}).
The power source \textbf{1.1} consists of a lithium-ion polymer battery \textbf{1.1.1} and several voltage converters \textbf{1.1.2}, which form voltages +3.3 V, +5 V, -5 V, +15 V, and \mbox{-15}~V. The voltage converters use LTC3624 and TPS65131 chips. The battery charger uses the MCP73831 chip. Battery capacity is 2.1 ampere-hours.
The generator \textbf{1.4} consists of direct digital synthesis (DDS) chip AD9834 (\textbf{1.4.1}), voltage-controlled amplifier (VCA) LMH6503 (\textbf{1.4.2}), and operational amplifier LT1210 (\textbf{1.4.3}). VCA allows setting arbitrary amplitude for the generated signal (limited by power source and maximum acceptable supply voltage value for the amplifier \textbf{1.4.3}). The output A is connected to the output of the amplifier LT1210.
The analog processing circuit consists of current shunts \textbf{1.5.3}, receiver HFBR-2416 (\textbf{1.5.4}), and two operational amplifiers THS4503 (\textbf{1.5.1} for the current circuit and \textbf{1.5.2} for the voltage circuit). The current shunts are connected between the ground wire of the device and the output C. Mechanical relays select the proper shunt resistance depending on the current through it. The shunts are regular surface-mount device (SMD) resistors (with resistances 10 $\Omega$, 30 $\Omega$, and 100 $\Omega$). The shunt values were chosen based on the expected currents through them (which depend on the grounding resistance of the current electrodes) and on the appropriate amplitudes at the ADC input after amplification.
Fig.~\ref{fig_shunts} shows a slightly simplified electrical diagram of the block \textbf{1.5.3}. Two relays shown in the diagram allow connecting amplifier \textbf{1.5.1} and output C either to DDS \textbf{1.4.1} (during calibration) or one of the current shunts (during measurements).
\begin{figure}
\includegraphics[width=2.3in]{shunts}
\caption{Electrical diagram of the block \textbf{1.5.3}.}
\label{fig_shunts}
\end{figure}
The sampling circuit consists of two ADC chips ADS5520 (\textbf{1.2.2}) and two "first-in, first-out" (FIFO) memory chips CY7C4255 (\textbf{1.2.1}). In order to set a clock for ADC very accurately, programmable clock synthesizer CDCE913 is used. Two separate ADC chips allow avoiding crosstalk (which appeared in the previous version of the device \cite{kuklin_measurements_2019} due to the usage of one ADC with multiplexed channels).
The outputs B and C are used for the calibration, during which the sine wave from DDS \textbf{1.4.1} goes to these outputs and current amplifier \textbf{1.5.1}. Thus, to perform the calibration, these outputs should also be connected to the voltage probe (output B directly connects to D, and C directly connects to E). The amplifier \textbf{1.4.3} is turned off during calibration to avoid unwanted noise.
The device uses a microcontroller ESP32 (\textbf{1.3}): apart from the regular capabilities (like serial peripheral interface, inter-integrated circuit, general-purpose input/output, and others), it has integrated Bluetooth and Wi-Fi, which allow controlling the device and transmitting measurement data wirelessly.
The microcontroller software is responsible for the main procedures needed for controlling the measurement process and calculation of the properties. As mentioned above, two main procedures are needed to perform measurements: calibration and the measurement itself. Two functions (executed by the microcontroller) are responsible for these operations.
It is convenient first to describe the main variables and arrays used throughout the code; Fig.~\ref{fig_algdata} presents them. Here, "measurement frequencies" are the frequencies at which resistivity and permittivity values are measured (the DDS chip generates signals at these frequencies), and "sampling frequencies" are the ones used to clock the ADC. When the FFT is applied to the digitized signals, certain items of the FFT output array should be used: the algorithm ensures that each measurement frequency coincides (approximately) with a particular frequency from the FFT output, and array $Fi$ contains the indexes of the FFT output items corresponding to these frequencies. Arrays $Pd$ and $Ar$ contain calibration data: these are the phases and amplitudes to which voltages (or currents) should be corrected to take into account imperfections of the measurement circuit. Measurement results are stored in the $Res$ and $Per$ arrays.
Fig.~\ref{fig_algcalib} and Fig.~\ref{fig_algmeas} present pseudocodes for the calibration and measurement functions, respectively. Most functions used there are quite self-descriptive; because they are either simple to implement or very hardware-dependent, only short descriptions are provided for them.
Function \textsc{CalcFreq} is needed for calculations related to the measurement and sampling frequencies: first, the frequencies should be chosen so that each measurement frequency coincides with one of the FFT output frequencies (as mentioned above), and second, the measurement frequencies should be placed linearly on the logarithmic scale.
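A possible sketch of this frequency-selection logic (assuming, for simplicity, a fixed sampling frequency and FFT size, whereas the device also adjusts the sampling frequency; all names here are hypothetical):

```python
def calc_freqs(fmin, fmax, n_points, f_sampling, n_fft):
    """Choose frequencies that are roughly log-spaced between fmin and
    fmax, each snapped to the nearest FFT bin f = i * f_sampling / n_fft
    so that the generated tone falls exactly on one FFT output index.
    Returns (frequencies, bin_indices)."""
    bin_width = f_sampling / n_fft
    freqs, bins = [], []
    for n in range(n_points):
        # linear placement on a logarithmic scale
        f = fmin * (fmax / fmin) ** (n / (n_points - 1))
        i = max(1, round(f / bin_width))  # snap to the nearest bin
        if not bins or i != bins[-1]:     # skip duplicate bins
            bins.append(i)
            freqs.append(i * bin_width)
    return freqs, bins
```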
During calibration, the same signal goes to the voltage probe and the current measurement circuit. As long as the signal goes through two circuits with different amplitude-frequency and phase-frequency characteristics, two signals at the circuit outputs have frequency-dependent phase difference and amplitude ratio. Functions \textsc{CalcPhaseDiff} and \textsc{CalcAmpRatio} calculate phase difference and amplitude ratio for two phasors (or, in other words, for two signals measured by ADC).
Functions \textsc{SaveCalData} and \textsc{SaveMeasData} save data to flash memory, and function \textsc{ReadCalData} reads data from memory. Their implementation depends on the chosen microcontroller, and usually, it is a typical procedure.
Functions \textsc{SelectGeneratorAmplitude} and \textsc{SelectCurrentShunt} automatically choose the current shunt and generator amplitude, which was proposed in the previous work \cite{kuklin_measurements_2019}. These functions choose the needed parameters based on measured current and voltage amplitudes before each measurement.
Functions \textsc{CorrectPhase} and \textsc{CorrectAmplitude} correct voltage phasor to compensate phase and amplitude errors (determined during calibration).
Functions \textsc{CalcResistivity} and \textsc{CalcPermittivity} calculate resistivity and permittivity from voltage and current according to equation (\ref{eq:admit}). The properties are calculated for $k$~=~1 and transmitted to a smartphone, where the actual value of $k$ is calculated based on electrode coordinates set in the smartphone application.
The calibration and measurement functions use the common function \textsc{MeasVoltageAndCurrent}, shown in Fig.~\ref{fig_algcurvol}. Functions \textsc{SetSamplingFreq}, \textsc{SetMeasurementFreq}, \textsc{WriteDataToFIFO}, \textsc{ReadVoltageFromFIFO}, and \textsc{ReadCurrentFromFIFO} are hardware-dependent. \textsc{SetSamplingFreq} is used to set a certain frequency for the CDCE913 output (ADC clock input). \textsc{SetMeasurementFreq} sets the frequency for DDS chip AD9834. \textsc{WriteDataToFIFO} performs all the needed actions for turning on ADC and writing data to FIFO. Functions \textsc{ReadVoltageFromFIFO} and \textsc{ReadCurrentFromFIFO} are needed to transfer data from FIFO to internal random-access memory (RAM) of the microcontroller so that \textsc{FFT} function could apply fast Fourier transform to the time-domain data. After that, $Fi$ array is used for accessing certain indexes of $volFreq$ and $curFreq$ arrays so that needed complex voltage and current values could be finally obtained.
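The core of this step, obtaining the complex value of a tone at a known FFT output index, can be illustrated with a single-bin DFT (equivalent to reading one bin of a full FFT; this is a sketch rather than the actual firmware code):

```python
import cmath

def phasor_at_bin(samples, bin_index):
    """Complex phasor of the tone sitting exactly on the given FFT bin.
    Because the generated tone coincides with a bin, all other bins
    carry only noise, so this acts as a narrow-band filter."""
    n = len(samples)
    w = -2j * cmath.pi * bin_index / n
    acc = sum(x * cmath.exp(w * k) for k, x in enumerate(samples))
    return 2.0 * acc / n  # scale so |phasor| equals the tone amplitude
```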
Apart from the measurements for the frequency range specified by the $Fmin$ and $Fmax$, the device also measures resistivity at a relatively low frequency (500~Hz in this case), the value of which is limited by minimum ADC sampling frequency and FIFO size. This value is needed for the calculation of the imaginary part of permittivity.
Currently, almost no special measures are taken to increase accuracy (algorithmically). However, there are some possible ways to do that. For example, some values can be measured several times (to calculate the average value). Also, certain types of noise (see below) could be eliminated by an algorithm.
Calibration takes around 30 seconds, and a measurement needs the same amount of time (for 32 frequencies), i.e., around 1 second per frequency (independently of the measured values). Thus, preparation for the measurements (such as the accurate placement of the electrodes) usually takes more time than the measurements themselves.
\begin{figure}
\includegraphics[width=3.4in]{algdata}
\caption{Global variables and arrays.}
\label{fig_algdata}
\end{figure}
\begin{figure}
\includegraphics[width=3.4in]{algcalib}
\caption{Pseudocode for the calibration function.}
\label{fig_algcalib}
\end{figure}
\begin{figure}
\includegraphics[width=3.4in]{algmeas}
\caption{Pseudocode for the measurement function.}
\label{fig_algmeas}
\end{figure}
\begin{figure}
\includegraphics[width=3.4in]{algcurvol}
\caption{Function for measurement of the complex current and voltage.}
\label{fig_algcurvol}
\end{figure}
\subsection{Voltage Probe}
The voltage probe \textbf{2} consists of a 1 ampere-hour lithium-ion polymer battery and voltage converters \textbf{2.1}, differential probe \textbf{2.3}, a voltage to current converter \textbf{2.4}, and transmitter \textbf{2.2}. The voltage converters form voltages +5 V (RT9266) and -5 V (MAX660). The differential probe \textbf{2.3} is based on two junction gate field-effect transistors (JFET) SST310 and operational amplifier LT1807. Voltage to current converter \textbf{2.4} consists of an operational amplifier LT1807 and a bipolar junction transistor (BJT) MMBT3904. For the transmitter \textbf{2.2}, HFBR-1414 is used.
As the optical fiber, 62.5/125 (core/cladding diameter) fiber patch cord is used. The straight tip (ST) connectors provide a convenient way to replace the cable if required.
The input resistance of the voltage probe is higher than 20 M$\Omega$; input capacitance is lower than 0.55 pF.
\subsection{Smartphone Application Software}
The device needs means for controlling it, displaying measurement results, saving the results, and other functions. It is possible to embed these capabilities in the device itself. However, a regular smartphone already has all the needed functions, which makes it possible to simplify the measurement device significantly. Moreover, creating a smartphone application is usually a more straightforward task than creating a graphical user interface for a touch screen display controlled by a microcontroller. Another advantage is that remote control allows avoiding the measurement error caused by accidentally touching some parts of the device (due to the parasitic impedance of the human body). Thus, it was chosen to create a smartphone application for controlling the device.
\begin{figure}
\includegraphics[width=2.3in]{app}
\caption{View of the smartphone application.}
\label{fig_app}
\end{figure}
Fig.~\ref{fig_app} shows the view of the app. The app connects to the measurement device via Bluetooth and allows triggering calibration and measurement processes. It also receives measurement results from the device and saves them (and can display the results as a graph). Also, because most smartphones have a camera and a global positioning system (GPS) receiver, the app allows taking pictures of the measurement site and saving geographic coordinates of the place. Additionally, it is possible to view voltages digitized by ADC to ensure that there is no noise or other factors that can lead to erroneous results. Apart from automatic generator amplitude and current shunt selection (by device), manual selection of these parameters is possible from the application. In order to calculate soil properties correctly, the app allows setting coordinates of the current and voltage electrodes: the app uses the coordinates to calculate the geometric factor $k$ and the depth of investigation for an array. The geometric factor is calculated as:
\begin{equation}\label{eq:geomf}
k = \frac{2\pi}{\frac{1}{r_1}-\frac{1}{r_2}-\frac{1}{R_1}+\frac{1}{R_2}},
\end{equation}
where $r_1$ and $r_2$ are the distances from the first potential electrode to the first and the second current electrodes, and $R_1$ and $R_2$ are the distances from the second potential electrode to the same two current electrodes. The electrodes' position relative to the axes (in the app) is not important, as long as the mutual location of the electrodes corresponds to that used in the measurements.
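As an illustrative sketch (not taken from the app's source code; the function and variable names are hypothetical), the geometric factor above can be computed as:

```python
import math

def geometric_factor(r1, r2, R1, R2):
    """Geometric factor k of a four-electrode array, distances in meters.

    r1, r2: distances from the first potential electrode to the two current electrodes.
    R1, R2: distances from the second potential electrode to the same current electrodes.
    """
    return 2.0 * math.pi / (1.0 / r1 - 1.0 / r2 - 1.0 / R1 + 1.0 / R2)

# Wenner array with spacing a: r1 = a, r2 = 2a, R1 = 2a, R2 = a, so k = 2*pi*a
a = 5.0
k = geometric_factor(a, 2 * a, 2 * a, a)
```

For a Wenner array this reduces to the familiar $k = 2\pi a$; the apparent resistivity is then obtained as $\rho_a = k\,V/I$.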
Currently, the device itself calculates the needed properties. Another option is to send measured voltages and currents to the smartphone and calculate the properties inside the smartphone. This approach can reduce the time for measurement if a microcontroller is not very efficient. For faster data transmission speed, Wi-Fi can be used. However, this can reduce battery life due to higher power consumption.
\section{Measurements}
This part presents several measurement results and a comparison with a calculation result. The part also addresses several factors affecting the accuracy of measurement results.
Fig.~\ref{fig_arrays} illustrates a top view of electrode arrays used for the measurements.
\begin{figure}
\includegraphics[width=2.5in]{arrays}
\caption{Electrode arrays used in measurements.}
\label{fig_arrays}
\end{figure}
Fig.~\ref{fig_array1} shows the measurement result for array 1. The value of the low-frequency resistivity $\rho_{0}$ in this case is 718.76~$\Omega\cdot$m. Fig.~\ref{fig_array1eps} shows the same result expressed as complex permittivity. This figure also shows the real and imaginary permittivity approximated with the Debye relaxation model (needed for the calculations below).
\begin{figure}
\includegraphics[width=3.4in]{array1}
\caption{Measurement result for array 1.}
\label{fig_array1}
\end{figure}
\begin{figure}
\includegraphics[width=3.0in]{array1eps}
\caption{Real and imaginary parts of measured permittivity and their approximation.}
\label{fig_array1eps}
\end{figure}
Calculations and measurements for array 2 allow verifying whether the calculation model agrees with the measurement results.
Calculations are made with the finite difference time domain (FDTD) method \cite{yee_numerical_1966, taflove_numerical_1975, taflove_computational_2005}, which uses the auxiliary differential equation method \cite{taflove_computational_2005,okoniewski_simple_1997} to model Debye relaxation (and, subsequently, frequency-dependent soil properties). The Debye function expansion is:
\begin{equation}\label{eq:debye}
\hat{\epsilon}_r(\omega) = \epsilon_\infty + \sum_{p=1}^{n} \frac{\Delta\epsilon_p}{1+j\omega\tau_p},
\end{equation}
where $n$ is the number of Debye terms (poles). Table~\ref{table_param} shows the parameters $\Delta\epsilon$ and $\tau$ of the expansion calculated with the hybrid particle swarm-least squares optimization approach \cite{kelley_debye_2007}; $\epsilon_{\infty}$ equals 22.979.
Calculation parameters, such as current function \cite{heidler_class_2002}, absorbing boundary conditions \cite{taflove_computational_2005}, the method for modeling thin wires \cite{railton_treatment_2005}, are the same as previously \cite{kuklin_numerical_2019}. Fig.~\ref{fig_arraysside} illustrates the calculation model.
\begin{table}
\caption{\label{table_param}Parameters for the Four-Term Debye Function Expansion}
\begin{ruledtabular}
\begin{tabular}{ccc}
term & $\Delta\epsilon$ & $\tau, s$\\
\hline
1 & $11.100$ & $1.102\cdot10^{-7}$\\
2 & $24.950$ & $1.069\cdot10^{-6}$\\
3 & $137.225$ & $9.984\cdot10^{-6}$\\
4 & $1.102\cdot10^{6}$ & $6.420 \cdot10^{-2}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
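For reference, the Debye function expansion with the parameters from Table~\ref{table_param} can be evaluated numerically as follows (a minimal sketch; this code is not part of the device or the FDTD solver):

```python
import math

EPS_INF = 22.979
DEBYE_TERMS = [            # (delta_eps, tau in seconds), from the table above
    (11.100, 1.102e-7),
    (24.950, 1.069e-6),
    (137.225, 9.984e-6),
    (1.102e6, 6.420e-2),
]

def debye_permittivity(f_hz):
    """Complex relative permittivity eps_inf + sum dEps_p / (1 + j*w*tau_p)."""
    w = 2.0 * math.pi * f_hz
    eps = complex(EPS_INF, 0.0)
    for d_eps, tau in DEBYE_TERMS:
        eps += d_eps / complex(1.0, w * tau)
    return eps

eps_100kHz = debye_permittivity(1.0e5)
```

The real part decreases monotonically with frequency and the imaginary part is negative, as expected for a Debye relaxation.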
The soil properties measured with array 1 can be used in calculations for array 2 (measurements with arrays 1 and 2 were made in the vicinity of each other to ensure that the soil properties are approximately the same). After that, a comparison can be made with the measurement results for array 2.
\begin{figure}
\includegraphics[width=2.5in]{arraysside}
\caption{Calculation model for array 2 ($\Delta l$ is the calculation grid cell size).}
\label{fig_arraysside}
\end{figure}
Fig.~\ref{fig_array2} shows measurement and calculation results for array 2. Taking into account that the soil is inhomogeneous and that different arrays have different sensitivity patterns, this result demonstrates that the same EM coupling effects are present in both measurements and calculations. The results also provide additional validation of the calculation model.
\begin{figure}
\includegraphics[width=3.4in]{array2}
\caption{Measurement and calculation results for array 2.}
\label{fig_array2}
\end{figure}
Previous works have shown that measurement errors can exist even for arrays with perpendicularly located measurement wires \cite{kuklin_measurements_2019, kuklin_numerical_2019}. However, the current device version is more accurate (due to the amplitude correction during measurements, the absence of crosstalk, and the better temperature stability of the voltage probe). Therefore, similar measurements were repeated here (see array 3 in Fig.~\ref{fig_arrays}). Measurement results for array 3 (shown in Fig.~\ref{fig_array3}) confirm the previous results.
It should be mentioned that several preliminary measurements have shown that using the connection wires during calibration helps to reduce the measurement error. Thus, it is possible that long connection wires could be a significant contributor to the resistivity measurement error for arrays with perpendicular dipoles. Since the measurements were preliminary, this should be examined by additional measurements with different arrays and, if possible, verified by calculations.
\begin{figure}
\includegraphics[width=3.4in]{array3}
\caption{Measurement result for array 3.}
\label{fig_array3}
\end{figure}
\subsection{Important Aspects During Calibration}
As mentioned previously, there are two parameters measured during calibration: phase difference and amplitude ratio.
Accurate phase calibration is crucial: even a small error in angle due to noise or other factors can lead to significant errors during measurements. Besides, the phase difference depends on frequency quite noticeably. Amplitude is usually more uniform throughout the frequency range than the phase (with correctly designed amplifying circuits). However, uncorrected amplitude can still introduce some error: comparing results with no amplitude correction \cite{kuklin_measurements_2019} and results with the correction (above), one can see a noticeable difference.
The voltage probe can be very sensitive to the external EM field: when the device was tested, an unwanted signal several times higher than the calibration signal (though not electrically connected to the calibration outputs) was able to cause significant phase and amplitude errors (and measurement errors later). Because of this sensitivity, usage of the differential probe for current measurements can cause errors (due to its proximity to the generator).
Another factor that can influence measurement accuracy is the temperature of the measurement device (during calibration and measurement). The device temperature can change either due to the ambient temperature or because of the heating of electronic components (during their regular operation). If the temperature during a measurement differs from that during calibration, this can lead to measurement error. Thus, either the temperature dependence should be minimized, or some temperature compensation should be used (to take into account the influence of temperature on measurement results). Also, to increase measurement accuracy, calibration could be done each time immediately before measurement.
One should also avoid significant bending of the optical fiber during calibration, since this influences the amplitude of the signal.
\subsection{Important Aspects During Measurements}
The most critical parameter for the measurements, as mentioned previously, is the location of connection wires: to reduce EM coupling, the wires should be located perpendicularly.
Another parameter is the length of measurement wires: they should be relatively short (preferably, not longer than several meters). Short distances, however, increase geometric factor $k$ and, therefore, can lead to errors due to small (compared to noise) amplitude of the measured voltage. Thus, distances should not be too short either (unless a small depth of investigation or a specific sensitivity pattern is needed). Both current and voltage rod distance influence EM coupling error; the noise level is mostly influenced by the voltage rod distance \cite{kuklin_measurements_2019} and location of the voltage rods in general.
Usually, FFT effectively filters out high-frequency (and relatively low-amplitude) noise. In some cases, however, low-frequency noise with relatively high amplitude was observed (even for a small distance between voltage electrodes). This noise was able to shift the useful signal beyond allowable ADC input voltage range. This kind of error could be eliminated by detecting erroneous signals algorithmically (and repeating measurements) or by interchanging current and voltage rods during measurements.
There are also some other factors that can slightly influence measurement results at high frequencies, such as generator location along the line between current rods and its elevation above the ground.
\begin{table}
\caption{\label{table_paramdev}Parameters of the Measurement Device}
\begin{ruledtabular}
\begin{tabular}{cc}
parameter & value\\
\hline
Resistance measurement range & 100 $\Omega$ ... 10 k$\Omega$\\
Capacitance measurement range & 47 pF ... 2.7 nF\\
Resistance accuracy (below 2 MHz) & $\approx$4\%\\
Resistance accuracy (above 2 MHz) & $\approx$10\%\\
Capacitance accuracy (above 30 kHz) & $\approx$6\%\\
Capacitance accuracy (below 30 kHz) & $\approx$10\%\\
Frequency range & 10 kHz ... 4 MHz\\
Frequency accuracy & 0.28 Hz\\
Signal amplitude (current circuit) & 4 mV ... 110 mV\\
Signal amplitude (voltage circuit) & 4 mV ... 140 mV\\
Sensitivity (current circuit) & $\pm$ 12.6 mV\\
Sensitivity (voltage circuit) & $\pm$ 30.3 mV\\
\end{tabular}
\end{ruledtabular}
\end{table}
Below, several parameters of the measurement device are given (see Table~\ref{table_paramdev}). There are challenges in determining the measurement range and accuracy of the measurement device, as it is difficult to find reference measurement results (analogous to those in the field) for comparison. A possible approach (used in this work) is to measure resistances and capacitances and compare the results to their known values. However, there are significant differences between the field measurements and measurements with lumped elements. Thus, the parameters in Table~\ref{table_paramdev} are only estimates (the actual measurement range and accuracy for the field measurements can be different). Possibly, there is a more accurate way to determine these parameters. One of the indicators of accuracy for the field measurements is the good agreement between the measured complex permittivity and the Debye relaxation model (for which the real and imaginary parts of the permittivity are related by the Kramers-Kronig relations).
In order to test the measurement range and accuracy, a voltage divider was used (see Fig.~\ref{fig_divider}). Either resistors or capacitors were used as the impedances $Z_1$, $Z_2$, and $Z_3$. When a capacitor and a resistor are connected in parallel, in some cases there are errors significantly exceeding those during the field measurements. Thus, the results in the table were measured for resistors and capacitors separately. Also, to avoid high amplitudes at the ADC input, the voltage probe amplification was reduced, and the VCA output was used as the generator output. In addition, measured values seemingly depend on the amplitude of the signals (which affects measurement accuracy); therefore, high amplitudes were avoided. This amplitude dependence can be taken into account in the next version of the device.
\begin{figure}
\includegraphics[width=1.4in]{divider}
\caption{Connection to voltage divider.}
\label{fig_divider}
\end{figure}
The frequency range in the table corresponds to that used in the present work. It is limited mostly by the ADC clock and FIFO memory (and partially by the voltage probe and generator), and technically it can be wider: from several hundred hertz to around 10 MHz. However, resistivity measurement accuracy at high frequencies and permittivity accuracy at low frequencies would be low in that case, and low-frequency resistivity measurement is important, as mentioned above.
According to the AD9834 datasheet, this DDS chip allows achieving a resolution of 0.28 Hz (with 75 MHz clock), which determines the frequency accuracy of the generator.
Amplifier \textbf{1.5.1} amplifies the signal by 10; amplifier \textbf{1.5.2} and the voltage probe amplify the signal approximately by 8 (depending on frequency). Since the ADC input range is 2.3 V, this determines the maximum amplitudes for the voltage probe and the voltages on the current shunts. Minimum amplitudes were determined by gradual reduction of the calibration signal until visible errors started to appear. The minimum amplitudes can be lower or higher depending on the acceptable level of measurement errors (thus, this is an approximate value).
Peak-to-peak sensitivity was calculated as $\pm N$ times 2.3 V divided by 4096, where $\pm N$ corresponds to the noise measured in counts, 2.3 V is the ADC input range, 4096 is $2^{12}$ (the ADC has 12 bits resolution). Note that the noise was measured at ADC input, where the signal is amplified by the voltage probe and the analog processing circuit \textbf{1.5}. In addition, FFT allows detecting weak signals even in the presence of noise (amplitudes lower than noise probably can be detected).
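The sensitivity estimate described above can be sketched as follows (the noise level of $\pm 22$ counts is a hypothetical example, not a value measured here):

```python
VREF = 2.3               # ADC input range, volts
BITS = 12                # ADC resolution
LSB = VREF / 2 ** BITS   # volts per ADC count (about 0.56 mV)

def peak_to_peak_sensitivity(n_counts):
    """Peak-to-peak sensitivity in volts for noise of +/- n_counts ADC counts."""
    return n_counts * LSB

s = peak_to_peak_sensitivity(22)   # hypothetical +/- 22 counts of noise
```

The value is referred to the ADC input; dividing by the gain of the analog chain gives the sensitivity at the probe input.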
\section{Conclusion}
The measurement device presented in the article significantly decreases measurement time (down to 10--20 minutes or so) and improves the convenience of use due to its compact size, making field measurements of frequency-dependent soil properties very easy. Furthermore, the device does not contain very specific electronic components (and does not need special programming skills); therefore, it is relatively easy to replicate the device. Thanks to this, more field experimental results can be obtained, so that the frequency-dependence of soil properties can be investigated more thoroughly. The device could even be used for engineering purposes if the electrical exploration of soil is needed (similarly to the regular low-frequency resistivity measurements). Possibly, the modified version of the device could also be used for measurements with soil samples.
In the cases when larger investigation depths are needed, generator amplitude should be increased.
Since soils are usually inhomogeneous, more complicated soil models could be needed \cite{datsios_methods_2020, li_inversion_2020}. However, it probably would be appropriate first to determine what accuracy (for inhomogeneities) is needed in practice. It is possible that, in particular cases, it is not very important to know all the inhomogeneities of the soil. That is, it could be enough to know some ``effective'' soil properties for the volume of soil where the grounding is (or will be) located. And possibly those ``effective'' properties could be measured by the device.
\section{Introduction}
Consider a supervised learning problem with training data $\{(\bm{x}_i, y_i)\}_{i=1}^n$ where $y_i = f^*(\bm{x}_i), i=1, \cdots, n$.
Let $\mathcal{F}$ be the hypothesis space.
Our objective is to find the best approximation to the target function $f^*$ in the hypothesis space by using only information from the training dataset.
We will assume that $\mathcal{F}$ is large enough to guarantee that interpolation is possible, i.e., there exist trial functions $f$
in $\mathcal{F}$ such that $f(\bm{x}_i) = y_i$ holds for all $i=1, \cdots, n$. Trial functions that satisfy this condition are called ``interpolated solutions''.
We are interested in the generalization properties of the following minimum-norm interpolated solution:
\begin{equation}\label{eqn: minimum-norm}
\begin{aligned}
\hat{f} \,\in&\, \argmin_{ f \in \mathcal{F}}\, \|f\|_{\mathcal{F}} \\
&\text{s.t.}\,\,\, f(\bm{x}_i) = y_i,\quad i=1, \dots, n.
\end{aligned}
\end{equation}
Here $\|\cdot\|_{\mathcal{F}}$ is a norm imposed for the model, which {is usually} different for different models. This type of estimator is
relevant for understanding various explicitly or implicitly regularized models and optimization methods. For example, it is well known that for (generalized) linear models, gradient descent converges to minimum $l_2$-norm solutions, if we initialize all the coefficients from zero.
In any case, minimum-norm solutions play an important role in the analysis of modern machine learning models
and they are the focus of this paper.
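As a minimal numerical sketch (illustrative only, not part of the analysis below), for a linear model the minimum $l_2$-norm interpolant in \eqref{eqn: minimum-norm} is given by the Moore-Penrose pseudoinverse, which is also the solution that gradient descent converges to from zero initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 100                      # n samples, m >= n features (over-parametrized)
Phi = rng.standard_normal((n, m))   # feature matrix; full row rank almost surely
y = rng.standard_normal(n)

# minimum l2-norm interpolant: a = Phi^T (Phi Phi^T)^{-1} y = pinv(Phi) y
a_min = np.linalg.pinv(Phi) @ y
```

Adding any null-space component of $\Phi$ keeps the interpolation constraint but can only increase the norm.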
Assume that the training data $(\bm{x}_i,y_i) \in \mathbb{R}^{d}\times \mathbb{R}, i=1, \dots, n$ are generated from the model
\[
y_i = f^*(\bm{x}_i), \quad i=1,\dots, n,
\]
where $\bm{x}_i\sim P_{\bm{x}}$ and the random draws are independent.
$f^*$ is the target function that we want to estimate from the $n$ training samples. In this paper, we will always assume that $\|\bm{x}_i\|_{\infty}\leq 1, |y_i|\leq 1$. Let $X=(\bm{x}_1,\dots,\bm{x}_n)\in\mathbb{R}^{d\times n}$ and $\mathbf{y}=(y_1,\dots,y_n)^T\in\mathbb{R}^n$.
Let $f(\cdot;\theta)$ denote the parametric model, which could be a random feature model, two-layer or residual neural network model in our subsequent analysis.
We want to find $\theta$ that minimizes the generalization error (also called the population risk)
\[
R(\theta) := \mathbb{E}_{\bm{x},y}[\ell(f(\bm{x};\theta), y)].
\]
Here $\ell(y,y') = \frac{1}{2}(y-y')^2$ is the loss function.
But in practice, we can only deal with the risk defined on the training samples, the empirical risk:
\[
\hat{R}_n(\theta):=\frac{1}{n}\sum_{i=1}^n \ell(f(\bm{x}_i;\theta),y_i).
\]
A key question in machine learning is the size of the population risk (or the generalization error) for minimizers of the empirical risk.
In the case of interest here, the minimizers of the empirical risk are far from being unique,
and we will focus on the one with a minimum norm.
We will consider three classes of models. Arguably they are the most representative models in the analysis of modern
machine learning algorithms.
\paragraph*{The random feature model.}
Let $\{\varphi(\cdot; \bm{w}), \bm{w} \in \Omega \}$ be a set of random features over some probability space $\Omega$ endowed with a probability measure
$\mu$.
The random feature model is given by
\begin{equation}\label{eqn: rdf-model}
f_m(\bm{x};\bm{a}) := {\frac{1}{m}}\sum_{j=1}^m a_j \varphi(\bm{x}; \bm{w}_j),
\end{equation}
where $\bm{a}=(a_1,\dots,a_m)^T \in\mathbb{R}^m$ are the parameters to be learned from the data, and $\{\bm{w}_j\}_{j=1}^m$ are i.i.d. random variables drawn from $\mu$. For this model, there is a naturally related reproducing kernel Hilbert space (RKHS) \cite{aronszajn1950theory} $\mathcal{H}_k$ with the kernel defined by
\begin{equation}
k(\bm{x},\bm{x}') := \mathbb{E}_{\bm{w}\sim \mu} [\varphi(\bm{x};\bm{w})\varphi(\bm{x}';\bm{w})].
\end{equation}
For simplicity, we assume that $|\varphi(\bm{x};\bm{w})|\leq 1$.
Define two kernel matrices $K=(K_{i,j}), K^m=(K^m_{i,j})\in \mathbb{R}^{n\times n}$ with
\[
K_{i,j} = k(\bm{x}_i,\bm{x}_j), \quad K^m_{i,j} = \frac{1}{m}\sum_{s=1}^m \varphi(\bm{x}_i;\bm{w}_s)\varphi(\bm{x}_j;\bm{w}_s),
\]
The latter is an approximation of the former.
\paragraph*{The two-layer neural network model.}
A two-layer neural network is given by
\begin{equation}\label{eqn: 2nn}
f_{m}(\bm{x};\theta) = {\frac{1}{m}}\sum_{j=1}^{m}a_j\sigma(\bm{b}_j\cdot \bm{x} + c_j).
\end{equation}
Here $\sigma(t) = \max(0,t)$ is the rectified linear unit (ReLU) activation function.
Let $\theta = \{(a_j,\bm{b}_j, c_j)\}_{j=1}^m$ be all the parameters to be learned from the data.
If we define $\varphi(\bm{x};\bm{b}, c) := \sigma(\bm{b}\cdot \bm{x} + c)$, the two-layer neural network is almost the same as the random feature model \eqref{eqn: rdf-model}. The only difference is that $\{\bm{w}_j\}_{j=1}^m$ in the random feature model is fixed during the training process, while the parameters $\{(\bm{b}_j,c_j)\}_{j=1}^m$ in the two-layer neural network model are learned from the data.
Consider the case where $(\bm{b},c)\sim \pi_0$, where $\pi_0$ is a fixed probability distribution. We define $k_{\pi_0}(\bm{x},\bm{x}')=\mathbb{E}_{\pi_0}[\sigma(\bm{b}\cdot\bm{x}+c)\sigma(\bm{b}\cdot\bm{x}'+c)]$ and the corresponding kernel matrix $K_{\pi_0} = (k_{\pi_0}(\bm{x}_i,\bm{x}_j))\in\mathbb{R}^{n\times n}$. Let $\lambda_n = \lambda_n(K_{\pi_0})$, the smallest eigenvalue of $K_{\pi_0}$, which will be used to bound the network width in our later analysis.
\paragraph*{\textbf{Residual neural networks.}}
Consider the following type of residual neural networks
\begin{equation}\label{eqn: resnet}
\begin{aligned}
\bm{z}_0(\bm{x}) &= V\tilde{\bm{x}} \\
\bm{z}_{l+1}(\bm{x}) &= \bm{z}_{l}(\bm{x}) + {\frac{1}{L}}U_l \sigma(W_l\bm{z}_{l}(\bm{x})) , \quad l=0,\dots, L-1\\
f_L(\bm{x};\theta) &= \bm{\alpha}^T\bm{z}_L(\bm{x})
\end{aligned}
\end{equation}
where $\tilde{\bm{x}}=(\bm{x}^T,1)^T\in\mathbb{R}^{d+1}, W_l \in \mathbb{R}^{m\times D}, U_l\in \mathbb{R}^{D\times m}, \bm{\alpha}\in \mathbb{R}^{D}$ and
\[
V =
\begin{pmatrix}
I_{d+1} \\
0
\end{pmatrix} \in\mathbb{R}^{D\times (d+1)}.
\]
We use $\theta = \{W_1,U_1,\dots, W_L, U_L, \bm{\alpha}\}$ to denote all the parameters to be learned from the training data.
To explicitly show the dependence on the hyper-parameters, we call $f_L(\cdot;\theta)$ a $(L,D,m)$ residual network.
There is a large volume of literature on the theoretical analysis of these models. The most important issue is to estimate the generalization error for
the situation when $d$ is large. In this case, one benchmark for us is the Monte Carlo algorithm. Our hope would be to establish estimates
that are comparable to those for Monte Carlo algorithms. We call these Monte Carlo-like error rates.
In this regard, Monte Carlo-like estimates of the generalization error were established in \cite{e2018priori} for two-layer neural network
models and in \cite{ma2019priori} for residual network models, when suitable regularization terms are added explicitly to the model.
These regularization terms help to guarantee the boundedness of certain norms which in turn help to control the generalization gap.
{It should be noted that as is the case for integration problems, small improvements can be made on these rates \cite{dick2013high}, typically from $O(n^{-1/2})$ to $O(n^{-1/2-1/d})$. However, these improvements become negligible when $d\gg 1$. In general one should not expect asymptotically better than Monte Carlo-like rates in high dimensions.}
For interpolated solutions, recent literature on their mathematical analyses includes work on the nearest neighbor scheme \cite{belkin2018overfitting}, linear regression \cite{bartlett2019benign,hastie2019surprises}, kernel (ridgeless) regression \cite{belkin2018understand,liang2018just, rakhlin2018consistency,liang2019risk} and random feature model \cite{hastie2019surprises}.
We will study minimum-norm interpolated solution for the three models described above.
{We} prove that the minimum-norm estimators can achieve the
Monte Carlo rate up to logarithmic terms, as long as the target functions are in the right function spaces and the models are sufficiently over-parametrized.
More precisely, we prove the following results.
\begin{itemize}
\item For the random feature model, the corresponding function space is the reproducing kernel Hilbert space (RKHS) associated with
the corresponding kernel. Optimal rate for the generalization error is proved for the $l_2$ minimum-norm interpolated solution when the
model is sufficiently over-parametrized.
\item The same result is proved for two-layer neural network models. The corresponding function space for the two-layer neural network
model is the Barron space defined in \cite{e2019barron,e2018priori}. {Naturally the norm used in \eqref{eqn: minimum-norm} is the Barron norm
(note that the Barron norm is different from the Fourier transform-based norm used in Barron's original paper \cite{barron1993universal}).}
\item The same result is also proved for deep residual network models for which the corresponding function space is the flow-induced function space defined in \cite{e2019barron}, and the
norm used in \eqref{eqn: minimum-norm} is the flow-induced norm.
\end{itemize}
We remark that over-parametrization is a key for these results. {This can be seen from the work \cite{belkin2019reconciling}, which} experimentally showed that minimum-norm interpolated solutions may generalize very badly if the model is not sufficiently over-parametrized. In contrast, the corresponding explicitly regularized models are always guaranteed to achieve the optimal rate \cite{e2018priori,caponnetto2007optimal,e2019barron}.
To control the gap between population risk and empirical risk, the following notion of complexity of function spaces will be used.
\begin{definition}[Rademacher complexity]
Recall that $\mathcal{F}$ and $\{\bm{x}_i\}_{i=1}^n$ denote the hypothesis space and the training data set respectively.
The Rademacher complexity \cite{shalev2014understanding} of $\mathcal{F}$ with respect to the data is defined by
\[
\rad_n(\mathcal{F}) := \frac{1}{n}\mathbb{E}_{\xi_1,\dots,\xi_n} [\sup_{f\in \mathcal{F}} \sum_{i=1}^n \xi_if(\bm{x}_i)],
\]
where $\{\xi_i\}_{i=1}^n$ are i.i.d random variables with $\mathbb{P}(\xi=1)=\mathbb{P}(\xi=-1)=\frac{1}{2}$.
\end{definition}
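For a finite hypothesis class, the empirical Rademacher complexity can be estimated by Monte Carlo over the sign variables $\xi_i$; the following sketch (with an arbitrary toy class) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(F_vals, n_draws=2000):
    """Monte Carlo estimate of (1/n) E_xi sup_f sum_i xi_i f(x_i).

    F_vals: array of shape (num_functions, n) holding f(x_i) for each f in a
            finite hypothesis class evaluated on the n data points.
    """
    num_f, n = F_vals.shape
    xi = rng.choice([-1.0, 1.0], size=(n_draws, n))  # Rademacher signs
    return np.mean(np.max(xi @ F_vals.T, axis=1)) / n

# toy class: 5 bounded functions (|f| <= 1) evaluated on n = 50 data points
F_vals = rng.uniform(-1.0, 1.0, size=(5, 50))
rad = empirical_rademacher(F_vals)
```

Since the functions are bounded by one, the estimate always lies in $[0, 1]$.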
We will use the following theorem to bound the generalization error.
\begin{theorem}[Theorem 26.5 of \cite{shalev2014understanding}]\label{thm: rad-gen-err}
Assume that the loss function $\ell(\cdot,y')$ is $Q$-Lipschitz continuous and bounded from above by $C$. For any $\delta\in (0,1)$, with probability $1-\delta$ over the random sampling of the training data, the following generalization bound holds for any $f\in \mathcal{F}$,
\[
R(f) \leq \hat{R}_n(f) + 2 Q \rad_n(\mathcal{F}) + 4C \sqrt{\frac{2\ln(2/\delta)}{n}}.
\]
\end{theorem}
\paragraph*{Notation.} We use $\|\bm{v}\|_q$ to denote the standard $\ell_q$ norm of a vector $\bm{v}$, and $\|\cdot\|$ to denote the $l_2$ norm. For a matrix $A$, we use $\lambda_j(A)$ to denote the $j$-th largest eigenvalue of $A$ and we also define the norm $\|A\|_{1,1}=\sum_{i,j}|a_{i,j}|$. The spectral and Frobenius norms of a matrix are denoted by $\|\cdot\|$ and $\|\cdot\|_F$, respectively. We use $X\lesssim Y$ to mean that there exists a universal constant $C>0$ such that $X\leq C Y$. For any positive integer $d$, we let $\SS^{d}:=\{\bm{w} | \bm{w}\in\mathbb{R}^{d+1}, \|\bm{w}\|_1 = 1\}$ and use $\pi_0$ to denote the uniform distribution over $\SS^d$. For two matrices $A=(a_{i,j}), B=(b_{i,j})$ in $\mathbb{R}^{n\times m}$, if $a_{i,j}\leq b_{i,j}$ for any $i\in [n], j \in [m]$, then we write $ A \preceq B$. For any positive integer $q$, we denote by $[q]:=\{1,\dots, q\}$, $\bm{1}_q = (1,\dots,1)\in\mathbb{R}^q$. For a scalar function $g:\mathbb{R}\to\mathbb{R}$ and matrix $A=(a_{i,j})$, we let $g(A)=(g(a_{i,j}))$.
\section{The random feature model}
Consider the minimum $l_2$ norm solution defined by
\begin{equation}\label{eqn: rf-min-norm}
\begin{aligned}
\hat{\bm{a}}_n := \argmin_{\hat{R}_n(\bm{a})=0} \|\bm{a}\|^2.
\end{aligned}
\end{equation}
For this estimator, we have the following theorem.
\begin{theorem}\label{thm: random-feature}
Assume that $f^*\in \mathcal{H}_k$. For any $\delta\in (0,1)$, assume that $m\geq \frac{8n^2\ln(2n^2/\delta)}{\lambda_n^2(K)}$. Then with probability at least $1-\delta$ over the random sampling of the data and the features, we have
\[
R(\hat{\bm{a}}_n)\lesssim \frac{\|f^*\|_{\mathcal{H}_k}^2+1}{\sqrt{n}}\left(1 + \sqrt{\ln(2/\delta)}\right).
\]
\end{theorem}
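A small numerical sketch of the estimator \eqref{eqn: rf-min-norm} (with hypothetical cosine features and a toy target; illustrative only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 30, 500, 5                 # heavily over-parametrized: m >> n

X = rng.uniform(-1.0, 1.0, size=(n, d))
W = rng.standard_normal((m, d))      # random features w_j ~ mu
Phi = np.cos(X @ W.T)                # phi(x; w) = cos(w . x), |phi| <= 1
y = np.sin(X[:, 0])                  # a smooth toy target evaluated on the data

# minimum-norm interpolant of f_m(x; a) = (1/m) sum_j a_j phi(x; w_j):
# a_hat = m Phi^T (Phi Phi^T)^{-1} y
Km = Phi @ Phi.T / m                 # empirical kernel matrix K^m
a_hat = m * Phi.T @ np.linalg.solve(Phi @ Phi.T, y)
```

One can check that $\|\hat{\bm{a}}_n\|^2/m = \mathbf{y}^T(K^m)^{-1}\mathbf{y}$, the quantity that is bounded in the proof of the theorem.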
To prove the above theorem, we need the following lemma, which says that the two kernel matrices are close when the random feature model is sufficiently over-parametrized.
\begin{lemma}\label{eqn: kernel-approx}
For any $\delta \in (0,1)$, with probability $1-\delta$ over the random sampling of $\{\bm{w}_j\}_{j=1}^m$, we have
\[
\|K-K^m\|\leq \sqrt{\frac{n^2\ln(2n^2/\delta)}{2m}}.
\]
In particular, if $m\geq \frac{2n^2\ln(2n^2/\delta)}{\lambda_n^2(K)}$, we have
\[
\lambda_n(K^m) \geq \frac{\lambda_n(K)}{2}.
\]
\end{lemma}
\begin{proof}
According to Hoeffding's inequality, we have that for any $\delta' \in (0,1)$, with probability $1-\delta'$ the following holds for any
specific $i,j \in [n]$,
\[
|k(\bm{x}_i,\bm{x}_j)-\frac{1}{m}\sum_{s=1}^m \varphi(\bm{x}_i;\bm{w}_s)\varphi(\bm{x}_j;\bm{w}_s)|\leq \sqrt{\frac{\ln(2/\delta')}{2m}}.
\]
Therefore, with probability $1-n^2\delta'$, the above inequality holds for all $i,j\in [n]$. Setting $\delta=n^2\delta'$, the above can be written as
\[
|k(\bm{x}_i,\bm{x}_j)-\frac{1}{m}\sum_{s=1}^m \varphi(\bm{x}_i;\bm{w}_s)\varphi(\bm{x}_j;\bm{w}_s)|\leq \sqrt{\frac{\ln(2n^2/\delta)}{2m}}.
\]
Thus we have
\[
\|K-K^m\| \leq \|K-K^m\|_F \leq \sqrt{\frac{n^2\ln(2n^2/\delta)}{2m}}.
\]
Using Weyl's inequality, we have
\[
\lambda_n(K^m)\geq \lambda_n(K) - \|K-K^m\| \geq \lambda_n(K) - \sqrt{\frac{n^2\ln(2n^2/\delta)}{2m}}.
\]
When $m\geq \frac{2n^2\ln(2n^2/\delta)}{\lambda_n^2(K)}$, we have $\lambda_n(K^m)\geq \frac{\lambda_n(K)}{2}$.
\end{proof}
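Lemma \ref{eqn: kernel-approx} can be illustrated numerically by comparing $K^m$ for moderate $m$ against a large-$m$ proxy for the exact kernel matrix $K$ (an illustrative experiment with hypothetical cosine features, not used in the proofs):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 15, 3
X = rng.uniform(-1.0, 1.0, size=(n, d))

def empirical_kernel(m):
    """K^m_{ij} = (1/m) sum_s phi(x_i; w_s) phi(x_j; w_s) with phi = cos(w . x)."""
    W = rng.standard_normal((m, d))
    Phi = np.cos(X @ W.T)
    return Phi @ Phi.T / m

K_ref = empirical_kernel(200_000)    # large-m proxy for the exact kernel K
err_small_m = np.linalg.norm(empirical_kernel(100) - K_ref, 2)
err_large_m = np.linalg.norm(empirical_kernel(10_000) - K_ref, 2)
```

The spectral-norm error shrinks roughly like $m^{-1/2}$, consistent with the lemma.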
We first have the following estimate for kernel (ridgeless) regression.
\begin{lemma}\label{lemma: bound of krr}
$\mathbf{y}^T K^{-1}\mathbf{y} \leq \|f^*\|^2_{\mathcal{H}_k}$.
\end{lemma}
\begin{proof}
Consider the following optimization problem
\begin{equation}\label{eqn: pro}
\hat{h}_n = \argmin_{\hat{R}_n(h)=0} \|h\|_{\mathcal{H}_k}^2.
\end{equation}
According to the representer theorem (see Theorem 16.1 of \cite{shalev2014understanding}), we can write $\hat{h}_n$ as follows
\[
\hat{h}_n = \sum_{i=1}^n \beta_i k(\bm{x}_i,\cdot).
\]
Plugging it into $\hat{R}_n(\hat{h}_n)=0$ gives us that
$
\mathbf{y} = K \bm{\beta},
$
which leads to $\bm{\beta} = K^{-1}\mathbf{y}$. According to the Moore-Aronszajn theorem \cite{aronszajn1950theory}, we have
\[
\|\hat{h}_n\|^2_{\mathcal{H}_k} = \bm{\beta}^T K \bm{\beta} = \mathbf{y}^T K^{-1} \mathbf{y}.
\]
Since $\hat{h}_n$ is by definition the minimum-RKHS-norm solution and $\hat{R}_n(f^*)=0$, it follows that
$
\|\hat{h}_n\|_{\mathcal{H}_k}^2 \leq \|f^*\|_{\mathcal{H}_k}^2,
$
so we have $\mathbf{y}^TK^{-1}\mathbf{y}\leq \|f^*\|^2_{\mathcal{H}_k}$.
\end{proof}
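The construction in the proof can be sketched numerically: $\bm{\beta}=K^{-1}\mathbf{y}$ interpolates the data, and $\mathbf{y}^TK^{-1}\mathbf{y}$ is the squared RKHS norm of $\hat{h}_n$ (the Gaussian kernel and all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 25, 3
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(X[:, 0])

def gauss_kernel(A, B, gamma=5.0):
    """k(x, x') = exp(-gamma ||x - x'||^2), a strictly positive definite kernel."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = gauss_kernel(X, X)
beta = np.linalg.solve(K, y)   # representer coefficients: h = sum_i beta_i k(x_i, .)
rkhs_norm_sq = y @ beta        # equals y^T K^{-1} y = ||h||_{H_k}^2
```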
The following lemma provides an upper bound for the minimum-norm solution of the random feature model \eqref{eqn: rf-min-norm}.
\begin{lemma}
Assume that $f^*\in \mathcal{H}_k$ with $k(\bm{x},\bm{x}')=\mathbb{E}_{\bm{w}}[\varphi(\bm{x};\bm{w})\varphi(\bm{x}';\bm{w})]$.
Then the minimum-norm estimator satisfies
$$
{\frac{1}{\sqrt{m}}}\|\hat{\bm{a}}_n\|\leq 2 \|f^*\|_{\mathcal{H}_k}.
$$
\end{lemma}
\begin{proof}
Let $\Phi=(\Phi_{i,j})\in \mathbb{R}^{n\times m}$ with $\Phi_{i,j}=\varphi(\bm{x}_i;\bm{w}_j)$. Then the solution of problem \eqref{eqn: rf-min-norm} is given by
\[
\hat{\bm{a}}_n = {m}\Phi^T(\Phi\Phi^T)^{-1}\mathbf{y}.
\]
Obviously, $K^m = \frac{1}{m}\Phi\Phi^T$.
Therefore, we have
\begin{align*}
{\frac{1}{m}} \|\hat{\bm{a}}_n\|^2 &= m \mathbf{y}^T(\Phi\Phi^T)^{-1} \Phi \Phi^T(\Phi\Phi^T)^{-1}\mathbf{y} = \mathbf{y}^T (\frac{1}{m}\Phi\Phi^T)^{-1}\mathbf{y} = \mathbf{y}^T (K^m)^{-1}\mathbf{y} \\
&= \mathbf{y}^T K^{-1} \mathbf{y} + \mathbf{y}^T ((K^m)^{-1} - K^{-1}) \mathbf{y} \\
&=\mathbf{y}^T K^{-1} \mathbf{y} + \mathbf{y}^T K^{-1}(K - K^m) (K^m)^{-1} \mathbf{y} \\
&\leq \mathbf{y}^T K^{-1} \mathbf{y} + \|(K^m)^{-1/2}\mathbf{y}\| \|(K^m)^{-1/2}(K-K^m)K^{-1/2}\| \|K^{-1/2}\mathbf{y}\|
\end{align*}
According to Lemma \ref{lemma: bound of krr}, we have $\mathbf{y}^TK^{-1}\mathbf{y}\leq \|f^*\|_{\mathcal{H}_k}^2$. Denoting $t=\sqrt{\|\hat{\bm{a}}_n\|^2/{ m}}=\sqrt{\mathbf{y}^T(K^m)^{-1}\mathbf{y}}$, we have
\[
t^2 \leq \|f^*\|_{\mathcal{H}_k}^2 + t\|f^*\|_{\mathcal{H}_k} \|(K^m)^{-1/2}\|\|K-K^m\|\|K^{-1/2}\|.
\]
By Lemma \ref{eqn: kernel-approx}, we have
\begin{align}
t^2 \leq \|f^*\|_{\mathcal{H}_k}^2 + t\|f^*\|_{\mathcal{H}_k} \sqrt{\frac{n^2\ln(2n^2/\delta)}{\lambda_n^2(K) m}}.
\end{align}
Under the assumption that $m\geq \frac{2n^2\ln(2n^2)}{\lambda_n^2(K)}$, we obtain
\begin{equation*}
{ \frac{1}{\sqrt{m}}}\|\hat{\bm{a}}_n\| = t\leq 2 \|f^*\|_{\mathcal{H}_k}.
\end{equation*}
\end{proof}
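The closed form $\hat{\bm{a}}_n = m\Phi^T(\Phi\Phi^T)^{-1}\mathbf{y}$ and the identity $\frac{1}{m}\|\hat{\bm{a}}_n\|^2=\mathbf{y}^T(K^m)^{-1}\mathbf{y}$ used at the start of this proof can be checked numerically; the sketch below (an illustration only, with arbitrary ReLU random features and NumPy) is one way to do so:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 15, 3, 400                      # over-parametrized: m >> n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W = rng.standard_normal((m, d))

Phi = np.maximum(X @ W.T, 0.0)            # Phi_{ij} = relu(x_i . w_j)
a_hat = m * Phi.T @ np.linalg.solve(Phi @ Phi.T, y)   # minimum-norm interpolant

Km = Phi @ Phi.T / m                      # empirical kernel matrix K^m
interp_err = np.abs(Phi @ a_hat / m - y).max()        # f_m(x_i; a_hat) = y_i
lhs = float(a_hat @ a_hat) / m                        # (1/m) ||a_hat||^2
rhs = float(y @ np.linalg.solve(Km, y))               # y^T (K^m)^{-1} y
```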
\paragraph*{Proof of Theorem \ref{thm: random-feature}}
Define $A_C = \{\bm{a} : { \frac{1}{\sqrt{m}}}\|\bm{a}\|\leq C\}$ and $\mathcal{F}_C = \{ f_m(\cdot;\bm{a}) | \bm{a}\in A_C \}$. The Rademacher complexity of $\mathcal{F}_C$ satisfies,
\begin{align*}
\rad_n(\mathcal{F}_C) &= \frac{1}{n}\mathbb{E}_{\xi_i}\sup_{f\in\mathcal{F}_C}\sum_{i=1}^n \xi_i {\frac{1}{m}}\sum_{j=1}^m a_{j}\varphi(\bm{x}_i;\bm{w}_j) = \frac{1}{nm}\mathbb{E}_{\xi_i}\sup_{f\in\mathcal{F}_C}\sum_{j=1}^m a_{j} \sum_{i=1}^n \xi_i \varphi(\bm{x}_i;\bm{w}_j)\\
&\leq \frac{1}{n{m}}\mathbb{E}_{\xi_i}\sup_{f\in\mathcal{F}_C}\sqrt{\sum_{j=1}^m a^2_{j}} \sqrt{\sum_{j=1}^m\left(\sum_{i=1}^n \xi_i \varphi(\bm{x}_i;\bm{w}_j)\right)^2}\\
&\leq \frac{C}{n\sqrt{m}}\mathbb{E} \sqrt{\sum_{j=1}^m\left(\sum_{i=1}^n \xi_i \varphi(\bm{x}_i;\bm{w}_j)\right)^2}\\
&\stackrel{(i)}{\leq} \frac{C}{n\sqrt{m}}\sqrt{\mathbb{E} [\sum_{j=1}^m\left(\sum_{i=1}^n \xi_i \varphi(\bm{x}_i;\bm{w}_j)\right)^2]} \\
& = \frac{C}{n\sqrt{m}}\sqrt{\sum_{j=1}^m\sum_{i,i'=1}^n \mathbb{E}[\xi_i\xi_{i'}] \varphi(\bm{x}_i;\bm{w}_j)\varphi(\bm{x}_{i'};\bm{w}_j)} \stackrel{(ii)}{\leq}\frac{C}{\sqrt{n}},
\end{align*}
where $(i)$ and $(ii)$ follow from Jensen's inequality and $\mathbb{E}[\xi_i\xi_j]=\delta_{i,j}$, respectively.
Moreover, for any $\bm{a} \in A_C$, we have $|f_m(\bm{x};\bm{a})| = |{ \frac{1}{m}}\sum_{j=1}^m a_j \varphi(\bm{x};\bm{w}_j)|\leq { \frac{1}{m}} \sqrt{ \|\bm{a}\|_2^2 \sum_{j=1}^m \varphi(\bm{x};\bm{w}_j)^2}\leq C$. Thus, for any $f_m(\cdot;\bm{a})\in \mathcal{F}_C$,
the loss function $(f_m(\bm{x};\bm{a})-f^*(\bm{x}))^2/2$ is $(C+1)$-Lipschitz continuous and bounded above by $(C+1)^2/2$.
Take $C=2\|f^*\|_{\mathcal{H}_k}$. We have $f_m(\cdot;\hat{\bm{a}}_n)\in \mathcal{F}_C$. Thus, Theorem \ref{thm: rad-gen-err} implies
\begin{align}
R(\hat{\bm{a}}_n) &\leq \hat{R}_n(\hat{\bm{a}}_n) + 2 (C+1)\rad_n(\mathcal{F}_C) + \frac{4(C+1)^2}{2} \sqrt{\frac{2\ln(2/\delta)}{n}}\\
&\lesssim \frac{\|f^*\|_{\mathcal{H}_k}^2+1}{\sqrt{n}}\left(1 + \sqrt{\ln(2/\delta)}\right).
\end{align}
\qed
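As an aside (not part of the proof), the inner supremum above has a closed form for this linear class: $\sup_{\|\bm{a}\|\le C\sqrt{m}}\frac{1}{nm}\bm{a}^T\Phi^T\bm{\xi} = \frac{C}{n\sqrt{m}}\|\Phi^T\bm{\xi}\|$, so the Rademacher complexity bound can be checked numerically. The following NumPy sketch (with arbitrary bounded features $\varphi=\cos$, chosen for illustration) estimates $\rad_n(\mathcal{F}_C)$ by Monte Carlo and confirms it stays below $C/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m, C = 50, 3, 30, 1.0
X = rng.standard_normal((n, d))
W = rng.standard_normal((m, d))
Phi = np.cos(X @ W.T)                 # bounded features, |phi| <= 1

# For the linear class {a -> (1/m) Phi a : ||a|| <= C sqrt(m)}, the inner sup
# is attained in closed form: Rad_n(F_C) = C/(n sqrt(m)) * E ||Phi^T xi||.
trials = 2000
total = 0.0
for _ in range(trials):
    xi = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
    total += np.linalg.norm(Phi.T @ xi)
rad_hat = C / (n * np.sqrt(m)) * total / trials
bound = C / np.sqrt(n)                    # the C/sqrt(n) bound from the proof
```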
\section{The two-layer neural network model}
First we recall the definition of the Barron space \cite{e2019barron,e2018priori}.
\begin{definition}[Barron space]
Let $\bm{w}=(\bm{b}, c)$ and $\tilde{\bm{x}}=(\bm{x}^T,1)^T$.
Consider functions that admit the following integral representation
\[
f(\bm{x}) = \mathbb{E}_{\bm{w} \sim \pi}[a(\bm{w})\sigma(\bm{w}^T\tilde{\bm{x}})],
\]
where $\pi$ is a probability measure over $\SS^{d}$ and $a(\cdot)$ is a measurable function. Denote $\Theta_f = \{(a,\pi) \,|\, f(\bm{x})=\mathbb{E}_{\bm{w}\sim\pi}[a(\bm{w})\sigma(\bm{w}^T\tilde{\bm{x}})]\}$. The Barron norm is defined as follows
\[
\|f\|_{\mathcal{B}} := \inf_{(a,\pi)\in \Theta_f} \mathbb{E}_{\bm{w}\sim\pi} |a(\bm{w})|.
\]
The Barron space is defined as the set of continuous functions with a finite Barron norm, i.e.
\[
\mathcal{B} := \{ f \,|\, \|f\|_{\mathcal{B}}< \infty\}.
\]
\end{definition}
\begin{remark}
Let $k_{\pi}(\bm{x},\bm{x}') :=
\mathbb{E}_{\bm{w} \sim \pi}[\sigma(\bm{w}\cdot \tilde{\bm{x}})\sigma(\bm{w}\cdot\tilde{\bm{x}}')]$. In \cite{e2019barron}, it is proved that $\mathcal{B} = \cup_{\pi \in P(\SS^d)} \mathcal{H}_{k_{\pi}}$, where $P(\SS^d)$ denotes the set of Borel probability measures on $\SS^d$.
Therefore the Barron space is much larger than the RKHS.
\end{remark}
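To make the definition concrete, here is a hypothetical example (not from the paper): with $\pi$ uniform on $\SS^d$ and constant $a(\bm{w})=2$, the function $f(\bm{x})=\mathbb{E}_{\bm{w}\sim\pi}[2\,\sigma(\bm{w}^T\tilde{\bm{x}})]$ has Barron norm at most $\mathbb{E}_{\bm{w}\sim\pi}|a(\bm{w})|=2$, and it can be evaluated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 4, 50000
W = rng.standard_normal((m, d + 1))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # w_j ~ Unif(S^d)
a = 2.0 * np.ones(m)                            # constant a(w) = 2

def f_mc(x, k):
    # Monte Carlo estimate of f(x) = E_w[a(w) relu(w . (x, 1))] with k atoms
    xt = np.append(x, 1.0)
    return float(np.mean(a[:k] * np.maximum(W[:k] @ xt, 0.0)))

x0 = np.ones(d) / np.sqrt(d)
est_half, est_full = f_mc(x0, m // 2), f_mc(x0, m)
barron_bound = float(np.mean(np.abs(a)))        # empirical E|a(w)|, here exactly 2
```

Two Monte Carlo estimates with different numbers of atoms agree closely, consistent with the integral representation.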
Let
\begin{equation}\label{eqn: minimum-norm-two-layer}
\hat{\theta}_n := \argmin_{\hat{R}_n(\theta)=0} \|\theta\|_{\mathcal{P}},
\end{equation}
where $\|\cdot\|_{\mathcal{P}}$ is the discrete analog of the Barron norm (also known as the path norm):
\begin{equation}
\|\theta\|_{\mathcal{P}} := {\frac{1}{m}}\sum_{j=1}^m |a_j|(\|\bm{b}_j\|_1+|c_j|).
\end{equation}
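The path norm is easy to compute, and for ReLU it is invariant under the rescaling $a_j \to t a_j$, $(\bm{b}_j, c_j) \to (\bm{b}_j, c_j)/t$ with $t>0$, which leaves the represented function unchanged. A short NumPy illustration (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 3, 10
a = rng.standard_normal(m)
B = rng.standard_normal((m, d))   # rows are the inner weights b_j
c = rng.standard_normal(m)

def path_norm(a, B, c):
    # ||theta||_P = (1/m) sum_j |a_j| (||b_j||_1 + |c_j|)
    return float(np.mean(np.abs(a) * (np.abs(B).sum(axis=1) + np.abs(c))))

def net(a, B, c, x):
    # f_m(x; theta) = (1/m) sum_j a_j relu(b_j . x + c_j)
    return float(np.mean(a * np.maximum(B @ x + c, 0.0)))

x0 = rng.standard_normal(d)
t = 3.0  # relu is positively homogeneous, so this rescaling preserves f
pn, pn_rescaled = path_norm(a, B, c), path_norm(t * a, B / t, c / t)
f0, f0_rescaled = net(a, B, c, x0), net(t * a, B / t, c / t, x0)
```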
The generalization properties of the above estimator are given by the following theorem.
\begin{theorem} \label{thm: two-layer-net}
If $m\geq \frac{8n^2\ln(2n^2)}{\lambda_n^2}$, then for any $\delta \in (0,1)$, with probability at least $1-\delta$ over the random choice of the training data, we have
\[
R(\hat{\theta}_n) \lesssim \frac{\|f^*\|^2_{\mathcal{B}}+1}{\sqrt{n}}\left(\sqrt{\ln(2d)}+\sqrt{\ln(2/\delta)}\right)
\]
\end{theorem}
Before proving the main result, we need the following lemma.
\begin{lemma}\label{lem: fit-rand-label}
For any $\bm{r}\in \mathbb{R}^n$, there exists a two-layer neural network $f_m(\cdot;\theta)$ with $m\geq \frac{2n^2\ln(4n^2)}{\lambda_n^2}$, such that $f_m(\bm{x}_i;\theta) = r_i$ for any $i\in [n]$ and $\|\theta\|_{\mathcal{P}} \leq \sqrt{\frac{2}{\lambda_n}}\|\bm{r}\|$.
\end{lemma}
\begin{proof}
Assume that $\{(\bm{b}_j,c_j)\}_{j=1}^m$ are i.i.d. random variables drawn from $\pi_0$, the uniform distribution over the sphere $\SS^d$.
Recall that $K^m:=(K^m_{i,i'})\in\mathbb{R}^{n\times n}$ with
\[
K^m_{i,i'} = \frac{1}{m}\sum_{j=1}^m \sigma(\bm{b}_j\cdot\bm{x}_i+c_j)\sigma(\bm{b}_j\cdot\bm{x}_{i'}+c_j)
\]
For any $\delta\in (0,1)$, if $m\geq \frac{2n^2\ln(2n^2/\delta)}{\lambda_n^2}$, Lemma \ref{eqn: kernel-approx} implies that the following holds with probability at least $1-\delta$:
\begin{equation}\label{eqn: low-bound-eigen}
\lambda_{n}(K^m)\geq \frac{1}{2}\lambda_n.
\end{equation}
Taking $\delta=1/2$, the above inequality holds with probability at least $1/2$. In particular, there must exist $\{(\hat{\bm{b}}_j,\hat{c}_j)\}_{j=1}^m$ such that \eqref{eqn: low-bound-eigen} holds.
Let $\Psi\in\mathbb{R}^{n\times m}$ with $\Psi_{i,j} = \sigma(\hat{\bm{b}}_{j}\cdot \bm{x}_i+\hat{c}_j)$ be the feature matrix. Then
\begin{equation}\label{eqn: 22}
\sigma_n^2(\Psi)=\lambda_n(\Psi\Psi^T) = m \lambda_n(K^m)\geq \frac{1}{2}\lambda_n m.
\end{equation}
We next choose $\bm{a}$ as the solution of the following problem.
\begin{align*}
\hat{\bm{a}} =& \argmin\, \, \|\bm{a}\| \\
&\text{s.t.}\, { \frac{1}{\sqrt{m}}}\Psi\bm{a} = \bm{r}.
\end{align*}
Then
\begin{equation}\label{eqn: 11}
\|\hat{\bm{a}}\|\leq \sqrt{m}\,\sigma^{-1}_n(\Psi)\|\bm{r}\|\leq \sqrt{\frac{2}{\lambda_n}} \|\bm{r}\|.
\end{equation}
Consider the two-layer neural network
\[
f_m(\bm{x};\hat{\theta}) = {\frac{1}{m}\sum_{j=1}^m \hat{a}_j \sigma(\hat{\bm{b}}_j\cdot \bm{x}+ \hat{c}_j)}.
\]
Then we have that $f_m(\bm{x}_j;\hat{\theta}) = r_j$ and
$
\|\hat{\theta}\|_{\mathcal{P}} \leq { \frac{1}{m}}\sum_{j=1}^m |\hat{a}_j|\leq \|\hat{\bm{a}}\|\leq \sqrt{\frac{2}{\lambda_n}} \|\bm{r}\|,
$
where the last inequality follows from \eqref{eqn: 11}.
\end{proof}
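The construction in this proof, minimum-norm coefficients obtained through the pseudoinverse of the feature matrix, can be reproduced numerically. The sketch below (an illustration only; random ReLU features and arbitrary labels) checks exact interpolation and the singular-value bound $\|\hat{\bm{a}}\|\leq \sqrt{m}\,\sigma_n^{-1}(\Psi)\|\bm{r}\|$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, m = 10, 3, 200
X = rng.standard_normal((n, d))
r = rng.standard_normal(n)                      # arbitrary labels to fit
BC = rng.standard_normal((m, d + 1))
BC /= np.linalg.norm(BC, axis=1, keepdims=True) # (b_j, c_j) on the sphere

Xt = np.hstack([X, np.ones((n, 1))])
Psi = np.maximum(Xt @ BC.T, 0.0)                # Psi_{ij} = relu(b_j.x_i + c_j)

# minimum-norm a with (1/sqrt(m)) Psi a = r, via the pseudoinverse
a_hat = np.linalg.pinv(Psi / np.sqrt(m)) @ r

sigma_n = np.linalg.svd(Psi, compute_uv=False)[n - 1]   # n-th singular value
fit_err = np.abs(Psi @ a_hat / np.sqrt(m) - r).max()
norm_bound = np.sqrt(m) * np.linalg.norm(r) / sigma_n
```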
The following lemma provides an upper bound on the minimum path norm solution.
\begin{lemma}\label{lemma: two-layer-m}
Assume that $f^*\in \mathcal{B}$ and $m\geq \frac{6n^2\ln(4n^2)}{\lambda_n^2}$, then the minimum path norm solution \eqref{eqn: minimum-norm-two-layer} satisfies
\[
\|\hat{\theta}_n\|_{\mathcal{P}} \leq 3\|f^*\|_{\mathcal{B}}.
\]
\end{lemma}
\begin{proof}
First, by the approximation result for two-layer neural networks (see Proposition 2.1 in \cite{e2018priori}), for any $m_1>0$ there exists a two-layer neural network $f_{m_1}(\cdot;\theta^{(1)})$ of width $m_1$ such that
\begin{equation}
\hat{R}_n(\theta^{(1)})=\|f_{m_1}(\cdot;\theta^{(1)}) - f^*\|^2_{\hat{\rho}} \leq \frac{3\|f^*\|_{\mathcal{B}}^2}{m_1},
\end{equation}
and
\[
\|\theta^{(1)}\|_{\mathcal{P}}\leq 2 \|f^*\|_{\mathcal{B}},
\]
where $\hat{\rho}(\bm{x})=\frac{1}{n}\sum_{i=1}^n \delta(\bm{x}-\bm{x}_i)$.
Let $\bm{r}=(y_1 - f_{m_1}(\bm{x}_1;\theta^{(1)}),\dots,y_n - f_{m_1}(\bm{x}_n;\theta^{(1)}))\in\mathbb{R}^n$ be the residual. Then $\|\bm{r}\|\leq \sqrt{\frac{3n}{m_1}}\|f^*\|_{\mathcal{B}}$. Applying Lemma \ref{lem: fit-rand-label}, we know that there exists a two-layer neural network $f_{m_2}(\cdot;\theta^{(2)})$ with $m_2\geq \frac{2n^2\ln(4n^2)}{\lambda_n^2}$ such that
\begin{align}
f_{m_2}(\bm{x}_i;\theta^{(2)}) &= r_i \\
\|\theta^{(2)}\|_{\mathcal{P}} &\leq \sqrt{\frac{2}{\lambda_n}}\|\bm{r}\|\leq \|f^*\|_{\mathcal{B}},
\end{align}
where the last inequality holds as long as $m_1\geq \frac{6n}{\lambda_n}$.
Putting $f_{m_1}(\cdot;\theta^{(1)}), f_{m_2}(\cdot;\theta^{(2)})$ together, let
\[
f_{m_1+m_2}(\bm{x};\theta) = f_{m_1}(\bm{x};\theta^{(1)})+f_{m_2}(\bm{x};\theta^{(2)}),
\]
where $\theta = \{\theta^{(1)},\theta^{(2)}\}$. It is obvious that
\[
\hat{R}_n(\theta)=0, \qquad \|\theta\|_{\mathcal{P}} = \|\theta^{(1)}\|_{\mathcal{P}}+\|\theta^{(2)}\|_{\mathcal{P}} \leq 3 \|f^*\|_{\mathcal{B}}.
\]
\end{proof}
\paragraph*{Proof of Theorem \ref{thm: two-layer-net}}
For any $C>0$, let $\mathcal{F}_C = \{f_m(\bm{x};\theta) : \|\theta\|_{\mathcal{P}}\leq C\}$. Using Lemma 4 of \cite{e2018priori}, we have $\rad_n(\mathcal{F}_C)\leq 2C\sqrt{\frac{2\ln(2d)}{n}}$. By the definition of the minimum-norm solution and Lemma \ref{lemma: two-layer-m}, we have
\[
\|\hat{\theta}_n\|_{\mathcal{P}} \leq 3\|f^*\|_{\mathcal{B}}.
\]
Taking $C=3\|f^*\|_{\mathcal{B}}$, then we have $f_m(\cdot;\hat{\theta}_n)\in \mathcal{F}_C$. Since the loss function is $(C+1)$-Lipschitz continuous and bounded from above by $(C+1)^2/2$,
by Theorem \ref{thm: rad-gen-err}, for $\delta \in (0,1)$, the following holds with probability at least $1-\delta$
\begin{align}
R(\hat{\theta}_n)& \leq \hat{R}_n(\hat{\theta}_n) + 2(C+1) \rad_n(\mathcal{F}_C) + 2(C+1)^2\sqrt{\frac{2\ln(2/\delta)}{n}} \\
&\lesssim \frac{\|f^*\|^2_{\mathcal{B}}+1}{\sqrt{n}}\left(\sqrt{\ln(2d)}+\sqrt{\ln(2/\delta)}\right).
\end{align}
\qed
\section{The residual neural network models}
First we recall the definition of the flow-induced function spaces $\mathcal{D}_p$ \cite{e2019barron}.
Let $\{\rho_t\}_{t\in [0,1]}$ be a family of Borel probability measures over $\mathbb{R}^{D\times m}\times \mathbb{R}^{m\times D}$. Consider functions $f_{\bm{\alpha}, \{\rho_t\}}$ defined through the following ordinary differential equations (ODE),
\begin{equation}\label{eqn: ode-fun}
\begin{aligned}
\bm{z}(\bm{x},0) &= V\tilde{\bm{x}} \\
\frac{d \bm{z}(\bm{x}, t)}{dt} &= \mathbb{E}_{(U,W)\sim \rho_t}[U\sigma(W\bm{z}(\bm{x},t))] \\
f_{\bm{\alpha}, \{\rho_t\}}(\bm{x}) &= \bm{\alpha}^T \bm{z}(\bm{x}, 1),
\end{aligned}
\end{equation}
where $V\in\mathbb{R}^{D\times (d+1)}, U\in \mathbb{R}^{D\times m}, W\in \mathbb{R}^{m\times D}$ and $\bm{\alpha}\in\mathbb{R}^{D}$. The ODE \eqref{eqn: ode-fun} can be viewed as the continuous limit of the residual network \eqref{eqn: resnet}.
To define the norm for controlling the complexity of the flow map of ODE \eqref{eqn: ode-fun}, we need the following linear ODE
\begin{align*}
\bm{n}_p(0) &= |V| \bm{1}_{d+1} \\
\frac{d \bm{n}_p(t)}{dt} &= 3 \, (\mathbb{E}_{(U,W)\sim \rho_t}[(|U| |W|)^p])^{1/p} \bm{n}_p(t),
\end{align*}
where $A^{q} = (a_{i,j}^q)$ for $A=(a_{i,j})$. {Specifically, $p=1,2$ are used in this paper.}
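Numerically, the connection to the residual network \eqref{eqn: resnet} is just explicit Euler integration: with $\rho_t$ a point mass at a fixed $(U,W)$ for all $t$, an $L$-layer forward pass discretizes the flow with step $1/L$. A small sketch (an illustration only; the weights and scales below are arbitrary) shows the output stabilizing as the depth grows:

```python
import numpy as np

rng = np.random.default_rng(6)
D, m, d = 5, 4, 3
V = rng.standard_normal((D, d + 1)) * 0.5
U = rng.standard_normal((D, m)) * 0.1      # single atom: rho_t = delta_{(U,W)}
Wm = rng.standard_normal((m, D)) * 0.1
alpha = rng.standard_normal(D)

def resnet_forward(x, L):
    # explicit Euler discretization of dz/dt = U sigma(W z) with step 1/L,
    # i.e. exactly the residual network recursion with L layers
    z = V @ np.append(x, 1.0)
    for _ in range(L):
        z = z + (1.0 / L) * U @ np.maximum(Wm @ z, 0.0)
    return float(alpha @ z)

x0 = rng.standard_normal(d)
out_coarse, out_fine = resnet_forward(x0, 64), resnet_forward(x0, 1024)
```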
\begin{definition}[Flow-induced function space]
For a function $f$ that can be represented in the form \eqref{eqn: ode-fun}, we define
\begin{equation}\label{eqn: composition-norm}
\|f\|_{\mathcal{D}_p} = \inf_{f=f_{\bm{\alpha},\{\rho_t\}}} |\bm{\alpha}|^T \bm{n}_p(1) + \|\bm{n}_p(1)\|_1-D,
\end{equation}
to be the {``$\mathcal{D}_p$ norm''} of $f$. The space $\mathcal{D}_p$ is defined as the set of all functions that admit the representation $f_{\bm{\alpha},\{\rho_t\}}$ with finite $\mathcal{D}_p$ norm.
\end{definition}
\begin{remark}
It should be noted that the function space $\mathcal{D}_p$ actually depends on $D, m$. We use $\mathcal{D}_p^{D,m}$ to explicitly show this dependence when it is needed. In most cases,
this dependence is omitted in the notation of $\mathcal{D}_p$ for simplicity.
\end{remark}
In addition, we define the following norm to quantify the continuity of the family of probability measures $\{\rho_t\}_{t\in [0,1]}$.
\begin{definition}\label{def: lip}
Given a family of probability distributions $\{\rho_t\}_{t\in [0,1]}$, let $S(\{\rho_t\})$ denote the set of positive values $C>0$ that satisfy
\begin{equation}\label{eq:lip}
\left| \mathbb{E}_{\rho_t} U\sigma( W\bm{z})-\mathbb{E}_{\rho_s} U\sigma(W\bm{z}) \right|\preceq C |t-s||\bm{z}|,
\end{equation}
and
\begin{equation}\label{eq:lip2}
\left| \left\|\mathbb{E}_{\rho_t}| U||W|\right\|_{1,1}-\left\|\mathbb{E}_{\rho_s}| U|| W|\right\|_{1,1} \right| \leq C |t-s|,
\end{equation}
for any $t,s\in[0,1]$ and $\bm{z}\in\mathbb{R}^D$. We define the ``Lipschitz norm'' of $\{\rho_t\}_{t\in[0,1]}$ by
\begin{equation}
\|\{\rho_t\}\|_{\lip}=\left\|\mathbb{E}_{\rho_{0}}|U||W|\right\|_{1,1}+\inf_{C\in S(\{\rho_t\})} C.
\end{equation}
\end{definition}
To control the complexity of a residual network, we use the following weighted path norm defined in \cite{ma2019priori}, which can be viewed as a discrete analog of \eqref{eqn: composition-norm}.
\begin{definition}\label{eqn: composition-path-norm}
For any residual network $f_L(\cdot;\theta)$ given by \eqref{eqn: resnet}, its weighted path norm is defined as
\begin{equation}
\|\theta\|_{\mathcal{C}}:= |\bm{\alpha}|^T \prod_{l=1}^L (I+{ \frac{3}{L}}|U_l||W_l|) |V| \bm{1}_{d+1}.
\end{equation}
\end{definition}
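The weighted path norm can be computed layer by layer with a single nonnegative vector recursion, as in the following sketch (an illustration only, with arbitrary random weights); scaling the output layer $\bm{\alpha}$ by $t$ scales the norm by $t$, since the norm is linear in $|\bm{\alpha}|$:

```python
import numpy as np

def weighted_path_norm(alpha, Us, Ws, V):
    # ||theta||_C = |alpha|^T prod_{l=1}^L (I + (3/L)|U_l||W_l|) |V| 1_{d+1}
    L = len(Us)
    M = np.abs(V) @ np.ones(V.shape[1])
    for U, W in zip(Us, Ws):
        M = M + (3.0 / L) * np.abs(U) @ (np.abs(W) @ M)
    return float(np.abs(alpha) @ M)

rng = np.random.default_rng(7)
D, m, d, L = 4, 3, 2, 5
V = rng.standard_normal((D, d + 1))
Us = [rng.standard_normal((D, m)) for _ in range(L)]
Ws = [rng.standard_normal((m, D)) for _ in range(L)]
alpha = rng.standard_normal(D)
c1 = weighted_path_norm(alpha, Us, Ws, V)
c2 = weighted_path_norm(2 * alpha, Us, Ws, V)
```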
We can now define the minimum-norm estimator for residual neural networks:
\begin{align}\label{def: minimum-norm-residual-net}
\hat{\theta}_n := \argmin_{\hat{R}_n(\theta)=0} \|\theta\|_{\mathcal{C}}.
\end{align}
\begin{theorem}\label{thm: resnet}
Assume that the target function $f^*\in \mathcal{D}_2^{D,m}$ and $c_0(f^*):=\inf_{f_{\bm{\alpha}, \{\rho_t\}}=f^*}\|\{\rho_t\}\|_{\lip}<\infty$.
If the model is an $(L, D+d+2, m+1)$ residual neural network with the depth satisfying
\[
L\geq C\max\left((m^4D^6c_0^2(f^*) \|f^*\|_{\mathcal{D}_1}^2)^{6},\left(\frac{96nm^2}{\lambda_n}\right)^{\frac{3}{2}}, \frac{n (1+D)}{\lambda_n}, \frac{n^2\ln(2n)}{\lambda_n^2}\right),
\]
where $C$ is a universal constant.
Then for any $\delta \in (0,1)$, with probability at least $1-\delta$ over the choice of the training data, we have
\[
R(\hat{\theta}_n)\lesssim \frac{\|f^*\|^2_{\mathcal{D}_1}+1}{\sqrt{n}} \left(\sqrt{\ln(2d)} + \sqrt{\ln(2/\delta)}\right)
\]
\end{theorem}
The following lemma shows that the addition of two residual networks can be represented by a wider residual network.
\begin{lemma}\label{lem: resnet-add}
Suppose that $f(\cdot;\theta^{(1)})$ and $f(\cdot;\theta^{(2)})$ are $(L_1,D_1,m_1)$ and $(L_2, D_2, m_2)$ residual networks, respectively. Then $F:=f(\cdot;\theta^{(1)})+f(\cdot;\theta^{(2)})$ can be represented as a $(\max(L_1,L_2), D_1+D_2, m_1+m_2)$ residual network $f(\cdot;\theta^{(3)})$ whose weighted path norm satisfies
\[
\|\theta^{(3)}\|_{\mathcal{C}} = \|\theta^{(1)}\|_{\mathcal{C}} + \|\theta^{(2)}\|_{\mathcal{C}}.
\]
\end{lemma}
\begin{proof}
Without loss of generality, we assume $L_1=L_2$. Otherwise, we can add extra identity layers without changing the represented function and the path norm. $f(\cdot;\theta^{(1)})$ can be written as
\begin{equation*}
\begin{aligned}
\bm{z}^{(1)}_0(\bm{x}) &= V^{(1)}\tilde{\bm{x}} \\
\bm{z}^{(1)}_{l+1}(\bm{x}) &= \bm{z}^{(1)}_{l}(\bm{x}) + {\frac{1}{L}}U_l^{(1)} \sigma(W_l^{(1)}\bm{z}^{(1)}_{l}(\bm{x})) , \quad l=0,\dots, L-1\\
f(\bm{x};\theta^{(1)}) &= (\bm{\alpha}^{(1)})^T\bm{z}^{(1)}_L(\bm{x}),
\end{aligned}
\end{equation*}
where $U_l^{(1)}\in \mathbb{R}^{D_1\times m_1}, W^{(1)}_l\in \mathbb{R}^{m_1\times D_1}, V^{(1)}\in \mathbb{R}^{D_1\times (d+1)}, \bm{\alpha}^{(1)}\in\mathbb{R}^{D_1}$. Similarly, for $f(\cdot;\theta^{(2)})$, we have
\begin{equation*}
\begin{aligned}
\bm{z}^{(2)}_0(\bm{x}) &= V^{(2)}\tilde{\bm{x}} \\
\bm{z}^{(2)}_{l+1}(\bm{x}) &= \bm{z}^{(2)}_{l}(\bm{x}) + {\frac{1}{L}}U_l^{(2)} \sigma(W_l^{(2)}\bm{z}^{(2)}_{l}(\bm{x})) , \quad l=0,\dots, L-1\\
f(\bm{x};\theta^{(2)}) &= (\bm{\alpha}^{(2)})^T\bm{z}^{(2)}_L(\bm{x}),
\end{aligned}
\end{equation*}
where $U_l^{(2)}\in \mathbb{R}^{D_2\times m_2}, W^{(2)}_l\in \mathbb{R}^{m_2\times D_2}, V^{(2)}\in \mathbb{R}^{D_2\times (d+1)}, \bm{\alpha}^{(2)}\in\mathbb{R}^{D_2}$.
Let
\begin{align}
V = \begin{bmatrix}
V^{(1)}\\
V^{(2)}
\end{bmatrix},
U_l = \begin{bmatrix}
U^{(1)}_l & 0 \\
0 & U^{(2)}_l
\end{bmatrix},
W_l = \begin{bmatrix}
W^{(1)}_l & 0 \\
0 & W^{(2)}_l
\end{bmatrix},
\bm{\alpha} = \begin{bmatrix}
\bm{\alpha}^{(1)} \\
\bm{\alpha}^{(2)}
\end{bmatrix}.
\end{align}
Consider the following residual network
\begin{equation*}
\begin{aligned}
\bm{z}_0(\bm{x}) &= V\tilde{\bm{x}} \\
\bm{z}_{l+1}(\bm{x}) &= \bm{z}_{l}(\bm{x}) + {\frac{1}{L}}U_l \sigma(W_l\bm{z}_{l}(\bm{x})) , \quad l=0,\dots, L-1\\
f(\bm{x};\theta^{(3)}) &= \bm{\alpha}^T\bm{z}_L(\bm{x}),
\end{aligned}
\end{equation*}
where $\bm{z}_l(\bm{x})\in \mathbb{R}^{D_1+D_2}$. Here $f(\cdot;\theta^{(3)})$ is an $(L, D_1+D_2, m_1+m_2)$ residual network and
it is easy to show that
\begin{align*}
f(\bm{x};\theta^{(3)}) &= f(\bm{x}; \theta^{(1)}) + f(\bm{x};\theta^{(2)}) \\
\|\theta^{(3)}\|_{\mathcal{C}} &= \|\theta^{(1)}\|_{\mathcal{C}} + \|\theta^{(2)}\|_{\mathcal{C}}.
\end{align*}
\end{proof}
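The block-diagonal construction in this proof is mechanical enough to verify numerically. The following sketch (an illustration only, with arbitrary random weights) stacks two small residual networks and checks that the outputs add and the weighted path norms add exactly:

```python
import numpy as np

rng = np.random.default_rng(8)
d, L = 2, 4
D1, m1, D2, m2 = 3, 2, 4, 3

def make(D, m):
    return (rng.standard_normal((D, d + 1)),
            [rng.standard_normal((D, m)) * 0.5 for _ in range(L)],
            [rng.standard_normal((m, D)) * 0.5 for _ in range(L)],
            rng.standard_normal(D))

def forward(x, V, Us, Ws, alpha):
    z = V @ np.append(x, 1.0)
    for U, W in zip(Us, Ws):
        z = z + (1.0 / L) * U @ np.maximum(W @ z, 0.0)
    return float(alpha @ z)

def cnorm(V, Us, Ws, alpha):
    # weighted path norm |alpha|^T prod_l (I + (3/L)|U_l||W_l|) |V| 1
    M = np.abs(V) @ np.ones(d + 1)
    for U, W in zip(Us, Ws):
        M = M + (3.0 / L) * np.abs(U) @ (np.abs(W) @ M)
    return float(np.abs(alpha) @ M)

V1, Us1, Ws1, a1 = make(D1, m1)
V2, Us2, Ws2, a2 = make(D2, m2)

# block-diagonal stacking as in the lemma
V3 = np.vstack([V1, V2])
Us3 = [np.block([[U1, np.zeros((D1, m2))], [np.zeros((D2, m1)), U2]])
       for U1, U2 in zip(Us1, Us2)]
Ws3 = [np.block([[W1, np.zeros((m1, D2))], [np.zeros((m2, D1)), W2]])
       for W1, W2 in zip(Ws1, Ws2)]
a3 = np.concatenate([a1, a2])

x0 = rng.standard_normal(d)
sum_out = forward(x0, V1, Us1, Ws1, a1) + forward(x0, V2, Us2, Ws2, a2)
stack_out = forward(x0, V3, Us3, Ws3, a3)
norm_add = cnorm(V1, Us1, Ws1, a1) + cnorm(V2, Us2, Ws2, a2)
norm_stack = cnorm(V3, Us3, Ws3, a3)
```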
The following lemma shows that any two-layer neural network can be converted to a residual network without changing the norm too much.
\begin{lemma}\label{lemma: embedding}
For any two-layer neural network $f_m(\cdot;\theta)$ of width $m$, there exists an $(m, d+2, 1)$ residual network $g_{m}(\cdot; \Theta)$ such that
\begin{align*}
g_{m}(\bm{x};\Theta) &= f_{m}(\bm{x};\theta) \quad \forall \bm{x} \in \mathbb{R}^d \\
\|\Theta\|_{\mathcal{C}} & = 3 \|\theta\|_{\mathcal{P}}.
\end{align*}
\end{lemma}
\begin{proof}
Assume the two-layer neural network is given by $f_m(\bm{x};\theta)={\frac{1}{m}}\sum_{j=1}^m a_j \sigma(\bm{b}_j\cdot\bm{x}+c_j)$. Consider the following residual network,
\begin{align*}
\bm{z}_0(\bm{x}) &=
\begin{pmatrix}
I_{d+1}\\
0
\end{pmatrix} \tilde{\bm{x}}\\
\bm{z}_{j+1}(\bm{x}) &= \bm{z}_{j}(\bm{x}) + {\frac{1}{m} }\begin{pmatrix}
\bm{0}_d \\
0 \\
a_j
\end{pmatrix}
\sigma(
\begin{pmatrix}
\bm{b}_j^T & c_j & 0
\end{pmatrix} \bm{z}_j(\bm{x})
), \quad j=0,\dots, m-1\\
g_m(\bm{x};\Theta) &= \bm{e}_{d+2}^T \bm{z}_{m}(\bm{x}),
\end{align*}
where $\bm{e}_{d+2}=(0,\dots, 0,1)^T\in\mathbb{R}^{d+2}$. Obviously, $g_m(\bm{x};\Theta)=f_m(\bm{x};\theta)$ for any $\bm{x}\in\mathbb{R}^d$. Moreover, the weighted path norm satisfies
\begin{align}\label{eqn: res-norm}
\nonumber \|\Theta\|_{\mathcal{C}} & =
\bm{e}_{d+2}^T \left[\prod_{j=1}^m \left(I + {\frac{3}{m}}\begin{pmatrix}
\bm{0}_d \\
0 \\
|a_j|
\end{pmatrix} \begin{pmatrix}
|\bm{b}_j|^T & |c_j| & 0
\end{pmatrix} \right)\right]
\begin{pmatrix}
I_{d+1} \\
0
\end{pmatrix}
\bm{1}_{d+1} \\
&= {\frac{3}{m}}\sum_{j=1}^m |a_j|(\|\bm{b}_j\|_1+|c_j|) = 3\|\theta\|_{\mathcal{P}}.
\end{align}
\end{proof}
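This embedding is easy to check numerically. The sketch below (an illustration only, with arbitrary random weights) builds the depth-$m$ residual network from a two-layer ReLU network, verifies that the two functions coincide, and verifies that the weighted path norm equals exactly three times the path norm:

```python
import numpy as np

rng = np.random.default_rng(9)
d, m = 3, 6
a = rng.standard_normal(m)
B = rng.standard_normal((m, d))
c = rng.standard_normal(m)

def two_layer(x):
    # f_m(x; theta) = (1/m) sum_j a_j relu(b_j . x + c_j)
    return float(np.mean(a * np.maximum(B @ x + c, 0.0)))

def embedded_resnet(x):
    z = np.concatenate([x, [1.0], [0.0]])        # z_0 = (x, 1, 0)
    for j in range(m):
        w_row = np.concatenate([B[j], [c[j]], [0.0]])
        u_col = np.zeros(d + 2); u_col[-1] = a[j]
        # the first d+1 coordinates never change; the last one accumulates
        z = z + (1.0 / m) * u_col * np.maximum(w_row @ z, 0.0)
    return float(z[-1])                           # alpha = e_{d+2}

def path_norm():
    return float(np.mean(np.abs(a) * (np.abs(B).sum(axis=1) + np.abs(c))))

def weighted_path_norm():
    M = np.concatenate([np.ones(d + 1), [0.0]])   # |V| 1_{d+1}
    for j in range(m):
        w_row = np.concatenate([np.abs(B[j]), [abs(c[j])], [0.0]])
        u_col = np.zeros(d + 2); u_col[-1] = abs(a[j])
        M = M + (3.0 / m) * u_col * (w_row @ M)
    return float(M[-1])

x0 = rng.standard_normal(d)
```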
\paragraph*{Proof of Theorem \ref{thm: resnet}}
By the direct approximation theorem (Theorem 10 in \cite{e2019barron}), for any $\delta_0\in (0,1)$, there exists an $L_1=(m^4D^6c_0^2(f^*) \|f^*\|_{\mathcal{D}_1}^2)^{3/\delta_0}$ such that for any $L\geq L_1$, there exists an $(L, D, m)$ residual network $f_{L}(\cdot; \theta^{(1)})$ such that
\begin{equation}\label{eqn: app-rate-res}
\begin{aligned}
\hat{R}_n(\theta^{(1)})& =\|f^* - f_L(\cdot;\theta^{(1)})\|^2_{\hat{\rho}_n}\\
&\leq \frac{24m^2}{L^{1-2\delta_0/3}}\|f^*\|_{\mathcal{D}_1}^4 + \frac{3C}{L}(1+D+\sqrt{\log L})\|f^*\|^2_{\mathcal{D}_1},
\end{aligned}
\end{equation}
and
\begin{align}\label{eqn: res1}
\|\theta^{(1)}\|_{\mathcal{C}} \leq 9 \|f^*\|_{\mathcal{D}_1},
\end{align}
where $C$ is a universal constant.
Let $\bm{r}=(y_1 - f_L(\bm{x}_1;\theta^{(1)}), \dots, y_n - f_L(\bm{x}_n;\theta^{(1)}))^T\in \mathbb{R}^n$ be the residual; then
$
\|\bm{r}\| = \sqrt{n\hat{R}_n(\theta^{(1)})}.
$
By Lemma \ref{lem: fit-rand-label}, there exists a two-layer neural network $h_M(\bm{x}; \theta)={ \frac{1}{M}} \sum_{j=1}^M a_j \sigma(\bm{b}_j^T\bm{x}+c_j)$ of width $M = \frac{2n^2\ln(4n^2)}{\lambda_n^2}$ such that $h_M(\bm{x}_i;\theta)=r_i$ and
\begin{align*}
{\frac{1}{M}}\sum_{j=1}^M |a_j| (\|\bm{b}_j\|_1+|c_j|) &\lesssim \sqrt{\frac{2}{\lambda_n}}\|\bm{r}\|.
\end{align*}
Inserting \eqref{eqn: app-rate-res} gives us
\begin{align}\label{eqn: bound-residual}
\nonumber {\frac{1}{M}} \sum_{k=1}^M |a_k| (\|\bm{b}_k\|_1+|c_k|) &\leq \sqrt{\frac{2n}{\lambda_n}\left( \frac{24m^2}{L^{1-2\delta_0/3}}\|f^*\|_{\mathcal{D}_1}^2 + \frac{3C}{L}(1+D+\sqrt{\log L})\right)}\|f^*\|_{\mathcal{D}_1}\\
&\leq \|f^*\|_{\mathcal{D}_1},
\end{align}
where the last inequality holds as long as $L\geq \max((96m^2n/\lambda_n)^{3/(3-2\delta_0)}, 12Cn(1+D+\sqrt{\log L})/\lambda_n)$.
By Lemma \ref{lemma: embedding}, there exists an $(M,d+2,1)$ residual network $f_M(\cdot;\theta^{(2)})$ such that $f_M(\bm{x}_i;\theta^{(2)})= h_M(\bm{x}_i;\theta)=r_i$ and
\[
\|\theta^{(2)}\|_{\mathcal{C}} = {\frac{3}{M}}\sum_{j=1}^M |a_j|(\|\bm{b}_j\|_1+|c_j|)\leq 3 \|f^*\|_{\mathcal{D}_1}.
\]
Note that $L\geq M$. Applying Lemma \ref{lem: resnet-add}, we conclude that $f_L(\cdot;\theta^{(1)}) + f_M(\cdot;\theta^{(2)})$ can be represented by a $(L, D+d+2, m+1)$ residual network $f_L(\cdot;\theta^{(3)})$, which satisfies
\begin{align*}
\hat{R}_n(\theta^{(3)}) & = 0 \\
\|\theta^{(3)}\|_{\mathcal{C}} &= \|\theta^{(1)}\|_{\mathcal{C}} + \|\theta^{(2)}\|_{\mathcal{C}} \leq 12\|f^*\|_{\mathcal{D}_1},
\end{align*}
where the last inequality follows from \eqref{eqn: res1} and \eqref{eqn: res-norm}. By the definition of the minimum-norm solutions \eqref{def: minimum-norm-residual-net}, we have
\[
\|\hat{\theta}_n\|_{\mathcal{C}} \leq \|\theta^{(3)}\|_{\mathcal{C}}\leq 12 \|f^*\|_{\mathcal{D}_1}.
\]
Let $\mathcal{F}_C=\{f_L(\cdot;\theta) : \|\theta\|_{\mathcal{C}}\leq C\}$ denote the set of $(L,D+d+2,m+1)$ residual network with the weighted path norm bounded from above by $C$. Theorem 2.10 of \cite{ma2019priori} states that
\[
\rad_n(\mathcal{F}_C)\leq 3C \sqrt{\frac{2\log(2d)}{n}}.
\]
For any $f\in \mathcal{F}_C$, we have $|f|\leq C$, therefore the loss function is $(C+1)$-Lipschitz continuous and bounded by $(C+1)^2/2$.
Taking $C=12\|f^*\|_{\mathcal{D}_1}$, then we have $f_L(\cdot;\hat{\theta}_n)\in \mathcal{F}_C$.
Applying Theorem \ref{thm: rad-gen-err}, we conclude that with probability at least $1-\delta$ over the sample of the training set, we have
\begin{align*}
R(\hat{\theta}_n)& \leq \hat{R}_n(\hat{\theta}_n) + 2 (C+1)\rad_n(\mathcal{F}_C) + 2(C+1)^2\sqrt{\frac{2\ln(2/\delta)}{n}}\\
& \lesssim \frac{\|f^*\|^2_{\mathcal{D}_1}+1}{\sqrt{n}} \left(\sqrt{\ln(2d)} + \sqrt{\ln(2/\delta)}\right),
\end{align*}
where the last inequality holds since $C=12\|f^*\|_{\mathcal{D}_1}$. Taking $\delta_0=1/2$ completes the proof.
\qed
\section{Concluding Remarks}
In this work, we prove that learning with the minimum-norm interpolation scheme can achieve Monte Carlo error rates for three models: the random feature model, the two-layer neural network model
and the residual neural network model. The proofs rely on two assumptions: (1) the model is sufficiently over-parametrized; (2) the labels are clean, i.e. $y_i=f^*(\bm{x}_i)$.
The ``double descent'' phenomenon \cite{belkin2019reconciling} tells us that the results are unlikely to hold when the models are not over-parametrized.
When the data suffers from measurement noise, we also expect that the results will deteriorate.
However, recent work \cite{liang2018just,liang2019risk,zhang2016understanding} showed that for kernel regression, noise may not hurt the generalization error too much, especially in the high-dimensional regime. It would be interesting to consider this issue for neural network models. We leave this to future work.
\subsection*{Acknowledgement}
The work presented here is supported in part by a gift to Princeton University from iFlytek
and the ONR grant N00014-13-1-0338.
In Appendix \ref{app:notation}, we introduce additional notations and objects that will be used in the proofs. In Appendix \ref{app:permut}, we study the equivariance of GCNs and prove Props \ref{prop:permut} and \ref{prop:permut-c}. In Appendix \ref{app:conv}, we prove Theorem \ref{thm:conv} on the convergence of GCNs. In Appendix \ref{app:wass}, we prove the Wasserstein bound in Theorem \ref{thm:stab-eq}. In Appendix \ref{app:stability}, we derive the stability bounds of Section \ref{sec:stability}. Finally, in Appendix \ref{app:technical}, we give technical concentration bounds and in Appendix \ref{app:third} we provide some third-party results for completeness.
\section{Notations}\label{app:notation}
Given a GCN, we define some bounds on its parameters that will be used in the multiplicative constants of the theorem. Recall that the filters are written $h_{ij}^{(\ell)}(\lambda) = \sum_{k=0}^\infty \beta_{ijk}^{(\ell)} \lambda^k$. We define $B_k^{(\ell)} = \pa{\beta_{ijk}^{(\ell)}}_{ji} \in \RR^{d_{\ell+1} \times d_\ell}$ the matrix containing the order-$k$ coefficients, and by $B_{k,\abs{\cdot}}^{(\ell)} = \pa{\abs{\beta_{ijk}^{(\ell)}}}_{ji}$ the same matrix with absolute value on all coefficients. Then, we define the following bounds:
\begin{align*}
H^{(\ell)}_2 &= \mathsmaller{\sum_k \norm{B_k^{(\ell)}}} &H^{(\ell)}_{\partial, 2} &= \mathsmaller{\sum_k \norm{B_k^{(\ell)}}k} \\
H^{(\ell)}_\infty &= \mathsmaller{\sum_k \norm{B_{k,\abs{\cdot}}^{(\ell)}}\pa{\frac{2c_{\max}}{c_{\min}}}^k} &H^{(\ell)}_{\partial,\infty} &= \mathsmaller{\sum_k \norm{B_k^{(\ell)}} k \pa{\frac{2c_{\max}}{c_{\min}}}^{k-1}}
\end{align*}
which all converge by our assumptions on the $\beta_k$. We may also denote $H_2$ by $H_{L^2(P)}$ for convenience but this quantity does not depend on $P$.
Note that, only for $H_\infty$, we use the spectral norm of the matrix $B_{k,\abs{\cdot}}$ with non-negative coefficients, which is suboptimal compared to using $B_{k}$. This is due to a part of our analysis where we do not operate in a Hilbert space but only in a Banach space $\mathcal{B}(\Xx)$, see Lemma \ref{lem:filter_partial}. We also define $\norm{b^{(\ell)}} = \sqrt{\sum_j (b_j^{(\ell)})^2}$.
Given $X$, we define the empirical degree function
\begin{equation}\label{eq:degree_function}
d_{W,X} \eqdef \frac{1}{n}\sum_i W(\cdot, x_i),
\end{equation}
which will be denoted by $d_X$ when the kernel is clear.
Although $d_{W,P}$ is bounded away from $0$ by the assumption \eqref{eq:assumption_RG}, this is not necessarily the case for $d_{W,X}$. This is however true with high probability, as shown by the following Lemma.
\begin{lemma}\label{lem:bounded-degree}
Let $\Gamma$ be a model of random graphs. There is a universal constant $C$ such that, if
\begin{equation}\label{eq:proba-bounded-degree}
n \geq C D_\Xx(\rho)^2
\end{equation}
where $D_\Xx(\rho) = \frac{c_{\textup{Lip.}}}{c_{\min}} \sqrt{d_x} + \frac{c_{\max} + c_{\textup{Lip.}}}{c_{\min}}\sqrt{\log\frac{n_\Xx}{\rho}}$, then with probability $1-\rho$, $d_{W,X} \geq c_{\min}/2>0$.
\end{lemma}
\begin{proof}
Apply Lemma \ref{lem:chaining-pointwise} with $f=1$ to obtain the result.
\end{proof}
For $W$ and $X$ such that $d_{W,X}>0$, we define the following empirical Laplacian operator:
\begin{equation}\label{eq:def_laplacian_operator}
\mathcal{L}_{W,X}f \eqdef \frac{1}{n} \sum_i \frac{W(\cdot, x_i)}{\sqrt{d_{X}(\cdot) d_{X}(x_i)}}f(x_i),
\end{equation}
which we will also denote by $\mathcal{L}_X$ when $W$ is clear. Assuming that $d_X \geq c_{\min}/2$, $\mathcal{L}_{W,X}$ is a bounded operator and $\norm{\mathcal{L}_{W,X}}_\infty \leq \frac{2 c_{\max}}{c_{\min}}$.
Given $X = \{x_1,\ldots, x_n\}$ and any dimension $d$, we denote by $S_X$ the normalized sampling operator acting on functions $f:\Xx \to \RR^d$ defined by $S_X f \eqdef \frac{1}{\sqrt{n}}[f(x_1), \ldots, f(x_n)] \in \RR^{n \times d}$. The normalizing factor $\frac{1}{\sqrt{n}}$ is natural: we have $\norm{S_X f}_F \leq \norm{f}_\infty$ and by the Law of Large Numbers $\norm{S_X f}_F \to \norm{f}_{L^2(P)}$ a.s. Finally, given $X$ and $W$, we define $W(X) \eqdef (W(x_i,x_j))_{ij} \in \RR^{n\times n}$, and remark that $L(W(X)) \circ S_X = S_X \circ \mathcal{L}_{W,X}$.
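The identity $L(W(X)) \circ S_X = S_X \circ \mathcal{L}_{W,X}$ can be checked numerically. The sketch below (an illustration only) assumes the normalization $L(A) = \frac{1}{n}D^{-1/2} A D^{-1/2}$ with $D = \mathrm{diag}(\frac{1}{n}A\bm{1})$, which matches the definitions above, and uses an arbitrary smooth kernel on $[0,1]$:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 30
X = rng.uniform(0.0, 1.0, size=n)          # sample points on [0,1]

def W(x, y):                               # a smooth kernel with c_min = 1/2
    return 1.0 + 0.5 * np.cos(np.pi * (x - y))

f = np.sin(2 * np.pi * X)                  # a test signal sampled at X

WX = W(X[:, None], X[None, :])             # W(X) in R^{n x n}
dX = WX.mean(axis=1)                       # empirical degrees d_{W,X}(x_i)
LA = WX / (n * np.sqrt(np.outer(dX, dX)))  # normalized Laplacian matrix L(W(X))

SXf = f / np.sqrt(n)                       # normalized sampling S_X f
# (S_X . L_{W,X} f): apply the empirical operator, then sample at X
LXf = np.array([np.mean(W(X[j], X) / np.sqrt(dX[j] * dX) * f)
                for j in range(n)]) / np.sqrt(n)
```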
\section{Invariance and equivariance}\label{app:permut}
\begin{proof}[Proof of Prop. \ref{prop:permut}]
The proof is immediate, by observing that $L(\sigma \cdot A) = \sigma \cdot L(A)$, therefore $h(L(\sigma \cdot A)) (\sigma \cdot Z) = \sigma \cdot (h(L(A))Z)$, and permutations commute with the pointwise activation function. For the invariant case, we just observe the final pooling on the equivariant case.
\end{proof}
\begin{proof}[Proof of Prop. \ref{prop:permut-c}]
Let us first observe that the degree function is such that
\[
d_{W, P}(\phi(x)) = \int W(\phi(x), x') d P(x') = \int (\phi \cdot W)(x, x') d(\phi^{-1})_\sharp P(x') = d_{\phi \cdot W, (\phi^{-1})_\sharp P}(x)
\]
Then, we have
\begin{align*}
\phi \cdot (\mathcal{L}_{W, P} f)(x) &= \int \frac{W(\phi(x),x')}{\sqrt{d_{W, P}(\phi(x)) d_{W, P}(x')}} f(x') d P(x') \\
&= \int \frac{(\phi \cdot W)(x,x')}{\sqrt{d_{\phi \cdot W, (\phi^{-1})_\sharp P}(x) d_{\phi \cdot W, (\phi^{-1})_\sharp P}(x')}} (\phi \cdot f)(x') d (\phi^{-1})_\sharp P(x') \\
&= \mathcal{L}_{\phi \cdot W, (\phi^{-1})_\sharp P} (\phi \cdot f)
\end{align*}
Then, by recursion, we have $\mathcal{L}_{\phi \cdot W, (\phi^{-1})_\sharp P}^k (\phi \cdot f) = \mathcal{L}_{\phi \cdot W, (\phi^{-1})_\sharp P}^{k-1} (\phi \cdot (\mathcal{L}_{W, P} f))=\ldots=\phi \cdot \mathcal{L}_{W, P}^k f$, and the same is true for filters $h(\mathcal{L})$. We conclude by observing that permutation commutes with pointwise non-linearity: $\rho \circ (\phi \cdot f) = \phi \cdot(\rho \circ f) = \rho \circ f \circ \phi$. The invariant case follows with a final integration against $P$.
\end{proof}
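On finite graphs, the equivariance argument above reduces to $L(\sigma \cdot A)(\sigma \cdot z) = \sigma \cdot (L(A)z)$ followed by the pointwise nonlinearity. A quick numerical check (an illustration only, with an arbitrary polynomial filter $h(L)=I+L+L^2$ and the degree-normalized operator used in this appendix):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 8
A = rng.uniform(0.1, 1.0, size=(n, n)); A = (A + A.T) / 2   # symmetric weights
z = rng.standard_normal(n)

def L(A):
    # normalized Laplacian-type operator (1/n) D^{-1/2} A D^{-1/2},
    # with empirical degrees d_i = (1/n) sum_j A_ij
    d = A.mean(axis=1)
    return A / (n * np.sqrt(np.outer(d, d)))

def gcn_layer(A, z):
    # one equivariant layer: relu(h(L) z) with the filter h(L) = I + L + L^2
    La = L(A)
    return np.maximum(z + La @ z + La @ (La @ z), 0.0)

sigma = rng.permutation(n)
P = np.eye(n)[sigma]                       # permutation matrix
lhs_eq = gcn_layer(P @ A @ P.T, P @ z)     # permute the graph and the signal
rhs_eq = P @ gcn_layer(A, z)               # ... equals permuting the output
```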
\section{Convergence of GCNs: proof of Theorem \ref{thm:conv}}\label{app:conv}
We are going to prove Theorem \ref{thm:conv} with the following constants:
\begin{align}
& C_1 \propto \frac{c_{\max}+c_{\textup{Lip.}}}{c_{\min}} \sum_{\ell=0}^{M-1} C^{(\ell)} H^{(\ell)}_{\partial, \infty} \prod_{s=\ell+1}^{M-1} H^{(s)}_2, \notag \\
& C_2 \propto \frac{c_{\max}}{c_{\min}^2} \sum_{\ell=0}^{M-1} C^{(\ell)} H^{(\ell)}_{\partial, 2} \prod_{s=\ell+1}^{M-1} H^{(s)}_2, \notag \\
& C_3 \propto C^{(M)} \notag \\
& \quad \text{with} \quad C^{(\ell)} \eqdef \norm{\theta} \pa{\norm{f}_\infty \prod_{s=0}^{\ell-1} H^{(s)}_\infty + \sum_{s=0}^{\ell-1} \norm{b^{(s)}}\prod_{p=s+1}^{\ell-1} H^{(p)}_\infty} \label{eq:thm-conv-cst}
\end{align}
The proof will mainly rely on an application of Dudley's inequality \cite[Thm 8.1.6]{Vershynin2018} (Lemma \ref{lem:chaining-pointwise-Lapl} in Appendix \ref{app:technical}) and a recent spectral concentration inequality for normalized Laplacian in relatively sparse graphs (Theorem \ref{thm:conc-sparse-Lapl} in Appendix \ref{app:third}).
\begin{proof}
We begin the proof by the equivariant case, the invariant case will simply use an additional concentration inequality. Denoting by $Z^{(\ell)}$ (resp. $f^{(\ell)}$) the signal at each layer of the GCN (resp. the function at each layer of the c-GCN), we have
\[
\text{MSE}_X\pa{\Phi_A(Z), \Phi_{W,P}(f)} = \norm{\frac{\Phi_A(Z)}{\sqrt{n}} - S_X \Phi_{W,P}(f)}_F \leq \norm{\theta} \norm{\frac{Z^{(M)}}{\sqrt{n}} - S_X f^{(M)}}_F
\]
where we recall that $S_X$ is the normalized sampling operator (see App.~\ref{app:notation}). We therefore seek to bound that last term.
Assume that the following holds with probability $1-\rho$: for all $0\leq \ell\leq M-1$
\begin{equation}\label{eq:assum-pointwise}
\sqrt{\sum_j \norm{ \sum_i \pa{h^{(\ell)}_{ij}(L) S_X f^{(\ell)}_i - S_X h^{(\ell)}_{ij}(\mathcal{L}_{W,P}) f^{(\ell)}_i}}^2} \leq \Delta^{(\ell)}
\end{equation}
Then, using \eqref{eq:rho}, Lemma \ref{lem:filter_partial}, and the fact that for unnormalized sampling $\sqrt{n}S_X \circ \rho = \rho \circ (\sqrt{n}S_X)$, we can show by recursion that $\norm{\frac{Z^{(\ell)}}{\sqrt{n}} -S_X f^{(\ell)}}_F \leq \varepsilon_\ell$ implies
\begin{align*}
&\norm{\frac{Z^{(\ell+1)}}{\sqrt{n}} -S_X f^{(\ell+1)}}_F \\
&=\pa{\sum_j \norm{\frac{1}{\sqrt{n}}\rho\Big(\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(L) z^{(\ell)}_i + b_{j}^{(\ell)}1_n \Big) - S_X \rho\Big(\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(\mathcal{L}_{W,P}) f^{(\ell)}_i + b_{j}^{(\ell)}1(\cdot) \Big)}^2}^\frac12 \\
&=\pa{\sum_j \frac{1}{n}\norm{\rho\Big(\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(L) z^{(\ell)}_i + b_{j}^{(\ell)}1_n \Big) - \rho\Big(\sqrt{n}S_X\Big(\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(\mathcal{L}_{W,P}) f^{(\ell)}_i + b_{j}^{(\ell)}1(\cdot)\Big) \Big)}^2}^\frac12 \\
&\leq\pa{\sum_j \norm{\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(L) \frac{z^{(\ell)}_i}{\sqrt{n}} - S_X h^{(\ell)}_{ij}(\mathcal{L}_{W,P}) f^{(\ell)}_i}^2}^\frac12 \\
&\leq\pa{\sum_j \norm{\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(L) \pa{\frac{z^{(\ell)}_i}{\sqrt{n}} - S_X f^{(\ell)}_i}}^2}^\frac12 \\
&\quad + \pa{\sum_j \norm{\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(L) S_X f^{(\ell)}_i - S_X h^{(\ell)}_{ij}(\mathcal{L}_{W,P}) f^{(\ell)}_i}^2}^\frac12 \\
&\leq \varepsilon_{\ell+1} \eqdef H^{(\ell)}_2 \varepsilon_\ell + \Delta^{(\ell)}
\end{align*}
Since $\frac{Z^{(0)}}{\sqrt{n}} = S_X f^{(0)}$ we have $\varepsilon_0 = 0$ and an easy recursion shows that
\begin{equation}\label{eq:conv-inter1}
\norm{\frac{Z^{(M)}}{\sqrt{n}} -S_X f^{(M)}}_F \leq \sum_{\ell=0}^{M-1} \Delta^{(\ell)}\prod_{s=\ell+1}^{M-1} H_2^{(s)}
\end{equation}
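For completeness, the recursion $\varepsilon_{\ell+1} = H_2^{(\ell)} \varepsilon_\ell + \Delta^{(\ell)}$ unrolls from $\varepsilon_0 = 0$ as
\begin{align*}
\varepsilon_M &= H_2^{(M-1)}\varepsilon_{M-1} + \Delta^{(M-1)} = H_2^{(M-1)}\pa{H_2^{(M-2)}\varepsilon_{M-2} + \Delta^{(M-2)}} + \Delta^{(M-1)} \\
&= \ldots = \sum_{\ell=0}^{M-1} \Delta^{(\ell)}\prod_{s=\ell+1}^{M-1} H_2^{(s)}
\end{align*}
which is exactly \eqref{eq:conv-inter1}.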
We now need to prove that \eqref{eq:assum-pointwise} holds with probability $1-\rho$ for all $\ell$ with the appropriate $\Delta^{(\ell)}$.
Recall that $L(W(X)) \circ S_X = S_X \circ \mathcal{L}_{W,X}$, and that by \eqref{eq:proba-bounded-degree}, with probability $1-\rho/2$ we have $\norm{\mathcal{L}_{W,X}}_\infty \leq \frac{2c_{\max}}{c_{\min}}$. Assuming this is satisfied, by Lemma \ref{lem:filter_partial} we have
\begin{align}
&\sqrt{\sum_j \norm{\sum_i \pa{h^{(\ell)}_{ij}(L) S_X f^{(\ell)}_i - S_X h^{(\ell)}_{ij}(\mathcal{L}_{W,P}) f^{(\ell)}_i}}^2} \notag \\
&\quad\leq \sqrt{\sum_j \norm{\sum_i \pa{h^{(\ell)}_{ij}(L) - h^{(\ell)}_{ij}(L(W(X))}S_X f^{(\ell)}_i}^2} \notag \\
&\qquad + \sqrt{\sum_j\norm{\sum_i S_X\pa{h^{(\ell)}_{ij}(\mathcal{L}_{W,X}) - h^{(\ell)}_{ij}(\mathcal{L}_{W,P})} f^{(\ell)}_i}^2} \notag \\
&\quad\leq H^{(\ell)}_{\partial,2}\norm{L-L(W(X))}\norm{f^{(\ell)}}_\infty \notag \\
&\qquad+ \sum_k \norm{B_k} \sqrt{\sum_i \pa{\sum_{\ell=0}^{k-1}\pa{\frac{2c_{\max}}{c_{\min}}}^\ell \norm{(\mathcal{L}_{W,X}-\mathcal{L}_{W,P})\mathcal{L}_{W,P}^{k-1-\ell} f_i^{(\ell)}}_\infty}^2} \label{eq:conv-inter2}
\end{align}
The first term in \eqref{eq:conv-inter2} is handled with a recent concentration inequality for the normalized Laplacian of relatively sparse random graphs \citep{Keriven2020}, recalled as Theorem \ref{thm:conc-sparse-Lapl} in Appendix \ref{app:third}. We use the following version.
\begin{corollary}[of Theorem \ref{thm:conc-sparse-Lapl}]\label{cor:conc-sparse-Lapl}
Assume \eqref{eq:proba-bounded-degree} is satisfied and $n\geq 1/\rho$ for simplicity. There is a universal constant $C$ such that, if
\begin{equation}\label{eq:alpha_cond}
\alpha_n \geq \frac{C c_{\max}}{c_{\min}^2} \cdot \frac{\log n}{n}
\end{equation}
then, with probability at least $1-\rho$, we have
\[
\norm{L - L(W(X))} \lesssim \frac{c_{\max}}{c_{\min}^2} \cdot \frac{1}{\sqrt{\alpha_n n}}
\]
\end{corollary}
\begin{proof}
By \eqref{eq:proba-bounded-degree} with the appropriate constant, with probability $1-\rho/2$ we have $d_X \geq c_{\min}/2$; we can therefore apply Theorem \ref{thm:conc-sparse-Lapl} to bound $\norm{L - L(W(X))}$ conditionally on $X$ using $c \sim 1+ \frac{\log(1/\rho)}{\log(n)} \sim 1$, then use a union bound to conclude.
\end{proof}
We now bound the second term in \eqref{eq:conv-inter2}. Define $\rho_k = \frac{C \rho}{(k+1)^2 \sum_\ell d_\ell}$ with $C$ such that $\sum_{k,\ell} d_\ell \rho_k = \rho/4$ (even when the filters are not of finite order). Using an application of Dudley's inequality detailed in Lemma \ref{lem:chaining-pointwise-Lapl} in Appendix \ref{app:technical} and a union bound, we obtain that with probability $1-\rho/4$, for all $i,\ell, k$,
\begin{align*}
\norm{(\mathcal{L}_{W,X} - \mathcal{L}_{W,P}) \mathcal{L}^k_{W,P} f^{(\ell)}_i}_\infty &\lesssim \frac{c_{\max}\norm{\mathcal{L}^k_{W,P} f_i^{(\ell)}}_\infty D_\Xx\pa{\rho_k}}{c_{\min}\sqrt{n}} \\
&\leq \pa{\frac{c_{\max}}{c_{\min}}}^k\frac{c_{\max}\norm{f_i^{(\ell)}}_\infty D_\Xx\pa{\frac{C \rho}{(k+1)^2 \sum_\ell d_\ell}}}{c_{\min}\sqrt{n}} \\
&\lesssim \pa{\frac{2c_{\max}}{c_{\min}}}^k\frac{(c_{\max}+c_{\textup{Lip.}})\norm{f_i^{(\ell)}}_\infty D_\Xx\pa{\frac{\rho}{\sum_\ell d_\ell}}}{c_{\min}\sqrt{n}}
\end{align*}
Coming back to the second term of \eqref{eq:conv-inter2}, with probability $1-\rho/4$:
\begin{align*}
\sum_k &\norm{B_k} \sqrt{\sum_i \pa{\sum_{\ell=0}^{k-1}\pa{\frac{2c_{\max}}{c_{\min}}}^\ell \norm{(\mathcal{L}_{W,X}-\mathcal{L}_{W,P})\mathcal{L}_{W,P}^{k-1-\ell} f_i^{(\ell)}}_\infty}^2} \\
&\lesssim \frac{(c_{\max}+c_{\textup{Lip.}}) D_\Xx\pa{\frac{\rho}{\sum_\ell d_\ell}}}{c_{\min}\sqrt{n}} \sum_k \norm{B_k} k\pa{\frac{2c_{\max}}{c_{\min}}}^k \sqrt{\sum_i \norm{f_i^{(\ell)}}_\infty^2} \\
&\leq \frac{(c_{\max}+c_{\textup{Lip.}}) D_\Xx\pa{\frac{\rho}{\sum_\ell d_\ell}}}{c_{\min}\sqrt{n}} H^{(\ell)}_{\partial, \infty} \norm{f^{(\ell)}}_\infty
\end{align*}
Altogether, we obtain that with probability $1-\rho$, \eqref{eq:assum-pointwise} is satisfied with
\begin{equation*}
\Delta^{(\ell)} \propto \norm{f^{(\ell)}}_\infty \pa{\frac{H^{(\ell)}_{\partial,2} c_{\max}}{c_{\min}^2\sqrt{\alpha_n n}} +\frac{ H_{\partial, \infty}^{(\ell)}(c_{\max}+c_{\textup{Lip.}}) D_\Xx\pa{\frac{\rho}{\sum_\ell d_\ell}}}{c_{\min}\sqrt{n}}}
\end{equation*}
We then use Lemma \ref{lem:bound_cgcn} to bound $\norm{f^{(\ell)}}_\infty$ and conclude.
Finally, in the invariant case we have
\begin{align*}
\norm{\bar{\Phi}_A(Z) - \bar{\Phi}_{W,P}(f)} \leq\textup{MSE}_X\pa{\Phi_A(Z), \Phi_{W,P}(f)} + \norm{\theta}\norm{\frac{1}{n}\sum_i f^{(M)}(x_i) - \mathbb{E}f^{(M)}(X)}
\end{align*}
Using a vector Hoeffding's inequality (Lemma \ref{lem:conc-hilbert}) and a bound on $\norm{f^{(M)}}_\infty$ by Lemma \ref{lem:bound_cgcn} we bound the second term and conclude.
\end{proof}
\section{Wasserstein convergence: proof of Theorem \ref{thm:stab-eq}}\label{app:wass}
We are going to prove Theorem \ref{thm:stab-eq} with the following constants:
\begin{align}
&C_1 \propto \norm{\theta} \pa{\max_{r=1,2}\norm{f_r}_\infty \prod_{s=0}^{M-1} H^{(s)}_\infty + \sum_{s=0}^{M-1} \norm{b^{(s)}}\prod_{p=s+1}^{M-1} H^{(p)}_\infty} &&C_2 = 27^{d_z/4} \label{eq:thm-stab-eq-cst} \\
&C'_1 \propto (n_\Xx n_f)^{\frac{1}{d_x}} \max_{r=1,2} D_r && C'_2 = 27^{d_x/4} \notag
\end{align}
where $D_r$ is defined as in \eqref{eq:cgcn-lip} with the function $f_r$ as input.
The proof mainly relies on results on the rate of convergence, in Wasserstein distance, of the empirical distribution of $iid$ samples to the true distribution \cite{Weed2017} (Theorem \ref{thm:wass} in Appendix \ref{app:third}).
\begin{proof}
For $r=1,2$, define $y_{r,i} = \Phi_{W_r, P_r}(f_r)(x_{r,i})$, which are drawn $iid$ from $Q_{r} \eqdef \Phi_{W_r,P_r}(f_r)_\sharp P_r$, and denote by $\hat Q_r = n^{-1}\sum_i \delta_{y_{r,i}}$ the corresponding empirical distributions. By Theorem \ref{thm:conv} and the triangle inequality, we have
\begin{align*}
\min_\sigma \sqrt{n^{-1} \sum_i \norm{\Phi_{A_1}(Z_1)_i - \Phi_{A_2}(Z_2)_{\sigma(i)}}^2} &\leq \min_\sigma \sqrt{\frac{1}{n}\sum_i \norm{y_{1,i} - y_{2,\sigma(i)}}^2} + 2R_n
\end{align*}
The first term is the so-called ``Monge'' formulation of optimal transport (OT) \citep[Chap. 2]{Peyre2019}. For uniform weights, as is the case here, the Monge formulation of OT is equivalent to its Kantorovich relaxation, which in turn gives the classical Wasserstein metric: by \citep[Prop. 2.1]{Peyre2019}, we have
\[
\min_\sigma \sqrt{\frac{1}{n}\sum_i \norm{y_{1,i} - y_{2,\sigma(i)}}^2} = \mathcal{W}_2(\hat Q_1, \hat Q_2) \leq \mathcal{W}_2(Q_1, Q_2) + \sum_{r=1,2} \mathcal{W}_2(\hat Q_r, Q_r)
\]
We must therefore bound the distance between an empirical distribution $\hat Q$ and its true value $Q = g_\sharp P$ for some function $g = \Phi_{W,P}(f)$.
The distribution $Q$ is supported on $g\Xx \subset \RR^{d_z}$, which is contained in a ball of radius $\norm{g}_\infty$, itself controlled by Lemma \ref{lem:bound_cgcn}. Hence its covering numbers satisfy $N(g\Xx, \varepsilon, \norm{\cdot}) \leq (\norm{g}_\infty/\varepsilon)^{d_z}$. We then conclude by Theorem \ref{thm:wass}.
When $f$ is piecewise $(c_f, n_f)$-Lipschitz, however, by Lemma \ref{lem:lip_cgcn} $g$ is piecewise $(C,n_\Xx n_f)$-Lipschitz, where $C$ is defined as \eqref{eq:cgcn-lip}, and in this case it is easy to see that the covering numbers of $g\Xx$ also satisfy $N(g\Xx, \varepsilon, \norm{\cdot}) \leq n_\Xx n_f(C/\varepsilon)^{d_x}$. Applying again Theorem \ref{thm:wass}, we conclude.
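For completeness, the covering bound in the piecewise Lipschitz case follows from the standard fact that the image of an $(\varepsilon/C)$-cover under a $C$-Lipschitz map is an $\varepsilon$-cover: on each of the $n_\Xx n_f$ pieces $\Xx_i \cap \Xx'_j$ on which $g$ is $C$-Lipschitz,
\[
N\pa{g(\Xx_i \cap \Xx'_j), \varepsilon, \norm{\cdot}} \leq N\pa{\Xx_i \cap \Xx'_j, d, \varepsilon/C} \leq (C/\varepsilon)^{d_x}
\]
(using the covering bound on $\Xx$), and summing over the pieces gives the stated bound.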
\end{proof}
\section{Stability}\label{app:stability}
In this section, norms $\norm{\cdot}$ always refer to~$L^2(P)$ norms (for functions and operators), and we drop the subscript for simplicity. We denote the pooling operator by $U_P f = \int f dP$. We write $U = U_P$ and $\mathcal{L} = \mathcal{L}_{W,P}$ for short. For $\tau:\Xx \to \Xx$, we denote $W_\tau = (\Id-\tau) \cdot W$, $P_\tau = (\Id-\tau)_\sharp P$, and $f_\tau = (\Id-\tau) \cdot f$. Then we define the shorthands $\mathcal{L}_{W_\tau} = \mathcal{L}_{W_\tau, P}$, $\mathcal{L}_{P_\tau} = \mathcal{L}_{W,P_\tau}$, the composition operator $T_\tau f = (\Id - \tau) \cdot f$ and $U_\tau = U_{P_\tau}$. Finally, we define $A$ and~$A_\tau$ as the integral operators with kernels~$W$ and~$W_\tau$ w.r.t.~the measure~$P$, and denote the corresponding diagonal degree operators by~$D$ and~$D_{W_\tau}$,
so that we have $\mathcal{L} = D^{-1/2} A D^{-1/2}$ and similarly for~$\mathcal{L}_{W_\tau}$. Similarly, we define $D_{P_\tau}$ the diagonal degree operator of $W$ but with respect to $P_\tau$.
We observe that for two functions $f,f'$ and any $P$, we have
\begin{equation}\label{eq:Wass2LP}
\left. \begin{matrix*}[r]
& \abs{\int f dP - \int f' dP} \\
& \mathcal{W}_2(f_\sharp P, f'_\sharp P)
\end{matrix*}\right\rbrace \leq \norm{f-f'}_{L^2(P)}
\end{equation}
The first inequality follows from Jensen's inequality, $\abs{\int (f-f') dP} \leq \int \abs{f-f'} dP \leq \norm{f-f'}_{L^2(P)}$. The second is immediate by considering the definition $\mathcal{W}_2^2(f_\sharp P, f'_\sharp P) = \inf \left\lbrace \mathbb{E}\norm{f(X) - f'(Y)}^2 ~|~ X \sim P, Y \sim P\right\rbrace$ and taking $X=Y$ as a coupling. We will therefore mainly manipulate $L^2(P)$ norms.
\subsection{Proof of Theorem~\ref{thm:deform_w} (deformation change to~$W$)}
\label{sub:proof_deform_w}
We are going to prove Theorem \ref{thm:deform_w} with constant
\begin{equation}
\label{eq:changeW_cst}
C \propto \norm{\theta}c_{\min}^{-2}\sum_{\ell=0}^{M-1} H_{\partial, 2}^{(\ell)}\prod_{s=0,~s \neq \ell}^{M-1} H_2^{(s)}
\end{equation}
In this setting, we have two random graph models whose only difference is in the choice of kernel, from~$W$ to~$W_\tau$, while fixing~$P$ and~$f$.
This in turn leads to a change in the Laplacian, which is the main quantity that we will need to control.
Since we have assumed the bias to be zero, we have the following Lemma, which we apply with $f=f'$ in this section.
\begin{lemma}\label{lem:changeW_stab}
We have
\begin{equation*}
\norm{\Phi_{W,P}(f) - \Phi_{W_\tau, P}(f')} \leq C \norm{f} \norm{\mathcal{L} - \mathcal{L}_{W_\tau}} + C' \norm{f-f'}
\end{equation*}
with
\begin{align}
C &= \norm{\theta}\sum_{\ell=0}^{M-1} H_{\partial, 2}^{(\ell)}\prod_{s=0,~s \neq \ell}^{M-1} H_2^{(s)} \label{eq:changeW_cst1} \\
C' &= \norm{\theta}\prod_{\ell=0}^{M-1} H_2^{(\ell)} \label{eq:changef_cst1}
\end{align}
\end{lemma}
\begin{proof}
Denoting $f^{(\ell)}$ and $(f^{(\ell)})'$ the functions at each layer, we have using Lemma \ref{lem:filter_partial} and \eqref{eq:rho}:
\begin{align*}
\norm{f^{(\ell)} - (f^{(\ell)})'} &\leq \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} h_{ij}^{(\ell-1)}(\mathcal{L}) f_i^{(\ell-1)} - h_{ij}^{(\ell-1)}(\mathcal{L}_{W_\tau}) (f_i^{(\ell-1)})'}^2} \\
&\leq \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} (h_{ij}^{(\ell-1)}(\mathcal{L}) - h_{ij}^{(\ell-1)}(\mathcal{L}_{W_\tau})) f_i^{(\ell-1)}}^2} \\
&\quad + \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} h_{ij}^{(\ell-1)}(\mathcal{L}_{W_\tau}) (f_i^{(\ell-1)} - (f_i^{(\ell-1)})')}^2} \\
&\leq H^{(\ell-1)}_{\partial, 2} \norm{\mathcal{L} - \mathcal{L}_{W_\tau}} \norm{f^{(\ell-1)}} + H^{(\ell-1)}_2 \norm{f^{(\ell-1)} - (f^{(\ell-1)})'}
\end{align*}
An easy recursion and Lemma \ref{lem:bound_cgcn} give the result.
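For completeness, a sketch of the recursion (the constants below match \eqref{eq:changeW_cst1} and \eqref{eq:changef_cst1}): writing $e_\ell \eqdef \norm{f^{(\ell)} - (f^{(\ell)})'}$ and using $\norm{f^{(\ell-1)}} \leq \norm{f}\prod_{s=0}^{\ell-2} H_2^{(s)}$ (Lemma \ref{lem:bound_cgcn} with zero bias), unrolling from $e_0 = \norm{f-f'}$ gives
\[
e_M \leq \norm{\mathcal{L} - \mathcal{L}_{W_\tau}}\norm{f} \sum_{\ell=0}^{M-1} H_{\partial,2}^{(\ell)} \prod_{s=0,~s\neq\ell}^{M-1} H_2^{(s)} + \norm{f - f'} \prod_{\ell=0}^{M-1} H_2^{(\ell)}
\]
and multiplying by $\norm{\theta}$ for the final layer yields the lemma.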
\end{proof}
The rest of the proof then consists in obtaining a bound on the quantity~$\|\mathcal{L}_{W_\tau} - \mathcal{L}\|$ in $L^2(P)$.
We have
\begin{align}
\mathcal{L}_{W_\tau} - \mathcal{L} &= D_{W_\tau}^{-\frac12} A_\tau D_{W_\tau}^{-\frac12} - D^{-\frac12} A D^{-\frac12} \nonumber \\
&= D_{W_\tau}^{-\frac12} A_\tau(D_{W_\tau}^{-\frac12} - D^{-\frac12}) + D_{W_\tau}^{-\frac12} (A_\tau - A) D^{-\frac12} + (D_{W_\tau}^{-\frac12} - D^{-\frac12}) A D^{-\frac12}. \label{eq:ldiff_decomp}
\end{align}
We now bound different operators in this decomposition separately.
\paragraph{Bound on~$\|A_\tau - A\|$.}
Define
\begin{equation}
\label{eq:kdiffdef}
k(x, x') = w(x - \tau(x) - x' + \tau(x')) - w(x - x'),
\end{equation}
so that $A_\tau - A$ is an integral operator with kernel~$k$.
We have, by the fundamental theorem of calculus,
\[
k(x, x') = \int_0^1 \langle \nabla w(x - x' + t (\tau(x') - \tau(x))), \tau(x') - \tau(x) \rangle dt.
\]
Now, note that we have
\begin{align*}
|\tau(x') - \tau(x)| &\leq \|\nabla \tau\|_\infty \cdot \norm{x - x'} \\
|x - x' + t (\tau(x') - \tau(x))| &\geq \frac{1}{2} \norm{x - x'},
\end{align*}
where the last inequality follows from the reverse triangle inequality and the assumption~$\|\nabla \tau\|_\infty \leq 1/2$.
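Explicitly, for any $t \in [0,1]$,
\[
|x - x' + t (\tau(x') - \tau(x))| \geq \norm{x - x'} - t\,|\tau(x') - \tau(x)| \geq \pa{1 - t \|\nabla \tau\|_\infty} \norm{x - x'} \geq \tfrac{1}{2} \norm{x - x'}.
\]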
By Cauchy-Schwarz, and since $\norm{\nabla w(x)}$ decreases with~$\norm{x}$, we have
\begin{align*}
\int |k(x, x')| dP(x') &\leq \|\nabla \tau\|_\infty \int \norm{\nabla w((x - x')/2)} \cdot \norm{x' - x} d P(x') \\
&\leq C_{\nabla w} \|\nabla \tau\|_\infty.
\end{align*}
Similarly, we obtain $\int |k(x, x')| d P(x) \leq C_{\nabla w} \|\nabla \tau\|_\infty$.
Then, Schur's test (Lemma \ref{lem:schur}) yields
\begin{equation}
\label{eq:final_bound_deform_w}
\|A_\tau - A\| \leq C_{\nabla w} \|\nabla \tau\|_\infty.
\end{equation}
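For reference, Lemma \ref{lem:schur} is an instance of the standard Schur test: an integral operator $K$ with kernel $k$ w.r.t.~the measure~$P$ satisfies
\[
\norm{K}_{L^2(P) \to L^2(P)} \leq \sqrt{\pa{\sup_x \int \abs{k(x,x')} dP(x')} \pa{\sup_{x'} \int \abs{k(x,x')} dP(x)}},
\]
and both factors were just bounded by $C_{\nabla w} \|\nabla \tau\|_\infty$.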
\paragraph{Bound on~$\|D_{W_\tau}^{-1/2} - D^{-1/2}\|$.}
Define $d = d_{W,P}$ and $d_\tau = d_{W_\tau,P}$. The operator $D_{W_\tau}^{-1/2} - D^{-1/2}$ is diagonal with elements~$d_\tau(x)^{-1/2} - d(x)^{-1/2}$, such that $\|D_{W_\tau}^{-1/2} - D^{-1/2}\| \leq \norm{d_\tau^{-1/2} - d^{-1/2}}_\infty$.
Note that we have
\[
|d_\tau(x) - d(x)| = \abs{\int k(x, x') dP(x')} \leq C_{\nabla w} \|\nabla \tau\|_\infty.
\]
Then, as in the proof of Lemma \ref{lem:chaining-pointwise-Lapl}, we have $\norm{d_\tau^{-1/2} - d^{-1/2}}_\infty \leq c_{\min}^{-\frac{3}{2}} \norm{d_\tau - d}_\infty \leq C_{\nabla w} c_{\min}^{-\frac{3}{2}}\|\nabla \tau\|_\infty$.
\paragraph{Final bound.}
Note that by Schur's test, we have $\|A\| \leq C_w$, $\|A_\tau\| \leq C_w + C_{\nabla w} \|\nabla \tau\|_\infty$.
Further, we have $\|D^{-1/2}\|, \|D_{W_\tau}^{-1/2}\| \leq c_{\min}^{-1/2}$.
Plugging back into~\eqref{eq:ldiff_decomp}, we have the desired bound.
\subsection{Proofs of Theorems~\ref{thm:deform_p_TI} and~\ref{thm:deform_p} (deformation change to~$P$)}
In this case, we have two random graph models with distributions~$P$ and $P_\tau$, while~$W$ remains fixed. Depending on the case, the input $f$ will change or not.
\paragraph{Translation-invariant case.} We are going to prove Theorem \ref{thm:deform_p_TI} with $C$ defined as \eqref{eq:changeW_cst} and $C'$ as \eqref{eq:changef_cst1}. Using \eqref{eq:Wass2LP} and Prop \ref{prop:permut-c}, we obtain that
\begin{equation*}
\mathcal{W}_2\pa{\Phi_{W, P_\tau}(f')_\sharp P_\tau, \Phi_{W, P}(f)_\sharp P} \leq \norm{T_\tau \Phi_{W, P_\tau}(f') - \Phi_{W, P}(f)} = \norm{\Phi_{W_\tau, P}(f'_\tau) - \Phi_{W, P}(f)}
\end{equation*}
In the invariant case, we similarly have $\abs{\bar{\Phi}_{W, P_\tau}(f') - \bar{\Phi}_{W, P}(f)} = \abs{\bar{\Phi}_{W_\tau, P}(f'_\tau) - \bar{\Phi}_{W, P}(f)} \leq \norm{\Phi_{W_\tau, P}(f'_\tau) - \Phi_{W, P}(f)}$.
Using Lemma \ref{lem:changeW_stab} and the computation of the previous section, we obtain
\begin{equation*}
\norm{\Phi_{W_\tau, P}(f'_\tau) - \Phi_{W, P}(f)} \leq C(C_W+C_{\nabla w}) \norm{f} \norm{\nabla \tau}_\infty + C' \norm{f'_\tau - f},
\end{equation*}
where $C$ is defined by \eqref{eq:changeW_cst} and $C'$ is defined by \eqref{eq:changef_cst1}.
\begin{proof}[Proof of Prop. \ref{prop:degree-NF}] When using degree functions as input, the proof of Prop \ref{prop:permut-c} and the computations of the previous sections give
\[
\norm{T_\tau d_{W, P_\tau} - d_{W,P}} = \norm{d_{W_\tau, P} - d_{W,P}} \leq C_{\nabla w} \norm{\nabla \tau}_\infty
\]
\end{proof}
\paragraph{Non-translation-invariant case.} We are going to prove Theorem \ref{thm:deform_p} with constants $C$ defined as \eqref{eq:changeW_cst} and $C'$ defined as \eqref{eq:changef_cst1}. By Assumption \eqref{eq:assume_density}, we easily have the following:
\begin{align}\label{eq:equivalent_norm_tau}
C_{P_\tau}^{-1/2}\norm{f}_{L^2(P_\tau)} \leq \norm{f} \leq C_{P_\tau}^{1/2} \norm{f}_{L^2(P_\tau)}
\end{align}
such that for any $k$ we have $\norm{\mathcal{L}_{P_\tau}^k} \leq C_{P_\tau}$ by observing that
\[
\|\mathcal{L}_{P_\tau}^k f\| \leq C_{P_\tau}^{1/2} \|\mathcal{L}_{P_\tau}^k f\|_{L^2(P_\tau)} \leq C_{P_\tau}^{1/2} \|f\|_{L^2(P_\tau)} \leq C_{P_\tau} \|f\|
\]
We can then prove the following Lemma.
\begin{lemma}[Stability in terms of measure change]
We have
\begin{equation}
\|\bar{\Phi}_{W, P_\tau}(f) - \bar{\Phi}_{W, P}(f)\|
\leq C_{P_\tau}^2 C_1 \norm{f} \|\mathcal{L}_{P_\tau} - \mathcal{L}\| + C_2 \norm{f} \|U_\tau - U\|
\end{equation}
where $C_1$ is the same as in \eqref{eq:changeW_cst1} and $C_2 = \norm{\theta} \prod_{\ell=0}^{M-1} H_2^{(\ell)}$.
\end{lemma}
\begin{proof}
From a simple triangle inequality and the estimate~$\|U_\tau\| \leq C_{P_\tau}$ (which holds since $|U_\tau f| = \abs{\int f d P_\tau} = \abs{\int f q_\tau dP} \leq C_{P_\tau} \norm{f}$), we have
\begin{align*}
\|\bar{\Phi}_{W, P_\tau}(f) - \bar{\Phi}_{W, P}(f)\| &= \norm{ U_\tau \Phi_{W, P_\tau}(f) - U \Phi_{W, P}(f)} \\
&\leq C_{P_\tau} \norm{\Phi_{W, P_\tau}(f) - \Phi_{W, P}(f)} + \norm{U_\tau - U} \norm{\Phi_{W,P}(f)}
\end{align*}
We must now bound the first term. Denoting $f^{(\ell)}$ and $(f^{(\ell)})'$ the functions at each layer, we have using Lemma \ref{lem:filter_partial}, \eqref{eq:rho} and the fact that $\norm{\mathcal{L}_{P_\tau}^k} \leq C_{P_\tau}$:
\begin{align*}
\norm{f^{(\ell)} - (f^{(\ell)})'} &\leq \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} h_{ij}^{(\ell-1)}(\mathcal{L}) f_i^{(\ell-1)} - h_{ij}^{(\ell-1)}(\mathcal{L}_{P_\tau}) (f_i^{(\ell-1)})'}^2} \\
&\leq \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} (h_{ij}^{(\ell-1)}(\mathcal{L}) - h_{ij}^{(\ell-1)}(\mathcal{L}_{P_\tau})) (f_i^{(\ell-1)})'}^2} \\
&\quad + \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} h_{ij}^{(\ell-1)}(\mathcal{L}) (f_i^{(\ell-1)} - (f_i^{(\ell-1)})')}^2} \\
&\leq C_{P_\tau} H^{(\ell-1)}_{\partial, 2} \norm{\mathcal{L} - \mathcal{L}_{P_\tau}} \norm{(f^{(\ell-1)})'} + H^{(\ell-1)}_2 \norm{f^{(\ell-1)} - (f^{(\ell-1)})'}
\end{align*}
Then, we use Lemma \ref{lem:bound_cgcn} and the fact that there is no bias to obtain
\[
\norm{(f^{(\ell-1)})'} \leq C_{P_\tau}^{1/2} \norm{(f^{(\ell-1)})'}_{L^2(P_\tau)} \leq C_{P_\tau}^{1/2} \norm{f}_{L^2(P_\tau)} \prod_{s=0}^{\ell-1} H_2^{(s)} \leq C_{P_\tau} \norm{f} \prod_{s=0}^{\ell-1} H_2^{(s)}
\]
An easy recursion then gives the result.
\end{proof}
We first bound~$\|U_\tau - U\|$:
\begin{align*}
\abs{U_\tau f - Uf} &= \abs{\int f(x) d P_\tau(x) - \int f(x) dP(x)} = \abs{\int f(x) \left( q_\tau(x) - 1 \right) dP(x)} \leq N_P(\tau) \norm{f}
\end{align*}
which also holds for vector-valued functions.
We now bound~$\|\mathcal{L}_{P_\tau} - \mathcal{L}\|$.
If we denote $J_\tau$ the diagonal change of variables operator with elements $q_\tau(x)$, we may write
\begin{align*}
\mathcal{L}_{P_\tau} - \mathcal{L} &= D_{P_\tau}^{-1/2} A D_{P_\tau}^{-1/2} J_\tau - D^{-1/2} A D^{-1/2} \\
&= (D_{P_\tau}^{-1/2} - D^{-1/2}) A D_{P_\tau}^{-1/2} J_\tau + D^{-1/2} A (D_{P_\tau}^{-1/2} - D^{-1/2}) J_\tau + D^{-1/2} A D^{-1/2} (J_\tau - \Id)
\end{align*}
The following estimates are easily obtained using Schur's test or by a pointwise supremum: $\|A\| \leq C_w$, $\|J_\tau\| \leq C_{P_\tau}$, $\|J_\tau - \Id\| \leq N_P(\tau)$ and $\|D^{-1/2}\|, \|D_{P_\tau}^{-1/2}\| \leq c_{\min}^{-1/2}$. It remains to bound $\|D_{P_\tau}^{-1/2} - D^{-1/2}\|$. As before,
\begin{align*}
\|D_{P_\tau}^{-1/2} - D^{-1/2}\| &\leq c_{\min}^{-3/2}\norm{d_\tau - d}_\infty.
\end{align*}
and
\begin{align*}
\abs{d_\tau(x) - d(x)} &= \abs{\int W(x, x') d P_\tau(x') - \int W(x, x') dP(x')} \\
&= \abs{\int W(x, x') \left( q_\tau(x') - 1 \right) dP(x')} \leq C_w N_P(\tau)
\end{align*}
\subsection{Proof of Proposition~\ref{prop:deform_f} (signal deformations to~$f$)}
This is just a triangle inequality, combined with the previous theorems and Prop \ref{prop:permut-c}:
\begin{align*}
\norm{\bar{\Phi}_{W,P}(T_\tau f) - \bar{\Phi}_{W,P}(f)} &\leq \norm{\bar{\Phi}_{W,P}(T_\tau f) - \bar{\Phi}_{W_\tau,P}(T_\tau f)} + \norm{\bar{\Phi}_{W_\tau,P}(T_\tau f) - \bar{\Phi}_{W,P}(f)} \\
&\leq C(C_W + C_{\nabla w})\norm{\nabla \tau}_\infty \norm{T_\tau f} + \norm{\bar{\Phi}_{W,P_\tau}(f) - \bar{\Phi}_{W,P}(f)} \\
&\leq C(C_W + C_{\nabla w}) C_{P_\tau}^{1/2} \norm{f} \norm{\nabla \tau}_\infty + (CC_{P_\tau}^3 C_W + C') \norm{f} N_P(\tau)
\end{align*}
where $C$ is given by \eqref{eq:changeW_cst} and $C'$ is given by \eqref{eq:changef_cst1}.
\section{Technical Lemmas}\label{app:technical}
\subsection{Concentration inequalities}
\begin{lemma}[Chaining on non-normalized kernels]\label{lem:chaining-pointwise}
Consider a kernel $W$ and a probability distribution $P$ satisfying \eqref{eq:assumption_RG}, any function $f \in \mathcal{B}(\Xx)$, and $x_1,\ldots, x_n$ drawn $iid$ from $P$. Then, with probability at least $1-\rho$,
\begin{equation*}
\norm{\frac{1}{n} \sum_i W(\cdot, x_i) f(x_i) - \int W(\cdot, x) f(x) dP(x)}_\infty \lesssim \frac{\norm{f}_\infty\pa{c_{\textup{Lip.}} \sqrt{d_x} + (c_{\max} + c_{\textup{Lip.}})\sqrt{\log\frac{n_\Xx}{\rho}}}}{\sqrt{n}}
\end{equation*}
\end{lemma}
\begin{proof}
Without loss of generality, we prove the result for $\norm{f}_\infty \leq 1$. For any $x\in \Xx$, define
\[
Y_x = \frac{1}{n}\sum_i W(x, x_i)f(x_i) - \int W(x, x') f(x') dP(x')
\]
Since $\norm{W(\cdot, x)f}_\infty \leq c_{\max}$, for any fixed $x_0 \in \Xx$, by Hoeffding's inequality we have: with probability at least $1-\rho$,
\[
\abs{Y_{x_0}} \lesssim \frac{c_{\max} \sqrt{\log(1/\rho)}}{\sqrt{n}}
\]
Consider any $j\leq n_\Xx$. For any $x_0 \in \Xx_j$, we have
\[
\sup_{x \in \Xx_j} \abs{Y_x} \leq \sup_{x, x' \in \Xx_j} \abs{Y_x - Y_{x'}} + \abs{Y_{x_0}}
\]
The second term is bounded by the inequality above.
For the first term, we use the ``tail bound'' version of Dudley's inequality \citep[Thm 8.1.6]{Vershynin2018}. We first need to check that the process $Y_x$ has sub-gaussian increments. For any $x,x'\in \Xx_j$, we have
\begin{align*}
\norm{Y_x - Y_{x'}}_{\psi_2} &\lesssim \frac{1}{n} \pa{\sum_{i=1}^n \norm{(W(x, x_i) -W(x', x_i))f(x_i) - (T_{W,P}f(x)- T_{W,P}f(x'))}_{\psi_2}^2}^\frac12 \\
&\lesssim \frac{1}{n} \pa{\sum_{i=1}^n \norm{(W(x, x_i) -W(x', x_i))f(x_i)}_{\psi_2}^2}^\frac12 \\
&\lesssim \frac{1}{n} \pa{\sum_{i=1}^n \norm{(W(x, \cdot) -W(x', \cdot))f(\cdot)}_{\infty}^2}^\frac12 \\
&\leq \frac{c_{\textup{Lip.}}}{\sqrt{n}} d(x,x')
\end{align*}
where we have used, from \citep{Vershynin2018}, Prop. 2.6.1 for the first line, Lemma 2.6.8 for the second, Example 2.5.8 for the third, and the Lipschitz property of $W$ for the last.
Now, we apply Dudley's inequality \citep[Thm 8.1.6]{Vershynin2018} to obtain that with probability $1-\rho$,
\begin{align*}
\sup_{x, x' \in \Xx_j} \abs{Y_x - Y_{x'}} &\lesssim \frac{c_{\textup{Lip.}}}{\sqrt{n}}\pa{\int_0^1 \sqrt{\log N(\Xx, d, \varepsilon)}d\varepsilon + \sqrt{\log(1/\rho)}} \\
&\lesssim \frac{c_{\textup{Lip.}}}{\sqrt{n}}\pa{\sqrt{d_x} + \sqrt{\log(1/\rho)}}
\end{align*}
Combining with the decomposition above and applying a union bound over the $\Xx_j$ yields the desired result.
\end{proof}
\begin{lemma}[Chaining on normalized Laplacians]\label{lem:chaining-pointwise-Lapl}
Consider a kernel $W$ and a probability distribution $P$ satisfying \eqref{eq:assumption_RG}, any function $f \in \mathcal{B}(\Xx)$, and $x_1,\ldots, x_n$ drawn $iid$ from $P$. Assume $n$ satisfies \eqref{eq:proba-bounded-degree}. Then with probability at least $1-\rho$,
\begin{equation}
\norm{(\mathcal{L}_{X} - \mathcal{L}_{P}) f}_\infty \lesssim \frac{c_{\max}\norm{f}_\infty D_\Xx(\rho)}{c_{\min}\sqrt{n}}
\end{equation}
where $D_\Xx(\rho) = \frac{1}{c_{\min}}\pa{c_{\textup{Lip.}} \sqrt{d_x} + (c_{\max} + c_{\textup{Lip.}})\sqrt{\log\frac{n_\Xx}{\rho}}}$.
\end{lemma}
\begin{proof}
Again, we assume $\norm{f}_\infty\leq 1$ without loss of generality.
By Lemma \ref{lem:chaining-pointwise} with $f=1$ and \eqref{eq:proba-bounded-degree}, with probability $1-\rho/2$ we have
\[
\norm{d_{X} - d_{P}}_\infty \lesssim \varepsilon_d \eqdef \frac{c_{\textup{Lip.}} \sqrt{d_x} + (c_{\max} + c_{\textup{Lip.}})\sqrt{\log\frac{n_\Xx}{\rho}}}{\sqrt{n}} \leq \frac{c_{\min}}{2}
\]
and in particular $d_{X} \geq c_{\min}/2$. In this case, for all $x$, we have
\begin{align*}
\abs{\frac{1}{\sqrt{d_{X}(x)}} - \frac{1}{\sqrt{d_{P}(x)}}} &\leq \frac{\abs{d_{P}(x) - d_{X}(x)}}{\sqrt{d_{X}(x)d_{P}(x)}(\sqrt{d_{X}(x)} + \sqrt{d_{P}(x)})} \lesssim \frac{\varepsilon_d}{c_{\min}^{3/2}}
\end{align*}
and for all $x,y$,
\begin{align*}
\abs{\frac{1}{\sqrt{d_{X}(y) d_{X}(x)}} - \frac{1}{\sqrt{d_{P}(y) d_{P}(x)}}} &\leq \frac{1}{\sqrt{d_{X}(y)}}\abs{\frac{1}{\sqrt{d_{X}(x)}} - \frac{1}{\sqrt{d_{P}(x)}}} \\
&\quad + \frac{1}{\sqrt{d_{P}(x)}}\abs{\frac{1}{\sqrt{d_{X}(y)}} - \frac{1}{\sqrt{d_{P}(y)}}} \lesssim \frac{\varepsilon_d}{c_{\min}^2}
\end{align*}
Now, define $\bar W(x,y) \eqdef \frac{W(x,y)}{\sqrt{d_{P}(x) d_{P}(y)}}$. For all $j\leq n_\Xx$ and $x,x' \in \Xx_j$ and $y \in \Xx$, we have $\norm{\bar W(\cdot, y)}_\infty \leq \frac{c_{\max}}{c_{\min}}$ and
\begin{align*}
\abs{ \bar W(x,y)- \bar W(x',y)} &\leq \frac{\abs{W(x,y)}}{\sqrt{d_P(y)}}\abs{\frac{1}{\sqrt{d_P(x)}} - \frac{1}{\sqrt{d_P(x')}}} + \frac{1}{\sqrt{d_P(x')d_P(y)}}\abs{ W(x,y)- W(x',y)} \\
&\lesssim \frac{c_{\textup{Lip.}} c_{\max}}{c_{\min}^2}d(x,x')
\end{align*}
Hence by applying Lemma \ref{lem:chaining-pointwise} we obtain that with probability $1-\rho/2$,
\begin{align*}
&\norm{\frac{1}{n} \sum_i \bar W(\cdot, x_i) f(x_i) - \int \bar W(\cdot, x) f(x) dP(x)}_\infty \\
&\qquad\qquad\lesssim \varepsilon_W \eqdef \frac{\frac{c_{\max}}{c_{\min}}\pa{\frac{c_{\textup{Lip.}}}{c_{\min}} \sqrt{d_x} + \pa{1 + \frac{c_{\textup{Lip.}}}{c_{\min}}}\sqrt{\log\frac{n_\Xx}{\rho}}}}{\sqrt{n}}
\end{align*}
We can now conclude, observing that
\begin{align*}
\norm{(\mathcal{L}_{X} - \mathcal{L}_{P})f}_\infty &= \norm{\frac{1}{n}\sum_i \frac{W(\cdot,x_i)}{\sqrt{d_{X}(\cdot) d_{X}(x_i)}}f(x_i) - \int \frac{W(\cdot,x)}{\sqrt{d_{P}(\cdot) d_{P}(x)}}f(x)dP(x)}_\infty \\
&\leq \sup_x \frac{1}{n} \sum_i \abs{W(x, x_i)f(x_i)} \abs{\frac{1}{\sqrt{d_{X}(x) d_{X}(x_i)}} - \frac{1}{\sqrt{d_{P}(x) d_{P}(x_i)}}} \\
&\quad+ \norm{\frac{1}{n} \sum_i \bar W(\cdot, x_i) f(x_i) - \int \bar W(\cdot, x) f(x) dP(x)}_\infty \\
&\lesssim \frac{c_{\max}}{c_{\min}^2} \varepsilon_d + \varepsilon_W \lesssim \frac{c_{\max}}{c_{\min}^2} \varepsilon_d
\end{align*}
\end{proof}
\subsection{Misc. bounds}
\begin{lemma}[Operator norms of filters]\label{lem:filter_partial}
Let $(E,\norm{\cdot}_E)$ be a Banach space and $(\mathcal{H}, \norm{\cdot}_\mathcal{H})$ be a separable Hilbert space. Let $L,L'$ be two bounded operators on $E$, and $S:E \to \mathcal{H}$ be a linear operator such that $\norm{S}_{E \to \mathcal{H}}\leq 1$. For $1\leq i \leq d$ and $1\leq j \leq d'$, let $h_{ij}=\sum_k \beta_{ijk} \lambda^k$ be a collection of analytic filters, with $B_k = \pa{\beta_{ijk}}_{ji}\in \RR^{d'\times d}$ the matrix of order-$k$ coefficients, with operator norm $\norm{B_k}$. Let $x_1,\ldots, x_{d}\in E$ be a collection of points. Then:
\[
\sqrt{\sum_j \norm{S \sum_i h_{ij}(L)x_i}_\mathcal{H}^2} \leq \pa{\sum_k \norm{B_k} \norm{L^k}} \sqrt{\sum_i \norm{x_i}_E^2}
\]
and
\begin{align*}
\sqrt{\sum_j \norm{S \sum_i (h_{ij}(L)- h_{ij}(L'))x_i}_\mathcal{H}^2} &\leq \sum_k \norm{B_k} \sqrt{\sum_i \pa{\sum_{\ell=0}^{k-1}\norm{L^\ell}\norm{(L-L')(L')^{k-1-\ell} x_i}_E}^2}
\end{align*}
When $\mathcal{H}$ is only a Banach space, the same results hold with $B_{k,\abs{\cdot}} = \pa{\abs{\beta_{ijk}}}_{ji}$ instead of $B_k$.
\end{lemma}
\begin{proof}
Let $\{e_\ell\}_{\ell\geq 1}$ be an orthonormal basis of $\mathcal{H}$. For all $i,k$, decompose $S L^k x_i = \sum_\ell b_{ik \ell} e_{\ell}$. We have
\begin{align*}
\sqrt{\sum_j \norm{S\sum_i h_{ij}(L)x_i}_\mathcal{H}^2} &\leq \sqrt{\sum_j \norm{\sum_{ik} \beta_{ijk} S L^k x_i}_\mathcal{H}^2} \leq \sum_k \sqrt{\sum_j \norm{\sum_{i \ell} \beta_{ijk} b_{ik\ell} e_\ell}_\mathcal{H}^2}\\
&\leq \sum_k \sqrt{\sum_{\ell} \sum_j\pa{\sum_{i} \beta_{ijk} b_{ik\ell}}^2} \\
&\leq \sum_k \sqrt{\norm{B_k}^2 \sum_{i\ell} b_{ik\ell}^2} = \sum_k \norm{B_k} \sqrt{\sum_{i} \norm{S L^k x_i}_\mathcal{H}^2} \\
&\leq \pa{\sum_k \norm{B_k} \norm{L^k}} \sqrt{\sum_i \norm{x_i}_E^2}
\end{align*}
The proof of the second claim is obtained in the same way by decomposing $S (L^k -(L')^k)x_i$ in $\mathcal{H}$ and using at the last step:
\begin{align*}
\norm{(L^k - (L')^k)x}_E &= \norm{\sum_{\ell=0}^{k-1}L^\ell(L-L')(L')^{k-1-\ell} x}_E \leq \sum_{\ell=0}^{k-1}\norm{L^\ell}\norm{(L-L')(L')^{k-1-\ell} x}_E
\end{align*}
Finally, when $\mathcal{H}$ is not a Hilbert space, we directly use
\begin{align*}
\sqrt{\sum_j \norm{S\sum_i h_{ij}(L)x_i}_\mathcal{H}^2} &\leq \sqrt{\sum_j \pa{\sum_{ik} \abs{\beta_{ijk}} \norm{L^k} \norm{x_i}_E}^2} \leq \sum_k \norm{L^k} \sqrt{\sum_j \pa{\sum_{i} \abs{\beta_{ijk}} \norm{x_i}_E}^2}\\
&\leq \pa{\sum_k \norm{B_{k,\abs{\cdot}}} \norm{L^k}} \sqrt{\sum_i \norm{x_i}_E^2}
\end{align*}
\end{proof}
\begin{lemma}[Lipschitz property of discrete GCNs]\label{lem:lip_gcn}
Let $G_1=(A,Z_1)$ and $G_2=(A,Z_2)$ be two graphs with the same structure, and a GCN $\Phi$. Denote by $Z_r^{(M)}$ the signal at the last layer when applying $\Phi$ to $G_r$. We have
\[
\norm{Z_1^{(M)}-Z_2^{(M)}}_F \leq \pa{\prod_{\ell=0}^{M-1} H^{(\ell)}_{2}} \norm{Z_1 - Z_2}_F
\]
\end{lemma}
\begin{proof}
Using Lemma \ref{lem:filter_partial} and \eqref{eq:rho}, we write
\begin{align*}
\norm{Z_1^{(M)}-Z_2^{(M)}}_F &\leq \sqrt{\sum_j \norm{\sum_{i=1}^{d_{M-1}} h_{ij}^{(M-1)}(L) (z_{1,i}^{(M-1)}-z_{2,i}^{(M-1)})}^2} \\
&\leq H^{(M-1)}_{2} \norm{Z_1^{(M-1)}-Z_2^{(M-1)}}_F
\end{align*}
An easy recursion gives the result.
\end{proof}
\begin{lemma}[Bound on c-GCNs]\label{lem:bound_cgcn}
Apply a c-GCN to a random graph model $\Gamma = (P,W,f)$. Denote by $f^{(\ell)}$ the function at each layer. Then we have
\begin{equation}\label{eq:cgcn-bound}
\norm{f^{(\ell)}}_* \leq \norm{f}_* \prod_{s=0}^{\ell-1} H^{(s)}_* + \sum\limits_{s=0}^{\ell-1} \norm{b^{(s)}}\prod\limits_{p=s+1}^{\ell-1} H^{(p)}_*
\end{equation}
where $*$ indicates $L^2(P)$ or $\infty$.
\end{lemma}
\begin{proof}
Using Lemma \ref{lem:filter_partial} and \eqref{eq:rho}, we write
\begin{align*}
\norm{f^{(\ell)}}_* &\leq \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} h_{ij}^{(\ell-1)}(\mathcal{L}_{W,P}) f_i^{(\ell-1)} + b_j^{(\ell-1)}}_*^2} \\
&\leq \sqrt{\sum_j \norm{\sum_{i=1}^{d_{\ell-1}} h_{ij}^{(\ell-1)}(\mathcal{L}_{W,P}) f_i^{(\ell-1)}}_*^2} + \norm{b^{(\ell-1)}} \\
&\leq H^{(\ell-1)}_* \norm{f^{(\ell-1)}}_* + \norm{b^{(\ell-1)}}
\end{align*}
An easy recursion gives the result.
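Explicitly, setting $u_\ell \eqdef \norm{f^{(\ell)}}_*$, the recursion $u_\ell \leq H_*^{(\ell-1)} u_{\ell-1} + \norm{b^{(\ell-1)}}$ with $u_0 = \norm{f}_*$ unrolls to
\[
u_\ell \leq \norm{f}_* \prod_{s=0}^{\ell-1} H_*^{(s)} + \sum_{s=0}^{\ell-1} \norm{b^{(s)}} \prod_{p=s+1}^{\ell-1} H_*^{(p)},
\]
which is \eqref{eq:cgcn-bound}.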
\end{proof}
\begin{lemma}[Piecewise Lipschitz property of c-GCNs]\label{lem:lip_cgcn}
Let $\Gamma$ be a random graph model. Assume that $f$ is piecewise $(c_f, n_f)$-Lipschitz. Then, $\Phi_{W,P}(f)$ is piecewise $(C, n_f n_\Xx)$-Lipschitz with
\begin{equation}\label{eq:cgcn-lip}
C = \norm{\theta} \pa{ c_f\prod_{\ell=0}^{M-1} \norm{B_0^{(\ell)}} + \frac{c_{\textup{Lip.}} c_{\max}}{c_{\min}^2}\sum_{\ell=0}^{M-1} H_2^{(\ell)} \norm{f^{(\ell)}}_{L^2(P)} \prod_{s=0}^{\ell-1} \norm{B_0^{(s)}}}
\end{equation}
where $\norm{f^{(\ell)}}_{L^2(P)}$ can be bounded by Lemma \ref{lem:bound_cgcn}.
\end{lemma}
\begin{proof}
Define the partition $\Xx'_1,\ldots, \Xx'_{n_f}$ on which $f$ is Lipschitz, and take $x,x' \in \Xx_i \cap \Xx'_j$ for some $i,j$.
Using the same strategy as in the proof of Lemma \ref{lem:filter_partial}, we have
\begin{align*}
\norm{f^{(M)}(x) - f^{(M)}(x')} &\leq \sqrt{\sum_j\abs{\sum_i h_{ij}^{(M-1)}(\mathcal{L}_P)f_i^{(M-1)}(x) - h_{ij}^{(M-1)}(\mathcal{L}_P)f_i^{(M-1)}(x')}^2} \\
&\leq \sum_k \norm{B_k^{(M-1)}} \sqrt{\sum_i \abs{\mathcal{L}_P^k f_i^{(M-1)}(x) - \mathcal{L}_P^k f_i^{(M-1)}(x')}^2}
\end{align*}
Define $\bar W(x,y) = \frac{W(x,y)}{\sqrt{d_P(x) d_P(y)}}$. As we have seen in the proof of Lemma \ref{lem:chaining-pointwise-Lapl}, $\bar W$ is piecewise $\frac{c_{\textup{Lip.}} c_{\max}}{c_{\min}^2}$-Lipschitz on the $\Xx_i$. So, for $k\geq 1$ and $x,x' \in \Xx_i$, by the Cauchy--Schwarz inequality we have
\begin{align*}
\abs{\mathcal{L}_P^k f_i^{(M-1)}(x) - \mathcal{L}_P^k f_i^{(M-1)}(x')} &\leq \norm{\mathcal{L}_P^{k-1} f_i^{(M-1)}}_{L^2(P)} \frac{c_{\textup{Lip.}} c_{\max}}{c_{\min}^2} d(x,x') \\
&\leq \norm{f_i^{(M-1)}}_{L^2(P)} \frac{c_{\textup{Lip.}} c_{\max}}{c_{\min}^2} d(x,x')
\end{align*}
And thus
\begin{align*}
\norm{f^{(M)}(x) - f^{(M)}(x')} &\leq \norm{B_0^{(M-1)}} \norm{f^{(M-1)}(x) - f^{(M-1)}(x')} \\
&\quad+ H_2^{(M-1)} \norm{f^{(M-1)}}_{L^2(P)} \frac{c_{\textup{Lip.}} c_{\max}}{c_{\min}^2} d(x,x')
\end{align*}
A recursion gives the result, with Lemma \ref{lem:bound_cgcn}.
\end{proof}
\section{Third-party results}\label{app:third}
\begin{lemma}[Hoeffding's inequality]\label{lem:hoeffding}
Let $X_1,\ldots, X_n \in \RR$ be independent random variables such that $a \leq X_i \leq b$ almost surely. Then we have
\begin{equation}
\mathbb{P}\pa{\abs{\frac{1}{n}\sum_i (X_i- \mathbb{E}X_i)}\geq \varepsilon}\leq 2 \exp\pa{-\frac{2\varepsilon^2 n}{(b-a)^2}}
\end{equation}
\end{lemma}
\begin{lemma}[Generalized Hoeffding's inequality \citep{Rosasco2010}]\label{lem:conc-hilbert}
Let $\mathcal{H}$ be a separable Hilbert space and $\xi_1,\ldots, \xi_n \in \mathcal{H}$ be independent zero-mean random variables such that $\norm{\xi_i} \leq C$ almost surely. Then with probability at least $1-\rho$ we have
\begin{equation}
\norm{\frac{1}{n} \sum_i \xi_i} \leq \frac{C\sqrt{2\log(2/\rho)}}{\sqrt{n}}
\end{equation}
\end{lemma}
\begin{theorem}[Spectral concentration of normalized Laplacian {\cite[Theorem 4]{Keriven2020}}]\label{thm:conc-sparse-Lapl}
Let $A$ be an adjacency matrix of a graph drawn with independent edges $a_{ij} \sim \Ber(\alpha_n p_{ij})$, where $p_{ij} \leq c_{\max}$ and for all $i$, $\frac{1}{n} \sum_j p_{ij} \geq c_{\min}>0$. Denote by $P$ the $n \times n$ matrix containing the $p_{ij}$. There is a universal constant $C$ such that:
\begin{align}
\mathbb{P}\pa{\norm{L(A) - L(P)} \geq \frac{C(1+c)c_{\max}}{c_{\min}^2 \sqrt{n \alpha_n}}} \leq&~ e^{-\pa{\frac{3 c^2}{12 + 4c} - \log(14)} n} + e^{-\frac{3 c^2}{12 + 4c} n \alpha_n + \log(n)} \notag \\
&+ e^{-\frac{3 c_{\min}^2 n \alpha_n}{25 c_{\max}} + \log(n)} + n^{-\frac{c}{4} + 6}
\end{align}
\end{theorem}
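As a purely numerical sanity check of this concentration (a sketch only: an arbitrary $2$-block model with a dense $\alpha_n = 1/2$ chosen for speed, rather than the relatively sparse regime of the theorem), one can verify that $\norm{L(A) - L(P)}$ shrinks as $n$ grows:

```python
import numpy as np

def normalized_laplacian(M):
    """L(M) = D^{-1/2} M D^{-1/2}, with zero rows mapped to zero."""
    d = M.sum(axis=1)
    inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
    return inv_sqrt[:, None] * M * inv_sqrt[None, :]

def concentration_error(n, alpha, rng):
    """Sample a_{ij} ~ Ber(alpha * p_ij) for a 2-block model and return
    ||L(A) - L(P)|| in spectral norm (L is invariant to the scale alpha)."""
    labels = rng.integers(0, 2, size=n)
    blocks = np.array([[0.9, 0.3], [0.3, 0.6]])
    P = blocks[labels[:, None], labels[None, :]]
    U = rng.random((n, n))
    A = (np.triu(U, 1) < alpha * np.triu(P, 1)).astype(float)
    A = A + A.T  # symmetric adjacency, no self-loops
    return np.linalg.norm(normalized_laplacian(A) - normalized_laplacian(P), 2)

rng = np.random.default_rng(0)
errs = [concentration_error(n, 0.5, rng) for n in (200, 800)]  # decreasing in n
```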
\begin{theorem}[Wasserstein convergence \citep{Weed2017}]\label{thm:wass}
Let $(\Xx, d)$ be a compact metric space with $\text{diam}(\Xx)\leq B$ and $N_\varepsilon(\Xx) \leq (B/\varepsilon)^{d_x}$. Let $P$ be a probability distribution on $\Xx$, and $x_1,\ldots, x_n$ drawn iid from $P$. With probability $1-\rho$,
\begin{equation}
\mathcal{W}_2(\hat P, P) \lesssim B\pa{n^{-\frac{1}{d_x}} + \pa{27^{\frac{d_x}{4}} + \log(1/\rho)^{\frac{1}{4}}} n^{-\frac{1}{4}}}
\end{equation}
where $\hat P = n^{-1} \sum_i \delta_{x_i}$.
\end{theorem}
\begin{proof}
The result is obtained by combining Prop.~5 and Prop.~20 of \citet{Weed2017} with $\varepsilon' = 1$, together with the assumed simplified expression for the covering numbers of $\Xx$ and a rescaling of the metric such that $B$ disappears from the covering numbers expression.
\end{proof}
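For intuition on these rates, in dimension one ($d_x = 1$, $P$ uniform on $[0,1]$, assumptions made purely for illustration) the distance $\mathcal{W}_2(\hat P, P)$ admits a closed form through the empirical quantile function, and its decay with $n$ is easy to observe:

```python
import numpy as np

def w2_empirical_vs_uniform(x):
    """Exact W2 distance between the empirical measure of a sample x in [0,1]
    and the uniform distribution, via W2^2 = int_0^1 (Fhat^{-1}(u) - u)^2 du."""
    x = np.sort(x)
    n = len(x)
    a = np.arange(n) / n         # Fhat^{-1}(u) = x_(i) for u in ((i-1)/n, i/n]
    b = np.arange(1, n + 1) / n
    w2_sq = np.sum(((x - a) ** 3 - (x - b) ** 3) / 3.0)  # exact piecewise integral
    return np.sqrt(w2_sq)

rng = np.random.default_rng(0)
dists = [w2_empirical_vs_uniform(rng.random(n)) for n in (100, 10_000)]
```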
\begin{lemma}[Schur's test]\label{lem:schur}
Let~$T$ be the integral operator defined by
\begin{equation*}
Tf(x) = \int k(x, x') f(x') d \mu(x').
\end{equation*}
If the kernel~$k$ satisfies
\begin{equation*}
\sup_x \int |k(x, x')| d \mu(x') \leq C \quad \text{and} \quad \sup_{x'} \int |k(x, x')| d \mu(x) \leq C,
\end{equation*}
then~$T$ is bounded in~$L^2(\mu)$, with~$\|T\|_{L^2(\mu)} \leq C$.
\end{lemma}
\section{Convergence of Graph Convolutional Networks}\label{sec:conv}
In this section, we show that a GCN applied to a random graph $G \sim \Gamma$ will be close to the corresponding c-GCN applied to $\Gamma$.
In the invariant case, $\bar{\Phi}_A(Z)$ and $\bar{\Phi}_{W,P}(f)$ are both vectors in $\RR^{d_{out}}$. In the equivariant case, we will show that the output signal $\Phi_A(Z)_i \in \RR^{d_{out}}$ at each node is close to the function $\Phi_{W,P}(f)$ evaluated at $x_i$. To measure this, we consider the (square root of the) Mean Square Error at the node level: for a signal $Z \in \RR^{n \times d_{out}}$, a function $f:\Xx \to \RR^{d_{out}}$ and latent variables $X$, we define
$
\textup{MSE}_X\pa{Z, f} \eqdef (n^{-1}\sum_{i=1}^n \norm{Z_i - f(x_i)}^2)^{1/2}
$.
In the following theorem we use the shorthand $D_\Xx(\rho) \eqdef \frac{c_{\textup{Lip.}}}{c_{\min}} \sqrt{d_x} + \frac{c_{\max} + c_{\textup{Lip.}}}{c_{\min}}\sqrt{\log\frac{n_\Xx}{\rho}}$.
\begin{theorem}[Convergence to continuous GCN]\label{thm:conv}
Let $\Phi$ be a GCN and $G$ be a graph with $n$ nodes generated from a model $\Gamma$, denote by $X$ its latent variables. There are two universal constants $c_1, c_2$ such that the following holds.
Take any $\rho>0$, assume $n$ is large enough such that $n \geq c_1 D_\Xx(\rho)^2 + \frac{1}{\rho}$, and the sparsity level is such that $\alpha_n \geq c_2 c_{\max} c_{\min}^{-2} \cdot n^{-1}\log n$.
Then, with probability at least $1-\rho$,
\begin{align*}
\textup{MSE}_X\pa{\Phi_A(Z), \Phi_{W,P}(f)} &\leq R_n \eqdef C_1 D_\Xx\pa{\tfrac{\rho}{\sum_\ell d_\ell}}n^{-\frac12} + C_2 (n\alpha_n)^{-\frac12} , \\
\norm{\bar{\Phi}_A(Z) - \bar{\Phi}_{W,P}(f)} &\leq R_n + C_3 \sqrt{\log(1/\rho)} n^{-\frac12} .
\end{align*}
\end{theorem}
\paragraph{Discussion.}
The constants $C_i$ are of the form $C'_i\norm{f}_\infty + C''_i$ and are detailed in the appendix. When the filters are normalized and there is no bias, they are proportional to $M \norm{f}_\infty$. In particular, they do not depend on the dimension $d_x$.
The proof uses standard algebraic manipulations, along with two concentration inequalities. The first one exploits Dudley's inequality \cite{Vershynin2018} to show that, for a fixed function $f$ and in the absence of random edges, $\mathcal{L}_{W,P} f$ is well approximated by its discrete counterpart. Note here that we do not seek a \emph{uniform} proof with respect to a functional space, since the c-GCN is fixed. This allows us to obtain non-asymptotic rates while relaxing the usual smoothness hypotheses \cite{Rosasco2010}. This first concentration bound leads to the standard rate in $\mathcal{O}(1/\sqrt{n})$.
The second bound uses a fairly involved recent concentration inequality for normalized Laplacians of relatively sparse graphs with random edges derived in \citep{Keriven2020}, which gives the term in $\mathcal{O}(1/\sqrt{\alpha_n n})$. Although this second term has a strictly worse convergence rate except in the dense case $\alpha_n \sim 1$, its multiplicative constant is strictly better, in particular it does not depend on the Minkowski dimension $d_x$. The condition $n\geq 1/\rho$, which suggests a polynomial concentration instead of the more traditional exponential one, comes from this part of the proof.
It is known in the literature that using the normalized Laplacian is often more appropriate than using the adjacency matrix. If we were to use the latter, a normalization by $(\alpha_n n)^{-1}$ would be necessary \cite{Lei2015}. However, $\alpha_n$ is rarely known, and can change from one graph to another. The normalized Laplacian is adaptive to $\alpha_n$ and does not require any normalization.
\paragraph{Example of applications.} Invariant GCNs are typically used for regression or classification at the graph level. Theorem~\ref{thm:conv} shows that the output of a discrete GCN directly approaches that of the corresponding c-GCN. Equivariant GCNs are typically used for regression at the node level. Consider an ideal function $f^*:\Xx \to \RR^{d_{out}}$ that is well approximated by an equivariant c-GCN $\Phi_{W,P}(f)$ in terms of $L^2(P)$-norm. Then, the error between the output of the discrete GCN $\Phi_A(Z)$ and the sampling of $f^*$ satisfies with high probability
$
\textup{MSE}_X\pa{\Phi_A(Z), f^*} \leq \norm{\Phi_{W,P}(f) - f^*}_{L^2(P)} + R_n + \mathcal{O}(n^{-\frac{1}{4}})
$
using a triangle inequality, Theorem \ref{thm:conv} and Hoeffding's inequality.
\paragraph{Noisy or absent signal.}
Until now, we have considered that the function $f$ was observed without noise. Noise can be handled by considering the Lipschitz properties of the GCN. For instance, in the invariant case, by Lemma \ref{lem:lip_gcn} in Appendix \ref{app:technical}, we have $\norm{\bar{\Phi}_A(Z_1) - \bar{\Phi}_A(Z_2)} \lesssim \frac{1}{\sqrt{n}} \norm{Z_1 - Z_2}_F$. Hence, if the input signal is the noisy $z_i = f(x_i) + \nu_i$, where $\nu$ is centered iid noise, a GCN deviates from the corresponding c-GCN by an additional $n^{-1/2} \norm{(\nu_i)_i}_F$, which converges to the standard deviation of the noise. Interestingly, the noise can be filtered out: for instance, if one inputs $\bar Z = LZ$ into the GCN, then by a concentration inequality it is not difficult to see that the smoothed noise term converges to $0$, and the GCN converges to the c-GCN with smoothed input function $\bar f = \mathcal{L}_{W,P} f$.
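The smoothing effect can be checked numerically; below, a sketch with a dense Erd\H{o}s--R\'enyi graph and standard Gaussian noise (illustrative parameters only), where the smoothed noise term $n^{-1/2}\norm{L\nu}$ falls well below the raw one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
U = rng.random((n, n))
A = (np.triu(U, 1) < 0.5).astype(float)  # dense Erdos-Renyi graph, p = 1/2
A = A + A.T
d = A.sum(axis=1)
L = A / np.sqrt(np.outer(d, d))          # normalized Laplacian L(A)
nu = rng.normal(size=n)                  # centered iid noise

raw = np.linalg.norm(nu) / np.sqrt(n)           # ~ noise standard deviation
smoothed = np.linalg.norm(L @ nu) / np.sqrt(n)  # vanishes as n grows
```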
In some cases such as spectral clustering \cite{Chen2019c}, one does not have an input signal over the nodes, but only has access to the structure of the graph. In this case, several heuristics have been used in the literature, but a definitive answer is yet to emerge. For instance, a classical strategy is to use the (normalized) degrees of the graph $ Z=A 1_n/(\alpha_n n)$ as input signal \citep{Bruna2013, Chen2019c} (assuming for simplicity that $\alpha_n$ is known or estimated). In this case, using our proofs (Lemma \ref{lem:chaining-pointwise} in the appendix) and the spectral concentration in \cite{Lei2015}, it is not difficult to show that a discrete GCN will converge to its continuous version with the degree function $f = d_{W,P}$ as input. We will see in Prop. \ref{prop:degree-NF} in the next section that this leads to desirable stability properties.
\section{Continuous GWNN}
A major issue remains: what to do when there is no input signal? One may choose a constant input signal $z=1_n$, but note that, when the degree function $d_W$ is constant, the output of \emph{any} equivariant GNN then tends to a constant vector.
The authors of \cite{Vignac2020a} and \cite{Xu2019} propose a powerful architecture where each node is assigned a unique id (usually a one-hot vector), and equivariance is restored by a final node-wise pooling (akin to a ``deep set'' architecture). The authors of \cite{Vignac2020a} take inspiration from works \cite{Loukas2019} proving that a unique id for each node leads to universality, while those of \cite{Xu2019} draw a parallel with wavelets on graphs, that is, filtering of Dirac signals. They call the resulting network a Graph Wavelet Neural Network (GWNN).
Let us consider the equivariant case (the invariant case is simpler). We consider any equivariant GCN $\Phi_A(\cdot): \RR^n \to \RR^{n \times d_{out}}$ as defined above (that is, with input dimension $d_0=1$), and define
\begin{equation}
\Psi_A \eqdef \rho\left(\left(\frac{1}{n} \sum_i \Phi_A(e_i)\right) \tilde\theta \right) + 1_n \tilde b^\top \in \RR^{n \times d'_{out}}
\end{equation}
where $e_i = [0,\ldots, 1, \ldots, 0]$ is the $i$-th basis vector of $\RR^n$, and $\tilde \theta \in \RR^{d_{out} \times d'_{out}}$, $\tilde b \in \RR^{d'_{out}}$ are the weights and bias of a final fully-connected layer. In other words, $\Phi_A$ is applied to \emph{each} $e_i$, a pooling over the nodes is performed, and a final fully-connected layer is applied.
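A minimal numerical sketch of this construction, where a one-layer, single-filter GCN stands in for $\Phi_A$ (all weights are arbitrary toy values, not part of the paper):

```python
import numpy as np

def norm_laplacian(A):
    """L(A) = D^{-1/2} A D^{-1/2}, isolated nodes mapped to zero."""
    d = A.sum(axis=1)
    inv = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
    return inv[:, None] * A * inv[None, :]

def gcn(A, z, beta0=0.2, beta1=1.0):
    """Toy equivariant GCN: one order-1 filter h(L) = beta0 Id + beta1 L, ReLU."""
    L = norm_laplacian(A)
    return np.maximum(beta0 * z + beta1 * (L @ z), 0.0)[:, None]  # n x 1

def gwnn(A, theta=1.0, b=0.1):
    """GWNN: apply the GCN to every one-hot id e_i, pool, final layer."""
    n = A.shape[0]
    pooled = np.mean([gcn(A, np.eye(n)[i]) for i in range(n)], axis=0)
    return np.maximum(pooled * theta, 0.0) + b  # n x 1
```

Permuting the adjacency matrix permutes the output rows, consistent with the equivariance relation $\Psi_{\sigma \cdot A} = \sigma \cdot \Psi_A$.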
In particular, GWNNs are invariant to the choice of the ids $e_i$ in the first layer: we have $\frac{1}{n} \sum_i \Phi_A(e_i) = \frac{1}{n} \sum_i \Phi_A(e_{\sigma(i)})$ for any permutation $\sigma$. This makes the whole architecture globally equivariant: since $\Phi_{\sigma \cdot A}(\sigma \cdot z) = \sigma \cdot \Phi_A(z)$, it is easy to see that
\begin{align*}
\Psi_{\sigma \cdot A} = \sigma\cdot \Psi_A
\end{align*}
We know that $\Phi_A(z)$ ``tends to'' a (sampling of a) function $\Phi_{W,P}(f)$ when $z$ is a suitable sampling of $f$. Does the same hold for GWNNs? Intuitively, $e_i$ can be seen as a ``Dirac'' at $x_i$, so one would expect $\Psi_A$ to tend to the function
\begin{equation}
\Psi_{W,P} = \rho\circ\left(\tilde \theta^\top\int \Phi_{W,P}(\delta_x) dP(x)\right) + \tilde b
\end{equation}
with probability $1$, as long as ``$\Phi_{W,P}(\delta_x)$'' makes sense.
One can easily define the filtering of Diracs \emph{when there is no order-$0$ term}, by $\mathcal{L}_{W,P} \delta_x = \int \bar W(\cdot, x') d \delta_x(x') = \bar W(\cdot, x) \in \mathcal{B}(\Xx)$, and so on (recall that $\bar W(x,y) = W(x,y)/\sqrt{d_W(x)d_W(y)}$ is just the normalized kernel used in the Laplacian). In fact, we will prove that the contribution of the order-$0$ term in the first layer vanishes. For a filter $h(\lambda) = \sum_{k\geq 0} \beta_k \lambda^k$, we define $h^{\backslash 0} (\lambda) = \sum_{k \geq 0} \beta_{k+1} \lambda^k$, that is, all the coefficients are shifted.
Let $\Phi_A$ be an equivariant GCN. We define the following continuous architecture: for all $x$,
\begin{align*}
f^{(1)}_j (\cdot, x) &= \rho \circ \left( h_{1j}^{(0), \backslash 0}(\mathcal{L}_{W,P})[\bar W(\cdot, x)] + b_j^{(0)}\right) \\
f^{(\ell + 1)}_j (\cdot, x) &= \rho \circ \left( \sum_{i=1}^{d_\ell} h_{ij}^{(\ell)}(\mathcal{L}_{W,P})[f^{(\ell)}_i(\cdot, x)] + b^{(\ell)}_j \right) \\
\Phi_{W,P}(\cdot,x) &= \theta^\top f^{(M)}(\cdot, x) + b\\
\Psi_{W, P} &= \rho\circ\left(\tilde \theta^\top \int \Phi_{W,P}(\cdot, x) dP(x)\right) + \tilde b
\end{align*}
In other words, the network manipulates bivariate functions, takes $\bar W$ as input, the filters at the first layer have no order-$0$ term, and a final integration is performed.
\paragraph{Convergence} We have the same result as before, with different constants.
\begin{theorem}[Convergence \textcolor{red}{to finish}]\label{thm:gwnn}
Take any $\rho>0$, assume $n$ is large enough such that $n \geq ...$, and the sparsity level is such that $\alpha_n \geq O( n^{-1}\log n)$.
Then, with probability at least $1-\rho$,
\begin{align*}
\textup{MSE}_X\pa{\Psi_A, \Psi_{W,P}} &\leq O( n^{-\frac12}) + O( (n\alpha_n)^{-\frac12})
\end{align*}
\end{theorem}
\paragraph{Approximation power} Is this more powerful than before? Yes: GWNNs are more general than GCNs with constant input, since a constant input can be emulated by keeping only the bias in the first layer and setting the weights to $0$. Moreover, the inclusion is strict: it is easy to build a model with a constant degree function, where the output of any GCN is constant but the output of a GWNN is not.
Let us outline an example.
Consider an SBM with two communities: $W_{11} = 1$, $W_{12} = 1/3$, $W_{22} = 2/3$, $\pi_1 = 1/3$, $\pi_2 = 2/3$. The degree function is constant, so a standard GCN cannot distinguish the communities. However, the above architecture can be chosen such that $g_\theta(W_{11}) = 3$, $g_\theta(W_{12}) = 0$, $g_\theta(W_{22}) = 1$, so that $\Psi_{W,P}$ equals $1$ at nodes in community $1$ and $2/3$ at nodes in community $2$, and can therefore easily classify them.
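The degree computation in this example can be checked with exact arithmetic (a trivial sketch):

```python
from fractions import Fraction as F

W = {(1, 1): F(1), (1, 2): F(1, 3), (2, 2): F(2, 3)}
pi = {1: F(1, 3), 2: F(2, 3)}
# expected (normalized) degree of a node in each community
deg = {
    1: W[(1, 1)] * pi[1] + W[(1, 2)] * pi[2],  # = 5/9
    2: W[(1, 2)] * pi[1] + W[(2, 2)] * pi[2],  # = 5/9
}
# the degrees coincide, so degree-based inputs are uninformative here,
# while the kernel values W themselves still separate the communities
```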
\begin{proof}[Proof of Theorem \ref{thm:gwnn}]
We denote by $Z^{(\ell)}(e_q)$ the signal at layer $\ell$ of $\Phi_A(e_q)$. Also define the discrete GCN without the order-$0$ term in the first layer:
\begin{align*}
z^{(0)}(x_q) &= [\bar W(x_i, x_q)]_i = L e_q \\
z^{(1)}_j (x_q) &= \rho \left( h_{1j}^{(0), \backslash 0}(L)z^{(0)}(x_q) + b_j^{(0)}1_n\right) \\
z^{(\ell + 1)}_j (x_q) &= \rho \left( \sum_{i=1}^{d_\ell} h_{ij}^{(\ell)}(L) z^{(\ell)}_i(x_q) + b^{(\ell)}_j 1_n\right)
\end{align*}
and aggregated $Z^{(\ell)}(x_q)$.
We decompose the error:
\begin{align*}
MSE_X(\Psi_A , \Psi_{W,P}) &= \sqrt{\frac{1}{n} \sum_i \abs{\rho\left(\frac{1}{n} \sum_q (Z^{(M)}(e_q) \theta)_i + b\right)\tilde\theta - \rho\left(\int \theta^\top f^{(M)}(x_i, x) dP(x) + b\right) \tilde\theta}^2} \\
&\leq \norm{\theta} \norm{\tilde\theta}\norm{\frac{1}{n} \sum_q \frac{Z^{(M)}(e_q)}{\sqrt{n}} - S_X\int f^{(M)}(\cdot, x) dP(x)}_F \\
&\leq \norm{\theta}\norm{\tilde\theta}\Big( \norm{\frac{1}{n} \sum_q \frac{Z^{(M)}(e_q)}{\sqrt{n}} - \frac{Z^{(M)}(x_q)}{\sqrt{n}}}_F \\
&\quad + \sup_x \norm{\frac{Z^{(M)}(x)}{\sqrt{n}} - S_X f^{(M)}(\cdot, x)}_F \\
&\quad + \sup_y \norm{\frac{1}{n} \sum_q f^{(M)}(y, x_q) - \int f^{(M)}(y, x) dP(x)} \Big)
\end{align*}
So the error has three terms. The first comes from the removal of the order-$0$ term in the first layer, should be trivial. The third is a uniform concentration inequality that can be handled with chaining under proper regularity assumptions. The second is similar to the previous core of the proof, but will probably ask for "bivariate" chaining concentration inequalities, which should be fine.
\paragraph{First error term}
Using tools from previous proof, for each $q$, for $\ell\geq 1$ we bound
\begin{align*}
\norm{Z^{(\ell+1)}(e_q) - Z^{(\ell+1)}(x_q)}_F &= \left(\sum_j \norm{ \rho \left( \sum_i h_{ij}^{(\ell)}(L) z^{(\ell)}_i(e_q) + b_j^{(\ell)}1_n \right) - \rho \left( \sum_i h_{ij}^{(\ell)}(L) z^{(\ell)}_i(x_q) + b_j^{(\ell)}1_n \right) }^2 \right)^\frac12 \\
&\leq \left(\sum_j \norm{ \sum_i h_{ij}^{(\ell)}(L) (z^{(\ell)}_i(e_q) - z^{(\ell)}_i(x_q))}^2 \right)^\frac12 \leq H_2^{(\ell)} \norm{Z^{(\ell)}(e_q) - Z^{(\ell)}(x_q)}_F
\end{align*}
and for $\ell=0$: since $h_{1j}^{(0)}(L) e_q = h_{1j}^{(0),\backslash 0}(L) L e_q + \beta_{1j0} e_q$,
\begin{align*}
\norm{Z^{(1)}(e_q) - Z^{(1)}(x_q)}_F &= \left(\sum_j \norm{ \rho \left( h_{1j}^{(0)}(L) e_q + b_j^{(0)}1_n \right) - \rho \left( h_{1j}^{(0),\backslash 0}(L) L e_q + b_j^{(0)}1_n \right) }^2 \right)^\frac12 \leq \norm{(\beta_{1 j 0})_j} \eqdef C_0
\end{align*}
So, the first error is bounded by $\frac{C_0\prod H_2^{(\ell)}}{\sqrt{n}}$.
\paragraph{Third error term}
The third error can be bounded using chaining. Applying Lemma \ref{lem:chaining-pointwise}, if $f^{(M)}$ is $C_f$-bounded and $(n_f, C'_f)$-Lipschitz, with probability $1-\rho$ the third error term is bounded by (\textcolor{red}{Todo: compute constants, clean multivariate chaining})
\begin{equation}
\frac{C'_f \sqrt{d_x} + (C_f + C'_f)\sqrt{\log\frac{n_f}{\rho}}}{\sqrt{n}}
\end{equation}
\paragraph{Second error term} This can be done with chaining on the product space.
\end{proof}
\section{Introduction}
Graph Convolutional Networks (GCNs~\cite{Bruna2013, Defferrard2016, Kipf2017}) are deep architectures defined on graphs inspired by classical Convolutional Neural Networks (CNNs~\cite{lecun1989backpropagation}). In the past few years, they have been successfully applied to, for instance, node clustering \citep{Chen2019c}, semi-supervised learning \citep{Kipf2017}, or graph regression \citep{Kearnes2016, Gilmer2017}, and remain one of the most popular variants of Graph Neural Networks (GNNs). We refer the reader to the review papers \citep{Bronstein2017, Wu2019a} for more details.
Many recent results have improved the theoretical understanding of GNNs. While some architectures have been shown to be universal \citep{Maron2019a, Keriven2019} but not implementable in practice, several studies have characterized GNNs according to their power to distinguish (or not) graph \emph{isomorphisms} \citep{Xu2018, Chen2019, Maron2019b} or compute combinatorial graph parameters \citep{Chen2020}. However, such notions usually become moot for large graphs, which are almost never isomorphic to each other, but for which GCNs have proved to be successful in identifying large-scale structures nonetheless, \eg for segmentation or spectral clustering \cite{Chen2019c}. Under this light, a relevant notion is that of \emph{stability}: since GCNs are trained then tested on different (large) graphs, how much does a change in the graph structure affect its predictions? In the context of signals defined on Euclidean domains, including images or audio, convolutional representations such as scattering transforms or certain CNN architectures have been shown to be stable to \emph{spatial deformations}~\citep{Mallat2012,Bietti2017,Qiu2018}. However the notion of deformations is not well-defined on discrete graphs, and most stability studies for GCNs use purely discrete metrics that are less intuitive for capturing natural changes in structure~\citep{Gama2018,Gama2019a,Zou2019}.
In statistics and machine learning, there is a long history of modelling large graphs with random models, see for instance \cite{Bollobas2001, Goldenberg2009, Kolaczyk2010a, Matias2014} and references therein for reviews. \emph{Latent space models} represent each node as a vector of latent variables and independently connect the nodes according to a \emph{similarity kernel} applied to their latent representations. This large family of random graph models includes for instance Stochastic Block Models (SBM) \citep{Holland1983}, graphons \cite{Lovasz2012}, random geometric graphs \cite{Penrose2008}, or $\varepsilon$-graphs \cite{Calder2019}, among many others \cite{Matias2014}. A key parameter in such models is the so-called \emph{sparsity factor} $\alpha_n$ that controls the number of edges in $\mathcal{O}(n^2\alpha_n)$ with respect to the number of nodes $n$. The \emph{dense} case $\alpha_n \sim 1$ is the easiest to analyze, but often not realistic for real-world graphs. On the contrary, many questions are still open in the \emph{sparse} case $\alpha_n \sim 1/n$ \citep{Abbe2018}. A middle ground, which will be the setting for our analysis, is the so-called \emph{relatively sparse} case $\alpha_n \sim \log n/n$, for which several non-trivial results are known \citep{Lei2015, Keriven2020}, while being more realistic than the dense case.
\paragraph{Outline and contributions.} In this paper, we analyze the convergence and stability properties of GCNs on large random graphs. We define a ``continuous'' counterpart to discrete GCNs acting on graph models in Section \ref{sec:notations}, study notions of invariance and equivariance to isomorphism of random graph models, and give convergence results when the number of nodes grows in Section \ref{sec:conv}. In particular, our results are fully non-asymptotic, valid for relatively sparse random graphs, and unlike many studies \cite{VonLuxburg2008, Rosasco2010} we do not assume that the similarity kernel is smooth or bounded away from zero. In Section \ref{sec:stability}, we analyze the stability of GCNs to small deformation of the underlying random graph model. Similar to CNNs \citep{Mallat2012, Bietti2017}, studying GCNs in the continuous world allows us to define intuitive notions of model deformations and characterize their stability. Interestingly, for GCNs equivariant to permutation, we relate existing discrete notions of distance between graph signals to a Wasserstein-type metric between the corresponding continuous representations, which to our knowledge did not appear in the literature before.
\paragraph{Related work on large-scale random graphs.} There is a long history of studying the convergence of graph-related objects on large random graphs. A large body of work examines the convergence of the eigenstructures of the graph adjacency matrix or Laplacian in the context of spectral clustering \cite{Belkin2007,VonLuxburg2008,Lei2015,Tang2018a} or learning with operators \cite{Rosasco2010}. The theory of graphons \cite{Lovasz2012} defines (dense) graph limits for more general metrics, which is also shown to lead to spectral convergence \cite{Diao2016}.
Closer to our work, notions of Graph Signal Processing (GSP) such as the graph Fourier Transform have been extended to graphons \cite{Ruiz2020a} or sampling of general Laplacian operators \cite{Levie2019}. Partial results on the capacity of GCNs to distinguish dense graphons are derived in \cite{Magner2020}, however their analysis based on random walks differs greatly from ours.
In general, many of these studies are asymptotic \cite{VonLuxburg2008, Ruiz2020a}, valid only in the dense case \cite{VonLuxburg2008, Rosasco2010, Ruiz2020a, Levie2019, Magner2020}, or assume kernels that are smooth or bounded away from zero \cite{Rosasco2010}, and thus exclude several important cases such as SBMs, $\varepsilon$-graphs, and non-dense graphs altogether. By specifying models of (relatively sparse) random graphs, we derive non-asymptotic, fully explicit bounds with relaxed hypotheses.
\paragraph{Related work on stability.}
The study of stability to deformations has been pioneered by Mallat~\cite{Mallat2012} in the context of the scattering transform for signals on Euclidean domains such as images or audio signals~\cite{bruna2013invariant,anden2014deep},
and was later extended to more generic CNN architectures~\cite{Bietti2017,Qiu2018}.
A more recent line of work has studied stability properties of GCNs or scattering representations on discrete graphs, by considering certain well-chosen discrete perturbations and metrics~\citep{Gama2018,Gama2019,Gama2019a,Zou2019}, which may however have limited interpretability without an underlying model.
In contrast, our continuous setup allows us to define more intuitive geometric perturbations based on deformations of random graph models and to obtain deformation stability bounds that are similar to those on Euclidean domains~\cite{Mallat2012}.
We note that~\cite{Levie2019} also considers GCN representations with continuous graph models, but the authors focus on the different notion of ``transferability'' of graph filters on different discretizations of the \emph{same} underlying continuous graph structure, while we consider \emph{explicit deformations} of this underlying structure and obtain non-asymptotic bounds for the resulting random graphs.
\section{Conclusion and outlooks}
GCNs have proved to be efficient in identifying large-scale structures in graphs and generalizing across graphs of different sizes, which can only partially be explained with discrete graph notions like isomorphisms \cite{Xu2018} or stability with permutation-minimizing metrics \cite{Gama2019}. In contrast, we have shown that combining them with random models of large graphs allows us to define intuitive notions of deformations and stability in the continuous world, as in the Euclidean case \cite{Mallat2012,Bietti2017,Qiu2018}, with direct applications in community-based social networks or shape analysis on point clouds. For this we derived non-asymptotic convergence bounds, valid on relatively sparse random graphs with non-smooth kernels, and new tools like a Wasserstein-type stability bound on equivariant c-GCNs.
We believe our work to be a first step toward a better understanding of GNNs on large graphs, with many potential outlooks.
First, it would be useful to improve the dependence of our bounds on regularity properties of the filters, as done in~\cite{Gama2019a} for the discrete setting, while preserving the mild dependence on the number of filters.
In the same vein, finer results may be obtained in particular cases: \eg the case where $\Xx$ is a sub-manifold can be studied under the light of Riemannian geometry, stability bounds on SBMs may be expressed with a direct dependence on their parameters, or more explicit stability bounds may be obtained when the (c-)GCN is a structured architecture like the scattering transform on graphs \cite{Gama2018}. Convergence results can also be obtained for many other models of random graphs like $k$-Nearest Neighbor graphs \cite{Calder2019}.
Finally, while we focus on stability in this paper, as mentioned above the \emph{approximation power} of GCNs (beyond untractable universality \cite{Keriven2019}) can also be expressed through that of their continuous counterpart, and characterizing which functions are computable by a c-GCN (\eg with growing width or number of layers) is of foremost importance.
\clearpage
\section*{Broader Impact}
Graph Neural Networks have been used in many applications, including computer vision, generative models in NLP, or protein prediction, to name only a few.
Thus, our work is included in a wide literature whose societal impact and ethical considerations are not one-sided.
We provide here a theoretical understanding of the behaviour of GCNs on large random graphs, the most natural application being community detection in social sciences~\citep{Barabasi2016}.
Our contributions, being of a theoretical nature, are in our opinion far from having a direct impact.
We do not see continuous GCNs being applied directly in the foreseeable future, other than as a proxy for the study of classical GCNs.
Nevertheless, as a stability result, it is a step toward handling adversarial attacks, as highlighted in~\cite{Gama2019a}.
\section*{Acknowledgements}
AB acknowledges support from the European Research Council (grant SEQUOIA 724063). SV is partly supported by ANR JCJC GraVa (ANR-18-CE40-0005).
\small
\bibliographystyle{myabbrvnat}
\section{Preliminaries}\label{sec:notations}
\paragraph{Notations.}
The norm $\norm{\cdot}$ is the Euclidean norm for vectors and the spectral norm for matrices, and $\norm{\cdot}_F$ is the Frobenius norm.
We denote by $\mathcal{B}(\Xx)$ the space of bounded real-valued functions on $\Xx$ equipped with the norm $\norm{f}_\infty = \sup_x \abs{f(x)}$.
Given a probability distribution $P$ on $\Xx$, we denote by $L^2(P)$ the Hilbert space of $P$-square-integrable functions endowed with its canonical inner product.
For multivariate functions $f=[f_1,\ldots, f_d]$ and any norm $\norm{\cdot}$, we define $\norm{f} = (\sum_{i=1}^d \norm{f_i}^2)^\frac12$.
For two probability distributions $P,Q$ on $\RR^d$, we define the Wasserstein-2 distance $\mathcal{W}_2^2(P,Q) = \inf\lbrace\mathbb{E}\norm{X-Y}^2~|~X \sim P, Y\sim Q\rbrace$, where the infimum is over all joint distributions of $(X,Y)$. We denote by $f_\sharp P$ the push-forward of $P$ by $f$, that is, the distribution of $f(X)$ when $X \sim P$.
A graph $G = (A,Z)$ with $n$ nodes is represented by a symmetric adjacency matrix $A \in \{0,1\}^{n\times n}$ such that $a_{ij} = 1$ if there is an edge between nodes $i$ and $j$, and a matrix of signals over the nodes $Z \in \RR^{n \times d_z}$, where $z_i \in \RR^{d_z}$ is the vector signal at node $i$.
We define the normalized Laplacian matrix as $L=L(A) = D(A)^{-\frac12} A D(A)^{-\frac12}$, where $D(A) = \diag(A 1_n)$ is the degree matrix, and $(D(A)^{-\frac12})_i = 0$ if $D(A)_i = 0$. The normalized Laplacian is often defined as $\Id - L$ in the literature; however, this does not change the considered networks, since the filters include a term of order $0$.
\paragraph{Graph Convolutional Networks (GCN).}
GCNs alternate between filters on graph signals and pointwise non-linearities.
We use analytic filters (said to be of order $k$ if $\beta_{\ell}=0$ for $\ell\geq k+1$):
\begin{equation}\label{eq:def_filter}
h:\RR\to\RR,\quad h(\lambda) = \textstyle \sum_{k\geq 0} \beta_k \lambda^k .
\end{equation}
We write $h(L) \!=\! \sum_k \beta_k L^k$, \ie we apply $h$ to the eigenvalues of $L$ when it is diagonalizable.
A GCN with $M$ layers is defined as follows.
The signal at the input layer is $Z^{(0)} = Z$ with dimension $d_0=d_z$ and columns $z_j^{(0)} \in\RR^{n}$.
Then, at layer $\ell$, the signal $Z^{(\ell)} \in \RR^{n \times d_\ell}$ with columns $z_j^{(\ell)} \in \RR^{n}$ is propagated as follows:
\begin{equation}\label{eq:GCN_discrete}
\mathsmaller{\forall j = 1,\ldots d_{\ell+1}, \quad z^{(\ell+1)}_j = \rho\pa{\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(L) z^{(\ell)}_i + b_{j}^{(\ell)} 1_n } \in \RR^{n}},
\end{equation}
where $h^{(\ell)}_{ij}(\lambda) = \sum_k \beta_{ijk}^{(\ell)} \lambda^k$ are learnable analytic filters, $b^{(\ell)}_j \in \RR$ are learnable biases, and the activation function $\rho:\RR \to \RR$ is applied pointwise.
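For order-1 filters, one propagation step of \eqref{eq:GCN_discrete} reduces to two matrix products. The sketch below uses ReLU and random parameters purely for illustration; the function and argument names are our own:

```python
import numpy as np

def gcn_layer(L, Z, beta0, beta1, b):
    """One step of eq. (2) with order-1 filters h_ij(lam) = beta0_ij + beta1_ij * lam:
    each output column is rho(sum_i h_ij(L) z_i + b_j 1), here with rho = ReLU.
    Z: (n, d_in); beta0, beta1: (d_in, d_out); b: (d_out,)."""
    return np.maximum(Z @ beta0 + (L @ Z) @ beta1 + b, 0.0)

rng = np.random.default_rng(1)
n, d_in, d_out = 5, 3, 4
S = rng.random((n, n))
L = (S + S.T) / 2          # symmetric stand-in for a normalized Laplacian
Z = rng.standard_normal((n, d_in))
out = gcn_layer(L, Z, rng.standard_normal((d_in, d_out)),
                rng.standard_normal((d_in, d_out)), np.zeros(d_out))
print(out.shape)  # (5, 4)
```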
Once the signal at the final layer $Z^{(M)}$ is obtained, the output of the entire GCN is either a signal over the nodes denoted by $\Phi_A(Z) \in \RR^{n \times d_{out}}$ or a single vector denoted by $\bar{\Phi}_A(Z) \in \RR^{d_{out}}$ obtained with an additional pooling over the nodes:
\begin{equation}\label{eq:GCN_discrete_output}
\mathsmaller{\Phi_A(Z) \eqdef Z^{(M)} \theta + 1_n b^\top , \quad \bar{\Phi}_A(Z) \eqdef \frac{1}{n}\sum_{i=1}^n \Phi_A(Z)_i} ,
\end{equation}
where $\theta \in \RR^{d_M \times d_{out}}$, $b \in \RR^{d_{out}}$ are the final layer weights and bias, and $\Phi_A(Z)_i\in \RR^{d_{out}}$ is the output signal at node $i$. This general model of GCN encompasses several models from the literature, including all spectral-based GCNs \cite{Bruna2013, Defferrard2016} and GCNs with order-$1$ filters \cite{Kipf2017}, which can be viewed as message-passing networks \cite{Gilmer2017}; see \cite{Wu2019a, Bronstein2017} for reviews. For message-passing networks, note that almost all our results would remain valid if the sum over neighbors were replaced by another aggregation function such as $\max$.
We assume (true for ReLU, modulus, or sigmoid) that the function $\rho$ satisfies:
\begin{equation}\label{eq:rho}
\abs{\rho(x)} \leq \abs{x}, \quad \abs{\rho(x) - \rho(y)} \leq \abs{x-y} .
\end{equation}
Two graphs $G=(A,Z)$, $G'=(A',Z')$ are said to be \emph{isomorphic} if one can be obtained from the other by relabelling the nodes. In other words, there exists a \emph{permutation matrix} $\sigma \in \Sigma_n$, where $\Sigma_n$ is the set of all permutation matrices, such that $A = \sigma \cdot A' \eqdef \sigma A' \sigma^\top$ and $Z = \sigma \cdot Z' \eqdef \sigma Z'$, where ``$\sigma \cdot\!\phantom{.}$'' is a common notation for permuted matrices or signal over nodes.
In graph theory, functions that are \emph{invariant} or \emph{equivariant} to permutations are of primary importance (respectively, permuting the input graph does not change the output, or permutes the output). These properties are hard-coded in the structure of GCNs, as shown by the following proposition (proof in Appendix \ref{app:permut}).
\begin{proposition}\label{prop:permut}
We have $\Phi_{\sigma \cdot A} (\sigma \cdot Z) = \sigma \cdot \Phi_A(Z)$ and $\bar{\Phi}_{\sigma \cdot A} (\sigma \cdot Z) = \bar{\Phi}_A(Z)$.
\end{proposition}
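Proposition \ref{prop:permut} can be checked numerically: permuting both the adjacency matrix and the input signal permutes the output. The one-layer `tiny_gcn` below is a minimal stand-in for $\Phi_A$, with hypothetical random weights:

```python
import numpy as np

def tiny_gcn(A, Z, W0, W1):
    """Minimal equivariant GCN stand-in for Phi_A: one ReLU layer with an
    order-1 filter on the normalized Laplacian L = D^{-1/2} A D^{-1/2}."""
    deg = A.sum(1)
    with np.errstate(divide="ignore"):
        d = np.where(deg > 0, deg ** -0.5, 0.0)
    L = d[:, None] * A * d[None, :]
    return np.maximum(Z @ W0 + (L @ Z) @ W1, 0.0)

rng = np.random.default_rng(2)
n, dz, dout = 6, 2, 3
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T
Z = rng.standard_normal((n, dz))
W0, W1 = rng.standard_normal((dz, dout)), rng.standard_normal((dz, dout))
perm = rng.permutation(n)
# Phi_{sigma . A}(sigma . Z) == sigma . Phi_A(Z):
lhs = tiny_gcn(A[np.ix_(perm, perm)], Z[perm], W0, W1)
rhs = tiny_gcn(A, Z, W0, W1)[perm]
print(bool(np.allclose(lhs, rhs)))  # True
```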
\paragraph{Random graphs.}
Let $(\Xx, d)$ be a compact metric space.
In this paper, we consider latent space graph models where each node $i$ is represented by an unobserved latent variable $x_i \in \Xx$, and nodes are connected randomly according to some \emph{similarity kernel}. While the traditional graphon model~\cite{Lovasz2012} considers (without loss of generality) $\Xx = [0,1]$, it is often more intuitive to allow general spaces to represent meaningful variables \cite{DeCastro2017}. We consider that the observed signal $z_i \in \RR^{d_z}$ is a function of the latent variable $x_i$, without noise for now. In detail, a \emph{random graph model} $\Gamma=(P,W,f)$ is represented by a probability distribution $P$ over $\Xx$, a symmetric kernel $W:\Xx\times \Xx \to [0,1]$ and a bounded function $f:\Xx\to \RR^{d_z}$.
A random graph $G$ with $n$ nodes is then generated as follows:
\begin{align}
&\forall j<i\leq n:\quad x_i \stackrel{iid}{\sim} P, \quad z_{i} = f(x_i), \quad a_{ij} \sim \Ber(\alpha_n W(x_i, x_j)) , \label{eq:rg_model}
\end{align}
where $\Ber$ is the Bernoulli distribution. We define $d_{W,P} \eqdef \int W(\cdot, x) d P(x)$ the \emph{degree function} of $\Gamma$.
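The generative process \eqref{eq:rg_model} is straightforward to simulate. The sketch below picks a hypothetical latent space (the unit circle in $\RR^2$), a translation-invariant Gaussian kernel, and $f(x) = x_1$; all three are illustrative choices, not prescribed by the model:

```python
import numpy as np

def sample_graph(n, alpha, rng):
    """Sample a graph from eq. (4): latent points on the unit circle in R^2,
    a translation-invariant Gaussian kernel W in [0,1], signal f(x) = x_1."""
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    X = np.stack([np.cos(t), np.sin(t)], axis=1)
    W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
    U = rng.random((n, n))
    A = (np.triu(U, 1) < alpha * np.triu(W, 1)).astype(float)
    A = A + A.T                      # symmetric adjacency, empty diagonal
    return A, X[:, :1]               # (adjacency, node signals z_i = f(x_i))

rng = np.random.default_rng(3)
n = 200
A, Z = sample_graph(n, alpha=20 * np.log(n) / n, rng=rng)  # relatively sparse
print(A.shape, bool(np.allclose(A, A.T)))  # (200, 200) True
```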
As outlined in the introduction, the sparsity factor $\alpha_n \in [0,1]$ plays a key role. The so-called \emph{relatively sparse} case $\alpha_n \sim \frac{\log n}{n}$ will be the setting for our analysis.
Let us immediately make some assumptions that will hold throughout the paper. We denote by $N(\Xx, \varepsilon, d)$ the $\varepsilon$-covering numbers (that is, the minimal number of balls of radius $\varepsilon$ required to cover $\Xx$) of $\Xx$, and assume that they can be bounded as $N(\Xx, \varepsilon, d) \leq \varepsilon^{-d_x}$ for some constant $d_x>0$ (called the \emph{Minkowski} dimension of $\Xx$), and that $\text{diam}(\Xx)\leq 1$. Both conditions can be obtained by a rescaling of the metric $d$.
Let $c_{\min}, c_{\max}>0$ be constants. A function $f:\Xx \!\to\! \RR$ is said to be $(c_{\textup{Lip.}},n_\Xx)$-piecewise Lipschitz if there is a partition $\Xx_1, \ldots, \Xx_{n_\Xx}$ of $\Xx$ such that, for all $x,x'$ in the same $\Xx_i$, we have $\abs{f(x)-f(x')}\leq c_{\textup{Lip.}} d(x,x')$. All considered random graph models $\Gamma = (P,W,f)$ satisfy that for all $x\in\Xx$,
\begin{align}
\norm{W(\cdot,x)}_\infty \leq c_{\max},\quad d_{W,P}(x) \geq c_{\min}, \quad \text{$W(\cdot,x)$ is $(c_{\textup{Lip.}},n_\Xx)$-piecewise Lipschitz.} \label{eq:assumption_RG}
\end{align}
Unlike other studies \cite{VonLuxburg2008, Rosasco2010}, we do \emph{not} assume that $W$ itself is bounded away from $0$ or smooth, and thus include important cases such as SBMs (piecewise constant $W$) and $\varepsilon$-graphs (threshold kernels).
\paragraph{Continuous GCNs.} Since $d_{W,P} > 0$, we define the normalized Laplacian operator $\mathcal{L}_{W, P}$ by
\begin{equation}\label{eq:def_laplacian_operator_P}
\mathcal{L}_{W,P}f \eqdef \int \frac{W(\cdot, x)}{\sqrt{d_{W,P}(\cdot) d_{W,P}(x)}}f(x)dP(x) .
\end{equation}
Analytic filters on operators are $h(\mathcal{L}) = \sum_k \beta_k \mathcal{L}^k$, with $\mathcal{L}^k = \mathcal{L} \circ \ldots \circ \mathcal{L}$. We do \emph{not} assume that the filters are of finite order (even if they usually are in practice \cite{Defferrard2016}), however we will always assume that $\sum_{k} k \abs{\beta_k} \pa{2c_{\max}/ c_{\min}}^k$ converges.
Similar to the discrete case, we define continuous GCNs (c-GCN) that act on random graph models, by replacing the input signal $Z$ with $f$, the Laplacian $L$ by $\mathcal{L}_{W,P}$, and propagating functions instead of node signals \ie we take $f^{(0)} = f$ the input function with coordinates $f^{(0)}_1, \ldots, f^{(0)}_{d_z}$ and:
\begin{equation}\label{eq:GCN_cont}
\mathsmaller{\forall j=1,\ldots, d_{\ell+1}, \quad f^{(\ell+1)}_j = \rho \circ \pa{\sum_{i=1}^{d_\ell} h^{(\ell)}_{ij}(\mathcal{L}_{W,P}) f^{(\ell)}_i + b_{j}^{(\ell)}1(\cdot) }} ,
\end{equation}
where $1(\cdot)$ represents the constant function $1$ on $\Xx$.
Once the final layer function $f^{(M)}:\Xx \to \RR^{d_M}$ is obtained, the output of the c-GCN is defined as in the discrete case, either as a multivariate function $\Phi_{W,P}(f):\Xx \to \RR^{d_{out}}$ or a single vector $\bar{\Phi}_{W,P}(f)\in \RR^{d_{out}}$ obtained by pooling:
\begin{equation}\label{eq:GCN_cont_output}
\Phi_{W,P}(f) \eqdef \theta^\top f^{(M)} + b 1(\cdot) , \quad \bar{\Phi}_{W,P}(f) = \int \Phi_{W,P}(f)(x)dP(x) .
\end{equation}
Hence the \emph{same} parameters $\{(\beta_{ijk}^{(\ell)}, b_j^{(\ell)})_{ijk\ell}, \theta, b \}$ define both a discrete and a continuous GCN, the latter being (generally) not implementable in practice but useful to analyze their discrete counterpart.
For a random graph model $\Gamma = (P, W, f)$, and any invertible map $\phi: \Xx \to \Xx$, we define $\phi \cdot W \eqdef W(\phi(\cdot), \phi(\cdot))$ and $\phi \cdot f \eqdef f \circ \phi$.
Recalling that $(\phi^{-1})_\sharp P$ is the distribution of $\phi^{-1}(x)$ when $x\sim P$, it is easy to see that $\phi \cdot \Gamma \eqdef ((\phi^{-1})_\sharp P, \phi \cdot W, \phi \cdot f)$ defines the same probability distribution as $\Gamma = (P,W,f)$ over discrete graphs. Therefore, we say that $\Gamma$ and $\phi \cdot \Gamma$ are \emph{isomorphic}, which is a generalization of isomorphic graphons \cite{Lovasz2012} when $P$ is the uniform measure on $[0,1]$. Note that, technically, $\phi$ needs only be invertible on the support of $P$ for the above definitions to hold.
As with discrete graphs, functions on random graph models can be invariant or equivariant, and c-GCNs satisfy these properties (proof in Appendix \ref{app:permut}).
\begin{proposition}\label{prop:permut-c}
For all $\phi$, $\Phi_{\phi \cdot W, (\phi^{-1})_\sharp P} (\phi \cdot f) = \phi \cdot \Phi_{W, P} (f)$ and $\bar{\Phi}_{\phi \cdot W, (\phi^{-1})_\sharp P} (\phi \cdot f) = \bar{\Phi}_{W, P} (f)$.
\end{proposition}
In the rest of the paper, most notation-heavy multiplicative constants are given in the appendix. They depend on $c_{\min}, c_{\max}, c_{\textup{Lip.}}$ and the operator norms of the matrices $B_k^{(\ell)} = (\beta_{ijk}^{(\ell)})_{ij}$.
\section{Numerical experiments}
\begin{figure}
\centering
\includegraphics[height=3cm]{cgcn_conv_illus0crop.png}
\includegraphics[height=3cm]{cgcn_conv_illus1crop.png}
\includegraphics[height=3cm]{cgcn_conv_illus2crop.png}
\includegraphics[height=3cm]{cgcn_convergence.pdf}
\caption{\small Illustration of convergence of a GCN on random graphs with 3D latent positions and input signal $f=1$. Left: output signal with growing number of nodes. Right: convergence with different sparsity levels $\alpha_n$.}
\label{fig:conv}
\end{figure}
In this section, we provide simple numerical experiments on synthetic data that illustrate the convergence and stability of GCNs.
We consider untrained GCNs with random weights in order to assess how these properties result from the choice of architecture rather than learning.
The code is accessible at \url{https://github.com/nkeriven/random-graph-gnn}.
\paragraph{Convergence.} Fig.~\ref{fig:conv} shows the convergence of an equivariant GCN toward a continuous function on a $\varepsilon$-graph with nodes sampled on a 3-dimensional manifold. We take a constant input signal $f=1$ here to only assess the effect of the manifold shape. We then examine the effect of the sparsity level on convergence on the corresponding invariant GCN, taking an average of several experiments with high $n$ and $\alpha_n=1$ as an approximation of the ``true'' unknown limit value. As expected, the convergence is slower for sparse graphs, however we indeed observe convergence to the \emph{same} output for all values of $\alpha_n$.
\paragraph{Stability.} In Fig.~\ref{fig:stab}, we illustrate the stability of GCNs to deformations. We first examine the variations in the output of an equivariant GCN when only re-drawing the random edges, with or without modifying the latent positions (in a deterministic manner). We indeed observe that regions that are only translated, such as the ``flat'' parts of the surface, yield stable output, while deformed regions lead to a deformed output signal. We then verify that a larger deformation leads to a larger distance in output.
\begin{figure}
\centering
\includegraphics[height=3cm]{stab_figcrop.png}
\includegraphics[height=3cm]{stab_fig1crop.png}
\includegraphics[height=3cm]{stab_fig0crop.png}
\includegraphics[height=3cm]{cgcn_stability.pdf}
\caption{\small Illustration of stability of a GCN on random graphs with 3D latent positions. From left to right: output signal; difference with the output on the same latent positions but a new drawing of the random edges; difference with the output on deterministically deformed latent positions and corresponding drawing of random edges; difference in output signal of an invariant GCN with respect to the amplitude of the deformation, averaged over 20 experiments.}
\label{fig:stab}
\end{figure}
\section{Stability of GCNs to model deformations}\label{sec:stability}
Stability to deformations is an essential feature for the generalization properties of deep architectures. \Citet{Mallat2012} studied the stability to small deformations of the wavelet-based scattering transform, which was later extended to more generic learned convolutional networks, \eg~\cite{Bietti2017,Qiu2018}; these works establish bounds of the following form for a signal representation~$\Phi(\cdot)$:
\begin{equation}
\label{eq:classical_stability}
\|\Phi(f_\tau) - \Phi(f)\| \lesssim N(\tau) \|f\|,
\end{equation}
where~$f_\tau(x) = f(x - \tau(x))$ is the deformed signal and~$N(\tau)$ quantifies the size of the deformation, typically through norms of its Jacobian~$\nabla \tau$, such as~$\|\nabla \tau\|_\infty = \sup_x \norm{\nabla \tau(x)}$. As we have seen in the introduction, it is not clear how to extend the notion of deformation to discrete graphs \cite{Gama2018, Gama2019a}. We show here that it can be done in the continuous world.
We first derive generic stability bounds for discrete random graphs involving a Wasserstein-type metric between the corresponding c-GCNs, then derive bounds of the form \eqref{eq:classical_stability} for c-GCNs by studying various notions of deformations of random graph models. We note that ``spatial'' deformations $x \mapsto x - \tau(x)$ are of course not the only possible choice for these models, and leave other types of perturbations to future work.
\paragraph{From discrete to continuous stability.}
We first exploit the previous convergence result to deport the stability analysis from discrete to continuous GCNs.
Let $G_1$ and $G_2$ be two random graphs with $n$ nodes drawn from models $\Gamma_1$ and $\Gamma_2$, and a GCN $\Phi$. In the invariant case, we can directly apply Theorem \ref{thm:conv} and the triangle inequality to obtain that
$
\norm{\bar{\Phi}_{A_1}(Z_1) - \bar{\Phi}_{A_2}(Z_2)} \leq \norm{\bar{\Phi}_{W_1,P_1}(f_1) - \bar{\Phi}_{W_2,P_2}(f_2)} + 2R_n
$, and study the robustness of $\bar{\Phi}_{W,P}(f)$ to deformations of the model.
The equivariant case is more complex. A major difficulty, compared for instance to \cite{Levie2019}, is that, since we consider two different samplings $X_1$ and $X_2$, there is no implicit ordering over the nodes of $G_1$ and $G_2$, and one cannot directly compare the output signals of the equivariant GCN \eg in Frobenius norm.
To compare two graph representations, a standard approach in the study of stability (and graph theory in general) has been to define a metric that minimizes over permutations $\sigma$ of the nodes (\eg\cite{Gama2018,Gama2019a}), thus we define $\text{MSE}_\Sigma(Z,Z') \eqdef \min_{\sigma \in \Sigma_n} (n^{-1} \sum_i \|Z_i - Z'_{\sigma(i)}\|^2)^{1/2}$. Theorem~\ref{thm:stab-eq} relates this to a Wasserstein metric between the c-GCNs (proof in Appendix \ref{app:wass}).
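For small graphs, $\text{MSE}_\Sigma$ can be computed exactly; minimizing over permutations is a linear assignment problem. Brute force suffices for tiny $n$ as below (in practice one would run the Hungarian algorithm, \eg SciPy's `linear_sum_assignment`, on the squared-distance cost matrix); the toy signals are ours:

```python
from itertools import permutations
import numpy as np

def mse_sigma(Z1, Z2):
    """Exact MSE_Sigma(Z, Z') = min over permutations sigma of
    (n^{-1} sum_i ||Z1_i - Z2_{sigma(i)}||^2)^{1/2}.  Brute force over
    permutations; in general this is a linear assignment problem."""
    n = len(Z1)
    best = min(sum(float(np.sum((Z1[i] - Z2[s[i]]) ** 2)) for i in range(n))
               for s in permutations(range(n)))
    return float(np.sqrt(best / n))

Z = np.array([[0.0], [1.0], [2.0]])
print(mse_sigma(Z, Z[[2, 0, 1]]))   # 0.0: a permuted copy is at distance zero
print(mse_sigma(Z, Z + 1.0))        # 1.0: the best matching offsets every atom by 1
```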
\begin{theorem}[Finite-sample stability in the equivariant case]\label{thm:stab-eq}
Adopt the notations of Theorem \ref{thm:conv}. For $r=1,2$, define the distribution $Q_r = \Phi_{W_r,P_r}(f_r)_\sharp P_r$. With probability $1-\rho$, we have
\begin{align}
\mathsmaller{\textup{MSE}_\Sigma\pa{\Phi_{A_1}(Z_1), \Phi_{A_2}(Z_2)} \leq \mathcal{W}_2(Q_1, Q_2) + R_n + C_1 \pa{n^{-\frac{1}{d_{z}}} + \pa{C_2 + \sqrt[4]{\log\frac{1}{\rho}} } n^{-\frac{1}{4}}}} \label{eq:stab-eq}
\end{align}
where $C_1$ and $C_2$ are defined in the appendix. When $f_1$ and $f_2$ are piecewise Lipschitz, the last terms can be replaced by $C'_1 (n^{-1/d_{x}} + (C'_2 + \sqrt[4]{\log(1/\rho)} ) n^{-1/4})$ for some $C'_1, C'_2$.
\end{theorem}
In other words, we express stability in terms of a Wasserstein metric between the push-forwards of the measures $P_r$ by their respective c-GCNs. By definition, the l.h.s.~of~\eqref{eq:stab-eq} is invariant to permutation of the graphs $G_r$. Moreover, for $\phi \in \Sigma_P$ by Prop. \ref{prop:permut-c} we have $\Phi_{\phi\cdot W, P}(\phi \cdot f)_\sharp P = \Phi_{W, P}(f)_\sharp(\phi_\sharp P) = \Phi_{W,P}(f)_\sharp P$, and therefore the r.h.s.~of~\eqref{eq:stab-eq} is also invariant to continuous permutation $\phi$.
We recover the rate $R_n$ from Theorem \ref{thm:conv}, as well as a term in $1/n^{1/4}$ and a term that depends on the dimension. In the relatively sparse case, the term in $1/\sqrt{\alpha_n n}$ in $R_n$ still has the slowest convergence rate. The proof uses classic manipulations in Optimal Transport \cite{Peyre2019}, as well as concentration results of empirical distributions in Wasserstein norm \cite{Weed2017}. In particular, it is known that the latter yields convergence rates that degrade with the dimension, of order $n^{-1/d}$. While the $Q_r$'s live in $\RR^{d_z}$, when the c-GCNs are Lipschitz we can replace $d_z$ by the Minkowski dimension of $\Xx$, which may be advantageous when $\Xx$ is a low-dimensional manifold.
In the rest of this section, we analyze the stability of c-GCNs to deformation of random graph models, directly through the Wasserstein bound above (or simple Euclidean norm in the invariant case). Finite-sample bounds are then obtained with Theorem \ref{thm:conv} and \ref{thm:stab-eq}.
\paragraph{Stability of continuous GCNs: assumptions.}
Assume from now on that $\Xx \subset \RR^{d}$ is equipped with the Euclidean norm. Given a diffeomorphism~$\tau : \Xx \to \Xx$, we consider spatial deformations of random graph models of the form $(\Id - \tau)$, and aim at obtaining bounds of the form \eqref{eq:classical_stability} for c-GCNs. Given a reference random graph model~$\Gamma = (P, W, f)$, we may consider perturbations to~$P$,~$W$, or~$f$, and thus define $W_\tau \eqdef (\Id- \tau) \cdot W$, $P_\tau \eqdef (\Id - \tau)_\sharp P$ and $f_\tau \eqdef (\Id - \tau) \cdot f$. Of course, after deformation, we still assume that the conditions \eqref{eq:assumption_RG} on our random graph models hold. As can be expected, translation-invariant kernels~$W$ such as Gaussian kernels or $\varepsilon$-graph kernels are particularly adapted to such deformations, therefore we will often make the following assumption:
\begin{equation}
W(x, x') = w(x - x'), \quad C_{\nabla w} \eqdef \sup_{x \in \Xx} \int \norm{\nabla w\pa{\tfrac{x - x'}{2}}} \cdot \norm{x' - x} d P(x') < \infty . \tag{A1} \label{eq:assume_TI}
\end{equation}
We also define $C_W \eqdef \sup_x \int |W(x, x')| d P(x')\leq c_{\max}$. While $C_W$ and~$C_{\nabla w}$ are easily bounded when~$W, \nabla w$ are bounded, they are typically much smaller than such naive bounds when~$W$ and~$\nabla w$ are well localized in space with fast decays, \eg for the Gaussian kernel or a smooth $\varepsilon$-graph kernel with compact support (for instance, in the latter case, $C_W$ is proportional to $\varepsilon c_{\max}$ instead of $c_{\max}$).
In the case where $P$ is replaced by $P_\tau$, some of our results will be valid beyond translation-invariant kernels. We will instead assume that $P_\tau$ has a density with respect to $P$, close to one: for all $x$,
\begin{equation}
\mathsmaller{q_\tau(x) \eqdef \frac{d P_\tau}{d P}(x), \quad q_\tau(x), q_\tau(x)^{-1} \leq C_{P_\tau} < \infty, \quad N_{P}(\tau) \eqdef \norm{ q_\tau - 1 }_\infty} . \tag{A2} \label{eq:assume_density}
\end{equation}
When~$(\Id - \tau) \in \Sigma_P$, then we have~$N_P(\tau) = 0$, so that~$N_P(\tau)$ measures how much it deviates from such neutral elements and quantifies the size of deformations.
In particular, when~$P$ is proportional to the Lebesgue measure and~$\|\nabla \tau\|_\infty < 1$, we have $q_\tau(x) = \det(I - \nabla \tau(x))^{-1}$; then, for small enough~$\|\nabla \tau\|_\infty$, we obtain~$N_{P}(\tau) \lesssim d\|\nabla \tau\|_\infty$, recovering the more standard quantity of~\citet{Mallat2012}.
In this case, we also have the bound~$C_{P_\tau} \leq 2^d$ if we assume~$\|\nabla \tau\|_\infty \leq 1/2$.
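The identity $q_\tau(x) = \det(I - \nabla \tau(x))^{-1}$ and the first-order bound $N_P(\tau) \lesssim d \|\nabla \tau\|_\infty$ can be sanity-checked numerically on the linear deformation $\tau(x) = \varepsilon x$ (a toy choice of ours), whose Jacobian $\nabla \tau = \varepsilon I$ is constant:

```python
import numpy as np

# For tau(x) = eps * x, grad tau = eps * I, so q_tau is a constant:
d, eps = 3, 0.05
grad_tau = eps * np.eye(d)
q = 1.0 / np.linalg.det(np.eye(d) - grad_tau)   # = (1 - eps)^(-d)
print(round(q, 6))                       # 1.166381
print(round(abs(q - 1.0), 4), round(d * eps, 4))  # 0.1664 0.15
# |q - 1| is close to the first-order bound d * eps for this small deformation.
```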
In the rest of the section, we will assume for simplicity that the considered GCNs $\Phi$ have zero bias at each layer. Unless otherwise written, $\norm{f}$ refers to $L^2(P)$-norm. All the proofs are in Appendix \ref{app:stability}.
\paragraph{Deformation of translation-invariant kernels.}
We first consider applying deformations to the kernel~$W$, which amounts to a perturbation to the edge structure of the graph.
For GCNs, this affects the Laplacian operator used for the filters, and could be seen as a perturbation of the ``graph shift operator'' in the framework of~\citet{Gama2019a}.
The following result shows that in this case the stability of GCN representations, both invariant and equivariant, is controlled by~$\|\nabla \tau\|_\infty$.
\begin{theorem}[Kernel deformation]
\label{thm:deform_w}
Consider a GCN representation~$\Phi$ with no bias and a random graph~$\Gamma = (P, W, f)$. Define $Q = \Phi_{W,P}(f)_\sharp P$ and $Q_\tau = \Phi_{W_\tau,P}(f)_\sharp P$.
Assume~\eqref{eq:assume_TI} and~$\|\nabla \tau\|_\infty \leq 1/2$.
We have
\begin{equation}
\label{eq:deform_w}
\left. \begin{matrix*}[r]
\norm{\bar{\Phi}_{W_\tau,P}(f) - \bar{\Phi}_{W,P}(f)} \\
\mathcal{W}_2(Q,Q_\tau)
\end{matrix*}\right\rbrace \leq C (C_W + C_{\nabla w}) \|f\| \|\nabla \tau\|_\infty ,
\end{equation}
where~$C$ is given in the appendix.
\end{theorem}
\paragraph{Deformation of the distribution.}
Let us now consider perturbations of~$P$ to $P_\tau$, which corresponds to a change in the node distribution. In practice, this may correspond to several, fairly different, ``practical'' situations. We describe two different frameworks below.
In shape analysis, $P$ may be supported on a manifold, and $P_\tau$ can then represent a deformation of this manifold, \eg a character that rigidly moves a body part. In this case in particular, we can expect $\norm{\tau}_\infty$ to be large, but $\norm{\nabla \tau}_\infty$ to be small (\ie large translation but small deformation). Moreover, if the kernel is translation-invariant, there will be little change in the structure of the generated graph. If additionally the input signal of the c-GCN is approximately deformed along with~$P$, then one can expect the outputs to be stable, which we prove in the following theorem.
\begin{theorem}[Distribution deformation, translation-invariant case]
\label{thm:deform_p_TI}
Consider a GCN representation~$\Phi$ with no bias and a random graph~$\Gamma = (P, W, f)$, along with a function~$f'$. Define $Q = \Phi_{W,P}(f)_\sharp P$ and $Q_\tau = \Phi_{W,P_\tau}(f')_\sharp P_\tau$.
Assume~\eqref{eq:assume_TI} and~$\|\nabla \tau\|_\infty \leq 1/2$. We have
\begin{equation}
\left. \begin{matrix*}[r]
\norm{\bar{\Phi}_{W,P}(f) - \bar{\Phi}_{W,P_\tau}(f')} \\
\mathcal{W}_2(Q,Q_\tau)
\end{matrix*}\right\rbrace \leq C (C_W + C_{\nabla w}) \|f\| \|\nabla \tau\|_\infty + C' \norm{f'_\tau - f},
\end{equation}
where~$C,C'$ are given in the appendix.
\end{theorem}
When $f=f'$ are both constant, or when $f' = (\Id-\tau)^{-1} \cdot f$, that is,~$f'$ is the mapping of the original signal~$f$ on the deformed structure, then we have $\norm{f'_\tau - f}=0$. As mentioned before, in the absence of input signal, a standard choice is to take the degree functions as inputs \cite{Bruna2013, Chen2019c}. The next result shows that this choice also leads to the desired stability.
\begin{proposition}\label{prop:degree-NF}
Assume~\eqref{eq:assume_TI} and~$\|\nabla \tau\|_\infty \leq 1/2$. If $f=d_{W,P}$ and $f' = d_{W,P_\tau}$, then we have $\norm{f'_\tau - f} \leq C_{\nabla w} \norm{\nabla \tau}_\infty$.
\end{proposition}
Let us now take a look at the case where $W$ is not translation-invariant. We will then assume that $P_ \tau$ has a density with respect to $P$, and in particular that it has the same support: one may for instance imagine a social network with a slightly changing distribution of user preferences, SBMs with changing community sizes, geometric random graphs \cite{Penrose2008}, or graphons \cite{Lovasz2012}. The analysis here being slightly more complex, we focus on invariant c-GCNs.
\begin{theorem}[Distribution deformation, non-translation-invariant case]
\label{thm:deform_p}
Consider a GCN representation~$\Phi$ with no bias and a random graph~$\Gamma = (P, W, f)$.
Assume~\eqref{eq:assume_density}. We have
\begin{equation}
\label{eq:deform_p}
\norm{\bar{\Phi}_{W,P}(f) - \bar{\Phi}_{W,P_\tau}(f)} \leq \pa{C C_{P_\tau}^3 C_W + C'} \|f\| N_P(\tau),
\end{equation}
where~$C, C'$ are given in the appendix.
\end{theorem}
As mentioned above, in the case where $P$ is the Lebesgue measure, \eg for graphons \cite{Lovasz2012}, then we recover the quantity $N_P(\tau) \lesssim d\norm{\nabla \tau}_\infty$.
\paragraph{Deformations of the signal.}
Finally, we consider deformations of the signal on the graph and show a bound similar to the ones in the Euclidean case~\eqref{eq:classical_stability}. As can be seen in the proofs, this case is in fact a combination of the previous results~\eqref{eq:deform_w} and~\eqref{eq:deform_p}; hence we must assume both \eqref{eq:assume_TI} and \eqref{eq:assume_density} and obtain a dependence on both $\norm{\nabla \tau}_\infty$ and $N_P(\tau)$. Once again we focus on invariant c-GCNs with pooling, similar to classical scattering transform \cite{Mallat2012}.
\begin{proposition}[Signal deformation]
\label{prop:deform_f}
Consider a GCN representation~$\Phi$ with no bias and a random graph~$\Gamma = (P, W, f)$.
Assume~\eqref{eq:assume_TI}, \eqref{eq:assume_density}, and~$\|\nabla \tau\|_\infty \leq 1/2$.
We have
\begin{equation}
\norm{\bar{\Phi}_{W,P}(f) - \bar{\Phi}_{W,P}(f_\tau)} \leq (CC_{P_\tau}^{1/2}(C_W + C_{\nabla w})\norm{\nabla \tau}_\infty + \pa{CC_{P_\tau}^3 C_W + C'} N_P(\tau)) \norm{f} ,
\end{equation}
where~$C,C'$ are given in the appendix.
\end{proposition}
When~$P$ is proportional to the Lebesgue measure, since~$N_P(\tau)$ is controlled by~$\|\nabla \tau\|_\infty$, the GCN is invariant to translations and stable to deformations, similar to Euclidean domains~\citep{Mallat2012}.
We note that studies of stability are often balanced by discussions on how the representation preserves signal (\eg\cite{Mallat2012,Bietti2017,Gama2019}).
In our context, the empirical success of GCNs suggests that these representations maintain good discrimination and approximation properties, though a theoretical analysis of such properties for GCNs is missing and provides an important direction for future work.
\noindent
A Higgs boson is an excitation of the Higgs field, the field from which fundamental fermions and massive gauge bosons acquire their masses (the fermions through Yukawa couplings to this field, the gauge bosons via the Higgs mechanism). The major puzzle which the Higgs mechanism solves is the origin of the masses of the electroweak gauge bosons. In order to experimentally study the Higgs in more detail and determine more of its properties, it would be necessary to produce a large number of them on a reliable basis. The LHC is not well suited for this purpose, as its large QCD background makes it difficult to separate the signal associated with a Higgs decay channel from other interactions which involve multi-jet final states. Although the discovery of the Higgs boson was a key discovery of the LHC, we still lack a detailed understanding of its properties and couplings (including whether it is really a fundamental scalar in the sense that leptons are fundamental, or whether it is a composite particle). The branching ratios also need to be measured rigorously and checked against the predictions of the SM, and it remains to be seen whether the measured total Higgs decay cross section can be accounted for using only the particles of the SM.
By studying how it couples to its decay products we may also uncover properties which are unexpected or not explained in the SM. It is known that the Higgs has even parity and zero spin and hence it represents an (apparently) fundamental scalar, unlike scalar mesons, which are hadronic composites. Its mass is a free parameter of the SM given by $m_{H}^{2}=2\lambda v^2$ (equivalently $m_{H}=\sqrt{2\lambda}\,v$). It is a neutral particle and, as a consequence of its role in generating mass, it couples to mass. In theory, the Higgs can decay to any other particle in the SM, but the coupling is proportional to mass, so the largest branching ratio should be to the most massive particle which is kinematically accessible. It follows that the decay mode with the largest branching ratio is $b\overline{b}$. In this work, we study the possibility of using an ep collider to search for Higgs boson production via Higgs decay to W boson pairs.
At the LHeC, the most probable mechanism for Higgs boson production is a charged current or neutral current interaction via W or Z boson fusion, resulting in a Higgs boson, a jet and an electron neutrino (or, in the neutral current case, a scattered electron) [2].
\begin{figure}
\centering
\includegraphics[width=11cm, height=5cm]{lhc}
\newline
Figure 1: Tree Level Feynman Diagrams contributing to Higgs Boson Production via W Boson Fusion
\end{figure}
\noindent
In general, identification of the Higgs at the LHC via hadronic decay of the W boson is not considered viable because it is difficult to distinguish final state jet decays of the Higgs from the huge number of other events at the LHC which involve multi-jet final states. At the LHeC the QCD background is cleaner by a factor of 100 and the DIS final state is clean. It is possible to detect the W boson through its leptonic decays (such as $e^{+}\nu$) but these have relatively small branching ratios. The branching ratio for the W boson decay to two jets is dominant with the ratio of $W^{+} \rightarrow q \bar{q}$ being $(67.41 \pm 0.27)\:\%$ [8]. Given the abundance of this branching fraction, there is the potential to not only study $H \rightarrow \text{WW}^*$ in more detail at the LHeC but also to use it to measure Higgs boson production and so create a precision Higgs factory where the Higgs can be created on a reliable basis. There is interest in studying the HWW coupling because, unlike at the LHC where the HZZ and HWW couplings cannot be separated, the two are separate at the LHeC and the latter could have contributions which are not explained by the SM. A more detailed knowledge of Higgs couplings is also necessary to determine if the fundamental fermions really do obtain their masses via Yukawa coupling to the Higgs field.
Studying the $H \rightarrow W^{+} W^{-}$ process in detail relies on the possibility of tagging W jets (as has already been done at the LHC). The production of a W boson and another virtual W boson is followed by hadronic decays of both Ws, resulting in four or five W jets in the final state (we will leave aside the leptonic decays). When the W boson is very energetic, the two jets will be close together and will merge, meaning that we actually end up reconstructing one single jet characterised by a two-prong structure. The analysis of this channel relies on being able to distinguish the W jets from jets due to quarks and gluons produced in strong interactions [5].
\section{Large Hadron Electron Collider}
\noindent
Physicists are currently exploring options for a next-generation collider at the energy frontier: two possibilities are a new electron-positron collider or an LHeC (Large Hadron-Electron Collider). The former would be similar to the LEP (Large Electron-Positron Collider) and the latter would be similar to HERA at DESY but with a greater centre-of-mass energy than the 318 GeV reached at HERA. The advantage of an ep collider is that it offers the opportunity to observe phenomena which would be observed in a pp collider with a cleaner decay environment and reduced contamination from unwanted multi-jet final states. One important aspect of the LHeC which separates it from the LEP is that it complements the LHC: the LHeC would provide an electron beam between 60 and 140 GeV (compared to 27.5 GeV for the lepton beam at HERA) which would be collided with the intense hadron beams already provided by the LHC. This would increase the kinematic range by a factor of twenty for $Q^2$ and inverse $x$, and there would be an increase over the integrated luminosity of HERA of two orders of magnitude, with a luminosity of $10^{33}$ $\text{cm}^{-2}\:\text{s}^{-1}$. The LHeC could potentially be realized as a ring-ring or a ring-linac configuration. In the ring-ring configuration the same geometry is used for both components and the technology of the ring setup has been extensively studied at HERA and LEP. The electrons are accelerated in a ring, whereas in the ring-linac configuration the electrons are accelerated to the required energies in a linear accelerator before being collided with the protons travelling around the LHC. The process of generating intense lepton beams in a storage ring is well-understood. However, the linac-ring configuration has the advantage that the infrastructure of the linac only meets with the ring in the vicinity of the interaction vertex, minimising interference due to hadron beams [5].
An initial 500 MeV electron bunch originating at the injector is accelerated to 10 GeV in each linac, leading to a final energy of 60 GeV at the interaction vertex after passing through the entire setup three times. The 60 GeV beam is then collided with the proton beam from the LHC.
\section{Simulated Samples}
\noindent
The search was performed using a simulated LHeC experiment with $\sqrt{s}=1.3$ TeV ep collisions and a luminosity of 100 fb$^{-1}$ per year. In almost all of the analysis of the jet kinematics, a $p_T$ cut was applied to the jets to assist in jet reconstruction. Low-$p_T$ jets are typical when jets are formed via hadronization of QCD radiation. These background jets can be due to quarks or gluons emitted by particles inside the signal jet, which then fragment and hadronize to form new jets [1].
These false jets typically have smaller values of $p_T$ and so can be removed with a cut on the transverse momentum of the jets. When reconstructing the jets we must also consider the separation of the jets, their size, and the algorithm used. An algorithm in use at the LHC (and, for consistency, in simulations of the LHeC) is the anti-$k_T$ algorithm together with a distance parameter. A typical way of categorizing signal jets which emerge from a decay is to study how many of them have merged or separated configurations. For example, we could consider a dijet with $\Delta R = 0.4$ as being separated, whereas a pair of jets with separation below this threshold overlap and merge to form one jet. $\Delta R$ is defined as follows:
\[\Delta R = \sqrt{(\Delta \eta)^2 + (\Delta \phi)^2}, \]
\noindent
where $\eta$ is the pseudorapidity and $\phi$ is the usual angular coordinate. The anti-$k_T$ algorithm is an example of a sequential recombination jet algorithm for jet reconstruction. These algorithms are parametrised by the power $p$ of the energy scale in the distance measure; the `anti' refers to the fact that the power is negative ($p=-1$), as opposed to $p=1$ for the ordinary $k_T$ algorithm:
\[d_{ij}=\text{min}(k_{ti}^{2p},k_{tj}^{2p})\frac{\Delta_{ij}^2}{R^2}.\]
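As a concrete illustration, the two quantities above can be sketched in Python (hypothetical helpers, not part of the analysis code; the wrap-around of the azimuthal difference into $[0,\pi]$ follows the usual convention):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(d_eta^2 + d_phi^2)."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                    # wrap azimuthal difference into [0, pi]
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def antikt_distance(kt_i, kt_j, dr_ij, R=0.4, p=-1):
    """Pairwise distance d_ij = min(kt_i^2p, kt_j^2p) * (dR/R)^2.
    p = -1 gives the anti-kT algorithm, p = 1 the ordinary kT algorithm."""
    return min(kt_i ** (2 * p), kt_j ** (2 * p)) * (dr_ij / R) ** 2
```

Two jets whose `delta_r` falls below the distance parameter (0.4 here) would be clustered into a single reconstructed jet.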
\noindent
Generation of events and cross sections (including numerical evaluation of the relevant matrix elements) was carried out with MadGraph5, which was used to generate signal and background events at the LHeC [1]. The partons produced by MadGraph5 were assigned 4-vectors and then showered with Pythia, an event generator which simulates the hadronisation and decay of the showers [7]. The resulting events were interfaced to Delphes, a detector package which includes simulations of track triggers, calorimeters, and muon detectors. Delphes analyses the events generated by Pythia and produces a dataset which can be used for reconstruction [3].
\section{Event Selection}
\subsection{ Parton Level Plots}
\noindent
We begin by checking that our samples are for the WW* decay, where one of the W bosons is off mass shell, and not WW, where both W bosons are on-shell; the latter is kinematically forbidden in a Higgs decay since $m_H < 2m_W$. The partonic mass distributions were plotted for W$+$ and W$-$ and shown to be almost the same: in both cases a sharp peak was observed around the mass of the W boson (approximately 80 GeV), along with a smaller peak between 10 and 50 GeV, reaching its highest point between 30 and 40 GeV. Since the off-shell W boson has a smaller mass, this confirmed that the samples contained WW*. It was also confirmed that a plot of the Higgs mass at parton level peaked at the Higgs mass as expected (around 125 GeV).
\newline
\includegraphics[width=14cm, height=8cm]{Higgs.png}
\newline
Figure 2: Parton Level Plot of Higgs Mass
\noindent
\subsection{W-jets Selection}
\noindent
A selection criterion on the transverse momentum of the jets was required to remove a large number of the false jets present due to background QCD radiation. To determine this criterion, the invariant mass of the two leading jets in each event (jets 1 and 2) was plotted. Kinematically, it was expected that the W jets in an event could be detected via jet pairs composed of the leading jets: the pair composed of leading jets 1 and 2 should reconstruct to the invariant mass of an on-shell W boson, and the pair composed of leading jets 3 and 4 should reconstruct to the invariant mass of an off-shell W boson. A cut of $p_{T}>30$ GeV was found to remove a large number of events with background jets whilst retaining the structure of the W peak, hence it was determined that this cut would be used as a maximum for removing background events, being lowered as necessary to detect the off-shell W* (since a cut of $p_{T}>30$ GeV is rather high to see a signal from a W* between 10 and 50 GeV). In fact, a cut of $p_{T}>30$ GeV on all jets is very high for charged-current DIS, and parton-level observation showed that the transverse momentum $p_{T}$ of the generated Higgs boson peaked at 50 GeV, whereas the $p_{T}$ of the on-shell W boson would be expected to peak at 40 GeV.
\newline
\subsection{Signal Selection}
\noindent
To reconstruct the Higgs mass, a cut of $|\Delta \eta|<1$ (where $\eta$ is the pseudorapidity) was imposed on the difference between the two jets in a jet pair. The four $W$-tagged jets being used to reconstruct the Higgs mass can obviously appear in multiple combinations (for example, we could have the jet pairs (1,3) and (2,4) both in the necessary mass window, but not (1,2) and (3,4)). Ideally one would like to consider the $\eta$ differences for all the jets in an event and not just the 4 leading jets, so the masses of all possible permutations of jet pairs in an event were considered, where each permutation contains 4 jets. The analysis confirmed that the signal for the Higgs boson could be improved by taking cuts of $p_{T} > 20$ GeV on jets 1 and 2, $p_{T} > 10$ GeV on jets 3 and 4, $|\Delta \eta|<1$, and invariant mass $m$ of jets 3 and 4 larger than 10 GeV. The histogram shows a fairly high number of entries around the Higgs mass region with these selection criteria.
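A minimal sketch of this selection applied to one candidate 4-jet combination (a hypothetical helper; jet four-momenta and the pair invariant mass are assumed to be computed upstream):

```python
def passes_selection(pts, deta12, deta34, m34,
                     pt12_min=20.0, pt34_min=10.0,
                     m34_min=10.0, deta_max=1.0):
    """Apply the cuts described above to one candidate combination.

    pts    -- transverse momenta (GeV) of the four jets, leading pair first
    deta12 -- pseudorapidity difference within the leading pair
    deta34 -- pseudorapidity difference within the subleading pair
    m34    -- invariant mass (GeV) of the subleading pair
    """
    if pts[0] < pt12_min or pts[1] < pt12_min:
        return False
    if pts[2] < pt34_min or pts[3] < pt34_min:
        return False
    if abs(deta12) >= deta_max or abs(deta34) >= deta_max:
        return False
    return m34 > m34_min
```

Looping this over all 4-jet permutations in an event reproduces the combinatorial search described above.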
\includegraphics[width=14cm, height=8cm]{9.png}
\newline
Figure 3: Mass Distribution for the 4-Jet System (Cuts of $p_T > 10$ GeV on Jets (3,4), $m>10$ GeV)
\noindent
\linebreak
Up to this point, all samples used for the analysis assume a value of angular separation between jets of $\Delta R=0.4$. This is likely optimal for the types of jet which are being studied. This assumption was confirmed by repeating the analysis for other values of $\Delta R$ and finding that other values apart from $\Delta R = 0.4 $ or $\Delta R = 0.5$ reduce the number of events falling into the Higgs mass region. The actual optimal value was confirmed later via selection efficiencies.
\includegraphics[width=14cm, height=8cm]{R5.png}
\newline
Figure 4: Mass Distribution for the 4-Jet System ($\Delta R =0.5$)
\newline
\includegraphics[width=14cm, height=8cm]{R7.png}
\newline
Figure 5: Mass Distribution for the 4-Jet System ($\Delta R =0.7$)
\newline
\section{Signal to Background Comparison}
\noindent
A signal-to-background comparison was difficult for a study of this kind, as the main background is due to multi-jet final states from charged-current DIS or from photoproduction of multi-jets. Since it is non-trivial to produce high statistics of multi-jets, it would be normal in this situation to make cuts at parton level and then optimize the $\Delta R$ parameter for the separation between candidate W jets. However, it was still desirable to have a preliminary background sample to show that it was suppressed by the proposed cuts on the signal sample (for example, a sample of background W${-}$ jets). Such a sample should also take account of the asymmetry in the WW$^{*}$ decay. The analysis was repeated with the background sample over the same number of events and a comparison made with the previous results.
\newline
\includegraphics[width=16cm, height=5cm]{signalbgcomparison.png}
\newline
Figure 6: Signal to Background Comparison
\newline
\noindent
\linebreak
Even accounting for normalization, the suppression of background events does not look promising, so a stricter mass cut on the 4-jet system of $m<130$ GeV or $m<140$ GeV is required.
\newline
\newline
\includegraphics[width=16cm, height=5cm]{130.png}
\noindent
\newline
Figure 7: Signal to Background Comparison with Mass Cut $m<130$ GeV on 4-Jets
\newline
\linebreak
A strict cut on the mass of the 4-jet system is required in order to see a good signal-to-background ratio: 1.44 for $m<130$ GeV and 1.28 for $m<140$ GeV, compared to 1.02 for no mass cut.
\newline
\subsection{Selection Efficiencies}
\noindent
Efficiencies were calculated for various cuts on $p_T$ and $m$ at $\Delta R = 0.4$. The selection efficiency was defined as the ratio between the number of events passing the selection criteria with 4-jet masses below 140 GeV and the total number of events. $p_{Tij}$ denotes the $p_T$ cut on the jet pair composed of jets $i$ and $j$, and $m$ is the lower cut on the invariant mass of the jet pair (3,4).
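In code, the two quantities tabulated below are simple ratios (hypothetical helpers mirroring the definitions above):

```python
def selection_efficiency(n_pass, n_total):
    """Fraction of events passing the selection criteria."""
    return n_pass / n_total

def sb_ratio(n_signal, n_background):
    """Ratio of signal to background events passing the same cuts."""
    return n_signal / n_background
```

For the first row of Table 1, `sb_ratio(186, 89)` gives approximately 2.09, matching the tabulated value.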
\newline
\newline
\begin{tabular}{ |p{6cm}|p{2cm}|p{3cm}|p{2cm}|
}
\hline
\hline
Cuts& Signal &Background &S-B Ratio\\
\hline
$p_{T12}>20,p_{T34}>20,m>10$ & 186 &89& 2.09\\
$p_{T12}>20,p_{T34}>20,m>20$ & 161 &78 &2.06\\
$p_{T12}>20,p_{T34}>15,m>20$ & 472 &302 &1.56\\
$p_{T12}>20,p_{T34}>15,m>10$ & 594 &391 &1.52\\
$p_{T12}>20,p_{T34}>10,m>10$ & 1644 &1282& 1.28\\
$p_{T12}>20,p_{T34}>10,m>20$ & 1140 &845 &1.35\\
$p_{T12}>15,p_{T34}>10,m>20$ & 1192 &885 &1.35\\
$p_{T12}>15,p_{T34}>10,m>10$ & 1731 &1353 &1.28\\
$p_{T12}>10,p_{T34}>10,m>20$ & 1195 &887 &1.35\\
$p_{T12}>10,p_{T34}>10,m>10$ & 1738 &1363 &1.28\\
\hline
\end{tabular}
\newline
\newline
Table 1: Number of Events Passing Selection Criteria ($m<140$ GeV)
\newline
\linebreak
One obvious conclusion from the table is that lowering the mass cut from $m>20$ to $m>10$ increases the selection efficiency but lowers the signal-to-background ratio. The difference in signal-to-background ratio in this case is negligible (between 0.03 and 0.07) and so the focus was placed on the candidate cut sets which produced the greatest number of events, especially since signal-to-background was not a major part of the study. The cuts which produced the largest efficiencies were $p_{T12}>20,p_{T34}>10,m>10$, $p_{T12}>15,p_{T34}>10,m>10$ and $p_{T12}>10,p_{T34}>10,m>10$. The analysis was run for the above optimal cut sets over samples with $\Delta R$ going from 0.3 to 0.7.
\newline
\newline
\begin{tabular}{ |p{5cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|
}
\hline
\hline
Cuts& 0.3 &0.4 &0.5 &0.6 &0.7\\
\hline
$p_{T12}>20,p_{T34}>10,m>10$ & 715 &1644& 1416 & 1027&720\\
$p_{T12}>15,p_{T34}>10,m>10$ & 757 &1731 &1488& 1098&765 \\
$p_{T12}>10,p_{T34}>10,m>10$ & 758 &1738 &1494& 1110&777\\
\hline
\end{tabular}
\newline
\newline
Table 2: Number of Events Passing Selection Criteria for $\Delta R = 0.3 - 0.7$
\newline
\linebreak
The analysis was repeated for a mass cut of $m<130$ GeV on the 4-jet system.
\newline
\newline
\begin{tabular}{ |p{6cm}|p{2cm}|p{3cm}|p{2cm}|
}
\hline
\hline
Cuts& Signal &Background &S-B Ratio\\
\hline
$p_{T12}>20,p_{T34}>20,m>10$ & 132 &43& 3.07\\
$p_{T12}>20,p_{T34}>20,m>20$ & 113 &34 &3.32\\
$p_{T12}>20,p_{T34}>15,m>20$ & 378 &195 &1.94\\
$p_{T12}>20,p_{T34}>15,m>10$ & 482 &260 &1.85\\
$p_{T12}>20,p_{T34}>10,m>10$ & 1418 &982& 1.44\\
$p_{T12}>20,p_{T34}>10,m>20$ & 966 &633 &1.53\\
$p_{T12}>15,p_{T34}>10,m>20$ & 1014 &673 &1.51\\
$p_{T12}>15,p_{T34}>10,m>10$ & 1497 &1048 & 1.43\\
$p_{T12}>10,p_{T34}>10,m>20$ & 1017 &675 &1.51\\
$p_{T12}>10,p_{T34}>10,m>10$ & 1503 &1056 &1.42\\
\hline
\end{tabular}
\newline
\newline
Table 3: Number of Events Passing Selection Criteria ($m<130$ GeV)
\newline
\newline
\begin{tabular}{ |p{5cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|
}
\hline
\hline
Cuts& 0.3 &0.4 &0.5 &0.6 &0.7\\
\hline
$p_{T12}>20,p_{T34}>10,m>10$ & 625 &1418& 1220 & 869&606\\
$p_{T12}>15,p_{T34}>10,m>10$ & 666 &1497 &1285& 930&647 \\
$p_{T12}>10,p_{T34}>10,m>10$ & 667 &1503 &1291& 941&656\\
\hline
\end{tabular}
\newline
\newline
Table 4: Number of Events Passing Selection Criteria for $\Delta R = 0.3 - 0.7$
\newline
\linebreak
This lowered the selection efficiency but increased the signal-to-background ratio. Using calculated values for the Higgs cross-section in an ep collider, a value for the background cross-section, and the numbers of events computed in Tables 2 and 4, expected numbers of signal and background events were calculated along with the expected significance. As a result, the actual signal-to-background ratios were much smaller than the ones calculated initially, since the background cross-section is larger than the signal cross-section.
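The scaling from event counts to expected yields, and the significances quoted in the tables below, can be sketched as follows. The Gaussian approximation $Z = S/\sqrt{B}$ is assumed here, which is consistent with the tabulated values (e.g. $6097/\sqrt{2749600} \approx 3.7$):

```python
import math

def expected_events(xsec_pb, lumi_fb_inv, efficiency):
    """N = sigma * L * efficiency, with sigma in pb and L in fb^-1
    (1 pb = 1000 fb)."""
    return xsec_pb * 1000.0 * lumi_fb_inv * efficiency

def significance(n_signal, n_background):
    """Gaussian approximation Z = S / sqrt(B)."""
    return n_signal / math.sqrt(n_background)
```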
\newline
\newline
\begin{tabular}{ |p{6cm}|p{2cm}|p{3cm}|p{2cm}|
}
\hline
\hline
Cuts& Signal &Background &Significance\\
\hline
$p_{T12}>20,p_{T34}>20,m>10$ & 568 &120400& 1.6\\
$p_{T12}>20,p_{T34}>20,m>20$ & 486 &95200 &1.6\\
$p_{T12}>20,p_{T34}>15,m>20$ & 1625 &546000 &2.2\\
$p_{T12}>20,p_{T34}>15,m>10$ & 2073 &728000 &2.4\\
$p_{T12}>20,p_{T34}>10,m>10$ & 6097 &2749600& 3.7\\
$p_{T12}>20,p_{T34}>10,m>20$ & 4154 &1772400 &3.1\\
$p_{T12}>15,p_{T34}>10,m>20$ & 4360 &1884400 &3.21\\
$p_{T12}>15,p_{T34}>10,m>10$ & 6437 &2934400 & 3.8\\
$p_{T12}>10,p_{T34}>10,m>20$ & 4373 &1890000 &3.2\\
$p_{T12}>10,p_{T34}>10,m>10$ & 6463 &2956800 &3.8\\
\hline
\end{tabular}
\newline
\newline
Table 5: Expected Numbers of Events ($m<130$ GeV)
\newline
\newline
\begin{tabular}{ |p{6cm}|p{2cm}|p{3cm}|p{2cm}|
}
\hline
\hline
Cuts& Signal &Background &Significance\\
\hline
$p_{T12}>20,p_{T34}>20,m>10$ & 800 &249200& 1.6\\
$p_{T12}>20,p_{T34}>20,m>20$ & 692 &218400 &1.5\\
$p_{T12}>20,p_{T34}>15,m>20$ & 2030 &845600 &2.2\\
$p_{T12}>20,p_{T34}>15,m>10$ & 2554 &1094800 &2.4\\
$p_{T12}>20,p_{T34}>10,m>10$ & 7069 &3589600& 3.7\\
$p_{T12}>20,p_{T34}>10,m>20$ & 4902 &2366000 &3.2\\
$p_{T12}>15,p_{T34}>10,m>20$ & 5126 &2478000 &3.3\\
$p_{T12}>15,p_{T34}>10,m>10$ & 7443 &3788400 &3.8\\
$p_{T12}>10,p_{T34}>10,m>20$ & 5139 &2483600 &3.3\\
$p_{T12}>10,p_{T34}>10,m>10$ & 7473 &3816499 &3.8\\
\hline
\end{tabular}
\newline
\newline
Table 6: Expected Number of Events ($m<140$ GeV)
\newline
\newline
The background sample used up to this point was for background due to W${-}$ jets, so the analysis was also run over a sample of more general QCD background. We repeated the analysis for events with invariant 4-jet masses below 130 GeV and added in the effect of the second background. The numbers of events were calculated and then scaled up appropriately.
\newline
\newline
\begin{tabular}{ |p{6cm}|p{2cm}|p{3cm}|p{2cm}|
}
\hline
\hline
Cuts& Signal &Background &Significance\\
\hline
$p_{T12}>20,p_{T34}>20,m>10$ & 800 &453600& 1.2\\
$p_{T12}>20,p_{T34}>20,m>20$ & 692 &364400 &1.1\\
$p_{T12}>20,p_{T34}>15,m>20$ & 2030 &1838400 &1.5\\
$p_{T12}>20,p_{T34}>15,m>10$ & 2554 &2438000 &1.6\\
$p_{T12}>20,p_{T34}>10,m>10$ & 7069 &10218000& 2.2\\
$p_{T12}>20,p_{T34}>10,m>20$ & 4902 &6570800 &1.9\\
$p_{T12}>15,p_{T34}>10,m>20$ & 5126 &6945600 &1.9\\
$p_{T12}>15,p_{T34}>10,m>10$ & 7443 &11015400 &2.2\\
$p_{T12}>10,p_{T34}>10,m>20$ & 5139 &7038800 &1.9\\
$p_{T12}>10,p_{T34}>10,m>10$ & 7473 &11174899 &2.2\\
\hline
\end{tabular}
\newline
\newline
Table 7: Expected Number of Events with QCD Background ($m<130$ GeV)
\newline
\newline
\noindent
The number of background events is now quite high: choosing the signal for the first set of cuts in Table 5, the expected background events would have an uncertainty of $\sqrt{120400} = 347$, such that the signal would be seen at $1.6 \sigma$. An improvement to $3 \sigma$ could likely be achieved with improvements to the analysis, a more refined selection strategy, and an improved technique for signal-to-background comparison.
\section{Conclusion}
\noindent
After cuts and analysis had been carried out, it was found that a good selection scheme for this search is $|\Delta \eta| <1$, $10<m<85$ GeV, $p_T$ of jets 1 and 2 between $10$ and $20$ GeV, and $p_T$ of jets 3 and 4 $>10$ GeV, and that the selection efficiency is highest for $\Delta R = 0.4$, leading to an efficiency between $7.1$ and $7.5 \%$ for finding the invariant 4-jet mass in the mass region $<140$ GeV. In fact, $\Delta R = 0.4$ was optimal for all cut sets where it was varied, but this could be unique to the particular decay we were studying, as other Higgs decays often show a strong dependence on $\Delta R$. It was also found that we could begin to incorporate a background sample due to W${-}$ jets with a signal which would be observed at $3.8 \sigma$, but attempts to improve this figure by adding QCD background resulted in a very high number of background events.
We should also point out that this is a preliminary study and that further work would be required before our conclusions could be stated more firmly. The main reason is that we had not always been running on the full signal statistics, using only 10000 events (roughly a quarter of the full signal statistics) in the interest of efficiency when running samples with many modified versions of the analysis code to find effective cuts. We also did not account for the hadronic branching fraction of the W boson. It is likely that both of these factors led to a small signal cross-section of around 0.1 pb. In our calculations we have assumed an integrated luminosity of 1 ab$^{-1}$, whereas a revised value of 2 ab$^{-1}$ would double the expected number of signal events; doubling both signal and background in this way would change the significance by a factor of $\sqrt{2}$. A proper signal-to-background comparison was difficult in this study, and it might be considered satisfactory to have imposed a selection scheme and optimized $\Delta R$, but it would obviously be desirable to perform a more sophisticated signal-to-background analysis using boosted decision trees (BDTs) [4]. A BDT is especially useful when the signal is `drowned out' by similar-looking background events, since it can be used to identify whether events are signal-like or background-like, using Monte Carlo simulations to train the decision tree. This then enables a final determination of the signal strength compared to background [6].
It would also be desirable to adjust or refine cuts to increase the efficiency of signal detection, since the mass cuts employed appeared to be drowning out the signal at the region of interest. The next step besides BDT would be to run the analysis again on the full Monte Carlo signal statistics, as this could have an effect on the signal cross-section. The effect of other variables could be studied for both signal and background: in particular, adjustments of $\Delta \eta$ were not investigated for background topologies in this study. The study only considered charged current and QCD background. In a more thorough study, different types of neutral current background could be incorporated.
\section*{Acknowledgements}
\noindent
I would like to thank my supervisors Mario Campanelli and Uta Klein for useful discussions and for providing data samples.
\section*{References}
[1] J. Alwall et al. The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations. JHEP, 1405.0301:185:2250–2300, 2014.
\noindent [2] J. Esteves. Understanding the Higgs boson with the Large Hadron Electron Collider. \textit{J. Phys}.: Conf. Ser, 645:012009, 2015.
\noindent [3] A. Giammanco et al. DELPHES 3, A modular framework for fast simulation of a generic collider experiment. JHEP, 02:057:1307.6346, 2014.
\noindent [4] Hoecker et al. TMVA: Toolkit for Multivariate Data Analysis. PoS, ACAT:040:physics/0703039, 2007.
\noindent [5] LHeC Study Group. A Large Hadron Electron Collider at CERN: Report on the Physics and Design Concepts for Machine and Detector. 2012.
\noindent [6] G. McGregor et al. Boosted decision trees: an alternative to artificial neural networks. Nucl. Instrum. Meth, A543(2-3):577-584:physics/0408124, 2005.
\noindent [7] S. Mrenna, T Sjostrand and P. Skands. PYTHIA 6.4 Physics and Manual. JHEP, 05:026,:hep.ph/0603175, 2006.
\noindent [8] C. Patrignani et al. Review of Particle Physics. \textit{Chin. Phys. C}, 40:100001, 2016/2017.
\end{document}
\section{Introduction}
As Artificial Intelligence (AI) technology advances, AI will have an ever greater impact on human decision making. But how big will this impact be? Will humans adapt to AI by learning from it and making better decisions themselves in certain domains but not in others? Over time, how quickly or slowly will humans adapt to AI? Moreover, in which domains will AI have more or less influence on human decision making? Answering these questions will become easier with better methods of measuring AI's impact on human decision making. To this end, we propose an intuitive and objective measure of human adaptation to AI in the game of Go and consider the measure's applicability in other domains.
In proposing our measure of human adaptation to AI, we examined the game of Go for two reasons. First, it is one of the first complex decision-making domains in which AI achieved superhuman performance \cite{silver2016mastering}. This superhuman performance is a necessary condition for measuring AI's impact on human decision making, not only because it encourages humans to learn from AI, but also because it provides an objective standard against which the quality of human decision making can be compared. On this latter point, suppose the quality of human decision making is at some level: say, one foot tall. Then a superhuman AI can act as a taller yardstick that enables measuring changes in the one-foot-tall human decision making quality after the introduction of AI. In contrast, a par-human AI can act only as an equivalent-height, one-foot ruler that would not enable measurements of \textit{growth} in human decision making quality. Now that AI programs are expected to show superior performance in many other decision making problems, our measure, which assumes a superhuman AI, can be used in many other settings.
Another reason we examined the game of Go is that a game is an effective setting to test how humans interact with AI and adapt their decision making. The goal of a game is usually well-defined, and any action taken by human players, as well as the resulting changes in the environment, can be recorded in a database. Using these unique features of games, researchers have studied various aspects of human decision making, from error correction to skill acquisition \cite{biswas2015measuring,regan2014human,stafford2014tracing,strittmatter2020life,tsividis2017human}. Among games, Go presents arguably the most complex task, which explains why AlphaGo received so much attention when it defeated a top human expert. We speculate that the more complex the domain in which a measure of human adaptation to AI has been shown to work, the wider the range of human decision-making domains to which the measure can be applied. In other words, if our measure of human adaptation is useful in a domain as complex as Go, it should be useful in less complex domains as well.
We present two case studies testing the usefulness of our measure. First, we examine the impact of AI on human experts' decision quality. Using historical data of human decisions before and after the emergence of AI such as AlphaGo, we investigate whether and when humans learn to make better decisions like AI. Although additional information provided by AI could be beneficial to human players, the black-box nature of AI's decisions may have hindered human adaptation or generated misinterpretation. Indeed, our results suggest that merely observing AI's actions may not bring a meaningful improvement in human decision making. Observing AI's reasoning processes, however, does seem to improve human decision-making. Second, we also study whether our measure can detect any cheating behavior. Because AI outperforms human experts in the game of Go, it would be tempting for human experts to refer to AI's decisions even in a match between humans. Not surprisingly, there have been reports of cheating via AI not only in professional Go matches but also in chess matches, particularly as more matches are held online during the pandemic. We test whether our measure can detect such misbehavior.
Our measure has broad applications beyond the game of Go. A natural extension is to measure users' performance in games, including chess and video games. We can also compare how stronger and weaker players are affected differently by AI technologies. The measure can be used in many other domains in which AI makes better decisions than humans but humans remain final decision makers due to the high-stakes nature of the decisions.
\section{A Measure for Human Adaptations to AI}
Our measure of human adaptation to AI is rooted in the framework of reinforcement learning (henceforth RL). RL is suitable because it is studied not just for developing effective AI but also for explaining human skill learning. Many computer scientists have built RL-based AI programs that solve complex decision problems, from playing complex board games to scheduling educational activities \cite{bassen2020reinforcement,mnih2013playing,nazari2018reinforcement,silver2017mastering}. Cognitive scientists have also used RL to study how humans learn new skills \cite{fiorillo2003discrete,holroyd2002neural,niv2009reinforcement,waelti2001dopamine}. Recently, those two areas of research have interacted with each other under the framework of RL \cite{botvinick2019reinforcement,dabney2020distributional,hassabis2017neuroscience}. This suggests that RL is a natural tool to describe a commonality in how humans and AI make choices in Markov decision processes.
The key idea of our measure, which we call \textit{Human-AI Gap}, is to compare the quality of actions chosen by an AI program with the quality of actions chosen by humans. But it is not obvious how to define the quality of an action, whether it is made by AI or by a human, because the consequence of an action is hard to pin down in a high-dimensional state space. Modern AI programs based on deep RL offer an objective way to do so. They usually produce the two ingredients used in our measure: a value network and a policy network. The action-value network\footnote{In the Deep Q Network algorithm, the Q value represents the quality of each action. In other algorithms such as Actor-Critic, there is also a value function representing the quality of each state, produced together with a policy function.} tells how good each action is in a given state. Using this feature, we can quantify the quality of each action at any given state. In addition, the policy network of the AI simulates the best action in the same state a human would face, and the value network evaluates the quality of that simulated action. We then calculate the difference between the quality of the action chosen by the AI and the quality of the action chosen by the human.
We use notation from previous literature \cite{igami2020artificial,silver2017mastering} to explain our measure more formally. It is defined here for the environment of the board game Go\footnote{In this setting, the reward function is very sparse, realized only at the terminal step.}, but it can easily be modified for any other well-defined Markov decision process. The state space, $\vert S \vert$, is the set of possible situations of the game. In a match between human players, a player faces many situations in $\vert S \vert$. Given $S$$\in$$\vert S \vert$, the player decides the next move. The human player moving at step $k$ observes $S_k$ (state) and decides $a_k$ (action), i.e., a position to place the stone. We formalize the decision rule of human players as follows:
\begin{equation*}
a_k^{Human }=\sigma^{Human}(S_{k};V(S_{k};\theta^{Human}))
\end{equation*}
Human players use their own evaluation parameter ($\theta^{Human}$) to diagnose how advantageous the current state is, $V(S_{k};\theta^{Human})$. Based on this evaluation, they apply their own strategy or decision rule ($\sigma^{Human}$), ending up with an action, $a_k^{Human}$. In this decision rule, we abstract away from the complex interaction between human players. Instead, it is treated as a single-agent problem where each human player has to find an optimal choice at the given state to maximize the total reward. Any strategic response from the opponent player is subsumed under the transition of the state in our decision rule.
AI programs would also map a given state to an action by their policy network\footnote{To be precise, it chooses an action of higher expected action-value and the process is complemented with Monte Carlo Tree Search. Also, the dimension of state space in the boardgame Go is very large so AI programs take a few key features of each state as input.}. Likewise, we denote the decision rule of AI as follows:
\begin{equation*}
a_k^{AI}=\sigma^{AI}(S_{k};V(S_{k};\theta^{AI})).
\end{equation*}
It is notable that a human and an AI program facing the same state, $S_{k}$, could reach different actions (i.e., $a_k^{Human}\neq a_k^{AI}$). This may result from the way a human evaluates the state, $\theta^{Human}$, differing from the way the AI does, $\theta^{AI}$: a human may be too optimistic or too pessimistic from the perspective of the AI. In addition, the AI may build a strategy, $\sigma^{AI}$, that does not belong to any traditional human strategy, $\sigma^{Human}$. An AI trained by self-play is free from conventional human strategies, so it may produce novel actions.
The level of human adaptation might be quantified by comparing the parameters of human players and AI directly. However, this is often not feasible because researchers usually observe only the actions of human players, not their policy or their evaluation. It is hard to know how they reached a decision or how they evaluate a certain state, which makes it impractical to back out the underlying decision rule of human players. The parameters of AI programs are also very high-dimensional, which makes a direct comparison harder. Instead, our proposed measure requires only the actions of human players as data. We then simulate the counterfactual actions of the AI and evaluate both actions, one from the human and the other from the AI, using the value network of the AI. We define the measure of \textit{Human-AI Gap} in the following way.
\begin{equation*}
\label{gap}
\Delta_k \equiv \overbrace{V (S^{'}_{k+1}|a_k=a_k^{AI})}^\text{Quality of a counterfactual AI choice} -\underbrace{V(S^{''}_{k+1}|a_k=a_k^{Human})}_\text{Quality of an actual human choice}
\end{equation*}
First, in each realized state $S_k$ a human player faces, we let the AI simulate the counterfactual action $a_k^{AI}$. Second, we let the AI quantify the quality of each action, one from the human player and the other from the AI, by evaluating the two following states. Finally, we calculate the difference between those two evaluations. Our measure thus quantifies the difference between the advantage induced by a counterfactual AI choice, $V(S^{'}_{k+1}|a_k=a_k^{AI})$, and the advantage induced by the actual human choice, $V(S^{''}_{k+1}|a_k=a_k^{Human})$. If the AI comes up with a better action than the human, this gap measure is positive; if the human chooses the same action as the AI, it is zero. We use a much superior AI to generate counterfactual actions, so this value is usually positive.
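A direct transcription of $\Delta_k$ can be sketched as follows. Here `value_fn`, `policy_fn`, and `step_fn` stand in for the AI's value network, its policy network, and the game's transition function, none of which are specified in this paper; the sketch only illustrates the bookkeeping.

```python
def human_ai_gap(value_fn, policy_fn, step_fn, state, human_action):
    """Delta_k = V(s'_{k+1} | a_k = a_AI) - V(s''_{k+1} | a_k = a_human),
    with both successor states evaluated by the AI's value network."""
    ai_action = policy_fn(state)                      # counterfactual AI choice
    v_ai = value_fn(step_fn(state, ai_action))        # quality of the AI choice
    v_human = value_fn(step_fn(state, human_action))  # quality of the human choice
    return v_ai - v_human
```

The gap is zero when the human plays the AI's move and positive whenever the AI's counterfactual move is evaluated as better.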
One caveat is that we use not just the policy network of the AI but also its value network. This means that we evaluate the quality of actions, whether they come from the AI or from humans, from the perspective of the AI. In this regard, our measure is designed to quantify the level of human adaptation to AI specifically: caution is needed when human decision makers have improved their decisions in a way that the AI does not appreciate, since the value network of the AI would not capture that change of behavior.
How do we use the measure? Suppose that human players, observing the superiority of AI, update their decision rule. This human adaptation can happen in evaluating the state ($\theta^{Human}$), in building a strategy ($\sigma^{Human}$), or both. Humans with an updated decision rule would produce actions that AI evaluates more favorably, so researchers would observe our gap measure decrease toward zero. To test this hypothesis, that human players make more AI-like choices, we construct panel data at the player-month level.
\begin{equation*}
\label{delta_panel}
\Delta_{it}=\frac{1}{n_{it}}\frac{1}{K}\sum_{j=1}^{n_{it}}\sum_{k=1}^{K}\Delta_{jk}
\end{equation*}
where $\Delta_{it}$ denotes player $i$'s average gap in month $t$, $n_{it}$ stands for the number of matches player $i$ has in month $t$, and $k$ indexes the order of a choice within a match. With this panel data in hand, we can test many research hypotheses regarding human adaptation.
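Given move-level data, $\Delta_{it}$ can be assembled with a two-level mean, first within each match over its moves and then across a player's matches in a month, matching the double sum above. The sketch below uses a tiny made-up table; the column names are illustrative.

```python
import pandas as pd

# One row per move decision: player, month, match id, and per-move gap Delta_jk.
# Toy values; the real data has roughly 1.3 million rows.
moves = pd.DataFrame({
    "player": ["A", "A", "A", "A", "B", "B"],
    "month":  ["2018-01"] * 6,
    "match":  [1, 1, 2, 2, 3, 3],
    "gap":    [2.0, 4.0, 0.0, 2.0, 3.0, 1.0],
})

# Inner mean over the K moves within each match, then outer mean over the
# n_it matches of each player-month, matching the double sum in the text.
per_match = moves.groupby(["player", "month", "match"], as_index=False)["gap"].mean()
panel = (per_match.groupby(["player", "month"], as_index=False)["gap"]
         .mean()
         .rename(columns={"gap": "delta_it"}))
```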
\section{Applications of our measure}
\subsection{Adaptation I: Human learning from AI}
\paragraph{Background} For the first study, we use the measure to compare human players' strategies before and after the launch of AI programs. The measure allows us to evaluate how close each move decision of human players is to the level of AI programs, and thus to tell if humans' decisions improved after AI programs were available.
When AI programs did not exist, human players gained new information on Go game strategies by going over top players' move decisions in tournaments, and discussing the strategies with other human players (illustrated in Panel (a)\footnote{"The power of Korean Go? The power of joint research!" Mar/09/2011 (https://www.donga.com/news/Culture/article/all/20110309/35417308/1)} in Figure \ref{fig:change_human_learning}). What made AlphaGo so sensational in the human Go community is that the AI program used many unorthodox tactics. AlphaGo's demonstration of unfamiliar but effective strategies motivated human players to discuss and learn the strategies of AI. Consequently, the way human players study winning strategies completely changed into getting tutored by AI (illustrated in Panel (b)\footnote{"Hone your Go skills with AI instructors" Feb/26/2019 (http://www.munhwa.com/news/view.html?no=2019022601032103009001)} in Figure \ref{fig:change_human_learning}).
\begin{figure}[ht]
\centering
\begin{subfigure}[ht]{0.45\linewidth}
\centering
\includegraphics[width=6cm]{plot_arxiv/baduk_learning.png}
\caption{Human Learning Before AI}
\end{subfigure}
\hfill
\begin{subfigure}[ht]{0.45\linewidth}
\centering
\includegraphics[width=6cm]{plot_arxiv/baduk_learning_after.png}
\caption{Human Learning After AI}
\end{subfigure}%
\caption{Illustration of change of the way human Go players learn winning strategies since AI}
\label{fig:change_human_learning}
\end{figure}
Interestingly, two kinds of AI programs were released at different times: AlphaGo in March 2016, and open-sourced AI programs (e.g., Leela Zero) in October 2017. The main difference between them is whether players can observe the program's intermediate reasoning process behind each move decision. AlphaGo and its subsequent versions show their final moves only (i.e., the sequence of positions where the AI placed a stone). The open-sourced programs and accompanying education tools, however, also provide information on the detailed thought process behind each action of the AI. They show the AI's contingency plan, such as how it would respond to a hypothetical situation following each counterfactual choice at the current state, which allows human players to review the strategies of AI under various situations. Humans also observe how AI evaluates each state of a game using the tools, e.g., the current win probability of each player and the change in win probability if the human player deviates from AI's best move\footnote{Although we use the term \textit{win probability}, its exact interpretation is more subtle. Still, it is used in the Go community.}. More details on what information is given to human players are provided in Appendix \ref{appendix1}.
We leverage the fact that these two AI programs, providing two different kinds of learning material to human players, were released sequentially. This gives us a chance to examine the conditions under which human players adapt to AI.
\paragraph{Data}
Our data spans January 2012 to January 2020 and comes from official matches between human professional players. It comprises (i) 30,995 matches between 357 Korean professional players, where we observe the match date, the identity of the players, and the match outcome, and (ii) 1.3 million move decisions of the Korean professional players from matches between human players. We webscraped the datasets from the Korean Go Association and other websites\footnote{Not every match has been saved with a detailed record of move decisions, but we collected the historical data from multiple sources to obtain a more complete history.}. For every single move choice by a human player, we simulate the optimal move decision of AI under the same state of the game and compute our measure by comparing the value-network evaluation of the human player's actual choice with that of the AI's choice. Thus, we have 1.3 million move decisions of human players, 1.3 million move decisions of AI, and AI's evaluation of each of these decisions. The gap between the value-network evaluation of the human's decision and that of the AI's decision is our measure of the human decision's effectiveness.
\begin{table}[ht]
\caption{Summary Statistics}
\centering
\footnotesize{
\begin{tabular}{lrrrr}
\hline
\multicolumn{5}{l}{Data 1: Player-level match performance (N=357)}\\
\hline
& Q1 & Median & Mean & Q3 \\
Winning rate (\%) & 31 &46 &43 &81 \\
\hline
\multicolumn{5}{l}{Data 2: Move decisions (N=1,357,523)}\\
\hline
& Q1 &Median &Mean&Q3 \\
Move counts within a match &176&212&216&254\\
The gap between AI and humans (\%) &0.39&1.68&3.49&4.51 \\
\hline
\end{tabular}}
\label{tab:summary}
\end{table}
Table \ref{tab:summary} shows the summary statistics of the data. Our data includes players who are heterogeneous in terms of performance, with a winning rate of 43\% on average.
In each match, the two professional players make an average of 216 move decisions each. AI evaluation of each move decision indicates that human players make sub-optimal choices, resulting in an average loss of 3.5\% in win probability.
\paragraph{Model-free descriptive pattern}
We calculated the Human-AI gap ($\Delta$) for every decision made by every human player in our dataset. A value of zero means that the human player exactly replicated the AI's decision ($\Delta_k=0 \iff a_k^{Human}=a_k^{AI}$) and made the optimal move, while a positive value indicates the extent to which the AI's decision was superior to the human player's, or put differently, the extent to which the human player's decision quality trailed that of the AI. We present the pattern of $\Delta_k$ in Figure \ref{fig:human_ai_gap_across_moves}. The average Human-AI gap of each set of 10 moves (e.g., $1^{st}-10^{th}$, $11^{th}-20^{th}$, ...) is plotted over the course of a match. The solid red line traces the Human-AI gap before AlphaGo beat Lee Sedol (from January 2014 to March 2016); the dotted green line traces the Human-AI gap between AlphaGo's debut and the release of open-sourced AI programs (from March 2016 to October 2017); lastly, the dashed blue line traces the Human-AI gap after open-sourced AI programs were publicly released (from October 2017 to March 2020).
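The binned curves described above can be reproduced from move-level data along these lines. The period labels and numbers below are toy values, not our dataset.

```python
import pandas as pd

# One row per move: release period, move number k within the match, and gap.
df = pd.DataFrame({
    "period": ["Before AI"] * 4 + ["After Open-sourced AI"] * 4,
    "k":      [1, 5, 11, 15] * 2,
    "gap":    [1.0, 3.0, 5.0, 7.0, 0.0, 2.0, 4.0, 6.0],
})

# Bucket moves into sets of 10 (1-10, 11-20, ...) and average within bucket.
df["move_bin"] = ((df["k"] - 1) // 10) * 10 + 1   # bin start: 1, 11, 21, ...
curve = df.groupby(["period", "move_bin"], as_index=False)["gap"].mean()
```

Plotting one line of `curve` per period yields the kind of per-period trajectories shown in the figure.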
\begin{figure}[ht]
\centering
\includegraphics[width=16cm]{plot_arxiv/human_ai_gap_across_moves_exact_dates_v3.png}
\caption{Mean Human-AI Gap over the Course of Match}
\label{fig:human_ai_gap_across_moves}
\end{figure}
\paragraph{Speculation on the inverted U pattern of Human-AI gap across moves} Perhaps the first thing that jumps out in Figure \ref{fig:human_ai_gap_across_moves} is the inverted U pattern of Human-AI gap across moves. We speculate that the pattern emerges from two opposing forces at work. First, as the match progresses (and the move number increases), finding the optimal moves becomes harder because stones interact with one another more. For example, in the beginning, when there are only one, two, or several stones, the black and white stones tend not to interact much with each other, as both players seek to control open territory in the corners and sides that are free from either side's influence. In this early stage, human decisions are not too inferior to AI's decisions, leading to smaller Human-AI gap for earlier moves. As the game progresses (post opening stage), however, black and white stones now clash against each other to fight for territory. In this early to middle stage of the match, optimal moves require thinking not only about how to control more territory as in the beginning, but also how to capture the opponent's stones or how to survive under the opponent's attack. Thus, this clash in the early to middle stage of the match presents a greater challenge for the human decision maker, and as a result, human decisions are inferior to AI's decisions by a greater margin, leading to greater Human-AI gap for moves in the middle stage. Human-AI gap peaks at a point and starts decreasing, however, because of the second, opposing force: the more stones there are on the board, the fewer possible moves become available. For example, in the middle stage, there may be 30 possible moves to consider, whereas by the end of the match, there may be only 5 possible moves to consider. 
With fewer possible moves to consider, the human player can find the optimal moves more frequently, and as a result, human decision quality can trail that of AI more closely as in the beginning. We speculate that these two opposing forces give rise to the inverted U shape of Human-AI gap.
\paragraph{Insights from the pattern of Human-AI gap} Using our measure of Human-AI gap, we were able to observe the pattern of human decision quality over the course of a match. One possible insight from this observation may be that room for improvement is greatest in the middle stage of the match, because the middle is where Human-AI gap is the greatest. Human players may find that studying moves in the middle stage may improve their game more than studying moves in the early or late stage of the match. Thus, our measure of Human-AI gap can be used to coach human players on what to study. Another possible insight may be that trying to improve human decision making in the middle stage of the match may be futile. Although this contradicts the first insight, it could just as well be true. Human experts may have concluded that improving their middle-stage game is very difficult as compared with improving the early-stage game. That is, although human experts have managed to improve their early-stage game (the red, solid line shifting down to the blue, dashed line for Moves 1-50 in Figure \ref{fig:human_ai_gap_across_moves}), perhaps they could not quite improve their middle-stage game despite much effort (the blue, dashed line overlaps the red, solid line for Moves 51-140 in Figure \ref{fig:human_ai_gap_across_moves}). Improving the middle-stage game may be nearly impossible, perhaps due to intractable complexity and lack of similarity from one match to another, both of which may prevent discovery or learning of any new, general principles. If so, human experts may instead double down and focus their effort even more on improving their early- or late-stage game. Whichever insight may reflect the reality, our measure could be useful in suggesting how human experts' effort should be allocated.
\paragraph{Human learning in the early stage of the game} More important than the inverted U pattern are the downward shifts in Human-AI gap (for Moves 1-50). Interestingly, Human-AI gap decreased only a little after human players could observe AlphaGo's actions, as evidenced by a barely noticeable downward shift from the ``Before AI'' curve to the ``After AlphaGo'' curve in Figure \ref{fig:human_ai_gap_across_moves}. In contrast, Human-AI gap dropped markedly after open-sourced AI programs became available, as evidenced by a larger downward shift from the ``After AlphaGo'' curve to the ``After Open-sourced AI'' curve in Figure \ref{fig:human_ai_gap_across_moves}. Will we observe this pattern when we take into account differences between human players and plot Human-AI gap across time? In the next section, we investigate whether human decision quality indeed increased more after open-sourced AI became available than after AlphaGo's debut. Because the decrease in Human-AI gap was concentrated in the early and middle stages of the game (Moves 1-50), and because no such decrease was readily observable for Moves 51-140, we focus our attention on the first 50 moves of each match when we investigate human decision quality over time (so $K=50$).
\begin{figure}[ht]
\centering
\includegraphics[width=16cm]{plot_arxiv/treatment_coefplot.png}
\caption{Human-AI Gap over Time}
\label{fig:human_ai_gap_across_time}
\end{figure}
\paragraph{Insights from the time trend} We examined the Human-AI gap over time after controlling for individual differences. As defined earlier, we construct a variable, $\Delta_{it}$, representing player $i$'s average gap in month $t$. We then run the following regression: $\Delta_{it}=\alpha_i+\tau_t+\epsilon_{it}$. In Figure \ref{fig:human_ai_gap_across_time}, we report the estimated time trend, $\hat{\tau_t}$. The red vertical line on the left marks March 15, 2016, the date of the match between Lee Sedol and AlphaGo, and the red vertical line on the right marks October 25, 2017, the date when the open-sourced AI program Leela Zero was publicly released, which was followed by releases of similar AI programs and education tools. Consistent with the tiny downward shift in Figure \ref{fig:human_ai_gap_across_moves}, we see little change in the Human-AI gap in the period between the two red vertical lines. The Human-AI gap drops significantly after the second vertical line, which marks the time when open-sourced AI and its analysis tools became available. The decline in the gap over time shows that human players adapted to AI programs gradually, and more so after the education tools were available. This finding provides suggestive evidence that human players modified their choices and did better in Go matches after gaining access to AI.
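The two-way fixed-effects regression $\Delta_{it}=\alpha_i+\tau_t+\epsilon_{it}$ can be estimated by ordinary least squares on player and month dummies. The sketch below uses a toy balanced panel and drops one month dummy so that $\hat{\tau}_t$ is identified relative to a baseline month; in practice one would use a dedicated panel-regression package, and the values here are made up.

```python
import numpy as np
import pandas as pd

# Toy player-month panel of Delta_it values (two players, two months).
panel = pd.DataFrame({
    "player": ["A", "A", "B", "B"],
    "month":  ["t1", "t2", "t1", "t2"],
    "delta":  [3.0, 1.0, 5.0, 3.0],
})

# Delta_it = alpha_i + tau_t + eps_it as dummy-variable OLS; one month dummy
# is dropped so tau_t is identified relative to the baseline month t1.
X = pd.get_dummies(panel[["player", "month"]]).drop(columns=["month_t1"])
beta, *_ = np.linalg.lstsq(X.values.astype(float),
                           panel["delta"].values, rcond=None)
coef = dict(zip(X.columns, beta))   # coef["month_t2"] is tau-hat for month t2
```

In this toy panel both players' gaps fall by 2 between the months, so the estimated $\hat{\tau}_{t2}$ is $-2$ relative to the baseline.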
\subsection{Adaptation II: Cheating from AI}
\paragraph{Background}
In this second case study, we examine how our measure can be useful in detecting one negative form of adaptation to AI: receiving help from AI to gain an unfair competitive advantage, i.e., cheating. In September 2020, a 13-year-old professional Go player cheated in a match against a 33-year-old top-notch Go player by using an AI program that suggests optimal moves for given states. The high-stakes match (one of the Round of 32 matches in a tournament that awards \$175,000 in total prize money) was held online due to a COVID-19 lockdown, opening the door to the possibility of cheating, which would have been difficult to pull off had the match been held offline with many eyes around. Not surprisingly, the cheating player showed an extraordinary performance in the match, and as a result, a debate ensued on online forums regarding whether she received help from AI. The controversy culminated in the cheating player finally confessing her transgression in November 2020. Would our measure be sensitive enough to detect such cheating behaviors?
\paragraph{Data}
We first obtained move data for all 52 matches of the cheating player’s professional career, including the match in which she cheated. We then used an AI program to calculate a winning probability associated with each move in these 52 matches and a winning probability associated with an AI’s optimal move. As in the first study, we subtracted the former from the latter to calculate the Human-AI gap. To be consistent with the previous study, we focused our analysis on the first 50 moves by each player in each match. This resulted in 50 values of Human-AI gap for the cheating player in the cheating match and 2,375 values in the 51 non-cheating matches (after missing values were removed), for a total of 2,425 values in all 52 matches.
\paragraph{Comparison between the cheating match and all other career matches}
We hypothesized that our measure of Human-AI gap could detect an increase in the cheating player’s decision quality when she received assistance from AI as compared with when she did not. Indeed, that is what we found. In the cheating match, the cheating player’s move decisions showed smaller Human-AI gaps (\textit{M} = 1.02, \textit{SD} = 2.13), indicating better decision quality, than move decisions in the non-cheating matches (\textit{M} = 2.35, \textit{SD} = 4.30), \textit{t}(57.78) = 4.23, one-tailed \textit{p} $<$ .001, Cohen's \textit{d} = 0.31 (see Figure \ref{fig:cheating_fig_2}). Two nonparametric tests showed converging results. A Wilcoxon rank-sum test revealed that Human-AI gaps were more likely to be smaller when she cheated (\textit{Mdn} = 0.60) than when she did not cheat (\textit{Mdn} = 0.75), \textit{W} = 67883, one-tailed \textit{p} = .041. Similarly, a two-sample Kolmogorov-Smirnov test revealed that the Human-AI gaps in the cheating match and those in the non-cheating matches had different distributions, \textit{D} = 0.23, one-tailed \textit{p} = .006.
Results from these three tests show that our measure of Human-AI gap can detect higher decision quality from cheating. But can our measure also detect greater \textit{stability} in decision quality from cheating? We hypothesized that Human-AI gap will be more stable across the moves in the cheating match than in the non-cheating matches, because moves suggested by AI will be consistently optimal, whereas moves made by the human player herself will be optimal less consistently (less frequently). In other words, we wanted to test whether our measure will exhibit lower variance (more stability in decision quality) in the cheating than in the non-cheating matches. As expected, a Levene's test revealed that variance in decision quality (i.e., Human-AI gap) was significantly lower in the cheating match (\textit{var} = 4.53, \textit{SD} = 2.13) than in the non-cheating match (\textit{var} = 18.51, \textit{SD} = 4.30), \textit{F}(1, 2423) = 4.85, one-tailed \textit{p} = .014.
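The four tests above (Welch's t-test, Wilcoxon rank-sum, two-sample Kolmogorov-Smirnov, and Levene's variance test) are all available as standard SciPy routines. The arrays below are simulated stand-ins for the per-move gaps, not the cheating player's actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins for per-move Human-AI gaps (in % win probability):
# the cheating match shows small, stable gaps; career matches larger, noisier.
gap_cheat = rng.normal(1.0, 1.0, size=50).clip(min=0)
gap_clean = rng.normal(2.4, 3.0, size=2375).clip(min=0)

t, p_t = stats.ttest_ind(gap_clean, gap_cheat, equal_var=False)  # Welch t-test
u, p_u = stats.mannwhitneyu(gap_cheat, gap_clean,
                            alternative="less")  # Wilcoxon rank-sum
d, p_ks = stats.ks_2samp(gap_cheat, gap_clean)   # two-sample K-S
w, p_lev = stats.levene(gap_cheat, gap_clean)    # variance stability
```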
This case study shows that our measure could be useful in detecting a negative form of adaptation to AI, namely receiving AI assistance for an unfair competitive advantage in a professional competition.
\begin{figure}[ht]
\centering
\includegraphics[width = 16cm]{plot_arxiv/eunji_dist_of_human_ai_gap_means.png}
\caption{Histogram of Human-AI Gap from a cheating player}
\label{fig:cheating_fig_2}
\end{figure}
\section{Discussion}\label{sec:discussion}
\subsection{Implications from our finding}
In the first case study, we demonstrate the surprising result that human professional Go players did not modify their strategy even after they saw AlphaGo beat humans. We find that it was the educational tools accompanying open-sourced AI programs that led to human players' behavioral change. In the Appendix, we corroborate this finding by comparing the behavioral change of human players who had access to AI programs with that of players who did not\footnote{In Appendix \ref{appendix2}, we examine the gap measure of professional Go players who served in the military during the time the AI programs were available. Those who could not learn from the AI programs due to military service showed no improvement over time, which corroborates our findings on human adaptation.}. The result emphasizes that demonstrating the efficacy of AI is not enough to induce human adaptation; humans do not adapt their high-stakes decisions to AI until they understand it. As the first study shows, our measure can be used to track the level of human adaptation to AI and to identify the conditions necessary for it.
In the second case study, we confirm that our measure has enough statistical power to detect a recent instance of cheating with AI. Human players can easily access AI programs, which makes it tempting to use AI even in matches between humans. In 2020, there were many cheating scandals in chess, ranging from a 17-year-old Polish chess player who was caught using a phone during a match to a 36-year-old grandmaster who was accused of cheating in a professional match. Similar to our second study, there was controversy about whether they cheated. Detecting such cheating will only become more difficult, because more human players are adapting their behavior to AI and choosing AI-like moves, as shown in the first study. Online chess websites such as Chess.com have invented many tools to detect this cheating behavior\footnote{See https://www.chess.com/article/view/online-chess-cheating}. Our measure, which is easy to construct, can be one viable way to help guarantee fair play between human players.
Finally, our measure can shed light on additional ramifications of advances in AI programs for the board game Go. The public release of open-source AI-based analysis tools allows human Go players of \textit{all skill levels} to analyze and simulate the consequences of any move at any stage of the game. It is a game changer in how human Go players develop their strategy. In the past, high-ranked players had a competitive advantage in studying and discussing the latest strategies with other high-ranked players. Now low-ranked players can learn from AI programs that are far superior to the best human players. But whose game improves more from access to AI, weaker or stronger players'? On the one hand, weaker players may learn more from AI and catch up to stronger players at a faster rate than they otherwise would have, reducing the performance gap. On the other hand, stronger players may better understand and internalize the baffling yet effective moves of AI algorithms, widening the performance gap. Our measure can be used to answer this question, whose implications are broader than the context of the board game Go.
\subsection{Other potential applications of the measure}
Our measure can be used in contexts where AI programs generate more effective decisions, but humans remain the final decision makers and are held accountable. Technically, applicability of the measure requires that decision makers solve a dynamic optimization problem and that decision making follows a Markov process. As long as a setting satisfies these substantive and technical conditions, the measure can be used to answer novel questions about human decision-making in response to AI. Below, we discuss potential applications of the measure.
Our measure may reveal factors that lead to slower or faster rate of adaptation to AI. For example, it may reveal that people of certain characteristics (e.g., age, education, experience, training, or any other relevant background) have advantages or disadvantages in adapting to AI as compared with others. If such factors are identified, leaders of an organization or government may face a difficult equity vs. efficiency tradeoff: Should the leadership help the disadvantaged members so that most members of an organization or society adapt to AI at similar rates and enjoy producing similar levels of output? Or should the leadership focus more on encouraging the advantaged members to maximize adaptation to AI at the organization or society level? Answering these questions may not be easy, but our measure can nevertheless help identify such factors and enable organization or government to make appropriate decisions given their environment.
Another natural extension is in the area of personalized education. AI programs leveraging massive amounts of past student data can outperform human teachers in deciding the optimal sequence of materials to present to students at different skill levels. For example, when teaching students a new set of concepts, teachers may contemplate which concepts to teach earlier and which concepts to teach later, because learning is a dynamic process in which learning in earlier stages affects learning in later stages.
That is, as in the game of Go, teachers as decision makers must think not only about which concept to teach at given points (similar to thinking about which move to make given the state), but also about how teaching the concept affects learning at later points (similar to thinking about how the move affects later states), all the while trying to maximize overall learning (similar to maximizing overall winning probability). Teachers can thus learn from AI programs to improve their teaching, and our measure can be used to evaluate teachers' adaptation to AI. Similarly, our measure can help game developers create a better game tutorial, a tool that teaches novice players basic rules and essential mechanics of the game. Here, AI programs may optimize the order and contents of the tutorial, but human game developers make the final decisions on constructing the tutorial. In this case, our measure can be used to evaluate novice players' skills and engagement during and after two tutorials, one created by AI and another created by human developers, and these evaluations can assess human developers' adaptation to AI.
More broadly, our key idea to compare the output of AI and that of humans is useful in other settings. For example, AI technologies significantly advanced not only in medical diagnosis but also in treatment decisions, such as recommending prescriptions. Since each treatment decision would affect the future health status of patients, doctors can be considered to make decisions in a dynamic setting. Even though AI may make better decisions in this context, ultimately human doctors will have the final authority and responsibility in diagnosis and treatment. Recent research shows that collaboration between human doctors and AI significantly improves the accuracy of predictions or diagnoses \cite{tschandl2020human}. As human experts continue with such adaptation to AI, our measure can be used to monitor progress of the adaptation. Many other decision-making problems in business also exhibit features of dynamic optimization \cite{rust2019has}, so firms adopting AI programs as a supporting tool to human managers can use our measure to monitor improvement in human decision-making.
\section{Conclusion}
As AI makes better decisions than humans, human experts adapt to AI. Often, this adaptation comes in a positive form of learning from AI, but it can also appear in a negative form, such as cheating. In this paper, we propose a simple measure, Human-AI gap, and test whether the measure can detect and quantify human adaptation to AI. Our results using this measure yield valuable insights in the game of Go, such as \textit{when} learning occurred (i.e., after observing AI's reasoning processes rather than its mere actions), \textit{where} learning occurred (i.e., early to middle stage of the match, moves 1-50), \textit{for whom} learning occurred (i.e., experts with access to AI as compared with experts in the military without the access; see Appendix \ref{appendix2}), and whether cheating occurred. Moreover, our results suggest that the measure has broader applications in various domains other than game of Go, ranging from managing adaptation to AI in an organization or society, to personalized education, and even to medical diagnosis and treatment.
\section{Acknowledgments}
We thank Kosuke Uetake, Jiwoong Shin, K Sudhir, Vineet Kumar, Ian Weaver, and participants at the 2020 Marketing Science conference and the Yale Quantitative Marketing seminar for their valuable and constructive feedback. We gratefully acknowledge the support of the Korean Go Association in obtaining the data.
\clearpage
\bibliographystyle{plain}
\section{Introduction}
Sequential decision prediction problems, including imitation learning, differ from typical supervised learning tasks in that the actions of the agent affect the distribution of future observed states. The violation of the distributional stationarity assumption inherent in standard machine learning practice results in error compounding. As the agent drifts into states an expert would not have, error increases due to a lack of relevant training data.
Consider, for instance, the task of teaching an autonomous car to stay within road boundaries. A facile approach would be to simply train a supervised learning system where environment states (e.g., road markings) are mapped to expert actions (e.g., the angle and velocity of a human driver) from a dataset of expert driving. During testing, if the car begins to drift off-course (inevitable for any algorithm that does not achieve perfect accuracy), the observed states would begin to differ from the training states. Compounding errors may cause the car to veer completely off-track.
To mitigate this issue, algorithms have been designed (i.e., DAgger \cite{ross2011reduction} and its derivatives) to iteratively aggregate training data on expert behaviour in states that the \textit{agent} visits. An underlying goal of these works is to maximize accuracy while minimizing the number of expert queries. Yet current approaches still require a high amount of expert input, making them infeasible for many real-world tasks. Consider teaching a robot surgeon: Having a surgeon demonstrate tens of thousands of surgeries is impractical. We propose a new algorithm that requires less expert input than DAgger while performing similarly. Our approach outperforms current state-of-the-art DAgger alternatives (i.e., SafeDAgger \cite{zhang2016query}) in query efficiency at a similar computational cost, increasing the breadth of real-world problems that can be solved with an imitation learning approach.
\section{Related Work}
We begin by introducing some notation to facilitate comparison of approaches and guide the rest of this paper. In the most general setting, we are given a set of expert demonstrations consisting of states $s$ and the corresponding expert actions (determined by the expert policy $\pi^{*}$): $\mathcal{D} = \{s_i, \pi^{*}(s_i)\}$. We seek to find a policy $\pi \in \Pi$ that closely mimics the expert policy $\pi^{*}$. We define a surrogate loss function that captures how ``close" the two policies are $\ell(\pi^{*}, \pi)$ and seek to minimize it:
\begin{equation}
\hat{\pi} = \operatorname*{arg\,min}_{\pi \in \Pi}\; \mathbb{E}_s [\ell(\pi^{*}, \pi)]
\end{equation}
Unfortunately, we do not have knowledge of the underlying expert policy and instead have access only to its manifestation as a map from observed states to actions. Training only on states observed by the expert, as in supervised learning, is known to generally lead to poor performance due to covariate shift \cite{ross2011reduction}. We may improve our estimate of $\pi^{*}$, and thus potentially improve $\ell(\pi^{*}, \pi)$, by collecting additional expert demonstrations at cost $C(s_i)$ during agent-observed state $s_i$. Accordingly, we minimize subject to a maximum cost $\mathcal{C}$
\begin{equation}
\min_{\pi \in \Pi}\; \mathbb{E}_s [\ell(\pi^{*}, \pi)] \quad \text{s.t.} \quad \sum_{i=1}^N C(s_i) < \mathcal{C}
\end{equation}
and must make a choice at each agent-observed $s_i$ whether we wish to query the expert. In the original DAgger algorithm \cite{ross2011reduction}, the cost $C(s_i)$ is implicitly assumed to be zero and the expert is always queried. A number of works have enhanced standard DAgger with probabilistic active learning machinery to determine when querying the expert is optimal under non-zero expert cost.
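The per-state query decision under a budget can be sketched as a data-collection loop. `ToyEnv`, `policy`, `expert`, and `should_query` are illustrative placeholders, not code from any of the cited papers; setting `should_query = lambda s, a: True` with an unlimited budget recovers plain DAgger.

```python
class ToyEnv:
    """Five-step stand-in environment (illustrative only)."""
    def reset(self):
        self.t = 0
        return 0
    def step(self, a):
        self.t += 1
        return self.t if self.t < 5 else None   # None marks episode end

def run_episode(policy, expert, env, should_query, budget):
    """Collect expert labels on agent-visited states under a query budget.

    policy(s) -> agent action; expert(s) -> expert label (cost 1 per query);
    should_query(s, a) -> bool decision rule for paying the query cost C(s).
    """
    D, spent = [], 0
    s = env.reset()
    while s is not None:
        a = policy(s)
        if spent < budget and should_query(s, a):
            D.append((s, expert(s)))   # aggregate (state, expert action)
            spent += 1
        s = env.step(a)
    return D, spent

# Query only in even-numbered states, with a budget of two expert labels.
D, spent = run_episode(lambda s: s, lambda s: s + 1, ToyEnv(),
                       lambda s, a: s % 2 == 0, budget=2)
```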
SafeDAgger \cite{zhang2016query} uses an initial set of demonstrations to train a binary risk classifier that predicts whether the agent will make a mistake in a given state, and then uses this classifier to choose when to query the expert. DropoutDAgger \cite{menda2017dropoutdagger} uses the Bayesian interpretation of neural networks with dropout to measure the epistemic uncertainty associated with a state. However, it uses this estimate only to guide action selection, while still querying the expert every time. BAgger \cite{cronrathbagger} incorporates these two ideas by directly modelling the agent's error with respect to the expert as a Gaussian Process or Bayesian Neural Network. Then, it obtains an empirical estimate of a percentile-based worst-case loss to decide whether to query the expert.
In pure reinforcement learning, Bayesian Q-learning \cite{dearden1998bayesian} and Bayesian Deep Q-learning \cite{azizzadenesheli2018efficient} learn a probabilistic $Q$ function that incorporates the agent's uncertainty about future rewards. However, the goal of these works is to achieve an optimal exploration-exploitation trade-off, and they do not address how agents could benefit from access to expert demonstrations.
Unlike other approaches, \texttt{RadGrad} introduces the concept of a loss gradient (Figure \ref{fig:flowchart}). As in SafeDAgger, we estimate whether a proposed agent action will deviate from the unknown expert action by more than a safety threshold $\tau$, using what we term a loss network. We assume that the loss network is differentiable and query the expert both when the threshold is exceeded and when the norm of the gradient of the prediction with respect to the concatenated vector of state and proposed action exceeds a separate threshold $\epsilon$. This gradient is a crude proxy for risk. As we describe later, this differs from the concept of uncertainty that Bayesian approaches and ensemble methods are well suited for.
\begin{figure}[h]
\centering
\includegraphics[height=2in]{flowchart.pdf}
\caption{The \texttt{RadGrad} algorithm queries the expert both when the proposed agent action is predicted to be far from the expert's action, or if the gradient of this error is high with respect to the state and proposed action.}
\label{fig:flowchart}
\end{figure}
\section{Method}
\subsection{RadGrad Algorithm}
The four parts of our \texttt{RadGrad} approach (the primary network, the loss network, the loss gradient, and data aggregation) are summarized in Algorithm \ref{alg:radgrad} and detailed below.
\paragraph{Primary Network}
To find a policy $\pi \in \Pi$ that minimizes $\ell(\pi^{*}, \pi)$, we require a way to express $\pi$. Accordingly, we learn a function that maps from states in $\mathbb{R}^m$ to actions in $\mathbb{R}^n$. We employ a feed-forward neural network with hidden layer sizes $[128, 128, 32, 8]$ and a dropout rate of $0.2$, trained using our set of expert demonstrations $\mathcal{D}$. At test time, an observed state $s_i$ is fed into this function to generate a proposed agent action $\hat{a}_i$.
\paragraph{Loss Network}
We additionally learn a differentiable function, which we term a \textit{loss network}, that maps state and proposed agent action pairs in $\mathbb{R}^{m+n}$ into an estimate of the difference between the proposed agent action and the unknown expert action in $\mathbb{R}^n$. We employ a feed-forward neural network with hidden layer sizes $[128, 128, 64, 64, 32, 32, 16, 16, 8]$ and dropout rate of $0.2$. When the norm of the loss network output exceeds a safety threshold $\tau$, the expert is queried for expert action $a^{*}_i$. This is the strategy specified in Algorithm \ref{alg:radgrad}. An alternative implementation of the loss network, more similar to SafeDAgger, maps state and proposed agent action pairs in $\mathbb{R}^{m+n}$ to the probability that the norm of the loss network output exceeds the safety threshold $\tau$. In the latter case, the expert is queried for $a^{*}_i$ if the predicted probability exceeds $\frac{1}{2}$.
\paragraph{Loss Gradient}
Additionally, we calculate the norm of the gradient of the output of the loss network with respect to its input. If the norm exceeds a threshold $\epsilon$, then we query the expert for $a^{*}_i$. This norm is a proxy for risk. A large norm implies that a small change in either the state or the proposed action would have a large impact on the probability of exceeding the threshold $\tau$, and thus the agent is in what we define as a \textit{risky state} with high potential for error. Without the computationally costly endeavour of building an ensemble or a Bayesian neural network to measure uncertainty proper, we have built a proxy for measuring risk (which we treat as distinct from uncertainty).
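A minimal sketch of this decision rule follows. A fixed quadratic stands in for the trained loss network, and a finite-difference estimate replaces backpropagated gradients; the thresholds and inputs here are illustrative, not the paper's trained components.

```python
import math

# Stand-in for a trained loss network: maps a concatenated
# [state; action] vector to a scalar predicted-loss magnitude.
# (A fixed quadratic here; the paper uses a learned MLP.)
def loss_net(x):
    return 0.5 * sum(v * v for v in x)

def grad_norm(f, x, h=1e-5):
    """Central-difference estimate of ||df/dx|| at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return math.sqrt(sum(v * v for v in g))

def should_query(state, action, tau, eps):
    # Query the expert if predicted loss exceeds tau, or if the
    # loss-gradient norm (the risk proxy) exceeds eps.
    x = list(state) + list(action)
    return loss_net(x) > tau or grad_norm(loss_net, x) > eps

# Low predicted loss and shallow slope -> act autonomously.
print(should_query([0.01, 0.0], [0.0], tau=0.5, eps=0.5))  # False
# Still below tau, but a steep local slope -> query the expert.
print(should_query([0.6, 0.0], [0.0], tau=0.5, eps=0.5))   # True
```

The second call illustrates the point of the gradient test: the predicted loss alone would not trigger a query, but the gradient norm flags the state as risky.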
\paragraph{Data Aggregation}
Whenever the expert is queried, we append the state and expert action pair $(s_i, a^{*}_i)$ to $\mathcal{D}$. In this work, we choose to take the expert action whenever we query the expert, although more fine-grained rules could be explored in future work. Appending these state-action pairs serves to shift the distribution of training states from those an expert would see to those the agent sees. We retrain the primary network on this new, aggregated dataset to improve agent performance, separating $20\%$ of the dataset for validation so as to reduce the risk of overfitting.
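The aggregation-and-split step can be sketched as follows; the 1-D data and function names are illustrative, not the actual pipeline.

```python
import random

random.seed(1)

# Append newly queried (state, expert action) pairs to D, then hold
# out 20% for validation before retraining the primary network.
def aggregate_and_split(D, new_pairs, val_frac=0.2):
    D = D + list(new_pairs)
    idx = list(range(len(D)))
    random.shuffle(idx)                 # shuffle before splitting
    n_val = int(val_frac * len(D))
    val = [D[i] for i in idx[:n_val]]
    train = [D[i] for i in idx[n_val:]]
    return D, train, val

D = [((0.1,), (-0.1,)), ((0.4,), (-0.4,))]
new = [((0.9,), (-0.9,))] * 3           # three queried pairs
D, train, val = aggregate_and_split(D, new)
print(len(D), len(train), len(val))     # 5 4 1
```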
\begin{algorithm}
\caption{\texttt{RadGrad} (Loss Gradient algorithm)}
\begin{algorithmic}[1]
\Procedure{RadGrad}{}
\State Initialize $\mathcal{D} \gets \emptyset$
\State Initialize $\pi_{agent, \: 1}$
\State Initialize loss network $l_{agent, \:1}$
\For{iteration $k = 1 : M$}
\For{epoch $j = 1 : N$}
\State Initialize environment and agent
\For{timestep $i = 1 : T$}
\State Observe state $s_i$
\State $\hat{a}_i \gets \pi_{agent,\:k}(s_i)$
\State $\hat l \gets l_{agent,\:k}(s_i, \hat{a}_i)$
\If{$\|\hat{l}\| > \tau$ or $\|\frac{\partial \hat{l}}{\partial [s_i; \hat{a}_i]}\| > \epsilon$}
\State Query the expert to obtain $a^*_i \gets \pi^*(s_i)$
\State Execute $a^*_i$
\State $\mathcal{D} \gets \mathcal{D} \cup \{(s_i, \hat{a}_i, a^*_i)\}$
\Else
\State Execute $\hat{a}_i$
\EndIf
\EndFor
\EndFor
\State Train $\pi_{agent,\:k+1}$ and $l_{agent,\:k+1}$ on $\mathcal{D}$
\EndFor
\State \textbf{return} best $\pi_{agent,\:k}$ on validation set
\EndProcedure
\end{algorithmic}
\label{alg:radgrad}
\end{algorithm}
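The loop above can be sketched end-to-end with 1-D stand-ins for every component: toy linear policies, toy contracting dynamics, and an oracle-like stand-in for the learned loss network. All names and constants here are illustrative.

```python
expert = lambda s: -s                        # pi*(s)
agent = lambda s: -0.4 * s                   # imperfect pi_agent(s)
loss_net = lambda s, a: abs(a - expert(s))   # stand-in for l_agent

def loss_grad_norm(s, a, h=1e-5):
    # Finite-difference |d loss / d a|, standing in for the gradient
    # w.r.t. the concatenated [state; action] vector.
    return abs(loss_net(s, a + h) - loss_net(s, a - h)) / (2 * h)

tau, eps = 0.3, 2.0
D, queries, s = [], 0, 1.0
for t in range(20):                          # one rollout
    a_hat = agent(s)
    if loss_net(s, a_hat) > tau or loss_grad_norm(s, a_hat) > eps:
        a = expert(s)                        # query and execute expert
        D.append((s, a))                     # aggregate for retraining
        queries += 1
    else:
        a = a_hat                            # act autonomously
    s = 0.3 * s + 0.2 * a                    # toy contracting dynamics
print(queries, len(D))                       # 1 1
```

In this toy run the agent is only far from the expert in the initial state, so a single query is made there and the rest of the rollout proceeds autonomously.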
\subsection{Experimental Setup}
\paragraph{Algorithms}
We compare the performance of five primary algorithms in our analysis. These five include three non-gradient algorithms (DAgger, SafeDAgger, and Loss Network) and two gradient algorithms (SafeDAgger Gradient and Loss Network Gradient, or \texttt{RadGrad}). The non-gradient algorithms query the expert and execute the returned action if the loss network threshold is surpassed; the gradient algorithms query the expert in this case as well, but also if the gradient threshold is exceeded. SafeDAgger and SafeDAgger Gradient refer to a loss network that outputs the probability of surpassing $\tau$, as opposed to an output in $\mathbb{R}^n$.
Additionally, we consider the performance of three baseline methods: expert actions, supervised learning, and random selection. The expert action baseline is the reward achieved by the expert on the task. We consider the expert baseline only implicitly; we present loss measures as the difference in reward between the agent algorithm and the expert. We present the results of a simple supervised learning algorithm (which trains on expert demonstrations in expert-observed states only) to display the issue of covariate shift we wish to resolve. Finally, we present the random selection baseline. Random selection queries and follows the expert at random agent-observed states. A random selection baseline is necessary to establish that the added complexity of active learning approaches is warranted.
Finally, because we observe that random selection performs quite well in practice and hypothesize that an unbiased sampling strategy can be beneficial for convergence and stability, we test two hybrid algorithms: Loss Gradient Random (\texttt{RadGrad} Random) and SafeDAgger Gradient Random. At each timestep a fair coin is flipped to determine whether to use the loss-based versus random strategy.
\paragraph{Environment}
We test our approach in the Reacher-v2 and Hopper-v2 OpenAI gym environments \cite{brockman2016openai}. In Reacher-v2, a robotic arm with two degrees of freedom rotates to reach a randomly-positioned target. This environment maps from $s_i \in \mathbb{R}^{11}$ to $a_i \in \mathbb{R}^2$. In Hopper-v2, a two-dimensional one-legged robot hops as quickly as possible towards a target. The Hopper-v2 environment maps from $s_i \in \mathbb{R}^{11}$ to $a_i \in \mathbb{R}^{3}$. We selected open source environments for easy reproducibility.
\paragraph{Hyperparameters}
Table \ref{tab:hyper} summarizes the hyperparameters we used in our evaluations. These were chosen so as to optimize expert query efficiency while maintaining convergence to the algorithm's best policy. The random baseline hyperparameter was chosen so as to make that strategy competitive with active learning strategies in query efficiency (Equation \ref{eq:effic}).
\begin{table}[h]
\centering
\begin{tabular}{cccc}
\toprule
Algorithm & Hyperparameter & Reacher-v2 & Hopper-v2 \\
\midrule
Loss & $\tau$ & 0.02 & 0.3\\
Loss Gradient (\texttt{RadGrad}) & $\epsilon$ & 0.002 & 0.2 \\ \midrule
SafeDAgger & $\tau$ & 0.04 & 0.3 \\
SafeDAgger Gradient & $\epsilon$ & 1 & 200 \\ \midrule
Random & $P(\text{Query})$ & $30\%$ & $30\%$ \\ \bottomrule
\end{tabular}
\caption{Hyperparameters for proposed algorithms and baselines. $\tau$ is the threshold on the norm of predicted loss, and $\epsilon$ is the threshold on the norm of the gradient of predicted loss with respect to input and action space. Note that the two values of $\tau$ for Reacher-v2 differ because each displayed value was individually optimized.}
\label{tab:hyper}
\end{table}
\section{Results}
Our results show that gradient-based methods can outperform their non-gradient-based counterparts in that they may yield higher rewards with only a modest increase in the number of expert queries required. To compare algorithm performance, we define the \textit{query efficiency} of estimated policy $\hat{\pi}$:
\begin{equation}
\text{Efficiency}(\hat{\pi}) \propto \frac{\text{reward}_{\hat{\pi}} - \text{reward}_{supervised}}{\sum_{i \in \mathcal{D}_{\hat{\pi}}} C(s_i)}
\label{eq:effic}
\end{equation}
This is the difference in reward between the active learning policy in question and a supervised learning policy (trained only on expert actions in expert-observed states), divided by the total cost of querying the expert at states $s_i$ during the estimation of $\hat{\pi}$. For our purposes, we let $C(s_i) = 1 \: \forall \: s_i$, and thus $\sum_{i \in \mathcal{D}_{\hat{\pi}}} C(s_i)$ is simply the number of times the expert was queried in the estimation of $\hat{\pi}$ (i.e., $\#\mathcal{D}_{\hat{\pi}}$). Accurate policies that require few queries to estimate are query efficient.
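Under the unit-cost assumption, the efficiency computation (up to the proportionality constant in the equation above) reduces to a one-liner. The numbers passed in below are illustrative only, not values from our tables.

```python
# Query efficiency: reward gain over the supervised baseline per
# unit of expert-query cost; with unit costs the denominator is
# simply the number of expert queries #D.
def query_efficiency(reward, reward_supervised, n_queries, cost=1.0):
    return (reward - reward_supervised) / (n_queries * cost)

# Illustrative numbers (not taken from Table 2):
print(query_efficiency(reward=-0.50, reward_supervised=-0.77,
                       n_queries=1343))
```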
Table \ref{tab:performance} shows the test-time performance, that is, average reward when expert demonstrations are not available, of all algorithms in each environment. It also shows, for each setting, the number of expert queries used to estimate the policy and the resulting efficiency.
Figures \ref{fig:reacher} and \ref{fig:hopper} show test-time performance and training-time number of expert queries used over the course of training iterations for the two environments, Reacher-v2 and Hopper-v2, respectively. At each point in training, the current model is deployed on a batch of random test-time environments to generate the curves of performance over time shown in the graphs.
\begin{table}[h]
\centering
\begin{tabular}{@{}lllcllc@{}}
\toprule
& \multicolumn{3}{c}{\textbf{Reacher-v2}} & \multicolumn{3}{c}{\textbf{Hopper-v2}} \\
\midrule
\textbf{Algorithm} & \textit{Queries} & \textit{Loss} & \textit{Efficiency} & \textit{Queries} & \textit{Loss} & \textit{Efficiency}\\
\midrule
SafeDAgger (SD) & 1424 & $1.67 \pm 1.08$ & -6.3 & 60094 & $3547 \pm 5$ & 22 \\
SD Gradient & 1436 & $0.94 \pm 0.62$ & -1.2 & 71591 & $3626 \pm 203$ & 8 \\
SD Gr. Random & 1551 & $0.57 \pm 0.46$ & 1.3 & 53542 & $606 \pm 843$ & 574 \\
\midrule
Loss & 556 & $3.38 \pm 1.83$ & -47 & 16786 & $1547 \pm 631$ & 1270 \\
Loss Gradient & 3332 & $0.70 \pm 0.43$ & 0.2 & 62378 & $3567 \pm 4$ & 18 \\
Loss Gr. Random & 2200 & $0.50 \pm 0.30$ & 1.2 & 64053 & $2342 \pm 605$ & 209 \\
\midrule
DAgger & 3750 & $0.41 \pm 0.62$ & 0.9 & 42682 & $1890 \pm 148$ & 419 \\
Random & 1343 & $0.56 \pm 0.46$ & 1.5 & 24025 & $1892 \pm 122$ & 744 \\
Supervised & 3750 & $0.77 \pm 0.57$ & 0 & 140164 & $3679 \pm 19$ & 0 \\
\bottomrule
\end{tabular}
\caption{Comparison of performance and query efficiency of gradient-based and non-gradient approaches. Displayed loss is the increment over the expert's loss, with intervals of one standard deviation over 100 trials. Expert policies are obtained from Berkeley's Deep Reinforcement Learning course materials (\url{https://github.com/berkeleydeeprlcourse/homework/tree/master/hw1/experts}). Results presented are from the final of fifteen iterations, with 100 trials at each iteration.}
\label{tab:performance}
\end{table}
\begin{figure}[h!]
\centering
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
width = \textwidth,
height = 5cm,
unbounded coords=discard,
ylabel={\textit{Test-set loss vs. expert}},
xmin = 0.5, xmax = 15.5, ymin = 0.0, ymax=17.0,
xticklabels={,,},
font={\footnotesize},
yticklabel style={
text width=2.5em,align=right
},
yticklabel={
\pgfmathprintnumber[
fixed,
precision=0,
zerofill,
]{\tick}
}]
\addplot[mark=*, dashed, mark size = 1, error bars/.cd, y dir=both, y explicit, error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{reacher_final_dagger_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = orange, error bars/.cd, y dir=both,y explicit, error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{reacher_final_loss_002_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = violet, error bars/.cd, y dir=both, y explicit, error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{reacher_final_safedagger_004_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = orange, error bars/.cd, y dir=both, y explicit , error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{reacher_final_gradient_loss_002_0002_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = violet, error bars/.cd, y dir=both, y explicit , error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{reacher_final_safedagger_gradient_004_1_for_plot.csv};
\addplot[mark=*, dotted, mark size = 1, color = gray, error bars/.cd, y dir=both, y explicit , error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{reacher_final_random_30_for_plot.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
width = \textwidth,
height = 5cm,
unbounded coords=discard,
xlabel={\textit{Iteration}},
ylabel={\textit{Number of expert queries}},
xmin = 0.5, xmax = 15.5, ymin = 0,
font={\footnotesize},
yticklabel style={
text width=2.5em,align=right
},
scaled y ticks=false]
\addplot[mark=*, dashed, mark size = 1]
table [x=iteration, y=total_obs, col sep=comma]{reacher_final_dagger_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = orange]
table [x=iteration, y=total_obs, col sep=comma]{reacher_final_loss_002_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = violet]
table [x=iteration, y=total_obs, col sep=comma]{reacher_final_safedagger_004_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = orange ]
table [x=iteration, y=total_obs, col sep=comma]{reacher_final_gradient_loss_002_0002_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = violet]
table [x=iteration, y=total_obs, col sep=comma]{reacher_final_safedagger_gradient_004_1_for_plot.csv};
\addplot[mark=*, dotted, mark size = 1, color = gray ]
table [x=iteration, y=total_obs, col sep=comma]{reacher_final_random_30_for_plot.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}
\begin{tikzpicture}
\begin{customlegend}[legend columns=3,legend style={draw=none,column sep=2ex, font=\small},legend entries={DAgger, Loss Network, SafeDAgger, Loss Gradient, SafeDAgger Gradient, Random Selection}]
\pgfplots@addlegendimage{black,dashed,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{orange,dashed,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{violet,dashed,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{orange,solid,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{violet,solid,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{gray,dotted,fill=black!50!red,sharp plot}
\end{customlegend}
\end{tikzpicture}
\caption{A comparison of active learning approaches to DAgger in the \textit{Reacher-v2} task. DAgger yields rewards most similar to the expert, but gradient-based approaches perform competitively while reducing expert queries. Error bars are based on 100 trials per iteration and indicate $\pm$ one standard deviation.}
\label{fig:reacher}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
width = \textwidth,
height = 5cm,
unbounded coords=discard,
ylabel={\textit{Test-set loss vs. expert}},
xmin = 0.5, xmax = 15.5, ymin = 0.0,
xticklabels={,,},
font={\footnotesize},
yticklabel style={
text width=2.5em,align=right
},
yticklabel={
\ifdim\tick pt=0pt
\pgfmathprintnumber[
fixed,
precision=0,
zerofill,
]{\tick}
\else
\num[
round-mode=figures,
round-precision=2,
]{\tick}
\fi}]
\addplot[mark=*, dashed, mark size = 1, error bars/.cd, y dir=both, y explicit, error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{hopper_final_dagger_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = orange, error bars/.cd, y dir=both,y explicit, error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{hopper_final_loss_03_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = violet, error bars/.cd, y dir=both, y explicit, error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{hopper_final_safedagger_03_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = orange, error bars/.cd, y dir=both, y explicit , error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{hopper_final_gradient_loss_03_02_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = violet, error bars/.cd, y dir=both, y explicit , error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{hopper_final_safedagger_gradient_03_200_for_plot.csv};
\addplot[mark=*, dotted, mark size = 1, color = gray, error bars/.cd, y dir=both, y explicit , error bar style = solid]
table [x=iteration, y=loss, y error=err, col sep=comma]{hopper_final_random_30_for_plot.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
width = \textwidth,
height = 5cm,
unbounded coords=discard,
xlabel={\textit{Iteration}},
ylabel={\textit{Number of expert queries}},
xmin = 0.5, xmax = 15.5, ymin = 0,
font={\footnotesize},
yticklabel style={
text width=2.5em,align=right
},
scaled y ticks= false]
\addplot[mark=*, dashed, mark size = 1]
table [x=iteration, y=total_obs, col sep=comma]{hopper_final_dagger_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = orange]
table [x=iteration, y=total_obs, col sep=comma]{hopper_final_loss_03_for_plot.csv};
\addplot[mark=*, dashed, mark size = 1, color = violet]
table [x=iteration, y=total_obs, col sep=comma]{hopper_final_safedagger_03_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = orange ]
table [x=iteration, y=total_obs, col sep=comma]{hopper_final_gradient_loss_03_02_for_plot.csv};
\addplot[mark=*, solid, mark size = 1, color = violet]
table [x=iteration, y=total_obs, col sep=comma]{hopper_final_safedagger_gradient_03_200_for_plot.csv};
\addplot[mark=*, dotted, mark size = 1, color = gray ]
table [x=iteration, y=total_obs, col sep=comma]{hopper_final_random_30_for_plot.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}
\begin{tikzpicture}
\begin{customlegend}[legend columns=3,legend style={draw=none,column sep=2ex, font=\small},legend entries={DAgger, Loss Network, SafeDAgger, Loss Gradient, SafeDAgger Gradient, Random Selection}]
\pgfplots@addlegendimage{black,dashed,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{orange,dashed,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{violet,dashed,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{orange,solid,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{violet,solid,fill=black!50!red,sharp plot}
\pgfplots@addlegendimage{gray,dotted,fill=black!50!red,sharp plot}
\end{customlegend}
\end{tikzpicture}
\caption{A comparison of active learning approaches to DAgger in the \textit{Hopper-v2} task. DAgger and random sampling most reliably converge to competitive performance. Error bars are based on 100 trials per iteration and indicate $\pm$ one standard deviation.}
\label{fig:hopper}
\end{figure}
We draw three conclusions. First, adding gradient logic to safety-aware baselines (SafeDAgger Gradient and Loss Gradient) improves performance and efficiency on Reacher-v2, and integrating gradient logic with random sampling (SafeDAgger Gradient Random and Loss Gradient Random) further improves average reward as well as efficiency. This result suggests the validity of loss gradients as a proxy for risk, as well as the benefit of unbiased expert sampling. We further note that, although not statistically significant, test-time performance of risk-aware algorithms appears superior to that of DAgger early on. We hypothesize that this occurs because, early in training, risk-aware strategies shift the distribution of training data toward riskier states, causing the trained models to give more importance to those states than DAgger would.
Second, although DAgger shows best performance overall, random sampling is a strong baseline, converging to similar performance as DAgger with substantially improved query efficiency in both Reacher-v2 and Hopper-v2. This indicates that an unbiased sampling strategy may be a competitive model against which proposed active learning strategies should be tested.
Third, most safety- and risk-aware algorithms fail to converge to DAgger performance in the more complex environment Hopper-v2. In this setting, DAgger and random sampling stand out as strong algorithms despite their simplicity. While some proposed algorithms (Loss and SafeDAgger Gradient Random) show promising performance and efficiency numbers, the fact that other safety-aware algorithms, including the established SafeDAgger baseline, fail to converge to DAgger performance makes us cautious about drawing strong conclusions from these data. While it is possible that these algorithms would converge to DAgger performance under more extensive hyperparameter tuning, this result hints at the challenges that richer environments pose to algorithms aiming to outperform DAgger and random baselines.
A final observation is that in our Reacher-v2 simulations, SafeDAgger Gradient queries the expert fewer times than SafeDAgger proper for much of the training course (Figure \ref{fig:reacher}), even though the condition for querying the expert in SafeDAgger Gradient is more relaxed. We hypothesize that this occurs because SafeDAgger Gradient, by using the gradient of loss as a proxy for risk and obtaining expert demonstrations in the face of such risk, is better able to reduce future risk and thus future need for expert queries. We note, however, that this is not the case for Hopper-v2.
\section{Limitations}
While our work suggests the value of gradients as a proxy for risk in active learning, our experiments are hardly conclusive. Most notably, we did not complete an extensive analysis of the value of gradients in all of the major DAgger derivatives and instead focused our efforts on SafeDAgger.
Due to computational limitations, even though we deployed each trained agent at each training iteration in multiple randomly sampled test-time environments, we executed this procedure only once per algorithm. In other words, only one agent was trained per algorithm. For a more robust evaluation, we would train a number of agents for each algorithm to produce uncertainty estimates for the number of expert queries made as well.
While our Reacher-v2 simulations show that adding gradient logic to active learning decision rules has the potential to improve performance and, thus, should be further investigated, we could not replicate those results in the second, more challenging environment Hopper-v2. Not only did our gradient-based methods not improve performance over DAgger and random selection in Hopper-v2, but even the established SafeDAgger did not converge to a competitive policy in that case. While we do not rule out that more extensive hyperparameter search could improve the performance of those algorithms, we believe this result should be a call for more robust methods that can more easily be transferred to new environments.
Similarly, we believe further investigation of the trade-offs between active learning and unbiased random strategies to be necessary. Not only did we find that a random querying strategy is highly competitive with both DAgger and safety-aware strategies, but most importantly, we also found that random selection was the most robust policy when replicating our experiments on a new environment. Thus, unbiased strategies seem to provide stronger practical guarantees of generalization compared to current state-of-the-art active learning strategies. Of course, purely random sampling may not be possible due to safety risk; in these cases observation weighting may offer a compromise.
\section{Conclusion}
DAgger is able to improve over a purely supervised learning approach by mitigating the problem of covariate shift, but does so at a high expert querying cost. Various DAgger derivatives have been created to limit the number of expert queries made while maintaining policy quality similar to DAgger's. We have shown that it may be possible to improve these methods by incorporating gradients.
We experienced difficulty in replicating the performance of both popular active learning strategies and our proposed methods in a more complex environment. Further research on the robustness of active learning algorithms across environments is necessary.
Finally, we observed that a random selection algorithm, which obtains unbiased samples of expert demonstrations, is a strongly competitive alternative to both query-intensive and safety-aware methods. Future imitation learning active learning algorithms should compare to a random querying baseline to establish that their added algorithmic complexity is warranted.
\newpage
\section{Introduction}\label{sec:introduction}
As information technology advances, people get informed through numerous media channels, spanning conventional media (e.g., newspapers, radio, or TV programs) and modern social media (e.g., mobile apps, electronic publications, or the world wide web (WWW)). As computers and smart phones become increasingly popular, information now spreads at a speed faster than ever before. In particular, people are ubiquitously connected by online social networks, and a person's behavior may quickly affect others, who may further perform some relevant actions. For example, after a celebrity posts a new message on Twitter, many followers read this tweet and then retweet it. Then, the friends of these followers may do the same. Consequently, the same tweet could be posted again and again, while more and more people are involved. This phenomenon in social networks is referred to as {\it influence propagation}. Here, such a celebrity could be called an {\it influencer}. Note that, in general, there may be more than one influencer for one particular event.
It is easy to see that influencers may have significant impacts on the dynamics in social networks, and thus the problem of influencer identification has drawn great attention in both academia and industry~\cite{1,2}. One such pioneering work is about viral marketing~\cite{3,4}, where a new product is advertised to a target group of people such that the advertisement could reach a large fraction of the social network users. In later work~\cite{5}, the influencer identification problem is commonly formulated as an \textbf{influence maximization problem}: Given an influence propagation model, find $k$ ``seed'' nodes such that the expected number of nodes that eventually get ``influenced'' is maximized. Two propagation models, i.e., the Independent Cascade (IC) model and the Linear Threshold (LT) model, are widely used. In these two graph based models, one of the most important parameters is the edge weight, which represents the probability that a person gets influenced and takes a similar action as what his or her socially connected neighbors do. In existing works~\cite{5,6,7,8,9}, the weight of each edge is usually determined by one of the following methods: 1) assigning a constant (e.g., 0.01), 2) assigning a random number from a small set (e.g., \{0.1, 0.01, 0.001\}), 3) assigning the reciprocal of a node's in-degree, or 4) assigning a value based on real data.
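For concreteness, the expected number of influenced nodes under the IC model can be estimated by Monte-Carlo simulation, as in the rough sketch below. The toy graph uses a constant edge weight of 0.5 (one of the assignment schemes listed above); it is not code or data from the cited works.

```python
import random

random.seed(0)

# Monte-Carlo estimate of expected spread under the Independent
# Cascade (IC) model: each newly activated node gets a single chance
# to activate each out-neighbor, succeeding with the edge's weight.
def ic_spread(graph, seeds, trials=2000):
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v, p in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

# Tiny DAG: 0 -> {1, 2} -> 3, uniform weight 0.5 on every edge.
# Analytically, E[spread] = 1 + 0.5 + 0.5 + (1 - 0.75^2) = 2.4375.
g = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.5)], 2: [(3, 0.5)]}
print(ic_spread(g, [0]))
```

Influence maximization methods must evaluate such an estimate for many candidate seed sets, which is what makes simulation-based approaches expensive at scale.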
Although accelerated greedy algorithms have been developed~\cite{10,11} to mitigate the high computation cost in influence maximization, all works mentioned above~\cite{5,6,7,8,9,10,11} need significant Monte-Carlo simulations to calculate the expected number of influenced nodes, which prevents their results from being implemented in analyzing large-scale social networks. Recently, a statistics-based algorithm~\cite{martingales} and an extended stop-and-stare algorithm~\cite{mssa} have been proposed to scale up the influence maximization problem with approximation guarantees based on propagation models. However, in order to implement any of them, edge weights must be pre-assigned. To eliminate such a need, the Credit Distribution (CD) model~\cite{12} was proposed to measure the influence merely based on the history of user behaviors. Following~\cite{12}, some extended CD models have been proposed to improve the estimation accuracy over the total number of people that finally get influenced by introducing node features~\cite{13} and time constraints~\cite{14}, respectively.
The aforementioned CD based models can be trained by datasets or event logs composed of user indices, actions, and timestamps. However, the datasets used for the CD based models in existing works~\cite{12,13,14} usually have a simplified structure such that they only record one timestamp of a certain action for each user. By using such datasets, they also implicitly assume that each user takes the same action at most once. It is easy to see that such a setup is oversimplified, since a user may take the same action multiple times. Moreover, a user who repeatedly performs a certain action is potentially more influential than one who performs the same action only once.
This issue can be easily verified in social networks like Twitter~\cite{15} or Facebook, where users may participate in the discussion on some topics more than once.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{top1_0_2.png}
\caption{Difference between Action Frequency and Numbers of Users Involved.}
\label{fig:top1}
\end{figure}
In Fig. \ref{fig:top1}, we investigate an action named ``TOP'', where we compute over time how frequently the ``TOP'' action is taken and the number of users taking this action within the given time interval. It can be observed that they both follow the same decay fashion and that there is a big difference between the two sequences of bars. For example, from 23:00:00 to 23:59:59 on July 3rd, there are about 4,000 people taking the ``TOP'' action, while such an action is performed more than 6,500 times. The observed difference implies that some users indeed take the action ``TOP'' multiple times. To further quantify how the users repeat the same action, we define the repetition rate of action $a$ as
\begin{align*}
1 - \frac{\text{Number of Users Performing Action }a}{\text{Number of Times Action } a \text{ is Performed}}.
\end{align*}
The repetition rate is the percentage of executions that are not performed for the first time by the corresponding user over the total number of executions. For the 100 most common actions in the dataset, we find that about 43\% of the actions have repetition rates over 10\%, as shown in Fig. \ref{fig:rpr}. Therefore, it is useful to develop a model that takes the repetition of actions into account.
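The repetition rate can be computed directly from a multi-action event log of (user, action, timestamp) records; the toy log below is an illustration, not the Twitter dataset itself.

```python
from collections import defaultdict

# Repetition rate of an action a:
#   1 - (#distinct users performing a) / (#times a is performed).
def repetition_rates(log):
    """log: iterable of (user, action, timestamp) records."""
    users = defaultdict(set)    # action -> distinct users
    counts = defaultdict(int)   # action -> total executions
    for user, action, ts in log:
        users[action].add(user)
        counts[action] += 1
    return {a: 1.0 - len(users[a]) / counts[a] for a in counts}

# Toy log: user u1 performs "TOP" twice (one repetition), u2 once.
log = [("u1", "TOP", 1), ("u2", "TOP", 2), ("u1", "TOP", 3),
       ("u1", "RT", 4)]
print(repetition_rates(log))   # TOP: 1 - 2/3; RT: 0.0
```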
\begin{figure}
\centering
\includegraphics[width=1.8in]{Repetition_Rate_0.png}
\caption{Repetition Rate of the 100 Most Common Actions.}
\label{fig:rpr}
\end{figure}
In this paper, we perform data analysis on a multi-action event log, where the same action of one particular user is recorded multiple times if the user performs this action repeatedly. To deal with such an event log, we propose a novel \emph{multi-action credit distribution} (mCD) model, which uses the time-dependent ``credit'' to quantify the influence ability of each user. Based on this model, we formulate a budgeted influence maximization problem, which aims to identify a subset of users that maximizes the influence ability of the selected subset. In this problem, the objective function, i.e., the influence ability, is submodular, and a knapsack constraint regulates the cost of user selection. This problem is NP-hard.
To solve this problem, we first consider a simplified case with a cardinality constraint. By utilizing submodularity, we develop an efficient streaming algorithm that scans through the user set only once and uses a certain threshold to decide whether a user should be added to the solution set. Such a streaming algorithm is guaranteed to achieve at least $(\frac{1}{2}-\epsilon)$ of the optimal value. Then, we modify the algorithm to solve the general knapsack case, which guarantees $(\frac{1}{3}-\epsilon)$ of the optimal value. Experimental results over a real Twitter dataset show: 1) compared to the existing CD and non-CD models, the mCD model is more accurate in estimating the total number of people that finally get influenced by the selected set; and 2) under the mCD model, the proposed streaming algorithms achieve a utility similar to that of the greedy algorithm CELF~\cite{11}, while running much faster.
The rest of this paper is organized as follows. In Section~2, we describe the mCD model along with the formulation of the influence maximization problem. In Section~3, we introduce a learning algorithm to train the mCD model with the given event log, and present two streaming algorithms to solve the influence maximization problem under a cardinality constraint and a knapsack constraint, respectively. In Section~4, we numerically demonstrate the performance of the proposed mCD model and the corresponding streaming algorithms over real data collected from Twitter. Finally, we conclude the paper in Section~5.
\section{System Model and Problem Formulation}
In this paper, an online social network is modeled as an unweighted directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$~\cite{12,13}, where the node set $\mathcal{V}$ is the set of all users and the edge set $\mathcal{E}$ indicates the social relationships among all the users. Specifically, for any $u,v \in \mathcal{V}$, there is a directed edge $(v,u)$ (from $v$ to $u$) if $v$ is socially followed by $u$, which implies that $v$ could potentially ``influence'' $u$. The collected data from this social network is a multi-action event log $\mathcal{L}$ with records in the form of (USER, ACTION, TIME), where a corresponding tuple ($u, a, t$)~$\in \mathcal{L}$ indicates that user $u$ performed action $a$ at time $t$. The action $a$ is from a finite action set~$\mathcal{A}$. Here, each action $a$ corresponds to a specific discussion topic, and user $u$ performing action $a$ means that he/she got involved in the discussion of that topic.
\subsection{Multi-Action Credit Distribution}
In the conventional CD model~\cite{12}, the main idea is to assign ``credits'' to the possible influencers according to the event log. The total credits assigned to a user consist of direct credits and indirect credits. If the neighbors of user $u$ perform a certain action by following $u$, direct credits are assigned to user $u$ by its neighbors. If users get influenced by $u$ through multiple hops, indirect credits are assigned to $u$; the values of indirect credits are computed along all possible trails. The conventional CD model can effectively quantify the influence ability of each user in a single-action event log, but not in the case of a multi-action event log. Next, we introduce the detailed design of the mCD model, which can handle multi-action event logs.
Suppose that in a multi-action event log $\mathcal{L}$, it is recorded that user $u$ performs action $a$ for $A_u(a)$ times. For user-action pair $(u,a)$, if $A_u(a) \geq 1$, let $t_i(u,a)$ denote the timestamp when user $u$ performs action $a$ for the $i$-th time; otherwise, the timestamp is not needed. Next, we denote $\mathcal{A}_u$ as the set of actions that are performed by user $u$. Note that the conventional CD model is a special case of the proposed mCD model, in which $A_u(a) \in \left\{ 0,1 \right\}$ for all $u \in \mathcal{V}$ and $a \in \mathcal{A}$. Based on the directed graph $\mathcal{G}$ and the multi-action event log $\mathcal{L}$, for any action $a \in \mathcal{A}$, we define a directed graph $\mathcal{G}(a)$ that is generated from $\mathcal{G}$ according to the propagation of action~$a$. Specifically, we define $\mathcal{G}(a) = (\mathcal{V}(a), \mathcal{E}(a))$ such that $\mathcal{V}(a) = \{ v \in \mathcal{V} | A_v(a) \geq 1\}$ and $\mathcal{E}(a) = \{(v,u) \in \mathcal{E} | t_1(v,a) < t_1(u,a), A_u(a) \cdot A_v(a) \geq 1\}$. Then, for any user $u$ who performs action $a$, we let $\mathcal{N}_{in}(u,a) = \{v | (v,u) \in \mathcal{E}(a)\}$ denote the set of direct influencers for user $u$, i.e., the neighbors of user $u$ who perform action $a$ earlier than user $u$. Furthermore, we denote by $\mathcal{N}_{in}(\mathcal{S},a) = \{v | v \in \mathcal{N}_{in}(u,a), u \in \mathcal{S}, v \notin \mathcal{S} \}$ the neighborhood of a given user set $\mathcal{S}$ with respect to action~$a$.
For a given action $a$, we define a timestamp set $\mathcal{T}_{v,u}(a)=\{t_i(v,a)|t_i(v,a)<t_1(u,a), 1\leq i \leq A_v(a)\}$ for every pair of users $u$ and $v$ such that $u \in \mathcal{V}(a)$ and $v\in \mathcal{N}_{in}(u,a)$, which is a collection of timestamps of $v$ performing action $a$ before user $u$. Intuitively, each time when user $v$ performs the action, it causes influence on user $u$, since $v$ and $u$ have a directed edge $(v,u)$ in $\mathcal{G}(a)$. To take this effect into consideration, we consider a series of delays that can be expressed by the timestamp differences, i.e., $t_1(u,a)-t$, for all $t \in \mathcal{T}_{v,u}(a)$. Note that the conventional CD model simply uses one delay to calculate the direct credit. Here, on the other hand, we adopt the effective delay from $v$ to $u$ on action $a$, which is defined as
\begin{align}
\label{eq:h_mean}
\Delta t_{v,u}(a) = \frac{1}{\sum_{t \in \mathcal{T}_{v,u}(a)}{(t_1(u,a) - t)^{-1}}}.
\end{align}
Note that $\Delta t_{v,u}(a)$ equals the harmonic mean of $\{(t_1(u,a)-t)\}$ divided by $|\mathcal{T}_{v,u}(a)|$. There are some useful properties of $\Delta t_{v,u}(a)$: 1) $\Delta t_{v,u}(a)\le \min\{(t_1(u,a)-t)\}$ for $t \in \mathcal{T}_{v,u}(a)$, and 2) $\Delta t_{v,u}(a)$ decreases as $|\mathcal{T}_{v,u}(a)|$ increases.
The definition of $\Delta t_{v,u}(a)$ is inspired by the calculation of parallel resistance, where the effective resistance of multiple parallel resistors is mainly determined by the smallest one, and whenever a new resistor is added in parallel, the effective resistance decreases. Similarly, every time user $v$ takes action $a$, it poses some influence on user $u$, and it is reasonable to assume that the most recent action induces the most significant influence. Thus, it is desired that the value of the effective delay $\Delta t_{v,u}(a)$ is dominated by $\min \{ t_1(u,a)-t | t\in \mathcal{T}_{v,u}(a) \}$. In addition, user $u$ would be more likely to follow his neighbor $v$ on action $a$ if $v$ takes action $a$ many times. In other words, by repeatedly taking the same action, user $v$ poses stronger influence on user $u$. Based on the definition of effective delay, we next define the direct credit and the indirect credit.
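A small numerical sketch, with made-up timestamps, illustrates the effective delay of Eq.~(\ref{eq:h_mean}) and the two properties listed above:

```python
def effective_delay(t_u, timestamps_v):
    """Effective delay from v to u: the reciprocal of the sum of the
    reciprocal raw delays t_1(u,a) - t over v's earlier timestamps t,
    mirroring the parallel-resistance formula."""
    delays = [t_u - t for t in timestamps_v if t < t_u]
    return 1.0 / sum(1.0 / d for d in delays)

# u first acts at t = 10; v acted at t = 2, 6, 9, giving raw delays 8, 4, 1.
d = effective_delay(10, [2, 6, 9])
assert d <= 1                            # property 1: bounded by the smallest raw delay
assert d < effective_delay(10, [2, 6])   # property 2: more repetitions shrink the delay
```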
\textbf{Direct Credit.} This credit is what user $u$ assigns to its neighbor $v$ when $u$ takes the same action $a$ after $v$. The direct credit $\gamma_{v,u}(a)$ is defined as
\begin{align}
\label{eq:r}
\gamma_{v,u}(a) =\exp{\left(-\frac{\Delta t_{v,u}(a)}{\tau_{v,u}}\right)} \cdot \frac{1}{R_{u,a}},
\end{align}
where $\tau_{v,u}$ and $R_{u,a}$ are normalization factors. Note that the direct credit decays exponentially with the effective delay $\Delta t_{v,u}(a)$. Such an exponential expression follows from the original definition of the CD model~\cite{12}. Here, $\tau_{v,u}$ is the arithmetic average of the time delays between $v$ and $u$ over all the actions:
\begin{align}
\label{eq:tau}
\tau_{v,u} = \frac{1}{\left|\mathcal{A}_{v2u}\right|}\cdot\sum_{a \in \mathcal{A}_{v2u}}{\frac{\sum_{t \in \mathcal{T}_{v,u}(a)}{\left(t_1(u,a) - t\right)}}{|\mathcal{T}_{v,u}(a)|}},
\end{align}
where $\mathcal{A}_{v2u}$ denotes the set of actions that $v$ takes prior to $u$. In addition, $R_{u,a}$ is given by $$R_{u,a} = \sum_{v \in \mathcal{N}_{in}(u,a)}{\exp{\left(-{\Delta t_{v,u}(a)}/{\tau_{v,u}}\right)}},$$
which ensures that the direct credit assigned to all the neighbors of $u$ for action $a$ sums up to $1$.
To summarize, for $u, v \in \mathcal{V}$, the direct credit given to $v$ by $u$ with respect to action $a$ is given as
\begin{align}
\label{eq: gamma}
\gamma_{v,u}(a)=\left\{
\begin{array}{lcl}
\exp{\left(-\frac{\Delta t_{v,u}(a)}{\tau_{v,u}}\right)} \cdot R_{u,a}^{-1}, & & {(v,u) \in \mathcal{E}(a)}\text{;} \\
0, & & {\text{otherwise.}}
\end{array} \right.
\end{align}
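The direct-credit computation of Eq.~(\ref{eq: gamma}), including the normalizer $R_{u,a}$, can be sketched as follows; the neighbor names, delays, and $\tau$ values are hypothetical.

```python
import math

def direct_credits(delays, tau):
    """Direct credits gamma_{v,u}(a) for a fixed user u and action a:
    exponential decay in the effective delay, normalized by R_{u,a} so that
    the credits over u's in-neighbors sum to 1. `delays[v]` is the effective
    delay Delta t_{v,u}(a) and `tau[v]` is the normalizer tau_{v,u}."""
    raw = {v: math.exp(-dt / tau[v]) for v, dt in delays.items()}
    R = sum(raw.values())  # the normalizer R_{u,a}
    return {v: w / R for v, w in raw.items()}

gamma = direct_credits({"v1": 1.0, "v2": 3.0}, {"v1": 2.0, "v2": 2.0})
assert abs(sum(gamma.values()) - 1.0) < 1e-12  # credits to u's neighbors sum to 1
assert gamma["v1"] > gamma["v2"]               # shorter effective delay earns more credit
```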
\textbf{Indirect Credit.}
Suppose that $(v,w)$ and $(w,u)$ are in $\mathcal{E}(a)$, so that $v$ and $u$ are connected indirectly. Then, user $u$ assigns indirect credit to $v$ via $w$ as $\gamma_{v,w}(a) \cdot \gamma_{w,u}(a)$. As such, the total credit given to $v$ by $u$ on action $a$ can be defined iteratively as
\begin{align}
\label{eq:Ga}
\Gamma_{v,u}(a) = \sum_{w \in \mathcal{N}_{in}(u,a)}{\Gamma_{v,w}(a)\cdot \gamma_{w,u}(a)},
\end{align}
where $\Gamma_{v,v}(a) = 1$. Then, the average credit given to $v$ by $u$ with respect to all actions is defined as:
$$ \kappa_{v,u} =\left\{
\begin{array}{lcl}
0, & &|\mathcal{A}_u|=0;\\
\frac{1}{|\mathcal{A}_u|}\sum_{a \in \mathcal{A}}{ \Gamma_{v,u}(a)}, & & {\text{otherwise}}.
\end{array} \right. $$
Moreover, for a set of influencers $\mathcal{S}\subseteq \mathcal{V}(a)$ on action $a$, we have
$$ \Gamma_{\mathcal{S},u}(a)=\left\{
\begin{array}{lcl}
1, & & {u \in \mathcal{S}};\\
\sum_{w \in \mathcal{N}_{in}(u,a)}{\Gamma_{\mathcal{S},w}(a)\cdot \gamma_{w,u}(a)}, & & {\text{otherwise}}.
\end{array} \right. $$
Similarly, we define the average credit given to $\mathcal{S}$ by $u$ with respect to all the actions as:
$$ \kappa_{\mathcal{S},u} =\left\{
\begin{array}{lcl}
0, & &|\mathcal{A}_u|=0;\\
\frac{1}{|\mathcal{A}_u|}\sum_{a \in \mathcal{A}}{ \Gamma_{\mathcal{S},u}(a)}, & & {\text{otherwise}}.
\end{array} \right. $$
Note that the average credit $\kappa_{\mathcal{S},u}$ can also be interpreted as the ``influence ability'' of the set $\mathcal{S}$ on a particular user $u$, and the value of $\kappa_{\mathcal{S},u}$ indicates how influential $\mathcal{S}$ is. Finally, we define $\sigma_{mCD}(\mathcal{S})$ as the influence ability of $\mathcal{S}$ over the whole network, which is given as
\begin{align}
\label{objective}
\sigma_{mCD}(\mathcal{S}) = \sum_{u \in \mathcal{V}}{\kappa_{\mathcal{S},u}}.
\end{align}
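Since each propagation graph $\mathcal{G}(a)$ is acyclic (edges point from earlier to later first performances), the recursion for $\Gamma_{\mathcal{S},u}(a)$ can be evaluated in one pass over a topological order. The sketch below uses a hypothetical three-user chain with direct credits of $1$.

```python
def total_credit(S, gamma, order):
    """Gamma_{S,u}(a) per the recursive definition: 1 for u in S, otherwise
    the credit accumulated over all in-neighbors. `gamma[(v, u)]` holds the
    direct credits of one action's propagation DAG; `order` is a topological
    order of that DAG (earlier actors first)."""
    Gamma = {}
    for u in order:
        if u in S:
            Gamma[u] = 1.0
        else:
            Gamma[u] = sum(Gamma[v] * g for (v, w), g in gamma.items() if w == u)
    return Gamma

# Chain a -> b -> c with direct credits 1.0 each: seeding {a} fully credits b and c.
gamma = {("a", "b"): 1.0, ("b", "c"): 1.0}
G = total_credit({"a"}, gamma, ["a", "b", "c"])
assert G == {"a": 1.0, "b": 1.0, "c": 1.0}
```

Summing these per-action credits, averaged by $|\mathcal{A}_u|$ and over all users, then yields $\sigma_{mCD}(\mathcal{S})$.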
\textbf{Remark:} $\sigma_{mCD}(\mathcal{S})$ is monotone and submodular. To see this, it is sufficient to show that $\Gamma_{\mathcal{S}, u}(a)$ is monotone and submodular for any $u\in \mathcal{V}$ and $a\in \mathcal{A}$, since a positive linear combination of monotone and submodular functions is still monotone and submodular~\cite{17}. Since the propagation graph $\mathcal{G}(a)$ is acyclic, similar to the argument in~\cite{12}, we can show that $\Gamma_{\mathcal{S}, u}(a)$ is monotone and submodular by induction on the length of the attention path, where the attention path is introduced to limit the indirect credit calculation to paths with lengths not exceeding a given value. First, we verify the definition of submodularity for attention paths of length $0$. Then, assuming that submodularity holds for attention paths of length $l$, we can easily show it for the case of $l+1$. Since the maximum length of the attention path is $|\mathcal{V}|-1$, we conclude that $\Gamma_{\mathcal{S}, u}(a)$ is monotone and submodular, and therefore so is the influence ability function $\sigma_{mCD}(\mathcal{S})$.
\subsection{Budgeted Influence Maximization Problem}
A budgeted influence maximization problem in a social network can be formulated as finding a subset $\mathcal{S}$ of users, i.e., a seed set, from the ground set $\mathcal{V}$ to achieve the maximum influence ability within some user selection budget. Note that in general, different user selection criteria may lead to different costs. For example, if users are chosen and paid to spread certain advertising information, one would expect that a user with more fans charges more than others. This is reasonable since the value of a user is related to how many people he or she can potentially influence over the network. Therefore, we introduce a knapsack constraint to quantify the user selection cost. Suppose there are $n$ users in the dataset. We denote by a positive $n \times 1$ weight vector $g = (g_1, g_2, \ldots, g_n)^T$ the cost for selecting each user. Denote by $I_\mathcal{S} = (I_{1}, I_{2}, \cdots, I_{n})^T$ the $n \times 1$ characteristic vector of $\mathcal{S}$, where $I_i = 1$ if $i \in \mathcal{S}$, and $I_i = 0$ otherwise. Let $b$ be the total available budget on the cost for selecting users into $\mathcal{S}$. Then, the budgeted influence maximization problem can be cast as
\begin{eqnarray}\label{overallproblem}
\begin{aligned}
&\underset{\mathcal{S} \subseteq \mathcal{V}}{\textrm{maximize}} &&\sigma_{mCD}(\mathcal{S}) \\
&\textrm{subject to} && g^TI_{\mathcal{S}} \leq b.
\end{aligned}
\end{eqnarray}
For simplicity, we normalize problem~(\ref{overallproblem}) as follows. We first divide both sides of the knapsack constraint by the minimum weight $g_{\min}=\min \left\{g_i\right\}_{i=1}^n$, i.e., $g^TI_\mathcal{S} / g_{\min}\leq b/g_{\min}$. We then treat $g/ g_{\min}$ and $b/g_{\min}$ as the new weight vector $g$ and the new budget $b$, respectively, with a slight abuse of notation. After this manipulation, every entry in $g$ is no less than $1$ and the number of selected users cannot exceed $b$. It is easy to see that the standardized problem has the same optimal solution as the original problem~(\ref{overallproblem}). For the rest of this paper, we only consider the standardized problem.
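The standardization amounts to a single division by $g_{\min}$; a toy numerical check with arbitrary costs confirms that feasibility is preserved.

```python
# Hypothetical costs and budget, before standardization.
g = [2.0, 4.0, 6.0]
b = 8.0

g_min = min(g)
g_norm = [gi / g_min for gi in g]   # new weight vector: every entry >= 1
b_norm = b / g_min                  # new budget

assert min(g_norm) == 1.0
# Feasibility is preserved: the set {user 0, user 1} costs 6 <= 8 originally
# and 3 <= 4 after scaling, so both constraints admit the same sets.
S = [0, 1]
assert (sum(g[i] for i in S) <= b) == (sum(g_norm[i] for i in S) <= b_norm)
```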
With the formulation of the budgeted influence maximization problem, it is worth noting that $\sigma_{mCD}(\mathcal{S})$ is a lower bound of the total number of users that finally get influenced over all actions, as given in Proposition~\ref{prop:1}.
\begin{proposition} \label{prop:1}
$\sigma_{mCD}(\mathcal{S}) \leq |\cup_{a \in \mathcal{A}}\mathcal{V}(a)|.$
\end{proposition}
\begin{proof}
Given an action $a\in \mathcal{A}$ and a user $u \in \mathcal{V}(a)$, let $h$ denote the maximum hops that $u$ distributes credits to $\mathcal{N}_{in}(\mathcal{S},a)$. Then, the total credit can be expressed as
\begin{align*}
\begin{split}
\Gamma_{\mathcal{S},u}(a) = \sum_{w_1 \in \mathcal{N}_{in}(u,a)}\gamma_{w_1, u}(a) \left( \sum_{w_2 \in \mathcal{N}_{in}(w_1,a)}\gamma_{w_2, w_1}(a) \cdot \right. \\
\left. \left( \cdots \left( \sum_{v \in \mathcal{N}_{in}(w_h, a)} \gamma_{v,w_h}(a) \cdot \Gamma_{\mathcal{S},v}(a) \right)\right) \right).
\end{split}
\end{align*}
According to the definition of direct credits, for any $v \in \mathcal{V}$ and $a \in \mathcal{A}$, we have a normalizer $R_{v,a}$ to ensure $$\sum_{v' \in \mathcal{N}_{in}(v,a)} \gamma_{v',v}=1.$$ Thus, for $v \in \mathcal{N}_{in}(w_h, a)$, we have
\begin{align*}
\Gamma_{\mathcal{S}, v}(a) &= \sum_{v' \in \mathcal{S} \cap \mathcal{N}_{in}(v,a)} \gamma_{v',v}(a)\\ &\leq \sum_{v' \in \mathcal{N}_{in}(v,a)} \gamma_{v',v}(a)=1,
\end{align*}
and then,
$$\sum_{v \in \mathcal{N}_{in}(w_h, a)} \gamma_{v,w_h} \cdot \Gamma_{\mathcal{S},v}(a) \leq 1.$$
Analogously, we can show that the total credit given by any $u\in \mathcal{V}(a)$ on action $a \in \mathcal{A}$ is bounded, i.e., $\Gamma_{\mathcal{S},u}(a) \leq 1$. By the definition of influence ability, we have
\begin{align*}
\sigma_{mCD}(\mathcal{S}) &= \sum_{u \in \cup_{a \in \mathcal{A}}\mathcal{V}(a)} \frac{1}{|\mathcal{A}_u| } \sum_{a \in \mathcal{A}} \Gamma_{\mathcal{S},u}(a) \\&= \sum_{u \in \cup_{a \in \mathcal{A}}\mathcal{V}(a)} \frac{1}{|\mathcal{A}_u| } \sum_{a \in \{a| a \in \mathcal{A}, A_u(a) \geq 1\}} \Gamma_{\mathcal{S},u}(a) \\ &= \sum_{u \in \cup_{a \in \mathcal{A}}\mathcal{V}(a)} \frac{1}{|\mathcal{A}_u| } \sum_{a \in \mathcal{A}_u} \Gamma_{\mathcal{S},u}(a) \\ &\leq \sum_{u \in \cup_{a \in \mathcal{A}}\mathcal{V}(a)} \frac{1}{|\mathcal{A}_u| } \cdot |\mathcal{A}_u| \\ &= |\cup_{a \in \mathcal{A}}\mathcal{V}(a)|.
\end{align*}
Note that when we evaluate a single action $a$, $\sigma_{mCD} \left(\mathcal{S} \right)$ provides a lower bound of $|\mathcal{V}(a)|$.
\end{proof}
Therefore, problem~(\ref{overallproblem}) seeks a subset $\mathcal{S}$ of the ground set $\mathcal{V}$ that maximizes a lower bound of the total number of users that finally get influenced over all actions. In Section~4, we will numerically show that the influence ability $\sigma_{mCD}(\mathcal{S})$ of the mCD model, although a lower bound, provides a more accurate approximation of $|\mathcal{V}(a)|$ for each action $a \in \mathcal{A}$ than the influence ability of the CD model.
As aforementioned, the objective function of problem (\ref{overallproblem}) is monotone and submodular. Therefore, problem~(\ref{overallproblem}) is a submodular maximization problem under a knapsack constraint, which has been proved to be NP-hard~\cite{16}. In general, such a submodular problem can be approximately solved by greedy algorithms~\cite{10,16}. However, due to the large volume of online social network datasets, the implementation of greedy algorithms is not practical. In the next section, we develop an efficient streaming algorithm to solve the budgeted influence maximization problem under the mCD model.
\section{Algorithms}
The proposed algorithm is divided into the following modules, as shown in Fig.~\ref{fig:system_model}. The module ``Model Learner'' is designed to learn the parameters $\{\tau_{v,u}\}$ (the average time delay between each pair of $v$ and $u$ over all actions) and $\{A_u(a)\}$ (the frequency of $u$ taking action $a$) from the training dataset before solving the optimization problem, such that the algorithm can deal with a newly arriving dataset or test set much more efficiently. Then, for the new or test set of data, we start with the preprocessing module ``Log Scanner'', which scans the dataset to calculate the total credit $\Gamma_{v,u}(a)$ assigned to user $v$ by $u$ for action $a$ by using the already learned $\{\tau_{v,u}\}$ and $\{A_u(a)\}$ from the training set. The last and most important module, ``Problem Solver'', solves the influence maximization problem~(\ref{overallproblem}) based on $\{\Gamma_{v,u}(a)\}$ and outputs the seed set.
\begin{figure}[H]
\centering
\includegraphics[width=3in]{System_Model.pdf}
\caption{Overall Algorithm Structure for Influence Maximization.}
\label{fig:system_model}
\end{figure}
\subsection{Parameter Learning}
The main function of Model Learner is to learn $\{\tau_{v,u}\}$ and $\{A_u(a)\}$ from the event log, where $\tau_{v,u}$ is mainly determined by $\mathcal{T}_{v,u}(a)$ and $A_{v2u} =|\mathcal{A}_{v2u}|$, according to Eq.~(\ref{eq:tau}). Since $\mathcal{T}_{v,u}(a)$ can be directly constructed from the event log according to its definition in Section~2.1, the key problem is to compute $A_{v2u}$, or equivalently to find all parents of $u$ for a particular action $a$. Here, we propose Algorithm~\ref{Learning} to obtain $\{\tau_{v,u}\}$ by computing $\{A_{v2u}\}$.
\begin{algorithm}
\caption{MODEL LEARNER}
\label{Learning}
\begin{algorithmic}[1]
\State \textbf{Initialize} $A_{v2u} := 0$ for all users and edges.
\NoDoFor{each action $a$ in training set}
\State $current\_table := \emptyset$.
\State $A_u(a) := 0$ for all users.
\NoDoFor{each tuple $<u,a,t>$ in chronological order}
\State \textbf{if} $A_u(a) \neq 0$, \textbf{then} continue.
\State $parents(u,a) := \emptyset; A_u(a) := A_u(a) + 1$.
\NoDoWhile{$\exists v : (v,u) \in \mathcal{E}$ and $v \in current\_table$}
\State $parents(u,a) := parents(u,a) \cup \{v\}$.
\EndWhile
\State $A_{v2u} := A_{v2u} + 1$, $\forall v \in parents(u,a)$.
\State $current\_table := current\_table \cup \{u\}$.
\EndFor
\EndFor
\State \textbf{Output} $\tau_{v,u}$ according to Eq. (\ref{eq:tau}), $A_u(a)$.
\end{algorithmic}
\end{algorithm}
Specifically, $current\_table$ is maintained to store user indices who have performed $a$ and have been scanned so far; and $parents(u,a)$ is a list of parents of $u$ with respect to the action $a$. The incremental update process for $A_{v2u}$ is repeated with respect to each action $a$, in order to compute the total number of actions propagated from $v$ to $u$. At the end of Algorithm~\ref{Learning}, we use the definition in Eq. (\ref{eq:tau}) to compute the average time delay between every valid user pair.
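A compact Python rendering of the learning pass may clarify the bookkeeping; it is restricted to counting $\{A_{v2u}\}$ (the $\tau_{v,u}$ computation is then a direct application of Eq.~(\ref{eq:tau})), and the edge set and log below are toy data.

```python
from collections import defaultdict

def learn_parents(edges, log):
    """Sketch of the MODEL LEARNER pass: for each action, scan the tuples in
    chronological order and count, for every edge (v, u), the number of
    actions A_{v2u} that v performed before u (first performances only)."""
    A_v2u = defaultdict(int)           # (v, u) -> number of actions propagated v -> u
    by_action = defaultdict(list)
    for u, a, t in log:
        by_action[a].append((t, u))
    for a, events in by_action.items():
        current = set()                # the current_table of Algorithm 1
        for t, u in sorted(events):
            if u in current:
                continue               # only the first time u performs a counts
            for v in current:
                if (v, u) in edges:    # v is a parent of u on action a
                    A_v2u[(v, u)] += 1
            current.add(u)
    return dict(A_v2u)

edges = {("v", "u")}
log = [("v", "a1", 1), ("u", "a1", 5), ("v", "a2", 2), ("u", "a2", 3), ("u", "a2", 9)]
assert learn_parents(edges, log) == {("v", "u"): 2}  # a2's repeat at t=9 is ignored
```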
\subsection{Problem Solving}
\subsubsection{Computation of Marginal Gain}
As problem (\ref{overallproblem}) is NP-hard, it is not practical to obtain the optimal solution over large datasets. For such a problem, a greedy algorithm was proposed in~\cite{17} to obtain a suboptimal solution within a factor of $(1-1/e)$ of optimality. At each step, the greedy algorithm scans over all the unselected users and picks the user with the largest marginal gain. The drawback of the greedy algorithm is that repeatedly scanning over all the unselected users is very time-consuming, especially when the dataset is large. In this section, we propose an alternative way to calculate the marginal gain efficiently.
Based on $\{\tau_{v,u}\}$ and $\{A_u(a)\}$ obtained by ``Model Learner'', we can compute the marginal gain based on the definition of the influence ability given in Eq. (\ref{objective}) directly. However, it requires the computation of the total credit $\{\Gamma_{v,u}(a) \}$ for each user as well as the total credit for each pair of neighbors, which is quite inefficient under a big data setting. Thus, we adopt the following alternative and equivalent method to calculate the marginal gain. First, denote by $\Gamma_{x,u}^{\mathcal{V} - \mathcal{S}}(a)$ the total credit given to $x$ by $u$ on action $a$ through the paths that are contained completely in the subgraph induced by $\mathcal{V} - \mathcal{S} = \left\{v \in \mathcal{V}| v \notin \mathcal{S} \right\}$. Note that when $\mathcal{S}$ is the null set, we have $\Gamma_{x,u}^{\mathcal{V} - \mathcal{S}}(a) = \Gamma_{x,u}(a)$ as defined in Eq. (\ref{eq:Ga}). For the subgraphs, the following lemmas hold.
\begin{lemma}
\label{gamma_v_u}
$\Gamma_{v,u}^{\mathcal{S}-x}(a) = \Gamma_{v,u}^{\mathcal{S}}(a) - \Gamma_{v,x}^{\mathcal{S}}(a) \cdot \Gamma_{x,u}^{\mathcal{S}}(a)$.
\end{lemma}
\begin{lemma}
\label{gamma_S_u}
$\Gamma_{\mathcal{S}+x,u}(a) = \Gamma_{\mathcal{S},u}(a) + \Gamma_{x,u}^{\mathcal{V}-\mathcal{S}}(a) \cdot (1-\Gamma_{S,x}(a))$.
\end{lemma}
Then, we have the following theorem to compute the marginal gain.
\begin{theorem}
\label{th:mg}
In the mCD model, given any subset $\mathcal{S} \subseteq \mathcal{V}$ and an element $x \in \mathcal{V}-\mathcal{S}$, the marginal gain of adding $x$ into $\mathcal{S}$ equals
$$\sum_{a\in \mathcal{A}}\left((1-\Gamma_{\mathcal{S},x}(a)) \cdot \sum_{u \in \mathcal{V}}{\frac{1}{|\mathcal{A}_u|} \cdot \Gamma_{x,u}^{\mathcal{V}-\mathcal{S}}(a)} \right).$$
\end{theorem}
The proofs of the above lemmas and theorem can be easily obtained from results in~\cite{12} and are omitted here. With these observations, when we add a new user $x$ into $\mathcal{S}$, we do not need~to iteratively calculate $\sigma_{mCD}(\mathcal{S}+x)$ and $\sigma_{mCD}(\mathcal{S})$. Instead, we keep updating $\Gamma_{v,u}^{\mathcal{V-S}}(a)$ and~$\Gamma_{\mathcal{S},u}(a)$ using Lemmas~\ref{gamma_v_u} and~\ref{gamma_S_u}, after which we can compute the marginal gain with Theorem~\ref{th:mg}. In Algorithm~\ref{UC} below, ``Log Scanner'' scans over the test set to calculate $\Gamma_{v,u}(a)$ for every user pair $(v,u)$ on every action $a$, and stores the result in $UC[a][v][u]$. This module provides the initialization for the subsequent ``Problem Solver'' module.
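Lemma~\ref{gamma_v_u} can be checked numerically on a toy DAG. The sketch below computes subgraph-restricted total credits by brute force and verifies the identity for the full vertex set with $x=b$; all node names and credit values are hypothetical.

```python
def pair_credits(gamma, order, nodes):
    """Total credits Gamma_{v,u}(a) for all pairs, restricted to the subgraph
    induced by `nodes`. `gamma[(v, u)]` holds the direct credits of one
    action's propagation DAG and `order` is a topological order of it."""
    G = {}
    for src in order:
        if src not in nodes:
            continue
        cred = {src: 1.0}              # Gamma_{src,src} = 1
        for u in order:
            if u == src or u not in nodes:
                continue
            cred[u] = sum(cred.get(v, 0.0) * g for (v, w), g in gamma.items()
                          if w == u and v in nodes)
        G[src] = cred
    return G

# Small DAG: a -> b -> c and a -> c, with direct credits 0.5 each.
gamma = {("a", "b"): 0.5, ("b", "c"): 0.5, ("a", "c"): 0.5}
order = ["a", "b", "c"]
V = {"a", "b", "c"}
full = pair_credits(gamma, order, V)
sub = pair_credits(gamma, order, V - {"b"})
# Lemma 1: removing b subtracts exactly the credit routed through b.
assert abs(sub["a"]["c"] - (full["a"]["c"] - full["a"]["b"] * full["b"]["c"])) < 1e-12
```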
\begin{breakablealgorithm}
\caption{LOG SCANNER}
\label{UC}
\begin{algorithmic}[1]
\State \textbf{Initialize }$ UC[a][v][u] := 0$ for all actions and users.
\NoDoFor{each action $a$ in $\mathcal{A}$}
\State $current\_table := \emptyset$.
\State $A_u(a):=0$ for all users.
\NoDoFor{each tuple $<u,a,t>$ in chronological order}
\State \textbf{if} $A_u(a) \neq 0$, \textbf{then} continue.
\State $parents(u) := \emptyset; A_u(a) := A_u(a) + 1$.
\NoDoWhile{$\exists v : (v,u) \in \mathcal{E}$ and $v \in current\_table$}
\State $parents(u) := parents(u) \cup \{v\}$.
\EndWhile
\NoDoFor{each $v \in parents(u)$}
\State Compute $\gamma_{v,u}(a)$ according to Eq. (\ref{eq: gamma}).
\State $UC[a][v][u] := UC[a][v][u] + \gamma_{v,u}(a)$.
\State $UC[a][w][u] := UC[a][w][u] + UC[a][w][v] \cdot \gamma_{v,u}(a),$ $\forall w \in \mathcal{V}$.
\EndFor
\State $current\_table := current\_table \cup \{u\}$.
\State $UC[a][v][v] := 1$, $\forall v \in current\_table$.
\EndFor
\EndFor
\State \textbf{Output} $UC$.
\end{algorithmic}
\end{breakablealgorithm}
With the output $UC$ of Algorithm \ref{UC}, we are ready to compute the marginal gain in the ``Problem Solver'' module. The structure of the ``Problem Solver'' module can be summarized as follows. In each iteration over the users, if the marginal gain of a candidate user satisfies a particular criterion\footnote{The details of the condition to select candidate users will be explained in Algorithms~\ref{str_k} and \ref{str_b}.}, this user is added to the seed set. The core of Problem Solver is Algorithm \ref{mg}, which relies on Theorem \ref{th:mg} to compute the marginal gain. In particular, Algorithm~\ref{mg} takes the candidate user $x$, the user credit $UC$, and the set credit $SC$ as inputs, and returns the marginal gain $mg$ of adding node $x$ to the seed set. Here, the structure of $SC$ is indexed by $[a][x]$; it stores the total credit $\{\Gamma_{\mathcal{S}, x}(a)\}$ given to the current seed set $\mathcal{S}$ by a user $x$ for an action $a$, and is updated whenever a new user is added to the seed set $\mathcal{S}$. Once user $x$ is added to the seed set, $\Gamma_{\mathcal{S},x}(a)$ and $\Gamma_{x,u}^{\mathcal{V}-\mathcal{S}}(a)$ are updated according to Algorithm \ref{update}.
\begin{breakablealgorithm}
\caption{COMPUTE\_MARGINAL\_GAIN($x,UC,SC$)}
\label{mg}
\begin{algorithmic}[1]
\State \textbf{Initialize} $mg := 0; mg_a := 0$ for all actions.
\NoDoFor{each action $a$ such that $\exists u : UC[a][x][u] > 0$}
\NoDoFor{each $u$ such that $UC[a][x][u] > 0$}
\State $mg_a := mg_a + UC[a][x][u] / |\mathcal{A}_u|$.
\EndFor
\State $mg := mg + mg_a \cdot (1- SC[a][x])$.
\EndFor
\State \textbf{return} $mg$.
\end{algorithmic}
\end{breakablealgorithm}
\begin{breakablealgorithm}
\caption{UPDATE($x, UC, SC$)}
\label{update}
\begin{algorithmic}[1]
\State $UC_{old} = UC$, $SC_{old} = SC$
\NoDoFor{each action $a$}
\NoDoFor{each $u$}
\State $UC[a][v][u] := UC_{old}[a][v][u] - UC_{old}[a][v][x] \cdot UC_{old}[a][x][u], \forall v \in \mathcal{V}$.
\State $SC[a][u] := SC_{old}[a][u] + UC_{old}[a][x][u] \cdot (1- SC_{old}[a][x])$.
\EndFor
\EndFor
\State \textbf{return} $UC, SC$.
\end{algorithmic}
\end{breakablealgorithm}
\subsubsection{Influence Maximization Problem Solver}
With the algorithms to efficiently compute the marginal gain and update the total credits, we now arrive at the design of the streaming algorithms to solve problem~(\ref{overallproblem}).
We start with a special case of the knapsack constraint, which is a cardinality constraint (by applying the same weight for every user). Given $k$ as the cardinality limit for $\mathcal{S}$, this simplified problem is cast as
\begin{eqnarray}\label{cardinalityproblem}
\begin{aligned}
&\underset{\mathcal{S} \subseteq \mathcal{V}}{\textrm{maximize}} &&\sigma_{mCD}(\mathcal{S}) \\
&\textrm{subject to} && |\mathcal{S}| \leq k.
\end{aligned}
\end{eqnarray}
In~\cite{18}, a streaming algorithm was proposed to solve a submodular maximization problem under a cardinality constraint, whose main idea is to use a pre-defined threshold to decide whether a user is good enough to be selected. However, setting the threshold requires a priori knowledge of the optimal value of the problem. In most scenarios, this leads to a chicken-and-egg dilemma.
To address this issue, we adapt the threshold along the process instead of using a fixed threshold based on a priori knowledge of the optimal value. First, we assume that the maximum influence ability achievable by any single user, $m = \max_{x \in \mathcal{V}}\sigma_{mCD}(\{ x\})$, is known (we will remove this assumption later in this section); we construct an optimum value candidate set $\mathcal{O}:=\{(1+\epsilon)^i | i \in \mathbb{Z}, m \leq (1+\epsilon)^i \leq k\cdot m\}$. Since the objective function is submodular and the cardinality constraint is $k$, it is easy to see that the optimal value lies in $[m, km]$. Moreover, the candidate set $\mathcal{O}$ has the property that it contains a value close to the true optimal value, as shown by the following lemma.
\begin{lemma} \label{SetO}
Let $\mathcal{O}:=\{(1+\epsilon)^i | i \in \mathbb{Z}, m \leq (1+\epsilon)^i \leq k\cdot m\}$
for some $\epsilon$ with $0 < \epsilon < 1$. Then there exists a value $c \in \mathcal{O}$ such that $(1-\epsilon)\text{OPT} \leq c \leq \text{OPT}$, with \text{OPT} denoting the optimal value for problem~(\ref{cardinalityproblem}).
\end{lemma}
\begin{proof}
First, we choose $x \in \mathcal{V}$ such that $\sigma_{mCD}(\{x\})=m$. We then have $\textrm{OPT}\ge \sigma_{mCD}(\{x\})=m$. In addition, let $\{x_1,x_2,\ldots,x_k\}$ be a subset of $\mathcal{V}$ such that $\sigma_{mCD}(\{x_1,x_2,\ldots,x_k\})=\textrm{OPT}$. By the submodularity of $\sigma_{mCD}$, we have
\begin{align*}
\begin{split}
\textrm{OPT}&=\sigma_{mCD}(\emptyset)+\sum_{i=1}^k [\sigma_{mCD}(\{x_1,x_2,\ldots,x_i\})- \\
&\sigma_{mCD}(\{x_1,x_2,\ldots,x_{i-1}\})]\\
&\le \sigma_{mCD}(\emptyset)+\sum_{i=1}^k [\sigma_{mCD}(\{x_i\})-\sigma_{mCD}(\emptyset)]\\
&\le \sum_{i=1}^k \sigma_{mCD}(\{x_i\})\le km.
\end{split}
\end{align*}
By setting $c = (1+\epsilon)^{\left\lfloor\log_{1+\epsilon}\textrm{OPT}\right\rfloor}$, we then obtain $$\frac{m}{1+\epsilon}\le \frac{\textrm{OPT}}{1+\epsilon}\le c\le \textrm{OPT}\le km,$$ and $$c\ge \frac{\textrm{OPT}}{1+\epsilon}\ge (1-\epsilon)\textrm{OPT}.$$
\end{proof}
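The construction of the candidate set $\mathcal{O}$ and the guarantee of Lemma~\ref{SetO} can be checked numerically; the values of $m$, $k$, $\epsilon$, and OPT below are arbitrary.

```python
import math

def candidate_set(m, k, eps):
    """The candidate set O = {(1+eps)^i : m <= (1+eps)^i <= k*m}."""
    lo = math.ceil(math.log(m, 1 + eps))
    hi = math.floor(math.log(k * m, 1 + eps))
    return [(1 + eps) ** i for i in range(lo, hi + 1)]

m, k, eps = 2.0, 10, 0.2
O = candidate_set(m, k, eps)
assert all(m <= c <= k * m + 1e-9 for c in O)
# Lemma: some candidate lands within a (1 - eps) factor of any OPT in [m, k*m].
OPT = 7.3
assert any((1 - eps) * OPT <= c <= OPT for c in O)
```

Since $|\mathcal{O}| = O(\log_{1+\epsilon}k)$, tracking one thread per candidate adds only a logarithmic factor to the work per user.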
Therefore, by constructing $\mathcal{O}$, we are able to obtain a good estimate $c$ of OPT. However, without knowledge of $m$, we would need one more scan over the user set to obtain it. In Algorithm~\ref{str_k}, we design a single-pass structure in which $m$ is updated during the iterations over user selection. Specifically, we modify $\mathcal{O}$ as $\mathcal{O}=\{(1+\epsilon)^i | i \in \mathbb{Z}, m \leq (1+\epsilon)^i \leq 2k\cdot m\}$, and maintain the variable $m$ that holds the current maximum marginal value over all single elements as the algorithm scans the ground set. Whenever $m$ gets updated, the algorithm updates the set $\mathcal{O}$ accordingly. For each user in the ground set, we scan each element $c$ in the set $\mathcal{O}$, and add that user into $\mathcal{S}_c$ as long as the marginal gain is larger than $\frac{c}{2k}$ and $|\mathcal{S}_c|\leq k$. The computation of the marginal gain is conducted by the function COMPUTE\_MARGINAL\_GAIN. Once a user $x$ is added to $\mathcal{S}_c$, we update the user credit $UC_c$ and the set credit $SC_c$ with the function UPDATE. The performance of the described streaming algorithm (Algorithm~\ref{str_k}) is guaranteed by Theorem~\ref{th:k}.
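The thresholding idea behind one thread of the streaming algorithm can be conveyed by a deliberately simplified sketch: a single fixed guess $c$ of OPT, a generic monotone submodular coverage function standing in for $\sigma_{mCD}$, and full re-evaluation of the objective instead of the incremental credit updates.

```python
def stream_select(users, f, k, c):
    """One thread of the streaming idea for a guess c of OPT: add a user
    whenever the marginal gain is at least c / (2k) and |S| < k."""
    S = []
    for x in users:
        if len(S) < k and f(S + [x]) - f(S) >= c / (2 * k):
            S.append(x)
    return S

# Coverage utility (monotone submodular): f(S) = size of the union of covered items.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}, "d": {4}}
f = lambda S: len(set().union(*(cover[x] for x in S))) if S else 0
S = stream_select(["a", "b", "c", "d"], f, k=2, c=3.0)  # here OPT = 3, threshold = 0.75
assert S == ["a", "b"]
assert f(S) >= 0.5 * 3  # at least OPT/2 when c equals OPT
```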
\begin{theorem} \label{th:k}
Algorithm~\ref{str_k} produces a solution $\mathcal{S}$ such that $\sigma_{mCD}(\mathcal{S}) \geq \left(\frac{1}{2}-\epsilon\right)\textrm{OPT}.$
\end{theorem}
\begin{proof}
Given $c' \in \mathcal{O}$ falling into $[(1-\epsilon) \textrm{OPT}, \textrm{OPT}]$, let us discuss the following two cases for the thread corresponding to $c'$.
\underline{Case 1:} $|\mathcal{S}_{c'}| = k$. For $1\le i\le k$, let $u_i$ be the element added to $\mathcal{S}_{c'}$ in the $i$-th iteration of the for-loop. Then we obtain
\begin{align*}
\sigma &_{mCD}(\mathcal{S}_{c'})=\sigma_{mCD}(\{u_1,u_2,\ldots,u_k\}) \\&\geq \sigma_{mCD}(\{u_1,u_2,\ldots,u_k\})-\sigma_{mCD}(\emptyset)\\
&=\sum_{i=1}^k \big[\sigma_{mCD}(\{u_1,\ldots,u_i\})-\sigma_{mCD}(\{u_1,\ldots,u_{i-1}\}) \big].
\end{align*}
By the condition in Line 8 of Algorithm~\ref{str_k}, for $1\le i\le k$, we have
$$\sigma_{mCD}(\{u_1,u_2,\ldots,u_i\})-\sigma_{mCD}(\{u_1,u_2,\ldots,u_{i-1}\})\geq\frac{c'}{2k},$$
and hence
$$\sigma_{mCD}(\mathcal{S}_{c'})\geq \frac{c'}{2k} \cdot k \geq \frac{(1-\epsilon)}{2}\textrm{OPT}.$$
\underline{Case 2:} $|\mathcal{S}_{c'}| < k$. Let $\bar{\mathcal{S}}_{c'}= \mathcal{S}^* \backslash \mathcal{S}_{c'}$, where $\mathcal{S}^*$ is the optimal solution to the optimization problem. For each element $a \in \bar{\mathcal{S}}_{c'}$, we have
\begin{align*}
\sigma_{mCD}(\mathcal{S}_{c'} \cup \{a\}) - \sigma_{mCD}(\mathcal{S}_{c'}) <\frac{c'}{2k}.
\end{align*}
Since $\sigma_{mCD}$ is monotone submodular, we obtain
\begin{align*}
\sigma_{mCD}(\mathcal{S}^*)&-\sigma_{mCD}(\mathcal{S}_{c'}) \leq \sigma_{mCD}(\mathcal{S}_{c'} \cup \bar{\mathcal{S}}_{c'}) -\sigma_{mCD}(\mathcal{S}_{c'}) \\
&\leq \sum_{a\in \bar{\mathcal{S}}_{c'}}[\sigma_{mCD}(\mathcal{S}_{c'} \cup \{a\}) - \sigma_{mCD}(\mathcal{S}_{c'})] \\
&< \frac{c'}{2k} \cdot k \leq \frac{1}{2}\sigma_{mCD}(\mathcal{S}^*),
\end{align*}
which implies that
$$\sigma_{mCD}(\mathcal{S}_{c'}) >\frac{1}{2}\sigma_{mCD}(\mathcal{S}^*)=\frac{1}{2}\textrm{OPT}\geq \frac{(1-\epsilon) }{2}\textrm{OPT}.$$
Since $\mathcal{S} = \text{argmax}_{\mathcal{S}_c, c \in \mathcal{O}}\sigma_{mCD}(\mathcal{S}_c)$, we have $\sigma_{mCD}(\mathcal{S})\geq \sigma_{mCD}(\mathcal{S}_{c})$ for any $c \in \mathcal{O}$. As we have shown that $\sigma_{mCD}(\mathcal{S}_{c'})\geq \frac{(1-\epsilon) }{2}\textrm{OPT}$ in both cases, we obtain $$\sigma_{mCD}(\mathcal{S})\geq \sigma_{mCD}(\mathcal{S}_{c'})\geq \frac{(1-\epsilon)}{2}\textrm{OPT} \geq \left( \frac{1}{2} - \epsilon \right)\text{OPT}.$$
\end{proof}
\begin{breakablealgorithm}
\caption{STREAMING\_ALGORITHM($k,UC$)}
\label{str_k}
\begin{algorithmic}[1]
\State \textbf{Initialize:} $SC[a][u] := 0$ for all actions and users; $m:=0$.
\State $\mathcal{O}:=\{(1+\epsilon)^i | i \in \mathbb{Z}\}$.
\State $\mathcal{S}_c := \emptyset, UC_c := UC$ and $SC_c := SC$ for all $c \in \mathcal{O}$.
\NoDoFor{each $x \in \mathcal{V}$}
\State $m:=\max\{m, \sigma_{mCD}(\{x\})\}$
\State $\mathcal{O}:=\{(1+\epsilon)^i | i \in \mathbb{Z}, m \leq (1+\epsilon)^i \leq 2k\cdot m\}$.
\NoDoFor{$c \in \mathcal{O}$}
\NoThenIf{COMPUTE\_MARGINAL\_GAIN($x,UC_c,SC_c$) $\geq \frac{c}{2k}$ and $|\mathcal{S}_c| < k$}
\State $\mathcal{S}_c := \mathcal{S}_c \cup \{x\}$.
\State $\langle UC_c, SC_c\rangle :=$ UPDATE($x$, $UC_c$, $SC_c$)
\EndIf
\EndFor
\EndFor
\State \textbf{return} $\mathcal{S} := \text{argmax}_{\mathcal{S}_c, c \in \mathcal{O}
}\sigma_{mCD}(\mathcal{S}_c)$.
\end{algorithmic}
\end{breakablealgorithm}
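To make the single-pass structure concrete, the following Python sketch implements the thresholding logic of Algorithm~\ref{str_k} with a generic monotone submodular set function `cover` standing in for $\sigma_{mCD}$ (the credit bookkeeping via $UC_c$, $SC_c$ is abstracted away; function and variable names are ours):

```python
import math

def stream_cardinality(universe, cover, k, eps):
    """Single-pass sketch of the thresholding in the streaming algorithm.

    `cover(S)` stands in for sigma_mCD: any monotone submodular set
    function with cover([]) == 0. One candidate set S_c is kept per
    geometric threshold c; x joins S_c when its marginal gain is at
    least c/(2k) and S_c still has room."""
    m = 0.0
    sets = {}  # threshold c -> current candidate set S_c
    for x in universe:
        m = max(m, cover([x]))
        if m == 0:
            continue  # no informative element seen yet
        lo = math.ceil(math.log(m, 1 + eps))
        hi = math.floor(math.log(2 * k * m, 1 + eps))
        thresholds = [(1 + eps) ** i for i in range(lo, hi + 1)]
        # keep sets whose threshold survives, create empty ones otherwise
        sets = {c: sets.get(c, []) for c in thresholds}
        for c, S in sets.items():
            if len(S) < k and cover(S + [x]) - cover(S) >= c / (2 * k):
                S.append(x)
    return max(sets.values(), key=cover)
```

On a toy coverage instance, the returned set obeys the $(\frac{1}{2}-\epsilon)$ guarantee of Theorem~\ref{th:k}.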
\subsubsection{Budgeted Influence Maximization Problem Solver}
Now, we consider the more general budgeted influence maximization problem (problem~(\ref{overallproblem})). We first modify the threshold in Line 8 of Algorithm~\ref{str_k} to $\frac{2qg_x}{3b}$, where $q\in\mathcal{Q}:=\{(1+3\epsilon)^i | i \in \mathbb{Z}, \frac{m}{1+3\epsilon} \leq (1+3\epsilon)^i \leq 2b \cdot m\}$, $g_x$ is the weight of user $x$, and $b$ is the total budget. Moreover, the modified algorithm keeps searching for a particular user who has a dominating influence. The property of such a user is described by Theorem~\ref{lemmabigelement}. At the end of the modified algorithm, we may have two types of sets: one is collected by the modified threshold, and the other exists if a user described in Theorem~\ref{lemmabigelement} is found. The set with the higher objective value is the final algorithm output. The pseudo-code is presented in Algorithm~\ref{str_b} below. This algorithm solves problem~(\ref{overallproblem}) with a ($\frac{1}{3}-\epsilon$)-approximation to the optimal solution according to Theorem 1 in~\cite{19}.
\begin{theorem}
\label{lemmabigelement}
Assume $q\in [(1-3\epsilon)\text{OPT}, \text{OPT}]$, $x$ satisfies $g_x \geq \frac{b}{2}$, and its marginal gain per unit weight is larger than $\frac{2q}{3b}$. Then, we have $\sigma_{mCD}(\{x\}) \geq \left(\frac{1}{3}-\epsilon\right)\text{OPT}$.
\end{theorem}
\begin{breakablealgorithm}
\caption{BUDGETED\_STREAMING\_ALGORITHM($b,UC$)}
\label{str_b}
\begin{algorithmic}[1]
\State \textbf{Initialize:} $SC[a][u] := 0$ for all actions and users; $m := 0$.
\State $\mathcal{Q}:=\{(1+3\epsilon)^i | i \in \mathbb{Z}\}$
\State $\mathcal{S}_q := \emptyset, UC_q := UC$ and $SC_q := SC$ for all $q \in \mathcal{Q}$.
\NoDoFor{each $x \in \mathcal{V}$}
\State{
$m:= \max\{m, $MG[$x$]:= \\
\qquad \quad COMPUTE\_MARGINAL\_GAIN($x,UC,SC$)$/g_x\}$
}
\State $\mathcal{Q}:=\{(1+3\epsilon)^i | i \in \mathbb{Z}, \frac{m}{1+3\epsilon} \leq (1+3\epsilon)^i \leq 2b \cdot m\}$.
\NoDoFor{$q \in \mathcal{Q}$}
\NoThenIf{$g_x \geq \frac{b}{2}$ and $\frac{MG[x]}{g_x} \geq \frac{2q}{3b}$}
\State $\mathcal{S}_q := \{x\}$.
\State \textbf{break}.
\EndIf
\NoThenIf{COMPUTE\_MARGINAL\_GAIN($x,UC_q,SC_q$) $\geq \frac{2qg_x}{3b}$ and $g^TI_{\mathcal{S}_q \cup \{x\}}\leq b$}
\State $\mathcal{S}_q := \mathcal{S}_q \cup \{x\}$.
\State $\langle UC_q, SC_q\rangle :=$ UPDATE($x$, $UC_q$, $SC_q$)
\EndIf
\EndFor
\EndFor
\State \textbf{return} $\mathcal{S} := \text{argmax}_{q \in \mathcal{Q}}\sigma_{mCD}(\mathcal{S}_q)$.
\end{algorithmic}
\end{breakablealgorithm}
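The budgeted variant can be sketched analogously. The heavy-element branch and the per-weight threshold $\frac{2qg_x}{3b}$ follow Algorithm~\ref{str_b}, again with a generic `cover` in place of $\sigma_{mCD}$ and with our own names:

```python
import math

def stream_budgeted(universe, cover, weight, b, eps):
    """Single-pass sketch of the budgeted streaming algorithm.

    `cover` stands in for sigma_mCD and `weight(x)` for g_x. An element
    joins S_q when its marginal gain is at least 2*q*weight(x)/(3*b) and
    the budget b is respected; a heavy element (weight >= b/2) with
    gain-per-weight at least 2*q/(3*b) replaces S_q outright."""
    m = 0.0
    sets = {}  # threshold q -> current candidate set S_q
    for x in universe:
        mg = cover([x]) / weight(x)  # singleton gain per unit weight
        m = max(m, mg)
        if m == 0:
            continue
        base = 1 + 3 * eps
        lo = math.ceil(math.log(m / base, base))
        hi = math.floor(math.log(2 * b * m, base))
        qs = [base ** i for i in range(lo, hi + 1)]
        sets = {q: sets.get(q, []) for q in qs}
        for q, S in sets.items():
            if weight(x) >= b / 2 and mg >= 2 * q / (3 * b):
                sets[q] = [x]  # dominating heavy element
                continue
            gain = cover(S + [x]) - cover(S)
            total = sum(weight(y) for y in S) + weight(x)
            if gain >= 2 * q * weight(x) / (3 * b) and total <= b:
                S.append(x)
    return max(sets.values(), key=cover)
```

The returned set is always budget-feasible, and on a toy instance its utility is consistent with the $(\frac{1}{3}-\epsilon)$ guarantee.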
\section{Experimental Results}
We conduct our experiments on a reduced Twitter dataset~\cite{15}, containing about 17,000 users and 100 actions, to evaluate the mCD model and the corresponding streaming algorithms. Specifically, we are interested in the following performance metrics: 1) the influence ability of the seed set provided by our proposed streaming algorithm; 2) the gap between the output influence ability and the number of people that truly get influenced; and 3) the running time of the algorithm. All experiments are conducted on a server with a 3.50GHz Quad-Core Intel Xeon E3-1245 CPU and 32GB of memory.
\subsection{Experiment Setup}
The Twitter dataset records three different user activities, namely ``retweet", ``quote" and ``reply". In our experiments, an action $a_i$ is claimed if any user reacts to (i.e., retweets, quotes, or replies to) a post of the specific user $u_i$. For example, suppose there are two users, $u_1$ and $u_2$. Then, the action space could be $\mathcal{A} = \{a_1, a_2\}$. When $u_1$ performs action $a_2$, it means that user $u_1$ either ``retweets", ``quotes" or ``replies" to the Twitter post of user $u_2$. Note that the cardinality of the action space may not match the cardinality of the user space. In particular, if the records only indicate that $u_1$ ``retweets", ``quotes" or ``replies" to the Twitter post of user $u_2$, the action space is $\mathcal{A} = \{a_2\}$, where $|\mathcal{A}|=1$. In this way, the considered dataset contains 17,000 users and about 100 different actions. According to the discussion in Section~3, the event log is divided into two parts, where the training set contains 80 different actions and the test set contains 20 different actions.
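The mapping from raw reaction records to the action space described above can be sketched as follows (the record layout and function name are ours):

```python
def build_action_space(events):
    """Derive the action space from raw reaction records.

    Each record is (reactor, kind, author): `reactor` retweets, quotes,
    or replies to a post of `author`, which claims the action a_author.
    Only authors whose posts received at least one reaction yield an
    action, so |A| need not match the number of users."""
    performs = {}  # action (identified by its author) -> performing users
    for reactor, kind, author in events:
        if kind in ("retweet", "quote", "reply"):
            performs.setdefault(author, set()).add(reactor)
    return performs
```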
\subsection{Experimental Results}
In this subsection, we are going to show:
\begin{itemize}
\item The seed set identified by the mCD model has better quality in the sense of influence ability;
\item Under the same mCD model, the streaming algorithms can achieve utilities close to those of the Cost-Effective Lazy Forward selection (CELF) algorithm proposed in~\cite{10}, an accelerated greedy algorithm;
\item Under the same mCD model, the streaming algorithms are much faster than the CELF algorithm;
\item As an estimator of the total number of users that get influenced, the mCD model is more accurate than the conventional CD model.
\end{itemize}
Note that it has been about ten years since CELF was proposed, and more recent influence maximization algorithms have been proposed in, e.g., \cite{martingales} and \cite{mssa}. However, these recent algorithms were designed for different influence models and for differently formulated problems compared with ours. Thus, no performance comparison against these methods is given in this paper, as it may not be a fair comparison under our setup. To be more specific, one of the main contributions of our paper is the study of the budgeted influence maximization problem; the experiments therefore mainly focus on how to select a group of users to maximize the influence while considering the selection cost of users. Neither of the above two papers studied such a budgeted influence maximization problem. In addition, the algorithms proposed in \cite{martingales} and \cite{mssa} apply to the influence maximization problem under diffusion models, while our streaming algorithm is built upon the credit distribution model, which is a quite different problem setup.
For notational simplicity, the outputs of the CELF algorithm and the streaming algorithm under the mCD model are denoted by ``mCD\_greedy" and ``mCD\_streaming", respectively.
\subsubsection{Quality of Seed Sets}
First, we focus on the evaluation of the following three models:
\begin{itemize}
\item \textbf{IC model}: a conventional non-credit distribution model with edge probabilities uniformly assigned as $0.1$ (in all experiments, we run $10,000$ MC simulations)~\cite{5};
\item \textbf{CD model}: with direct credit assigned as described in~\cite{12}; the CELF algorithm is used to produce solutions;
\item \textbf{mCD model}: the multi-action credit distribution model proposed in this paper; the CELF algorithm is used to produce solutions.
\end{itemize}
After the seed sets are produced by these three approaches, we compare the influence ability of the different results on the mCD model to verify the quality of the seed sets. It can be observed in Fig.~\ref{fig:quality} that the influence ability of the seed sets picked by the mCD model is larger than that of the seed sets obtained with the IC model and the conventional CD model. For instance, when $k=50$, the influence ability of the seed set picked by the CELF algorithm on the mCD model is 1350.08, while the influence ability for the other two models (CD, IC) is 1280.24 and 220.03, respectively. Based on the curves in Fig.~\ref{fig:quality}, we conclude that our proposed model has an improved capability in identifying seed sets and describing the influence propagation in online social networks.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{quality.pdf}
\caption{Influence Ability Comparison over Different Models.}
\label{fig:quality}
\end{figure}
\subsubsection{Influence Ability of Seed Set}
Next, we compare the influence ability of the seed sets obtained by our proposed streaming algorithms and the CELF algorithm under the same mCD model. For both the influence maximization problem and the budgeted influence maximization problem, Figs.~\ref{fig: quality} and~\ref{fig:b_quality} show that the seed sets provided by our proposed streaming algorithms achieve utilities close to those of the CELF algorithm. For instance, in Fig.~\ref{fig: quality}, when $k = 50$, a seed set with influence ability 1333.56 is given by the streaming algorithm (Algorithm~\ref{str_k}), which is only 1\% less than the influence ability given by the CELF algorithm. Moreover, in Fig.~\ref{fig:b_quality}, taking $b=500$ as an example, the influence ability of the seed set provided by the streaming algorithm (Algorithm~\ref{str_b}) is 0.1\% less than that of the CELF algorithm. Therefore, we conclude that our proposed streaming algorithms identify seed sets whose influence ability is close to that of the CELF algorithm.
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figure_1_c.pdf}
\caption{Influence Ability Comparison under mCD Model with the Cardinality Constraint.}
\label{fig: quality}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{figure_1.pdf}
\caption{Influence Ability Comparison under mCD Model with the Budget Constraint.}
\label{fig:b_quality}
\end{figure}
\subsubsection{Algorithm Running Time}
It has been shown in Figs.~\ref{fig: quality} and~\ref{fig:b_quality} that our proposed streaming algorithms provide performance close to that of the CELF algorithm. The CELF algorithm maintains a list of the current marginal gains of all elements in the ground set, keeps the list updated, and repeatedly re-sorts it. Unlike the CELF algorithm, the proposed streaming algorithm requires only one scan over the user set. The resulting lower computational complexity makes the streaming algorithm more practical when the number of elements in the ground set is large. To further examine this argument, we compare the running time (in seconds) of CELF and the streaming algorithm under the same mCD model in Figs.~\ref{fig:r} and \ref{fig:rb}. It can be seen that for both the influence maximization and the budgeted influence maximization problem, our proposed streaming algorithm is several orders of magnitude faster. In particular, for the budgeted influence maximization problem with the budget set to $500$, CELF takes more than $3,800$ seconds to complete the whole process, while the streaming algorithm takes only $5.3$ seconds. Meanwhile, the streaming algorithm achieves almost the same performance as CELF, which implies that our proposed streaming algorithm is both efficient and effective.
\begin{figure}
\centering
\includegraphics[width = 1\linewidth]{figure_2_c.pdf}
\caption{Running Time Comparison with the Cardinality Constraint.}
\label{fig:r}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figure_2.pdf}
\caption{Running Time Comparison with the Budget Constraint.}
\label{fig:rb}
\end{figure}
\subsubsection{Estimation on the Number of Influenced People}
Our goal in this experiment is to investigate how the mCD model performs in estimating the number of people that get influenced in the network. We pick 950 actions from the original dataset for this experiment. Since the streaming algorithm (Algorithm \ref{str_k}) achieves performance close to CELF at a much faster speed, we only run the streaming algorithm to explore the estimation accuracy. Note that when we set the seed set size equal to the number of initiators for a particular action, the mCD model always provides the actual number of influenced people\footnote{Given an action, the ground truth is always accessible by simply counting the number of people performing the given action in the dataset.}. Then, we fix the size of the seed set at 50 to explore the estimation accuracy. To better illustrate the results, we sort the actions in increasing order of popularity. It can be observed in Fig.~\ref{fig:test} that the estimated values obtained by both the CD and the mCD models are smaller than the actual number of users performing the corresponding action in the social network. However, the estimates produced by our proposed model are closer to the true values, which means that the estimation with our model is more accurate for a given seed set size.
\begin{figure}
\centering
\includegraphics[width = 0.8\linewidth]{fig5.pdf}
\caption{Estimated Influence for Actions in Test Set.}
\label{fig:test}
\end{figure}
\section{Conclusion}
Our work is novel in three respects: 1) we are the first to study the multi-action event log by extending the credit distribution model, which cannot be directly derived from Goyal's work in \cite{12}; 2) different from previous papers, we focus on the budgeted influence maximization problem under credit distribution models, instead of the influence maximization problem under propagation models that involve edge weights; and 3) we propose a streaming algorithm to solve the budgeted influence maximization problem, whose theoretical analysis differs from Badanidiyuru's results in \cite{18}.
More specifically, we extended the conventional CD model to the mCD model to deal with multi-action event logs and analyze the influence ability of users in online social networks. Based on this credit model, an efficient streaming algorithm was developed to provide a solution with a ($\frac{1}{2}-\epsilon$)-approximation of the optimal value for the influence maximization problem with a cardinality constraint, and a ($\frac{1}{3}-\epsilon$)-approximation for the budgeted influence maximization problem. In particular, we re-designed the credit assignment method in the CD model by utilizing a modified harmonic mean to handle multi-action event logs. This new credit assignment method not only makes full use of the multi-action event log but also achieves higher accuracy in estimating the total number of people that get influenced, without edge weight assignment or expensive Monte-Carlo simulations. Experiments showed that the mCD model is more accurate than the conventional CD model and able to identify seed sets of higher quality than both the IC and CD models. Even under the same mCD model, the proposed streaming algorithms achieve performance similar to the CELF greedy algorithm, but several orders of magnitude faster.
A sparse triangular solver is an important computational kernel for an iterative linear solver in various numerical simulations.
It is the main component of the Gauss--Seidel (GS) smoother, SOR method and IC/ILU preconditioning, which are used as building blocks in various computational science or engineering analyses~\cite{Iterativereview, saad, meurant}.
Therefore, the development of a fast multithreaded sparse triangular solver is essential to accelerate these analyses when conducted on not only a single computing node but also a large-scale cluster system of nodes.
For example, the performance of the solver significantly influences the total simulation time of large-scale partial differential equation analysis using a multigrid solver with the GS, IC, or ILU smoother~\cite{Wallin, Buckeridge}.
However, it is well known that the sparse triangular solver, which consists of forward and backward substitutions, cannot be straightforwardly parallelized~\cite{Dongarra, Dongarra2}.
Thus, in this paper, we discuss an effective approach to developing a high-performance multithreaded sparse triangular solver.
There are various methods for parallelizing a sparse triangular solver or its related techniques, and we focus on the parallel ordering (reordering) method, which is one of the most common methods for parallelization of a sparse triangular solver.
There are several well-known orderings, such as dissection and domain decomposition orderings, but multi-color ordering is the most commonly used technique.
It has been used in various applications to parallelize, for example, the ICCG method.
However, it is well known that the multi-color ordering entails a trade-off problem between convergence and the number of synchronizations~\cite{doi4}.
An increase in the number of colors typically results in better convergence, but it also leads to an increase in the number of synchronizations, which is proportional to the number of colors.
The trade-off problem between convergence and parallelism is a common issue for parallel ordering techniques~\cite{Duff}.
One of the solutions for the above trade-off problem is {\it block} multi-coloring.
In this method, multi-color ordering is applied to blocks of unknowns.
The technique has been investigated in several contexts.
The concept of block coloring or block independent sets can be seen in \cite{saad}.
In an early work~\cite{SOR-bc}, it is discussed for the parallel SOR method.
For parallelization of the IC/ILU preconditioned iterative solver, it was first investigated in a finite difference method, that is, structured grid analysis~\cite{brb1, siam-iwa}. In this research, block coloring proved to be effective for improving convergence without increasing thread synchronization.
Following on from these research activities, the algebraic block multi-color ordering method was introduced for a general sparse linear system in \cite{IPDPS2012}.
Although there are various options for coloring or blocking methods~\cite{multi,amc}, this technique has been used in various applications because of its advantages in terms of convergence, data locality, and the number of synchronizations~\cite{semba,tsuburaya}. Particularly, several high-performance implementations of the HPCG benchmark adopt the technique, which shows the effectiveness of the method in a fast multigrid solver with the parallel GS smoother~\cite{intel,HPCG-kumahata,PEZY,HPCG-ICPP,arm}.
However, the block multi-coloring method has a drawback in its implementation using SIMD vectorization.
The calculations in the innermost loop for the parallel substitutions are performed sequentially, which prevents the efficient use of SIMD instructions.
Because the sparse triangular solver is a memory-intensive kernel, its performance on previous processors was not substantially affected by the use of SIMD instructions.
However, to increase floating-point performance, recent processors enhance their SIMD instructions, and the SIMD width (vector length) is becoming larger.
For example, Intel Xeon (Skylake)~\cite{Skylake}, Intel Xeon Phi~\cite{KNL}, and Fujitsu A64FX (ARM SVE) ~\cite{post-K} processors are equipped with 512-bit SIMD instructions. We note that ARM SVE supports at most a 2,048 vector length~\cite{armSVE}.
Considering this trend of processors, we aim to develop a parallel sparse triangular solver in which both multithreading and SIMD vectorization are efficiently used.
In this paper, we propose a new parallel ordering technique in which SIMD vectorization can be used and the advantages of block multi-color ordering, that is, fast convergence and fewer synchronizations, are preserved.
The technique is called ``hierarchical block multi-color ordering'' and it has a mathematically equivalent solution process (convergence) to block multi-color ordering. Moreover, the number of synchronizations in the multithreaded substitutions is the same as that of block multi-color ordering. We conduct five numerical tests using finite element electromagnetic field analysis code and matrix data obtained from a matrix collection, and confirm the effectiveness of the proposed method in the context of the parallel ICCG solver.
\section{Sparse Triangular Solver}
In this paper, we consider the following $n$-dimensional linear system of equations:
\begin{equation}
\mbox{\boldmath $A$} \mbox{\boldmath $x$} = \mbox{\boldmath $b$}.
\eeq{org}
We discuss the case in which the linear system (\ref{org}) is solved using an iterative linear solver involving IC(0)/ILU(0) preconditioning, the Gauss-Seidel (GS) smoother, or the SOR method.
When we discuss a parallel ICCG (precisely IC(0)-CG) solver for (\ref{org}), we assume that the coefficient matrix $\mbox{\boldmath $A$}$ is symmetric and positive definite or positive semi-definite.
For the parallelization of the iterative solver that we consider, the most problematic part is in the sparse triangular solver kernel.
For example, in an IC/ILU preconditioned Krylov subspace iterative solver, the other computational kernels consist of an inner product, matrix-vector multiplication, and vector updates, which can be parallelized straightforwardly.
The sparse triangular solver kernel is given by following forward and backward substitutions:
\begin{equation}
\mbox{\boldmath $y$} = \mbox{\boldmath $L$}^{-1} \mbox{\boldmath $r$},
\eeq{for}
and
\begin{equation}
\mbox{\boldmath $z$} = \mbox{\boldmath $U$}^{-1} \mbox{\boldmath $y$},
\eeq{back}
where $\mbox{\boldmath $r$}$, $\mbox{\boldmath $y$}$, and $\mbox{\boldmath $z$}$ are $n$-dimensional vectors. Matrices $\mbox{\boldmath $L$}$ and $\mbox{\boldmath $U$}$ are, respectively, lower and upper triangular matrices with the same nonzero patterns as the lower and upper triangular parts of $\mbox{\boldmath $A$}$. In ILU (IC) preconditioning, the preconditioning step is given by (\ref{for}) and (\ref{back}), and triangular matrices $\mbox{\boldmath $L$}$ and $\mbox{\boldmath $U$}$ are derived from the following incomplete factorization:
\begin{equation}
\mbox{\boldmath $A$} \simeq \mbox{\boldmath $L$} \mbox{\boldmath $U$}.
\eeq{ILU}
The iteration steps in the GS and SOR methods (smoothers) can be expressed by similar substitutions.
The substitution is an inherently sequential process, and it cannot be parallelized (multithreaded) straightforwardly.
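To make the sequential dependency explicit, a forward substitution for (\ref{for}) in a simple row-wise sparse format (our own toy layout) reads:

```python
def forward_substitution(L_rows, r):
    """Solve L y = r for a sparse lower triangular L.

    L_rows[i] lists the nonzeros (j, value) of row i with j <= i,
    the pair with j == i being the diagonal. Computing y[i] needs
    the already-computed y[j] of every off-diagonal nonzero, which
    is the sequential dependency discussed above."""
    y = [0.0] * len(r)
    for i in range(len(r)):
        s = r[i]
        diag = 1.0
        for j, v in L_rows[i]:
            if j == i:
                diag = v
            else:
                s -= v * y[j]
        y[i] = s / diag
    return y
```

Because $y_i$ depends on every earlier $y_j$ with a nonzero $l_{ij}$, the $i$-loop cannot be multithreaded as-is.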
\section{Parallel Ordering Method}
A parallel ordering (reordering) method is one of the most popular parallelization methods for a sparse triangular solver, that is, forward and backward substitutions.
It transforms the coefficient matrix into an appropriate form for parallel processing by reordering the unknowns or their indices.
Let the reordered linear system of (\ref{org}) be denoted by
\begin{equation}
\bar{\mbox{\boldmath $A$}} \bar{\mbox{\boldmath $x$}} = \bar{\mbox{\boldmath $b$}}.
\eeq{reordered}
Then, the {\it reordering} is given by the transformation:
\begin{equation}
\bar{\mbox{\boldmath $x$}} = \mbox{\boldmath $P$}_{\pi} \mbox{\boldmath $x$},
\eeq{xbar}
where $\mbox{\boldmath $P$}_{\pi}$ is a permutation matrix.
When we consider index set $I=\{1, 2, \ldots, n\}$ that corresponds to the index of each unknown,
the reordering is the permutation of the elements of $I$.
In the present paper, the reordering function of the index is denoted by $\pi$; that is, the $i$-th unknown of the original system is moved to the $\pi(i)$-th unknown of the reordered system.
In the reordering technique, the coefficient matrix and right-hand side are given as follows:
\begin{equation}
\bar{\mbox{\boldmath $A$}}=\mbox{\boldmath $P$}_{\pi} \mbox{\boldmath $A$} \mbox{\boldmath $P$}_{\pi}^{\top}, \quad \bar{\mbox{\boldmath $b$}}=\mbox{\boldmath $P$}_{\pi} \mbox{\boldmath $b$}.
\eeq{AB}
\subsection{Equivalence of orderings}
We consider the case in which two linear systems, (\ref{org}) and (\ref{reordered}), are solved using an identical iterative method.
The approximate solution vector at the $j$-th iteration for (\ref{org}) and that for (\ref{reordered}) are denoted by $\mbox{\boldmath $x$}^{(j)}$, and $\bar{\mbox{\boldmath $x$}}^{(j)}$, respectively.
If it holds that
\begin{equation}
\bar{\mbox{\boldmath $x$}}^{(j)} = \mbox{\boldmath $P$}_{\pi} \mbox{\boldmath $x$}^{(j)}
\eeq{xj}
at every $j$-th step under the setting $\bar{\mbox{\boldmath $x$}}^{(0)} = \mbox{\boldmath $P$}_{\pi} \mbox{\boldmath $x$}^{(0)}$ for initial guesses, then we can say that these two solution processes are $equivalent$. For example, in the Jacobi method and most Krylov subspace methods, reordering does not affect convergence; that is, the solution process for any reordered system is (mathematically) equivalent to that for the original system.
However, in the case of the iterative solver that we consider in this paper, such as the IC/ILU preconditioned iterative solver, the solution processes are typically inequivalent.
This is because of the sequentiality involved in the triangular solver (substitutions).
However, there are special cases in which the reordered system has an equivalent solution process to the original system.
In these cases, we say that two (original and new) orderings are equivalent or $\pi$ is an equivalent reordering.
We define the equivalence of two orderings as follows:
In the GS and SOR methods, equivalence is given by (\ref{xj}) under the proper setting of the initial guess.
In IC(0)/ILU(0) preconditioning, equivalence is given as follows:
We denote the incomplete factorization matrices of $\bar{\mbox{\boldmath $A$}}$ by $\bar{\mbox{\boldmath $L$}}$ and $\bar{\mbox{\boldmath $U$}}$. Moreover, the preconditioning step of the reordered linear system is given by $\bar{\mbox{\boldmath $z$}} = (\bar{\mbox{\boldmath $L$}} \bar{\mbox{\boldmath $U$}})^{-1} \bar{\mbox{\boldmath $r$}}$.
If $\bar{\mbox{\boldmath $z$}}=\mbox{\boldmath $P$}_{\pi} \mbox{\boldmath $z$}$ is satisfied under $\bar{\mbox{\boldmath $r$}} = \mbox{\boldmath $P$}_{\pi} \mbox{\boldmath $r$}$, then we say that the orderings are equivalent.
For example, the ICCG (IC(0)-CG) method exhibits an equivalent solution process for both original and reordered linear systems when the orderings are equivalent.
The condition for an equivalent reordering is given as follows:
When the following ER condition is satisfied, $\pi$ is an equivalent reordering.
\begin{quote}
{\it ER (Equivalent Reordering) Condition} ---
\begin{eqnarray}
\forall i_{1}, i_{2} \in I \ {\rm such} \ {\rm that} \ a_{i_{1}, i_{2}} \neq 0 \ \vee \ a_{i_{2}, i_{1}} \neq 0, \nonumber \\
\mbox{sgn} (i_{1}-i_{2}) = \mbox{sgn} (\pi (i_{1})-\pi(i_{2})),
\label{equivalence}
\end{eqnarray}
\end{quote}
where $a_{i_{1}, i_{2}}$ denotes the $i_{1}$-th row $i_{2}$-th column element of $\mbox{\boldmath $A$}$.
For a further explanation, we introduce an ordering graph, which is the directed graph that corresponds to the coefficient matrix.
Each node of the graph corresponds to an unknown or its index. An edge between two nodes $i_{1}$ and $i_{2}$ exists only when the $i_{1}$-th row $i_{2}$-th column element or $i_{2}$-th row $i_{1}$-th column element is nonzero.
The direction of the edge (arrow) shows the order of two nodes.
Figure \ref{og} shows an example of the ordering graph.
Using the ordering graph, (\ref{equivalence}) can be rewritten as the statement that the new and original orderings have the same ordering graph.
In \cite{doi3}, the authors stated that the ordering graph provides a unique class of mutually equivalent orderings.
In the appendix, we provide a proof sketch of the relationship between (\ref{equivalence}) and the equivalence of orderings.
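The ER condition (\ref{equivalence}) is straightforward to check programmatically; the sketch below (our own data layout) tests whether a reordering preserves the ordering graph:

```python
def sign(v):
    return (v > 0) - (v < 0)

def is_equivalent_reordering(nonzeros, pi):
    """ER condition: for every pair (i1, i2) with a nonzero in A,
    sgn(i1 - i2) == sgn(pi[i1] - pi[i2]); i.e., the directed
    ordering graph is unchanged by the reordering pi."""
    return all(sign(i1 - i2) == sign(pi[i1] - pi[i2])
               for i1, i2 in nonzeros)
```

For a connected 1D chain under natural ordering, only order-preserving permutations pass the check, whereas unknowns sharing no nonzero can be permuted freely.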
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.6,clip, bb= 10 150 620 480]{Ordering-Graph-3.pdf}
\caption{Example of an ordering graph}
\label{og}
\end{figure}
\section{Hierarchical Block Multi-Color Ordering Method }
In this paper, we propose a new parallel ordering method for the vectorization and parallelization of a sparse triangular solver.
Additionally, the proposed ordering is intended to inherit the advantages of convergence, number of synchronizations, and data locality from block multi-color ordering (BMC).
The proposed parallel ordering is called hierarchical block multi-color ordering (HBMC), which is equivalent to BMC in terms of convergence.
In the technique, we first order the unknowns by using BMC, and then reorder them again.
We focus on the explanation of the secondary reordering because we use the conventional algorithm shown in \cite{IPDPS2012} for the application of BMC.
Therefore, the original linear system based on BMC is written as (\ref{org}) and secondary reordering is denoted by $\pi$. Thus, the final reordered linear system based on HBMC is given by (\ref{reordered}).
\subsection{Block multi-color ordering (BMC)}
In this subsection, we briefly introduce BMC and some notation required for the explanation of HBMC.
In BMC, all unknowns are divided into blocks of the same size, and the multi-color ordering is applied to the blocks.
Because blocks that have an identical color are mutually independent, the forward and backward substitutions are parallelized based on the blocks in each color. The number of (thread) synchronizations of parallelized (multithreaded) substitution is given by $n_{c}-1$, where $n_{c}$ is the number of colors.
Figure \ref{BMC-1} shows the coefficient matrix that results from BMC.
In the present paper, the block size and $k$-th block in color $c$ are denoted by $b_{s}$ and $b_{k}^{(c)}$, respectively.
In BMC, each unknown (or its index) is assigned to a certain block, as shown in Fig. \ref{block-bmc}, where $n(c)$ is the number of blocks in color $c$.
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.65,clip, bb= 105 165 550 480]{BMC-1.pdf}
\caption{Coefficient matrix based on BMC}
\label{BMC-1}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.65,clip, bb= 33 76 615 420]{BMC-block-HBMC-reordering-5.pdf}
\caption{Blocks of BMC and secondary reordering for HBMC}
\label{block-bmc}
\end{figure}
\subsection{Hierarchical block multi-color ordering (HBMC)}
In the proposed HBMC, a new (hierarchical) block structure is introduced. First, we define a level-1 block (or multithreaded block) as follows: in each color, a level-1 block consists of $w$ consecutive BMC blocks.
When the $k'$-th level-1 block in color $c$ is written as $\bar{b}_{k'}^{(c)}$,
\begin{equation}
\bar{b}_{k'}^{(c)} = \bigcup_{k=k_{s}+1}^{k_{s}+w} b_{k}^{(c)},
\eeq{simd-block}
where $k_{s}=(k'-1) \times w$.
We note that parameter $w$ is determined by the length of the SIMD vector instruction (SIMD width) of the targeted processor; it is typically 4 or 8, and may become larger in future processors.
In our technique, secondary reordering is performed on each level-1 block as shown in Fig. \ref{block-bmc}.
Without loss of generality, we describe the reordering process for a level-1 block, that is, the blocks from $b_{k_{s}+1}^{(c)}$ to
$b_{k_{s}+w}^{(c)}$ of BMC.
In the first step, we pick up the first (top) unknown of each block, $b_{k_{s}+1}^{(c)}$, $b_{k_{s}+2}^{(c)}, \ldots, b_{k_{s}+w}^{(c)}$, and order the picked unknowns. These $w$ unknowns are mutually independent because the blocks in each color are independent in BMC. In the next step, we pick up the second unknown of each block, which are mutually independent, and order them after the unknowns previously ordered. We repeat the process until no unknowns remain. In total, the pick-up process is performed $b_s$ times.
Figure \ref{level1} shows the secondary reordering process in the first level-1 block when $b_{s}=2$ and $w=4$, where each unknown is associated with the diagonal element of the coefficient matrix. In the figure, the colored elements represent nonzero elements.
After the reordering process is complete, we encounter the second-level block structure in the reordered coefficient matrix, which is given by the $w$ $\times$ $w$ (small) diagonal matrices. The level-2 block structure is used for SIMD vectorization of the substitutions.
Figure \ref{HBMC-1} shows the matrix form of the coefficient matrix based on HBMC.
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.7,clip, bb= 10 65 435 320]{Level1-block-5.pdf}
\caption{Secondary reordering in a level-1 block}
\label{level1}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.65,clip, bb= 105 165 550 515]{HBMC-1.pdf}
\caption{Coefficient matrix based on HBMC}
\label{HBMC-1}
\end{figure}
\subsubsection{Equivalence between BMC and HBMC}
We prove that HBMC is equivalent to BMC; that is, the convergence rates of the linear solvers based on the two orderings are the same.
Because the secondary reordering for HBMC is performed {\it locally} in each level-1 block,
the order between two unknowns that belong to two different level-1 blocks is preserved in the final order.
Consequently, it holds that
\begin{eqnarray}
\forall i_{1} \in \bar{b}_{k_{1}}^{(c_{1})}, i_{2} \in \bar{b}_{k_{2}}^{(c_{2})} \ {\rm such \ that} \ c_{1} \neq c_{2} \, \vee \, k_{1} \neq k_{2}, \nonumber \\
\mbox{sgn} (i_{1}-i_{2}) = \mbox{sgn} (\pi (i_{1})-\pi(i_{2})).
\label{HBMC1}
\end{eqnarray}
From (\ref{HBMC1}), if the {\it local} ordering subgraphs of BMC and HBMC that correspond to each level-1 block are identical, then the two orderings are equivalent.
Next, we examine the reordering process in a level-1 block.
In the secondary reordering process of HBMC, the order of unknowns that belong to different BMC blocks changes. However, the reordering process for these unknowns does not affect the ordering graph, that is, the convergence.
In BMC, the unknowns that belong to two different blocks in the same color have no data relationship with one another; that is, there are no edges between them in the ordering graph. Therefore, even if we change the order of unknowns
that belong to different BMC blocks, this does not affect the ordering graph.
Consequently, we now pay attention to the influence of reordering inside a BMC block. When we analyze the above picking process, we can confirm that the order of the unknowns that belong to the same BMC block is preserved in the final order.
In each block, we pick up the first unknown, and then the second, and continue this process. Therefore, the order does not change for these unknowns:
\begin{equation}
\forall i_{1}, i_{2} \in b_{k}^{(c)} \ \mbox{sgn} (i_{1}-i_{2}) = \mbox{sgn} (\pi (i_{1})-\pi(i_{2})).
\eeq{reorder1}
When we consider the mutual independence among the BMC blocks in each color, (\ref{HBMC1}) and (\ref{reorder1}), we can demonstrate that secondary reordering $\pi$ does not change the form of the ordering graph. This proves that HBMC is equivalent to BMC.
\begin{figure*}[tbp]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.52,clip, bb= 0 60 388 495]{2d-stencil-example-BMC-2.pdf} &
\includegraphics[scale=0.52,clip, bb= 0 60 388 495]{2d-stencil-example-HBMC-2.pdf} \\
(a) BMC ($b_{s}=4$, $n_{c}=2$) & (b) HBMC ($w=4$, $b_{s}=4$, $n_{c}=2$) \\
& \\
\multicolumn{2}{c}{\includegraphics[scale=0.5,clip, bb= 8 70 458 510]{2d-stencil-example-HBMC-MAT-5.pdf} }\\
\multicolumn{2}{c}{(c) Coefficient matrix based on HBMC} \\
\end{tabular}
\caption{Ordering graphs in a five-point stencil problem and coefficient matrix based on HBMC}
\label{5point-bmc-hbmc}
\end{figure*}
As an example that shows the relationship between BMC and HBMC, Fig. \ref{5point-bmc-hbmc} demonstrates the ordering of nodes (unknowns) in a five-point finite difference analysis. Figures \ref{5point-bmc-hbmc} (a) and (b) show that BMC and HBMC have identical ordering graphs. Consequently, the two orderings are equivalent in terms of convergence. Figure \ref{5point-bmc-hbmc} (c) shows the coefficient matrix based on HBMC, which involves the hierarchical block structures.
\subsection{Parallelization and vectorization of forward and backward substitutions}
Corresponding to the colors of the unknowns, solution vector $\bar{\mbox{\boldmath $x$}}$ and coefficient matrix $\bar{\mbox{\boldmath $A$}}$ are split as
\begin{equation}
\bar{\mbox{\boldmath $x$}}= \left( \begin{array}{c}
\bar{\mbox{\boldmath $x$}}_{1} \\
\bar{\mbox{\boldmath $x$}}_{2} \\
\vdots \\
\bar{\mbox{\boldmath $x$}}_{n_{c}} \\
\end{array}
\right),
\eeq{x}
and
\begin{equation}
\bar{\mbox{\boldmath $A$}}= \left( \begin{array}{cccc}
\bar{\mbox{\boldmath $C$}}_{1,1} & \bar{\mbox{\boldmath $C$}}_{1,2} & \ldots & \bar{\mbox{\boldmath $C$}}_{1,n_{c}}\\
\bar{\mbox{\boldmath $C$}}_{2,1} & \bar{\mbox{\boldmath $C$}}_{2,2} & \ldots & \bar{\mbox{\boldmath $C$}}_{2,n_{c}}\\
\vdots & \vdots & \ddots & \vdots\\
\bar{\mbox{\boldmath $C$}}_{n_{c},1} & \bar{\mbox{\boldmath $C$}}_{n_{c},2} & \ldots & \bar{\mbox{\boldmath $C$}}_{n_{c},n_{c}} \\
\end{array}
\right),
\eeq{ax}
where $\bar{\mbox{\boldmath $x$}}_{c}$ corresponds to the unknowns with color $c$, and $\bar{\mbox{\boldmath $C$}}_{c,d}$ represents the relationship between $\bar{\mbox{\boldmath $x$}}_{c}$ and $\bar{\mbox{\boldmath $x$}}_{d}$.
Hereafter, we assume that the size of $\bar{\mbox{\boldmath $x$}}_{c}$ is a multiple of $b_{s}w$. In the analysis program, this assumption is satisfied by introducing dummy unknowns.
Let the number of level-1 blocks assigned to color $c$ be denoted by $\bar{n}(c)$, then diagonal block $\bar{\mbox{\boldmath $C$}}_{c,c}$ of $\bar{\mbox{\boldmath $A$}}$ is given by the following block diagonal matrix:
\begin{equation}
\bar{\mbox{\boldmath $C$}}_{c,c} = \left( \begin{array}{cccc}
\bar{\mbox{\boldmath $B$}}_{1}^{(c)} & & & {\bf 0} \\
& \bar{\mbox{\boldmath $B$}}_{2}^{(c)} & & \\
& & \ddots & \\
{\bf 0} & & & \bar{\mbox{\boldmath $B$}}_{\bar{n}(c)}^{(c)} \\
\end{array}
\right),
\eeq{BAMC1}
where $\bar{\mbox{\boldmath $B$}}_{k}^{(c)}$ is the $b_{s}w$ $\times$ $b_{s}w$ matrix that corresponds to the unknowns in the $k$-th level-1 block with color $c$, which we denote by $\bar{b}_{k}^{(c)}$.
Moreover, matrix $\bar{\mbox{\boldmath $B$}}_{k}^{(c)}$ is written as
\begin{equation}
\bar{\mbox{\boldmath $B$}}_{k}^{(c)} = \left( \begin{array}{cccc}
\bar{\mbox{\boldmath $D$}}_{1}^{(k, c)} & \bar{\mbox{\boldmath $E$}}_{1,2}^{(k,c)} & \ldots & \bar{\mbox{\boldmath $E$}}_{1,b_{s}}^{(k,c)} \\
\bar{\mbox{\boldmath $E$}}_{2,1}^{(k,c)} & \bar{\mbox{\boldmath $D$}}_{2}^{(k, c)} & \ldots & \bar{\mbox{\boldmath $E$}}_{2,b_{s}}^{(k,c)} \\
\vdots & \vdots & \ddots & \\
\bar{\mbox{\boldmath $E$}}_{b_{s},1}^{(k,c)}& \bar{\mbox{\boldmath $E$}}_{b_{s},2}^{(k,c)} & \ldots & \bar{\mbox{\boldmath $D$}}_{b_{s}}^{(k, c)} \\
\end{array}
\right),
\eeq{BAMC-level-2}
where $\bar{\mbox{\boldmath $D$}}_{l}^{(k, c)}, (l=1, 2, \ldots, b_{s})$ are $w$ $\times$ $w$ diagonal matrices.
The forward substitution included in ILU(0)/IC(0) preconditioners or GS and SOR methods uses a lower triangular matrix with the same nonzero element pattern as the lower triangular part of $\bar{\mbox{\boldmath $A$}}$. From (\ref{ax}) and (\ref{BAMC1}), lower triangular matrix $\bar{\mbox{\boldmath $L$}}$ is written as
\begin{equation}
\bar{\mbox{\boldmath $L$}}= \left( \begin{array}{cccc}
\bar{\mbox{\boldmath $L$}}_{1,1} & & & \\
\bar{\mbox{\boldmath $L$}}_{2,1} & \bar{\mbox{\boldmath $L$}}_{2,2} & & {\bf 0} \\
\vdots & \ddots & \ddots & \\
\bar{\mbox{\boldmath $L$}}_{n_{c},1} & \bar{\mbox{\boldmath $L$}}_{n_{c},2} & \ldots & \bar{\mbox{\boldmath $L$}}_{n_{c},n_{c}} \\
\end{array}
\right),
\eeq{lx}
and diagonal block $\bar{\mbox{\boldmath $L$}}_{c,c}$ is given by
\begin{equation}
\bar{\mbox{\boldmath $L$}}_{c,c} = \left( \begin{array}{cccc}
\bar{\mbox{\boldmath $L$}}_{1}^{(c)} & & & {\bf 0} \\
& \bar{\mbox{\boldmath $L$}}_{2}^{(c)} & & \\
& & \ddots & \\
{\bf 0} & & & \bar{\mbox{\boldmath $L$}}_{\bar{n}(c)}^{(c)} \\
\end{array}
\right),
\eeq{BAMC2}
where $\bar{\mbox{\boldmath $L$}}_{k}^{(c)}$ is the $b_{s}w$ $\times$ $b_{s}w$ lower triangular matrix that corresponds to block $\bar{b}_{k}^{(c)}$.
The forward substitution for the reordered linear system is given by
\begin{equation}
\bar{\mbox{\boldmath $L$}} \bar{\mbox{\boldmath $y$}} = \bar{\mbox{\boldmath $r$}},
\eeq{for1}
where $\bar{\mbox{\boldmath $r$}}$ is the residual vector in the case of ILU (IC) preconditioning.
Let $\bar{\mbox{\boldmath $y$}}_{c}$ and $\bar{\mbox{\boldmath $r$}}_{c}$ represent, respectively, the segments of $\bar{\mbox{\boldmath $y$}}$ and $\bar{\mbox{\boldmath $r$}}$ that correspond to color $c$, and $\bar{\mbox{\boldmath $y$}}_{k}^{(c)}$ and $\bar{\mbox{\boldmath $r$}}_{k}^{(c)}$ represent the subsegments in the segments that correspond to block $\bar{b}_{k}^{(c)}$; that is,
\begin{equation}
\bar{\mbox{\boldmath $y$}}_{c}= \left( \begin{array}{c}
\bar{\mbox{\boldmath $y$}}_{1}^{(c)} \\
\bar{\mbox{\boldmath $y$}}_{2}^{(c)} \\
\vdots \\
\bar{\mbox{\boldmath $y$}}_{\bar{n}(c)}^{(c)} \\
\end{array}
\right), \ \mbox{and} \ \
\bar{\mbox{\boldmath $r$}}_{c}= \left( \begin{array}{c}
\bar{\mbox{\boldmath $r$}}_{1}^{(c)} \\
\bar{\mbox{\boldmath $r$}}_{2}^{(c)} \\
\vdots \\
\bar{\mbox{\boldmath $r$}}_{\bar{n}(c)}^{(c)} \\
\end{array}
\right).
\eeq{y1}
Then, from (\ref{lx}) and (\ref{for1}), the forward substitution for $\bar{\mbox{\boldmath $y$}}_{c}$ is given by
\begin{equation}
\bar{\mbox{\boldmath $y$}}_{c} = \bar{\mbox{\boldmath $L$}}_{c,c}^{-1} \bar{\mbox{\boldmath $q$}}_{c},
\eeq{for2}
where
\begin{equation}
\bar{\mbox{\boldmath $q$}}_{c} = \bar{\mbox{\boldmath $r$}}_{c} - \sum_{d=1}^{c-1} \bar{\mbox{\boldmath $L$}}_{c,d} \bar{\mbox{\boldmath $y$}}_{d}.
\eeq{for3}
Because vector segments $\bar{\mbox{\boldmath $y$}}_{d} (d=1, \ldots, c-1)$ are computed prior to the substitution (\ref{for2}) and shared among all threads, $\bar{\mbox{\boldmath $q$}}_{c}$ is a given vector in (\ref{for2}). When the segment of $\bar{\mbox{\boldmath $q$}}_{c}$ that corresponds to block $\bar{b}_{k}^{(c)}$ is denoted by $\bar{\mbox{\boldmath $q$}}_{k}^{(c)}$ as in (\ref{y1}), from (\ref{BAMC2}), the forward substitution (\ref{for2}) is expressed as $\bar{n}(c)$ independent steps:
\begin{equation}
\bar{\mbox{\boldmath $y$}}_{k}^{(c)}=(\bar{\mbox{\boldmath $L$}}_{k}^{(c)})^{-1} \bar{\mbox{\boldmath $q$}}_{k}^{(c)} \ (k=1, \ldots, \bar{n}(c)).
\eeq{BAMC3}
Consequently, the forward substitution (\ref{for2}) for color $c$ can be multithreaded with the degree of parallelism given by the number of level-1 blocks of each color, which is approximately $n/(n_{c} \cdot b_{s} \cdot w)$. Each thread processes one or more level-1 blocks in parallel.
Next, we explain how to vectorize each step of (\ref{BAMC3}).
We consider the procedure for the $k$-th level-1 block of color $c$: $\bar{\mbox{\boldmath $y$}}_{k}^{(c)}=(\bar{\mbox{\boldmath $L$}}_{k}^{(c)})^{-1} \bar{\mbox{\boldmath $q$}}_{k}^{(c)}$.
From (\ref{BAMC-level-2}), lower triangular matrix $\bar{\mbox{\boldmath $L$}}_{k}^{(c)}$ is written as
\begin{equation}
\bar{\mbox{\boldmath $L$}}_{k}^{(c)} = \left( \begin{array}{cccc}
\tilde{\mbox{\boldmath $D$}}_{1}^{(k, c)} & & & \\
\bar{\mbox{\boldmath $L$}}_{2,1}^{(k,c)} & \tilde{\mbox{\boldmath $D$}}_{2}^{(k, c)} & & {\bf 0} \\
\vdots & \ddots & \ddots & \\
\bar{\mbox{\boldmath $L$}}_{b_{s},1}^{(k,c)}& \bar{\mbox{\boldmath $L$}}_{b_{s},2}^{(k,c)} & \ldots & \tilde{\mbox{\boldmath $D$}}_{b_{s}}^{(k, c)} \\
\end{array}
\right),
\eeq{BAMC-level-3}
where $\tilde{\mbox{\boldmath $D$}}_{l}^{(k, c)}$ are diagonal matrices.
We split $\bar{\mbox{\boldmath $y$}}_{k}^{(c)}$ and $\bar{\mbox{\boldmath $q$}}_{k}^{(c)}$ into $b_{s}$ segments, each of size $w$.
Let $\bar{\mbox{\boldmath $y$}}_{l}^{(k, c)}$ and $\bar{\mbox{\boldmath $q$}}_{l}^{(k, c)}$ represent the segments that correspond to the level-2 block of the $k$-th level-1 block of color $c$; that is,
\begin{equation}
\bar{\mbox{\boldmath $y$}}_{k}^{(c)}= \left( \begin{array}{c}
\bar{\mbox{\boldmath $y$}}_{1}^{(k, c)} \\
\bar{\mbox{\boldmath $y$}}_{2}^{(k, c)} \\
\vdots \\
\bar{\mbox{\boldmath $y$}}_{b_{s}}^{(k, c)} \\
\end{array}
\right), \ \mbox{and} \ \
\bar{\mbox{\boldmath $q$}}_{k}^{(c)}= \left( \begin{array}{c}
\bar{\mbox{\boldmath $q$}}_{1}^{(k, c)} \\
\bar{\mbox{\boldmath $q$}}_{2}^{(k, c)} \\
\vdots \\
\bar{\mbox{\boldmath $q$}}_{b_{s}}^{(k, c)} \\
\end{array}
\right).
\eeq{level2-y1}
Then, from (\ref{BAMC-level-3}), the forward substitution for level-1 block $\bar{b}_{k}^{(c)}$ is given by the following $b_{s}$ sequential steps:
\begin{equation}
\bar{\mbox{\boldmath $y$}}_{l}^{(k, c)} = (\tilde{\mbox{\boldmath $D$}}_{l}^{(k, c)})^{-1} \bar{\mbox{\boldmath $t$}}_{l}^{(k, c)}, (l=1,2,\ldots,b_{s}),
\eeq{y2-2}
where
\begin{equation}
\bar{\mbox{\boldmath $t$}}_{l}^{(k, c)} =\bar{\mbox{\boldmath $q$}}_{l}^{(k, c)} - \sum_{m=1}^{l-1}\bar{\mbox{\boldmath $L$}}_{l,m}^{(k,c)} \bar{\mbox{\boldmath $y$}}_{m}^{(k, c)}.
\eeq{y2-3}
In the $l$-th step of (\ref{y2-2}), because $\tilde{\mbox{\boldmath $D$}}_{l}^{(k, c)}$ is a diagonal matrix and each element of $\bar{\mbox{\boldmath $t$}}_{l}^{(k, c)}$ can be calculated in parallel, the step consists of $w$ independent steps that can be efficiently vectorized. In other words, the $l$-th step of (\ref{y2-2}) consists of a simple matrix vector multiplication and element-wise vector updates that are directly vectorized.
The backward substitution is parallelized (multithreaded) and vectorized in a similar manner, although it is performed inversely from color $n_{c}$ to 1.
\subsection{Implementation of HBMC}
\subsubsection{Reordering process}
In this section, we discuss the reordering process.
In our technique, any algorithm (heuristic) for constructing BMC can be used.
In the application of BMC, we set the number of BMC blocks assigned to each thread as a multiple of $w$, except for one of the threads (typically the last-numbered thread). Under this setting, the application of HBMC, that is, the secondary reordering from BMC, is performed independently in each thread. Therefore, the reordering process is fully multithreaded.
\subsubsection{Storage format}
In the implementation of the sparse triangular solver, a storage format for sparse matrices~\cite{templates} is typically used.
For example, the factorization matrices in an IC/ILU preconditioned solver are stored in memory using such a format.
Although there are several standard formats,
the sliced ELL (SELL) format~\cite{sell-1} is the most efficient for exploiting the benefit of SIMD instructions, and we used it in our implementation.
In the SELL format, the slice size is an important parameter. In HBMC, we naturally set the slice size to $w$ because the forward and backward substitutions are vectorized every $w$ rows. This naturally introduces into the analysis the concept of the SELL-C-$\sigma$ format~\cite{sell-2}, a refined version of SELL.
\subsubsection{Multithreaded and vectorized substitutions}
The program for each forward and backward substitution consists of nested loops. The outer-most loop is for the color. After the computations for each color, thread synchronization is required. Therefore, the number of synchronizations in each substitution is given by $n_{c}-1$, which is the same as BMC and the standard multi-color ordering (MC). The second loop is for level-1 blocks. Because the level-1 blocks in each color are mutually independent, each thread processes single or multiple level-1 blocks in parallel.
In each level-1 block, the substitution can be split into $b_{s}$ steps, each of which is vectorized with a SIMD width of $w$.
For the vectorized substitution, we used the OpenMP SIMD directive or the Intel intrinsic functions for SIMD instructions. Figure \ref{C-vec} shows a sample C program code for multithreaded and vectorized forward substitution using OpenMP and Intel AVX-512 intrinsic functions.
Additionally, we discuss the special nonzero pattern that appears in $\bar{\mbox{\boldmath $L$}}_{k}^{(c)}$, which corresponds to a level-1 block. In this matrix, all nonzero elements lie on $2b_{s}-1$ diagonal lines. Although we could use a hybrid storage format that exploits this special structure, it does not typically result in better performance because of the additional cost of processing the diagonal block and the other off-diagonal elements separately. We confirmed this in some preliminary tests.
Finally, we discuss the data access locality. The access pattern for the vector elements in HBMC is different from that in BMC. Therefore, the data access locality can be different between two orderings. However, because the secondary reordering for HBMC is performed inside a level-1 block, the data access locality barely changes; at least, from the viewpoint of the last-level cache, both orderings are considered to be similar.
\begin{figure}[tb]
\begin{center}
\begin{tabular}{|l|}\hline
\verb|for (c = 0; c < nc; c++){|\\
\verb| #pragma omp for private(p, j, num, t, index, \|\\
\verb| mtmp, mval, pos, mb, mdiag)|\\
\verb| for ( k = lev1b[c]; k < lev1b[c+1]; k++ ){|\\
\verb| num = k * 8 * bs;|\\
\verb| index = mat.offset[k];|\\
\verb| j = lev2b[k];|\\
\verb| for ( p = j; p < j + bs; p++ ){|\\
\verb| mtmp = _mm512_load_pd( &r[num] );|\\
\verb| for ( t = 0; t < mat.slen.lev2b[p]; t++ ){|\\
\verb| mval = _mm512_load_pd( &val[index] );|\\
\verb| pos = _mm512_load_epi32( &col[index] );|\\
\verb| mb = _mm512_i32logather_pd( pos, z, 8 );|\\
\verb| mtmp = _mm512_sub_pd( mtmp, \ |\\
\verb| _mm512_mul_pd(mval,mb) );|\\
\verb| index += 8;|\\
\verb| }|\\
\verb| mdiag = _mm512_load_pd( &diaginv[num] );|\\
\verb| mtmp = _mm512_mul_pd( mtmp, mdiag );|\\
\verb| _mm512_store_pd( &z[num], mtmp );|\\
\verb| num = num + 8;|\\
\verb| }|\\
\verb| }|\\
\verb|}|\\ \hline
\end{tabular}
\caption{C program code of multithreaded and vectorized forward substitution with OpenMP directives and Intel AVX-512 intrinsic functions}
\label{C-vec}
\end{center}
\end{figure}
\begin{table*}[tbp]
\centering
\caption{Information about the test environments}
\label{cpuInfo}
\begin{tabular}{|c|c|c|c|}
\hline
System & Cray XC40 & Cray CS400 (2820XT) & Fujitsu CX2550 (M4) \\ \hline
\multirow{2}{*}{Processor type} & Intel Xeon Phi & Intel Xeon & Intel Xeon \\
& (7250, KNL) & (E5-2695 v4, Broadwell) & (Gold6148, Skylake) \\ \hline
\# processors / node & 1 & 2 & 2 \\ \hline
\# cores / processor & 68 & 18 & 20 \\ \hline
Clock (GHz) & 1.4 & 2.1 & 2.4 \\ \hline
Compiler & icc 18.0.3 & icc 17.0.6 & icc 17.0.6 \\ \hline
\multirow{3}{*}{Compile options} & -qopenmp -ipo -O3 & -mcmodel=medium & -mcmodel=medium \\
& -xMIC-AVX512 & -shared-intel -qopenmp & -shared-intel -qopenmp \\
& & -xHost -ipo -O3 & -xCORE-AVX512 -ipo -O3 \\ \hline
\end{tabular}
\end{table*}
\section{Numerical Results}
\subsection{Computers and Test Problems}
We conducted five numerical tests on three types of computational nodes to evaluate the proposed reordering technique in the context of the ICCG method: the computational nodes were Cray XC40, Cray CS400 (2820XT), and Fujitsu CX2550 (M4). The two Cray systems are operated by the Academic Center for Computing and Media Studies, Kyoto University, whereas the Fujitsu system is operated by the Information Initiative Center, Hokkaido University. Table \ref{cpuInfo} lists information about the computational nodes and compilers used. In the numerical tests, we used all cores of the computational node for execution.
The program code was written in C and OpenMP for the thread parallelization.
For vectorization, we used the intrinsic functions of the Intel Advanced Vector Extensions.
The AVX2 (256-bit SIMD) instruction set was used for the Xeon (Broadwell) processor, whereas the AVX-512 (512-bit SIMD) instruction set was used for the Xeon Phi (KNL) and Xeon (Skylake) processors.
Although we also developed a vectorized program using the OpenMP SIMD directive, its performance was slightly worse than that of the version with the intrinsic functions in most of the test cases.
Thus, in this paper, we only report the results obtained using the intrinsic functions.
For the test problems, we used a linear system that arises from finite element electromagnetic field analysis and four linear systems selected from the SuiteSparse Matrix Collection.
We selected symmetric positive-definite matrices that are mainly derived from computational science or engineering problems and have a relatively large dimension compared with other matrices in the collection.
In the electromagnetic field analysis test, the linear system that arises from the finite element discretization of the IEEJ standard benchmark model~\cite{ieej} was used. The basic equation for the problem is given as
\begin{equation}
\nabla \times (\nu \nabla \times \mbox{\boldmath $A$}_{m}) = \mbox{\boldmath $J$}_{0},
\eeq{ieej}
where $\mbox{\boldmath $A$}_{m}$, $\nu$, and $\mbox{\boldmath $J$}_{0}$ are the magnetic vector potential, magnetic reluctivity, and excitation current density,
respectively. The analysis solved (\ref{ieej}) using a
finite edge-element method with hexahedral elements.
Applying the Galerkin method to (\ref{ieej}), we obtained a linear
system of equations for the test problem.
The resulting linear system was solved using the shifted ICCG method, with the shift parameter given as 0.3.
This dataset is denoted by Ieej.
Table \ref{matInfo} lists the matrix information for the test problems.
In this paper, we report the performance comparison of four multithreaded ICCG solvers.
The solver denoted by ``MC'' is based on multi-color ordering, which is the most popular parallel ordering method.
The solver ``BMC'' is based on block multi-color ordering.
The solvers ``HBMC (crs\_spmv)'' and ``HBMC (sell\_spmv)'' are based on the proposed HBMC, where the former uses the compressed row storage (CRS) format~\cite{templates} for the implementation of sparse matrix-vector multiplication (SpMV) and the latter uses the SELL format. In MC and BMC, the CRS format was used.
For the blocking method in BMC and HBMC, we used the simplest of the heuristics introduced in \cite{IPDPS2012}, in which the unknown with the smallest index is added to the newly generated block. For the coloring of nodes or blocks, the greedy algorithm was used in all the solvers. The convergence criterion was that the relative residual norm (2-norm)
be less than $10^{-7}$.
\begin{table}[tbp]
\centering
\caption{Matrix information for the test problems}
\label{matInfo}
\begin{tabular}{|c|c|c|c|}
\hline
Data set & Problem type & Dimension & \# nonzero \\
\hline
\multirow{2}{*}{Thermal2} & Thermal & \multirow{2}{*}{1,228,045} & \multirow{2}{*}{8,580,313} \\
& problem & & \\ \hline
Parabolic\_fem & CFD & 525,825 & 3,674,625 \\ \hline
G3\_circuit & Circuit problem & 1,585,478 & 7,660,826 \\ \hline
\multirow{2}{*}{Audikw\_1} & Structural & \multirow{2}{*}{943,695} & \multirow{2}{*}{77,651,847} \\
& problem & & \\ \hline
\multirow{2}{*}{Ieej} & Eddy current & \multirow{2}{*}{1,011,920} & \multirow{2}{*}{31,468,056} \\
& analysis & & \\ \hline
\end{tabular}
\end{table}
\begin{table}[tbp]
\centering
\caption{Comparison of the number of iterations}
\label{number-ite}
\begin{tabular}{|c|c|c|c|}
\hline
Dataset$\backslash$method & MC & BMC & HBMC \\
\hline
Thermal2 & 2283 & 2129 & 2129 \\
\hline
Parabolic\_fem & 1145 & 1052 & 1052 \\
\hline
G3\_circuit & 1521 & 1227 & 1227 \\
\hline
Audikw\_1 & 1728 & 1714 & 1715 \\
\hline
Ieej & 580 & 446 & 446 \\
\hline
\end{tabular}
\end{table}
\subsection{Numerical results}
\subsubsection{Equivalence of orderings in convergence and use of SIMD instructions}
First, we examine the equivalence of BMC and HBMC in terms of convergence.
Table \ref{number-ite} lists the number of iterations of the solvers tested on Cray XC40, where the block size of BMC and HBMC was set to 32. Equivalence was confirmed by the numerical results. Moreover, to examine the entire solution process, Figure \ref{conv-behavior} shows the convergence behaviors of BMC and HBMC in the G3\_circuit and Ieej tests. In the figure, the two lines of the relative residual norms overlap, which indicates that the solvers had an equivalent solution process. The equivalence of convergence was also confirmed in all test cases (five datasets $\times$ three block sizes $\times$ three computational nodes). Furthermore, Table \ref{number-ite} shows the advantage in terms of convergence of BMC over MC, which coincides with the results reported in \cite{IPDPS2012}.
Next, we checked the use of SIMD instructions in the solver using the Intel Vtune Amplifier (application performance snapshot) in the G3\_circuit test conducted on Fujitsu CX2550. The snapshot showed that the percentage of packed floating point instructions among all floating point instructions in the solver based on HBMC (sell\_spmv) reached 99.7\%, whereas that in the solver using BMC was 12.7\%.
\subsubsection{Performance comparison}
Table \ref{Results} (a) shows the performance comparison of four solvers in the numerical tests on Cray XC40.
In the tests, HBMC attained the best performance for all datasets, except Audikw\_1.
In the Thermal2 and G3\_circuit tests, HBMC (sell\_spmv) was more than two times faster than the standard MC solver.
When HBMC (crs\_spmv) was compared with BMC, it attained better performance in 11 out of 15 cases (five datasets $\times$ three block sizes), which demonstrates the effectiveness of HBMC for the sparse triangular solver.
Moreover, in all test cases, HBMC (sell\_spmv) outperformed HBMC (crs\_spmv), which implies that an efficient use of SIMD instructions was important on the Xeon Phi-based system.
Table \ref{Results} (b) shows the test results on Cray CS400 (Xeon Broadwell).
In the numerical tests, HBMC attained the best performance for all datasets.
When HBMC (crs\_spmv) was compared with BMC, it attained better performance in 13 out of 15 cases, which shows the effectiveness of HBMC.
Table \ref{Results} (b) also indicates that using the SELL format for the coefficient matrix mostly led to an improvement in solver performance.
Table \ref{Results} (c) shows the test results on Fujitsu CX2550 (Xeon Skylake).
In the numerical tests, HBMC outperformed MC and BMC for four out of five datasets.
For the Audikw\_1 dataset, HBMC did not outperform BMC on the Xeon Phi and Skylake processors, although it was better than BMC on the Xeon Broadwell processor. We attribute this result to the effect of the slice size, which is given by the SIMD width $w$.
In the SELL format, some zero elements in a slice are stored and processed as if they were nonzero.
When the number of nonzero elements per row is significantly imbalanced within a slice, the number of processed elements increases substantially compared with the CRS-based implementation.
The Audikw\_1 dataset has this property: the number of processed elements in SELL increased by 40\% compared with that in CRS, whereas the increase was only 10\% for the G3\_circuit dataset.
For this type of dataset, enlarging the slice often further increases the number of processed elements.
The slice size was set to 8 for the Xeon Phi and Skylake processors and to 4 for the Xeon Broadwell processor.
On the Broadwell processor, the increase in the number of processed elements when changing from CRS to SELL was 28\%, which still allowed HBMC to perform better than BMC. In the future, for further acceleration of the solver, we will develop an implementation that includes remedies for this SELL issue, for example, splitting a row with an extremely large number of nonzero elements, compared with the other rows, into two rows.
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.7,clip, bb= 20 45 490 350]{g3-bs-32.pdf} \\
(a) G3\_circuit test \\
\includegraphics[scale=0.7,clip, bb= 20 45 490 350]{ieej-bs-32.pdf} \\
(b) Ieej test
\caption{Convergence behavior of BMC and HBMC}
\label{conv-behavior}
\end{figure}
\begin{table*}[t]
\centering
\caption{Numerical results: execution time (sec.)}
\label{Results}
(a) Cray XC40 (Intel Xeon Phi)
\vspace{0.3\baselineskip}
\begin{tabular}{|c|c|ccc|ccc|ccc|}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{MC} & \multicolumn{3}{|c}{BMC} & \multicolumn{3}{|c}{HBMC (crs\_spmv)} & \multicolumn{3}{|c|}{HBMC (sell\_spmv)} \\
& & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ \\ \hline
Thermal2 & 20.2 & 17.8 & 17.9 & 19.4 & 13.8 & 14.5 & 15.6 & {\bf 9.28} & 9.79 & 11.1 \\
Parabolic\_fem & 2.64 & 2.77 & 2.79 & 2.72 & 2.35 & 2.33 & 2.48 & {\bf 1.72} & {\bf 1.72} & 1.79 \\
G3\_circuit & 7.98 & 7.97 & 7.72 & 9.91 & 5.16 & 5.27 & 5.49 & {\bf 3.35} & 3.55 & 3.61 \\
Audikw\_1 & 109.4 & 73.2 & 69.3 & {\bf 63.9} & 72.7 & 73.2 & 73.3 & 68.3 & 68.0 & 69.1 \\
Ieej & 4.58 & 5.35 & 5.07 & 4.12 & 4.94 & 5.02 & 4.55 & 3.60 & 3.49 & {\bf 3.18} \\
\hline
\end{tabular}
\vspace{0.3\baselineskip}
(b) Cray CS400 (Intel Xeon Broadwell)
\vspace{0.3\baselineskip}
\begin{tabular}{|c|c|ccc|ccc|ccc|}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{MC} & \multicolumn{3}{|c}{BMC} & \multicolumn{3}{|c}{HBMC (crs\_spmv)} & \multicolumn{3}{|c|}{HBMC (sell\_spmv)} \\
& & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ \\ \hline
Thermal2 & 16.3 & 14.8 & 14.8 & 15.3 & {\bf 13.5} & 14.2 & 14.8 & 14.4 & 14.2 & 14.7 \\
Parabolic\_fem & 2.91 & 2.69 & 2.69 & 2.53 & 2.55 & 2.54 & 2.51 & 2.41 & 2.33 & {\bf 2.31} \\
G3\_circuit & 9.39 & 7.70 & 8.15 & 7.92 & 7.62 & 7.90 & 8.20 & {\bf 7.23} & 7.44 & 7.46 \\
Audikw\_1 & 70.9 & 64.2 & 66.5 & 60.1 & 58.7 & 55.8 & {\bf 55.0} & 59.2 & 56.3 & 55.9 \\
Ieej & 6.98 & 6.49 & 6.72 & 5.34 & 6.93 & 6.42 & 5.18 & 5.97 & 5.51 & {\bf 4.83} \\
\hline
\end{tabular}
\vspace{0.3\baselineskip}
(c) Fujitsu CX2550 M4 (Intel Xeon Skylake)
\vspace{0.3\baselineskip}
\begin{tabular}{|c|c|ccc|ccc|ccc|}
\hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{MC} & \multicolumn{3}{|c}{BMC} & \multicolumn{3}{|c}{HBMC (crs\_spmv)} & \multicolumn{3}{|c|}{HBMC (sell\_spmv)} \\
& & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ & $b_{s}=8$ & $b_{s}=16$ & $b_{s}=32 $ \\ \hline
Thermal2 & 10.1 & 9.74 & 9.38 & 9.21 & {\bf 8.58} & 9.62 & 9.33 & 8.70 & 8.93 & 9.13 \\
Parabolic\_fem & 1.81 & 1.70 & 1.75 & 1.64 & {\bf 1.47} & 1.57 & 1.78 & 1.52 & 1.49 & 1.49 \\
G3\_circuit & 6.17 & 5.07 & 5.23 & 5.01 & 4.83 & 5.19 & 5.60 & {\bf 4.54} & 5.09 & 4.87 \\
Audikw\_1 & 44.6 & {\bf 34.6} & 36.4 & 36.6 & 35.5 & 37.5 & 37.9 & 37.4 & 37.3 & 39.8 \\
Ieej & 3.76 & 4.71 & 3.72 & 3.56 & 4.20 & 3.69 & 3.37 & 3.81 & 3.51 & {\bf 3.15} \\
\hline
\end{tabular}
\end{table*}
\section{Related Works}
The parallelization of the sparse triangular solver for iterative solvers has been mainly investigated in the context of GS or IC/ILU smoothers for multigrid solvers, the SOR method, and IC/ILU preconditioned iterative solvers. Most of these parallelization techniques are classified into two classes: domain decomposition type methods and parallel orderings~\cite{duff2}.
A simple but commonly used technique in the former class is the additive Schwarz smoother or preconditioner.
The hybrid Jacobi and GS smoother, and block Jacobi IC/ILU preconditioning are typical examples, and they are used in many applications~\cite{local-ICCG,hybridsmoother}.
However, these techniques typically suffer from a decline in convergence when the number of threads (processes) is increased. Although there are some remedies for the deterioration in convergence, for example, the overlapping technique~\cite{radicati}, it is generally difficult to compensate for it when many cores are used.
A parallel ordering or reordering technique is a standard method to parallelize the sparse triangular solver.
We focus on the parallelization of IC/ILU preconditioners or smoothers; however, there are many studies that discuss the application of parallel ordering for GS and SOR methods, for example, \cite{intel2}.
Ref.~\cite{duff2} provides an overview of early work on parallel ordering methods applied to IC/ILU preconditioned iterative solvers.
Parallel orderings were mainly investigated in the context of a structured grid problem (a finite difference analysis), and the concepts of typical parallel ordering techniques, such as red-black, multi-color, zebra, domain decomposition (four or eight-corner), and nested dissection, were established in the 1990s.
In these early research activities, a major issue was the trade-off problem between convergence and the degree of parallelism.
After Duff and Meurant indicated the problem in \cite{Duff}, both analytical and numerical investigations were conducted in \cite{doi4, siam-iwa, doi3, Kuo, doi2, Eijkhout, doi1, benzi}.
The concept of equivalence of orderings and some remedies for the trade-off problem, such as the use of a relatively large number of colors in multi-color ordering or block coloring, were introduced as the results of these research activities.
In practical engineering and science domains, unstructured problems are solved more frequently than structured problems.
Therefore, parallel ordering techniques were enhanced for unstructured problems, and several heuristics were proposed.
Typical examples are hierarchical interface decomposition (HID)~\cite{henon} and heuristics for nodal or block multi-coloring~\cite{multi, amc, IPDPS2012}.
These techniques and other related methods have been used in various application domains, such as CFD, computational electromagnetics, and structural analysis \cite{semba, tsuburaya, CFD, kengo, nvidia}.
Finally, we review recently reported research results related to parallel linear solvers that involve sparse triangular solvers.
Gonzaga de Oliveira et al. reported intensive numerical tests evaluating various reordering techniques in the ICCG method in \cite{Oliveira}.
Gupta introduced a blocking framework to generate a fast and robust preconditioner based on ILU factorization in \cite{gupta}.
Chen et al. developed a couple of ILU-based preconditioners on GPUs in \cite{chen}. Ruiz et al. reported the evaluation results of HPCG implementations using nodal and block multi-color orderings on the ARM-based system, which confirmed the superiority of the block coloring method in \cite{arm}.
In this paper, we proposed a parallel ordering that is different from the techniques described above. To the best of our knowledge, there is no parallel ordering method that vectorizes the sparse triangular solver while maintaining the same convergence and number of synchronizations as the block multi-color ordering.
Since the vectorization of SpMV has been intensively investigated~\cite{sell-1,sell-2}, one conventional approach is to use multi-color ordering, in which the substitution is represented as an SpMV in each color.
However, the multi-color ordering suffers from the problems of convergence and data locality, which are also indicated in the latest report~\cite{arm}.
When we consider the numerical results and mathematical properties of the proposed hierarchical block multi-color ordering, it can be regarded as an effective technique for multithreading and vectorizing the sparse triangular solver.
\section{Conclusions}
In this paper, we proposed a new parallel ordering method, hierarchical block multi-color ordering (HBMC), for vectorizing and multithreading the sparse triangular solver. HBMC was designed to maintain the advantages of the block multi-color ordering (BMC) in terms of convergence and the number of synchronizations.
In the method, the coefficient matrix was transformed into a matrix with a hierarchical block structure.
The level-1 blocks were mutually independent in each color, which was exploited in multithreading.
Corresponding to the level-2 blocks, the substitution was converted into $w$ (the SIMD width) independent steps, which were efficiently processed by SIMD instructions.
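The loop structure described above can be sketched as follows. This is an illustrative Python model of a colored block forward substitution, not the paper's actual implementation: the toy matrix, block sizes, and color assignment are all hypothetical, constructed so that level-1 blocks sharing a color are mutually independent (their coupling entries are zero).

```python
import numpy as np

def colored_block_forward_substitution(L, b, blocks, colors):
    """Solve L x = b.  `blocks` lists the row indices of each level-1
    block; `colors` groups block indices by color.  Blocks sharing a
    color have no mutual dependencies, so they could be assigned to
    independent threads with one synchronization per color; here they
    are processed sequentially (in reversed order, to demonstrate that
    any order within a color gives the same result)."""
    x = np.zeros_like(b, dtype=float)
    for color in colors:                  # one synchronization per color
        for blk in reversed(color):       # parallelizable across threads
            for i in blocks[blk]:         # rows within a block; in HBMC the
                # level-2 structure would make w of these independent,
                # allowing SIMD processing.
                x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

# Toy 8x8 system: blocks {0,1} share color 0, blocks {2,3} share color 1.
rng = np.random.default_rng(0)
L = np.tril(rng.uniform(0.1, 1.0, (8, 8))) + 4 * np.eye(8)
L[2:4, 0:2] = 0.0   # block 1 independent of block 0 (same color)
L[6:8, 4:6] = 0.0   # block 3 independent of block 2 (same color)
b = rng.uniform(-1.0, 1.0, 8)
blocks = [range(0, 2), range(2, 4), range(4, 6), range(6, 8)]
colors = [[0, 1], [2, 3]]
x = colored_block_forward_substitution(L, b, blocks, colors)
```

Because same-color blocks do not couple, reordering them leaves the solution unchanged, which is exactly the property multithreading exploits.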
In this paper, we demonstrated analytically the equivalence of HBMC and BMC in convergence.
Furthermore, numerical tests were conducted to examine the proposed method using five datasets on three types of computational nodes.
The numerical results also confirmed the equivalence of the convergence of HBMC and BMC.
Moreover, the numerical tests indicated that HBMC outperformed BMC in 13 out of 15 test cases (five datasets $\times$ three systems), which confirmed the effectiveness of the proposed method.
In the best case (G3\_circuit, Cray XC40), HBMC was 2.3 times faster than BMC.
In the future, we will examine our technique on other application problems, particularly large-scale multigrid applications and the HPCG benchmark.
Moreover, for further acceleration of the solver, we intend to introduce a sophisticated storage format or related techniques into our solver.
As a further research issue, we will also examine the effect of other coloring and blocking strategies on the performance of the solver.
\section{Introduction}\label{sec:intro}
The recent observations of gravitational waves\xspace from the inspiral and merger of binary black holes herald the era of gravitational-wave\xspace
astronomy \citep{2016PhRvL.116f1102A, 2016PhRvL.116x1103A}. Such cataclysmic, transient, and extragalactic events
are not however the only potential sources of observable gravitational waves\xspace. Galactic neutron stars offer a more local,
and continuous, quasi-monochromatic source of gravitational radiation. Although intrinsically far weaker than
the transient sources that have been observed, their continuous nature allows their signals to be found buried
deep in the noise by coherently integrating over the long observing runs of the gravitational-wave\xspace observatories.
The subset of known pulsars, identified through electromagnetic observations, provides an important possible source of
continuous gravitational waves\xspace. They are often timed with exquisite precision, allowing their rotational phase evolution,
sky location and, if required, binary orbital parameters to be determined very accurately. In turn, these
timings allow us to carry out fully phase-coherent and computationally cheap gravitational-wave\xspace searches over the length of
our observation runs. A selection of known pulsars have already been targeted using data from the initial LIGO,
Virgo, and GEO\,600 detectors \citep[summarized in][]{2014ApJ...785..119A}, setting upper limits on their signal
amplitudes, though without any detections.
An important milestone is passed when this upper limit falls below the so-called spin-down limit on
gravitational strain for the targeted pulsar. This spin-down limit is determined by equating the power
radiated through gravitational-wave\xspace emission to the pulsar's observed spin-down luminosity (attributed to its loss in rotational
kinetic energy), i.e.\ as would be the case if it were a {\it gravitar} \citep{Palomba:2005,Knispel:2008}, and determining the
equivalent strain expected at the Earth.\footnote{This is known to be a na\"{i}ve limit. For several young
pulsars where the braking index (see Section~\ref{sec:results}) is measured
\citep{2015MNRAS.446..857L,2016ApJ...819L..16A}, we know that it is not consistent with pure gravitational wave\xspace emission, and
other energy-loss mechanisms can be dominant. Effects of this on spin-down limit calculations are discussed
in \citet{Palomba:2000}. Figures~9 and 10 of \citet{2013ApJS..208...17A} also show that for pulsars
observed as {\it Fermi} gamma-ray sources, a not insignificant proportion of their spin-down luminosity is
emitted through gamma-rays.} It can be calculated \citep[see, e.g.][]{2014ApJ...785..119A} using
\begin{equation}\label{eq:h0sd}
h_0^{\rm sd} = \left(\frac{5}{2} \frac{G I_{zz} |\dot{f}_{\rm rot}|}{c^3 d^2 \ensuremath{f_{\rm rot}}\xspace}\right)^{1/2},
\end{equation}
where $f_{\rm rot}$ and $\dot{f}_{\rm rot}$ are the pulsar's frequency and first frequency derivative,
$I_{zz}$ is the principal moment of inertia (for which we generally assume a canonical value of
$10^{38}$\,kg\,m$^2$), and $d$ is the pulsar's distance. In previous searches, this limit has been surpassed
(i.e.\ a smaller limit on the strain amplitude has been obtained) for two pulsars: PSR\,J0534+2200 \citep[the
Crab pulsar;][]{Abbott:2008} and PSR\,J0835\textminus4510 \citep[the Vela pulsar;][]{Abadie:2011}.
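Equation~\ref{eq:h0sd} can be evaluated directly. The sketch below is illustrative: it uses approximate published parameters for the Vela pulsar ($f_{\rm rot} \approx 11.19$\,Hz, $\dot{f}_{\rm rot} \approx -1.57\times10^{-11}$\,Hz\,s$^{-1}$, $d \approx 0.287$\,kpc) together with the canonical moment of inertia, and reproduces the $h_0^{\rm sd} \approx 3.4\times10^{-24}$ quoted in Table~\ref{tab:highvalue}.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m s^-1
KPC = 3.086e19    # one kiloparsec in meters
I_ZZ = 1e38       # canonical principal moment of inertia, kg m^2

def h0_spindown(f_rot, fdot_rot, d_kpc, izz=I_ZZ):
    """Spin-down limit on the strain amplitude, Eq. (1)."""
    d = d_kpc * KPC
    return math.sqrt(2.5 * G * izz * abs(fdot_rot) / (c**3 * d**2 * f_rot))

# Approximate Vela parameters (illustrative values):
h0_vela = h0_spindown(f_rot=11.19, fdot_rot=-1.57e-11, d_kpc=0.287)
```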
In this paper, we provide results from a search for gravitational waves\xspace from \NPULSARS known pulsars using data from the first
observing run (O1) of Advanced LIGO (aLIGO). For the LIGO Hanford Observatory (H1) and LIGO Livingston
Observatory (L1), we used data starting on 2015 September 11 at 01:25:03 UTC and 18:29:03 UTC, respectively, and
finishing on 2016 January 19 at 17:07:59~UTC at both sites. With duty factors of \LHODUTYFACTOR and
\LLODUTYFACTOR for H1 and L1, this run provided \LHOOBSTIME days and \LLOOBSTIME days of data respectively for analysis.
The estimated sensitivity of this search as a function of source frequency is shown in
Figure~\ref{fig:sensest}.\footnote{The sensitivity is taken as $10.8\sqrt{S_n'}$, where
$S_n'$ is the harmonic mean of the observation-time-weighted one-sided power spectral densities, $S_n/T$, for
H1 and L1 (see \url{https://dcc.ligo.org/LIGO-G1600150/public} and
\url{https://dcc.ligo.org/LIGO-G1600151/public}, respectively). The factor of 10.8 gives the 95\% credible
upper limit on gravitational-wave\xspace strain amplitude averaged over orientation angles assuming Gaussian noise
\citep{Dupuis:2005}.} We see that, even with its comparatively short observation time, the O1 data provide a
significant sensitivity improvement over the previous runs, particularly at lower frequencies.
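The sensitivity estimate described in the footnote can be reproduced as follows. The PSD and observation-time values below are hypothetical placeholders of a plausible magnitude near 100\,Hz, not the actual O1 measurements.

```python
import math

def strain_sensitivity(Sn_H1, T_H1, Sn_L1, T_L1):
    """95% upper-limit estimate: 10.8 * sqrt(S_n'), where S_n' is the
    harmonic mean of the observation-time-weighted one-sided PSDs S_n/T."""
    w1, w2 = Sn_H1 / T_H1, Sn_L1 / T_L1
    Sn_prime = 2.0 / (1.0 / w1 + 1.0 / w2)
    return 10.8 * math.sqrt(Sn_prime)

# Hypothetical values: S_n ~ 8e-47 strain^2/Hz near 100 Hz,
# with roughly 49 and 51 days of analyzable data per detector.
h95 = strain_sensitivity(8e-47, 49 * 86400, 8e-47, 51 * 86400)
```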
\begin{figure*}[!htbp]
\includegraphics[width=1.0\textwidth]{O1senscurve.pdf}
\caption{Stars show 95\% credible upper limits on gravitational-wave\xspace amplitude, $h_0^{95\%}$, for \NPULSARS pulsars
using data from the O1 run. $\blacktriangledown$ give the spin-down limits for all pulsars (based on
distance values taken from the ATNF pulsar catalog \citep{Manchester:2005}, unless otherwise stated in Tables~\ref{tab:highvalue}
and \ref{tab:allresults}) and assuming the canonical moment of inertia. The upper limits
shown within the shaded circles are those for which the spin-down limits (linked via the dashed vertical lines)
are surpassed with our observations. The gray curve gives an estimate of the expected strain sensitivity for
O1, combining representative amplitude spectral density measurements for both H1 and L1. This estimate is an
angle-averaged value and for particular sources is representative only, whilst the broader range over all
angles for such an estimate is shown, for example, in Figure~4 of \citet{Abbott:2010}. Previous initial
detector run results \citep{2014ApJ...785..119A} for 195 pulsars are shown as red circles, with \NPREVIOUS
of these sources corresponding to sources searched for in O1.
\label{fig:sensest}}
\end{figure*}
\subsection{The signal}
We model the source as a rigidly rotating triaxial star, generating a strain signal at the detector of
\citep[e.g.][]{1998PhRvD..58f3001J}
\begin{align}\label{eq:signal}
h(t) = &h_0 \bigg[\frac{1}{2}F^D_+(t, \alpha, \delta, \psi)(1+\cos{}^2\iota)\cos{\phi(t)}
\nonumber \\
& + F^D_{\times}(t,\alpha, \delta, \psi)\cos{\iota}\sin{\phi(t)}\bigg]
\end{align}
where $h_0$ is the gravitational-wave\xspace strain amplitude, and $F^D_+$ and $F^D_{\times}$ are the antenna responses of
observatory $D$ to the `+' and `$\times$' polarizations. These are dependent on the source sky position (right
ascension $\alpha$ and declination $\delta$) and polarization angle $\psi$. $\iota$ is the inclination of
the star's rotation axis to the line of sight, and $\phi(t)$ represents the evolution of the sinusoidal
signal phase with time.
This phase evolution is usefully represented as a Taylor expansion, so that
\begin{equation}\label{eq:taylor}
\phi(t) = \phi_0 + 2\pi\sum_{j=0}^N
\frac{\mathrel{\overset{\makebox[0pt]{\mbox{\tiny (j)}}}{f_0}}}
{(j+1)!}\left(t-T_0+\delta t(t)\right)^{(j+1)},
\end{equation}
where $\phi_0$ is the initial gravitational-wave\xspace phase at time epoch $T_0$, and
$\mathrel{\overset{\makebox[0pt]{\mbox{\tiny (j)}}}{f_0}}$ is the $j^{\rm th}$ time derivative of the gravitational-wave\xspace
frequency defined at $T_0$. $\delta t(t)$ is the time delay from the observatory to the solar system
barycenter, and can also include binary system barycentering corrections to put the observatory and source in
inertial frames. For the majority of pulsars, expansions to $N=1$ or 2 are all that are required, but for
some young sources, with significant timing noise, expansions to higher orders may be used. For the case of a
source rotating around a principal axis of inertia and producing emission from the $l=m=2$ (spherical harmonic)
mass quadrupole mode (e.g.\ a rigidly rotating star with a triaxial moment of inertia ellipsoid), the gravitational-wave\xspace
frequencies and frequency derivatives are all twice their rotational values, e.g.\ $f = 2\ensuremath{f_{\rm rot}}\xspace$.
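The phase model of Equations~\ref{eq:signal} and \ref{eq:taylor} is straightforward to evaluate. The sketch below sets the barycentering delay $\delta t(t)$ to zero for simplicity and uses hypothetical frequency parameters; it also applies the $f = 2\ensuremath{f_{\rm rot}}\xspace$ convention for triaxial emission.

```python
import math

def gw_phase(t, T0, phi0, fdots, delta_t=0.0):
    """Taylor-expanded signal phase, Eq. (3).  `fdots` lists the GW
    frequency and its time derivatives [f0, f0dot, ...] at epoch T0."""
    dt = t - T0 + delta_t
    return phi0 + 2.0 * math.pi * sum(
        f * dt**(j + 1) / math.factorial(j + 1)
        for j, f in enumerate(fdots))

# For a star rotating about a principal axis, the GW frequency and its
# derivatives are twice the rotational values (hypothetical numbers):
f_rot, fdot_rot = 29.7, -3.7e-10
phi = gw_phase(t=86400.0, T0=0.0, phi0=0.0,
               fdots=[2 * f_rot, 2 * fdot_rot])
```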
\section{Pulsar selection}\label{sec:pulsars}
To reflect the improved sensitivity of LIGO during O1, we targeted pulsars with rotation frequencies, \ensuremath{f_{\rm rot}}\xspace,
of greater than about 10\,Hz, but also included seven promising sources with large spin-down
luminosities\footnote{PSRs J0908\tmin4913, J1418\tmin6058, J1709\tmin4429, J1826\tmin1334, J1845\tmin0743,
J1853\tmin0004, and J2129+1210A} with \ensuremath{f_{\rm rot}}\xspace just below 10\,Hz. The $l=m=2$ quadrupolar emission frequencies
of these targets are therefore greater than $\sim20$\,Hz and within the band of good sensitivity for the
instruments. We did not impose an upper limit on target frequency.
We have obtained timings for \NPULSARS known pulsars in this band. Timing was performed using the 42\,ft
telescope and Lovell telescope at Jodrell Bank (UK), the 26\,m telescope at Hartebeesthoek (South
Africa), the Parkes radio telescope (Australia), the Nan\c{c}ay Decimetric Radio Telescope (France), the
Arecibo Observatory (Puerto Rico) and the {\it Fermi} Large Area Telescope (LAT). Of these, \NPREVIOUS of these have
been targeted in previous campaigns \citep{2014ApJ...785..119A}, whilst \NNEW are new to this search.
For the vast majority of these, we have obtained timing solutions using pulse \ac{TOA} observations that
spanned the O1 run. For those pulsars whose \acp{TOA} did not span O1, we still expect them to maintain very
good coherence when extrapolated to the O1 time. The {\sc tempo}\footnote{\url{http://tempo.sourceforge.net}}
or {\sc tempo2} \citep{Hobbs:2006} pulsar timing codes were used to produce these solutions, which provide us
with precise information on the parameters defining each pulsar's phase evolution, including their sky
location and any binary system orbital dynamics if applicable.\footnote{Of the 200 pulsars, 119 are in binary systems.}
\subsection{High-value targets}\label{sec:highvalue}
We identified \NHIGHVALUE sources (Table~\ref{tab:highvalue}) for which we could either improve upon, or
closely approach, the spin-down limit based on Equation~\ref{eq:h0sd}. These are all young pulsars at the
lower end of our sensitive frequency band and include the Crab and Vela pulsars for which the spin-down limit
had already been surpassed \citep{Abbott:2008, Abadie:2011, 2014ApJ...785..119A}.
\begin{deluxetable}{l c c c}
\tablecaption{The high-value targets for which the spin-down limit can be improved upon or closely
approached.\label{tab:highvalue}}
\tablehead{
\colhead{PSR} &
\colhead{$f$ (Hz)} &
\colhead{$d$ (kpc)} &
\colhead{$h_0^{\rm spin-down}$}}
\startdata
J0205+6449\tablenotemark{$\dagger$} & 30.4 & 3.2\phantom{$^{*\#}$} & $4.3\ee{-25}$ \\
J0534+2200 (Crab) & 59.3 & 2.0\phantom{$^{*\#}$} & $1.4\ee{-24}$ \\
J0835\textminus4510 (Vela) & 22.4 & 0.3\phantom{$^{*\#}$} & $3.4\ee{-24}$ \\
J1302\textminus6350\tablenotemark{$\ddagger$} & 41.9 & 2.3\phantom{$^{*\#}$} & $7.7\ee{-26}$ \\
J1809\textminus1917 & 24.2 & 3.7\phantom{$^{*\#}$} & $1.2\ee{-25}$ \\
J1813\textminus1246 & 41.6 & 2.5\tablenotemark{$*$}\phantom{$^{\#}$} & $2.0\ee{-25}$ \\
J1826\textminus1256 & 18.1 & 1.2\tablenotemark{$\#$}\phantom{$^{*}$} & $7.1\ee{-25}$ \\
J1928+1746 & 29.1 & 8.1\phantom{$^{*\#}$} & $4.4\ee{-26}$ \\
J1952+3252 (CTB\,80) & 50.6 & 3.0\phantom{$^{*\#}$} & $1.0\ee{-25}$ \\
J2043+2740 & 20.8 & 1.1\phantom{$^{*\#}$} & $9.2\ee{-25}$ \\
J2229+6114 & 38.7 & 3.0\phantom{$^{*\#}$} & $3.3\ee{-25}$
\enddata
\tablenotetext{$\dagger$}{This pulsar was observed to glitch during O1 on MJD 57345.}
\tablenotetext{$\ddagger$}{This pulsar is in a binary system and as such was not able to be searched for
with the $5n$-vector method.}
\tablenotetext{$*$}{This distance is a lower limit on the distance from \citet{2014ApJ...795..168M}. It is
slightly higher than the distance of 1.9\,kpc used for calculations in \citet{2014ApJ...785..119A}.}
\tablenotetext{$\#$}{This distance is that taken from the lower distance range from
\citet{2016MNRAS.458.2813V} \citep[using values from][]{Wang:2011}.}
\tablecomments{Unless otherwise stated, all distances are those from {\tt v1.54} of the ATNF Pulsar Catalog
\citep{Manchester:2005}.}
\end{deluxetable}
\section{Analyses}\label{sec:analyses}
Following \citet{2014ApJ...785..119A}, we used three largely independent methods for carrying out the search
for the \NHIGHVALUE high-value targets: the time-domain-based {\it Bayesian}
\citep{Dupuis:2005} and $\mathcal{F}$/$\mathcal{G}$-{\it statistic} \citep{Jaranowski:2010} methods, and the
frequency-domain-based {\it 5$n$-vector} method \citep{2010CQGra..27s4016A,Astone:2012}. For the other
\NLOWVALUE targets only the {\it Bayesian} method was applied.
We refer the reader to \citet{2014ApJ...785..119A} and references therein for more detailed descriptions of
these methods. Generally, the methods were not modified for O1, although there have been some significant
improvements to the {\it Bayesian} method, which are described in Appendix~\ref{app:bayesian}.
In addition, the results from the $5n$-vector method used an earlier data release, with a slightly different
instrumental calibration~\citep{2016arXiv160203845T}, than that used for the two other methods. The calibrations applied differ, however,
by less than 3\% in amplitude and less than $3^{\circ}$ in phase for all high-value sources.
For one high-value target, PSR\,J1302\textminus6350, the {\it 5$n$-vector} method was not used. This pulsar is in a
binary system, which is not currently handled by this method. PSR\,J0205+6449 underwent a glitch on MJD 57345
(2015 November 19), causing the rotation frequency to increase by $\sim 8.3\ee{-6}$\,Hz. Because of the uncertain
relation between the gravitational-wave\xspace and electromagnetic signal phases over a glitch, we analyzed both the
pre- and post-glitch periods independently and combined these incoherently to give the final result. To the
best of our knowledge, none of our other sources glitched during the course of O1.
The results from the {\it Bayesian} method incorporate uncertainties into the pulsars' phase evolutions. If
the fits to pulsar \acp{TOA} from electromagnetic observations provided uncertainties on any fitted
parameters, then these parameters were also included in the search space (in addition to the four main
unknown signal parameters, $h_0$, $\phi_0$, $\cos{\iota}$ and $\psi$, defined with equations~\ref{eq:signal} and \ref{eq:taylor}). Prior probabilities
for these additional parameters were defined as Gaussian distributions, using their best-fit values and
associated errors as means and standard deviations \citep[see, e.g.][]{Abbott:2010}. Upper limits are produced
from the posterior probability distributions on $h_0$, by marginalizing all other parameters over their prior
ranges (see Appendix~\ref{app:prior}) and calculating the $h_0$ value bounding (from zero) 95\% of the
probability \citep[e.g., Equation~3.3 of][]{Abbott:2007a}.
Observations of \ac{PWNe} around several pulsars allow us to put prior constraints on their orientation angles
$\iota$ and $\psi$, detailed in Appendix~\ref{app:restricted}. For these pulsars, any results given include
both those based on the standard prior ranges for the orientation angles given in Equation~\ref{eq:priors},
as well as those based on these restricted ranges.
\section{Results}\label{sec:results}
For all pulsars, we quote 95\% credible/confidence upper limits on the gravitational-wave\xspace amplitude $h_0$ set using
coherently combined data from both H1 and L1.\footnote{For the Bayesian results, these are credible limits
bounded from zero, whilst for the frequentist results these are confidence limits.} We use this value to also
set limits on the mass quadrupole moment $Q_{22}$ of the $l=m=2$ mode of the star \citep{Owen:2005} via
\begin{equation}\label{eq:q22}
Q_{22} = h_0\left( \frac{c^4 d}{16\pi^2G \ensuremath{f_{\rm rot}}\xspace^2} \right) \sqrt{\frac{15}
{8\pi}}.
\end{equation}
In turn, this is related to the star's fiducial equatorial ellipticity
$\varepsilon$ through
\begin{equation}\label{eq:epsilon}
\varepsilon = \frac{Q_{22}}{I_{zz}} \sqrt{\frac{8\pi}{15}}.
\end{equation}
To calculate $\varepsilon$, we use the canonical moment of inertia of $I_{zz} = 10^{38}$\,kg\,m$^2$ \citep[see,
e.g., Chapter 6 of][]{PulsarAstronomy}. We also
quote the ratio of our observed $h_0$ limits to the spin-down limits calculated using Equation~\ref{eq:h0sd}.
The distances used to calculate $Q_{22}$ and $\varepsilon$ are (unless otherwise stated in
Table~\ref{tab:highvalue} or \ref{tab:allresults}) taken from {\tt v1.54} of the ATNF pulsar catalog
\citep{Manchester:2005},\footnote{\url{http://www.atnf.csiro.au/people/pulsar/psrcat/}} and in most cases
are calculated from the observed dispersion measure \citep[noting that distance uncertainties of 20\% or more
are not uncommon; see, e.g.\ Figure~12 of][]{2002astro.ph..7156C}. For the spin-down limit calculation, we
generally use values of $\dot{f}_{\rm rot}$ provided from the electromagnetic-pulse-arrival-time fits used in
our search. If, however, an intrinsic period derivative, i.e.\ a period derivative corrected for proper motion
effects \citep{Shklovskii:1969} or globular cluster accelerations, is given in the ATNF catalog, then that
value is used. If an intrinsic period derivative is not given for a globular cluster pulsar, then the spin-down
limit is instead based on an assumed characteristic spin-down age of $\tau = 10^9$\,yr. The characteristic
age \citep[see, e.g., Chapter 6 of][]{PulsarAstronomy} is defined as
\begin{equation}\label{eq:tau}
\tau = -\frac{f_{\rm rot}}{\dot{f}_{\rm rot}(n-1)},
\end{equation}
where $n$ is the braking index ($n=f_{\rm rot}\ddot{f}_{\rm rot}/\dot{f}_{\rm rot}^2$), which has a value of
$n=3$ for purely magnetic dipole radiation, whilst we adopt the $n=5$ case for purely gravitational radiation.
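Equations~\ref{eq:q22}--\ref{eq:tau} can be chained numerically. As an illustration only, the sketch below feeds in the Crab pulsar's spin-down-limit strain from Table~\ref{tab:highvalue} ($h_0 \approx 1.4\times10^{-24}$, $d = 2.0$\,kpc, $f_{\rm rot} \approx 29.65$\,Hz, $\dot{f}_{\rm rot} \approx -3.7\times10^{-10}$\,Hz\,s$^{-1}$, approximate values), recovering a spin-down ellipticity of order $10^{-4}$ and the Crab's characteristic age of a few hundred years.

```python
import math

G, c, KPC, I_ZZ = 6.674e-11, 2.998e8, 3.086e19, 1e38

def q22_from_h0(h0, d_kpc, f_rot):
    """Mass quadrupole moment Q22, Eq. (4)."""
    d = d_kpc * KPC
    return (h0 * (c**4 * d / (16 * math.pi**2 * G * f_rot**2))
            * math.sqrt(15 / (8 * math.pi)))

def ellipticity(q22, izz=I_ZZ):
    """Fiducial equatorial ellipticity, Eq. (5)."""
    return (q22 / izz) * math.sqrt(8 * math.pi / 15)

def characteristic_age(f_rot, fdot_rot, n=5):
    """Characteristic age, Eq. (6); n = 5 for pure GW-driven spin-down."""
    return -f_rot / (fdot_rot * (n - 1))

# Illustrative input: Crab spin-down-limit strain from Table 1.
q22 = q22_from_h0(h0=1.4e-24, d_kpc=2.0, f_rot=29.65)
eps = ellipticity(q22)
tau_yr = characteristic_age(29.65, -3.7e-10) / 3.156e7  # seconds -> years
```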
The calibration procedure for the aLIGO instruments and their amplitude uncertainties during the initial part
of O1 are described in detail in \citet{2016arXiv160203845T}. After O1 was completed, the calibration was updated,
and the maximum calibration uncertainties estimated over the whole run give a $1\sigma$ limit on the combined
H1 and L1 amplitude uncertainties of $\lesssim 14\%$. This is the conservative level of uncertainty on the
$h_0$ upper limits, and any quantities derived linearly from them, from the gravitational-wave\xspace observations alone.
The results for all targets, except the high-value targets discussed in Section~\ref{sec:highvalue}, are shown
in Table~\ref{tab:allresults}. For each pulsar, we produce two probability ratios, or odds (discussed in
Appendix~\ref{app:evidence}): $\mathcal{O}_{\textrm{S/N}}$, Equation~\ref{eq:or1}, comparing the probability
that the data from both detectors contain a coherent signal matching our model to the probability that they
both contain just (potentially non-stationary) Gaussian noise; and, $\mathcal{O}_{\rm {S/I}}$,
Equation~\ref{eq:oddsratio}, comparing the probability that the data from both detectors contain a coherent
signal matching our model to the probability of the data containing combinations of independent signals or
noise. The latter of these is an attempt to account for incoherent interference in the detectors (e.g.\
produced by instrumental line artifacts) that can mimic the effects of a signal. The distributions of these
odds for all our sources can be seen in Figure~\ref{fig:odds}.\footnote{For each source, a different prior
volume was used, so directly comparing odds values between sources should be treated with caution.} We find
that the largest ratio, $\mathcal{O}_{\rm {S/I}} = 8$, is found for PSR\,J1932+17. Although this is larger than for any other
source and favors a coherent signal over the alternative incoherent-\textit{or}-noise hypothesis by over a
factor of eight, it is not yet strong enough evidence for a signal \citep[e.g.\ in the interpretation scaling
of][]{Jeffreys:1931}, especially considering the multiple searches that are performed. The largest
$\mathcal{O}_{\textrm{S/N}}$ value is for PSR\,J1833\tmin0827, with a value of $2.5\ee{12}$ in favor of the
signal model. However, as is apparent from the $\mathcal{O}_{\rm {S/I}}$ value of $3\ee{-6}$ and the
posterior distributions of parameters, it is clear that the very large $\mathcal{O}_{\rm S/N}$ comes from
strong interference in the data, whilst there is no support for a coherent signal in both detectors.
\begin{figure}[!htbp]
\includegraphics[width=0.49\textwidth]{bayesfactors.pdf}
\caption{Distributions of the probability ratios $\mathcal{O}_{\textrm{S/N}}$ and $\mathcal{O}_{\rm
{S/I}}$ for the observed pulsars.
\label{fig:odds}}
\end{figure}
The $h_0$ upper limits from this analysis (including those from the high-value targets) are shown in
Figure~\ref{fig:sensest}. The figure also contains the upper limits obtained for the 195 pulsars targeted
using data from the initial detector era \citep{2014ApJ...785..119A}. We find that, on average, for pulsars
that were both analyzed here and in previous runs, our new results have over two and a half times better
sensitivity. The largest improvement is a factor of eight for PSR\,J0024\tmin7204C at $f=347.4$\,Hz. For four
pulsars, the new results are slightly less sensitive than the previous analyses, although in the worst
case this is only by $\lesssim 10\%$.\footnote{This is for PSR\,J1833\textminus0827 at $23.4$\,Hz, for which
there appears to be a large amount of incoherent interference between the detectors.}
Figure~\ref{fig:ellq22} shows corresponding limits on the fiducial ellipticity $\varepsilon$ and mass
quadrupole moment $Q_{22}$. Figure~\ref{fig:ratios} shows a histogram of the ratios between our upper limits
and the spin-down limits.
The accelerations that pulsars experience in the cores of globular clusters can mask their true spin-down
values. It is sometimes possible to determine these accelerations and correct for their effect on spin-down.
As mentioned above, when such a correction is available, we have calculated the spin-down limits based on this
corrected spin-down value. In cases where the correction is not available we have instead assumed each pulsar
has a characteristic age of $\tau = 10^9$\,yr and under the assumption of gravitational-radiation-dominated
spin-down, calculated a na\"{i}ve spin-down via Equation~\ref{eq:tau}, which has then been used for
the spin-down limit calculation. As proposed in \citet{Pitkin:2011}, for these pulsars we could instead
invert the process and use the $h_0$ upper limit to set a limit on the spin-down of the pulsars (at least
under the assumption that they are gravitars, with $n=5$). Given that the maximum observed spin-up for a
globular cluster pulsar is $\sim 5\ee{-14}$\,Hz\,s$^{-1}$, we can say that the negative of this can be used as
an approximation for the largest magnitude {\it spin-down} that could be masked by intracluster
accelerations.\footnote{In \citet{2006CQGra..23S...1O} and \citet{Pitkin:2011}, it is stated that
$-5\ee{-13}$\,Hz\,s$^{-1}$ is roughly the largest magnitude spin-down that could be masked by globular cluster
accelerations. This is mainly based on the maximum observed spin-up for a globular cluster pulsar
(PSR\,J2129+1210D) being $\sim 5\ee{-13}$\,Hz\,s$^{-1}$ as given in {\tt v1.54} of the ATNF pulsar catalog
\citep{Manchester:2005}. However, this value appears to be wrong, with the original observations for
PSR\,J2129+1210D \citep{Anderson:1993} giving a value of just under $\sim 5\ee{-14}$\,Hz\,s$^{-1}$. This is
still the maximum observed spin-up for any globular cluster pulsar.} Of the globular cluster pulsars for
which the intrinsic spin-down is not known, we find that our upper limits on $h_0$ give the smallest limit on
the absolute spin-down value due to gravitational waves\xspace for PSR\,J1623\tmin2631 of $\dot{f} = -3.2\ee{-13}$\,Hz\,s$^{-1}$.
Although this value is probably too large to be masked by accelerations, it is of the same order as the
spin-downs for two globular cluster millisecond pulsars, PSRs J1823-3021A \citep{2011Sci...334.1107F} and
J1824-2452A \citep{2013ApJ...778..106J}, both with apparently large intrinsic spin-down values.
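The inversion described above, turning an $h_0$ upper limit into a limit on the gravitational-wave-driven spin-down under the gravitar ($n=5$) assumption, simply rearranges Equation~\ref{eq:h0sd}. The numbers below are hypothetical and do not correspond to the actual result for PSR\,J1623\tmin2631.

```python
import math

G, c, KPC, I_ZZ = 6.674e-11, 2.998e8, 3.086e19, 1e38

def fdot_limit_from_h0(h0, d_kpc, f_rot, izz=I_ZZ):
    """Invert Eq. (1): the largest |GW-driven spin-down| consistent with
    an h0 upper limit, returned as a (negative) frequency derivative."""
    d = d_kpc * KPC
    return -(h0**2) * c**3 * d**2 * f_rot / (2.5 * G * izz)

# Hypothetical globular-cluster pulsar: h0 limit of 1e-25,
# d = 2 kpc, f_rot = 90 Hz.
fdot = fdot_limit_from_h0(1e-25, 2.0, 90.0)
```

The round trip through the forward formula recovers the input $h_0$, confirming the algebra.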
\subsection{High-value targets}
Table~\ref{tab:highvalueres} shows the results for the high-value targets (Section~\ref{sec:highvalue}) for
each of the three analysis methods discussed in Section~\ref{sec:analyses}. The results from the methods are
broadly consistent. For pulsars that have restricted priors on orientations, the results using these are shown
alongside the results from the full prior orientation range. We find that for eight of these pulsars,
we achieve a sensitivity that surpasses the indirect spin-down limit.
Table~\ref{tab:highvalueres} also contains an estimate of the maximum surface deformation of the $l=m=2$
mode, $R\varepsilon_{\rm surf,22}$, for each of the pulsars. This is based on Figure~2 of
\citet{Johnson-McDaniel:2013}, where we adopt a scaling of $R\varepsilon_{\rm surf,22} \approx
25(\varepsilon/10^{-4})$\,cm maximized over equations of state and possible stellar masses. We also find
that for five of these pulsars (PSRs J0534+2200, J1302\tmin6350, J1813\tmin1246, J1952+3252, and J2229+6114)
the $l=m=2$ surface deformations are smaller than the rotational ($l=2$, $m=0$) surface deformation for all
equations of state.\footnote{For this we have assumed a 1.4\,M$_{\odot}$ star and used approximate scalings
calculated from Table~1 of \citet{Johnson-McDaniel:2013}, taking into account that the rotational deformation
scales with $f_{\rm rot}^2$.} For the Vela pulsar (PSR\,J0835\tmin4510) and PSR\,J0205+6449, the $l=m=2$
surface deformations are smaller than the rotational deformations for roughly half of the equations of state
used in \citet{Johnson-McDaniel:2013}. There is no expected relation between the scales of these two
deformations, but it is intriguing to compare them nonetheless.
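The deformation estimates in Table~\ref{tab:highvalueres} are direct rescalings of the adopted relation $R\varepsilon_{\rm surf,22} \approx 25(\varepsilon/10^{-4})$\,cm, which can be written as a one-line helper (illustrative; the linear scaling is exactly the one quoted above):

```python
def surface_deformation_cm(epsilon):
    """Maximum l=m=2 surface deformation R*eps_surf22 in cm, using the
    scaling adopted in the text (about 25 cm at epsilon = 1e-4,
    maximized over equations of state and stellar masses)."""
    return 25.0 * (epsilon / 1.0e-4)
```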
\begin{figure*}[!htbp]
\includegraphics[width=1.0\textwidth]{ellfreq.pdf}
\caption{Limits on fiducial ellipticities ($\varepsilon$) and mass quadrupole moments ($Q_{22}$).
$\blacktriangledown$ show the values based on the spin-down limits for these pulsars. The pulsars for which
the spin-down limit is surpassed are highlighted within larger shaded circles and linked to their spin-down
limit values with dashed vertical lines. Also shown are diagonal lines of constant characteristic age,
$\tau$, for gravitars (with braking indices of $n=5$) calculated via $\varepsilon^{\rm sd} = 1.91\ee{5}
f_{\rm rot}^{-2}/\sqrt{(n-1)\tau I_{38}}$, where $I_{38}$ is the principal moment of inertia in units of
$10^{38}$\,kg\,m$^2$ (where we set $I_{38}=1$). \label{fig:ellq22}}
\end{figure*}
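The constant-$\tau$ lines in the figure above follow the scaling quoted in its caption. The coefficient $1.91\ee{5}$ is reproduced by $\varepsilon^{\rm sd} = \sqrt{5c^5/(512\pi^4 G I (n-1)\tau)}\, f_{\rm rot}^{-2}$ with $\tau$ in seconds; that unit is our inference from the coefficient, not stated in the caption. A minimal numerical sketch:

```python
import math

def gravitar_ellipticity(f_rot, tau_s, n=5, I38=1.0):
    """Spin-down ellipticity of a gravitar of characteristic age tau,
    following the figure-caption scaling
    eps = 1.91e5 * f^-2 / sqrt((n-1) * tau * I38).
    tau is taken in seconds here; this unit is inferred from the
    1.91e5 coefficient, not stated explicitly in the caption."""
    return 1.91e5 * f_rot ** -2 / math.sqrt((n - 1) * tau_s * I38)
```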
\begin{figure}[!htbp]
\includegraphics[width=0.49\textwidth]{ratiohist.pdf}
\caption{Ratio between our observed $h_0^{95\%}$ limits and the spin-down limits for all pulsars.
\label{fig:ratios}}
\end{figure}
\section{Discussion}\label{sec:discussion}
We have searched for gravitational-wave\xspace emission from the $l=m=2$ quadrupole mode of \NPULSARS known pulsars. There is no
significant evidence for a signal from any of the sources. We have been able to set 95\% credible upper
limits on the gravitational-wave\xspace amplitudes from all these sources, and from these derived limits on each star's
fiducial ellipticity and quadrupole moment.
In earlier analyses, the indirect spin-down limits on the gravitational-wave\xspace amplitude had been surpassed for two pulsars:
PSR\,J0534+2200 \citep[the Crab pulsar;][]{Abbott:2008} and PSR\,J0835\textminus4510 \citep[the Vela
pulsar;][]{Abadie:2011}. We improve upon the previous limits for these two pulsars by factors of $\gtrsim
3$. We find that for the Crab and Vela pulsars, less than $\sim 2\ee{-3}$ and $\sim 10^{-2}$ of the
spin-down luminosity is being lost via gravitational radiation, respectively (assuming the distance is precisely
known and using the fiducial moment of inertia of $10^{38}$\,kg\,m$^2$). The observed braking indices of these pulsars provide
constraints on the contribution of gravitational-wave\xspace emission to the spin-down, under the assumption that the spin-down is due only to a
combination of electromagnetic and gravitational-wave\xspace losses. These braking index constraints are more stringent, i.e.\ give smaller limits on
the gravitational-wave\xspace emission, than the na\"ive spin-down limit given in Equation~\ref{eq:h0sd} \citep[see][]{Palomba:2000}. Our results, however,
surpass even these more stringent limits and are therefore compatible with the observed braking indices. We surpass
the spin-down limits of six further pulsars. All these are young pulsars with large spin-down luminosities,
and as such our limits translate to large ellipticities/quadrupole moments that are at the upper end of some
maximally allowed values \citep[see e.g.][]{Owen:2005,Pitkin:2011,Johnson-McDaniel:2012}. If we assume that
internal toroidal magnetic fields are the source of any stellar mass quadrupole \citep{Bonazzola:1996}, then
we can use our limits on ellipticities as constraints on the magnitude of the internal field strength. For
the Crab pulsar, PSR\,J1813\tmin1246, PSR\,J1952+3252, and PSR\,J2229+6114, which have roughly comparable
ellipticity limits, the internal magnetic field strength is limited to $\lesssim 10^{16}$\,G
\citep[e.g.][]{Cutler:2002, Haskell:2007bh}. For comparison, the Crab pulsar's inferred external polar
magnetic field at its surface is $\sim 4\ee{12}$\,G. Due to this being a rough order of magnitude estimate,
this value is the same as that previously quoted for the Crab pulsar in \citet{2014ApJ...785..119A}, although
the limit is now valid for several more pulsars.
For any neutron star equation of state, the lower bound on the mass quadrupole (due to the internal magnetic field,
which may be very weak) is many orders of magnitude less than the upper bound. Therefore, it is always important
to acknowledge that these upper limits on particular stars do not allow us to place constraints on neutron star
equations of state.
Of all the pulsars, the smallest 95\% credible limit on $h_0$ that we find is \MINHZERO for \MINHZEROPULSAR.
The smallest ellipticity and $Q_{22}$ quadrupole moments are \MINELL and \MINQ, respectively, for \MINEQPULSAR,
which is a relatively nearby pulsar at \MINEQDIST. Although neither of these pulsars surpasses their
fiducial spin-down limits, it is interesting to note that there are \WITHINFACTORTEN that we are able to
constrain to within a factor of 10 of their spin-down limits (see Figure~\ref{fig:ratios}). For
PSR\,J0437\textminus4715 (which is nearby, at 0.16\,kpc), we are in fact only 1.4 times above the spin-down
limit. Therefore, an equivalent increase in detector sensitivity of that factor, or a $1.4^2\approx 1.9$
times longer run, would allow us to surpass the spin-down limit. Alternatively, the spin-down limit would be
surpassed if the true moment of inertia for PSR\,J0437\textminus4715 were a factor of 1.9 times larger
than $I_{38}$, which is well within plausible values. As this is a millisecond pulsar, it would give
ellipticity constraints of less than a few $10^{-8}$, or $l=m=2$ quadrupole moment constraints of
$\lesssim 10^{30}$\,kg\,m$^2$, compared to the much larger constraints typically found for the young pulsars
in Table~\ref{tab:highvalue}. Using the conversion in \citet{Cutler:2002} the constraints on the internal
toroidal fields for this pulsar would be $\lesssim 10^{13}$\,G, which is similar to the external field
strengths of young pulsars.
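The run-length argument above uses the generic scaling of coherent searches, in which the amplitude sensitivity improves as the square root of the observation time; as a one-line sketch (not specific to this analysis):

```python
def run_length_factor(amplitude_ratio):
    """Factor by which the observation time must increase to improve the
    amplitude sensitivity by `amplitude_ratio`, assuming the usual
    sqrt(T_obs) scaling of a coherent search."""
    return amplitude_ratio ** 2
```

For PSR\,J0437\textminus4715, being a factor of 1.4 above the spin-down limit thus translates into a run roughly $1.4^2 \approx 1.9$ times longer.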
This search has imposed a model in which the gravitational-wave\xspace signal phase evolutions must be tightly locked to the
pulsars' rotational evolutions determined through electromagnetic observations. There are mechanisms
\citep[discussed in, e.g.][]{Abbott:2008}, however, that could lead to small deviations between the phase
evolution and observed rotation. Additionally, there are many pulsars for which highly accurate
timings do not exist or are not available from observations coincident with ours.\footnote{One desirable
source that we no longer have accurate timings for is PSR\,J0537\textminus6910, an X-ray pulsar in the Large
Magellanic Cloud, for which we relied on the now-defunct RXTE satellite.} There are several such sources for
which the spin-down limit could be surpassed and these are being searched for in O1 data using narrow-band
searches \citep[see, e.g.][]{narrowb:2015}, covering a small range in frequency and frequency derivative to
account for uncertainties in the exact parameters (LIGO Scientific Collaboration \& Virgo Collaboration 2017, in
preparation). All-sky broadband searches for unknown rotating neutron stars are also underway.
In the near future, increasing sensitivities and considerably longer observing runs are planned for aLIGO and
Advanced Virgo \citep{2016LRR....19....1A}. This will give us several times greater sensitivity with which to
search for gravitational-wave\xspace signals, and in any event will allow us to surpass the spin-down limits for 10 or
more pulsars. Future searches will also address gravitational-wave\xspace emission at not just twice the rotation frequency, but
also at the rotation frequency \citep[e.g.][]{2015MNRAS.453.4399P}, further increasing the likelihood of a
first detection of continuous gravitational waves.
\acknowledgements
The authors gratefully acknowledge the support of the United States National Science Foundation (NSF) for the
construction and operation of the LIGO Laboratory and Advanced LIGO as well as the Science and Technology
Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of
Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the
GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. The
authors gratefully acknowledge the Italian Istituto Nazionale di Fisica Nucleare (INFN), the French Centre
National de la Recherche Scientifique (CNRS), and the Foundation for Fundamental Research on Matter supported
by the Netherlands Organisation for Scientific Research, for the construction and operation of the Virgo
detector and the creation and support of the EGO consortium. The authors also gratefully acknowledge research
support from these agencies as well as by the Council of Scientific and Industrial Research of India,
Department of Science and Technology, India, Science \& Engineering Research Board (SERB), India,
Ministry of Human Resource Development, India, the Spanish Ministerio de Econom\'ia y Competitividad,
the Vicepresid\`encia i Conselleria d'Innovaci\'o, Recerca i Turisme and the Conselleria d'Educaci\'o i
Universitat of the Govern de les Illes Balears, the National Science Centre of Poland, the European
Commission, the Royal Society, the
Scottish Funding Council, the Scottish Universities Physics Alliance, the Hungarian Scientific Research Fund
(OTKA), the Lyon Institute of Origins (LIO), the National Research Foundation of Korea, Industry Canada and
the Province of Ontario through the Ministry of Economic Development and Innovation, the Natural Science and
Engineering Research Council Canada, Canadian Institute for Advanced Research, the Brazilian Ministry of
Science, Technology, and Innovation, Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP),
Russian Foundation for Basic Research, the Leverhulme Trust, the Research Corporation, Ministry of Science
and Technology (MOST), Taiwan and the Kavli Foundation. The authors gratefully acknowledge the support of the
NSF, STFC, MPS, INFN, CNRS and the State of Niedersachsen/Germany for provision of computational resources.
Pulsar observations with the Lovell telescope and their analyses are supported through a consolidated grant
from the STFC in the UK. The Nan\c{c}ay Radio Observatory is operated by the Paris Observatory, associated
with the French CNRS. A.~Ridolfi and P.~C.~C.~Freire gratefully acknowledge financial support by the European
Research Council for the ERC Starting grant BEACON under contract no.\ 279702. A.~Ridolfi is member of the
International Max Planck research school for Astronomy and Astrophysics at the Universities of Bonn and
Cologne and acknowledges partial support through the Bonn-Cologne Graduate School of Physics and Astronomy.
LIGO Document No.\ LIGO-P1600159.
\section{Introduction}
The relation between the continuum and emission-line luminosities is
important for our understanding of AGN physics. Such a relationship,
known as the Baldwin effect, states that the broad emission-line
equivalent width (EW) decreases as the continuum luminosity
increases (Baldwin 1977; for a review see Osmer \& Shields 1999).
The Baldwin effect was first discovered for the \civ~$\lambda$1549
line (Baldwin 1977) and was confirmed later for most of the other
strong UV emission lines (Kinney, Rivolo \& Koratkar 1990), as well
for the optical hydrogen Balmer lines (Gilbert \& Peterson 2003;
Goad, Korista \& Knigge 2004).
The physical origin of the Baldwin effect is still unclear, but is
probably related to the luminosity-dependent continuum variations.
According to a theoretical work by Korista, Baldwin \& Ferland
(1998), the ionizing continuum shape is softened if the luminosity
increases. The reduction in the ionizing photons at a given
UV/optical luminosity leads to smaller equivalent widths of the
broad emission lines. Indeed, some accretion disk models can also
explain the continuum softening naturally in the case of increasing
luminosity (Netzer, Laor \& Gondhalekar 1992). However, other
factors, such as the selection effect, variability, and light
travel-time effects, may also account for some parts of or the
scatters of the Baldwin effects (Jones \& Jones 1980; Murdoch 1983;
Krolik et al. 1991; Pogge \& Peterson 1992; Peterson et al. 2002).
Recent studies, including Baskin \& Laor (2004) and Bachev et al.
(2004), have revealed the possible correlations between the \civ~EW
and some parameters defining the Eigenvector 1 of AGNs (Boroson \&
Green 1992). Shang et al. (2003) also show that the Eigenvector 1
can contribute to the scatters of the Baldwin effect.
The Baldwin effect can be expressed by the simple formula $ EW
\propto L_c^{\beta} $, where $L_c$ is the continuum luminosity, or
alternatively by $ L_{\rm line} \propto L_c^{\alpha}$, where $L_{\rm
line}$ is the emission line luminosity (evidently $\alpha=1+\beta$).
There are two kinds of Baldwin effects treated in the literature.
One is the {\it global} Baldwin effect, which is obtained from the
single-epoch observations for an ensemble of AGNs. The other is the
{\it intrinsic} Baldwin effect, which represents the line-continuum
relation in a single variable AGN. For the global Baldwin effect,
the $\beta$ value was found to be $-0.17$ and $-0.12$ for \civ~and
Ly $\alpha$ lines, respectively (Kinney et al. 1990; Pogge \&
Peterson 1992). Some studies also indicate steeper slopes of the
global Baldwin effect for the relatively high ionization lines than
those found for the low ionization ones (Wu et al. 1983; Kinney et
al. 1987, 1990; Baldwin et al. 1989; Zheng et al. 1997). The
flattening of the slope of the global Baldwin effect at lower
continuum luminosities was mentioned previously by Osmer \& Shields
(1999) as a second-order effect, and was also noticed recently by
Baskin \& Laor (2004) from a study of 81 BQS quasars.
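Because $EW = L_{\rm line}/L_c$, the identity $\alpha = 1+\beta$ can be checked with a quick synthetic example; the slope and luminosity range below are arbitrary illustrations, not fits to real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic continuum luminosities over ~3 dex and a line obeying
# L_line ∝ L_c^alpha with a known (arbitrary) slope.
alpha_true = 0.83
log_lc = rng.uniform(42.0, 45.0, 200)
log_lline = alpha_true * log_lc + 1.0      # arbitrary zero point
log_ew = log_lline - log_lc                # EW = L_line / L_c

# Fitting the two log-log relations recovers alpha and beta = alpha - 1.
alpha_fit = np.polyfit(log_lc, log_lline, 1)[0]
beta_fit = np.polyfit(log_lc, log_ew, 1)[0]
```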
The intrinsic Baldwin effect (usually expressed as $ f_{line}
\propto f_c^{\alpha}$, where $f_{line}$ and $f_c$ are the emission
line and continuum fluxes, respectively), has been studied only for
several strongly variable AGNs. The derived $\alpha$ value varies
from 0.1 to 0.6 for different AGNs, with a mean value around 0.4
(Kinney et al. 1990; Pogge \& Peterson 1992). Recently, Gilbert \&
Peterson (2003) and Goad et al. (2004) studied the intrinsic Baldwin
effect for the best-studied Seyfert 1 galaxy NGC 5548 using the data
of 13-year-observations and found that the $\alpha$ value for the
H$\beta$ line varies in a range from 0.4 to 1. Goad et al. (2004)
find that the $\alpha$ value decreases as the continuum flux
increases, suggesting the slope of the intrinsic Baldwin effect is
not constant. Wamsteker \& Colina (1986) and Osmer \& Shields (1999)
also notice the slope change of the intrinsic \civ~Baldwin effect
for another Seyfert 1 galaxy, Fairall 9, which clearly shows the
flattening of the EW value as the continuum flux decreases.
Interestingly, the trend of the slope change in the intrinsic
Baldwin effect seems consistent with that of the global Baldwin
effect. However, this may not indicate that the physics behind them
is totally the same. For example, either the different metallicity
or black-hole mass has been invoked as possible origin of at least
part of the global Baldwin effect (Warner, Hamann \& Dietrich 2004),
but this is clearly not relevant to the intrinsic Baldwin effect of
a single AGN where these parameters are fixed.
To further test the non-constant slope of the intrinsic Baldwin
effect, in this paper we investigate the relationship between the
\civ~emission line flux and the UV continuum flux for another
well-known nearby Seyfert 1 galaxy NGC 4151 that has been
extensively observed in the UV band by the {\it International
Ultraviolet Explorer} (IUE), {\it Hubble Space Telescope}
(HST), and {\it Hopkins Ultraviolet Telescope} (HUT) in the past three decades
(Boksenberg et al. 1978; Clavel et al. 1987, 1990; Ulrich et al.
1991; Crenshaw et al. 1996, 2000; Kriss et al. 1992, 1995; Weymann
et al. 1997; Kraemer et al. 2001; see Ulrich 2000 for a detailed
review). Both the UV emission-line flux and continuum flux of NGC
4151 varied about two orders of magnitude in this long observation
period, making it one of the best targets for studying the variation
in the slope of the Baldwin effect.
In Sect.~2 we present the data analysis of the UV spectra from IUE,
HST, \& HUT. In Sect.~3 we show our result for the varying slope of
the intrinsic \civ~Baldwin effect. Finally we give our conclusions
and briefly discuss our result in Sect.~4.
\section{Data analysis of the archived UV spectra}
To investigate the slope variation of the intrinsic \civ~Baldwin
effect in NGC~4151, we measured the \civ~emission-line flux
and the UV continuum flux over a long time period.
\subsection{The data set}
As a nearby ($\rm cz=995\,km~s^{-1}$) bright Seyfert 1 galaxy, NGC
4151 has been extensively observed by IUE (Boksenberg et al. 1978;
Clavel et al. 1987, 1990; Ulrich 1996; Ulrich et al. 1991; Crenshaw
et al. 1996; Edelson et al. 1996). However, different results have
been found for the correlation between the UV continuum and the
emission-line flux variations. For example, from the observations in
the 1993 IUE campaign Crenshaw et al. (1996) found that the
emission-line light curve does not correlate well with the continuum
variation over the short duration of observations. But from the
long-term light curves over two decades, the response of the
emission-line flux to the continuum variation was clearly observed
(Ulrich et al. 1991; Ulrich 2000). In this paper, we re-investigate
this correlation using the 468 archived low-resolution
large-aperture spectra of IUE/SWP from 1978 to 1996 (JD 2443722.59
to JD 2450243.40) where the \civ~emission line was clearly detected.
Furthermore, NGC~4151 was observed by HUT at a resolution of
3\,{\AA} in December 1990 and March 1995 (Kriss et al. 1992, 1995),
and the 8 archived HUT spectra are available. These observation were
done in a time period that partly overlapped with that of the IUE
observations. Therefore, the HUT spectra can be used to test the
measurement accuracy of the IUE data.
NGC 4151 has also been frequently observed by HST. The 15 HST/STIS
archived spectra in 1998--2002 are available and used in our study.
However, the archived HST/GHRS spectra were skipped because they do
not cover the whole red part of the \civ~line profile (Weymann et
al. 1997). In all of the 15 HST/STIS spectra, the whole \civ~line
profile is shown clearly.
In total, we collected 490 archived UV spectra of NGC 4151 with a
wavelength range from about 1300\,{\AA} to 1800\,{\AA} observed by
IUE, HUT, \& HST in the period of 1978--2002.
\subsection{The continuum and \civ~emission-line flux determinations}
Low-resolution spectra of NGC 4151 taken with IUE and HUT clearly
show broad emission and remarkable central absorption in the
\civ~line profile. However, high-resolution spectra from HST have
revealed many more other subtle emission and absorption features
(Weymann et al. 1997; Crenshaw et al. 2000; Kraemer et al. 2001).
Therefore, the \civ~line profile of NGC 4151 is very complex and
needs to be treated with caution. Because 468 of the 490 archived
spectra in our study were obtained by IUE, the lower resolution of
IUE spectra does not allow a more accurate fitting of the subtle
emission and absorption features. In order to avoid the systematic
uncertainties caused by different fitting models, in this study we
simply adopt the same spectral-fitting model for the HST and HUT
spectra as for the IUE spectra.
The accurate measurement of the continuum flux is important.
Usually, the local continuum fitting with a power-law or a straight
line can produce reasonable measurement if the fitting is made with
caution. Here we use a straight line to fit the local continuum in
the selected continuum windows, namely 1260--1290\,{\AA},
1420--1460\,{\AA}, and 1805--1835\,{\AA}. The results are
satisfactory. In fitting the continuum, we find that the C~{\sc ii}
$\lambda$1334\,{\AA} line has relatively strong effects on the
continuum flux measurement at 1350\,{\AA}. Therefore, we adopt the
continuum flux at 1440\,{\AA} for our study of the line-continuum
relation of NGC 4151 because the lines around 1440\,{\AA}, like
O~{\sc iv}] $\lambda$1402\,{\AA} and N~{\sc iv}]
$\lambda$1486\,{\AA}, are relatively weak. The continuum flux at
1440\,{\AA} was taken as the weighted mean over the wavelength range
1420--1460\,{\AA}, and the flux uncertainty is estimated from its
standard deviation. No correction is made for the extinction, as it
is negligible for NGC 4151 (Kriss et al. 1992).
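The continuum measurement described above can be sketched as follows. The inverse-variance weighting is our assumption, since the text specifies only a weighted mean; the window limits follow the 1420--1460\,{\AA} band:

```python
import numpy as np

def continuum_flux(wave, flux, err, window=(1420.0, 1460.0)):
    """Weighted mean flux in a continuum window, with the standard
    deviation of the window fluxes as the uncertainty estimate.
    Inverse-variance weights are assumed here; the text does not
    specify the weighting scheme."""
    m = (wave >= window[0]) & (wave <= window[1])
    w = 1.0 / err[m] ** 2
    mean = np.sum(w * flux[m]) / np.sum(w)
    sigma = np.std(flux[m])
    return mean, sigma
```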
In the IUE spectra of NGC 4151, the \civ~$\lambda\lambda$1548.2,
1550.8 lines are blended with a few variable components covering a
wavelength range from 1480\,{\AA} to 1700\,{\AA} (see Clavel et al.
1987 for the details). To reduce the contamination from other lines,
we fit the continuum-subtracted spectra within the
wavelength range from 1450\,{\AA} to 1720\,{\AA} simultaneously with ten individual
Gaussian components. Five lines, namely N~{\sc iv}]
$\lambda$1486\,{\AA}, L1 $\lambda$1518\,{\AA}, L$'$2
$\lambda$1576\,{\AA}, L2 $\lambda$1594\,{\AA}, and O~{\sc iii}]
$\lambda$1663\,{\AA}, are each fitted with a single Gaussian component.
The \civ~line is fitted with three Gaussian components: a broad
emission component, a narrow emission component, and a narrow
absorption component. The He~{\sc ii} $\lambda$1640\,{\AA} emission
line is fitted with two Gaussian components including a narrow one
and a broad one. Two narrow and variable satellite emission lines
(L1 and L2) were first noticed by Ulrich et al. (1985) when NGC 4151
was at a low continuum state. Using the high-resolution HST/STIS
spectra of NGC 4151, Crenshaw et al. (2000) indicate that these
satellite emission lines (L1, L$'$2, and L2) shown in the IUE
spectra are not in fact separate lines, but rather the residual
emission that appears isolated due to the low-ionization absorption
lines (primarily Si~{\sc ii}\,$\lambda$1526.7\,{\AA}, Si~{\sc
ii}$^*\,\lambda$1533.4\,{\AA}, and Fe~{\sc ii} multiplets) overlying
a large part of the wings of \civ. Assuming them to be the satellite
emission lines can slightly underestimate the total emission line
flux of \civ~in the low-continuum case. The effects of such a
simplification on our result will be addressed in Sect.~3.
The nonlinear least-squares Levenberg-Marquardt minimization method is
used in our spectral fitting. Unlike Clavel et al. (1987)
who fixed the line widths of some Gaussian components, we
allow the fitting parameters for all components, such as the line
central wavelength, velocity dispersion, and the area under a line,
to vary at certain ranges. The uncertainties of these parameters are
obtained using a method similar to the one adopted in Clavel et al.
(1991). Three examples of our multi-component fitting of the spectra
taken at the low, moderate, and high flux states with IUE, HST, \&
HUT, respectively are shown in Fig.~1. The fittings are satisfactory
in general. However, sometimes the red peak around 1540--1560\,{\AA}
is too sharp to be fitted perfectly, especially for the spectra
taken by HST. This leads to a slight underestimation of the total
\civ~emission line flux but will not affect our result
significantly.
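A minimal version of the three-Gaussian \civ~decomposition (broad emission, narrow emission, and narrow absorption) with Levenberg-Marquardt minimization can be sketched with {\tt scipy.optimize.curve\_fit}, which defaults to that algorithm when no bounds are given. The wavelength grid, parameter values, and initial guesses below are illustrative, and the full ten-component model is reduced to the three \civ~components:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, sig):
    return amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)

def civ_model(x, a_b, c_b, s_b, a_n, c_n, s_n, a_a, c_a, s_a):
    """Broad emission + narrow emission - narrow absorption; a reduced
    (3-of-10 component) version of the full model used in the text."""
    return (gauss(x, a_b, c_b, s_b) + gauss(x, a_n, c_n, s_n)
            - gauss(x, a_a, c_a, s_a))

# Synthetic continuum-subtracted spectrum with known, illustrative parameters.
wave = np.linspace(1500.0, 1600.0, 400)
truth = (8.0, 1549.0, 15.0, 5.0, 1549.5, 3.0, 3.0, 1547.0, 1.5)
spec = civ_model(wave, *truth)

# curve_fit uses Levenberg-Marquardt by default when no bounds are given.
p0 = (7.0, 1550.0, 12.0, 4.0, 1549.0, 2.5, 2.0, 1547.5, 1.2)
popt, pcov = curve_fit(civ_model, wave, spec, p0=p0)
flux_broad = popt[0] * abs(popt[2]) * np.sqrt(2.0 * np.pi)  # broad-line area
```

The integrated broad-component flux follows from the Gaussian area $A\sigma\sqrt{2\pi}$; in practice one would fit all ten components simultaneously and propagate the parameter uncertainties from the covariance matrix.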
\begin{figure}
\begin{center}
\includegraphics[angle=0,height=10cm,width=8.6cm]{fig1.ps}
\caption{Sample spectra of NGC 4151 from IUE (lower panel), HST
(middle panel), \& HUT (upper panel) observations and fitting
results around the 1450--1720\,{\AA} region. The three spectra (from
bottom to top) are for the low, moderate, and high flux states,
respectively. The dotted line represents the observed spectrum
after the continuum subtraction. The thin solid lines represent the
best-fit spectral components (see text for details). The thick solid
line denotes the sum of these components. The residual of the fit
is also shown in the lower part of each panel. Flux is in units of
$\rm 10^{-13}~erg~s^{-1}~cm^{-2}\AA^{-1}$.}
\label{Fig.1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,height=10cm,width=9.4cm]{fig2.ps}
\caption{Light curves of the continuum flux at 1440\,{\AA}, the total
\civ~flux and the broad emission line component of NGC 4151 in
1978--2002. The crosses, squares, and circles represent the IUE,
HUT, and HST data, respectively. The average uncertainty was shown
in the upper-left part of each panel. The continuum flux is in
units of $\rm 10^{-13}~erg~s^{-1}~cm^{-2}{\AA}^{-1}$ and the
\civ~line is in units of $\rm 10^{-13}~erg~s^{-1}~cm^{-2}$.}
\label{Fig.2}
\end{center}
\end{figure}
In Fig. 2, we show the light curves of the continuum flux at
1440\,{\AA}, the total \civ~line flux (the sum of the modeled broad
emission, narrow emission, and absorption components), and the broad
emission-line component flux from the IUE, HUT, \& HST observations
in 1978--2002. A huge broad peak in 1991--1998 and several sub-peaks
are clearly shown in all three light curves. The overall response of
the \civ~emission line to the continuum variation is clear in
these long-term light curves. This also confirms that the
emission-line response to the continuum cannot be revealed by
short-duration observations (Crenshaw et al. 1996) but can be
clearly detected through long-term spectral monitoring.
\section{The intrinsic Baldwin effect of NGC 4151}
Using the measured fluxes of both the \civ~emission line and the UV
continuum, we can study the slope variation in the \civ~Baldwin
effect.
\begin{figure}
\begin{center}
\includegraphics[angle=0,height=9cm,width=9.1cm]{fig3.ps}
\caption{Correlations of the total \civ~line flux and the equivalent
width (EW) with the continuum flux at 1440\,{\AA} for NGC 4151.
Symbols have the same meanings as in Fig. 2. The average uncertainty
is shown in the upper-left part of each panel. The continuum flux,
line flux, and EW are in units of $\rm
erg~s^{-1}~cm^{-2}{\AA}^{-1}$, $\rm erg~s^{-1}~cm^{-2}$, and {\AA},
respectively. The curvature of the Baldwin relationship is clearly
shown in both panels. The results from HUT \& HST are fully
consistent with those from IUE.}
\label{Fig.3}
\end{center}
\end{figure}
\subsection{The non-constant slope}
The Baldwin effect represents the dependence of the equivalent
width (EW) of the \civ~emission line on the UV continuum flux. Figure 3
shows the total \civ~emission-line flux and the line EW against the
UV continuum flux at 1440\,{\AA} for NGC 4151. The data from HUT \&
HST are fully consistent with those from IUE, clearly suggesting a
non-constant slope in the log-log plots. We notice that the trend
of the curvature in the upper panel of Fig. 3 is similar to that
found for the intrinsic H$\beta$ Baldwin effect of NGC 5548 (see
Fig. 4 of Goad et al. 2004) and Fairall 9 (see Fig.~2 of Wamsteker \& Colina 1986),
which also show the steepening of the
slope at the lower flux state and the flattening of the slope at the
higher flux state. In the EW-continuum plot (the lower panel of
Fig. 3), the flattening of the slope in the low continuum state
is also consistent with the trend noticed by Osmer \& Shields
(1999) for Fairall 9. This suggests that
the non-constant slope of the Baldwin relationship may be common
among AGNs.
To demonstrate the non-constant slope of the Baldwin effect for NGC
4151 more clearly, we divide the observations into 4 epochs. Epoch 1
(JD2443722--JD2445654) covers the IUE observations in 1978--1983,
which shows at least two sub-peaks in the light curve. Epoch 2
(JD2445766--JD2447631) covers the IUE observations in 1984--1989,
which represents the lowest flux state in all the UV observations.
Epoch 3 (JD2447948--JD2450243) covers the IUE observations in
1990--1996 and the HUT observations in 1990 and 1995. The number of
spectra in this epoch is significantly larger than in the others, since it
includes 203 spectra obtained in a one-month intensive IUE campaign
in 1993, which covers the highest flux state of NGC 4151. Epoch 4
covers the HST observations in 1998--2002. Although only 15 HST/STIS
spectra are available, we can still clearly see that both the
continuum flux and \civ~emission line flux fade away in this epoch
from the major peak. The numbers of spectra, the average UV
continuum flux, and the estimated slope of the Baldwin effect in
these 4 epochs are listed in Table 1. The variation in the slope
$\alpha$ with the UV continuum flux is shown clearly in Fig. 4. The
slope $\alpha$ varies from 0.58 in epoch 3 (the highest flux case)
to 0.83 in epoch 2 (the lowest flux case). The result from the HST
observations ($\alpha=0.78$ in epoch 4) confirms the trend obtained
from the IUE and HUT observations in the other 3 epochs. A smaller
slope $\beta$ value ($-$0.17) in epoch 3 is consistent with the
almost constant EW value at the lower flux state noticed for Fairall
9 by Wamsteker \& Colina (1986) and Osmer \& Shields (1999).
The slope $\alpha$ values for NGC 4151
are also well within the range for the broad H$\beta$ line of NGC
5548 found by Goad et al. (2004), who indicated that the slope
$\alpha$ varies from 0.4 to 1 in the 13-year observations of NGC
5548.
\begin{table*}
\caption{The average continuum flux and slope of the Baldwin effect
in four different observation epochs. The standard deviations of
these two values are also given. The flux is in units of $\rm
10^{-13}~erg~s^{-1}~cm^{-2}\AA^{-1}$. } \label{table1}
$$
\begin{array}{cccccccc}
\hline
\noalign{\smallskip}
\rm{Epoch} & \rm{JD~~Time} & \rm{No.~of~Spectra} &F_{1440\AA} & \sigma(F_{1440\AA}) & \alpha &
\beta & \sigma
\\
\noalign{\smallskip}
\hline
1& \rm{JD2443722-JD2445654} & \rm{75(IUE)} & 1.077 & 0.574 & 0.721 & -0.279 & 0.025\\
2& \rm{JD2445766-JD2447631} & \rm{85(IUE)} & 0.490 & 0.307 & 0.833 & -0.167 & 0.059\\
3& \rm{JD2447948-JD2450243} & \rm{308(IUE)+7(HUT)}& 3.607 & 1.294 & 0.581 & -0.419 & 0.025\\
4 & \rm{JD2450855-JD2452403} & \rm{15(HST)}& 0.844 & 0.754 & 0.777 & -0.223 & 0.056\\
\noalign{\smallskip}
\hline
\end{array}
$$
\end{table*}
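As a consistency check, the fitted slopes in each epoch of Table~\ref{table1} satisfy $\alpha = 1+\beta$, as required by $EW = f_{\rm line}/f_c$:

```python
# Slopes (alpha, beta) transcribed from Table 1; since EW = f_line / f_c,
# each epoch must satisfy alpha = 1 + beta.
epochs = {1: (0.721, -0.279), 2: (0.833, -0.167),
          3: (0.581, -0.419), 4: (0.777, -0.223)}
for alpha, beta in epochs.values():
    assert abs(alpha - (1.0 + beta)) < 1e-9
```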
\begin{figure}
\begin{center}
\includegraphics[angle=0,height=8.5cm,width=8.5cm]{fig4.ps}
\caption{Variation of the slope $\alpha$ with the continuum flux for
NGC 4151 in 4 observation epochs. The epoch number is shown for each
point. The epoch 4 is for the HST observations in 1998--2002. See
Table 1 for details. }
\label{Fig.4}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,height=10cm,width=9.2cm]{fig5.ps}
\caption{Variations in the broad, narrow, and absorption components
of the \civ~line with the UV continuum flux at 1440\,{\AA} for NGC 4151.
Symbols have the same meanings as in Fig. 2. The average errors are
shown in the upper-left part of each panel. The continuum
flux and line flux are in units of $\rm erg~s^{-1}~cm^{-2}\AA^{-1}$ and $\rm
erg~s^{-1}~cm^{-2}$ respectively. }
\label{Fig.5}
\end{center}
\end{figure}
\subsection{The effects of absorption features}
The non-constant slope of the intrinsic Baldwin effect has been
found so far for NGC 4151 (this paper), Fairall 9 (Wamsteker
\& Colina 1986; Osmer \& Shields
1999), and NGC 5548 (Goad et al. 2004).
However, unlike Fairall 9
and NGC 5548, which show no absorption and only weak absorption, respectively, in their UV
spectra (Crenshaw et al. 1999), NGC 4151 shows much
more remarkable absorption features, especially in the \civ~line
profile (Boksenberg et al. 1978; see also Fig. 1). These absorption
components also vary with the UV continuum. From Fig. 1 we see that
the central absorption is stronger when the continuum flux is
higher. This absorption feature has been found to be a blend of
various absorption lines with different ionization potentials (Kriss
et al. 1992; Weymann et al. 1997; Crenshaw et al. 2000; Kraemer et
al. 2005). Given this absorption, a natural question
then arises: is the non-constant slope of the \civ~Baldwin effect in
NGC 4151 due to the variation in absorption at different continuum
flux levels? To answer this question, we investigated the
variations in the broad emission, narrow emission, and absorption
components of \civ~line. Figure 5 shows the variations in these
components with the UV continuum flux. We can see that the flux
values of all three components decrease when the continuum
flux decreases. The variation in the absorption component is very
similar to that of the narrow emission-line component. Moreover, we
noticed that in the higher continuum flux state, the broad-line
component always dominates the total \civ~flux. In the lower
continuum flux state, however, the narrow emission-line component
becomes dominant, and both the broad-line component and the central
absorption component become relatively weak.
If we consider the Baldwin
effect of the broad emission-line component alone, we also can
obtain a non-constant slope with $\alpha$ varying from 0.74$\pm$0.02
in the highest flux epoch to 1.33$\pm$0.12 in the lowest flux epoch,
which indicates that the non-constant slope is unlikely to be
dominated by absorption.
As mentioned in Sect.~2, besides the remarkable central absorption
component, there are also many subtle absorption features in the
wings of \civ~profile of NGC 4151, as revealed by the
high-resolution HST spectra (Weymann et al. 1997; Crenshaw et al.
2000; Kraemer et al. 2001). These complex features make it more
difficult to recover the unabsorbed \civ~flux. In
our spectral fitting, we only use a Gaussian component to represent
the central absorption of \civ~and treat the absorption-induced
`emission' features in the wings of \civ~as satellite emission lines
(L1, L$'$2, and L2). As these features are in fact caused by the
low-ionization absorption lines overlying a large part of the wings
of \civ, our simplification would underestimate the total flux of
\civ. As indicated by Crenshaw et al. (2000), these low-ionization
absorption features are weak in the high-flux state and become
prominent when the UV continuum drops to a low state. Therefore, the
simplification of our spectral fitting is acceptable when NGC 4151 is
bright (e.g. the continuum flux larger than $\rm
10^{-13}\,erg~s^{-1}~cm^{-2}\AA^{-1}$,
as in epochs 1 and 3). Even from the result obtained in these two epochs
when NGC 4151 is relatively bright, the slope variation of the
Baldwin effect is still evident (see Fig. 4). Using the {\it
HST/STIS} spectra, we also investigated the difference between
spectral fits with our model and with a model that includes three
Gaussian components to represent the three low-ionization absorption
lines at around 1526\,\AA, 1533\,\AA, and 1577\,{\AA}, respectively.
The difference is about 2\% when the UV continuum flux is higher
($F_{1440\AA}=2.4\times \rm 10^{-13}\,erg~s^{-1}~cm^{-2} \AA^{-1}$
on JD2450855) and about 13\% when the UV continuum flux is lower
($F_{1440\AA}=1.4\times \rm 10^{-14}\,erg~s^{-1}~cm^{-2} \AA^{-1}$
on JD2451334). Although the equivalent widths of these
low-ionization absorption lines become greater when the source is
fainter, we see that the simplification of our spectral fitting does
not lead to significant underestimation of the total \civ~flux even
in the faint state of NGC 4151. Together with the similar trend in
the Baldwin effect found for Fairall 9, which shows no
absorption in \civ, and for NGC 5548, which does show some narrow
absorption features in \civ~but not in H$_\beta$, we believe that
the non-constant slope of the \civ~Baldwin effect in NGC 4151 should
not be driven mainly by the absorption effect. However, more
quantitative studies of the effects of complex absorption features
on the Baldwin effect of NGC 4151 are still needed.
\section{Conclusion and discussion}
With the long-term UV spectral data of IUE, HUT, \& HST, we found
the non-constant slope in the intrinsic \civ~Baldwin effect of
Seyfert 1 galaxy NGC 4151. The trend of the slope change with the UV
continuum flux variation is similar to those found for two other
Seyfert 1s, Fairall 9 and NGC 5548, suggesting that the non-constant
slope may be usual for AGNs. The physical origin of such a
non-constant slope is probably related to the different response of
the broad-line emission to the continuum variations at different
luminosity levels (Korista et al. 1998).
From the theoretical point of view, an accretion disk at different
luminosity levels (corresponding to different accretion rates)
should have different accretion modes. AGNs probably accrete in
a radiatively inefficient accretion flow (see Narayan, Mahadevan \&
Quataert 1998 for a review) at a lower accretion rate and in a
standard optically thick disk (Shakura \& Sunyaev 1973) or a slim
disk (Abramowicz et al. 1988) at a higher accretion rate. Different
accretion modes radiate differently, producing a relatively hard
spectrum at a lower accretion rate and a soft spectrum at a higher
accretion rate. Because the ionizing continuum of AGNs mainly comes
from the radiation of the accretion disk, the difference in the
disk-emitting spectrum may lead to the different response of the
emission line to the ionizing continuum through the photoionization
process. For NGC 4151, the lowest and highest continuum fluxes at
1440\,{\AA} in the 1978--2002 observational period are about $\rm
3\times10^{-15}\,erg~s^{-1}cm^{-2}\AA^{-1}$ and $\rm
6\times10^{-13}\,erg~s^{-1}cm^{-2}\AA^{-1}$, which correspond to
luminosities at 1440\,{\AA} of $\rm 10^{41}\,erg~s^{-1}$ and
$\rm 2\times 10^{43}\,erg~s^{-1}$, respectively. If we adopt the
black-hole mass for NGC 4151 as $\rm 1.33\times 10^7M_\odot$
(Peterson et al. 2004) and assume $f_\nu\propto \nu^{-0.5}$ in the
UV/optical band and
$L_{bol}\simeq 9L_{5100\,\AA}$ for deriving the bolometric
luminosity (Kaspi et al. 2000), we can estimate the Eddington ratio
($L_{bol}/L_{Edd}$, usually taken as a measure of the dimensionless
accretion rate) in the lowest and highest flux states of NGC 4151 as
0.001 and 0.2, respectively. Clearly, the dimensionless accretion
rate of NGC 4151 varies by more than two orders of magnitude. Because
the critical dimensionless accretion rate between a radiatively
efficient accretion flow and a radiatively inefficient one is about
0.01 (Narayan et al. 1998), most probably NGC 4151 accretes in a
radiatively inefficient accretion flow at the lower flux state but
in a radiatively efficient one at the higher flux state. Such a
change of accretion modes can not only produce different spectral
energy distributions in the ionizing continuum, but can also produce
different non-linear responses of the \civ~emission line and then
lead to the non-constant slope of the Baldwin effect at
different luminosity levels.
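The Eddington-ratio estimate above can be reproduced with a short calculation. The sketch below assumes $L_{\rm Edd}\simeq 1.26\times10^{38}\,(M_{\rm BH}/M_\odot)\rm\,erg~s^{-1}$ and extrapolates $\lambda L_\lambda$ from 1440\,{\AA} to 5100\,{\AA} with an assumed $(\lambda/1440\,\mbox{\AA})^{1/2}$ scaling; with these conventions the quoted values of 0.001 and 0.2 are recovered, but the exact numbers depend on how the continuum slope is applied:

```python
# Back-of-the-envelope Eddington ratios for NGC 4151 (assumptions noted above).
M_BH = 1.33e7                        # black-hole mass in solar masses (Peterson et al. 2004)
L_edd = 1.26e38 * M_BH               # Eddington luminosity [erg/s]

L1440_low, L1440_high = 1e41, 2e43   # lambda*L_lambda at 1440 A [erg/s], from the text

scale = (5100.0 / 1440.0) ** 0.5     # assumed lambda*L_lambda ~ lambda^{1/2} extrapolation
bol_corr = 9.0                       # L_bol ~ 9 * L_5100 (Kaspi et al. 2000)

ratio_low = bol_corr * scale * L1440_low / L_edd
ratio_high = bol_corr * scale * L1440_high / L_edd

print(f"{ratio_low:.1e}, {ratio_high:.1e}")  # roughly 0.001 and 0.2
```

Whatever the bolometric correction, the ratio between the two states is fixed by the 1440\,{\AA} fluxes alone, a factor of 200, i.e., more than two orders of magnitude.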
Previous studies have revealed the importance of the light-travel
time effect on the Baldwin effect (Krolik et al. 1991; Pogge \&
Peterson 1992; Peterson et al. 2002). Correction of such an effect
can substantially reduce the scatter of the intrinsic Baldwin
effect. However, the time lag for NGC 4151 between the \civ~emission
line and the UV continuum flux is not well-determined. Clavel et al.
(1990) obtained a delay of $3.2\pm 3$ days using two months of data,
with a mean sampling interval of 3.4 days, from the IUE campaign in
1988--1989, when NGC 4151 was relatively faint. Crenshaw et al. (1996) failed
to detect any correlation between the \civ~line and UV continuum
flux from a short but intensive IUE campaign in December 1993 when
NGC 4151 was relatively bright. We also used the whole IUE data set
in 1978--1996 and estimated the time lag between the total \civ~line
flux and the continuum flux at 1440\,{\AA} with an interpolation
cross-correlation function (ICCF) method developed by Gaskell \&
Sparke (1986), Gaskell \& Peterson (1987), and White \& Peterson
(1994). We found that the time lag is about 1.9 days. We tried to
use such a time lag to correct the light-travel time effect for NGC
4151 and found that our result does not change. This is mainly
because the mean interval of the whole UV data set in 1978--2002 is
about 15 days, which is significantly longer than the estimated time
lag.
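The ICCF method interpolates one light curve onto the lag-shifted time grid of the other and takes the lag that maximizes the correlation coefficient. A minimal sketch on synthetic, irregularly sampled data (not the Gaskell \& Peterson implementation used above; the signal, sampling, and 3-day lag are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

true_lag = 3.0                                   # days; invented for this synthetic test
t = np.sort(rng.uniform(0.0, 200.0, size=500))   # irregular sampling times

def signal(time):
    return np.sin(2 * np.pi * time / 40.0) + 0.5 * np.sin(2 * np.pi * time / 13.0)

cont = signal(t)             # continuum light curve
line = signal(t - true_lag)  # line light curve echoes the continuum after the lag

def iccf_lag(t, cont, line, lags):
    """Shift, linearly interpolate, correlate, and take the peak lag."""
    corrs = []
    for lag in lags:
        shifted = t + lag
        ok = (shifted >= t[0]) & (shifted <= t[-1])  # stay inside the observed window
        corrs.append(np.corrcoef(cont[ok], np.interp(shifted[ok], t, line))[0, 1])
    return lags[int(np.argmax(corrs))]

best_lag = iccf_lag(t, cont, line, np.arange(-10.0, 10.25, 0.25))
```

On this synthetic example the recovered peak falls at the injected 3-day lag, up to interpolation error.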
In conclusion, we find the non-constant slope of the intrinsic
Baldwin effect in NGC 4151 and suggest that such a non-constant
slope may not be unusual for AGNs. Clearly, studies of more
AGNs are needed to confirm our result. This will require intensive,
long-term, and high-resolution UV spectral monitoring with the
current and future space UV observatories on some strongly variable
AGNs. We expect that these future studies will help us to extract
more accurate emission-line and continuum fluxes and establish a
more reliable line-continuum relation for AGNs.
\begin{acknowledgements}
We are grateful to Prof. Brad Peterson for kindly providing an ICCF
program to calculate the time lag, to the anonymous referee for
constructive comments that helped to improve the paper
significantly, and to Mr. Lei Qian and Mr. Bingxiao Xu for many
helpful discussions. The authors are supported by the National
Natural Science Foundation of China (Grants No. 10473001, No.
10525313, and No. 10521001), the RFDP Grant (No. 20050001026), and
the Key Grant Project of Chinese Ministry of Education (No. 305001).
\end{acknowledgements}
\section{Conclusions and future directions}
In this work, we introduced a novel nonparametric method for adaptive control and prediction that estimates the unknown dynamics over a reproducing kernel Hilbert space. By restricting to the space $\mathcal{F}_2$, we analyzed efficient finite-dimensional randomized approximations that scale well to high dimension.
A promising future direction of work is to study the Banach space $\mathcal{F}_1$ of single-layer neural networks of the form $h(\cdot) = \int_{\Theta} \Phi(\cdot, \theta)\mu(d\theta)$ for a signed Radon measure $\mu$~\citep{bach_breaking, bengio_convex}.
The space $\mathcal{F}_1$ admits convergence analyses for gradient-based optimization via the theory of Wasserstein gradient flows, as well as efficient approximation via particle methods~\citep{MeiE7665, rotskoff2019trainability}. Such approaches could in principle be generalized to the adaptive control setting considered here via the velocity gradient methodology.
The primary difficulty in this direction is that standard Lyapunov arguments rely on the existence of an inner product in the function space, and it is unclear how to construct a Lyapunov function for estimation over a Banach space.
An alternative direction to consider more general spaces of functions is to use the recently-developed neural tangent kernel~\citep{jacot18ntk}, which
argues that wide neural networks have an RKHS structure similar to
$\mathcal{F}_2$. Nevertheless, networks in this regime do not exhibit feature learning, which calls into question their advantage over the methods already considered in this work.
\section{Introduction}
\label{sec:intro}
One of the fundamental assumptions of nonlinear adaptive systems theory is
that the uncertainty of the system can be written as a linear expansion in a set of known basis functions that are nonlinear in the system state.
While such linear parameterizations enable the derivation of efficient algorithms with provable guarantees, results outside of this restrictive regime are scarce.
Notable examples typically leverage notions of monotonicity~\citep{astolfi03immersion,tyukin07adaptation} or convexity~\citep{annaswamy98adaptive,fradkov99} to make the underlying learning problem tractable.
Here we broaden the applicability of adaptive control by relaxing this classical assumption. In statistical learning, nonlinear function approximation is handled through the use of reproducing kernel Hilbert spaces (RKHSs)~\citep{cucker02learning}, which are infinite-dimensional function spaces that admit tractable algorithms reminiscent of finite-dimensional linear regression. Inspired by this approach, we develop an adaptive input that learns directly over an RKHS without reference to a finite-dimensional vector of parameters.
One significant drawback of RKHSs is computational. While the representer theorem ensures that estimation in an RKHS can always be cast as a finite expansion over the dataset, the number of parameters grows with its size, which makes learning on large datasets computationally demanding. A key breakthrough in overcoming this difficulty was the theory of random Fourier features, which shows that
elements in many RKHSs can be approximated in the linear span of a finite set of \textit{random} basis functions with high probability. Remarkably, the number of
random basis functions needed can be shown to scale polynomially~\citep{rahimi08uniform} in the
function norm and the ambient dimension, which enables efficient computation even in
high-dimensional spaces.
In the dynamical systems setting considered in this work, the system trajectory plays the role of the dataset, and the horizon plays the role of its size. Paralleling the statistical learning setting, the complexity of the nonparametric adaptive input that we introduce grows with this horizon. To overcome this complication, we leverage the theory of random features to provide high-probability guarantees on the possibility of uniformly approximating the nonparametric input via a finite-dimensional expansion in random basis functions. Importantly, this approach leads to efficient update laws that match the computational complexity of parametric methods while retaining the expressivity of the RKHS.
We focus on two primary problem settings. The first setting is the classical problem of adaptive control with matched uncertainty, where the uncertainty is assumed to live in the span of the control matrix. Our second application is in adaptive state estimation, where we seek to learn a model of an unknown dynamics governing the evolution of a particular state variable. As a byproduct of our analysis, we exhibit a duality between these two problems reminiscent of the duality between LQR and Kalman filtering in linear control theory. In both settings, we assume that the unmodelled component can be written as the sum of a term that can be linearly parameterized with known physically-motivated basis functions and a term assumed to live in an RKHS. This setup captures the practically relevant setting where a learner can leverage some available physical knowledge of the system but also must perform estimation in a purely unstructured fashion to achieve ideal performance.
The paper is organized as follows. In Section~\ref{sec:related}, we review related work and summarize our contribution. In Section~\ref{sec:problem}, we formulate the adaptive control and prediction problems. In Section~\ref{sec:np_rslts}, we develop a theory of nonparametric adaptive control, building upon a simple observation reminiscent of the ``kernel trick'' in machine learning. In Section~\ref{sec:rf}, we review the theory of random Fourier features, which we subsequently apply in Section~\ref{sec:p_results} to design practical adaptive algorithms that asymptotically drive the control or prediction error to a ball around zero. The radius of the ball scales with the approximation error of the random feature expansion, and we give an explicit bound on the number of features needed to ensure that the tracking or prediction error falls below a tolerance threshold $\varepsilon$ with high probability.
In Section~\ref{sec:simulations}, we first study the performance of the nonparametric method in comparison to its randomized approximations on a synthetic adaptive control problem. We subsequently illustrate the effectiveness of its randomized approximations in very high dimension
by constructing an adaptive predictor for a $60$-dimensional
Hamiltonian dynamical system describing the motion of a collection of particles interacting through a $1/r^2$ potential.
\section{Nonparametric adaptive control and prediction}
\label{sec:np_rslts}
In this section, we present our primary result in the nonparametric setting. Given a Lyapunov function for the error dynamics as stated in Assumption~\ref{assmp:lyapunov}, the standard procedure in adaptive nonlinear control is to approximate the unknown dynamics $h(x)$ appearing in \eqref{eqn:gen_dyn}~\&~\eqref{eq:error_dynamics} by an expansion in known basis functions $\Phi:\ensuremath{\mathbb{R}}^n\rightarrow\ensuremath{\mathbb{R}}^{d\times p}$~\citep{sanner_nn}
\begin{equation}
\label{eqn:parametric_approx}
\hat{h}(x,t) = \Phi(x)\hat{\alpha}(t),
\end{equation}
and to update the parameter estimates $\hat{\alpha}(t)\in\ensuremath{\mathbb{R}}^p$ according to a Lyapunov-based update law
\begin{equation}
\label{eqn:linear_update_standard}
\dot{\hat{\alpha}}(t) = -\gamma \Phi(x)^\mathsf{T} g_e(x, t)^\mathsf{T} \nabla Q (e, t),
\end{equation}
for $\gamma > 0$ a learning rate.
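For concreteness, \eqref{eqn:parametric_approx}~\&~\eqref{eqn:linear_update_standard} can be exercised on a scalar toy system $\dot{x} = -kx + \Phi(x)\alpha + u$ with $u = -\Phi(x)\hat{\alpha}$, $Q(e, t) = e^2/2$, $g_e \equiv 1$, and regulation to $x_d = 0$, for which the update law reduces to $\dot{\hat{\alpha}} = \gamma\,\Phi(x)^\mathsf{T} x$ (the sign follows from this choice of error dynamics). The basis functions, gains, and ``true'' parameters below are hypothetical:

```python
import numpy as np

# Toy matched-uncertainty system: x' = -k*x + Phi(x)@alpha + u, with u = -Phi(x)@alpha_hat.
def phi(x):
    return np.array([np.sin(x), np.cos(x)])   # hypothetical basis functions

alpha_true = np.array([1.0, -0.5])            # unknown parameters (simulation only)
k, gamma, dt, steps = 2.0, 5.0, 1e-3, 20000

x, alpha_hat = 1.0, np.zeros(2)
for _ in range(steps):
    f = phi(x)
    x_dot = -k * x + f @ (alpha_true - alpha_hat)   # closed-loop error dynamics
    alpha_hat = alpha_hat + dt * gamma * f * x      # Lyapunov-based update law
    x = x + dt * x_dot                              # forward-Euler step
```

As the Lyapunov analysis predicts, the regulation error $x$ is driven toward zero while the parameter estimates remain bounded, even though $\hat{\alpha}$ need not converge to $\alpha$.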
\subsection{Nonparametric form}
We start with the following simple observation about the construction in \eqref{eqn:parametric_approx}~\&~\eqref{eqn:linear_update_standard}, which is analogous to the ``kernel trick'' in machine learning.
\begin{obs}[Kernel trick]
\label{obs:kernel}
Assume $\hat{\alpha}(0) = 0$\footnote{Note that this is without loss of generality, since any non-zero $\hat{\alpha}(0)$ results in a non-zero
$\hat{h}(\cdot, 0)$ which can simply be absorbed into $h$.}. Then the adaptive approximation \eqref{eqn:parametric_approx} with parameters updated according to the algorithm \eqref{eqn:linear_update_standard} is equivalent to the nonparametric approximation
\begin{equation}
\label{eqn:kernel_input}
\hat{h}(x, t) = \int_0^t \mathsf{K}(x, x(\tau)) c(\tau)d\tau,
\end{equation}
where we have defined the kernel function $\mathsf{K}:\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}^n\rightarrow\ensuremath{\mathbb{R}}^{d\times d}$ and coefficients $c(t)\in\ensuremath{\mathbb{R}}^d$ as:
\begin{align*}
\mathsf{K}(x, y) &= \Phi(x)\Phi(y)^\mathsf{T},\\
c(t) &= -\gamma g_e(x(t), t)^\mathsf{T} \nabla Q(e(t), t).
\end{align*}
\end{obs}
The proof is simple and proceeds by formally writing the solution of \eqref{eqn:linear_update_standard} as an integral over time. Observation~\ref{obs:kernel} demonstrates that the function estimates formed by classical adaptive control algorithms only depend on inner products between the basis functions and do not, in principle, require any reference to a vector of parameter estimates. This implies that the basis functions need not be finite-dimensional so long as they admit a computationally inexpensive procedure for computing their inner products, which is precisely the case for an RKHS.
\paragraph{Data-adapted centers} Restricting to the case where $\mathsf{K}(\cdot, \cdot)$ is the Gaussian kernel, \eqref{eqn:kernel_input} can be seen as leaving a ``trail'' of Gaussians along the system trajectory $x(\tau)$ for $\tau < t$. In this sense, similar to kernel machines in statistical learning, \eqref{eqn:kernel_input} automatically constructs data-adapted centers at which to place spatially-localized basis functions.
\paragraph{Complexity} The price paid for the expressivity in the representation \eqref{eqn:kernel_input} is that $\hat{h}(x, t)$ now obeys a partial differential equation that must be solved over a horizon of length $t$ at each $x \in \ensuremath{\mathbb{R}}^n$,
\begin{equation}
\label{eqn:pde}
\frac{\partial \hat{h}}{\partial t}(x, t) = \mathsf{K}(x, x(t))c(t).
\end{equation}
While \eqref{eqn:pde} is decoupled in space so that a global solve is not required, past work from time $\tau < t$ cannot be re-used at time $t$. Hence, unlike standard parametric methods that incur an $\mathcal{O}(1)$ cost at each timestep, solving \eqref{eqn:pde} for the value of $\hat{h}(x, t)$ at a given spatial location $x$ incurs an $\mathcal{O}(t)$ cost at each time $t$. For most applications, this is prohibitively expensive, and we now turn to efficient approximation schemes that circumvent this difficulty.
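Observation~\ref{obs:kernel} is easy to check numerically: under a forward-Euler discretization of the update law, the parametric estimate $\Phi(x)\hat{\alpha}$ and the kernel sum $\sum_k \Delta t\,\mathsf{K}(x, x_k)c_k$ agree to machine precision, and the code makes explicit that the kernel form must store the entire trajectory. The features, trajectory, and coefficients below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def features(x):
    return np.array([np.sin(x), np.cos(x), np.sin(2 * x)])  # hypothetical Phi(x), d = 1

def kernel(x, y):
    return features(x) @ features(y)                        # K(x, y) = Phi(x) Phi(y)^T

dt, T = 0.01, 200
xs = rng.normal(size=T)          # stand-in "trajectory" samples x(t_k)
cs = rng.normal(size=T)          # stand-in coefficients c(t_k) = -gamma * g^T grad Q

# Parametric side: integrate alpha_hat' = Phi(x)^T c with alpha_hat(0) = 0.
alpha_hat = np.zeros(3)
for xk, ck in zip(xs, cs):
    alpha_hat = alpha_hat + dt * features(xk) * ck

x_query = 0.3
h_param = features(x_query) @ alpha_hat
# Nonparametric side: the same estimate as a kernel sum over the stored trajectory.
h_nonparam = sum(dt * kernel(x_query, xk) * ck for xk, ck in zip(xs, cs))
```

The two estimates coincide, but the kernel sum must revisit all $T$ stored pairs $(x_k, c_k)$ at every query, which is the $\mathcal{O}(t)$ cost discussed above.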
\subsection{Random feature space}
Observation~\ref{obs:kernel} motivates us to work with function classes
described by kernels. The following definition
introduces the notion of an operator-valued kernel.
\begin{mydef}[Operator-valued reproducing kernel, see e.g.,~\cite{carmeli08operator}]
\label{def:op_kernel}
A kernel $\mathsf{K} : \ensuremath{\mathbb{R}}^{n} \times \ensuremath{\mathbb{R}}^{n} \rightarrow \ensuremath{\mathbb{R}}^{d \times d}$ is said to be an operator-valued reproducing
kernel for an RKHS $\mathcal{H}$ if
\begin{enumerate}[(i)]
\item For every $\{x_i\}_{i=1}^{N} \subseteq \ensuremath{\mathbb{R}}^n$ and $\{w_i\}_{i=1}^{N} \subseteq \ensuremath{\mathbb{R}}^d$, it holds
that $\sum_{i,j=1}^{N} \ip{w_i}{\mathsf{K}(x_i, x_j) w_j} \geq 0$.
\item $\mathsf{K}(\cdot, x)w \in \mathcal{H}$ for every $x\in \ensuremath{\mathbb{R}}^n$ and $w\in\ensuremath{\mathbb{R}}^d$.
\item $\mathcal{H}$ can be written
$\mathcal{H} = \mathsf{cl}\left\{f \,\Bigg|\, \exists\: \{x_i\}_{i=1}^{N}, \{w_i\}_{i=1}^{N}\: s.t.\: f(\cdot) = \sum_{i=1}^N\mathsf{K}(\cdot, x_i)w_i \right\}$.
\end{enumerate}
\end{mydef}
The adaptive algorithms we formulate will be valid for any RKHS $\mathcal{H}$ with a known operator-valued kernel $\mathsf{K}$. However, we focus on RKHSs with specific structure that will enable the design of efficient randomized approximations. These function spaces are described by the following assumption.
\begin{assmp}[The function class $\mathcal{F}_2$, see e.g.,~\cite{bach_breaking}]
\label{assmp:rf_kernel}
The unknown dynamics $h$ lies in an RKHS $\mathcal{H}$ with known operator-valued kernel $\mathsf{K}$. Moreover, $\mathsf{K}$ may be written in terms of a feature map $\Phi:\ensuremath{\mathbb{R}}^n\times\Theta\rightarrow\ensuremath{\mathbb{R}}^{d\times d_1}$ as
\begin{align}
\mathsf{K}(x, y) = \int_{\Theta} \Phi(x, \theta) \Phi(y, \theta)^\mathsf{T} d\nu(\theta), \label{eq:kernel_form}
\end{align}
with $d_1\leq d$ and where $\nu$ is a known probability measure on a measurable space $\Theta$.
\end{assmp}
In Assumption~\ref{assmp:rf_kernel}, we have overloaded the definition of the feature map $\Phi$ as a generalization of the structure of $\mathsf{K}$ seen in Observation~\ref{obs:kernel}. Assumption~\ref{assmp:rf_kernel} is not very restrictive, as many rich kernels applied in practice -- such as the Gaussian and Laplace kernels -- can readily be written in this form. In particular, the operator-valued generalization of Bochner's theorem~\citep{brault16randomfeatures} states that any translation-invariant kernel can be written in the form \eqref{eq:kernel_form} with a feature map
\begin{equation}
\label{eqn:bochner}
\Phi(x, \theta) = B(w) \cos(w^\mathsf{T} x + b),
\end{equation}
where $\Theta \subseteq\ensuremath{\mathbb{R}}^{n+1}$, $\theta = (w, b)$, $w\in\ensuremath{\mathbb{R}}^n$, $b\in\ensuremath{\mathbb{R}}$, and for suitable choices of $\nu$ and $B : \ensuremath{\mathbb{R}}^n\rightarrow \ensuremath{\mathbb{R}}^{d\times d_1}$.
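For the scalar Gaussian kernel $k(x, y) = \exp(-\norm{x - y}_2^2/2)$, this construction takes $w \sim \mathcal{N}(0, I)$, $b \sim \mathrm{Unif}[0, 2\pi)$, and feature $\sqrt{2}\cos(w^\mathsf{T} x + b)$; a quick Monte Carlo check of \eqref{eq:kernel_form} in this special case:

```python
import numpy as np

rng = np.random.default_rng(3)
n, D = 3, 20000                      # ambient dimension and number of random features

x = rng.normal(size=n)
y = rng.normal(size=n)

w = rng.normal(size=(D, n))          # frequencies w ~ N(0, I)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

# Feature map sqrt(2)*cos(w^T x + b); averaging products approximates the kernel.
zx = np.sqrt(2.0) * np.cos(w @ x + b)
zy = np.sqrt(2.0) * np.cos(w @ y + b)
k_rff = np.mean(zx * zy)

k_exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
```

The Monte Carlo estimate concentrates around the exact kernel value at the usual $\mathcal{O}(1/\sqrt{D})$ rate.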
Under Assumption~\ref{assmp:rf_kernel}, it is well-known (c.f.~\cite{bach_breaking}, Appendix A) that $h \in \mathcal{H}$ can be written, for some square-integrable signed density $\alpha:\Theta\rightarrow\ensuremath{\mathbb{R}}^{d_1}$ with respect to the base measure $\nu$, as the integral
\begin{equation}
\label{eqn:f2_func}
h(\cdot) = \int_{\Theta}\Phi(\cdot, \theta)\alpha(\theta)d\nu(\theta), \:\: \norm{h}^2_{\mathcal{H}} = \norm{\alpha}^2_{L_2(\Theta, \nu)}.
\end{equation}
The corresponding Hilbert space is referred to as $\mathcal{F}_2$~\citep{bach_breaking, bengio_convex}. $\mathcal{F}_2$ is related to the Banach space of single-layer neural networks $\mathcal{F}_1$, which may be obtained by taking the union over all possible base measures for $\mathcal{F}_2$. The space $\mathcal{F}_2$ is convenient for our purposes because it allows us to treat the infinite-dimensional density over parameters $\alpha$ similar to a standard finite-dimensional vector of parameters. To do so, we introduce a second moment regularity condition that will ensure the nonparametric input leads to a stable and convergent feedback system.
\begin{assmp}[Second moment regularity of $\Phi$]
\label{assmp:second_moment_bound}
For every $x \in \ensuremath{\mathbb{R}}^n$, the second moment of the feature matrix is finite, i.e., $\int_\Theta \opnorm{\Phi(x, \theta)}^2 d\nu(\theta) < \infty$.
Furthermore, for every $R > 0$,
\begin{align*}
\sup_{\substack{\norm{x}_2 \leq R, \norm{y}_2 \leq R,\\ x \neq y}} \frac{\left(\int_\Theta \opnorm{\Phi(x, \theta) - \Phi(y, \theta)}^2 d\nu(\theta)\right)^{1/2}}{\norm{x-y}_2} < \infty.
\end{align*}
\end{assmp}
To obtain accurate parametric approximations, we may sample points $\theta_i \in \Theta$ i.i.d. from the base measure $\nu$. This has the effect of discretizing the density $\alpha$ into a vector of parameters that can be learned using standard adaptive methods.
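As a concrete scalar instance of \eqref{eqn:f2_func}, take $\Theta = \ensuremath{\mathbb{R}}$, $\nu = \mathcal{N}(0, 1)$, $\Phi(x, \theta) = \cos(\theta x)$, and the (assumed, purely illustrative) density $\alpha(\theta) = \theta^2$, for which the integral has the closed form $h(x) = (1 - x^2)e^{-x^2/2}$. Sampling $\theta_i \sim \nu$ gives the Monte Carlo estimate sketched below:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 200000                                      # number of sampled parameters theta_i ~ nu

theta = rng.normal(size=m)                      # nu = N(0, 1)
alpha = theta ** 2                              # assumed density alpha(theta) = theta^2

def h_hat(x):
    return np.mean(np.cos(theta * x) * alpha)   # Monte Carlo discretization of h

def h_exact(x):
    return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

err = max(abs(h_hat(x) - h_exact(x)) for x in (0.0, 0.5, 1.0, 2.0))
```

The discretized estimate tracks the closed-form integral uniformly over the test points, which is precisely the approximation exploited by the randomized algorithms below.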
\subsection{Main results}
For simplicity of exposition, we restrict to the case where $\alpha_p = 0$ in \eqref{eqn:gen_dyn} to focus on convergence of the nonparametric input; the randomized approximations in Section~\ref{sec:p_results} will adapt over both physical and mathematical parameter estimates simultaneously. Moreover, we focus here on the setting of adaptive control. Later, the proof of Theorem~\ref{thm:dp_finite_approx} will highlight a duality between adaptive control and adaptive prediction that immediately implies an analogous result for prediction.
The following theorem demonstrates that the nonparametric adaptation algorithm leads to a stable and convergent trajectory.
\begin{restatable}[Convergence]{mythm}{npconv}
\label{thm:nonparametric_conv}
Consider system \eqref{eqn:gen_dyn} under Assumptions~\ref{assmp:lyapunov},~\ref{assmp:rf_kernel}, and~\ref{assmp:second_moment_bound}. Fix $\alpha_p = 0$ and let $\gamma > 0$.
Then the adaptive control input
\begin{align*}
u(x, t) &= - \gamma\int_0^t \mathsf{K}(x, x(\tau))g_e(x(\tau), \tau)^\mathsf{T} \nabla Q(e(\tau), \tau)d\tau
\end{align*}
ensures that both $x(t)$ and $e(t)$ exist and are uniformly bounded for all $t \geq 0$. Moreover, $u(\cdot, t) \in \mathcal{H}$ for all $t \geq 0$ and $\lim_{t\rightarrow \infty} \norm{x(t) - x_d(t)}_2 = 0$.
\end{restatable}
Next, we study the interpolation properties of the input $u(x, t)$ along the desired trajectory. To this end, we
strengthen Definition~\ref{def:local_lip_local_bound} to be uniform in $t$.
\begin{mydef}
\label{def:local_lipschitz_x_uniform_t}
Let $E_1$ and $E_2$ be normed vector spaces.
A function $f(x, t)$ mapping $E_1 \times \ensuremath{\mathbb{R}}_{\geq 0} \mapsto E_2$
is said to be locally Lipschitz in $t$ uniformly in $x$ if the following
two conditions hold for every $R > 0$:
\begin{align*}
\sup_{\norm{x}_{E_1}\leq R} \sup_{\substack{t_1, t_2 \in \ensuremath{\mathbb{R}}_{\geq 0}, \\t_1 \neq t_2}} \frac{\norm{f(x, t_1) - f(x, t_2)}_{E_2}}{\abs{t_1-t_2}} &< \infty, \\
\sup_{t \in \ensuremath{\mathbb{R}}_{\geq 0}} \sup_{\substack{\norm{x_1}_{E_1} \leq R,\\ \norm{x_2}_{E_1} \leq R, \\x_1 \neq x_2}} \frac{\norm{f(x_1, t) - f(x_2, t)}_{E_2}}{\norm{x_1 - x_2}_{E_1}} &< \infty.
\end{align*}
\end{mydef}
With Definition~\ref{def:local_lipschitz_x_uniform_t} in hand, we may state the following theorem.
\begin{restatable}[Interpolation]{thm}{interpolation}
\label{thm:interpolation}
Consider the setting of Theorem~\ref{thm:nonparametric_conv}. Suppose furthermore that both $f_e(e, t)$ and $g_e(x, t)$ are locally Lipschitz in their first argument
uniformly in $t$.
Finally, suppose that for every $R > 0$,
\begin{align*}
\int_\Theta \sup_{\norm{x}_2 \leq R} \opnorm{\Phi(x, \theta)}^2 d\nu(\theta) < \infty.
\end{align*}
Then the nonparametric input asymptotically interpolates the unknown dynamics in the span of the control matrix, $\lim_{t \rightarrow \infty} \norm{g_e(x(t), t)(u(x(t), t) - h(x(t)))}_2 = 0$.
\end{restatable}
Mirroring the finite-dimensional setting considered by~\citet{boffi_neco_imp_reg}, we now demonstrate that the adaptive input in Theorem~\ref{thm:nonparametric_conv} converges to the minimum RKHS-norm interpolating solution.
\begin{restatable}[Implicit regularization]{thm}{impreg}
\label{thm:imp_reg}
Consider the setting of Theorem~\ref{thm:nonparametric_conv}. Define the interpolating set over the trajectory
\begin{equation*}
\mathcal{A} := \left\{\bar{h} \in \mathcal{H} : \bar{h}(x(t)) = h(x(t)), \ \forall t\geq 0\right\},
\end{equation*}
and assume that $\lim_{t\rightarrow\infty}u(\cdot, t)\in \mathcal{A}$. Then,
\begin{equation}
\lim_{t\rightarrow\infty}u(\cdot, t) \in \argmin_{\bar{h}\in\mathcal{A}}\:\norm{\bar{h}(\cdot)}_{\mathcal{H}}.
\end{equation}
\end{restatable}
Given these results for the computationally expensive nonparametric input, we now turn to develop a theory of efficient randomized approximation schemes.
\section{Acknowledgments}
NMB thanks Eric Vanden-Eijnden and Joan Bruna for many instructive discussions on the function spaces $\mathcal{F}_1$ and $\mathcal{F}_2$. All authors thank Pannag Sanketi and Vikas Sindhwani for helpful feedback.
\section{Randomized adaptive control and prediction}
\label{sec:p_results}
We now demonstrate how the nonparametric input in Theorem~\ref{thm:nonparametric_conv} can be approximated using the uniform approximation theory of Section~\ref{sec:rf} to obtain adaptive control and prediction algorithms with high-probability guarantees of convergence. We state completely general results under the assumption that the unknown dynamics $h(\cdot)$ can be uniformly approximated to a desired degree of accuracy, similar to the classical results of~\citet{sanner_nn} but in a generalized context. Taking $h(\cdot)$ to lie in the function space $\mathcal{F}_2$ and applying the results of Section~\ref{sec:rf} immediately gives a sufficient bound on the number of random features needed to track the desired trajectory to a given tolerance.
\subsection{Deadzones}
Before presenting our main approximate algorithms, we first introduce the notion of a deadzone. Since any finite-dimensional
approximation to $h(\cdot)$ will have some non-zero approximation error,
no adaptive algorithm can learn below this noise floor;
a deadzone allows us to disable adaptation when the only residual error
remaining is due to approximation error.
\begin{mydef}
\label{def:dead}
Let $\Delta > 0$.
A continuously differentiable function $\sigma_\Delta : \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}$ is called
a $\Delta$-admissible deadzone if:
\begin{enumerate}[(i)]
\item $0 \leq \sigma_\Delta$ and $\sigma_\Delta(x) = 0$ for all $x \in [0, \Delta]$,
\item $0 \leq \sigma'_\Delta$ and $\sigma'_\Delta(x) = 0$ for all $x \in [0, \Delta]$,
\item $\sigma'_\Delta$ is locally Lipschitz. \label{eq:condition_locally_lip}
\end{enumerate}
The function $\sigma_\Delta$ is called a $(\Delta, L, B)$-admissible deadzone if
condition \eqref{eq:condition_locally_lip} is replaced with the condition that
$\sigma'_\Delta$ is $L$-Lipschitz and $B$-bounded.
\end{mydef}
We now give some examples of $\Delta$-admissible deadzones.
The first example is a direct extension of the deadzone used
in \citet{sanner_nn}.
\begin{restatable}{ex}{sdeltadeadzone}
\label{prop:s_delta_deadzone}
Fix a scalar $\delta > 0$. Let $s_\delta : \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}_{\geq 0}$
be defined as $s_\delta(x) := (x-\delta) \mathbf{1}\{x > \delta\}$.
For any $\Delta > 0$, the function $x \mapsto s_{\sqrt{\Delta}}^2(\sqrt{x})$ is a
$(\Delta, 1/(2\Delta), 1)$-admissible deadzone.
\end{restatable}
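As a quick numerical sanity check (our own sketch, not part of the paper), the deadzone of Example~\ref{prop:s_delta_deadzone} and its closed-form derivative $\sigma'_\Delta(x) = (\sqrt{x}-\sqrt{\Delta})_+/\sqrt{x}$ can be implemented directly and the stated constants $(\Delta, 1/(2\Delta), 1)$ verified on a grid:

```python
import math

def s(x, delta):
    """Ramp s_delta(x) = (x - delta) * 1{x > delta}."""
    return max(x - delta, 0.0)

def deadzone(x, Delta):
    """sigma_Delta(x) = s_{sqrt(Delta)}(sqrt(x))^2, a Delta-admissible deadzone."""
    return s(math.sqrt(x), math.sqrt(Delta)) ** 2

def deadzone_deriv(x, Delta):
    """Closed-form derivative (sqrt(x) - sqrt(Delta))_+ / sqrt(x); zero on [0, Delta]."""
    if x <= Delta:
        return 0.0
    return (math.sqrt(x) - math.sqrt(Delta)) / math.sqrt(x)

Delta = 0.5
xs = [0.01 * k for k in range(1, 401)]  # grid on (0, 4]
# Conditions (i)-(ii): sigma and sigma' vanish on [0, Delta] and are non-negative.
assert all(deadzone(x, Delta) == 0.0 for x in xs if x <= Delta)
assert all(deadzone_deriv(x, Delta) >= 0.0 for x in xs)
# sigma' is bounded by B = 1.
assert all(deadzone_deriv(x, Delta) <= 1.0 for x in xs)
# The empirical Lipschitz constant of sigma' is at most L = 1/(2*Delta),
# and it is attained near x = Delta (the constant diverges as Delta -> 0).
L_emp = max(abs(deadzone_deriv(b, Delta) - deadzone_deriv(a, Delta)) / (b - a)
            for a, b in zip(xs, xs[1:]))
assert L_emp <= 1.0 / (2.0 * Delta) + 1e-6
```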
An issue with a deadzone based on $s_\delta$ is that the Lipschitz
constant of the derivative diverges with vanishing $\Delta$. This makes it challenging to prove sharp ``approximate interpolation'' results similar to Theorem~\ref{thm:interpolation}.
To remedy this issue, we construct a deadzone
where the Lipschitz constant of the derivative is
decoupled from $\Delta$. The following construction
is directly inspired by smooth approximations to the hinge loss
for support vector machines (see e.g. \citet{chapelle07svm}).
\begin{restatable}{ex}{sdeltagammadeadzone}
\label{prop:s_delta_gamma_deadzone}
Fix $\delta > 0$ and $\gamma > 0$.
Define $s_{\delta,\gamma}$ as:
\begin{align*}
s_{\delta,\gamma}(x) := \begin{cases}
0 &\text{if } x \leq \delta, \\
\frac{(x-\delta)^2}{4\gamma} &\text{if } x \in (\delta, \delta+2\gamma), \\
x - (\delta+\gamma) &\text{if } x \geq \delta + 2\gamma.
\end{cases}
\end{align*}
For any $\Delta > 0$ and $\gamma > 0$, the function $s_{\Delta,\gamma}$ is a $(\Delta, 1/(2\gamma), 1)$-admissible deadzone.
\end{restatable}
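The smoothed construction is equally direct to implement; the following sketch (ours) numerically confirms continuity at both breakpoints and the admissibility constants $(\Delta, 1/(2\gamma), 1)$, with the key point that the Lipschitz constant of the derivative now depends only on $\gamma$:

```python
def s_smooth(x, delta, gamma):
    """Smoothed ramp s_{delta,gamma}: zero, then quadratic, then linear."""
    if x <= delta:
        return 0.0
    if x < delta + 2.0 * gamma:
        return (x - delta) ** 2 / (4.0 * gamma)
    return x - (delta + gamma)

def s_smooth_deriv(x, delta, gamma):
    """Piecewise-linear derivative; continuous, bounded by 1, (1/(2*gamma))-Lipschitz."""
    if x <= delta:
        return 0.0
    if x < delta + 2.0 * gamma:
        return (x - delta) / (2.0 * gamma)
    return 1.0

delta, gamma = 1.0, 0.25
# The function is continuous at both breakpoints delta and delta + 2*gamma.
assert s_smooth(delta, delta, gamma) == 0.0
assert abs(s_smooth(delta + 2 * gamma, delta, gamma) - gamma) < 1e-12
# The derivative is non-negative and bounded by B = 1 ...
xs = [k * 1e-3 for k in range(0, 3001)]
assert all(0.0 <= s_smooth_deriv(x, delta, gamma) <= 1.0 for x in xs)
# ... and its empirical Lipschitz constant is at most L = 1/(2*gamma).
L_emp = max(abs(s_smooth_deriv(b, delta, gamma) - s_smooth_deriv(a, delta, gamma)) / (b - a)
            for a, b in zip(xs, xs[1:]))
assert L_emp <= 1.0 / (2.0 * gamma) + 1e-9
```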
Worked details of Examples~\ref{prop:s_delta_deadzone} and~\ref{prop:s_delta_gamma_deadzone} may be found in Appendix~\ref{app:p_results}. Our results to come will be stated in terms of an arbitrary deadzone according to Definition~\ref{def:dead}, but concrete instantiations can be found via these prescriptions.
\subsection{Adaptive control}
We are now ready to present our main result
in the setting of approximate control. The following is a general result about adaptive control with uniform approximation that can be applied with an arbitrary choice of basis.
\begin{restatable}[Adaptive control with finite-dimensional approximation]{mythm}{acfiniteapprox}
\label{thm:ac_finite_approx}
Suppose that Assumption~\ref{assmp:lyapunov} holds.
Let $\alpha_{\ell,0} := \arg\min_{\alpha \in O_\ell} \psi_\ell(\alpha)$ for $\ell \in \{p, m\}$.
Fix $B_{\alpha_p} > 0$ satisfying $\bregdp{\alpha_p}{\alpha_{p,0}} \leq B_{\alpha_p}$,
$B_{\alpha_m} > 0$, and
$R$ satisfying
\begin{align*}
R > \mu_1^{-1}\left( Q(e(0), 0) + B_{\alpha_p} + B_{\alpha_m} \right).
\end{align*}
Suppose there exists a finite $C_e$ such that
for every $T > 0$:
\begin{align}
\max_{t \in [0, T]} \norm{e(t)}_2 \leq R \text{ implies } \norm{x(T) - x_d(T)}_2 \leq C_e R. \label{eq:small_error_implies_small_state}
\end{align}
Let $\Psi : \ensuremath{\mathbb{R}}^n \rightarrow \ensuremath{\mathbb{R}}^{d \times m}$ be a locally Lipschitz feature map.
Define the constants
\begin{align*}
B_d &:= \sup_{t \geq 0} \norm{x_d(t)}_2, \\
B_x &:= C_e R + B_d, \\
B_{g_e} &:= \sup_{t \geq 0} \sup_{\norm{x}_2 \leq B_x} \opnorm{g_e(x, t)}, \\
B_{\nabla Q} &:= \sup_{t \geq 0} \sup_{\norm{e}_2 \leq R} \norm{\nabla Q(e, t)}_2, \\
B_{\mathrm{approx}} &:= \inf_{\bregdm{\alpha_m}{\alpha_{m,0}} \leq B_{\alpha_m}} \sup_{\norm{x}_2 \leq B_x} \norm{\Psi(x)\alpha_m - h(x)}_2.
\end{align*}
Let $\Delta$ be any positive constant satisfying
\begin{align*}
\Delta \geq \mu_2(\rho^{-1}(2B_{g_e} B_{\nabla Q} B_{\mathrm{approx}})),
\end{align*}
and let $\sigma_\Delta$ be a $\Delta$-admissible deadzone.
Then the dynamical system
\begin{align*}
\dot{x} &= f(x, t) + g(x, t)(u(x, t) - Y(x, t) \alpha_p - h(x)), \\
\dot{e} &= f_e(e, t) + g_e(x, t)( u(x, t) - Y(x, t) \alpha_p - h(x)), \\
u(x, t) &= Y(x, t) \hat{\alpha}_p + \Psi(x) \hat{\alpha}_m, \\
\frac{d}{dt} \nabla \psi_p(\hat{\alpha}_p) &= -\sigma'_\Delta(Q(e, t)) Y(x, t)^\mathsf{T} g_e(x, t)^\mathsf{T} \nabla Q(e, t), \\
\frac{d}{dt} \nabla \psi_m(\hat{\alpha}_m) &= - \sigma'_\Delta(Q(e, t)) \Psi(x)^\mathsf{T} g_e(x, t)^\mathsf{T} \nabla Q(e, t),
\end{align*}
with initial conditions $x(0) = x_0$,
$e(0) = m(x_0, 0)$, $\hat{\alpha}_p(0) = \alpha_{p,0}$,
and $\hat{\alpha}_m(0) = \alpha_{m,0}$ has a
solution $(x(t), e(t), \hat{\alpha}_p(t), \hat{\alpha}_m(t))$ that exists for all $t \geq 0$.
Furthermore,
\begin{align*}
\limsup_{t \rightarrow \infty} \norm{e(t)}_2 \leq \mu_1^{-1}(\Delta).
\end{align*}
\end{restatable}
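To make the adaptation law concrete, the following is a minimal, illustrative simulation (all modeling choices are ours, not the paper's): a scalar plant $\dot{x} = u - h(x)$ with $h(x) = \sin(2x)$ and no physical basis ($Y\alpha_p = 0$), a single feature $\Psi(x) = \sin(2x)$ so that the disturbance is exactly representable, Euclidean potentials, the Lyapunov function $Q(e) = e^2/2$ for the nominal error dynamics $\dot{e} = -e$, and the smoothed deadzone of Example~\ref{prop:s_delta_gamma_deadzone}. The nominal tracking term $\dot{x}_d - e$ is folded into the controller.

```python
import math

def dz_deriv(q, Delta=1e-3, gamma=1e-3):
    # derivative of the (Delta, 1/(2*gamma), 1)-admissible deadzone s_{Delta,gamma}
    return min(max((q - Delta) / (2.0 * gamma), 0.0), 1.0)

dt, T = 1e-3, 50.0
x, alpha_hat, t = 2.0, 0.0, 0.0
es = []
while t < T:
    xd, dxd = math.sin(t), math.cos(t)   # desired trajectory x_d(t) = sin(t)
    e = x - xd
    es.append(e)
    phi = math.sin(2.0 * x)              # single feature Psi(x)
    u = dxd - e + phi * alpha_hat        # nominal tracking + learned correction
    # adaptation law: d/dt alpha_hat = -sigma'_Delta(Q) * Psi(x)^T * g_e^T * grad Q
    alpha_hat += dt * (-dz_deriv(0.5 * e * e) * phi * e)
    x += dt * (u - math.sin(2.0 * x))    # Euler step of the true plant
    t += dt

assert abs(es[0]) == 2.0
assert abs(es[-1]) < 0.2           # tracking error enters a deadzone-level ball
assert abs(alpha_hat - 1.0) < 0.5  # feature weight approaches the true value 1
```

The deadzone freezes adaptation once $Q(e) \leq \Delta$, so the error settles into a neighborhood of the origin rather than converging exactly, as the theorem predicts.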
Theorem~\ref{thm:ac_finite_approx} can be used in conjunction with the results of Section~\ref{sec:rf} to obtain a high-probability guarantee for control, as illustrated by the following example.
\begin{ex}[Adaptive control with random features]
\label{ex:rf_adaptive}
Suppose for simplicity that $\bregdm{x}{y} = \frac{1}{2}\norm{x-y}_2^2$ is half the squared Euclidean distance.
Fix a positive integer $K$,
and let $\delta \in (0, 1)$.
Assume that $h\in\mathcal{F}_2(B_h)$ under
Assumption~\ref{assmp:weights_biases},
and again for simplicity assume that
the kernel is decomposable as in Section~\ref{sec:rf:examples}.
Set $B_{\alpha_m} = B_h^2/2$. Let $\{\theta_i\}_{i=1}^{K}$ be i.i.d.\ draws from $\nu$. Then, by \eqref{eq:final_rf_bound},
with probability at least $1-\delta$
there exists $\alpha_m = (\alpha_{m,1}, \ldots, \alpha_{m,K}) \in \ensuremath{\mathbb{R}}^{K d_1}$ satisfying $\norm{\alpha_{m,i}}_2 \leq B_h/K$ for $i=1, \ldots, K$ and
\begin{align*}
\sup_{\norm{x}_2 \leq B_x} \norm{ h(x) - \Psi\left(x; \{\theta_i\}_{i=1}^{K}\right) \alpha_m }_2 \leq \frac{C(h, \delta) (B_x \sqrt{n} + \sqrt{d_1})}{\sqrt{K}},
\end{align*}
with $\Psi(x; \{\theta_i\}_{i=1}^{K}) = \begin{bmatrix} \Phi(x, \theta_1) & \cdots & \Phi(x, \theta_K) \end{bmatrix} \in \ensuremath{\mathbb{R}}^{d \times K d_1}$.
Here, $C(h, \delta) > 0$ is a constant that depends only on $h$ and $\delta$.
Note that
\begin{align*}
\bregdm{\alpha_m}{0} = \frac{1}{2}\norm{\alpha_m}_2^2 = \frac{1}{2}\sum_{i=1}^{K} \norm{\alpha_{m,i}}_2^2 \leq \sum_{i=1}^{K} \frac{B_h^2}{2K^2} = \frac{B_h^2}{2K} \leq B_{\alpha_m},
\end{align*}
so that $B_{\mathrm{approx}} \leq \frac{C(h, \delta) (B_x \sqrt{n} + \sqrt{d_1})}{\sqrt{K}}$.
Hence, to ensure $\limsup_{t\rightarrow\infty}\norm{e(t)}_2 \leq \varepsilon$ for some $\varepsilon > 0$, it suffices to take $K$ satisfying
\begin{align*}
K \geq \frac{4 B_{g_e}^2 B_{\nabla Q}^2 C(h, \delta)^2 (B_x \sqrt{n} + \sqrt{d_1})^2}{\rho^2(\mu_2^{-1}(\mu_1(\varepsilon)))}.
\end{align*}
Suppose that
$\mu_1(x) = \mu x$, $\mu_2(x) = L x$, and $\rho(x) = \beta x$\footnote{For $V(t)$ a quadratic Lyapunov function certifying exponential stability, it is a simple calculation to show that one can take $Q(t) = \sqrt{V(t)}$ to obtain such linear functions for $\mu_1, \mu_2$ and $\rho$.}. Then this bound simplifies to
\begin{align*}
K \geq \frac{4}{\beta^2 \varepsilon^2} \left(\frac{L}{\mu}\right)^2 B_{g_e}^2 B_{\nabla Q}^2 C(h, \delta)^2 (B_x \sqrt{n} + \sqrt{d_1})^2.
\end{align*}
\end{ex}
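The final bound is straightforward to evaluate; the snippet below plugs in placeholder constants (made up for illustration, not derived from any real system) and confirms the $1/\varepsilon^2$ scaling of the required number of random features:

```python
import math

beta, mu, L = 1.0, 1.0, 2.0         # rho(x) = beta*x, mu_1(x) = mu*x, mu_2(x) = L*x
B_ge, B_gradQ, C_h = 1.0, 2.0, 1.5  # problem-dependent constants (placeholders)
B_x, n, d1 = 5.0, 4, 1              # state bound and dimensions (placeholders)

def features_needed(eps):
    """Right-hand side of the simplified bound on K for tolerance eps."""
    return (4.0 / (beta**2 * eps**2)) * (L / mu)**2 \
        * B_ge**2 * B_gradQ**2 * C_h**2 * (B_x * math.sqrt(n) + math.sqrt(d1))**2

K = math.ceil(features_needed(0.1))   # roughly 1.74 million features for eps = 0.1
assert abs(features_needed(0.1) - 1742400.0) < 1.0
# Halving the tolerance quadruples the required number of features.
assert abs(features_needed(0.05) / features_needed(0.1) - 4.0) < 1e-9
```

The quadratic blow-up in $1/\varepsilon$ is the price of the $1/\sqrt{K}$ uniform approximation rate; the remaining constants enter only polynomially.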
\paragraph{Approximation region} For simplicity of presentation, we have chosen the approximation region in Theorem~\ref{thm:ac_finite_approx}
large enough to cover the variation of the error signal throughout
adaptation. Alternatively, the approximation region can be specified \textit{a priori}, and sliding mode control can be used to force the system to stay inside the approximation region. Such a formulation requires additional technical assumptions on the error dynamics.
\paragraph{Contraction} Assume that the error dynamics is contracting. Then we may take $Q(e, t)$ to be the Riemannian energy as in Remark~\ref{rmk:lyap_contr} and set $\psi_\ell(\cdot) = \frac{1}{2}\norm{\cdot}_2^2$
for $\ell \in \{p, m\}$ to recover the contraction metric-based adaptation law due to~\cite{brett_adapt}
\begin{equation*}
\dot{\hat{\alpha}}_m = -\Psi(x)^\mathsf{T} g_e(x, t)^\mathsf{T} M(e, t)\gamma_s(e, 0, t).
\end{equation*}
Here, $\gamma_s(e, 0, t)$ denotes the tangent vector to a geodesic in the metric $M(e, t)$ between $e$ and the origin at the endpoint $e$ (a similar metric-based update also holds for $\hat{\alpha}_p$).
\paragraph{Mirror descent} By analogy to mirror descent, the choice of potential functions $\psi_p(\cdot)$ and $\psi_m(\cdot)$ can be used to regularize the learned physical and random feature models, or can be used to improve convergence when adapted to the problem geometry~\citep{boffi_neco_imp_reg}. The random sinusoidal features considered in Section~\ref{sec:rf} are uniformly bounded in $\ell_\infty$ norm independent of the number of parameters. This observation suggests that, for a large number of features, a potential function strongly convex with respect to the $\ell_1$ norm such as the hypentropy potential due to~\citet{pmlr-v117-ghai20a} may lead to improved performance.
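As one possible discretization (ours, offered only as a sketch), the update $\frac{d}{dt}\nabla\psi_m(\hat{\alpha}_m) = -g(t)$ with the hypentropy potential $\psi_\beta(a) = \sum_i \left[a_i \operatorname{arcsinh}(a_i/\beta) - \sqrt{a_i^2 + \beta^2}\right]$ of \citet{pmlr-v117-ghai20a} amounts to a gradient step in the dual coordinates $u_i = \operatorname{arcsinh}(a_i/\beta)$, mapped back through $a_i = \beta\sinh(u_i)$:

```python
import math

def hypentropy_step(alpha_hat, grad, eta, beta):
    """One Euler step of the mirror flow under the hypentropy potential."""
    u = [math.asinh(a / beta) for a in alpha_hat]    # primal -> dual coordinates
    u = [ui - eta * gi for ui, gi in zip(u, grad)]   # gradient step in the dual
    return [beta * math.sinh(ui) for ui in u]        # dual -> primal coordinates

a0 = [0.0, 1.0, -0.5]
g = [1.0, -2.0, 0.5]
# A zero gradient leaves the parameters unchanged (the maps are mutual inverses).
a_same = hypentropy_step(a0, [0.0, 0.0, 0.0], eta=0.1, beta=0.5)
assert all(abs(a - b) < 1e-9 for a, b in zip(a_same, a0))
# Each coordinate moves opposite to its gradient component.
a1 = hypentropy_step(a0, g, eta=0.1, beta=1e-3)
assert a1[0] < a0[0] and a1[1] > a0[1] and a1[2] < a0[2]
```

Small $\beta$ produces multiplicative, exponentiated-gradient-style updates suited to sparse parameter vectors, which is the regime suggested above for many uniformly bounded features.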
\paragraph{Interpolation} We conclude our treatment of adaptive control by presenting an approximate version of Theorem~\ref{thm:interpolation}, which demonstrates
how the approximation error from finite-dimensional truncation
translates into an interpolation error for the learned dynamics approximation.
Specifically, if Theorem~\ref{thm:ac_finite_approx}
is invoked with a $(\Delta, L, B)$-admissible deadzone,
then the following result shows that the interpolation error is bounded by $O\left(\sqrt{\mu_1^{-1}(\Delta)(1+L)}\right)$. This motivates the construction in Example~\ref{prop:s_delta_gamma_deadzone}.
\begin{restatable}[Approximate interpolation]{mythm}{acapproxinterp}
\label{thm:ac_approx_interp}
Suppose the hypotheses of Theorem~\ref{thm:ac_finite_approx} hold.
Let $\sigma_\Delta$ denote a
$(\Delta, L, B)$-admissible deadzone,
and assume that $f_e$, $g_e$, and $Y$ are locally Lipschitz in
their first arguments uniformly in $t$.
Then there exist constants $C_1 > 0$ and $C_2 > 0$ not depending on
$\Delta$ such that
\begin{align*}
\limsup_{t \rightarrow \infty} \norm{g_e(x(t), t)(u(x(t), t) - Y(x(t), t) \alpha_p - h(x(t)))}_2 \leq C_1 \sqrt{\mu_1^{-1}(\Delta)(1+L)} + C_2 \mu_1^{-1}(\Delta).
\end{align*}
\end{restatable}
\subsection{Adaptive prediction}
Similar to Theorem~\ref{thm:ac_finite_approx}, the following theorem designs a predictor by leveraging the ability to uniformly approximate the unknown dynamics to a suitable degree of accuracy.
\begin{restatable}[Adaptive prediction with uniform approximation]{mythm}{dpfiniteapprox}
\label{thm:dp_finite_approx}
Suppose that the trajectory $x(t)$ of the system $\dot{x} = f(x, t)$
is uniformly bounded.
Choose a continuous and locally Lipschitz $k(\hat{x}, x)$%
such that $f(\hat{x}, t) + k(\hat{x}, x(t))$ is
contracting in a metric $M : \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \mathsf{Sym}^{n \times n}_{\geq 0}$ with rate $\lambda > 0$,
and suppose that the metric $M$ satisfies $\mu I \preccurlyeq M(\hat{x}, t) \preccurlyeq L I$ for all $\hat{x}$ and $t$.
Let $\gamma(\cdot; \hat{x}, x, t):[0, 1]\rightarrow\ensuremath{\mathbb{R}}^n$ denote a geodesic between $\hat{x}$ and $x$ in the metric $M(\hat{x}, t)$, and let $\gamma_s(s; \hat{x}, x, t)$ denote the derivative of $s \mapsto \gamma(s; \hat{x}, x, t)$.
Suppose that the map
$(\hat{x}, t) \mapsto \norm{\gamma_s(0; \hat{x}, x(t), t)}_2$
is locally bounded in $\hat{x}$ uniformly in $t$.
Fix any $B_{\alpha_p} > 0$ satisfying $\bregdp{\alpha_p}{\alpha_{p,0}} \leq B_{\alpha_p}$,
any $B_{\alpha_m} > 0$, and
any $R$ satisfying
\begin{align*}
R > \sqrt{\frac{Q(\hat{x}(0), 0) + B_{\alpha_p} + B_{\alpha_m}}{\mu}}, \:\: Q(\hat{x}, t) := E_{M(\cdot, t)}(\hat{x}, x(t)).
\end{align*}
Let $\Psi : \ensuremath{\mathbb{R}}^n \rightarrow \ensuremath{\mathbb{R}}^{n \times m}$ be a locally Lipschitz feature map.
Define the following constants
\begin{align*}
B_x &:= \sup_{t \geq 0} \norm{x(t)}_2, \\
B_{\hat{x}} &:= R + B_x, \\
B_\gamma &:= \sup_{t \geq 0} \sup_{\norm{\hat{x}}_2 \leq B_{\hat{x}}} \norm{\gamma_s(0; \hat{x}, x(t), t)}_2, \\
B_{\mathrm{approx}} &:= \inf_{\bregdm{\alpha_m}{\alpha_{m,0}} \leq B_{\alpha_m}} \sup_{\norm{\hat{x}}_2 \leq B_{\hat{x}}} \norm{\Psi(\hat{x})\alpha_m - h(\hat{x})}_2.
\end{align*}
Choose any $\Delta$ satisfying $\Delta \geq \frac{L^2 B_\gamma B_{\mathrm{approx}}}{\lambda \mu}$,
and let $\sigma_\Delta$ be a $\Delta$-admissible deadzone.
Then the dynamical system
\begin{align*}
\dot{\hat{x}} &= \hat{f}(\hat{x}, \hat{\alpha}_p, \hat{\alpha}_m, t) + k(\hat{x}, x(t)), \\
\hat{f}(\hat{x}, \hat{\alpha}_p, \hat{\alpha}_m, t) &= Y(\hat{x}, t)\hat{\alpha}_p + \Psi(\hat{x}) \hat{\alpha}_m, \\
\frac{d}{dt} \nabla \psi_p(\hat{\alpha}_p) &= -\sigma'_\Delta(Q(\hat{x}, t)) Y(\hat{x}, t)^\mathsf{T} \nabla Q(\hat{x}, t), \\
\frac{d}{dt} \nabla \psi_m(\hat{\alpha}_m) &= - \sigma'_\Delta(Q(\hat{x}, t)) \Psi(\hat{x})^\mathsf{T} \nabla Q(\hat{x}, t),
\end{align*}
with initial conditions $\hat{x}(0) = \hat{x}_0$,
$\hat{\alpha}_p(0) = \alpha_{p,0}$, and $\hat{\alpha}_m(0) = \alpha_{m,0}$ has a solution that exists for all $t \geq 0$. Furthermore,
\begin{align*}
\limsup_{t \rightarrow \infty} \norm{\hat{x}(t) - x(t)}_2 \leq \sqrt{\frac{\Delta}{\mu}}.
\end{align*}
\end{restatable}
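A toy instantiation of the predictor (all choices ours, for illustration only): the true system is $\dot{x} = a\sin(t)$ with unknown scalar $a$, i.e.\ $Y(x, t) = \sin(t)$, $\alpha_p = a$, $h = 0$; the feedback is $k(\hat{x}, x) = -k_g(\hat{x} - x)$; the metric is the identity, so $Q(\hat{x}, t) = (\hat{x} - x(t))^2$; and a smoothed deadzone with a small threshold is used, with Euler integration throughout.

```python
import math

def dz_deriv(q, Delta=1e-4, gamma=1e-4):
    # derivative of a small smoothed deadzone s_{Delta,gamma}
    return min(max((q - Delta) / (2.0 * gamma), 0.0), 1.0)

a_true, k_g, dt, T = 1.5, 2.0, 1e-3, 50.0
x, x_hat, a_hat, t = 0.0, 2.0, 0.0, 0.0
while t < T:
    Y = math.sin(t)
    q = (x_hat - x) ** 2
    # adaptation law: d/dt a_hat = -sigma'_Delta(Q) * Y^T * grad Q, grad Q = 2*(x_hat - x)
    a_dot = -dz_deriv(q) * Y * 2.0 * (x_hat - x)
    x_hat += dt * (a_hat * Y - k_g * (x_hat - x))  # predictor with feedback
    x += dt * (a_true * Y)                          # true (unknown) system
    a_hat += dt * a_dot
    t += dt

assert abs(x_hat - x) < 0.05       # prediction error converges near the deadzone
assert abs(a_hat - a_true) < 0.2   # sin(t) is persistently exciting, so a_hat -> a
```

Consistent with the theorem, only convergence of $\hat{x}(t)$ to $x(t)$ is guaranteed in general; parameter convergence here relies on the persistently exciting regressor.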
\paragraph{Duality} The proof of Theorem~\ref{thm:dp_finite_approx} highlights a duality between the nonlinear adaptive control and nonlinear adaptive prediction problems reminiscent of the duality between LQR and Kalman filtering in linear control theory. Intuitively, any model capable of predicting the time evolution of a system could be used to control the system. Conversely, a model that can be used to control a system could instead be used to predict its evolution.
\paragraph{Interpolation} Theorem~\ref{thm:dp_finite_approx} assumes that the true system state $x(t)$ is measured continuously and concludes that the learned prediction $\hat{x}(t)$ will asymptotically become consistent with the observed measurements up to a level specified by the accuracy of the uniform approximation. Applying duality, the interpolation result in Theorem~\ref{thm:ac_approx_interp} shows that the learned model $\hat{f}(\hat{x}, \hat{\alpha}_p, \hat{\alpha}_m, t)$ becomes approximately consistent with the true model along the trajectory $x(t)$.
\paragraph{Discrete sampling} In practical applications, measurements of the true system state are obtained at discrete instants, and an open-loop predictor with fixed parameters is used to extrapolate beyond them. The parameters are then updated according to a discretized adaptation law when measurements are received. In
Appendix~\ref{app:sample}, we demonstrate how the nominal contraction properties required by Theorem~\ref{thm:dp_finite_approx} can be preserved with discrete measurements by taking the feedback term $k(\hat{x}, x)$ to have a sufficiently high contraction rate in comparison to the spacing between measurements $\Delta t$.
\section{Problem Formulation}
\label{sec:problem}
\paragraph{Adaptive control} We study nonlinear dynamical systems in \textit{matched uncertainty form}
\begin{equation}
\label{eqn:gen_dyn}
\dot{x} = f(x, t) + g(x, t)\left(u(x, t) - Y(x, t) \alpha_p - h(x)\right),
\end{equation}
where $f:\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}^n$ is the ``nominal dynamics'' representing the behavior of the system in the absence of any inputs, $g : \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_{\geq 0}\rightarrow \ensuremath{\mathbb{R}}^{n\times d}$ is the control matrix describing how an input enters the system, $u:\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_{\geq 0}\rightarrow \ensuremath{\mathbb{R}}^d$ is the control input chosen by the learner,
$Y : \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}^{d \times p}$
is a matrix of basis functions describing the system's physical structure, $\alpha_p \in \ensuremath{\mathbb{R}}^p$ is a corresponding vector of physical parameters,
and $h \in \mathcal{H}$ is a disturbance in an operator-valued RKHS $\mathcal{H}$ of functions mapping $\ensuremath{\mathbb{R}}^n\mapsto\ensuremath{\mathbb{R}}^d$~\citep{carmeli08operator}\footnote{A formal definition of an operator-valued RKHS will be provided in Section~\ref{sec:np_rslts}.}. Both $h$ and $\alpha_p$ are unknown, and the goal is to drive $x(t)$ to a bounded desired trajectory $x_d(t)$ by learning a suitable input $u(x, t)$ online. As a supervisory signal, the learner observes an error $e(t) \in \ensuremath{\mathbb{R}}^s$ at each $t$ with dynamics
\begin{align}
\dot{e} = f_e(e, t) + g_e(x, t) (u(x, t) - Y(x, t) \alpha_p - h(x)), \label{eq:error_dynamics}
\end{align}
where $f_e : \ensuremath{\mathbb{R}}^s \times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}^s$ and $g_e : \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}^{s \times d}$.
\begin{rmk}
Our formulation with $h$ autonomous can be relaxed by considering an RKHS of functions mapping $\ensuremath{\mathbb{R}}^{n+1} \mapsto \ensuremath{\mathbb{R}}^{d}$, i.e., by treating time explicitly as an input variable.
\end{rmk}
\paragraph{Adaptive prediction} We study nonlinear dynamical systems that can be additively decomposed
\begin{equation*}
\dot{x} = f(x, t) = Y(x, t)\alpha_p + h(x),
\end{equation*}
where $f:\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_{\geq 0}\rightarrow\ensuremath{\mathbb{R}}^n$ is an unknown dynamics
composed of terms that have a similar interpretation to the control setting. The goal is to learn an approximation $\hat{f}: \ensuremath{\mathbb{R}}^n\times \ensuremath{\mathbb{R}}_{\geq 0}\rightarrow \ensuremath{\mathbb{R}}^n$ of the true dynamics $f$ by designing an estimator
\begin{equation}
\label{eqn:dyn_predict}
\dot{\hat{x}} = \hat{f}(\hat{x}, t) + k(\hat{x}, x(t))
\end{equation}
that will ensure $\hat{x}(t)$ asymptotically approaches $x(t)$. In \eqref{eqn:dyn_predict}, $k:\ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}^n\rightarrow\ensuremath{\mathbb{R}}^n$ is a feedback term satisfying $k(x, x) = 0$ for all $x$ that is used to ensure $\hat{x}(t)$ remains close to $x(t)$ during learning. In this setting, the error signal can be taken as the prediction error $e(t) = \hat{x}(t) - x(t)$. Moreover, the estimator state $\hat{x}(t)$ plays the role of $x(t)$ from the control setting, while $x(t)$ plays the role of the desired trajectory $x_d(t)$.
\subsection{Notation}
We consider algorithms that update estimates of the physical parameters
$\hat{\alpha}_p \in O_p \subseteq \ensuremath{\mathbb{R}}^p$ and model parameters (when applicable) $\hat{\alpha}_m \in O_m \subseteq \ensuremath{\mathbb{R}}^m$ online, where $O_p$ and $O_m$
are open convex subsets.
We fix twice differentiable mirror maps\footnote{See e.g. \cite[Section 4.1]{bubeck_fnt_book} for a definition.} (potential functions) $\psi_p : O_p \rightarrow \ensuremath{\mathbb{R}}$ (resp. $\psi_m : O_m \rightarrow \ensuremath{\mathbb{R}}$)
that are strongly convex with respect to a norm $\norm{\cdot}$ on $O_p$
(resp. $\norm{\cdot}'$ on $O_m$)
and have locally Lipschitz Hessians.
For a potential $\psi$, we let $\bregd{\alpha}{\hat{\alpha}} = \psi(\alpha) - \psi(\hat{\alpha}) - \nabla \psi(\hat{\alpha})^\mathsf{T}(\alpha - \hat{\alpha})$ denote the Bregman divergence associated with $\psi$.
We use $\norm{\cdot}_2$ to denote the $\ell_2$ norm,
$\opnorm{\cdot}$ to denote the $\ell_2\rightarrow \ell_2$ operator norm of a matrix,
$B_2^n(R)$ to denote the closed $\ell_2$ ball of radius $R$ in $\ensuremath{\mathbb{R}}^n$,
$\mathbb{S}^{n-1}$ to denote the unit sphere in $\ensuremath{\mathbb{R}}^n$, $\ensuremath{\mathbb{R}}_{\geq 0}$ to denote the non-negative reals, and
$\mathsf{Sym}_{\geq 0}^{n \times n}$ to denote the set of symmetric positive semidefinite $n \times n$ matrices.
More generally, for a normed vector space $E$, $\norm{\cdot}_E$ denotes
its norm, and $B_E(R)$ denotes a closed ball in $E$ of radius $R$.
For a measure $\nu$, measurable space $\Theta$, and positive integer $q$,
the space $L_2^q(\Theta, \nu)$ denotes the real Hilbert space of
square integrable measurable functions $f : \Theta \rightarrow \ensuremath{\mathbb{R}}^q$ with norm
$\norm{f}_{L_2^q(\Theta, \nu)}^2 = \int_\Theta \norm{f(\theta)}_2^2 d\nu(\theta)$.
We will often drop the dependence on $q$ when it is clear from the context. Finally, for a positive definite metric $M : \ensuremath{\mathbb{R}}^n \rightarrow \mathsf{Sym}_{\geq 0}^{n \times n}$,
the Riemannian energy $E_M : \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \rightarrow \ensuremath{\mathbb{R}}_{\geq 0}$
is defined as:
\begin{align*}
E_M(x, y) := \inf_{\gamma} \int_0^1 \gamma_s(s)^\mathsf{T} M(\gamma(s)) \gamma_s(s) ds, \:\: \gamma_s(s) = \frac{d \gamma}{ds}(s),
\end{align*}
where the infimum ranges over smooth curves $\gamma$ satisfying
$\gamma(0) = x$ and $\gamma(1) = y$.
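For a constant metric the geodesics are straight lines and the energy reduces to $E_M(x, y) = (y-x)^\mathsf{T} M (y-x)$. The following sketch (an arbitrary positive definite $2\times 2$ metric of our choosing) discretizes the energy integral and checks this, along with the fact that any other curve between the same endpoints has strictly larger energy:

```python
import math

M = [[2.0, 0.5], [0.5, 1.0]]          # constant positive definite metric
x, y = [0.0, 0.0], [1.0, 2.0]

def curve_energy(gamma, N=2000):
    """Discretize int_0^1 gamma_s(s)^T M gamma_s(s) ds with finite differences."""
    E, h = 0.0, 1.0 / N
    for k in range(N):
        g0, g1 = gamma(k * h), gamma((k + 1) * h)
        gs = [(b - a) / h for a, b in zip(g0, g1)]
        E += h * sum(gs[i] * M[i][j] * gs[j] for i in range(2) for j in range(2))
    return E

straight = lambda s: [x[i] + s * (y[i] - x[i]) for i in range(2)]
bumped = lambda s: [straight(s)[0] + 0.3 * math.sin(math.pi * s), straight(s)[1]]

d = [y[i] - x[i] for i in range(2)]
closed_form = sum(d[i] * M[i][j] * d[j] for i in range(2) for j in range(2))
assert abs(curve_energy(straight) - closed_form) < 1e-9   # straight line attains E_M
assert curve_energy(bumped) > closed_form + 0.5           # any detour costs energy
```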
\subsection{Assumptions}
To make the above learning problems tractable and to simplify our presentation of results, we require some standard definitions and assumptions. The first requirement is regularity of the nominal dynamics, control matrix, and basis functions.
\begin{mydef}
\label{def:local_lip_local_bound}
Let $E_1$ and $E_2$ be normed vector spaces.
A function $f(x, t)$ mapping $ E_1 \times \ensuremath{\mathbb{R}}_{\geq 0} \mapsto E_2$ is said to be locally Lipschitz in $x$ if for every finite $T > 0$ and $R > 0$,
\begin{equation*}
\sup_{t \in [0, T]} \sup_{\substack{\norm{x}_{E_1} \leq R,\\ \norm{y}_{E_1} \leq R, \\ x \neq y}} \frac{ \norm{f(x, t) - f(y, t)}_{E_2}}{\norm{x-y}_{E_1}} < \infty.
\end{equation*}
Furthermore, $f$ is said to be locally bounded in $x$ uniformly in $t$
if for every finite $R > 0$,
\begin{equation*}
\sup_{t \in \ensuremath{\mathbb{R}}_{\geq 0}} \sup_{\norm{x}_{E_1} \leq R} \norm{f(x, t)}_{E_2} < \infty.
\end{equation*}
\end{mydef}
\begin{assmp}[Dynamics regularity]
\label{assmp:basic}
The functions $f$, $g$, and $Y$ are known to the learner. Moreover, $f$, $g$, $Y$, and $h$ are locally Lipschitz in $x$ and locally bounded in $x$ uniformly in $t$.
\end{assmp}
Our second requirement is a set of reasonable conditions on the error to ensure it provides a suitable signal for learning.
\begin{assmp}[Error regularity]
\label{assmp:error}
$f_e$ and $g_e$ are locally Lipschitz in their first argument and locally bounded in their first argument uniformly in $t$. Moreover, the following three conditions hold:
\begin{enumerate}[(i)]
\item In the absence of the disturbance and any input, zero error is a fixed point,
\begin{align}
\label{eqn:error_fixed_pint}
f_e(0, t) &= 0 \:\: \text{for all}\:\: t \geq 0.
\end{align}
\item Bounded error implies a bounded deviation from the desired trajectory,
\begin{align}
\label{eq:bounded_error_bounded_x}
\sup_{t \in [0, T]} \norm{e(t)}_2 < \infty &\text{ implies } \sup_{t \in [0, T]} \norm{x(t)-x_d(t)}_2 < \infty \:\: \text{for all}\:\: T > 0.
\end{align}
\item A convergent error signal implies a convergent trajectory,
\begin{align}
\label{eq:error_signal_asymp}
\lim_{t \rightarrow \infty} \norm{e(t)}_2 = 0 &\text{ implies } \lim_{t \rightarrow \infty} \norm{x(t)-x_d(t)}_2 = 0.
\end{align}
\end{enumerate}
\end{assmp}
To demonstrate that such error signals can be constructed in practice, we provide a few simple illustrative examples.
\begin{ex}[Systems with regularity]
Consider a system satisfying Assumption~\ref{assmp:basic}. Then $e(t) = x(t) - x_d(t)$ satisfies the requirements in Assumption~\ref{assmp:error}.
\end{ex}
\begin{ex}[Controllable linear time-invariant systems]
Consider the linear time-invariant system $f(x, t) = Ax$ and $g(x, t) = B$ with the pair $(A, B)$ controllable. Let $z(t) \in \ensuremath{\mathbb{R}}^n$ denote the state of the system expressed in control canonical form, and let $z_d(t)\in\ensuremath{\mathbb{R}}^n$ denote the corresponding desired trajectory. Define $e(t) = H(s)\left(z_1(t) - z_{d, 1}(t)\right)$ where $H(s)$ is a stable transfer function with at most $n-1$ poles and $z_i(t)$ denotes the $i^{\text{th}}$ component of $z$. Then $e(t)$ satisfies the requirements of Assumption~\ref{assmp:error}.
\end{ex}
The following stability assumption on the error model is key to our analysis. This assumption is equivalent to requiring that in the absence of any disturbance and adaptive input, the system will nominally tend to the desired trajectory.
\begin{assmp}[Lyapunov stability of the error]
\label{assmp:lyapunov}
The error system \eqref{eq:error_dynamics} admits a continuously differentiable Lyapunov function $Q: \ensuremath{\mathbb{R}}^s\times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}$ satisfying for every $e \in \ensuremath{\mathbb{R}}^s$ and $t \geq 0$,
\begin{enumerate}[(i)]
\item $\nabla Q(e, t)$ and $\frac{\partial Q}{\partial t}(e, t)$ are
locally bounded in $e$ uniformly in $t$,
\item $\nabla Q(e, t)$ is locally Lipschitz in $e$,
\item $\ip{\nabla Q(e, t)}{f_e(e, t)} + \frac{\partial Q}{\partial t}(e, t) \leq -\rho(\norm{e}_2)$, and
\item $\mu_1(\norm{e}_2) \leq Q(e, t) \leq \mu_2(\norm{e}_2)$,
\end{enumerate}
where $\rho, \mu_1,$ and $\mu_2$ denote class-$\mathcal{K}_\infty$ functions.
\end{assmp}%
While we focus on Lyapunov stability of the error dynamics, our results encompass incremental forms of stability such as contraction~\citep{lohmiller98contraction}.
\begin{rmk}[Contraction]
\label{rmk:lyap_contr}
We say that the error system is contracting in a metric $M:\ensuremath{\mathbb{R}}^s\times\ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \mathsf{Sym}^{s\times s}_{\geq 0}$ if for some $\lambda > 0$,
\begin{align}
\frac{\partial f_e}{\partial e}(e, t)^\mathsf{T} M(e, t) + M(e, t) \frac{\partial f_e}{\partial e}(e, t) + \dot{M}(e, t) \preccurlyeq -2\lambda M(e, t), \:\: \forall e \in \ensuremath{\mathbb{R}}^s, t \in \ensuremath{\mathbb{R}}_{\geq 0}.
\end{align}
Taking the first variation of the Riemannian energy between the error $e$ and the zero trajectory $Q(e, t) = E_{M(\cdot, t)}(e, 0)$ shows that $\ip{\nabla Q(e, t)}{f_e(e, t)} + \frac{\partial Q}{\partial t}(e, t) \leq -2\lambda Q(e, t)$, so that the energy serves as an exponentially stable Lyapunov function. This correspondence will be used in the prediction setting with $e(t) = \hat{x}(t) - x(t)$.
\end{rmk}
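As a minimal numeric illustration (example values ours), for a linear time-invariant error system $f_e(e, t) = Ae$ in the constant metric $M = I$ the contraction condition reduces to $A^\mathsf{T} + A + 2\lambda I \preccurlyeq 0$, and any two trajectories then approach each other at rate at least $\lambda$:

```python
import math

A = [[-1.0, 0.5], [0.0, -2.0]]
lam = 0.9
# S = A^T + A + 2*lambda*I must be negative semidefinite.
S = [[2.0 * A[0][0] + 2.0 * lam, A[0][1] + A[1][0]],
     [A[0][1] + A[1][0], 2.0 * A[1][1] + 2.0 * lam]]
# 2x2 negative semidefiniteness: non-positive diagonal, non-negative determinant.
assert S[0][0] <= 0.0 and S[1][1] <= 0.0
assert S[0][0] * S[1][1] - S[0][1] * S[1][0] >= 0.0

# The difference d(t) of two trajectories obeys d' = A d, so ||d(t)|| decays at
# least like exp(-lam*t); check by Euler integration (small slack for the scheme).
d, dt, t = [1.0, -1.0], 1e-3, 0.0
n0 = math.hypot(d[0], d[1])
while t < 3.0:
    d = [d[0] + dt * (A[0][0] * d[0] + A[0][1] * d[1]),
         d[1] + dt * (A[1][0] * d[0] + A[1][1] * d[1])]
    t += dt
assert math.hypot(d[0], d[1]) <= 1.05 * n0 * math.exp(-lam * 3.0)
```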
\section{Discrete sampling}
\label{app:sample}
Assume that measurements of the true system state $\left\{x(t_i)\right\}_{i=0}^{\infty}$ are received at potentially non-uniformly spaced intervals $t_i = t_0 + \sum_{i'=0}^{i-1} \Delta t_{i'}$. Denote $\hat{x}_i = \hat{x}(t_i)$ and let $\phi_{t_i+\Delta t_i}(\hat{x}_i)$ denote the flow from time $t_i$ to time $t_{i+1} = t_i + \Delta t_i$ of the
system $\dot{\hat{x}} = \hat{f}(\hat{x}, t)$ (the open-loop predictor with parameters held fixed) starting at $\hat{x}(t_i) = \hat{x}_i$. We are interested in the contraction properties of the hybrid system
\begin{align*}
\hat{x}_{i+1/2} &= \phi_{t_i + \Delta t_i}(\hat{x}_i), \:\: \hat{x}_{i+1} = k_i(\hat{x}_{i+1/2}, x_{i+1}),
\end{align*}
where $x_{i+1}$ denotes the measurement $x(t_{i+1})$. The following result is similar to~\citet[Eq. 6]{process_control}.
\begin{restatable}[]{myprop}{sample}
\label{prop:disc_meas}
Suppose that there exists some $\Theta : \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}^{n\times n}$ and $0 < \beta < 1$ such that $y_{i+1} = k_i(y_i, x)$ is contracting as a discrete-time dynamical system with rate $\beta$ for any $x$, i.e.,
\begin{equation*}
F_i := \Theta(y_{i+1}, t_{i+1})\frac{\partial k_i}{\partial y}(y_i, x)\Theta(y_i, t_i)^{-1}, \:\: F_i^\mathsf{T} F_i \preccurlyeq \beta I.
\end{equation*}
Assume that $k_i(x, x) = x$ for all $x \in \ensuremath{\mathbb{R}}^n$, and denote by
\begin{align*}
\bar{\lambda}_i &= \sup_{t\in [t_i, t_{i+1}]}\lambda_{\max}\Bigg\{2\,\mathsf{Sym}\left(\left(\dot{\Theta}(\hat{x}(t), t) + \Theta(\hat{x}(t), t)\frac{\partial \hat{f}}{\partial \hat{x}}(\hat{x}(t), t)\right)\Theta(\hat{x}(t), t)^{-1}\right)\Bigg\}
\end{align*}
the maximum expansion rate of the open loop dynamics between $t_i$ and $t_{i+1}$ in the metric $M(\hat{x}, t) = \Theta(\hat{x}, t)^\mathsf{T} \Theta(\hat{x}, t)$. Then the Riemannian energy in the metric $M(\hat{x}, t)$ obeys
\begin{equation*}
E(\hat{x}_{i+1}, x_{i+1}) \leq \beta e^{\bar{\lambda}_i \Delta t_i} E(\hat{x}_i, x_i).
\end{equation*}
\end{restatable}
\begin{proof}
Let $M_{i}(\cdot) = M(\cdot, t_{i})$.
Let $t_{i+1/2} = t_i + \Delta t_i^{-}$ denote the instant before the measurement. Let $\gamma^{i+1}:[0, 1] \rightarrow \ensuremath{\mathbb{R}}^n$ denote a geodesic in the metric $M_{i+1}(\cdot)$ between $\hat{x}_{i+1}$ and $x_{i+1}$,
and let $\gamma^{i+1}_s = \frac{d}{ds} \gamma^{i+1}$. The Riemannian energy under the metric $M_{i+1}$ is then
\begin{align*}
E(\hat{x}_{i+1}, x_{i+1}) &= \int_0^1 \gamma_s^{i+1}(s)^\mathsf{T} M_{i+1}\left(\gamma^{i+1}(s)\right)\gamma_s^{i+1}(s) ds.
\end{align*}
Now let $\gamma^{i+1/2}: [0, 1]\rightarrow \ensuremath{\mathbb{R}}^n$ denote a geodesic in the metric $M_{i+1/2}(\cdot)$ between $\hat{x}_{i+1/2}$ and $x_{i+1}$. Observe that because $k_i(x, x) = x$ for all $x$, $\hat{x}_{i+1/2} = x_{i+1}$ is a fixed point. Then, by contraction of $k(\hat{x}, x)$ in $\hat{x}$ with rate $0 < \beta < 1$ (cf.~\citet{lohmiller98contraction},~\citet{pham08}):
\begin{align*}
\int_0^1 \gamma_s^{i+1}(s)^\mathsf{T} M_{i+1}\left(\gamma^{i+1}(s)\right)\gamma_s^{i+1}(s) \:ds \leq \beta \int_0^1 \gamma_s^{i+1/2}(s)^\mathsf{T} M_{i+1/2}\left(\gamma^{i+1/2}(s)\right)\gamma_s^{i+1/2}(s) \:ds.
\end{align*}
Let $\gamma^i : [0, 1]\rightarrow \ensuremath{\mathbb{R}}^n$ denote a geodesic in the metric $M_i(\cdot)$ between $\hat{x}_i$ and $x_i$, let $\psi_t(s) = \Theta\left(\gamma^{i}(s), t\right)\gamma_s^{i}(s)$, and define
\begin{align*}
J_t(s) &= \left(\dot{\Theta}\left(\gamma^{i}(s), t\right) + \Theta\left(\gamma^{i}(s), t\right)\frac{\partial \hat{f}}{\partial \hat{x}}\left(\gamma^{i}(s), t\right)\right)\Theta\left(\gamma^{i}(s), t\right)^{-1}.
\end{align*}
Observe that $\frac{d}{dt}\psi_t(s) = J_t(s)\psi_t(s)$. Hence,
\begin{align*}
\frac{d}{dt}\left[\gamma_s^{i}(s)^\mathsf{T} M\left(\gamma^{i}(s), t\right)\gamma_s^{i}(s)\right] &= 2 \psi_t(s)^\mathsf{T} J_t(s) \psi_t(s)\\
&\leq \bar{\lambda}_i \psi_t(s)^\mathsf{T} \psi_t(s)\\
&= \bar{\lambda}_i \gamma_s^{i}(s)^\mathsf{T} M\left(\gamma^{i}(s), t\right)\gamma_s^{i}(s).
\end{align*}
Then, by the comparison lemma,
\begin{align*}
\gamma_s^{i+1/2}(s)^\mathsf{T} M_{i+1/2}\left(\gamma^{i+1/2}(s)\right)\gamma_s^{i+1/2}(s) &\leq e^{\bar{\lambda}_i \Delta t_i}\gamma_s^{i}(s)^\mathsf{T} M_i\left(\gamma^{i}(s)\right)\gamma_s^{i}(s).
\end{align*}
Plugging this into our previous bound,
\begin{align*}
E(\hat{x}_{i+1}, x_{i+1}) &\leq \beta e^{\bar{\lambda}_i \Delta t_i}\int_0^1 \gamma_s^{i}(s)^\mathsf{T} M_i\left(\gamma^{i}(s)\right)\gamma_s^{i}(s) \:ds.
\end{align*}
Observing that $\int_0^1 \gamma_s^{i}(s)^\mathsf{T} M_i\left(\gamma^{i}(s)\right)\gamma_s^{i}(s)\,ds = E(\hat{x}_i, x_i)$ completes the proof.
\end{proof}
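For a scalar linear example the bound can be checked exactly (our own illustrative constants, identity metric): both the true state and the open-loop predictor flow under $\dot{z} = az$, so the open-loop flow expands the energy $E = (\hat{x}-x)^2$ by $e^{2a\Delta t}$, while the measurement update $\hat{x} \leftarrow x + \sqrt{\beta}(\hat{x} - x)$ is a discrete contraction with $F^\mathsf{T} F = \beta < 1$:

```python
import math

a, beta, dt = 0.5, 0.25, 0.1
x, x_hat = 1.0, 3.0
E = (x_hat - x) ** 2
factor = beta * math.exp(2.0 * a * dt)   # per-step energy bound of the proposition
for _ in range(20):
    x, x_hat = x * math.exp(a * dt), x_hat * math.exp(a * dt)  # exact open-loop flows
    x_hat = x + math.sqrt(beta) * (x_hat - x)                  # measurement update
    E_new = (x_hat - x) ** 2
    assert E_new <= factor * E + 1e-12   # bound holds (and is tight) at every step
    E = E_new
assert E < 1e-8   # beta * exp(2*a*dt) < 1, so the prediction error converges
```

The example also illustrates the trade-off discussed above: a larger open-loop expansion rate or measurement spacing must be offset by a stronger measurement contraction $\beta$ to keep the per-step factor below one.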
\section{Preliminary results}
Let $E$ and $E'$ be normed vector spaces.
Denote by $\mathcal{L}(E, E')$ the space
of linear operators from $E$ to $E'$
equipped with the operator norm.
A function $f(x, t)$ mapping $E \times \ensuremath{\mathbb{R}}_{\geq 0} \mapsto F$
with $E$ and $F$ normed vector spaces is said to be
locally bounded in $x$ if for every $R > 0$ and $T > 0$,
\begin{align*}
\sup_{t \in [0, T]} \sup_{\norm{x}_E \leq R} \norm{f(x, t)}_{F} < \infty.
\end{align*}
\begin{myprop}
\label{prop:local_composition}
Let $\{E_i\}_{i=1}^{2}$, $\{F_i\}_{i=1}^{3}$ be normed vector spaces and let
$f_i : E_i \times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow F_i$ for $i \in \{1, 2\}$
be locally
Lipschitz. Then the following hold
\begin{enumerate}[(i)]
\item If $E_1 = E_2$ and $F_1 = F_2$, then the sum
$(x, t) \mapsto f_1(x, t) + f_2(x, t)$
is locally Lipschitz.
\item If $E_1 = E_2$, $F_1 = \mathcal{L}(F_2, F_3)$,
and both $f_1$ and $f_2$ are locally bounded,
then the
product $(x, t) \mapsto f_1(x, t) f_2(x, t)$ is locally
Lipschitz and locally bounded.
\item If $F_1 = E_2$ and both $f_1$ and $f_2$ are locally bounded, then the composition $(x, t) \mapsto f_2(f_1(x, t), t)$ is locally Lipschitz and locally bounded.
\end{enumerate}
\end{myprop}
\begin{proof}
Let $R$ and $T$ be arbitrary positive constants.
Let $C = C(R, T) > 0$ be a finite positive constant such that:
\begin{align*}
\sup_{t \in [0, T]} \norm{f_1(x, t) - f_1(y, t)}_{F_1} &\leq C \norm{x-y}_{E_1} \:\: \forall x,y \in B_{E_1}(R), \\
\sup_{t \in [0, T]} \norm{f_2(x, t) - f_2(y, t)}_{F_2} &\leq C \norm{x-y}_{E_2} \:\: \forall x,y \in B_{E_2}(R).
\end{align*}
Now, let $x, y \in B_{E_1}(R)$ and $t \in [0, T]$
be arbitrary.
\paragraph{The sum $(x, t) \mapsto f_1(x, t) + f_2(x, t)$.}
Observe that
\begin{align*}
\norm{f_1(x, t) + f_2(x, t) - (f_1(y, t) + f_2(y, t))}_{F_1} &\leq \norm{f_1(x, t) - f_1(y, t)}_{F_1} + \norm{f_2(x, t) - f_2(y, t)}_{F_1} \\
&\leq 2C \norm{x-y}_{E_1}.
\end{align*}
\paragraph{The product $(x, t) \mapsto f_1(x, t) f_2(x, t)$.}
Let $C' = C'(R, T) > 0$ be a finite positive constant such that:
\begin{align*}
\sup_{t \in [0, T]} \sup_{\norm{x}_{E_1} \leq R} \max\{\norm{f_1(x, t)}_{\mathcal{L}(F_2, F_3)}, \norm{f_2(x, t)}_{F_2}\} \leq C'.
\end{align*}
Now observe that
\begin{align*}
\norm{f_1(x, t)f_2(x, t) - f_1(y, t)f_2(y, t)}_{F_3} &\leq \norm{f_1(x, t)-f_1(y,t)}_{\mathcal{L}(F_2, F_3)}\norm{f_2(x,t)}_{F_2} \\
&\qquad + \norm{f_1(y,t)}_{\mathcal{L}(F_2, F_3)}\norm{f_2(x,t)-f_2(y,t)}_{F_2} \\
&\leq 2C C' \norm{x-y}_{E_1}.
\end{align*}
The fact that the product is locally bounded is immediate.
\paragraph{The composition $(x, t) \mapsto f_2(f_1(x, t), t)$.}
First, let $C' = C'(R, T)$ be such that:
\begin{align*}
\sup_{t \in [0, T]} \sup_{\norm{x}_{E_1}\leq R} \norm{f_1(x, t)}_{F_1} \leq C'.
\end{align*}
Next, let $C'' = C''(R, T)$ be such that:
\begin{align*}
\sup_{t \in [0, T]} \norm{f_2(x, t) - f_2(y, t)}_{F_2} \leq C'' \norm{x - y}_{E_2} \:\: \forall x, y \in B_{E_2}(C').
\end{align*}
Then we have:
\begin{align*}
\norm{f_2(f_1(x, t), t) - f_2(f_1(y, t), t)}_{F_2} \leq C'' \norm{f_1(x, t) - f_1(y, t)}_{F_1} \leq C C'' \norm{x-y}_{E_1}.
\end{align*}
This shows that the composition is locally Lipschitz.
The fact that the composition is locally bounded is immediate.
\end{proof}
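The constant $2CC'$ in the product rule can be checked numerically. The sketch below (our own illustration, not part of the proof) takes $f_1 = \sin$ and $f_2 = \cos$, both $1$-Lipschitz and bounded by $1$ (so $C = C' = 1$), and verifies that the empirical Lipschitz constant of the product does not exceed $2CC' = 2$:

```python
import math

# f1 = sin and f2 = cos are 1-Lipschitz and bounded by 1 on all of R,
# so with C = C' = 1 the product rule predicts a Lipschitz constant
# of at most 2*C*C' = 2 for the product x -> sin(x)*cos(x).
f1, f2 = math.sin, math.cos
xs = [-5.0 + 0.02 * i for i in range(501)]     # grid on the ball B(5)
vals = [f1(x) * f2(x) for x in xs]
lip = max(
    abs(vals[i] - vals[j]) / abs(xs[i] - xs[j])
    for i in range(len(xs)) for j in range(i)
)
# The product is sin(x)*cos(x) = sin(2x)/2, whose true Lipschitz constant is 1.
assert 0.9 < lip <= 2.0
```

The bound holds with room to spare here since the true Lipschitz constant of $\frac{1}{2}\sin(2x)$ is $1$.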
Now, let $E$ and $F$ be normed vector spaces and
let $U \subseteq E$.
A function $ f: U \rightarrow F$ is said to be globally
Lipschitz (uniformly Lipschitz) if
\begin{align*}
\sup_{x, y \in U, x \neq y} \frac{\norm{f(x) - f(y)}_F}{\norm{x-y}_E} < \infty.
\end{align*}
A function $f : U \rightarrow F$ is said to be globally bounded
(uniformly bounded) if:
\begin{align*}
\sup_{x \in U} \norm{f(x)}_F < \infty.
\end{align*}
\begin{myprop}
\label{prop:global_lipschitz_composition}
Let $\{E_i\}_{i=1}^{2}$ and $\{F_i\}_{i=1}^{2}$ be collections of normed vector spaces.
Let $f_i : U_i \rightarrow F_i$ with $U_i \subseteq E_i$ for $i \in \{1, 2\}$ be globally Lipschitz. Then the following hold
\begin{enumerate}
\item If $E_1 = E_2$ and $F_1 = F_2$, then the sum $x \mapsto f_1(x) + f_2(x)$ is globally Lipschitz.
\item If $E_1 = E_2$, $F_1 = \mathcal{L}(F_2, F_3)$, and both $f_1$ and $f_2$ are globally bounded, then the product $x \mapsto f_1(x) f_2(x)$ is globally Lipschitz and globally bounded.
\item If $F_1 = E_2$ and both $f_1$ and $f_2$ are globally bounded, then the composition $x \mapsto f_2(f_1(x))$ is globally Lipschitz and globally bounded.
\end{enumerate}
\end{myprop}
\begin{proof}
The proof is nearly identical to that of Proposition~\ref{prop:local_composition}.
\end{proof}
The following result generalizes Barbalat's lemma to the case when the limiting
value of a function $f$ only converges to a ball. We first state Barbalat's lemma, and then state our generalization.
\begin{myprop}[Barbalat's lemma]
Let $f \in C^1(\ensuremath{\mathbb{R}}_{\geq 0}, \ensuremath{\mathbb{R}})$ be such that $\lim_{t\rightarrow \infty} f(t)$ exists and is finite. Further assume that $f'$ is uniformly continuous. Then $\lim_{t\rightarrow \infty} f'(t) = 0$.
\end{myprop}
\begin{myprop}[Generalized Barbalat's lemma]
\label{prop:generalized_barbalat}
Let $f\in C^1(\ensuremath{\mathbb{R}}_{\geq 0}, \ensuremath{\mathbb{R}})$ satisfy
$\limsup_{t \rightarrow \infty} \abs{f(t) - \alpha} \leq \varepsilon$ for some $\alpha \in \ensuremath{\mathbb{R}}$ and $\varepsilon \geq 0$. Further assume that $f'$ is $L$-Lipschitz.
Then,
\begin{align*}
\limsup_{t \rightarrow \infty} \abs{f'(t)} \leq 2 \sqrt{\varepsilon L}.
\end{align*}
\end{myprop}
\begin{proof}
Suppose for a contradiction that
$\limsup_{t \rightarrow \infty} \abs{f'(t)} > 2\sqrt{\varepsilon L}$.
Then there exist a constant $c > 2\sqrt{\varepsilon L}$ and an increasing sequence $\{t_n\}_{n \geq 1}$ with $t_n \rightarrow \infty$ such that $\abs{f'(t_n)} \geq c$ for all $n \geq 1$.
Define $\delta := c/L$. Then for any $n \geq 1$, we have
\begin{align*}
\bigabs{\int_{t_n}^{t_n+\delta} f'(t) dt} &= \bigabs{ \delta f'(t_n) + \int_{t_n}^{t_n+\delta} (f'(t) - f'(t_n)) dt}, \\
&\geq \delta \abs{f'(t_n)} - \int_{t_n}^{t_n+\delta} \abs{f'(t)-f'(t_n)} dt, \\
&\geq \delta c - L \int_{t_n}^{t_n+\delta} \abs{t-t_n} dt, \\
&= \delta c - \frac{L}{2}\delta^2, \\
&= \frac{c^2}{2L} > 2\varepsilon.
\end{align*}
This lower bound implies that for any $n \geq 1$,
\begin{align*}
\abs{ f(t_n + \delta) - f(t_n) } &= \bigabs{\int_{t_n}^{t_n+\delta} f'(t) dt} \geq \frac{c^2}{2L}.
\end{align*}
This bound implies
\begin{align*}
2 \varepsilon &< \frac{c^2}{2L} \leq \limsup_{n \rightarrow \infty} \abs{f(t_n + \delta) - f(t_n)},\\
&\leq \limsup_{t \rightarrow \infty} \abs{f(t+\delta) - f(t)},\\
&\leq \limsup_{t \rightarrow \infty} \abs{f(t+\delta)-\alpha} + \limsup_{t \rightarrow \infty} \abs{f(t) - \alpha}, \\
&\leq 2\varepsilon,
\end{align*}
which yields a contradiction.
\end{proof}
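The proof hinges on the elementary estimate $\abs{f(t+\delta) - f(t)} \geq \delta \abs{f'(t)} - \frac{L}{2}\delta^2$ for an $L$-Lipschitz derivative. A quick numerical sanity check (our own illustration, with arbitrary constants) on a quadratic, where the estimate is tight, confirms both the estimate and the value obtained by maximizing the right-hand side over $\delta$:

```python
# The proof rests on the estimate |f(t+d) - f(t)| >= d*|f'(t)| - (L/2)*d**2
# for an L-Lipschitz f'.  On the quadratic f(t) = a*t - (L/2)*t**2 the
# estimate is tight, and maximizing the right-hand side over d gives
# d = |f'(0)|/L with value f'(0)**2/(2*L).
a, L = 3.0, 2.0
f = lambda t: a * t - 0.5 * L * t ** 2
for d in [0.1, 0.5, 1.0, a / L]:
    assert abs(f(d) - f(0.0)) >= d * abs(a) - 0.5 * L * d ** 2 - 1e-12
best = max(d * a - 0.5 * L * d ** 2 for d in [i / 1000.0 for i in range(1, 3001)])
assert abs(best - a ** 2 / (2 * L)) < 1e-3   # maximum value is a^2/(2L) = 2.25
```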
In adaptive control, a typical use of Barbalat's lemma
is to conclude (via deadzones) that the error signal tends to a small value. In the sequel, we will use
Barbalat's lemma in conjunction with the
generalized Barbalat's lemma (Proposition~\ref{prop:generalized_barbalat})
to argue that both the error signal and the \emph{time derivative} of the error
signal are small. The time derivative of the error
signal can be written as a nominal term plus the
error of the adaptive signal. By controlling
this quantity, we will be able to show that the
error of the adaptive signal is small as well,
allowing us to prove approximate interpolation type results (Theorem~\ref{thm:ac_approx_interp}).
\paragraph{Sharpness of the bound} Proposition~\ref{prop:generalized_barbalat}
is sharp in the following sense.
Fix any $\varepsilon > 0$ and $\omega > 0$,
and define $f(t) := \varepsilon \sin\left(\sqrt{\frac{\omega}{\varepsilon}} t\right)$.
Clearly $\limsup_{t \rightarrow \infty} \abs{f(t)} = \varepsilon$, and furthermore
\begin{align*}
f'(t) = \sqrt{\varepsilon \omega} \cos\left(\sqrt{\frac{\omega}{\varepsilon}} t\right), \:\: f''(t) = - \omega \sin\left(\sqrt{\frac{\omega}{\varepsilon}}t\right).
\end{align*}
This shows that the smallest valid global
Lipschitz constant for $f'$ is $\omega$.
Furthermore,
\begin{equation*}
\limsup_{t \rightarrow \infty} \abs{f'(t)} = \sqrt{\varepsilon \omega}.
\end{equation*}
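A numerical check of this example (our own illustration) confirms the claimed value of $\limsup_{t\rightarrow\infty}\abs{f'(t)}$ and that it sits within the bound of the proposition by exactly a factor of $2$:

```python
import math

# f(t) = eps*sin(sqrt(omega/eps)*t)  =>  f'(t) = sqrt(eps*omega)*cos(...),
# so limsup |f'| = sqrt(eps*omega), while L = sup|f''| = omega.
eps, omega = 0.25, 4.0            # sqrt(eps*omega) = 1, 2*sqrt(eps*L) = 2
freq = math.sqrt(omega / eps)
fprime = lambda t: math.sqrt(eps * omega) * math.cos(freq * t)
fprime_sup = max(abs(fprime(i * 1e-3)) for i in range(100_000))
assert abs(fprime_sup - math.sqrt(eps * omega)) < 1e-3
assert fprime_sup <= 2.0 * math.sqrt(eps * omega)   # the proposition's bound
```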
\section{Omitted proofs for Section~\ref{sec:np_rslts}}
\subsection{Proof of Theorem~\ref{thm:nonparametric_conv}}
We first state the following technical lemma.
\begin{mylemma}
\label{lem:existence}
Let $E$ denote the Banach space $E := \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^s \times L_2(\Theta, \nu)$ equipped with the norm $\norm{(x, e, \hat{\alpha})}_E := \max\{ \norm{x}_2, \norm{e}_2, \norm{\hat{\alpha}}_{L_2(\Theta, \nu)} \}$. Write $z = (x, e, \hat{\alpha})$ for $z \in E$ and define the function $F : E \times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow E$ as:
\begin{align*}
F(z, t) := \begin{bmatrix}
f(x, t) + g(x, t)\left( \int_\Theta \Phi(x, \theta) \hat{\alpha}(\theta) d\nu(\theta) - h(x) \right) \\
f_e(e, t) + g_e(x, t)\left( \int_\Theta \Phi(x, \theta) \hat{\alpha}(\theta) d\nu(\theta) - h(x)\right) \\
-\gamma \Phi(x, \cdot)^\mathsf{T} g_e(x, t)^\mathsf{T} \nabla Q(e, t)
\end{bmatrix}.
\end{align*}
Then, under Assumption \ref{assmp:second_moment_bound}, $F(z, t)$ is locally Lipschitz in $z$ with respect to $\norm{\cdot}_E$. That is, for each $R > 0$ and $T > 0$,
letting $B_E(R) := \{ z \in E : \norm{z}_E \leq R\}$,
\begin{equation*}
\sup_{t\in [0,T]} \sup_{z_1,z_2 \in B_E(R)} \frac{\norm{F(z_1, t) - F(z_2, t)}_E}{\norm{z_1 - z_2}_E} < \infty.
\end{equation*}
\end{mylemma}
\begin{proof}
By the composition rules for locally Lipschitz functions (cf. Proposition~\ref{prop:local_composition}),
it suffices to show that
the functions $\psi_1 : E \rightarrow \ensuremath{\mathbb{R}}^d$
and $\psi_2 : E \rightarrow \mathcal{L}(\ensuremath{\mathbb{R}}^{d_1}, L_2(\Theta, \nu))$ defined by
\begin{align*}
\psi_1((x, e, \hat{\alpha})) &:= \int_\Theta \Phi(x, \theta) \hat{\alpha}(\theta) d\nu(\theta), \\
\psi_2((x, e, \hat{\alpha}))(q) &:= \Phi(x, \cdot)^\mathsf{T} q \:\: \forall q \in \ensuremath{\mathbb{R}}^{d_1}
\end{align*}
are locally Lipschitz and locally bounded. We view both $\psi_1$ and $\psi_2$ as functions defined on $E$, consistent with their appearance in the definition of $F(z, t)$; however, clearly $\psi_1$ is independent of $e$ and $\psi_2$ is independent of both $e$ and $\hat{\alpha}$.
Because $\psi_1$ and $\psi_2$ do not depend on time $t$,
locally Lipschitz implies locally bounded.
We first show that $\psi_1$ is locally Lipschitz.
Fix an $R > 0$ and let $z_1 = (x_1, e_1, \hat{\alpha}_1)$, $z_2 = (x_2, e_2, \hat{\alpha}_2)$ be contained in $B_E(R)$.
By Assumption~\ref{assmp:second_moment_bound},
there exists a $C = C(R) > 0$ such that the following conditions hold:
\begin{align*}
\sup_{x \in B_2^n(R)} \int_\Theta \opnorm{\Phi(x, \theta)}^2 d\nu(\theta) &\leq C^2, \\
\int_\Theta \opnorm{\Phi(x_1, \theta) - \Phi(x_2, \theta)}^2 d\nu(\theta) &\leq C^2 \norm{x_1 - x_2}_2^2 \:\:\forall x_1,x_2 \in B_2^n(R).
\end{align*}
By the triangle inequality and Cauchy-Schwarz,
\begin{align*}
&\norm{ \psi_1(z_1) - \psi_1(z_2) }_2 \\
&\leq \sqrt{\int_\Theta \opnorm{\Phi(x_1, \theta) - \Phi(x_2, \theta)}^2 d\nu(\theta)} \sqrt{ \int_\Theta \norm{\hat{\alpha}_1(\theta)}_2^2 d\nu(\theta)} \\
&\qquad + \sqrt{\int_\Theta \opnorm{\Phi(x_1, \theta)}^2 d\nu(\theta)} \sqrt{ \int_\Theta \norm{\hat{\alpha}_1(\theta) - \hat{\alpha}_2(\theta)}_2^2 d\nu(\theta) } \\
&\leq CR \norm{x_1-x_2}_2 + C\norm{\hat{\alpha}_1 - \hat{\alpha}_2}_{L_2(\Theta, \nu)} \leq C(1+R) \norm{z_1-z_2}_E.
\end{align*}
This shows that $\psi_1$ is locally Lipschitz.
To show that $\psi_2$ is locally Lipschitz, observe that by Cauchy-Schwarz,
\begin{align*}
\norm{\psi_2(z_1) - \psi_2(z_2)}_{\mathcal{L}(\ensuremath{\mathbb{R}}^{d_1}, L_2(\Theta, \nu))} &= \sup_{\norm{q}_2 = 1} \norm{ (\Phi(x_1, \cdot) - \Phi(x_2, \cdot))^\mathsf{T} q }_{L_2(\Theta, \nu)} \\
&= \sup_{\norm{q}_2 = 1} \left( \int_\Theta \norm{ (\Phi(x_1, \theta) - \Phi(x_2, \theta))^\mathsf{T} q}_2^2 d\nu(\theta) \right)^{1/2} \\
&\leq \left( \int_\Theta \opnorm{\Phi(x_1, \theta) - \Phi(x_2, \theta)}^2 d\nu(\theta) \right)^{1/2} \\
&\leq C \norm{x_1 - x_2}_2 \leq C \norm{z_1 - z_2}_E.
\end{align*}
\end{proof}
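The Cauchy-Schwarz step used to bound $\norm{\psi_1(z_1) - \psi_1(z_2)}_2$ can be illustrated numerically. The sketch below (our own toy example, with $\nu$ the normalized counting measure on a finite $\Theta$ and scalar features) verifies the inequality on random data:

```python
import math, random

random.seed(0)
# Discrete illustration of the Cauchy-Schwarz step: with nu uniform over a
# finite Theta and scalar features Phi, check
#   | (1/K) * sum_th (Phi1(th) - Phi2(th)) * alpha(th) |
#   <= sqrt(mean (Phi1 - Phi2)^2) * sqrt(mean alpha^2).
K = 200
Phi1 = [random.gauss(0, 1) for _ in range(K)]
Phi2 = [random.gauss(0, 1) for _ in range(K)]
alpha = [random.gauss(0, 1) for _ in range(K)]
lhs = abs(sum((p1 - p2) * a for p1, p2, a in zip(Phi1, Phi2, alpha)) / K)
rhs = math.sqrt(sum((p1 - p2) ** 2 for p1, p2 in zip(Phi1, Phi2)) / K) * \
      math.sqrt(sum(a ** 2 for a in alpha) / K)
assert lhs <= rhs + 1e-12
```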
We now require the following result concerned with existence and uniqueness of solutions to ordinary differential equations defined on Banach spaces. This result will be used in conjunction with Lemma~\ref{lem:existence} to assert the existence of our nonparametric input over an interval of time. Via a Lyapunov argument, we can then extend the interval to infinity.
\begin{myprop}[Existence of a maximal solution (see e.g., Proposition 11.8 of \citet{driver04book})]
\label{prop:banach_existence}
Let $E$ be a Banach space, $U$ be an open subset of $E$, $T \subseteq \ensuremath{\mathbb{R}}$ be an interval of time containing $0$, and $F: U\times T \rightarrow E$ be a continuous vector field on $E$. Assume that $F$ is locally Lipschitz in the following sense.
For every $x_0 \in U$ and compact $I \subseteq T$,
there exists finite positive $L = L(x_0, I)$ and $R = R(x_0, I)$
such that:
\begin{align*}
\sup_{t \in I} \norm{F(x, t) - F(y, t)}_E \leq L \norm{x - y}_E \:\: \forall x, y \in B_E(x_0, R).
\end{align*}
Then for each $x_0 \in U$, there exists a maximal interval $I(x_0) = (a(x_0), b(x_0)) \subseteq T$ with $a(x_0) \in [-\infty, 0)$ and
$b(x_0) \in (0, +\infty]$ such that the ordinary differential equation
\begin{equation*}
\dot{x}(t) = F(x(t), t),\:\:\: x(0) = x_0
\end{equation*}
has a unique continuously differentiable solution $x: I(x_0) \rightarrow U$.
\end{myprop}
We may now state our proof of the main nonparametric theorem.
\npconv*
\begin{proof}
By Assumption~\ref{assmp:rf_kernel}, there exists a signed density $\alpha(\theta) \in L_2(\Theta, \nu)$ such that
\begin{equation*}
h(\cdot) = \int_{\Theta}\Phi(\cdot, \theta)\alpha(\theta)d\nu(\theta), \:\: \norm{h}_{\mathcal{H}}^2 = \norm{\alpha}^2_{L_2(\Theta, \nu)}.
\end{equation*}
Define the signed density $\hat{\alpha} : \Theta\times\ensuremath{\mathbb{R}}_{\geq 0}\rightarrow\ensuremath{\mathbb{R}}^{d_1}$ by
$\hat{\alpha}(\cdot, 0) = 0$ and the pointwise update for $\theta \in \Theta$,
\begin{equation*}
\frac{\partial\hat{\alpha}}{\partial t}(\theta, t) = -\gamma \Phi(x(t), \theta)^\mathsf{T} g_e(x(t), t)^\mathsf{T} \nabla Q(e(t), t).
\end{equation*}
Observe that by Lemma~\ref{lem:existence} and Proposition~\ref{prop:banach_existence}, there exists some maximal $T_{\max} \in (0, \infty]$ such that the curve $t\mapsto (x(t), e(t), \hat{\alpha}(t))$ exists, is unique, and is continuously differentiable. Moreover, we may write the input as
\begin{align*}
u(x, t) = \int_\Theta \Phi(x, \theta)\hat{\alpha}(\theta, t) d\nu(\theta).
\end{align*}
By means of contradiction, let us suppose that $T_{\max} < \infty$. For $t \in [0, T_{\max})$, define $\tilde{\alpha}(\theta, t) := \hat{\alpha}(\theta, t) - \alpha(\theta)$ so that
\begin{equation*}
u(\cdot, t) - h(\cdot) = \int_{\Theta}\Phi(\cdot, \theta)\tilde{\alpha}(\theta, t)d\nu(\theta).
\end{equation*}
Now consider the Lyapunov-like function $V : [0, T_{\max}) \rightarrow \ensuremath{\mathbb{R}}$,
\begin{equation*}
V(t) = Q(e(t), t) + \frac{1}{2\gamma}\norm{\tilde{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)}^2.
\end{equation*}
We note that because $L_2(\Theta, \nu)$ is a real Hilbert space,
the map $u \mapsto \norm{u}_{L_2(\Theta,\nu)}^2$ is (Fr{\'{e}}chet) differentiable with derivative $v \mapsto 2 \ip{u}{v}_{L_2(\Theta, \nu)}$. Therefore, by the differentiability of the curve $t \mapsto \tilde{\alpha}(\cdot, t)$ and the chain rule, we have:
\begin{align*}
\frac{d}{dt} \int_\Theta \norm{\tilde{\alpha}(\theta, t)}^2_2 d\nu(\theta) = 2\bigip{\tilde{\alpha}(\cdot, t)}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, t)}_{L_2(\Theta, \nu)}.
\end{align*}
Computing the time derivative, for any $t \in [0, T_{\max})$,
\begin{align*}
\dot{V}(t) &= \frac{\partial Q}{\partial t}(e(t), t) + \nabla Q(e(t), t)^\mathsf{T} \left(f_e(e(t), t) + g_e(x(t), t)\left(u(x(t), t) - h(x(t))\right)\right) \\
&\qquad + \frac{1}{\gamma}\bigip{\tilde{\alpha}(\cdot, t)}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, t)}_{L_2(\Theta, \nu)},\\
%
%
&\leq -\rho\left(\norm{e(t)}_2\right) + \nabla Q(e(t), t)^\mathsf{T} g_e(x(t), t)\left(u(x(t), t) - h(x(t))\right) \\
&\qquad + \frac{1}{\gamma}\bigip{\tilde{\alpha}(\cdot, t)}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, t)}_{L_2(\Theta, \nu)},
\end{align*}
where we have applied Assumption~\ref{assmp:lyapunov}. Now, observe that
\begin{align*}
\bigip{\tilde{\alpha}(\cdot, t)}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, t)}_{L_2(\Theta, \nu)} = -\gamma \int_{\Theta}\ip{\tilde{\alpha}(\theta, t)}{\Phi(x(t), \theta)^\mathsf{T} g_e(x(t), t) \nabla Q(e(t), t)}d\nu(\theta)
\end{align*}
so that the last two terms in $\dot{V}(t)$ cancel, and hence:
\begin{equation*}
\dot{V}(t) \leq -\rho(\norm{e(t)}_2).
\end{equation*}
Now, because $\dot{V}(t) \leq 0$ for all $t \in [0, T_{\max})$,
$V(t) \leq V(0)$.
Therefore, since $Q(e(t), t) \geq \mu_1(\norm{e(t)}_2)$
and $\mu_1$ is a class $\mathcal{K}_\infty$ function,
\begin{align*}
\sup_{t\in [0, T_{\max})}\norm{e(t)}_2 < \infty, \:\: \sup_{t\in [0, T_{\max})}\norm{\hat{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)} < \infty.
\end{align*}
Furthermore, $\sup_{t \in [0, T_{\max})} \norm{x(t)-x_d(t)}_2 < \infty$
by requirement \eqref{eq:bounded_error_bounded_x} on the error signal,
and since $x_d$ is uniformly bounded, we also have that
$\sup_{t \in [0, T_{\max})} \norm{x(t)}_2 < \infty$.
This contradicts that $T_{\max}$ is finite, so we conclude that $T_{\max} = \infty$. This implies that
\begin{align*}
\sup_{t\geq 0} \max\{\norm{x(t)}_2, \norm{e(t)}_2, \norm{\tilde{\alpha}(\cdot, t)}_{L_2(\Theta,\nu)}\} < \infty,
\end{align*}
so that $u(\cdot, t) \in \mathcal{H}$ for all $t \geq 0$. This proves the first two claims. Now, integrating both sides of $\dot{V}(t)$,
\begin{equation*}
\int_0^\infty \rho\left(\norm{e(t)}_2\right)dt \leq V(0).
\end{equation*}
To complete the proof, we now need to show that $t \mapsto \rho(\norm{e(t)}_2)$ is uniformly
continuous on $[0, \infty)$ and apply Barbalat's lemma.
We first show that $e(t)$ is uniformly Lipschitz in $t$.
To do so, we bound $\sup_{t \geq 0} \norm{\dot{e}(t)}_2$ and apply $\norm{e(t_1) - e(t_2)}_2 \leq \sup_{t \geq 0} \norm{\dot{e}(t)}_2 \abs{t_1 - t_2}$.
Let $C_\Phi := \sup_{t \geq 0} \left( \int_\Theta \opnorm{\Phi(x(t), \theta)}^2 d\nu(\theta) \right)^{1/2}$.
Because $x(t)$ is uniformly bounded, $C_\Phi$ is finite by Assumption~\ref{assmp:second_moment_bound}.
Next,
\begin{align*}
\norm{ u(x(t), t) - h(x(t)) }_2
\leq \left( \int_{\Theta} \opnorm{\Phi(x(t), \theta)}^2 d\nu(\theta) \right)^{1/2} \norm{\tilde{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)}
\leq C_\Phi \sqrt{2\gamma V(0)}.
\end{align*}
Now, observe that
\begin{align*}
\norm{\dot{e}(t)}_2 &\leq \norm{f_e(e(t), t)}_2 + \opnorm{g_e(x(t), t)} \norm{ u(x(t), t) - h(x(t))}_2 \\
&\leq \norm{f_e(e(t), t)}_2 + \opnorm{g_e(x(t), t)} C_\Phi \sqrt{2\gamma V(0)}.
\end{align*}
Because both $f_e$ and $g_e$ are locally bounded in $x$ uniformly in $t$, $\norm{\dot{e}(t)}_2$ is uniformly bounded in $t$.
Therefore, $t \mapsto e(t)$ is uniformly Lipschitz, and hence
$t \mapsto \norm{e(t)}_2$ is uniformly continuous.
Now, because $e(t)$ is uniformly bounded and $\rho$ is continuous, $\rho$ is uniformly continuous on
the closure of the range of $t \mapsto \norm{e(t)}_2$, which is compact.
Since the composition of two uniformly continuous functions remains uniformly continuous,
$t \mapsto \rho(\norm{e(t)}_2)$ is uniformly continuous.
By Barbalat's lemma, this implies that $\lim_{t \rightarrow \infty} \rho( \norm{e(t)}_2 ) = 0$. By continuity of $\rho$ and the fact that $\rho(a) = 0$ if and only if $a = 0$, we conclude that $\lim_{t \rightarrow \infty} \norm{e(t)}_2 = 0$.
From the requirement \eqref{eq:error_signal_asymp}
on the error signal, we conclude that $\lim_{t \rightarrow \infty} \norm{x(t)-x_d(t)}_2 = 0$.
\end{proof}
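To make the Lyapunov argument concrete, the following toy simulation (our own illustration; a scalar \emph{parametric} stand-in for the nonparametric law, with hypothetical choices of plant, target, and gain) integrates the error dynamics and the adaptation law with forward Euler and checks that the tracking error converges while the Lyapunov-like function is non-increasing:

```python
import math

# Scalar toy problem (all choices hypothetical): plant e' = -e + (u - h(e))
# with state x = e (so x_d = 0), target h(e) = a*sin(e), parametric input
# u = a_hat*sin(e), Q(e) = e^2/2, and adaptation law a_hat' = -gamma*sin(e)*e,
# mirroring alpha_hat' = -gamma*Phi^T g_e^T grad Q with g_e = 1.
a, gamma, dt = 2.0, 5.0, 1e-3
e, a_hat = 1.0, 0.0
V0 = 0.5 * e**2 + (a_hat - a)**2 / (2 * gamma)
for _ in range(30_000):                       # forward Euler over t in [0, 30]
    u_err = (a_hat - a) * math.sin(e)         # u - h(e)
    e, a_hat = e + dt * (-e + u_err), a_hat + dt * (-gamma * math.sin(e) * e)
V = 0.5 * e**2 + (a_hat - a)**2 / (2 * gamma)
assert abs(e) < 1e-3     # tracking error converges
assert V <= V0 + 1e-6    # V = Q + |a_hat - a|^2/(2*gamma) is non-increasing
```

Along trajectories, $\dot{V} = -e^2$ exactly, which is the cancellation exhibited in the proof.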
\subsection{Proof of Theorem~\ref{thm:interpolation}}
\interpolation*
\begin{proof}
Recall that the error dynamics satisfy:
\begin{align*}
\dot{e}(t) = f_e(e(t), t) + g_e(x(t), t)(u(x(t), t) - h(x(t))).
\end{align*}
From the proof of Theorem~\ref{thm:nonparametric_conv},
$\lim_{t \rightarrow \infty} e(t) = 0$.
If we show in addition that $t \mapsto \dot{e}(t)$ is uniformly Lipschitz,
then by Barbalat's lemma (applied to each coordinate),
$\lim_{t \rightarrow \infty} \dot{e}(t) = 0$.
Since $f_e(0, t) = 0$ and $f_e$ is locally Lipschitz in $e$ uniformly in $t$,
$\lim_{t \rightarrow \infty} \dot{e}(t) = 0$ implies
that $\lim_{t \rightarrow \infty} \norm{g_e(x(t), t)(u(x(t), t) - h(x(t)))}_2 = 0$.
It remains to show that $t \mapsto \dot{e}(t)$ is uniformly Lipschitz.
By the composition rule (cf. Proposition~\ref{prop:global_lipschitz_composition}),
it suffices to show that the functions:
\begin{align*}
t \mapsto f_e(e(t), t), \:\: t \mapsto g_e(x(t), t), \:\: t \mapsto u(x(t), t), \:\: t \mapsto h(x(t)),
\end{align*}
are all uniformly Lipschitz and bounded.
From the proof of Theorem~\ref{thm:nonparametric_conv},
both $t \mapsto e(t)$ and $t \mapsto x(t)$ are uniformly bounded, and
$t \mapsto e(t)$ is uniformly Lipschitz. A nearly identical argument shows that
$t \mapsto x(t)$ is also uniformly Lipschitz.
Therefore, since $f_e$, $g_e$, and $h$ are all locally Lipschitz and locally bounded uniformly in $t$,
it is clear that $t \mapsto f_e(e(t), t)$,
$t \mapsto g_e(x(t), t)$, and $t \mapsto h(x(t))$ are all uniformly Lipschitz and uniformly bounded.
To see that $t \mapsto u(x(t), t)$ is also uniformly Lipschitz, we first
choose a finite constant $C > 0$ such that
\begin{align*}
\sup_{t \geq 0} \max\{ \norm{x(t)}_2, \opnorm{g_e(x(t), t)}, \norm{\nabla Q(e(t), t)}_2, \norm{\hat{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)} \} \leq C.
\end{align*}
Now observe that
for every $\theta$ and $t$,
\begin{align*}
\bignorm{ \frac{\partial \hat{\alpha}}{\partial t}(\theta, t) }_2 &= \gamma \bignorm{\Phi(x(t), \theta)^\mathsf{T} g_e(x(t), t)^\mathsf{T} \nabla Q(e(t), t) }_2 \\
&\leq \gamma \opnorm{\Phi(x(t), \theta)} \opnorm{g_e(x(t), t)} \norm{\nabla Q(e(t), t)}_2 \\
&\leq \gamma C^2 \opnorm{\Phi(x(t), \theta)}.
\end{align*}
Put $C_\Phi := \left( \int_\Theta \sup_{\norm{x}_2 \leq C} \opnorm{\Phi(x, \theta)}^2 d\nu(\theta) \right)^{1/2}$, which is finite by assumption.
Fix $t_1, t_2$, and for $i \in \{1, 2\}$ define:
\begin{align*}
u_i := u(x(t_i), t_i), \:\:
\Phi_i(\cdot) := \Phi(x(t_i), \cdot), \:\:
\hat{\alpha}_i(\cdot) := \hat{\alpha}(\cdot, t_i).
\end{align*}
We have:
\begin{align*}
\norm{ \hat{\alpha}_1 - \hat{\alpha}_2 }_{L_2(\Theta, \nu)} &=
\left( \int_\Theta \norm{\hat{\alpha}(\theta, t_1) - \hat{\alpha}(\theta, t_2)}_2^2 d\nu(\theta) \right)^{1/2} \\
&\leq \left( \int_\Theta \bignorm{\int_{t_1}^{t_2} \frac{\partial \hat{\alpha}}{\partial t}(\theta, t) dt }^2_2 d\nu(\theta) \right)^{1/2} \\
&\leq \left( \int_\Theta \left( \int_{t_1}^{t_2} \bignorm{\frac{\partial \hat{\alpha}}{\partial t}(\theta, t)}_2 dt \right)^2 d\nu(\theta) \right)^{1/2} \\
&\leq \gamma C^2 \left( \int_\Theta \sup_{\norm{x}_2 \leq C} \opnorm{\Phi(x, \theta)}^2 d\nu(\theta) \right)^{1/2} \abs{t_1 - t_2} \\
&\leq \gamma C^2 C_\Phi \abs{t_1-t_2}.
\end{align*}
Next, let
$C'_\Phi$ be a finite constant such that
\begin{align*}
\left(\int_\Theta \opnorm{\Phi(x, \theta) - \Phi(y, \theta)}^2 d\nu(\theta)\right)^{1/2} \leq C'_\Phi \norm{x-y}_2 \:\:\forall x, y \in B_2^n(C).
\end{align*}
Then,
\begin{align*}
\norm{u_1-u_2}_{2} &\leq \int_\Theta \norm{\Phi_1(\theta)\hat{\alpha}_1(\theta) - \Phi_2(\theta)\hat{\alpha}_2(\theta)}_2 d\nu(\theta) \\
&\leq \int_\Theta \opnorm{\Phi_1(\theta) - \Phi_2(\theta)}\norm{\hat{\alpha}_1(\theta)}_2 d\nu(\theta) + \int_\Theta \opnorm{\Phi_2(\theta)} \norm{\hat{\alpha}_1(\theta) - \hat{\alpha}_2(\theta)}_2 d\nu(\theta) \\
&\leq C \left( \int_\Theta \opnorm{\Phi(x(t_1), \theta) - \Phi(x(t_2),\theta)}^2 d\nu(\theta)\right)^{1/2} + C_\Phi \norm{\hat{\alpha}_1 - \hat{\alpha}_2}_{L_2(\Theta, \nu)} \\
&\leq C C'_\Phi \norm{x(t_1) - x(t_2)}_2 + \gamma C^2 C^2_\Phi \abs{t_1-t_2} \\
&\leq (C^2 C'_\Phi + \gamma C^2 C^2_\Phi) \abs{t_1-t_2}.
\end{align*}
This shows that $t \mapsto u(x(t), t)$ is uniformly Lipschitz.
To conclude, we argue that $t \mapsto u(x(t), t)$ is uniformly bounded:
\begin{align*}
\norm{ u(x(t), t) }_2 &\leq \int_\Theta \opnorm{ \Phi(x(t), \theta) } \norm{\hat{\alpha}(\theta, t)}_2 d\nu(\theta) \\
&\leq \sqrt{\int_\Theta \opnorm{\Phi(x(t), \theta)}^2 d\nu(\theta)} \norm{\hat{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)} \\
&\leq C_\Phi \norm{\hat{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)}.
\end{align*}
The proof of Theorem~\ref{thm:nonparametric_conv}
shows that $\norm{\tilde{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)}$ is uniformly bounded,
and therefore so is $\norm{\hat{\alpha}(\cdot, t)}_{L_2(\Theta, \nu)}$ by the triangle inequality.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:imp_reg}}
\impreg*
\begin{proof}
From Theorem~\ref{thm:nonparametric_conv}, $u(\cdot, t) \in \mathcal{H}$ for all $t \geq 0$.
Let $\bar{h}(\cdot) \in \mathcal{H}$ be arbitrary. Then by
Assumption~\ref{assmp:rf_kernel}
there exists $\bar{\alpha} \in L_2(\Theta, \nu)$ such that
\begin{equation*}
\bar{h}(x) = \int_\Theta \Phi(x, \theta)\bar{\alpha}(\theta)d\nu(\theta).
\end{equation*}
Consider the Lyapunov-like function $V: \ensuremath{\mathbb{R}}_{\geq 0}\rightarrow \ensuremath{\mathbb{R}}$,
\begin{equation*}
V(t) = \frac{1}{2}\norm{u(\cdot, t) - \bar{h}}_{\mathcal{H}}^2 = \frac{1}{2}\norm{\hat{\alpha}(\cdot, t) - \bar{\alpha}}_{L_2(\Theta, \nu)}^2,
\end{equation*}
where $\hat{\alpha}(\cdot, t) \in L_2(\Theta, \nu)$ was defined in the proof of Theorem~\ref{thm:nonparametric_conv} by the partial differential equation
\begin{equation*}
\frac{\partial\hat{\alpha}}{\partial t}(\theta, t) = -\gamma \Phi(x(t), \theta)^\mathsf{T} g_e(x(t), t)^\mathsf{T} \nabla Q (e(t), t), \:\: \hat{\alpha}(\theta, 0) = 0.
\end{equation*}
Computing the time derivative of $V$,
\begin{align*}
\dot{V}(t) = \bigip{\hat{\alpha}(\cdot, t) - \bar{\alpha}}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, t)}_{L_2(\Theta, \nu)}.
\end{align*}
Integrating both sides of the above from $0$ to $t$,
\begin{equation*}
\frac{1}{2}\norm{\hat{\alpha}(\cdot, t) - \bar{\alpha}}_{L_2(\Theta, \nu)}^2 = \frac{1}{2}\norm{\bar{\alpha}}_{L_2(\Theta, \nu)}^2 + \int_0^t \bigip{\hat{\alpha}(\cdot, \tau)-\bar{\alpha}}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, \tau)}_{L_2(\Theta, \nu)}d\tau.
\end{equation*}
Define $\hat{\alpha}_{\infty}(\theta)$ to be the density such that $\lim_{t\rightarrow\infty} u(\cdot, t) = \int_{\Theta} \Phi(\cdot, \theta)\hat{\alpha}_{\infty}(\theta)d\nu(\theta)$.
Taking the limit as $t\rightarrow\infty$ of both sides,
\begin{align}
\frac{1}{2}\norm{\hat{\alpha}_{\infty} - \bar{\alpha}}_{L_2(\Theta, \nu)}^2 &= \lim_{t\rightarrow\infty}\frac{1}{2}\norm{\hat{\alpha}(\cdot, t) - \bar{\alpha}}_{L_2(\Theta, \nu)}^2 \nonumber \\
&= \frac{1}{2}\norm{\bar{\alpha}}_{L_2(\Theta, \nu)}^2 + \int_0^\infty \bigip{\hat{\alpha}(\cdot, \tau)-\bar{\alpha}}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, \tau)}_{L_2(\Theta, \nu)}d\tau. \label{eqn:imp_reg_intermediate}
\end{align}
Now take $\bar{h}(\cdot) \in \mathcal{A}$. Observe that, by definition of $\mathcal{A}$,
for any $\tau \geq 0$,
\begin{align*}
\bigip{\bar{\alpha}}{ \frac{\partial \hat{\alpha}}{\partial t}(\cdot, \tau)}_{L_2(\Theta, \nu)} &= -\gamma\int_{\Theta}\bar{\alpha}(\theta)^\mathsf{T} \Phi(x(\tau), \theta)^\mathsf{T} g_e(x(\tau), \tau)^\mathsf{T} \nabla Q(e(\tau), \tau)d\nu(\theta)\\
%
%
&= -\gamma \bar{h}(x(\tau))^\mathsf{T} g_e(x(\tau), \tau)^\mathsf{T} \nabla Q(e(\tau), \tau)\\
&= -\gamma h(x(\tau))^\mathsf{T} g_e(x(\tau), \tau)^\mathsf{T} \nabla Q(e(\tau), \tau).
\end{align*}
Hence, \eqref{eqn:imp_reg_intermediate} may be re-written,
\begin{align*}
\frac{1}{2}\norm{\hat{\alpha}_\infty - \bar{\alpha}}_{L_2(\Theta, \nu)}^2 &= \frac{1}{2}\norm{\bar{\alpha}}_{L_2(\Theta, \nu)}^2 + \int_0^\infty \bigip{\hat{\alpha}(\cdot, \tau)}{\frac{\partial \hat{\alpha}}{\partial t}(\cdot, \tau)}_{L_2(\Theta, \nu)}d\tau\\
&\qquad + \gamma\int_0^\infty h(x(\tau))^\mathsf{T} g_e(x(\tau), \tau)^\mathsf{T} \nabla Q (e(\tau), \tau)d\tau,
\end{align*}
which has eliminated the dependence of the right-hand side on $\bar{\alpha}$ except for in the first term.
Let $\bar{\mathcal{A}} := \{ \bar{\alpha} \in L_2(\Theta, \nu) : \int_\Theta \Phi(\cdot, \theta) \bar{\alpha}(\theta) d\nu(\theta) \in \mathcal{A} \}$.
Since $\hat{\alpha}_\infty \in \bar{\mathcal{A}}$ by assumption and the right-hand side depends on $\bar{\alpha}$ only through $\frac{1}{2}\norm{\bar{\alpha}}_{L_2(\Theta, \nu)}^2$, minimizing both sides over $\bar{\alpha} \in \bar{\mathcal{A}}$ (the left-hand side vanishes at $\bar{\alpha} = \hat{\alpha}_\infty$) yields
\begin{align*}
\hat{\alpha}_\infty \in \argmin_{\bar{\alpha} \in \bar{\mathcal{A}}} \norm{\bar{\alpha}}_{L_2(\Theta, \nu)}.
\end{align*}
The claim now follows by the correspondence
between $L_2(\Theta,\nu)$ and $\mathcal{H}$.
\end{proof}
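A finite-dimensional analogue of this implicit-regularization statement can be checked directly (our own illustration, not the setting of the theorem): gradient flow on an underdetermined least-squares problem, initialized at zero, converges to the minimum-norm interpolant.

```python
# Gradient flow a' = -X^T (X a - y) on the underdetermined system X a = y,
# started from a = 0, converges to the minimum-norm solution
# X^T (X X^T)^{-1} y.  Here X = [1, 2], y = 5, so that solution is (1, 2).
a1, a2 = 0.0, 0.0
dt = 1e-2
for _ in range(10_000):                 # forward Euler discretization
    r = a1 + 2.0 * a2 - 5.0            # residual X a - y
    a1 -= dt * r
    a2 -= dt * 2.0 * r
assert abs(a1 - 1.0) < 1e-6 and abs(a2 - 2.0) < 1e-6
# Any other interpolant, e.g. (5, 0), has strictly larger norm:
assert a1 ** 2 + a2 ** 2 < 5.0 ** 2
```

The mechanism is the same as in the proof: the iterate stays in the row space of $X$ (the span of the features), and within that span the interpolation constraint pins it down uniquely.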
\section{Omitted proofs for Section~\ref{sec:rf}}
\subsection{Proof of Proposition~\ref{prop:uniform_approx}}
\uniform*
\begin{proof}
We first define a truncated target function $h_\eta(x)$ and its truncated approximation $\hat{h}_\eta(x; \{\theta_i\}_{i=1}^{K})$
\begin{align*}
h_\eta(x) &:= \int_{\Theta} \Phi_\eta(x, \theta) \alpha(\theta) \: d\nu(\theta), \\
\hat{h}_\eta(x; \{\theta_i\}_{i=1}^{K}) &:= \frac{1}{K} \sum_{i=1}^{K} \Phi_\eta(x, \theta_i) \alpha(\theta_i).
\end{align*}
Clearly, for each $x \in \ensuremath{\mathbb{R}}^n$,
\begin{align*}
\mathbb{E}_{\{\theta_i\}_{i=1}^{K}} \hat{h}_\eta(x; \{\theta_i\}_{i=1}^{K}) &= h_\eta(x).
\end{align*}
Now, consider two sets $\{\theta_i\} \subseteq \Theta$ and $\{\tilde{\theta}_i\} \subseteq \Theta$ that differ in only one index $i$. Observe that
\begin{align*}
\norm{ \hat{h}_\eta(\cdot; \{\theta_i\}_{i=1}^{K}) - \hat{h}_\eta(\cdot; \{\tilde{\theta}_i\}_{i=1}^{K})}_\infty \leq \frac{2 B_\Phi(\eta) B_h}{K}.
\end{align*}
Hence, by McDiarmid's inequality, with probability at least $1-\delta/2$,
\begin{align*}
\norm{\hat{h}_\eta(\cdot; \{\theta_i\}_{i=1}^{K}) - h_\eta(\cdot)}_\infty &\leq \mathbb{E} \norm{\hat{h}_\eta(\cdot; \{\theta_i\}_{i=1}^{K}) - h_\eta(\cdot)}_\infty + \sqrt{2} B_\Phi(\eta) B_h \sqrt{\frac{\log(2/\delta)}{K}} \\
%
%
&\leq \frac{2}{K} \mathbb{E} \bignorm{\sum_{i=1}^{K} \varepsilon_i \Phi_\eta(\cdot; \theta_i) \alpha_i}_\infty + \sqrt{2} B_\Phi(\eta) B_h \sqrt{\frac{\log(2/\delta)}{K}},
\end{align*}
where the last inequality follows by a standard symmetrization argument.
Define the event $\mathcal{E}$ as
\begin{align*}
\mathcal{E} := \left\{ \max_{i=1, ..., K} \sup_{x \in X} \opnorm{\Phi(x, \theta_i)} \leq B_\Phi(\eta) \right\}.
\end{align*}
By our assumption on $B_\Phi$ and a union bound, we have that $\Pr(\mathcal{E}^c) \leq \delta/2$.
Furthermore, $\Phi(\cdot, \cdot)$ and $\Phi_\eta(\cdot, \cdot)$ agree on $\mathcal{E}$ by definition, so that
\begin{align*}
\mathbf{1}\{\mathcal{E}\}\bignorm{ \frac{1}{K} \sum_{i=1}^{K} \Phi(\cdot, \theta_i) \alpha_i - h(\cdot) }_\infty &=\mathbf{1}\{\mathcal{E}\}\bignorm{\frac{1}{K} \sum_{i=1}^{K} \Phi_\eta(\cdot, \theta_i) \alpha_i - h(\cdot) }_\infty \\
%
%
&\leq \bignorm{\frac{1}{K} \sum_{i=1}^{K} \Phi_\eta(\cdot, \theta_i) \alpha_i - h_\eta(\cdot) }_\infty + \norm{ h(\cdot) - h_\eta(\cdot) }_\infty.
\end{align*}
We now focus on bounding the term on the right-hand side.
We write
\begin{align*}
h(x) - h_\eta(x) = \int_{\Theta} \mathbf{1}\{ \opnorm{\Phi(x, \theta)} > B_\Phi(\eta) \} \Phi(x, \theta) \alpha(\theta) d\nu(\theta).
\end{align*}
This implies the estimate
\begin{align*}
\norm{h(\cdot) - h_\eta(\cdot)}_\infty &\leq \sup_{x \in X} B_h \mathbb{E}_{\theta \sim \nu} \mathbf{1}\{ \opnorm{\Phi(x, \theta)} > B_\Phi(\eta) \} \opnorm{\Phi(x, \theta)} \\
%
%
&\leq B_h \sqrt{\eta} \sqrt{ \sup_{x \in X} \mathbb{E}\opnorm{\Phi(x, \theta)}^2 }\\
%
%
&= B_h \sqrt{\frac{\delta\sup_{x \in X} \mathbb{E}\opnorm{\Phi(x, \theta)}^2 }{2K}}.
\end{align*}
The claim now follows by a union bound.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:rademacher_bound}}
To state the proof of the proposition, we will require the following useful result.
\begin{mylemma}[\cite{maurer16vectorcontraction}, Corollary 4]
\label{lem:mauer}
Let $\mathcal{X}$ be any set, let $(x_1, \hdots, x_n)\in\mathcal{X}^n$, let $\mathcal{F}$ be a class of functions $f: \mathcal{X} \rightarrow \ell_2$, and let $h_i : \ell_2 \rightarrow \ensuremath{\mathbb{R}}$ have Lipschitz constant $L$. Then,
\begin{equation*}
\mathbb{E} \sup_{f\in\mathcal{F}}\sum_i \varepsilon_i h_i(f(x_i)) \leq \sqrt{2}L\mathbb{E} \sup_{f\in\mathcal{F}} \sum_{i,k}\varepsilon_{i, k}f_k(x_i)
\end{equation*}
where the $\varepsilon_{ik}$ are i.i.d.\ Rademacher random variables and $f_k(x_i)$ is the $k^\text{th}$ component of $f(x_i)$.
\end{mylemma}
\noindent We may now proceed with the proof.
\rad*
\begin{proof}
Put $\alpha_i = \alpha(\theta_i)$ and $M_{\eta,i} := M_\eta(w_i)$. We write, by definition of the $\norm{\cdot}_\infty$-norm and duality,
\begin{align*}
\mathbb{E} \bignorm{ \sum_{i=1}^{K} \varepsilon_i \Phi_\eta(x; \theta_i) \alpha_i }_\infty &= \mathbb{E} \sup_{x \in X} \sup_{\psi \in \mathbb{S}^{d_1-1}} \sum_{i=1}^{K} \varepsilon_i \psi^\mathsf{T} M_{\eta,i} \alpha_i \phi(w_i^\mathsf{T} x + b_i).
\end{align*}
Towards applying Lemma~\ref{lem:mauer}, for a tuple $(x, \psi) \in X \times \mathbb{S}^{d_1-1}$,
define
\begin{equation*}
f_{x,\psi}(w, b) := \begin{pmatrix} w^\mathsf{T} x + b \\ \psi \end{pmatrix}.
\end{equation*}
Next, define $h_i : \ensuremath{\mathbb{R}} \times \mathbb{S}^{d_1-1} \rightarrow \ensuremath{\mathbb{R}}$ as
\begin{equation*}
h_i(v_1, v_2) := v_2^\mathsf{T} M_{\eta,i} \alpha_i \phi(v_1).
\end{equation*}
We need to show that $h_i$ is Lipschitz continuous. Let $v = (v_1, v_2)$, $w = (w_1, w_2)$, and observe that
\begin{align*}
\abs{h_i(v_1, v_2) - h_i(w_1, w_2)} \leq \sqrt{2} B_h B_\Phi(\eta) \norm{v-w}_2,
\end{align*}
where we have applied the triangle inequality, the Cauchy--Schwarz inequality, the fact that $\phi$ is $1$-Lipschitz and bounded by $1$, the bounds $\opnorm{M_{\eta,i}} \leq B_\Phi(\eta)$ and $\norm{\alpha_i}_2 \leq B_h$, and the elementary inequality $a + b \leq \sqrt{2}\sqrt{a^2+b^2}$. Now, let $\{\xi_i\}_{i=1}^{K} \subseteq \{\pm 1\}$
and $\{\zeta_i\}_{i=1}^{K} \subseteq \{\pm 1\}^{d_1}$ be independent random vectors with i.i.d.\ Rademacher random variables as entries. Then, by Lemma~\ref{lem:mauer},
\begin{align*}
\mathbb{E} \sup_{x \in X} \sup_{\psi \in \mathbb{S}^{d_1-1}} \sum_{i=1}^{K} \varepsilon_i \psi^\mathsf{T} M_{\eta,i} \alpha_i \phi(w_i^\mathsf{T} x + b_i) &= \mathbb{E} \sup_{x \in X, \psi \in \mathbb{S}^{d_1-1}} \sum_{i=1}^{K} \varepsilon_i h_i(f_{x, \psi}(w_i, b_i)), \\
%
%
&\leq 2 B_h B_\Phi(\eta) \mathbb{E} \sup_{x \in X, \psi \in \mathbb{S}^{d_1-1}} \sum_{i=1}^{K} \bigip{\begin{pmatrix}\xi_i\\\zeta_i\end{pmatrix}}{\begin{pmatrix} w_i^\mathsf{T} x + b_i \\ \psi \end{pmatrix}}, \\
%
%
&= 2 B_h B_\Phi(\eta) \left[ \mathbb{E} \sup_{x \in X} \sum_{i=1}^{K} \xi_iw_i^\mathsf{T} x + \mathbb{E} \sup_{\psi \in \mathbb{S}^{d_1-1}} \sum_{i=1}^{K} \zeta_i^\mathsf{T} \psi \right], \\
%
%
&\leq 2 B_h B_\Phi(\eta) \left[ B_X \mathbb{E} \bignorm{ \sum_{i=1}^{K} \xi_iw_i}_2 + \mathbb{E} \bignorm{\sum_{i=1}^{K} \zeta_i}_2 \right], \\
%
%
&\leq 2 \sqrt{K} B_h B_\Phi(\eta) \left[ B_X \sqrt{\mathbb{E} \norm{w_1}_2^2} + \sqrt{d_1} \right].
\end{align*}
This completes the proof.
\end{proof}
\section{Omitted proofs for Section~\ref{sec:p_results}}
\label{app:p_results}
\subsection{Details of Example~\ref{prop:s_delta_deadzone}}
First, for any $\delta > 0$, we define the function
\begin{align*}
\bar{s}_\delta(x) := \frac{s_\delta(x)}{x}.
\end{align*}
\begin{mylemma}
\label{lem:sbar_delta_sqrt_lipschitz}
For any $\delta > 0$, the function $x \mapsto \bar{s}_\delta(\sqrt{x})$ is $\frac{1}{2\delta^2}$-Lipschitz on $\ensuremath{\mathbb{R}}_{\geq 0}$.
\end{mylemma}
\begin{proof}
Fix $x, y \in \ensuremath{\mathbb{R}}_{\geq 0}$.
Without loss of generality, suppose that $x \leq y$ (otherwise, we may flip the roles of $x$ and $y$).
If $y \leq \delta^2$, then
$\bar{s}_\delta(\sqrt{x}) = \bar{s}_\delta(\sqrt{y}) = 0$,
in which case the claim is trivial.
Now suppose that $x \geq \delta^2$.
On $[\delta, \infty)$,
we have that $\bar{s}_\delta$ coincides with
$x \mapsto 1-\delta/x$, and hence
\begin{align*}
\bar{s}_\delta(\sqrt{x}) - \bar{s}_\delta(\sqrt{y}) &= 1 - \frac{\delta}{\sqrt{x}} - \left( 1 - \frac{\delta}{\sqrt{y}}\right)
= \delta \left( \frac{1}{\sqrt{y}} - \frac{1}{\sqrt{x}} \right).
\end{align*}
The function $x \mapsto 1/\sqrt{x}$ is $\frac{1}{2\delta^3}$-Lipschitz on $[\delta^2, \infty)$, and hence
\begin{align*}
\abs{\bar{s}_\delta(\sqrt{x}) - \bar{s}_\delta(\sqrt{y})} \leq \frac{1}{2\delta^2} \abs{x - y}.
\end{align*}
Finally, we suppose that
$x \leq \delta^2 \leq y$.
By concavity of the square root on $\ensuremath{\mathbb{R}}_{\geq 0}$,
\begin{align*}
\sqrt{y} \leq \delta + \frac{1}{2\delta}(y - \delta^2).
\end{align*}
Therefore,
\begin{align*}
\abs{\bar{s}_\delta(\sqrt{x}) - \bar{s}_\delta(\sqrt{y})} = \abs{\bar{s}_\delta(\sqrt{y})} &= 1 - \frac{\delta}{\sqrt{y}}
= \frac{\sqrt{y}-\delta}{\sqrt{y}}
\leq \frac{\sqrt{y} - \delta}{\delta}
\leq \frac{y - \delta^2}{2\delta^2}
\leq \frac{y - x}{2\delta^2}
= \frac{\abs{y-x}}{2\delta^2}.
\end{align*}
\end{proof}
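As a quick numerical check of the lemma, the following minimal Python sketch (which assumes the ramp deadzone $s_\delta(x) = \max(0, x-\delta)$, the form used implicitly in the case analysis above) verifies the claimed Lipschitz constant $\frac{1}{2\delta^2}$ of $x \mapsto \bar{s}_\delta(\sqrt{x})$ on a fine grid:

```python
import numpy as np

def sbar(x, delta):
    """bar{s}_delta(x) = s_delta(x) / x with s_delta(x) = max(0, x - delta)."""
    return np.where(x > delta, 1.0 - delta / np.maximum(x, delta), 0.0)

delta = 0.7
L = 1.0 / (2.0 * delta ** 2)              # Lipschitz constant claimed by the lemma

x = np.linspace(0.0, 10.0, 20001)
g = sbar(np.sqrt(x), delta)               # the map x -> bar{s}_delta(sqrt(x))

slopes = np.abs(np.diff(g)) / np.diff(x)  # empirical difference quotients
print(slopes.max(), "<=", L)              # the largest slope stays below L
```

The largest empirical slope is attained just above $x = \delta^2$, exactly where the proof's case analysis is tight.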
\sdeltadeadzone*
\begin{proof}
It is straightforward to check that $\frac{d}{dx} s^2_{\sqrt{\Delta}}(\sqrt{x}) = \bar{s}_{\sqrt{\Delta}}(\sqrt{x})$.
Conditions (i) and (ii) are immediately satisfied. To check condition (iii), observe that by Lemma~\ref{lem:sbar_delta_sqrt_lipschitz},
$\bar{s}_{\sqrt{\Delta}}(\sqrt{x})$ is $1/(2\Delta)$-Lipschitz.
Finally, $\bar{s}_{\sqrt{\Delta}}(\sqrt{x}) \leq 1$ for all $x \geq 0$.
\end{proof}
\subsection{Details of Example~\ref{prop:s_delta_gamma_deadzone}}
\sdeltagammadeadzone*
\begin{proof}
It is easy to check that the derivative $s'_{\Delta,\gamma}$ exists, is continuous,
and is given by:
\begin{align*}
s'_{\Delta,\gamma}(x) = \begin{cases}
0 &\text{if } x \leq \Delta, \\
\frac{x-\Delta}{2\gamma} &\text{if } x \in (\Delta, \Delta+2\gamma), \\
1 &\text{if } x \geq \Delta+2\gamma.
\end{cases}
\end{align*}
It is also easy to check that this derivative is $\frac{1}{2\gamma}$-Lipschitz
and bounded by $1$.
\end{proof}
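As a numerical sanity check of these three properties, the sketch below (Python, with illustrative values of $\Delta$ and $\gamma$) implements $s'_{\Delta,\gamma}$ and verifies the bound by $1$, the $\frac{1}{2\gamma}$-Lipschitz constant on a grid, and that the ramp reaches $1$ at $x = \Delta + 2\gamma$:

```python
import numpy as np

def ds(x, Delta, gamma):
    """s'_{Delta,gamma}(x): zero, then a linear ramp of slope 1/(2*gamma), then one."""
    return np.clip((x - Delta) / (2.0 * gamma), 0.0, 1.0)

Delta, gamma = 1.0, 0.25
x = np.linspace(0.0, 3.0, 30001)
d = ds(x, Delta, gamma)

print(d.min(), d.max())                            # bounded in [0, 1]
slopes = np.abs(np.diff(d)) / np.diff(x)
print(slopes.max(), "<=", 1.0 / (2.0 * gamma))     # 1/(2*gamma)-Lipschitz
print(ds(np.array([Delta + 2 * gamma]), Delta, gamma))  # equals 1 at the end of the ramp
```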
\subsection{Proof of Theorem~\ref{thm:ac_finite_approx}}
\begin{myprop}
\label{prop:ode_rhs_lipschitz}
Fix any $\Delta > 0$.
Let $\sigma_\Delta$ be $\Delta$-admissible.
Define
$F : \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^s \times O_p \times O_m \times \ensuremath{\mathbb{R}}_{\geq 0} \rightarrow \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^s \times \ensuremath{\mathbb{R}}^p \times \ensuremath{\mathbb{R}}^m$
as:
\begin{align*}
F(x, e, \hat{\alpha}_p, \hat{\alpha}_m, t) &:= \begin{bmatrix}
f(x, t) + g(x, t)(Y(x, t) \tilde{\alpha}_p + \Psi(x) \hat{\alpha}_m - h(x)), \\
f_e(e, t) + g_e(x, t)(Y(x, t) \tilde{\alpha}_p + \Psi(x) \hat{\alpha}_m - h(x)), \\
-\sigma'_\Delta(Q(e, t)) [\nabla^2 \psi_p(\hat{\alpha}_p)]^{-1} Y(x, t)^\mathsf{T} g_e(e, t)^\mathsf{T} \nabla Q(e, t), \\
-\sigma'_\Delta(Q(e, t)) [\nabla^2 \psi_m(\hat{\alpha}_m)]^{-1} \Psi(x)^\mathsf{T} g_e(e, t)^\mathsf{T} \nabla Q(e, t).
\end{bmatrix}.
\end{align*}
The function $F(x, e, \hat{\alpha}_p, \hat{\alpha}_m, t)$ is locally
Lipschitz in $(x, e, \hat{\alpha}_p, \hat{\alpha}_m)$.
\end{myprop}
\begin{proof}
The functions
$f, f_e, g, g_e, h, Y, \Psi, \nabla Q$, and $\sigma'_\Delta$ are all
locally Lipschitz and locally bounded
by assumption.
As long as we can check that both
$\zeta_1(e, t) := \sigma'_\Delta(Q(e, t))$ and
$\zeta_{2,\ell}(\hat{\alpha}) := [\nabla^2 \psi_\ell(\hat{\alpha})]^{-1}$
for $\ell \in \{p, m\}$
are locally Lipschitz and locally bounded, then
the result follows via repeated applications
of the sum and product composition rules~(Proposition~\ref{prop:local_composition}).
We now verify that $\zeta_1$ is locally Lipschitz
and locally bounded.
Since $\nabla Q(e, t)$ is locally bounded, this means that
$Q(e, t)$ is locally Lipschitz.
Furthermore, since $0 \leq Q(e, t) \leq \mu_2(\norm{e}_2)$,
it is clear that $Q(e, t)$ is locally bounded.
Next, $\sigma'_\Delta$ is locally Lipschitz by admissibility.
Since $\sigma'_\Delta$ does not depend on time, then it is
also locally bounded.
This shows that $\zeta_1$ is locally Lipschitz and locally bounded,
since it is the composition of two locally Lipschitz and locally bounded functions.
For $\zeta_{2,\ell}$, we first observe that,
since $\psi_\ell$ is strongly convex with respect to a norm $\norm{\cdot}$ on $O_\ell$, there exists a $c > 0$ such that $\nabla^2 \psi_\ell(\hat{\alpha}) \succcurlyeq c I$ for all $\hat{\alpha} \in O_\ell$.
Next, for any invertible square matrices $A, B$, we have the algebraic identity
$A^{-1} - B^{-1} = A^{-1} (B-A) B^{-1}$.
Therefore, for any two $\hat{\alpha}_1, \hat{\alpha}_2$,
\begin{align*}
\opnorm{ [\nabla^2 \psi_\ell(\hat{\alpha}_1)]^{-1} - [\nabla^2 \psi_\ell(\hat{\alpha}_2)]^{-1} } \leq c^{-2} \opnorm{ \nabla^2 \psi_\ell(\hat{\alpha}_1) - \nabla^2 \psi_\ell(\hat{\alpha}_2) }.
\end{align*}
Because the potential $\psi_\ell$ has locally Lipschitz Hessians, this shows that $\zeta_{2,\ell}$ is locally Lipschitz. Since $\zeta_{2,\ell}$ does not depend on time, it is also
locally bounded.
\end{proof}
\begin{myprop}
\label{prop:bregman_lower_bound}
Let $O \subseteq \ensuremath{\mathbb{R}}^\ell$ be an open convex set, and
let $\psi : O \rightarrow \ensuremath{\mathbb{R}}$ be a strongly convex potential with respect to some norm $\norm{\cdot}$ on $O$.
Then we have that
\begin{align*}
\inf_{\substack{\alpha,\hat{\alpha} \in O, \\\alpha \neq \hat{\alpha}}} \frac{\bregd{\alpha}{\hat{\alpha}}}{\norm{\alpha - \hat{\alpha}}_2^2} > 0.
\end{align*}
\end{myprop}
\begin{proof}
Because all norms on $\ensuremath{\mathbb{R}}^\ell$ are equivalent, strong convexity
of $\psi$ on $O$ implies that there exists a $c > 0$ such that
$\nabla^2 \psi(a) \succcurlyeq c I$ for all $a \in O$.
By Taylor's theorem, for any arbitrary $\alpha, \hat{\alpha}$,
\begin{align*}
\bregd{\alpha}{\hat{\alpha}} &= \frac{1}{2} (\alpha-\hat{\alpha})^\mathsf{T} \left[ \int_0^1 \nabla^2 \psi((1-t)\hat{\alpha}+\alpha t) dt \right] (\alpha - \hat{\alpha})
\geq \frac{c}{2} \norm{\alpha-\hat{\alpha}}^2_2.
\end{align*}
The claim now follows.
\end{proof}
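For intuition, the following sketch instantiates the proposition with a quadratic potential $\psi(a) = \frac{1}{2} a^\mathsf{T} P a$ (chosen purely for illustration), for which the Bregman divergence is exactly $\frac{1}{2}(\alpha - \hat{\alpha})^\mathsf{T} P (\alpha - \hat{\alpha})$ and the infimum in the proposition equals $\lambda_{\min}(P)/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # Hessian of psi; positive definite
c = np.linalg.eigvalsh(P).min()          # strong-convexity constant

def breg(a, b):
    """Bregman divergence of psi(a) = 0.5 * a^T P a."""
    d = a - b
    return 0.5 * d @ P @ d

ratios = []
for _ in range(1000):
    a, b = rng.normal(size=2), rng.normal(size=2)
    ratios.append(breg(a, b) / np.sum((a - b) ** 2))

print(min(ratios), ">=", c / 2)          # ratio never drops below lambda_min(P)/2
```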
\acfiniteapprox*
\begin{proof}
By Proposition~\ref{prop:ode_rhs_lipschitz},
the right-hand side of the dynamical system on $(x, e, \hat{\alpha}_p, \hat{\alpha}_m)$ is locally Lipschitz, and
therefore there exists a maximal time $T_{\max} > 0$
such that there exists a unique $C^1$ curve
$t \mapsto (x(t), e(t), \hat{\alpha}_p(t), \hat{\alpha}_m(t))$
that satisfies the dynamics on $[0, T_{\max})$.
We now define the candidate Lyapunov function $V : [0, T_{\max}) \rightarrow \ensuremath{\mathbb{R}}_{\geq 0}$ as
\begin{align*}
V(t) = \sigma_\Delta(Q(e(t), t)) + \bregdp{\alpha_p}{\hat{\alpha}_p} + \bregdm{\alpha_m}{\hat{\alpha}_m},
\end{align*}
where $\alpha_m$ is the minimizing $\alpha_m$ in the definition of
$B_{\mathrm{approx}}$\footnote{Such a minimizing
$\alpha_m$ exists since the function $\alpha_m \mapsto \sup_{\norm{x}_2 \leq B_x} \norm{\Psi(x) \alpha_m - h(x)}_2$
is continuous and the set
$\{ \bregdm{\alpha_m}{\alpha_{m,0}} \leq B_{\alpha_m} \}$ is closed.
}.
Taking the time derivative of $V$ and suppressing dependence on time,
\begin{align*}
\frac{d}{dt} V(t) &= \sigma'_\Delta(Q)\left(\ip{\nabla Q }{ f_e + g_e (Y\tilde{\alpha}_p + \Psi \alpha_m - h) } + \frac{\partial Q}{\partial t} \right) \\
&\qquad+ \bigip{\frac{d}{dt} \nabla \psi_p(\hat{\alpha}_p)}{\tilde{\alpha}_p} + \bigip{\frac{d}{dt} \nabla \psi_m(\hat{\alpha}_m)}{\tilde{\alpha}_m} \\
&\leq \sigma'_\Delta(Q)\left( - \rho(\norm{e}_2) + \ip{\nabla Q}{ g_e(Y\tilde{\alpha}_p + \Psi \alpha_m - h) }\right) \\
&\qquad+ \bigip{\frac{d}{dt} \nabla \psi_p(\hat{\alpha}_p)}{\tilde{\alpha}_p} + \bigip{\frac{d}{dt} \nabla \psi_m(\hat{\alpha}_m)}{\tilde{\alpha}_m} \\
&= -\sigma'_\Delta(Q)\rho(\norm{e}_2) + \sigma'_\Delta(Q)\bigip{g_e^\mathsf{T} \nabla Q}{ \Psi\hat{\alpha}_m - h} - \sigma'_\Delta(Q) \bigip{ g_e^\mathsf{T} \nabla Q}{\Psi\tilde{\alpha}_m} \\
&= -\sigma'_\Delta(Q)\rho(\norm{e}_2) + \sigma'_\Delta(Q) \bigip{g_e^\mathsf{T} \nabla Q}{\Psi \alpha_m - h} \\
&\leq -\sigma'_\Delta(Q)\rho(\norm{e}_2) + \sigma'_\Delta(Q) \norm{g_e^\mathsf{T} \nabla Q}_2 \norm{\Psi \alpha_m - h}_2.
\end{align*}
Because $\sigma_\Delta$ is a $\Delta$-admissible deadzone,
$\sigma'_\Delta(Q) > 0$ only when $Q > \Delta$.
Since $Q(e, t) \leq \mu_2(\norm{e}_2)$, whenever $\sigma'_\Delta(Q) > 0$
we must have $\norm{e}_2 > \mu_2^{-1}(\Delta)$.
Therefore,
\begin{align}
\frac{d}{dt} V(t) &\leq- \sigma'_\Delta(Q)\rho(\mu_2^{-1}(\Delta)) + \sigma'_\Delta(Q) \norm{g_e^\mathsf{T} \nabla Q}_2 \norm{\Psi \alpha_m - h}_2. \label{eq:d_dt_V_one}
\end{align}
Let
$T_0$ be defined as
\begin{align*}
T_0 := \sup\{ T \in [0, T_{\max}) \mid \norm{e(t)}_2 \leq R \:\: \forall t \in [0, T]\}.
\end{align*}
Note that since
\begin{align*}
\norm{e(0)}_2 \leq \mu_1^{-1}(V(0)) \leq \mu_1^{-1}\left( Q(e(0), 0) + B_{\alpha_p} + B_{\alpha_m} \right) < R,
\end{align*}
$T_0$ is well-defined.
Now, by means of contradiction,
suppose $T_0 < T_{\max}$.
For every $t \in [0, T_0]$, by \eqref{eq:small_error_implies_small_state}, we have that
$\norm{x(t) - x_d(t)}_2 \leq C_e R$,
and hence $\norm{x(t)}_2 \leq C_e R + B_d = B_x$.
Hence, by the definition of $B_{g_e}$ and $B_{\nabla Q}$,
from \eqref{eq:d_dt_V_one}
and the requirement that
$\Delta \geq \mu_2(\rho^{-1}(2 B_{g_e} B_{\nabla Q} B_{\mathrm{approx}}))$, for every $t \in [0, T_0]$,
\begin{align*}
\frac{d}{dt} V(t) &\leq - \sigma'_\Delta(Q)\rho(\mu_2^{-1}(\Delta)) + \sigma'_\Delta(Q) B_{g_e} B_{\nabla Q} B_{\mathrm{approx}} \\
&\leq - \sigma'_\Delta(Q)\rho(\mu_2^{-1}(\Delta))/2.
\end{align*}
Hence, $V(t) \leq V(0)$ for all $t \in [0, T_0]$.
On the other hand, since $T_0$ is maximal,
we must have that $\norm{e(T_0)}_2 = R$,
otherwise, if $\norm{e(T_0)}_2 < R$,
by continuity of the solution $e(t)$
on $[0, T_{\max})$,
there would exist a $\delta > 0$
such that for all $t \in [0, T_0+\delta]$,
we have $\norm{e(t)}_2 \leq R$.
It then follows that
\begin{align*}
V(0) \geq V(T_0) \geq \mu_1(\norm{e(T_0)}_2) = \mu_1(R) > \mu_1(\mu_1^{-1}(V(0))) = V(0),
\end{align*}
a contradiction. Hence $T_0 = T_{\max}$.
Now we argue that $T_{\max}$ cannot be finite.
Suppose towards a contradiction that $T_{\max}$ is finite.
We already have
$\max_{t \in [0, T_{\max})} \norm{e(t)}_2 \leq R$.
This implies that $\norm{x(t)}_2 \leq C_e R + B_d = B_x$
for $t \in [0, T_{\max})$.
Finally,
since $V(t) \leq V(0)$ on all $t \in [0, T_{\max})$,
this shows that both $\norm{\hat{\alpha}_p(t)}_2$
and $\norm{\hat{\alpha}_m(t)}_2$ are uniformly bounded
for all $t \in [0, T_{\max})$ via Proposition~\ref{prop:bregman_lower_bound}.
This contradicts the maximality of $T_{\max}$, showing
that $T_{\max} = \infty$.
To continue the proof,
we integrate the inequality
$\frac{d}{dt} V(t) \leq - \sigma'_\Delta(Q) \rho(\mu_2^{-1}(\Delta))/2$ to conclude that
\begin{align*}
\int_0^\infty \sigma'_\Delta(Q(e(t), t)) dt \leq \frac{2 V(0)}{\rho(\mu_2^{-1}(\Delta))}.
\end{align*}
We now argue that the integrand
$t \mapsto \sigma'_\Delta(Q(e(t), t))$ is uniformly continuous.
To do this, we
will argue that (a) $t \mapsto Q(e(t), t)$ is uniformly bounded,
(b) $t \mapsto e(t)$ is uniformly Lipschitz,
and (c) $t \mapsto Q(e(t), t)$ is uniformly Lipschitz.
To see (a), we note that $Q(e(t), t) \leq V(t) \leq V(0)$.
To see (b), we note that:
\begin{align*}
\norm{\dot{e}(t)}_2 \leq \norm{f_e(e(t), t)}_2 + \opnorm{g_e(x(t), t)} (\opnorm{Y(x(t), t)}\norm{\tilde{\alpha}_p(t)}_2 + \opnorm{\Psi(x(t))}\norm{\hat{\alpha}_m(t)}_2 + \norm{h(x(t))}_2).
\end{align*}
Since
$f_e$, $g_e$, $Y$, $\Psi$, and $h$ are locally bounded
in the first argument uniformly in $t$,
and since
$\norm{\hat{\alpha}_p(t)}_2$ and
$\norm{\hat{\alpha}_m(t)}_2$ are uniformly bounded,
this shows that $\norm{\dot{e}(t)}_2$ is uniformly bounded,
and hence $t \mapsto e(t)$ is uniformly Lipschitz.
To see (c), we observe that:
\begin{align*}
&\abs{Q(e(s), s) - Q(e(t), t)} \\
&\leq \abs{Q(e(s), s) - Q(e(s), t)} + \abs{Q(e(s), t) - Q(e(t), t)} \\
&\leq \left[ \sup_{t \geq 0} \sup_{\norm{e}_2 \leq R} \bigabs{\frac{\partial Q}{\partial t}(e, t)}\right] \abs{s-t} + \left[ \sup_{t \geq 0} \sup_{\norm{e}_2 \leq R} \norm{\nabla Q(e, t)}_2 \right]\norm{e(s) - e(t)}_2.
\end{align*}
Since $\frac{\partial Q}{\partial t}$
and $\nabla Q$ are both locally bounded in $e$
uniformly in $t$, and since $t \mapsto e(t)$ is uniformly Lipschitz,
we see that $t \mapsto Q(e(t), t)$ is also uniformly Lipschitz.
With (a)--(c) established, we now show that $t \mapsto \sigma'_\Delta(Q(e(t), t))$ is uniformly continuous.
Since $\sigma'_\Delta$ is locally Lipschitz,
it is uniformly Lipschitz on $[0, V(0)]$.
Therefore, $t \mapsto \sigma'_\Delta(Q(e(t), t))$ is the composition of two Lipschitz functions, and is hence Lipschitz
(and therefore uniformly continuous).
From this, we apply Barbalat's lemma to conclude that:
\begin{align*}
\lim_{t \rightarrow \infty} \sigma'_\Delta(Q(e(t), t)) = 0.
\end{align*}
Since $\sigma_\Delta$ is a $\Delta$-admissible deadzone, this implies that
\begin{align*}
\limsup_{t \rightarrow \infty} \norm{e(t)}_2 \leq \mu_1^{-1}(\Delta).
\end{align*}
\end{proof}
\subsection{Proof of Theorem~\ref{thm:dp_finite_approx}}
\newcommand{\bar{f}}{\bar{f}}
\dpfiniteapprox*
\begin{proof}
We proceed by reduction to Theorem~\ref{thm:ac_finite_approx}. Observe that we may write the predictor \eqref{eqn:dyn_predict} in the matched uncertainty form \eqref{eqn:gen_dyn} with $g(\hat{x}, t) = I$:
\begin{equation*}
\dot{\hat{x}} = f(\hat{x}, t) + k(\hat{x}, x(t)) + \left(\hat{f}(\hat{x}, \hat{\alpha}, t) - f(\hat{x}, t)\right).
\end{equation*}
This is an adaptive control problem with input $\hat{f}(\hat{x}, \hat{\alpha}, t)$ and desired trajectory $x(t)$. The ``nominal dynamics'' $\bar{f}(\hat{x}, t) := f(\hat{x}, t) + k(\hat{x}, x(t))$ is contracting
at rate $\lambda$ in the metric $M$ by assumption, meaning that
\begin{align*}
\frac{\partial \bar{f}}{\partial \hat{x}}(\hat{x}, t)^\mathsf{T} M(\hat{x}, t) + M(\hat{x}, t) \frac{\partial \bar{f}}{\partial \hat{x}}(\hat{x}, t) + \dot{M}(\hat{x}, t) \preccurlyeq - 2 \lambda M(\hat{x}, t) \:\:\forall \hat{x} \in \ensuremath{\mathbb{R}}^n, t \in \ensuremath{\mathbb{R}}_{\geq 0}.
\end{align*}
Let the error signal $e(t) := \hat{x}(t) - x(t)$.
The error dynamics are
\begin{align*}
\dot{e} &= f(\hat{x}, t) - f(x(t), t) + k(\hat{x}, x(t)) + \left( \hat{f}(\hat{x}, \hat{\alpha}, t) - f(\hat{x}, t)\right) \\
&= \bar{f}(\hat{x}, t) - f(x(t), t) + \left( \hat{f}(\hat{x}, \hat{\alpha}, t) - f(\hat{x}, t)\right).
\end{align*}
Hence we can define
\begin{align*}
f_e(e, t) := \bar{f}(e + x(t), t) - f(x(t), t), \:\: g_e(x, t) := I.
\end{align*}
We first check that $f_e(e, t)$ is locally Lipschitz and locally bounded uniformly in $t$ via Proposition~\ref{prop:local_composition}.
First we consider $(e, t) \mapsto f(e + x(t), t)$.
Write this map as the composition $f(\phi(e, t), t)$ with
$\phi(e, t) := e + x(t)$. Since the signal $x(t)$ is uniformly bounded,
it is clear that $\phi$ is both locally Lipschitz and locally bounded
uniformly in $t$. Since the outer function $f(x, t)$ is also locally
Lipschitz and locally bounded uniformly in $t$, the composition remains locally Lipschitz and locally bounded uniformly in $t$.
Next, since $k(\hat{x}, x)$ is locally Lipschitz in $\hat{x}$
and continuous,
and since $x(t)$ is uniformly bounded, the function
$(\hat{x}, t) \mapsto k(\hat{x}, x(t))$ is locally Lipschitz
and locally bounded uniformly in $t$.
By an identical composition argument, so is $(e, t) \mapsto k(e + x(t), x(t))$. Finally, the function $(e, t) \mapsto f(x(t), t)$ is trivially locally Lipschitz and locally bounded uniformly in $t$
since $x(t)$ is bounded. Therefore, $f_e$ is locally Lipschitz and locally bounded uniformly in $t$.
The Jacobian satisfies $\frac{\partial f_e}{\partial e}(e, t) = \frac{\partial \bar{f}}{\partial \hat{x}}(e + x(t), t)$,
which shows that $f_e(e, t)$ is contracting at rate $\lambda$ in the
metric $M_e(e, t) := M(e + x(t), t)$.
Furthermore, it is easy to check that
$e = 0$ is a particular solution to $\dot{e} = f_e(e, t)$,
as $k(x, x) = 0$ for all $x$.
Therefore, $f_e$ admits an exponentially stable Lyapunov function $Q(e, t) = E_{M_e(\cdot, t)}(e, 0)$ that satisfies:
\begin{align*}
\ip{\nabla Q(e, t)}{f_e(e, t)} + \frac{\partial Q}{\partial t}(e, t) \leq - 2 \lambda Q(e, t) \:\: \forall e \in \ensuremath{\mathbb{R}}^n, t \in \ensuremath{\mathbb{R}}_{\geq 0}.
\end{align*}
Moreover, because $\mu I \preceq M_e(e, t) \preceq L I$,
\begin{equation*}
\mu\norm{e}_2^2 \leq Q(e, t) \leq L\norm{e}_2^2 \:\: \forall e \in \ensuremath{\mathbb{R}}^n, t \in \ensuremath{\mathbb{R}}_{\geq 0}.
\end{equation*}
Now, observe that
\begin{equation*}
\nabla Q(e, t) = M_e(e, t)\gamma_s(0; e + x(t), x(t), t),
\end{equation*}
so that $B_{\nabla Q} = \sup_{t\geq 0}\sup_{\norm{e}_2 \leq R}\norm{\nabla Q(e, t)}_2 \leq L B_\gamma$.
Furthermore, by the boundedness of $M$ and
the assumption that
$(\hat{x}, t) \mapsto \norm{\gamma_s(0; \hat{x}, x(t), t)}_2$ is
locally bounded in $\hat{x}$ uniformly in $t$,
we have that $\nabla Q(e, t)$ is locally bounded in $e$ uniformly in $t$.
Similarly, since
\begin{align*}
\frac{\partial Q}{\partial t}(e, t) = - \gamma_s(1; e + x(t), x(t), t)^\mathsf{T} M_e(e, t) f_e(e, t),
\end{align*}
by the boundedness of $M$, the assumption that
$(\hat{x}, t) \mapsto \norm{\gamma_s(1; \hat{x}, x(t), t)}_2$
is locally bounded
in $\hat{x}$ uniformly in $t$ (since geodesics have constant speed,
we have $\norm{\gamma_s(1; \hat{x}, x(t), t)}_2 = \norm{\gamma_s(0; \hat{x}, x(t), t)}_2$),
we have that $\frac{\partial Q}{\partial t}(e, t)$ is locally bounded in $e$ uniformly in $t$. Hence, we can invoke Theorem~\ref{thm:ac_finite_approx}
with
\begin{align*}
C_e = 1, \:\:
B_d = B_x, \:\:
B_x = B_{\hat{x}}, \:\:
B_{g_e} = 1, \:\:
B_{\nabla Q} = L B_\gamma, \\
\rho\left(\norm{e}_2\right) = 2\lambda \mu \norm{e}_2^2, \:\: \mu_1\left(\norm{e}_2\right) = \mu\norm{e}_2^2, \:\:
\mu_2\left(\norm{e}_2\right) = L\norm{e}_2^2.
\end{align*}
The result now follows.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:ac_approx_interp}}
\acapproxinterp*
\begin{proof}
From the proof of Theorem~\ref{thm:ac_finite_approx},
the solution $t \mapsto (x(t), e(t), \hat{\alpha}_p(t), \hat{\alpha}_m(t))$
exists for $t \geq 0$, is unique, and is continuously differentiable. Furthermore,
by Proposition~\ref{prop:bregman_lower_bound} we have the following
uniform estimates
\begin{align*}
\sup_{t \geq 0} \norm{e(t)}_2 \leq R, \:\: \sup_{t \geq 0} \norm{x(t)}_2 \leq B_x, \:\: \sup_{t \geq 0} \norm{\hat{\alpha}_\ell(t)}_2 \leq \sqrt{\frac{B_{\alpha_\ell}}{c_\ell}} + \norm{\alpha_{\ell,0}}_2 + \sqrt{\frac{V(0)}{c_\ell}}, \:\: \ell \in \{p, m\}.
\end{align*}
Here, $c_p$ (resp. $c_m$) is a constant depending only on the ambient dimension $p$ and $\psi_p$ (resp. $m$ and $\psi_m$).
Now, applying that
$f, g, f_e, g_e, Y, \Psi$, and $h$ are all locally bounded in their first arguments uniformly in $t$,
that $\nabla^2 \psi_\ell$ is uniformly
bounded from below for $\ell \in \{p, m\}$,
and the assumption that $\sigma'_\Delta$ is $B$-bounded, we conclude that
$\dot{x}(t)$, $\dot{e}(t)$, $\dot{\hat{\alpha}}_p(t)$,
and $\dot{\hat{\alpha}}_m(t)$ are all uniformly bounded. Hence
$x(t)$, $e(t)$, $\hat{\alpha}_p(t)$, and $\hat{\alpha}_m(t)$ are uniformly Lipschitz
with Lipschitz constants that do not depend on $\Delta$ and $L$.
Next, the fact that $f_e$, $g_e$, $Y$, $\Psi$, and $h$ are locally Lipschitz and locally bounded in their first arguments
uniformly in $t$
implies that $\dot{e}(t)$ is uniformly Lipschitz,
with a Lipschitz constant that depends affinely on $L$.
Therefore, by Proposition~\ref{prop:generalized_barbalat},
we have that:
\begin{align*}
\limsup_{t \rightarrow \infty} \norm{\dot{e}(t)}_2 \leq C_1 \sqrt{\mu_1^{-1}(\Delta)(1+L)},
\end{align*}
for a constant $C_1$ that does not depend on $\Delta$ and $L$.
Now for any $t$,
\begin{align*}
\norm{g_e(x(t), t)(u(x(t), t) - Y(x(t), t) \alpha_p - h(x(t)))}_2 &\leq \norm{\dot{e}(t)}_2 + \norm{f_e(e(t), t)}_2 \\
&\leq \norm{\dot{e}(t)}_2 + C_2 \norm{e(t)}_2,
\end{align*}
where $C_2$ does not depend on $\Delta$ and $L$. Taking the $\limsup$ on both sides yields the claim.
\end{proof}
\section{Random feature approximation}
\label{sec:rf}
\subsection{Approximation theory}
We now demonstrate how the function space $\mathcal{F}_2$ leads to efficient randomized approximation algorithms. These randomized algorithms will enable us to restore the computational advantages of classical finite-dimensional parametric approximations while retaining the expressiveness of the RKHS $\mathcal{F}_2$ with high probability. Roughly speaking, the approach will be to apply the law of large numbers to the expectation \eqref{eqn:f2_func}, which leads to a finite-dimensional approximation
\begin{equation*}
h(\cdot) \approx \frac{1}{K}\sum_{i=1}^K \Phi(\cdot, \theta_i)\alpha_i,
\end{equation*}
where the $\theta_i \sim \nu$ are drawn i.i.d.\ from the base measure $\nu$ and the $\alpha_i = \alpha(\theta_i) \in \ensuremath{\mathbb{R}}^{d_1}$ are treated as parameters to be learned. $K$ denotes the number of sampling points and will tune the accuracy of the approximation. We provide a bound on the number of random features $K$ needed to ensure that there exists a set of weights $\{\alpha_i\}$ capable of $\varepsilon$-uniformly approximating $h$ on a fixed compact set $X\subset\ensuremath{\mathbb{R}}^n$. To begin, let $B_\Phi(\delta)$ be any function that satisfies,
for any $\delta \in (0, 1)$,
\begin{align*}
\Pr_{\theta \sim \nu}\left( \sup_{x \in X} \opnorm{\Phi(x, \theta)} > B_\Phi(\delta) \right) \leq \delta.
\end{align*}
Then, for any $\eta \in (0, 1)$, define a truncated version of $\Phi$ as
\begin{align*}
\Phi_\eta(x, \theta) := \Phi(x, \theta) \mathbf{1}\left\{ \opnorm{\Phi(x, \theta)} \leq B_\Phi(\eta) \right\}.
\end{align*}
We will be interested in approximating functions over the subset
\begin{equation*}
\mathcal{F}_2(B) = \left\{f(\cdot) = \int_{\Theta}\Phi(\cdot, \theta)\alpha(\theta)d\nu(\theta) \,\Bigg|\, \esssup_{\theta\in\Theta}\norm{\alpha(\theta)}_2 \leq B\right\} \subset \mathcal{F}_2,
\end{equation*}
which is dense in $\mathcal{F}_2$ as $B\rightarrow\infty$~\citep{rahimi08uniform}; this bound on the density $\alpha(\theta)$ is needed to obtain a uniform approximation result. With this notation in hand, we may extend the approximation theory of \citet{rahimi08uniform}
to vector-valued functions.
\begin{restatable}[Approximation error]{myprop}{uniform}
\label{prop:uniform_approx}
Let $X \subset \ensuremath{\mathbb{R}}^n$ be compact. Fix $\delta \in (0, 1)$, $B_h > 0$, $h \in \mathcal{F}_2(B_h)$, and a positive integer $K$.
Let $\theta_1, ..., \theta_K$ be i.i.d.\ draws from $\nu$.
Put $\eta = \frac{\delta}{2K}$.
With probability at least $1-\delta$, there exist
weights $\{\alpha_i\}_{i=1}^{K} \subset \ensuremath{\mathbb{R}}^{d_1}$ such that $\norm{\alpha_i}_2 \leq B_h$ for
$i=1, ..., K$, and
\begin{align*}
\bignorm{\frac{1}{K} \sum_{i=1}^{K} \Phi(\cdot, \theta_i) \alpha_i - h }_\infty &\leq \frac{2}{K} \mathbb{E} \bignorm{\sum_{i=1}^{K} \varepsilon_i \Phi_\eta(\cdot, \theta_i)\alpha(\theta_i)}_\infty \\
&\qquad + \sqrt{2} B_\Phi(\eta) B_h \sqrt{\frac{\log(2/\delta)}{K}} + B_h \sqrt{\frac{\delta\sup_{x \in X} \mathbb{E}\opnorm{\Phi(x, \theta)}^2 }{2K}}.
\end{align*}
Above, each $\varepsilon_i$ is an i.i.d.\ Rademacher random variable\footnote{i.e., $\Pr(\varepsilon_i = 1) = \Pr(\varepsilon_i = -1) = 1/2$.} and $\norm{f}_{\infty} := \sup_{x\in X}\norm{f(x)}_2$.
\end{restatable}
In order to bound the Rademacher complexity term
appearing in Proposition~\ref{prop:uniform_approx},
we now make a few more assumptions on the structure of $\Phi(x, \theta)$. These assumptions are motivated by the operator-valued Bochner's theorem \citep{brault16randomfeatures}.
\begin{assmp}
\label{assmp:weights_biases}
The feature space $\Theta \subseteq\ensuremath{\mathbb{R}}^{n+1}$, so that $\theta \in \Theta$ may be written as $\theta = (w, b)$ with $w \in \ensuremath{\mathbb{R}}^n$ and $b\in\ensuremath{\mathbb{R}}$. Moreover, the feature map can be factorized as $\Phi(x, \theta) = \phi(w^\mathsf{T} x + b)M(w)$ for $M:\ensuremath{\mathbb{R}}^{n}\rightarrow \ensuremath{\mathbb{R}}^{d \times d_1}$ and a $1$-Lipschitz scalar function $\phi : \ensuremath{\mathbb{R}} \rightarrow [-1, 1]$.
\end{assmp}
Because $\abs{\phi} \leq 1$, we may take $B_\Phi(\delta)$ to be any function that satisfies
$\Pr( \opnorm{M(w)} > B_\Phi(\delta)) \leq \delta$.
Accordingly, we have
$\Phi_\eta(x, \theta) = M_\eta(w) \phi(w^\mathsf{T} x + b)$
with $M_\eta(w) := M(w) \mathbf{1}\{ \opnorm{M(w)} \leq B_\Phi(\eta) \}$. With these extra assumptions in place, we can bound the Rademacher complexity term as follows.
\begin{restatable}[Rademacher complexity bound]{myprop}{rad}
\label{prop:rademacher_bound}
Let Assumption~\ref{assmp:weights_biases} hold, and denote $B_X := \sup_{x \in X} \norm{x}_2$. Then for any $\eta \in (0, 1)$,
\begin{align*}
\frac{2}{K} \mathbb{E} \bignorm{ \sum_{i=1}^{K} \varepsilon_i \Phi_\eta(\cdot; \theta_i) \alpha(\theta_i)}_\infty &\leq \frac{4 B_h B_\Phi(\eta)}{\sqrt{K}} \left[ B_X \sqrt{\mathbb{E} \norm{w_1}_2^2} + \sqrt{d_1} \right].
\end{align*}
\end{restatable}
Combining Proposition~\ref{prop:uniform_approx} and Proposition~\ref{prop:rademacher_bound}, we have that
with probability at least $1-\delta$,
\begin{align}
&\inf_{\{\alpha_i\}_{i=1}^{K} \subseteq \ensuremath{\mathbb{R}}^{d_1} : \norm{\alpha_i}_2 \leq B_h} \bignorm{\frac{1}{K} \sum_{i=1}^{K} \Phi(\cdot, \theta_i) \alpha_i - h}_\infty \nonumber \\
&\leq \frac{B_h}{\sqrt{K}} \bigg[ 2 B_\Phi\left(\frac{\delta}{2K}\right) \left(2 B_X \sqrt{\mathbb{E}\norm{w_1}_2^2} + 2 \sqrt{d_1} + \sqrt{\log(2/\delta)}\right) + \sqrt{\frac{\delta}{2} \mathbb{E}\opnorm{M(w)}^2 } \bigg]. \label{eq:final_rf_bound}
\end{align}
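The bound \eqref{eq:final_rf_bound} predicts an $O(1/\sqrt{K})$ decay of the achievable sup-norm error. The sketch below illustrates this in the scalar case ($d = d_1 = 1$) with cosine features and a least-squares fit of the weights; the target function, grid, and all constants are chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Ground truth" h: a fixed expansion in cosine features, i.e. a member of F_2(B).
J = 200
wt = rng.normal(size=J)
bt = rng.uniform(0.0, 2.0 * np.pi, size=J)
at = rng.uniform(-1.0, 1.0, size=J)
h = lambda x: np.cos(np.outer(x, wt) + bt) @ at / J

xs = np.linspace(-1.0, 1.0, 500)

def sup_error(K):
    """Draw K fresh random features and fit the weights alpha_i by least squares."""
    w = rng.normal(size=K)
    b = rng.uniform(0.0, 2.0 * np.pi, size=K)
    F = np.cos(np.outer(xs, w) + b) / K      # column i holds Phi(x, theta_i) / K
    alpha, *_ = np.linalg.lstsq(F, h(xs), rcond=None)
    return np.abs(F @ alpha - h(xs)).max()   # empirical sup-norm error on the grid

errs = {K: sup_error(K) for K in (25, 100, 400)}
print(errs)  # the error shrinks as K grows, consistent with the O(1/sqrt(K)) rate
```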
To simplify this expression, we now look at some particular choices of kernels.
\subsection{Examples of Reproducing Kernels}
\label{sec:rf:examples}
In what follows, we apply the bound \eqref{eq:final_rf_bound} to a few examples from~\citet{brault16randomfeatures} and~\citet{minh16operator}.
Let $\mathsf{k}(x - z)$ be an arbitrary scalar shift-invariant kernel,
and denote by $\mu$ the normalized inverse Fourier transform of $\mathsf{k}(\cdot)$, which is the marginal of $\nu$ over $w$.
We will assume generically that $\mathbb{E}_{w \sim \mu}\norm{w}_2^2 \asymp n$.
\paragraph{Decomposable kernels}
Let $\mathsf{K}(x, z) = A \mathsf{k}(x - z)$ for any positive semidefinite $A = BB^\mathsf{T}$.
Then $\Phi(x, \theta) = B\cos(w^\mathsf{T} x + b)$ and $B_\Phi(\delta) = \opnorm{B}$.
Here, the approximation error bound \eqref{eq:final_rf_bound}
scales as $\frac{B_h \opnorm{B}}{\sqrt{K}} \left(B_X \sqrt{n} + \sqrt{d_1} \right)$.
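As a sketch of the decomposable construction (assuming, for concreteness, the Gaussian kernel $\mathsf{k}(x-z) = e^{-\norm{x-z}_2^2/2}$, whose spectral measure is $N(0, I)$), the Monte Carlo average of the feature products recovers $A \mathsf{k}(x-z)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, d1 = 3, 2, 2
B = rng.normal(size=(d, d1))             # A = B B^T
x, z = rng.normal(size=n), rng.normal(size=n)

K = 500_000
w = rng.normal(size=(K, n))              # spectral (base) measure of the Gaussian k
b = rng.uniform(0.0, 2.0 * np.pi, size=K)

# Monte Carlo recovery of K(x, z) = A k(x - z) from the features B cos(w^T x + b)
cxz = np.cos(w @ x + b) * np.cos(w @ z + b)
K_hat = (2.0 / K) * cxz.sum() * (B @ B.T)
K_true = np.exp(-0.5 * np.sum((x - z) ** 2)) * (B @ B.T)
print(np.abs(K_hat - K_true).max())      # small Monte Carlo error
```

The factor of $2$ comes from the identity $\mathbb{E}_b[2\cos(u+b)\cos(v+b)] = \cos(u-v)$ for $b$ uniform on $[0, 2\pi]$.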
\paragraph{Curl-free kernel}
Let $n=d$ and set $\mathsf{K}(x, z) = -\nabla^2 \mathsf{k}(x - z)$.
Then $A(w) = ww^\mathsf{T}$ and $\Phi(x, \theta) = w \cos(w^\mathsf{T} x + b)$. If $\mu = N(0, \sigma^2 I)$, then
we may take $B_\Phi(\delta) = \sigma\sqrt{n} + 2 \sigma \sqrt{\log(1/\delta)}$
by standard Gaussian concentration results.
The approximation error bound \eqref{eq:final_rf_bound} then scales as $\frac{B_h (B_X \vee 1)}{\sqrt{K}}(n + \log{K})$.
\paragraph{Divergence-free kernel}
Again let $n=d$. Set $\mathsf{K}(x, z) = (\nabla^2 - I \Delta) \mathsf{k}(x-z)$,
where $\Delta$ is the Laplacian and $I$ is the identity matrix.
Then $A(w) = \norm{w}_2^2 P^\perp_{w}$,
where $P_{M}$ denotes the orthogonal projection onto the range of $M$ and
$P^\perp_{M} = I - P_{M}$. Hence,
$\Phi(x, \theta) = \norm{w}_2 P^\perp_{w} \cos(w^\mathsf{T} x + b)$.
If $\mu = N(0, \sigma^2 I)$, then
$B_\Phi(\delta) = \sigma\sqrt{n} + 2\sigma \sqrt{\log(1/\delta)}$.
The approximation error bound \eqref{eq:final_rf_bound}
also scales as $\frac{B_h (B_X \vee 1)}{\sqrt{K}}(n + \log{K})$.
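Each divergence-free feature is exactly divergence-free, since $w^\mathsf{T} P^\perp_w = 0$. The sketch below (Gaussian base measure assumed, as above) builds a random field from these features and checks its divergence by central differences:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 3, 10
w = rng.normal(size=(K, n))
b = rng.uniform(0.0, 2.0 * np.pi, size=K)
alpha = rng.normal(size=(K, n))

def field(x):
    """f(x) = (1/K) sum_i ||w_i|| P_perp(w_i) alpha_i cos(w_i^T x + b_i)."""
    out = np.zeros(n)
    for wi, bi, ai in zip(w, b, alpha):
        P = np.eye(n) - np.outer(wi, wi) / (wi @ wi)  # projection onto w_i^perp
        out += np.linalg.norm(wi) * np.cos(wi @ x + bi) * (P @ ai)
    return out / K

# central-difference estimate of (div f)(x0) at a random test point
x0, eps = rng.normal(size=n), 1e-5
div = sum(
    (field(x0 + eps * e)[i] - field(x0 - eps * e)[i]) / (2.0 * eps)
    for i, e in enumerate(np.eye(n))
)
print(abs(div))  # close to zero, up to finite-difference error
```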
\paragraph{Kernels leveraging prior physical information}
Any known physical structure can easily be combined with
reproducing kernels. As a concrete example,
suppose the state $x$ decomposes as $x = (x_1, x_2) \in \ensuremath{\mathbb{R}}^{n_1 + n_2}$, and
that the disturbance factorizes as $h(x) = h_1(x_1) h_2(x_2)$, where $h_1 : \ensuremath{\mathbb{R}}^{n_1} \rightarrow \ensuremath{\mathbb{R}}^{d}$ is a known vector-valued function and
$h_2 : \ensuremath{\mathbb{R}}^{n_2} \rightarrow \ensuremath{\mathbb{R}}$ is an unknown function in an RKHS with scalar kernel $\mathsf{k}$.
Then we can set $\mathsf{K}((x_1, x_2), (z_1, z_2)) = h_1(x_1) h_1(z_1)^\mathsf{T} \mathsf{k}(x_2, z_2)$.
This type of structural simplification is common
in, e.g., robotic applications~\citep{sanner_robot}.
\section{Related Work and Summary of Contributions}
\label{sec:related}
\paragraph{Uniform approximation for adaptive control} Most related to the present contribution is a line of work initiated by~\cite{kurdila13rkhs} and followed by~\cite{bobade18rkhs}, who study adaptive control and estimation in RKHSs. In these works, a nonparametric input is treated as an ideal, non-implementable abstraction, and this abstract input is approximated via orthogonal projections or a fixed grid of radial basis functions. Asymptotic convergence results are shown for the approximations, but no finite-sample theory is given, and the grid of centers is chosen in an \textit{ad hoc} fashion. By gridding the space, these past approaches essentially reduce to a classic line of work by~\cite{sanner_nn}, who approximate an unknown dynamics uniformly with a sum of radial basis functions. These basis functions are spaced on a regular grid, and the grid resolution is chosen based on considerations from sampling theory to ensure a sufficient degree of uniform approximation for the control application. Importantly, while these gridding-based approaches are suitable and highly efficient for low-dimensional systems, they become intractable for higher-dimensional systems. From the perspective of constructing a regular grid, ``low-dimensional'' is often as restrictive as four-dimensional, which is easily surpassed by modern control applications.
\paragraph{Randomization and dimensionality-dependence} We show that a nonparametric controller can be implemented as the action of a certain kernel integral operator against a known signal over the system trajectory, and we provide an intuitive derivation via the celebrated ``kernel trick''. This result naturally leads to the randomized approximation methods developed here, which can be seen as a stochastic alternative to a fixed grid of basis functions. The main advantage of randomization is computational: due to concentration of measure, the number of basis functions needed for our construction grows polynomially in the state and input dimension of the underlying control problem. This permits our method to scale to much higher-dimensional systems than prior methods based on gridding, which require a number of basis functions that grows \textit{exponentially} in dimension. Moreover, our work provides a natural path towards developing a theory of adaptive control with more expressive function classes such as single-layer neural networks~\citep{bach_breaking, bengio_convex}, as well as alternative approximation schemes such as the Nystr{\"o}m method~\citep{online_kernel}.
\paragraph{Random feature approximations} Our randomized algorithm is based on random Fourier features~\citep{rahimi07randomfeatures,rahimi08uniform,rahimi08kitchensinks} and their extension to vector-valued functions~\citep{brault16randomfeatures,minh16operator}. We build heavily on the results of~\cite{rahimi08uniform}, who prove that the $L_\infty$ approximation error
over a compact set for a function $f$ in an RKHS $\mathcal{H}$
decays as $O(1/\sqrt{K})$, where $K$ is the number of features drawn
from a particular distribution induced by $\mathcal{H}$. This rate matches that due to~\cite{barron93universal} for approximation of functions whose gradients have absolutely integrable Fourier transforms via sums of sigmoidal basis functions.
\paragraph{Control and robotic learning} In control and robotics applications, several authors have utilized
random features for function approximation in learning
stable vector fields~\citep{sindhwani18vectorfields},
control contraction metrics~\citep{singh20learning},
Lyapunov functions~\citep{boffi20learningcertificates},
and in velocity gradient-based adaptation~\citep{boffi21regret}.
However, these works do not analyze the effect of the
approximation error introduced by random features
on the control performance, nor do they provide any bounds
on the number of random features needed to achieve a
specified level of uniform approximation.
\paragraph{Generality of results} While the focus of this work is on nonparametric adaptive control and randomized approximation schemes, we have written our results generally to capture a variety of different
settings in adaptive control, including Lyapunov-based adaptive control~\citep{krstic95adaptivebook}, speed/velocity gradient methods~\citep{fradkov99,krstic95adaptivebook},
mirror descent~\citep{boffi_neco_imp_reg}, and contraction metrics~\citep{brett_adapt}. We believe that this unification of results represents one of the most general treatments of nonlinear adaptive control available in the literature, and consider it to be of independent interest.
\section{Simulations}
\label{sec:simulations}
We now study the empirical performance of the nonparametric method and its randomized approximation. In the control setting we directly compare the kernel and approximate inputs. In prediction we illustrate the ability of the random feature approximation to scale to high-dimensional systems. In addition, we study the convergence of the prediction and interpolation errors as a function of $K$.
\subsection{Adaptive control}
Here we consider a synthetic example in adaptive control to compare the nonparametric adaptive input to its randomized approximation.
\paragraph{System dynamics} We study the stable linear time-invariant system
\begin{align}
\dot{x} = A \left(x - \frac{3}{2}\mathbf{1}\right) + u(x, t) - h(x), \:\: x \in \ensuremath{\mathbb{R}}^5, \:\: h(x) = \sin(x) \mathrm{erf}(x), \label{eq:adaptive_control_exp}
\end{align}
where $A$ is a known matrix
with eigenvalues lying entirely in the left half-plane and $\mathbf{1}$ denotes the vector of ones. The operations defining $h$ are applied elementwise to each coordinate.
The error signal is set to $e(t) = x(t) - \frac{3}{2}\mathbf{1}$, and the desired trajectory is constant at the nominal equilibrium point
$x_d(t) = \frac{3}{2}\mathbf{1}$. This system admits a Lyapunov function $Q(x, t) = \frac{1}{2} (x-x_d(t))^\mathsf{T} P (x-x_d(t))$,
where $P$ is the unique positive definite solution to the Lyapunov matrix equation
$A^\mathsf{T} P + P A = - I$.
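For reference, $P$ can be obtained with a standard Lyapunov solver. The sketch below (Python with SciPy) uses a randomly generated stable matrix in place of the experiment's $A$, which is not specified in the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
# shift the spectrum so that all eigenvalues lie strictly in the left half-plane
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)

# solve A^T P + P A = -I; solve_continuous_lyapunov(a, q) solves a X + X a^H = q
P = solve_continuous_lyapunov(A.T, -np.eye(n))
```

The solution is symmetric positive definite whenever $A$ is stable, so $Q(x, t)$ above is a valid Lyapunov function.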
\paragraph{Implementation}
We apply a nonparametric input generated by the Gaussian kernel
\begin{align*}
\mathsf{K}(x, y) = \exp\left(-\frac{\norm{x-y}_2^2}{2\sigma^2}\right) I, \:\: \sigma = 0.1.
\end{align*}
For its randomized approximation, we use the
random Fourier features described in Section~\ref{sec:rf:examples}. Both the randomized and nonparametric adaptive laws are obtained by forward Euler integration with a fixed timestep
$\Delta t = 0.001$. At each time, the kernel input \eqref{eqn:kernel_input}
is evaluated via a Riemann sum approximation at the same resolution,
\begin{align*}
u(x, t) = \int_0^t K(x, x(\tau))c(\tau)d\tau \approx \sum_{i=0}^{n_t}K(x, x(t_i))c(t_i)\Delta t
\end{align*}
with $n_t = t/\Delta t$. This corresponds to solving the pointwise-decoupled partial differential equation
\begin{equation*}
\frac{\partial u}{\partial t}(x, t) = K(x, x(t))c(t)
\end{equation*}
again via forward Euler integration with a timestep $\Delta t$.
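A minimal sketch of this discretization follows (Python; the Gaussian matrix-valued kernel and the stored-trajectory Riemann sum mirror the description above, while the trajectory itself is a placeholder):

```python
import numpy as np

def gaussian_matrix_kernel(x, y, sigma=0.1):
    # K(x, y) = exp(-||x - y||^2 / (2 sigma^2)) I
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)) * np.eye(len(x))

def kernel_input(x, traj, cs, dt, kernel=gaussian_matrix_kernel):
    # Riemann-sum approximation of u(x, t) = int_0^t K(x, x(tau)) c(tau) dtau
    u = np.zeros_like(x)
    for x_i, c_i in zip(traj, cs):
        u += (kernel(x, x_i) @ c_i) * dt
    return u
```

In practice \texttt{traj} and \texttt{cs} grow by one entry per Euler step, so the cost of evaluating the input grows linearly with elapsed time; this is the computational motivation for the randomized approximation.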
\begin{figure}[h]
\centering
\begin{tabular}{ll}
\begin{overpic}[width=.475\textwidth]{figures_12_21/control/kernel_input_control_long.pdf}%
\put(5, 60){\textbf{A}}
\end{overpic}&
\begin{overpic}[width=.475\textwidth]{figures_12_21/control/kernel_input_input_magnitude_long.pdf}%
\put(5, 60){\textbf{B}}
\end{overpic}\\
\begin{overpic}[width=.475\textwidth]{figures_12_21/control/kernel_input_ideal_comparison.pdf}%
\put(5, 60){\textbf{C}}
\end{overpic}&
\begin{overpic}[width=.475\textwidth]{figures_12_21/control/interpolation.pdf}%
\put(5, 60){\textbf{D}}
\end{overpic}
\end{tabular}
\caption{\textbf{Adaptive control} (A) Tracking error as a function of time. Error bars display the $20\%/80\%$ quantiles over $20$ trials (draws of the $\theta_i$) for each choice of $K$. Solid lines display the median. The tracking error decreases monotonically with the number of features, and the kernel input obtains the best performance by several orders of magnitude. (B) Magnitude of the adaptive input over time. The kernel input obtains the best performance despite using the lowest input magnitude. (C) Comparison of the performance of the disturbed system with kernel input (solid) to the performance of the nominal system without disturbance $\dot{x} = A\left(x - \frac{3}{2}\mathbf{1}\right)$ (dot-dashed). Despite the presence of the uncertainty, the kernel input converges faster to the equilibrium point. (D) Interpolation error as a function of time. Similar to the tracking error, the interpolation error decreases monotonically with increasing $K$, and the kernel input obtains the best performance by several orders of magnitude.}
\label{fig:control}
\end{figure}
\paragraph{Results (Figure~\ref{fig:control})}%
Error bars around the random feature curves display the $20\%$ and $80\%$ quantiles, while the solid central curves display the corresponding median. For every value of $K$, the kernel input outperforms the randomized approximation, obtaining the best tracking performance both transiently and asymptotically by several orders of magnitude. The tracking error at each fixed time decreases monotonically as a function of $K$ (Figure~\ref{fig:control}A).
The overall magnitude of the adaptive control input $\norm{u(x, t)}_2$ decreases monotonically as a function of $K$, and the kernel input consistently applies the lowest magnitude input despite obtaining the best performance (Figure~\ref{fig:control}B).
Similar to the tracking error, the kernel input obtains the best dynamics approximation by several orders of magnitude, and the dynamics interpolation error decreases monotonically as a function of $K$ for each fixed time (Figure~\ref{fig:control}D).
\paragraph{Comparison to the nominal system} The performance of the kernel input can be compared to the performance of the nominal system $\dot{x} = A\left(x - \frac{3}{2}\mathbf{1}\right)$. Despite the presence of the uncertainty, the kernel input obtains performance that exceeds that of the nominal system (Figure~\ref{fig:control}C).
This counter-intuitive observation can be understood as follows. Given any stable matrix $A'$ that shares a common Lyapunov function with $A$, we can treat the discrepancy $(A - A')\left(x - \frac{3}{2}\mathbf{1}\right)$ as a disturbance
to the system with dynamics $A'$:
\begin{align*}
\dot{x} &= A'\left(x - \frac{3}{2}\mathbf{1}\right) + u(x, t) - \bar{h}(x),\\
\bar{h}(x) &= h(x) + (A - A')\left(x - \frac{3}{2}\mathbf{1}\right).
\end{align*}
In this setting, the \textit{same} adaptive algorithm can easily be shown to lead to a convergent feedback system. From this perspective, we can view the kernel input as bootstrapping the performance of the disturbed system to match the performance of another system with a common Lyapunov function.
Since $A'$ can be taken to be any stable matrix that shares a Lyapunov function with $A$, we can think of the kernel
input as implicitly choosing the $A'$ that yields the
fastest convergence.
\subsection{Adaptive prediction}
The infinite-dimensional input considered in Theorem~\ref{thm:nonparametric_conv} enjoys guarantees that are independent of the system dimension. As shown by \eqref{eq:final_rf_bound}, the accuracy of the random feature approximation only depends polynomially on the system dimension. These observations suggest that the nonparametric input and its randomized approximations should scale well to high-dimensional systems.
\paragraph{Failures of uniform gridding}
Any gridding-based approach must depend \textit{exponentially} on the system dimension, and as a consequence suffers from the curse of dimensionality. Modern robotic systems, for instance, easily have state dimension in the twenties or thirties, which renders such approaches inapplicable for robotic control. For illustration, consider a uniform gridding method as suggested by the calculations in~\cite{sanner_nn}. For
a nine-dimensional system, placing a meager ten basis functions in each direction would require one billion total basis functions, a computationally and statistically intractable number. Here, we study the efficiency of our randomized method in forming a predictive model of a sixty-dimensional system.
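The arithmetic behind this comparison is immediate; the sketch below (Python, with hypothetical sizes) contrasts the exponential grid count with a polynomially growing feature budget:

```python
def grid_basis_count(dim, per_axis=10):
    # uniform gridding: the number of centers grows as per_axis**dim
    return per_axis ** dim

def random_feature_budget(dim, c=100):
    # illustrative polynomial budget c * dim^2 (the constant c is hypothetical)
    return c * dim ** 2
```

For a nine-dimensional system the grid already needs $10^9$ centers, while a quadratic-in-dimension budget stays far smaller even at sixty dimensions.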
\begin{figure}[t]
\centering
\begin{tabular}{ll}
\begin{overpic}[width=.475\textwidth]{figures_12_21/prediction/err_norm.pdf}%
\put(5, 55){\textbf{A}}
\end{overpic}&
\begin{overpic}[width=.475\textwidth]{figures_12_21/prediction/interpolation.pdf}%
\put(5, 55){\textbf{B}}
\end{overpic}\\
\begin{overpic}[width=.475\textwidth]{figures_12_21/prediction/q_fig.pdf}%
\put(5, 55){\textbf{C}}
\end{overpic}&
\begin{overpic}[width=.475\textwidth]{figures_12_21/prediction/vary_features_wide.pdf}%
\put(5, 55){\textbf{D}}
\end{overpic}
\end{tabular}
\caption{\textbf{Adaptive prediction} (A) Prediction error for position $\norm{\hat{q}(t) - q(t)}_2$ and momentum $\norm{\hat{p}(t) - p(t)}_2$ as a function of time with $K=2500$. The error smoothly decreases to a ball around zero. (B) Dynamics approximation error $\norm{\nabla_{\hat{q}}\widehat{H}(\hat{p}(t), \hat{q}(t), t) - \nabla_{q}H(p(t), q(t))}_2$ and $\norm{\nabla_{\hat{p}}\widehat{H}(\hat{p}(t), \hat{q}(t), t) - \nabla_{p}H(p(t), q(t))}_2$ as a function of time with $K=2500$. As learning proceeds, the dynamics approximation becomes consistent with the true dynamics up to a ball around zero. (C) Position prediction as a function of time for $K=2500$ (see-through: prediction, solid: ground truth). An initially inaccurate period ($t \lesssim 10$) includes rapid oscillations of the prediction around the true trajectory. As learning proceeds, the prediction converges to a smooth and accurate prediction of the true trajectory, so the two become indistinguishable. (D) Prediction and interpolation errors at $t=100$ as a function of the number of features $K$ (solid: median, error bars: 20\% / 80\% quantiles over $10$ trials per $K$ value). As $K$ increases, the asymptotic prediction error decreases as a power law $\sim K^{-\xi}$, and the interpolation error decreases as a distinct power law $\sim K^{-\zeta}$. Best-fit power laws obtained via nonlinear least squares are shown in dashed with $\xi \approx 1.28 \pm 0.03$ and $\zeta \approx 0.77 \pm 0.03$.}
\label{fig:prediction}
\end{figure}
\paragraph{$m$-body system} Consider a system of $m$ point masses interacting via Newtonian gravitation in $d$ dimensions, and denote by $q_i\in\ensuremath{\mathbb{R}}^d$ the position of mass $i$ and $p_i\in\ensuremath{\mathbb{R}}^d$ the momentum of mass $i$. Assuming equal masses, such a system admits a Hamiltonian in non-dimensionalized units
\begin{equation*}
H\left(\left\{p_i\right\}_{i=1}^m, \left\{q_i\right\}_{i=1}^m\right) = \sum_{i=1}^m\frac{\norm{p_i}_2^2}{2} - \sum_{i < j}^m\frac{1}{\norm{q_i - q_j}_2},
\end{equation*}
and a corresponding symplectic dynamics $\dot{q}_i = \frac{\partial H}{\partial p_i}$, $\dot{p}_i = -\frac{\partial H}{\partial q_i}$. We denote by $x = (q^\mathsf{T}, p^\mathsf{T})^\mathsf{T} \in \ensuremath{\mathbb{R}}^{2md}$ with $q \in \ensuremath{\mathbb{R}}^{md}$ and $p \in \ensuremath{\mathbb{R}}^{md}$ vectors containing the stacked $q_i$ and $p_i$ over $i$.
\paragraph{Hamiltonian estimation} Let $\hat{q}_i \in \ensuremath{\mathbb{R}}^d$ and $\hat{p}_i \in \ensuremath{\mathbb{R}}^d$ denote estimates of the coordinates and momenta of the masses. Similar to~\cite{boffi_neco_imp_reg}, consider learning a model of the Hamiltonian $\widehat{H}\left(\left\{\hat{p}_i\right\}_{i=1}^m, \left\{\hat{q}_i\right\}_{i=1}^m, t\right)$ by evolving the state estimates according to
\begin{align*}
\dot{\hat{q}}_i &= \frac{\partial \widehat{H}}{\partial \hat{p}_i}\left(\left\{\hat{p}_i\right\}_{i=1}^m, \left\{\hat{q}_i\right\}_{i=1}^m, t\right) + k\cdot\left(q_i(t) - \hat{q}_i\right),\\
%
%
\dot{\hat{p}}_i &= -\frac{\partial \widehat{H}}{\partial \hat{q}_i}\left(\left\{\hat{p}_i\right\}_{i=1}^m, \left\{\hat{q}_i\right\}_{i=1}^m, t\right) + k\cdot\left(p_i(t) - \hat{p}_i\right),
\end{align*}
where $k > 0$ denotes a measurement gain and where $q_i(t)$ and $p_i(t)$ denote measurements of the true system state. The error signals $\tilde{q}_i(t) = \hat{q}_i(t) - q_i(t)$ and $\tilde{p}_i(t) = \hat{p}_i(t) - p_i(t)$ can be used to update the Hamiltonian estimate $\widehat{H}$ until $\hat{q}_i(t)$ and $\hat{p}_i(t)$ become consistent with $q_i(t)$ and $p_i(t)$.
\paragraph{Symplectic kernel} Define the symplectic matrix
\begin{equation*}
J = \begin{pmatrix} 0 & I \\ -I & 0\end{pmatrix}, \:\:\: \text{so that} \: \: \: \dot{x} = J\nabla_x H(x)
\end{equation*}
and let $\hat{x} = (\hat{q}^\mathsf{T}, \hat{p}^\mathsf{T})^\mathsf{T}$ with $\hat{q}\in\ensuremath{\mathbb{R}}^{md}$ and $\hat{p}\in\ensuremath{\mathbb{R}}^{md}$ the stacked vectors of $\hat{q}_i$ and $\hat{p}_i$ over $i$. We search for the Hamiltonian estimate $\widehat{H}$ over an RKHS $\mathcal{H}_{\mathsf{k}}$ corresponding to a scalar-valued translation-invariant kernel $\mathsf{k}:\ensuremath{\mathbb{R}}\rightarrow\ensuremath{\mathbb{R}}$. Similar to the curl-free kernel seen in Section~\ref{sec:rf:examples}, we define the \textit{symplectic kernel}
\begin{equation}
\label{eqn:ham_kernel}
\mathsf{K}(x, y) = -J\nabla^2 \mathsf{k}(x-y)J^\mathsf{T},
\end{equation}
which describes the RKHS corresponding to the dynamics $J\nabla_{\hat{x}} \widehat{H}(\hat{x})$ for $\widehat{H}\in\mathcal{H}_{\mathsf{k}}$. Taking $\mathsf{k}(\cdot)$ to be the Gaussian kernel, we may write \eqref{eqn:ham_kernel} as
\begin{equation*}
\mathsf{K}(x, y) = -J\mathbb{E}[ww^\mathsf{T}\cos(w^\mathsf{T} x + b)\cos(w^\mathsf{T} y + b)]J^\mathsf{T}
\end{equation*}
with the expectation taken over $w \sim \mathsf{N}(0, \sigma_w^2 I)$ and $b\sim\mathsf{Unif}(0, 2\pi)$. Let $\Psi:\ensuremath{\mathbb{R}}^{2md} \rightarrow \ensuremath{\mathbb{R}}^K$ denote a vector of random features. We may take each component $\Psi_i(\hat{x}) = \cos(w_i^\mathsf{T} \hat{x} + b_i)$ with the $(w_i, b_i)$ i.i.d. samples and write, for $\gamma > 0$ a learning rate,
\begin{align*}
\widehat{H}(\hat{x}, t) &= \Psi(\hat{x})^\mathsf{T} \hat{\alpha}(t),\\
%
%
\dot{\hat{\alpha}}(t) &= -\gamma \left(\left[\nabla_{\hat{p}}\Psi(\hat{x})\right]^\mathsf{T} \tilde{q}(t) - \left[\nabla_{\hat{q}}\Psi(\hat{x})\right]^\mathsf{T}\tilde{p}(t)\right).
\end{align*}
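A sketch of the resulting estimator in Python follows; the feature map, its Jacobian, and the adaptation law transcribe the equations above, while all sizes, gains, and sampling parameters are illustrative toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, K_feat, sigma_w, gamma = 6, 50, 1.0, 1.0   # toy state dimension 2md = 6, K = 50
W = rng.normal(0.0, sigma_w, size=(K_feat, dim))
b = rng.uniform(0.0, 2.0 * np.pi, size=K_feat)

def features(xhat):
    # Psi_i(xhat) = cos(w_i^T xhat + b_i)
    return np.cos(W @ xhat + b)

def feature_jacobian(xhat):
    # row i holds the gradient dPsi_i/dxhat = -sin(w_i^T xhat + b_i) w_i^T
    return -np.sin(W @ xhat + b)[:, None] * W

def alpha_dot(xhat, q_err, p_err):
    # alpha_dot = -gamma ([grad_p Psi]^T q_err - [grad_q Psi]^T p_err),
    # with xhat = (qhat, phat) split in half; the (K x n) Jacobian blocks
    # below already play the role of the transposed gradients
    n = len(xhat) // 2
    J = feature_jacobian(xhat)
    grad_q, grad_p = J[:, :n], J[:, n:]
    return -gamma * (grad_p @ q_err - grad_q @ p_err)
```

The Hamiltonian estimate is then $\widehat{H}(\hat x, t) = \Psi(\hat x)^\mathsf{T}\hat\alpha(t)$ with $\hat\alpha$ integrated forward by Euler steps of \texttt{alpha\_dot}.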
\paragraph{Results}
We consider the sixty-dimensional ten body problem ($m=10$) in three dimensions ($d=3$).
With $K=2500$ features, the prediction and interpolation errors for the positions $\hat{q}(t) - q(t)$, momenta $\hat{p}(t) - p(t)$, and corresponding dynamics $\nabla_{\hat{p}}\widehat{H}$ and $-\nabla_{\hat{q}}\widehat{H}$ are driven to a small ball around zero (Figure~\ref{fig:prediction}A/B).
In the early stages of learning ($t \lesssim 5$), the trajectory prediction oscillates around the target trajectory. As learning proceeds, the prediction becomes smoother and accurately tracks the true system trajectory (Figure~\ref{fig:prediction}C).
As the number of random features $K$ increases, the sizes of the asymptotic balls in both the prediction and interpolation errors decrease as a power law in $K$ (Figure~\ref{fig:prediction}D).
\paragraph{Power law exponents} Let $\xi$ denote the exponent $\limsup_{t\rightarrow\infty} \norm{\hat{x}(t) - x(t)}_2 \sim K^{-\xi}$ in the power law for the prediction error, and let $\zeta$ denote an analogous quantity for the interpolation error.
Nonlinear least-squares fits lead to estimates ($\pm$ denotes 95\% confidence intervals) $\xi \approx 1.28 \pm 0.03$ and $\zeta \approx 0.77 \pm 0.03$.
For the adaptive predictor considered in this section, a Lyapunov function for the nominal error dynamics is the quadratic $V(t) = \norm{e(t)}_2^2$.
Moreover, due to the feedback term $k\cdot\left(x(t) - \hat{x}(t)\right)$, the nominal dynamics is exponentially stable with rate $k$, and we may take $\rho(\norm{e}_2) \propto \norm{e}_2^2$.
This setting was considered in Example~\ref{ex:rf_adaptive} and leads to the analytical predictions $\xi= 1/2$ and $\zeta = 1/4$, where $\zeta = \xi/2$ follows after an application of Theorem~\ref{thm:ac_approx_interp}.
The rates we obtain empirically are
faster than the $\mathcal{O}\left(1/\sqrt{K}\right)$ Monte-Carlo rate for random feature approximations
predicted by our theory.
One plausible explanation for this observation is that more features are required to see the $\mathcal{O}\left(1/\sqrt{K}\right)$ tail behavior, as suggested by the flattening of the curve observed near $K \approx 20,000$.
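The exponent estimates above come from nonlinear least squares; the sketch below (Python with SciPy) illustrates the fitting procedure on synthetic noiseless data rather than the experimental curves:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(K, c, xi):
    # err(K) = c * K^(-xi)
    return c * K ** (-xi)

Ks = np.array([100.0, 250.0, 500.0, 1000.0, 2500.0, 5000.0, 10000.0])
true_c, true_xi = 3.0, 1.28
errs = power_law(Ks, true_c, true_xi)

popt, pcov = curve_fit(power_law, Ks, errs, p0=[1.0, 1.0])
ci = 1.96 * np.sqrt(np.diag(pcov))   # approximate 95% confidence half-widths
```

On real, noisy data the reported $\pm$ intervals follow from the diagonal of \texttt{pcov} in the same way.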
\section{Introduction} Monolayers and
two-dimensional (2D) materials with spin-orbit interaction offer
promise for observing many novel physical effects~\cite{geim2013van,
novoselov-science-353-2016, manchon-NatMat-14-871-2015}. In
particular, it has been proposed that topological insulators or
semiconductors with Rashba interactions coupled with superconductors
may host Majorana fermions, which are potential building blocks for
topological quantum computers~\cite{sau-PRL-104-040502-2010,
fu-PRL-100-096407-2008}.
In addition to 2D materials that exist in the hexagonal phase, such as
graphene and the transition metal dichalcogenides (TMDCs), 2D
materials with square lattices have been successfully
fabricated~\cite{chang-science-353-274-2016,
moayed-NatComm-8-15721-2017}. Recently, the Rashba effect has been
observed in thin layers (6--20 nm) of lead sulfide
(PbS)~\cite{moayed-NatComm-8-15721-2017}, where an external electric
field is used to break the inversion symmetry. However, the
spin-splitting is not large. In our previous work based on density
functional theory (DFT) calculations, we found that lead chalcogenide
monolayers PbX (X=S, Se, Te) have large Rashba coupling
$\lambda\sim1$~eV\AA~in their non-centrosymmetric buckled
phase~\cite{hanakata-PRB-96-161401-2017}. In addition, the spin
texture can be switched in a non-volatile way by applying an electric
field or mechanical strain, which puts these materials into the family
of ferroelectric Rashba semiconductors
(FERSCs)~\cite{sante-AdvMat-25-1521-2013,
sante-PRB-91-161401-2015}. This spin-switching mechanism has
recently been observed experimentally in thin films GeTe where the
surface is engineered to have either an inward or outward electric
polarization~\cite{rinaldi-NL-18-2751-2018}.
In reality, monolayers experience strains due to substrates, defects,
and so on, where local strains may change the electronic properties of
monolayers. Important examples of such effects are pseudo-Landau
levels in graphene blisters~\cite{levy-Science-329-544-2010} and band
gap shifts in biaxially strained
MoS$_2$~\cite{castellanos-NL-13-5361-2013}. Recently, spatial
variations of Rashba coupling due to variations in local electrostatic
potentials were reported in InSb~\cite{bindel-NatPhys-12-920-2016}. To
date, most theoretical studies of lead chalcogenide monolayers have
been based solely on DFT calculations~\cite{liu-NanoLett-15-2657-2015,
wan-AdvMat-29-1521-2017}. However, because DFT is limited to the
simulation of small systems, typically several nanometers, it is
difficult to model inhomogeneous strains over large spatial areas
using DFT.
In this paper, based on our previous tight-binding (TB)
model~\cite{hanakata-PRB-96-161401-2017, rodin-PRB-96-115450-2017}, we
develop a continuum model to predict strain-induced changes in the
spin and electronic properties of buckled PbX monolayers. We have also
performed DFT calculations to validate our TB predictions. Due to the
buckled structure of PbX, the angular dependence becomes important as
the relative angle between hybrid orbitals of the top and bottom layer
can change substantially~\cite{hanakata-PRB-96-161401-2017}. We note
that some studies on (non-buckled) SnTe and PbX (X=S, Se, Te)
rock-salt type materials have incorporated strain effects in the TB,
but did not include the changes in hopping parameters due to angle
changes~\cite{tang-NatPhys-10-964-2014, barone-physica-7-1102-2013}.
In contrast, our TB formulation incorporates the effects due to
changes in (i) bond distance and (ii) angle between nearest neighbors
as well as (iii) lattice vector deformation.
In the low-energy Hamiltonian, the biaxial (or uniaxial) strains can
be described as gauge fields, which are equivalent to, by minimal
coupling, the application of in-plane magnetic fields. The
out-of-plane strain is directly related to the out-of-plane
polarization and this also modifies the Rashba parameter. Within this
framework we are able to quantify the Rashba fields in terms of the
strain fields.
\begin{figure*}
\includegraphics[width=16cm]{figure1a_1g.pdf}
\caption{(a) Schematic top and side views of a buckled $AB$
monolayer. (b) Undeformed and deformed Brillouin zone as the monolayer
is stretched in the $x$ and $y$ direction. (c) Representative band
structures of strained PbS along symmetry points
$X$-$\Gamma$-$Y$-$M$-$X$ and (d) close to $M$. (e) Relative change
in the Rashba parameters obtained from DFT calculations as a function of
strain $\epsilon$ for PbS, PbSe, and PbTe. Energy spin-splitting of
PbS for isotropic strains of (f) $\epsilon=0.00$ and (g)
$\epsilon=0.10$. It can be seen that the $M$ points are originally
located at $|k_{x,y}|=\pi/a_0$ and shifted closer to the center
under a strain of $\epsilon=0.10$.}
\label{fig:fig1}
\end{figure*}
\section{Tight-binding} Lead chalcogenide PbX (X=S, Se, Te) consists of
two atoms per unit cell, denoted by $A$ and $B$ atoms,
respectively. Lead is a heavy atom (Z(Pb)=82), and it is crucial for
creating large spin-orbit interaction (SOI). The schematic top and
side views of a buckled $AB$ lattice are shown in
Fig.~\ref{fig:fig1}(a). $\pmb{a}$ is the unit lattice vector and
$\pmb{\delta}_j$ is the vector connecting atom $i$ and its $j$th
neighbor. We denote the relaxed bond length between neighboring
$A$ and $B$ atoms by $d$; the vector connecting the $A$ and $B$ atoms in
the $(0, 0)$ unit cell is $\pmb{\delta}_1=d (\alpha,\alpha,-\gamma)$,
where $\alpha=\frac{\cos\theta}{\sqrt 2}$, $\gamma=\sin\theta$, and
$\theta$ is the buckling angle (with $\theta=0$ corresponding to a
flat lattice).
The bands near the Fermi level are mostly composed of $s$ and $p$
orbitals from both $A$ and $B$
atoms~\cite{hanakata-PRB-96-161401-2017}. The bands near the symmetry
points can be described within the TB framework including first
nearest neighbors and SOI. The full derivation of the TB model can be
found in our previous works~\cite{hanakata-PRB-96-161401-2017,
rodin-PRB-96-115450-2017}, and thus we will only outline the
important parts; a more detailed derivation can be found in
Appendix~\ref{sec:tb}.
For the two atom $AB$ unit cell shown in Fig.~\ref{fig:fig1}(a),
the relevant orbital basis involves
$\{s^A, p_x^A, p_y^A, p_z^A, s^B, p_x^B, p_y^B, p_z^B\}$.
To write down the hopping matrix, we use the Slater-Koster
matrix elements for the orbitals of neighboring
atoms~\cite{slater-PR-94-1498-1954}. As we include the SOI,
$H_\mathrm{SOI} = T_\mathcal{X}\left(\frac{L_+\otimes s_-+L_-\otimes
s_+}{2}+L_z\otimes s_z\right)$
(where $\mathcal{X}=A, B$), we will write our Hamiltonian in the angular momentum
basis. The dimension of the total Hilbert space is $16\times16$ with
new basis of
$|\mu\rangle\rightarrow|m\rangle|m_{\rm orb}\rangle|s\rangle$, where
$m=\{|A\rangle, |B\rangle\}$ is the sublattice degree of freedom,
$m_{\rm orb}=\{|0, 0\rangle, |1, 1\rangle, |1, -1\rangle, |1,
0\rangle\}$
is the orbital angular momentum degree of freedom, and $s=\{|+\rangle, |-\rangle\}$ is
the spin degree of freedom.
We found a Rashba-like dispersion near the $\Gamma$ and $M$ points
when the two sublattices are not
equivalent~\cite{hanakata-PRB-96-161401-2017,
rodin-PRB-96-115450-2017}. In this paper, we develop a continuum
strain model describing changes in the Rashba dispersion near the $M$
point, and thus the Hamiltonian is expanded around the $M$ point
${\bf k}=(\pi/a, \pi/a)$. Exactly at $M$ [$q=0$], the Hamiltonian
decomposes into several uncoupled blocks and the wave function of the
conduction band is given by
$|\Psi^\pm\rangle_{mn}
=c_0|m\rangle\otimes|1,\pm1\rangle\otimes|\mp\rangle+c_1|m\rangle\otimes|1,0\rangle\otimes|\pm\rangle\pm
ic_2|n\rangle\otimes|1,\mp1\rangle\otimes|\mp\rangle$,
with $c_0$, $c_1$, and $c_2$ being real
numbers~\cite{hanakata-PRB-96-161401-2017,
rodin-PRB-96-115450-2017}. The Hamiltonian for the valence band can
be obtained by interchanging $m$ and $n$.
Projecting the Hamiltonian onto the conduction band subspace we obtain the effective Rashba-like Hamiltonian
\small
\begin{align}
H_\mathrm{eff}^{mn} &= \lambda\left[\left(\mathbf{q}\times\pmb{\sigma}\right)\cdot\hat z\right]\,,
\label{eq:H_eff_M}
\end{align}
written in the basis $\left(|\Psi^+\rangle_{mn}, |\Psi^-\rangle_{mn}\right)$,
\normalsize where $\mathbf{q}$ is the momentum measured from the $M$ point,
$\pmb{\sigma}=(\sigma_x, \sigma_y, \sigma_z)$,
$\lambda\equiv a\sin2\theta \Delta c_1c_2$ is the Rashba parameter,
and $\Delta=V_{pp\sigma}-V_{pp\pi}$. The coefficients $c_0, c_1, c_2$
can be obtained from the DFT calculations. Since we know the buckling
angle $\theta$ we can evaluate $\Delta$. All of the relevant
(unstrained) parameters are tabulated in Appendix~\ref{sec:dft}.
\section{Strain-induced gauge fields}
\begin{figure*}
\includegraphics[width=12cm]{figure2a_2b.pdf}
\caption{Schematic changes in the Rashba dispersions due to (a)
in-plane strains and (b) out-of-plane strains. The linear Rashba
dispersions at the $M$ for unstrained systems are colored blue.
Under positive in-plane strains, the Rashba points shift closer to
$\Gamma$ and the strength of Rashba parameters decrease (smaller
slope) with increasing strains. On the other hand, under
out-of-plane strain, the strength of Rashba parameters increases
with increasing uniaxial out-of-plane strain while the Rashba points
do not shift. }
\label{fig:fig2}
\end{figure*}
Since the SOI is independent of lattice distortions, in this
derivation we will focus on the spinless Hamiltonian and then
reintroduce the spin terms. We restrict attention to the conduction band,
as the changes in the valence band should be similar.
Under deformation a vector connecting two points in a unit cell $i$
can be approximated as
$\pmb{r}'_j-\pmb{r}'_i\simeq \pmb{\delta}_j +\pmb{\delta}_j\cdot\nabla
\pmb{u}(\pmb{r}_i)$,
where $\pmb{u}=(u_x, u_y, u_z)$ is the displacement vector, and
$\nabla \pmb{u}=\tilde{\pmb{\epsilon}} + \tilde{\pmb{\omega}}$. In
this work we focus on deformation that does not involve local rotation
$\tilde{\pmb{\omega}}=0$. Similarly, between two lattice vectors
$\pmb{R}'_j-\pmb{R}'_i\simeq \pmb{a}^i +\pmb {a}^i\cdot\nabla
\pmb{u}(\pmb{R}_i)$.
Alterations in bond distance will result in changes in the hopping
energies. Since studies of lead chalcogenides under strain are very
limited, we follow the Wills-Harrison
argument~\cite{harrison2004elementary} and assume that the hopping
energy $t\propto r^{-\beta_{\mu\nu}}$. Similar considerations also
have been used for strained TMDCs~\cite{cazalilla-PRL-113-077201-2014,
rostami-PRB-92-195402-2015, fang2017electronic} and
phosphorene~\cite{jiang-PRB-91-235118-2015,
sisakht-PRB-94-085417-2016}. Note that the hopping matrix derived
from Slater-Koster has angular dependence and these relative angles
should change due to strain. Assuming the hopping matrix depends on
bond distance only, the modified hopping parameter, in terms of the
strain tensor $\tilde{\pmb{\epsilon}}$, is
$t'_{ij, \mu\nu}(\delta_{ij}) \simeq t_{ij, \mu\nu}(1- \beta_{\mu\nu}
\frac{1}{d^2}\pmb{\delta}_j\cdot\tilde{\pmb{\epsilon}}\cdot{\pmb{\delta}}_j)$~\cite{cazalilla-PRL-113-077201-2014,
rostami-PRB-92-195402-2015}. This approximation
is also the case for graphene, where the hopping modulation is
approximated as
$t'(\pmb{\delta}_{ij}) = te^{-\beta (|\pmb{\delta}_{ij}|/d-1)}$. In
particular, this approximation works well for {\it flat} graphene
under strain because the angle between $p_z$ orbitals does not
change. The angular dependence becomes more important when
deformations, such as nanobubbles and kirigami patterns, create large
curvature (bending)~\cite{qi-PRB-90-125419-2014,
qi-PRB-90-245437-2014}. In buckled lead chalcogenides, however, the
relevant hopping terms for the Rashba dispersion depend on the
buckling angle even in the simple case of biaxial
strains~\cite{hanakata-PRB-96-161401-2017}. Thus we will include this
angular dependence, and we will show that this is important to capture
the changes in Rashba coupling with uniaxial strain.
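The first-order formula can be checked numerically. The sketch below (Python) compares the exact $r^{-\beta}$ bond-length scaling with the linearized hopping for a small biaxial strain; the values of $\beta$, the bond vector, and the strain are illustrative:

```python
import numpy as np

beta = 3.0                                        # Harrison-type scaling exponent (assumed)
d = 1.0
delta = d * np.array([0.5, 0.5, -np.sqrt(0.5)])   # example bond vector with |delta| = d
eps = np.diag([0.01, 0.01, 0.0])                  # small biaxial in-plane strain

t0 = 1.0
delta_strained = delta + eps @ delta              # delta' = (1 + eps) . delta
t_exact = t0 * (d / np.linalg.norm(delta_strained)) ** beta
t_linear = t0 * (1.0 - beta * (delta @ eps @ delta) / d ** 2)
```

The two agree to first order in the strain; for tensile in-plane strain both predict a weakened hopping, consistent with the reduced Rashba parameter in Fig.~\ref{fig:fig1}(e).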
Let the unstrained vector connecting an atom $A$ and its neighbor be
defined as $\pmb{\delta}_j=(x, y, z)$ and the equilibrium distance
$r=d$. Here we show the derivation for $t_{p_xp_z}$, while the others
can be found by following the same procedure. We assume
$\Delta(r')=\Delta_0\left(\frac{r}{r'}\right)^\beta$ and we expect
$\beta\approx 3$~\cite{harrison2004elementary}. In Cartesian
coordinates the strained hopping is given by
$t_{p_xp_z}(x',y',z')=\frac{x'z'}{r'^2}\Delta_0\left(\frac{r}{r'}\right)^\beta$,
and by Taylor expansion we obtain, \small
\begin{widetext}
\begin{equation}
\delta t_{ij, p_xp_z}(x', y', z') \simeq-t_{ij, p_xp_z}(x, y, z) \Big(\Big[(2+\beta)-(r/x)^2\Big]\frac{1}{r^2}{\bf x}\cdot({\bf x}'-{\bf x})+\Big[2+\beta\Big]\frac{1}{r^2}{\bf y}\cdot({\bf y}'-{\bf y})+\Big[(2+\beta)-(r/z)^2\Big]\frac{1}{r^2}{\bf z}\cdot({\bf z}'-{\bf z})\Big).
\end{equation}
\end{widetext}
\normalsize Within the strain approximation
${\bf x}'-{\bf
x}=\hat{x}\cdot\tilde{\pmb{\epsilon}}\cdot{\pmb{\delta}}_j$.
If we alter only the bond distance while keeping the angle constant,
we will get the same expression as above when angular effects are
assumed to be negligible.
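The expansion above can be checked numerically: build the first-order variation from a finite-difference gradient of $t_{p_xp_z}=\frac{xz}{r^2}\Delta_0(d/r)^\beta$ and compare it with the exact strained hopping. The bond orientation and strain values below are arbitrary illustrative choices.

```python
import numpy as np

def t_pxpz(v, Delta0=1.0, beta=3.0, d=1.0):
    # Slater-Koster-like hopping t = (x*z/r^2) * Delta0 * (d/r)^beta
    x, y, z = v
    r = np.linalg.norm(v)
    return (x * z / r**2) * Delta0 * (d / r)**beta

# unstrained bond with a buckling component, normalized so |delta| = d = 1
delta = np.array([0.6, 0.6, np.sqrt(1.0 - 2 * 0.6**2)])
eps = np.diag([0.01, 0.005, -0.002])     # small diagonal strain tensor
delta_p = (np.eye(3) + eps) @ delta      # strained bond vector

dt_exact = t_pxpz(delta_p) - t_pxpz(delta)

# first-order prediction grad(t) . (delta' - delta), central differences
h = 1e-6
grad = np.array([(t_pxpz(delta + h * e) - t_pxpz(delta - h * e)) / (2 * h)
                 for e in np.eye(3)])
dt_lin = grad @ (delta_p - delta)
# dt_exact and dt_lin agree to O(eps^2)
```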
The interlattice-spinless Hamiltonian in reciprocal space can be written as
\small
\begin{widetext}
\begin{align}
H^{\rm int}_{\rm orb}({\bf k})=&\sum_{\mu, \nu}\sum_{\langle ij \rangle}(t_{ij, \mu\nu} +\delta t_{ij, \mu\nu})e^{i{\bf k}\cdot\pmb{\Delta}_j(1+\tilde{\pmb{\epsilon}})}c^{\dagger}_{i, {\bf k}, \mu}c_{j, {\bf k}, \nu}+h.c.\nonumber\\
=&\underbrace{\sum_{\mu, \nu}\sum_{\langle i, j\rangle}t_{ij, \mu\nu}e^{i{\bf k}\cdot\pmb{\Delta}_j}c^{\dagger}_{i, {\bf k}, \mu}c_{j, {\bf k}, \nu}}_{H_0}+\underbrace{\sum_{\mu, \nu}\sum_{\langle i, j\rangle} it_{ij, \mu\nu}{\bf k}\cdot\tilde{\pmb{\epsilon}}\cdot\pmb{\Delta}_j e^{i{\bf k}\cdot\pmb{\Delta}_j}c^{\dagger}_{i, {\bf k}, \mu}c_{j, {\bf k}, \nu}}_{H^{(1)}}+\underbrace{\sum_{\mu, \nu}\sum_{\langle i, j\rangle}\delta t_{ij, \mu\nu}e^{i{\bf k}\cdot\pmb{\Delta}_j}c^{\dagger}_{i, {\bf k},\mu}c_{j, {\bf k},\nu}}_{H^{(2)}}+\mathcal{O}(\epsilon^2),
\end{align}
\end{widetext}
\normalsize where $\langle ij\rangle$ is the sum over nearest neighbor
pairs and $\pmb{\Delta}_j={\bf R}_j-{\bf R}_i$. The first term $H_0$ is
the unstrained Hamiltonian, $H^{(1)}$ is the correction due to lattice
deformation, and $H^{(2)}$ is the correction from the altered hopping
parameter due to changes in both the interatomic distance and angle between
orbitals.
\section{Homogenous isotropic strains} We start with a simple deformation with no shear \small
$\tilde{\pmb{\epsilon}}=\begin{pmatrix}
\epsilon_{xx} & 0 & 0 \\
0 & \epsilon_{yy} & 0\\
0 & 0 & \epsilon_{zz}
\end{pmatrix}.$
We will focus on the matrix elements that are relevant to the
conduction band, such as $|A\rangle|1, 0\rangle$ and
$|B\rangle|1,1\rangle$. In the {\it angular momentum} basis, the
correction from $H^{(1)}$ and $H^{(2)}$ at $M$ is given by \small
\begin{widetext}
\begin{align}
_A\langle 1, 0|H^{(1)}|1, 1\rangle_B=&a_0\sqrt{2}\alpha_0\Delta_0\gamma_0\Big[\epsilon_{xx}\pi/a_0+q_x\epsilon_{xx}-i\epsilon_{yy}\pi/a_0-iq_y\epsilon_{yy}\Big]\nonumber\\
_A\langle 1, 0|H^{(2)}|1, 1\rangle_B=& -a_0\sqrt{2} \alpha_0 \gamma_0 \Delta_0 \alpha_0^2(2+\beta)\Big[ (\epsilon_{xx} + f_1 \epsilon_{yy} + f_2\epsilon_{zz})q_x- (f_1\epsilon_{xx} + \epsilon_{yy} + f_2 \epsilon_{zz})iq_y\Big]\,
\label{eq:delta_H}
\end{align}
\end{widetext}
\normalsize where
$\epsilon_{ij}=\frac{1}{2}\Big(\frac{\partial u_i}{\partial
x_j}+\frac{\partial u_j}{\partial x_i} +\frac{\partial u_l}{\partial
x_i} \frac{\partial u_l}{\partial x_j}\Big)$,
$f_1=1-\frac{1}{\alpha_0^2(2+\beta)}$ and
$f_2=\frac{\gamma_0^2}{\alpha_0^2}-\frac{1}{\alpha_0^2(2+\beta)}$. Note
that $a_0, \alpha_0, \beta_0, \gamma_0, \Delta_0$ are the {\it
unstrained} geometrical and hopping parameters. $H^{(1)}$ is
independent of the $z$-direction strains (e.g., $\epsilon_{xz}$) because
the lattice vector ${\bf R}$ and the wave vector ${\bf k}$ are two-dimensional.
Because of the symmetry of $M$, we found that the first correction at
$M$ due to bond alterations is first order in $\epsilon$ {\it and}
momentum $q$. In graphene, the first correction from hopping
modulation that is linear in $\epsilon$ (but not proportional to $q$)
is not zero~\cite{guinea-NatPhys-6-30-2010, juan-PRB-88-155405-2013,
masir-ssc-175-76-2013}. We have to include the contributions of
$H^{(1)}$ up to first order in $q$ as well because in $H^{(2)}$
($\beta$-dependent term) we keep terms up to first order in $q$ and
$\epsilon$.
To obtain $\beta$ we will consider an isotropic strain
$\epsilon\cdot 1_{3\times3}$. Notice that the change in the low-energy
Hamiltonian of Eq.~\ref{eq:H_eff_M} due to $H^{(1)}$ and $H^{(2)}$ at
$M$ can be written in terms of gauge potentials, \small
\begin{align}
H_\mathrm{eff} &=-i\lambda_0\begin{pmatrix}
0&(q_x-iq_y) + \pmb{A}_1 + \pmb{A}_2\\
(q_x+iq_y) + \pmb{A}_1^* + \pmb{A}_2^* &0
\end{pmatrix}\,.
\label{eq:Heffmod}
\end{align}
\normalsize
where
\small
$\pmb{A}_1= \begin{pmatrix} \epsilon\pi/a_0 + \epsilon\,q_x\\
-i\epsilon\pi/a_0 -i\epsilon\,q_y
\end{pmatrix}\quad {\rm and}\quad$
$\pmb{A}_2 = -\beta\begin{pmatrix}\epsilon\,q_x\\
-i\epsilon\,q_y
\end{pmatrix}\,$
\normalsize Here we have used $2\alpha_0^2+\gamma_0^2=1$ to simplify
$\pmb{A}_1$ and $\pmb{A}_2$; $\lambda_0$ is the unstrained Rashba
parameter.
$\pmb{A}_2$ and the second term of $\pmb{A}_1$ are {\it proportional}
to $q$. This modifies the strength of the Rashba parameter,
$\frac{\lambda}{\lambda_{0}}-1\simeq(1-\beta)\epsilon$. This
alteration in the Rashba term is similar to the modification of Fermi
velocity in graphene~\cite{juan-PRL-108-227205-2012,
juan-PRB-88-155405-2013, masir-ssc-175-76-2013}.
We next present our DFT results to validate our TB
predictions. Details of DFT calculations and the unstrained
geometrical parameters of buckled PbS, PbSe, and PbTe can be found in
Appendix~\ref{sec:dft}. Strains are applied to the relaxed buckled phase. In order to
find the effects that come from changes in bond distance only, we
deformed the monolayer in the DFT simulations by changing the bond
distance while keeping the angle constant. The lattice vectors and
atomic positions are not relaxed under this deformation. The Rashba
parameters $\lambda$ are obtained by taking the derivative of the
energy dispersion in the vicinity of the $M$ point, $|q|<0.1\pi/a$. Under
isotropic deformations, we found that $\lambda$ at $M$ decreases with
increasing strain (weakening of the hopping interaction), as expected
from Eq.~\ref{eq:Heffmod}, shown in fig.~\ref{fig:fig1}(c)-(e). A direct
comparison between DFT results and TB with strain-included allows us
to extract $\beta$. By fitting DFT data points to a straight line, we
obtained $\beta=3.25, 3.20, 2.97$ for PbS, PbSe, and PbTe,
respectively (fig.~\ref{fig:fig1}(e)). We see that the value of
$\beta$ would be different if the lattice deformation correction was
not included.
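The extraction of $\beta$ can be mimicked with synthetic data: since $\lambda/\lambda_0 - 1 \simeq (1-\beta)\epsilon$, a straight-line fit of the strain series gives $\beta = 1 - \mathrm{slope}$. The points below are generated from the TB relation itself (with an assumed, PbSe-like $\beta$), not actual DFT values.

```python
import numpy as np

beta_true = 3.20                            # assumed, PbSe-like exponent
eps = np.linspace(-0.02, 0.02, 9)           # isotropic strain series
lam_ratio = 1.0 + (1.0 - beta_true) * eps   # lambda/lambda_0 from the TB model

slope, intercept = np.polyfit(eps, lam_ratio - 1.0, 1)
beta_fit = 1.0 - slope                      # recovers beta_true
```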
As we stretch the lattice, the Brillouin zone (BZ) will shrink, and the corner of the
BZ ($M$ point) will shift as
$(\frac{\pi}{a_0},
\frac{\pi}{a_0})\rightarrow(\frac{\pi}{a_0(1+\epsilon)},
\frac{\pi}{a_0(1+\epsilon)})\simeq(\frac{\pi}{a_0}(1-\epsilon),
\frac{\pi}{a_0}(1-\epsilon))$,
where $a_0$ is the undeformed lattice constant. For positive strains,
the $M$ point shifts towards the $\Gamma$ point (relative to the
undeformed BZ), shown in fig.~\ref{fig:fig1}(b). In our modified TB
model, the $M$ point is displaced due to the first term of the lattice
deformation correction $\pmb{A}_1$ (see Eq.~\ref{eq:Heffmod}). The
momentum shifts due to lattice deformations are also found in
graphene~\cite{kitt-PRB-85-115432-2012}. The changes in Rashba
dispersion and its locations due to strains are illustrated in
fig.~\ref{fig:fig2}.
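The shift of the $M$ point is a purely geometric consequence of rescaling the BZ; a one-line estimate (in units of $\pi/a_0$) reproduces the $\simeq 0.9\,\pi/a_0$ position found below for $\epsilon=0.10$:

```python
def m_point_shift(eps):
    # corner of the strained BZ in units of pi/a0:
    # exact rescaling pi/(a0*(1+eps)) and its first-order form (1 - eps)
    exact = 1.0 / (1.0 + eps)
    linear = 1.0 - eps
    return exact, linear

exact, linear = m_point_shift(0.10)  # ~0.909 and 0.900: M moves toward Gamma
```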
To show the momentum shifts relative to the undeformed (reference)
state, we plot the energy spin-splitting at the conduction band of PbS
obtained from the DFT results as a function of $k_x, k_y$, shown in
fig.~\ref{fig:fig1}(f) and (g). Note that momenta are in units of
$\pi/a_0$. Originally the $M$ points are located at
$|k_{x, y}|=\pi/a_0$ and are shifted closer to $\Gamma$
($|k'_{x, y}|\approx0.9\pi/a_0$) when an isotropic strain of
$\epsilon=0.10$ is applied. The momentum shift is linear with strains
${\bf k}\cdot\tilde{\pmb{\epsilon}}$, consistent with several previous
works~\cite{kitt-PRB-85-115432-2012, masir-ssc-175-76-2013}. This
Rashba-point shift due to strains is equivalent to applying
in-plane-magnetic fields $\mathbf{B}_{\rm ex}$ to the system,
\begin{equation}
H =\lambda_0\left[\left(\mathbf{q}-\frac{e\mathbf{A}_{\rm ex}}{c}\right)\times\pmb{\sigma}\right]\cdot \hat{z}+m_\perp\sigma_zB_\perp+m_\parallel\mathbf{B}_\parallel\cdot\sigma_\parallel\,
\label{eq:magnetic}
\end{equation}
where $m_\perp=-\mu_B(c_1^2-2c_2^2)$,
$m_\parallel=-\mu_Bc_1(\frac{c_0}{\sqrt{2}}+c_1+c_0)$, and $\mu_B$ is
the Bohr magneton. For completeness the derivation of
Eq.~\ref{eq:magnetic} is included in
Appendix~\ref{sec:MagneticField}. As an illustration, we can choose an
external field of $\mathbf{A}_{\rm ex}=(0, 0, B_xy-B_yx)$, upon which
the in-plane magnetic field is given by
$\mathbf{B}_{\rm ex}=\nabla\times \mathbf{A}_{\rm ex}=(B_x, B_y, 0)$.
Since the Bohr magneton is small, reproducing the effect of a
2\% strain with magnetic fields would require an external field of
approximate strength
$|B_{\rm
ex}|\sim\frac{\sqrt{2}\,0.02\,\pi\lambda_0}{a_0\,m_\parallel}\approx
600$ Tesla (by Eqs.~\ref{eq:Heffmod} and \ref{eq:magnetic}).
\begin{figure}
\includegraphics[width=8cm]{figure3a_3c.pdf}
\caption{(a) Out-of-plane polarization $\Delta\vec{\mathcal{P}}_z$ as
a function of out-of-plane strain $\epsilon_{zz}$. (b) Linear
relationship between $\lambda$ and $\epsilon_{zz}$ which is
consistent with TB predictions. (c) Rashba parameter $\lambda$ as a
function of $\Delta\vec{\mathcal{P}}_z$. All data points are obtained
from the DFT calculations.}
\label{fig:fig3}
\end{figure}
\section{Electric polarization and Rashba field}
Proposals have been made to change the spin texture (i.e. sign of
$\lambda$) by changing the electric
polarization~\cite{sante-AdvMat-25-1521-2013,
liebmann-AdvMat-28-560-2016, leppert-jpcl-7-3683-2016,
qihang-NL-13-5264-2013}. Rinaldi {\it et al.} found that the
spin-texture in FERSC GeTe films indeed depends on the locations of
the atoms on the surface, which dictate the direction of the electric
polarization~\cite{rinaldi-NL-18-2751-2018}. In DFT simulations of
SnTe thin films, which have a structure similar to PbX, it has also
been shown that near the vacuum one of the atomic species buckles
outward while the other species buckles
inward~\cite{qian-NR-8-967-2015}. While the proportionality between
the Rashba parameter and the spontaneous electric polarization is well known,
it will be useful to understand this mechanism in PbX from a
microscopic view, where the changes in Rashba parameters can be
understood in terms of interactions between atoms and the external
applied strains. We will show that our strain-dependent TB model
captures how the out-of-plane strain, which is proportional to the
out-of-plane polarization, modifies the Rashba fields.
By the modern theory of polarization, the electric polarization is
given by\cite{vanderbilt-PRB-47-1651-1993}
$\vec{\mathcal{P}}=\frac{1}{V}\sum_{\tau}q^{\rm{ion}}_{\tau}{\bf
R}_{\tau}-\frac{2i\rm{e}}{(2\pi)^3}\sum_{n}^{\rm{occ}}\int_{BZ}d^3{\bf
k}e^{-i\vec{k}\cdot{\bf R}}\Big\langle \Psi_{n{\bf
k}}\Big|\frac{\partial \Psi_{n{\bf k}}}{\partial {\bf
k}}\Big\rangle$,
where $q^{\rm{ion}}_{\tau}$ is the charge of the ion plus that of its core electrons,
${\bf R}_{\tau}$ is the position of ions, $V$ is the unit cell volume,
$\rm{e}$ is the elementary charge, $n$ is the valence band index,
${\bf k}$ is the wave vector, and $\Psi_{n{\bf k}}$ is the electronic
wave function. The first term is the contribution from core electrons
and ions, and the second term is the electronic contribution defined
as the adiabatic flow of current, which can be calculated from the
Berry phase (BP)~\cite{vanderbilt-PRB-47-1651-1993}. The spontaneous
polarization is calculated by taking the difference between the
polarization of the polar (buckled) state and the non-polar
(reference) state,
$\Delta \vec{\mathcal{P}}=\vec{\mathcal{P}}_{\rm
polar}-\vec{\mathcal{P}}_{\rm non-polar}$.
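As a toy illustration of the ionic term in the polarization formula, consider two opposite effective charges $\pm q$ (one per sublattice) separated by a buckling height $\delta z$ in a slab of volume $V$: the resulting $\mathcal{P}_z$ is strictly linear in $\delta z$, the same linearity in the out-of-plane distortion that the full calculation (ionic plus Berry-phase terms) exhibits below. The charge and volume are placeholders, not PbX values.

```python
def ionic_Pz(q, dz, V):
    # toy ionic polarization P_z = (1/V) * sum_tau q_tau * z_tau
    # for charges +q at z = +dz/2 and -q at z = -dz/2
    return q * dz / V

q, V = 2.0, 1.0                                  # placeholder values
P = [ionic_Pz(q, dz, V) for dz in (0.02, 0.04, 0.06)]
# P doubles and triples with dz: linear in the buckling displacement
```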
We estimate the thickness to be 0.5 nm in order to compare the
polarizations to typical bulk ferroelectrics. Details can be found in
Appendix~\ref{sec:ferro}. In the DFT simulations we distort the ions
in the $z$ direction while keeping the in-plane lattice vectors fixed
at the relaxed buckled values. We report only the spontaneous
polarizations of PbS and PbSe, as PbTe is
metallic~\cite{hanakata-PRB-96-161401-2017}. A modified Berry phase
calculation is needed to evaluate polarization of ferroelectric
metals~\cite{filippetti-NatCom-7-11211-2016}; however this is beyond
the scope of our present study.
From the DFT results we found that both the ionic-plus-core-electron
contribution and the electronic (BP) contribution are proportional to the distance
between Pb and X (X=S, Se) in the $z$ direction (plotted in
Appendix~\ref{sec:ferro}). This gives a proportionality between
$\Delta \vec{\mathcal{P}}_z$ and $\epsilon_{zz}$, as shown in
fig.~\ref{fig:fig3}(a). Compressing the monolayer in the $\hat{z}$ direction
with strain $\epsilon_{zz}<0$ results in a decrease in $\lambda$,
shown in fig.~\ref{fig:fig3}(b). This is {\it opposite} to the case of
isotropic deformation (see fig.~\ref{fig:fig1}(e)). This result is
consistent with TB predictions. In the previous discussion, we found
that increasing bond distance ($\epsilon>0)$ generally weakens the
hopping interaction and thus decreases $\lambda$. Using the relaxed
geometrical parameters (i.e., the buckling angle $\theta_0$) and
Eq.~\ref{eq:delta_H}, $\lambda$ is expected to decrease with
compressive strain in the $\hat{z}$ direction, as $f_2$ is negative. We also want
to note that there is no gauge-field $\pmb{A}_1$ since ${\bf k}$ is
two-dimensional, and thus $M$ is not shifted. The changes in Rashba
dispersion and its locations due to out-of-plane strains are
illustrated in fig.~\ref{fig:fig2}(b). Notice that this effect is not
captured unless the angular dependence is included in the hopping
correction. The inclusion of the angular dependence is particularly
important for the PbX monolayer due to its buckled nature. Overall,
this suggests that the out-of-plane internal electric polarization
acts as an in-plane gauge field in the low-energy
Hamiltonian. Assuming {\it small} strains, we found that
$\lambda\propto |\vec{\mathcal{P}}_z|$. This result is important as it
establishes a direct relationship between the Rashba field and the
out-of-plane polarization which is also proportional to the
out-of-plane strain $\epsilon_{zz}$. Recently, several works have also
studied strain-induced piezoelectricity in boron
nitride~\cite{droth-PRB-94-075404-2016} and
TMDCs~\cite{rostami-npj2d-2-15-2018}. Several experimental works use
out-of-plane magnetic fields (parallel to the polar axis of Rashba
materials) to measure the Rashba parameter as the Landau level
spectrum changes with the strength of the Rashba
parameter~\cite{bindel-NatPhys-12-920-2016,
bordacs-PRL-111-166403-2013}. One could also use this experimental
approach to detect variations in the Rashba parameter in PbX due to
out-of-plane strains.
\section{Conclusions}
We have developed a TB model where the electronic changes in PbX can
be described within continuum mechanics. We found the scaling exponent
that modifies the hopping parameter to be $\beta\simeq3$. In the
low-energy Hamiltonian, the effect of strains can be described as
gauge fields, which are equivalent to, by minimal coupling,
application of an in-plane magnetic field. Our theory describes how
the location of the Rashba point and the strength of the Rashba field
can be engineered by applying strains. The out-of-plane strain in
particular is directly related to the out-of-plane
polarization. Within this framework we are able to understand the
connection between the Rashba and ferroelectricity.
Our strain-dependent TB model should be applicable for calculating the
effects of inhomogeneous strain on the spatially-resolved Rashba
fields over a large region, whereas this calculation would not be
feasible within a reasonable time using a DFT approach. Employing
classical atomistic simulations (e.g. molecular dynamics) together
with strain-dependent TB will be an efficient tool for studying larger
and more realistic systems with strain modulation due to substrates,
indentors~\cite{levy-Science-329-544-2010,
castellanos-NL-13-5361-2013, georgi-NL-17-2240-2017} or geometrical
cuts~\cite{qi-PRB-90-245437-2014, hanakata-Nanoscale-8-458-2016}. This
will open possibilities of using lead chalcogenides for strain and
electric-controlled spintronic devices.
\label{section-Introduction}
Relativistic jets are often launched from the vicinity of accreting black holes. They are observed to
be produced in stellar-mass black-hole binary systems and are believed to be the fundamental part
of the gamma-ray burst phenomenon. Powerful relativistic jets are also ejected by accreting supermassive
black holes in some active galactic nuclei. There is no doubt that the jet-launching mechanism is
related to accretion onto black holes, but there
has been no general agreement as to the ultimate source of energy of these spectacular high energy
phenomena. In principle, relativistic jets can be powered either by the black hole gravitational pull
or by its rotation (spin), with large-scale magnetic fields invoked as energy extractors in both cases.
Black-hole rotational energy
extraction due to weakly magnetized accretion was considered by
\citet{1975PhRvD..12.2959R} (see also
\cite{1978PhRvD..17.1518D}). In the context of strongly magnetized jets,
\citet{Blandford-1977} (hereafter BZ) proposed a model of electromagnetic extraction of black
hole's rotational energy based on the analogy with the classical
Faraday disk (unipolar induction) phenomenon. The difficulty with applying this analogy to a rotating black hole was
a viable identification of the analogue of the Faraday disk in a setup where the surface of the rotating body
(the black hole's surface) is causally disconnected from the rest of the Universe. It seems now that this
problem has been clarified and solved (\cite{Komissarov-2006,Komissarov-2009} and references therein). Another
subject of discussion about the physical meaning of the BZ mechanism was its relation to the
black-hole rotational energy extraction process proposed by \citet{Penrose-1969}, in which an infalling
particle decays into two in the ergoregion, with one of the decay products being absorbed by the
black hole, and the other one reaching infinity, with energy larger than that of the initial, infalling parent particle {(see \cite{WaghD89} for a review)}.
The energy gain in this (``mechanical") Penrose process is explained by the negative (``seen" from infinity)
energy of the ergoregion-trapped particle absorbed by the black hole. In the BZ mechanism, particle inertia can be
neglected; therefore it clearly is not a mechanical Penrose process. \citet{Komissarov-2009}
argues that the BZ mechanism is an example of an {\sl energy counterflow}, a black-hole extraction phenomenon
supposed to be more general than the Penrose process.
In the present article we discuss the relation between any mechanism extracting black-hole rotational energy and the mechanical Penrose process
using a general-relativistic, covariant description of the energy fluxes in the metric of a
stationary and axisymmetric rotating black hole (this framework encompasses the Kerr metric as
the special case of a black hole surrounded by non-self-gravitating matter). In particular, using energy and angular-momentum conservation laws, we prove that {\sl for {\tt any} matter or
field, tapping the black-hole rotational energy is possible if and only if
negative energy and angular momentum are absorbed by the black hole and no torque at the black-hole horizon is necessary (or possible)}. The conditions on energy and angular-momentum fluxes through the horizon
are analogous to those on particle energy and angular momentum in the mechanical Penrose process.
From these conditions, we deduce a necessary condition for a general (passive) electromagnetic field configuration to allow black-hole energy extraction through the Penrose process. In the case of stationary, axisymmetric, and force-free fields we obtain the well-known condition \cite{Blandford-1977} on the angular speed of the field lines.
We also describe the Penrose process in terms of the Noether current. This description is
particularly useful in the description of results of numerical simulations.
Finally, we use our generalized Penrose process framework to interpret the results of recent numerical studies of accretion onto black holes by \cite{Tchekhovskoy-2011,Tchekhovskoy-2012,McKinney-2012}, which indicate that the BZ mechanism can tap the black-hole rotational energy very efficiently
(efficiency $\eta > 100\%$).
These studies are based on
large-scale numerical simulations involving a particular state of
accretion around rotating black holes: ``magnetically arrested disks''
(MADs), considered first in Newtonian gravity \citep[see, e.g.,][]{Igu2003,Narayan-2003} and later in GR
\citep[e.g.,][]{Tchekhovskoy-2011,Tchekhovskoy-2012}. MADs were
also called ``magnetically choked accretion flows'' (MCAFs) in \cite{McKinney-2012}. We show that the resulting configurations satisfy the Penrose-process conditions for black-hole energy extraction.
Our results agree, in most respects, with those obtained
by \citet{Komissarov-2009}.
A difference between the two approaches worth noticing is that we derive our generalized Penrose condition from the
fundamental, and universally accepted, {\it null energy condition}, while Komissarov introduces a new concept of the {\it energy counterflow}.
This difference will be investigated in a future paper.
More than 30 years ago \citet{Carte79}, analyzing the BZ mechanism in a covariant framework, obtained several results similar to ours. Using energy and angular-momentum {\sl rates} (integrated fluxes), while we use energy and angular momentum, he showed the necessity of a negative energy absorption rate at the horizon for this mechanism to operate. Strangely, his paper has almost never been cited in the context of the discussion of the Penrose-BZ process. Our treatment is more general than that of Carter since we use a general energy-momentum tensor, {while Carter considered fields that are time periodic
(cf. Sec.~6.4.2 of Ref.~\cite{Carte79}). Moreover}
we obtain a new condition on a general electromagnetic field configuration
{[Eq.~(\ref{e:nec_cond_EM}) below]} and we apply it to interpret recent numerical simulation of relativistic jet production.
In a recent paper \citep{Penna-13} the MAD simulations have been described in the framework of the so-called ``membrane paradigm" \citep{Thorne-86}.
This picture of the interaction of electromagnetic fields with the black-hole surface has the
advantage of using the analogues of the usual electric and magnetic fields in a 3-D flat space. \citet{Penna-13} showed that the results of MAD simulations can be consistently described in the membrane framework.
\section{The mechanical Penrose process}
\label{Section-Penrose-particles}
\citet{Penrose-1969} considered\footnote{See also \cite{Penrose-1971}} a free-falling
particle that enters the ergosphere of a rotating black
hole with energy $E_1 = - \vw{\eta}\cdot\vw{p}_1$, where $\vw{\eta}$
is the Killing vector associated with stationarity [see also
Eq.~(\ref{e:dodt_eta}) below], $\vw{p}_1$ the particle
4-momentum vector and the dot denotes the spacetime scalar product:
$\vw{\eta}\cdot\vw{p}_1 = \w{g}(\vw{\eta},\vw{p}_1) = g_{\mu\nu}
\eta^\mu p_1^\nu = \eta_\mu p_1^\mu$.
Here $\w{g}$ is the metric tensor, whose signature is chosen to be $(-,+,+,+)$.
Note that although $E_1$ is called an \emph{energy}, it
is not the particle's energy measured by any observer since $\vw{\eta}$ is not a unit
vector (i.e. cannot be considered as the 4-velocity of any observer), except in the
asymptotically flat region infinitely far from the black hole. For this reason $E_1$ is often called
the \emph{energy at infinity}. The virtue of $E_1$ is to remain constant
along the particle's worldline, as long as the latter is a geodesic, i.e., as long as
the particle is free falling. In the ergoregion, the particle disintegrates
into two particles with, say, 4-momenta $\vw{p}_2$ and $\vw{p}_*$.
Their conserved energies are, respectively, $E_2 = - \vw{\eta}\cdot\vw{p}_2$
and $\Delta E_H = - \vw{\eta}\cdot\vw{p}_*$ (the notation $\Delta E_H$ is
for future convenience). The first particle escapes to infinity,
which implies $E_2 > 0$, while the second one falls into the black hole.
Since in the ergoregion $\vw{\eta}$ is a spacelike vector (from the very definition
of an ergoregion), it is possible to have $\Delta E_H<0$ on certain
geodesics. The falling particle is then
called a \emph{negative energy particle}, although its energy measured by any
observer, such as for instance a zero-angular-momentum observer (ZAMO),
remains always positive.
At the disintegration point, the conservation of 4-momentum
implies $\vw{p}_1 = \vw{p}_2 + \vw{p}_*$; taking the scalar product
with $\vw{\eta}$, we deduce that
$E_1 = E_2 + \Delta E_H$. Then, as a result of $\Delta E_H < 0$, we get $E_2 > E_1$.
At infinity, where the constants $E_1$ and $E_2$ can be interpreted
as the energies measured by an inertial observer at rest with respect to the
black hole (thanks to the asymptotic behavior of $\vw{\eta}$), one has
clearly some energy gain: the outgoing particle is more energetic than
the ingoing one. This is the so-called \emph{mechanical Penrose process} of
energy extraction from a rotating black hole.
In other words, the sufficient and necessary condition for
energy extraction from a rotating black hole is
\begin{equation}
\label{eq:Penr_E}
\Delta E_H <0.
\end{equation}
From the condition that the energy measured locally by a ZAMO must be non-negative,
one obtains (see, e.g., \cite{Hartle-2003})
\begin{equation}
\label{eq:Penr_Epos}
\omega_H \Delta J_H\leq \Delta E_H,
\end{equation}
where $\omega_H$ is the angular velocity of the black hole (defined below) and $\Delta J_H$ is the angular momentum
of the negative-energy particle absorbed by the black hole, defined by
$\Delta J_H = \vw{\xi} \cdot \vw{p}_*$, where $\vw{\xi}$ is the Killing vector associated
with axisymmetry.
Without loss of generality, we take $\omega_H \ge0$.
Equations~(\ref{eq:Penr_E})-(\ref{eq:Penr_Epos}) imply that $\omega_H \neq 0$ and
\begin{equation}
\label{eq:Penr_J}
\Delta J_H < 0 .
\end{equation}
It is worth stressing that in the mechanical Penrose process, particles move on geodesics along
which (by construction) energy is conserved. Therefore the negative-energy particle must
originate in the ergoregion, the only domain of spacetime where such a particle can exist. In
the general case of interacting matter or fields, negative energy at the horizon does not imply negative energy elsewhere.
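A negative-energy state is easy to exhibit numerically. The sketch below works at the equator of a Kerr black hole in Boyer-Lindquist coordinates ($m=1$, $a=0.9$), picks a point inside the ergoregion, and solves the normalization $\vw{u}\cdot\vw{u}=-1$ for a timelike, future-directed 4-velocity; its conserved energy $-\vw{\eta}\cdot\vw{u}$ and angular momentum $\vw{\xi}\cdot\vw{u}$ both come out negative, matching the signs required by Eqs.~(\ref{eq:Penr_E}) and (\ref{eq:Penr_J}). The point $r=1.5$ and the value of $u^\phi$ are illustrative choices.

```python
import numpy as np

m, a = 1.0, 0.9
r = 1.5           # equatorial point with r_H < r < 2m: inside the ergoregion

# Boyer-Lindquist metric components at theta = pi/2
g_tt = -(1.0 - 2.0 * m / r)                 # > 0 inside the ergoregion
g_tp = -2.0 * m * a / r
g_pp = r**2 + a**2 + 2.0 * m * a**2 / r

# fix u^phi (co-rotating) and solve the normalization
# g_tt*(u^t)^2 + 2*g_tp*u^t*u^phi + g_pp*(u^phi)^2 = -1 for u^t
u_phi = 3.0
roots = np.roots([g_tt, 2.0 * g_tp * u_phi, g_pp * u_phi**2 + 1.0])
u_t = float(np.max(roots.real))             # branch with the larger u^t

E = -(g_tt * u_t + g_tp * u_phi)            # conserved "energy at infinity"
L = g_tp * u_t + g_pp * u_phi               # conserved angular momentum
norm = g_tt * u_t**2 + 2 * g_tp * u_t * u_phi + g_pp * u_phi**2
# here E ~ -0.45 < 0 and L ~ -2.18 < 0, with norm = -1
```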
Soon after Penrose's discovery that rotating black holes may be energy sources, it was suggested
that the mechanical Penrose process may power relativistic jets observed in quasars.
However, a careful analysis by \cite{Bardeen-1972, Wald-1974,
Kovetz-1975, Piran-1977} {(see also \cite{WaghD89})}, showed that it is unlikely that negative energy states,
necessary for the Penrose process to work, may be achieved through the particles
disintegration and/or collision inside the ergosphere. This conclusion has been confirmed more recently by
\cite{Bejger-2012,Harada-12,Zaslavskii-12} for high energy particle collisions. The reason is that in the case of
collisions, the particles with positive energies cannot escape because they must have large but
negative radial momenta. Thus, they are captured (together with the negative energy
particles) by the black hole. {Note that for charged particles evolving
in the electromagnetic field of a Kerr-Newman black hole, the efficiency of the
mechanical Penrose process can be very large \cite{BhatDD85,WaghD89}.}
Attempts to describe the BZ mechanism as a mechanical Penrose process have been unsuccessful (\cite{Komissarov-2009}
and references therein).
This leaves electromagnetic processes as the only astro\-phy\-si\-cally realistic way to extract
rotational energy from a rotating black hole.
\section{General relativistic preliminaries}
\label{Section-general-relativistic}
\subsection{The spacetime symmetries}
\label{sub-section-Kerr-symmetries}
The spacetime is modeled by a four-dimensional smooth manifold $\mathscr{M}$ equipped with
a metric $\w{g}$ of signature $(-,+,+,+)$.
We are considering a rotating uncharged black hole that is stationary and axisymmetric.
If the black hole is isolated, i.e., not surrounded by self-gravitating matter or electromagnetic
fields, the spacetime $(\mathscr{M},\w{g})$ is described by the Kerr metric (see
Appendix~\ref{ap:Kerr}).
Here and in Secs. \ref{s:conservation_laws} to \ref{sect:em}, we do not restrict to this case and consider a generic stationary and
axisymmetric metric $\w{g}$. As already mentioned in Sec.~\ref{Section-Penrose-particles}, we
denote by $\vw{\eta}$ the Killing vector associated with stationarity and by $\vw{\xi}$
that associated with axisymmetry.
In a coordinate system $(x^\alpha)=(t,x^1,x^2,x^3)$ adapted to stationarity,
i.e. such that
\begin{equation} \label{e:dodt_eta}
\der{}{t} = \vw{\eta} ,
\end{equation}
the components $g_{\alpha\beta}$ of the metric tensor are independent of the coordinate $t$.
In a similar way,
if the coordinate $x^3$, say, corresponds to the axial symmetry, the components
$g_{\alpha\beta}$ will be independent of this coordinate.
\subsection{The black-hole horizon}
\label{sub-section-Kerr-horizon}
The event horizon $\mathcal{H}$ is a null hypersurface; if it is stationary and
axisymmetric, the symmetry generators $\vw{\eta}$ and $\vw{\xi}$ have to
be tangent to it (cf. Fig.~\ref{f:horizon_vectors}). Moreover,
any null normal $\vw{\ell}$ to $\mathcal{H}$ has to be a linear
combination of $\vw{\eta}$ and $\vw{\xi}$: up to some rescaling by a
constant factor, we may write
\begin{equation}
\label{e:ell_eta_xi}
\vw{\ell} = \vw{\eta} + \omega_{H} \vw{\xi} ,
\end{equation}
where $\omega_{H}\geq 0$ is constant over $\mathcal{H}$ (rigidity theorem, cf. \cite{Carte79})
and is called the \emph{black-hole angular velocity}.
Since $\omega_{H}$ is constant, $\vw{\ell}$ is itself a Killing
vector and $\mathcal{H}$ is called a \emph{Killing horizon}. For a Kerr black hole of mass $m$
and angular momentum $a m$, we have $\omega_H = a/[2m r_H]$, where
$r_H=m+\sqrt{m^2-a^2}$ is the radius of the black-hole horizon.
Since $\mathcal{H}$ is a null hypersurface, the normal $\vw{\ell}$ is null,
$\vw\ell\cdot\vw\ell = 0$. For this reason, $\vw\ell$ is both normal
and tangent to $\mathcal{H}$. The field lines of $\vw{\ell}$ are null geodesics
tangent to $\mathcal{H}$; they are called the \emph{null generators} of
$\mathcal{H}$. One of them is drawn in Fig.~\ref{f:horizon_vectors}.
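For the parameters of Fig.~\ref{f:horizon_vectors} ($a=0.9m$), the horizon radius and angular velocity quoted in the caption follow directly from the two formulas above:

```python
import numpy as np

m, a = 1.0, 0.9                    # units with m = 1
r_H = m + np.sqrt(m**2 - a**2)     # horizon radius: 1 + sqrt(0.19) ~ 1.436 m
omega_H = a / (2.0 * m * r_H)      # black-hole angular velocity: ~0.313/m
```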
\begin{figure}
\centerline{\includegraphics[height=0.27\textheight]{Fig_1_lasota.eps}}
\caption{\label{f:horizon_vectors} \small
Spacetime diagram showing the event horizon of a Kerr black hole
of angular momentum parameter $a/m=0.9$. This three-dimensional diagram is
cut at $\theta=\pi/2$ of the four-dimensional spacetime.
The diagram is based on the 3+1 Kerr coordinates
$(t,r,\phi)$ described in Appendix~\ref{ap:Kerr} and the axes are labelled in units of $m$.
The event horizon $\mathcal{H}$ is the blue cylinder of radius
$r= r_H = 1.435 m$ (this value results from $a=0.9m$ via (\ref{e:KE:r_H_def})) and the green cone is the future light cone at the
point $(t=0, \theta=\pi/2, \phi=0)$ on $\mathcal{H}$.
The null vectors $\vw{\ell}$ and $\vw{k}$ (drawn in green) are tangent to this light cone, but not $\vw{\eta}$, which, although tangent to $\mathcal{H}$, is spacelike and hence lies outside the light cone. Note that relation (\ref{e:ell_eta_xi}) holds with
$\omega_H = 0.313 m^{-1}$ (cf. Appendix~\ref{ap:Kerr}). The green line, to which
$\vw{\ell}$ is tangent, is a null geodesic tangent to $\mathcal{H}$; if the figure was extended upward, it would show up as a helix.
$\vw{n}$ is the (timelike) unit normal to the hypersurface $t=0$.
$\vw{s}$ is the (spacelike) unit normal to the 2-sphere $\mathscr{S}_0$ defined by
$t=0$ and $r=r_H$. Note that this 2-sphere is drawn here as a circle (the basis of the cylinder) because the dimension along $\theta$ has been suppressed.
The vector $\vw{b}$ is the unit vector along $\vw{\xi} = \partial/\partial\phi$.
The vectors $(\vw{n},\vw{s},\vw{b})$ form an orthonormal basis (drawn in red) for the metric $\w{g}$.
}
\end{figure}
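That $\vw\ell = \vw\eta + \omega_H\,\vw\xi$ is indeed null on $\mathcal{H}$ can be checked numerically from the Boyer-Lindquist metric components $g_{tt}$, $g_{t\phi}$, $g_{\phi\phi}$, which remain finite algebraic functions of $(r,\theta)$ even though the coordinates themselves are singular on $\mathcal{H}$. A minimal Python sketch, evaluated at the equatorial point of Fig.~\ref{f:horizon_vectors}:

```python
import math

m, a = 1.0, 0.9                        # Kerr parameters (geometrized units)
r = m + math.sqrt(m**2 - a**2)         # horizon radius r_H
th = math.pi / 2                       # equatorial plane, as in Fig. 1
omega_H = a / (2.0 * m * r)            # horizon angular velocity

# Boyer-Lindquist metric components at (r_H, th)
sigma = r**2 + (a * math.cos(th))**2
g_tt = -(1.0 - 2.0 * m * r / sigma)
g_tp = -2.0 * m * a * r * math.sin(th)**2 / sigma
g_pp = (r**2 + a**2 + 2.0 * m * a**2 * r * math.sin(th)**2 / sigma) \
       * math.sin(th)**2

# ell = eta + omega_H xi  =>  ell.ell = g_tt + 2 omega_H g_tphi + omega_H^2 g_phiphi
ell2 = g_tt + 2.0 * omega_H * g_tp + omega_H**2 * g_pp
print(ell2)   # ~0: ell is null on the horizon
```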
Let $(x^\alpha)=(t,x^1,x^2,x^3)$ be a coordinate system on $\mathscr{M}$ that is
adapted to the stationarity, in the sense of (\ref{e:dodt_eta}), and
regular on $\mathcal{H}$. In the case of a Kerr black hole, this means that $(x^\alpha)$
are not the standard Boyer-Lindquist coordinates, which are well known to
be singular on $\mathcal{H}$. Regular coordinates on $\mathcal{H}$ are the Kerr coordinates,
either in their original version \citep{Kerr-1963} or in the 3+1 one,
and the Kerr-Schild coordinates, which are used in the numerical computations by
\citet{Tchekhovskoy-2010,Tchekhovskoy-2011,McKinney-2012} discussed in
Sec.~\ref{sect:mad}. See Appendix~\ref{ap:Kerr} for more details on the
coordinate system and the coordinate representation of $\vw \ell$.
Then from (\ref{e:dodt_eta}) and (\ref{e:ell_eta_xi}),
$t$ is the parameter along the null geodesics generating $\mathcal{H}$ for which $\vw{\ell}$
is the tangent vector:
\begin{equation}
\label{e:ell_dxdt}
\ell^\alpha = \frac{\mathrm{d} x^\alpha}{\mathrm{d} t} .
\end{equation}
(Note that in general $t$ is not an affine parameter along these geodesics.)
Since the coordinates $(t,x^i)$ are assumed regular on $\mathcal{H}$,
the 2-surfaces $\mathscr{S}_t$ of constant $t$ on $\mathcal{H}$ provide a regular slicing of $\mathcal{H}$ by a family
of spacelike 2-spheres. Let us denote by $\vw{k}$ the future-directed null vector field defined on $\mathcal{H}$
by the following requirements (cf. Fig.~\ref{f:horizon_vectors}):
\begin{enumerate}
\item $\vw{k}$ is orthogonal to $\mathscr{S}_t$,
\item $\vw{k}$ obeys
\begin{equation}
\label{e:k_ell}
\vw{k}\cdot\vw{\ell} = -1.
\end{equation}
\end{enumerate}
Then, at each point of $\mathscr{S}_t$, $\mathrm{Span}(\vw{k},\vw{\ell})$ is the timelike 2-plane orthogonal
to $\mathscr{S}_t$.
Note that $\vw{k}$ is transverse to $\mathcal{H}$ (i.e. is not tangent to it) and that, contrary to $\vw{\ell}$,
the vector $\vw{k}$ depends on the choice of the coordinates $(t,x^i)$ (more precisely on the slicing $(\mathscr{S}_t)_{t\in\mathbb{R}}$ of $\mathcal{H}$, see e.g. \cite{Gourgoulhon-2005}).
The 2-surfaces $\mathscr{S}_t$ of constant $t$ on $\mathcal{H}$ are spacelike 2-spheres corresponding to what is commonly understood as
the ``black-hole surface'', in analogy to ``stellar surface''.
\subsection{Energy condition}
\label{sub-section-Energy-conditions}
Let $\w{T}$ be the energy-momentum tensor of matter and non-gravitational fields
surrounding the black hole. We shall assume that it fulfills the
so-called \emph{null energy condition} at the event horizon:
\begin{equation} \label{e:Tll}
\left. T_{\mu\nu} \ell^\mu \ell^\nu \right\vert _{\mathcal{H}} \geq 0 .
\end{equation}
This is a very mild condition, which is satisfied by any ordinary matter
and any electromagnetic field.
In particular, it follows (by a continuity argument from timelike to null vectors)
from the standard \emph{weak energy condition}
\citep{HawkiE73}, according to which the energy measured locally by any observer is always non-negative.
\section{Energy and angular-momentum conservation laws}
\label{s:conservation_laws}
In the mechanical Penrose process particles move on geodesics along which
the energy $E$ and the angular
momentum $J$, as defined in Sec.~\ref{Section-Penrose-particles},
are conserved quantities. They can therefore be evaluated anywhere along the particle trajectories,
in particular at the black-hole surface, where an energy flux can be calculated. In the general case of matter
with nongravitational interactions (e.g. a perfect fluid) or of a field (e.g. electromagnetic), the energy
and angular momentum must be evaluated using the conservation equations; in such a case the fluxes of
the conserved quantities play a role equivalent to that of the energy and angular momentum in the case of
particles.\footnote{In \citet{Abramowicz-2010}, where a generalization of the Penrose process was attempted, Eqs.~(B.3) and
(B.4) are not correct because the ``energy at infinity'' and ``angular momentum at infinity'' used there are not conserved
quantities.}
\subsection{Energy conservation}
\label{s:energy_conservation}
Let us consider the ``energy-momentum density'' vector $\vw{P}$ defined by
\begin{equation}
\label{e:def_P}
P^\alpha = - T^\alpha_{\ \mu} \eta^\mu .
\end{equation}
If matter and nongravitational fields obey the standard
\emph{dominant energy condition} \citep{HawkiE73}, then $\vw{P}$ must be a future-directed
timelike or null vector as long as $\vw{\eta}$ is timelike, i.e. outside
the ergoregion. In the ergoregion, where $\vw{\eta}$ is spacelike,
there is no guarantee that $\vw{P}$ is timelike or null, and even
when it is timelike, $\vw{P}$ can be past-directed (an example is provided
in Fig.~\ref{f:negative_energy} below).
Therefore, $\vw{P}$ cannot be interpreted as a physical energy-momentum density, hence
the quotes in the above denomination.
Moreover, even outside the ergoregion,
$\vw{P}$ does not correspond to the energy-momentum density measured by
any physical observer, since $\vw{\eta}$, not being a unit vector except
at infinity, cannot be any observer's 4-velocity
(cf. the discussion in Sec.~\ref{Section-Penrose-particles}).
The vector $\vw{P}$ is known as the \emph{Noether current} associated
with the symmetry generator $\vw{\eta}$ \citep{Szaba09, JaramG11}. It is conserved
in the sense that
\begin{equation}
\label{e:P_conserved}
\nabla_\mu P^\mu = 0 .
\end{equation}
This is easily proved from the definition (\ref{e:def_P})
by means of (i) the energy-momentum conservation law
$\nabla_\mu T^{\mu\nu} = 0$, (ii) the Killing equation obeyed by
$\vw{\eta}$ and (iii) the symmetry of the tensor $\w{T}$.
By Stokes' theorem, it follows from (\ref{e:P_conserved}) that
the flux of $\vw{P}$ through any closed\footnote{i.e. compact without boundary.} oriented hypersurface $\mathscr{V}$ vanishes:
\begin{equation} \label{e:flux_P_V}
\oint_{\mathscr{V}} \w{\eps}(\vw{P}) = 0,
\end{equation}
where $\w{\eps}(\vw{P})$ stands for the 3-form obtained by setting $\vw{P}$ as the first argument of the Levi-Civita tensor $\w{\eps}$ (or volume 4-form) associated with the spacetime metric $\w{g}$:
\begin{equation} \label{e:def_epsP}
\w{\eps}(\vw{P}) := \w{\eps}(\vw{P},.,.,.) .
\end{equation}
In terms of components in a right-handed basis,
\begin{equation} \label{e:epsP_comp}
\epsilon(\vw{P})_{\alpha\beta\gamma} = P^\mu \epsilon_{\mu\alpha\beta\gamma} =
\sqrt{-g} P^\mu [\mu,\alpha,\beta,\gamma],
\end{equation}
where $g := \det(g_{\alpha\beta})$ and $[\mu,\alpha,\beta,\gamma]$ is the alternating symbol
of four indices, i.e. $[\mu,\alpha,\beta,\gamma]=1$ ($-1$) if
$(\mu,\alpha,\beta,\gamma)$
is an even (odd) permutation of $(0,1,2,3)$, and $[\mu,\alpha,\beta,\gamma]=0$
otherwise.
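The alternating symbol is straightforward to implement from the parity of the permutation; the following Python sketch (with a hypothetical function name, for illustration only) reproduces the definition above:

```python
def alt_symbol(mu, alpha, beta, gamma):
    """Alternating symbol [mu, alpha, beta, gamma]: +1 (-1) for an even
    (odd) permutation of (0, 1, 2, 3), and 0 otherwise."""
    idx = [mu, alpha, beta, gamma]
    if set(idx) != {0, 1, 2, 3}:
        return 0
    # count inversions to determine the permutation parity
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4)
                     if idx[i] > idx[j])
    return 1 if inversions % 2 == 0 else -1

print(alt_symbol(0, 1, 2, 3), alt_symbol(1, 0, 2, 3), alt_symbol(0, 0, 2, 3))
# 1 -1 0
```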
Note that the integral (\ref{e:flux_P_V}) is intrinsically
well defined, as the integral
of a 3-form over a three-dimensional oriented manifold. The proof of (\ref{e:flux_P_V})
relies on Stokes' theorem according to which the integral over $\mathscr{V}$ is equal to
the integral over the interior of $\mathscr{V}$ of
the exterior derivative of the 3-form $\w{\eps}(\vw{P})$; the latter being
$\bm{\mathrm{d}} [ \w{\eps}(\vw{P}) ] = (\nabla_\mu P^\mu) \w{\eps}$, it
vanishes identically as a consequence of (\ref{e:P_conserved}).
\begin{figure}
\centerline{\includegraphics[height=0.20\textheight]{Fig_2_lasota.eps}}
\caption{\label{f:hypersurfaces} \small
Closed hypersurface $\mathscr{V}=\Sigma_1 \cup \Delta\mathcal{H} \cup \Sigma_2 \cup \Sigma_{\rm ext}$.
The green arrows depict the orientation of $\mathscr{V}$, which is given by $\w{\eps}(\vw{m})$.}
\end{figure}
Let us apply (\ref{e:flux_P_V}) to the hypersurface $\mathscr{V}$
defined as the following union:
\begin{equation} \label{e:VV_union}
\mathscr{V} := \Sigma_1 \cup \Delta\mathcal{H} \cup \Sigma_2 \cup \Sigma_{\rm ext},
\end{equation}
where (cf. Fig.~\ref{f:hypersurfaces})
\begin{itemize}
\item $\Sigma_1$ ($\Sigma_2$) is a compact spacelike hypersurface delimited by two 2-spheres,
$\mathcal{S}_1$ and $\mathcal{S}_1^{\rm ext}$ ($\mathcal{S}_2$ and $\mathcal{S}_2^{\rm ext}$), such that $\mathcal{S}_1$ ($\mathcal{S}_2$) lies on $\mathcal{H}$ and $\mathcal{S}_1^{\rm ext}$ (resp. $\mathcal{S}_2^{\rm ext}$) is located far from the black hole;
\item $\Sigma_2$ is assumed to lie entirely in the future of $\Sigma_1$;
\item $\Delta\mathcal{H}$ is the portion of the event horizon $\mathcal{H}$ delimited by $\mathcal{S}_1$ and $\mathcal{S}_2$;
\item $\Sigma_{\rm ext}$ is a timelike hypersurface having $\mathcal{S}_1^{\rm ext}$ and $\mathcal{S}_2^{\rm ext}$ for
boundaries.
\end{itemize}
We may choose (although this is not mandatory) the 2-spheres $\mathcal{S}_1$ and $\mathcal{S}_2$ to coincide with
some slices of the foliation $(\mathscr{S}_t)_{t\in\mathbb{R}}$ of $\mathcal{H}$ mentioned in Sec.~\ref{sub-section-Kerr-horizon}:
$\mathcal{S}_1 = \mathscr{S}_{t_1}$ and $\mathcal{S}_2 = \mathscr{S}_{t_2}$.
We choose the orientation of $\mathscr{V}$ to be towards its exterior,
but the final results do not depend upon this choice. The orientation of $\mathscr{V}$
is depicted by the vector
$\vw{m}$ in Fig.~\ref{f:hypersurfaces}. Note that this vector does not have to be normal to the various
parts of $\mathscr{V}$ (in particular it is not normal to $\Delta\mathcal{H}$). Its role is only to indicate that the orientation of $\mathscr{V}$ is given by the 3-form $\w{\eps}(\vw{m})$
restricted to vectors tangent to $\mathscr{V}$. More precisely, $\vw{m}$ is defined as follows:
\begin{itemize}
\item on $\Sigma_1$, $\vw{m} = - \vw{n}_1$, the vector $\vw{n}_1$ being
the future-directed unit timelike normal to $\Sigma_1$;
\item on $\Sigma_2$, $\vw{m} = \vw{n}_2$, the future-directed unit timelike normal to $\Sigma_2$;
\item on $\Sigma_{\rm ext}$, $\vw{m} = \vw{s}$, the unit spacelike normal
to $\Sigma_{\rm ext}$ oriented towards the exterior of $\mathscr{V}$;
\item on $\Delta\mathcal{H}$, $\vw{m}=\vw{k}$, the future-directed null vector
introduced above [cf. (\ref{e:k_ell})].
\end{itemize}
In view of (\ref{e:VV_union}), the property (\ref{e:flux_P_V}) gives
\begin{equation} \label{e:flux_P_sum_int}
\int_{\Sigma_1\downarrow} \w{\eps}(\vw{P}) \
+ \int_{{\ \atop\stackrel{\scriptstyle \Delta\mathcal{H}}{\scriptstyle \leftarrow}}} \w{\eps}(\vw{P}) \
+ \int_{\Sigma_2\uparrow} \w{\eps}(\vw{P}) \
+ \int_{{\ \atop\stackrel{\scriptstyle \Sigma_{\rm ext}}{\scriptstyle \rightarrow}}} \w{\eps}(\vw{P})= 0 ,
\end{equation}
where the arrows indicate the orientation (cf. Fig.~\ref{f:hypersurfaces}).
Let us then define the \emph{energy contained in $\Sigma_1$} by
\begin{eqnarray} \label{e:def_E1}
E_1 &:=& \int_{\Sigma_1\uparrow} \w{\eps}(\vw{P})
= - \int_{\Sigma_1} P_\mu n_1^\mu \, \mathrm{d} V \nonumber \\
&=& \int_{\Sigma_1} T_{\mu\nu} \eta^\mu n_1^\nu \, \sqrt{\gamma}
\, \mathrm{d} x^1 \, \mathrm{d} x^2 \, \mathrm{d} x^3 ,
\end{eqnarray}
the \emph{energy contained in $\Sigma_2$} by
\begin{eqnarray}\label{e:def_E2}
E_2 &:=& \int_{\Sigma_2\uparrow} \w{\eps}(\vw{P})
= - \int_{\Sigma_2} P_\mu n_2^\mu \, \mathrm{d} V \nonumber \\
&=& \int_{\Sigma_2} T_{\mu\nu} \eta^\mu n_2^\nu \, \sqrt{\gamma}
\, \mathrm{d} x^1 \, \mathrm{d} x^2 \, \mathrm{d} x^3 ,
\end{eqnarray}
the \emph{energy captured by the black hole between $\Sigma_1$ and $\Sigma_2$} by
\begin{eqnarray} \label{e:E_H}
\Delta E_H &:=&
\int_{{\ \atop\stackrel{\scriptstyle \Delta\mathcal{H}}{\scriptstyle \leftarrow}}} \w{\eps}(\vw{P})
= - \int_{\Delta\mathcal{H}} P_\mu \ell^\mu \, \mathrm{d} V\nonumber \\
& =& \int_{\Delta\mathcal{H}} T_{\mu\nu} \eta^\mu \ell^\nu \, \sqrt{q} \, \mathrm{d} t \, \mathrm{d} y^1 \, \mathrm{d} y^2
\end{eqnarray}
and the \emph{energy evacuated from the system between $\Sigma_1$ and $\Sigma_2$}
by
\begin{eqnarray}\label{e:E_ext}
\Delta E_{\rm ext} &:=&
\int_{{\ \atop\stackrel{\scriptstyle \Sigma_{\rm ext}}{\scriptstyle \rightarrow}}} \w{\eps}(\vw{P})
= \int_{\Sigma_{\rm ext}} P_\mu s^\mu \, \mathrm{d} V \nonumber \\
& =& - \int_{\Sigma_{\rm ext}} T_{\mu\nu} \eta^\mu s^\nu \, \sqrt{-h}
\, \mathrm{d} t \, \mathrm{d} y^1 \, \mathrm{d} y^2 .
\end{eqnarray}
In the above formulas,
\begin{itemize}
\item $\mathrm{d} V$ is the volume element induced on each hypersurface
by the spacetime Levi-Civita tensor $\w{\eps}$;
\item $(x^1,x^2,x^3)$ are generic coordinates
on $\Sigma_1$ and $\Sigma_2$ that are right-handed with respect
to the hypersurface orientation;
\item $\gamma$ is the determinant of the
components with respect to the coordinates $(x^1,x^2,x^3)$ of the 3-metric
$\w{\gamma}$ induced by $\w{g}$ on $\Sigma_1$ or $\Sigma_2$;
\item
$(t,y^1,y^2)$ are generic right-handed coordinates on $\Sigma_{\rm ext}$;
\item
$h$ is the determinant of the
components with respect to the coordinates $(t,y^1,y^2)$ of the 3-metric
$\w{h}$ induced by $\w{g}$ on $\Sigma_{\rm ext}$ ($h<0$ since $\Sigma_{\rm ext}$
is timelike);
\item $(t,y^1,y^2)$ are
right-handed coordinates on $\Delta\mathcal{H}$ such that $t$ is the parameter
along the null geodesics generating $\mathcal{H}$ associated with the null normal
$\vw{\ell}$ (cf. (\ref{e:ell_dxdt}));
\item $q$ is the determinant
with respect to the coordinates $(y^1,y^2)$ of the 2-metric
induced by $\w{g}$ on the 2-surfaces $t=\mathrm{const}$ in $\Delta\mathcal{H}$.
\end{itemize}
The second and third equalities in
each of equations (\ref{e:def_E1})-(\ref{e:E_ext}) are established in Appendix~\ref{ap:flux_integrals}.
With the above definitions,
(\ref{e:flux_P_sum_int}) can be written as the energy conservation law
\begin{equation} \label{e:ener_cons}
{E_2 + \Delta E_{\rm ext} - E_1 = - \Delta E_H } .
\end{equation}
Notice that the minus sign in front of $E_1$ arises from the change of orientation
of $\Sigma_1$ between (\ref{e:flux_P_sum_int}) and the definition
(\ref{e:def_E1}) of $E_1$.
\subsection{Angular-momentum conservation}
\label{sub:angmoment}
In a way similar to (\ref{e:def_P}), we define the angular-momentum density vector by
\begin{equation} \label{e:def_M}
M^\alpha = T^\alpha_{\ \mu} \xi^\mu .
\end{equation}
Since $\vw{\xi}$ is a Killing vector, $\vw{M}$ obeys the conservation law
\begin{equation} \label{e:divM}
\nabla_\mu M^\mu = 0 .
\end{equation}
Let us introduce the
\emph{angular momentum contained in $\Sigma_1$} and that
\emph{contained in} $\Sigma_2$ by
\begin{eqnarray}
\label{e:def_J1}
J_1& :=& \int_{\Sigma_1\uparrow} \w{\eps}(\vw{M}) = - \int_{\Sigma_1} M_\mu n_1^\mu \, \mathrm{d} V \nonumber \\
& =& - \int_{\Sigma_1} T_{\mu\nu} \xi^\mu n_1^\nu \, \sqrt{\gamma}
\, \mathrm{d} x^1 \, \mathrm{d} x^2 \, \mathrm{d} x^3
\end{eqnarray}
and
\begin{eqnarray}
J_2 &:=& \int_{\Sigma_2\uparrow} \w{\eps}(\vw{M}) = - \int_{\Sigma_2} M_\mu n_2^\mu \, \mathrm{d} V \nonumber \\
&= &- \int_{\Sigma_2} T_{\mu\nu} \xi^\mu n_2^\nu \, \sqrt{\gamma}
\, \mathrm{d} x^1 \, \mathrm{d} x^2 \, \mathrm{d} x^3,
\end{eqnarray}
the \emph{angular momentum captured by the black hole between $\Sigma_1$ and $\Sigma_2$} by
\begin{eqnarray} \label{e:J_H}
\Delta J_H &:=&
\int_{{\ \atop\stackrel{\scriptstyle \Delta\mathcal{H}}{\scriptstyle \leftarrow}}} \w{\eps}(\vw{M})
= - \int_{\Delta\mathcal{H}} M_\mu \ell^\mu \, \mathrm{d} V\nonumber \\
&=& - \int_{\Delta\mathcal{H}} T_{\mu\nu} \xi^\mu \ell^\nu \, \sqrt{q} \, \mathrm{d} t \, \mathrm{d} y^1 \, \mathrm{d} y^2
\end{eqnarray}
and the \emph{angular momentum evacuated from the system between $\Sigma_1$ and $\Sigma_2$} by
\begin{eqnarray} \label{e:def_Jext}
J_{\rm ext} &:=&
\int_{{\ \atop\stackrel{\scriptstyle \Sigma_{\rm ext}}{\scriptstyle \rightarrow}}}
\w{\eps}(\vw{M})
= \int_{\Sigma_{\rm ext}} M_\mu s^\mu \, \mathrm{d} V \nonumber \\
& =& \int_{\Sigma_{\rm ext}} T_{\mu\nu} \xi^\mu s^\nu \, \sqrt{-h}
\, \mathrm{d} t \, \mathrm{d} y^1 \, \mathrm{d} y^2 .
\end{eqnarray}
We deduce then from (\ref{e:divM}) that, similarly to (\ref{e:ener_cons}),
\begin{equation} \label{e:angu_mom}
{J_2 + J_{\rm ext} - J_1 = - \Delta J_H } .
\end{equation}
\subsection{Explicit expressions in adapted coordinates}
\label{s:adapted_coord}
Let us call \emph{adapted coordinates} any right-handed spherical-type coordinate system
$(x^\alpha) = (t,r,\th,\phi)$ such that (i)
$t$ and $\phi$ are associated with the two spacetime symmetries, so that the two independent Killing vectors
are $\vw{\eta} = \partial/\partial t$ and $\vw{\xi} = \partial/\partial \phi$,
(ii) the event horizon $\mathcal{H}$ is the hypersurface defined by $r=\mathrm{const} = r_H$,
(iii) the timelike hypersurface $\Sigma_{\rm ext}$ is defined by
$r=\mathrm{const}=r_{\rm ext}$ and $t\in[t_1,t_2]$, where $t_1$ and $t_2$
are two constants such that $t_1<t_2$ and (iv) the spacelike hypersurface $\Sigma_1$
($\Sigma_2$) is defined by $t=t_1$ and $r\in[r_H, r_{\rm ext}]$
($t=t_2$ and $r\in[r_H, r_{\rm ext}]$).
Then $\Delta\mathcal{H}$ is the hypersurface defined by $r=r_H$ and
$t\in[t_1, t_2]$.
In the case of Kerr spacetime, an example of adapted coordinates are the
3+1 Kerr coordinates described in Appendix~\ref{ap:Kerr}.
On $\Sigma_1$ or $\Sigma_2$, $(r,\th,\phi)$ are coordinates that are right-handed
with respect to the ``up'' orientation of these hypersurfaces used in the
definitions (\ref{e:def_E1})-(\ref{e:def_E2}) of $E_1$ and $E_2$. Consequently,
\begin{eqnarray}
E_{1,2} &=& \int_{\Sigma_{1,2}} \epsilon(P)_{r\th\phi} \, \mathrm{d} r\, \mathrm{d}\th \, \mathrm{d}\phi\nonumber \\
&=& \int_{\Sigma_{1,2}} \sqrt{-g} P^t \, \underbrace{[t, r, \th,\phi]}_{1}\nonumber
\, \mathrm{d} r\, \mathrm{d}\th \, \mathrm{d}\phi ,
\end{eqnarray}
where the second equality results from (\ref{e:epsP_comp}).
Now, (\ref{e:def_P}) yields $P^t = - T^t_{\ \, \mu}\eta^\mu = -T^t_{\ \, t}$
since $\eta^\alpha = (1,0,0,0)$ in adapted coordinates.
We conclude that
\begin{eqnarray} \label{e:E_1_E_2_adapted}
E_1 &=& - \int_{\Sigma_1} \, T^t_{\ \, t} \, \sqrt{-g}
\, \mathrm{d} r \, \mathrm{d} \th \, \mathrm{d} \phi \nonumber \\
\quad\mbox{and}\quad&& \nonumber \\
E_2 &=& - \int_{\Sigma_2} \, T^t_{\ \, t} \, \sqrt{-g}
\, \mathrm{d} r \, \mathrm{d} \th \, \mathrm{d} \phi .
\end{eqnarray}
As a check, we note that the above formulas can also be recovered from the
expressions involving $T_{\mu\nu}\eta^\mu n_{1,2}^\nu$ in
(\ref{e:def_E1})-(\ref{e:def_E2}). Indeed, the unit timelike normal $\vw{n}$ to
$\Sigma_1$ or $\Sigma_2$ obeys $n_\alpha = (-N,0,0,0)$, where
$N$ is the lapse function of the spacetime foliation by $t=\mathrm{const}$
hypersurfaces (see e.g. \cite{Gourg12}). Accordingly
$T_{\mu\nu} \eta^\mu n^\nu = T^\nu_{\ \, \mu} \eta^\mu n_\nu = T^t_{\ \, t} (-N)$.
Since $N\sqrt{\gamma} = \sqrt{-g}$, we get (\ref{e:E_1_E_2_adapted}).
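The identity $N\sqrt{\gamma} = \sqrt{-g}$ invoked here can be illustrated in the simple $a=0$ (Schwarzschild) limit, where all quantities are explicit; a minimal numerical sketch, assuming Schwarzschild coordinates:

```python
import math

# Schwarzschild metric in Schwarzschild coordinates (t, r, theta, phi),
# used as the a = 0 limit of Kerr to illustrate N * sqrt(gamma) = sqrt(-g)
m, r, th = 1.0, 5.0, 1.2               # sample point outside the horizon
f = 1.0 - 2.0 * m / r

N = math.sqrt(f)                       # lapse of the t = const foliation
# 3-metric: gamma_rr = 1/f, gamma_thth = r^2, gamma_phph = r^2 sin^2(th)
sqrt_gamma = math.sqrt((1.0 / f) * r**2 * r**2 * math.sin(th)**2)
sqrt_minus_g = r**2 * math.sin(th)     # sqrt(-g) for Schwarzschild

print(abs(N * sqrt_gamma - sqrt_minus_g))   # ~0
```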
On $\Delta\mathcal{H}$, $(t,\th,\phi)$ are coordinates that are right-handed
with respect to the ``inward'' orientation used in the definition
(\ref{e:E_H}) of $\Delta E_H$. Indeed
\begin{eqnarray}
\w{\eps}(\vw{m},\vw{\partial}_t, \vw{\partial}_\th, \vw{\partial}_\phi) &=&
\w{\eps}(\vw{k},\vw{\partial}_t, \vw{\partial}_\th, \vw{\partial}_\phi)\nonumber \\
&=& k^r \epsilon_{rt\th\phi} = - \underbrace{k^r}_{<0}
\underbrace{\epsilon_{tr\th\phi}}_{>0} > 0 . \nonumber \\
\end{eqnarray}
Accordingly,
\begin{eqnarray}
\Delta E_H &=& \int_{\Delta\mathcal{H}} \epsilon(P)_{t\th\phi} \, \mathrm{d} t\, \mathrm{d}\th \, \mathrm{d}\phi\nonumber \\
& =& \int_{\Delta\mathcal{H}} \sqrt{-g} P^r \, \underbrace{[r, t, \th,\phi]}_{-1}
\, \mathrm{d} t\, \mathrm{d}\th \, \mathrm{d}\phi , \nonumber \\
\end{eqnarray}
where the second equality results from (\ref{e:epsP_comp}).
Since $P^r = -T^r_{\ \, t}$ from (\ref{e:def_P}), we get
\begin{equation} \label{e:E_H_adapted}
\Delta E_H = \int_{\Delta\mathcal{H}} T^r_{\ \, t} \, \sqrt{-g} \, \mathrm{d} t \, \mathrm{d} \theta
\, \mathrm{d} \phi .
\end{equation}
On $\Sigma_{\rm ext}$, it is $(t,\phi,\th)$, and not $(t,\th,\phi)$, that
constitutes a right-handed coordinate system with respect to the
orientation used in the definition (\ref{e:E_ext}) of $\Delta E_{\rm ext}$.
Indeed
\begin{eqnarray}
\w{\eps}(\vw{m},\vw{\partial}_t, \vw{\partial}_\phi, \vw{\partial}_\th) &=&
\w{\eps}(\vw{s},\vw{\partial}_t, \vw{\partial}_\phi, \vw{\partial}_\th)\nonumber \\
& =& s^r \epsilon_{rt\phi\theta} = \underbrace{s^r}_{>0}
\underbrace{\epsilon_{tr\th\phi}}_{>0} > 0 . \nonumber \\
\end{eqnarray}
We have therefore
\begin{eqnarray}
\Delta E_{\rm ext} &=& \int_{\Sigma_{\rm ext}} \epsilon(P)_{t\phi\th} \, \mathrm{d} t\, \mathrm{d}\th \, \mathrm{d}\phi\nonumber \\
&=& \int_{\Sigma_{\rm ext}} \sqrt{-g} P^r \, \underbrace{[r, t, \phi,\th]}_{1}
\, \mathrm{d} t\, \mathrm{d}\th \, \mathrm{d}\phi . \nonumber \\
\end{eqnarray}
Substituting $-T^r_{\ \, t}$ for $P^r$, we get
\begin{equation} \label{e:E_ext_adapted}
\Delta E_{\rm ext}
= - \int_{\Sigma_{\rm ext}} T^r_{\ \, t} \, \sqrt{-g}
\, \mathrm{d} t \, \mathrm{d} \th \, \mathrm{d} \phi .
\end{equation}
The formulas for the angular momentum are similar to the above ones,
with $T^t_{\ \, t}$ replaced by $-T^t_{\ \, \phi}$
and $T^r_{\ \, t}$ replaced by $-T^r_{\ \, \phi}$:
\begin{eqnarray} \label{e:J_1_J_2_adapted}
J_1 &=& \int_{\Sigma_1} \, T^t_{\ \, \phi} \, \sqrt{-g}
\, \mathrm{d} r \, \mathrm{d} \th \, \mathrm{d} \phi \nonumber \\
\quad\mbox{and}\quad &&\nonumber \\
J_2 &=& \int_{\Sigma_2} \, T^t_{\ \, \phi} \, \sqrt{-g}
\, \mathrm{d} r \, \mathrm{d} \th \, \mathrm{d} \phi ,
\end{eqnarray}
\begin{equation} \label{e:J_H_adapted}
\Delta J_H = - \int_{\Delta\mathcal{H}} T^r_{\ \, \phi} \, \sqrt{-g} \, \mathrm{d} t \, \mathrm{d} \theta
\, \mathrm{d} \phi ,
\end{equation}
\begin{equation} \label{e:J_ext_adapted}
J_{\rm ext}
= \int_{\Sigma_{\rm ext}} T^r_{\ \, \phi} \, \sqrt{-g}
\, \mathrm{d} t \, \mathrm{d} \th \, \mathrm{d} \phi .
\end{equation}
Expressions (\ref{e:E_1_E_2_adapted})-(\ref{e:E_ext_adapted})
and (\ref{e:J_1_J_2_adapted})-(\ref{e:J_ext_adapted}), as well as the
energy conservation law (\ref{e:ener_cons}) and the angular-momentum conservation
law (\ref{e:angu_mom}), are rederived in Appendix~\ref{ap:spherical}, via a pure
coordinate-based calculation.
\section{General conditions for black-hole rotational energy extraction}
\label{section-General}
\subsection{General case}
For definiteness, let us consider that $\Sigma_1$ and $\Sigma_2$ are parts of a foliation of spacetime
by a family of spacelike hypersurfaces $(\Sigma_t)_{t\in\mathbb{R}}$:
\begin{equation}
\Sigma_1 = \Sigma_{t_1} \quad\mbox{and}\quad \Sigma_2 = \Sigma_{t_2} \quad\mbox{with}\quad t_2 > t_1 .
\end{equation}
For instance, in the case of a Kerr black hole, the hypersurface label $t$ can be chosen to be the Kerr-Schild time coordinate introduced in Appendix~\ref{ap:Kerr}.
In (\ref{e:ener_cons}), we may then interpret $E_1$ as the ``initial energy'', i.e. the energy
``at the time $t_1$'',
$E_2$ as the ``final energy'', i.e. the energy ``at the time $t_2$'' and
$\Delta E_{\rm ext}$ as the energy evacuated from the system between the times $t_1$ and $t_2$.
Accordingly, the ``energy gained by the world outside of the black hole'' between $t_1$ and $t_2$ is
defined as
\begin{equation} \label{e:energy_gain}
\Delta E := E_2 + \Delta E_{\rm ext} - E_1 .
\end{equation}
Then, energy will be extracted from the black hole if, and only if, $\Delta E > 0$.
In view of the conservation law (\ref{e:ener_cons}), we conclude that energy is extracted from a black hole if, and only if,
\begin{equation} \label{e:Penrose_process}
{ \Delta E_H < 0 } .
\end{equation}
We refer to any process that accomplishes this as a \emph{Penrose process}.
Let us assume that the energy-momentum tensor obeys the \emph{null energy condition}
(cf. Sect. \ref{sub-section-Energy-conditions}) on the event horizon:
$\left. T_{\mu\nu} \ell^\mu \ell^\nu \right| _{\mathcal{H}} \geq 0 $ [Eq.~(\ref{e:Tll})].
As mentioned above, this is a rather mild condition, implied by the standard weak energy condition.
From (\ref{e:ell_eta_xi}), (\ref{e:def_P}) and (\ref{e:def_M}), it follows that
\[
T_{\mu\nu} \ell^\mu \ell^\nu = T_{\mu\nu}(\eta^\nu + \omega_{H} \xi^\nu) \ell^\mu
= - P_\mu \ell^\mu + \omega_{H} \, M_\mu \ell^\mu .
\]
Integrating (\ref{e:Tll}) over $\Delta\mathcal{H}$ then yields
\begin{equation}
- \int_{\Delta\mathcal{H}} P_\mu \ell^\mu \, \mathrm{d} V + \omega_{H} \int_{\Delta\mathcal{H}} M_\mu \ell^\mu \, \mathrm{d} V
\geq 0 ,
\end{equation}
where we have used the fact that $\omega_{H}$ is constant.
Using (\ref{e:E_H}) and (\ref{e:J_H}), the above relation can be rewritten as
$\Delta E_H - \omega_{H} \Delta J_H \geq 0$,
i.e.
\begin{equation} \label{e:omegJ_EH}
{ \omega_{H} \Delta J_H \leq \Delta E_H } .
\end{equation}
In view of (\ref{e:omegJ_EH}) and $\omega_{H} \geq 0$, the black-hole energy extraction condition
(\ref{e:Penrose_process}) implies
\begin{equation}
\label{e:Penrose_am}
{\Delta J_H < 0} .
\end{equation}
We conclude the following:
\begin{quote}
For a matter distribution
or a nongravitational field obeying the null energy condition,
a necessary and sufficient condition for energy extraction from a rotating black
hole is that it absorbs negative energy $\Delta E_H$ and negative angular momentum $\Delta J_H$.
\end{quote}
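As a simple numerical illustration of this implication (with hypothetical values of the fluxes), the bound $\omega_H \Delta J_H \leq \Delta E_H$ with $\Delta E_H < 0$ and $\omega_H > 0$ indeed forces $\Delta J_H < 0$:

```python
omega_H = 0.313     # horizon angular velocity for a/m = 0.9 (units of 1/m)
dE_H = -0.05        # hypothetical captured energy: negative => extraction

# the null-energy-condition bound omega_H * dJ_H <= dE_H
# yields an upper bound on the captured angular momentum
dJ_H_bound = dE_H / omega_H
print(dJ_H_bound)   # negative, so dJ_H must be negative as well
```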
\begin{figure}
\centerline{\includegraphics[width=0.45\textwidth]{Fig_3_lasota.eps}}
\centerline{\includegraphics[width=0.45\textwidth]{Fig_4_lasota.eps}}
\caption{\label{f:Econs} \small Two views of the energy balance in a Penrose process. {\sl Top}: Global (GL) with $E_2 > E_1$
and $\Delta E_{\rm ext}=0$.
{\sl Bottom}: Local (LC) stationary view with $E_2 = E_1$ but $\Delta E_{\rm ext}= - \Delta E_H >0$. The region of spacetime concerned
with this view is marked ``LC'' on the top figure.}
\end{figure}
Eqs. (\ref{e:Penrose_process}), (\ref{e:omegJ_EH}) and (\ref{e:Penrose_am}) are identical with
Eqs. (\ref{eq:Penr_E}), (\ref{eq:Penr_Epos}) and (\ref{eq:Penr_J}) describing the condition for the
Penrose process. They describe the same physics: in order to extract energy from a rotating black
hole one must feed it negative energy and angular momentum.
{\sl Any extraction of a black hole's rotational energy by interaction with matter and/or (nongravitational) fields is a Penrose
process.}
\subsection{Penrose process in terms of the Noether current $\vw{P}$}
\label{s:Penrose_Noether_current}
Given the expression (\ref{e:E_H}) of $\Delta E_H$, we note that the
Penrose-process condition
(\ref{e:Penrose_process}) implies $P_\mu \ell^\mu > 0$ on some part
of $\Delta\mathcal{H}$. Since $\vw{\ell}$ is a future-directed null vector,
$P_\mu \ell^\mu > 0$ if, and only if, $\vw{P}$ is either (i) spacelike or
(ii) past directed timelike or past directed null. Therefore, we conclude that
\begin{quote}
A necessary condition for a Penrose process to occur is to have
the Noether current $\vw{P}$ be spacelike or
past directed (timelike or null) on some part of $\Delta\mathcal{H}$.
\end{quote}
As we already noticed in Sec.~\ref{s:energy_conservation}, if the matter
or fields fulfil the standard dominant energy condition, the vector
$\vw{P}$ is always future directed timelike or null outside the ergoregion;
therefore it can be spacelike or past directed only in the ergoregion.
\subsection{Applications of the Penrose-process energy balance}
\label{subsect: applications}
The energy balance equations derived above can be applied to essentially two views of energy extraction from a black hole.
First, one can adopt a global (GL) spacetime view, applied
to theoretically described ``real'' astrophysical systems (Fig.~\ref{f:Econs}, top). Matter and/or fields have a limited spatial extent, and the timelike hypersurface $\Sigma_{\rm ext}$ is placed sufficiently far away that $\Delta E_{\rm ext}=0$. When there is energy extraction,
i.e. when $\Delta E >0$, then $E_2 > E_1$. This is the view we will have in mind in
Secs.~\ref{s:examples} and \ref{sect:em}.
When dealing with numerical simulations, however, such a global view is usually impractical. The simulation is performed in a box of limited size and
the system is brought to a stationary state. The view presented in the bottom part of Fig.~\ref{f:Econs} is then better adapted to the energy balance.
Because of stationarity one has $E_2 = E_1$ but $\Delta E_{\rm ext}>0$. When the numerical code conserves energy very well, the energy balance implies
$\Delta E_H<0$. This is the view applied in Sec.~\ref{sect:mad}.
\section{Various examples of the Penrose process}
\label{s:examples}
In what follows we will apply Eqs. (\ref{e:def_E1}) to (\ref{e:ener_cons}) and (\ref{e:def_J1}) to (\ref{e:angu_mom})
to various black-hole plus matter (or fields) configurations. We first show that
in the case of particles one recovers the
standard Penrose-process formulae. Then we shall apply our formalism to the cases of
a scalar field and a perfect fluid.
The case of the electromagnetic field is treated in Sec.~\ref{sect:em}.
\subsection{Mechanical Penrose-process test}
\label{sub:mechP}
Let us show that the formalism developed above reproduces the mechanical Penrose
process for a single particle that
breaks up into two fragments in the ergoregion.
The energy-momentum tensor of a massive particle of mass
$\mathfrak{m}$ and 4-velocity $\vw{u}$ is (cf. e.g. \cite{PoissPV11})
\begin{eqnarray} \label{e:T_particle}
T_{\alpha\beta}(M) = \mathfrak{m} && \int_{-\infty}^{+\infty}
\delta_{A(\tau)}(M)\;
g_\alpha^{\ \, \mu}(M,A(\tau)) u_\mu(\tau) \; \nonumber \\
&& \qquad \qquad \times
g_\beta^{\ \, \nu}(M,A(\tau)) u_\nu(\tau) \; \mathrm{d}\tau ,
\end{eqnarray}
where $M\in\mathscr{M}$ is the spacetime point at which $T_{\alpha\beta}$ is evaluated,
$\tau$ stands for the particle's proper time, $A(\tau)\in\mathscr{M}$ is the spacetime point
occupied by the particle at the proper time $\tau$,
$g_\alpha^{\ \, \mu}(M,A)$ is the parallel propagator from the point $A$ to the point $M$ along
the unique geodesic\footnote{Thanks to the Dirac distribution in (\ref{e:T_particle}),
only the limit $M\rightarrow A$ matters, so that we can assume that there is a unique geodesic
connecting $A$ to $M$.} connecting $A$ to $M$ (cf. Sec.~5 of \cite{PoissPV11} or Appendix~I of \cite{Carro04})
and $\delta_{A}(M)$ is the Dirac distribution on $(\mathscr{M},\w{g})$ centered at the point $A$: it is
defined by the identity
\begin{equation}
\int_{\mathscr{U}} \delta_A(M) f(M) \, \sqrt{-g} \, \mathrm{d}^4 x = f(A) ,
\end{equation}
for any four-dimensional domain $\mathscr{U}$ around $A$ and any scalar field $f:\;\mathscr{U}\rightarrow\mathbb{R}$.
In terms of a coordinate system $(x^\alpha)$ around
$A$:
\begin{equation} \label{e:delta_A_M_explicit}
\delta_A(M) = \frac{1}{\sqrt{-g}} \, \delta(x^0-z^0) \,\delta(x^1-z^1) \,\delta(x^2-z^2) \,\delta(x^3-z^3) ,
\end{equation}
where $\delta$ is the standard Dirac distribution on $\mathbb{R}$, $(x^\alpha)$ are the coordinates
of $M$, $(z^\alpha)$ those of $A$ and $g$ is the determinant of the components of
the metric tensor with respect to the coordinates $(x^\alpha)$.
\begin{figure}
\centerline{\includegraphics[height=0.20\textheight]{Fig_5_lasota.eps}}
\caption{\label{f:particle} \small
Penrose process for a particle. The dashed line $\mathcal{E}$ marks the ergosphere.}
\end{figure}
The Noether current corresponding to (\ref{e:T_particle}) is
formed via (\ref{e:def_P}):
\begin{eqnarray} \label{e:P_particle}
P_\alpha(M) &=& \mathfrak{m} \int_{-\infty}^{+\infty} \delta_{A(\tau)}(M)
\left[ - \;
g_\sigma^{\ \, \nu}(M,A(\tau)) u_\nu(\tau)
\eta^\sigma(M) \right] \nonumber \\
&& \quad \quad \quad \quad \quad \quad \times g_\alpha^{\ \, \mu}(M,A(\tau)) u_\mu(\tau) \; \mathrm{d}\tau .
\end{eqnarray}
This means that $\vw{P}$ is a distribution vector whose support
is the particle's worldline and that is collinear to the particle's 4-velocity.
Let us choose $\Sigma_1$ and $\Sigma_2$ such that $\Sigma_1$ encounters the original particle $\mathscr{P}_1$
(mass $\mathfrak{m}_1$, 4-velocity $\vw{u}_1$) at the event $A_1$,
$\Sigma_2$ encounters the escaping fragment $\mathscr{P}_2$ (mass $\mathfrak{m}_2$, 4-velocity $\vw{u}_2$) at the
event $A_2$ and the infalling fragment $\mathscr{P}_*$ (mass $\mathfrak{m}_*$, 4-velocity $\vw{u}_*$) crosses
the horizon on $\Delta\mathcal{H}$, at the event $A_H$ (cf. Fig.~\ref{f:particle}).
By plugging (\ref{e:T_particle}) into (\ref{e:def_E1}), we get
\begin{eqnarray}
E_1 &= &\mathfrak{m}_1 \int_{\Sigma_1} \int_{-\infty}^{+\infty}
\delta_{A(\tau)}(M)\; g_\mu^{\ \, \rho}(M,A(\tau)) (u_1)_\rho(\tau) \;\nonumber \\
&& \qquad \qquad \times g_\nu^{\ \, \sigma}(M,A(\tau)) (u_1)_\sigma(\tau) \;
\eta^\mu(M) \, n_1^\nu(M) \nonumber \\
&& \qquad \qquad \qquad \qquad \qquad \times \sqrt{\gamma}
\mathrm{d} x^1 \, \mathrm{d} x^2 \, \mathrm{d} x^3\, \mathrm{d}\tau .\label{e:Part_E1}
\end{eqnarray}
This formula (see Appendix~\ref{ap:Part_sp}) can be reduced to
\begin{equation} \label{e:E_1_particle}
E_1 = - \mathfrak{m}_1 \left. (\eta_\mu u_1^\mu) \right| _{A_1} = - \mathfrak{m}_1 \, \eta_\mu u_1^\mu ,
\end{equation}
where the second equality stems from the fact that $\eta_\mu u_1^\mu$ is constant along
$\mathscr{P}_1$'s worldline, since the latter is a geodesic and $\vw{\eta}$ is a Killing vector.
That $\mathscr{P}_1$'s worldline is a geodesic follows from the energy-momentum conservation
law $\nabla_\mu T^{\alpha\mu}=0$ with the form (\ref{e:T_particle}) for the energy-momentum tensor
(see Sec.~19.1 of \cite{PoissPV11} for details).
We recover in (\ref{e:E_1_particle}) the standard expression of the energy involved in textbook
discussions of the Penrose process (see
\cite{Carro04,Hartle-2003,Wald-1984} and Sec.~\ref{Section-Penrose-particles}).
Similarly, for the outgoing particle one gets
\begin{equation}
E_2 = - \mathfrak{m}_2\, \eta_\mu u_2^\mu .
\end{equation}
For the particle crossing the horizon, by
plugging (\ref{e:T_particle}) with the characteristics of the infalling fragment $\mathscr{P}_*$ into (\ref{e:E_H}), we get
\begin{eqnarray}
\Delta E_H &=& \mathfrak{m}_* \int_{\Delta\mathcal{H}} \int_{-\infty}^\infty \delta_{A(\tau)}(M) \, (u_*)_\mu(\tau) \eta^\mu(M) \;(u_*)_\nu(\tau)\nonumber \\
&&
\qquad \qquad\qquad \quad \times \ell^\nu(M) \, \sqrt{q} \, \mathrm{d} t \, \mathrm{d} y^1 \, \mathrm{d} y^2 \, \mathrm{d}\tau . \label{e:Part_EH}
\end{eqnarray}
As shown in Appendix~\ref{ap:Part_null}, this can be reduced to
\begin{equation} \label{e:E_H_particle}
\Delta E_H = - \mathfrak{m}_* \left. (\eta_\mu u_*^\mu) \right| _{A_H} = - \mathfrak{m}_* \, \eta_\mu u_*^\mu .
\end{equation}
As for $\mathscr{P}_1$ and $\mathscr{P}_2$, the independence of $\eta_\mu u_*^\mu$ from the specific point of
$\mathscr{P}_*$'s worldline where it is evaluated results from the fact that $\mathscr{P}_*$'s worldline is
a geodesic.
Finally, in the present case, we clearly have $\Delta E_{\rm ext} = 0$.
Therefore, the energy gain formula (\ref{e:energy_gain}) reduces to
$\Delta E = E_2 - E_1$ and we recover the standard Penrose process
discussed in Sec.~\ref{Section-Penrose-particles}: $E_2 > E_1$ if, and only if,
$\Delta E_H < 0$, i.e., if and only if $\eta_\mu u_*^\mu > 0$. This is possible only in the ergoregion, where the Killing vector
$\vw{\eta}$ is spacelike.
Note that $\eta_\mu u_*^\mu > 0$ implies that the term in square brackets in
(\ref{e:P_particle}) is negative, so that the Noether current
$\vw{P}_*$ of $\mathscr{P}_*$ is a timelike vector (being collinear to $\vw{u}_*$)
that is \emph{past directed}. This is in agreement with
the statement made in Sec.~\ref{s:Penrose_Noether_current} and
is illustrated in Fig.~\ref{f:negative_energy}.
\begin{figure}
\centerline{\includegraphics[height=0.3\textheight]{Fig_6_lasota.eps}}
\caption{\label{f:negative_energy} \small
Spacetime diagram showing the 4-velocity $\vw{u}_*$ and the energy-momentum
density vector $\vw{P}_*$ of a negative-energy particle $\mathscr{P}_*$ entering
the event horizon of a Kerr black hole
of angular-momentum parameter $a/m=0.9$
(see Figs.~\ref{f:horizon_vectors} and \ref{f:particle}). At the horizon, the particle is
characterized by the following coordinate velocity: $\mathrm{d} r/\mathrm{d} t = -0.32$,
$\mathrm{d}\th/\mathrm{d} t = 0$, and $\mathrm{d}\phi/\mathrm{d} t = -0.18 \omega_H$, resulting in the
4-velocity $u_*^\alpha = (2.38, -0.76,0,-0.13)$ and in the positive scalar
product $\eta_\mu u_*^\mu = 0.042$. The ``vector'' $\vw{P}_*$, which is actually
a distribution, is drawn with an arbitrary scale.
}
\end{figure}
\subsection{Scalar field (super-radiance)}
\label{s:scalar_field}
Let us consider a complex scalar field $\Phi$ ruled by the standard
Lagrangian
\begin{equation}
\mathcal{L} = - \frac{1}{2} \left[
\nabla_\mu \bar\Phi \nabla^\mu \Phi + V(|\Phi|^2) \right] ,
\end{equation}
where $\bar\Phi$ stands for $\Phi$'s complex conjugate and
$V(|\Phi|^2)$ is some potential ($V(|\Phi|^2) = \left({\mathfrak{m}}/{\hbar}\right)^2 |\Phi|^2$ for
a free field of mass $\mathfrak{m}$).
The corresponding energy-momentum tensor is
\begin{equation}
T_{\alpha\beta} = \nabla_{(\alpha} \bar\Phi \nabla_{\beta)} \Phi
- \frac{1}{2} \left[ \nabla_\mu \bar\Phi \nabla^\mu \Phi + V(|\Phi|^2) \right]
g_{\alpha\beta} .
\end{equation}
Let us plug the above expression into (\ref{e:E_H}); using adapted
coordinates $(t,r,\th,\phi)$ (cf. Sec.~\ref{s:adapted_coord}), we have
$\eta^\mu \nabla_\mu \Phi = \dert{\Phi}{t}$ and
$\ell^\mu \nabla_\mu \Phi = \dert{\Phi}{t} + \omega_H\dert{\Phi}{\phi}$.
In addition,
$g_{\mu\nu} \eta^\mu \ell^\nu = 0$, since $\vw{\eta}$ is tangent
to $\mathcal{H}$ and $\vw{\ell}$ is the normal to $\mathcal{H}$
(cf. Sec.~\ref{sub-section-Kerr-horizon}). Therefore, we get
\begin{eqnarray} \label{e:E_H_scalar}
\Delta E_H& =&
\int_{\Delta\mathcal{H}}
\left[ \der{\Phi}{t} \der{\bar\Phi}{t} + \frac{\omega_H}{2}
\left( \der{\Phi}{t} \der{\bar\Phi}{\phi}
+ \der{\bar\Phi}{t} \der{\Phi}{\phi} \right) \right]\nonumber \\
&& \qquad \qquad\qquad \qquad\qquad \times \sqrt{q} \, \mathrm{d} t \, \mathrm{d} \th \, \mathrm{d} \phi .
\end{eqnarray}
Let us consider a rotating scalar field of the form
\begin{equation}
\Phi(t,r,\th,\phi) = \Phi_0(r,\th) e^{i(\omega t - m\phi)},
\end{equation}
where $\Phi_0(r,\th)$ is a real-valued function, $\omega$ is a constant
and $m$ some integer. Then, (\ref{e:E_H_scalar}) becomes
\begin{equation}
\Delta E_H =
\int_{\Delta\mathcal{H}} \Phi_0^2 \omega(\omega-m\omega_H) \,
\sqrt{q} \, \mathrm{d} t \, \mathrm{d} \th \, \mathrm{d} \phi .
\end{equation}
In view of (\ref{e:Penrose_process}),
we deduce immediately that a necessary and sufficient condition for a Penrose
process to occur is
\begin{equation} \label{e:Penrose_process_scalar}
0 < \omega < m\omega_H .
\end{equation}
In this context, the Penrose process is called \emph{super-radiance}
(see, e.g., \cite{Wald-1984} and \cite{LaszloR13}).
{Condition (\ref{e:Penrose_process_scalar}) was obtained by
Carter \cite{Carte79} in the more general case of a
(not necessarily scalar) tensor field that is periodic in $t$
with period $2\pi/\omega$.}
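To make condition (\ref{e:Penrose_process_scalar}) concrete, here is a minimal numerical sketch (our own illustration, not taken from any simulation code; geometrized units $G=c=m=1$, in which $r_H = 1 + \sqrt{1-a_*^2}$ and $\omega_H = a_*/(2 r_H)$) that tests whether a given scalar mode is superradiant:

```python
from math import sqrt

def horizon_radius(a_star):
    """Outer-horizon radius r_H = m + sqrt(m^2 - a^2), in units G = c = m = 1."""
    return 1.0 + sqrt(1.0 - a_star**2)

def horizon_angular_velocity(a_star):
    """omega_H = a / (2 m r_H) for a Kerr black hole."""
    return a_star / (2.0 * horizon_radius(a_star))

def is_superradiant(omega, m_mode, a_star):
    """Condition 0 < omega < m * omega_H for a mode exp[i(omega t - m phi)]."""
    return 0.0 < omega < m_mode * horizon_angular_velocity(a_star)

omega_H = horizon_angular_velocity(0.9)     # a_* = 0.9
print(f"omega_H = {omega_H:.4f}")           # prints omega_H = 0.3134
print(is_superradiant(0.3, 1, 0.9))         # True: 0 < 0.3 < omega_H
print(is_superradiant(0.4, 1, 0.9))         # False: above m * omega_H
print(is_superradiant(0.4, 2, 0.9))         # True for the m = 2 mode
```

For $a_*=0.9$ one finds $\omega_H \approx 0.313$, so the $m=1$ mode with $\omega=0.3$ is superradiant, while $\omega=0.4$ is not (though it is for $m=2$).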
\subsection{Perfect fluid}
\label{sub:pf}
Let us now consider a perfect fluid of 4-velocity $\vw{u}$, proper energy density
$\varepsilon$ and pressure $p$. The corresponding energy-momentum tensor is
\begin{equation}
\label{eq:Tpf}
T_{\alpha\beta} = (\varepsilon + p) u_\alpha u_\beta + p g_{\alpha\beta} .
\end{equation}
Accordingly, and using $g_{\mu\nu} \eta^\mu \ell^\nu = 0$
as in Sec.~\ref{s:scalar_field}, formula (\ref{e:E_H}) becomes
\begin{equation} \label{e:E_H_perfect_fluid}
\Delta E_H
= \int_{\Delta\mathcal{H}} (\varepsilon + p) \, \eta_\mu u^\mu \, \ell_\nu u^\nu
\, \sqrt{q} \, \mathrm{d} t \, \mathrm{d} y^1 \, \mathrm{d} y^2 .
\end{equation}
$\vw{\ell}$ being a future-directed null vector and $\vw{u}$ a future-directed timelike vector, we have
necessarily
\begin{equation} \label{e:u_ell_neg}
\ell_\nu u^\nu < 0 .
\end{equation}
According to (\ref{e:Penrose_process}), the Penrose process takes place if, and only if, $\Delta E_H < 0$. From (\ref{e:E_H_perfect_fluid}), (\ref{e:u_ell_neg}) and the assumption $\varepsilon+p \geq 0$ (the weak energy condition), we conclude that
for a perfect fluid, a necessary condition for the Penrose process to occur is
\begin{equation} \label{e:cond_Penrose_fluid}
{ \eta_\mu u^\mu > 0 \ \ \mbox{in some part of $\Delta\mathcal{H}$} } .
\end{equation}
We may have $\eta_\mu u^\mu > 0$ in some part of $\Delta\mathcal{H}$ only because $\vw{\eta}$ is there a spacelike vector
(for $\mathcal{H}$ is inside the ergoregion).
Note that (\ref{e:u_ell_neg}) and (\ref{e:ell_eta_xi}) imply
\begin{equation}
\omega_H \xi_\mu u^\mu < - \eta_\mu u^\mu .
\end{equation}
Hence, in the parts of $\Delta\mathcal{H}$ where $\eta_\mu u^\mu > 0$, we have $\xi_\mu u^\mu < 0$.
Therefore
for a perfect fluid, a necessary condition for the Penrose process to occur is
\begin{equation}
{ \xi_\mu u^\mu < 0 \ \ \mbox{in some part of $\Delta\mathcal{H}$} } .
\end{equation}
In other words, the fluid flow must have some azimuthal component
counterrotating with respect to the black hole
in some part of $\Delta\mathcal{H}$. However, no physical process extracting black-hole rotational
energy through interaction with a perfect fluid is known.
In the special case of dust (fluid with $p=0$), the fluid lines are geodesics
and we recover from (\ref{e:cond_Penrose_fluid}) the single-particle condition
$\Delta E_H < 0$, with $\Delta E_H$ given by (\ref{e:E_H_particle}).
\section{Electromagnetic fields}
\label{sect:em}
\subsection{General electromagnetic field}
Let us consider some electromagnetic field, described by the field 2-form $\w{F}$.
For the moment, we will deal with the most general case, i.e., we do not assume that $\w{F}$ is
stationary or axisymmetric. Of course, this is possible only if $\w{F}$ is a passive field,
i.e. one that does not contribute as a source to the Einstein equation, so that the spacetime metric remains stationary and axisymmetric.
The electromagnetic energy-momentum tensor is given by the standard formula:
\begin{equation} \label{e:T_EM}
T_{\alpha\beta} = \frac{1}{\mu_0} \left( F_{\mu\alpha} F^\mu_{\ \, \beta}
- \frac{1}{4} F_{\mu\nu} F^{\mu\nu} \; g_{\alpha\beta} \right) .
\end{equation}
Accordingly, the integrand in formula (\ref{e:E_H}) for $\Delta E_H$ is
\[
\w{T}(\vw{\eta},\vw{\ell}) = \frac{1}{\mu_0} \left( F_{\mu\rho} \eta^\rho F^\mu_{\ \, \sigma}
\ell^\sigma
- \frac{1}{4} F_{\mu\nu} F^{\mu\nu} \; \vw{\eta}\cdot\vw{\ell} \right) .
\]
Now, since $\vw{\eta}$ is tangent to $\mathcal{H}$ and $\vw{\ell}$ normal to $\mathcal{H}$,
one has $\vw{\eta}\cdot\vw{\ell}= 0$. There remains then
\begin{equation} \label{e:Tem_eta_ell_FF}
\mu_0 \w{T}(\vw{\eta},\vw{\ell}) = F_{\mu\rho} \eta^\rho F^\mu_{\ \, \sigma}
\ell^\sigma .
\end{equation}
Let us introduce on $\mathcal{H}$ the ``pseudoelectric field'' 1-form (\cite{Carte73,Carte79,Damou79,Damou82})
\begin{equation} \label{e:def_E}
{ \w{E} := \w{F}(., \vw{\ell}) }.
\end{equation}
If $\vw{\ell}$ were a unit timelike vector, $\w{E}$ would be a genuine electric field, namely the electric field measured by the observer whose 4-velocity is $\vw{\ell}$. But in the present case,
$\vw{\ell}$ is a null vector, so that such a physical interpretation does not hold.
$\w{E}$ is called a \emph{corotating electric field} in \cite{Carte73,Carte79}
because $\vw{\ell}$ is the corotating Killing vector on $\mathcal{H}$.
Note that, thanks to the antisymmetry of $\w{F}$,\footnote{In this section, we are using index-free notations. In particular, the action of a 1-form on a vector is denoted by brackets, $\langle \w{E} , \vw{\ell} \rangle = E_\mu \ell^\mu$, and the scalar product of two vectors is denoted with a dot,
$\vw{u}\cdot\vw{v} = g_{\mu\nu} u^\mu v^\nu = u_\nu v^\nu$.}
\begin{equation} \label{e:E_ell_zero}
\langle \w{E} , \vw{\ell} \rangle = 0 .
\end{equation}
This implies that the vector $\vw{E}$ deduced from the 1-form $\w{E}$ by
metric duality (i.e. the vector of components
$E^\alpha = g^{\alpha\mu} E_\mu = F^\alpha_{\ \, \mu}\ell^\mu$)
is tangent to $\mathcal{H}$. Equation~(\ref{e:Tem_eta_ell_FF}) can be written as
\begin{equation} \label{e:Tem_F_E_eta}
{ \mu_0 \w{T}(\vw{\eta},\vw{\ell}) = \w{F}(\vw{E},\vw{\eta}) }.
\end{equation}
Thanks to (\ref{e:ell_eta_xi}) and (\ref{e:def_E}), this expression can be recast as
\begin{eqnarray}
\mu_0 \w{T}(\vw{\eta},\vw{\ell}) =\w{F}(\vw{E},\vw{\ell} - \omega_H \,\vw{\xi})
& =& \w{F}(\vw{E},\vw{\ell}) - \omega_H \w{F}(\vw{E},\vw{\xi})\nonumber \\
& =& \langle \w{E},\vw{E} \rangle - \omega_H \w{F}(\vw{E},\vw{\xi}) ,\nonumber
\end{eqnarray}
i.e.
\begin{equation}
\label{e:felectric}
{ \mu_0 \w{T}(\vw{\eta},\vw{\ell}) = \vw{E}\cdot\vw{E} - \omega_H \w{F}(\vw{E},\vw{\xi})
} .
\end{equation}
Given expression (\ref{e:E_H}) for $\Delta E_H$, we conclude that a necessary condition for the
Penrose process to occur is
\begin{equation} \label{e:nec_cond_EM}
{ \omega_H \w{F}(\vw{E},\vw{\xi}) > \vw{E}\cdot\vw{E} \ \ \mbox{in some part of $\Delta\mathcal{H}$} } .
\end{equation}
Note that since $\vw{E}$ is tangent to $\mathcal{H}$ [cf. (\ref{e:E_ell_zero})] and
$\mathcal{H}$ is a null hypersurface, $\vw{E}$ is either a null vector or a spacelike one,
so that in (\ref{e:nec_cond_EM}) one has always
\begin{equation}
\label{e:Egt0}
{ \vw{E}\cdot\vw{E} \geq 0 } .
\end{equation}
Equation (\ref{e:nec_cond_EM}) is the most general condition on any electromagnetic field configuration allowing
black-hole energy extraction through a Penrose process. Obviously, for $\omega_H=0$ (a nonrotating black hole) there is no energy extraction.
\subsection{Stationary and axisymmetric electromagnetic field}
In this section, we assume that the electromagnetic field obeys the spacetime symmetries,
which is expressed by
\begin{equation}
\w{\mathcal{L}}_{\vw{\eta}} \w{F} = 0 \qquad\mbox{and}\qquad \w{\mathcal{L}}_{\vw{\xi}} \w{F} = 0 ,
\end{equation}
where $\w{\mathcal{L}}_{\vw{v}}$ stands for the Lie derivative along the vector field $\vw{v}$.
Then it can be shown (see e.g. \cite{GourgMUE11} for details) that
$\w{F}$ is entirely determined by three scalar fields $\Phi$, $\Psi$, and $I$ such that
\begin{eqnarray}
& & \w{F}(.,\vw{\eta}) = \bm{\mathrm{d}}\Phi \label{e:def_Phi} \\
& & \w{F}(.,\vw{\xi}) = \bm{\mathrm{d}}\Psi \label{e:def_Psi} \\
& & {}^\star\w{F}(\vw{\eta},\vw{\xi}) = I , \label{e:def_I}
\end{eqnarray}
where $\bm{\mathrm{d}}$ is the exterior derivative operator (reducing to the gradient for a scalar field
such as $\Phi$ or $\Psi$) and ${}^\star\w{F}$ stands for the Hodge dual of $\w{F}$.
Note that, being defined solely from $\w{F}$ and the Killing fields $\vw{\eta}$ and $\vw{\xi}$,
$\Phi$, $\Psi$, and $I$ are gauge-independent quantities. Introducing an electromagnetic potential
1-form $\w{A}$ such that $\w{F} = \bm{\mathrm{d}}\w{A}$, one may use the
standard electromagnetic gauge freedom to choose $\w{A}$ so that
\begin{equation}
\Phi = \langle \w{A}, \vw{\eta} \rangle = A_t
\qquad\mbox{and}\qquad
\Psi = \langle \w{A}, \vw{\xi} \rangle = A_\varphi .
\end{equation}
In addition to (\ref{e:def_Phi})-(\ref{e:def_I}), one has (see e.g. \cite{GourgMUE11})
$\w{F}(\vw{\eta},\vw{\xi}) = 0$
and
\begin{equation} \label{Phi_Psi_stax}
\w{\mathcal{L}}_{\vw{\eta}} \Phi = \w{\mathcal{L}}_{\vw{\xi}} \Phi = 0
\qquad\mbox{and}\qquad
\w{\mathcal{L}}_{\vw{\eta}} \Psi = \w{\mathcal{L}}_{\vw{\xi}} \Psi = 0 ,
\end{equation}
which means that the scalar fields $\Phi$ and $\Psi$ obey the two spacetime symmetries.
From the definition (\ref{e:def_E}) and expression (\ref{e:ell_eta_xi}) of $\vw{\ell}$,
the corotating pseudoelectric field $\w{E}$ is
\[
\w{E} = \w{F}(., \vw{\ell}) = \w{F}(., \vw{\eta}) + \omega_H \w{F} (., \vw{\xi})
= \bm{\mathrm{d}}\Phi + \omega_H \bm{\mathrm{d}}\Psi ,
\]
where the last equality follows from (\ref{e:def_Phi}) and (\ref{e:def_Psi}).
Since $\omega_H$ is constant, we conclude that the 1-form $\w{E}$ is a pure gradient:
\begin{equation} \label{e:E_dPhi_omH_Psi}
{\w{E} = \bm{\mathrm{d}}(\Phi + \omega_H \Psi) } .
\end{equation}
\emph{Remark:} If the electromagnetic field is not passive, i.e. if it contributes
significantly to the spacetime metric via the Einstein equation,
then $\w{T}(\vw{\ell},\vw{\ell})$ must vanish in order for the black hole to be in equilibrium
(otherwise it would generate some horizon expansion, via the Raychaudhuri equation;
see, e.g., \cite{Carte73}).
Since by (\ref{e:T_EM}), $\w{T}(\vw{\ell},\vw{\ell}) = \mu_0^{-1} \vw{E}\cdot\vw{E}$,
this implies that $\vw{E}$ is a null vector. Being tangent to $\mathcal{H}$, the only possibility
is to have $\vw{E}$ collinear to $\vw{\ell}$: $\vw{E} = f \vw{\ell}$. Then for any
vector $\vw{v}$ tangent to $\mathcal{H}$, one has $\vw{v}\cdot\vw{E} = 0$.
In view of (\ref{e:E_dPhi_omH_Psi}), we get the remarkable result that
{\cite{Carte73}}
\begin{equation}
\label{e:const_pot}
\Phi + \omega_H \Psi \mbox{\ is constant over\ }\mathcal{H}.
\end{equation}
Returning to the case of passive fields, we notice that, thanks to (\ref{e:def_Phi}), the $\Delta E_H$ integrand (\ref{e:Tem_F_E_eta}) becomes
\begin{equation}
\label{e:edphi}
\mu_0 \w{T}(\vw{\eta},\vw{\ell}) = \vw{E} \cdot\vw{\nabla} \Phi .
\end{equation}
In a similar way, from (\ref{e:def_Psi}) one deduces that the $\Delta J_H$ integrand $\mu_0 \w{T}(\vw{\xi},\vw{\ell}) = \w{F}(\vw{E},\vw{\xi})$ takes the form
\begin{equation}
\label{e:edpsi}
\mu_0 \w{T}(\vw{\xi},\vw{\ell}) = \vw{E} \cdot\vw{\nabla} \Psi .
\end{equation}
In view of (\ref{e:E_dPhi_omH_Psi}), we get
\begin{equation} \label{e:T_eta_ell_EM_stax}
{ \mu_0 \w{T}(\vw{\eta},\vw{\ell}) = \vw{\nabla} \Phi \cdot \vw{\nabla} (\Phi + \omega_H \Psi) } .
\end{equation}
\subsection{Force-free stationary and axisymmetric field (Blandford-Znajek)}
Let us assume that the electromagnetic field is force free, in addition to
being stationary and axisymmetric:
\begin{equation}
\w{F}(\vw{j}, .) = 0 ,
\end{equation}
where $\vw{j}$ is the electric 4-current.
In particular, $\w{F}(\vw{j},\vw{\eta}) = 0$ and $\w{F}(\vw{j},\vw{\xi}) = 0$.
From (\ref{e:def_Phi}) and (\ref{e:def_Psi}), it follows immediately that
\begin{equation} \label{e:jdPhi_zero}
\vw{j} \cdot\vw{\nabla} \Phi = 0
\qquad\mbox{and}\qquad
\vw{j} \cdot\vw{\nabla} \Psi = 0 .
\end{equation}
Taking into account that $\Phi$ and $\Psi$ are stationary and axisymmetric
[cf. (\ref{Phi_Psi_stax})], we may rewrite (\ref{e:jdPhi_zero}) in a coordinate system
$(t,r,\theta,\varphi)$ adapted to stationarity and axisymmetry as
\[
j^r \der{\Phi}{r} + j^\theta \der{\Phi}{\theta} = 0
\qquad\mbox{and}\qquad
j^r \der{\Psi}{r} + j^\theta \der{\Psi}{\theta} = 0 .
\]
Hence both gradients $\bm{\mathrm{d}}\Phi$ and $\bm{\mathrm{d}}\Psi$ are orthogonal to the poloidal current $(j^r, j^\theta)$; we deduce that, generically, they are proportional, i.e. there exists a function $\omega = \omega(\Psi)$ such that
\begin{equation}
\label{e:dephidepsi}
\bm{\mathrm{d}} \Phi = -\omega(\Psi) \bm{\mathrm{d}}\Psi .
\end{equation}
Equation (\ref{e:T_eta_ell_EM_stax}) becomes then
\begin{equation} \label{e:T_eta_ell_BZ}
{ \mu_0 \w{T}(\vw{\eta},\vw{\ell}) = \omega(\Psi) \left(
\omega(\Psi) - \omega_H \right) \, \vw{\nabla}\Psi \cdot\vw{\nabla} \Psi } .
\end{equation}
Notice also that from (\ref{e:edphi}), (\ref{e:edpsi}) and (\ref{e:dephidepsi}) it follows that for an axisymmetric, stationary and force-free field
\begin{equation}
\label{e:eomegj}
\Delta E_H=\omega(\Psi) \Delta J_H.
\end{equation}
Now, we have
\[
\vw{\ell}\cdot \vw{\nabla}\Psi = \vw{\eta} \cdot \vw{\nabla}\Psi
+ \omega_H \vw{\xi} \cdot \vw{\nabla}\Psi
= \underbrace{\w{\mathcal{L}}_{\vw{\eta}} \Psi}_{0}
+ \omega_H \underbrace{\w{\mathcal{L}}_{\vw{\xi}} \Psi}_{0} = 0 .
\]
This means that the vector $\vw{\nabla}\Psi$ is tangent to $\mathcal{H}$.
Since the latter is a null hypersurface, it follows that $\vw{\nabla}\Psi$ is either null or spacelike.
Therefore, on $\mathcal{H}$,
\begin{equation}
\vw{\nabla}\Psi \cdot\vw{\nabla} \Psi \geq 0 .
\end{equation}
Accordingly (\ref{e:T_eta_ell_BZ}) yields
\[
\w{T}(\vw{\eta},\vw{\ell}) < 0 \iff
\left\{ \begin{array}{l}
\omega(\Psi) \left(
\omega(\Psi) - \omega_H \right) < 0\\
\vw{\nabla}\Psi \cdot\vw{\nabla} \Psi \not= 0 .
\end{array}\right.
\]
i.e.
\begin{equation}
{ \w{T}(\vw{\eta},\vw{\ell}) < 0 \iff
\left\{ \begin{array}{l}
0 < \omega(\Psi) < \omega_H \\
\vw{\nabla}\Psi \cdot\vw{\nabla} \Psi \not= 0
\end{array}\right.} .
\end{equation}
We recover the result (4.6) of Blandford and Znajek's article \cite{Blandford-1977}.
In view of (\ref{e:E_H}) and (\ref{e:Penrose_process}), we may conclude the following:
\begin{quote}
For a stationary and axisymmetric force-free electromagnetic field, a necessary condition for the
Penrose process to occur is
\begin{equation}
\label{e:bzomega}
{ 0 < \omega(\Psi) < \omega_H\ \ \mbox{in some part of $\Delta\mathcal{H}$} } .
\end{equation}
\end{quote}
In particular, for a nonrotating black hole ($\omega_H = 0$), no Penrose process can occur.
The condition (\ref{e:bzomega}) can be compared to the condition
(\ref{e:Penrose_process_scalar}) for a scalar field.
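The sign logic of (\ref{e:T_eta_ell_BZ}) and (\ref{e:bzomega}) can be illustrated with a small script (ours, purely illustrative; units $G=c=m=1$): the horizon energy-flux density $\mu_0\w{T}(\vw{\eta},\vw{\ell}) = \omega(\omega-\omega_H)\,\vw{\nabla}\Psi\cdot\vw{\nabla}\Psi$ is negative exactly when the field-line angular velocity lies strictly between $0$ and $\omega_H$:

```python
from math import sqrt

def omega_H_kerr(a_star):
    """Horizon angular velocity a / (2 m r_H), units G = c = m = 1."""
    return a_star / (2.0 * (1.0 + sqrt(1.0 - a_star**2)))

def horizon_flux_density(omega_F, omega_H, grad_psi_sq):
    """mu_0 T(eta, ell) = omega_F (omega_F - omega_H) |grad Psi|^2 on H."""
    if grad_psi_sq < 0.0:
        raise ValueError("grad Psi is tangent to H, hence null or spacelike")
    return omega_F * (omega_F - omega_H) * grad_psi_sq

wH = omega_H_kerr(0.99)                                # ~0.434 for a_* = 0.99
# Energy extraction (negative flux) iff 0 < omega_F < omega_H:
print(horizon_flux_density(0.5 * wH, wH, 1.0) < 0.0)   # True
print(horizon_flux_density(1.5 * wH, wH, 1.0) < 0.0)   # False
print(horizon_flux_density(-0.1, wH, 1.0) < 0.0)       # False
```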
\section{Simulations of electromagnetic extraction of black-hole rotational energy}
\label{sect:mad}
Until recently, the relevance of the Blandford-Znajek process to observed
high-energy phenomena such as relativistic jets was hotly debated,
and the efficiency of this mechanism was put in doubt
(see, e.g., \cite{Ghosh-1997,Livio-1999}). Providing jet-production efficiencies of less than $\sim 20\%$, general relativistic magnetohydrodynamic (GRMHD) simulations were not of much help in ending the controversy. Only recently did a new physical setup of GRMHD simulations (\cite{Tchekhovskoy-2011,McKinney-2012}) produce the first clear evidence of net energy extraction by magnetized accretion onto a spinning black hole.
These simulations were carried out with the general relativistic MHD code
HARM \citep{Gammie03}, including recent improvements \citep{mb09,Tchekhovskoy-2011}.
\subsection{The framework}
\label{sect:framework}
The BZ efficiency can be defined as the BZ power normalized by $\dot Mc^2$:
\begin{equation}
\label{e:BZ_eff}
\eta_{\rm BZ}=\frac{\left[P_{\rm BZ}\right]_t}{\left[\dot
M\right]_tc^2}=\frac{\kappa}{4\pi c}\left[\phi^2_{BH}\right]_t
\left(\frac{\omega_H r_g}{c}\right)^2 f(\omega_H)
\end{equation}
where $\dot M$ is the accretion rate, $\left[\ldots\right]_t$ designates the time average, $\kappa\approx0.05$ depends weakly on the magnetic field geometry, $\phi_{BH}^2={\Phi}^2_{BH}/(\dot M r_g^2 c)$, with $\Phi_{BH}$ the magnetic flux through the black-hole surface, $f(\omega_H) \approx 0.77$ for $a_*=1$, where $a_*=J/m^2$ \citep{Tchekhovskoy-2012}, and $r_g=Gm/c^2$ is the black-hole gravitational radius.
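As a back-of-the-envelope check (our own sketch, in units $G=c=m=1$ so that $r_g=1$; the values $\kappa\approx 0.05$, $\phi_{BH}\sim 40$ and $f\approx 0.77$ are those quoted in the text), evaluating (\ref{e:BZ_eff}) indeed gives an efficiency of order $100\%$:

```python
from math import pi, sqrt

def omega_H_kerr(a_star):
    """Horizon angular velocity a / (2 m r_H), units G = c = m = 1 (so r_g = 1)."""
    return a_star / (2.0 * (1.0 + sqrt(1.0 - a_star**2)))

def eta_BZ(a_star, phi_BH=40.0, kappa=0.05, f=0.77):
    """BZ efficiency: kappa/(4 pi) * phi_BH^2 * (omega_H r_g / c)^2 * f."""
    return kappa / (4.0 * pi) * phi_BH**2 * omega_H_kerr(a_star)**2 * f

print(f"eta_BZ(a_* = 0.99) ~ {eta_BZ(0.99):.2f}")  # ~0.92, i.e., of order 100%
```

The value $f\approx 0.77$ strictly holds for $a_*=1$; with slightly larger $f$ and $\phi_{BH}$, the efficiency exceeds $100\%$, as stated below.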
The efficiency $\eta_{\rm BZ}$ depends on the spin and on the magnetic flux
on the black hole. The spin is limited by $a_*< 1.0$ ($\omega_H <
c/r_S$, where $r_S=2Gm/c^2$); the magnetic flux is limited by two
factors: (1) how much of it can be pushed onto the black hole, and (2) how
much of it can be accumulated by diffusion through the accretion
flow. In an MHD turbulent disk, accumulation of a dynamically important
magnetic field is possible only if the disk is not geometrically thin,
i.e. only if $h/r \sim 1$ \citep{Lubow-94}. \citet{Tchekhovskoy-2011}
considered ``slim'' disks ($h/r\sim0.3$) in which initially poloidal
magnetic fields are accumulated at the black hole until they obstruct
the accretion and lead to the formation of a so-called magnetically
arrested disk (\cite{Igu2003,Narayan-2003}). In such a configuration, $\phi_{BH} \sim 40$ for $a_*=0.99$, leading to $\eta_{\rm BZ} > 100\%$, i.e., to {\sl net} energy extraction from
a rotating black hole.
This result, as well as subsequent simulations of various MAD\footnote{These were also called magnetically choked accretion flows by \citet{McKinney-2012}.} configurations \citep[][]{McKinney-2012}, leaves little doubt that the Blandford-Znajek mechanism can play a fundamental role in the launching of (at least some) relativistic jets from the vicinity of black-hole surfaces. This conclusion is supported
by observational evidence of the role of spin {\sl and} accumulated magnetic flux in the launching of relativistic jets, both in microquasars and in active galactic nuclei (see, e.g., \cite{Narayan-2012,MNS13,NMT13,siksta13,sb13}).
In the previous section we obtained several conditions for the occurrence of a Penrose process in the presence of electromagnetic fields. All these criteria follow from the fundamental requirement $\Delta E_H <0$. The most
general criterion applies to any electromagnetic field configuration: from the definition (\ref{e:E_H}) and the general condition (\ref{e:Penrose_process}) we deduced a specific (necessary) condition (\ref{e:nec_cond_EM}) for the electromagnetic fields on the horizon.
We then showed that in the case of stationary and axisymmetric force-free fields the condition (\ref{e:Penrose_process})
is equivalent to the \citet{Blandford-1977} condition on the angular velocity of the magnetic field lines. In this section we will apply
these conditions to the results of the GRMHD simulations of magnetized jets we have discussed above. The aim of this exercise is twofold. First, using rigorous general-relativistic criteria, we will confirm that the MAD BZ mechanism is indeed a Penrose process, as surmised by \citet{Tchekhovskoy-2011}. Second, our Penrose-process conditions can be used as a diagnostic tool to test the physical and mathematical consistency of numerical calculations reputed to represent the Blandford-Znajek/Penrose process.
In dealing with results of numerical simulations,
we will adopt the 3+1 Kerr coordinates $(t,r,\th,\phi)$ described in Appendix~\ref{ap:Kerr},
which are adapted coordinates in the sense defined in Sec.~\ref{s:adapted_coord}.
The energy captured by the black hole over $\Delta\mathcal{H}$ is given by
(\ref{e:E_H_adapted}). Since for the 3+1 Kerr coordinates,
$\sqrt{-g} = (r^2 + a^2 \cos^2\th)\sin\th$ [cf. (\ref{e:det_g_Kerr})], we get
\begin{equation} \label{e:ehindex}
\Delta E_H = \int_{\Delta\mathcal{H}} {\dot e}_H \,
(r_H^2 + a^2 \cos^2\th)\sin\th \, \mathrm{d} t \, \mathrm{d} \theta
\, \mathrm{d} \phi ,
\end{equation}
where we have defined
\begin{equation} \label{e:def_dot_e_H}
{\dot e}_H := - \left. P^r \right| _{\mathcal{H}}
= \left. T^r_{\ \, t} \right| _{\mathcal{H}} .
\end{equation}
As a check of (\ref{e:ehindex}), we may recover it from the last integral
in Eq. (\ref{e:E_H}), noticing that $\eta^\mu = (1,0,0,0)$,
$\ell_r = (r_H^2 + a^2 \cos^2\th) / (2m r_H)$,
and $\sqrt{q} = 2 m r_H \sin\theta$ [cf. (\ref{e:sqrt_det_q}) in Appendix~\ref{ap:Kerr}].
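These horizon quantities can be cross-checked numerically (a sketch of ours, units $G=c=m=1$): the Kerr identity $r_H^2 + a^2 = 2 m r_H$ is what makes $\ell_r \sqrt{q}$ reduce to the measure $(r_H^2 + a^2\cos^2\th)\sin\th$ of (\ref{e:ehindex}), and integrating $\sqrt{q} = 2 m r_H \sin\th$ over the horizon must return the Kerr horizon area $\mathcal{A} = 4\pi(r_H^2+a^2) = 8\pi m r_H$:

```python
from math import sqrt, sin, pi

def r_H(a_star):
    """Outer-horizon radius 1 + sqrt(1 - a^2), units G = c = m = 1."""
    return 1.0 + sqrt(1.0 - a_star**2)

def horizon_area(a_star, n=2000):
    """Midpoint-rule integral of sqrt(q) = 2 m r_H sin(theta) over theta, phi."""
    h = pi / n
    integral = sum(2.0 * r_H(a_star) * sin((k + 0.5) * h) for k in range(n))
    return 2.0 * pi * h * integral

a = 0.9
# Kerr identity used in the simplification of ell_r * sqrt(q):
assert abs(r_H(a)**2 + a**2 - 2.0 * r_H(a)) < 1e-12
# Integrated area vs. the closed-form Kerr result 8 pi m r_H:
exact = 8.0 * pi * r_H(a)
assert abs(horizon_area(a) - exact) / exact < 1e-5
print("horizon-measure consistency checks pass")
```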
A formula analogous to (\ref{e:ehindex}), with
${\dot e}_H$ replaced by $-T^r_{\ \, \phi}$,
gives $\Delta J_H$ [cf. (\ref{e:J_H_adapted})]; accordingly, we define
\begin{equation} \label{e:def_dot_j_H}
{\dot \jmath}_H := - \left. M^r \right| _{\mathcal{H}}
= - \left. T^r_{\ \, \phi} \right| _{\mathcal{H}} .
\end{equation}
Since, as discussed in Sec.\,\ref{subsect: applications}, in numerical simulations one assumes stationarity, and $\Sigma_2$ is deduced from $\Sigma_1$ by time translation, one
must have $E_2=E_1$ (see Fig. \ref{f:Econs}).
Therefore, to test the Penrose-process conditions (\ref{e:Penrose_process}) and (\ref{e:omegJ_EH}) and show the details of the BZ mechanism, we found it convenient to use the energy and angular-momentum \emph{flux densities}
$\dot e_H(t,\theta,\phi)$ and $\dot {\jmath}_H(t,\theta,\phi)$ defined by (\ref{e:def_dot_e_H}) and (\ref{e:def_dot_j_H}),
and to plot their ($t$- and $\phi$-averaged) longitudinal distribution on the $t$-constant 2-surface ${\mathscr{S}_t}$ (the black hole's surface; see Sec.~\ref{sub-section-Kerr-horizon}) on $\mathcal{H}$.
In the MAD simulations the energy-momentum tensor is the sum of the perfect fluid (\ref{eq:Tpf}) and the electromagnetic (\ref{e:T_EM}) tensors:
\[
T_{\mu\nu}= T^{\rm( MA)}_{\mu\nu} + T^{\rm (EM)}_{\mu\nu}.
\]
Consequently we define $\dot e_{\rm MA}:=T^{{\rm( MA)}\,r}_{\qquad t}$ and $\dot {\jmath}_{\rm MA}:=- T^{{\rm( MA)}\,r}_{\qquad \phi}$; $\dot e_{\rm EM}$ and $\dot {\jmath}_{\rm EM}$ are defined in an analogous way through the electromagnetic energy-momentum tensor.
In the simulations of force-free fields, $\dot e_{\rm MA}=0$.
\begin{figure}
\centerline{
\includegraphics[width=0.4\textwidth]{Fig_7_lasota.eps}}
\caption{\small
Values of $\omega_H F_{\mu\nu} E^\mu \xi^\nu - E_\mu E^\mu$ and $E_\mu E^\mu$ plotted as functions of $\theta$ for a stationary, axisymmetric and force-free field with $a_*=0.99$. The necessary condition (\ref{e:penrose_nc}) for the occurrence of a Penrose process is satisfied over all ${\mathscr{S}_t}$ (except the poles).}
\label{f:Electric}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=0.4\textwidth]{Fig_8_lasota.eps}}
\caption{\small Comparison of $-\omega_H F_{\mu\nu} E^\mu \xi^\nu + E_\mu E^\mu$ with the integrands of (\ref{e:ehindex}) for the same field configuration and black-hole spin as in Fig.~\ref{f:Electric}. }
\label{f:condition}
\end{figure}
The pseudoelectric field (\ref{e:def_E}) is $E_\alpha = F_{\alpha\mu} \ell^\mu$.
Therefore in the index notation, the general necessary condition (\ref{e:nec_cond_EM}) for the Penrose process to occur takes the form
\begin{equation}
\label{e:penrose_nc}
\left. \left( \omega_H F_{\mu\nu} E^\mu \xi^\nu - E_\mu E^\mu \right) \right|_{\mathcal{H}} > 0 .
\end{equation}
In the case of MAD simulations, which are intrinsically time-variable, we run the simulations long enough to achieve a quasisteady state, in which all quantities fluctuate about their mean values; we therefore use the time average of the left-hand side
in (\ref{e:penrose_nc}).
\subsection{Force-free stationary electromagnetic field}
\label{subsect:ff}
As a warm-up, we present the results of simulations of black-hole rotational energy extraction by a
force-free electromagnetic field. As an illustration, we consider the simple case of a paraboloidal magnetic field for an $a_*=0.99$ black hole. The field configuration corresponds to the $\nu=1$ case of \citet{Tchekhovskoy-2010}, where additional
information about the setup of the problem can be found. We have chosen a paraboloidal field in preference to a monopole because of the similarity of results with those of MAD simulations discussed later.
\begin{figure}
\centerline{
\includegraphics[width=0.4\textwidth]{Fig_9_lasota.eps}}
\caption{\small
$\omega_F/\omega_H$ plotted as a function of $\theta$ for a stationary, axisymmetric force-free paraboloidal magnetic field; $a_*=0.99$. The condition (\ref{e:bzomega}) for the occurrence of a Penrose process is satisfied over the entire black-hole 2-surface ${\mathscr{S}_t}$.}
\label{f:omegaf}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=0.4\textwidth]{Fig_10_lasota.eps}}
\caption{\small Values of the energy and angular-momentum density fluxes on the black-hole surface as functions of $\theta$ for a force-free field and $a_*=0.99$. In this case, $\dot e = \dot e_{\rm EM}$. $\dot e$ is everywhere negative on ${\mathscr{S}_t}$, in agreement with the Penrose-process condition (\ref{e:Penrose_process}); the same is true, by construction, of $\dot {\jmath}$, and (\ref{e:Penrose_am}) is obviously satisfied.
}
\label{f:eminusomegal}
\end{figure}
First, in Fig.~\ref{f:Electric} we present the results of testing the general condition (\ref{e:penrose_nc}). It is satisfied on the whole of the black-hole surface ${\mathscr{S}_t}$. Condition (\ref{e:Egt0}) is also satisfied, which confirms that the simulations correctly reproduce the spacetime structure near and at the horizon. Since condition (\ref{e:penrose_nc}) follows from the requirement of negative energy on the horizon, we checked the consistency of the numerical scheme by comparing the expression $- \omega_H F_{\mu\nu} E^\mu \xi^\nu + E_\mu E^\mu$ with two forms of the integrand in (\ref{e:ehindex}). As expected, the values of these expressions are identical (see Fig. \ref{f:condition}).
The force-free BZ condition (\ref{e:bzomega}) is satisfied everywhere on the black hole's surface (Fig. \ref{f:omegaf}). Since in a force-free field $\dot e_H=\omega_H \dot {\jmath}_H$
[cf. (\ref{e:eomegj})] the Penrose-process condition (\ref{e:Penrose_am}) follows directly from $\Delta E_H <0$ [Eq.~(\ref{e:Penrose_process})]; see Fig. \ref{f:eminusomegal}.
Since it satisfies the required conditions on the horizon, the BZ
mechanism described by numerical simulations of the interaction of a
force-free field with a spinning black hole is a Penrose process.
\subsection{Magnetically arrested disks}
\label{subsect:MAD}
Before discussing the results of GRMHD MAD simulations in the context of the BZ/Penrose mechanism, we first have to present the underlying assumptions in more detail.
The simulations are performed in a ``box'' of {\sl finite} size delimited by $\Delta \mathcal{H}$ and $\Sigma_{\rm ext}$ in space and $\Sigma_1$ and
$\Sigma_2$ in time.
It is supposed that $\Sigma_{\rm ext}$ is located at some reasonably large radius ($\gtrsim 30\, r_g$), which is far from the horizon but still well inside the converged volume of the simulation.
One also assumes that the times $t_1$ and $t_2$ corresponding, respectively, to $\Sigma_1$ and $\Sigma_2$ are sufficiently far apart so that time averages are well defined and the system is in a steady state during this time.
In a steady state $E_2=E_1$; i.e., the energy contained inside the volume defined by the boundaries $\Delta \mathcal{H}$ and $\Sigma_{\rm ext}$ is independent of time.
The simulations show that $\Delta E_{\rm ext}>0$; i.e., there is a net flow of energy out of the system. From energy conservation (\ref{e:ener_cons}) one should therefore have
$\Delta E_H <0$ on some part of $\Delta \mathcal{H}$.
Below we will show that stationary MAD models of energy extraction from a spinning black hole satisfy this condition and are an electromagnetic realization of a Penrose process.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig_11_lasota.eps}
\caption{ \small
Same as in Fig. \ref{f:Electric} but for a MAD simulation with $a_*=0.99$. Here the time- and $\phi$-averaged quantities are used: $\omega_H \langle F_{\mu\nu} E^\mu\,\rangle \xi^\nu - \langle E_\mu E^\mu\, \rangle$ and $\langle E_\mu E^\mu \rangle$. The necessary condition (\ref{e:penrose_nc}) for the occurrence of a Penrose process is satisfied over all ${\mathscr{S}_t}$.}
\label{f:Emad}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig_12_lasota.eps}
\caption{\small Comparison of $-\omega_H \langle F_{\mu\nu} E^\mu \rangle \xi^\nu + \langle E_\mu E^\mu\rangle$ with the integrands of (\ref{e:ehindex}) for the same field configuration and black-hole spin as in Fig. \ref{f:Emad}.}
\label{f:madcondition}
\end{figure}
We will use the results of the model A0.99N100 of \citet{McKinney-2012}. In this model the initial magnetic field is poloidal, $a_*=0.99$, and the disk is moderately thick: the half-thickness $h$ satisfies $h/r \sim 0.3$ at $R_{\rm ext}=30 r_g$ and $h/r \gtrsim 0.1$ at the black-hole surface.
We will first examine if the MAD simulations satisfy the Penrose-process conditions (\ref{e:penrose_nc}), (\ref{e:Penrose_process}) and (\ref{e:Penrose_am}). As for the force-free fields, we start with checking condition (\ref{e:penrose_nc}) for the electromagnetic fields on the black-hole surface. As shown in Fig. \ref{f:Emad}
$\omega_H \langle F_{\mu\nu} E^\mu\,\rangle \xi^\nu - \langle E_\mu E^\mu\, \rangle >0 $ everywhere on the black-hole surface, which implies that the electromagnetic energy is negative everywhere on $\Delta\mathcal{H}$. Indeed, as shown in Fig. \ref{f:madcondition} the electromagnetic energy density $T^{EM}_{\mu\nu}\eta^\mu\ell^\nu$ is everywhere negative on the black-hole surface.
\begin{figure}
\centerline{
\includegraphics[height=0.35\textwidth]{Fig_13_lasota.eps}}
\caption{\small Same as in Fig. \ref{f:eminusomegal} for a MAD configuration. The black-hole spin is $a_*=0.99$. The electromagnetic energy density flux is everywhere negative on $\mathscr{S}_t$. The total energy density $\dot e $ is negative everywhere except in the equatorial belt where matter accretion dominates the energy balance. The condition $\dot \jmath < 0$ is satisfied everywhere on $\mathscr{S}_t$ (see the text for details).
}
\label{f:eminusomegal_mad}
\end{figure}
In the GRMHD MAD simulations, accretion of matter plays an essential role in accumulating magnetic field lines on the black hole and, contrary to the force-free case, the energy-momentum of matter is not negligible. In Fig. \ref{f:eminusomegal_mad}, in addition to the electromagnetic and matter energy density fluxes, we plot their sum, representing the total energy flux. One can see that $\dot e$ is negative on the black-hole surface $\mathscr{S}_t$ except near the equator, where energy absorption is dominated by matter accretion. Therefore the simulations of rotational energy extraction from an $a_*=0.99$ spinning black hole by a MAD field configuration satisfy the condition (\ref{e:Penrose_process}) on part of the black-hole surface and hence describe a Penrose process involving electromagnetic fields. This is confirmed by the angular-momentum density flux being negative on the whole of the black-hole surface.
We see that the angular-momentum flux is negative over the entire horizon, while the energy flux is negative only over the part of the surface exterior to the equatorial accretion flow. This is a characteristic property of the MAD configuration: the rest-mass energy flux due to the accreted mass overwhelms the energy flux into the black hole and makes it positive, while this matter carries in very little angular momentum, its rotation being sub-Keplerian due to the action of strong magnetic fields that extract its angular momentum and carry it away in the form of magnetized winds.
To get more insight into the workings of the simulated black-hole rotational energy extraction process one has to leave the horizon and see what is happening in the bulk above the black-hole surface.
We have shown that GRMHD MAD simulations of black-hole rotational energy extraction describe a Penrose process, but, because of the approximations made, we have not learned how this process works in detail. In the case of free particles we know what is happening: a particle decays in the ergoregion into one with negative and another one with positive energies. The one with negative energy cannot leave the ergoregion and must be created there because negative energies exist only in the ergoregion and energy along the trajectories is conserved. This cannot be the case for a perfect fluid (with nonzero pressure) or an electromagnetic field. However, the mechanical case can serve as a guide to what is happening in a more general case. For MAD simulations, one cannot expect to see negative energies in the ``bulk'' since by stationarity energy is constant. However, the workings of the Penrose process should be apparent through the behavior of the Noether current $\vw P$.
Far from the black hole, the Noether current $\vw{P}$ is future directed timelike or null and is such that positive energy flows outwards.
Near the black hole, in the ergoregion, $\vw{P}$ should become spacelike or past directed. This is indeed what is happening in our simulations.
Figs.~\ref{f:psqf} and \ref{f:psqm} show the behavior of $\vw P$ in numerical results for the force-free and the MAD cases respectively. We see that for a force-free configuration $P^2=0$ at the surface of the ergosphere whereas in the MAD
simulations the $P^2=0$ surface is very close to the surface of
the ergosphere in the polar jet regions but lies inside of it
elsewhere. These patterns are in full agreement with Figs. \ref{f:eminusomegal} and \ref{f:eminusomegal_mad}. They demonstrate the fundamental role played by the ergoregion in extracting black-hole energy of rotation. This can be explained analytically as follows.
In the relativistic MHD code HARM, it is assumed that the Lorentz force on a
charged particle vanishes in the fluid frame:
\begin{equation}
\label{eq:lorentz}
u_{\mu}F^{\mu\nu}=0.
\end{equation}
Then a magnetic four-vector $b^{\mu}$ is defined as
\begin{equation}
\label{eq:b}
b^{\mu}:= \frac{1}{2}\epsilon^{\mu\nu\alpha\beta}u_{\nu}F_{\alpha\beta},
\end{equation}
with
\begin{equation}
\label{eq:bu}
b_{\mu}u^{\mu}=0,
\end{equation}
following from ${\w F}$ antisymmetry.
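Relation (\ref{eq:bu}) is a purely algebraic consequence of the total antisymmetry of the Levi-Civita symbol: contracting it twice with $u$ gives zero. The following symbolic sketch (ours, not part of the HARM code; the flat-index symbol suffices, since overall sign and metric-density factors drop out of the contraction) confirms it.

```python
import sympy as sp
from sympy import LeviCivita

# covariant 4-velocity components and a generic antisymmetric field tensor
u = sp.symbols('u0:4')
F = sp.zeros(4, 4)
for i in range(4):
    for j in range(i + 1, 4):
        F[i, j] = sp.Symbol(f'F{i}{j}')
        F[j, i] = -F[i, j]

# b^mu = (1/2) eps^{mu nu alpha beta} u_nu F_{alpha beta}
b = [sp.Rational(1, 2) * sum(LeviCivita(m, n, a, c) * u[n] * F[a, c]
                             for n in range(4)
                             for a in range(4)
                             for c in range(4))
     for m in range(4)]

# the contraction b^mu u_mu pairs the antisymmetric symbol with u twice
bu = sp.expand(sum(b[m] * u[m] for m in range(4)))
print(bu)  # 0
```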
\begin{figure}
\center
\centerline{\includegraphics[width=1.0\columnwidth]{Fig_14_lasota.eps}\hfill }
\center
\caption{\label{f:psqf} \small Color maps of $P^2$ in the monopolar force-free simulations for a black hole with spin $a_*=0.99$. The surface of the
ergosphere is shown with cyan lines, the stagnation surface with
orange lines. The region in which $\vw P$ is spacelike is shown in
orange, and the region in which $\vw P$ is timelike is shown in blue
(see color bar). Black-and-white striped lines represent the magnetic field lines.
As discussed in the main text, in a force-free configuration $\vw P$
becomes null at the surface of the ergosphere.}
\end{figure}
This allows the electromagnetic energy-momentum tensor (\ref{e:T_EM}) to be written in the form of \citep{Gammie03}:
\begin{equation}
\label{eq:tmunumhd}
T^{\rm (EM)}_{\mu\nu} = b^2 u_\mu u_\nu + \frac{1}{2}b^2 g_{\mu\nu} - b_\mu b_\nu.
\end{equation}
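Equation (\ref{eq:tmunumhd}) is the standard rewriting of (\ref{e:T_EM}) for an ideal-MHD field obeying (\ref{eq:lorentz}). As a numerical sanity check (a sketch of ours, in flat spacetime, with the field tensor rebuilt from $u$ and $b$ through the dual relation; only quadratic combinations of $F$ and $b$ enter, so the sign conventions of the Levi-Civita symbol are immaterial):

```python
import numpy as np
from itertools import permutations

def sgn(perm):
    # parity of a permutation, by counting inversions
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

eps = np.zeros((4, 4, 4, 4))               # eps_{abcd}, eps_{0123} = +1
for perm in permutations(range(4)):
    eps[perm] = sgn(perm)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # Minkowski metric (its own inverse)
rng = np.random.default_rng(2)

v = 0.5 * rng.uniform(-1, 1, 3)            # random boost velocity, |v| < 1
u_up = np.array([1.0, *v]) / np.sqrt(1.0 - v @ v)   # normalized: u.u = -1
w = rng.uniform(-1, 1, 4)
b_up = w + (w @ eta @ u_up) * u_up         # projection: b.u = 0
u_dn, b_dn = eta @ u_up, eta @ b_up
b2 = b_up @ b_dn

F = np.einsum('abcd,c,d->ab', eps, u_up, b_up)   # F_{ab} with E = 0 in fluid frame
Fup = eta @ F @ eta                              # F^{ab}
assert np.allclose(u_dn @ Fup, 0.0)              # ideal-MHD condition u_mu F^{mu nu} = 0

# Maxwell stress-energy vs. the MHD form b^2 u u + (b^2/2) g - b b
T_max = F @ eta @ F.T - 0.25 * eta * np.einsum('ab,ab->', F, Fup)
T_mhd = b2 * np.outer(u_dn, u_dn) + 0.5 * b2 * eta - np.outer(b_dn, b_dn)
print(np.allclose(T_max, T_mhd))   # True
```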
\begin{figure}
\centerline{
\includegraphics[width=1.0\columnwidth]{Fig_15_lasota.eps}}
\center
\caption{\label{f:psqm} \small Color maps of $P^2$ in the MAD simulations for a black hole with
spin $a_*=0.99$. Color codes and lines as in Fig. \ref{f:psqf}. In this case the surface
$P^2=0$ nearly coincides with the surface of the ergosphere in
magnetically-dominated polar jets, but lies inside of the surface of
the ergosphere otherwise. }
\end{figure}
Therefore for the electromagnetic Noether current $P_\mu^{\rm (EM)}=T^{\rm (EM)}_{\mu\nu}\eta^{\nu}$ one has
\begin{equation}
\label{eq:pff}
P^\mu_{(\rm EM)} P_\mu^{(\rm EM)} = P^2_{(\rm EM)} = \frac{1}{4} b^4 g_{tt}.
\end{equation}
Since $g_{tt} > 0$ inside the ergosphere and $< 0$ outside, this fully
explains the numerical results seen in Fig.~\ref{f:psqf}:
\begin{eqnarray}
P^2_{(\rm EM)} &>& 0 \quad {\rm inside\ ergosphere},\\
P^2_{(\rm EM)} &<& 0 \quad {\rm outside\ ergosphere.}
\end{eqnarray}
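The derivation of (\ref{eq:pff}) uses only $u_\mu u^\mu=-1$, $b_\mu u^\mu=0$ and the symmetry of the metric, so it can be checked numerically with an arbitrary symmetric metric. A quick sketch of ours (the random perturbation of the Minkowski metric is for illustration, not tied to Kerr):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.uniform(-1, 1, (4, 4))
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * (M + M.T)   # generic symmetric metric
ginv = np.linalg.inv(g)

u0 = np.array([1.0, *rng.uniform(-0.2, 0.2, 3)])
u_up = u0 / np.sqrt(-(u0 @ g @ u0))        # normalized: u.u = -1
w = rng.uniform(-1, 1, 4)
b_up = w + (w @ g @ u_up) * u_up           # projected: b.u = 0
u_dn, b_dn = g @ u_up, g @ b_up
b2 = b_up @ b_dn

T = b2 * np.outer(u_dn, u_dn) + 0.5 * b2 * g - np.outer(b_dn, b_dn)
P = T[:, 0]                                # P_mu = T^{EM}_{mu nu} eta^nu, eta = d/dt
P2 = P @ ginv @ P
print(np.isclose(P2, 0.25 * b2 ** 2 * g[0, 0]))   # True
```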
Notice that this result applies not only to stationary axisymmetric
electromagnetic force-free field but also to time-dependent fully 3D
(nonaxisymmetric) configurations. However, the above property of $\vw P$
applies only to the electromagnetic force-free case.
To see this let us use the general energy-momentum tensor
\[
T_{\mu\nu}= T^{\rm( MA)}_{\mu\nu} + T^{\rm (EM)}_{\mu\nu}.
\]
with $\w T^{\rm( MA)}$ and $\w T^{\rm (EM)}$ given by (\ref {eq:Tpf}) and (\ref{eq:tmunumhd}) respectively.
One obtains then
\begin{equation}
\label{eq:p2general}
P^2 = \left(\frac{1}{2} b^2+p\right)^2 g_{tt} - A,
\end{equation}
with
\begin{equation} A = 2(\Gamma-1) u b_t^2 + u_t^2 (\rho+u+p+b^2) [(2-\Gamma)u+\rho],
\end{equation}
where $u=\epsilon - \rho$ is the internal energy and the adiabatic index $\Gamma$ ($p=(\Gamma - 1 )u$) satisfies $1 \le \Gamma \le 2$ (in the MAD simulations $\Gamma=4/3$).
For dust ($p=0$) one gets
\[
P^2= - \left(\rho u_t\right)^2,
\]
i.e., the Noether current is always timelike (but past directed for negative-energy worldlines; see Sec. \ref{sub:mechP}).
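Both (\ref{eq:p2general}) and the dust limit can be verified numerically in the same way (a sketch of ours, assuming the perfect-fluid form $T^{\rm (MA)}_{\mu\nu} = (\rho+u+p)u_\mu u_\nu + p\, g_{\mu\nu}$ with $p=(\Gamma-1)u$; the variable \texttt{uint} stands for the internal energy $u$):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.uniform(-1, 1, (4, 4))
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * (M + M.T)   # generic symmetric metric
ginv = np.linalg.inv(g)

u0 = np.array([1.0, *rng.uniform(-0.2, 0.2, 3)])
u_up = u0 / np.sqrt(-(u0 @ g @ u0))          # u.u = -1
w = rng.uniform(-1, 1, 4)
b_up = w + (w @ g @ u_up) * u_up             # b.u = 0
u_dn, b_dn = g @ u_up, g @ b_up
b2 = b_up @ b_dn

rho, uint, Gam = 0.7, 0.4, 4.0 / 3.0         # sample fluid state, Gamma = 4/3
p = (Gam - 1.0) * uint

T = ((rho + uint + p + b2) * np.outer(u_dn, u_dn)
     + (p + 0.5 * b2) * g
     - np.outer(b_dn, b_dn))                 # T^(MA) + T^(EM)
P = T[:, 0]                                  # P_mu = T_{mu t}
P2 = P @ ginv @ P

A = (2.0 * (Gam - 1.0) * uint * b_dn[0] ** 2
     + u_dn[0] ** 2 * (rho + uint + p + b2) * ((2.0 - Gam) * uint + rho))
print(np.isclose(P2, (0.5 * b2 + p) ** 2 * g[0, 0] - A))   # True

# dust limit: p = u = 0 and no field give P^2 = -(rho u_t)^2
Pd = rho * u_dn * u_dn[0]
print(np.isclose(Pd @ ginv @ Pd, -(rho * u_dn[0]) ** 2))   # True
```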
For the force-free case ($b^2 \gg \rho$, $p\ll \rho$) one recovers (\ref{eq:pff}) but in general (e.g. for $\Gamma=4/3$) $A > 0$.
Since $P_{\rm EM}^2 = 0$ precisely at the surface of the ergosphere, the same applies approximately to the full Noether current in the highly magnetized regions: there $P^2 \approx P_{\rm EM}^2$, which vanishes at the ergosphere. In the weakly magnetized disk-corona region, however, the surface $P^2=0$ deviates from the ergosphere by at least order unity.
The first term on the right-hand side of Eq. (\ref{eq:p2general}) is positive inside the ergosphere. Since the second term is nonpositive for $1 \le \Gamma \le 2$, the surface $P^2 = 0$ lies \emph{inside} the ergosphere, as seen in Fig.~\ref{f:psqm}.
Also shown in Figs.~\ref{f:psqf} and~\ref{f:psqm} is the stagnation limit at which the field drift velocity
changes sign ($u^r=0$; inside this limit the velocity is pointing
inwards). Inside the stagnation surface, an energy counterflow
\cite{Komissarov-2009} is present: while the fields drift inwards, the
energy flows outwards. The stagnation limit is always outside the ergoregion; for $a_*=0.99$ it is very close to the ergosphere but for, e.g., $a_*=0.9999$ the two surfaces are still well separated. The shapes and location of our stagnation limits are different from those found by \citet{Okamoto-2006} and \citet{Komissarov-2009}. The reasons for these differences will be addressed in a future paper.
\section{Discussion and conclusions}
\label{section-Conclusions}
We proved that for any type of matter or (nongravitational) fields satisfying the weak energy condition, the black hole's rotational energy can be extracted if, and only if,
negative energy and angular momentum are absorbed by the black hole. Applied to the case of a single particle, the general criterion (\ref{e:Penrose_process}) leads to the standard condition for a mechanical Penrose process. {For a general electromagnetic field,
the criterion (\ref{e:Penrose_process}) leads to the condition (\ref{e:nec_cond_EM}) on the electromagnetic field at the horizon, which does not seem to have been expressed before.}
In a sense our findings are obvious (which does not mean they are trivial). They follow from the fact that the black-hole surface is a stationary null hypersurface. Hence it can only absorb matter or fields; it cannot emit anything, in particular energy. No torque can be applied to the horizon, since a torque results from a difference of material/field fluxes coming from the opposite sides of a surface \citep{Abramowicz-2010}. The only way for the hole to lose energy, independent of the nature of the medium it is interacting with, is by absorbing a negative amount of it. And, since the energy in question must be rotational, it must absorb negative angular momentum to be slowed down.
Our results do not specify how the effect of net negative energy absorption by a black hole is achieved. The conditions for black-hole energy extraction do not guarantee the existence of such a process in the real world. As is well known, the mechanical Penrose process requires splitting of particles in the ergoregion but no realistic way of achieving black-hole energy extraction has been found. Using fluids (perfect or not) does not seem very promising in this context. The only known black-hole energy extracting process that might be at work in the Universe is the BZ mechanism. We showed that the process of energy extraction described by GRMHD simulations of magnetically arrested disk flows around rapidly spinning black holes is a Penrose process. This has been deduced before from energy conservation and efficiencies well in excess of 100\% but we showed that the solutions found by these simulations satisfy the rigorous and general conditions required by general relativity. Considering that black holes are purely general-relativistic objects this is a reassuring conclusion.
It is worth stressing that even when, in the GRMHD simulations, the Noether current has a positive flux in the outward direction everywhere (including at the BH horizon), this does not correspond to the flow of any {\sl physical} energy out of the black hole, since the ``energy'' associated with the Noether current is not a measurable quantity: no physical observer can measure it, except at infinity, where the Killing vector $\eta$ becomes a unit timelike vector and is therefore eligible as the 4-velocity of a physical observer: an inertial observer at rest with respect to the BH location.
As mentioned above, the main (and only important) difference between the mechanical and other versions of the Penrose process is that in the first version, particles move along geodesics and therefore energy is conserved on their trajectories. Therefore the motion of a particle crossing the horizon with negative energy is from its start restricted to the ergoregion. This does not have to be the case for interacting matter and fields. It is still true that the ``outgoing flow of energy at infinity in the Penrose process is inseparable from the negative energy at infinity of an infalling `object'\,'' \citep[to quote][]{Komissarov-2009}, but this inseparability concerns the negative energy of the object when it is absorbed by the black hole. On its way to the final jump into the hole, the object's energy may vary depending on its interactions with the medium it is part of.
A detailed description of these processes in the framework of the GRMHD simulations will be the subject of a future work.
\begin{acknowledgments}
MA and JPL thank Serguei Komissarov for an enlightening and stimulating exchange of emails.
Research reported here was partially supported by Polish NCN Grants No. UMO-2011/01/B/ST9/05439, No. UMO-2011/01/B/ST9/05437, and
No. DEC-2012/04/A/ST9/00083.
Research at the Silesian University in Opava was supported by
the Czech CZ.1.07/2.3.00/20.0071 ``Synergy'' grant for international collaboration. JPL acknowledges a grant from the French Space Agency CNES
and EG, Grant No. ANR-12-BS01-012-01 ``Analyse Asymptotique en Relativit\'e G\'en\'erale''
from Agence Nationale de la Recherche.
Support for this work was provided by a Princeton Center for Theoretical Science
Fellowship and by NASA through Einstein Postdoctoral Fellowship Grant No. PF3-140115 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under Contract No. NAS8-03060.
We acknowledge support by the NSF through TeraGrid/XSEDE resources provided by NICS Kraken and LONI QueenBee, where simulations were carried out; NICS Nautilus, where data were analyzed; and TACC Ranch and NCSA MSS, where data were backed up, under Grants No. TG-AST100040 (A.T.) and TG-AST080026N (R.N.).
\end{acknowledgments}
\section{Introduction}
There has been a renewed interest in the search for new constructions of lattices from error-correcting codes due to their various recent applications, such as coding for fading wiretap channels \cite{KosiOngOggier}, Gaussian relay networks \cite{Adaptative}, compound fading channels \cite{Our} and index codes \cite{Index}, to name only a few. For the applications considered in these works, it is desirable to lift a code over a finite field into a lattice that possesses a rich algebraic structure, often inherited from the properties of number fields. In the present work we provide a unified analysis of these constructions and investigate ``random-coding''-like results for such lattices. Our focus is on the problem of finding dense structured lattice packings, although our techniques have a much broader scope of applications (as discussed in Section \ref{sec:conclusion}).
Indeed, finding the densest packing is a central subject in the Geometry of Numbers, with a variety of well-established connections to Coding Theory. Let $\Delta_n$ denote the best possible sphere packing density achievable by a Euclidean lattice of dimension $n$. The celebrated Minkowski-Hlawka theorem (e.g. \cite{Cassels,Leker}) gives the lower bound $\Delta_n \geq \zeta(n)/2^{n-1}$ for all $n\geq 2$, where $\zeta(n) = 1+1/2^n+1/3^n+\ldots$ is the Riemann zeta function. Up to very modest asymptotic improvements, this is still, to date, the best lower bound for the best packing density in high dimensions.
Typical methods for establishing the theorem depend on the construction of random ensembles of lattices and on mean-value results \cite{Siegel,Rogers}. Rush \cite{Rush1989} and Loeliger \cite{Loeliger} obtained the lower bound from integer lattices constructed from linear codes in $\mathbb{F}_p^n$, in the limit when $p\to\infty$, with random-coding arguments. Improvements of the lower bound, in turn, strongly rely on additional (algebraic) structure. For instance, Vance \cite{Vance} showed that the best quaternionic lattice in dimension $m$ (with real equivalent in dimension $4m$) satisfies $\Delta_{4m} \geq 3m \zeta(4m)/(e\, 2^{4m-3})$. Using lattices built from cyclotomic number fields, Venkatesh \cite{Venkatesh} established the bound $\Delta_{2\varphi(m)} \geq m/2^{2\varphi(m)-1}$, where $\varphi(m)$ is Euler's totient function (since $m$ can grow as fast as a constant times $\varphi(m)\log \log \varphi(m)$, this provides the first super-linear improvement). From another perspective, Gaborit and Z\'emor \cite{Zemor}, and Moustrou \cite{Moustrou} exploited additional coding-theoretic and algebraic structures to significantly reduce the family size of ensembles containing dense lattices.
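For concreteness, the three bounds can be compared at a common dimension; the choice $n=96$ (so that $4m=96$ for the quaternionic bound, and $m=210$ with $2\varphi(210)=96$ for the cyclotomic one) is ours, for illustration only. A small sketch:

```python
from math import exp, log2

def zeta(s, terms=200):
    # truncated Riemann zeta; the tail is negligible for the large s used here
    return sum(k ** (-s) for k in range(1, terms + 1))

n = 96
mh = zeta(n) / 2 ** (n - 1)                      # Minkowski-Hlawka
m4 = n // 4                                      # Vance: 4m = n
vance = 3 * m4 * zeta(n) / (exp(1) * 2 ** (n - 3))
venk = 210 / 2 ** (n - 1)                        # Venkatesh: phi(210) = 48
print(f'log2 density bounds at n = {n}: '
      f'MH {log2(mh):.2f}, Vance {log2(vance):.2f}, Venkatesh {log2(venk):.2f}')
```

At this particular dimension the cyclotomic bound is the largest; which improvement wins in general depends on the arithmetic of the dimension.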
\\[1\baselineskip]
\noindent \textbf{Main Contributions.}
In this work we investigate general random lattices obtained from error-correcting codes. The objective of this study is two-fold: we provide unified analyses and coding-theoretic proofs of the aforementioned results, as well as a simple condition to verify whether any new construction can be used to build ensembles containing dense lattices.
We start from the fairly general definition of a \textit{reduction}, i.e. a mapping that takes a lattice into the space $\mathbb{F}_p^n$. For a general reduction we prove the following:
\begin{thm} Let $\phi_{p}: \Lambda \to \mathbb{F}_{p}^n$ be a family of reductions (surjective homomorphisms), where $\Lambda$ is a lattice of rank $m$ in the Euclidean space. Consider the ensemble
$$\mathbb{L}_{p} = \left\{\beta \phi_p^{-1}(\mathcal{C}): \mathcal{C} \mbox{ is a } k-\mbox{dimensional code in } \mathbb{F}_p^n\right\},$$
for an appropriate normalization factor $\beta$ so that all lattices have volume $V$. Denote by $\mathcal{N}_{\Lambda^{\prime}}(r) = \#( \mathcal{B}_r \cap \Lambda')$ the number of primitive points of $\Lambda$ inside a ball of radius $r$. If the first minimum of $\Lambda_{p} = \ker \phi_{p}$ satisfies
\begin{equation}\liminf_{p\to \infty}\left( \frac{\lambda_1(\Lambda_{p})}{p^{n/m}}\right) > 0, \mbox{ then }
\label{eq:nonDeg}
\end{equation}
$$\lim_{p\to\infty} \mathbb{E}_{\mathbb{L}_{p}}[\mathcal{N}_{\Lambda^\prime}(r)] = (\zeta(m)V)^{-1} \text{\upshape vol } \mathcal{B}_r,$$
where the average is with respect to the uniform distribution on $\mathbb{L}_{p}$.
\label{thm:2}
\end{thm}
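A finite-$p$ illustration of this averaging, in the simplest case $\Lambda=\mathbb{Z}^2$, $n=m=2$, $k=1$ (classical Construction A), is easy to run. For simplicity the sketch below (ours) counts \emph{all} nonzero points rather than the primitive ones, whose expected number is $V^{-1}\mathrm{vol}\,\mathcal{B}_r$, without the $\zeta$ factor; the scaling $\beta = p^{-1/2}$ normalizes every $\Lambda_p(\mathcal{C})$ to volume $V=1$.

```python
from math import pi

p = 9973                       # prime alphabet size
r = 2.0                        # target radius after rescaling by beta = p**-0.5
R = r * p ** 0.5               # corresponding radius before rescaling

counts = {}                    # one counter per 1-dimensional code (line in F_p^2)
Rint = int(R)
for x in range(-Rint, Rint + 1):
    for y in range(-Rint, Rint + 1):
        if (x, y) == (0, 0) or x * x + y * y > R * R:
            continue
        a, b = x % p, y % p    # nonzero residue, since ||(x, y)|| < p
        line = (1, b * pow(a, -1, p) % p) if a else (0, 1)
        counts[line] = counts.get(line, 0) + 1

avg = sum(counts.values()) / (p + 1)    # F_p^2 has p + 1 lines through 0
print(avg, pi * r * r)                  # both close to 4 pi
```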
A slightly stronger version of the above result is precisely stated in Theorem \ref{thm:AverageEnsembleGeneral}. We shall refer to a family of reductions that satisfies condition \eqref{eq:nonDeg} as \textit{non-degenerate}. Non-degeneracy is indeed a very mild condition, and is satisfied, for instance, if $\Lambda_p$ has non-vanishing Hermite parameter (e.g., $\Lambda_p = p\mathbb{Z}^n$). Non-degenerate constructions immediately yield lattices satisfying the Minkowski-Hlawka lower bound (see the discussion in the end of Section \ref{sec:prelim}). By choosing specific suitable families of non-degenerate reductions, we can further improve this density and obtain a number of interesting corollaries. We highlight one of them:
\begin{cor}\label{cor:betterCyclotomic} Let $\mathcal{O}_K$ be the ring of integers of a degree $n$ number field $K$ containing $r(K)$ roots of unity. For any integer $t\geq 2$, there exists an $\mathcal{O}_K$-lattice with dimension $t$ and packing density
$$\Delta \geq \frac{r(K) t \zeta(tn)(1-\varepsilon)}{e(1-e^{-t}) 2^{tn}},$$
for any $\varepsilon > 0$.
\end{cor}
This proves, for instance, the existence of ideal lattices in any dimension with density better, by a linear factor, than the Minkowski-Hlawka lower bound. This also recovers,
for $t=2$ and a judicious choice of number field and degree, the density in \cite[Thm. 1]{Venkatesh} and \cite[Thm. 2]{Moustrou}. By allowing reductions to codes over matrix rings (rather than the field $\mathbb{F}_p$), we provide, in Section \ref{sec:redRings}, a coding-theoretic proof of the existence of dense Hurwitz lattices, as in \cite{Vance}.
Here is how Theorem 1 may be interpreted: the density of the kernel (coarse) lattice $\Lambda_p$ is improved by ``adjoining'' a code $\mathcal{C}$ to it, through the reduction $\phi_p$. Now if $\Lambda_p$ itself has a reasonable density, we can improve it up to the Minkowski-Hlawka bound. Building on this idea, we show in Section \ref{sec:conc} that if we start from a suitable reduction so that the base (fine) lattice $\Lambda$ is not so thick (in terms of its \textit{covering density}) and such that the kernels are not so sparse (in terms of \textit{packing density}), we can bound the required size of $p$ to be within a finite range. For instance, we show that by starting from the family of Craig's lattices \cite{Splag}, we can build dense lattices from codes with alphabet size $p = O((n\sqrt{\log n})^{1+\nu})$, where $\nu > 0$ is any (small) positive constant. This improves significantly the size of codes required by usual constructions \cite{Rush1989,Loeliger} (which require $p = \omega(n^{3/2})$). As observed in \cite[pp. 18-19]{Splag}, the works of Rush (and Loeliger) already significantly reduce the family sizes of typical proofs of the Minkowski-Hlawka lower bound. It is worth mentioning that, in terms of absolute family size, the best result is achieved by \cite{Zemor}, by restricting the average to double-circulant codes, or by \cite{Moustrou}, using cyclotomic lattices. For instance, while the logarithm of the search space for Craig's lattices reductions has size $n^2 \log \log n$, the family based on double-circulant codes has size $n \log n$ (we refer the reader to Table \ref{tab:comparison} for more details). We leave it as an open question whether coupling a good reduction with a smaller family of codes can further reduce the overall search space.
\\[1\baselineskip]
\noindent \textbf{Organization. } This work is organized as follows. In Section \ref{sec:prelim} we describe some basic definitions and notation. In Section \ref{sec:genRed} we establish our main result on general reductions and several corollaries. In Section \ref{sec:numbFields} we consider reductions induced by quotients of ideals in the ring of integers of a number field, proving the main corollaries. In Section \ref{sec:redRings} we construct random Hurwitz and Lipschitz lattices from codes over matrix rings. In Section \ref{sec:conc}, we discuss an ``algorithmic'' version of the main theorem and draw the final conclusions.
\section{Preliminaries and Notation}
\label{sec:prelim}
The Euclidean norm of $\mathbf{x} \in \mathbb{R}^m$ is denoted by $\left\|\mathbf{x} \right\| = (x_1^2+\ldots+x_m^2)^{1/2}$. The ball of radius $r$ in $\mathbb{R}^m$ is denoted by $\mathcal{B}_r = \left\{\mathbf{x} \in \mathbb{R}^m: \left\|\mathbf{x} \right\| \leq r \right\}$.
A \textit{lattice} $\Lambda$ is a discrete additive subgroup of $\mathbb{R}^m$. Denote by $\mbox{span } \Lambda$ the minimal subspace of $\mathbb{R}^m$ that contains $\Lambda$. The \textit{rank} of $\Lambda$ is defined to be the dimension of $\mbox{span } \Lambda$. The quotient $(\mbox{span } \Lambda)/ \Lambda$ is compact, and its volume, denoted by $V(\Lambda)$, is said to be the \textit{volume of }$\Lambda$. The \textit{first minimum} $\lambda_1(\Lambda)$ of $\Lambda$ is the shortest Euclidean norm of non-zero vectors in $\Lambda$. In general, the $i$-th minima of $\Lambda$ are defined as
\begin{equation*}
\lambda_i(\Lambda) = \min\left\{r : \dim\left( \mbox{span}\left\{\mathcal{B}_r \cap \Lambda\right\}\right) = i\right\}.
\end{equation*} The \textit{packing density} of a rank $m$ lattice $\Lambda$ is defined as $$\Delta(\Lambda) = \frac{\mbox{vol }\mathcal{B}_{\lambda_1/2}}{V(\Lambda)}.$$
We say that a point in $\mathbf{x} \in \Lambda$ is \textit{primitive} if the intersection between $\Lambda$ and the open line segment $\left\{\alpha \mathbf{x} : \alpha \in (0,1)\right\}$ is the empty set. The set of all primitive points in $\Lambda$ is denoted by $\Lambda^\prime$.
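In $\mathbb{Z}^2$, for example, the primitive points are exactly the vectors with coprime coordinates, and their proportion among all nonzero lattice points in a large ball tends to $1/\zeta(2)=6/\pi^2 \approx 0.608$, which is the origin of the $\zeta(m)$ factors appearing below. A quick empirical sketch of ours:

```python
from math import gcd, pi

R = 300
total = prim = 0
for x in range(-R, R + 1):
    for y in range(-R, R + 1):
        if (x, y) == (0, 0) or x * x + y * y > R * R:
            continue
        total += 1
        if gcd(abs(x), abs(y)) == 1:    # (x, y) is primitive in Z^2
            prim += 1

print(prim / total, 6 / pi ** 2)        # both close to 0.608
```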
Theorem \ref{thm:2} implies the Minkowski-Hlawka lower bound in the following fashion. From the average result, it follows that there must exist at least one $\Lambda \in \mathbb{L}$ such that $\mathcal{N}_{\Lambda}(r) \leq (\zeta(m)V)^{-1} \text{vol } \mathcal{B}_r$. Now if we force the right-hand side to be equal to $2(1-\varepsilon)$, for some small $\varepsilon >0$, then, since a lattice has at least two minimum vectors, we must have $\mathcal{N}_{\Lambda}(r) = 0$. Therefore $\Lambda$ can pack balls of radius $r/2$; rearranging the terms gives us, up to $\varepsilon$, the Minkowski-Hlawka bound. If $\Lambda$ is a lattice with a guaranteed number of minimum vectors (say, $L$), we can, by similar arguments, achieve density $L(1-\varepsilon)/2^{m}$.
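The bookkeeping in this argument is easy to check symbolically (a sketch of ours; $\zeta(m)$ is kept as a free positive symbol \texttt{zeta\_m}):

```python
import sympy as sp

m, V, eps, zeta_m = sp.symbols('m V epsilon zeta_m', positive=True)

def vol(radius):
    # volume of the m-dimensional ball of the given radius
    return sp.pi ** (m / 2) / sp.gamma(m / 2 + 1) * radius ** m

# radius r* forcing the average count to 2(1 - eps): vol(B_r*) = 2(1 - eps) zeta(m) V
r_star = (2 * (1 - eps) * zeta_m * V * sp.gamma(m / 2 + 1)
          / sp.pi ** (m / 2)) ** (1 / m)

# packing density obtained with balls of radius r*/2;
# it equals (1 - eps) * zeta(m) / 2**(m - 1), the Minkowski-Hlawka density
delta = sp.simplify(vol(r_star / 2) / V)
print(delta)
```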
A $k$-dimensional vector subspace $\mathcal{C} \subset \mathbb{F}_p^n$ is called a (linear) \textit{code} with parameters $(n,k,p)$ (or simply an $(n,k,p)$-code).
Throughout the paper we use the standard ``big-O'', ``big-omega'' and ``little-omega'' notations, e.g. $f(x) = \Omega(g(x))$ if $\lim \sup_{x\to \infty} |g(x)/f(x)| < +\infty$, and $f(x) = \omega(g(x))$ if $\lim_{x \to \infty} g(x)/f(x) = 0$.
\section{Generalized Reductions}
\label{sec:genRed}
From now on, let $\Lambda$ be a rank $m$ lattice and let $n \leq m$ be an integer.
\begin{defi} Let $\phi_p:\Lambda\to \mathbb{F}_p^n$ be a surjective homomorphism. Given a linear code $C$, its associated lattice via $\phi_p$ is defined as $\Lambda_p(C) \triangleq \phi_p^{-1}(C)$.
\label{def:ConsGen}
\end{defi}
A surjective homomorphism as in the above definition will, from now on, be called a \textit{reduction}. We shall see that $\Lambda_p(C)$ is indeed a lattice of rank $m$. First observe that $\Lambda_p(C)$ is a subgroup of $\Lambda_p(\mathbb{F}_p^n)=\Lambda$. Since the quotient $\Lambda/\text{ker}(\phi_p) \simeq \mathbb{F}_p^n$ is finite, $\text{ker}(\phi_p)=\Lambda_p(\left\{\mathbf{0}\right\})\triangleq\Lambda_p$ is a sub-lattice of $\Lambda$, of rank $m$. From the inclusion $\Lambda_p \subset \Lambda_p(C) \subset \Lambda$, we conclude that the three lattices have the same rank. Moreover, $\Lambda_p(C)/\Lambda_p \simeq C$, and therefore $V( \Lambda_p(C)) = \left|C\right|^{-1} p^{n} V( \Lambda)$.
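With $\Lambda=\mathbb{Z}^n$ and $\phi_p$ the entrywise reduction modulo $p$, Definition \ref{def:ConsGen} is the classical Construction A, and the index computation above is easy to confirm by brute force (a sketch of ours; the generator matrix is an arbitrary small example):

```python
from itertools import product

p, n, k = 3, 4, 2
G = [(1, 0, 1, 2), (0, 1, 2, 2)]    # generator matrix of a (4, 2, 3)-code

# C = row span of G over F_p
C = {tuple(sum(a * row[i] for a, row in zip(coef, G)) % p for i in range(n))
     for coef in product(range(p), repeat=k)}
assert len(C) == p ** k             # G has full rank over F_3

# the points of Lambda_p(C) inside one fundamental domain [0, p)^n of
# ker(phi_p) = pZ^n are exactly the codewords, so [Lambda_p(C) : pZ^n] = |C|
# and V(Lambda_p(C)) = p^n V(Z^n) / |C| = p^(n-k)
cosets = sum(1 for v in product(range(p), repeat=n) if v in C)
vol = p ** n // cosets
print(vol)   # 9 = p**(n - k)
```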
\begin{rmk} There is an off-topic connection between Definition \ref{def:ConsGen} and combinatorial tilings. If in addition to being surjective, the reduction $\phi_p$ is a bijection when restricted to a set $\mathcal{P} \subset \Lambda$ of cardinality $p^n$, then $\mathcal{P}$ tiles $\Lambda$ by translations of vectors of $\Lambda_p$.
\end{rmk}
This framework contains classical Construction A \cite{Splag}, \cite{Rush1989}, the constructions in \cite{KosiOngOggier}, and \cite{ConsAOggier}. We derive sufficient conditions for this general construction to admit a Minkowski-Hlawka theorem. Set $\beta= {V^{1/m}}/{(p^{(n-k)/m}V(\Lambda)^{1/m})}$ and let
\begin{equation}
\mathbb{L}_p=\left\{ \beta \Lambda_p(C) : C \mbox{ is an } (n,k,p)-\mbox{code}\right\}
\label{eq:ensemble}
\end{equation}
be the ensemble of all lattices associated to codes of dimension $k$, normalized to volume $V$. Suppose that a lattice in $\mathbb{L}_p$ is picked at random by choosing $C$ uniformly. We shall prove a generalized version of the Minkowski-Hlawka theorem for $\mathbb{L}_p$. Instead of functions with bounded support, we will consider a wider class of functions. Let $W=\mbox{span}(\Lambda)$ be the minimal subspace of $\mathbb{R}^m$ containing $\Lambda$ (therefore, $\mbox{dim}( \mbox{span}(\Lambda_p(C))) = m$).
\begin{defi} Let $f:W\to \mathbb{R}$ be a Riemann-integrable function. We say that $f$ is \textit{semi-admissible} if
\begin{equation}
|f(\mathbf{x})| \leq \frac{b}{(1+\left\| \mathbf{x} \right\|)^{m+\delta}}, \forall \mathbf{x} \in W
\end{equation}
where $b > 0$ and $\delta > 0$ are positive constants.
\end{defi}
Notice that any bounded integrable function with compact support is semi-admissible. Of particular interest are indicator functions of bounded convex sets. Notice that, for any semi-admissible function and a rank-$m$ lattice $\Lambda$ in $W$,
$$\sum_{\mathbf{x} \in \Lambda} {f(\mathbf{x})} < +\infty.$$
\begin{rmk} If $f$ and its Fourier transform $\hat{f}$ are semi-admissible, then $f$ is said to be admissible. In this paper we will not be concerned about admissible functions, which play an important role in the development of the so-called linear programming bounds for packings.
\end{rmk}
\begin{thm} Let $(p_j)_{j=1}^{\infty}$ be an increasing sequence of prime numbers such that there exist reductions $\phi_{p_j}:\Lambda\to\mathbb{F}_{p_j}^n$ and let $f:W\to\mathbb{R}$ be a semi-admissible function. If the first minimum of $\Lambda_{p_j}=\Lambda_{p_j}(\left\{0\right\})$ satisfies
$$\lambda_1(\Lambda_{p_j}) \geq c p_{j}^{\frac{n-k}{m}+\alpha},$$ for some constants $c,\alpha>0$, then
\begin{enumerate}
\item[(i)]\begin{equation}
\lim_{p_j\to\infty} \mathbb{E}_{\mathbb{L}_{p_j}}\left[\sum_{\mathbf{x} \in \beta \Lambda_{p_j}(C)\backslash\left\{0\right\}} f(\mathbf{x}) \right] = V^{-1}\int_{\mathbb{R}^m} f(\mathbf{x})\text{d}\mathbf{x},
\label{eq:averageBehaviorNoTheta}
\end{equation}
\item[(ii)]\begin{equation}
\lim_{p_j\to\infty} \mathbb{E}_{\mathbb{L}_{p_j}}\left[\sum_{\mathbf{x} \in \beta \Lambda_{p_j}^{\prime}(C)} f(\mathbf{x}) \right] = (\zeta(m)V)^{-1}\int_{\mathbb{R}^m} f(\mathbf{x})\text{d}\mathbf{x},
\label{eq:averageBehavior}
\end{equation}
\end{enumerate}
\label{thm:AverageEnsembleGeneral}where the averages are taken over all $\beta\Lambda_{p_j}(C)$ in the ensemble $\mathbb{L}_{p_j}$ (Equation \eqref{eq:ensemble}).
\end{thm}
\begin{proof} We will prove the ``refined'' statement (ii). The proof of (i) is similar, except for the last step. Recall that the set $\mathcal{C}_{n,k}$ of all $(n,k)$-codes satisfies Loeliger's balancedness equation \cite{Loeliger}
\begin{equation}
\frac{1}{| \mathcal{C}_{n,k}|} \sum_{{C} \in \mathcal{C}_{n,k}} \sum_{c \in {C} \backslash \left\{\mathbf{0}\right\}} g(c)= \frac{p_j^{k}-1}{p_j^{n}-1} \sum_{v \in \mathbb{F}_{p_j}^n \backslash \left\{\mathbf{0}\right\}}g(v),
\label{eq:LoeligerLemma}
\end{equation}
for any function
$g:\mathbb{F}_{p_j}^n \to \mathbb{R}$. Now for $f:W\to \mathbb{R}$, \begin{equation}
\begin{split} \mathbb{E}\left[\sum_{\mathbf{x} \in \beta\Lambda_{p_j}^{\prime}(C)} f(\mathbf{x}) \right] &= \mathbb{E}\left[\sum_{{\mathbf{x} \in \beta\Lambda_{p_j}^{\prime}(C)}\above 0pt {\phi_{p_j}(\mathbf{x}/\beta)=0}} f( \mathbf{x}) \right] + \mathbb{E}\left[\sum_{{\mathbf{x} \in \beta\Lambda_{p_j}^\prime(C)}\above 0pt {\phi_{p_j}(\mathbf{x}/\beta)\neq0}} f(\mathbf{x}) \right]. \\
\end{split}
\label{eq:splitAverage}
\end{equation}
From the assumption on $f$,
\begin{equation*}\left|\sum_{{\mathbf{x} \in \Lambda_{p_j}^{\prime}(C)}\above 0pt {\phi_{p_j}(\mathbf{x})=0}} f(\beta \mathbf{x})\right|=\left|\sum_{{\mathbf{x} \in \Lambda_{p_j}^{\prime}}} f(\beta \mathbf{x})\right| \leq \sum_{{\mathbf{x} \in \Lambda_{p_j}\backslash\left\{\mathbf{0}\right\}}} \frac{b}{(1+\left\| \beta \mathbf{x} \right\|)^{m+\delta}}.
\end{equation*}
Since the lattice $\Lambda_{p_j}(C)$ has rank $m \geq 1$, the series on the right-hand side of the above inequality is absolutely convergent for any $p_j$. Moreover, since, by assumption, $$\left\| \beta \mathbf{x} \right\| \geq \beta \lambda_1(\Lambda_{p_j})\geq c V^{1/m} (V(\Lambda))^{-1/m} p_j^{\alpha} \to \infty,$$ each individual term of the last sum tends to zero as $p_j \to \infty$ and, by dominated convergence, the whole sum tends to zero. Let $\gamma = (p_j^k-1)/(p_j^n-1)$. For the second term of Equation \eqref{eq:splitAverage}, we have:
\begin{equation*}
\begin{split} \mathbb{E}\left[\sum_{{\mathbf{x} \in \beta\Lambda_{p_j}^{\prime}(C)}\above 0pt {\phi_{p_j}(\mathbf{x})\neq0}} f(\beta \mathbf{x}) \right] \stackrel{(a)}{=} \gamma \sum_{\mathbf{x} \in \Lambda^{\prime}}f(\beta\mathbf{x})
\stackrel{(b)}{=} \sum_{r = 1}^{\infty} \frac{\mu(r)}{r^m} \sum_{\mathbf{x} \in \Lambda\backslash\left\{0\right\}}r^{m} \gamma f(r\beta \mathbf{x})
\end{split}
\end{equation*}
where $\mu$ denotes the M\"obius function. In the above, (a) follows from \eqref{eq:LoeligerLemma} and (b) is the M\"obius inversion formula (see, e.g., \cite[Sec. VI. 3.2]{Cassels}). The theorem follows by using the property
$$\sum_{r=1}^{\infty}\frac{\mu(r)}{r^m} = \frac{1}{\zeta(m)}$$
and observing that the inner sum satisfies
$$\lim_{p_j \to \infty} \sum_{\mathbf{x} \in \Lambda\backslash\left\{0\right\}}r^{m} \gamma f(r\beta \mathbf{x}) = V^{-1}\int_{W} f(\mathbf{x}) \mbox{d} \mathbf{x},$$
by the definition of Riemann integral. Exchanging the limit and the sum is justified by dominated convergence, given the condition on $f$.
\end{proof}
\begin{exe} If $\Lambda=\mathbb{Z}^n$ and $\phi_p$ is the reduction modulo $p$, we obtain mod-$p$ lattices as in \cite{Loeliger}. It is clear that $\Lambda_p = p\mathbb{Z}^n$ satisfies the hypothesis of Theorem \ref{thm:AverageEnsembleGeneral}, with $m=n$ and $\alpha = k/n$. This implies Theorem 1 of \cite{Loeliger}.
\end{exe}
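As a concrete illustration of the mod-$p$ case, the following sketch builds $\Lambda_p(C) = C + p\mathbb{Z}^n$ from a small code and checks that it is closed under addition and has index $p^{n-k}$ in $\mathbb{Z}^n$ (the helper names are ours, for illustration only):

```python
import itertools
import random

# Mod-p Construction A (illustrative sketch): given a linear (n, k) code C
# over F_p, the lattice is Lambda_p(C) = {x in Z^n : x mod p in C}.
# Its index in Z^n is p^(n - k), so its volume is p^(n - k).

def codewords(gen_rows, p):
    """All codewords of the code generated by gen_rows over F_p."""
    k = len(gen_rows)
    n = len(gen_rows[0])
    words = set()
    for coeffs in itertools.product(range(p), repeat=k):
        w = tuple(sum(c * g[j] for c, g in zip(coeffs, gen_rows)) % p
                  for j in range(n))
        words.add(w)
    return words

def in_lattice(x, code, p):
    """Membership test for Lambda_p(C)."""
    return tuple(xi % p for xi in x) in code

p, n = 5, 3
gen = [(1, 2, 3)]                 # a (3, 1) code over F_5
C = codewords(gen, p)

# The lattice is closed under addition: check on random members.
members = [tuple(ci + p * random.randint(-3, 3) for ci in c) for c in C]
for a, b in zip(members, reversed(members)):
    s = tuple(ai + bi for ai, bi in zip(a, b))
    assert in_lattice(s, C, p)

# Index of Lambda_p(C) in Z^n: one coset of p*Z^n per codeword.
index = p**n // len(C)
print(index)                      # p^(n-k) = 25
```

The same membership test works verbatim for any base prime and generator matrix.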
\begin{defi} A sequence of surjective homomorphisms $(\phi_j)_{j=1}^{\infty} $, $\phi_j:\Lambda \to \mathbb{F}_{p_j}^n$ is said to be \textit{non-degenerate} if
$$\lambda_1(\Lambda_{p_j}) \geq c p_{j}^{\frac{n}{m}},$$
for some constant $c > 0$. Similarly, the sequence of associated ensembles (Equation \eqref{eq:ensemble}) is said to be non-degenerate.
\end{defi}
It follows that if the reductions are non-degenerate, the associated ensemble admits the Minkowski-Hlawka theorem.
\begin{exe}[``Natural reduction''] If $m=n$, the natural reduction to $\mathbb{F}_p^n$ is as follows. Given a basis $\mathbf{x}_1,\ldots,\mathbf{x}_n$ for $\Lambda$, take $\phi_p$ to be the linear map defined by $\phi_p(\mathbf{x}_i) = \mathbf{e}_i \in \mathbb{F}_p^n$, where $\mathbf{e}_i$ is the $i$-th canonical vector $(0,\ldots,0,1,0,\ldots,0)$. It is clear that $\phi_p$ is surjective and $\ker \phi_p = p \Lambda$; therefore the associated sequence of reductions is non-degenerate. This provides a systematic way of constructing good sublattices of a given lattice.
\label{ex:naturalReduction}
\end{exe}
Taking $f(x)$ to be the indicator function of a ball in part (ii) of Theorem \ref{thm:AverageEnsembleGeneral}, we recover Theorem \ref{thm:2}. Another function of interest is $f(x)= e^{-\tau \left\|x\right\|^2}$ for $\tau > 0$, yielding the \textit{theta series}
$$\Theta_{\Lambda}(\tau) = \sum_{x \in \Lambda} e^{-\tau \left\|x\right\|^2}.$$
A corollary of part (i) of the theorem is the following:
\begin{cor} The average theta series of a non-degenerate sequence of ensembles satisfies
\begin{equation}
\lim_{p_j \to \infty} \mathbb{E}_{\mathbb{L}_{p_j}}\left[\Theta_{\Lambda}(\tau)\right] = V^{-1} \left(\frac{\pi}{\tau}\right)^{m/2} + 1.
\end{equation}
\label{cor:thetaSeries}
\end{cor}
Corollary \ref{cor:thetaSeries} can, for instance, be applied to the construction of sufficiently flat Gaussian measures for secure communications (cf \cite{LLBS_12}).
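The limit in Corollary \ref{cor:thetaSeries} can be sanity-checked at a finite prime without sampling codes: by the balancedness identity \eqref{eq:LoeligerLemma}, the ensemble average of the theta series is computable exactly. A sketch for $\Lambda = \mathbb{Z}^2$, $n = m = 2$, $k = 1$, $V = 1$ (the truncation radius and helper names are ours):

```python
import math

# Exact ensemble average of the theta series at a finite prime, computed
# via the balancedness identity instead of Monte Carlo (sketch).
# Base lattice Z^2, codes of length n = 2 and dimension k = 1 over F_p,
# beta = p^(-1/2) so that every lattice in the ensemble has volume V = 1.

p, tau = 101, 1.0
beta2 = 1.0 / p                      # beta^2
gamma = (p - 1) / (p**2 - 1)         # (p^k - 1)/(p^n - 1)

R = 60                               # truncation radius; dropped tail < 1e-13
kernel = 0.0                         # points of p*Z^2, x != 0
coset = 0.0                          # points of Z^2 \ p*Z^2
for a in range(-R, R + 1):
    for b in range(-R, R + 1):
        if a == 0 and b == 0:
            continue
        w = math.exp(-tau * beta2 * (a * a + b * b))
        if a % p == 0 and b % p == 0:
            kernel += w
        else:
            coset += w

avg_theta = 1.0 + kernel + gamma * coset   # the "+1" accounts for x = 0
limit = (math.pi / tau) + 1.0              # V^{-1} (pi/tau)^{m/2} + 1, m = 2
print(avg_theta, limit)
```

Already at $p = 101$ the exact average agrees with the limiting value to about one percent.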
\begin{rmk} The condition for non-degeneracy can be re-written as
$$\liminf_{p_j\to\infty} \gamma(\Lambda_{p_j}) > 0,$$
where $\gamma(\Lambda) = \lambda_1(\Lambda)/V(\Lambda)^{1/m}$ is the Hermite parameter of $\Lambda$. In other words, non-degeneracy is equivalent to the Hermite parameters of the sequence of kernel lattices being bounded away from zero.
\end{rmk}
We close this section with another consequence of Theorem \ref{thm:AverageEnsembleGeneral}. We shall refer to each ratio
\begin{equation}
\Delta_i(\Lambda) = \frac{\mbox{vol } \mathcal{B}_{\lambda_i/2}}{V(\Lambda)}, i = 1, \ldots, m,
\label{eq:successiveDensities}
\end{equation}
as the $i$-th successive density of a lattice $\Lambda$.
For a non-degenerate sequence of ensembles, put $\mathbb{L} = \bigcup_{j=1}^{\infty}\mathbb{L}_{p_j}$.
\begin{cor}
For any $\varepsilon > 0$, there exists $\Lambda \in \mathbb{L}$ such that
\begin{equation}
\prod_{i=1}^m \Delta_i(\Lambda)^{1/m} \geq \frac{2 m \zeta(m)(1-\varepsilon)}{e(1-e^{-m})2^{m}}.
\end{equation}
\end{cor}
\begin{proof}
The proof follows from a method of Rogers \cite{Rogers}, choosing $f(x)$ appropriately in the Minkowski-Hlawka theorem. We shall give a complete proof in the next section, in the context of $\mathcal{O}_K$-lattices.
\end{proof}
\section{Constructions From Number Fields}
\label{sec:numbFields}
From now on we consider constructions of random ensembles based on algebraic number theory. We refer the reader to \cite{Fermat} and \cite{Notes} for an introduction to the theory, as well as undefined notation.
Let $K/\mathbb{Q}$ be a number field of degree $n$ and signature $(r_1,r_2)$. Denote its real embeddings by $\sigma_1,\ldots,\sigma_{r_1}$ and its pairs of complex embeddings by $$\sigma_{r_1+1}, \overline{\sigma_{r_1+1}},\ldots,\sigma_{r_1+r_2},\overline{\sigma_{r_1+r_2}}.$$ Let $\mathcal{O}_K$ be the ring of integers of $K$ and $\mathcal{I}\subset \mathcal{O}_K$ be an ideal. An ideal can be identified with a real lattice of rank $n = r_1+2r_2$ via the canonical embedding
$$\sigma : \mathcal{O}_K \to \mathbb{R}^{r_1+2r_2}$$
\begin{equation*}
\begin{split}\sigma(x) = (\sigma_1(x),\ldots,\sigma_{r_1}(x),&\Re \sigma_{r_1+1}(x),\ldots, \Re \sigma_{r_1+r_2}(x),\\ &\Im \sigma_{r_1+1}(x),\ldots,\Im \sigma_{r_1+r_2}(x)).
\end{split}
\end{equation*}
Lattices constructed from the embedding of ideals $\mathcal{I} \subset \mathcal{O}_K$ are called \textit{ideal lattices}, and appear in the study of modular forms, coding theory, and cryptography. In this section we study the Minkowski-Hlawka theorem for $\mathcal{O}_K$-lattices and related structures.
Let $E=K\otimes_{\mathbb{Q}} \mathbb{R} $ be the Euclidean space generated by $K$. An $\mathcal{O}_K$-lattice is a free $\mathcal{O}_K$-submodule of $E^t$, for some integer $t>0$. In particular, an $\mathcal{O}_K$-lattice is closed under multiplication by elements of $\mathcal{O}_K$. The Euclidean norm in $E$ is induced by the trace form. Notice that $K$ is naturally embedded in $E$. When $K$ is either totally real or a totally imaginary quadratic extension of a totally real number field (a CM-field), some notational simplifications can be made. For instance, we can write the trace form as
$$\text{tr}(x\overline{y}) = \sigma_1(x)\overline{\sigma_1(y)} + \ldots + \sigma_n(x)\overline{\sigma_n(y)}.$$
We discuss the average behavior of a general reduction from algebraic number theory \cite{KosiOngOggier,Our,Adaptative}, defined in the sequel. A prime $p$ is said to split completely if $p\mathcal{O}_K$ can be factored into a product of $n$ distinct prime ideals $\mathfrak{p}_1 \mathfrak{p}_2 \cdots \mathfrak{p}_n$.
\begin{defi} Let $p$ be a prime that splits completely, and $\mathfrak{p}$ an ideal above $p$. Consider the reduction $\pi: \mathcal{O}_K \to \mathcal{O}_K/\mathfrak{p} \simeq \mathbb{F}_p$ modulo $\mathfrak{p}$ and let $\sigma$ be the canonical embedding. Let $\Lambda=\sigma(\mathcal{O}_K)^t$ (the canonical embedding is applied componentwise). Take
$$\phi_p:\Lambda\to\mathbb{F}_p^t$$
$$\phi_p(\sigma(x_1,\ldots,x_t))=(\pi(x_1),\ldots,\pi(x_t))$$
and define $\Lambda_p(C)=\phi_p^{-1}(C) \subset \mathbb{R}^{nt}$.
\label{def:con2}
\end{defi}
\begin{lemma} The ensemble induced by Definition \ref{def:con2} is non-degenerate.
\end{lemma}
\begin{proof}
The algebraic norm of any nonzero element of $\mathfrak{p}$ is greater than or equal to $p$. By the inequality of arithmetic and geometric means, any nonzero $x \in \mathfrak{p}$ satisfies $\left\|\sigma(x)\right\|^2 = \sum_{i=1}^n |\sigma_i(x)|^2 \geq n |N(x)|^{2/n} \geq n p^{2/n}$. Hence $\Lambda_p=\sigma(\mathfrak{p})^t$ has minimum norm at least $\sqrt{n}\, p^{1/n}$, finishing the proof.
\end{proof}
For the previous lemma to be meaningful, there must exist an infinite number of primes $p$ for which the construction above is possible. This follows from Chebotarev's density theorem (e.g. \cite[Cor. 13.6, p. 547]{Algebraic1}), which implies that the natural density $\delta$ of primes that split completely in $K$ is positive (indeed, one has $1/n! \leq \delta \leq 1/n$).
\begin{rmk}
Very similarly, it is possible to prove that the constructions in \cite{ConsAOggier} are non-degenerate.
\end{rmk}
Suppose that $K$ contains $r(K)$ roots of unity. Let $\mu$ be a root of unity contained in $\mathcal{O}_K$. It follows that $\left\| \sigma(\mu \mathbf{x} )\right\| = \left\| \sigma(\mathbf{x} )\right\|$ (e.g. \cite[Lem. 3.1]{Autissier}). Therefore, each $\Lambda$ constructed as in Definition \ref{def:con2} contains at least $r(K)$ minimal vectors, and we automatically obtain the density $r(K)(1-\varepsilon)/2^{nt}$ (this argument was used by Venkatesh \cite{Venkatesh} to prove that cyclotomic lattices ($\mathbb{Z}[\mu]$-lattices where $\mu$ is a root of unity) achieve density $m(1-\varepsilon)/2^{2\phi(m)}$). For $t$-dimensional $\mathcal{O}_K$-lattices, however, there is a loss of a linear factor of $t$ in the numerator. Nevertheless, we can improve the density up to that of Corollary \ref{cor:betterCyclotomic}, using a method by Rogers \cite{Rogers}, recently applied in \cite{Vance} to quaternionic lattices. The basic idea is to apply Theorem \ref{thm:AverageEnsembleGeneral} not to the indicator function of a ball, but to a bounded-support function that allows us to analyze the generalized densities of the ensemble. After ensuring the existence of a lattice with good generalized densities, it is possible to apply standard linear transformations (see e.g. \cite[Thm. 2.2]{Vance}) in order to ensure the existence of a lattice with good \textit{packing density}.
\mbox{} \\
\textit{Proof of Corollary \ref{cor:betterCyclotomic}:} For $\Lambda_0 \subset \mathcal{O}_K^t$, let the $i$-th successive minimum of $\Lambda_0$ (over $K$) be the smallest $r$ such that the ball $\mathcal{B}_r$ contains the canonical embeddings of $i$ vectors of $\Lambda_0$ that are linearly independent over $K$. More formally,
\begin{equation}
\begin{split}
\lambda_i^K(\Lambda_0) = \min \left\{r > 0: \dim \mbox{span}_{K} \left(\sigma^{-1} \left(\sigma(\Lambda_0) \cap \mathcal{B}_r \right)\right)=i \right\}.
\end{split}
\end{equation}
Notice that $\lambda_1^K(\Lambda_0) = \lambda_1(\sigma(\Lambda_0))$ and, in general, $\lambda_i^K(\Lambda_0) \geq \lambda_i(\sigma(\Lambda_0))$. Also, if $\mathbf{x}_1,\ldots,\mathbf{x}_t$ are linearly independent over $K$ and achieve the successive minima of $\Lambda_0$, then the embeddings $\sigma(\mathbf{x}_1),\ldots, \sigma(\mathbf{x}_t)$ are linearly independent and primitive in $\sigma(\Lambda_0) \subset \mathbb{R}^{nt}$. Now let $f:\mathbb{R}^{nt} \to \mathbb{R}$ be the following function with bounded support:
\begin{equation}
f(\mathbf{y})=\left\{ \begin{array}{cc}
1/n & \mbox{if }\left\|\mathbf{y} \right\| \leq re^{(1-t)/tn} \\
\frac{1}{nt} - \log\left(\frac{\left\|\mathbf{y}\right\|}{r}\right) & \mbox{if } re^{(1-t)/tn}\leq \left\|\mathbf{y} \right\| \leq re^{1/tn}\\
0 & \mbox{otherwise.}\end{array}\right.
\end{equation}
We have
\begin{equation*}
\int_{\mathbb{R}^{nt}} f(\mathbf{y}) \mbox{d}\mathbf{y} = \frac{e(1-e^{-t}) r^{nt} \mbox{vol }\mathcal{B}_1}{nt}.
\end{equation*}
Choose $r$ such that the right-hand side of this last equation is equal to $r(K)V\zeta(nt)(1-\varepsilon)/n$ for a small $\varepsilon < 1$.
Let $\phi_p$ be as in Definition \ref{def:con2} and $\mathbb{L}_p$ its induced ensemble
\begin{equation}
\mathbb{L}_p=\left\{ \beta \Lambda_p(C) : C \mbox{ is a } (t,k,p)-\mbox{code}\right\}
\end{equation}
as in Equation \eqref{eq:ensemble}. According to Theorem \ref{thm:AverageEnsembleGeneral}, it is possible to find $\Lambda_1 = \beta \sigma(\Lambda_0) \in \mathbb{L}_{p}$ of volume $V$ such that, for $p$ sufficiently large
\begin{equation}
\sum_{\mathbf{y}\in\Lambda_1^{\prime}} f(\mathbf{y}) \leq (1-\varepsilon) \frac{r(K)}{n} < \frac{r(K)}{n}.
\end{equation}
Let $v_1, \ldots, v_t$ be linearly independent vectors in $\Lambda_0$ achieving the successive minima, $\left\|\beta \sigma(v_i)\right\|=\beta\lambda_i^K(\Lambda_0)$. We have
$$\sum_{\mathbf{y}\in\Lambda_1^{\prime}} f(\mathbf{y}) \geq \sum_{i=1}^t \sum_{\mu} f(\beta \sigma(\mu v_i)) = r(K) \sum_{i=1}^t f(\beta \sigma(v_i)),$$
where the sum with subscript $\mu$ is over all roots of unity in $K$.
From this we conclude that, for all $i$, $\beta \lambda_i^K(\Lambda_0) > r e^{(1-t)/tn}$ and
$$\frac{1}{n} - \log\left(\frac{\beta^t
\prod_{i=1}^t \lambda_i^K(\Lambda_0)}{r^t}\right) < \frac{1}{n}, \,\, \mbox{ i.e., } \,\, \beta^t \prod_{i=1}^t \lambda_i^K(\Lambda_0) > r^t.$$
Therefore, for the $t$ successive densities (Eq. \eqref{eq:successiveDensities}):
\begin{equation}
\left(\prod_{i=1}^{t} \Delta_i^K\right)^{1/t} = \prod_{i=1}^t \left( \frac{\mbox{vol}(\mathcal{B}_{\beta\lambda_i^K/2})}{V} \right)^{1/t} \geq \frac{ r(K) t \zeta(nt) (1-\varepsilon)}{e(1-e^{-t})2^{nt}}.
\end{equation}
But in this case, we can find $\tilde{\Lambda}$ whose packing density (or $\Delta_1^K$) is greater than or equal to $\frac{ r(K) t \zeta(nt)(1-\varepsilon)}{e(1-e^{-t})2^{nt}}$ (e.g. \cite[Thm. 2.2]{Vance}).
\qed
\section{Balanced Sets of Codes over Matrix Rings}
\label{sec:redRings}
In some contexts, the ``natural'' underlying alphabet in the reduction $\phi_p$ is, rather than the field $\mathbb{F}_p$, the ring $\mathcal{M}_n(\mathbb{F}_p)$ of $n \times n$ matrices with entries in $\mathbb{F}_p$. Although we can identify $\mathcal{M}_n(\mathbb{F}_p)$ with $\mathbb{F}_p^{n^2}$, the identification does not carry enough algebraic structure for our purposes. For instance, we cannot guarantee that the constructed lattices are closed under multiplication by units, which is crucial in order to obtain the full density improvements of these lattices, as in \cite{Vance}. For this reason, we study in this section a version of Theorem \ref{thm:AverageEnsembleGeneral} for codes over matrix rings.
\subsection{An Averaging Bound for Codes over Rings}
Let $\mathcal{R}$ be a finite ring and $\mathcal{R}^{*}$ its units. Denote by $(\mathcal{R}^n)^{*}$ the set of vectors in $\mathcal{R}^n$ such that at least one coordinate is a unit. A linear code $\mathcal{C} \subset \mathcal{R}^n$ is a \textit{free}\footnote{This may differ from the literature, where a linear code over a ring is simply an additive subgroup of $\mathcal{R}^n$. The requirement that a linear code is a \textit{free} module is necessary for Lemma \ref{lem:balancedRings} to hold.} $\mathcal{R}$-submodule of $\mathcal{R}^n$ (with the natural scalar multiplication). Following \cite{Loeliger}, we define balanced sets of codes as follows.
\begin{defi} Consider a non-empty set of codes $\mathcal{C}_b$ of the same cardinality. We say that $\mathcal{C}_b$ is \textit{balanced} if any $\mathbf{x} \in (\mathcal{R}^n)^{*}$ is contained in the same number of codes (say, $L$) of $\mathcal{C}_b$.
\end{defi}
Let $M$ be the cardinality of a code in $\mathcal{C}_b$. From a counting argument, one can see that $M |\mathcal{C}_b| \geq L |(\mathcal{R}^n)^{*}|$. The following lemma shows how to bound averages of functions in $(\mathcal{R}^n)^*$.
\begin{lemma} Let $g : \mathcal{R}^n \to \mathbb{R}^+$ be a function. For a code $\mathcal{C}$, we define $g^*(\mathcal{C}) = \sum_{\mathbf{c} \in \mathcal{C} \cap (\mathcal{R}^n)^*} g(\mathbf{c})$. If $\mathcal{C}_b$ is the set of all codes of rank $k$ then
$$E\left[g^*(\mathcal{C})\right] \leq \frac{\left| \mathcal{R} \right|^k}{\left|(\mathcal{R}^n)^*\right|} g^*(\mathcal{R}^n),$$
where the expectation is with respect to the uniform distribution on $\mathcal{C}_b$.
\label{lem:balancedRings}
\end{lemma}
\begin{proof}
For any balanced set of codes with cardinality $M$, we have
\begin{equation*}
\begin{split}
&E[g^*(\mathcal{C})] = E\left[\sum_{\mathbf{c} \in \mathcal{C} \cap (\mathcal{R}^n)^*} g(\mathbf{c})\right] = E\left[\sum_{\mathbf{x} \in (\mathcal{R}^n)^*} g(\mathbf{x}) \mathbbm{1}_{\mathcal{C}}(\mathbf{x})\right]
\\ &= \sum_{\mathbf{x} \in (\mathcal{R}^n)^*} E\left[ g(\mathbf{x}) \mathbbm{1}_{\mathcal{C}}(\mathbf{x})\right] = \sum_{\mathbf{x} \in (\mathcal{R}^n)^*} g(\mathbf{x}) \frac{L}{|\mathcal{C}_b|} \leq \frac{M}{|(\mathcal{R}^n)^*|} g^*(\mathcal{R}^n).
\end{split}
\end{equation*}
We now need to prove that the set of all codes of rank $k$ is balanced. Let $\mathbf{y}$ be any element of $(\mathcal{R}^n)^{*}$. There exists an invertible linear map $T$ such that $T(\mathbf{y}) = (1,0,\ldots,0) = \mathbf{e}_1$. Since $T$ is rank-preserving, $\mathbf{y} \in \mathcal{C}$ if and only if $\mathbf{e}_1 \in T(\mathcal{C})$, where $\mathcal{C}$ and $T(\mathcal{C})$ have the same rank. This induces a bijection between the codes that contain $\mathbf{y}$ and the codes that contain $\mathbf{e}_1$, proving the statement.
\end{proof}
\subsection{Lipschitz and Hurwitz Lattices}
The quaternion skew-field $\mathbb{H}$ is given by $\mathbb{H} = \left\{ a+bi + (c+di)j: a, b, c, d \in \mathbb{R}\right\}$, with the usual relations $i^2=j^2=-1$ and $ij = -ji$. Vance \cite{Vance} recently proved a Minkowski-Hlawka theorem for lattices in $\mathbb{H}$ over the Hurwitz order. Here we show how to recover a ``coding-theoretic'' version of this result from generalized reductions.
We first explain how to deduce a slightly simpler case, for the Lipschitz order. The ring of \textit{Lipschitz integers} $\mathcal{L} \subset \mathbb{H}$ is the (non-maximal) order $\mathcal{L}=\left\{x+yj : x, y \in \mathbb{Z}[i]\right\}.$ Recall that a quaternion $x+yj$ has the matrix representation
$$\left(\begin{array}{cc} x & -\overline{y} \\ y & \overline{x} \end{array}\right).$$
Let $p$ be a prime that splits in $\mathbb{Z}[i]$ and let $\mathfrak{p}$ be an ideal of $\mathbb{Z}[i]$ above $p$. Let $\pi:\mathbb{Z}[i]\to\mathbb{Z}[i]/\mathfrak{p} \simeq \mathbb{F}_p$ be the reduction modulo $\mathfrak{p}$. We consider the following ``single-letter'' reduction:
\begin{equation*}\phi_p^{\mathbb{H}}:\mathcal{L} \to \mathcal{M}_2(\mathbb{F}_p)
\end{equation*}
\begin{equation*}
\phi_p^{\mathbb{H}}(x+yj)=\left(\begin{array}{cc} \pi(x) & -\pi(\overline{y}) \\ \pi(y) & \pi(\overline{x}) \end{array}\right).
\end{equation*}
We have $\ker \phi_p^{\mathbb{H}} = (p\mathbb{Z}[i])+(p\mathbb{Z}[i])j$. Identifying $\mathbb{H}$ with $\mathbb{R}^4$ in the natural way
$$\psi(a,b,c,d) = a+bi + (c+di)j,$$
we obtain a reduction $\phi_p:\mathbb{Z}^4 \to \mathcal{M}_2(\mathbb{F}_p)$, $\phi_p(x) = \phi_p^{\mathbb{H}}(\psi(x)).$ By abuse of notation, we will also denote by $\phi_p^{\mathbb{H}}$ the reduction applied componentwise to vectors of $\mathcal{L}^m$, i.e.,
$$\phi_p^{\mathbb{H}}(x_1+y_1j,\ldots,x_m+y_mj) = (\phi_p^{\mathbb{H}}(x_1+y_1j),\ldots,\phi_p^{\mathbb{H}}(x_m+y_mj))\in\mathcal{M}_2(\mathbb{F}_p)^m.$$
Let $\mathcal{C} \subset \mathcal{M}_2(\mathbb{F}_p)^m$ be a linear code with cardinality $|\mathcal{C}|$. Then $\Lambda_p^{\mathbb{H}}(\mathcal{C})=(\phi_p^{\mathbb{H}})^{-1}(\mathcal{C})$ is a quaternionic lattice with volume $|\mathcal{C}|^{-1} p^{4m}$. Let $\mathcal{C}_b$ be a balanced set and $\mathbb{L}_p$ the associated lattice ensemble
$$\mathbb{L}_p = \left\{ \beta \Lambda_p^{\mathbb{H}}(\mathcal{C}) : \mathcal{C} \in \mathcal{C}_b \right\},$$
where $\beta = (V/(|\mathcal{C}|^{-1} p^{4m}))^{1/(4m)}$. The following Theorem \ref{thm:matrices} is the analogue of Theorem \ref{thm:2} for Lipschitz lattices. We first need the following lemma.
\begin{lemma}
If $\phi_p^{\mathbb{H}}(x+yj)$ is non-invertible for $x+yj\in\mathcal{L}$, then the squared norm of $x+yj$ is a multiple of $p$.
\label{lem:1}
\end{lemma}
\begin{proof}
If $\det \phi_p^{\mathbb{H}}(x+yj) = 0$, then $\pi(x \overline{x} + y \overline{y}) = 0$, i.e., $\left\| (x,y)\right\|^2 \in \mathfrak{p}$. Since the norm of a Lipschitz quaternion is an integer, and $\mathfrak{p}$ is above $p$, the result follows.
\end{proof}
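Lemma \ref{lem:1} can be checked numerically. The sketch below takes $p=5$, which splits in $\mathbb{Z}[i]$, and verifies that $\det \phi_p^{\mathbb{H}}(x+yj)$ equals the squared norm of $x+yj$ modulo $p$ (the concrete reduction $a+bi \mapsto a+2b \bmod 5$ and the helper names are ours):

```python
import itertools

# Numerical check of Lemma lem:1 (sketch).  Take p = 5, which splits in
# Z[i]; for the prime ideal above 5 generated by 2 + i, reduction modulo
# the ideal sends a + bi to a + 2b (mod 5), since 2^2 = -1 (mod 5).

p, u = 5, 2
assert (u * u + 1) % p == 0

def pi(z):                        # reduction Z[i] -> F_p, z = (a, b) ~ a+bi
    a, b = z
    return (a + u * b) % p

def conj(z):
    return (z[0], -z[1])

def phi(x, y):                    # matrix of the quaternion x + yj over F_p
    return [[pi(x), (-pi(conj(y))) % p],
            [pi(y), pi(conj(x))]]

def det(M):
    return (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p

# For every small Lipschitz quaternion, det phi = |x|^2 + |y|^2 (mod p);
# in particular, a non-invertible image forces p | (squared norm).
rng = range(-2, 3)
for xa, xb, ya, yb in itertools.product(rng, repeat=4):
    x, y = (xa, xb), (ya, yb)
    norm2 = xa * xa + xb * xb + ya * ya + yb * yb
    assert det(phi(x, y)) == norm2 % p
    if det(phi(x, y)) == 0:
        assert norm2 % p == 0
print("Lemma verified on", len(rng)**4, "quaternions")
```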
\begin{thm} Let $\mathcal{C}_b$ be a balanced set of codes with rank $k > m/2$. If $f$ is a semi-admissible function then
\begin{equation}\lim_{p\to\infty} E_{\mathbb{L}_p}\left[\sum_{\mathbf{x} \in \beta \Lambda_p^{\mathbb{H}}(\mathcal{C})}f(\psi(\mathbf{x}))\right] \leq (\zeta(4m)V^{-1}) \int_{\mathbb{R}^{4m}} f(\mathbf{x}) d\mathbf{x}
\label{eq:LipschitzIntegers}
\end{equation}
\label{thm:matrices}
\end{thm}
\begin{proof}
The proof is very similar to that of Theorem \ref{thm:2}. Here we divide the expectation into elements with invertible and non-invertible images (making the change of variables $\mathbf{x} = \beta \mathbf{y}$ to simplify notation), i.e.
$$ \sum_{\mathbf{y} \in \Lambda_p^{\mathbb{H}}(\mathcal{C})}f(\psi(\beta \mathbf{y})) = \sum_{{\mathbf{y} \in \Lambda_p^{\mathbb{H}}(\mathcal{C})} \above 0pt {\phi({\mathbf{y}}) \in (\mathcal{M}_2(\mathbb{F}_p)^m)^{*}}}f(\psi(\beta\mathbf{y}))+\sum_{{\mathbf{y} \in \Lambda_p^{\mathbb{H}}(\mathcal{C})} \above 0pt {\phi({\mathbf{y}}) \notin (\mathcal{M}_2(\mathbb{F}_p)^m)^{*}}}f(\psi(\beta\mathbf{y})).$$
The second term tends to zero as $p\to\infty$ by Lemma \ref{lem:1}, since $f$ is semi-admissible and, normalizing $V=1$,
\begin{equation} \beta\left\| \psi(\mathbf{y})\right\| \geq\beta \sqrt{p } = |\mathcal{C}|^{1/4m} p^{-1/2} = p^{k/m-1/2} \to \infty \mbox{ as } p \to \infty.
\label{eq:boundPHurwitz}
\end{equation}
From Lemma \ref{lem:balancedRings} we conclude that the first term is upper bounded by the right-hand side of \eqref{eq:LipschitzIntegers} as $p \to \infty$.
\end{proof}
For the maximal Hurwitz order
$$\mathcal{H} = \left\{a+bi+cj+d(-1+i+j+ij)/2 : a, b, c, d \in \mathbb{Z}\right\},$$ the theorem follows by considering reductions from left-prime ideals $\mathcal{P} \triangleleft \mathcal{H}$. For any rational prime $p$, there exist isomorphisms $\mathcal{H}/p\mathcal{H} \simeq \mathbb{F}_p(i,j,k) \simeq \mathcal{M}_2(\mathbb{F}_p)$ (e.g. Wedderburn's Theorem \cite[Thm. 6.16, Lem. 9.2.1]{Voight}), and non-invertible elements of $\mathcal{H}/p\mathcal{H}$ have reduced norm (determinant) divisible by $p$. Notice that in this case we obtain a reduction
$$\phi_p : D_4 \to \mathcal{M}_2(\mathbb{F}_p),$$
where $D_4$ is the checkerboard lattice in dimension four \cite[Sec. 7.2]{Splag}. An explicit realization of the ring isomorphism is obtained by setting
$$\phi(1) = \left(\begin{array}{cc}1 & 0\\0 & 1\end{array}\right), \phi(i) = \left(\begin{array}{cc}0 & -1\\1 & 0\end{array}\right) \mbox{ and } \phi(j) = \left(\begin{array}{cc}a & b\\b & -a\end{array}\right),$$
where $a$ and $b$ are two integers such that $a^2+b^2 \equiv -1 \text{ (mod } p)$. Notice that such an isomorphism preserves the residue class of the reduced norm, i.e. $\mathrm{nrd}(x) \equiv \det \phi(x) \text{ (mod }p)$, for any $x \in \mathcal{H}$.
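The defining relations of this isomorphism can be verified mechanically; the sketch below does so for $p = 7$ (the brute-force search for $a$ and $b$ is ours):

```python
# Check of the explicit isomorphism H/pH -> M_2(F_p) above (sketch).
# We find a, b with a^2 + b^2 = -1 (mod p) and verify that the images of
# 1, i, j satisfy the quaternion relations i^2 = j^2 = -1 and ij = -ji.

p = 7

def matmul(A, B):
    return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p,
             (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p],
            [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p,
             (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p]]

def neg(A):
    return [[(-x) % p for x in row] for row in A]

# Such a, b always exist: the residue sets {a^2} and {-1 - b^2} intersect.
a, b = next((a, b) for a in range(p) for b in range(p)
            if (a * a + b * b + 1) % p == 0)

I2   = [[1, 0], [0, 1]]
phiI = [[0, (-1) % p], [1, 0]]
phiJ = [[a, b], [b, (-a) % p]]

assert matmul(phiI, phiI) == neg(I2)                   # i^2 = -1
assert matmul(phiJ, phiJ) == neg(I2)                   # j^2 = -1
assert matmul(phiI, phiJ) == neg(matmul(phiJ, phiI))   # ij = -ji
print("relations hold for p =", p, "with (a, b) =", (a, b))
```

The check that $\phi(j)^2 = -I$ is exactly the condition $a^2+b^2 \equiv -1 \pmod{p}$.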
\section{Algorithmic Effectiveness}
\label{sec:conc}
Theorem \ref{thm:2} holds in the limit $p_j \to \infty$. However, for each $n$, under some conditions it is possible to find \textit{finite} ensembles that contain dense lattices. In the literature, this is referred to as \textit{effectiveness} (e.g. \cite[p. 18]{Splag} and \cite{Zemor}). We show conditions for a family of reductions to be effective. We need the following lemma, which is a special case of a classical result in the Geometry of Numbers (see \cite[p. 141]{Leker}) and is also valid if $\mathcal{B}_r$ is replaced by more general sets. We include a proof here for the sake of completeness.
\begin{lemma}
Let $\mathcal{P}$ be a fundamental region for $\Lambda$, and let $l_0 = \sup_{\mathbf{x}\in\mathcal{P} }\left\| \mathbf{x} \right\|.$ For $r > l_0$, we have
\begin{equation}
(r-l_0)^n V_n \leq V(\Lambda) {\mathcal{N}_{\Lambda}(r)} \leq (r+l_0)^nV_n.
\end{equation}
In particular, we can take $l_0 = \tau(\Lambda)$ to be the covering radius of $\Lambda$.
\label{lem:pointsEnumerator}
\end{lemma}
\begin{proof}
We show the set inclusion
$$\mathcal{B}_{r-l_0} \subset \bigcup_{\mathbf{x}\in\Lambda\cap \mathcal{B}_{r}}(\mathbf{x}+\mathcal{P})\subset \mathcal{B}_{r+l_0}.$$
The lemma then follows from a simple volume calculation of the three sets.
If $\mathbf{y} = \mathbf{x} + \mathbf{p}$, $\mathbf{x} \in \Lambda\cap\mathcal{B}_r$, $\mathbf{p} \in \mathcal{P}$, then $\left\| \mathbf{y} \right\| \leq \left\| \mathbf{x} \right\|+\left\| \mathbf{p}\right\| \leq r+l_0$, proving the second inclusion. For the first inclusion, let $\mathbf{y} \in \mathcal{B}_{r-l_0}$ and write it as $\mathbf{y} = \mathbf{x} + \mathbf{p}$, with $\mathbf{x} \in \Lambda$ and $\mathbf{p} \in \mathcal{P}$ (this is always true since $\mathcal{P}$ is a fundamental region). Then $\left\| \mathbf{x}\right\| \leq \left\| \mathbf{y} \right\| + \left\| \mathbf{p} \right\| \leq r$.
\end{proof}
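For $\Lambda = \mathbb{Z}^2$ the bounds of Lemma \ref{lem:pointsEnumerator} can be checked directly by counting lattice points in a disc (Gauss's circle problem), taking $l_0 = \tau(\mathbb{Z}^2) = \sqrt{2}/2$ (sketch; helper names are ours):

```python
import math

# Sanity check of the point-counting lemma for Lambda = Z^2 (sketch).
# Here V(Lambda) = 1, V_2 = pi, and the covering radius is sqrt(2)/2.

def count_points(r):
    """Number of points of Z^2 inside the closed disc of radius r."""
    n = int(r)
    return sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
               if a * a + b * b <= r * r)

tau = math.sqrt(2) / 2
for r in (5.0, 10.0, 20.0):
    N = count_points(r)
    lower = math.pi * (r - tau) ** 2
    upper = math.pi * (r + tau) ** 2
    assert lower <= N <= upper
    print(r, N)
```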
\subsection{Effective families containing dense lattices}
The next proposition essentially says that if the base lattice $\Lambda$ is sufficiently ``thin'' and the kernel lattices $\Lambda_{p_j}$ are not too sparse, it is possible to bound $p_j$ in terms of the rate of the underlying code. The conditions are very mild (they are achievable, for instance, by $\mathbb{Z}^n$). For convenience, we recall the definition of the Hermite parameter $\gamma(\Lambda) = \lambda_1(\Lambda)/V(\Lambda)^{1/m}$ and define the covering parameter as $\mu(\Lambda) = \tau(\Lambda)/V(\Lambda)^{1/m}$. Recall that $\rho(\Lambda) = \lambda_1(\Lambda)/2$ is the packing radius of $\Lambda$. A lattice satisfying the Minkowski-Hlawka bound has density
$$\Delta = \frac{V_m \rho(\Lambda)^m}{V(\Lambda)} > 1/2^{m-1} \Rightarrow \lambda_1(\Lambda) \geq 2^{1/m}\left(\frac{V(\Lambda)}{V_m}\right)^{1/m},$$
where $V_m = \mbox{vol}\, \mathcal{B}_1$ is the volume of the unit ball in $\mathbb{R}^m$. Recalling that $V_m^{-1/m} \sim \sqrt{m/2\pi e}$, if $V(\Lambda)$ is normalized to one, this implies that the minimum distance of good lattices should scale as
$$\lambda_1(\Lambda) \sim \sqrt{\frac{{m}}{2\pi e}}.$$
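The normalization $V_m^{-1/m} \sim \sqrt{m/2\pi e}$ follows from Stirling's formula applied to $V_m = \pi^{m/2}/\Gamma(m/2+1)$; a quick numerical check (sketch, with an illustrative helper name):

```python
import math

# Check that V_m^(-1/m) ~ sqrt(m / (2*pi*e)) for the unit-ball volume
# V_m = pi^(m/2) / Gamma(m/2 + 1)  (Stirling asymptotics; sketch).

def inv_root_ball_volume(m):
    log_vm = (m / 2) * math.log(math.pi) - math.lgamma(m / 2 + 1)
    return math.exp(-log_vm / m)

for m in (10, 100, 500):
    exact = inv_root_ball_volume(m)
    approx = math.sqrt(m / (2 * math.pi * math.e))
    print(m, exact / approx)     # ratio tends to 1 as m grows

assert abs(inv_root_ball_volume(500)
           / math.sqrt(500 / (2 * math.pi * math.e)) - 1) < 0.02
```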
\begin{prop}
Let the notation be as in Theorem \ref{thm:2} and let $\varepsilon > 0$. Let $\delta = k/n$ be the rate of the underlying codes. Suppose that
\begin{enumerate}
\item[(i)] $p_j^{n\delta/m} \gamma(\Lambda_{p_j}) = \Omega(\sqrt{m})\,\,$ and
\item[(ii)] $p_j^{n/m} = \omega( m \mu(\Lambda)/\gamma(\Lambda_{p_j})).$
\end{enumerate}
If $m$ is sufficiently large, there exists a code with parameters $(n,k,p_j)$ such that the lattice $\Lambda_{p_j}(\mathcal{C})$ has packing density greater than $(1-\varepsilon)/2^{m-1}$.
\label{prop:family_sizes}
\end{prop}
\begin{proof} For simplicity, suppose that the volume of $\Lambda_{p_j}(\mathcal{C})$ equals $1$, which can be achieved by appropriate scaling. In view of the above discussion, let $r = \sqrt{m/2\pi e}$. In the notation of the proof of Theorem \ref{thm:2}, the average lattice point enumerator (Equation \eqref{eq:splitAverage}) becomes:
\begin{equation*}
\begin{split} \mathbb{E}\left[\#\left(\beta\Lambda_{p_j}^{\prime}(C)\cap \mathcal{B}_r\right) \right] &= \mathbb{E}\left[\#\left(\beta\Lambda_{p_j}^{\prime}\cap \mathcal{B}_r\right) \right] + \mathbb{E}\left[\#\left(\beta(\Lambda_{p_j}^{\prime}(C)\backslash\Lambda_p)\cap \mathcal{B}_r\right)\right].
\end{split}
\end{equation*}
The first term of the right-hand side is zero whenever
\begin{equation} \frac{p_j^{k/m} \lambda_1(\Lambda_{p_j})}{V(\Lambda_{p_j})^{1/m}} \geq r.
\label{eq:effectivep}
\end{equation}
The second term satisfies
\begin{equation}
\begin{split}
\mathbb{E}\left[\#\left(\beta(\Lambda_{p_j}^{\prime}(C)\backslash\Lambda_p)\cap \mathcal{B}_r\right)\right] & = \frac{p_j^k-1}{p_j^n-1} \#\left(\beta(\Lambda^{\prime}\backslash\Lambda_{p_j})\cap \mathcal{B}_r\right) \\
\leq & p_{j}^{k-n}(r+\beta \tau(\Lambda))^m \frac{V_m}{\beta^m V(\Lambda)} = V_m r^m \left(1+\frac{\mu(\Lambda)}{r p_j^{(n-k)/m}}\right)^{m}
\end{split}
\label{eq:countPointsEffecitve}
\end{equation}
Requiring the right-hand side of \eqref{eq:countPointsEffecitve} to be $2(1-\varepsilon)$, we obtain a lattice with density (cf. Section \ref{sec:prelim}):
\begin{equation} \Delta \geq \frac{1-\varepsilon}{2^{m-1}}\left(1+\frac{\mu(\Lambda)}{r p_j^{(n-k)/m}}\right)^{-m}.
\label{eq:densityEffective}
\end{equation}
Under the conditions of the proposition, the term in parentheses tends to $1$ as $m \to \infty$.
\end{proof}
\begin{rmk}
Similar conditions hold for the case of quaternionic lattices. In this case, in light of the proof of Theorem \ref{thm:matrices} and Equation \eqref{eq:boundPHurwitz}, condition (i) should be replaced by $p = \Omega( r^{2m/(2k-m)})$.
\end{rmk}
\begin{exe} Let $m = n$, $\Lambda = \mathbb{Z}^n$ and $\phi_p$ be the ``modulo-$p$'' reduction. Conditions (i)-(ii) of Proposition \ref{prop:family_sizes} state that
$$p \geq c_1 m^{1/2\delta} \mbox{ and } p \geq c_2 m^{3/2+\nu},$$
where $c_1,c_2$ are constants and $\nu$ is any small positive number. The optimal rate is $\delta \sim 1/3$, which gives the optimal alphabet-size $p = m^{3/2+\nu}$, for any positive constant $\nu$. This provides an alternative derivation of the result of \cite{Rush1989}.
\end{exe}
The alphabet-size in the above example can be further improved by starting the reductions with a lattice which already has a good density, as shown next.
Let $\Lambda = A_{n}^{l}$ be a Craig's lattice \cite[pp.222-224]{Splag} of rank $m = n$, where $n+1=q$ is a prime. From \cite[Prop. 4.1]{Bachoc92}, a Craig's lattice is similar to the embedding of the ideal $(1-\mu_q)^l \mathbb{Z}[\mu_q]$ in the cyclotomic field $\mathbb{Q}(\mu_q)$, where $\mu_q$ is a primitive $q$-th root of unity. A concrete realization is
$$\Lambda = \frac{1}{\sqrt{q}} \sigma((1-\mu_q)^l \mathbb{Z}[\mu_q]).$$
From this, we have $\Lambda^{*} \sim A_{n}^{n/2-l}$,
$$\frac{\lambda_1(\Lambda)}{V(\Lambda)^{1/n}} \geq \frac{\sqrt{2l}}{(n-1)^{({2l-1})/2n}} \mbox{ and }\frac{\lambda_1(\Lambda^{*})}{V(\Lambda^{*})^{1/n}} \geq \frac{\sqrt{n-2l}}{(n-1)^{({n-2l-1})/2n}}.$$
Following the suggestion in \cite[p. 224]{Splag}, we consider Craig's lattices with parameter $l = \left\lfloor n/2\log (n+1) \right\rceil$ so that, for sufficiently large $n$,
$$\frac{\lambda_1(\Lambda)}{V(\Lambda)^{1/n}} \geq \sqrt{\frac{2\pi}{\log n}}\left(\sqrt{\frac{n}{2\pi e}}+o(1)\right) \mbox{ and } \frac{\lambda_1(\Lambda^*)}{V(\Lambda^*)^{1/n}} \geq \sqrt{e}+o(1).$$
From Banaszczyk's transference bound \cite{Banaszczyk}:
$$\frac{\tau(\Lambda)}{V(\Lambda)^{1/n}} \leq \frac{\sqrt{n}}{2\sqrt{e}+o(1)}.$$
Therefore, using a natural reduction, conditions (i) and (ii) in Proposition \ref{prop:family_sizes} become
$$p \geq c_1 (\sqrt{\log n})^{1/\delta} \mbox{ and } p \geq c_2(n \sqrt{\log n})^{1+\nu},$$
for some constants $c_1,c_2$ and any positive $\nu$. We can further optimize the rate by equalizing the coefficients, from which we obtain
\begin{equation}\delta \sim \frac{\log \log n}{2\log n + \log \log n}.
\label{eq:rate}
\end{equation}
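As an illustrative numerical cross-check (not part of the original derivation), one can verify that with this choice of $\delta$ the two lower bounds on $p$ coincide up to the constants $c_1,c_2$ and the exponent $\nu$:

```python
import math

# With delta = loglog n / (2 log n + loglog n), the two alphabet-size
# conditions (sqrt(log n))^(1/delta) and n*sqrt(log n) have equal
# logarithms: (1/delta)*(loglog n)/2 = log n + (loglog n)/2.
n = 10**6
loglog = math.log(math.log(n))
delta = loglog / (2 * math.log(n) + loglog)
p1 = math.sqrt(math.log(n)) ** (1 / delta)
p2 = n * math.sqrt(math.log(n))
print(delta, p1 / p2)  # the ratio equals 1 up to floating-point error
```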
\begin{cor}
Let $\Lambda = A_n^{\left\lfloor n/2\log(n+1)\right\rceil}$ and let $\phi_p$ be a natural reduction, as described in Example \ref{ex:naturalReduction}. Let $\varepsilon > 0$, $n$ sufficiently large and let the rate be as in \eqref{eq:rate}. There exists a code $\mathcal{C}$ with parameters
$$\left(n,\delta n, O((n \sqrt{\log n})^{1+\nu})\right),$$
for any positive $\nu$, such that $\Lambda_p(\mathcal{C})$ has packing density arbitrarily close to $(1-\varepsilon)/2^{m-1}$.
\end{cor}
We close this subsection with a comment on the absolute family size of a reduction. If the set of \textit{all} $(n,k,p)$ codes is considered, then the search space for a dense lattice is given by the Gaussian binomial:
\begin{equation}
\left[n \above 0pt k \right]_p = \prod_{i=0}^{k-1} \frac{p^{n}-p^{i}}{p^k-p^i} \sim p^{k(n-k)}.
\end{equation}
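As a quick sanity check (an illustrative helper, not part of the original text), the Gaussian binomial can be evaluated exactly as the number of $k$-dimensional subspaces of $\mathbb{F}_p^n$:

```python
from math import prod

def gaussian_binomial(n, k, p):
    """Number of k-dimensional subspaces of F_p^n; grows like p^(k(n-k))."""
    num = prod(p**n - p**i for i in range(k))
    den = prod(p**k - p**i for i in range(k))
    return num // den

print(gaussian_binomial(2, 1, 2))  # 3  (the three lines of F_2^2)
print(gaussian_binomial(4, 2, 3))  # 130
```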
Plugging the bounds for $p$ in Proposition \ref{prop:family_sizes} gives an upper bound on the exhaustive search complexity. In Table \ref{tab:comparison} we provide a comparison of the parameters of some constructions in the literature, in terms of the rank of the base lattice $\Lambda$, the code parameters, the alphabet size and the logarithm of the family size
(contrary to a statement in \cite{Zemor,Moustrou}, the complexity of Rush's construction is $\exp(c n^2 \log n)$ rather than $\exp(n\log n)$, and therefore the gains of averaging over double-circulant codes/cyclotomic lattices are even higher than the ones stated).
\begin{table}
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Construction & $\text{rank}(\Lambda)$ & $(n,k)$ & $p$ & Log family size \\
\hline
Construction A over $\mathbb{Z}$ \cite{Rush1989,Loeliger} & $m = n$ & $(n,\delta n), \delta \sim 1/3$ & $n^{3/2}$ & $n^2 \log n$ \\
\hline
Random double-circulant \cite{Zemor} & $m = n$ & $(n,n/2)$ & $n^{2} \log n$ & $n \log n$ \\
\hline
Cyclotomic lattices \cite{Moustrou} & $m = 2\Phi(l), l \in \mathbb{N}$ & $(2,1)$ & $l^3(\log l)^{\Phi(l)}$ & $m \log m$ \\
\hline
\rule{0pt}{0.9\normalbaselineskip} Craig's reduction & $m = n$ & $(n,\delta n), \delta \sim \frac{\log \log n}{2\log n + \log \log n}$ & $n (\log n)^{1/2}$ & $n^2 \log \log n$ \\[1ex]
\hline
\end{tabular}
\end{center}
\caption{Parameters of different effective families containing dense lattices. The rates $\delta$ are up to lower order terms, and the log family-sizes up to constants and lower order terms. $\Phi(l)$ denotes Euler's totient function.}
\label{tab:comparison}
\end{table}
\subsection{Packing Efficiency}
A cruder measure of goodness, which is suitable for coding applications, is the \textit{packing efficiency} \cite{ELZ05}. Define $\rho_{\text{eff}}(\Lambda) = (V(\Lambda)/V_m)^{1/m}$ as the radius of a ball whose volume is $V(\Lambda)$. The Minkowski bound can be rephrased in terms of \textit{packing efficiency}, as
\begin{equation}
\frac{\rho(\Lambda)}{\rho_{\text{eff}}(\Lambda)} \geq \frac{1}{2}.
\end{equation}
A ``packing-good'' family of lattices is such that its packing efficiency is arbitrarily close to $1/2$. As shown in \cite[Sec. IV]{ELZ05}, it is possible to find families with good asymptotic packing efficiency using Loeliger's construction, provided that $p = O(m^{1/2+\beta})$, for any small positive $\beta$. Similarly, we can show that Craig's lattice constructions can achieve packing efficiency arbitrarily close to $1/2$ with alphabet-size $p = O((\log m)^{1/2+\beta})$.
\section{Final Discussion}
\label{sec:conclusion}
\textbf{Applications.} As observed by Loeliger \cite{Loeliger}, random ensembles of lattices are not only good in terms of packing density, but are also sphere-bound achieving when used as infinite constellations for the AWGN channel. Indeed, Rush \cite{Rush1989} and Loeliger's \cite{Loeliger} Construction A $\mathbb{Z}$-lattices are ubiquitous in applications to information transmission over Gaussian channels and networks. However, for other communication problems, such as information transmission in the presence of fading and multiple antennas, it is desirable to enrich the lattices with some algebraic (multiplicative) structure. To this purpose, several recent works such as \cite{Adaptative,KosiOngOggier,ConsAOggier,Our} present different constructions that attach a linear code to an algebraic lattice, but, to date, there is no unified analysis of such ensembles. The generalized reductions described here provide a method for establishing the ``goodness'' of all such constructions at once. It also provides a simple condition to verify if any new construction is ``good'' (e.g., sphere-bound achieving). This was indeed the initial motivation of the author.
\\[1\baselineskip]
\noindent\textbf{Further Perspectives. }The framework considered in this paper is used to provide simple alternative (coding-theoretic) proofs of, and improvements on, previous refinements of the best packing density. It not only implies the existence of dense lattices, but also of \textit{structured lattices}, with the structure inherited from the underlying reduction.
The question of whether it is possible to improve on the $c n\log \log n/2^{n-1}$ asymptotic behavior of cyclotomic fields by specializing the reductions (or the family of codes) appropriately is still open. Furthermore, all known lower bounds on $\Delta_n$ are of the form $\Delta_n \geq 2^{-n(1+\varepsilon(n))}$, with $\varepsilon(n) = O(\log n / n)$, which improves only marginally on the Minkowski-Hlawka lower bound. According to Gruber \cite[p. 388]{GruberDiscrete}, Hlawka believed that no essential improvement can be made, probably meaning that the exponent $2$ is optimal. Nevertheless, the best known upper bound on $\Delta_n$, due to Kabatianskii and Levenshtein, is of the form $C^{-n}$, where $C \approx 1.51$. Closing this gap is a long-standing open problem.
\section*{Acknowledgments}
The author would like to thank Cong Ling for his constant enthusiasm on the topic and for his suggestions, as well as both reviewers for pointing out inaccuracies in the first version of the manuscript. He also acknowledges Sueli Costa and Jean-Claude Belfiore for fruitful discussions and for hosting him at University of Campinas and at T\'el\'ecom Paristech, where part of this work was developed. This work was supported in part by FAPESP (fellowship 2014/20602-8).
\bibliographystyle{unsrt}
We acknowledge financial support by the Labex ACTION
program (Contract No. ANR-11-LABX-0001-01).
We are grateful to D. Van Labeke for fruitful discussions and help in the implementation of the monomodal modal method.
In the past two decades, several ground-breaking works have extended the domain of quantum theories by
incorporating non-Hermitian (NH) Hamiltonians that are symmetric under the combined parity and time reversal (PT)
operation \cite{ben4}-\cite{book2}. It has been shown that fully consistent quantum theories, with real energy eigenvalues,
a complete set of orthonormal eigenfunctions and unitary time evolution, are possible for such complex
systems by choosing an appropriate Hilbert space with a modified inner product \cite{met1}-\cite{met5}. The importance of
non-Hermitian operators in various branches of theoretical and experimental physics has been realized
extensively and is being acknowledged rigorously in recent years \cite{new}-\cite{new7}. New developments are emerging
every day, not only in the quantum domain, such as PT phase transitions \cite{new4,new5,ant5,ptp,ptp1,ant6} and complex scattering \cite{ss}-\cite{cpa9}, but also in classical systems
such as optics \cite{opt1}-\cite{opt6}. The field of non-Hermitian operators has developed enormously during the last two decades, and some typical works can be found in \cite{pt21}-\cite{ant4}.
The investigation of these fields was further boosted when
some of the predictions of complex quantum theory were verified experimentally \cite{opt3}-\cite{opt6}, and many new windows have been opened up.
Non-linearities in the context of non-Hermitian quantum theories have been studied extensively \cite{nls}-\cite{nls8}. In the present work we consider a one-parameter family of two-dimensional PT-symmetric systems with quadratic non-linearities. Such systems are shown to perform periodic oscillations and to be non-conservative.
We show that such a system can be studied by constructing an appropriate Hamiltonian. We construct
a one particle NH Hamiltonian whose canonical equations in the phase space describe this non-linear system.
This NH Hamiltonian describes a particle with position-dependent mass. Due to the presence of the position-dependent
mass term it is difficult to study this system quantum mechanically with analytical solutions. We construct an appropriate canonical transformation
to map this system to a quasi exactly solvable (QES) system, of the type originally developed in \cite{qes} for the usual quantum theories. For discussions of QES theories in non-Hermitian systems see for instance \cite{qes0, ant4}, and see \cite{qes1} for a review of QES systems. The QES levels can be calculated explicitly
using the Bender-Dunne (BD) polynomial method \cite{bd}; see for instance \cite{ant4, bpm} for the BD polynomial method in non-Hermitian QES systems. The QES levels for the non-linear system are calculated and are shown to be real even though the original Hamiltonian is NH.\\
We now present the plan of the paper. In the next section we describe the one-parameter family of 2-d
non-linear systems and show that they perform periodic motion by analysing the behaviour near the different fixed points. The canonical
transformation is constructed in Section 3 and the QES solutions
are discussed in Section 4. Section 5 contains the conclusions and discussion.
\section{Non-linear system in 2-dimension}
We consider a one-parameter family of 2-d non-linear systems described by the equations,
\begin{eqnarray}
\dot x &= & y + g x y \nonumber \\
\dot y &= & 1-2 x^2-\frac {g y^2} {2}
\label{1}
\end{eqnarray}
where g is a parameter.\\
We observe that this system is symmetric under the transformation $(x,y,t)\longrightarrow (x,-y,-t)$, which is one
form of a combined PT transformation in two dimensions \footnote {In two dimensions the parity transformation can be defined in three
alternative ways: $(x,y)\rightarrow (x,-y),(x,y)\rightarrow (-x,y) \ \ \mbox{and} \ \ (x,y)\rightarrow (y,x) $}.
The Jacobian matrix for this system is written as
\begin{eqnarray}
J= \left [ \begin{array}{c c}
gy & 1+gx \\
-4x &-gy
\end{array}
\right ]
\end{eqnarray}
The eigenvalues of the Jacobian matrix corresponding to different fixed points are tabulated below to discuss the dynamical behavior of the system.
\begin{table}[h]
\centering
\begin{tabular}{|c| |c| |c| |c| |c|}
\hline
Fixed points $ \rightarrow $ & $(\frac{1}{\sqrt{2}},0) $ & $ (-\frac{1}{\sqrt{2}},0) $& $\left(-\frac{1}{g}, \sqrt{\frac{2}{g}\left(1-\frac{2}{g^2}\right)}\right) $ & $ \left(-\frac{1}{g}, -\sqrt{\frac{2}{g}\left(1-\frac{2}{g^2}\right)}\right) $\\
Eigenvalues $\downarrow $ & &&& \\ \hline
$\lambda_1 $ & $ \sqrt{-2\sqrt{2}\left(1+\frac{g}{\sqrt{2}}\right)} $ & $\sqrt{2\sqrt{2}\left(1-\frac{g}{\sqrt{2}}\right)} $& $ \sqrt{2g\left(1-\frac{2}{g^2}\right)} $&
$\sqrt{2g\left(1-\frac{2}{g^2}\right)} $\\
&&&& \\ \hline
$\lambda_2 $& $- \sqrt{-2\sqrt{2}\left(1+\frac{g}{\sqrt{2}}\right)} $& $ -\sqrt{2\sqrt{2}\left(1-\frac{g}{\sqrt{2}}\right)}$ & $-\sqrt{2g\left(1-\frac{2}{g^2}\right)} $ & $ -\sqrt{2g\left(1-\frac{2}{g^2}\right)} $\\
&&&& \\ \hline
\end{tabular}
\caption{Fixed points and Eigenvalues}
\end{table}
From this table we can conclude the following about the system.
(i) Both roots are imaginary for the first fixed point $(\frac{1}{\sqrt{2}},0)$ when $g> -\sqrt{2}$, and the fixed point behaves as a center. On the other hand, for $g<-\sqrt{2}$ both roots are real and one of them is positive, so the fixed point behaves as a saddle point.
(ii) Similarly, the fixed point $(\frac{-1}{\sqrt{2}},0)$ behaves as a saddle point for $g<\sqrt{2}$ and as a center for $g>\sqrt{2}$.
(iii) The roots for the 3rd and 4th fixed points in the above table are equal, and hence both fixed points have the same behavior. For $g>\sqrt{2} $ these fixed points behave like saddle points, while for $0<g<\sqrt{2}$ both roots are imaginary, indicating that the fixed points are centers.
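These classifications can be cross-checked numerically; the sketch below (illustrative, not part of the original analysis) evaluates the eigenvalues of the Jacobian at the first two fixed points for a sample value of $g$:

```python
import numpy as np

def jacobian(x, y, g):
    # Jacobian of the flow (dx/dt, dy/dt) = (y + g*x*y, 1 - 2*x**2 - g*y**2/2)
    return np.array([[g * y, 1.0 + g * x],
                     [-4.0 * x, -g * y]])

g = 0.5  # sample value with -sqrt(2) < g < sqrt(2)
center = np.linalg.eigvals(jacobian(1 / np.sqrt(2), 0.0, g))
saddle = np.linalg.eigvals(jacobian(-1 / np.sqrt(2), 0.0, g))
print(center)  # purely imaginary pair: fixed point (i) is a center
print(saddle)  # real pair of opposite signs: fixed point (ii) is a saddle
```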
The change in stability behaviour of these fixed points can be analysed through the bifurcation
diagram. This diagram is obtained by plotting the eigenvalues as functions of the parameter $g$. The
bifurcation points are identified as those values of $g$ where the eigenvalues change their
sign. It must be noted that fixed point (i) is always of center type for positive values of $g$, which
is why this fixed point does not bifurcate with respect to $g$. However, fixed points (ii), (iii)
and (iv) have different natures depending upon the value of $g$. We have identified the
regions where the natures of the fixed points differ by varying the parameter $g$. These
regions are shown for fixed point (ii) in Fig. 1, and for fixed points (iii) and (iv) in Fig.
2. The stability behaviours of fixed points (iii) and (iv) are similar.
The behaviour of this system in the vicinity of fixed point (i) is shown in Fig. 3. Based on linear
stability analysis, we find that this fixed point behaves as a center when perturbed slightly.
From Fig. 1, we see that fixed point (ii) is a saddle point when $g\le 1.415$. This behaviour is
confirmed through Fig. 4.
We note that the fixed points (iii) and (iv) would exist only for $g \ge \sqrt{2}$. However, for the $ g $
values chosen in the present analysis for QES solutions in Sec. 4, the fixed points (iii) and (iv) do not come into existence.
\vspace{0.1in}
\includegraphics[scale=0.60]{Figure1.pdf} \\
{\it Fig. 1: The stability behaviour of fixed point (ii). This fixed point behaves as a saddle up to
$g =1.415$. After this value, the equilibrium behaves as a center.}\\
\vspace{-0.2in}
\includegraphics[scale=0.60]{Figure2.pdf} \\
{\it Fig. 2: The stability behaviours of fixed points (iii) and (iv) are similar. These fixed points
behave as centers up to $g =1.415$. After this value, they behave as saddles. }\\
\includegraphics[scale=0.70]{ps_3.pdf}
{\it Fig. 3: Phase-space trajectories of the non-linear system, which performs periodic motion
for a certain range of the parameter g. This is due to the fixed point $(\frac{1}{\sqrt{2}},0 ) $, which behaves like a center. In this parameter range of g the corresponding quantum system admits QES solutions, as shown in Sec. 4.} \\
\vspace{-0.2in}
\includegraphics[scale=0.70]{ps_4.pdf}
{\it Fig. 4: Phase-space trajectories of the non-linear system. The trajectories move away from the
equilibrium point $(-\frac{1}{\sqrt{2}},0)$. The system performs periodic motion
for a certain range of the parameter g.} \\
\vspace{0.1cm}
\section{ Canonical transformation}
The non-linear equations in Eq. (\ref{1}) can be obtained from the canonical equations of the 1-d Hamiltonian,
\begin{equation}
H = (1+ g x)\frac{ y^2}{2} + V(x) \equiv (1+ g x)\frac{ p^2}{2} + V(x)
\label{2}
\end{equation}
where $V(x) = \frac{2} {3} x^3 - x $, y is treated as canonical conjugate to x, and henceforth is denoted by p.
This Hamiltonian is then interpreted as a single-particle system with position-dependent mass.
It is straightforward to see that this system is NH, as $ H \neq H^{\dagger} $. However, the
system is invariant under the transformation $(x,p,t)\longrightarrow (x,-p,-t)$, which is analogous to a
PT transformation in the two-dimensional phase space. Due to the position-dependent
mass term it is difficult to find the exact spectrum of the system described by H in Eq. (\ref{2}).
In the next section we find QES solutions for this system.\\
In this section we construct a canonical transformation which maps the system described by the Hamiltonian
in Eq. (\ref{2}) to a QES system.\\
We construct the transformation
\begin{eqnarray}
x &= & \frac{2Q^2-1}{g} \nonumber \\
p &= & \frac{ g }{4} Q^{-1}P
\label{4}
\end{eqnarray}
One can easily verify that these are canonical, i.e.
\begin{equation}
\left[Q,P \right] = i \hbar \quad \mbox{where} \quad \left[x,p \right] = i \hbar;
\label{5}
\end{equation}
Under this canonical transformation the position-dependent-mass Hamiltonian becomes
\begin{equation}
H= \frac{g^2 P^2}{16} + i\hbar\frac{g^2 Q^{-1} P}{16} + V(Q)
\label{6}
\end{equation}
where,
\begin{equation}
V(Q) = \frac{8 a Q^6}{g^3} - \frac{12 a Q^4}{g^3}+ (\frac{6 a }{g^3}-\frac{2b}{g})Q^2+(\frac{-a }{g^3}+\frac{b}{g})
\ \ \mbox{with}\ \ a=\frac{2}{3}, b=1.
\label{7}
\end{equation}
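The transformation can be verified symbolically; the SymPy sketch below (illustrative, not part of the original text) checks that the classical bracket $\{x,p\}_{Q,P}$ equals one and that $V(x(Q))$ reproduces Eq. (\ref{7}):

```python
import sympy as sp

Q, P, g, a, b = sp.symbols('Q P g a b', positive=True)
x = (2 * Q**2 - 1) / g
p = g * P / (4 * Q)

# classical canonicity: Poisson bracket {x, p}_{Q,P} = 1
pb = sp.diff(x, Q) * sp.diff(p, P) - sp.diff(x, P) * sp.diff(p, Q)
print(sp.simplify(pb))  # 1

# the potential in the new variable
V = a * x**3 - b * x
V_expected = (8*a*Q**6/g**3 - 12*a*Q**4/g**3
              + (6*a/g**3 - 2*b/g)*Q**2 + (b/g - a/g**3))
print(sp.simplify(sp.expand(V) - V_expected))  # 0
```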
Note $ H \not= H^{\dag}$, but this NH system is PT ($Q \rightarrow -Q, P \rightarrow P, i \rightarrow -i$) symmetric.
We consider Schroedinger equation $ H \psi = E \psi $ for this system,
\begin{equation}
\frac{d^2 \psi }{dQ^2}- \frac{1}{Q} \frac{d \psi }{dQ} +\frac{16}{g^2}[ E - V(Q)]\psi = 0
\label{8}
\end{equation}
To obtain QES solutions we substitute
\begin{equation}
\psi = e^{-\alpha Q^2- \beta Q^4} \eta (Q)
\label{9}
\end{equation}
in Eq. (\ref{8}) to obtain
\begin{equation}
\eta ''(Q)+[-\frac{1}{Q}-4 (\alpha Q + 2 \beta Q^3)]\eta '(Q) +[\frac{16 E}{g^2} + \frac{16a}{g^5} -\frac{16b}{g^3}+ (4 \alpha^2-8 \beta -\frac{32}{g^2}(\frac{3a}{g^3} -\frac{b}{g}))Q^2 ]\eta (Q) = 0
\label{10}
\end{equation}
where we have chosen $\alpha = - \sqrt{\frac{18a}{g^5}}$ and $\beta = \sqrt{\frac{8a}{g^5}}$ (setting the coefficients of $ Q^4$ and $Q^6 $ to zero).\\
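That these choices of $\alpha$ and $\beta$ indeed remove the $Q^6$ and $Q^4$ terms can be confirmed symbolically (an illustrative check, not part of the original text):

```python
import sympy as sp

Q, E, a, b, g = sp.symbols('Q E a b g', positive=True)
alpha = -sp.sqrt(18 * a / g**5)
beta = sp.sqrt(8 * a / g**5)
f = -alpha * Q**2 - beta * Q**4          # exponent in psi = exp(f) * eta
V = 8*a*Q**6/g**3 - 12*a*Q**4/g**3 + (6*a/g**3 - 2*b/g)*Q**2 + (b/g - a/g**3)

# coefficient of eta after substituting psi = exp(f) eta into the
# Schroedinger equation: f'' + f'^2 - f'/Q + (16/g^2)(E - V)
coeff = sp.expand(sp.diff(f, Q, 2) + sp.diff(f, Q)**2
                  - sp.diff(f, Q)/Q + 16*(E - V)/g**2)
print(sp.simplify(coeff.coeff(Q, 6)), sp.simplify(coeff.coeff(Q, 4)))  # 0 0
```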
\section{Solutions Using BD Polynomial method}
One of the most elegant methods is due to Bender and Dunne \cite{bd},
and uses a set of polynomials orthogonal in the energy variable E. The main idea is that the quantum mechanical
wavefunction of a QES system is the generating function of the orthogonal polynomials $P_n(E)$ in the energy variable.
The quasi-exact solvability is reflected in the vanishing of the norm of all polynomials whose index n exceeds that of a
critical polynomial $P_{J}(E)$. The zeros of the critical polynomial $P_{J}(E)$ are the quasi-exact energy
eigenvalues of the system. The polynomials $P_n(E)$ associated with
QES systems satisfy the necessary and sufficient condition for a family of polynomials of degree n to form a set of orthogonal
polynomials with respect to some weight function $W(E)$, namely a three-term recursion relation of the form
\begin{equation}
P_n(E) = \left ( A_n E +B_n \right ) P_{n-1}(E) + C_n P_{n-2}(E) \, , \ \ \
n\ge 1
\label{rec3term}
\end{equation}
where the coefficients $A_n$, $B_n$ and $C_n$ depend on $n$ but not on $E$, with $A_{n } \not= 0$, $C_1= 0$ and $C_n \not= 0$ for $ n > 1$.\\
When the parameter J takes a positive integer value, corresponding to a QES system, it has been shown that the norm
of $P_n(E)$ vanishes for $n > J$. All polynomials $P_n(E)$ with $n > J$ factor into a product of
two polynomials, one of which is $P_{J}(E)$,
\begin{equation}
P_{n+J}(E) = P_J(E) Q_n(E)
\label{fac1}
\end{equation}
The zeros of the critical polynomial, which is the common factor of all higher polynomials with $n > J$, are the QES energy
eigenvalues of the quantum mechanical system. The corresponding eigenfunctions of the QES system are obtained in a straightforward
manner from these polynomials.
On substituting
\begin{equation}
\eta(Q) = \sum_{n} \frac {Q^{2n}}{2^{2n}\, n!\, (n-1)! } P_n(E)
\label{series}
\end{equation}
into the differential equation for $\eta(Q)$, we obtain the recursion relation satisfied by $P_n(E)$:
\begin{eqnarray}
P_n(E) +\left [\frac{16 E}{g^2} + \frac{16}{g^2}(\frac{a }{g^3}-\frac{b}{g}) + 24 (n-1)\sqrt{\frac {2a}{g^5}}\right ]P_{n-1}(E)\hspace{1in}\nonumber \\
+\frac {32}{g^5}(n-1)(n-2)[4 b g^2 -3 a -2 (2n-3)\sqrt{2 a g^5}] P_{n-2}(E) = 0
\label{rec0}
\end{eqnarray}
with initial conditions $P_{-1} =0 $ and $P_0=1$.\\
Let us introduce a number J such that
\begin{equation}
4bg^2-3a = 2 (2J-1)\sqrt{2 a g^5}.
\end{equation}
We shall see that when J is a positive
integer, this represents a QES system. In terms of $J$,
Eq. (\ref{rec0}) can then be rewritten as
\begin{eqnarray}
P_n (E)+ \left[ \frac{16 E}{g^2} + \frac{16}{g^2}(\frac{a }{g^3}-\frac{b}{g}) + 24 (n-1)\sqrt{\frac{2a}{g^5}}\right]
P_{n-1} (E) \hspace{1in}\nonumber \\
-128(n-1)(n-2)(n-J-1) \sqrt{\frac{2a}{g^5}} P_{n-2} (E) =0
\label{rec1}
\end{eqnarray}
From this equation it is clear that when $J$ is a positive integer the coefficient of $P_{n-2}$ vanishes at $n=J+1$, so the
recursion relation reduces there to a two-term relation. Thus, when $J$ is a positive integer, we have a QES system. \\
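This decoupling can be made concrete with a short SymPy sketch (illustrative, not part of the original text): for the sample values $a=2/3$, $g=1/2$ and $J=1$, with $b$ fixed by the QES condition, every $P_n$ generated by the recursion (\ref{rec1}) vanishes at the zero of the critical polynomial $P_1$:

```python
import sympy as sp

E = sp.symbols('E')
a, g, J = sp.Rational(2, 3), sp.Rational(1, 2), 1
# fix b from the QES condition 4*b*g**2 - 3*a = 2*(2*J - 1)*sqrt(2*a*g**5)
b = (3*a + 2*(2*J - 1)*sp.sqrt(2*a*g**5)) / (4*g**2)
s = sp.sqrt(2*a/g**5)

P = {-1: sp.Integer(0), 0: sp.Integer(1)}
for n in range(1, 5):
    B = 16*E/g**2 + 16*(a/g**3 - b/g)/g**2 + 24*(n - 1)*s
    P[n] = sp.expand(-B*P[n - 1] + 128*(n - 1)*(n - 2)*(n - J - 1)*s*P[n - 2])

E1 = -a/g**3 + b/g  # zero of the critical polynomial P_1
print([sp.simplify(P[n].subs(E, E1)) for n in range(1, 5)])  # [0, 0, 0, 0]
```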
These recursion relations generate a set of orthogonal polynomials of which the first few terms are
\begin{eqnarray}
P_1 &=&- \frac{16 E}{g^2} - \frac{16 a}{g^5}+\frac{16 b}{g^3}\nonumber \\
P_2 &=& \frac{E^2}{g^4} + [ \frac{2a}{g^5}-\frac{2b}{g^3}+\frac{3}{2}\sqrt{\frac{2a }{g^5}}]\frac{E}{g^2}+(\frac{a} {g^5}-\frac{b} {g^3}+\frac{3}{2}\sqrt{\frac{2a }{g^5}})(\frac{a} {g^5}-\frac{b} {g^3})
\nonumber \\ &&\hspace{2in}
\nonumber \\
\label{pp}
\end{eqnarray}
It is easily seen that when $J$ is a positive integer, the exact energy
eigenvalues of the
first $J$ levels are known. Further, when $J$ is a positive integer, these
polynomials exhibit
the factorization property given by
\begin{equation}
P_{n+J}(E) = P_J(E) Q_n(E)
\label{fac}
\end{equation}
where the polynomial set $Q_n(E)$ corresponds to the non-exact spectrum
of this problem, with $Q_0 (E) =1$.
For example, for $J=1$, $P_{n+1}$ factorizes into $P_1$ and $Q_n$,
and the corresponding QES energy level (which in this case is the ground
state) is obtained by putting
$P_1 =0$, i.e.
\begin{equation}
E_1= -\frac{a}{g^3} + \frac {b}{g}
\label{e1}
\end{equation}
Similarly for $J=2$, $P_{n+2}$ will be factorized into $P_2$ and $Q_n$ and
the corresponding energy levels are obtained from $P_2$,
\begin{eqnarray}
E_1 &=& -\frac{a}{g^3} + \frac {b}{g}
\nonumber \\
E_2 &=& -\frac{a}{g^3} + \frac {b}{g} -\frac{3}{2} \sqrt{\frac{2a}{g}}
\label{e2}
\end{eqnarray}
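These roots can be confirmed symbolically (an illustrative SymPy check, not part of the original text): the quadratic $P_2$ of Eq. (\ref{pp}) factors into two linear pieces whose zeros are exactly $E_1$ and $E_2$:

```python
import sympy as sp

E, a, b, g = sp.symbols('E a b g', positive=True)
u = a/g**5 - b/g**3
s = sp.sqrt(2*a/g**5)
P2 = E**2/g**4 + (2*u + sp.Rational(3, 2)*s)*E/g**2 + (u + sp.Rational(3, 2)*s)*u

# P_2 factors into two linear pieces ...
factored = (E/g**2 + u) * (E/g**2 + u + sp.Rational(3, 2)*s)
print(sp.simplify(P2 - factored))  # 0

# ... whose zeros are E_1 and E_2
E1 = -a/g**3 + b/g
E2 = -a/g**3 + b/g - sp.Rational(3, 2)*sp.sqrt(2*a/g)
print(sp.simplify(P2.subs(E, E1)), sp.simplify(P2.subs(E, E2)))  # 0 0
```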
For $J=3$, one needs to solve $P_3(E) =0 $, which is a cubic equation in $E$. We therefore calculate the QES energy levels up to
$J=10$ numerically; they are presented in Table 2. QES solutions exist for $ g \le 0.58865 $.
\begin{table}[h]
\centering
\begin{tabular}{|c||c||c|c|}
\hline
S.N & J & g & E \\
& & \\
\hline
$\{1\}$ &1 & 0.58865 & $P_1(E)=0$, $E_{1}=-0.0816416$ \\
\hline
$\{2\}$&2 & 0.477122 &$P_2(E)=0$, $E_{1}=-6.54954$, $E_{2}=-4.04201$ \\
\hline
$\{3\}$&3 & 0.417704&$P_3(E)=0$, $E_{1}=-22.2131$, $E_{2}=-19.0432$ ,$E_{3}=-16.0817$ \\
\hline
$\{4\}$&4 & 0.378671 &$P_4(E)=0$, $E_{1}=-18.3508$, $E_{2}=-15.1846$, $E_{3}=-12.2638$,\\
& & &\ \ \ $E_{4}=-9.63704$ \\
\hline
$\{5\}$&5& 0.350273 & $P_5(E)=0$, $E_{1}=-24.8198$, $E_{2}=-21.4534$ ,$E_{3}=-18.2845$,\\
& & &\ \ \ $E_{4}=-15.3393$, $E_{5}=-12.6578$ \\
\hline
$\{6\}$&6& 0.328305 &$P_6(E)=0$, $E_{1}=-31.5698$, $E_{2}=-28.0402$ ,$E_{3}=-24.6776$,\\
& & &\ \ \ $E_{4}=-21.4987$, $E_{5}=-18.5262$, $E_{6}=-15.7939$\\
\hline
$\{7\}$&7& 0.310597 & $P_7(E)=0$, $E_{1}=-38.5594$, $E_{2}=-34.8914$ ,$E_{3}=-31.3689$,\\
& & &\ \ \ $E_{4}=-28.0037$, $E_{5}=-24.8105$, $E_{6}=-21.8095$, \\
& & &\ \ \ $E_{7}=-19.0297$\\
\hline
$\{8\}$&8& 0.295893 & $P_8(E)=0$, $E_{1}=-45.7593$, $E_{2}=-41.9705$ ,$E_{3}=-38.3112$,\\
& & &\ \ \ $E_{4}=-34.7902$, $E_{5}=-31.4180$, $E_{6}=-28.2082$, \\
& & &\ \ \ $E_{7}=-25.1785$, $E_{8}=-22.3542$ \\
\hline
$\{9\}$&9 & 0.283408 &$P_9(E)=0$, $E_{1}=-53.1465$, $E_{2}=-49.2503$ ,$E_{3}=-45.4712$,\\
& & &\ \ \ $E_{4}=-41.8159$, $E_{5}=-38.2924$, $E_{6}=-34.9105$, \\
& & &\ \ \ $E_{7}=-31.6825$, $E_{8}=-28.6244$, $E_{9}=-25.7584$ \\
\hline
$\{10\}$&10 & 0.272619 & $P_{10}(E)=0$, $E_{1}=-60.7037$, $E_{2}=-56.7107$ ,$E_{3}=-38.3112$,\\
& & &\ \ \ $E_{4}=-49.051$, $E_{5}=-45.3961$, $E_{6}=-41.8674$, \\
& & &\ \ \ $E_{7}=-38.4739$, $E_{8}=-35.2268$, $E_{9}=-32.1407$,\\
& & & \ \ \ $E_{10}=-29.2352$ \\
\hline
\end{tabular}
\caption {QES energy eigenvalues for different values of the parameters J and g, for the fixed values $a =\frac{2}{3}$, $b=1$.}
\end{table}
\section {Conclusions}
In this work a 2-d system with quadratic non-linearities has been realized as a PT-symmetric system in 2-d phase space. We have
constructed an equivalent Hamiltonian whose canonical equations describe this non-linear system. This
Hamiltonian represents a one-particle system in 1-d with position-dependent mass, which is extremely difficult to solve
directly. We further constructed an appropriate canonical transformation which maps this system to a QES system.
Using the Bender-Dunne polynomial method, we have calculated the first few QES levels explicitly: the QES energy levels for $J=1$ to $10$ are computed numerically and presented in tabular form. We can see from Table 2 that higher values of $J$ imply smaller values of $g$. The maximum value of $g$ for which a QES solution exists is $g=0.58865$ (approximately). In this range of $g$ all QES solutions are real, indicating that the system does not possess any exceptional points and always remains in the unbroken PT phase. Further, note that for different ranges of the parameter $g$ we obtain various classical solutions, as discussed in Sec. 2, whereas QES quantum solutions exist only for small values of $g$. The phase-space trajectories (Figs. 3 and 4) are drawn in the ranges of $g$ where the corresponding quantum system admits QES solutions. It is interesting to observe that the classical non-linear system performs regular periodic motion, and the first fixed point $(\frac{1}{\sqrt{2}},0)$ behaves as a center, in the parameter range $0.272<g<0.588 $ (approx.) [see Fig. 3] for which the corresponding quantum system admits QES solutions. The 3rd and 4th fixed points listed in Table 1 also behave as centers in the same parameter range of $g$.
The minimum von Neumann entropy at the output of a quantum communication channel can be crucial for the determination of its classical communication capacity \cite{holevo2013quantum}.
Most communication schemes encode the information into pulses of electromagnetic radiation, that travels through metal wires, optical fibers or free space and is unavoidably affected by attenuation and noise.
Gauge-covariant quantum Gaussian channels \cite{holevo2013quantum} provide a faithful model for these effects, and are characterized by the property of preserving the thermal states of electromagnetic radiation.
It has been recently proved \cite{mari2014quantum,giovannetti2015solution,giovannetti2015majorization,holevo2015gaussian} that the output entropy of any gauge-covariant Gaussian quantum channel is minimized when the input state is the vacuum.
This result has permitted the determination of the classical information capacity of this class of channels \cite{giovannetti2014ultimate}.
However, it is not sufficient to determine the capacity region of the quantum Gaussian broadcast channel \cite{guha2007classicalproc,guha2007classical} and the triple trade-off region of the quantum-limited Gaussian attenuator \cite{wilde2012quantum,wilde2012information}.
Indeed, solving these problems would require to prove that Gaussian thermal input states minimize the output von Neumann entropy of a quantum-limited Gaussian attenuator among all the states with a given entropy.
This still unproven result would follow from a stronger conjecture, the Entropy Photon-number Inequality (EPnI) \cite{guha2008capacity,guha2008entropy}, stating that Gaussian states minimize the output von Neumann entropy of a beamsplitter among all the couples of input states, each one with a given entropy.
So far, it has been possible to prove only a weaker version of the EPnI, the quantum Entropy Power Inequality \cite{konig2014entropy,konig2013limits,de2014generalization,de2015multimode}, that provides a lower bound to the output entropy of a beamsplitter, but is never saturated.
Actually, Refs. \cite{mari2014quantum,giovannetti2015majorization,holevo2015gaussian} do not only prove that the vacuum minimizes the output entropy of any gauge-covariant quantum Gaussian channel.
They also prove that the output generated by the vacuum majorizes the output generated by any other state, i.e. applying a convex combination of unitary operators to the former, we can obtain any of the latter states.
This paper goes in the same direction, and proves a generalization of this result valid for any one-mode gauge-covariant quantum Gaussian channel.
Our result states that the output generated by any quantum state is majorized by the output generated by the state with the same spectrum diagonal in the Fock basis and with decreasing eigenvalues, i.e. by the state which is {\it passive} \cite{pusz1978passive,lenard1978thermodynamical,gorecki1980passive} with respect to the number operator (see \cite{vinjanampathy2015quantum,goold2015role,binder2015quantum} for the use of passive states in the context of quantum thermodynamics).
This can be understood as follows: among all the states with a given spectrum, the one diagonal in the Fock basis with decreasing eigenvalues produces the less noisy output.
Since all the states with the same spectrum have the same von Neumann entropy, our result implies that the input state with given entropy minimizing the output entropy is certainly diagonal in the Fock basis, and then reduces the minimum output entropy quantum problem to a problem on discrete classical probability distributions.
Thanks to the classification of one-mode Gaussian channels in terms of unitary equivalence \cite{holevo2013quantum}, we extend the result to the channels that are not gauge-covariant with the exception of the singular cases $A_2)$ and $B_1)$, for which we show that an optimal basis does not exist.
We also point out that the classical channel acting on discrete probability distributions associated to the restriction of the quantum-limited attenuator to states diagonal in the Fock basis coincides with the channel already known in the probability literature under the name of thinning.
First introduced by R{\'e}nyi \cite{renyi1956characterization} as a discrete analogue of the rescaling of a continuous random variable, the thinning has been recently involved in discrete versions of the central limit theorem \cite{harremoes2007thinning,yu2009monotonic,harremoes2010thinning}
and of the Entropy Power Inequality \cite{yu2009concavity,johnson2010monotonicity}.
In particular, the Restricted Thinned Entropy Power Inequality \cite{johnson2010monotonicity} states that the Poissonian probability distribution minimizes the output Shannon entropy of the thinning among all the ultra log-concave input probability distributions with a given Shannon entropy.
The techniques of this proof could be useful to prove that Gaussian thermal states minimize the output von Neumann entropy of a quantum-limited attenuator among all the input states diagonal in the Fock basis with a given von Neumann entropy (but without the ultra log-concavity constraint).
Then, thanks to the main result of this paper it would automatically follow that Gaussian thermal states minimize the output von Neumann entropy of a quantum-limited attenuator among all the input states with a given von Neumann entropy, not necessarily diagonal in the Fock basis.
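The thinning map mentioned above can be sketched numerically. The following is a minimal illustration (not part of the formal development) using the standard R{\'e}nyi definition, in which each of the $n$ ``units'' of the outcome $n$ is kept independently with probability $\lambda$; it checks the classical fact that thinning maps a Poisson distribution of mean $\mu$ to one of mean $\lambda\mu$ (up to a negligible truncation error):

```python
import numpy as np
from math import comb, exp, factorial

def thin(p, lam):
    # Renyi thinning T_lam: each of the n "units" of the outcome n is kept
    # independently with probability lam, giving a binomial mixture.
    q = np.zeros(len(p))
    for m, pm in enumerate(p):
        for k in range(m + 1):
            q[k] += pm * comb(m, k) * lam**k * (1 - lam)**(m - k)
    return q

# Thinning maps Poisson(mu) to Poisson(lam * mu).
mu, lam, N = 2.0, 0.4, 60
poisson = np.array([exp(-mu) * mu**k / factorial(k) for k in range(N)])
target = np.array([exp(-lam * mu) * (lam * mu)**k / factorial(k) for k in range(N)])
assert np.allclose(thin(poisson, lam), target, atol=1e-10)
```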
The paper is organized as follows.
In Section \ref{defs} we introduce Gaussian quantum channels, and in Section \ref{secmaj} majorization.
The Fock rearrangement is defined in Section \ref{rearrangement}, while Section \ref{secoptimal} defines the notion of Fock optimality and proves some of its properties.
The main theorem is proved in Section \ref{mainproof}, and the case of a generic not gauge-covariant Gaussian channel is treated in Section \ref{generic}.
Finally, Section \ref{secthinning} links our result to the thinning operation.
\section{Basic definitions}\label{defs}
In this section we recall some basic definitions and facts on Gaussian quantum channels.
The interested reader can find more details in the books \cite{holevo2013quantum,barnett2002methods,holevo2011probabilistic}.
\begin{defn}[Trace norm]
The trace norm of an operator $\hat{X}$ is
\begin{equation}
\left\|\hat{X}\right\|_1:=\mathrm{Tr}\sqrt{\hat{X}^\dag\hat{X}}\;.
\end{equation}
If $\left\|\hat{X}\right\|_1$ is finite, we say that $\hat{X}$ is a trace-class operator.
\end{defn}
\begin{defn}[Quantum operation]
A quantum operation is a linear completely positive map on trace-class operators continuous in the trace norm.
\end{defn}
\begin{rem}
A trace-preserving quantum operation is a quantum channel.
\end{rem}
We consider the Hilbert space $\mathcal{H}$ of the harmonic oscillator, i.e. the irreducible representation of the canonical commutation relation
\begin{equation}\label{CCR}
\left[\hat{a},\;\hat{a}^\dag\right]=\hat{\mathbb{I}}\;.
\end{equation}
$\mathcal{H}$ has a countable orthonormal basis
\begin{equation}
\{|n\rangle\}_{n\in\mathbb{N}}\;,\qquad \langle m|n\rangle=\delta_{mn}
\end{equation}
called the Fock basis, on which the ladder operators act as
\begin{subequations}
\begin{eqnarray}\label{acta}
\hat{a}\;|n\rangle &=& \sqrt{n}\;|n-1\rangle\\
\hat{a}^\dag\;|n\rangle &=& \sqrt{n+1}\;|n+1\rangle\;.\label{actadag}
\end{eqnarray}
\end{subequations}
We can define a number operator
\begin{equation}
\hat{N}=\hat{a}^\dag\hat{a}\;,
\end{equation}
satisfying
\begin{equation}
\hat{N}\;|n\rangle=n\;|n\rangle\;.
\end{equation}
\begin{defn}[Hilbert-Schmidt norm]
The Hilbert-Schmidt norm of an operator $\hat{X}$ is
\begin{equation}
\left\|\hat{X}\right\|_2^2:=\mathrm{Tr}\left[\hat{X}^\dag\;\hat{X}\right]\;.
\end{equation}
\end{defn}
\begin{defn}[Hilbert-Schmidt dual]\label{dagdef}
Let $\Phi$ be a linear map acting on trace class operators and continuous in the trace norm.
Its Hilbert-Schmidt dual $\Phi^\dag$ is the map on bounded operators continuous in the operator norm defined by
\begin{equation}
\mathrm{Tr}\left[\hat{Y}\;\Phi\left(\hat{X}\right)\right]=\mathrm{Tr}\left[\Phi^\dag\left(\hat{Y}\right)\;\hat{X}\right]
\end{equation}
for any trace-class operator $\hat{X}$ and any bounded operator $\hat{Y}$.
\end{defn}
\begin{defn}[Characteristic function]
The characteristic function of a trace-class operator $\hat{X}$ is
\begin{equation}
\chi_{\hat{X}}(z):=\mathrm{Tr}\left[e^{z\;\hat{a}^\dag-\bar{z}\;\hat{a}}\;\hat{X}\right]\;,\qquad z\in\mathbb{C}\;,
\end{equation}
where $\bar{z}$ denotes the complex conjugate.
\end{defn}
It is possible to prove that any trace-class operator is completely determined by its characteristic function.
\begin{thm}[Noncommutative Parseval relation]
The characteristic function provides an isometry between the Hilbert-Schmidt product and the scalar product in $L^2(\mathbb{C})$, i.e. for any two trace-class operators $\hat{X}$ and $\hat{Y}$,
\begin{equation}\label{Trint}
\mathrm{Tr}\left[\hat{X}^\dag\;\hat{Y}\right]=\int_{\mathbb{C}}\overline{\chi_{\hat{X}}(z)}\;\chi_{\hat{Y}}(z)\;\frac{d^2z}{\pi}\;.
\end{equation}
\begin{proof}
See e.g. Theorem 5.3.3 of \cite{holevo2011probabilistic}.
\end{proof}
\end{thm}
\begin{defn}[Gaussian gauge-covariant quantum channel]\label{gaugecovch}
A gauge-covariant quantum Gaussian channel with parameters $\lambda\geq0$ and $N\geq0$ can be defined by its action on the characteristic function: for any trace-class operator $\hat{X}$,
\begin{equation}\label{chiPhi}
\chi_{\Phi\left(\hat{X}\right)}(z)=e^{-|\lambda-1|\left(N+\frac{1}{2}\right)|z|^2}\;\chi_{\hat{X}}\left(\sqrt{\lambda}\;z\right)\;.
\end{equation}
The channel is called quantum-limited if $N=0$.
If $0\leq\lambda\leq1$, it is a quantum-limited attenuator, while if $\lambda\geq1$ it is a quantum-limited amplifier.
\end{defn}
\begin{lem}
Any gauge-covariant quantum Gaussian channel is also continuous in the Hilbert-Schmidt norm.
\begin{proof}
Easily follows from its action on the characteristic function \eqref{chiPhi} and the isometry \eqref{Trint}.
\end{proof}
\end{lem}
\begin{lem}\label{comp}
Any gauge-covariant quantum Gaussian channel can be written as a quantum-limited amplifier composed with a quantum-limited attenuator.
\begin{proof}
See \cite{garcia2012majorization,giovannetti2015solution,mari2014quantum,holevo2015gaussian}.
\end{proof}
\end{lem}
\begin{lem}\label{attdag}
The Hilbert-Schmidt dual of the quantum-limited attenuator of parameter $0<\lambda\leq1$ is $1/\lambda$ times the quantum-limited amplifier of parameter $\lambda'=1/\lambda\geq1$, hence its restriction to trace-class operators is continuous in the trace norm.
\begin{proof}
Easily follows from the action of the quantum-limited attenuator and amplifier on the characteristic function \eqref{chiPhi} and formula \eqref{Trint}; see also \cite{ivan2011operator}.
\end{proof}
\end{lem}
\begin{lem}
The quantum-limited attenuator of parameter $0\leq\lambda\leq1$ admits the explicit representation
\begin{equation}\label{kraus}
\Phi_\lambda\left(\hat{X}\right)=\sum_{l=0}^\infty\frac{(1-\lambda)^l}{l!}\;\lambda^\frac{\hat{N}}{2}\;\hat{a}^l\;\hat{X}\;\left(\hat{a}^\dag\right)^l\;\lambda^\frac{\hat{N}}{2}
\end{equation}
for any trace-class operator $\hat{X}$.
Then, if $\hat{X}$ is diagonal in the Fock basis, $\Phi_\lambda\left(\hat{X}\right)$ is also diagonal in the same basis for any $0\leq\lambda\leq1$.
\begin{proof}
The channel $\Phi_\lambda$ admits the Kraus decomposition (see Eq. (4.5) of \cite{ivan2011operator})
\begin{equation}
\Phi_\lambda\left(\hat{X}\right)=\sum_{l=0}^\infty\hat{B}_l\;\hat{X}\;\hat{B}_l^\dag\;,
\end{equation}
where
\begin{equation}
\hat{B}_l=\sum_{m=0}^\infty\sqrt{\binom{m+l}{l}}\;(1-\lambda)^\frac{l}{2}\;\lambda^\frac{m}{2}\;|m\rangle\langle m+l|\;,\qquad l\in\mathbb{N}\;.
\end{equation}
Using \eqref{acta}, we have
\begin{equation}
\hat{a}^l=\sum_{m=0}^\infty\sqrt{l!\;\binom{m+l}{l}}\;|m\rangle\langle m+l|\;,
\end{equation}
and the claim easily follows.
\end{proof}
\end{lem}
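As a sanity check on the Kraus decomposition above, one can build the operators $\hat{B}_l$ on a truncated Fock space. This is a minimal numerical sketch (the truncation is exact for inputs supported on the first $d$ Fock states, since each $\hat{B}_l$ only lowers the photon number); it verifies completeness of the Kraus family and that a Fock-diagonal input stays Fock-diagonal:

```python
import numpy as np
from math import comb

d, lam = 8, 0.3  # truncated Fock-space dimension and attenuation parameter

def kraus_ops(d, lam):
    # B_l = sum_m sqrt(binom(m+l, l)) (1-lam)^{l/2} lam^{m/2} |m><m+l|
    ops = []
    for l in range(d):
        B = np.zeros((d, d))
        for m in range(d - l):
            B[m, m + l] = np.sqrt(comb(m + l, l)) * (1 - lam)**(l / 2) * lam**(m / 2)
        ops.append(B)
    return ops

B = kraus_ops(d, lam)
# Completeness: sum_l B_l^dag B_l = identity (exact on the truncated space).
assert np.allclose(sum(b.T @ b for b in B), np.eye(d))

# A Fock-diagonal input stays Fock-diagonal, and the trace is preserved.
rng = np.random.default_rng(0)
rho = np.diag(rng.dirichlet(np.ones(d)))
out = sum(b @ rho @ b.T for b in B)
assert np.allclose(out, np.diag(np.diag(out)))
assert np.isclose(np.trace(out), 1.0)
```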
\begin{lem}
The quantum-limited attenuator of parameter $\lambda=e^{-t}$ with $t\geq0$ can be written as the exponential of a Lindbladian $\mathcal{L}$, i.e. $\Phi_\lambda=e^{t\mathcal{L}}$, where
\begin{equation}\label{lindblad}
\mathcal{L}\left(\hat{X}\right)=\hat{a}\;\hat{X}\;\hat{a}^\dag-\frac{1}{2}\hat{a}^\dag\hat{a}\;\hat{X}-\frac{1}{2}\hat{X}\;\hat{a}^\dag\hat{a}
\end{equation}
for any trace-class operator $\hat{X}$.
\begin{proof}
Putting $\lambda=e^{-t}$ into \eqref{kraus} and differentiating with respect to $t$ we have for any trace-class operator $\hat{X}$
\begin{equation}
\frac{d}{dt}\Phi_{\lambda}\left(\hat{X}\right)=\mathcal{L}\left(\Phi_{\lambda}\left(\hat{X}\right)\right)\;,
\end{equation}
where $\mathcal{L}$ is the Lindbladian given by \eqref{lindblad}.
\end{proof}
\end{lem}
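One can also verify numerically that exponentiating this Lindbladian reproduces the explicit representation \eqref{kraus}. The following sketch works on a truncated Fock space, where the restriction is exact because the attenuator never raises the photon number; the superoperator is built with the row-major identity $\mathrm{vec}(AXB)=(A\otimes B^T)\,\mathrm{vec}(X)$ and exponentiated with \texttt{scipy.linalg.expm}:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

d, t = 6, 0.7
lam = np.exp(-t)

a = np.diag(np.sqrt(np.arange(1, d)), 1)   # truncated annihilation operator
n_op = a.T @ a                              # number operator
I = np.eye(d)

# Superoperator of L(X) = a X a^dag - (1/2){a^dag a, X} on row-major vec(X).
L = np.kron(a, a) - 0.5 * np.kron(n_op, I) - 0.5 * np.kron(I, n_op)

def attenuator(X, lam):
    # Explicit representation of Phi_lambda from the lemma above.
    sq = np.diag(lam ** (np.arange(d) / 2))
    out = np.zeros_like(X)
    for l in range(d):
        al = np.linalg.matrix_power(a, l)
        out += ((1 - lam)**l / factorial(l)) * sq @ al @ X @ al.T @ sq
    return out

rng = np.random.default_rng(1)
G = rng.standard_normal((d, d))
rho = G @ G.T / np.trace(G @ G.T)           # random real density matrix

lhs = (expm(t * L) @ rho.reshape(-1)).reshape(d, d)
assert np.allclose(lhs, attenuator(rho, lam))
```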
\begin{lem}
Let
\begin{equation}\label{Xdiag}
\hat{X}=\sum_{k=0}^\infty x_k\;|\psi_k\rangle\langle\psi_k|\;,\quad\langle\psi_k|\psi_l\rangle=\delta_{kl}\;,\quad x_0\geq x_1\geq\ldots
\end{equation}
be a self-adjoint Hilbert-Schmidt operator.
Then, the projectors
\begin{equation}\label{Pin}
\hat{\Pi}_n=\sum_{k=0}^n |\psi_k\rangle\langle\psi_k|
\end{equation}
satisfy
\begin{equation}
\mathrm{Tr}\left[\hat{\Pi}_n\;\hat{X}\right]=\sum_{k=0}^n x_k\;.
\end{equation}
\begin{proof}
Easily follows from an explicit computation.
\end{proof}
\end{lem}
\begin{lem}[Ky Fan's Maximum Principle]\label{sumeig}
Let $\hat{X}$ be a positive Hilbert-Schmidt operator with eigenvalues $\{x_k\}_{k\in\mathbb{N}}$ in decreasing order, i.e. $x_0\geq x_1\geq\ldots\;$,
and let $\hat{P}$ be a projector of rank $n+1$.
Then
\begin{equation}\label{TrPiX}
\mathrm{Tr}\left[\hat{P}\;\hat{X}\right]\leq\sum_{k=0}^n x_k\;.
\end{equation}
\begin{proof}
(See also \cite{bhatia2013matrix,fan1951maximum}).
Let us diagonalize $\hat{X}$ as in \eqref{Xdiag}.
The proof proceeds by induction on $n$.
Let $\hat{P}$ have rank one.
Since
\begin{equation}
\hat{X}\leq x_0\;\hat{\mathbb{I}}\;,
\end{equation}
we have
\begin{equation}
\mathrm{Tr}\left[\hat{P}\;\hat{X}\right]\leq x_0\;.
\end{equation}
Suppose now that \eqref{TrPiX} holds for any rank-$n$ projector.
Let $\hat{P}$ be a projector of rank $n+1$.
Its support then certainly contains a vector $|\psi\rangle$ orthogonal to the support of $\hat{\Pi}_{n-1}$, that has rank $n$.
We can choose $|\psi\rangle$ normalized (i.e. $\langle\psi|\psi\rangle=1$), and define the rank-$n$ projector
\begin{equation}
\hat{Q}=\hat{P}-|\psi\rangle\langle\psi|\;.
\end{equation}
By the induction hypothesis on $\hat{Q}$,
\begin{equation}\label{ineqQpsi}
\mathrm{Tr}\left[\hat{P}\,\hat{X}\right]=\mathrm{Tr}\left[\hat{Q}\,\hat{X}\right]+\langle\psi|\hat{X}|\psi\rangle\leq\sum_{k=0}^{n-1}x_k+\langle\psi|\hat{X}|\psi\rangle\;.
\end{equation}
Since $|\psi\rangle$ is in the support of $\hat{\mathbb{I}}-\hat{\Pi}_{n-1}$, and
\begin{equation}
\left(\hat{\mathbb{I}}-\hat{\Pi}_{n-1}\right)\hat{X}\left(\hat{\mathbb{I}}-\hat{\Pi}_{n-1}\right)\leq x_n\;\hat{\mathbb{I}}\;,
\end{equation}
we have
\begin{equation}\label{ineqpsi}
\langle\psi|\hat{X}|\psi\rangle\leq x_n\;,
\end{equation}
and this concludes the proof.
\end{proof}
\end{lem}
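Ky Fan's Maximum Principle is easy to probe numerically. The following is a quick sketch (a random positive matrix and a random projector obtained from a QR decomposition, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
d, rank = 10, 4

G = rng.standard_normal((d, d))
X = G @ G.T                                       # random positive operator
Q, _ = np.linalg.qr(rng.standard_normal((d, rank)))
P = Q @ Q.T                                       # random rank-4 projector

# Tr[P X] is at most the sum of the 4 largest eigenvalues of X.
top = np.sort(np.linalg.eigvalsh(X))[::-1][:rank].sum()
assert np.trace(P @ X) <= top + 1e-9
```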
\begin{lem}\label{HS*}
Let $\hat{X}$ and $\hat{Y}$ be positive Hilbert-Schmidt operators with eigenvalues in decreasing order $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$, respectively.
Then,
\begin{equation}
\sum_{n=0}^\infty (x_n-y_n)^2\leq\left\|\hat{X}-\hat{Y}\right\|_2^2\;.
\end{equation}
\begin{proof}
We have
\begin{equation}\label{TrXYr}
\left\|\hat{X}-\hat{Y}\right\|_2^2-\sum_{n=0}^\infty (x_n-y_n)^2=2\sum_{n=0}^\infty x_ny_n-2\mathrm{Tr}\left[\hat{X}\hat{Y}\right]\geq0\,.
\end{equation}
To prove the inequality in \eqref{TrXYr}, let us diagonalize $\hat{X}$ as in \eqref{Xdiag}.
We then also have
\begin{equation}
\hat{X}=\sum_{n=0}^\infty\left(x_n-x_{n+1}\right)\hat{\Pi}_n\;,
\end{equation}
where
\begin{equation}
\hat{\Pi}_n=\sum_{k=0}^n|\psi_k\rangle\langle\psi_k|\;.
\end{equation}
We then have
\begin{eqnarray}
\mathrm{Tr}\left[\hat{X}\;\hat{Y}\right] &=& \sum_{n=0}^\infty\left(x_n-x_{n+1}\right)\mathrm{Tr}\left[\hat{\Pi}_n\;\hat{Y}\right]\nonumber\\
&\leq& \sum_{n=0}^\infty\left(x_n-x_{n+1}\right)\sum_{k=0}^n y_k\nonumber\\
&=& \sum_{n=0}^\infty x_n\;y_n\;,
\end{eqnarray}
where we have used Ky Fan's Maximum Principle (Lemma \ref{sumeig}) and rearranged the sum (see also the Supplemental Material of \cite{koenig2009strong}).
\end{proof}
\end{lem}
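A brief numerical sketch of this eigenvalue-distance bound (an illustration only, with random positive matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 8
A, B = rng.standard_normal((2, d, d))
X, Y = A @ A.T, B @ B.T                 # random positive operators

# The l2 distance between the sorted spectra is bounded by ||X - Y||_2.
x = np.sort(np.linalg.eigvalsh(X))[::-1]
y = np.sort(np.linalg.eigvalsh(Y))[::-1]
assert np.sum((x - y)**2) <= np.linalg.norm(X - Y, 'fro')**2 + 1e-9
```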
\section{Majorization}\label{secmaj}
We recall here the definition of majorization.
The interested reader can find more details in the dedicated book \cite{marshall2010inequalities}, which, however, deals only with the finite-dimensional case.
\begin{defn}[Majorization]
Let $x$ and $y$ be decreasing summable sequences of positive numbers.
We say that $x$ weakly sub-majorizes $y$, or $x\succ_w y$, iff
\begin{equation}
\sum_{i=0}^n x_i\geq\sum_{i=0}^n y_i\quad\forall\;n\in\mathbb{N}\;.
\end{equation}
If they also have the same sum, we say that $x$ majorizes $y$, or $x\succ y$.
\end{defn}
\begin{defn}
Let $\hat{X}$ and $\hat{Y}$ be positive trace-class operators with eigenvalues in decreasing order $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$, respectively.
We say that $\hat{X}$ weakly sub-majorizes $\hat{Y}$, or $\hat{X}\succ_w\hat{Y}$, iff $x\succ_w y$.
We say that $\hat{X}$ majorizes $\hat{Y}$, or $\hat{X}\succ\hat{Y}$, if they also have the same trace.
\end{defn}
From an operational point of view, majorization can be equivalently characterized as follows:
\begin{thm}
Given two positive operators $\hat{X}$ and $\hat{Y}$ with the same finite trace, the following conditions are equivalent:
\begin{enumerate}
\item $\hat{X}\succ\hat{Y}$;
\item For any continuous nonnegative convex function $f:[0,\infty)\to\mathbb{R}$ with $f(0)=0\,$,
\begin{equation}\label{Trf}
\mathrm{Tr}\;f\left(\hat{X}\right)\geq\mathrm{Tr}\;f\left(\hat{Y}\right)\;;
\end{equation}
\item For any continuous nonnegative concave function $g:[0,\infty)\to\mathbb{R}$ with $g(0)=0\,$,
\begin{equation}\label{Trg}
\mathrm{Tr}\;g\left(\hat{X}\right)\leq\mathrm{Tr}\;g\left(\hat{Y}\right)\;;
\end{equation}
\item $\hat{Y}$ can be obtained by applying to $\hat{X}$ a convex combination of unitary operators, i.e. there exists a probability measure $\mu$ on unitary operators such that
\begin{equation}
\hat{Y}=\int\hat{U}\,\hat{X}\,\hat{U}^\dag\;d\mu\left(\hat{U}\right)\;.
\end{equation}
\end{enumerate}
\begin{proof}
See Theorems 5, 6 and 7 of \cite{wehrl1974chaotic}.
Notice that Ref. \cite{wehrl1974chaotic} uses the opposite definition of the symbol ``$\succ$'' with respect to most literature (and to Ref. \cite{marshall2010inequalities}), i.e. there $\hat{X}\succ\hat{Y}$ means that $\hat{X}$ is majorized by $\hat{Y}$.
\end{proof}
\end{thm}
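For sequences, majorization amounts to comparing cumulative sums, and property 4 has a classical analogue: applying a doubly stochastic matrix can only ``flatten'' a sequence. A minimal numerical sketch (the helper name and the test are illustrative, not from the formal development):

```python
import numpy as np

def weakly_submajorizes(x, y):
    # x >_w y: every partial sum of x (sorted decreasing) dominates that of y.
    cx = np.cumsum(np.sort(x)[::-1])
    cy = np.cumsum(np.sort(y)[::-1])
    return bool(np.all(cx >= cy - 1e-12))

rng = np.random.default_rng(3)
x = rng.random(6)
# Averaging with a permuted copy is a doubly stochastic operation, so x majorizes y.
y = 0.5 * x + 0.5 * x[rng.permutation(6)]
assert weakly_submajorizes(x, y) and np.isclose(x.sum(), y.sum())
```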
\begin{rem}
If $\hat{X}$ and $\hat{Y}$ are quantum states (i.e. their trace is one), \eqref{Trg} implies that the von Neumann entropy of $\hat{X}$ is not larger than the von Neumann entropy of $\hat{Y}$, while \eqref{Trf} implies the same for all the R{\'e}nyi entropies \cite{holevo2013quantum}.
\end{rem}
\section{Fock rearrangement}\label{rearrangement}
In order to state our main theorem, we need the following definition.
\begin{defn}[Fock rearrangement]
Let $\hat{X}$ be a positive trace-class operator with eigenvalues $\{x_n\}_{n\in\mathbb{N}}$ in decreasing order.
We define its Fock rearrangement as
\begin{equation}
\hat{X}^\downarrow:=\sum_{n=0}^\infty x_n\;|n\rangle\langle n|\;.
\end{equation}
If $\hat{X}$ coincides with its own Fock rearrangement, i.e. $\hat{X}=\hat{X}^\downarrow$, we say that it is {\it passive} \cite{pusz1978passive,lenard1978thermodynamical,gorecki1980passive} with respect to the Hamiltonian $\hat{N}$.
For simplicity, in the following we will always assume $\hat{N}$ to be the reference Hamiltonian, and an operator with $\hat{X}=\hat{X}^\downarrow$ will be called simply passive.
\end{defn}
\begin{rem}
The Fock rearrangement of any projector $\hat{\Pi}_n$ of rank $n+1$ is the projector onto the first $n+1$ Fock states:
\begin{equation}\label{Pin*}
\hat{\Pi}_n^\downarrow=\sum_{i=0}^n|i\rangle\langle i|\;.
\end{equation}
\end{rem}
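On a truncated space, identifying the first basis vectors with the Fock states, the Fock rearrangement is simply the diagonal matrix of eigenvalues in decreasing order. A minimal sketch, checking that the rearrangement is spectrum-preserving and idempotent (so its output is passive):

```python
import numpy as np

def fock_rearrangement(X):
    # Diagonal matrix of the eigenvalues of X in decreasing order
    # (truncated Fock basis = standard basis).
    eigs = np.sort(np.linalg.eigvalsh(X))[::-1]
    return np.diag(eigs)

rng = np.random.default_rng(5)
G = rng.standard_normal((5, 5))
X = G @ G.T                       # random positive operator
Xd = fock_rearrangement(X)

# Same spectrum as X, and Xd is passive (its own rearrangement).
assert np.allclose(np.sort(np.linalg.eigvalsh(Xd)), np.sort(np.linalg.eigvalsh(X)))
assert np.allclose(Xd, fock_rearrangement(Xd))
```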
We define the notion of passive-preserving quantum operation, that will be useful in the following.
\begin{defn}[Passive-preserving quantum operation]\label{*pres}
We say that a quantum operation $\Phi$ is passive-preserving if $\Phi\left(\hat{X}\right)$ is passive for any passive positive trace-class operator $\hat{X}$.
\end{defn}
We will also need these lemmata:
\begin{lem}\label{PXPlem}
For any self-adjoint trace-class operator $\hat{X}$,
\begin{equation}
\lim_{N\to\infty}\left\|\hat{\Pi}_N^\downarrow\;\hat{X}\;\hat{\Pi}_N^\downarrow-\hat{X}\right\|_2=0\;,
\end{equation}
where the $\hat{\Pi}_N^\downarrow$ are the projectors onto the first $N+1$ Fock states defined in \eqref{Pin*}.
\begin{proof}
We have
\begin{eqnarray}\label{PXP}
\left\|\hat{\Pi}_N^\downarrow\hat{X}\hat{\Pi}_N^\downarrow-\hat{X}\right\|_2^2 &=& \mathrm{Tr}\left[\hat{X}\left(\hat{\mathbb{I}}+\hat{\Pi}_N^\downarrow\right)\hat{X}\left(\hat{\mathbb{I}}-\hat{\Pi}_N^\downarrow\right)\right]\nonumber\\ &\leq&2\;\mathrm{Tr}\left[\hat{X}^2\left(\hat{\mathbb{I}}-\hat{\Pi}_N^\downarrow\right)\right]\nonumber\\
&=&2\sum_{n=N+1}^\infty\langle n|\hat{X}^2|n\rangle\;,
\end{eqnarray}
where we have used that
\begin{equation}
\hat{\mathbb{I}}+\hat{\Pi}_N^\downarrow\leq 2\;\hat{\mathbb{I}}\;.
\end{equation}
Since $\hat{X}$ is trace-class, it is also Hilbert-Schmidt, the sum in \eqref{PXP} converges, and its tail tends to zero for $N\to\infty$.
\end{proof}
\end{lem}
\begin{lem}\label{*symtr}
A positive trace-class operator $\hat{X}$ is passive iff for any finite-rank projector $\hat{P}$
\begin{equation}\label{PP*X}
\mathrm{Tr}\left[\hat{P}\;\hat{X}\right]\leq\mathrm{Tr}\left[\hat{P}^\downarrow\;\hat{X}\right]\;.
\end{equation}
\begin{proof}
First, suppose that $\hat{X}$ is passive with eigenvalues $\{x_n\}_{n\in\mathbb{N}}$ in decreasing order, and let $\hat{P}$ have rank $n+1$.
Then, by Lemma \ref{sumeig}
\begin{equation}
\mathrm{Tr}\left[\hat{P}\;\hat{X}\right]\leq\sum_{i=0}^n x_i=\mathrm{Tr}\left[\hat{P}^\downarrow\;\hat{X}\right]\;.
\end{equation}
Suppose now that \eqref{PP*X} holds for any finite-rank projector.
Let us diagonalize $\hat{X}$ as in \eqref{Xdiag}.
Putting into \eqref{PP*X} the projectors $\hat{\Pi}_n$ defined in \eqref{Pin},
\begin{equation}
\sum_{i=0}^n x_i=\mathrm{Tr}\left[\hat{\Pi}_n\;\hat{X}\right]\leq\mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\;\hat{X}\right]\leq\sum_{i=0}^n x_i\;,
\end{equation}
where we have again used Lemma \ref{sumeig}.
It follows that for any $n\in\mathbb{N}$
\begin{equation}
\mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\;\hat{X}\right]=\sum_{i=0}^n x_i\;,
\end{equation}
and
\begin{equation}
\langle n|\hat{X}|n\rangle=x_n\;.
\end{equation}
It is then easy to prove by induction on $n$ that
\begin{equation}
\hat{X}=\sum_{n=0}^\infty x_n\;|n\rangle\langle n|\;,
\end{equation}
i.e. $\hat{X}$ is passive.
\end{proof}
\end{lem}
\begin{lem}\label{sumstar}
Let $\left\{\hat{X}_n\right\}_{n\in\mathbb{N}}$ be a sequence of positive trace-class operators with $\hat{X}_n$ passive for any $n\in\mathbb{N}$.
Then also $\sum_{n=0}^\infty\hat{X}_n$ is passive, provided that its trace is finite.
\begin{proof}
Follows easily from the definition of Fock rearrangement.
\end{proof}
\end{lem}
\begin{lem}\label{phistar}
Let $\Phi$ be a quantum operation.
Suppose that $\Phi\left(\hat{\Pi}\right)$ is passive for any passive finite-rank projector $\hat{\Pi}$.
Then, $\Phi$ is passive-preserving.
\begin{proof}
Choose a passive operator
\begin{equation}
\hat{X}=\sum_{n=0}^\infty x_n\,|n\rangle\langle n|\;,
\end{equation}
with $\{x_n\}_{n\in\mathbb{N}}$ positive and decreasing.
We then also have
\begin{equation}
\hat{X}=\sum_{n=0}^\infty z_n\;\hat{\Pi}_n^\downarrow\;,
\end{equation}
where the $\hat{\Pi}_n^\downarrow$ are defined in \eqref{Pin*}, and
\begin{equation}
z_n=x_n-x_{n+1}\geq0\;.
\end{equation}
Since by hypothesis $\Phi\left(\hat{\Pi}_n^\downarrow\right)$ is passive for any $n\in\mathbb{N}$, according to Lemma \ref{sumstar} also
\begin{equation}
\Phi\left(\hat{X}\right)=\sum_{n=0}^\infty z_n\;\Phi\left(\hat{\Pi}_n^\downarrow\right)
\end{equation}
is passive.
\end{proof}
\end{lem}
\begin{lem}\label{majprojlem}
Let $\hat{X}$ and $\hat{Y}$ be positive trace-class operators.
\begin{enumerate}
\item Suppose that for any finite-rank projector $\hat{\Pi}$
\begin{equation}\label{majproj}
\mathrm{Tr}\left[\hat{\Pi}\,\hat{X}\right]\leq\mathrm{Tr}\left[\hat{\Pi}^\downarrow\,\hat{Y}\right]\;.
\end{equation}
Then $\hat{X}\prec_w\hat{Y}$.
\item Let $\hat{Y}$ be passive, and suppose that $\hat{X}\prec_w\hat{Y}$.
Then \eqref{majproj} holds for any finite-rank projector $\hat{\Pi}$.
\end{enumerate}
\begin{proof}
Let $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$ be the eigenvalues in decreasing order of $\hat{X}$ and $\hat{Y}$, respectively, and let us diagonalize $\hat{X}$ as in \eqref{Xdiag}.
\begin{enumerate}
\item Suppose first that \eqref{majproj} holds for any finite-rank projector $\hat{\Pi}$.
For any $n\in\mathbb{N}$ we have
\begin{equation}\label{eqtr}
\sum_{i=0}^n x_i=\mathrm{Tr}\left[\hat{\Pi}_n\,\hat{X}\right]\leq\mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\,\hat{Y}\right]\leq\sum_{i=0}^n y_i\;,
\end{equation}
where the $\hat{\Pi}_n$ are defined in \eqref{Pin} and we have used Lemma \ref{sumeig}.
Then $x\prec_w y$, and $\hat{X}\prec_w\hat{Y}$.
\item Suppose now that $\hat{X}\prec_w\hat{Y}$ and $\hat{Y}=\hat{Y}^\downarrow$.
Then, for any $n\in\mathbb{N}$ and any projector $\hat{\Pi}$ of rank $n+1$,
\begin{equation}
\mathrm{Tr}\left[\hat{\Pi}\,\hat{X}\right]\leq \sum_{i=0}^n x_i\leq\sum_{i=0}^n y_i=\mathrm{Tr}\left[\hat{\Pi}^\downarrow\,\hat{Y}\right]\;,
\end{equation}
where we have used Lemma \ref{sumeig} again.
\end{enumerate}
\end{proof}
\end{lem}
\begin{lem}\label{corTr}
Let $\hat{Y}$ and $\hat{Z}$ be positive trace-class operators with $\hat{Y}\prec_w\hat{Z}=\hat{Z}^\downarrow$.
Then, for any positive trace-class operator $\hat{X}$,
\begin{equation}
\mathrm{Tr}\left[\hat{X}\;\hat{Y}\right]\leq\mathrm{Tr}\left[\hat{X}^\downarrow\;\hat{Z}\right]\;.
\end{equation}
\begin{proof}
Let us diagonalize $\hat{X}$ as in \eqref{Xdiag}.
Then, it can be rewritten as
\begin{equation}\label{XPi}
\hat{X}=\sum_{n=0}^\infty d_n\,\hat{\Pi}_n\;,
\end{equation}
where the projectors $\hat{\Pi}_n$ are as in \eqref{Pin} and
\begin{equation}
d_n=x_n-x_{n+1}\geq0\;.
\end{equation}
The Fock rearrangement of $\hat{X}$ is
\begin{equation}\label{X*Pi}
\hat{X}^\downarrow=\sum_{n=0}^\infty d_n\,\hat{\Pi}_n^\downarrow\;.
\end{equation}
We then have from Lemma \ref{majprojlem}
\begin{eqnarray}
\mathrm{Tr}\left[\hat{X}\;\hat{Y}\right] &=& \sum_{n=0}^\infty d_n\;\mathrm{Tr}\left[\hat{\Pi}_n\;\hat{Y}\right]\leq\sum_{n=0}^\infty d_n\;\mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\;\hat{Z}\right]\nonumber\\
&=&\mathrm{Tr}\left[\hat{X}^\downarrow\;\hat{Z}\right]\;.
\end{eqnarray}
\end{proof}
\end{lem}
\begin{lem}\label{majsum}
Let $\left\{\hat{X}_n\right\}_{n\in\mathbb{N}}$ and $\left\{\hat{Y}_n\right\}_{n\in\mathbb{N}}$ be two sequences of positive trace-class operators, with $\hat{Y}_n=\hat{Y}_n^\downarrow$ and $\hat{X}_n\prec_w\hat{Y}_n$ for any $n\in\mathbb{N}$.
Then
\begin{equation}
\sum_{n=0}^\infty\hat{X}_n\prec_w\sum_{n=0}^\infty\hat{Y}_n\;,
\end{equation}
provided that both sides have finite traces.
\begin{proof}
Let $\hat{P}$ be a finite-rank projector.
Since $\hat{X}_n\prec_w\hat{Y}_n$ and $\hat{Y}_n=\hat{Y}_n^\downarrow$, by the second part of Lemma \ref{majprojlem}
\begin{equation}
\mathrm{Tr}\left[\hat{P}\;\hat{X}_n\right]\leq\mathrm{Tr}\left[\hat{P}^\downarrow\;\hat{Y}_n\right]\qquad\forall\;n\in\mathbb{N}\;.
\end{equation}
Then,
\begin{equation}
\mathrm{Tr}\left[\hat{P}\;\sum_{n=0}^\infty\hat{X}_n\right]\leq\mathrm{Tr}\left[\hat{P}^\downarrow\;\sum_{n=0}^\infty\hat{Y}_n\right]\;,
\end{equation}
and the submajorization follows from the first part of Lemma \ref{majprojlem}.
\end{proof}
\end{lem}
\begin{lem}\label{HS**}
The Fock rearrangement is continuous in the Hilbert-Schmidt norm.
\begin{proof}
Let $\hat{X}$ and $\hat{Y}$ be trace-class operators, with eigenvalues in decreasing order $\{x_n\}_{n\in\mathbb{N}}$ and $\{y_n\}_{n\in\mathbb{N}}$, respectively.
We then have
\begin{equation}
\left\|\hat{X}^\downarrow-\hat{Y}^\downarrow\right\|_2^2=\sum_{n=0}^\infty(x_n-y_n)^2\leq\left\|\hat{X}-\hat{Y}\right\|_2^2\;,
\end{equation}
where we have used Lemma \ref{HS*}.
\end{proof}
\end{lem}
\section{Fock-optimal quantum operations}\label{secoptimal}
We will prove that any gauge-covariant Gaussian quantum channel satisfies this property:
\begin{defn}[Fock-optimal quantum operation]
We say that a quantum operation $\Phi$ is Fock-optimal if for any positive trace-class operator $\hat{X}$
\begin{equation}\label{conjectureeq}
\Phi\left(\hat{X}\right)\prec_w\Phi\left(\hat{X}^\downarrow\right)\;,
\end{equation}
i.e. Fock-rearranging the input always makes the output less noisy: among all the quantum states with a given spectrum, the passive one generates the least noisy output.
\end{defn}
\begin{rem}
If $\Phi$ is trace-preserving, weak sub-majorization in \eqref{conjectureeq} can be equivalently replaced by majorization.
\end{rem}
We can now state the main result of the paper:
\begin{thm}\label{maintheorem}
Any one-mode gauge-covariant Gaussian quantum channel is passive-preserving and Fock-optimal.
\begin{proof}
See Section \ref{mainproof}.
\end{proof}
\end{thm}
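The statement can be probed numerically on a truncated space before reading the proof. The following is an illustrative sketch for the quantum-limited attenuator, using its Kraus representation \eqref{kraus} (the truncation is exact since the attenuator never raises the photon number); it checks that the output of a Fock-rearranged random state weakly sub-majorizes the output of the original state:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(6)
d, lam = 7, 0.45

def kraus_ops(d, lam):
    ops = []
    for l in range(d):
        B = np.zeros((d, d))
        for m in range(d - l):
            B[m, m + l] = np.sqrt(comb(m + l, l)) * (1 - lam)**(l / 2) * lam**(m / 2)
        ops.append(B)
    return ops

def channel(X, lam):
    # Quantum-limited attenuator on the truncated space.
    return sum(b @ X @ b.conj().T for b in kraus_ops(d, lam))

G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real                      # random density matrix

eigs = np.sort(np.linalg.eigvalsh(rho))[::-1]
rho_down = np.diag(eigs)                       # Fock rearrangement of rho

# Phi(rho) is weakly sub-majorized by Phi(rho^down).
out = np.sort(np.linalg.eigvalsh(channel(rho, lam)))[::-1]
out_down = np.sort(np.linalg.eigvalsh(channel(rho_down, lam)))[::-1]
assert np.all(np.cumsum(out_down) >= np.cumsum(out) - 1e-9)
```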
\begin{cor}
Any linear combination with positive coefficients of gauge-covariant quantum Gaussian channels is Fock-optimal.
\begin{proof}
Follows from Theorem \ref{maintheorem} and Lemma \ref{convexhull}.
\end{proof}
\end{cor}
In the remainder of this section, we prove some general properties of Fock-optimality that will be needed in the main proof.
\begin{lem}\label{majprojprop}
Let $\Phi$ be a passive-preserving quantum operation.
If for any finite-rank projector $\hat{P}$
\begin{equation}\label{hypproj}
\Phi\left(\hat{P}\right)\prec_w\Phi\left(\hat{P}^\downarrow\right)\;,
\end{equation}
then $\Phi$ is Fock-optimal.
\begin{proof}
Let $\hat{X}$ be a positive trace-class operator as in \eqref{XPi}, with Fock rearrangement as in \eqref{X*Pi}.
Since $\Phi$ is passive-preserving, for any $n\in\mathbb{N}$
\begin{equation}
\Phi\left(\hat{\Pi}_n\right)\prec_w\Phi\left(\hat{\Pi}_n^\downarrow\right)=\Phi\left(\hat{\Pi}_n^\downarrow\right)^\downarrow\;.
\end{equation}
Then we can apply Lemma \ref{majsum} to
\begin{equation}
\Phi\left(\hat{X}\right)=\sum_{n=0}^\infty d_n\;\Phi\left(\hat{\Pi}_n\right)\prec_w\sum_{n=0}^\infty d_n\;\Phi\left(\hat{\Pi}_n^\downarrow\right)=\Phi\left(\hat{X}^\downarrow\right)\;,
\end{equation}
and the claim follows.
\end{proof}
\end{lem}
\begin{lem}\label{conjPQprop}
A quantum operation $\Phi$ is passive-preserving and Fock-optimal iff
\begin{equation}\label{conjPQeq}
\mathrm{Tr}\left[\hat{Q}\;\Phi\left(\hat{P}\right)\right]\leq\mathrm{Tr}\left[\hat{Q}^\downarrow\;\Phi\left(\hat{P}^\downarrow\right)\right]
\end{equation}
for any two finite-rank projectors $\hat{Q}$ and $\hat{P}$.
\begin{proof}
Suppose first that $\Phi$ is passive-preserving and Fock-optimal, and let $\hat{P}$ and $\hat{Q}$ be finite-rank projectors.
Then
\begin{equation}
\Phi\left(\hat{P}\right)\prec_w\Phi\left(\hat{P}^\downarrow\right)=\Phi\left(\hat{P}^\downarrow\right)^\downarrow\;,
\end{equation}
and \eqref{conjPQeq} follows from Lemma \ref{majprojlem}.
Suppose now that \eqref{conjPQeq} holds for any finite-rank projectors $\hat{P}$ and $\hat{Q}$.
Choosing $\hat{P}$ passive, we get
\begin{equation}
\mathrm{Tr}\left[\hat{Q}\;\Phi\left(\hat{P}\right)\right]\leq\mathrm{Tr}\left[\hat{Q}^\downarrow\;\Phi\left(\hat{P}\right)\right]\;,
\end{equation}
and from Lemma \ref{*symtr} also $\Phi\left(\hat{P}\right)$ is passive, so from Lemma \ref{phistar} $\Phi$ is passive-preserving.
Choosing now a generic $\hat{P}$, by Lemma \ref{majprojlem}
\begin{equation}
\Phi\left(\hat{P}\right)\prec_w\Phi\left(\hat{P}^\downarrow\right)\;,
\end{equation}
and from Lemma \ref{majprojprop} $\Phi$ is also Fock-optimal.
\end{proof}
\end{lem}
We can now prove the two fundamental properties of Fock-optimality:
\begin{thm}\label{Phidag}
Let $\Phi$ be a quantum operation with the restriction of its Hilbert-Schmidt dual $\Phi^\dag$ to trace-class operators continuous in the trace norm.
Then, $\Phi$ is passive-preserving and Fock-optimal iff $\Phi^\dag$ is passive-preserving and Fock-optimal.
\begin{proof}
Condition \eqref{conjPQeq} can be rewritten as
\begin{equation}\label{PQdag}
\mathrm{Tr}\left[\Phi^\dag\left(\hat{Q}\right)\hat{P}\right]\leq\mathrm{Tr}\left[\Phi^\dag\left(\hat{Q}^\downarrow\right)\hat{P}^\downarrow\right]\;,
\end{equation}
and is therefore symmetric for $\Phi$ and $\Phi^\dag$.
\end{proof}
\end{thm}
\begin{thm}\label{qcirc}
Let $\Phi_1$ and $\Phi_2$ be passive-preserving and Fock-optimal quantum operations with the restriction of $\Phi_2^\dag$ to trace-class operators continuous in the trace norm.
Then, their composition $\Phi_2\circ\Phi_1$ is also passive-preserving and Fock-optimal.
\begin{proof}
Let $\hat{P}$ and $\hat{Q}$ be finite-rank projectors.
Since $\Phi_2$ is Fock-optimal and passive-preserving,
\begin{equation}
\Phi_2\left(\Phi_1\left(\hat{P}\right)\right)\prec_w\Phi_2\left(\Phi_1\left(\hat{P}\right)^\downarrow\right)=\Phi_2\left(\Phi_1\left(\hat{P}\right)^\downarrow\right)^\downarrow\;,
\end{equation}
and by Lemma \ref{majprojlem}
\begin{align}\label{eqphi12}
\mathrm{Tr}\left[\hat{Q}\;\Phi_2\left(\Phi_1\left(\hat{P}\right)\right)\right] &\leq \mathrm{Tr}\left[\hat{Q}^\downarrow\;\Phi_2\left(\Phi_1\left(\hat{P}\right)^\downarrow\right)\right]\nonumber\\
&= \mathrm{Tr}\left[\Phi_2^\dag\left(\hat{Q}^\downarrow\right)\Phi_1\left(\hat{P}\right)^\downarrow\right]\;.
\end{align}
Since $\Phi_1$ is Fock-optimal and passive-preserving,
\begin{equation}
\Phi_1\left(\hat{P}\right)^\downarrow\prec_w\Phi_1\left(\hat{P}^\downarrow\right)=\Phi_1\left(\hat{P}^\downarrow\right)^\downarrow\;.
\end{equation}
From Theorem \ref{Phidag} also $\Phi_2^\dag$ is passive-preserving, and $\Phi_2^\dag\left(\hat{Q}^\downarrow\right)$ is passive.
Lemma \ref{corTr} implies then
\begin{align}\label{eqphi3}
\mathrm{Tr}\left[\Phi_2^\dag\left(\hat{Q}^\downarrow\right)\Phi_1\left(\hat{P}\right)^\downarrow\right] &\leq \mathrm{Tr}\left[\Phi_2^\dag\left(\hat{Q}^\downarrow\right)\Phi_1\left(\hat{P}^\downarrow\right)\right]\nonumber\\
&= \mathrm{Tr}\left[\hat{Q}^\downarrow\;\Phi_2\left(\Phi_1\left(\hat{P}^\downarrow\right)\right)\right]\;,
\end{align}
and the claim follows from Lemma \ref{conjPQprop} combining \eqref{eqphi3} with \eqref{eqphi12}.
\end{proof}
\end{thm}
\begin{lem}\label{Phifinite}
Let $\Phi$ be a quantum operation continuous in the Hilbert-Schmidt norm.
Suppose that for any $N\in\mathbb{N}$ its restriction to the span of the first $N+1$ Fock states is passive-preserving and Fock-optimal, i.e. for any positive operator $\hat{X}$ supported on the span of the first $N+1$ Fock states
\begin{equation}
\Phi\left(\hat{X}\right)\prec_w\Phi\left(\hat{X}^\downarrow\right)=\Phi\left(\hat{X}^\downarrow\right)^\downarrow\;.
\end{equation}
Then, $\Phi$ is passive-preserving and Fock-optimal.
\begin{proof}
Let $\hat{P}$ and $\hat{Q}$ be two generic finite-rank projectors.
Since the restriction of $\Phi$ to the support of $\hat{\Pi}_N^\downarrow$ is Fock-optimal and passive-preserving,
\begin{align}
\Phi\left(\hat{\Pi}_N^\downarrow\;\hat{P}\;\hat{\Pi}_N^\downarrow\right) &\prec_w \Phi\left(\left(\hat{\Pi}_N^\downarrow\;\hat{P}\;\hat{\Pi}_N^\downarrow\right)^\downarrow\right)\nonumber\\ &=\left(\Phi\left(\left(\hat{\Pi}_N^\downarrow\;\hat{P}\;\hat{\Pi}_N^\downarrow\right)^\downarrow\right)\right)^\downarrow\;.
\end{align}
Then, from Lemma \ref{majprojlem}
\begin{equation}\label{TrPQN}
\mathrm{Tr}\left[\hat{Q}\;\Phi\left(\hat{\Pi}_N^\downarrow\;\hat{P}\;\hat{\Pi}_N^\downarrow\right)\right] \leq \mathrm{Tr}\left[\hat{Q}^\downarrow\;\Phi\left(\left(\hat{\Pi}_N^\downarrow\;\hat{P}\;\hat{\Pi}_N^\downarrow\right)^\downarrow\right)\right]\;.
\end{equation}
From Lemma \ref{PXPlem},
\begin{equation}
\left\|\hat{\Pi}_N^\downarrow\;\hat{P}\;\hat{\Pi}_N^\downarrow-\hat{P}\right\|_2\to0\qquad\text{for}\;N\to\infty\;,
\end{equation}
and since $\Phi$, the Fock rearrangement (see Lemma \ref{HS**}) and the Hilbert-Schmidt product are continuous in the Hilbert-Schmidt norm, we can take the limit $N\to\infty$ in \eqref{TrPQN} and get
\begin{equation}
\mathrm{Tr}\left[\hat{Q}\;\Phi\left(\hat{P}\right)\right] \leq \mathrm{Tr}\left[\hat{Q}^\downarrow\;\Phi\left(\hat{P}^\downarrow\right)\right]\;.
\end{equation}
The claim now follows from Lemma \ref{conjPQprop}.
\end{proof}
\end{lem}
\begin{lem}\label{convexhull}
Let $\Phi_1$ and $\Phi_2$ be Fock-optimal and passive-preserving quantum operations.
Then, also $\Phi_1+\Phi_2$ is Fock-optimal and passive-preserving.
\begin{proof}
Easily follows from Lemma \ref{conjPQprop}.
\end{proof}
\end{lem}
\section{Proof of the main theorem}\label{mainproof}
First, we can reduce the problem to the quantum-limited attenuator:
\begin{lem}\label{att->all}
If the quantum-limited attenuator is passive-preserving and Fock-optimal, the property extends to any gauge-covariant quantum Gaussian channel.
\begin{proof}
From Lemma \ref{comp}, any gauge-covariant quantum Gaussian channel can be obtained by composing a quantum-limited attenuator with a quantum-limited amplifier.
From Lemma \ref{attdag}, the Hilbert-Schmidt dual of a quantum-limited amplifier is proportional to a quantum-limited attenuator, and from Lemma \ref{Phidag} also the amplifier is passive-preserving and Fock-optimal.
Finally, the claim follows from Theorem \ref{qcirc}.
\end{proof}
\end{lem}
By Lemma \ref{Phifinite}, we can restrict to quantum states $\hat{\rho}$ supported on the span of the first $N+1$ Fock states.
Let now
\begin{equation}
\hat{\rho}(t)=e^{t\mathcal{L}}\left(\hat{\rho}\right)\;,
\end{equation}
where $\mathcal{L}$ is the generator of the quantum-limited attenuator defined in \eqref{lindblad}.
From the explicit representation \eqref{kraus}, it is easy to see that $\hat{\rho}(t)$ remains supported on the span of the first $N+1$ Fock states for any $t\geq0$.
In finite dimension, the quantum states with non-degenerate spectrum are dense in the set of all quantum states.
Besides, the spectrum is a continuous function of the operator, and any linear map is continuous.
Then, without loss of generality we can suppose that $\hat{\rho}$ has non-degenerate spectrum.
Let
\begin{equation}
p(t)=\left(p_0(t),\ldots,p_N(t)\right)
\end{equation}
be the vector of the eigenvalues of $\hat{\rho}(t)$ in decreasing order, and let
\begin{equation}
s_n(t)=\sum_{i=0}^n p_i(t)\;,\qquad n=0,\ldots,\,N\;,
\end{equation}
their partial sums, which we collect into the vector $s(t)$.
Let instead
\begin{equation}\label{pndt}
p_n^\downarrow(t)=\langle n|e^{t\mathcal{L}}\left(\hat{\rho}^\downarrow\right)|n\rangle\;,\qquad n=0,\,\ldots,\,N
\end{equation}
be the eigenvalues of $e^{t\mathcal{L}}\left(\hat{\rho}^\downarrow\right)$ (recall that it is diagonal in the Fock basis for any $t\geq0$), and
\begin{equation}
s_n^\downarrow(t)=\sum_{i=0}^n p_i^\downarrow(t)\;,\qquad n=0,\,\ldots,\,N\;,
\end{equation}
their partial sums.
Notice that $p(0)=p^\downarrow(0)$ and then $s(0)=s^\downarrow(0)$.
Combining \eqref{pndt} with the expression for the Lindbladian \eqref{lindblad}, with the help of \eqref{acta} and \eqref{actadag} it is easy to see that the eigenvalues $p_n^\downarrow(t)$ satisfy
\begin{equation}
\frac{d}{dt}p_n^\downarrow(t)=\left(n+1\right)p_{n+1}^\downarrow(t)-n\,p_n^\downarrow(t)\;,
\end{equation}
implying
\begin{equation}
\frac{d}{dt}s_n^\downarrow(t)=(n+1)\left(s^\downarrow_{n+1}(t)-s^\downarrow_n(t)\right)
\end{equation}
for their partial sums.
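As a numerical sanity check (not part of the proof), the system of equations above can be integrated with a simple Euler scheme; in the following Python sketch the truncation $p_{N+1}=0$, the initial spectrum and the step size are arbitrary illustrative choices:

```python
# Euler integration of d/dt p_n = (n+1) p_{n+1} - n p_n for a Fock-diagonal
# state, truncated at N with p_{N+1} = 0. Illustration only: the initial
# spectrum and the step size below are arbitrary choices.
def evolve(p, t, steps=1000):
    p = list(p)
    dt = t / steps
    N = len(p) - 1
    for _ in range(steps):
        p = [p[n] + dt * ((n + 1) * (p[n + 1] if n < N else 0.0) - n * p[n])
             for n in range(N + 1)]
    return p

p0 = [0.4, 0.3, 0.2, 0.1]   # decreasing spectrum with trace 1
p1 = evolve(p0, 1.0)
# the trace is preserved and the spectrum stays decreasing, consistently
# with the attenuator being trace-preserving and passive-preserving
```

The trace is preserved exactly at each Euler step because the right-hand sides telescope to zero when summed over $n$.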
The proof of Theorem \ref{maintheorem} is a consequence of:
\begin{lem}\label{deg}
The spectrum of $\hat{\rho}(t)$ can be degenerate at most in isolated points.
\end{lem}
\begin{lem}\label{lemma1}
$s(t)$ is continuous in $t$, and for any $t\geq0$ such that $\hat{\rho}(t)$ has non-degenerate spectrum it satisfies
\begin{equation}\label{sdot}
\frac{d}{dt}s_n(t)\leq(n+1)(s_{n+1}(t)-s_n(t))\;,\qquad n=0,\,\ldots,\,N-1\;.
\end{equation}
\end{lem}
\begin{lem}\label{lemma2}
If $s(t)$ is continuous in $t$ and satisfies \eqref{sdot}, then
\begin{equation}
s_n(t)\leq s_n^\downarrow(t)
\end{equation}
for any $t\geq0$ and $n=0,\,\ldots,\,N$.
\end{lem}
Lemma \ref{lemma2} implies that the quantum-limited attenuator is passive-preserving.
Indeed, let us choose $\hat{\rho}$ passive.
Since $e^{t\mathcal{L}}\left(\hat{\rho}\right)$ is diagonal in the Fock basis, $s_n^\downarrow(t)$ is the sum of the eigenvalues corresponding to the first $n+1$ Fock states $|0\rangle,\;\ldots,\;|n\rangle$.
Since $s_n(t)$ is the sum of the $n+1$ greatest eigenvalues, $s_n^\downarrow(t)\leq s_n(t)$.
However, Lemma \ref{lemma2} implies $s_n(t)=s_n^\downarrow(t)$ for $n=0,\,\ldots,\,N$.
Thus $p_n(t)=p_n^\downarrow(t)$, so the operator $e^{t\mathcal{L}}\left(\hat{\rho}\right)$ is passive for any $t$, and the channel $e^{t\mathcal{L}}$ is passive-preserving.
Then from the definition of majorization and Lemma \ref{lemma2} again,
\begin{equation}
e^{t\mathcal{L}}\left(\hat{\rho}\right)\prec_w e^{t\mathcal{L}}\left(\hat{\rho}^\downarrow\right)
\end{equation}
for any $\hat{\rho}$, and the quantum-limited attenuator is also Fock-optimal.
\subsection{Proof of Lemma \ref{deg}}
The matrix elements of the operator $e^{t\mathcal{L}}\left(\hat{\rho}\right)$ are analytic functions of $t$.
The spectrum of $\hat{\rho}(t)$ is degenerate iff the function
\begin{equation}
\phi(t)=\prod_{i\neq j}\left(p_i(t)-p_j(t)\right)
\end{equation}
vanishes.
This function is a symmetric polynomial in the eigenvalues of $\hat{\rho}(t)=e^{t\mathcal{L}}\left(\hat{\rho}\right)$.
Then, by the Fundamental Theorem of Symmetric Polynomials (see e.g. Theorem 3 in Chapter 7 of \cite{cox2015ideals}), $\phi(t)$ can be written as a polynomial in the elementary symmetric polynomials in the eigenvalues of $\hat{\rho}(t)$.
These elementary symmetric polynomials coincide with the coefficients of the characteristic polynomial of $\hat{\rho}(t)$, which are in turn polynomials in its matrix elements.
It follows that $\phi(t)$ can be written as a polynomial in the matrix elements of the operator $\hat{\rho}(t)$.
Since each of these matrix elements is an analytic function of $t$, $\phi(t)$ is analytic as well.
Since by hypothesis the spectrum of $\hat{\rho}(0)$ is non-degenerate, $\phi$ cannot be identically zero, and its zeroes are isolated points.
\subsection{Proof of Lemma \ref{lemma1}}
The matrix elements of the operator $e^{t\mathcal{L}}\left(\hat{\rho}\right)$ are analytic (and hence continuous and differentiable) functions of $t$.
Then, by Weyl's Perturbation Theorem, $p(t)$ is continuous in $t$, and so is $s(t)$ (see e.g. Corollary III.2.6 and the discussion at the beginning of Chapter VI of \cite{bhatia2013matrix}).
Let $\hat{\rho}(t_0)$ have non-degenerate spectrum.
Then, $\hat{\rho}(t)$ has non-degenerate spectrum for any $t$ in a suitable neighbourhood of $t_0$.
In this neighbourhood, we can diagonalize $\hat{\rho}(t)$ with
\begin{equation}
\hat{\rho}(t)=\sum_{n=0}^N p_n(t) |\psi_n(t)\rangle\langle\psi_n(t)|\;,
\end{equation}
where the eigenvalues in decreasing order $p_n(t)$ are differentiable functions of $t$ (see Theorem 6.3.12 of \cite{horn2012matrix}),
and
\begin{equation}
\frac{d}{dt}p_n(t)=\langle\psi_n(t)|\mathcal{L}\left(\hat{\rho}(t)\right)|\psi_n(t)\rangle\;.
\end{equation}
We then have
\begin{equation}
\frac{d}{dt}s_n(t)=\mathrm{Tr}\left[\hat{\Pi}_n(t)\;\mathcal{L}\left(\hat{\rho}(t)\right)\right]\;,
\end{equation}
where
\begin{equation}
\hat{\Pi}_n(t)=\sum_{i=0}^n|\psi_i(t)\rangle\langle\psi_i(t)|\;.
\end{equation}
We can write
\begin{equation}
\hat{\rho}(t)=\sum_{n=0}^N d_n(t)\;\hat{\Pi}_n(t)\;,
\end{equation}
where
\begin{equation}
d_n(t)=p_n(t)-p_{n+1}(t)\geq0\;,
\end{equation}
so that
\begin{equation}
\frac{d}{dt}s_n(t)=\sum_{k=0}^N d_k(t)\;\mathrm{Tr}\left[\hat{\Pi}_n(t)\;\mathcal{L}\left(\hat{\Pi}_k(t)\right)\right]\;.
\end{equation}
With the explicit expression \eqref{lindblad} for $\mathcal{L}$, it is easy to prove that
\begin{equation}
\sum_{k=0}^N d_k(t)\;\mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\;\mathcal{L}\left(\hat{\Pi}_k^\downarrow\right)\right]=(n+1)(s_{n+1}(t)-s_n(t))\;,
\end{equation}
so it would be sufficient to show that
\begin{equation}\label{PL}
\mathrm{Tr}\left[\hat{\Pi}_n(t)\;\mathcal{L}\left(\hat{\Pi}_k(t)\right)\right]\overset{?}{\leq} \mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\;\mathcal{L}\left(\hat{\Pi}_k^\downarrow\right)\right]\;.
\end{equation}
We write explicitly the left-hand side of \eqref{PL}:
\begin{equation}\label{PLext}
\mathrm{Tr}\left[\hat{\Pi}_n(t)\;\hat{a}\;\hat{\Pi}_k(t)\;\hat{a}^\dag-\hat{\Pi}_n(t)\;\hat{\Pi}_k(t)\;\hat{a}^\dag\hat{a}\right]\;,
\end{equation}
where we have used that $\hat{\Pi}_n(t)$ and $\hat{\Pi}_k(t)$ commute.
\begin{itemize}
\item Suppose $n\geq k$.
Then
\begin{equation}
\hat{\Pi}_n(t)\;\hat{\Pi}_k(t)=\hat{\Pi}_k(t)\;.
\end{equation}
Using that
\begin{equation}
\hat{\Pi}_n(t)\leq\hat{\mathbb{I}}
\end{equation}
in the first term of \eqref{PLext}, we get
\begin{equation}
\mathrm{Tr}\left[\hat{\Pi}_n(t)\;\hat{a}\;\hat{\Pi}_k(t)\;\hat{a}^\dag-\hat{\Pi}_n(t)\;\hat{\Pi}_k(t)\;\hat{a}^\dag\hat{a}\right]\leq0\;.
\end{equation}
On the other hand, since the support of $\hat{a}\,\hat{\Pi}_k^\downarrow\,\hat{a}^\dag$ is contained in the support of $\hat{\Pi}_{k-1}^\downarrow$, and hence in that of $\hat{\Pi}_n^\downarrow$, we also have
\begin{equation}
\hat{\Pi}_n^\downarrow\;\hat{a}\;\hat{\Pi}_k^\downarrow\;\hat{a}^\dag=\hat{a}\;\hat{\Pi}_k^\downarrow\;\hat{a}^\dag\;,
\end{equation}
so that
\begin{equation}
\mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\;\hat{a}\;\hat{\Pi}_k^\downarrow\;\hat{a}^\dag-\hat{\Pi}_n^\downarrow\;\hat{\Pi}_k^\downarrow\;\hat{a}^\dag\hat{a}\right]=0\;.
\end{equation}
\item Suppose now that $k\geq n+1$.
Then
\begin{equation}
\hat{\Pi}_n(t)\;\hat{\Pi}_k(t)=\hat{\Pi}_n(t)\;.
\end{equation}
Using that
\begin{equation}
\hat{\Pi}_k(t)\leq\hat{\mathbb{I}}
\end{equation}
in the first term of \eqref{PLext}, together with the commutation relation \eqref{CCR}, we get
\begin{equation}
\mathrm{Tr}\left[\hat{\Pi}_n(t)\;\hat{a}\;\hat{\Pi}_k(t)\;\hat{a}^\dag-\hat{\Pi}_n(t)\;\hat{\Pi}_k(t)\;\hat{a}^\dag\hat{a}\right]\leq n+1\;.
\end{equation}
On the other hand, since the support of $\hat{a}^\dag\,\hat{\Pi}_n^\downarrow\,\hat{a}$ is contained in the support of $\hat{\Pi}_{n+1}^\downarrow$, and hence in that of $\hat{\Pi}_k^\downarrow$, we also have
\begin{equation}
\hat{\Pi}_k^\downarrow\;\hat{a}^\dag\;\hat{\Pi}_n^\downarrow\;\hat{a}=\hat{a}^\dag\;\hat{\Pi}_n^\downarrow\;\hat{a}\;,
\end{equation}
so that
\begin{equation}
\mathrm{Tr}\left[\hat{\Pi}_n^\downarrow\;\hat{a}\;\hat{\Pi}_k^\downarrow\;\hat{a}^\dag-\hat{\Pi}_n^\downarrow\;\hat{\Pi}_k^\downarrow\;\hat{a}^\dag\hat{a}\right]=n+1\;.
\end{equation}
\end{itemize}
\subsection{Proof of Lemma \ref{lemma2}}
Since the quantum-limited attenuator is trace-preserving, we have
\begin{equation}
s_N(t)=\mathrm{Tr}\left[\hat{\rho}(t)\right]=1=s_N^\downarrow(t)\;.
\end{equation}
We use induction on $n$ in reverse order: suppose we have proved
\begin{equation}
s_{n+1}(t)\leq s_{n+1}^\downarrow(t)\;.
\end{equation}
We then have from \eqref{sdot}
\begin{equation}
\frac{d}{dt}s_n(t)\leq(n+1)\left(s_{n+1}^\downarrow(t)-s_n(t)\right)\;,
\end{equation}
while
\begin{equation}
\frac{d}{dt}s_n^\downarrow(t)=(n+1)\left(s_{n+1}^\downarrow(t)-s_n^\downarrow(t)\right)\;.
\end{equation}
Defining
\begin{equation}
f_n(t)=s_n^\downarrow(t)-s_n(t)\;,
\end{equation}
we have $f_n(0)=0$, and
\begin{equation}
\frac{d}{dt}f_n(t)\geq-(n+1)f_n(t)\;.
\end{equation}
This can be rewritten as
\begin{equation}
e^{-(n+1)t}\;\frac{d}{dt}\left(e^{(n+1)t}\;f_n(t)\right)\geq0\;,
\end{equation}
and, since $f_n(0)=0$, implies
\begin{equation}
f_n(t)\geq0\;.
\end{equation}
\section{Generic one-mode Gaussian channels}\label{generic}
In this section we extend Theorem \ref{maintheorem} to any one-mode quantum Gaussian channel.
\begin{defn}
We say that two quantum channels $\Phi$ and $\Psi$ are equivalent if there are a unitary operator $\hat{U}$ and a unitary or anti-unitary $\hat{V}$ such that
\begin{equation}\label{Psi}
\Psi\left(\hat{X}\right)=\hat{V}\;\Phi\left(\hat{U}\;\hat{X}\;\hat{U}^\dag\right)\;\hat{V}^\dag
\end{equation}
for any trace-class operator $\hat{X}$.
\end{defn}
Clearly, a channel equivalent to a Fock-optimal channel is also Fock-optimal with a suitable redefinition of the Fock rearrangement:
\begin{lem}\label{U*}
Let $\Phi$ be a Fock-optimal quantum channel, and $\Psi$ be as in \eqref{Psi}.
Then, for any positive trace-class operator $\hat{X}$,
\begin{equation}
\Psi\left(\hat{X}\right)\prec_w\Psi\left(\hat{U}^\dag\left(\hat{U}\;\hat{X}\;\hat{U}^\dag\right)^\downarrow\hat{U}\right)\;.
\end{equation}
\end{lem}
The problem of analyzing any Gaussian quantum channel from the point of view of majorization is then reduced to the equivalence classes.
\subsection{Quadratures and squeezing}
In order to present such classes, we will need some more definitions.
The quadratures
\begin{eqnarray}
\hat{Q} &=& \frac{\hat{a}+\hat{a}^\dag}{\sqrt{2}}\\
\hat{P} &=& \frac{\hat{a}-\hat{a}^\dag}{i\sqrt{2}}
\end{eqnarray}
satisfy the canonical commutation relation
\begin{equation}
\left[\hat{Q},\;\hat{P}\right]=i\;\hat{\mathbb{I}}\;.
\end{equation}
In this section, and only in this section, $\hat{Q}$ and $\hat{P}$ will denote the above quadratures, and not generic projectors.
We can define a continuous basis of not normalizable vectors $\left\{|q\rangle\right\}_{q\in\mathbb{R}}$ with
\begin{eqnarray}
\hat{Q}|q\rangle &=& q|q\rangle\;,\\
\langle q|q'\rangle &=& \delta(q-q')\;,\\
\int_{\mathbb{R}}|q\rangle\langle q|\;dq &=& \hat{\mathbb{I}}\;,\\
e^{-iq\hat{P}}|q'\rangle &=& |q'+q\rangle\;,\qquad q,\,q'\in\mathbb{R}\;.
\end{eqnarray}
For any $\kappa>0$ we define the squeezing unitary operator \cite{barnett2002methods} $\hat{S}_\kappa$ with
\begin{equation}
\hat{S}_\kappa |q\rangle=\sqrt{\kappa}\;|\kappa q\rangle
\end{equation}
for any $q\in\mathbb{R}$.
It satisfies also
\begin{equation}
\hat{S}_\kappa^\dag\;\hat{P}\;\hat{S}_\kappa = \frac{1}{\kappa}\;\hat{P}\;.
\end{equation}
\subsection{Classification theorem}
Then, the following classification theorem holds \cite{holevo2007one,holevo2013quantum}:
\begin{thm}
Any one-mode quantum Gaussian channel is equivalent to one of the following:
\begin{enumerate}
\item a gauge-covariant Gaussian channel as in Definition \ref{gaugecovch} (cases $A_1)$, $B_2)$, $C)$ and $D)$ of \cite{holevo2007one});
\item a measure-reprepare channel $\Phi$ of the form
\begin{equation}\label{class2}
\Phi\left(\hat{X}\right)=\int_{\mathbb{R}}\langle q|\hat{X}|q\rangle\;e^{-iq\hat{P}}\;\hat{\rho}_0\;e^{iq\hat{P}}\;dq
\end{equation}
for any trace-class operator $\hat{X}$, where $\hat{\rho}_0$ is a given Gaussian state (case $A_2)$ of \cite{holevo2007one});
\item a random unitary channel $\Phi_\sigma$ of the form
\begin{equation}\label{Phieta}
\Phi_\sigma\left(\hat{X}\right)=\int_{\mathbb{R}}e^{-iq\hat{P}}\;\hat{X}\;e^{iq\hat{P}}\;\frac{e^{-\frac{q^2}{2\sigma}}}{\sqrt{2\pi\sigma}}\;dq
\end{equation}
for any trace-class operator $\hat{X}$, with $\sigma>0$ (case $B_1)$ of \cite{holevo2007one}).
\end{enumerate}
\end{thm}
From Lemma \ref{U*}, with a suitable redefinition of Fock rearrangement all the channels of the first class are Fock-optimal.
By contrast, for both the second and the third classes the optimal basis would be an infinitely squeezed version of the Fock basis:
\subsection{Class 2}
We will show that the channel \eqref{class2} does not have optimal inputs.
Let $\hat{\omega}$ be a generic quantum state.
Since $\Phi$ applies a random displacement to the state $\hat{\rho}_0$,
\begin{equation}\label{PhiX0}
\Phi\left(\hat{\omega}\right)\prec\hat{\rho}_0\;.
\end{equation}
Moreover, $\Phi\left(\hat{\omega}\right)$ and $\hat{\rho}_0$ cannot have the same spectrum unless the probability distribution $\langle q|\hat{\omega}|q\rangle$ is a Dirac delta, but this is never the case for any quantum state $\hat{\omega}$, so the majorization in \eqref{PhiX0} is always strict.
Besides, in the limit of infinite squeezing the output tends to $\hat{\rho}_0$ in trace norm:
\begin{align}
&\left\|\Phi\left(\hat{S}_\kappa\;\hat{\omega}\;\hat{S}_\kappa^\dag\right)-\hat{\rho}_0\right\|_1\nonumber\\
&=\left\|\int_{\mathbb{R}}\langle q|\hat{\omega}|q\rangle\left(e^{-i\kappa q\hat{P}}\;\hat{\rho}_0\;e^{i\kappa q\hat{P}}-\hat{\rho}_0\right)dq\right\|_1\nonumber\\
&\leq\int_{\mathbb{R}}\langle q|\hat{\omega}|q\rangle\left\|e^{-i\kappa q\hat{P}}\;\hat{\rho}_0\;e^{i\kappa q\hat{P}}-\hat{\rho}_0\right\|_1dq\;,
\end{align}
and the last integral tends to zero for $\kappa\to0$ since the integrand is dominated by the integrable function $2\langle q|\hat{\omega}|q\rangle$, and tends to zero pointwise.
It follows that the majorization relation
\begin{equation}
\Phi\left(\hat{S}_\kappa\;\hat{\omega}\;\hat{S}_\kappa^\dag\right)\prec\Phi\left(\hat{\omega}\right)
\end{equation}
will surely not hold for some positive $\kappa$ in a neighbourhood of $0$, and $\hat{\omega}$ is not an optimal input for $\Phi$.
\subsection{Class 3}
For the channel \eqref{Phieta}, squeezing the input always makes the output strictly less noisy.
Indeed, it is easy to show that for any positive $\sigma$ and $\sigma'$
\begin{equation}
\Phi_\sigma\circ\Phi_{\sigma'}=\Phi_{\sigma+\sigma'}\;.
\end{equation}
Then, for any $\kappa>1$ and any positive trace-class $\hat{X}$
\begin{align}
\hat{S}_\kappa\;\Phi_\sigma\left(\hat{X}\right)\;\hat{S}_\kappa^\dag &= \Phi_{\kappa^2\sigma}\left(\hat{S}_\kappa\;\hat{X}\;\hat{S}_\kappa^\dag\right)\nonumber\\
&=\Phi_{(\kappa^2-1)\sigma}\left(\Phi_{\sigma}\left(\hat{S}_\kappa\;\hat{X}\;\hat{S}_\kappa^\dag\right)\right)\;,
\end{align}
hence, recalling that $\Phi$ applies a random displacement,
\begin{equation}
\Phi_\sigma\left(\hat{X}\right)\prec \Phi_{\sigma}\left(\hat{S}_\kappa\;\hat{X}\;\hat{S}_\kappa^\dag\right)\;.
\end{equation}
\section{The thinning}\label{secthinning}
The thinning \cite{renyi1956characterization} is the map acting on classical probability distributions on the set of natural numbers that is the discrete analogue of the continuous rescaling operation on positive real numbers.
In this Section we show that the thinning coincides with the restriction of the Gaussian quantum-limited attenuator to quantum states diagonal in the Fock basis, and we hence extend Theorem \ref{maintheorem} to the discrete classical setting.
\begin{defn}[$\ell^1$ norm]
The $\ell^1$ norm of a sequence $\{x_n\}_{n\in\mathbb{N}}$ is
\begin{equation}
\|x\|_1=\sum_{n=0}^\infty |x_n|\;.
\end{equation}
We say that $x$ is summable if $\|x\|_1<\infty$.
\end{defn}
\begin{defn}
A discrete classical channel is a linear positive map on summable sequences that is continuous in the $\ell^1$ norm and preserves the sum, i.e. for any summable sequence $x$
\begin{equation}
\sum_{n=0}^\infty\left[\Phi(x)\right]_n=\sum_{n=0}^\infty x_n\;.
\end{equation}
\end{defn}
The definitions of passive-preserving and Fock-optimal channels can be easily extended to the discrete classical case:
\begin{defn}
Given a summable sequence of positive numbers $\{x_n\}_{n\in\mathbb{N}}$, we denote with $x^\downarrow$ its decreasing rearrangement.
\end{defn}
\begin{defn}
We say that a discrete classical channel $\Phi$ is passive-preserving if for any decreasing summable sequence $x$ of positive numbers $\Phi(x)$ is still decreasing.
\end{defn}
\begin{defn}
We say that a discrete classical channel $\Phi$ is Fock-optimal if for any summable sequence $x$ of positive numbers
\begin{equation}\label{optimalcl}
\Phi(x)\prec\Phi\left(x^\downarrow\right)\;.
\end{equation}
\end{defn}
Let us now introduce the thinning.
\begin{defn}[Thinning]
Let $N$ be a random variable with values in $\mathbb{N}$.
The thinning with parameter $0\leq\lambda\leq1$ is defined as
\begin{equation}
T_\lambda(N)=\sum_{i=1}^N B_i\;,
\end{equation}
where the $\{B_n\}_{n\in\mathbb{N}^+}$ are independent Bernoulli variables with parameter $\lambda$, i.e. each $B_i$ is one with probability $\lambda$, and zero with probability $1-\lambda$.
\end{defn}
From a physical point of view, the thinning can be understood as follows:
consider a beam-splitter of transmissivity $\lambda$, where each incoming photon has probability $\lambda$ of being transmitted and $1-\lambda$ of being reflected, and suppose that what happens to each photon is independent of what happens to the others.
Let $N$ be the random variable associated to the number of incoming photons, and $\{p_n\}_{n\in\mathbb{N}}$ its probability distribution, i.e. $p_n$ is the probability that $N=n$ (i.e. that $n$ photons are sent).
Then, $T_\lambda(p)$ is the probability distribution of the number of transmitted photons.
It is easy to show that
\begin{equation}\label{Tn}
\left[T_\lambda(p)\right]_n=\sum_{k=0}^\infty r_{n|k}\;p_k\;,
\end{equation}
where the transition probabilities $r_{n|k}$ are given by
\begin{equation}\label{rnk}
r_{n|k}=\binom{k}{n}\lambda^n(1-\lambda)^{k-n}\;,
\end{equation}
and vanish for $k<n$.
The map \eqref{Tn} can be uniquely extended by linearity to the set of summable sequences:
\begin{equation}\label{Tne}
\left[T_\lambda(x)\right]_n=\sum_{k=0}^\infty r_{n|k}\;x_k\;,\qquad \|x\|_1<\infty\;.
\end{equation}
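The action \eqref{Tn}--\eqref{rnk} on a finitely supported distribution is easy to implement directly. The following Python sketch is an illustration only (the example distribution is an arbitrary choice); it also exhibits numerically the sum-preserving property:

```python
from math import comb

def thinning(p, lam):
    """Thinning T_lam of a finitely supported distribution p, using the
    transition probabilities r_{n|k} = C(k, n) lam^n (1 - lam)^(k - n),
    which vanish for k < n."""
    return [sum(comb(k, n) * lam**n * (1 - lam)**(k - n) * p[k]
                for k in range(n, len(p)))
            for n in range(len(p))]

p = [0.5, 0.3, 0.2]     # arbitrary example distribution
q = thinning(p, 0.4)    # [0.752, 0.216, 0.032]: the sum is preserved
```

The output is again decreasing, in agreement with the thinning being passive-preserving.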
\begin{prop}
The map $T_\lambda$ defined in \eqref{Tne} is continuous in the $\ell^1$ norm and sum-preserving.
\begin{proof}
For any summable sequence $x$ we have
\begin{equation}
\sum_{n=0}^\infty\left|\left[T_\lambda(x)\right]_n\right|\leq\sum_{n=0}^\infty\sum_{k=0}^\infty r_{n|k}\;|x_k|=\sum_{k=0}^\infty|x_k|\;,
\end{equation}
where we have used that for any $k\in\mathbb{N}$
\begin{equation}
\sum_{n=0}^\infty r_{n|k}=1\;.
\end{equation}
Then, $T_\lambda$ is continuous in the $\ell^1$ norm.
An analogous proof shows that $T_\lambda$ is sum-preserving.
\end{proof}
\end{prop}
\begin{thm}\label{thinatt}
Let $\Phi_\lambda$ and $T_\lambda$ be the quantum-limited attenuator and the thinning of parameter $0\leq\lambda\leq1$, respectively.
Then for any summable sequence $x$
\begin{equation}
\Phi_\lambda\left(\sum_{n=0}^\infty x_n\;|n\rangle\langle n|\right)=\sum_{n=0}^\infty \left[T_\lambda(x)\right]_n\;|n\rangle\langle n|\;.
\end{equation}
\begin{proof}
This follows easily from the representations \eqref{kraus}, \eqref{Tn} and \eqref{rnk}.
\end{proof}
\end{thm}
As an easy consequence of Theorem \ref{thinatt} and Theorem \ref{maintheorem}, we have
\begin{thm}
The thinning is passive-preserving and Fock-optimal.
\end{thm}
\section{Conclusions}
We have proved that for any one-mode gauge-covariant bosonic Gaussian channel, the output generated by any state diagonal in the Fock basis and with decreasing eigenvalues majorizes the output generated by any other input state with the same spectrum.
Then, the input state with a given entropy minimizing the output entropy is certainly diagonal in the Fock basis and has decreasing eigenvalues.
The non-commutative quantum constrained minimum output entropy problem is hence reduced to a problem in classical discrete probability, that for the quantum-limited attenuator involves the thinning channel, and whose proof could exploit the techniques of the proof of the Restricted Thinned Entropy Power Inequality \cite{johnson2010monotonicity}.
Exploiting unitary equivalence, we also extend our results to one-mode trace-preserving bosonic Gaussian channels which are not gauge-covariant, with the notable exceptions of those special maps admitting normal forms $A_2)$ and $B_1)$ \cite{holevo2007one}, for which we show that no general majorization ordering is possible.
\section*{Acknowledgment}
The Authors thank Andrea Mari, Luigi Ambrosio, Seth Lloyd and Alexander S. Holevo for comments and fruitful discussions.
GdP thanks G. Toscani and G. Savar\'e for the hospitality and the useful discussions in Pavia.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
In industry, business analysts are usually not concerned with algorithms, feature selection, feature engineering or the choice of appropriate hyperparameters. All they want is a fast track to a highly accurate predictive model which they can apply with minimal knowledge and effort to their problems and datasets. To satisfy this need, many ``one-click'' machine learning platforms have emerged that specifically target those users. Platforms such as Google Predict and BigML take a dataset as input from the end user and provide a predictive model together with a web service to consume it; such hosted platforms, however, are beyond the scope of this paper.
In machine learning research, this topic falls under the umbrella term AutoML, which subsumes and integrates previously disjoint areas of research such as identification of the problem type (classification or regression), identification of the type of data and features, and feature selection. Besides connecting these areas, AutoML also opens the door to meta-learning, where generalizing from one dataset indicates which approaches are likely to succeed on similar datasets.
We propose a system that automates much of the classical machine learning cycle and builds a predictive model with little or no human interference.
With the advent of popular machine learning competition platforms such as CodaLab \citep{codalab}, Kaggle \citep{kaggle} and DrivenData \citep{dd}, it is now easy to gather a broad set of distinct datasets with different characteristics that represent real-world machine learning problems. Our hypothesis is that an AutoCompete framework meant to ease the life of a business user should be able to benefit from the experience of a human expert on a large enough set of ML competitions, at least in the form of codified knowledge.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.29]{participants_rank2.png}
\caption{Number of participants v/s rank obtained in various machine learning competitions (log scaled). Only the data for Kaggle is shown here.}
\end{figure}
We use this knowledge to train the AutoCompete system to tackle different types of datasets. The system has been trained on knowledge acquired over more than two years and more than 100 machine learning competitions. Figure 1 shows the performance of our human expert, supported by earlier versions of this framework, in selected machine learning competitions. It should be noted that the good performance is the result of both the human expert and the AutoCompete framework. The framework was developed over time, and new pipelines were added according to the requirements of the datasets seen by the human expert.
This paper is divided into five sections. Section 2 discusses our approach to AutoCompete. Section 3 describes the main components of the proposed AutoCompete framework, followed by Section 4, which presents results on standard datasets and a comparison with similar systems. Section 5 concludes and discusses future work and the feasibility of such a system.
\section{Base Framework}
\label{sec:baseframework}
Our current AutoCompete system works only with datasets in tabular format, as most competition datasets follow this layout. Such a dataset can be defined as a set $X$ and a vector $y$, where every row in $X$ represents a sample and every corresponding row in $y$ is the label (or output feature) of that sample. Every column of $X$ is an input feature. The proposed system is presently unable to deal with datasets in other formats. If such a dataset is encountered, a human expert is invited to convert it into a format that can then be used for predictive modelling with the proposed AutoCompete system.
The most important components of the proposed AutoCompete system, as depicted in Figure 2 are the ML Model Selector and Hyper-parameter Selector. In addition to these, there is a data splitter, data type identifier, feature stacker, decomposition tools and feature selector.
Once a tabular data is fed into the AutoCompete system, the very first step taken by it is splitting the dataset into training and validation sets. If a classification task is encountered, the dataset is split in a stratified manner, such that both the training and validation set have the same distribution of labels. The validation set is always kept separate from any transformations being used on the training set and is not touched at any point in the pipeline.
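The stratified split described above can be sketched in plain Python (an illustrative stdlib-only sketch, not the actual AutoCompete code; scikit-learn offers equivalent functionality):

```python
import random
from collections import defaultdict

def stratified_split(y, valid_frac=0.2, seed=0):
    """Split sample indices so that each label keeps approximately the
    same proportion in the training and validation sets."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, label in enumerate(y):
        by_label[label].append(i)
    train_idx, valid_idx = [], []
    for label in sorted(by_label):
        idx = by_label[label]
        rng.shuffle(idx)
        n_valid = max(1, round(valid_frac * len(idx)))
        valid_idx.extend(idx[:n_valid])
        train_idx.extend(idx[n_valid:])
    return train_idx, valid_idx

y = [0] * 8 + [1] * 2           # skewed labels
train_idx, valid_idx = stratified_split(y)
```

Each label contributes at least one sample to the validation set, so even a rare minority class is represented there.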
All the transformations on the training set are saved and then applied to the validation set in the end. This ensures that the system does not over-fit and that the models produced by the AutoCompete pipeline generalize to unseen data. Once the splitting is done, the types of the features are identified. The data types for every feature can be supplied by the user. However, if manually specified data types are not available, the system distinguishes between different features on its own by applying basic heuristics. For example, if a text dataset is encountered, the AutoCompete system deploys natural language processing algorithms and text transformers. For other datasets, the data type is identified and appropriate transformations are used. Each transformation is then fed through a feature selection mechanism, which in turn sends the selected features and the transformation pipeline through the model selector and hyper-parameter selector. The transformation and the model with the best performance are used in the end.
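The feature-type heuristics mentioned above can be illustrated with a toy sketch; the exact rules used by AutoCompete are not spelled out here, so the cut-off below is an assumed value:

```python
def identify_feature_type(values, max_categories=10):
    """Guess whether a column is numerical, categorical or text.
    Toy heuristic; max_categories is an assumed threshold, not a rule
    taken from the AutoCompete framework."""
    distinct = set(values)
    if all(isinstance(v, (int, float)) for v in values):
        # numeric columns with few distinct values behave like categories
        return "numerical" if len(distinct) > max_categories else "categorical"
    if any(isinstance(v, str) and " " in v for v in values):
        # multi-word strings are treated as free text
        return "text"
    return "categorical"
```

A dataset would be scanned column by column, and each column routed to the pipeline matching its inferred type.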
\begin{figure}[H]
\centering
\includegraphics[scale = 0.25]{ml_flow2.png}
\caption{Base framework of the proposed system. Pink lines represent the most common path followed.}
\end{figure}
The next section describes the most important components of the AutoCompete system in greater detail and also the strategy used for selection of models and tuning hyper-parameters.
\section{Components of AutoCompete}
\label{components}
The Dataset component of the AutoCompete framework receives the data from the user in tabular form. The data is split into training and validation sets using the Splitter component. Identifier components then determine the type of data and pass it to the appropriate pipelines and preprocessing steps. At every stage, the dataset is sent to the ML Model Selector for model selection and evaluation. The Stacker takes the different types of preprocessed features and stacks them into one dataset for further decomposition and feature selection. Feature selection is also performed on the original dataset. The final output is the pipeline with the highest score (or lowest loss) in the evaluation.
Two major components of the proposed AutoCompete framework are the ML Model Selector and Hyper-parameter Selector, as highlighted in Figure 2. Table 1 shows the different classification and regression algorithms currently used by the AutoCompete framework. In addition to the modules specified in Table 1, we also introduce bagging and boosting for different models for improved performance at a later stage.
\begin{table}[H]
\caption{Classification and regression modules present in the current AutoCompete framework}
\begin{center}
\begin{tabular}{ | l | l |}
\hline
\textbf{Classification} & \textbf{Regression} \\ \hline
Random Forest & Random Forest \\ \hline
Gradient Boosting & Gradient Boosting \\ \hline
Logistic Regression & Logistic Regression \\ \hline
Ridge Classifier & Ridge \\ \hline
Naive Bayes & Lasso \\ \hline
SVM & Support Vector Regression\\ \hline
Nearest Neighbors & Linear Regressor \\ \hline
\end{tabular}
\end{center}
\end{table}
We propose two different selectors for choosing the model and the corresponding hyper-parameters: (a) random search and (b) grid search over a given parameter space. For both, a parameter space is specified in the AutoCompete module according to the different types of datasets encountered in the past.
For example, in the case of a text dataset, the modules selected are Term Frequency--Inverse Document Frequency (TF-IDF) followed by a decomposition method such as Singular Value Decomposition. After the decomposition step, the Random Forest \citep{rf} and Support Vector Machine \citep{svm} models are selected for initial results. To keep the system fast, we tune only certain hyper-parameters within a specified search space. If the Random Forest module is selected, we limit our search to the number of estimators, the minimum number of samples at each split and the maximum number of features used by each estimator. Similarly, for SVMs, the kernel is fixed to the radial basis function (RBF) and only the penalty parameter and the kernel coefficient gamma are tuned.
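A restricted search of this kind can be sketched as follows; the parameter values below are illustrative assumptions, not the exact grids used by AutoCompete, and the evaluation function would in practice be a cross-validated model score:

```python
import random
from itertools import product

# Assumed restricted spaces in the spirit of the description above.
RF_SPACE = {"n_estimators": [100, 200, 500],
            "min_samples_split": [2, 5, 10],
            "max_features": ["sqrt", "log2", 0.5]}
SVM_SPACE = {"C": [0.1, 1, 10, 100],        # kernel fixed to rbf
             "gamma": [1e-3, 1e-2, 1e-1]}

def grid_search(space, evaluate):
    """Exhaustively evaluate every combination and keep the best."""
    keys = sorted(space)
    combos = (dict(zip(keys, values))
              for values in product(*(space[k] for k in keys)))
    best = max(combos, key=evaluate)
    return best, evaluate(best)

def random_search(space, evaluate, n_iter=20, seed=0):
    """Sample n_iter random combinations and keep the best seen."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in space.items()}
        score = evaluate(params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score
```

Because the spaces are small, grid search stays cheap, while random search trades exhaustiveness for an even smaller, fixed number of model fits.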
We observe that even though we limit the system to tuning only certain parameters, we obtain results comparable to systems like hyperopt \citep{hyperopt} (discussed in the Experiments section), and we obtain them faster.
\section{Experiments}
\label{sec:exp}
We tested our framework on standard datasets: MNIST \citep{mnist}, Newsgroups-20 \citep{newsgroup}, the Adult dataset, the smartphone dataset for human activity prediction \citep{smartphone} and the Housing dataset. The five selected datasets differ considerably in the number of variables, the kind of data, the machine learning task and the choice of evaluation metric. They thus form a useful benchmark on which other AutoML algorithms and frameworks can be developed. Table 2 shows the parameters of the datasets used.
\begin{table}[H]
\caption{Datasets used for testing AutoCompete framework}
\begin{center}
\begin{tabular}{ | l | l | p{5cm} |}
\hline
\textbf{Dataset} & \textbf{No. of Variables} & \textbf{Task Type} \\ \hline
MNIST & 784 & Multiclass Classification \\ \hline
Newsgroup-20 & \textasciitilde 100k & Multiclass Classification \\ \hline
Adult & 14 & Binary Classification\\ \hline
Smartphone & 561 & Binary Classification \\ \hline
Housing & 14 & Regression \\ \hline
\end{tabular}
\end{center}
\end{table}
Results on the Adult dataset, which has a much smaller number of variables, are presented first. For a small dataset like this one, AutoCompete selects a few fast models and then optimizes the hyper-parameters of the model with the highest area under the ROC curve. AUC is chosen as the evaluation metric because the labels are skewed, and a threshold on predicted probabilities is more intuitive than classification accuracy.
Figure 3 shows the models that were evaluated and their performance on the Adult dataset. The selected model, with a grid-based hyper-parameter search for small datasets, gives an AUC of 0.88.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.3]{adult_eval.png}
\caption{ROC AUC for different model evaluations on the Adult dataset.}
\end{figure}
For MNIST, the parameters were chosen using both random search and grid search. Since prior information about the type of data is available to us, a pipeline with PCA was selected with Random Forest as the model. The accuracy on the test dataset was 0.96. The current framework is limited to 30 minutes of wall time, and models are not evaluated further once this limit is reached.
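A minimal sketch of the selected PCA + Random Forest pipeline in scikit-learn; the component count, forest size, and the synthetic stand-in data are illustrative assumptions, not the tuned values:

```python
# Sketch of the PCA + Random Forest pipeline selected for MNIST; the
# number of components, forest size, and the synthetic stand-in data
# are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("pca", PCA(n_components=30)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

# Tiny synthetic stand-in for image data (MNIST itself has 784 pixels).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)

pipeline.fit(X, y)
train_acc = pipeline.score(X, y)
```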
\begin{table}[H]
\caption{Results on MNIST dataset}
\begin{center}
\begin{tabular}{ | l | l |}
\hline
\textbf{Algorithm} & \textbf{Accuracy Score} \\ \hline
Convnets & 99.8\% \\ \hline
hyperopt-sklearn & 98.7\% \\ \hline
libsvm grid-search & 98.6\%\\ \hline
\textbf{AutoCompete} & \textbf{96\%}\\ \hline
\end{tabular}
\end{center}
\end{table}
In case of Newsgroups-20 dataset, the AutoCompete framework takes less than 10 minutes of wall time to beat hyperopt's results \citep{hyperopt}. The pipeline chosen in this case was a text transformer (TF-IDF) and logistic regression.
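A minimal sketch of the selected TF-IDF + logistic regression pipeline; the four-document corpus and the parameter choices are illustrative assumptions:

```python
# Sketch of the TF-IDF + logistic regression pipeline selected for
# Newsgroups-20; the tiny corpus here is only a stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

text_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(sublinear_tf=True, stop_words="english")),
    ("lr", LogisticRegression(max_iter=1000)),
])

docs = [
    "the engine and the wheel",
    "goalkeeper scores in football",
    "car engine repair",
    "football match tonight",
]
labels = ["autos", "sport", "autos", "sport"]

text_pipeline.fit(docs, labels)
pred = text_pipeline.predict(["engine oil change"])[0]
```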
\begin{table}[H]
\caption{Results on Newsgroups-20 dataset}
\begin{center}
\begin{tabular}{ | l | l |}
\hline
\textbf{Algorithm} & \textbf{Weighted Average F1 Score} \\ \hline
\textbf{AutoCompete} & \textbf{0.864}\\ \hline
hyperopt-sklearn & 0.856 \\ \hline
SVMTorch & 0.848 \\ \hline
LibSVM & 0.843 \\ \hline
\end{tabular}
\end{center}
\end{table}
The next two datasets we tested the AutoCompete framework on were the Smartphone dataset and the Housing dataset. The Smartphone dataset is a classification dataset, whereas the Housing dataset is a regression dataset. The Smartphone dataset consists of 561 variables, all of them numeric, and the Housing dataset consists of 14 attributes that are a mixture of categorical, integer, and real-valued features. The selected pipeline and evaluation scores for all the datasets are shown in Table 5.
\begin{table}[H]
\caption{Selected pipeline and evaluation score for different datasets}
\begin{center}
\begin{tabular}{ | l | l | l |}
\hline
\textbf{Dataset} & \textbf{Selected Pipeline} & \textbf{Evaluation Score} \\ \hline
Smartphone & Logistic Regression & 0.921 (AUC) \\ \hline
Housing & RF(Features) + SVR & 2.3 (RMSE) \\ \hline
MNIST & PCA + RF & 0.96 (Accuracy) \\ \hline
Newsgroup-20 & TFIDF + LR & 0.864 (Weighted F1) \\ \hline
Adult & Model Stacker & 0.85 (AUC) \\ \hline
\end{tabular}
\end{center}
\end{table}
We also used AutoCompete in the AutoML Challenge, where the system did not require any human interference. We ranked 2nd in Phase 0 of the competition. Since the AutoML phase required a Python code submission, which is still under development for AutoCompete, we did not participate in that phase. Once incorporated, AutoCompete will be used in all the upcoming phases of the AutoML challenge. The results are shown in Figure 4.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.35]{Selection_171.png}
\caption{Our result in the AutoML challenge.}
\end{figure}
All the computations were performed on a laptop with 4th gen Intel Core i7-4650U Processor (3.3 GHz, 4M Cache) and 16 GB RAM without any GPU power.
\section{Conclusion and Future Work}
\label{sec:conclu}
We introduce a highly automated framework for tackling machine learning problems. The framework and all the pipelines inside it were designed based on the experience gained by taking part in hundreds of machine learning competitions over a period of two years. The comparison of AutoCompete with well-established frameworks like hyperopt \citep{hyperopt} shows that AutoCompete has high potential for minimizing human effort and bringing machine learning to the masses. The proposed framework enables a novice in machine learning to create and build benchmarks for tabular datasets without much (or any) intervention. The system also performs well on machine learning challenges. The underlying implementation is based purely on Python and scikit-learn \citep{sklearn}, with some modules written in Cython.
To extend this research, our next steps (currently under investigation) are to include a gender-based genetic algorithm (GGA) \citep{gga}, Sequential Model-based Algorithm Configuration (SMAC) \citep{smac}, and TPE \citep{tpe} for both selecting the machine learning model and tuning the hyper-parameters. Our future research also includes better stacking, ensembling, and blending of models to optimize a required evaluation metric. We plan to release a usable version in the future; it will be available on the website of our research group.
\section{Introduction}\label{sec:introduction}
The reconstruction of flow fields using spatially limited data has long been a topic of interest in the fluid dynamics community. With the rapid development of particle image velocimetry (PIV) and of the computational power for performing direct numerical simulations (DNS), high-fidelity turbulence data can be generated. However, as turbulence exhibits chaotic behaviour with a wide range of spatio-temporal scales, an expensive experimental setup and a substantial computational cost are required to obtain high-resolution turbulent flow fields. With the availability of the enormous amounts of data that can be obtained from experimental and DNS studies, deep learning techniques have great potential to serve as alternative data-driven methods for tackling turbulent flow problems. Deep learning is a subset of machine learning in which deep neural networks are used for classification, prediction, and feature extraction \cite{LeCunetal2015}. In this study, we demonstrate a deep learning-based approach to reconstruct high-resolution laminar and turbulent flow fields using coarse data represented by a limited number of distributed points. \par
Recently, there have been considerable developments in deep learning algorithms that can be practically utilised in the field of fluid dynamics \cite{Bruntonetal2020, Kutz2017}. Various deep learning-based approaches have been applied to different problems in turbulence, such as turbulence modelling \cite{Duraisamyetal2019, Gamahara&Hattori2017, Lingetal2016, Wangetal2017}, flow prediction \cite{Lee&You2019, Srinivasanetal2019}, and flow control \cite{Fanetal2020, Rabaultetal2019}. On the other hand, high-resolution turbulent flow reconstruction has recently become an active research topic after the introduction of various supervised and unsupervised deep learning-based algorithms designed to deal with high-resolution image reconstruction \cite{Bashiretal2021}. These developments in deep learning techniques, accompanied by the rapid growth in graphics processing unit (GPU) power, have opened the door to novel methods that can reconstruct super-resolution flow fields from extremely low-resolution experimental measurements or simulation results, rather than applying traditional handcrafted super-resolution methods such as bicubic interpolation \cite{Keys1981}. \par
Fukami {\it et al.} \cite{Fukami2019, Fukami2021} proposed models based on convolutional neural networks (CNNs) for spatial and spatio-temporal super-resolution reconstruction of turbulent flows. They reported good reconstruction of velocity and vorticity fields using extremely low-resolution data along with good prediction of the temporal evolution for the intervals for which the model was trained.
Onishi {\it et al.} \cite{Onishietal2019} presented a super-resolution method based on CNN for reconstructing high-resolution data from low-resolution urban meteorology simulation data. Their results revealed that the CNN-based model greatly outperformed the conventional interpolation methods. Liu {\it et al.} \cite{Liuetal2020} considered the temporal behaviour of the flow in the reconstruction of high-resolution turbulent flow fields using a CNN-based model. They showed that this approach remarkably improves the reconstruction accuracy compared with a static CNN-based model. \par
Recently, Kim {\it et al.} \cite{Kimetal2021} showed that unsupervised deep learning has great potential for reconstructing high-resolution turbulence using unpaired training data. They used a cycle-consistent generative adversarial network (CycleGAN) \cite{Zhuetal2017} to reconstruct high-resolution velocity fields from low-resolution DNS and large eddy simulation (LES) data. Their results showed better reconstruction accuracy than bicubic interpolation and a CNN-based model. \par
In terms of experimental studies, Deng {\it et al.} \cite{Dengetal2019} applied a super-resolution GAN (SRGAN) \cite{Ledigetal2017} and an enhanced SRGAN (ESRGAN) \cite{Wangetal2018} to reconstruct high-resolution flow fields using PIV measurements of flow around a cylinder. They reported an accurate reconstruction of the mean and fluctuating flow fields and observed that the reconstruction capability of ESRGAN was better than that of SRGAN. Cai {\it et al.} \cite{Caietal2019} proposed a CNN-based model for estimating high-resolution velocity fields from PIV measurements. Their model showed a better performance compared with the traditional cross-correlation algorithms. Morimoto {\it et al.} \cite{Morimotoetal2020} applied a CNN-based model to estimate velocity fields from PIV measurements with missing regions and reported a relatively good estimation of the missing regions in the velocity fields.\par
In this study, we focus on the reconstruction of high-resolution flow fields using extremely spatially limited data represented by distributed points in the flow. This approach can mimic high-resolution flow field reconstruction from spatially limited PIV or hot-wire measurements. We apply an ESRGAN-based model, i.e. multi-scale ESRGAN (MS-ESRGAN), to reconstruct high-resolution flow fields using data with various coarseness levels. As representative examples, we consider the DNS of laminar flow around a square cylinder and of turbulent channel flow. \par
The remainder of this paper is organised as follows. In Section 2, the methodology for reconstructing high-resolution flow fields using the proposed deep learning model is explained. The generation of the training data using DNS is described in Section 3. In Section 4, the results of testing the proposed model are discussed. Finally, the conclusions of this study are presented in Section 5. \par
\section{Methodology}
Since the first version of GAN was introduced by Goodfellow {\it et al.} \cite{Goodfellowetal2014}, variants of GAN have been proposed to tackle different types of image transformation and super-resolution problems \cite{Ledigetal2017, Mirza&Osindero2014, Wangetal2018, Zhuetal2017}. The architecture of GAN is designed to be different from the traditional architecture of multilayer perceptron (MLP) or CNN-based models. In GAN, two adversarial networks, i.e. the generator ($G$) and the discriminator ($D$), compete with each other. Here, $G$ generates fake images similar to the real ones, whereas $D$ distinguishes the fake images from the real ones. $G$ and $D$ are usually MLPs or CNNs that are trained simultaneously. The goal of the training process is to make $G$ generate fake images that are difficult to distinguish using $D$. This process can be expressed as a min-max two-player game with a value function $V(D, G)$ such that:
\begin{equation} \label{eqn:eq1}
\begin{split}
\min_{G} \max_{D} V(D,G) = \mathbb{E}_{x_r \sim P_{data}(x_r)} \left[ {\rm log}\, D(x_r) \right] + \mathbb{E}_{z \sim P_z(z)} \left[ {\rm log} (1-D(G(z))) \right],
\end{split}
\end{equation}
\noindent where $x_r$ is the image from the ground truth data, whereas $P_{data}(x_r)$ is the real image distribution. $\mathbb{E}$ represents the operation of calculating the average over all the data in the training mini-batch. In the second term on the right-hand side of Eq.~\ref{eqn:eq1}, $z$ is a random vector used as an input to $G$, whereas $D(x_r)$ represents the probability that the image is real and not generated by $G$. $G(z)$ is the output from $G$, which is expected to be an image so similar to the real image that the value of $D(G(z))$ is close to 1. On the other hand, a well-trained $D$ returns a value of $D(x_r)$ close to 1 and a value of $D(G(z))$ close to 0. Thus, in the training process, $G$ is trained in a direction that minimises $V(D,G)$, whereas $D$ is trained in a direction that maximises $V(D,G)$. After successful training, $G$ is expected to produce an image with a distribution so similar to that of the real image that $D$ cannot judge whether it is real or fake.
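To make the roles of the two terms in Eq.~\ref{eqn:eq1} concrete, the following sketch evaluates $V(D,G)$ numerically for a toy discriminator; the logit values and batch size are illustrative assumptions, not part of the model:

```python
# Numerical sketch of the two expectation terms in the value function
# V(D, G) of Eq. (1); the logits and batch size are toy values.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def value_function(logits_real, logits_fake):
    """V(D, G) = E[log D(x_r)] + E[log(1 - D(G(z)))]."""
    d_real = sigmoid(logits_real)   # D(x_r)
    d_fake = sigmoid(logits_fake)   # D(G(z))
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident discriminator (large positive logits on real samples,
# large negative on fakes) pushes V towards its maximum of 0, whereas
# an unsure one (logits near 0) gives V = 2 log(1/2).
v_confident = value_function(np.full(16, 8.0), np.full(16, -8.0))
v_unsure = value_function(np.zeros(16), np.zeros(16))
```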
This study applies a newly developed high-fidelity deep learning framework based on ESRGAN \cite{Wangetal2018} to reconstruct high-resolution flow fields using extremely coarse data as input to $G$. We adapted the generator network to obtain MS-ESRGAN. The architecture of $G$ in MS-ESRGAN is shown in Fig.~\ref{fig:1-ESRGAN}(a). Here, $G$ consists of a deep convolution neural network represented by residuals in residual dense blocks (RRDBs) and multi-scale branches.
The coarse input data are first passed through a convolution layer and then through a series of RRDBs. The multi-scale part, which consists of three parallel convolutional sub-models with different kernel sizes, is applied to the data features that are extracted by the RRDBs. Finally, the outputs of the three branches are simply summed and passed through a final convolutional layer to generate a high-resolution fake image ($x_f$). Fig.~\ref{fig:1-ESRGAN}(b) shows the architecture of $D$. As mentioned earlier, $D$ is designed to distinguish between the fake high-resolution image and the real image. The fake and real images are fed to $D$ and passed through a series of convolutional, batch normalisation, and leaky ReLU layers. Then, the data are passed through a final convolutional layer. The non-transformed discriminator outputs using the real and fake images, i.e. $C(x_r)$ and $C(x_f)$, are used to calculate the relativistic average discriminator value $D_{Ra}$ \cite{Jolicoeur-Martineau2018}:
\begin{equation} \label{eqn:eq2}
D_{Ra} (x_r , x_f ) = \sigma \left( C(x_r) - \mathbb{E}_{x_f} \left[ C(x_f) \right] \right),
\end{equation}
\begin{equation} \label{eqn:eq3}
D_{Ra} (x_f , x_r ) = \sigma \left( C(x_f) - \mathbb{E}_{x_r} \left[ C(x_r) \right] \right),
\end{equation}
\noindent where $\sigma$ is the sigmoid function. In Eqs.~\ref{eqn:eq2} and ~\ref{eqn:eq3}, $D_{Ra}$ predicts the probability that the output from $D$ using the real image is relatively more realistic than the output using the fake image.
The discriminator loss is then defined as:
\begin{equation} \label{eqn:eq4}
L_D^{Ra} = -\mathbb{E}_{x_r} \left[ {\rm log} (D_{Ra} (x_r , x_f )) \right] - \mathbb{E}_{x_f} \left[ {\rm log} (1 - D_{Ra} (x_f , x_r )) \right].
\end{equation}
The adversarial loss of the generator can be expressed in a symmetrical form as:
\begin{equation} \label{eqn:eq5}
L_G^{Ra} = -\mathbb{E}_{x_r} \left[ {\rm log} (1 - D_{Ra} (x_r , x_f )) \right] - \mathbb{E}_{x_f} \left[ {\rm log} (D_{Ra} (x_f , x_r )) \right].
\end{equation}
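A minimal numerical sketch of Eqs.~\ref{eqn:eq2}--\ref{eqn:eq5}, assuming toy one-dimensional discriminator outputs in place of image batches:

```python
# Sketch of the relativistic average discriminator (Eqs. 2-3) and the
# adversarial losses (Eqs. 4-5), evaluated on toy discriminator outputs.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def relativistic_losses(c_real, c_fake):
    """Return (L_D^Ra, L_G^Ra) from raw discriminator outputs C(.)."""
    d_ra_real = sigmoid(c_real - np.mean(c_fake))   # D_Ra(x_r, x_f)
    d_ra_fake = sigmoid(c_fake - np.mean(c_real))   # D_Ra(x_f, x_r)
    loss_d = -np.mean(np.log(d_ra_real)) - np.mean(np.log(1.0 - d_ra_fake))
    loss_g = -np.mean(np.log(1.0 - d_ra_real)) - np.mean(np.log(d_ra_fake))
    return loss_d, loss_g

# With clearly separated outputs the discriminator loss is small and the
# generator loss large; with indistinguishable outputs both equal 2 log 2.
ld_easy, lg_easy = relativistic_losses(np.full(8, 5.0), np.full(8, -5.0))
ld_tied, lg_tied = relativistic_losses(np.zeros(8), np.zeros(8))
```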
In addition to the adversarial loss, four additional loss terms are used to form the combined loss function of the generator: pixel loss $(L_{pixel})$, perceptual loss $(L_{perceptual})$, gradient loss $(L_{gradient})$, and Reynolds stress loss $( L_{Reynolds~stress})$. $L_{pixel}$ is the pixel-based error between the generated data and the ground truth data. $L_{perceptual}$ represents the difference in the extracted features of the real and fake data. The pre-trained CNN VGG-19 \cite{Simonyan&Zisserman2015} is used to extract the features. While one layer of VGG-19 was applied in the model of Wang {\it et al.} \cite{Wangetal2018}, we apply three different layers to extract the features. This strategy has shown a remarkable improvement in the training stability. $L_{gradient}$ represents the difference in the gradients of the generated fake data and the real data, whereas $L_{Reynolds~stress}$ is the error between the Reynolds stress tensor of the velocity data obtained from the generator and that of the ground truth velocity data. The mean squared error (MSE) is used to calculate all the loss terms except $L_G^{Ra}$. \par
The combined loss function of the generator is expressed as:
\begin{equation} \label{eqn:eq6}
\mathcal{L}_G = L_G^{Ra} + \lambda_1 L_{pixel} + L_{perceptual} + \lambda_2 L_{gradient} + \lambda_3 L_{Reynolds~stress},
\end{equation}
\noindent where $\lambda_1, \lambda_2$, and $\lambda_3$ are coefficients used to balance the different loss terms, whose values are set to be 5000, 10, and 100, respectively. $L_{gradient}$ is used to account for the non-uniform distribution of the grid points in the training process, whereas $L_{perceptual}$ helps to overcome training instability. The turbulence statistics can be improved by applying $L_{Reynolds~stress}$, which forces the model to consider the components of the Reynolds stress tensor in the training process. \par
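The assembly of the combined loss in Eq.~\ref{eqn:eq6} can be sketched as follows; mean-squared errors on toy arrays stand in for the pixel, perceptual (VGG-19 feature), gradient, and Reynolds-stress terms:

```python
# Sketch of the combined generator loss of Eq. (6); mean-squared errors
# on toy arrays stand in for the pixel, perceptual (VGG-19 feature),
# gradient, and Reynolds-stress terms.
import numpy as np

LAMBDA_PIXEL, LAMBDA_GRAD, LAMBDA_RS = 5000.0, 10.0, 100.0

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def combined_generator_loss(fake, real, fake_feat, real_feat,
                            fake_rs, real_rs, l_g_ra):
    l_pixel = mse(fake, real)
    l_perceptual = mse(fake_feat, real_feat)      # extracted features
    l_gradient = mse(np.gradient(fake, axis=-1),
                     np.gradient(real, axis=-1))
    l_reynolds = mse(fake_rs, real_rs)            # Reynolds stresses
    return (l_g_ra + LAMBDA_PIXEL * l_pixel + l_perceptual
            + LAMBDA_GRAD * l_gradient + LAMBDA_RS * l_reynolds)

rng = np.random.default_rng(1)
real = rng.normal(size=(8, 8))
# A perfect generator with zero adversarial loss incurs zero total loss.
loss_perfect = combined_generator_loss(real, real, real, real,
                                       real, real, l_g_ra=0.0)
```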
In this study, the adaptive moment estimation (ADAM) optimisation algorithm \cite{Kingma&Ba2017} is applied to update the weights of the model. The training data are divided into mini-batches, and the size of each mini-batch is set to be 16. \par
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.9\textwidth]{./1.eps}
\caption[]{MS-ESRGAN architecture: (a) the generator ($\beta$ is the residual scaling parameter = 0.2), and (b) the discriminator.}
\label{fig:1-ESRGAN}
\end{figure}
\section{Generation of training data}
Two examples are used in this study: two-dimensional laminar flow around a square cylinder at $Re_d = 100$, which serves as a demonstration, and turbulent channel flow at $Re_\tau = 180$, which serves as a test case for reconstructing high-resolution wall-bounded turbulent flow. The training data for each example are generated by performing DNS. \par
The momentum equation for an incompressible viscous fluid is:
\begin{equation}\label{eqn:eq7}
\frac{\partial {\bf u}}{\partial t} + {\bf u}\cdot \nabla {\bf u} = - \frac{1}{\rho} \nabla p + {\it \nu} \nabla^2 {\bf u},
\end{equation}
\noindent where ${\bf u}$ is the velocity of the fluid, $\rho$ is the density, $p$ is the pressure, and $\nu$ is the kinematic viscosity. The continuity equation that expresses the incompressibility of the fluid is defined as:
\begin{equation}\label{eqn:eq8}
\nabla \cdot {\bf u} = 0.
\end{equation}
The open-source computational fluid dynamics (CFD) finite-volume code OpenFOAM-5.0x is used to perform the DNS. In the case of two-dimensional laminar flow around a square cylinder, the Reynolds number is based on the free-stream velocity and cylinder width, i.e. $Re_d = U_\infty d /{\it \nu}$, where $U_\infty$ is the free-stream velocity and $d$ is the cylinder width. The domain size is set to be $(xd{\times}yd) = (15{\times}20)$ with the corresponding grid size of $(381{\times}221)$. The time step of the simulation is set to be $\Delta t =10^{-2}$. The statistics obtained from the simulation have been validated against the DNS results of Anzai {\it et al.} \cite{Anzaietal2017}.
In the case of turbulent channel flow, the friction Reynolds number, i.e. $Re_\tau = u_\tau \delta / {\it \nu}$, is set to be 180, where $u_\tau$ is the friction velocity and $\delta$ is half of the channel height. The dimensions of the computational domain are set to be $4\pi \delta$, $2\delta$, and $2 \pi \delta$ in the streamwise ($x$), wall-normal ($y$), and spanwise ($z$) directions, respectively. The corresponding numbers of grid points are 256, 128, and 256, respectively. Uniform grid-point distributions with spacings $\Delta x^+$ $\approx$ 6.3 and $\Delta z^+$ $\approx$ 2.8 are used in the streamwise and spanwise directions. Note that the superscript $+$ indicates that the quantity is made dimensionless using the wall variables, i.e. $u_\tau$ and $\it \nu$. A non-uniform grid-point distribution is used in the wall-normal direction. The first grid point away from the wall is located at $y^+$ $\approx$ 0.63, and the maximum spacing (at the centreline of the channel) is $\Delta y^+_{max}$ $\approx$ 6.4. The periodic boundary condition is assigned to the streamwise and spanwise directions, whereas the no-slip condition is applied to the upper and lower walls of the channel. The time step of the simulation is set to be $\Delta t = 10^{-2}$, corresponding to $\Delta t^+ = 0.1134$. The turbulence statistics obtained from the simulation have been validated using the DNS data of Moser {\it et al.} \cite{Moseretal1999}.\par
For both simulations, the pressure-implicit split-operator (PISO) algorithm is employed to solve the coupled pressure--momentum system. The convective fluxes are discretised with a second-order accurate linear upwind scheme, and all other discretisation schemes used in each simulation have second-order accuracy. The maximum Courant–Friedrichs–Lewy (CFL) number is maintained below 1 to ensure simulation stability.\par
The training data are obtained with 6,000 snapshots from the simulation of two-dimensional laminar flow around a square cylinder and 10,000 snapshots from a single ($y-z$) plane of the turbulent channel simulation. The domain size used for training in the case of two-dimensional laminar flow around square cylinder is fixed to be $(xd \times yd) = (16.82 \times 8)$, which is equivalent to a grid size of ($320 \times 160$). The same grid size obtained from the simulation is used for training in the turbulent channel flow case. In both cases, the interval between the collected snapshots of the flow fields is 10 times the time step used in the simulation.\par
As mentioned earlier, the low-resolution data are obtained by selecting distributed points in the flow, i.e. no filtering operation is used. The distribution of the selected points at different coarseness levels for the two cases is shown in Fig.~\ref{fig:2-Coarseness}. The distribution of the points ($n_x{\times}n_y$) in the case of two-dimensional laminar flow around a square cylinder, as shown in Fig.~\ref{fig:2-Coarseness}(a), has three levels of coarseness: case 1 (40 $\times$ 20), case 2 (20 $\times$ 10), and case 3 (10 $\times$ 5). Figure~\ref{fig:2-Coarseness}(b) shows the three coarseness levels of the point distribution ($n_y \times n_z$) in the case of turbulent channel flow: case 1 (16 $\times$ 32), case 2 (8 $\times$ 16), and case 3 (4 $\times$ 8). To prepare the data for the training process, the data are normalised using the min-max normalisation function to produce values between 0 and 1. The shape of the input data to $G$ is fixed to be (40 $\times$ 20) in the case of laminar flow around a square cylinder and (16$\times$32) in the case of turbulent channel flow. To achieve these shapes, upsampling is performed on cases 2 and 3 for each of the two cases used in this study. \par
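The coarsening and normalisation steps described above can be sketched as follows; the random field stands in for a DNS snapshot, and the grid sizes match the laminar case:

```python
# Sketch of how the coarse inputs are formed: select a regular subset of
# grid points (no filtering) and min-max normalise to [0, 1]. The sizes
# match the laminar case (320 x 160 down to 40 x 20 for case 1).
import numpy as np

def subsample(field, nx, ny):
    """Select an nx x ny set of evenly spaced points from a 2-D field."""
    ix = np.linspace(0, field.shape[0] - 1, nx).astype(int)
    iy = np.linspace(0, field.shape[1] - 1, ny).astype(int)
    return field[np.ix_(ix, iy)]

def min_max_normalise(field):
    lo, hi = field.min(), field.max()
    return (field - lo) / (hi - lo)

rng = np.random.default_rng(0)
hi_res = rng.normal(size=(320, 160))                   # DNS snapshot stand-in
coarse = min_max_normalise(subsample(hi_res, 40, 20))  # case 1 input
```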
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.9\textwidth]{./2.eps}
\caption[]{The distribution of the selected points at three different coarseness levels for the case of (a) two-dimensional laminar flow around a square cylinder, and (b) turbulent channel flow.}
\label{fig:2-Coarseness}
\end{figure}
\section{Results and discussion}
\subsection{Flow around a square cylinder}
In this section, we examine the ability of MS-ESRGAN to reconstruct high-resolution flow fields using coarse data of two-dimensional laminar flow around a square cylinder. Note that all the results are obtained using test data that are not included in the training process. The reconstructed instantaneous velocity fields ($u$ and $\upsilon$), pressure field, and root-mean-square error (RMSE) of the reconstruction are shown in Fig.~\ref{fig:3-Cyl-RMSE}. Here, the velocity components are normalised by $U_\infty$, and the dimensionless pressure is given as $C_p = (p- p_\infty) / (0.5\rho U_\infty^2)$, where $p_\infty$ is the free-stream pressure. The reconstructed fields show a commendable agreement with the DNS results even when the highest coarseness level is used (i.e. case 3). As shown in the figure, the RMSE increases with the coarseness level, with an acceptably small maximum value.
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.7\textwidth]{./3.eps}
\caption[]{Reconstructed instantaneous flow fields and RMSE for the case of two-dimensional laminar flow around a square cylinder: (a) streamwise velocity, (b) spanwise velocity, and (c) pressure.}
\label{fig:3-Cyl-RMSE}
\end{figure}
To further validate the model, the capability of the model to reconstruct high-resolution flow fields is examined statistically. Figure~\ref{fig:4-Cyl-PDF} shows the probability density function (PDF) of the reconstructed velocity components and the pressure. All the reconstruction results show excellent agreement with the results obtained from the DNS, thus indicating the ability of the model to reconstruct instantaneous high-resolution flow fields. \par
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.8\textwidth]{./4.eps}
\caption[]{Probability density functions of the reconstructed velocity components and pressure for the case of two-dimensional laminar flow around a square cylinder.}
\label{fig:4-Cyl-PDF}
\end{figure}
To observe the flow characteristics, the profiles of the mean streamwise velocity and mean pressure are calculated using 5000 reconstructed snapshots. As shown in Fig.~\ref{fig:5-Cyl-MeanVP}(a) and (b), the mean streamwise velocity and pressure are in commendable agreement with the results obtained from the DNS. However, a scattering of the pressure values near the rear of the cylinder, i.e. $x/d$ $\approx$ $0.5$, can be seen in Fig.~\ref{fig:5-Cyl-MeanVP}(b), whereas it does not appear in the velocity profile in Fig.~\ref{fig:5-Cyl-MeanVP}(a). This behaviour can be attributed to the rapid change of the pressure values in this region and to the fact that fewer physics constraints in the combined loss function are dedicated to the pressure than to the velocity components: for the pressure, there is only the gradient term, whereas for the velocity, there are both the gradient and the Reynolds stress tensor terms. We believe that more studies focusing on pressure-based physics constraints are required to achieve a more accurate reconstruction of the pressure field.
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.9\textwidth]{./5.eps}
\caption[]{Profiles of the reconstructed mean streamwise velocity (a) and mean pressure (b) for the case of two-dimensional laminar flow around a square cylinder.}
\label{fig:5-Cyl-MeanVP}
\end{figure}
Figure~\ref{fig:6-Cyl-Spectrum} shows the power spectrum density (PSD) of the streamwise velocity fluctuations at two different streamwise locations plotted against Strouhal number ($St$=$fd$/$U_\infty$), where $f$ is the frequency. The results obtained from the reconstructed velocity data are in excellent
agreement with the results obtained from the DNS for all the three levels of coarseness, indicating that the reconstructed data match the temporal behaviour of the ground truth data accurately. \par
The aforementioned results suggest that MS-ESRGAN can reconstruct laminar flow fields with high spatial resolution and reproduce the same dynamics as the ground truth data. \par
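A sketch of the spectral check above, assuming a synthetic shedding signal in place of the reconstructed velocity fluctuations; Welch's method is one common PSD estimator, and the particular estimator used in the study is not specified:

```python
# Sketch of the spectral check: Welch power spectral density of a
# velocity signal, with frequency expressed as the Strouhal number
# St = f d / U_inf. The shedding signal here is synthetic.
import numpy as np
from scipy.signal import welch

U_inf, d, dt = 1.0, 1.0, 0.1               # snapshot interval: 10 x 10^-2
t = np.arange(5000) * dt
u_fluct = np.sin(2.0 * np.pi * 0.15 * t)   # shedding at St ~ 0.15

freqs, psd = welch(u_fluct, fs=1.0 / dt, nperseg=1024)
strouhal = freqs * d / U_inf
peak_st = strouhal[np.argmax(psd)]         # dominant Strouhal number
```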
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=1.0\textwidth]{./6.eps}
\caption[]{Power spectrum density of the reconstructed streamwise velocity fluctuations at two different streamwise locations for the case of two-dimensional laminar flow around a square cylinder.}
\label{fig:6-Cyl-Spectrum}
\end{figure}
\subsection{Turbulent channel flow}
The ability of MS-ESRGAN to reconstruct high-resolution wall-bounded turbulent flow fields is validated in this section using a plane normal to the streamwise direction in the turbulent channel flow case, i.e. ($y-z$) plane. The reconstructed instantaneous velocity fields (${u^+}$, ${\upsilon^+}$, and ${w^+}$), and RMSE are shown in Fig.~\ref{fig:7-Chan-RMSE}. The results of all the velocity components are in agreement with the DNS data, including that of case 3, where minimal information about the flow field is available. While the RMSE of the streamwise velocity component is noticeably affected by the level of coarseness, the wall-normal and spanwise velocity components show less sensitivity to the coarseness level.\par
As shown in Fig.~\ref{fig:8-Chan-PDF}, the PDF plots of the reconstructed streamwise and wall-normal velocity components reveal a commendable agreement with the results obtained from the DNS. However, a slight deviation can be observed for the spanwise velocity, which increases with the coarseness level. This can be attributed to the limited information available about the spanwise velocity component, considering its more random behaviour compared with that of the other two velocity components.
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.8\textwidth]{./7.eps}
\caption[]{Reconstructed instantaneous velocity fields and RMSE for the case of turbulent channel flow: (a) streamwise velocity, (b) wall-normal velocity, and (c) spanwise velocity.}
\label{fig:7-Chan-RMSE}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.8\textwidth]{./8.eps}
\caption[]{Probability density functions of the reconstructed velocity components for the case of turbulent channel flow.}
\label{fig:8-Chan-PDF}
\end{figure}
To further examine the capability of the model to reproduce the velocity fields with accurate spatial resolution, two-dimensional cross-correlation $(R_{ii}(\Delta y,\Delta z ))$ plots of the velocity fluctuations are examined, as shown in Fig.~\ref{fig:9-2D-Corrln}. The correlations of all the three levels of coarseness are generally in good agreement with the correlations obtained from the DNS results, indicating the excellent ability of MS-ESRGAN to reproduce the high-resolution velocity fields with an accurate spatial distribution.
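The two-point correlation can be computed efficiently with FFTs under a periodicity assumption; the following sketch uses a synthetic fluctuation field of illustrative size:

```python
# Sketch of the two-point correlation R_ii(dy, dz) of a fluctuation
# field, computed with FFTs under a periodicity assumption; the field
# and its size are illustrative.
import numpy as np

def two_point_correlation(fluct):
    """Normalised spatial autocorrelation of a 2-D fluctuation field."""
    spec = np.fft.fft2(fluct)
    corr = np.fft.ifft2(spec * np.conj(spec)).real
    corr /= corr.flat[0]              # normalise so R(0, 0) = 1
    return np.fft.fftshift(corr)      # zero separation at the centre

rng = np.random.default_rng(0)
fluct = rng.normal(size=(64, 64))
fluct -= fluct.mean()
R = two_point_correlation(fluct)
centre = R[32, 32]                    # R at zero separation
```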
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.8\textwidth]{./9.eps}
\caption[]{Two-dimensional cross-correlations of the reconstructed velocity components for the case of turbulent channel flow.}
\label{fig:9-2D-Corrln}
\end{figure}
Furthermore, the statistics of 20,000 generated velocity fields corresponding to $t^+ = 22,680$ are compared with the statistics obtained from the DNS results, as shown in Fig.~\ref{fig:10-Chan-TBL}. As can be seen in Fig.~\ref{fig:10-Chan-TBL}(a), the mean streamwise velocity profile for all the three cases of coarseness levels shows excellent agreement with the DNS data obtained within the wall distance range, i.e. the linear viscous sublayer, buffer layer, and logarithmic region. Similarly, the root-mean-square (RMS) profiles of the streamwise and wall-normal velocity components ($u_{rms}^+$ and $\upsilon_{rms}^+$) are also in good agreement with the DNS results for all the coarseness levels, as shown in Fig.~\ref{fig:10-Chan-TBL}(b) and (c). Although the RMS profile of the spanwise velocity component ($w_{rms}^+$) for cases 1 and 2 is in good agreement with the profile obtained using the DNS results, it shows an offset in case 3, as shown in Fig.~\ref{fig:10-Chan-TBL}(d). As mentioned earlier, this can be regarded as the limited information regarding spanwise velocity in case 3. Fig.~\ref{fig:10-Chan-TBL}(e) shows the mean shear stress profile $-\overline{u'^+\upsilon'^+}$. The values are generally in good agreement with the DNS results for all the coarseness levels. Nevertheless, a noticeable scattering of the values can be seen near $y^+$ $\approx$ 20 - 25 for cases 1 and 2. This can be attributed to the maximum shear stress values that appear in this region which are harder to capture by the model compared with the values that appear in the other regions along the wall distance. Interestingly, the values of $-\overline{u'^+\upsilon'^+}$ for case 3 show more smooth profile compared with the values for cases 1 and 2 which is contrary to the general expectations considering the previous results. 
This might be attributed to the under- and over-prediction of the streamwise and wall-normal velocity components, which can be compensated during the multiplication and averaging processes.
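The profiles compared above follow the standard definitions for channel flow: averaging over snapshots and the homogeneous spanwise direction yields the mean profile, and the fluctuations about it yield the RMS and Reynolds shear stress profiles. A minimal NumPy sketch of these definitions is given below; the array layout (snapshots $\times$ wall-normal $\times$ spanwise) is an assumption for illustration, not the layout used in this study.

```python
import numpy as np

def channel_statistics(u, v):
    """Wall-normal profiles of the statistics compared in Fig. 10.

    u, v : arrays of shape (n_snapshots, n_y, n_z) holding the streamwise
    and wall-normal velocity components in wall units (hypothetical layout).
    Averaging is taken over snapshots (axis 0) and the homogeneous
    spanwise direction (axis 2).
    """
    # Mean streamwise velocity profile <u+>(y+)
    u_mean = u.mean(axis=(0, 2))
    # Fluctuations about the mean profiles
    u_fluc = u - u_mean[None, :, None]
    v_fluc = v - v.mean(axis=(0, 2))[None, :, None]
    # RMS profiles u_rms+ and v_rms+
    u_rms = np.sqrt((u_fluc ** 2).mean(axis=(0, 2)))
    v_rms = np.sqrt((v_fluc ** 2).mean(axis=(0, 2)))
    # Mean shear stress profile -<u'v'>
    uv = -(u_fluc * v_fluc).mean(axis=(0, 2))
    return u_mean, u_rms, v_rms, uv
```

The same routine is applied to the reconstructed and the DNS fields, so any offset (such as the one seen in case 3) reflects the reconstruction rather than the post-processing.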
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.9\textwidth]{./10.eps}
\caption[]{Turbulence statistics of the reconstructed data for the case of turbulent channel flow: (a) mean streamwise velocity profile, (b) RMS profile of the streamwise velocity, (c) RMS profile of the wall-normal velocity, (d) RMS profile of the spanwise velocity, (e) mean shear stress profile, and (f) RMS profile of the streamwise vorticity}
\label{fig:10-Chan-TBL}
\end{figure}
The RMS profiles of the streamwise vorticity ($\omega_{rms}^+$) are shown in Fig.~\ref{fig:10-Chan-TBL}(f). The results for cases 1 and 2 match the DNS results well. However, case 3 shows lower values over most of the wall-distance range, indicating that the effect of the coarseness level is more noticeable in case 3 than in cases 1 and 2. This can be attributed to a lack of information, resulting in an impaired reproduction of the streamwise vorticity.
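The streamwise vorticity is a derived quantity, $\omega_x = \partial w/\partial y - \partial \upsilon/\partial z$, which is why it is particularly sensitive to missing spanwise information. A finite-difference sketch of this definition is shown below; uniform grid spacings are assumed for simplicity, whereas the DNS grid is typically stretched in the wall-normal direction.

```python
import numpy as np

def streamwise_vorticity(v, w, dy, dz):
    """Streamwise vorticity omega_x = dw/dy - dv/dz on a y-z plane.

    v, w : 2-D arrays (n_y, n_z) of the wall-normal and spanwise
    velocity components. Central differences via np.gradient; the grid
    spacings dy, dz are assumed uniform in this sketch.
    """
    dw_dy = np.gradient(w, dy, axis=0)  # wall-normal derivative of w
    dv_dz = np.gradient(v, dz, axis=1)  # spanwise derivative of v
    return dw_dy - dv_dz
```

Because $\omega_x$ depends on derivatives of both $\upsilon$ and $w$, errors in the reconstructed spanwise component (as in case 3) propagate directly into the vorticity RMS.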
To further investigate the capability of MS-ESRGAN to reconstruct high-resolution velocity fields with realistic behaviour, the one-dimensional spanwise energy spectra for the three cases at different wall distances are shown in Fig.~\ref{fig:11-Chan-Spetrum}. It can be observed from the figure that in all three cases, the spectral content is reproduced appropriately with only a slight deviation at high wavenumbers. These results suggest that the model can successfully reproduce spectra similar to those obtained from the DNS. \par
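Since the spanwise direction is homogeneous and periodic, the one-dimensional spanwise spectrum at a fixed wall distance reduces to a Fourier transform along $z$ averaged over snapshots. The NumPy sketch below illustrates this computation; the normalisation convention is an assumption for illustration.

```python
import numpy as np

def spanwise_spectrum(u, dz):
    """One-dimensional spanwise energy spectrum at a fixed y+.

    u : array (n_snapshots, n_z) of a velocity fluctuation sampled along
    the periodic spanwise direction; dz is the (uniform) grid spacing.
    Returns the positive wavenumbers and the snapshot-averaged spectrum.
    """
    n_z = u.shape[-1]
    u_hat = np.fft.rfft(u, axis=-1) / n_z         # normalised Fourier modes
    spectrum = (np.abs(u_hat) ** 2).mean(axis=0)  # average over snapshots
    k_z = 2.0 * np.pi * np.fft.rfftfreq(n_z, d=dz)
    return k_z, spectrum
```

The slight high-wavenumber deviation noted above corresponds to the smallest resolved spanwise scales, which are the hardest for any super-resolution model to recover.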
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.8\textwidth]{./11.eps}
\caption[]{One-dimensional spanwise energy spectra of the reconstructed velocity components for the case of turbulent channel flow at different wall distances.}
\label{fig:11-Chan-Spetrum}
\end{figure}
The temporal evolution of the reconstructed snapshots is examined using the time correlation $(R_{ii}(t))$ of each velocity component at $y^+$ $\approx$ 177.6, as shown in Fig.~\ref{fig:12-Time-Corrln}. The results obtained from the reconstructed data are in excellent agreement with the DNS results. Here, the model shows a remarkable ability to reconstruct the velocity data with the same dynamics as the ground truth data.
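The time correlation $R_{ii}(t)$ used here is the normalised temporal autocorrelation of a velocity fluctuation at a fixed wall distance. A minimal sketch of this diagnostic, assuming a (time $\times$ points) array layout, is given below.

```python
import numpy as np

def time_correlation(u, max_lag):
    """Normalised time correlation R_ii(t) of one velocity component.

    u : array (n_time, n_points) of fluctuation signals at a fixed y+.
    Returns R for lags 0..max_lag, averaged over points, with R(0) = 1.
    """
    u = u - u.mean(axis=0, keepdims=True)  # remove the temporal mean
    var = (u ** 2).mean()
    R = np.empty(max_lag + 1)
    R[0] = 1.0
    for lag in range(1, max_lag + 1):
        # correlate the signal with itself shifted by `lag` samples
        R[lag] = (u[:-lag] * u[lag:]).mean() / var
    return R
```

Agreement of this curve with the DNS indicates that the reconstructed snapshots decorrelate at the correct rate, i.e. the model preserves the temporal dynamics and not only the instantaneous spatial structure.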
\begin{figure}
\centering
\includegraphics[angle=0, trim=0 0 0 0, width=0.8\textwidth]{./12.eps}
\caption[]{Time correlations of the reconstructed velocity components for the case of turbulent channel flow.}
\label{fig:12-Time-Corrln}
\end{figure}
In summary, the results obtained for the turbulent channel flow case indicate that the proposed MS-ESRGAN, which can successfully reconstruct the velocity fields with high spatial resolution, can reproduce turbulence statistics similar to those of the ground truth data. \par
Our preliminary studies revealed that using only $L_G^{Ra}$, $L_{pixel}$, and $L_{perceptual}$ in the generator loss function can result in distorted fluctuations of the velocity components. Furthermore, the reconstructed images showed less sharpness in the flow field details compared with the images reconstructed using the combined loss function. This suggests that using physics-based constraints can remarkably improve the accuracy of the model output.
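The structure of such a combined generator loss can be sketched as a weighted sum of the data-driven terms and a physics-based residual. The NumPy sketch below uses mass conservation (a zero-divergence residual) as a stand-in physics term and illustrative weights; both are assumptions for illustration and not the specific terms or values used in this study, and the perceptual (VGG-feature) term is omitted.

```python
import numpy as np

def continuity_residual(u, v, dx, dy):
    """Hypothetical physics-based term: mean squared divergence of a
    2-D velocity field (a stand-in for the paper's physics constraints)."""
    div = np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)
    return (div ** 2).mean()

def generator_loss(fake, real, adv, dx, dy,
                   w_pixel=1.0, w_adv=5e-3, w_phys=0.1):
    """Combined generator loss: pixel + adversarial + physics terms.

    fake, real : (2, n_y, n_x) arrays holding the (u, v) components;
    `adv` is the relativistic adversarial term computed elsewhere.
    All weights are illustrative.
    """
    l_pixel = np.abs(fake - real).mean()                    # L1 pixel loss
    l_phys = continuity_residual(fake[0], fake[1], dx, dy)  # physics term
    return w_pixel * l_pixel + w_adv * adv + w_phys * l_phys
```

Penalising a physical residual of the generated fields, rather than only their pixel-wise distance to the target, is what discourages the distorted fluctuations observed in the ablation above.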
\subsection{Computational cost}
As a final remark, the computational cost of the proposed MS-ESRGAN is presented in Table~\ref{tab:Table1}. The total number of trainable parameters for both examples in this study is approximately 51 million (45.8 million for $G$ and 5.2 million for $D$). The training of the model on a single GPU machine (Nvidia TITAN RTX) requires approximately 18.8 h for the case of two-dimensional laminar flow around a square cylinder, whereas the training of the turbulent channel flow case requires approximately 65.6 h. This computational cost is required only once to learn how to map the low-resolution flow fields to the high-resolution ones. The reconstruction process of the high-resolution flow fields using the proposed MS-ESRGAN is considered to be computationally inexpensive, as shown at the bottom of Table~\ref{tab:Table1}.
\begin{table}
\begin{center}
\begin{tabular}{lcccc} \hline
 & \multicolumn{2}{c}{Flow around a square cylinder} & \multicolumn{2}{c}{Turbulent channel flow} \\
 & $G$ & $D$ & $G$ & $D$ \\ \hline
\makecell{No. of trainable\\ parameters (million)} & 45.8 & 5.2 & 45.8 & 5.2 \\ \hline
Training time (h) & \multicolumn{2}{c}{18.8} & \multicolumn{2}{c}{65.6} \\ \hline
Reconstruction time (s) & \multicolumn{2}{c}{$1.22\times10^{-2}$} & \multicolumn{2}{c}{$7.81\times10^{-3}$} \\ \hline
\end{tabular}
\caption{Number of trainable parameters and computational cost of MS-ESRGAN.}
\label{tab:Table1}
\end{center}
\end{table}
\section{Conclusions}
In this study, a deep learning-based framework was proposed for the reconstruction of high-resolution turbulent flow fields from spatially limited flow data. We developed an improved version of ESRGAN, i.e. MS-ESRGAN, and applied it to reconstruct the flow fields using distributed points at different levels of coarseness. A combined loss function that includes physics-based loss terms was utilised in $G$ to obtain more realistic results. First, two-dimensional laminar flow around a square cylinder at $Re_d = 100$ simulated using DNS was considered to illustrate in detail the reconstruction of high-resolution flow fields using MS-ESRGAN. The model showed a remarkable ability to reconstruct the laminar flow with precise spatial and temporal details even when only minimal spatial flow information was available. The ability of the model to reconstruct wall-bounded turbulence was examined using data from DNS of turbulent channel flow at $Re_\tau = 180$. The model reproduced the instantaneous velocity fields successfully with commendable accuracy for all three coarseness levels used in the study. Moreover, the turbulence statistics were reproduced appropriately, with a slight deviation noticed only when very limited spatial information of the velocity fields was provided. The spectra, spatial correlations, and time correlations were also in agreement with the data obtained from the DNS, indicating that the developed model could accurately reconstruct the velocity fields with spatial and temporal accuracy similar to that of the DNS data. In this study, the developed MS-ESRGAN could effectively map flow fields with minimal spatial distribution to high-resolution ones by utilising the principle of GAN combined with the physics-based loss function.
This motivates us to explore more physics-guided deep learning models that can serve as efficient and inexpensive data-driven methods for recovering high-resolution turbulent flow fields from limited spatial information.
\begin{acknowledgments}
This work was supported by 'Human Resources Program in Energy Technology' of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resource from the Ministry of Trade, Industry \& Energy, Republic of Korea (no. 20214000000140). In addition, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (no. 2019R1I1A3A01058576).
\end{acknowledgments}
\section*{Data Availability}
The data that support the findings of this study are available within this article.